Add latest changes from gitlab-org/gitlab@master

GitLab Bot 2025-07-15 03:15:13 +00:00
parent a659c0555d
commit 8085e3512a
23 changed files with 257 additions and 101 deletions

View File

@ -115,6 +115,8 @@ config/bounded_contexts.yml @fabiopitino @grzesiek @stanhu @cwoolley-gitlab @tku
/ee/app/finders/
/ee/spec/finders/
/rubocop/rubocop-migrations.yml
/lib/tasks/gitlab/db.rake
/spec/tasks/gitlab/db_rake_spec.rb
[Pipeline configuration] @gl-dx/pipeline-maintainers
/.gitlab-ci.yml
@ -1920,7 +1922,7 @@ ee/app/workers/ai/repository_xray/
/jh/doc/
[Geo]
/doc/**/geo/ @luciezhao @nhxnguyen
/doc/**/geo/ @luciezhao @nhxnguyen @axil @phillipwells
/ee/**/geo/ @gitlab-org/geo-team/geo-backend
/qa/**/geo/ @gitlab-org/geo-team/geo-backend
geo_* @gitlab-org/geo-team/geo-backend

View File

@ -0,0 +1,12 @@
---
name: gitlab_base.TableDelimiterRows
description: |
Ensures tables don't have unnecessarily short delimiter row cells.
extends: existence
message: "Use at least three hyphens in each cell in the table delimiter row."
link: https://docs.gitlab.com/development/documentation/styleguide/#creation-guidelines
vocab: false
level: error
scope: raw
raw:
- '(?<=\|\n) *\| ?-{0,2} ?\|'
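# For illustration (not part of the rule definition): a delimiter row the raw
# pattern above flags, and one that passes.
#
#   | Option | Description |
#   | -- | -- |               <- flagged: delimiter cells have fewer than three hyphens
#
#   | Option | Description |
#   |--------|--------------| <- passes: at least three hyphens in every cell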

View File

@ -73,15 +73,15 @@ We provide two debugging scripts to help administrators verify their self-hosted
poetry run troubleshoot [options]
```
The `troubleshoot` command supports the following options:
The `troubleshoot` command supports the following options:
| Option | Description | Default | Example |
|--------|-------------|---------|---------|
| `--endpoint` | AI Gateway endpoint | `localhost:5052` | `--endpoint=localhost:5052` |
| `--model-family` | Model family to test. Possible values are `mistral`, `mixtral`, `gpt`, or `claude_3` | - | `--model-family=mistral` |
| `--model-endpoint` | Model endpoint. For models hosted on vLLM, add the `/v1` suffix. | - | `--model-endpoint=http://localhost:4000/v1` |
| `--model-identifier` | Model identifier. | - | `--model-identifier=custom_openai/Mixtral-8x7B-Instruct-v0.1` |
| `--api-key` | Model API key. | - | `--api-key=your-api-key` |
| Option | Default | Example | Description |
|----------------------|------------------|---------------------------------------------------------------|-------------|
| `--endpoint` | `localhost:5052` | `--endpoint=localhost:5052` | AI Gateway endpoint |
| `--model-family` | - | `--model-family=mistral` | Model family to test. Possible values are `mistral`, `mixtral`, `gpt`, or `claude_3` |
| `--model-endpoint` | - | `--model-endpoint=http://localhost:4000/v1` | Model endpoint. For models hosted on vLLM, add the `/v1` suffix. |
| `--model-identifier` | - | `--model-identifier=custom_openai/Mixtral-8x7B-Instruct-v0.1` | Model identifier. |
| `--api-key` | - | `--api-key=your-api-key` | Model API key. |
**Examples**:

View File

@ -366,7 +366,7 @@ For example, with [`trigger:inputs`](../yaml/_index.md#triggerinputs):
```yaml
trigger-job:
trigger:
strategy: depend
strategy: mirror
include:
- local: path/to/child-pipeline.yml
inputs:
@ -382,7 +382,7 @@ trigger-job:
```yaml
trigger-job:
trigger:
strategy: depend
strategy: mirror
project: project-group/my-downstream-project
inputs:
job-name: "defined"

View File

@ -53,7 +53,7 @@ In blocking manual jobs:
enabled can't be merged with a blocked pipeline.
- The pipeline shows a status of **blocked**.
When using manual jobs in triggered pipelines with [`strategy: depend`](../yaml/_index.md#triggerstrategy),
When using manual jobs in triggered pipelines with a [`trigger:strategy`](../yaml/_index.md#triggerstrategy),
the type of manual job can affect the trigger job's status while the pipeline runs.
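For example, a minimal sketch (file and job names are hypothetical) of a trigger job whose downstream pipeline contains a blocking manual job:
```yaml
# Upstream .gitlab-ci.yml: the trigger job tracks the downstream pipeline status.
trigger-child:
  trigger:
    include: path/to/child-pipeline.yml
    strategy: mirror

# Downstream path/to/child-pipeline.yml: a blocking manual job.
approve-deploy:
  script: echo "Deploying"
  when: manual
  allow_failure: false  # blocking: the downstream pipeline, and therefore the
                        # trigger job's status, waits until someone runs this job
```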
### Run a manual job

View File

@ -38,7 +38,7 @@ Child pipelines:
- Do not directly affect the overall status of the ref the pipeline runs against. For example,
if a pipeline fails for the main branch, it's common to say that "main is broken".
The status of child pipelines only affects the status of the ref if the child
pipeline is triggered with [`strategy:depend`](../yaml/_index.md#triggerstrategy).
pipeline is triggered with [`trigger:strategy`](../yaml/_index.md#triggerstrategy).
- Are automatically canceled if the pipeline is configured with [`interruptible`](../yaml/_index.md#interruptible)
when a new pipeline is created for the same ref.
- Are not displayed in the project's pipeline list. You can only view child pipelines on
@ -67,7 +67,7 @@ Multi-project pipelines:
choose the ref of the downstream pipeline, and pass CI/CD variables to it.
- Affect the overall status of the ref of the project it runs in, but does not
affect the status of the triggering pipeline's ref, unless it was triggered with
[`strategy:depend`](../yaml/_index.md#triggerstrategy).
[`trigger:strategy`](../yaml/_index.md#triggerstrategy).
- Are not automatically canceled in the downstream project when using [`interruptible`](../yaml/_index.md#interruptible)
if a new pipeline runs for the same ref in the upstream pipeline. They can be
automatically canceled if a new pipeline is triggered for the same ref on the downstream project.
@ -428,7 +428,7 @@ In this example:
### Mirror the status of a downstream pipeline in the trigger job
You can mirror the status of the downstream pipeline in the trigger job
by using [`strategy: depend`](../yaml/_index.md#triggerstrategy):
by using [`strategy: mirror`](../yaml/_index.md#triggerstrategy):
{{< tabs >}}
@ -439,7 +439,7 @@ trigger_job:
trigger:
include:
- local: path/to/child-pipeline.yml
strategy: depend
strategy: mirror
```
{{< /tab >}}
@ -450,7 +450,7 @@ trigger_job:
trigger_job:
trigger:
project: my/project
strategy: depend
strategy: mirror
```
{{< /tab >}}
@ -931,7 +931,7 @@ stages:
trigger:
project: project-group/deployment-project
branch: main
strategy: depend
strategy: mirror
deploy-review:
stage: deploy

View File

@ -154,7 +154,7 @@ deploy:
stage: deploy
trigger:
include: deploy.gitlab-ci.yml
strategy: depend
strategy: mirror
resource_group: AWS-production
```
@ -175,9 +175,8 @@ deployment:
environment: production
```
You must define [`strategy: depend`](../yaml/_index.md#triggerstrategy)
with the `trigger` keyword. This ensures that the lock isn't released until the downstream pipeline
finishes.
You must define [`trigger:strategy`](../yaml/_index.md#triggerstrategy) to ensure
the lock isn't released until the downstream pipeline finishes.
## Related topics
@ -202,7 +201,7 @@ test:
stage: test
trigger:
include: child-pipeline-requires-production-resource-group.yml
strategy: depend
strategy: mirror
deploy:
stage: deploy
@ -212,7 +211,7 @@ deploy:
```
In a parent pipeline, it runs the `test` job that subsequently runs a child pipeline,
and the [`strategy: depend` option](../yaml/_index.md#triggerstrategy) makes the `test` job wait until the child pipeline has finished.
and the [`strategy: mirror` option](../yaml/_index.md#triggerstrategy) makes the `test` job wait until the child pipeline has finished.
The parent pipeline runs the `deploy` job in the next stage, that requires a resource from the `production` resource group.
If the process mode is `oldest_first`, it executes the jobs from the oldest pipelines, meaning the `deploy` job is executed next.
@ -228,7 +227,7 @@ test:
stage: test
trigger:
include: child-pipeline.yml
strategy: depend
strategy: mirror
resource_group: production # Specify the resource group in the parent pipeline
deploy:

View File

@ -3458,7 +3458,7 @@ successfully finished job in its parent pipeline or another child pipeline in th
stage: test
trigger:
include: child.yml
strategy: depend
strategy: mirror
variables:
PARENT_PIPELINE_ID: $CI_PIPELINE_ID
```
@ -5911,6 +5911,12 @@ trigger-multi-project-pipeline:
#### `trigger:strategy`
{{< history >}}
- `strategy:mirror` option [introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/431882) in GitLab 18.2.
{{< /history >}}
Use `trigger:strategy` to force the `trigger` job to wait for the downstream pipeline to complete
before it is marked as **success**.
@ -5919,13 +5925,19 @@ This behavior is different than the default, which is for the `trigger` job to b
This setting makes your pipeline execution linear rather than parallel.
**Supported values**:
- `mirror`: Mirrors the status of the downstream pipeline exactly.
- `depend`: The trigger job status shows **failed**, **success**, or **running**,
depending on the downstream pipeline status. See the additional details below.
**Example of `trigger:strategy`**:
```yaml
trigger_job:
trigger:
include: path/to/child-pipeline.yml
strategy: depend
strategy: mirror
```
In this example, jobs from subsequent stages wait for the triggered pipeline to
@ -5936,13 +5948,14 @@ successfully complete before starting.
- [Optional manual jobs](../jobs/job_control.md#types-of-manual-jobs) in the downstream pipeline
do not affect the status of the downstream pipeline or the upstream trigger job.
The downstream pipeline can complete successfully without running any optional manual jobs.
- By default, jobs in later stages do not start until the trigger job completes.
- [Blocking manual jobs](../jobs/job_control.md#types-of-manual-jobs) in the downstream pipeline
must run before the trigger job is marked as successful or failed. The trigger job
shows **running** ({{< icon name="status_running" >}}) if the downstream pipeline status is
**waiting for manual action** ({{< icon name="status_manual" >}}) due to manual jobs.
By default, jobs in later stages do not start until the trigger job completes.
- If the downstream pipeline has a failed job, but the job uses [`allow_failure: true`](#allow_failure),
the downstream pipeline is considered successful and the trigger job shows **success**.
must run before the trigger job is marked as successful or failed.
- When using `strategy:depend`:
- The trigger job shows **running** ({{< icon name="status_running" >}}) if the downstream pipeline status is
**waiting for manual action** ({{< icon name="status_manual" >}}) due to manual jobs.
- If the downstream pipeline has a failed job, but the job uses [`allow_failure: true`](#allow_failure),
the downstream pipeline is considered successful and the trigger job shows **success**.
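For comparison, a minimal sketch of the same trigger job using the `depend` strategy:
```yaml
trigger_job:
  trigger:
    include: path/to/child-pipeline.yml
    strategy: depend
```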
#### `trigger:forward`

View File

@ -270,8 +270,8 @@ Factories consist of three main parts - the **Name** of the factory, the **Trait
Given: `create(:iteration, :with_title, :current, title: 'My Iteration')`
| | |
|:--------------------------|:-|
| | |
|:--------------------------|:---|
| **:iteration** | This is the **Name** of the factory. The filename will be the plural form of this **Name** and reside under either `spec/factories/iterations.rb` or `ee/spec/factories/iterations.rb`. |
| **:with_title** | This is a **Trait** of the factory. [See how it's defined](https://gitlab.com/gitlab-org/gitlab/-/blob/9c2a1f98483921dd006d70fdaed316e21fc5652f/ee/spec/factories/iterations.rb#L21-23). |
| **:current** | This is a **Trait** of the factory. [See how it's defined](https://gitlab.com/gitlab-org/gitlab/-/blob/9c2a1f98483921dd006d70fdaed316e21fc5652f/ee/spec/factories/iterations.rb#L29-31). |

View File

@ -141,20 +141,20 @@ This table is a simplified version of the `users` table which contains several r
smaller gaps in the `id` column to make the example a bit more realistic (a few records were
already deleted). One index exists on the `id` field:
| `ID` | `sign_in_count` | `created_at` |
| -- | :-------------: | ------------ |
| 1 | 1 | 2020-01-01 |
| 2 | 4 | 2020-01-01 |
| 9 | 1 | 2020-01-03 |
| 300 | 5 | 2020-01-03 |
| 301 | 9 | 2020-01-03 |
| 302 | 8 | 2020-01-03 |
| 303 | 2 | 2020-01-03 |
| 350 | 1 | 2020-01-03 |
| 351 | 3 | 2020-01-04 |
| 352 | 0 | 2020-01-05 |
| 353 | 9 | 2020-01-11 |
| 354 | 3 | 2020-01-12 |
| `ID` | `sign_in_count` | `created_at` |
|-------|:----------------|--------------|
| `1` | `1` | 2020-01-01 |
| `2` | `4` | 2020-01-01 |
| `9` | `1` | 2020-01-03 |
| `300` | `5` | 2020-01-03 |
| `301` | `9` | 2020-01-03 |
| `302` | `8` | 2020-01-03 |
| `303` | `2` | 2020-01-03 |
| `350` | `1` | 2020-01-03 |
| `351` | `3` | 2020-01-04 |
| `352` | `0` | 2020-01-05 |
| `353` | `9` | 2020-01-11 |
| `354` | `3` | 2020-01-12 |
Loading all users into memory (avoid):

View File

@ -105,20 +105,20 @@ the class method `Integration.supported_events` in your model.
The following events are supported for integrations:
| Event type | Default | Value | Trigger |
|:-----------------------------------------------------------------------------------------------|:--------|:---------------------|:--|
|:-----------------------------------------------------------------------------------------------|:--------|:---------------------|:--------|
| Alert event | | `alert` | A new, unique alert is recorded. |
| Commit event | ✓ | `commit` | A commit is created or updated. |
| [Deployment event](../../user/project/integrations/webhook_events.md#deployment-events) | | `deployment` | A deployment starts or finishes. |
| [Work item event](../../user/project/integrations/webhook_events.md#work-item-events) | ✓ | `issue` | An issue is created, updated, or closed. |
| [Confidential issue event](../../user/project/integrations/webhook_events.md#work-item-events) | ✓ | `confidential_issue` | A confidential issue is created, updated, or closed. |
| [Job event](../../user/project/integrations/webhook_events.md#job-events) | | `job` | |
| [Job event](../../user/project/integrations/webhook_events.md#job-events) | | `job` | |
| [Merge request event](../../user/project/integrations/webhook_events.md#merge-request-events) | ✓ | `merge_request` | A merge request is created, updated, or merged. |
| [Comment event](../../user/project/integrations/webhook_events.md#comment-events) | | `comment` | A new comment is added. |
| [Confidential comment event](../../user/project/integrations/webhook_events.md#comment-events) | | `confidential_note` | A new comment on a confidential issue is added. |
| [Pipeline event](../../user/project/integrations/webhook_events.md#pipeline-events) | | `pipeline` | A pipeline status changes. |
| [Push event](../../user/project/integrations/webhook_events.md#push-events) | ✓ | `push` | A push is made to the repository. |
| [Tag push event](../../user/project/integrations/webhook_events.md#tag-events) | ✓ | `tag_push` | New tags are pushed to the repository. |
| Vulnerability event | | `vulnerability` | A new, unique vulnerability is recorded. Ultimate only. |
| Vulnerability event | | `vulnerability` | A new, unique vulnerability is recorded. Ultimate only. |
| [Wiki page event](../../user/project/integrations/webhook_events.md#wiki-page-events) | ✓ | `wiki_page` | A wiki page is created or updated. |
#### Event examples
@ -298,35 +298,35 @@ To add your custom properties to the form, you can define the metadata for them
This method should return an array of hashes for each field, where the keys can be:
| Key | Type | Required | Default | Description |
|:---------------|:--------|:---------|:-----------------------------|:--|
| `type:` | symbol | true | `:text` | The type of the form field. Can be `:text`, `:number`, `:textarea`, `:password`, `:checkbox`, `:string_array` or `:select`. |
| `section:` | symbol | false | | Specify which section the field belongs to. |
| `name:` | string | true | | The property name for the form field. |
| `required:` | boolean | false | `false` | Specify if the form field is required or optional. Note [backend validations](#define-validations) for presence are still needed. |
| `title:` | string | false | Capitalized value of `name:` | The label for the form field. |
| `placeholder:` | string | false | | A placeholder for the form field. |
| `help:` | string | false | | A help text that displays below the form field. |
| `api_only:` | boolean | false | `false` | Specify if the field should only be available through the API, and excluded from the frontend form. |
| `description` | string | false | | Description of the API field. |
| `if:` | boolean or lambda | false | `true` | Specify if the field should be available. The value can be a boolean or a lambda. |
| Key | Type | Required | Default | Description |
|:---------------|:------------------|:---------|:-----------------------------|:------------|
| `type:` | symbol | true | `:text` | The type of the form field. Can be `:text`, `:number`, `:textarea`, `:password`, `:checkbox`, `:string_array` or `:select`. |
| `section:` | symbol | false | | Specify which section the field belongs to. |
| `name:` | string | true | | The property name for the form field. |
| `required:` | boolean | false | `false` | Specify if the form field is required or optional. Note [backend validations](#define-validations) for presence are still needed. |
| `title:` | string | false | Capitalized value of `name:` | The label for the form field. |
| `placeholder:` | string | false | | A placeholder for the form field. |
| `help:` | string | false | | A help text that displays below the form field. |
| `api_only:` | boolean | false | `false` | Specify if the field should only be available through the API, and excluded from the frontend form. |
| `description` | string | false | | Description of the API field. |
| `if:` | boolean or lambda | false | `true` | Specify if the field should be available. The value can be a boolean or a lambda. |
### Additional keys for `type: :checkbox`
| Key | Type | Required | Default | Description |
|:------------------|:-------|:---------|:------------------|:--|
|:------------------|:-------|:---------|:------------------|:------------|
| `checkbox_label:` | string | false | Value of `title:` | A custom label that displays next to the checkbox. |
### Additional keys for `type: :select`
| Key | Type | Required | Default | Description |
|:-----------|:------|:---------|:--------|:--|
|:-----------|:------|:---------|:--------|:------------|
| `choices:` | array | true | | A nested array of `[label, value]` tuples. |
### Additional keys for `type: :password`
| Key | Type | Required | Default | Description |
|:----------------------------|:-------|:---------|:------------------|:--|
|:----------------------------|:-------|:---------|:------------------|:------------|
| `non_empty_password_title:` | string | false | Value of `title:` | An alternative label that displays when a value is already stored. |
| `non_empty_password_help:` | string | false | Value of `help:` | An alternative help text that displays when a value is already stored. |

View File

@ -52,9 +52,9 @@ All clicks on the nav items should be automatically tracked in Snowplow, but may
We use `data-tracking` attributes on all the elements in the nav to send the data up to Snowplow.
You can test that they're working by [setting up snowplow on your GDK](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/snowplow_micro.md).
| Field | Data attribute | Notes | Example |
| -- | -- | -- | -- |
| Category | `data-tracking-category` | The page that the user was on when the item was clicked. | `groups:show` |
| Action | `data-tracking-action` | The action taken. In most cases this is `click_link` or `click_menu_item` | `click_link` |
| Label | `data-tracking-label` | A descriptor for what was clicked on. This is inferred by the ID of the item in most cases, but falls back to `item_without_id`. This is one to look out for. | `group_issue_list` |
| Property | `data-tracking-property` | This describes where in the nav the link was clicked. If it's in the main nav panel, then it needs to describe which panel. | `nav_panel_group` |
| Field | Data attribute | Example | Notes |
|----------|--------------------------|--------------------|-------|
| Category | `data-tracking-category` | `groups:show` | The page that the user was on when the item was clicked. |
| Action | `data-tracking-action` | `click_link` | The action taken. In most cases this is `click_link` or `click_menu_item` |
| Label | `data-tracking-label` | `group_issue_list` | A descriptor for what was clicked on. This is inferred by the ID of the item in most cases, but falls back to `item_without_id`. This is one to look out for. |
| Property | `data-tracking-property` | `nav_panel_group` | This describes where in the nav the link was clicked. If it's in the main nav panel, then it needs to describe which panel. |

View File

@ -107,7 +107,7 @@ shard_consumption = shard_rps * shard_duration_avg
If we expect an increase of **less than 5%**, then no further action is needed.
Otherwise, ping `@gitlab-org/scalability` on the merge request and ask
Otherwise, ping `@gitlab-com/gl-infra/data-access/durability` on the merge request and ask
for a review.
## Jobs with External Dependencies

View File

@ -490,10 +490,10 @@ For tests that are above the thresholds, we automatically report slowness occurr
For tests that are slow for a legitimate reason and to skip issue creation, add `allowed_to_be_slow: true`.
| Date | Feature tests | Controllers and Requests tests | Unit | Other | Method |
| :-: | :-: | :-: | :-: | :-: | :-: |
| 2023-02-15 | 67.42 seconds | 44.66 seconds | - | 76.86 seconds | Top slow test eliminating the maximum |
| 2023-06-15 | 50.13 seconds | 19.20 seconds | 27.12 | 45.40 seconds | Avg for top 100 slow tests|
| Date | Feature tests | Controllers and Requests tests | Unit | Other | Method |
|:----------:|:-------------:|:------------------------------:|:-----:|:-------------:|:------:|
| 2023-02-15 | 67.42 seconds | 44.66 seconds | - | 76.86 seconds | Top slow test eliminating the maximum |
| 2023-06-15 | 50.13 seconds | 19.20 seconds | 27.12 | 45.40 seconds | Avg for top 100 slow tests |
## Handling issues for flaky or slow tests

View File

@ -31,9 +31,9 @@ You can connect ClickHouse to GitLab either:
## Supported ClickHouse versions
| First GitLab version | ClickHouse versions | Comment |
|-|-|-|
|17.7.0 | 23.x (24.x, 25.x) | For using ClickHouse 24.x and 25.x see the [workaround section](#database-schema-migrations-on-gitlab-1800-and-earlier). |
|18.1.0 | 23.x, 24.x, 25.x | |
|----------------------|---------------------|---------|
| 17.7.0               | 23.x (24.x, 25.x)   | To use ClickHouse 24.x or 25.x, see the [workaround section](#database-schema-migrations-on-gitlab-1800-and-earlier). |
| 18.1.0 | 23.x, 24.x, 25.x | |
{{< alert type="note" >}}

View File

@ -51,7 +51,7 @@ For deprecated, [certificate-based clusters](../../user/infrastructure/clusters/
### Example configurations
| Cluster name | Cluster environment scope | `KUBE_INGRESS_BASE_DOMAIN` value | `KUBE CONTEXT` value | Variable environment scope | Notes |
| :------------| :-------------------------| :------------------------------- | :--------------------------------- | :--------------------------|:--|
|:-------------|:--------------------------|:---------------------------------|:-----------------------------------|:---------------------------|:------|
| review | `review/*` | `review.example.com` | `path/to/project:review-agent` | `review/*` | A review cluster that runs all [review apps](../../ci/review_apps/_index.md). |
| staging | `staging` | `staging.example.com` | `path/to/project:staging-agent` | `staging` | Optional. A staging cluster that runs the deployments of the staging environments. You must [enable it first](cicd_variables.md#deploy-policy-for-staging-and-production-environments). |
| production | `production` | `example.com` | `path/to/project:production-agent` | `production` | A production cluster that runs the production environment deployments. You can use [incremental rollouts](cicd_variables.md#incremental-rollout-to-production). |
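For illustration, a sketch of a deploy job that could use the `review` row above. The variable names come from the table; the manifest path and job layout are hypothetical:
```yaml
deploy-review:
  stage: deploy
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_ENVIRONMENT_SLUG.$KUBE_INGRESS_BASE_DOMAIN
  script:
    # KUBE_CONTEXT and KUBE_INGRESS_BASE_DOMAIN resolve to the environment-scoped
    # values from the table, for example path/to/project:review-agent.
    - kubectl config use-context "$KUBE_CONTEXT"
    - kubectl apply -f manifests/review/
```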

View File

@ -2,9 +2,11 @@
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: 'Tutorial: Perform fuzz testing in GitLab'
title: 'Tutorial: Perform fuzz testing in GitLab (deprecated)'
---
<!--- start_remove The following content will be removed on remove_date: '2026-08-15' -->
{{< details >}}
- Tier: Ultimate
@ -12,6 +14,13 @@ title: 'Tutorial: Perform fuzz testing in GitLab'
{{< /details >}}
{{< alert type="warning" >}}
Coverage-guided fuzz testing was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/517841) in GitLab 18.0
and is planned for removal in 19.0. This is a breaking change.
{{< /alert >}}
<!-- vale gitlab_base.FutureTense = NO -->
[Coverage-guided fuzz testing](../../user/application_security/coverage_fuzzing/_index.md#coverage-guided-fuzz-testing-process) sends unexpected, malformed, or random data to your application, and then monitors
@ -250,3 +259,5 @@ Congratulations, you've successfully run a fuzz test and fixed the identified
security vulnerabilities!
For more information, see [coverage-guided fuzz testing](../../user/application_security/coverage_fuzzing/_index.md).
<!--- end_remove -->

View File

@ -159,9 +159,9 @@ There are cases where the document is autogenerated with an invalid schema or ca
To detect and correct elements that don't comply with the OpenAPI specifications, we recommend using an editor. An editor commonly provides document validation, and suggestions to create a schema-compliant OpenAPI document. Suggested editors include:
| Editor | OpenAPI 2.0 | OpenAPI 3.0.x | OpenAPI 3.1.x |
| -- | -- | -- | -- |
|--------|-------------|---------------|---------------|
| [Stoplight Studio](https://stoplight.io/solutions) | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON |
| [Swagger Editor](https://editor.swagger.io/) | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="dotted-circle" >}} YAML, JSON |
| [Swagger Editor](https://editor.swagger.io/) | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="check-circle" >}} YAML, JSON | {{< icon name="dotted-circle" >}} YAML, JSON |
If your OpenAPI document is generated manually, load your document in the editor and fix anything that is non-compliant. If your document is generated automatically, load it in your editor to identify the issues in the schema, then go to the application and perform the corrections based on the framework you are using.

View File

@ -2,10 +2,12 @@
stage: Application Security Testing
group: Dynamic Analysis
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Coverage-guided fuzz testing
title: Coverage-guided fuzz testing (deprecated)
description: Coverage-guided fuzzing, random inputs, and unexpected behavior.
---
<!--- start_remove The following content will be removed on remove_date: '2026-08-15' -->
{{< details >}}
- Tier: Ultimate
@ -13,6 +15,13 @@ description: Coverage-guided fuzzing, random inputs, and unexpected behavior.
{{< /details >}}
{{< alert type="warning" >}}
This feature was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/517841) in GitLab 18.0
and is planned for removal in 19.0. This is a breaking change.
{{< /alert >}}
## Getting started
Coverage-guided fuzz testing sends random inputs to an instrumented version of your application in
@ -388,3 +397,5 @@ corpus file extracts into a folder named `corpus`.
If you see this error message when running the fuzzing job with `COVFUZZ_USE_REGISTRY` set to `true`,
ensure that duplicates are allowed. For more details, see
[duplicate Generic packages](../../packages/generic_packages/_index.md#disable-publishing-duplicate-package-names).
<!--- end_remove -->

View File

@ -23,6 +23,7 @@ The compliance center is the central location for compliance teams to manage the
The compliance center comprises the:
- [Compliance overview dashboard](compliance_overview_dashboard.md).
- [Compliance status report](compliance_status_report.md).
- [Compliance standards adherence dashboard](compliance_standards_adherence_dashboard.md).
- [Compliance violations report](compliance_violations_report.md).

View File

@ -0,0 +1,104 @@
---
stage: Software Supply Chain Security
group: Compliance
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://handbook.gitlab.com/handbook/product/ux/technical-writing/#assignments
title: Compliance overview dashboard
---
{{< details >}}
- Tier: Ultimate
- Offering: GitLab.com, GitLab Self-Managed, GitLab Dedicated
{{< /details >}}
{{< history >}}
- [Introduced](https://gitlab.com/groups/gitlab-org/-/epics/13909) in GitLab 18.2 with a flag named `compliance_group_dashboard`. Enabled by default.
{{< /history >}}
{{< alert type="flag" >}}
The availability of this feature is controlled by a feature flag.
For more information, see the history.
This feature is available for testing, but not ready for production use.
{{< /alert >}}
The compliance overview dashboard provides visual insights into your group's compliance posture through interactive
charts and metrics. It helps you quickly identify areas that need attention and track your overall compliance status.
The compliance overview dashboard displays four key areas of compliance monitoring:
- Compliance framework coverage.
- Failed requirements status.
- Failed controls status.
- Compliance frameworks that need attention.
## View the compliance overview dashboard
Prerequisites:
- You must be an administrator or have the Owner role for the group.
To view the compliance overview dashboard:
1. On the left sidebar, select **Search or go to** and find your group.
1. Select **Secure > Compliance center**.
1. Select **Overview** to view the compliance dashboard.
## Compliance framework coverage
The framework coverage section provides an overview of how many projects in your group have compliance frameworks
assigned.
The framework coverage section displays:
- **Total projects**: The total number of projects in your group.
- **Covered projects**: Number of projects with at least one compliance framework assigned.
- **Coverage percentage**: Visual representation of framework coverage across your projects.
Below the summary metrics, you can see individual framework coverage including:
- Framework name with visual badge.
- Number of projects using each framework.
- Percentage of total projects covered by each framework.
## Failed requirements chart
The failed requirements chart visualizes the compliance status of requirements across your frameworks.
The failed requirements chart displays three categories:
- **Passed**: Requirements that are fully compliant (shown in blue).
- **Pending**: Requirements under review (shown in orange).
- **Failed**: Requirements not meeting compliance standards (shown in magenta).
## Failed controls chart
The failed controls chart provides a visual representation of control compliance status across your organization.
The failed controls chart displays three categories:
- **Passed**: Controls that meet compliance requirements (shown in blue).
- **Pending**: Controls awaiting evaluation (shown in orange).
- **Failed**: Controls that don't meet compliance requirements (shown in magenta).
## Frameworks table
The frameworks table highlights compliance frameworks that require immediate attention. This view helps you identify
frameworks with configuration issues or missing components.
The frameworks table displays:
- **Framework name**: The compliance framework with a visual badge.
- **Projects**: Number of projects using this framework (highlighted in red if zero).
- **Requirements**: Total number of requirements in the framework (highlighted in red if zero).
- **Requirements without controls**: Lists specific requirements that don't have associated controls.
- **Policies**: Security policies linked to the framework, including:
- Scan execution policies.
- Vulnerability management policies.
- Scan result policies.
- Pipeline execution policies.
- **Actions**: Edit framework button (visible to users with admin permissions).

View File

@ -370,12 +370,15 @@ Marked stuck import jobs as failed. JIDs: xyz
### Problems and solutions
| Problem | Possible solutions |
| -------- | -------- |
| [Slow JSON](https://gitlab.com/gitlab-org/gitlab/-/issues/25251) loading/dumping models from the database | [split the worker](https://gitlab.com/gitlab-org/gitlab/-/issues/25252) |
| | Batch export |
| | Optimize SQL |
| | Move away from `ActiveRecord` callbacks (difficult) |
| High memory usage (see also some [analysis](https://gitlab.com/gitlab-org/gitlab/-/issues/18857)) | DB Commit sweet spot that uses less memory |
| | [Netflix Fast JSON API](https://github.com/Netflix/fast_jsonapi) may help |
| | Batch reading/writing to disk and any SQL |
[Slow JSON](https://gitlab.com/gitlab-org/gitlab/-/issues/25251) loading/dumping models from the database:
- [Split the worker](https://gitlab.com/gitlab-org/gitlab/-/issues/25252)
- Batch export
- Optimize SQL
- Move away from `ActiveRecord` callbacks (difficult)
High memory usage (see also some [analysis](https://gitlab.com/gitlab-org/gitlab/-/issues/18857)):
- DB Commit sweet spot that uses less memory
- [Netflix Fast JSON API](https://github.com/Netflix/fast_jsonapi) may help
- Batch reading/writing to disk and any SQL

View File

@ -441,7 +441,7 @@ to iterate over large lists of pipelines and jobs.
print("\nDone collecting data.")
if len(ci_job_artifacts) > 0:
print("|Project|Job|Artifact name|Artifact type|Artifact size|\n|-|-|-|-|-|") #Start markdown friendly table
print("| Project | Job | Artifact name | Artifact type | Artifact size |\n|---------|-----|---------------|---------------|---------------|") # Start markdown friendly table
for artifact in ci_job_artifacts:
print('| [{project_name}]({project_web_url}) | {job_name} | {artifact_name} | {artifact_type} | {artifact_size} |'.format(project_name=artifact['project_path_with_namespace'], project_web_url=artifact['project_web_url'], job_name=artifact['job_id'], artifact_name=artifact['artifact_filename'], artifact_type=artifact['artifact_file_type'], artifact_size=render_size_mb(artifact['artifact_size'])))
else:
@ -454,8 +454,8 @@ content to an issue comment or description, or populate a Markdown file in a Git
```shell
$ python3 get_all_projects_top_level_namespace_storage_analysis_cleanup_example.py
|Project|Job|Artifact name|Artifact type|Artifact size|
|-|-|-|-|-|
| Project | Job | Artifact name | Artifact type | Artifact size |
|---------|-----|---------------|---------------|---------------|
| [gitlab-da/playground/artifact-gen-group/gen-job-artifacts-4](Gen Job Artifacts 4) | 4828297946 | artifacts.zip | archive | 50.0154 |
| [gitlab-da/playground/artifact-gen-group/gen-job-artifacts-4](Gen Job Artifacts 4) | 4828297946 | metadata.gz | metadata | 0.0001 |
| [gitlab-da/playground/artifact-gen-group/gen-job-artifacts-4](Gen Job Artifacts 4) | 4828297946 | job.log | trace | 0.0030 |
@ -865,7 +865,7 @@ The following process describes how the script searches for the artifact expiry
print(f"Exception searching artifacts on ci_pipelines: {e}".format(e=e))
if len(ci_job_artifacts_expiry) > 0:
print("|Project|Job|Artifact expiry|\n|-|-|-|") #Start markdown friendly table
print("| Project | Job | Artifact expiry |\n|---------|-----|-----------------|") #Start markdown friendly table
for k, details in ci_job_artifacts_expiry.items():
if details['job_name'][0] == '.':
continue # ignore job templates that start with a '.'
@ -891,8 +891,8 @@ python3 -m pip install 'python-gitlab[yaml]'
python3 get_all_cicd_config_artifacts_expiry.py
|Project|Job|Artifact expiry|
|-|-|-|
| Project | Job | Artifact expiry |
|---------|-----|-----------------|
| [Gen Job Artifacts 4](https://gitlab.com/gitlab-da/playground/artifact-gen-group/gen-job-artifacts-4) | generator | 30 days |
| [Gen Job Artifacts with expiry and included jobs](https://gitlab.com/gitlab-da/playground/artifact-gen-group/gen-job-artifacts-expiry-included-jobs) | included-job10 | 10 days |
| [Gen Job Artifacts with expiry and included jobs](https://gitlab.com/gitlab-da/playground/artifact-gen-group/gen-job-artifacts-expiry-included-jobs) | included-job1 | 1 days |