Add latest changes from gitlab-org/gitlab@master

parent 6dc323b146
commit 3dfec47781
@@ -321,7 +321,7 @@
 .ai-gateway-services:
   services:
-    - name: registry.gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/model-gateway:v1.6.1
+    - name: registry.gitlab.com/gitlab-org/modelops/applied-ml/code-suggestions/ai-assist/model-gateway:v1.7.0
       alias: ai-gateway

 .use-pg13:
@@ -1,9 +0,0 @@
----
-name: prevent_issue_epic_search
-feature_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/457756
-introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/153668
-rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/463698
-milestone: '17.1'
-group: group::duo chat
-type: beta
-default_enabled: false
@@ -491,14 +491,14 @@ It is risky to reuse a secondary site without resetting it because the secondary

 If these kinds of risks do not apply, for example in a test environment, or if you know that the main Postgres database still contains all Geo events since the Geo site was added, then you can bypass this health check:

-1. Get the last processed event time. In Rails console in the secondary site, run:
+1. Get the last processed event time. In Rails console in the **secondary** site, run:

    ```ruby
    Geo::EventLogState.last.created_at.utc
    ```

 1. Copy the output, for example `2024-02-21 23:50:50.676918 UTC`.
-1. Update the created time of the secondary site to make it appear older. In Rails console in the primary site, run:
+1. Update the created time of the secondary site to make it appear older. In Rails console in the **primary** site, run:

    ```ruby
    GeoNode.secondary_nodes.last.update_column(:created_at, DateTime.parse('2024-02-21 23:50:50.676918 UTC') - 1.second)
@@ -506,7 +506,7 @@ If these kinds of risks do not apply, for example in a test environment, or if y

 This command assumes that the affected secondary site is the one that was created last.

-1. Update the secondary site's status in **Admin > Geo > Sites**. In Rails console in the secondary site, run:
+1. Update the secondary site's status in **Admin > Geo > Sites**. In Rails console in the **secondary** site, run:

    ```ruby
    Geo::MetricsUpdateWorker.new.perform
@@ -298,7 +298,13 @@ Possible solutions:

 - Provision larger VMs to gain access to larger network traffic allowances.
 - Use your cloud service's monitoring and logging to check that the Praefect nodes are not exhausting their traffic allowances.

-## `gitlab-ctl reconfigure` fails with error: `STDOUT: praefect: configuration error: error reading config file: toml: cannot store TOML string into a Go int`
+## `gitlab-ctl reconfigure` fails with a Praefect configuration error
+
+If `gitlab-ctl reconfigure` fails, you might see this error:
+
+```plaintext
+STDOUT: praefect: configuration error: error reading config file: toml: cannot store TOML string into a Go int
+```

 This error occurs when `praefect['database_port']` or `praefect['database_direct_port']` are configured as a string instead of an integer.
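For illustration, a minimal `/etc/gitlab/gitlab.rb` sketch of the fix described in the hunk above (the port value `5432` is a placeholder):

```ruby
# Wrong: a quoted port is parsed as a TOML string and rejected by Praefect.
# praefect['database_port'] = '5432'

# Right: supply the ports as integers, then run `gitlab-ctl reconfigure` again.
praefect['database_port'] = 5432
praefect['database_direct_port'] = 5432
```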
@@ -287,9 +287,16 @@ If you get a `404 Page Not Found` response from GitLab Pages:

 Without the `pages:deploy` job, the updates to your GitLab Pages site are never published.

-## 503 error `Client authentication failed due to unknown client, no client authentication included, or unsupported authentication method.`
+## 503 error `Client authentication failed due to unknown client`

-If Pages is a registered OAuth application and [access control is enabled](../../user/project/pages/pages_access_control.md), this error indicates that the authentication token stored in `/etc/gitlab/gitlab-secrets.json` has become invalid. To resolve:
+If Pages is a registered OAuth application and [access control is enabled](../../user/project/pages/pages_access_control.md), this error indicates that the authentication token stored in `/etc/gitlab/gitlab-secrets.json` has become invalid:
+
+```plaintext
+Client authentication failed due to unknown client, no client authentication included,
+or unsupported authentication method.
+```
+
+To resolve:

 1. Back up your secrets file:
@@ -895,7 +895,7 @@ Stopping or restarting the Patroni service on the leader node triggers an automa

 WARNING:
 In GitLab 16.5 and earlier, PgBouncer nodes do not automatically fail over alongside
 Patroni nodes. PgBouncer services
-[must be restarted manually](../../administration/postgresql/replication_and_failover_troubleshooting.md#pgbouncer-errors-error-running-command-gitlabctlerrorsexecutionerror-and-error-database-gitlabhq_production-is-not-paused)
+[must be restarted manually](../../administration/postgresql/replication_and_failover_troubleshooting.md#pgbouncer-nodes-dont-fail-over-after-patroni-switchover)
 for a successful switchover.

 While Patroni supports automatic failover, you also have the ability to perform
@@ -50,7 +50,7 @@ postgresql['trust_auth_cidr_addresses'] = %w(123.123.123.123/32 <other_cidrs>)

 [Reconfigure GitLab](../restart_gitlab.md#reconfigure-a-linux-package-installation) for the changes to take effect.

-## PgBouncer errors `Error running command: GitlabCtl::Errors::ExecutionError` and `ERROR: database gitlabhq_production is not paused`
+## PgBouncer nodes don't fail over after Patroni switchover

 Due to a [known issue](https://gitlab.com/gitlab-org/omnibus-gitlab/-/issues/8166) that
 affects versions of GitLab prior to 16.5.0, the automatic failover of PgBouncer nodes does not
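A sketch of the manual PgBouncer restart referenced above, using standard Linux package commands (run on each PgBouncer node after the Patroni switchover):

```shell
# Restart PgBouncer so it picks up the new Patroni leader.
sudo gitlab-ctl restart pgbouncer

# Confirm the service is back up.
sudo gitlab-ctl status pgbouncer
```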
@@ -124,7 +124,7 @@ During import, the tarball is cached in your configured `shared_path` directory.

 disk has enough free space to accommodate both the cached tarball and the unpacked
 project files on disk.

-### Import is successful, but with a `Total number of not imported relations: XX` message, and issues are not created during the import
+### Import succeeds with `Total number of not imported relations: XX` message

 If you receive a `Total number of not imported relations: XX` message, and issues
 aren't created during the import, check [exceptions_json.log](../logs/index.md#exceptions_jsonlog).
@@ -190,11 +190,23 @@ GitLab Support can then investigate the issue in the GitLab.com server logs.

 NOTE:
 These steps can only be completed by GitLab Support.

-[In Kibana](https://log.gprd.gitlab.net/app/r/s/0FdPP), the logs should be filtered for
-`json.meta.caller_id: JiraConnect::InstallationsController#update` and `NOT json.status: 200`.
-If you have been provided the `X-Request-Id` value, you can use that against `json.correlation_id` to narrow down the results.
-Each `GET` request made to the Jira Connect Proxy URL `https://gitlab.com/-/jira_connect/installations` generates two log entries.
+Each `GET` request to the Jira Connect Proxy URL `https://gitlab.com/-/jira_connect/installations` generates two log entries.
+To locate the relevant log entries in Kibana, either:
+
+- If you have the `X-Request-Id` value or correlation ID for the `GET` request to
+  `https://gitlab.com/-/jira_connect/installations`, the
+  [Kibana](https://log.gprd.gitlab.net/app/r/s/0FdPP) logs should be filtered for
+  `json.meta.caller_id: JiraConnect::InstallationsController#update`, `NOT json.status: 200`,
+  and `json.correlation_id: <X-Request-Id>`. This should return two log entries.
+
+- If you have the self-managed URL for the customer:
+  1. The [Kibana](https://log.gprd.gitlab.net/app/r/s/QVsD4) logs should be filtered for
+     `json.meta.caller_id: JiraConnect::InstallationsController#update`, `NOT json.status: 200`,
+     and `json.params.value: {"instance_url"=>"https://gitlab.example.com"}`. The self-managed URL
+     must not have a trailing slash. This should return one of the log entries.
+  1. Add the `json.correlation_id` to the filter.
+  1. Remove the `json.params.value` filter. This should return the other log entry.

 For the first log:
@@ -207,7 +219,7 @@ For the second log, you might have one of the following scenarios:

 - `json.message`, `json.jira_status_code`, and `json.jira_body` are present.
   - `json.message` is `Proxy lifecycle event received error response` or similar.
   - `json.jira_status_code` and `json.jira_body` might contain the response received from the self-managed instance or a proxy in front of the instance.
-  - If `json.jira_status_code` is `401 Unauthorized` and `json.jira_body` is empty:
+  - If `json.jira_status_code` is `401 Unauthorized` and `json.jira_body` is `(empty)`:
     - [**Jira Connect Proxy URL**](jira_cloud_app.md#set-up-your-instance) might not be set to `https://gitlab.com`.
     - If a [reverse proxy](jira_cloud_app.md#using-a-reverse-proxy) is in front of your self-managed instance,
       the `Host` header sent to the self-managed instance might not match the reverse proxy FQDN.
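As a rough illustration of the first filter described above, expressed in Kibana's KQL query bar (a sketch; `<X-Request-Id>` is the value you were given):

```plaintext
json.meta.caller_id : "JiraConnect::InstallationsController#update"
  and not json.status : 200
  and json.correlation_id : "<X-Request-Id>"
```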
@@ -182,11 +182,16 @@ that works for this problem. Follow these steps to use the tool in Auto DevOps:

 1. Continue the deployments as usual.

-## `Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached`
+## `Error: not a valid chart repository or cannot be reached`

 As [announced in the official CNCF blog post](https://www.cncf.io/blog/2020/10/07/important-reminder-for-all-helm-users-stable-incubator-repos-are-deprecated-and-all-images-are-changing-location/),
 the stable Helm chart repository was deprecated and removed on November 13th, 2020.
-You may encounter this error after that date.
+You may encounter this error after that date:
+
+```plaintext
+Error: error initializing: Looks like "https://kubernetes-charts.storage.googleapis.com"
+is not a valid chart repository or cannot be reached
+```

 Some GitLab features had dependencies on the stable chart. To mitigate the impact, we changed them
 to use new official repositories or the [Helm Stable Archive repository maintained by GitLab](https://gitlab.com/gitlab-org/cluster-integration/helm-stable-archive).
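For pipelines that still reference the removed stable repository, one possible migration is to repoint the `stable` alias at the archived charts (a sketch; the archive URL is the community-maintained replacement location):

```shell
# Replace the dead Google Storage URL with the archived stable charts.
helm repo remove stable
helm repo add stable https://charts.helm.sh/stable
helm repo update
```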
@@ -165,7 +165,7 @@ DETAILS:

 - `pg_upgrade` fails to upgrade the bundled PostregSQL database to version 13. See
   [the details and workaround](#pg_upgrade-fails-to-upgrade-the-bundled-postregsql-database-to-version-13).
-- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site).

 ## 15.9.0
@@ -215,7 +215,7 @@ DETAILS:

 - `pg_upgrade` fails to upgrade the bundled PostregSQL database to version 13. See
   [the details and workaround](#pg_upgrade-fails-to-upgrade-the-bundled-postregsql-database-to-version-13).
-- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site).

 ## 15.8.2
@@ -259,7 +259,7 @@ DETAILS:

 - We discovered an issue where [replication and verification of projects and wikis was not keeping up](https://gitlab.com/gitlab-org/gitlab/-/issues/387980) on a small number of Geo installations. Your installation may be affected if you see some projects and/or wikis persistently in the "Queued" state for verification. This can lead to data loss after a failover.
   - Affected versions: GitLab versions 15.6.x, 15.7.x, and 15.8.0 - 15.8.2.
   - Versions containing fix: GitLab 15.8.3 and later.
-- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site).

 ## 15.7.6
@@ -396,7 +396,7 @@ DETAILS:

   contents printed. For example, if they were printed in an echo output. For more information,
   see [Understanding the file type variable expansion change in GitLab 15.7](https://about.gitlab.com/blog/2023/02/13/impact-of-the-file-type-variable-change-15-7/).
 - Due to [a bug introduced in GitLab 15.4](https://gitlab.com/gitlab-org/gitlab/-/issues/390155), if one or more Git repositories in Gitaly Cluster is [unavailable](../../administration/gitaly/recovery.md#unavailable-repositories), then [Repository checks](../../administration/repository_checks.md#repository-checks) and [Geo replication and verification](../../administration/geo/index.md) stop running for all project or project wiki repositories in the affected Gitaly Cluster. The bug was fixed by [reverting the change in GitLab 15.9.0](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/110823). Before upgrading to this version, check if you have any "unavailable" repositories. See [the bug issue](https://gitlab.com/gitlab-org/gitlab/-/issues/390155) for more information.
-- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site).

 ### Geo installations
@@ -550,7 +550,7 @@ DETAILS:

   - Affected versions: GitLab versions 15.6.x, 15.7.x, and 15.8.0 - 15.8.2.
   - Versions containing fix: GitLab 15.8.3 and later.
 - Due to [a bug introduced in GitLab 15.4](https://gitlab.com/gitlab-org/gitlab/-/issues/390155), if one or more Git repositories in Gitaly Cluster is [unavailable](../../administration/gitaly/recovery.md#unavailable-repositories), then [Repository checks](../../administration/repository_checks.md#repository-checks) and [Geo replication and verification](../../administration/geo/index.md) stop running for all project or project wiki repositories in the affected Gitaly Cluster. The bug was fixed by [reverting the change in GitLab 15.9.0](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/110823). Before upgrading to this version, check if you have any "unavailable" repositories. See [the bug issue](https://gitlab.com/gitlab-org/gitlab/-/issues/390155) for more information.
-- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site).

 ## 15.5.5
@@ -616,7 +616,7 @@ DETAILS:

 - `pg_upgrade` fails to upgrade the bundled PostregSQL database to version 13. See
   [the details and workaround](#pg_upgrade-fails-to-upgrade-the-bundled-postregsql-database-to-version-13).
-- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site).

 ## 15.4.6
@@ -695,7 +695,7 @@ DETAILS:

 - `pg_upgrade` fails to upgrade the bundled PostregSQL database to version 13. See
   [the details and workaround](#pg_upgrade-fails-to-upgrade-the-bundled-postregsql-database-to-version-13).
-- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site).

 ## 15.3.4
@@ -790,7 +790,7 @@ This issue is resolved in GitLab 15.3.3, so customers with the following configu

 - LFS is enabled.
 - LFS objects are being replicated across Geo sites.
 - Repositories are being pulled by using a Geo secondary site.
-- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site).

 #### Incorrect object storage LFS file deletion on secondary sites
@@ -851,7 +851,7 @@ DETAILS:

   [the details and workaround](#lfs-transfers-redirect-to-primary-from-secondary-site-mid-session).
 - Incorrect object storage LFS files deletion on Geo secondary sites. See
   [the details and workaround](#incorrect-object-storage-lfs-file-deletion-on-secondary-sites).
-- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site).

 ## 15.1.0
@@ -894,7 +894,7 @@ DETAILS:

   [the details and workaround](#lfs-transfers-redirect-to-primary-from-secondary-site-mid-session).
 - Incorrect object storage LFS files deletion on Geo secondary sites. See
   [the details and workaround](#incorrect-object-storage-lfs-file-deletion-on-secondary-sites).
-- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](gitlab_16_changes.md#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site).

 ## 15.0.0
@@ -112,8 +112,8 @@ see [Packaged PostgreSQL deployed in an HA/Geo Cluster](https://docs.gitlab.com/

 | 16.7 | All | None |
 | 16.8 | All | None |
 | 16.9 | All | None |
-| 16.10 | All | None |
-| 16.11 | All | None |
+| 16.10 | 16.10.0 - 16.10.6 | 16.10.7 |
+| 16.11 | 16.11.0 - 16.11.3 | 16.11.4 |

 ## 16.10.0
@@ -178,8 +178,8 @@ For more information on the changes introduced between version 2.1.0 and version

 | 16.7 | All | None |
 | 16.8 | All | None |
 | 16.9 | All | None |
-| 16.10 | All | None |
-| 16.11 | All | None |
+| 16.10 | 16.10.0 - 16.10.6 | 16.10.7 |
+| 16.11 | 16.11.0 - 16.11.3 | 16.11.4 |

 ## 16.9.0
@@ -236,8 +236,8 @@ planned for release in 16.9.1.

 | 16.7 | All | None |
 | 16.8 | All | None |
 | 16.9 | All | None |
-| 16.10 | All | None |
-| 16.11 | All | None |
+| 16.10 | 16.10.0 - 16.10.6 | 16.10.7 |
+| 16.11 | 16.11.0 - 16.11.3 | 16.11.4 |

 ### Linux package installations
@@ -314,8 +314,8 @@ planned for release in 16.9.1.

 | 16.7 | All | None |
 | 16.8 | All | None |
 | 16.9 | All | None |
-| 16.10 | All | None |
-| 16.11 | All | None |
+| 16.10 | 16.10.0 - 16.10.6 | 16.10.7 |
+| 16.11 | 16.11.0 - 16.11.3 | 16.11.4 |

 ## 16.7.0
@@ -404,8 +404,8 @@ Specific information applies to Linux package installations:

 | 16.7 | All | None |
 | 16.8 | All | None |
 | 16.9 | All | None |
-| 16.10 | All | None |
-| 16.11 | All | None |
+| 16.10 | 16.10.0 - 16.10.6 | 16.10.7 |
+| 16.11 | 16.11.0 - 16.11.3 | 16.11.4 |

 ## 16.6.0
@@ -493,8 +493,8 @@ Specific information applies to Linux package installations:

 | 16.7 | All | None |
 | 16.8 | All | None |
 | 16.9 | All | None |
-| 16.10 | All | None |
-| 16.11 | All | None |
+| 16.10 | 16.10.0 - 16.10.6 | 16.10.7 |
+| 16.11 | 16.11.0 - 16.11.3 | 16.11.4 |

 ## 16.5.0
@@ -649,8 +649,8 @@ Specific information applies to installations using Geo:

 | 16.7 | All | None |
 | 16.8 | All | None |
 | 16.9 | All | None |
-| 16.10 | All | None |
-| 16.11 | All | None |
+| 16.10 | 16.10.0 - 16.10.6 | 16.10.7 |
+| 16.11 | 16.11.0 - 16.11.3 | 16.11.4 |

 ## 16.4.0
@@ -1051,7 +1051,7 @@ Specific information applies to installations using Geo:

 Affected artifacts are automatically resynced upon upgrade to 16.1.5, 16.2.5, 16.3.1, 16.4.0, or later.
 You can [manually resync affected job artifacts](https://gitlab.com/gitlab-org/gitlab/-/issues/419742#to-fix-data) if needed.

-#### Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced
+#### Cloning LFS objects from secondary site downloads from the primary site

 A [bug](https://gitlab.com/gitlab-org/gitlab/-/issues/410413) in the Geo proxying logic for LFS objects means that all LFS clone requests against a secondary site are proxied to the primary even if the secondary site is up-to-date. This can result in increased load on the primary site and longer access times for LFS objects for users cloning from the secondary site.
@@ -1136,7 +1136,7 @@ Specific information applies to installations using Geo:

 - While running an affected version, artifacts which appeared to become synced may actually be missing on the secondary site.
   Affected artifacts are automatically resynced upon upgrade to 16.1.5, 16.2.5, 16.3.1, 16.4.0, or later.
   You can [manually resync affected job artifacts](https://gitlab.com/gitlab-org/gitlab/-/issues/419742#to-fix-data) if needed.
-- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site).

 #### Wiki repositories not initialized on project creation
@@ -1217,7 +1217,7 @@ Specific information applies to installations using Geo:

 - Some project imports do not initialize wiki repositories on project creation. See
   [the details and workaround](#wiki-repositories-not-initialized-on-project-creation).
-- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site-even-when-secondary-is-fully-synced).
+- Cloning LFS objects from secondary site downloads from the primary site even when secondary is fully synced. See [the details and workaround](#cloning-lfs-objects-from-secondary-site-downloads-from-the-primary-site).

 ### Gitaly configuration structure change
@@ -21,9 +21,6 @@ GitLab is [transparent](https://handbook.gitlab.com/handbook/values/#transparenc

 As GitLab Duo features mature, the documentation will be updated to clearly state
 how and where you can access these features.

-Each feature uses the large language models (LLMs) listed in this page. However, you can
-[use your own self-hosted models instead](../../administration/self_hosted_models/index.md).
-
 ## Generally available features

 ### GitLab Duo Chat
@@ -104,7 +104,7 @@ When you use `-backend-config`, the configuration is:

 - Cached in the output of the `terraform plan` command.
 - Usually passed forward to the `terraform apply` command.

-This configuration can lead to problems like [being unable to lock Terraform state files in CI jobs](troubleshooting.md#unable-to-lock-terraform-state-files-in-ci-jobs-for-terraform-apply-using-a-plan-created-in-a-previous-job).
+This configuration can lead to problems like [being unable to lock Terraform state files in CI jobs](troubleshooting.md#cant-lock-terraform-state-files-in-ci-jobs-for-terraform-apply-with-a-previous-jobs-plan).

 ## Access the state from your local machine
@@ -88,7 +88,7 @@ View the [template-archive](https://gitlab.com/gitlab-org/configure/template-arc

 ## Troubleshooting Terraform state

-### Unable to lock Terraform state files in CI jobs for `terraform apply` using a plan created in a previous job
+### Can't lock Terraform state files in CI jobs for `terraform apply` with a previous job's plan

 When passing `-backend-config=` to `terraform init`, Terraform persists these values inside the plan
 cache file. This includes the `password` value.
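One way to keep the password out of the plan cache (a sketch, not part of the change above) is to supply the HTTP backend credentials through Terraform's `TF_HTTP_*` environment variables instead of `-backend-config`:

```shell
# Credentials come from the environment, so `terraform init` does not
# write them into the plan cache file.
export TF_HTTP_USERNAME="gitlab-ci-token"
export TF_HTTP_PASSWORD="${CI_JOB_TOKEN}"
terraform init
terraform plan -out=plan.cache
terraform apply plan.cache
```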
@@ -668,6 +668,10 @@ cache with this command:

 nuget locals all -clear
 ```

-### `Error publishing` or `Invalid Package: Failed metadata extraction error` messages when trying to publish NuGet packages in a Docker-based GitLab installation
+### Errors when trying to publish NuGet packages in a Docker-based GitLab installation

-Webhook requests to local network addresses are blocked to prevent exploitation of internal web services. If you get `Error publishing` or `Invalid Package` messages when you try to publish NuGet packages, change your network settings to [allow webhook and integration requests to the local network](../../../security/webhooks.md#allow-requests-to-the-local-network-from-webhooks-and-integrations).
+Webhook requests to local network addresses are blocked to prevent exploitation of
+internal web services. If you get `Error publishing` or
+`Invalid Package: Failed metadata extraction error` messages
+when you try to publish NuGet packages, change your network settings to
+[allow webhook and integration requests to the local network](../../../security/webhooks.md#allow-requests-to-the-local-network-from-webhooks-and-integrations).
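Where the UI is unavailable, the same setting can also be toggled from the Rails console (a sketch; verify the attribute name against your GitLab version before relying on it):

```ruby
# Allow webhook and integration requests to the local network.
ApplicationSetting.current.update!(allow_local_requests_from_web_hooks_and_services: true)
```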
@@ -281,11 +281,17 @@ Check that the `Provision Role ARN` is correct. An example of a valid ARN:

 arn:aws:iam::123456789012:role/gitlab-eks-provision
 ```

-### Access denied: User `arn:aws:iam::x` is not authorized to perform: `sts:AssumeRole` on resource: `arn:aws:iam::y`
+### Access denied: User is not authorized to perform: `sts:AssumeRole` on resource: `arn:aws:iam::y`

 This error occurs when the credentials defined in the
 [Configure Amazon authentication](#configure-amazon-authentication) cannot assume the role defined by the
-Provision Role ARN. Check that:
+Provision Role ARN:
+
+```plaintext
+User `arn:aws:iam::x` is not authorized to perform: `sts:AssumeRole` on resource: `arn:aws:iam::y`
+```
+
+Check that:

 1. The initial set of AWS credentials [has the AssumeRole policy](#additional-requirements-for-self-managed-instances).
 1. The Provision Role has access to create clusters in the given region.
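For reference, an AssumeRole policy on the initial credentials could look roughly like this (a sketch; the account ID and role name are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123456789012:role/gitlab-eks-provision"
    }
  ]
}
```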
@@ -236,10 +236,16 @@ kubectl create clusterrolebinding permissive-binding \

 ## Troubleshooting

-### `There was a problem authenticating with your cluster. Please ensure your CA Certificate and Token are valid`
+### CA certificate and token errors during authentication

-If you encounter this error while connecting a Kubernetes cluster, ensure you're
-properly pasting the service token. Some shells may add a line break to the
+If you encounter this error while connecting a Kubernetes cluster:
+
+```plaintext
+There was a problem authenticating with your cluster.
+Please ensure your CA Certificate and Token are valid
+```
+
+Ensure you're properly pasting the service token. Some shells may add a line break to the
 service token, making it invalid. Ensure that there are no line breaks by
 pasting your token into an editor and removing any additional spaces.
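One way to obtain a token with no line breaks at all (a sketch; the secret name and namespace are placeholders) is to decode it on the command line rather than copying it from a terminal:

```shell
# Print the service account token without surrounding whitespace.
kubectl get secret <service-account-secret> --namespace kube-system \
  --output jsonpath='{.data.token}' | base64 --decode
```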
@@ -532,6 +532,14 @@ To configure the certificate:

 ::EndTabs

+## Configuring firewalls for webhook traffic
+
+When configuring firewalls for webhook traffic, you can assume that webhooks
+are usually sent asynchronously from Sidekiq nodes. However, there are cases
+when webhooks are sent synchronously from Rails nodes, including when:
+
+- [Testing a webhook](#test-a-webhook) in the UI.
+- [Retrying a webhook](#inspect-request-and-response-details) in the UI.
+
 ## Related topics

 - [Webhook events and webhook JSON payloads](webhook_events.md)
@@ -24,7 +24,7 @@ BUILD_TAGS := tracer_static tracer_static_jaeger continuous_profiler_stackdriver

 OS := $(shell uname | tr A-Z a-z)
 ARCH ?= $(shell uname -m | sed -e 's/x86_64/amd64/' | sed -e 's/aarch64/arm64/')

-GOLANGCI_LINT_VERSION := 1.59.0
+GOLANGCI_LINT_VERSION := 1.59.1
 GOLANGCI_LINT_ARCH ?= ${ARCH}
 GOLANGCI_LINT_FILE := _support/bin/golangci-lint-${GOLANGCI_LINT_VERSION}
@@ -293,7 +293,6 @@ internal/redis/keywatcher.go:1:1: package-comments: should have a package commen

 internal/redis/keywatcher.go:19:6: exported: exported type KeyWatcher should have comment or be unexported (revive)
 internal/redis/keywatcher.go:28:1: exported: exported function NewKeyWatcher should have comment or be unexported (revive)
 internal/redis/keywatcher.go:42:2: exported: exported var KeyWatchers should have comment or be unexported (revive)
 internal/redis/keywatcher.go:79:86: (*KeyWatcher).receivePubSubStream - result 0 (error) is always nil (unparam)
 internal/redis/keywatcher.go:90:16: Error return value of `kw.conn.Close` is not checked (errcheck)
 internal/redis/keywatcher.go:120:66: unnecessary conversion (unconvert)
 internal/redis/keywatcher.go:129:1: exported: exported method KeyWatcher.Process should have comment or be unexported (revive)