Add latest changes from gitlab-org/gitlab@master

This commit is contained in:
GitLab Bot 2023-10-12 12:11:32 +00:00
parent 7b3a8386ce
commit 3b260cb69f
121 changed files with 471 additions and 3769 deletions

View File

@@ -21,8 +21,7 @@
# Tracking Details
- [json schema](https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/gitlab_experiment/jsonschema/0-3-0) used in `gitlab-experiment` tracking.
- see [event schema](../../doc/development/internal_analytics/snowplow/index.md#event-schema) for a guide.
- [json schema](https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/gitlab_experiment/jsonschema/1-0-3) used in `gitlab-experiment` tracking.
| sequence | activity | category | action | label | property | value |
| -------- | -------- | ------ | ----- | ------- | -------- | ----- |

View File

@@ -1,36 +1,16 @@
<!-- Title suggestion: [Feature flag] Enable description of feature -->
<!--
Set the main issue link: The main issue is the one that describes the problem to solve,
the one this feature flag is being added for. For example:
[main-issue]: https://gitlab.com/gitlab-org/gitlab/-/issues/123456
-->
<!-- Title suggestion: [Feature flag] Enable <feature-flag-name> -->
[main-issue]: MAIN-ISSUE-LINK
## Summary
This issue is to rollout [the feature][main-issue] on production,
This issue is to roll out [the feature][main-issue] on production,
that is currently behind the `<feature-flag-name>` feature flag.
<!-- Short description of what the feature is about and link to relevant other issues. -->
## Owners
- Most appropriate Slack channel to reach out to: `#g_TEAM_NAME`
- Best individual to reach out to: NAME_OF_DRI
- PM: NAME_OF_PM
## Stakeholders
<!--
Are there any other stages or teams involved that need to be kept in the loop?
- Name of a PM
- The Support Team
- The Delivery Team
-->
- Best individual to reach out to: GITLAB_USERNAME_OF_DRI
## Expectations
@@ -38,30 +18,11 @@ Are there any other stages or teams involved that need to be kept in the loop?
<!-- Describe the expected outcome when rolling out this feature -->
### When is the feature viable?
### What can go wrong and how would we detect it?
<!-- What are the settings we need to configure in order to have this feature viable? -->
<!--
Example below:
1. Enable service ping collection
`ApplicationSetting.first.update(usage_ping_enabled: true)`
-->
### What might happen if this goes wrong?
<!-- Should the feature flag be turned off? Any MRs that need to be rolled back? Communication that needs to happen? What are some things you can think of that could go wrong - data loss or broken pages? -->
### What can we monitor to detect problems with this?
<!-- Data loss, broken pages, stability/availability impact? -->
<!-- Which dashboards from https://dashboards.gitlab.net are most relevant? -->
_Consider mentioning checks for 5xx errors or other anomalies like an increase in redirects
(302 HTTP response status)_
### What can we check for monitoring production after rollouts?
_Consider adding links to check for Sentry errors, Production logs for 5xx, 302s, etc._
## Rollout Steps
@@ -85,7 +46,7 @@ For assistance with end-to-end test failures, please reach out via the [`#qualit
### Specific rollout on production
For visibility, all `/chatops` commands that target production should be executed in the [`#production` Slack channel](https://gitlab.slack.com/archives/C101F3796)
and cross-posted (with the command results) to the responsible team's Slack channel (`#g_TEAM_NAME`).
and cross-posted (with the command results) to the responsible team's Slack channel.
- Ensure that the feature MRs have been deployed to both production and canary with `/chatops run auto_deploy status <merge-commit-of-your-feature>`
- [ ] Depending on the [type of actor](https://docs.gitlab.com/ee/development/feature_flags/#feature-actors) you are using, pick one of these options:
@@ -138,7 +99,7 @@ To do so, follow these steps:
- [ ] Create a merge request with the following changes. Ask for review and merge it.
- [ ] Set the `default_enabled` attribute in [the feature flag definition](https://docs.gitlab.com/ee/development/feature_flags/#feature-flag-definition-and-validation) to `true`.
- [ ] Review [what warrants a changelog entry](https://docs.gitlab.com/ee/development/changelog.html#what-warrants-a-changelog-entry) and decide if [a changelog entry](https://docs.gitlab.com/ee/development/feature_flags/#changelog) is needed.
- [ ] Decide [which changelog entry](https://docs.gitlab.com/ee/development/feature_flags/#changelog) is needed.
- [ ] Ensure that the default-enabling MR has been included in the release package.
If the merge request was deployed before [the monthly release was tagged](https://about.gitlab.com/handbook/engineering/releases/#self-managed-releases-1),
the feature can be officially announced in a release blog post: `/chatops run release check <merge-request-url> <milestone>`
@@ -182,27 +143,6 @@ You can either [create a follow-up issue for Feature Flag Cleanup](https://gitla
/chatops run feature set <feature-flag-name> false
```
<!-- A feature flag can also be used for rolling out a bug fix or maintenance work.
In this scenario, labels must be related to it, for example: ~"type::feature", ~"type::bug", or ~"type::maintenance".
Please use /copy_metadata to copy the labels from the issue you're rolling out. -->
<!--
Template placeholders
- name: MAIN-ISSUE-LINK
description: the URL of the issue introducing the feature flag
- name: <feature-flag-name>
description: the feature flag name
- name: #g_TEAM_NAME
description: the Slack channel name of the responsible team, e.g. #g_foo
- name: NAME_OF_DRI
description: the GitLab username of the best individual to reach out to, e.g. @foo
- name: NAME_OF_PM
description: the GitLab username of the relevant PM, e.g. @foo
- name: <your-username>
description: the GitLab username of the person who would enable the feature flag on GitLab.com, e.g. @foo
-->
/label ~group::
/label ~"feature flag"
/assign me

View File

@@ -43,7 +43,7 @@ Personas are described at https://about.gitlab.com/handbook/product/personas/
<!-- How are you going to track usage of this feature? Think about user behavior and their interaction with the product. What indicates someone is getting value from it?
Create tracking issue using the Snowplow event tracking template. See https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Snowplow%20event%20tracking.md
Explore (../../doc/development/internal_analytics/internal_event_instrumentation/quick_start.md) for a guide.
-->

View File

@@ -98,9 +98,6 @@ Test Planning: https://about.gitlab.com/handbook/engineering/quality/quality-eng
### Feature Usage Metrics
<!-- How are you going to track usage of this feature? Think about user behavior and their interaction with the product. What indicates someone is getting value from it?
Create tracking issue using the Snowplow event tracking template. See https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Snowplow%20event%20tracking.md
-->
### What does success look like, and how can we measure that?
@@ -108,7 +105,7 @@ Create tracking issue using the Snowplow event tracking template. See https://gi
<!--
Define both the success metrics and acceptance criteria. Note that success metrics indicate the desired business outcomes, while acceptance criteria indicate when the solution is working correctly. If there is no way to measure success, link to an issue that will implement a way to measure this.
Create tracking issue using the Snowplow event tracking template. See https://gitlab.com/gitlab-org/gitlab/-/blob/master/.gitlab/issue_templates/Snowplow%20event%20tracking.md
Explore (../../doc/development/internal_analytics/internal_event_instrumentation/quick_start.md) for a guide.
-->
### What is the type of buyer?

View File

@@ -1,46 +0,0 @@
<!--
* Use this issue template for creating requests to track snowplow events
* Snowplow events can be both Frontend (javascript) or Backend (Ruby)
* Snowplow is currently not used for self-hosted instances of GitLab; self-hosted instances still rely on usage ping for product analytics. Snowplow is used for GitLab SaaS
* You do not need to create an issue to track generic front-end events, such as All page views, sessions, link clicks, some button clicks, etc.
* What you should capture are specific events with defined business logic. For example, when a user creates an incident by escalating an existing alert, or when a user creates and pushes up a new Node package to the NPM registry.
* For more details read https://about.gitlab.com/handbook/business-technology/data-team/programs/data-for-product-managers/
-->
<!--
We generally recommend events be tracked using a [structured event](https://docs.snowplowanalytics.com/docs/understanding-tracking-design/out-of-the-box-vs-custom-events-and-entities/#structured-events), which has 5 properties you can use. There may be instances where structured events are not sufficient. You may want to track an event where the property changes frequently or is in general something very unique. In those cases, use a [self-describing event](https://docs.snowplowanalytics.com/docs/understanding-tracking-design/out-of-the-box-vs-custom-events-and-entities/#self-describing-events).
-->
## Structured Snowplow events to track
* Category: The page or backend area of the application. Unless infeasible, please use the Rails page attribute by default in the frontend, and namespace + classname on the backend. If you're not sure what it is, work with your engineering manager to figure it out.
* Action: A string that is used to define the user action. The first word should always describe the action or aspect: clicks should be `click`, activations should be `activate`, creations should be `create`, etc. Use underscores to describe what was acted on; for example, activating a form field would be `activate_form_input`. An interface action like clicking on a dropdown would be `click_dropdown`, while a behavior like creating a project record from the backend would be `create_project`
* Label: Optional. The specific element, or object that's being acted on. This is either the label of the element (e.g. a tab labeled 'Create from template' may be `create_from_template`) or a unique identifier if no text is available (e.g. closing the Groups dropdown in the top navbar might be `groups_dropdown_close`), or it could be the name or title attribute of a record being created.
* Property: Optional. Any additional property of the element, or object being acted on.
* Value: Optional, numeric. Describes a numeric value (decimal) directly related to the event. This could be the value of an input (e.g. `10` when clicking `internal` visibility)
| Category | Action | Label | Property | Feature Issue | Additional Information |
| ------ | ------ | ------ | ------ | ------ | ------ |
| cell | cell | cell | cell | cell | cell |
| cell | cell | cell | cell | cell | cell |
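As a concrete illustration of the five fields described above, a minimal sketch: the `se_*` keys follow Snowplow's tracker protocol field names for structured events, but the helper itself is hypothetical and not part of GitLab.

```javascript
// Hypothetical helper mapping the five structured-event fields onto the
// Snowplow tracker protocol keys (se_ca, se_ac, se_la, se_pr, se_va).
function buildStructuredEvent(category, action, { label, property, value } = {}) {
  return { se_ca: category, se_ac: action, se_la: label, se_pr: property, se_va: value };
}

// e.g. closing the Groups dropdown in the top navbar:
const event = buildStructuredEvent('projects:issues:index', 'click_dropdown', {
  label: 'groups_dropdown_close',
});
```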
<!--
Snowplow event tracking starts with instrumentation and is completed after a chart is created in Sisense.
Use this checklist to ensure all steps are completed
-->
## Snowplow event tracking checklist
* [ ] Engineering completes the work and deploys changes to GitLab SaaS
* [ ] Verify the new Snowplow events are listed in the [Snowplow Event Exploration](https://app.periscopedata.com/app/gitlab/539181/Snowplow-Event-Exploration---last-30-days) dashboard
* [ ] Create chart(s) to track your event(s) in the relevant dashboard
* [ ] Use the [Chart Snowplow Actions](https://app.periscopedata.com/app/gitlab/snippet/Chart-Snowplow-Actions/5546da87ae2c4a3fbc98415c88b3eedd/edit) SQL snippet to quickly visualize usage. See [example](https://app.periscopedata.com/app/gitlab/737489/Health-Group-Dashboard?widget=9797112&udv=0)
<!-- Label reminders - you should have one of each of the following labels.
Use the following resources to find the appropriate labels:
- https://gitlab.com/gitlab-org/gitlab/-/labels
- https://about.gitlab.com/handbook/product/categories/features/
-->
/label ~devops:: ~group: ~Category:
/label ~"snowplow tracking events"

View File

@@ -1006,7 +1006,6 @@ Cop/SidekiqApiUsage:
- 'db/post_migrate/**/*'
- 'lib/gitlab/sidekiq_middleware/**/*'
- 'lib/gitlab/background_migration/**/*'
- 'lib/gitlab/hashed_storage/migrator.rb'
- 'lib/api/sidekiq_metrics.rb'
- 'lib/gitlab/sidekiq_config.rb'
- 'lib/gitlab/sidekiq_queue.rb'

View File

@@ -194,7 +194,6 @@ Layout/LineEndStringConcatenationIndentation:
- 'lib/gitlab/slash_commands/presenters/run.rb'
- 'lib/gitlab/tracking/standard_context.rb'
- 'lib/tasks/gitlab/db/validate_config.rake'
- 'lib/tasks/gitlab/storage.rake'
- 'qa/qa/ee/page/project/settings/services/jira.rb'
- 'qa/qa/specs/features/api/4_verify/api_variable_inheritance_with_forward_pipeline_variables_spec.rb'
- 'qa/qa/support/system_logs/kibana.rb'

View File

@@ -589,10 +589,8 @@ Layout/LineLength:
- 'app/services/projects/destroy_service.rb'
- 'app/services/projects/fork_service.rb'
- 'app/services/projects/hashed_storage/base_attachment_service.rb'
- 'app/services/projects/hashed_storage/base_repository_service.rb'
- 'app/services/projects/hashed_storage/migrate_attachments_service.rb'
- 'app/services/projects/hashed_storage/migrate_repository_service.rb'
- 'app/services/projects/hashed_storage/rollback_repository_service.rb'
- 'app/services/projects/lfs_pointers/lfs_download_service.rb'
- 'app/services/projects/operations/update_service.rb'
- 'app/services/projects/overwrite_project_service.rb'
@@ -2726,7 +2724,6 @@ Layout/LineLength:
- 'lib/gitlab/grape_logging/loggers/client_env_logger.rb'
- 'lib/gitlab/graphql/timeout.rb'
- 'lib/gitlab/group_search_results.rb'
- 'lib/gitlab/hashed_storage/migrator.rb'
- 'lib/gitlab/hook_data/key_builder.rb'
- 'lib/gitlab/hotlinking_detector.rb'
- 'lib/gitlab/http_io.rb'
@@ -2872,7 +2869,6 @@ Layout/LineLength:
- 'lib/tasks/gitlab/shell.rake'
- 'lib/tasks/gitlab/sidekiq.rake'
- 'lib/tasks/gitlab/snippets.rake'
- 'lib/tasks/gitlab/storage.rake'
- 'lib/tasks/gitlab/terraform/migrate.rake'
- 'lib/tasks/gitlab/update_templates.rake'
- 'lib/tasks/gitlab/usage_data.rake'
@@ -4745,8 +4741,6 @@ Layout/LineLength:
- 'spec/services/projects/group_links/destroy_service_spec.rb'
- 'spec/services/projects/hashed_storage/migrate_repository_service_spec.rb'
- 'spec/services/projects/hashed_storage/migration_service_spec.rb'
- 'spec/services/projects/hashed_storage/rollback_repository_service_spec.rb'
- 'spec/services/projects/hashed_storage/rollback_service_spec.rb'
- 'spec/services/projects/import_error_filter_spec.rb'
- 'spec/services/projects/import_export/export_service_spec.rb'
- 'spec/services/projects/import_service_spec.rb'
@@ -5006,7 +5000,6 @@ Layout/LineLength:
- 'spec/tasks/gitlab/refresh_project_statistics_build_artifacts_size_rake_spec.rb'
- 'spec/tasks/gitlab/smtp_rake_spec.rb'
- 'spec/tasks/gitlab/snippets_rake_spec.rb'
- 'spec/tasks/gitlab/storage_rake_spec.rb'
- 'spec/tasks/gitlab/terraform/migrate_rake_spec.rb'
- 'spec/tasks/gitlab/uploads/check_rake_spec.rb'
- 'spec/tasks/gitlab/workhorse_rake_spec.rb'

View File

@@ -480,7 +480,6 @@ Lint/UnusedMethodArgument:
- 'lib/gitlab/graphql/project/dast_profile_connection_extension.rb'
- 'lib/gitlab/graphql/query_analyzers/ast/logger_analyzer.rb'
- 'lib/gitlab/graphql/tracers/timer_tracer.rb'
- 'lib/gitlab/hashed_storage/rake_helper.rb'
- 'lib/gitlab/hook_data/subgroup_builder.rb'
- 'lib/gitlab/import_export/after_export_strategies/base_after_export_strategy.rb'
- 'lib/gitlab/import_export/fast_hash_serializer.rb'

View File

@@ -2777,8 +2777,6 @@ RSpec/ContextWording:
- 'spec/services/projects/group_links/update_service_spec.rb'
- 'spec/services/projects/hashed_storage/migrate_repository_service_spec.rb'
- 'spec/services/projects/hashed_storage/migration_service_spec.rb'
- 'spec/services/projects/hashed_storage/rollback_repository_service_spec.rb'
- 'spec/services/projects/hashed_storage/rollback_service_spec.rb'
- 'spec/services/projects/in_product_marketing_campaign_emails_service_spec.rb'
- 'spec/services/projects/lfs_pointers/lfs_download_link_list_service_spec.rb'
- 'spec/services/projects/lfs_pointers/lfs_download_service_spec.rb'
@@ -3058,7 +3056,6 @@ RSpec/ContextWording:
- 'spec/tasks/gitlab/gitaly_rake_spec.rb'
- 'spec/tasks/gitlab/lfs/migrate_rake_spec.rb'
- 'spec/tasks/gitlab/packages/migrate_rake_spec.rb'
- 'spec/tasks/gitlab/storage_rake_spec.rb'
- 'spec/tasks/gitlab/terraform/migrate_rake_spec.rb'
- 'spec/tasks/gitlab/workhorse_rake_spec.rb'
- 'spec/tooling/danger/project_helper_spec.rb'

View File

@@ -230,8 +230,6 @@ RSpec/ReturnFromStub:
- 'spec/services/projects/create_service_spec.rb'
- 'spec/services/projects/hashed_storage/migrate_attachments_service_spec.rb'
- 'spec/services/projects/hashed_storage/migrate_repository_service_spec.rb'
- 'spec/services/projects/hashed_storage/rollback_attachments_service_spec.rb'
- 'spec/services/projects/hashed_storage/rollback_repository_service_spec.rb'
- 'spec/services/projects/in_product_marketing_campaign_emails_service_spec.rb'
- 'spec/services/projects/update_remote_mirror_service_spec.rb'
- 'spec/services/projects/update_service_spec.rb'

View File

@@ -208,7 +208,6 @@ Style/GuardClause:
- 'app/services/projects/after_rename_service.rb'
- 'app/services/projects/create_service.rb'
- 'app/services/projects/destroy_service.rb'
- 'app/services/projects/hashed_storage/rollback_service.rb'
- 'app/services/projects/import_export/export_service.rb'
- 'app/services/projects/import_service.rb'
- 'app/services/projects/lfs_pointers/lfs_object_download_list_service.rb'

View File

@@ -302,7 +302,6 @@ Style/IfUnlessModifier:
- 'app/services/projects/enable_deploy_key_service.rb'
- 'app/services/projects/fork_service.rb'
- 'app/services/projects/git_deduplication_service.rb'
- 'app/services/projects/hashed_storage/rollback_service.rb'
- 'app/services/projects/import_export/export_service.rb'
- 'app/services/projects/import_service.rb'
- 'app/services/projects/lfs_pointers/lfs_download_service.rb'
@@ -845,7 +844,6 @@ Style/IfUnlessModifier:
- 'lib/gitlab/golang.rb'
- 'lib/gitlab/graphql/pagination/keyset/connection.rb'
- 'lib/gitlab/graphql/queries.rb'
- 'lib/gitlab/hashed_storage/rake_helper.rb'
- 'lib/gitlab/hotlinking_detector.rb'
- 'lib/gitlab/http.rb'
- 'lib/gitlab/http_io.rb'
@@ -940,7 +938,6 @@ Style/IfUnlessModifier:
- 'lib/tasks/gitlab/shell.rake'
- 'lib/tasks/gitlab/sidekiq.rake'
- 'lib/tasks/gitlab/snippets.rake'
- 'lib/tasks/gitlab/storage.rake'
- 'lib/tasks/gitlab/update_templates.rake'
- 'qa/qa/ee/resource/settings/elasticsearch.rb'
- 'qa/qa/page/component/snippet.rb'

View File

@@ -20,7 +20,6 @@ Style/SoleNestedConditional:
- 'app/services/projects/container_repository/delete_tags_service.rb'
- 'app/services/projects/create_service.rb'
- 'app/services/projects/hashed_storage/migration_service.rb'
- 'app/services/projects/hashed_storage/rollback_service.rb'
- 'ee/app/finders/ee/snippets_finder.rb'
- 'ee/app/services/ee/issue_links/create_service.rb'
- 'ee/app/services/ee/lfs/unlock_file_service.rb'

View File

@@ -1 +1 @@
14.28.0
14.29.0

View File

@@ -143,7 +143,7 @@ export default {
v-model="minSizeMb"
:state="isMinSizeMbValid"
name="application_setting[inactive_projects_min_size_mb]"
size="md"
width="md"
type="number"
:min="0"
data-testid="min-size-input"
@@ -177,7 +177,7 @@ export default {
v-model="deleteAfterMonths"
:state="isDeleteAfterMonthsValid"
name="application_setting[inactive_projects_delete_after_months]"
size="sm"
width="sm"
type="number"
:min="0"
data-testid="delete-after-months-input"
@@ -215,7 +215,7 @@ export default {
v-model="sendWarningEmailAfterMonths"
:state="isSendWarningEmailAfterMonthsValid"
name="application_setting[inactive_projects_send_warning_email_after_months]"
size="sm"
width="sm"
type="number"
:min="0"
data-testid="send-warning-email-after-months-input"

View File

@@ -42,6 +42,6 @@ export default {
<template>
<div class="gl-display-flex gl-gap-3 gl-align-items-center">
<gl-datepicker v-model="date" />
<gl-form-input v-model="time" size="sm" type="time" data-testid="time-picker" />
<gl-form-input v-model="time" width="sm" type="time" data-testid="time-picker" />
</div>
</template>

View File

@@ -62,11 +62,7 @@ export default {
tokensCE() {
const { issue, incident } = this.$options.i18n;
const { types } = this.$options;
const { fetchUsers, fetchLabels } = issueBoardFilters(
this.$apollo,
this.fullPath,
this.isGroupBoard,
);
const { fetchLabels } = issueBoardFilters(this.$apollo, this.fullPath, this.isGroupBoard);
const tokens = [
{
@@ -77,7 +73,8 @@ export default {
token: UserToken,
dataType: 'user',
unique: true,
fetchUsers,
isProject: !this.isGroupBoard,
fullPath: this.fullPath,
preloadedUsers: this.preloadedUsers(),
},
{
@@ -89,7 +86,8 @@ export default {
token: UserToken,
dataType: 'user',
unique: true,
fetchUsers,
isProject: !this.isGroupBoard,
fullPath: this.fullPath,
preloadedUsers: this.preloadedUsers(),
},
{

View File

@@ -1,5 +1,3 @@
import { BoardType } from 'ee_else_ce/boards/constants';
import usersAutocompleteQuery from '~/graphql_shared/queries/users_autocomplete.query.graphql';
import boardLabels from './graphql/board_labels.query.graphql';
export default function issueBoardFilters(apollo, fullPath, isGroupBoard) {
@@ -7,17 +5,6 @@ export default function issueBoardFilters(apollo, fullPath, isGroupBoard) {
return isGroupBoard ? data.group?.labels.nodes || [] : data.project?.labels.nodes || [];
};
const fetchUsers = (usersSearchTerm) => {
const namespace = isGroupBoard ? BoardType.group : BoardType.project;
return apollo
.query({
query: usersAutocompleteQuery,
variables: { fullPath, search: usersSearchTerm, isProject: !isGroupBoard },
})
.then(({ data }) => data[namespace]?.autocompleteUsers);
};
const fetchLabels = (labelSearchTerm) => {
return apollo
.query({
@@ -34,6 +21,5 @@ export default function issueBoardFilters(apollo, fullPath, isGroupBoard) {
return {
fetchLabels,
fetchUsers,
};
}

View File

@@ -168,7 +168,7 @@ export default {
<gl-form-input
v-model="enteredText"
type="text"
size="sm"
width="sm"
class="gl-mt-2"
aria-labelledby="input-label"
autocomplete="off"

View File

@@ -78,7 +78,7 @@ export default {
</template>
</gl-sprintf>
<template v-if="stuckData.showTags">
<gl-badge v-for="tag in tags" :key="tag" variant="info">
<gl-badge v-for="tag in tags" :key="tag" size="sm" variant="info">
{{ tag }}
</gl-badge>
</template>

View File

@@ -283,7 +283,7 @@ export default {
</template>
</gl-sprintf>
</template>
<gl-form-input id="deploy_token_username" v-model="username" class="gl-form-input-xl" />
<gl-form-input id="deploy_token_username" v-model="username" width="xl" />
</gl-form-group>
<gl-form-group
:label="$options.translations.addTokenScopesLabel"

View File

@@ -24,6 +24,7 @@ export default {
variables() {
return {
configuration: this.configuration,
namespace: this.namespace,
};
},
update(data) {

View File

@@ -1,5 +1,5 @@
query getK8sServices($configuration: LocalConfiguration) {
k8sServices(configuration: $configuration) @client {
query getK8sServices($configuration: LocalConfiguration, $namespace: String) {
k8sServices(configuration: $configuration, namespace: $namespace) @client {
metadata {
name
namespace

View File

@@ -62,10 +62,13 @@ export default {
handleClusterError(err);
});
},
k8sServices(_, { configuration }) {
k8sServices(_, { configuration, namespace }) {
const coreV1Api = new CoreV1Api(new Configuration(configuration));
return coreV1Api
.listCoreV1ServiceForAllNamespaces()
const servicesApi = namespace
? coreV1Api.listCoreV1NamespacedService(namespace)
: coreV1Api.listCoreV1ServiceForAllNamespaces();
return servicesApi
.then((res) => {
const items = res?.data?.items || [];
return items.map((item) => {
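The namespace fallback introduced above can be sketched as a small helper. This is a sketch assuming a client object exposing the two list methods shown in the diff, not the actual resolver:

```javascript
// Pick the namespaced listing when a namespace is given, otherwise list
// services across all namespaces (method names as used in the diff above).
function listServices(coreV1Api, namespace) {
  return namespace
    ? coreV1Api.listCoreV1NamespacedService(namespace)
    : coreV1Api.listCoreV1ServiceForAllNamespaces();
}

// Stub client illustrating the dispatch:
const stubApi = {
  listCoreV1NamespacedService: (ns) => `services in ${ns}`,
  listCoreV1ServiceForAllNamespaces: () => 'services in all namespaces',
};
```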

View File

@@ -93,7 +93,7 @@ export default {
type="number"
min="0"
max="100"
size="xs"
width="xs"
@input="onPercentageChange"
/>
<span class="ml-1">%</span>

View File

@@ -59,7 +59,7 @@ export default {
type="number"
min="0"
max="100"
size="xs"
width="xs"
@input="onPercentageChange"
/>
<span class="gl-ml-2">%</span>

View File

@@ -294,7 +294,7 @@ export default {
:name="fields.name.name"
:placeholder="$options.i18n.inputs.name.placeholder"
data-testid="group-name-field"
:size="$options.inputSize"
:width="$options.inputSize"
:state="nameFeedbackState"
@invalid="handleInvalidName"
/>
@@ -374,7 +374,7 @@ export default {
:maxlength="fields.path.maxLength"
:pattern="fields.path.pattern"
:state="pathFeedbackState"
:size="pathInputSize"
:width="pathInputSize"
required
data-testid="group-path-field"
:data-bind-in="mattermostEnabled ? $options.mattermostDataBindName : null"
@@ -397,7 +397,7 @@ export default {
:id="fields.groupId.id"
:value="fields.groupId.value"
:name="fields.groupId.name"
size="sm"
width="sm"
readonly
/>
</gl-form-group>

View File

@@ -8,7 +8,7 @@ export const handleTracking = ({ name, data }) => {
if (data && Object.keys(data).length) {
Tracking.event(undefined, snakeCaseEventName, {
/* See GitLab snowplow schema for a definition of the extra field
* https://docs.gitlab.com/ee/development/snowplow/schemas.html#gitlab_standard.
* https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/gitlab_standard/jsonschema/1-0-9.
*/
extra: convertObjectPropsToSnakeCase(data, {
deep: true,
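`convertObjectPropsToSnakeCase` with `deep: true` recursively renames keys. A hypothetical stand-in (not the GitLab util itself) illustrates what happens to the `extra` payload passed to `Tracking.event` above:

```javascript
// Recursively convert camelCase keys to snake_case, mirroring the deep
// conversion applied to `extra` (sketch, not GitLab's actual utility).
function deepSnakeCase(value) {
  if (Array.isArray(value)) return value.map(deepSnakeCase);
  if (value === null || typeof value !== 'object') return value;
  return Object.fromEntries(
    Object.entries(value).map(([key, val]) => [
      key.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase(),
      deepSnakeCase(val),
    ]),
  );
}

const extra = deepSnakeCase({ vulnerabilityId: 1, scan: { reportType: 'sast' } });
// → { vulnerability_id: 1, scan: { report_type: 'sast' } }
```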

View File

@@ -21,7 +21,6 @@ import getIssuesCountsQuery from 'ee_else_ce/issues/list/queries/get_issues_coun
import LocalStorageSync from '~/vue_shared/components/local_storage_sync.vue';
import { createAlert, VARIANT_INFO } from '~/alert';
import { TYPENAME_USER } from '~/graphql_shared/constants';
import usersAutocompleteQuery from '~/graphql_shared/queries/users_autocomplete.query.graphql';
import CsvImportExportButtons from '~/issuable/components/csv_import_export_buttons.vue';
import { convertToGraphQLId, getIdFromGraphQLId } from '~/graphql_shared/utils';
import IssuableByEmail from '~/issuable/components/issuable_by_email.vue';
@@ -384,7 +383,8 @@ export default {
dataType: 'user',
defaultUsers: [],
operators: this.hasOrFeature ? OPERATORS_IS_NOT_OR : OPERATORS_IS_NOT,
fetchUsers: this.fetchUsers,
fullPath: this.fullPath,
isProject: this.isProject,
recentSuggestionsStorageKey: `${this.fullPath}-issues-recent-tokens-author`,
preloadedUsers,
},
@@ -395,7 +395,8 @@ export default {
token: UserToken,
dataType: 'user',
operators: this.hasOrFeature ? OPERATORS_IS_NOT_OR : OPERATORS_IS_NOT,
fetchUsers: this.fetchUsers,
fullPath: this.fullPath,
isProject: this.isProject,
recentSuggestionsStorageKey: `${this.fullPath}-issues-recent-tokens-assignee`,
preloadedUsers,
},
@@ -634,14 +635,6 @@ export default {
fetchLatestLabels(search) {
return this.fetchLabelsWithFetchPolicy(search, fetchPolicies.NETWORK_ONLY);
},
fetchUsers(search) {
return this.$apollo
.query({
query: usersAutocompleteQuery,
variables: { fullPath: this.fullPath, search, isProject: this.isProject },
})
.then(({ data }) => data[this.namespace]?.autocompleteUsers);
},
getExportCsvPathWithQuery() {
return `${this.exportCsvPath}${window.location.search}`;
},

View File

@@ -178,7 +178,7 @@ export default {
id="timeline-input-hours"
v-model="hourPickerInput"
data-testid="input-hours"
size="xs"
width="xs"
type="number"
min="00"
max="23"
@@ -189,7 +189,7 @@ export default {
v-model="minutePickerInput"
class="gl-ml-3"
data-testid="input-minutes"
size="xs"
width="xs"
type="number"
min="00"
max="59"

View File

@@ -88,7 +88,7 @@ export default {
id="comment-line-start"
:value="commentLineStart"
:options="commentLineOptions"
size="sm"
width="sm"
class="gl-w-auto gl-vertical-align-baseline"
@change="updateCommentLineStart"
/>

View File

@@ -70,7 +70,7 @@ export default {
<gl-form-input
:id="id"
:disabled="duplicatesAllowed || loading"
size="lg"
width="lg"
:value="duplicateExceptionRegex"
:state="isExceptionRegexValid"
@change="update(name, $event)"

View File

@@ -206,7 +206,7 @@ export default {
:value="spentAt"
show-clear-button
autocomplete="off"
size="small"
width="small"
@input="updateSpentAtDate"
@clear="updateSpentAtDate(null)"
/>

View File

@@ -7,6 +7,7 @@ import {
TRACKING_UNKNOWN_ID,
TRACKING_UNKNOWN_PANEL,
} from '~/super_sidebar/constants';
import eventHub from '../event_hub';
import NavItemLink from './nav_item_link.vue';
import NavItemRouterLink from './nav_item_router_link.vue';
@@ -69,16 +70,14 @@ export default {
return {
isMouseIn: false,
canClickPinButton: false,
pillCount: this.item.pill_count,
};
},
computed: {
pillData() {
return this.item.pill_count;
},
hasPill() {
return (
Number.isFinite(this.pillData) ||
(typeof this.pillData === 'string' && this.pillData !== '')
Number.isFinite(this.pillCount) ||
(typeof this.pillCount === 'string' && this.pillCount !== '')
);
},
isPinnable() {
@@ -182,11 +181,21 @@ export default {
if (this.item.is_active) {
this.$el.scrollIntoView(false);
}
eventHub.$on('updatePillValue', this.updatePillValue);
},
destroyed() {
eventHub.$off('updatePillValue', this.updatePillValue);
},
methods: {
togglePointerEvents() {
this.canClickPinButton = this.isMouseIn;
},
updatePillValue({ value, itemId }) {
if (this.item.id === itemId) {
this.pillCount = value;
}
},
},
};
</script>
@@ -249,7 +258,7 @@ export default {
'hide-on-focus-or-hover--target transition-opacity-on-hover--target': isPinnable,
}"
>
{{ pillData }}
{{ pillCount }}
</gl-badge>
</span>
</component>

View File

@@ -0,0 +1,3 @@
import createEventHub from '~/helpers/event_hub_factory';
export default createEventHub();
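The three-line module above gives sidebar components a shared bus. A minimal stand-in for `createEventHub` (hypothetical, mirroring the `$on`/`$off`/`$emit` contract that `nav_item.vue` relies on) shows how `updatePillValue` reaches a matching item:

```javascript
// Tiny event hub with the Vue-style API the sidebar relies on.
function createEventHub() {
  const listeners = {};
  return {
    $on(event, cb) {
      (listeners[event] = listeners[event] || []).push(cb);
    },
    $off(event, cb) {
      listeners[event] = (listeners[event] || []).filter((fn) => fn !== cb);
    },
    $emit(event, payload) {
      (listeners[event] || []).forEach((cb) => cb(payload));
    },
  };
}

const hub = createEventHub();
let pillCount = null;

// A nav item listens and updates only when the item ids match.
hub.$on('updatePillValue', ({ value, itemId }) => {
  if (itemId === 'merge_requests') pillCount = value;
});

hub.$emit('updatePillValue', { value: 42, itemId: 'merge_requests' });
hub.$emit('updatePillValue', { value: 7, itemId: 'issues' }); // ignored by this listener
```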

View File

@@ -4,6 +4,8 @@ import { compact } from 'lodash';
import { createAlert } from '~/alert';
import { __ } from '~/locale';
import { WORKSPACE_GROUP, WORKSPACE_PROJECT } from '~/issues/constants';
import usersAutocompleteQuery from '~/graphql_shared/queries/users_autocomplete.query.graphql';
import { OPTIONS_NONE_ANY } from '../constants';
import BaseToken from './base_token.vue';
@@ -41,6 +43,12 @@ export default {
preloadedUsers() {
return this.config.preloadedUsers || [];
},
namespace() {
return this.config.isProject ? WORKSPACE_PROJECT : WORKSPACE_GROUP;
},
fetchUsersQuery() {
return this.config.fetchUsers ? this.config.fetchUsers : this.fetchUsersBySearchTerm;
},
},
methods: {
getActiveUser(users, data) {
@@ -49,11 +57,19 @@ export default {
getAvatarUrl(user) {
return user.avatarUrl || user.avatar_url;
},
fetchUsersBySearchTerm(search) {
return this.$apollo
.query({
query: usersAutocompleteQuery,
variables: { fullPath: this.config.fullPath, search, isProject: this.config.isProject },
})
.then(({ data }) => data[this.namespace]?.autocompleteUsers);
},
fetchUsers(searchTerm) {
this.loading = true;
const fetchPromise = this.config.fetchPath
? this.config.fetchUsers(this.config.fetchPath, searchTerm)
: this.config.fetchUsers(searchTerm);
: this.fetchUsersQuery(searchTerm);
fetchPromise
.then((res) => {

View File

@@ -156,7 +156,7 @@ export default {
<gl-form-input
ref="input"
:readonly="readonly"
:size="size"
:width="size"
class="gl-font-monospace! gl-cursor-default!"
v-bind="formInputGroupProps"
:value="value"

View File

@@ -41,6 +41,7 @@ class ProjectsController < Projects::ApplicationController
push_frontend_feature_flag(:remove_monitor_metrics, @project)
push_frontend_feature_flag(:explain_code_chat, current_user)
push_frontend_feature_flag(:service_desk_custom_email, @project)
push_frontend_feature_flag(:issue_email_participants, @project)
push_licensed_feature(:file_locks) if @project.present? && @project.licensed_feature_available?(:file_locks)
push_licensed_feature(:security_orchestration_policies) if @project.present? && @project.licensed_feature_available?(:security_orchestration_policies)
push_force_frontend_feature_flag(:work_items, @project&.work_items_feature_flag_enabled?)

View File

@@ -507,7 +507,8 @@ module ApplicationSettingsHelper
:allow_account_deletion,
:gitlab_shell_operation_limit,
:namespace_aggregation_schedule_lease_duration_in_seconds,
:ci_max_total_yaml_size_bytes
:ci_max_total_yaml_size_bytes,
:project_jobs_api_rate_limit
].tap do |settings|
next if Gitlab.com?

View File

@@ -651,6 +651,7 @@ class ApplicationSetting < MainClusterwide::ApplicationRecord
validates :throttle_authenticated_deprecated_api_period_in_seconds
validates :throttle_protected_paths_requests_per_period
validates :throttle_protected_paths_period_in_seconds
validates :project_jobs_api_rate_limit
end
with_options(numericality: { only_integer: true, greater_than_or_equal_to: 0 }) do

View File

@@ -268,7 +268,8 @@ module ApplicationSettingImplementation
gitlab_dedicated_instance: false,
ci_max_includes: 150,
allow_account_deletion: true,
gitlab_shell_operation_limit: 600
gitlab_shell_operation_limit: 600,
project_jobs_api_rate_limit: 600
}.tap do |hsh|
hsh.merge!(non_production_defaults) unless Rails.env.production?
end
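The hunk above appends `project_jobs_api_rate_limit: 600` to the defaults hash, which is then passed through `Hash#tap` to conditionally merge non-production overrides. A minimal sketch of that pattern (the setting name is real; the override value and method shape are assumed for illustration):

```ruby
def defaults(production:)
  # Hypothetical non-production override; the real values live in
  # ApplicationSettingImplementation.non_production_defaults.
  non_production_defaults = { project_jobs_api_rate_limit: 10_000 }

  {
    gitlab_shell_operation_limit: 600,
    project_jobs_api_rate_limit: 600
  }.tap do |hsh|
    # tap returns the hash itself, so the merge mutates the literal in place
    # without needing a temporary variable.
    hsh.merge!(non_production_defaults) unless production
  end
end

defaults(production: true)[:project_jobs_api_rate_limit]  # => 600
defaults(production: false)[:project_jobs_api_rate_limit] # => 10_000
```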

View File

@@ -831,6 +831,8 @@ class Note < ApplicationRecord
return if namespace_id.present? && !noteable_changed? && !project_changed?
self.namespace_id = if for_issue?
# Some issues are not project noteables (e.g. group-level work items)
# so we need this separate condition
noteable&.namespace_id
elsif for_project_noteable?
project&.project_namespace_id

View File

@@ -2685,26 +2685,6 @@ class Project < ApplicationRecord
self.merge_requests_ff_only_enabled || self.merge_requests_rebase_enabled
end
def migrate_to_hashed_storage!
return unless storage_upgradable?
if git_transfer_in_progress?
HashedStorage::ProjectMigrateWorker.perform_in(Gitlab::ReferenceCounter::REFERENCE_EXPIRE_TIME, id)
else
HashedStorage::ProjectMigrateWorker.perform_async(id)
end
end
def rollback_to_legacy_storage!
return if legacy_storage?
if git_transfer_in_progress?
HashedStorage::ProjectRollbackWorker.perform_in(Gitlab::ReferenceCounter::REFERENCE_EXPIRE_TIME, id)
else
HashedStorage::ProjectRollbackWorker.perform_async(id)
end
end
override :git_transfer_in_progress?
def git_transfer_in_progress?
GL_REPOSITORY_TYPES.any? do |type|

View File

@@ -75,7 +75,8 @@ module Users
namespace_over_storage_users_combined_alert: 73, # EE-only
# 74 removed in https://gitlab.com/gitlab-org/gitlab/-/merge_requests/132751
vsd_feedback_banner: 75, # EE-only
security_policy_protected_branch_modification: 76 # EE-only
security_policy_protected_branch_modification: 76, # EE-only
vulnerability_report_grouping: 77 # EE-only
}
validates :feature_name,
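The enum above maps each callout feature name to a stable integer, and the inline comment shows that ID 74 was retired rather than reused when its entry was removed. A sketch of the invariant this protects (values copied from the hunk; the retired-ID list is inferred from the comment):

```ruby
# Each callout feature name maps to a stable integer ID stored in the
# database. Removed entries retire their ID forever; reusing one would
# make previously written rows ambiguous.
feature_names = {
  vsd_feedback_banner: 75,
  security_policy_protected_branch_modification: 76,
  vulnerability_report_grouping: 77
}
retired_ids = [74] # removed in an earlier MR, never to be reassigned

# Invariants: IDs are unique, and no retired ID is reassigned.
ids_unique = feature_names.values.uniq.length == feature_names.length
no_reuse   = (feature_names.values & retired_ids).empty?
```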

View File

@@ -1,115 +0,0 @@
# frozen_string_literal: true
module Projects
module HashedStorage
# Returned when repository can't be made read-only because there is already a git transfer in progress
RepositoryInUseError = Class.new(StandardError)
class BaseRepositoryService < BaseService
include Gitlab::ShellAdapter
attr_reader :old_disk_path, :new_disk_path, :old_storage_version,
:logger, :move_wiki, :move_design
def initialize(project:, old_disk_path:, logger: nil)
@project = project
@logger = logger || Gitlab::AppLogger
@old_disk_path = old_disk_path
@move_wiki = has_wiki?
@move_design = has_design?
end
protected
def has_wiki?
gitlab_shell.repository_exists?(project.repository_storage, "#{old_wiki_disk_path}.git")
end
def has_design?
gitlab_shell.repository_exists?(project.repository_storage, "#{old_design_disk_path}.git")
end
def move_repository(from_name, to_name)
from_exists = gitlab_shell.repository_exists?(project.repository_storage, "#{from_name}.git")
to_exists = gitlab_shell.repository_exists?(project.repository_storage, "#{to_name}.git")
# If we don't find the repository on either original or target we should log that as it could be an issue if the
# project was not originally empty.
if !from_exists && !to_exists
logger.warn "Can't find a repository on either source or target paths for #{project.full_path} (ID=#{project.id}) ..."
# We return true so we still reflect the change in the database.
# Next time the repository is (re)created it will be under the new storage layout
return true
elsif !from_exists
# Repository has already been moved.
return true
end
gitlab_shell.mv_repository(project.repository_storage, from_name, to_name).tap do |moved|
if moved
logger.info("Repository moved from '#{from_name}' to '#{to_name}' (PROJECT_ID=#{project.id})")
else
logger.error("Repository cannot be moved from '#{from_name}' to '#{to_name}' (PROJECT_ID=#{project.id})")
end
end
end
def move_repositories
result = move_repository(old_disk_path, new_disk_path)
project.reload_repository!
if move_wiki
result &&= move_repository(old_wiki_disk_path, new_wiki_disk_path)
project.clear_memoization(:wiki)
end
if move_design
result &&= move_repository(old_design_disk_path, new_design_disk_path)
project.clear_memoization(:design_repository)
end
result
end
def rollback_folder_move
move_repository(new_disk_path, old_disk_path)
move_repository(new_wiki_disk_path, old_wiki_disk_path)
move_repository(new_design_disk_path, old_design_disk_path) if move_design
end
def try_to_set_repository_read_only!
project.set_repository_read_only!
rescue Project::RepositoryReadOnlyError => err
migration_error = "Target repository '#{old_disk_path}' cannot be made read-only: #{err.message}"
logger.error migration_error
raise RepositoryInUseError, migration_error
end
def wiki_path_suffix
@wiki_path_suffix ||= Gitlab::GlRepository::WIKI.path_suffix
end
def old_wiki_disk_path
@old_wiki_disk_path ||= "#{old_disk_path}#{wiki_path_suffix}"
end
def new_wiki_disk_path
@new_wiki_disk_path ||= "#{new_disk_path}#{wiki_path_suffix}"
end
def design_path_suffix
@design_path_suffix ||= ::Gitlab::GlRepository::DESIGN.path_suffix
end
def old_design_disk_path
@old_design_disk_path ||= "#{old_disk_path}#{design_path_suffix}"
end
def new_design_disk_path
@new_design_disk_path ||= "#{new_disk_path}#{design_path_suffix}"
end
end
end
end
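The existence checks at the top of the removed `move_repository` form a small decision table: warn-but-succeed when neither side exists, treat a missing source with an existing target as already moved, and otherwise attempt the move. A pure-logic sketch of that table (outcome names invented for illustration):

```ruby
# Decide what to do given whether the source and target repositories exist,
# mirroring the branches of the removed move_repository method.
def move_decision(from_exists, to_exists)
  # Neither side exists: warn, but report success so the database is
  # still updated and the repository is recreated under the new layout.
  return :missing_everywhere if !from_exists && !to_exists

  # Source is gone but target exists: the move already happened.
  return :already_moved unless from_exists

  # Source exists: actually attempt the move.
  :move
end

move_decision(false, false) # => :missing_everywhere
move_decision(false, true)  # => :already_moved
move_decision(true, false)  # => :move
```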

View File

@@ -1,29 +0,0 @@
# frozen_string_literal: true
module Projects
module HashedStorage
class RollbackAttachmentsService < BaseAttachmentService
def execute
origin = FileUploader.absolute_base_dir(project)
project.storage_version = ::Project::HASHED_STORAGE_FEATURES[:repository]
target = FileUploader.absolute_base_dir(project)
@new_disk_path = FileUploader.base_dir(project)
result = move_folder!(origin, target)
if result
project.save!(validate: false)
yield if block_given?
else
# Rollback changes
project.rollback!
end
result
end
end
end
end

View File

@@ -1,53 +0,0 @@
# frozen_string_literal: true
module Projects
module HashedStorage
class RollbackRepositoryService < BaseRepositoryService
def execute
try_to_set_repository_read_only!
@old_storage_version = project.storage_version
project.storage_version = nil
@new_disk_path = project.disk_path
result = move_repositories
if result
project.set_full_path
project.track_project_repository
else
rollback_folder_move
project.storage_version = ::Project::HASHED_STORAGE_FEATURES[:repository]
end
project.transaction do
project.save!(validate: false)
project.set_repository_writable!
end
result
rescue Gitlab::Git::CommandError => e
logger.error("Repository #{project.full_path} failed to rollback (PROJECT_ID=#{project.id}). Git operation failed: #{e.inspect}")
rollback_migration!
false
rescue OpenSSL::Cipher::CipherError => e
logger.error("Repository #{project.full_path} failed to rollback (PROJECT_ID=#{project.id}). There is a problem with encrypted attributes: #{e.inspect}")
rollback_migration!
false
end
private
def rollback_migration!
rollback_folder_move
project.storage_version = ::Project::HASHED_STORAGE_FEATURES[:repository]
project.set_repository_writable!
end
end
end
end

View File

@@ -1,37 +0,0 @@
# frozen_string_literal: true
module Projects
module HashedStorage
class RollbackService < BaseService
attr_reader :logger, :old_disk_path
def initialize(project, old_disk_path, logger: nil)
@project = project
@old_disk_path = old_disk_path
@logger = logger || Gitlab::AppLogger
end
def execute
# Rollback attachments from Hashed Storage to Legacy
if project.hashed_storage?(:attachments)
return false unless rollback_attachments_service.execute
end
# Rollback repository from Hashed Storage to Legacy
if project.hashed_storage?(:repository)
rollback_repository_service.execute
end
end
private
def rollback_attachments_service
HashedStorage::RollbackAttachmentsService.new(project: project, old_disk_path: old_disk_path, logger: logger)
end
def rollback_repository_service
HashedStorage::RollbackRepositoryService.new(project: project, old_disk_path: old_disk_path, logger: logger)
end
end
end
end

View File

@@ -56,6 +56,11 @@
= f.label :throttle_authenticated_web_period_in_seconds, _('Authenticated web rate limit period in seconds'), class: 'label-bold'
= f.number_field :throttle_authenticated_web_period_in_seconds, class: 'form-control gl-form-input'
%fieldset
.form-group
= f.label :project_jobs_api_rate_limit, safe_format('Maximum authenticated requests to %{open}project/:id/jobs%{close} per minute', tag_pair(tag.code, :open, :close)), class: 'label-bold'
= f.number_field :project_jobs_api_rate_limit, class: 'form-control gl-form-input'
%fieldset
%legend.h5.gl-border-none
= _('Response text')

View File

@@ -6,7 +6,7 @@
= render Pajamas::ButtonComponent.new(button_options: { class: 'js-settings-toggle' }) do
= expanded ? _('Collapse') : _('Expand')
%p.gl-text-secondary
- help_link = link_to('', help_page_path('development/internal_analytics/snowplow/index'), target: '_blank', rel: 'noopener noreferrer')
- help_link = link_to('', help_page_path('development/internal_analytics/internal_event_instrumentation/index'), target: '_blank', rel: 'noopener noreferrer')
- snowplow_link = link_to('', 'https://snowplow.io/', target: '_blank', rel: 'noopener noreferrer')
= safe_format(_('Configure %{snowplow_link_start}Snowplow%{snowplow_link_end} to track events. %{help_link_start}Learn more.%{help_link_end}'), tag_pair(snowplow_link, :snowplow_link_start, :snowplow_link_end), tag_pair(help_link, :help_link_start, :help_link_end))
.settings-content

View File

@@ -13,9 +13,6 @@ module HashedStorage
# @param [Integer] start initial ID of the batch
# @param [Integer] finish last ID of the batch
def perform(start, finish)
migrator = Gitlab::HashedStorage::Migrator.new
migrator.bulk_migrate(start: start, finish: finish)
end
def perform(start, finish); end
end
end

View File

@@ -13,17 +13,6 @@ module HashedStorage
attr_reader :project_id
def perform(project_id, old_disk_path = nil)
@project_id = project_id # we need to set this in order to create the lease_key
try_obtain_lease do
project = Project.without_deleted.find_by_id(project_id)
break unless project && project.storage_upgradable?
old_disk_path ||= Storage::LegacyProject.new(project).disk_path
::Projects::HashedStorage::MigrationService.new(project, old_disk_path, logger: logger).execute
end
end
def perform(project_id, old_disk_path = nil); end
end
end
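The removed worker body wrapped its work in `try_obtain_lease`, keyed on the project ID, so only one migration per project could run at a time. A toy in-process sketch of that exclusive-lease semantics (the real implementation is Redis-backed `Gitlab::ExclusiveLease`; this stand-in is illustrative only):

```ruby
# In-process stand-in for an exclusive lease: the block runs only if no one
# currently holds the lease for the same key.
LEASES = {}

def try_obtain_lease(key)
  return if LEASES[key] # lease held elsewhere: skip silently

  LEASES[key] = true
  begin
    yield
  ensure
    LEASES.delete(key) # release so later attempts can run
  end
end

runs = 0
try_obtain_lease('project_migrate:42') do
  runs += 1
  # A concurrent attempt while the lease is held is skipped entirely.
  try_obtain_lease('project_migrate:42') { runs += 1 }
end
runs # => 1
```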

View File

@@ -13,17 +13,6 @@ module HashedStorage
attr_reader :project_id
def perform(project_id, old_disk_path = nil)
@project_id = project_id # we need to set this in order to create the lease_key
try_obtain_lease do
project = Project.without_deleted.find_by_id(project_id)
break unless project
old_disk_path ||= project.disk_path
::Projects::HashedStorage::RollbackService.new(project, old_disk_path, logger: logger).execute
end
end
def perform(project_id, old_disk_path = nil); end
end
end

View File

@@ -13,9 +13,6 @@ module HashedStorage
# @param [Integer] start initial ID of the batch
# @param [Integer] finish last ID of the batch
def perform(start, finish)
migrator = Gitlab::HashedStorage::Migrator.new
migrator.bulk_rollback(start: start, finish: finish)
end
def perform(start, finish); end
end
end

View File

@@ -1,7 +1,7 @@
---
name: rate_limit_oauth_api
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/133109
rollout_issue_url:
rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/427874
milestone: '16.5'
type: development
group: group::authentication and authorization

View File

@@ -1,8 +0,0 @@
---
name: user_pat_rest_api
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131923
rollout_issue_url: https://gitlab.com/gitlab-org/gitlab/-/issues/425967
milestone: '16.5'
type: development
group: group::authentication and authorization
default_enabled: false

View File

@@ -60,7 +60,11 @@ class ObjectStoreSettings
next unless object_store_setting && object_store_setting['enabled']
URI(object_store_setting.dig('connection', 'endpoint'))
endpoint = object_store_setting.dig('connection', 'endpoint')
next unless endpoint
URI(endpoint)
end.uniq
end
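The hunk above adds a nil guard before `URI(...)`: an object store with `enabled: true` but no `connection.endpoint` would previously crash settings parsing, because `URI(nil)` raises. A minimal sketch of the fixed collection logic (the settings hashes are invented for illustration):

```ruby
require 'uri'

# Hypothetical object store configs: one with an endpoint, one without,
# and one disabled entirely.
settings = [
  { 'enabled' => true, 'connection' => { 'endpoint' => 'http://minio.local:9000' } },
  { 'enabled' => true, 'connection' => {} },  # endpoint missing: must be skipped
  { 'enabled' => false, 'connection' => { 'endpoint' => 'http://ignored.example' } }
]

endpoints = settings.filter_map do |setting|
  next unless setting['enabled']

  endpoint = setting.dig('connection', 'endpoint')
  next unless endpoint # without this guard, URI(nil) raises

  URI(endpoint)
end.uniq

endpoints.map(&:host) # => ["minio.local"]
```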

View File

@@ -0,0 +1,7 @@
# frozen_string_literal: true
class AddJobsIndexRateLimitToApplicationSettings < Gitlab::Database::Migration[2.1]
def change
add_column :application_settings, :project_jobs_api_rate_limit, :integer, default: 600, null: false
end
end
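The new `project_jobs_api_rate_limit` column (default 600) caps authenticated requests to `project/:id/jobs` per minute. A toy fixed-window sketch of per-minute rate limiting (GitLab's real limiter uses Redis and a different algorithm; this only illustrates the semantics):

```ruby
# Fixed-window counter: at most `limit` requests per key per minute window.
class FixedWindowLimiter
  def initialize(limit)
    @limit = limit
    @counts = Hash.new(0)
  end

  def allow?(key, now = Time.now)
    window = [key, now.to_i / 60] # identify the current minute for this key
    return false if @counts[window] >= @limit

    @counts[window] += 1
    true
  end
end

limiter = FixedWindowLimiter.new(2)
t = Time.at(0)
limiter.allow?('project-1', t) # => true
limiter.allow?('project-1', t) # => true
limiter.allow?('project-1', t) # => false (third request in the same minute)
```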

View File

@@ -0,0 +1 @@
218b30bf9e844ec19b9388980aa5d505fc860a5fb1ad6340e620c1ac90fd799a

View File

@@ -11853,6 +11853,7 @@ CREATE TABLE application_settings (
container_registry_db_enabled boolean DEFAULT false NOT NULL,
encrypted_vertex_ai_access_token bytea,
encrypted_vertex_ai_access_token_iv bytea,
project_jobs_api_rate_limit integer DEFAULT 600 NOT NULL,
CONSTRAINT app_settings_container_reg_cleanup_tags_max_list_size_positive CHECK ((container_registry_cleanup_tags_service_max_list_size >= 0)),
CONSTRAINT app_settings_container_registry_pre_import_tags_rate_positive CHECK ((container_registry_pre_import_tags_rate >= (0)::numeric)),
CONSTRAINT app_settings_dep_proxy_ttl_policies_worker_capacity_positive CHECK ((dependency_proxy_ttl_group_policy_worker_capacity >= 0)),

View File

@@ -47,6 +47,8 @@ swap:
once that: "after that"
once the: "after the"
once you: "after you"
pack file: packfile
pack files: packfiles
pop-up window: "dialog"
pop-up: "dialog"
popup: "dialog"

View File

@@ -1,267 +1,11 @@
---
stage: Systems
group: Distribution
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
redirect_to: '../repository_storage_paths.md'
remove_date: '2024-01-11'
---
# Repository storage Rake tasks **(FREE SELF)**
This document was moved to [another location](../repository_storage_paths.md).
This is a collection of Rake tasks to help you list and migrate
existing projects and their attachments to the new
[hashed storage](../repository_storage_paths.md) that GitLab
uses to organize the Git data.
## List projects and attachments
The following Rake tasks list the projects and attachments that are
available on legacy and hashed storage.
### On legacy storage
To have a summary and then a list of projects and their attachments using legacy storage:
- Linux package installations:
```shell
# Projects
sudo gitlab-rake gitlab:storage:legacy_projects
sudo gitlab-rake gitlab:storage:list_legacy_projects
# Attachments
sudo gitlab-rake gitlab:storage:legacy_attachments
sudo gitlab-rake gitlab:storage:list_legacy_attachments
```
- Self-compiled installations:
```shell
# Projects
sudo -u git -H bundle exec rake gitlab:storage:legacy_projects RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:storage:list_legacy_projects RAILS_ENV=production
# Attachments
sudo -u git -H bundle exec rake gitlab:storage:legacy_attachments RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:storage:list_legacy_attachments RAILS_ENV=production
```
### On hashed storage
To have a summary and then a list of projects and their attachments using hashed storage:
- Linux package installations:
```shell
# Projects
sudo gitlab-rake gitlab:storage:hashed_projects
sudo gitlab-rake gitlab:storage:list_hashed_projects
# Attachments
sudo gitlab-rake gitlab:storage:hashed_attachments
sudo gitlab-rake gitlab:storage:list_hashed_attachments
```
- Self-compiled installations:
```shell
# Projects
sudo -u git -H bundle exec rake gitlab:storage:hashed_projects RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:storage:list_hashed_projects RAILS_ENV=production
# Attachments
sudo -u git -H bundle exec rake gitlab:storage:hashed_attachments RAILS_ENV=production
sudo -u git -H bundle exec rake gitlab:storage:list_hashed_attachments RAILS_ENV=production
```
## Migrate to hashed storage
WARNING:
In GitLab 13.0, [hashed storage](../repository_storage_paths.md#hashed-storage)
is enabled by default and the legacy storage is deprecated.
GitLab 14.0 eliminates support for legacy storage. If you're on GitLab
13.0 and later, switching new projects to legacy storage is not possible.
The option to choose between hashed and legacy storage in the Admin Area has
been disabled.
Run this task on any machine that has Rails/Sidekiq configured. It schedules all
your existing projects and their associated attachments to be migrated to the
**Hashed** storage type:
- Linux package installations:
```shell
sudo gitlab-rake gitlab:storage:migrate_to_hashed
```
- Self-compiled installations:
```shell
sudo -u git -H bundle exec rake gitlab:storage:migrate_to_hashed RAILS_ENV=production
```
If you have any existing integrations, you may want to do a small rollout first
to validate. You can do so by restricting the operation to an ID range with the
environment variables `ID_FROM` and `ID_TO`. For example, to limit the rollout
to project IDs 50 to 100 in a Linux package installation:
```shell
sudo gitlab-rake gitlab:storage:migrate_to_hashed ID_FROM=50 ID_TO=100
```
To monitor the progress in GitLab:
1. On the left sidebar, select **Search or go to**.
1. Select **Admin Area**.
1. On the left sidebar, select **Monitoring > Background Jobs**.
1. Watch how long the `hashed_storage:hashed_storage_project_migrate` queue
takes to finish. After it reaches zero, you can confirm every project
has been migrated by running the commands above.
If you find it necessary, you can run the migration Rake task again to schedule any projects that were missed.
Any error or warning is logged in Sidekiq's log file.
If [Geo](../geo/index.md) is enabled, each project that is successfully migrated
generates an event to replicate the changes on any **secondary** nodes.
You only need the `gitlab:storage:migrate_to_hashed` Rake task to migrate your repositories, but there are
[additional commands](#list-projects-and-attachments) to help you inspect projects and attachments in both legacy and hashed storage.
## Rollback from hashed storage to legacy storage
WARNING:
In GitLab 13.0, [hashed storage](../repository_storage_paths.md#hashed-storage)
is enabled by default and the legacy storage is deprecated.
GitLab 14.0 eliminates support for legacy storage. If you're on GitLab
13.0 and later, switching new projects to legacy storage is not possible.
The option to choose between hashed and legacy storage in the Admin Area has
been disabled.
This task schedules all your existing projects and associated attachments to be rolled back to the
legacy storage type.
- Linux package installations:
```shell
sudo gitlab-rake gitlab:storage:rollback_to_legacy
```
- Self-compiled installations:
```shell
sudo -u git -H bundle exec rake gitlab:storage:rollback_to_legacy RAILS_ENV=production
```
If you have any existing integrations, you may want to do a small rollback first
to validate. You can do so by restricting the operation to an ID range with the
environment variables `ID_FROM` and `ID_TO`. For example, to limit the rollback
to project IDs 50 to 100 in a Linux package installation:
```shell
sudo gitlab-rake gitlab:storage:rollback_to_legacy ID_FROM=50 ID_TO=100
```
You can monitor the progress in the **Admin Area > Monitoring > Background Jobs** page.
On the **Queues** tab, you can watch the `hashed_storage:hashed_storage_project_rollback` queue to see how long the process takes to finish.
After it reaches zero, you can confirm every project has been rolled back by running the commands above.
If some projects weren't rolled back, you can run this rollback script again to schedule further rollbacks.
Any error or warning is logged in Sidekiq's log file.
If you have a Geo setup, the rollback is not reflected automatically
on the **secondary** node. You may need to wait for a backfill operation to kick in,
then manually remove the remaining repositories from the special `@hashed/` folder.
## Troubleshooting
The Rake task might not be able to complete the migration to hashed storage.
Checks on the instance will continue to report that there is legacy data:
```plaintext
* Found 1 projects using Legacy Storage
- janedoe/testproject (id: 1234)
```
If you have a subscription, [raise a ticket with GitLab support](https://support.gitlab.com)
as most of the fixes are relatively high risk, involving running code on the Rails console.
### Read-only projects
If you have set projects as read-only, they might fail to migrate.
1. [Start a Rails console](../operations/rails_console.md#starting-a-rails-console-session).
1. Check if the project is read only:
```ruby
project = Project.find_by_full_path('janedoe/testproject')
project.repository_read_only
```
1. If it returns `true` (not `nil` or `false`), set it writable:
```ruby
project.update!(repository_read_only: false)
```
1. [Re-run the migration Rake task](#migrate-to-hashed-storage).
1. Set the project read-only again:
```ruby
project.update!(repository_read_only: true)
```
### Projects pending deletion
Check the project details in the Admin Area. If deleting the project failed,
it shows as `Marked For Deletion At ..`, `Scheduled Deletion At ..`, and
`pending removal`, but the dates are not recent.
Delete the project using the Rails console:
1. [Start a Rails console](../operations/rails_console.md#starting-a-rails-console-session).
1. With the following code, select the project to be deleted and the account to action it:
```ruby
project = Project.find_by_full_path('janedoe/testproject')
user = User.find_by_username('admin_handle')
puts "\nproject selected for deletion is:\nID: #{project.id}\nPATH: #{project.full_path}\nNAME: #{project.name}\n\n"
```
- Replace `janedoe/testproject` with your project path from the Rake task output or from the Admin Area.
- Replace `admin_handle` with the handle of an instance administrator or with `root`.
- Verify the output before proceeding. **There are no other checks performed**.
1. [Destroy the project](../../user/project/working_with_projects.md#delete-a-project-using-console) **immediately**:
```ruby
Projects::DestroyService.new(project, user).execute
```
If destroying the project generates a stack trace relating to encryption or the error `OpenSSL::Cipher::CipherError`:
1. [Verify your GitLab secrets](check.md#verify-database-values-can-be-decrypted-using-the-current-secrets).
1. If the affected projects have secrets that cannot be decrypted, you must remove those specific secrets.
[Our documentation for dealing with lost secrets](../../administration/backup_restore/backup_gitlab.md#when-the-secrets-file-is-lost)
is for loss of all secrets, but it's possible for specific projects to be affected. For example,
to [reset specific runner registration tokens](../../administration/backup_restore/backup_gitlab.md#reset-runner-registration-tokens)
for a specific project ID:
```sql
UPDATE projects SET runners_token = null, runners_token_encrypted = null where id = 1234;
```
### `Repository cannot be moved from` errors in Sidekiq log
Projects might fail to migrate with errors in the Sidekiq log:
```shell
# grep 'Repository cannot be moved' /var/log/gitlab/sidekiq/current
{"severity":"ERROR","time":"2021-02-28T02:29:02.021Z","message":"Repository cannot be moved from 'janedoe/testproject' to '@hashed<value>' (PROJECT_ID=1234)"}
```
This might be caused by [a bug](https://gitlab.com/gitlab-org/gitlab/-/issues/259605) in the original code for hashed storage migration.
[There is a workaround for projects still affected by this issue](https://gitlab.com/-/snippets/2039252).
<!-- This redirect file can be deleted after <2024-01-11>. -->
<!-- Redirects that point to other docs in the same project expire in three months. -->
<!-- Redirects that point to docs in a different project or site (link is not relative and starts with `https:`) expire in one year. -->
<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/redirects.html -->

View File

@@ -60,6 +60,7 @@ In the following table, you can see:
| [Coverage-guided fuzz testing](../../user/application_security/coverage_fuzzing/index.md) | GitLab 16.0 and later |
| [Password complexity requirements](../../administration/settings/sign_up_restrictions.md#password-complexity-requirements) | GitLab 16.0 and later |
| [Group wikis](../../user/project/wiki/group.md) | GitLab 16.5 and later |
| [Issue analytics](../../user/group/issues_analytics/index.md) | GitLab 16.5 and later |
### Enable registration features

View File

@@ -29464,6 +29464,7 @@ Name of the feature that the callout is for.
| <a id="usercalloutfeaturenameenumuser_reached_limit_free_plan_alert"></a>`USER_REACHED_LIMIT_FREE_PLAN_ALERT` | Callout feature name for user_reached_limit_free_plan_alert. |
| <a id="usercalloutfeaturenameenumverification_reminder"></a>`VERIFICATION_REMINDER` | Callout feature name for verification_reminder. |
| <a id="usercalloutfeaturenameenumvsd_feedback_banner"></a>`VSD_FEEDBACK_BANNER` | Callout feature name for vsd_feedback_banner. |
| <a id="usercalloutfeaturenameenumvulnerability_report_grouping"></a>`VULNERABILITY_REPORT_GROUPING` | Callout feature name for vulnerability_report_grouping. |
| <a id="usercalloutfeaturenameenumweb_ide_alert_dismissed"></a>`WEB_IDE_ALERT_DISMISSED` | Callout feature name for web_ide_alert_dismissed. |
| <a id="usercalloutfeaturenameenumweb_ide_ci_environments_guidance"></a>`WEB_IDE_CI_ENVIRONMENTS_GUIDANCE` | Callout feature name for web_ide_ci_environments_guidance. |

View File

@@ -437,7 +437,7 @@ listed in the descriptions of the relevant settings.
| `help_text` **(PREMIUM ALL)** | string | no | Deprecated: Use `description` parameter in the [Appearance API](../api/appearance.md). Custom text in sign-in page. |
| `hide_third_party_offers` | boolean | no | Do not display offers from third parties in GitLab. |
| `home_page_url` | string | no | Redirect to this URL when not logged in. |
| `housekeeping_bitmaps_enabled` | boolean | no | Deprecated. Git pack file bitmap creation is always enabled and cannot be changed via API and UI. Always returns `true`. |
| `housekeeping_bitmaps_enabled` | boolean | no | Deprecated. Git packfile bitmap creation is always enabled and cannot be changed via API and UI. Always returns `true`. |
| `housekeeping_enabled` | boolean | no | Enable or disable Git housekeeping. Requires additional fields to be set. For more information, see [Housekeeping fields](#housekeeping-fields). |
| `housekeeping_full_repack_period` | integer | no | Deprecated. Number of Git pushes after which an incremental `git repack` is run. Use `housekeeping_optimize_repository_period` instead. For more information, see [Housekeeping fields](#housekeeping-fields). |
| `housekeeping_gc_period` | integer | no | Deprecated. Number of Git pushes after which `git gc` is run. Use `housekeeping_optimize_repository_period` instead. For more information, see [Housekeeping fields](#housekeeping-fields). |

View File

@@ -2139,7 +2139,7 @@ Example response:
## Create a personal access token with limited scopes for the currently authenticated user **(FREE SELF)**
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131923) in GitLab 16.5 with a flag named `user_pat_rest_api`.
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/131923) in GitLab 16.5.
Use this API to create a new personal access token for the currently authenticated user.
For security purposes, the scopes are limited to only `k8s_proxy` and by default the token will expire by

View File

@@ -281,7 +281,7 @@ about contexts now.
We can assume we run the experiment in one or a few places, but
track events potentially in many places. The tracking call remains the same, with
the arguments you would usually use when
[tracking events using snowplow](../internal_analytics/snowplow/index.md). The easiest example
tracking events using Snowplow. The easiest example
of tracking an event in Ruby would be:
```ruby

View File

@@ -318,11 +318,11 @@ export default {
<template>
<div>
<gl-form-group :label-for="fields.name.id" :label="__('Name')">
<gl-form-input v-bind="fields.name" size="lg" />
<gl-form-input v-bind="fields.name" width="lg" />
</gl-form-group>
<gl-form-group :label-for="fields.email.id" :label="__('Email')">
<gl-form-input v-bind="fields.email" type="email" size="lg" />
<gl-form-input v-bind="fields.email" type="email" width="lg" />
</gl-form-group>
<gl-button type="submit" category="primary" variant="confirm">{{ __('Update') }}</gl-button>

View File

@@ -162,9 +162,8 @@ The following integration guides are internal. Some integrations require access
## Analytics Instrumentation guides
- [Analytics Instrumentation guide](https://about.gitlab.com/handbook/product/analytics-instrumentation-guide/)
- [Service Ping guide](internal_analytics/service_ping/index.md)
- [Snowplow guide](internal_analytics/snowplow/index.md)
- [Internal Events guide](internal_analytics/internal_event_instrumentation/quick_start.md)
## Experiment guide

View File

@@ -337,8 +337,8 @@ Implemented using Redis methods [PFADD](https://redis.io/commands/pfadd/) and [P
- `controller_actions`: the controller actions to track.
- `name`: the event name.
- `action`: required if destination is `:snowplow`. Action name for the triggered event. See [event schema](../snowplow/index.md#event-schema) for more details.
- `label`: required if destination is `:snowplow`. Label for the triggered event. See [event schema](../snowplow/index.md#event-schema) for more details.
- `action`: required if destination is `:snowplow`. Action name for the triggered event.
- `label`: required if destination is `:snowplow`. Label for the triggered event.
- `conditions`: optional custom conditions. Uses the same format as Rails callbacks.
- `destinations`: optional list of destinations. Currently supports `:redis_hll` and `:snowplow`. Default: `:redis_hll`.
- `&block`: optional block that computes and returns the `custom_id` that we want to track. This overrides the `visitor_id`.
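The `:redis_hll` destination above counts distinct visitors with Redis HyperLogLogs (`PFADD`/`PFCOUNT`), which approximate cardinality in constant memory. A toy exact stand-in using a `Set` (purely illustrative of the dedup-then-count semantics; names are invented, and this is not how Redis HLL works internally):

```ruby
require 'set'

# PFADD ~ Set#add, PFCOUNT ~ Set#size. A real HLL trades exactness for
# constant memory; a Set is exact but grows with cardinality.
events = Hash.new { |h, k| h[k] = Set.new }

def track_unique(events, event_name, visitor_id)
  events[event_name].add(visitor_id)
end

track_unique(events, 'example_event', 'visitor-1')
track_unique(events, 'example_event', 'visitor-1') # duplicate: not recounted
track_unique(events, 'example_event', 'visitor-2')

events['example_event'].size # => 2
```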

View File

@@ -499,7 +499,6 @@ Service Ping reporting process state is monitored with [internal SiSense dashboa
## Related topics
- [Product Intelligence Guide](https://about.gitlab.com/handbook/product/product-intelligence-guide/)
- [Snowplow Guide](../snowplow/index.md)
- [Product Intelligence Direction](https://about.gitlab.com/direction/analytics/product-intelligence/)
- [Data Analysis Process](https://about.gitlab.com/handbook/business-technology/data-team/#data-analysis-process/)
- [Data for Product Managers](https://about.gitlab.com/handbook/business-technology/data-team/programs/data-for-product-managers/)

View File

@@ -1,91 +0,0 @@
---
stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Event dictionary guide
NOTE:
The event dictionary is a work in progress, and this process is subject to change.
This guide describes the event dictionary and how it's implemented.
## Event definition and validation
This process is meant to document all Snowplow events and ensure consistency. Every Snowplow event needs to have such a definition. Event definitions must comply with the [JSON Schema](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/events/schema.json).
All event definitions are stored in the following directories:
- [`config/events`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/config/events)
- [`ee/config/events`](https://gitlab.com/gitlab-org/gitlab/-/tree/master/ee/config/events)
Each event is defined in a separate YAML file consisting of the following fields:
| Field | Required | Additional information |
|------------------------|----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `description` | yes | A description of the event. |
| `category` | yes | The event category (see [Event schema](index.md#event-schema)). |
| `action` | yes | The event action (see [Event schema](index.md#event-schema)). |
| `label_description` | no | A description of the event label (see [Event schema](index.md#event-schema)). |
| `property_description` | no | A description of the event property (see [Event schema](index.md#event-schema)). |
| `value_description` | no | A description of the event value (see [Event schema](index.md#event-schema)). |
| `extra_properties` | no | The type and description of each extra property sent with the event. |
| `identifiers` | no | A list of identifiers sent with the event. Can be set to one or more of `project`, `user`, or `namespace`. |
| `iglu_schema_url` | no | The URL to the custom schema sent with the event, for example, `iglu:com.gitlab/gitlab_experiment/jsonschema/1-0-0`. |
| `product_section` | yes | The [section](https://gitlab.com/gitlab-com/www-gitlab-com/-/blob/master/data/sections.yml). |
| `product_stage` | no | The [stage](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/stages.yml) for the event. |
| `product_group` | yes | The [group](https://gitlab.com/gitlab-com/www-gitlab-com/blob/master/data/stages.yml) that owns the event. |
| `milestone` | no | The milestone when the event is introduced. |
| `introduced_by_url` | no | The URL to the merge request that introduced the event. |
| `distributions` | yes | The [distributions](https://about.gitlab.com/handbook/marketing/brand-and-product-marketing/product-and-solution-marketing/tiers/#definitions) where the tracked feature is available. Can be set to one or more of `ce` or `ee`. |
| `tiers` | yes | The [tiers](https://about.gitlab.com/handbook/marketing/brand-and-product-marketing/product-and-solution-marketing/tiers/) where the tracked feature is available. Can be set to one or more of `free`, `premium`, or `ultimate`. |
### Example event definition
The linked [`epics_promote.yml`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/config/events/epics_promote.yml)
file is an example event definition:
```yaml
description: Issue promoted to epic
category: epics
action: promote
property_description: The string "issue_id"
value_description: ID of the issue
extra_properties:
weight:
type: integer
description: Weight of the issue
identifiers:
- project
- user
- namespace
product_section: dev
product_stage: plan
product_group: group::product planning
milestone: "11.10"
introduced_by_url: https://gitlab.com/gitlab-org/gitlab/-/merge_requests/10537
distributions:
- ee
tiers:
- premium
- ultimate
```
## Create a new event definition
Use the dedicated [event definition generator](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/generators/gitlab/snowplow_event_definition_generator.rb)
to create new event definitions.
The `category` and `action` of each event are included in the filename to standardize file naming.
The generator takes three options:
- `--ee`: Indicates if the event is for EE.
- `--category=CATEGORY`: Indicates the `category` of the event.
- `--action=ACTION`: Indicates the `action` of the event.
```shell
bundle exec rails generate gitlab:snowplow_event_definition --category Groups::EmailCampaignsController --action click
      create  config/events/groups__email_campaigns_controller_click.yml
```

View File

@ -1,523 +0,0 @@
---
stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Implement Snowplow tracking
This page describes how to:
- Implement Snowplow frontend and backend tracking
- Test Snowplow events
## Event definitions
Every Snowplow event, regardless of frontend or backend, requires a corresponding event definition. These definitions document the event and its properties to make it easier to maintain and analyze.
These definitions can be browsed in the [event dictionary](https://metrics.gitlab.com/snowplow/). The [event dictionary guide](event_dictionary_guide.md) provides instructions for setting up an event definition.
## Snowplow JavaScript frontend tracking
GitLab provides a `Tracking` interface that wraps the [Snowplow JavaScript tracker](https://docs.snowplow.io/docs/collecting-data/collecting-from-own-applications/javascript-trackers/)
to track custom events.
For the recommended frontend tracking implementation, see [Usage recommendations](#usage-recommendations).
Structured events and page views include the [`gitlab_standard`](schemas.md#gitlab_standard)
context, which uses the `window.gl.snowplowStandardContext` object and includes
[default data](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/views/layouts/_snowplow.html.haml)
as a base:
| Property | Example |
| -------- | ------- |
| `context_generated_at` | `"2022-01-01T01:00:00.000Z"` |
| `environment` | `"production"` |
| `extra` | `{}` |
| `namespace_id` | `123` |
| `plan` | `"gold"` |
| `project_id` | `456` |
| `source` | `"gitlab-rails"` |
| `user_id` | `789`* |
| `is_gitlab_team_member` | `true`|
_\* Undergoes a pseudonymization process at the collector level._
These properties [are overridden](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/tracking/get_standard_context.js)
with frontend-specific values, like `source` (`gitlab-javascript`), `google_analytics_id`
and the custom `extra` object. You can modify this object for any subsequent
structured event that fires, although this is not recommended.
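The merge order described above can be sketched with a plain object spread. This is a hedged illustration with a hypothetical `buildStandardContext` helper, not the actual logic in `get_standard_context.js`:

```javascript
// Hypothetical sketch: frontend-specific values and the custom `extra`
// object override the server-rendered base context.
function buildStandardContext(base = {}, overrides = {}, extra = {}) {
  return {
    ...base,
    ...overrides,
    source: 'gitlab-javascript', // the frontend reports its own source
    extra, // custom key-value pairs for this event
  };
}

const base = { environment: 'production', source: 'gitlab-rails', extra: {} };
const context = buildStandardContext(
  base,
  { google_analytics_id: 'GA1.2.3' },
  { variant: 'a' },
);
```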
Tracking implementations must have an `action` and a `category`. You can provide additional
properties from the [event schema](index.md#event-schema), in
addition to an `extra` object that accepts key-value pairs.
| Property | Type | Default value | Description |
|:-----------|:-------|:---------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `category` | string | `document.body.dataset.page` | Page or subsection of a page in which events are captured. |
| `action` | string | `'generic'` | Action the user is taking. Clicks must be `click` and activations must be `activate`. For example, focusing a form field is `activate_form_input`, and clicking a button is `click_button`. |
| `data` | object | `{}` | Additional data such as `label`, `property`, `value` as described in [Event schema](index.md#event-schema), `context` for custom contexts, and `extra` (key-value pairs object). |
### Usage recommendations
- Use [data attributes](#implement-data-attribute-tracking) on HTML elements that emit `click`, `show.bs.dropdown`, or `hide.bs.dropdown` events.
- Use the [Vue mixin](#implement-vue-component-tracking) for tracking custom events, or if the supported events for data attributes are not propagating. For example, clickable components that don't emit `click`.
- Use the [tracking class](#implement-raw-javascript-tracking) when tracking in vanilla JavaScript files.
### Implement data attribute tracking
To implement tracking for HAML or Vue templates, add a [`data-track` attribute](#data-track-attributes) to the element.
The following example shows `data-track-*` attributes assigned to a button:
```haml
%button.btn{ data: { track_action: "click_button", track_label: "template_preview", track_property: "my-template" } }
```
```html
<button class="btn"
data-track-action="click_button"
data-track-label="template_preview"
data-track-property="my-template"
data-track-extra='{ "template_variant": "primary" }'
/>
```
#### `data-track` attributes
| Attribute | Required | Description |
|:----------------------|:---------|:------------|
| `data-track-action` | true | Action the user is taking. Clicks must be prepended with `click` and activations must be prepended with `activate`. For example, focusing a form field is `activate_form_input` and clicking a button is `click_button`. Replaces `data-track-event`, which was [deprecated](https://gitlab.com/gitlab-org/gitlab/-/issues/290962) in GitLab 13.11. |
| `data-track-label` | false | The specific element or object to act on. This can be: the label of the element, for example, a tab labeled 'Create from template' for `create_from_template`; a unique identifier if no text is available, for example, `groups_dropdown_close` for closing the Groups dropdown list; or the name or title attribute of a record being created. |
| `data-track-property` | false | Any additional property of the element, or object being acted on. |
| `data-track-value` | false | Describes a numeric value (decimal) directly related to the event. This could be the value of an input. For example, `10` when clicking `internal` visibility. If omitted, this is the element's `value` property or `undefined`. For checkboxes, the default value is the element's checked attribute or `0` when unchecked. The value is parsed as numeric before sending the event. |
| `data-track-extra` | false | A key-value pair object passed as a valid JSON string. This attribute is added to the `extra` property in our [`gitlab_standard`](schemas.md#gitlab_standard) schema. |
| `data-track-context` | false | To append a custom context object, passed as a valid JSON string. |
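The value coercion described for `data-track-value` can be sketched as follows. `resolveTrackValue` is a hypothetical helper written for illustration, not GitLab's actual implementation:

```javascript
// Sketch of the documented fallback chain for `data-track-value`.
function resolveTrackValue(attrValue, element = {}) {
  // An explicit attribute wins and is parsed as numeric before sending.
  if (attrValue !== undefined && attrValue !== null && attrValue !== '') {
    return Number(attrValue);
  }
  // Checkboxes fall back to their checked state: 1 when checked, 0 otherwise.
  if (element.type === 'checkbox') {
    return element.checked ? 1 : 0;
  }
  // Otherwise use the element's own value property, which may be undefined.
  return element.value;
}
```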
#### Event listeners
Event listeners bind at the document level to handle click events in elements with data attributes.
This allows them to be handled when the DOM re-renders or changes. Document-level binding reduces
the likelihood that click events stop propagating up the DOM tree.
If click events stop propagating, you must implement listeners and [Vue component tracking](#implement-vue-component-tracking) or [raw JavaScript tracking](#implement-raw-javascript-tracking).
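The delegation described above amounts to walking up from the event target to the nearest ancestor carrying a `data-track-action` attribute. The following is a simplified sketch with plain objects standing in for DOM nodes, not the actual tracking code:

```javascript
// Walk up from the event target until a node with `data-track-action`
// (exposed via `dataset.trackAction`) is found, or the root is reached.
function findTrackedElement(node) {
  let current = node;
  while (current) {
    if (current.dataset && current.dataset.trackAction) {
      return current;
    }
    current = current.parentNode;
  }
  return null;
}

// An icon inside a tracked button: the click target has no data attributes,
// but its parent does, so the parent is the element that gets tracked.
const button = { dataset: { trackAction: 'click_button' }, parentNode: null };
const icon = { dataset: {}, parentNode: button };
```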
#### Helper methods
You can use the following Ruby helpers:
```ruby
tracking_attrs(label, action, property) # { data: { track_label... } }
tracking_attrs_data(label, action, property) # { track_label... }
```
You can also use it on HAML templates:
```haml
%button{ **tracking_attrs('main_navigation', 'click_button', 'navigation') }
// When merging with additional data
// %button{ data: { platform: "...", **tracking_attrs_data('main_navigation', 'click_button', 'navigation') } }
```
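A minimal sketch of what these helpers return, assuming the attribute names shown above (the real helpers live in GitLab's view helper modules):

```ruby
# Hypothetical sketch of the helper return values; not the real implementation.
def tracking_attrs_data(label, action, property)
  { track_label: label, track_action: action, track_property: property }
end

# Wraps the same attributes under a `data` key for direct use in HAML tags.
def tracking_attrs(label, action, property)
  { data: tracking_attrs_data(label, action, property) }
end
```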
If you use the GitLab helper method [`nav_link`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/helpers/tab_helper.rb#L76), you must wrap `html_options` under the `html_options` keyword argument. If you
use the `ActionView` helper method [`link_to`](https://api.rubyonrails.org/classes/ActionView/Helpers/UrlHelper.html#method-i-link_to), you don't need to wrap `html_options`.
```ruby
# Bad
= nav_link(controller: ['dashboard/groups', 'explore/groups'], data: { track_label: "explore_groups",
track_action: "click_button" })
# Good
= nav_link(controller: ['dashboard/groups', 'explore/groups'], html_options: { data: { track_label:
"explore_groups", track_action: "click_button" } })
# Good (other helpers)
= link_to explore_groups_path, title: _("Explore"), data: { track_label: "explore_groups", track_action:
"click_button" }
```
### Implement Vue component tracking
For custom event tracking, use the [Vue mixin](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/tracking/tracking.js#L207). It exposes `Tracking.event` as the `track` method.
You can specify tracking options by creating a `tracking` data object or
computed property, or by passing them as the second parameter: `this.track('click_button', opts)`.
These options override any defaults and let the values be derived dynamically from props or state:
| Property | Type | Default | Example |
| -- | -- | -- | -- |
| `category` | string | `document.body.dataset.page` | `'code_quality_walkthrough'` |
| `label` | string | `''` | `'process_start_button'` |
| `property` | string | `''` | `'asc'` or `'desc'` |
| `value` | integer | `undefined` | `0`, `1`, `500` |
| `extra` | object | `{}` | `{ selectedVariant: this.variant }` |
To implement Vue component tracking:
1. Import the `Tracking` library and call the `mixin` method:
```javascript
import Tracking from '~/tracking';
const trackingMixin = Tracking.mixin();
// Optionally provide default properties
// const trackingMixin = Tracking.mixin({ label: 'right_sidebar' });
```
1. Use the mixin in the component:
```javascript
export default {
mixins: [trackingMixin],
// Or
// mixins: [Tracking.mixin()],
// mixins: [Tracking.mixin({ label: 'right_sidebar' })],
data() {
return {
expanded: false,
};
},
};
```
1. You can specify tracking options by creating a `tracking` data object
or computed property:
```javascript
export default {
name: 'RightSidebar',
mixins: [Tracking.mixin()],
data() {
return {
expanded: false,
variant: '',
tracking: {
label: 'right_sidebar',
// property: '',
// value: '',
// experiment: '',
// extra: {},
},
};
},
// Or
// computed: {
// tracking() {
// return {
// property: this.variant,
// extra: { expanded: this.expanded },
// };
// },
// },
};
```
1. Call the `track` method. Tracking options can be passed as the second parameter:
```javascript
this.track('click_button', {
label: 'right_sidebar',
});
```
Or use the `track` method in the template:
```html
<template>
<div>
<button data-testid="toggle" @click="toggle">Toggle</button>
<div v-if="expanded">
<p>Hello world!</p>
<button @click="track('click_button')">Track another event</button>
</div>
</div>
</template>
```
#### Testing example
```javascript
export default {
name: 'CountDropdown',
mixins: [Tracking.mixin({ label: 'count_dropdown' })],
data() {
return {
variant: 'counter',
count: 0,
};
},
methods: {
handleChange({ target }) {
const { variant } = this;
this.count = Number(target.value);
this.track('change_value', {
value: this.count,
extra: { variant }
});
},
},
};
```
```javascript
import { mockTracking } from 'helpers/tracking_helper';
// mockTracking(category, documentOverride, spyMethod)
describe('CountDropdown.vue', () => {
let trackingSpy;
let wrapper;
...
beforeEach(() => {
trackingSpy = mockTracking(undefined, wrapper.element, jest.spyOn);
});
const findDropdown = () => wrapper.find('[data-testid="dropdown"]');
it('tracks change event', () => {
const dropdown = findDropdown();
dropdown.element.value = 30;
dropdown.trigger('change');
expect(trackingSpy).toHaveBeenCalledWith(undefined, 'change_value', {
value: 30,
label: 'count_dropdown',
extra: { variant: 'counter' },
});
});
});
```
### Implement raw JavaScript tracking
To track from a vanilla JavaScript file, use the `Tracking.event` static function
(calls [`dispatchSnowplowEvent`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/app/assets/javascripts/tracking/dispatch_snowplow_event.js)).
The following example demonstrates tracking a click on a button by manually calling `Tracking.event`.
```javascript
import Tracking from '~/tracking';
const button = document.getElementById('create_from_template_button');
button.addEventListener('click', () => {
Tracking.event(undefined, 'click_button', {
label: 'create_from_template',
property: 'template_preview',
extra: {
templateVariant: 'primary',
valid: 1,
},
});
});
```
#### Testing example
```javascript
import Tracking from '~/tracking';
describe('MyTracking', () => {
let wrapper;
beforeEach(() => {
jest.spyOn(Tracking, 'event');
});
const findButton = () => wrapper.find('[data-testid="create_from_template"]');
it('tracks event', () => {
findButton().trigger('click');
expect(Tracking.event).toHaveBeenCalledWith(undefined, 'click_button', {
label: 'create_from_template',
property: 'template_preview',
extra: {
templateVariant: 'primary',
          valid: 1,
},
});
});
});
```
### Form tracking
To enable Snowplow automatic [form tracking](https://docs.snowplow.io/docs/collecting-data/collecting-from-own-applications/javascript-trackers/javascript-tracker/javascript-tracker-v2/tracking-specific-events/#form-tracking):
1. Call `Tracking.enableFormTracking` when the DOM is ready.
1. Provide a `config` object that includes at least one of the following elements:
- `forms` determines the forms to track. Identified by the CSS class name.
- `fields` determines the fields inside the tracked forms to track. Identified by the field `name`.
1. Optional. Provide a list of contexts as the second argument. The [`gitlab_standard`](schemas.md#gitlab_standard) schema is excluded from these events.
```javascript
Tracking.enableFormTracking({
forms: { allow: ['sign-in-form', 'password-recovery-form'] },
fields: { allow: ['terms_and_conditions', 'newsletter_agreement'] },
});
```
#### Testing example
```javascript
import Tracking from '~/tracking';
describe('MyFormTracking', () => {
let formTrackingSpy;
beforeEach(() => {
formTrackingSpy = jest
.spyOn(Tracking, 'enableFormTracking')
.mockImplementation(() => null);
});
it('initialized with the correct configuration', () => {
expect(formTrackingSpy).toHaveBeenCalledWith({
forms: { allow: ['sign-in-form', 'password-recovery-form'] },
fields: { allow: ['terms_and_conditions', 'newsletter_agreement'] },
});
});
});
```
## Implement Ruby backend tracking
`Gitlab::Tracking` is an interface that wraps the [Snowplow Ruby Tracker](https://docs.snowplow.io/docs/collecting-data/collecting-from-own-applications/ruby-tracker/) for tracking custom events.
Backend tracking provides:
- User behavior tracking
- Instrumentation to monitor and visualize performance over time in a section or aspect of code.
To add custom event tracking and instrumentation, call the `GitLab::Tracking.event` class method.
For example:
```ruby
class Projects::CreateService < BaseService
def execute
project = Project.create(params)
Gitlab::Tracking.event('Projects::CreateService', 'create_project', label: project.errors.full_messages.to_sentence,
property: project.valid?.to_s, project: project, user: current_user, namespace: namespace)
end
end
```
Use the following arguments:
| Argument | Type | Default value | Description |
|------------|---------------------------|---------------|-----------------------------------------------------------------------------------------------------------------------------------|
| `category` | String | | Area or aspect of the application. For example, `HealthCheckController` or `Lfs::FileTransformer`. |
| `action` | String | | The action being taken. For example, a controller action such as `create`, or an Active Record callback. |
| `label` | String | `nil` | The specific element or object to act on. This can be one of the following: the label of the element, for example, a tab labeled 'Create from template' for `create_from_template`; a unique identifier if no text is available, for example, `groups_dropdown_close` for closing the Groups dropdown list; or the name or title attribute of a record being created. |
| `property` | String | `nil` | Any additional property of the element, or object being acted on. |
| `value` | Numeric | `nil` | Describes a numeric value (decimal) directly related to the event. This could be the value of an input. For example, `10` when clicking `internal` visibility. |
| `context` | Array\[SelfDescribingJSON\] | `nil` | An array of custom contexts to send with this event. Most events should not have any custom contexts. |
| `project` | Project | `nil` | The project associated with the event. |
| `user` | User | `nil` | The user associated with the event. This value undergoes a pseudonymization process at the collector level. |
| `namespace` | Namespace | `nil` | The namespace associated with the event. |
| `extra` | Hash | `{}` | Additional keyword arguments are collected into a hash and sent with the event. |
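The argument handling in the table can be sketched with a plain Ruby method: the named keywords are extracted, and any remaining keyword arguments are collected into the `extra` hash. This is a hedged illustration, not the real `Gitlab::Tracking.event`:

```ruby
# Illustrative only: shows how leftover keyword arguments collapse into `extra`
# via the double-splat, mirroring the argument table above.
module TrackingSketch
  def self.event(category, action, label: nil, property: nil, value: nil,
                 context: nil, project: nil, user: nil, namespace: nil, **extra)
    {
      category: category, action: action, label: label, property: property,
      value: value, context: context, project: project, user: user,
      namespace: namespace, extra: extra
    }
  end
end

payload = TrackingSketch.event('Projects::CreateService', 'create_project',
                               label: 'ok', merge_request_id: 42)
```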
### Unit testing
To test backend Snowplow events, use the `expect_snowplow_event` helper. For more information, see
[testing best practices](../../testing_guide/best_practices.md#test-snowplow-events).
### Performance
We use the [AsyncEmitter](https://snowplow.github.io/snowplow-ruby-tracker/SnowplowTracker/AsyncEmitter.html) when tracking events, which allows for instrumentation calls to be run in a background thread. This is still an active area of development.
## Develop and test Snowplow
To develop and test a Snowplow event, there are several tools to test frontend and backend events:
| Testing Tool | Frontend Tracking | Backend Tracking | Local Development Environment | Production Environment |
|----------------------------------------------|--------------------|---------------------|-------------------------------|------------------------|
| Snowplow Analytics Debugger Chrome Extension | Yes | No | Yes | Yes |
| Snowplow Inspector Chrome Extension | Yes | No | Yes | Yes |
| Snowplow Micro | Yes | Yes | Yes | No |
### Test frontend events
Before you test frontend events in development, you must:
1. [Set up Snowplow locally](../internal_event_instrumentation/local_setup_and_debugging.md).
1. Turn off ad blockers that could prevent Snowplow JavaScript from loading in your environment.
1. Turn off "Do Not Track" (DNT) in your browser.
All URLs are pseudonymized. The entity identifier [replaces](https://docs.snowplow.io/docs/collecting-data/collecting-from-own-applications/javascript-trackers/javascript-tracker/javascript-tracker-v2/tracker-setup/other-parameters-2/#setting-a-custom-page-url-and-referrer-url) personally identifiable
information (PII). PII includes usernames, group, and project names.
Page titles are hardcoded as `GitLab` for the same reason.
#### Snowplow Analytics Debugger Chrome Extension
[Snowplow Analytics Debugger](https://www.iglooanalytics.com/blog/snowplow-analytics-debugger-chrome-extension.html) is a browser extension for testing frontend events. It works in production, staging, and local development environments.
1. Install the [Snowplow Analytics Debugger](https://chrome.google.com/webstore/detail/snowplow-analytics-debugg/jbnlcgeengmijcghameodeaenefieedm) Chrome browser extension.
1. Open Chrome DevTools to the Snowplow Analytics Debugger tab.
#### Snowplow Inspector Chrome Extension
The Snowplow Inspector Chrome Extension is a browser extension for testing frontend events. It works in production, staging, and local development environments.
<i class="fa fa-youtube-play youtube" aria-hidden="true"></i>
For a video tutorial, see the [Snowplow plugin walk through](https://www.youtube.com/watch?v=g4rqnIZ1Mb4).
1. Install [Snowplow Inspector](https://chrome.google.com/webstore/detail/snowplow-inspector/maplkdomeamdlngconidoefjpogkmljm?hl=en).
1. To open the extension, select the Snowplow Inspector icon beside the address bar.
1. Click around on a webpage with Snowplow to see JavaScript events firing in the inspector window.
### Test backend events with Snowplow Micro
[Snowplow Micro](https://snowplow.io/blog/introducing-snowplow-micro/) is a
Docker-based solution for testing backend and frontend in a local development environment. Snowplow Micro
records the same events as the full Snowplow pipeline. To query events, use the Snowplow Micro API.
It can be set up automatically using [GitLab Development Kit (GDK)](https://gitlab.com/gitlab-org/gitlab-development-kit).
See the [how-to docs](https://gitlab.com/gitlab-org/gitlab-development-kit/-/blob/main/doc/howto/snowplow_micro.md) for more details.
1. Set the environment variable to tell the GDK to use Snowplow Micro in development. This overrides two `application_settings` options:
   - The `snowplow_enabled` setting instead returns `true` from `Gitlab::Tracking.enabled?`.
   - The `snowplow_collector_hostname` setting instead always returns `localhost:9090` (or the port set in the `snowplow_micro.port` GDK setting) from `Gitlab::Tracking.collector_hostname`.
With Snowplow Micro set up, you can manually test backend Snowplow events:
1. Send a test Snowplow event from the Rails console:
```ruby
Gitlab::Tracking.event('category', 'action')
```
1. Navigate to `localhost:9090/micro/good` to see the event.
#### Useful links
- [Snowplow Micro repository](https://github.com/snowplow-incubator/snowplow-micro)
- [Installation guide recording](https://www.youtube.com/watch?v=OX46fo_A0Ag)
### Troubleshoot
To control content security policy warnings when using an external host, modify `config/gitlab.yml`
to allow or prevent them. To allow them, add the relevant host for `connect_src`. For example, for
`https://snowplow.trx.gitlab.net`:
```yaml
development:
<<: *base
gitlab:
content_security_policy:
enabled: true
directives:
connect_src: "'self' http://localhost:* http://127.0.0.1:* ws://localhost:* wss://localhost:* ws://127.0.0.1:* https://snowplow.trx.gitlab.net/"
```

View File

@ -1,174 +0,0 @@
---
stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Snowplow development guidelines
Snowplow is an enterprise-grade marketing and product analytics platform that tracks how users engage with our website and application.
[Snowplow](https://snowplow.io/) consists of several loosely-coupled sub-systems:
- **Trackers** fire Snowplow events. Snowplow has twelve trackers that cover web, mobile, desktop, server, and IoT.
- **Collectors** receive Snowplow events from trackers. We use different event collectors that synchronize events to Amazon S3, Apache Kafka, or Amazon Kinesis.
- **Enrich** cleans raw Snowplow events, enriches them, and puts them into storage. There is a Hadoop-based enrichment process, and a Kinesis-based or Kafka-based process.
- **Storage** stores Snowplow events. We store the Snowplow events in a flat file structure on S3, and in the Redshift and PostgreSQL databases.
- **Data modeling** joins event-level data with other data sets, aggregates them into smaller data sets, and applies business logic. This produces a clean set of tables for data analysis. We use data models for Redshift and Looker.
- **Analytics** are performed on Snowplow events or on aggregate tables.
![Snowplow flow](../../img/snowplow_flow.png)
## Snowplow request flow
The following example shows a basic request/response flow between the following components:
- Snowplow JS / Ruby Trackers on GitLab.com
- [GitLab.com Snowplow Collector](https://gitlab.com/gitlab-com/gl-infra/readiness/-/blob/master/library/snowplow/index.md)
- The GitLab S3 Bucket
- The GitLab Snowflake Data Warehouse
- Sisense:
```mermaid
sequenceDiagram
participant Snowplow JS (Frontend)
participant Snowplow Ruby (Backend)
participant GitLab.com Snowplow Collector
participant S3 Bucket
participant Snowflake DW
participant Sisense Dashboards
Snowplow JS (Frontend) ->> GitLab.com Snowplow Collector: FE Tracking event
Snowplow Ruby (Backend) ->> GitLab.com Snowplow Collector: BE Tracking event
loop Process using Kinesis Stream
GitLab.com Snowplow Collector ->> GitLab.com Snowplow Collector: Log raw events
GitLab.com Snowplow Collector ->> GitLab.com Snowplow Collector: Enrich events
GitLab.com Snowplow Collector ->> GitLab.com Snowplow Collector: Write to disk
end
GitLab.com Snowplow Collector ->> S3 Bucket: Kinesis Firehose
Note over GitLab.com Snowplow Collector, S3 Bucket: Pseudonymization
S3 Bucket->>Snowflake DW: Import data
Snowflake DW->>Snowflake DW: Transform data using dbt
Snowflake DW->>Sisense Dashboards: Data available for querying
```
For more details about the architecture, see [Snowplow infrastructure](infrastructure.md).
## Event schema
All the events must be consistent. If each feature captures events differently, it can be difficult
to perform analysis.
Each event provides attributes that describe the event.
| Attribute | Type | Required | Description |
| --------- | ------- | -------- | ----------- |
| category | text | true | The page or backend section of the application. Unless infeasible, use the Rails page attribute by default in the frontend, and namespace + class name on the backend, for example, `Notes::CreateService`. |
| action | text | true | The action the user takes, or aspect that's being instrumented. The first word must describe the action or aspect. For example, clicks must be `click`, activations must be `activate`, creations must be `create`. Use underscores to describe what was acted on. For example, activating a form field is `activate_form_input`, an interface action like clicking on a dropdown list is `click_dropdown`, a behavior like creating a project record from the backend is `create_project`. |
| label | text | false | The specific element or object to act on. This can be one of the following: the label of the element, for example, a tab labeled 'Create from template' for `create_from_template`; a unique identifier if no text is available, for example, `groups_dropdown_close` for closing the Groups dropdown list; or the name or title attribute of a record being created. For Service Ping metrics adapted to Snowplow events, this should be the full metric [key path](../metrics/metrics_dictionary.md#metric-key_path) taken from its definition file. |
| property | text | false | Any additional property of the element, or object being acted on. For Service Ping metrics adapted to Snowplow events, this should be additional information or context that can help analyze the event. For example, in the case of `usage_activity_by_stage_monthly.create.merge_requests_users`, there are four different possible merge request actions: "create", "merge", "comment", and "close". Each of these would be a possible property value. |
| value | decimal | false | Describes a numeric value (decimal) directly related to the event. This could be the value of an input. For example, `10` when clicking `internal` visibility. |
| context | vector | false | Additional data in the form of a [self-describing JSON](https://docs.snowplow.io/docs/pipeline-components-and-applications/iglu/common-architecture/self-describing-json-schemas/) to describe the event if the attributes are not sufficient. Each context must have its schema defined to assure data integrity. Refer to the list of GitLab-defined contexts for more details. |
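For illustration, a custom context passed in the `context` attribute takes the self-describing JSON shape below. The schema URI and fields here are hypothetical examples, not a real GitLab-defined context:

```json
{
  "schema": "iglu:com.gitlab/example_context/jsonschema/1-0-0",
  "data": {
    "environment": "production",
    "source": "gitlab-rails"
  }
}
```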
### Examples
| Category* | Label | Action | Property** | Value |
|-------------|------------------|-----------------------|----------|:-----:|
| `[root:index]` | `main_navigation` | `click_navigation_link` | `[link_label]` | - |
| `[groups:boards:show]` | `toggle_swimlanes` | `click_toggle_button` | - | `[is_active]` |
| `[projects:registry:index]` | `registry_delete` | `click_button` | - | - |
| `[projects:registry:index]` | `registry_delete` | `confirm_deletion` | - | - |
| `[projects:blob:show]` | `congratulate_first_pipeline` | `click_button` | `[human_access]` | - |
| `[projects:clusters:new]` | `chart_options` | `generate_link` | `[chart_link]` | - |
| `[projects:clusters:new]` | `chart_options` | `click_add_label_button` | `[label_id]` | - |
| `API::NpmPackages` | `counts.package_events_i_package_push_package_by_deploy_token` | `push_package` | `npm` | - |
_* If you choose to omit the category you can use the default._<br>
_** Use property for variable strings._
### Reference SQL
#### Last 20 `reply_comment_button` events
```sql
SELECT
session_id,
event_id,
event_label,
event_action,
event_property,
event_value,
event_category,
contexts
FROM legacy.snowplow_structured_events_all
WHERE
event_label = 'reply_comment_button'
AND event_action = 'click_button'
-- AND event_category = 'projects:issues:show'
-- AND event_value = 1
ORDER BY collector_tstamp DESC
LIMIT 20
```
#### Last 100 page view events
```sql
SELECT
-- page_url,
-- page_title,
-- referer_url,
-- marketing_medium,
-- marketing_source,
-- marketing_campaign,
-- browser_window_width,
-- device_is_mobile
*
FROM legacy.snowplow_page_views_30
ORDER BY page_view_start DESC
LIMIT 100
```
#### Top 20 users who fired `reply_comment_button` in the last 30 days
```sql
SELECT
count(*) as hits,
se_action,
se_category,
gsc_pseudonymized_user_id
FROM legacy.snowplow_gitlab_events_30
WHERE
se_label = 'reply_comment_button'
AND gsc_pseudonymized_user_id IS NOT NULL
GROUP BY gsc_pseudonymized_user_id, se_category, se_action
ORDER BY count(*) DESC
LIMIT 20
```
#### Query JSON formatted data
```sql
SELECT
derived_tstamp,
contexts:data[0]:data:extra:old_format as CURRENT_FORMAT,
contexts:data[0]:data:extra:value as UPDATED_FORMAT
FROM legacy.snowplow_structured_events_all
WHERE event_action in ('wiki_format_updated')
ORDER BY derived_tstamp DESC
LIMIT 100
```
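Outside of Snowflake, the same `contexts:data[0]:data:extra` path can be walked with any JSON library. A minimal Python sketch, where the sample payload is made up purely to show the shape being navigated:

```python
import json

# Made-up contexts payload shaped like the self-describing JSON
# that the Snowflake query above navigates with its colon-path syntax.
raw_contexts = json.dumps({
    "data": [
        {
            "schema": "iglu:com.gitlab/gitlab_standard/jsonschema/1-0-9",
            "data": {"extra": {"old_format": "markdown", "value": "asciidoc"}},
        }
    ]
})

contexts = json.loads(raw_contexts)
# Equivalent of contexts:data[0]:data:extra in the SQL above.
extra = contexts["data"][0]["data"]["extra"]
print(extra["value"])  # prints "asciidoc"
```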
### Web-specific parameters
Snowplow JavaScript adds [web-specific parameters](https://docs.snowplow.io/docs/collecting-data/collecting-from-own-applications/snowplow-tracker-protocol/#Web-specific_parameters) to all web events by default.
## Related topics
- [Snowplow data structure](https://docs.snowplow.io/docs/understanding-your-pipeline/canonical-event/)
- [Our Iglu schema registry](https://gitlab.com/gitlab-org/iglu)
- [List of events used in our codebase (Event Dictionary)](https://metrics.gitlab.com/snowplow/)
- [Analytics Instrumentation Guide](https://about.gitlab.com/handbook/product/analytics-instrumentation-guide/)
- [Service Ping Guide](../service_ping/index.md)
- [Analytics Instrumentation Direction](https://about.gitlab.com/direction/analytics/analytics-instrumentation/)
- [Data Analysis Process](https://about.gitlab.com/handbook/business-technology/data-team/#data-analysis-process/)
- [Data for Product Managers](https://about.gitlab.com/handbook/business-technology/data-team/programs/data-for-product-managers/)
- [Data Infrastructure](https://about.gitlab.com/handbook/business-technology/data-team/platform/infrastructure/)

View File

@ -1,101 +0,0 @@
---
stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Snowplow infrastructure
Snowplow events on GitLab SaaS fired by a [tracker](implementation.md) go through an AWS pipeline, managed by GitLab.
## Event flow in the AWS pipeline
Every event goes through a collector, enricher, and pseudonymization lambda. The event is then dumped to S3 storage where it can be picked up by the Snowflake data warehouse.
Deploying and managing the infrastructure is automated using Terraform in the current [Terraform repository](https://gitlab.com/gitlab-com/gl-infra/config-mgmt/-/tree/master/environments/aws-snowplow).
```mermaid
graph LR
GL[GitLab.com]-->COL
subgraph aws-cloud[AWS]
COL[Collector]-->|snowplow-raw-good|ENR
COL[Collector]-->|snowplow-raw-bad|FRBE
subgraph firehoserbe[Firehose]
FRBE[AWS Lambda]
end
FRBE-->S3RBE
ENR[Enricher]-->|snowplow-enriched-bad|FEBE
subgraph firehoseebe[Firehose]
FEBE[AWS Lambda]
end
FEBE-->S3EBE
ENR[Enricher]-->|snowplow-enriched-good|FRGE
subgraph firehosege[Firehose]
FRGE[AWS Lambda]
end
FRGE-->S3GE
end
subgraph snowflake[Data warehouse]
S3RBE[S3 raw-bad]-->BE[gitlab_bad_events]
S3EBE[S3 enriched-bad]-->BE[gitlab_bad_events]
S3GE[S3 output]-->GE[gitlab_events]
end
```
See [Snowplow technology 101](https://github.com/snowplow/snowplow/#snowplow-technology-101) for Snowplow's own documentation and an overview of how collectors and enrichers work.
### Pseudonymization
In contrast to a typical Snowplow pipeline, after enrichment, GitLab Snowplow events go through a [pseudonymization service](https://gitlab.com/gitlab-org/analytics-section/analytics-instrumentation/snowplow-pseudonymization) in the form of an AWS Lambda service before they are stored in S3 storage.
#### Why events need to be pseudonymized
GitLab is bound by its [obligations to community](https://about.gitlab.com/handbook/product/analytics-instrumentation-guide/service-usage-data-commitment/)
and by [legal regulations](https://about.gitlab.com/handbook/legal/privacy/customer-product-usage-information/) to protect the privacy of its users.
GitLab must provide valuable insights for business decisions, and there is a need
for a better understanding of different users' behavior patterns. The
pseudonymization process strikes a compromise between these two requirements.
Pseudonymization processes personally identifiable information inside a Snowplow event in an irreversible fashion,
maintaining deterministic output for a given input while masking any relation to that input.
#### How events are pseudonymized
Pseudonymization uses an allowlist that provides privacy by default. Therefore, each
attribute received as part of a Snowplow event is pseudonymized unless the attribute
is an allowed exception.
Pseudonymization is done using the HMAC-SHA256 keyed hash algorithm.
Attributes are combined with a secret salt to replace each piece of identifiable information with a pseudonym.
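The hashing step can be sketched in a few lines. This illustrates the HMAC-SHA256 algorithm only, not the actual Lambda code, and the salt value below is made up:

```python
import hashlib
import hmac

SECRET_SALT = b"example-salt"  # made up; the real salt is secret

def pseudonymize(attribute: str) -> str:
    """Replace an identifiable attribute with a deterministic pseudonym.

    HMAC-SHA256 keyed with a secret salt: the same input always maps
    to the same output, but the output reveals nothing about the input.
    """
    return hmac.new(SECRET_SALT, attribute.encode(), hashlib.sha256).hexdigest()

# Deterministic: re-hashing the same user ID yields the same pseudonym,
# so counts and joins still work downstream without exposing the user.
assert pseudonymize("user-42") == pseudonymize("user-42")
assert pseudonymize("user-42") != pseudonymize("user-43")
```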
### S3 bucket data lake to Snowflake
See [Data team's Snowplow Overview](https://about.gitlab.com/handbook/business-technology/data-team/platform/snowplow/) for further details on how data is ingested into our Snowflake data warehouse.
## Monitoring
There are several tools that monitor Snowplow events tracking in different stages of the processing pipeline:
- [Analytics Instrumentation Grafana dashboard](https://dashboards.gitlab.net/d/product-intelligence-main/product-intelligence-product-intelligence?orgId=1) monitors backend events sent from a GitLab.com instance to the collectors fleet. This dashboard provides information about:
- The number of events that successfully reach Snowplow collectors.
- The number of events that failed to reach Snowplow collectors.
- The number of backend events that were sent.
- [AWS CloudWatch dashboard](https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#dashboards:name=SnowPlow;start=P3D) monitors the state of the events in a processing pipeline. The pipeline starts from Snowplow collectors, goes through to enrichers and pseudonymization, and then up to persistence in an S3 bucket. From S3, the events are imported into the Snowflake Data Warehouse. You must have AWS access rights to view this dashboard. For more information, see [monitoring](https://gitlab.com/gitlab-org/analytics-section/analytics-instrumentation/snowplow-pseudonymization#monitoring) in the Snowplow Events pseudonymization service documentation.
- [Sisense dashboard](https://app.periscopedata.com/app/gitlab/417669/Snowplow-Summary-Dashboard) provides information about the number of good and bad events imported into the Data Warehouse, in addition to the total number of imported Snowplow events.
For more information, see this [video walk-through](https://www.youtube.com/watch?v=NxPS0aKa_oU).
## Related topics
- [Snowplow technology 101](https://github.com/snowplow/snowplow/#snowplow-technology-101)
- [Snowplow pseudonymization AWS Lambda project](https://gitlab.com/gitlab-org/analytics-section/analytics-instrumentation/snowplow-pseudonymization)
- [Analytics Instrumentation Guide](https://about.gitlab.com/handbook/product/analytics-instrumentation-guide/)
- [Data Infrastructure](https://about.gitlab.com/handbook/business-technology/data-team/platform/infrastructure/)
- [Snowplow architecture overview (internal)](https://www.youtube.com/watch?v=eVYJjzspsLU)
- [Snowplow architecture overview slide deck (internal)](https://docs.google.com/presentation/d/16gQEO5CAg8Tx4NBtfnZj-GF4juFI6HfEPWcZgH4Rn14/edit?usp=sharing)
- [AWS Lambda implementation (internal)](https://youtu.be/cQd0mdMhkQA)

View File

@ -1,190 +0,0 @@
---
stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Snowplow schemas
This page provides Snowplow schema reference for GitLab events.
## `gitlab_standard`
We are including the [`gitlab_standard` schema](https://gitlab.com/gitlab-org/iglu/-/blob/master/public/schemas/com.gitlab/gitlab_standard/jsonschema/) for structured events and page views.
The [`StandardContext`](https://gitlab.com/gitlab-org/gitlab/-/blob/master/lib/gitlab/tracking/standard_context.rb)
class represents this schema in the application. Some properties are
[automatically populated for frontend events](implementation.md#snowplow-javascript-frontend-tracking),
and can be [provided manually for backend events](implementation.md#implement-ruby-backend-tracking).
| Field Name | Required | Default value | Type | Description |
|-------------------------|:-------------------:|------------------------------|---------------------------|-------------------------------------------------------------------------------------------------------------------------|
| `project_id` | **{dotted-circle}** | Current project ID * | integer | |
| `namespace_id` | **{dotted-circle}** | Current group/namespace ID * | integer | |
| `user_id` | **{dotted-circle}** | Current user ID * | integer | User database record ID attribute. This value undergoes a pseudonymization process at the collector level. |
| `context_generated_at` | **{dotted-circle}** | Current timestamp | string (date time format) | Timestamp indicating when context was generated. |
| `environment` | **{check-circle}** | Current environment | string (max 32 chars) | Name of the source environment, such as `production` or `staging` |
| `source` | **{check-circle}** | Event source | string (max 32 chars) | Name of the source application, such as `gitlab-rails` or `gitlab-javascript` |
| `plan` | **{dotted-circle}** | Current namespace plan * | string (max 32 chars) | Name of the plan for the namespace, such as `free`, `premium`, or `ultimate`. Automatically picked from the `namespace`. |
| `google_analytics_id` | **{dotted-circle}** | GA ID value * | string (max 32 chars) | Google Analytics ID, present when set from our marketing sites. |
| `is_gitlab_team_member` | **{dotted-circle}** | | boolean | Indicates whether the event is triggered by a GitLab team member |
| `extra` | **{dotted-circle}** | | JSON | Any additional data associated with the event, in the form of key-value pairs |
_\* Default value present for frontend events only_
## Default Schema
Frontend events include a [web-specific schema](https://docs.snowplow.io/docs/understanding-your-pipeline/canonical-event/#web-specific-fields) provided by Snowplow.
All URLs are pseudonymized. The entity identifier [replaces](https://docs.snowplow.io/docs/collecting-data/collecting-from-own-applications/javascript-trackers/javascript-tracker/javascript-tracker-v2/tracker-setup/other-parameters-2/#setting-a-custom-page-url-and-referrer-url) personally identifiable
information (PII). PII includes usernames, group, and project names.
Page titles are hardcoded as `GitLab` for the same reason.
| Field Name | Required | Type | Description |
|--------------------------|---------------------|-----------|----------------------------------------------------------------------------------------------------------------------------------|
| `app_id` | **{check-circle}** | string | Unique identifier for website / application |
| `base_currency` | **{dotted-circle}** | string | Reporting currency |
| `br_colordepth` | **{dotted-circle}** | integer | Browser color depth |
| `br_cookies` | **{dotted-circle}** | boolean | Does the browser permit cookies? |
| `br_family` | **{dotted-circle}** | string | Browser family |
| `br_features_director` | **{dotted-circle}** | boolean | Director plugin installed? |
| `br_features_flash` | **{dotted-circle}** | boolean | Flash plugin installed? |
| `br_features_gears` | **{dotted-circle}** | boolean | Google gears installed? |
| `br_features_java` | **{dotted-circle}** | boolean | Java plugin installed? |
| `br_features_pdf` | **{dotted-circle}** | boolean | Adobe PDF plugin installed? |
| `br_features_quicktime` | **{dotted-circle}** | boolean | Quicktime plugin installed? |
| `br_features_realplayer` | **{dotted-circle}** | boolean | RealPlayer plugin installed? |
| `br_features_silverlight` | **{dotted-circle}** | boolean | Silverlight plugin installed? |
| `br_features_windowsmedia` | **{dotted-circle}** | boolean | Windows media plugin installed? |
| `br_lang` | **{dotted-circle}** | string | Language the browser is set to |
| `br_name` | **{dotted-circle}** | string | Browser name |
| `br_renderengine` | **{dotted-circle}** | string | Browser rendering engine |
| `br_type` | **{dotted-circle}** | string | Browser type |
| `br_version` | **{dotted-circle}** | string | Browser version |
| `br_viewheight` | **{dotted-circle}** | string | Browser viewport height |
| `br_viewwidth` | **{dotted-circle}** | string | Browser viewport width |
| `collector_tstamp` | **{dotted-circle}** | timestamp | Time stamp for the event recorded by the collector |
| `contexts` | **{dotted-circle}** | | |
| `derived_contexts` | **{dotted-circle}** | | Contexts derived in the Enrich process |
| `derived_tstamp` | **{dotted-circle}** | timestamp | Timestamp making allowance for inaccurate device clock |
| `doc_charset` | **{dotted-circle}** | string | Web page's character encoding |
| `doc_height` | **{dotted-circle}** | string | Web page height |
| `doc_width` | **{dotted-circle}** | string | Web page width |
| `domain_sessionid` | **{dotted-circle}** | string | Unique identifier (UUID) for this visit of this `user_id` to this domain |
| `domain_sessionidx` | **{dotted-circle}** | integer | Index of number of visits that this `user_id` has made to this domain (The first visit is `1`) |
| `domain_userid` | **{dotted-circle}** | string | Unique identifier for a user, based on a first party cookie (so domain specific) |
| `dvce_created_tstamp` | **{dotted-circle}** | timestamp | Timestamp when event occurred, as recorded by client device |
| `dvce_ismobile` | **{dotted-circle}** | boolean | Indicates whether device is mobile |
| `dvce_screenheight` | **{dotted-circle}** | string | Screen / monitor resolution |
| `dvce_screenwidth` | **{dotted-circle}** | string | Screen / monitor resolution |
| `dvce_sent_tstamp` | **{dotted-circle}** | timestamp | Timestamp when event was sent by client device to collector |
| `dvce_type` | **{dotted-circle}** | string | Type of device |
| `etl_tags` | **{dotted-circle}** | string | JSON of tags for this ETL run |
| `etl_tstamp` | **{dotted-circle}** | timestamp | Timestamp event began ETL |
| `event` | **{dotted-circle}** | string | Event type |
| `event_fingerprint` | **{dotted-circle}** | string | Hash client-set event fields |
| `event_format` | **{dotted-circle}** | string | Format for event |
| `event_id` | **{dotted-circle}** | string | Event UUID |
| `event_name` | **{dotted-circle}** | string | Event name |
| `event_vendor` | **{dotted-circle}** | string | The company who developed the event model |
| `event_version` | **{dotted-circle}** | string | Version of event schema |
| `geo_city` | **{dotted-circle}** | string | City of IP origin |
| `geo_country` | **{dotted-circle}** | string | Country of IP origin |
| `geo_latitude` | **{dotted-circle}** | string | An approximate latitude |
| `geo_longitude` | **{dotted-circle}** | string | An approximate longitude |
| `geo_region` | **{dotted-circle}** | string | Region of IP origin |
| `geo_region_name` | **{dotted-circle}** | string | Region of IP origin |
| `geo_timezone` | **{dotted-circle}** | string | Time zone of IP origin |
| `geo_zipcode` | **{dotted-circle}** | string | Zip (postal) code of IP origin |
| `ip_domain` | **{dotted-circle}** | string | Second level domain name associated with the visitor's IP address |
| `ip_isp` | **{dotted-circle}** | string | Visitor's ISP |
| `ip_netspeed` | **{dotted-circle}** | string | Visitor's connection type |
| `ip_organization` | **{dotted-circle}** | string | Organization associated with the visitor's IP address; defaults to ISP name if none is found |
| `mkt_campaign` | **{dotted-circle}** | string | The campaign ID |
| `mkt_clickid` | **{dotted-circle}** | string | The click ID |
| `mkt_content` | **{dotted-circle}** | string | The content or ID of the ad. |
| `mkt_medium` | **{dotted-circle}** | string | Type of traffic source |
| `mkt_network` | **{dotted-circle}** | string | The ad network to which the click ID belongs |
| `mkt_source` | **{dotted-circle}** | string | The company / website where the traffic came from |
| `mkt_term` | **{dotted-circle}** | string | Keywords associated with the referrer |
| `name_tracker` | **{dotted-circle}** | string | The tracker namespace |
| `network_userid` | **{dotted-circle}** | string | Unique identifier for a user, based on a cookie from the collector (so set at a network level and shouldn't be set by a tracker) |
| `os_family` | **{dotted-circle}** | string | Operating system family |
| `os_manufacturer` | **{dotted-circle}** | string | Manufacturers of operating system |
| `os_name` | **{dotted-circle}** | string | Name of operating system |
| `os_timezone` | **{dotted-circle}** | string | Client operating system time zone |
| `page_referrer` | **{dotted-circle}** | string | Referrer URL |
| `page_title` | **{dotted-circle}** | string | To not expose personal identifying information, the page title is hardcoded as `GitLab` |
| `page_url` | **{dotted-circle}** | string | Page URL |
| `page_urlfragment` | **{dotted-circle}** | string | Fragment aka anchor |
| `page_urlhost` | **{dotted-circle}** | string | Host aka domain |
| `page_urlpath` | **{dotted-circle}** | string | Path to page |
| `page_urlport` | **{dotted-circle}** | integer | Port if specified, 80 if not |
| `page_urlquery` | **{dotted-circle}** | string | Query string |
| `page_urlscheme` | **{dotted-circle}** | string | Scheme (protocol name) |
| `platform` | **{dotted-circle}** | string | The platform the app runs on |
| `pp_xoffset_max` | **{dotted-circle}** | integer | Maximum page x offset seen in the last ping period |
| `pp_xoffset_min` | **{dotted-circle}** | integer | Minimum page x offset seen in the last ping period |
| `pp_yoffset_max` | **{dotted-circle}** | integer | Maximum page y offset seen in the last ping period |
| `pp_yoffset_min` | **{dotted-circle}** | integer | Minimum page y offset seen in the last ping period |
| `refr_domain_userid` | **{dotted-circle}** | string | The Snowplow `domain_userid` of the referring website |
| `refr_dvce_tstamp` | **{dotted-circle}** | timestamp | The time of attaching the `domain_userid` to the inbound link |
| `refr_medium` | **{dotted-circle}** | string | Type of referer |
| `refr_source` | **{dotted-circle}** | string | Name of referer if recognised |
| `refr_term` | **{dotted-circle}** | string | Keywords if source is a search engine |
| `refr_urlfragment` | **{dotted-circle}** | string | Referer URL fragment |
| `refr_urlhost` | **{dotted-circle}** | string | Referer host |
| `refr_urlpath` | **{dotted-circle}** | string | Referer page path |
| `refr_urlport` | **{dotted-circle}** | integer | Referer port |
| `refr_urlquery` | **{dotted-circle}** | string | Referer URL query string |
| `refr_urlscheme` | **{dotted-circle}** | string | Referer scheme |
| `se_action` | **{dotted-circle}** | string | The action / event itself |
| `se_category` | **{dotted-circle}** | string | The category of event |
| `se_label` | **{dotted-circle}** | string | A label often used to refer to the 'object' the action is performed on |
| `se_property` | **{dotted-circle}** | string | A property associated with either the action or the object |
| `se_value` | **{dotted-circle}** | decimal | A value associated with the user action |
| `ti_category` | **{dotted-circle}** | string | Item category |
| `ti_currency` | **{dotted-circle}** | string | Currency |
| `ti_name` | **{dotted-circle}** | string | Item name |
| `ti_orderid` | **{dotted-circle}** | string | Order ID |
| `ti_price` | **{dotted-circle}** | decimal | Item price |
| `ti_price_base` | **{dotted-circle}** | decimal | Item price in base currency |
| `ti_quantity` | **{dotted-circle}** | integer | Item quantity |
| `ti_sku` | **{dotted-circle}** | string | Item SKU |
| `tr_affiliation` | **{dotted-circle}** | string | Transaction affiliation (such as channel) |
| `tr_city` | **{dotted-circle}** | string | Delivery address: city |
| `tr_country` | **{dotted-circle}** | string | Delivery address: country |
| `tr_currency` | **{dotted-circle}** | string | Transaction Currency |
| `tr_orderid` | **{dotted-circle}** | string | Order ID |
| `tr_shipping` | **{dotted-circle}** | decimal | Delivery cost charged |
| `tr_shipping_base` | **{dotted-circle}** | decimal | Shipping cost in base currency |
| `tr_state` | **{dotted-circle}** | string | Delivery address: state |
| `tr_tax` | **{dotted-circle}** | decimal | Transaction tax value (such as amount of VAT included) |
| `tr_tax_base` | **{dotted-circle}** | decimal | Tax applied in base currency |
| `tr_total` | **{dotted-circle}** | decimal | Transaction total value |
| `tr_total_base` | **{dotted-circle}** | decimal | Total amount of transaction in base currency |
| `true_tstamp` | **{dotted-circle}** | timestamp | User-set exact timestamp |
| `txn_id` | **{dotted-circle}** | string | Transaction ID |
| `unstruct_event` | **{dotted-circle}** | JSON | The properties of the event |
| `uploaded_at` | **{dotted-circle}** | | |
| `user_fingerprint` | **{dotted-circle}** | integer | User identifier based on (hopefully unique) browser features |
| `user_id` | **{dotted-circle}** | string | Unique identifier for user, set by the business using setUserId |
| `user_ipaddress` | **{dotted-circle}** | string | IP address |
| `useragent` | **{dotted-circle}** | string | User agent (expressed as a browser string) |
| `v_collector` | **{dotted-circle}** | string | Collector version |
| `v_etl` | **{dotted-circle}** | string | ETL version |
| `v_tracker` | **{dotted-circle}** | string | Identifier for Snowplow tracker |
## `gitlab_service_ping`
Backend events converted from ServicePing (`redis` and `redis_hll`) must include [ServicePing context](https://gitlab.com/gitlab-org/iglu/-/tree/master/public/schemas/com.gitlab/gitlab_service_ping/jsonschema)
using the [helper class](https://gitlab.com/gitlab-org/gitlab/blob/master/lib/gitlab/tracking/service_ping_context.rb).
An example of converted `redis_hll` [event with context](https://gitlab.com/gitlab-org/gitlab/-/edit/master/app/controllers/concerns/product_analytics_tracking.rb#L58).
| Field Name | Required | Type | Description |
|---------------|:-------------------:|------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `data_source` | **{check-circle}** | string (max 64 chars) | The `data_source` attribute from the metrics YAML definition. |
| `event_name`* | **{dotted-circle}** | string (max 128 chars) | When there is a many-to-many relationship between events and metrics, this field contains the name of a Redis event that can be used for aggregations in downstream systems |
| `key_path`* | **{dotted-circle}** | string (max 256 chars) | The `key_path` attribute from the metrics YAML definition |
_\* Either `event_name` or `key_path` is required_
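The either/or requirement in the footnote can be expressed as a small validation sketch. The helper below is hypothetical (not the actual `ServicePingContext` class), and the sample `event_name` is illustrative:

```python
def validate_service_ping_context(context: dict) -> bool:
    """Check the gitlab_service_ping constraints described above:
    data_source is required, and at least one of event_name or
    key_path must be present."""
    if not context.get("data_source"):
        return False
    return bool(context.get("event_name") or context.get("key_path"))

# Illustrative payloads: one valid, one missing both optional fields.
assert validate_service_ping_context(
    {"data_source": "redis_hll", "event_name": "example_redis_event"}
)
assert not validate_service_ping_context({"data_source": "redis_hll"})
```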

View File

@ -1,82 +0,0 @@
---
stage: Analyze
group: Analytics Instrumentation
info: To determine the technical writer assigned to the Stage/Group associated with this page, see https://about.gitlab.com/handbook/product/ux/technical-writing/#assignments
---
# Troubleshooting Snowplow
## Monitoring
This page covers dashboards and alerts coming from a number of internal tools.
For a brief video overview of the tools used to monitor Snowplow usage, please check out [this internal video](https://www.youtube.com/watch?v=NxPS0aKa_oU) (you must be logged into GitLab Unfiltered to view).
## Good events drop
### Symptoms
You are alerted via a [Sisense alert](https://app.periscopedata.com/app/gitlab/alert/Volume-of-Snowplow-Good-events/5a5f80ef34fe450da5ebb84eaa84067f/edit) that is sent to the `#g_product_intelligence` Slack channel
### Locating the problem
First, you need to identify at which stage of the Snowplow data pipeline the drop is occurring.
Start at the [Snowplow dashboard](https://console.aws.amazon.com/systems-manager/resource-groups/cloudwatch?dashboard=SnowPlow&region=us-east-1#) on CloudWatch.
If you do not have access to CloudWatch, GitLab team members can create an access request issue, similar to this one: `https://gitlab.com/gitlab-com/team-member-epics/access-requests/-/issues/9730`.
While on the CloudWatch dashboard, set the time range to the last 4 weeks to get a better picture of system characteristics over time. Then visit the following charts:
1. `ELB New Flow Count` and `Collector Auto Scaling Group Network In/Out` - they show, in order, the number of connections to collectors via load balancers and the data volume (in bytes) processed by collectors. If a drop is visible there, it means fewer events were fired from the GitLab application. Proceed to the [application layer guide](#troubleshooting-gitlab-application-layer) for more details
1. `Firehose Records to S3` - it shows how many event records were saved to the S3 bucket. If there was a drop on this chart but not on the charts from step 1, the problem is located at the AWS infrastructure layer. Refer to the [AWS layer guide](#troubleshooting-aws-layer)
1. If the drop wasn't visible on any of the previous charts, the problem is at the data warehouse layer. Refer to the [data warehouse layer guide](#troubleshooting-data-warehouse-layer)
### Troubleshooting GitLab application layer
A drop occurring at the application layer can be a symptom of an issue, but it might also be a result of the typical application lifecycle, intended changes to analytics instrumentation or experiments tracking,
or even a public holiday in some regions of the world with a larger user base. To verify whether there is an underlying problem to solve, you can check the following:
1. Check `about.gitlab.com` website traffic on [Google Analytics](https://analytics.google.com/analytics/web/) to verify whether a public holiday might impact overall use of the GitLab system
   1. You might need to open an access request for Google Analytics access first, for example: [access request internal issue](https://gitlab.com/gitlab-com/team-member-epics/access-requests/-/issues/1772)
1. Plot `select date(dvce_created_tstamp) , event , count(*) from legacy.snowplow_unnested_events_90 where dvce_created_tstamp > '2021-06-15' and dvce_created_tstamp < '2021-07-10' group by 1 , 2 order by 1 , 2` in Sisense to see what type of events was responsible for the drop
1. Plot `select date(dvce_created_tstamp) ,se_category , count(*) from legacy.snowplow_unnested_events_90 where dvce_created_tstamp > '2021-06-15' and dvce_created_tstamp < '2021-07-31' and event = 'struct' group by 1 , 2 order by 1, 2` to see which events recorded the biggest drops in the suspected category
1. Check whether any merged MR might have caused a reduction in reported events; pay attention to MRs labeled ~"analytics instrumentation" and ~"growth experiment"
1. Check (via the [Grafana explore tab](https://dashboards.gitlab.net/explore)) the following Prometheus counters: `gitlab_snowplow_events_total`, `gitlab_snowplow_failed_events_total`, and `gitlab_snowplow_successful_events_total` to see how many events were fired correctly from GitLab.com. An example query, `sum(rate(gitlab_snowplow_successful_events_total{env="gprd"}[5m])) / sum(rate(gitlab_snowplow_events_total{env="gprd"}[5m]))`, charts the rate at which good events rose in comparison to events sent in total. If the number drops below 1, the problem might be in communication between GitLab and the AWS collectors fleet.
1. Check [logs in Kibana](https://log.gprd.gitlab.net/app/discover#) and filter with `{ "query": { "match_phrase": { "json.message": "failed to be reported to collector at" } } }` to see whether any failed events were logged
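The health check encoded by the example Prometheus query above boils down to a ratio of rates. A toy Python version, with made-up counter rates:

```python
def snowplow_success_ratio(successful_rate: float, total_rate: float) -> float:
    """Ratio of successfully emitted events to all events sent.

    Mirrors sum(rate(gitlab_snowplow_successful_events_total[5m]))
          / sum(rate(gitlab_snowplow_events_total[5m])).
    A healthy pipeline stays at (or very near) 1.0.
    """
    return successful_rate / total_rate

# Made-up rates: 98 good events/s out of 100 sent/s.
ratio = snowplow_success_ratio(98.0, 100.0)
print("investigate collectors" if ratio < 1.0 else "healthy")
```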
We conducted an investigation into an unexpected drop in Snowplow event volume.
GitLab team members can view more information in this confidential issue: `https://gitlab.com/gitlab-org/gitlab/-/issues/335206`
### Troubleshooting AWS layer
Already conducted investigations:
- [Steep decrease of Snowplow page views](https://gitlab.com/gitlab-org/gitlab/-/issues/268009)
- [`snowplow.trx.gitlab.net` unreachable](https://gitlab.com/gitlab-com/gl-infra/production/-/issues/5073)
### Troubleshooting data warehouse layer
Reach out to the [Data team](https://about.gitlab.com/handbook/business-technology/data-team/) to ask about the current state of the data warehouse. Their handbook page has a [section with contact details](https://about.gitlab.com/handbook/business-technology/data-team/#how-to-connect-with-us)
## Delay in Snowplow Enrichers
If there is an alert for **Snowplow Raw Good Stream Backing Up**, we receive an email notification. This sometimes happens because Snowplow Enrichers don't scale well enough for the amount of Snowplow events.
If the delay goes over 48 hours, we lose data.
### Contact SRE on-call
Send a message in the [#infrastructure_lounge](https://gitlab.slack.com/archives/CB3LSMEJV) Slack channel using the following template:
```markdown
Hello team!
We received an alert for [Snowplow Raw Good Stream Backing Up](https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#alarmsV2:alarm/SnowPlow+Raw+Good+Stream+Backing+Up?).
Enrichers are not scaling well for the amount of events we receive.
See the [dashboard](https://us-east-1.console.aws.amazon.com/cloudwatch/home?region=us-east-1#dashboards:name=SnowPlow).
Could we get assistance to fix the delay?
Thank you!
```

View File

@ -87,7 +87,7 @@ The hand-raise lead form accepts the following parameters via provide or inject.
},
```
The `ctaTracking` parameters follow [the `data-track` attributes](../internal_analytics/snowplow/implementation.md#data-track-attributes) for implementing Snowplow tracking. The provided tracking attributes are attached to the button inside the `HandRaiseLeadButton` component, which triggers the hand-raise lead modal when selected.
The `ctaTracking` parameters follow the `data-track` attributes for implementing Snowplow tracking. The provided tracking attributes are attached to the button inside the `HandRaiseLeadButton` component, which triggers the hand-raise lead modal when selected.
### Monitor the lead location

View File

@ -30,7 +30,7 @@ Complete performance metrics should be collected for Gitaly instances for identi
### Gitaly performance guidelines
Gitaly functions as the primary Git Repository Storage in GitLab. However, it's not a streaming file server. It also does a lot of demanding computing work, such as preparing and caching Git pack files which informs some of the performance recommendations below.
Gitaly functions as the primary Git Repository Storage in GitLab. However, it's not a streaming file server. It also does a lot of demanding computing work, such as preparing and caching Git packfiles which informs some of the performance recommendations below.
NOTE:
All recommendations are for production configurations, including performance testing. For test configurations, like training or functional testing, you can use less expensive options. However, you should adjust or rebuild if performance is an issue.
@ -48,7 +48,7 @@ All recommendations are for production configurations, including performance tes
**To accommodate:**
- Git Pack file operations are memory and CPU intensive.
- Git packfile operations are memory and CPU intensive.
- If repository commit traffic is dense, large, or very frequent, then more CPU and Memory are required to handle the load. Patterns such as storing binaries and/or busy or large monorepos are examples that can cause high loading.
#### Disk I/O recommendations
@ -60,8 +60,8 @@ All recommendations are for production configurations, including performance tes
**To accommodate:**
- Gitaly storage is expected to be local (not NFS of any type including EFS).
- Gitaly servers also need disk space for building and caching Git pack files. This is above and beyond the permanent storage of your Git Repositories.
- Git Pack files are cached in Gitaly. Creation of pack files in temporary disk benefits from fast disk, and disk caching of pack files benefits from ample disk space.
- Gitaly servers also need disk space for building and caching Git packfiles. This is in addition to the permanent storage of your Git repositories.
- Git packfiles are cached in Gitaly. Creation of packfiles in temporary disk benefits from fast disk, and disk caching of packfiles benefits from ample disk space.
#### Network I/O recommendations

View File

@ -38,7 +38,6 @@ The following Rake tasks are available for use with GitLab:
| [Service Desk email](../administration/raketasks/service_desk_email.md) | Service Desk email-related tasks. |
| [SMTP maintenance](../administration/raketasks/smtp.md) | SMTP-related tasks. |
| [SPDX license list import](spdx.md) | Import a local copy of the [SPDX license list](https://spdx.org/licenses/) for matching [License approval policies](../user/compliance/license_approval_policies.md). |
| [Repository storage](../administration/raketasks/storage.md) | List and migrate existing projects and attachments from legacy storage to hashed storage. |
| [Reset user passwords](../security/reset_user_password.md#use-a-rake-task) | Reset user passwords using Rake. |
| [Uploads migrate](../administration/raketasks/uploads/migrate.md) | Migrate uploads between local storage and object storage. |
| [Uploads sanitize](../administration/raketasks/uploads/sanitize.md) | Remove EXIF data from images uploaded to earlier versions of GitLab. |

View File

@ -781,8 +781,7 @@ Prerequisites:
- The [GitLab 14.0 release post contains several important notes](https://about.gitlab.com/releases/2021/06/22/gitlab-14-0-released/#upgrade)
about pre-requisites including [using Patroni instead of repmgr](../../administration/postgresql/replication_and_failover.md#switching-from-repmgr-to-patroni),
migrating [to hashed storage](../../administration/raketasks/storage.md#migrate-to-hashed-storage),
and [to Puma](../../administration/operations/puma.md).
migrating to hashed storage and [to Puma](../../administration/operations/puma.md).
- The support of PostgreSQL 11 [has been dropped](../../install/requirements.md#database). Make sure to [update your database](https://docs.gitlab.com/omnibus/settings/database.html#upgrade-packaged-postgresql-server) to version 12 before updating to GitLab 14.0.
Long running batched background database migrations:

View File

@ -56,7 +56,7 @@ Here's a few links to get you started:
- [Git book migration guide](https://git-scm.com/book/en/v2/Git-and-Other-Systems-Migrating-to-Git#_perforce_import)
`git p4` and `git filter-branch` are not very good at
creating small and efficient Git pack files. So it might be a good
creating small and efficient Git packfiles. So it might be a good
idea to spend time and CPU to properly repack your repository before
sending it for the first time to your GitLab server. See
[this StackOverflow question](https://stackoverflow.com/questions/28720151/git-gc-aggressive-vs-git-repack/).
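The repacking advice above can be sketched as follows. The `--window`/`--depth` values are illustrative aggressive-repack settings, not values prescribed by GitLab; tune them for your repository size before the first push:

```shell
# Sketch only: repack a freshly imported repository into a small,
# efficient packfile before pushing it to the GitLab server.
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
# A throwaway commit so there is something to pack (identity set inline).
git -c user.email=import@example.com -c user.name=import \
  commit -q --allow-empty -m 'imported history'
# -a: pack all reachable objects; -d: delete redundant packs afterwards;
# -f: recompute deltas from scratch; window/depth trade CPU for size.
git repack -a -d -f --window=250 --depth=50
ls .git/objects/pack/
```

This is CPU- and memory-intensive on large imports, which is why the text suggests doing it once, client-side, rather than leaving it to the server.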

View File

@ -31,14 +31,10 @@ For a video overview, see [Design Management (GitLab 12.2)](https://www.youtube.
Image thumbnails are stored as other uploads, and are not associated with a project but rather
with a specific design model.
- Projects must use
[hashed storage](../../../administration/raketasks/storage.md#migrate-to-hashed-storage).
Newly created projects use hashed storage by default.
A GitLab administrator can verify the storage type of a project by going to **Admin Area > Projects**
and then selecting the project in question. A project can be identified as
hashed-stored if the value of the **Relative path** field contains `@hashed`.
A GitLab administrator can verify the relative path of a hashed-stored project by going to **Admin Area > Projects**
and then selecting the project in question. The **Relative path** field contains `@hashed` in its value.
If the requirements are not met, you are notified in the **Designs** section.

View File

@ -207,8 +207,8 @@ This:
- Removes any internal Git references to old commits.
- Runs `git gc --prune=30.minutes.ago` against the repository to remove unreferenced objects. Repacking your repository temporarily
causes the size of your repository to increase significantly, because the old pack files are not removed until the
new pack files have been created.
causes the size of your repository to increase significantly, because the old packfiles are not removed until the
new packfiles have been created.
- Unlinks any unused LFS objects attached to your project, freeing up storage space.
- Recalculates the size of your repository on disk.

View File

@ -70,12 +70,12 @@ To view Service Desk issues:
#### Redesigned issue list
> [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/413092) in GitLab 16.1 [with a flag](../../../administration/feature_flags.md) named `service_desk_vue_list`. Disabled by default.
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/issues/413092) in GitLab 16.1 [with a flag](../../../administration/feature_flags.md) named `service_desk_vue_list`. Disabled by default.
> - [Enabled on GitLab.com](https://gitlab.com/gitlab-org/gitlab/-/issues/413092) in GitLab 16.5.
FLAG:
On self-managed GitLab, by default this feature is not available. To make it available per project or for your entire instance, an administrator can [enable the feature flag](../../../administration/feature_flags.md) named `service_desk_vue_list`.
On GitLab.com, this feature is not available.
The feature is not ready for production use.
On GitLab.com, this feature is available.
When this feature is enabled, the Service Desk issue list more closely matches the regular issue list.
Available features include:

View File

@ -224,6 +224,7 @@ module API
requires :slack_app_verification_token, type: String, desc: 'The verification token of the GitLab for Slack app. This method of authentication is deprecated by Slack and used only for authenticating slash commands from the app'
end
optional :namespace_aggregation_schedule_lease_duration_in_seconds, type: Integer, desc: 'Maximum duration (in seconds) between refreshes of namespace statistics (Default: 300)'
optional :project_jobs_api_rate_limit, type: Integer, desc: 'Maximum authenticated requests to /project/:id/jobs per minute'
Gitlab::SSHPublicKey.supported_types.each do |type|
optional :"#{type}_key_restriction",

View File

@ -1389,8 +1389,6 @@ module API
optional :expires_at, type: Date, default: -> { 1.day.from_now.to_date }, desc: 'The expiration date in the format YEAR-MONTH-DAY of the personal access token'
end
post feature_category: :system_access do
bad_request!('Endpoint is disabled via user_pat_rest_api feature flag. Please contact your administrator to enable it.') unless Feature.enabled?(:user_pat_rest_api)
response = ::PersonalAccessTokens::CreateService.new(
current_user: current_user, target_user: current_user, params: declared_params(include_missing: false)
).execute

View File

@ -58,8 +58,8 @@ module Gitlab
fetch_google_ip_list: { threshold: 10, interval: 1.minute },
project_fork_sync: { threshold: 10, interval: 30.minutes },
ai_action: { threshold: 160, interval: 8.hours },
jobs_index: { threshold: 600, interval: 1.minute },
vertex_embeddings_api: { threshold: 450, interval: 1.minute },
jobs_index: { threshold: -> { application_settings.project_jobs_api_rate_limit }, interval: 1.minute },
bulk_import: { threshold: 6, interval: 1.minute },
projects_api_rate_limit_unauthenticated: {
threshold: -> { application_settings.projects_api_rate_limit_unauthenticated }, interval: 10.minutes
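With this change the `jobs_index` threshold is no longer a compile-time integer but a lambda evaluated against the live application settings, like the existing `projects_api_rate_limit_unauthenticated` entry. A minimal standalone sketch of that pattern (the `Settings` struct and `resolve_threshold` helper are illustrative stand-ins, not GitLab code):

```ruby
# Illustrative only: a rate-limit table may mix literal thresholds with
# lambdas that read a current setting each time the limit is checked.
Settings = Struct.new(:project_jobs_api_rate_limit)

RATE_LIMITS = {
  bulk_import: { threshold: 6, interval: 60 },
  jobs_index:  { threshold: ->(settings) { settings.project_jobs_api_rate_limit },
                 interval: 60 }
}.freeze

# Call the threshold if it is callable; otherwise use it as-is.
def resolve_threshold(key, settings)
  threshold = RATE_LIMITS.fetch(key)[:threshold]
  threshold.respond_to?(:call) ? threshold.call(settings) : threshold
end
```

The benefit of the lambda form is that an administrator can change the setting (here, `project_jobs_api_rate_limit`) and have the new limit take effect without a redeploy.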

View File

@ -50,6 +50,7 @@ module Gitlab
delivered_to: delivered_to.map(&:value),
envelope_to: envelope_to.map(&:value),
x_envelope_to: x_envelope_to.map(&:value),
cc_address: cc,
# reduced down to what looks like an email in the received headers
received_recipients: recipients_from_received_headers,
meta: {
@ -84,23 +85,27 @@ module Gitlab
def mail_key
strong_memoize(:mail_key) do
key_from_to_header || key_from_additional_headers
find_first_key_from(to) || key_from_additional_headers
end
end
def key_from_to_header
to.find do |address|
key = email_class.key_from_address(address)
break key if key
def find_first_key_from(items)
items.each do |item|
email = item.is_a?(Mail::Field) ? item.value : item
key = email_class.key_from_address(email)
return key if key
end
nil
end
def key_from_additional_headers
find_key_from_references ||
find_key_from_delivered_to_header ||
find_key_from_envelope_to_header ||
find_key_from_x_envelope_to_header ||
find_first_key_from_received_headers
find_first_key_from(delivered_to) ||
find_first_key_from(envelope_to) ||
find_first_key_from(x_envelope_to) ||
find_first_key_from(recipients_from_received_headers) ||
find_first_key_from(cc)
end
def ensure_references_array(references)
@ -131,6 +136,10 @@ module Gitlab
Array(mail.to)
end
def cc
Array(mail.cc)
end
def delivered_to
Array(mail[:delivered_to])
end
@ -147,34 +156,6 @@ module Gitlab
Array(mail[:received])
end
def find_key_from_delivered_to_header
delivered_to.find do |header|
key = email_class.key_from_address(header.value)
break key if key
end
end
def find_key_from_envelope_to_header
envelope_to.find do |header|
key = email_class.key_from_address(header.value)
break key if key
end
end
def find_key_from_x_envelope_to_header
x_envelope_to.find do |header|
key = email_class.key_from_address(header.value)
break key if key
end
end
def find_first_key_from_received_headers
recipients_from_received_headers.find do |email|
key = email_class.key_from_address(email)
break key if key
end
end
def recipients_from_received_headers
strong_memoize :emails_from_received_headers do
received.filter_map { |header| header.value[RECEIVED_HEADER_REGEX, 1] }
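The refactor above collapses four near-identical `find_key_from_*` methods into a single `find_first_key_from` that accepts both header field objects and plain strings. A standalone sketch of that duck-typing, with a stub `Field` struct and a hypothetical `key_from_address` standing in for the Mail gem and the real email class:

```ruby
# Stand-ins for Mail::Field and the handler's email class (illustrative).
Field = Struct.new(:value)

# Pretend incoming keys look like "reply+<key>@example.com".
def key_from_address(address)
  address[/\Areply\+(.+)@/, 1]
end

# One generic scan replaces the per-header lookup methods: unwrap field
# objects to their string value, return the first address that yields a key.
def find_first_key_from(items)
  items.each do |item|
    email = item.respond_to?(:value) ? item.value : item
    key = key_from_address(email)
    return key if key
  end
  nil
end
```

In the real handler the unwrap check is `item.is_a?(Mail::Field)`; the `respond_to?` form here just keeps the sketch free of the Mail dependency.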

View File

@ -1,125 +0,0 @@
# frozen_string_literal: true
module Gitlab
module HashedStorage
# Hashed Storage Migrator
#
# This is responsible for scheduling and flagging projects
# to be migrated from Legacy to Hashed storage, either one by one or in bulk.
class Migrator
BATCH_SIZE = 100
# Schedule a range of projects to be bulk migrated with #bulk_migrate asynchronously
#
# @param [Integer] start first project id for the range
# @param [Integer] finish last project id for the range
def bulk_schedule_migration(start:, finish:)
::HashedStorage::MigratorWorker.perform_async(start, finish)
end
# Schedule a range of projects to be bulk rolledback with #bulk_rollback asynchronously
#
# @param [Integer] start first project id for the range
# @param [Integer] finish last project id for the range
def bulk_schedule_rollback(start:, finish:)
::HashedStorage::RollbackerWorker.perform_async(start, finish)
end
# Start migration of projects from specified range
#
# Flagging a project to be migrated is a synchronous action
# but the migration runs through async jobs
#
# @param [Integer] start first project id for the range
# @param [Integer] finish last project id for the range
# rubocop: disable CodeReuse/ActiveRecord
def bulk_migrate(start:, finish:)
projects = build_relation(start, finish)
projects.with_route.find_each(batch_size: BATCH_SIZE) do |project|
migrate(project)
end
end
# rubocop: enable CodeReuse/ActiveRecord
# Start rollback of projects from specified range
#
# Flagging a project to be rolled back is a synchronous action
# but the rollback runs through async jobs
#
# @param [Integer] start first project id for the range
# @param [Integer] finish last project id for the range
# rubocop: disable CodeReuse/ActiveRecord
def bulk_rollback(start:, finish:)
projects = build_relation(start, finish)
projects.with_route.find_each(batch_size: BATCH_SIZE) do |project|
rollback(project)
end
end
# rubocop: enable CodeReuse/ActiveRecord
# Flag a project to be migrated to Hashed Storage
#
# @param [Project] project that will be migrated
def migrate(project)
Gitlab::AppLogger.info "Starting storage migration of #{project.full_path} (ID=#{project.id})..."
project.migrate_to_hashed_storage!
rescue StandardError => err
Gitlab::AppLogger.error("#{err.message} migrating storage of #{project.full_path} (ID=#{project.id}), trace - #{err.backtrace}")
end
# Flag a project to be rolled-back to Legacy Storage
#
# @param [Project] project that will be rolled-back
def rollback(project)
Gitlab::AppLogger.info "Starting storage rollback of #{project.full_path} (ID=#{project.id})..."
project.rollback_to_legacy_storage!
rescue StandardError => err
Gitlab::AppLogger.error("#{err.message} rolling-back storage of #{project.full_path} (ID=#{project.id}), trace - #{err.backtrace}")
end
# Returns whether we have any pending storage migration
#
def migration_pending?
any_non_empty_queue?(::HashedStorage::MigratorWorker, ::HashedStorage::ProjectMigrateWorker)
end
# Returns whether we have any pending storage rollback
#
def rollback_pending?
any_non_empty_queue?(::HashedStorage::RollbackerWorker, ::HashedStorage::ProjectRollbackWorker)
end
# Remove all remaining scheduled rollback operations
#
def abort_rollback!
[::HashedStorage::RollbackerWorker, ::HashedStorage::ProjectRollbackWorker].each do |worker|
Sidekiq::Queue.new(worker.queue).clear
end
end
private
def any_non_empty_queue?(*workers)
workers.any? do |worker|
Sidekiq::Queue.new(worker.queue).size != 0 # rubocop:disable Style/ZeroLengthPredicate
end
end
# rubocop: disable CodeReuse/ActiveRecord
def build_relation(start, finish)
relation = Project
table = Project.arel_table
relation = relation.where(table[:id].gteq(start)) if start
relation = relation.where(table[:id].lteq(finish)) if finish
relation
end
# rubocop: enable CodeReuse/ActiveRecord
end
end
end

View File

@ -1,129 +0,0 @@
# frozen_string_literal: true
module Gitlab
module HashedStorage
module RakeHelper
def self.batch_size
ENV.fetch('BATCH', 200).to_i
end
def self.listing_limit
ENV.fetch('LIMIT', 500).to_i
end
def self.range_from
ENV['ID_FROM']
end
def self.range_to
ENV['ID_TO']
end
def self.using_ranges?
!range_from.nil? && !range_to.nil?
end
def self.range_single_item?
using_ranges? && range_from == range_to
end
# rubocop: disable CodeReuse/ActiveRecord
def self.project_id_batches_migration(&block)
Project.with_unmigrated_storage.in_batches(of: batch_size, start: range_from, finish: range_to) do |relation| # rubocop: disable Cop/InBatches
ids = relation.pluck(:id)
yield ids.min, ids.max
end
end
# rubocop: enable CodeReuse/ActiveRecord
# rubocop: disable CodeReuse/ActiveRecord
def self.project_id_batches_rollback(&block)
Project.with_storage_feature(:repository).in_batches(of: batch_size, start: range_from, finish: range_to) do |relation| # rubocop: disable Cop/InBatches
ids = relation.pluck(:id)
yield ids.min, ids.max
end
end
# rubocop: enable CodeReuse/ActiveRecord
def self.legacy_attachments_relation
Upload.inner_join_local_uploads_projects.merge(Project.without_storage_feature(:attachments))
end
def self.hashed_attachments_relation
Upload.inner_join_local_uploads_projects.merge(Project.with_storage_feature(:attachments))
end
def self.relation_summary(relation_name, relation)
relation_count = relation.count
$stdout.puts "* Found #{relation_count} #{relation_name}".color(:green)
relation_count
end
def self.projects_list(relation_name, relation)
listing(relation_name, relation.with_route) do |project|
$stdout.puts " - #{project.full_path} (id: #{project.id})".color(:red)
$stdout.puts " #{project.repository.disk_path}"
end
end
def self.attachments_list(relation_name, relation)
listing(relation_name, relation) do |upload|
$stdout.puts " - #{upload.path} (id: #{upload.id})".color(:red)
end
end
# rubocop: disable CodeReuse/ActiveRecord
def self.listing(relation_name, relation)
relation_count = relation_summary(relation_name, relation)
return unless relation_count > 0
limit = listing_limit
if relation_count > limit
$stdout.puts " ! Displaying first #{limit} #{relation_name}..."
end
relation.find_each(batch_size: batch_size).with_index do |element, index|
yield element
break if index + 1 >= limit
end
end
# rubocop: enable CodeReuse/ActiveRecord
def self.prune(relation_name, relation, dry_run: true, root: nil)
root ||= '../repositories'
known_paths = Set.new
listing(relation_name, relation) { |p| known_paths << "#{root}/#{p.repository.disk_path}" }
marked_for_deletion = Set.new(Dir["#{root}/@hashed/*/*/*"])
marked_for_deletion.reject! do |path|
base = path.gsub(/\.(\w+\.)?git$/, '')
known_paths.include?(base)
end
if marked_for_deletion.empty?
$stdout.puts "No orphaned directories found. Nothing to do!"
else
n = marked_for_deletion.size
$stdout.puts "Found #{n} orphaned #{'directory'.pluralize(n)}"
$stdout.puts "Dry run. (Run again with FORCE=1 to delete). We would have deleted:" if dry_run
end
marked_for_deletion.each do |p|
p = Pathname.new(p)
if dry_run
$stdout.puts " - #{p}"
else
$stdout.puts "Removing #{p}"
p.rmtree
end
end
end
end
end
end

View File

@ -1,186 +0,0 @@
# frozen_string_literal: true
namespace :gitlab do
namespace :storage do
desc 'GitLab | Storage | Migrate existing projects to Hashed Storage'
task migrate_to_hashed: :environment do
if Gitlab::Database.read_only?
abort 'This task requires database write access. Exiting.'
end
storage_migrator = Gitlab::HashedStorage::Migrator.new
helper = Gitlab::HashedStorage::RakeHelper
if storage_migrator.rollback_pending?
abort "There is already a rollback operation in progress, " \
"running a migration at the same time may have unexpected consequences."
end
if helper.range_single_item?
project = Project.with_unmigrated_storage.find_by(id: helper.range_from)
unless project
abort "There are no projects requiring storage migration with ID=#{helper.range_from}"
end
puts "Enqueueing storage migration of #{project.full_path} (ID=#{project.id})..."
storage_migrator.migrate(project)
else
legacy_projects_count = if helper.using_ranges?
Project.with_unmigrated_storage.id_in(helper.range_from..helper.range_to).count
else
Project.with_unmigrated_storage.count
end
if legacy_projects_count == 0
abort 'There are no projects requiring storage migration. Nothing to do!'
end
print "Enqueuing migration of #{legacy_projects_count} projects in batches of #{helper.batch_size}"
helper.project_id_batches_migration do |start, finish|
storage_migrator.bulk_schedule_migration(start: start, finish: finish)
print '.'
end
end
puts ' Done!'
end
desc 'GitLab | Storage | Rollback existing projects to Legacy Storage'
task rollback_to_legacy: :environment do
if Gitlab::Database.read_only?
abort 'This task requires database write access. Exiting.'
end
storage_migrator = Gitlab::HashedStorage::Migrator.new
helper = Gitlab::HashedStorage::RakeHelper
if storage_migrator.migration_pending?
abort "There is already a migration operation in progress, " \
"running a rollback at the same time may have unexpected consequences."
end
if helper.range_single_item?
project = Project.with_storage_feature(:repository).find_by(id: helper.range_from)
unless project
abort "There are no projects that can be rolledback with ID=#{helper.range_from}"
end
puts "Enqueueing storage rollback of #{project.full_path} (ID=#{project.id})..."
storage_migrator.rollback(project)
else
hashed_projects_count = if helper.using_ranges?
Project.with_storage_feature(:repository).id_in(helper.range_from..helper.range_to).count
else
Project.with_storage_feature(:repository).count
end
if hashed_projects_count == 0
abort 'There are no projects that can have storage rolledback. Nothing to do!'
end
print "Enqueuing rollback of #{hashed_projects_count} projects in batches of #{helper.batch_size}"
helper.project_id_batches_rollback do |start, finish|
storage_migrator.bulk_schedule_rollback(start: start, finish: finish)
print '.'
end
end
puts ' Done!'
end
desc 'Gitlab | Storage | Summary of existing projects using Legacy Storage'
task legacy_projects: :environment do
# Required to prevent Docker upgrade to 14.0 if there data on legacy storage
# See: https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/5311#note_590454698
wait_until_database_is_ready do
helper = Gitlab::HashedStorage::RakeHelper
helper.relation_summary('projects using Legacy Storage', Project.without_storage_feature(:repository))
end
end
desc 'Gitlab | Storage | List existing projects using Legacy Storage'
task list_legacy_projects: :environment do
helper = Gitlab::HashedStorage::RakeHelper
helper.projects_list('projects using Legacy Storage', Project.without_storage_feature(:repository))
end
desc 'Gitlab | Storage | Summary of existing projects using Hashed Storage'
task hashed_projects: :environment do
helper = Gitlab::HashedStorage::RakeHelper
helper.relation_summary('projects using Hashed Storage', Project.with_storage_feature(:repository))
end
desc 'Gitlab | Storage | List existing projects using Hashed Storage'
task list_hashed_projects: :environment do
helper = Gitlab::HashedStorage::RakeHelper
helper.projects_list('projects using Hashed Storage', Project.with_storage_feature(:repository))
end
desc 'Gitlab | Storage | Prune projects using Hashed Storage. Remove all hashed directories that do not have a project associated'
task prune_hashed_projects: [:environment, :gitlab_environment] do
if Rails.env.production?
abort('This destructive action may only be run in development')
end
helper = Gitlab::HashedStorage::RakeHelper
name = 'projects using Hashed Storage'
relation = Project.with_storage_feature(:repository)
root = Gitlab.config.repositories.storages['default'].legacy_disk_path
dry_run = !ENV['FORCE'].present?
helper.prune(name, relation, dry_run: dry_run, root: root)
end
desc 'Gitlab | Storage | Summary of project attachments using Legacy Storage'
task legacy_attachments: :environment do
# Required to prevent Docker upgrade to 14.0 if there data on legacy storage
# See: https://gitlab.com/gitlab-org/omnibus-gitlab/-/merge_requests/5311#note_590454698
wait_until_database_is_ready do
helper = Gitlab::HashedStorage::RakeHelper
helper.relation_summary('attachments using Legacy Storage', helper.legacy_attachments_relation)
end
end
desc 'Gitlab | Storage | List existing project attachments using Legacy Storage'
task list_legacy_attachments: :environment do
helper = Gitlab::HashedStorage::RakeHelper
helper.attachments_list('attachments using Legacy Storage', helper.legacy_attachments_relation)
end
desc 'Gitlab | Storage | Summary of project attachments using Hashed Storage'
task hashed_attachments: :environment do
helper = Gitlab::HashedStorage::RakeHelper
helper.relation_summary('attachments using Hashed Storage', helper.hashed_attachments_relation)
end
desc 'Gitlab | Storage | List existing project attachments using Hashed Storage'
task list_hashed_attachments: :environment do
helper = Gitlab::HashedStorage::RakeHelper
helper.attachments_list('attachments using Hashed Storage', helper.hashed_attachments_relation)
end
def wait_until_database_is_ready
attempts = (ENV['MAX_DATABASE_CONNECTION_CHECKS'] || 1).to_i
inverval = (ENV['MAX_DATABASE_CONNECTION_CHECK_INTERVAL'] || 10).to_f
attempts.to_i.times do
unless ApplicationRecord.database.exists?
puts "Waiting until database is ready before continuing...".color(:yellow)
sleep inverval
end
end
yield
rescue ActiveRecord::ConnectionNotEstablished => ex
puts "Failed to connect to the database...".color(:red)
puts "Error: #{ex}"
exit 1
end
end
end

View File

@ -20626,9 +20626,6 @@ msgstr ""
msgid "For investigating IT service disruptions or outages"
msgstr ""
msgid "For more info, read the documentation."
msgstr ""
msgid "For more information on how the number of active users is calculated, see the %{self_managed_subscriptions_doc_link} documentation."
msgstr ""
@ -35243,9 +35240,6 @@ msgstr ""
msgid "Please create an index before enabling indexing"
msgstr ""
msgid "Please enable and migrate to hashed storage to avoid security issues and ensure data integrity. %{migrate_link}"
msgstr ""
msgid "Please enter a URL for the custom emoji."
msgstr ""
@ -35300,9 +35294,6 @@ msgstr ""
msgid "Please follow the Let's Encrypt troubleshooting instructions to re-obtain your Let's Encrypt certificate: %{docs_url}."
msgstr ""
msgid "Please migrate all existing projects to hashed storage to avoid security issues and ensure data integrity. %{migrate_link}"
msgstr ""
msgid "Please note that this application is not provided by GitLab and you should verify its authenticity before allowing access."
msgstr ""
@ -43086,6 +43077,9 @@ msgstr ""
msgid "SecurityReports|Failed updating vulnerabilities with the following IDs: %{ids}"
msgstr ""
msgid "SecurityReports|Group your vulnerabilities by one of the provided categories. Leave feedback or suggestions in %{feedbackIssueStart}this issue%{feedbackIssueEnd}."
msgstr ""
msgid "SecurityReports|Has issue"
msgstr ""
@ -43143,6 +43137,9 @@ msgstr ""
msgid "SecurityReports|More info"
msgstr ""
msgid "SecurityReports|New feature: Grouping"
msgstr ""
msgid "SecurityReports|No longer detected"
msgstr ""

View File

@ -193,7 +193,7 @@
"remark-rehype": "^10.1.0",
"scrollparent": "^2.0.1",
"semver": "^7.3.4",
"sentrybrowser": "npm:@sentry/browser@7.66.0",
"sentrybrowser": "npm:@sentry/browser@7.73.0",
"sentrybrowser5": "npm:@sentry/browser@5.30.0",
"sortablejs": "^1.10.2",
"string-hash": "1.1.3",

View File

@ -307,8 +307,12 @@ RSpec.describe ObjectStoreSettings, feature_category: :shared do
it 'returns a list of enabled endpoint URIs' do
stub_config(
artifacts: { enabled: true, object_store: { enabled: true, connection: { endpoint: 'http://example1.com' } } },
external_diffs: {
enabled: true, object_store: { enabled: true, connection: { endpoint: 'http://example1.com' } }
},
lfs: { enabled: false, object_store: { enabled: true, connection: { endpoint: 'http://example2.com' } } },
uploads: { enabled: true, object_store: { enabled: false, connection: { endpoint: 'http://example3.com' } } },
packages: { enabled: true, object_store: { enabled: true, connection: { provider: 'AWS' } } },
pages: { enabled: true, object_store: { enabled: true, connection: { endpoint: 'http://example4.com' } } }
)

View File

@ -730,6 +730,8 @@ RSpec.describe 'Admin updates settings', feature_category: :shared do
fill_in 'Maximum authenticated web requests per rate limit period per user', with: 700
fill_in 'Authenticated web rate limit period in seconds', with: 800
fill_in "Maximum authenticated requests to project/:id/jobs per minute", with: 1000
fill_in 'Plain-text response to send to clients that hit a rate limit', with: 'Custom message'
click_button 'Save changes'
@ -750,6 +752,7 @@ RSpec.describe 'Admin updates settings', feature_category: :shared do
throttle_authenticated_web_enabled: true,
throttle_authenticated_web_requests_per_period: 700,
throttle_authenticated_web_period_in_seconds: 800,
project_jobs_api_rate_limit: 1000,
rate_limiting_response_text: 'Custom message'
)
end

View File

@ -107,4 +107,10 @@ RSpec.describe 'Service Desk Setting', :js, :clean_gitlab_redis_cache, feature_c
expect(page).to have_pushed_frontend_feature_flags(serviceDeskCustomEmail: true)
end
it 'pushes issue_email_participants feature flag to frontend' do
visit edit_project_path(project)
expect(page).to have_pushed_frontend_feature_flags(issueEmailParticipants: true)
end
end

View File

@ -26,14 +26,11 @@ describe('IssueBoardFilter', () => {
});
};
let fetchUsersSpy;
let fetchLabelsSpy;
beforeEach(() => {
fetchUsersSpy = jest.fn();
fetchLabelsSpy = jest.fn();
issueBoardFilters.mockReturnValue({
fetchUsers: fetchUsersSpy,
fetchLabels: fetchLabelsSpy,
});
});
@ -61,7 +58,7 @@ describe('IssueBoardFilter', () => {
({ isSignedIn }) => {
createComponent({ isSignedIn });
const tokens = mockTokens(fetchLabelsSpy, fetchUsersSpy, isSignedIn);
const tokens = mockTokens(fetchLabelsSpy, isSignedIn);
expect(findBoardsFilteredSearch().props('tokens')).toEqual(orderBy(tokens, ['title']));
},

View File

@ -827,7 +827,7 @@ export const mockConfidentialToken = {
],
};
export const mockTokens = (fetchLabels, fetchUsers, isSignedIn) => [
export const mockTokens = (fetchLabels, isSignedIn) => [
{
icon: 'user',
title: TOKEN_TITLE_ASSIGNEE,
@ -836,7 +836,8 @@ export const mockTokens = (fetchLabels, fetchUsers, isSignedIn) => [
token: UserToken,
dataType: 'user',
unique: true,
fetchUsers,
fullPath: 'gitlab-org',
isProject: false,
preloadedUsers: [],
},
{
@ -848,7 +849,8 @@ export const mockTokens = (fetchLabels, fetchUsers, isSignedIn) => [
token: UserToken,
dataType: 'user',
unique: true,
fetchUsers,
fullPath: 'gitlab-org',
isProject: false,
preloadedUsers: [],
},
{

Some files were not shown because too many files have changed in this diff.