Previously, we called the `peek_enabled?` method like so:
`prepend_before_action :set_peek_request_id, if: :peek_enabled?`
Now we don't have a `set_peek_request_id` method, so we don't need that
line. However, the `peek_enabled?` part had a side-effect: it also
populated the request store cache recording whether the performance bar
was enabled for the current request.
This commit makes that side-effect explicit, and replaces all uses of
`peek_enabled?` with the more explicit
`Gitlab::PerformanceBar.enabled_for_request?`. There is one spec that
still sets `SafeRequestStore[:peek_enabled]` directly, because it is
contrasting behaviour with and without a request store enabled.
The upshot is:
1. We still set the value in one place. We make it more explicit that
that's what we're doing.
2. Reading that value uses a consistent method so it's easier to find in
future.
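As a rough sketch of the shape this takes (the setter name below is an
assumption; only `enabled_for_request?` comes from this change):

    module Gitlab
      module PerformanceBar
        # The value is still written in one place, but the write is now an
        # explicit call rather than a hidden side-effect of a predicate.
        def self.set_enabled_for_request(enabled)
          SafeRequestStore[:peek_enabled] = enabled
        end

        # All readers go through this one method, so call sites are easy
        # to grep for.
        def self.enabled_for_request?
          !!SafeRequestStore[:peek_enabled]
        end
      end
    end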
This will help identify Sidekiq jobs that perform an excessive number of
filesystem accesses.
The timing data is stored in `RequestStore`, but this is only active
within the middleware and is not directly accessible to the Sidekiq
logger. However, it is possible for the middleware to modify the job
hash to pass this data along to the logger.
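A sketch of that mechanism, assuming a hypothetical key name for the
timing data (Sidekiq server middleware receives the mutable job hash):

    class FilesystemTimingMiddleware
      def call(worker, job, queue)
        yield
      ensure
        # RequestStore is only alive while the middleware is running, so
        # copy the timings onto the job hash, which the Sidekiq logger can
        # still read after the fact.
        job['file_access_timings'] = RequestStore[:file_access_timings]
      end
    end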
If `GitalyClient#can_use_disk?` returned `false`, the result was never
cached properly, leading to an excessive number of Gitaly calls. Instead
of using `cached_value.present?`, we need to check `cached_value.nil?`.
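A condensed illustration of the fix (method and variable names here are
illustrative): `false` is a legitimate cached answer, so a presence check
misreads it as a cache miss.

    def can_use_disk?
      cached = @can_use_disk
      # Buggy: `false.present?` is false, so a cached `false` re-queries
      # Gitaly on every call.
      #   return cached if cached.present?

      # Fixed: only `nil` means we haven't asked Gitaly yet.
      return cached unless cached.nil?

      @can_use_disk = query_gitaly_for_disk_access
    end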
Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/64802
The metric was used to correlate Gitaly requests to the Rails controller
and action combination. However, Kibana provides better observability for
this specific metric, and can handle high cardinality much better.
There's no dashboard in Grafana that currently depends on this metric
being exposed.
The feature flag was introduced turned off by default; now it will
default to on. Users can still turn this feature off via the Rails
console by running:
`Feature.disable("gitaly_catfile-cache")`
Another option is to manage the number of items the LRU cache will
contain by updating Gitaly's `config.toml`. The relevant setting is
`catfile_cache_size`:
0dcb5c579e/config.toml.example (L27)
Closes: https://gitlab.com/gitlab-org/gitaly/issues/1712
The GitalyClient held a lot of logic that was all very tightly coupled.
In this instance the feature logic was extracted to make GitalyClient do
just a little less and give its responsibilities a bit more focus.
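Roughly, the extraction takes this shape (the extracted class name is an
assumption):

    module Gitlab
      module GitalyClient
        # GitalyClient keeps only a thin delegation; the feature logic now
        # lives in its own, more focused object.
        def self.feature_enabled?(feature_name)
          FeatureSupport.enabled?(feature_name)
        end
      end
    end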
This change is a fairly straightforward refactor to extract the tracing
and correlation-id code from the GitLab Rails codebase into the new
LabKit-Ruby project.
The corresponding import into LabKit-Ruby was in
https://gitlab.com/gitlab-org/labkit-ruby/merge_requests/1
The code itself remains very similar for now.
Extracting it allows us to reuse it in other projects, such as
Gitaly-Ruby. This will give us the advantages of correlation-ids and
distributed tracing in that project too.
This adds the backtrace to a table to show exactly where the Gitaly call
was made, making it easier to understand where the call originated.
This change also collapses the details within the same row to improve
usability when a backtrace is present.
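A sketch of capturing the call site (the surrounding structure is
assumed; `Kernel#caller` is the standard way to grab a Ruby backtrace):

    def record_gitaly_call(service, rpc, duration)
      RequestStore[:gitaly_call_details] ||= []
      RequestStore[:gitaly_call_details] << {
        service: service,
        rpc: rpc,
        duration: duration,
        backtrace: caller # rendered in the collapsible row of the table
      }
    end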
For a given merge request, it's quite common to see duplicate FindCommit
Gitaly requests because the Gitaly CommitService caches the request by
the commit SHA, not by the ref name. However, most of the duplicate
requests use the ref name, so the cache is never actually used in
practice. This leads to unnecessary requests that slow performance.
This commit allows certain callers to bypass the ref-name-to-OID
conversion in the cache. We don't do this by default because it's
possible the tip of the branch changes during the request, which
would cause the caller to get stale data.
This commit also forces the Ci::Pipeline to use the full ref name
so that caching can work for merge requests.
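An illustrative sketch of the opt-in (the keyword argument name is an
assumption): callers that pass a full ref name, such as
`refs/heads/master`, can cache on the ref itself, so duplicate FindCommit
requests hit the cache.

    def find_commit(revision, skip_ref_name_resolution: false)
      key = if skip_ref_name_resolution
              revision                 # cache directly on the full ref name
            else
              resolve_to_oid(revision) # default: cache on the commit SHA
            end

      @commit_cache[key] ||= gitaly_find_commit(revision)
    end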
Closes https://gitlab.com/gitlab-org/gitlab-ce/issues/57083
We typically don't want to enforce request limits in production.
However, we have some production-like test environments, i.e., ones
where `Rails.env.production?` returns `true`. We do want to be able
to check if the limit is being exceeded while testing in those
environments.
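One possible shape for this check, assuming an environment-variable
override (the variable name is illustrative):

    def enforce_request_limits?
      # Outside production we always enforce the limits.
      return true unless Rails.env.production?

      # Production-like test environments can opt in explicitly.
      ENV['GITALY_ENFORCE_REQUEST_LIMITS'] == 'true'
    end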