This adds ruby and unicorn instrumentation. This was originally
intended for 11.11, but it was reverted due to performance concerns.
This new commit forgoes the sys-proctable gem, which was causing the
performance issues previously.
Updates the EnvironmentController#metrics_dashboard endpoint
to support a "dashboard" param, which can be used to specify
the filepath of a dashboard configuration from a project
repository. Dashboard configurations are expected to be
stored in .gitlab/dashboards/.
Updates dashboard post-processing steps to exclude custom
metrics, which should only display on the system dashboard.
This updates the monitoring docs to reflect the new ruby and unicorn
metrics, as well as the change to fetch process start time via the proc
table instead of via CLOCK_BOOTTIME.
This adds new metrics for monitoring unicorn. These metrics include
process_cpu_seconds_total, process_start_time_seconds, process_max_fds,
and unicorn_workers.
* Use a simple counter for sampler duration instead of a histogram.
* Use a counter to collect GC time.
* Remove unused objects metric.
* Clean up metric names to match Prometheus conventions.
* Prefix generic GC stats with `gc_stat`.
* Include worker label on memory and file descriptor metrics.
In MR https://gitlab.com/gitlab-org/gitlab-ce/merge_requests/15834 we
removed use of the data produced by the Allocations Gem. However, we
never removed the code that just enables tracking of allocations. In
this commit we remove all remaining traces of this Gem.
There seem to be a lot of cases where the suffix of an action (e.g.
".html") is set to bogus data, such as "*/*" or entire URLs. This can
increase cardinality of our metrics, and isn't very useful for
monitoring and filtering. To work around this, we enforce a whitelist
containing a few suffixes we actually care about. Suffixes not supported
will be grouped under the action without a suffix. This means that a
request to "FooController#bar.jpeg" will be assigned to
"FooController#bar".
Certain controllers (e.g. Doorkeeper::TokensController) don't expose the
method "request_format". This commit changes
Gitlab::Metrics::WebTransaction so we don't rely on this method, instead
using the underlying code this method uses.
Fixes https://gitlab.com/gitlab-org/gitlab-ce/issues/46412
The method "content_type" on a controller does not always return the
correct content type. On the other hand, the method "request_format"
does _and_ immediately returns a Symbol (e.g. :json) instead of a
mime-type name (e.g. application/json). With these changes metrics
should again report their action names correctly.
Fixes https://gitlab.com/gitlab-com/infrastructure/issues/3499
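A minimal sketch of the idea, assuming a Rack env and Rails'
ActionDispatch::Request (not the exact GitLab code):

def action_suffix(env)
  request = ActionDispatch::Request.new(env)
  format  = request.format # a Mime::Type negotiated from the request itself

  format && format.ref     # e.g. :json, or nil when the type is unknown
end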
The Sidekiq exporter logs were mixing with the normal Sidekiq logs. In order
to support structured logging in Sidekiq, we either need to split this data
out or convert the exporter to produce structured logs. Since Sidekiq
job processing is fundamentally different from web server traffic, it
seems cleaner to move the metrics traffic into a separate file, where it
can be parsed by a different filter if needed.
Relates to #20060
Reduce cardinality of some of GitLab's Prometheus metrics and fix observed duration reporting.
Closes #41045
See merge request gitlab-org/gitlab-ce!15881
Using compare-and-swap, we update the expires_at value. The thread that
actually manages to update this value will also set the cache holding
the method_call enabled state.
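A minimal sketch of the pattern using concurrent-ruby's AtomicReference
(class and method names are illustrative, not the actual GitLab code):

require 'concurrent'

class CachedFlag
  def initialize(ttl: 30)
    @ttl = ttl
    @expires_at = Concurrent::AtomicReference.new(0)
    @enabled = Concurrent::AtomicReference.new(false)
  end

  def enabled?
    now = Time.now.to_i
    deadline = @expires_at.value

    # Many threads may see the expired deadline, but only the one that wins
    # the compare-and-swap recomputes the cached state.
    if now >= deadline && @expires_at.compare_and_set(deadline, now + @ttl)
      @enabled.set(expensive_lookup)
    end

    @enabled.value
  end

  private

  def expensive_lookup
    true # e.g. a feature-flag or database check
  end
end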
On GitLab.com, InfluxSampler#sample_objects appears to take 1.2 s or so to
iterate through 1059 objects. This had led to delays of a couple hundred
milliseconds in processing in the main thread. Remove this code since it's not
really being used.
Closes gitlab-com/infrastructure#3250
* Follow Prometheus naming conventions[0].
* Simplify metrics by adding response labels to the histogram.
* Use standard `http_request_duration_seconds_...` names for the histogram.
[0]: https://prometheus.io/docs/practices/naming/#metric-names
`endpoint.route` is calling `env[Grape::Env::GRAPE_ROUTING_ARGS][:route_info]`
but `env[Grape::Env::GRAPE_ROUTING_ARGS]` is `nil` in the case of a 405 response
Signed-off-by: Rémy Coutable <remy@rymai.me>
GitLab Performance Monitoring is now able to track custom events not
directly related to application performance. These events include the
number of tags pushed, repositories created, builds registered, etc.
These events give a better overview of how a GitLab instance is used
and how that may affect performance. For example, a
large number of Git pushes may have a negative impact on the underlying
storage engine.
Events are stored in the "events" measurement and are not prefixed with
"rails_" or "sidekiq_", this makes it easier to query events with the
same name triggered from different parts of the application. All events
being stored in the same measurement also makes it easier to downsample
data.
Currently the following events are tracked:
* Creating repositories
* Removing repositories
* Changing the default branch of a repository
* Pushing a new tag
* Removing an existing tag
* Pushing a commit (along with the branch being pushed to)
* Pushing a new branch
* Removing an existing branch
* Importing a repository (along with the URL we're importing)
* Forking a repository (along with the source/target path)
* CI builds registered (and when no build could be found)
* CI builds being updated
* Rails and Sidekiq exceptions
Fixes gitlab-org/gitlab-ce#13720
This reduces the overhead of the method instrumentation code primarily
by reducing the number of method calls. There are also some other small
optimisations such as not casting timing values to Floats (there's no
particular need for this), using Symbols for method call metric names,
and reducing the number of Hash lookups for instrumented methods.
The exact impact depends on the code being executed. For example, for a
method that's only called once the difference won't be very noticeable.
However, for methods that are called many times the difference can be
more significant.
For example, the loading time of a large commit
(nrclark/dummy_project@81ebdea5df)
was reduced from around 19 seconds to around 15 seconds using these
changes.
Process.clock_gettime allows getting the real time in nanoseconds as
well as a monotonic timestamp. This offers greater accuracy without the
overhead of having to allocate a Time instance. In general, using
Time.now/Time.new is about 2x slower than using Process.clock_gettime().
For example:
require 'benchmark/ips'

Benchmark.ips do |bench|
  bench.report 'Time.now' do
    Time.now.to_f
  end

  bench.report 'clock_gettime' do
    Process.clock_gettime(Process::CLOCK_MONOTONIC, :millisecond)
  end

  bench.compare!
end
Running this benchmark gives:
Calculating -------------------------------------
            Time.now   108.052k i/100ms
       clock_gettime   125.984k i/100ms
-------------------------------------------------
            Time.now      2.343M (± 7.1%) i/s -     11.670M
       clock_gettime      4.979M (± 0.8%) i/s -     24.945M

Comparison:
       clock_gettime:  4979393.8 i/s
            Time.now:  2342986.8 i/s - 2.13x slower
Another benefit of using Process.clock_gettime() is that we can simplify
the code a bit since it can give timestamps in nanoseconds out of the
box.
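For example, a monotonic timestamp directly in nanoseconds, without
allocating a Time object:

Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond)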
Previously we'd create a separate Metric instance for every method call
that would exceed the method call threshold. This is problematic because
it doesn't provide us with information to accurately get the _total_
execution time of a particular method. For example, if the method
"Foo#bar" was called 4 times with a runtime of ~10 milliseconds we'd end
up with 4 different Metric instances. If we were to then get the
average/95th percentile/etc of the timings this would be roughly 10
milliseconds. However, the _actual_ total time spent in this method
would be around 40 milliseconds.
To solve this problem we now create a single Metric instance per method.
This Metric instance contains the _total_ real/CPU time and the call
count for every instrumented method.
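Roughly, the accumulating structure looks like this (class and attribute
names are illustrative):

class MethodCallStats
  attr_reader :real_time, :cpu_time, :call_count

  def initialize(name)
    @name = name
    @real_time = 0.0
    @cpu_time = 0.0
    @call_count = 0
  end

  def measure
    real_start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    cpu_start  = Process.clock_gettime(Process::CLOCK_PROCESS_CPUTIME_ID)

    yield
  ensure
    @real_time  += Process.clock_gettime(Process::CLOCK_MONOTONIC) - real_start
    @cpu_time   += Process.clock_gettime(Process::CLOCK_PROCESS_CPUTIME_ID) - cpu_start
    @call_count += 1
  end
end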
We can't do a lot with classes without names as we can't filter by them,
have no idea where they come from, etc. As such it's best to just ignore
these.
By default, instrumentation will instrument public, protected, and
private methods, because heavy work is often done in private methods,
or at least that's what the data shows.
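A small sketch of gathering a module's own methods regardless of
visibility (passing false excludes anything inherited from ancestors):

def own_methods(mod)
  mod.public_instance_methods(false) +
    mod.protected_instance_methods(false) +
    mod.private_instance_methods(false)
end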
Generating the following tags
Grape#GET /projects/:id/archive
from Grape::Route objects like

  { :path => "/:version/projects/:id/archive(.:format)",
    :version => "v3",
    :method => "GET" }
Use an instance variable to cache raw_path transformations. This
variable will only grow to the number of endpoints of the API, not with
each distinct request. We can store this cache as an instance variable
because the middleware is initialised only once.
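A rough sketch of the caching (assuming the older Grape::Route accessors
route_method/route_path; the class name is illustrative):

class GrapeMetricsMiddleware
  def initialize(app)
    @app = app
    @endpoint_tags = {}
  end

  def tag_for(route)
    raw_path = route.route_path # e.g. "/:version/projects/:id/archive(.:format)"

    @endpoint_tags[raw_path] ||= begin
      path = raw_path.sub(':version/', '').sub('(.:format)', '')
      "Grape##{route.route_method} #{path}"
    end
  end
end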
Because method call timings are inclusive (that is, they include the
time of any sub method calls) this would lead to the total method
execution time often being far greater than the total transaction time.
Because this is incredibly confusing it's best to simply _not_ track the
total method execution time; after all, it's not that useful to begin
with.
Fixes gitlab-org/gitlab-ce#17239
By using Module#prepend we can define a Module containing all proxy
methods. This removes the need for setting up crazy method alias chains
and in turn prevents us from having to deal with all that madness (e.g.
methods calling each other recursively).
Fixes gitlab-org/gitlab-ce#15281
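A minimal sketch of the prepend-based approach (module and helper names
are illustrative; the real code records the timing in the current
transaction rather than printing it):

module Instrumentation
  def self.instrument_instance_method(klass, name)
    proxy = Module.new do
      define_method(name) do |*args, &block|
        start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
        begin
          super(*args, &block)
        ensure
          elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
          warn "#{klass}##{name} took #{elapsed}s" # placeholder for recording
        end
      end
    end

    klass.prepend(proxy)
  end
end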
This ensures that an instrumented method that doesn't take arguments
reports an arity of 0, instead of -1.
If Ruby had a proper method for finding out the required arguments of a
method (e.g. Method#required_arguments) this would not have been an
issue. Sadly the only two methods we have are Method#parameters and
Method#arity, and both are equally painful to use.
Fixes gitlab-org/gitlab-ce#12450
Sampling data at a fixed interval means we can potentially miss data
from events occurring between sampling intervals. For example, say we
sample data every 15 seconds but Unicorn workers get killed after 10
seconds. In this particular case it's possible to miss interesting data
as the sampler will never get to actually submitting data.
To work around this (at least for the most part) the sampling interval
is randomized as follows:
1. Take the user specified sampling interval (15 seconds by default)
2. Divide it by 2 (referred to as "half" below)
3. Generate a range (using a step of 0.1) from -"half" to "half"
4. Every time the sampler goes to sleep we'll grab the user provided
interval and add a randomly chosen "adjustment" to it while making
sure we don't pick the same value twice in a row.
For a specified interval of 15 seconds this means the actual intervals
can be anywhere between 7.5 and 22.5 seconds, but the same interval is
never used twice in a row.
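In Ruby terms, the interval selection is roughly (names illustrative):

class SamplerInterval
  def initialize(interval = 15)
    @interval = interval
    half = interval / 2.0
    @adjustments = (-half).step(half, 0.1).to_a
    @last = nil
  end

  # Base interval plus a random adjustment, never repeating the previous one.
  def next_sleep
    adjustment = @adjustments.sample
    adjustment = @adjustments.sample while adjustment == @last
    @last = adjustment

    @interval + adjustment
  end
end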
The rationale behind this change is that on dev.gitlab.org I'm sometimes
seeing certain Gitlab::Git/Rugged objects being retained, but only for a
few minutes every 24 hours. Knowing GitLab's code and how much memory
it uses/leaks, I suspect we're missing data due to workers being
terminated before the sampler can write its data to InfluxDB.
Where a view is called from doesn't matter as much. We already know what
action they belong to and this is more than enough information. By
removing the file/line number from the list of tags we should also be
able to reduce the number of series stored in InfluxDB.
This gives a very rough estimate of how much memory is allocated during
a transaction. This only works reliably when using a single-threaded
application server and a Ruby implementation with a GIL as otherwise
memory allocated by other threads might skew the statistics. Sadly
there's no way around this as Ruby doesn't provide a reliable way of
gathering accurate object sizes upon allocation on a per-thread basis.
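Under those caveats, a very rough sketch of the idea (Linux-only /proc
parsing; the transaction hook is hypothetical):

def rss_bytes
  File.read('/proc/self/status')[/^VmRSS:\s+(\d+)\s+kB/, 1].to_i * 1024
end

before = rss_bytes
run_transaction # whatever work the request/job performs
allocated_estimate = rss_bytes - before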
Without this it's impossible to find out what methods/views/queries are
executed by a certain controller or Sidekiq worker. While this will
increase the total number of series it should stay within reasonable
limits due to the amount of "actions" being small enough.
Since filtering by these values is very rare (they're mostly just
displayed as-is) we don't need to waste any index space by saving them
as tags. By storing them as values we also greatly reduce the number of
series in InfluxDB.
This reverts commit 7549102bb7.
Apparently I was wrong about
ActiveSupport::Notifications::Event#duration returning the duration in
seconds, instead it returns it in milliseconds already.
This will be used to store/increment the total query/view rendering
timings on a per transaction basis. This in turn can greatly reduce the
amount of metrics stored.
This particular setup had 3 problems:
1. Storing SQL queries as tags is very inefficient as InfluxDB ends up
indexing every query (and they can get pretty large). Storing these
as values instead means we can't always display the SQL as easily.
2. We already instrument ActiveRecord query methods, thus we already
have timing information about database queries.
3. SQL obfuscation is difficult to get right and I'd rather not expose
sensitive data by accident.
While it's useful to keep track of the different versions (Ruby, GitLab,
etc) doing so for every point wastes disk space and possibly also RAM
(which InfluxDB is all too eager to gobble up). If we want to see the
performance differences between different GitLab versions simply looking
at the performance since the last release date should suffice.
This removes the need for Sidekiq and any overhead/problems introduced
by TCP. There are a few things to take into account:
1. When writing data to InfluxDB you may still get an error if the
server becomes unavailable during the write. Because of this we're
catching all exceptions and just ignoring them (for now).
2. Writing via UDP apparently requires the timestamp to be in
nanoseconds. Without this, data isn't written properly.
3. Due to the restrictions on UDP buffer sizes we're writing metrics one
by one, instead of writing all of them at once.
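A simplified sketch of the write path (host, port, and the line-protocol
string are illustrative):

require 'socket'

def write_metric(line, host: 'localhost', port: 8089)
  socket = UDPSocket.new
  socket.send(line, 0, host, port)
rescue StandardError
  # An unreachable InfluxDB should never take the application down.
ensure
  socket.close if socket
end

timestamp_ns = (Time.now.to_f * 1_000_000_000).to_i
write_metric("rails_transactions,action=UsersController#show duration=12.5 #{timestamp_ns}")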
This ensures we don't end up wasting resources by tracking method calls
that only take a few microseconds. By default the threshold is 10
milliseconds but this can be changed using the gitlab.yml configuration
file.
InfluxDB escapes double quotes upon output which makes it a pain to deal
with. This ensures that if we're using PostgreSQL we don't store any
queries containing double quotes in InfluxDB, solving the escaping
problem.
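The check itself is tiny (the method names here are illustrative):

def track_query(sql)
  return if postgresql? && sql.include?('"')

  # ... store the query as usual ...
end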
When using instrument_methods/instrument_instance_methods we only want
to instrument methods defined directly in a class, not those included
via mixins (e.g. whatever RSpec throws in during development).
In case an externally included method _has_ to be instrumented we can
still use the regular instrument_method/instrument_instance_method
methods.
The methods Instrumentation.instrument_methods and
Instrumentation.instrument_instance_methods can be used to instrument
all methods of a module at once.
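A sketch of how the bulk helper (defined on the Instrumentation module)
can skip methods defined elsewhere: UnboundMethod#owner tells us where a
method was actually defined, and the single-method helper is the one
mentioned above.

def self.instrument_instance_methods(mod)
  mod.instance_methods(false).each do |name|
    next unless mod.instance_method(name).owner == mod

    instrument_instance_method(mod, name)
  end
end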
The use of ActiveSupport would slow down instrumented method calls by
about 180x due to:
1. ActiveSupport itself not being the fastest thing on the planet
2. caller_locations() having quite some overhead
The use of caller_locations() has been removed because it's not _that_
useful since we already know the full namespace of receivers and the
names of the called methods.
The use of ActiveSupport has been replaced with some custom code that's
generated using eval() (which can be quite a bit faster than using
define_method).
This new setup results in instrumented methods only being about 35-40x
slower (compared to non-instrumented methods).
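A rough sketch of the string-based generation (alias-based dispatch, as
described in the later prepend commit above; names are illustrative):

module EvalInstrumentation
  def self.instrument_instance_method(mod, name)
    label = "#{mod}##{name}"
    alias_name = "_original_#{name}"

    mod.class_eval <<-RUBY, __FILE__, __LINE__ + 1
      alias_method :#{alias_name}, :#{name}

      def #{name}(*args, &block)
        start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
        #{alias_name}(*args, &block)
      ensure
        elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
        warn "#{label} took \#{elapsed}s" # placeholder for recording the timing
      end
    RUBY
  end
end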
This adds the ability to write application metrics (e.g. SQL timings) to
InfluxDB. These metrics can in turn be visualized using Grafana, or
really anything else that can read from InfluxDB. These metrics can be
used to track application performance over time, between different Ruby
versions, different GitLab versions, etc.
== Transaction Metrics
Currently the following is tracked on a per transaction basis (a
transaction is a Rails request or a single Sidekiq job):
* Timings per query along with the raw (obfuscated) SQL and information
about what file the query originated from.
* Timings per view along with the path of the view and information about
what file triggered the rendering process.
* The duration of a request itself along with the controller/worker
class and method name.
* The duration of any instrumented method calls (more below).
== Sampled Metrics
Certain metrics can't be directly associated with a transaction. For
example, a process' total memory usage is unrelated to any running
transactions. While a transaction can result in the memory usage going
up, there's no accurate way to determine which transaction is to blame;
this becomes especially problematic in multi-threaded environments.
To solve this problem there's a separate thread that takes samples at a
fixed interval. This thread (using the class Gitlab::Metrics::Sampler)
currently tracks the following:
* The process' total memory usage.
* The number of file descriptors opened by the process.
* The amount of Ruby objects (using ObjectSpace.count_objects).
* GC statistics such as timings, heap slots, etc.
The default/current interval is 15 seconds; any smaller interval might
put too much pressure on InfluxDB (especially when running dozens of
processes).
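Schematically, the sampler is just a background thread (class and method
bodies here are illustrative):

class Sampler
  def initialize(interval = 15)
    @interval = interval
  end

  def start
    Thread.new do
      loop do
        sample
        sleep(@interval)
      end
    end
  end

  def sample
    # e.g. memory usage, open file descriptors, ObjectSpace.count_objects,
    # GC.stat, ... submitted as one set of points per run.
  end
end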
== Method Instrumentation
While currently not yet used, methods can be instrumented to track how
long they take to run. Unlike the likes of New Relic this doesn't
require modifying the source code (e.g. including modules), it all
happens from the outside. For example, to track `User.by_login` we'd add
the following code somewhere in an initializer:
Gitlab::Metrics::Instrumentation.
  instrument_method(User, :by_login)

Or, to instrument an instance method instead:

Gitlab::Metrics::Instrumentation.
  instrument_instance_method(User, :save)
Instrumentation for either all public model methods or a few crucial
ones will be added in the near future; I simply haven't gotten to doing
so just yet.
== Configuration
By default metrics are disabled. This means users don't have to bother
setting anything up if they don't want to. Metrics can be enabled by
editing one's gitlab.yml configuration file (see
config/gitlab.yml.example for example settings).
== Writing Data To InfluxDB
Because InfluxDB is still a fairly young product I expect the worst.
Data loss, unexpected reboots, the database not responding, you name it.
Because of this, data is _not_ written to InfluxDB directly; instead it's
queued and processed by Sidekiq. This ensures that users won't notice
anything when InfluxDB is giving trouble.
The metrics worker can be started in a standalone manner as follows:
bundle exec sidekiq -q metrics
The corresponding class is called MetricsWorker.
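Schematically the worker looks like this (the influxdb_client helper and
its write_points call are illustrative; swallowing errors matches the
"expect the worst" stance above):

class MetricsWorker
  include Sidekiq::Worker

  sidekiq_options queue: :metrics

  def perform(metrics)
    influxdb_client.write_points(metrics)
  rescue StandardError
    # A flaky InfluxDB should never break job processing.
  end
end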