This started in the context of prometheus/docs#1414, specifically
https://github.com/prometheus/docs/pull/1414#issuecomment-520505757
Rather than labelling all metrics with the same set of labels, we are
introducing 2 new metrics: rabbitmq_build_info & rabbitmq_identity_info.
I suspect that we may want to revert deadtrickster/prometheus.erl#91
when we agree that the proposed alternative is better.
We have yet to work through the changes to the Grafana dashboards. I am
most interested in what the updated queries will look like (a sketch of
one follows the examples below) and, more importantly, whether we will
keep the same panels that we have now. More commits to follow shortly;
I wanted to get this out the door first.
In summary, this commit changes the metrics output from this:
# TYPE erlang_mnesia_held_locks gauge
# HELP erlang_mnesia_held_locks Number of held locks.
erlang_mnesia_held_locks{node="rabbit@920f1e3272af",cluster="rabbit@920f1e3272af",rabbitmq_version="3.8.0-alpha.806",erlang_version="22.0.7"} 0
# TYPE erlang_mnesia_lock_queue gauge
# HELP erlang_mnesia_lock_queue Number of transactions waiting for a lock.
erlang_mnesia_lock_queue{node="rabbit@920f1e3272af",cluster="rabbit@920f1e3272af",rabbitmq_version="3.8.0-alpha.806",erlang_version="22.0.7"} 0
...
To this:
# TYPE erlang_mnesia_held_locks gauge
# HELP erlang_mnesia_held_locks Number of held locks.
erlang_mnesia_held_locks 0
# TYPE erlang_mnesia_lock_queue gauge
# HELP erlang_mnesia_lock_queue Number of transactions waiting for a lock.
erlang_mnesia_lock_queue 0
...
# TYPE rabbitmq_build_info untyped
# HELP rabbitmq_build_info RabbitMQ & Erlang/OTP version info
rabbitmq_build_info{rabbitmq_version="3.8.0-alpha.809",prometheus_plugin_version="3.8.0-alpha.809-2019.08.15",prometheus_client_version="4.4.0",erlang_version="22.0.7"} 1
# TYPE rabbitmq_identity_info untyped
# HELP rabbitmq_identity_info Node & cluster identity info
rabbitmq_identity_info{node="rabbit@bc7aeb0c2564",cluster="rabbit@bc7aeb0c2564"} 1
...
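One way the updated queries could re-attach the identity labels (a
sketch only; it assumes both metrics come from the same scrape target
and therefore share the instance label):
erlang_mnesia_held_locks
* on (instance) group_left (node, cluster)
  rabbitmq_identity_info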
[#167846096]
We want to use a consistent range for all metrics that use rate() and a
safe value (4x the Prometheus scrape interval):
https://www.robustperception.io/what-range-should-i-use-with-rate
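For example, assuming a 15s Prometheus scrape interval (an assumption
on my part), 4x gives a 60s range; the metric name below is just for
illustration:
rate(io_sync_time_microseconds_total[60s])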
This also prompted a change in RabbitMQ's default
collect_statistics_interval, so that we don't update metrics
unnecessarily. We are OK if the Management UI doesn't update on every 5s
auto-refresh.
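A sketch of the corresponding rabbitmq.conf change; the 10s value is an
assumption, not necessarily what we will settle on:
# rabbitmq.conf (sketch, value assumed)
collect_statistics_interval = 10000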
Related: a929f22233
[#167846096]
Started as a Prometheus docs discussion in prometheus/docs#1414, mostly
based on https://prometheus.io/docs/instrumenting/writing_exporters/
Raft metrics are of type gauge, not counter. _If you care about the
absolute value rather than only how fast it's increasing, that's a
gauge._
All node_persister_metrics are now counters - some were gauges before.
They are now named using metric naming best practices:
https://prometheus.io/docs/practices/naming/
All metric names that should have units now do. Some use microseconds,
others milliseconds, and others bytes or ops (operations). We don't do
any unit conversion in the collector; we simply expose the units that
are used when the metric value is written to ETS.
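To illustrate the naming pattern (the HELP text and value below are
illustrative, not actual output):
# TYPE io_sync_time_microseconds_total counter
# HELP io_sync_time_microseconds_total Total time spent syncing to disk, in microseconds
io_sync_time_microseconds_total 0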
While some metrics, such as io_sync_time_microseconds_total, would be
better expressed as Summaries, the refactoring required to achieve that
is not worth the effort. We will keep things simple & imperfect for
now, especially since we don't have a dashboard that helps visualise
these metrics.
The next step is to address global labels - will submit as a separate
PR.
[#167846096]
Now that there is a 3.8 alpha build that includes
rabbitmq/rabbitmq-server#2075, let's make use of it!
Without this, when a new cluster was started, some nodes ended up with
`rabbit@localhost` for the cluster label, instead of e.g. `rmq-gcp-38`.
The main suspect was a race condition, where the rabbitmq_prometheus app
starts before the cluster name is set via `rabbitmqctl
set_cluster_name`.
[finishes #167835770]
It's hard to understand what the different colours mean otherwise. Also,
yellow is preferable to purple when it comes to displaying runnable
processes - those stuck in the run queue.
cc @michaelklishin
It explains the correlation between inet packets & TCP packets, and why
the inet packet size varies when TLS is used for inter-node
communication.
[finishes #166419953]
It makes a big difference for stable throughput. See screenshots from
https://bugs.erlang.org/browse/ERL-959
We need to test this in a real network (I'm thinking GCP), outside of
Docker. The results will inform whether we should change the default,
which is 1436 bytes.
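A sketch of how the buffer could be raised for such a test, via the
kernel application's distribution socket options; the 128KiB value is a
guess for the experiment, not a recommendation:
%% advanced.config (sketch, values assumed)
[
  {kernel, [
    {inet_dist_listen_options,  [{buffer, 131072}]},
    {inet_dist_connect_options, [{buffer, 131072}]}
  ]}
].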
[#166419953]
Add cadvisor, node-exporter & Docker metrics.
Inspired by https://github.com/stefanprodan/dockprom
There are no Grafana dashboards for these metrics yet. The dockprom ones
don't show any panels in Grafana 6.
[#165818813]
Even though this slows down Grafana container startup, we need to ensure
that this plugin is present, otherwise the panels that track process
state won't work. This will be slow the first time the plugin is
downloaded, and slightly faster on subsequent runs.
[#166004512]
* pin nodes to specific colours
* add message-related single-stats
* reshuffle rows:
  * node metrics are most useful
  * queue, channel & connection churn are least useful
Includes Erlang node-to-colour pinning
Adds a few make targets to help with repetitive docker-compose
commands & Grafana dashboard updates.
Split Overview & Distribution Docker deployments
re deadtrickster/prometheus.erl#92
[finishes #166004512]
We (+@essen) have answered a bunch of questions (see the story) and
improved the metrics + dashboard in the process. Added some improvements
to the RabbitMQ Overview metrics as well.
[#166004104]
This puts load on the distribution and makes the Erlang-Distribution
dashboard show an interesting behaviour in TCP sockets. @dcorbacho
thinks so too.
re deadtrickster/prometheus.erl#92
[#166004512]
Use 1m instead of $__interval for rates that track metrics with a slow
rate of change; using $__interval will miss changes.
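For example (metric name for illustration only):
Before: rate(rabbitmq_channels_opened_total[$__interval])
After:  rate(rabbitmq_channels_opened_total[1m])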
Stop rounding; it skews values.
All `basic.get` metrics are bad. The 0 threshold and the red colour
for all lines will hopefully be enough to convey this.
re rabbitmq/rabbitmq-perf-test#203
[finishes #165852775]
Otherwise it's really hard to know what we are looking at when expanding
panels.
Also, pin nodes to colours; otherwise rabbit@rabbitmq1 metrics will
appear yellow in one panel and green in another. This is a one-off
that doesn't scale and should be automated in some way, but Grafana
doesn't support pinning colours to labels 🤔
This includes the global_labels feature introduced in deadtrickster/prometheus.erl#91
To test, run `docker-compose up` in docker dir, then navigate to
localhost:15692/metrics & localhost:3000/dashboards (admin:admin) to see
the Grafana RabbitMQ Overview dashboard.
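In other words, assuming the compose file lives in the docker dir as
described above:
cd docker
docker-compose up
# http://localhost:15692/metrics    <- raw Prometheus metrics
# http://localhost:3000/dashboards  <- Grafana (admin:admin)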
Add nodes, alarms & partitions to the global counts. These are too
important not to show. We need to discuss how to expose them via
metrics.
[#164374397]
Set the memory high watermark to 256MiB to force-trigger the memory
alarm, as well as to ensure messages get paged to disk (forcing disk
reads).
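A sketch of the watermark setting, assuming the rabbitmq.conf format:
# rabbitmq.conf (sketch)
vm_memory_high_watermark.absolute = 256MiB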
Make all legends display as tables so that values are easier to see
when toggling them.
Capture limits in thresholds. Even if they are static and somewhat
specific to this RabbitMQ deployment, it's better to have them when
demo-ing the end-to-end Prometheus/Grafana experience.
[#164374751]
This lights up the `Published confirmed / s` Grafana panel.
To light up `Published unroutable / s`, unbind all queues from the
direct exchange.
[#164374751]
This adds support for disabling the metrics_collector, as captured in
rabbitmq/rabbitmq-management-agent#78 & rabbitmq/rabbitmq-management#691.
Since we want Management to be enabled, this doesn't help our use case,
but the option is perfect for users who want metrics without paying the
overhead of Management, especially metric aggregation.
[#164376052]
After running `docker-compose up`, open Grafana via
http://localhost:3000 and log in with user admin & password admin. You
will see a RabbitMQ Overview dashboard pre-loaded (/・0・)
Thanks @cirocosta! https://github.com/cirocosta/sample-grafana
cc @MarcialRosales
[finishes #164374321]