It activates an extra graph on the RabbitMQ-Overview dashboard and
let's be honest - why use Quorum Queues if the workload didn't care
whether the broker received the message? They go together, seriously!
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
Since metrics are now aggregated by default, it made more sense to
invert the meaning of disabling aggregation and call it a positive,
explicit action: return_per_object_metrics.
Naming pair: @michaelklishin
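For reference, a minimal sketch of what this could look like in
rabbitmq.conf (assuming the plugin's settings live under the
prometheus.* prefix):
    # per-object metrics are opt-in; aggregation stays the default
    prometheus.return_per_object_metrics = true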
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
This is a follow-up to https://github.com/rabbitmq/ra/pull/160
Had to introduce mf_convert/3 so that the METRICS_REQUIRING_CONVERSIONS
proplist does not clash with METRICS_RAW proplists that have the same
number of elements. This is begging to be refactored, but I know that
@dcorbacho is working on https://github.com/rabbitmq/rabbitmq-prometheus/issues/26
Also modified the RabbitMQ-Quorum-Queues-Raft dashboard
Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
Grafana will keep failing with the following error message otherwise:
failed to load dashboard from /dashboards/__inputs.json Dashboard title cannot be empty
It still puts a significant load on the host, but any lower and we won't
see any change in the Uncommitted log entries graph, and too little
variation in the Log entry commit latency graph.
Well, almost. flat-statusmap-panel v0.1.1 breaks on Grafana v6.5.0.
Since it's already been mentioned in
https://github.com/flant/grafana-statusmap/issues/76 for a different
reason, let's wait until this is addressed.
It captures the Raft implementation behind Quorum Queues, so let's be specific, especially
since we know that there will be other Raft implementations in RabbitMQ,
not just Quorum Queues.
[#166926415]
It is essential to know which RabbitMQ & Erlang/OTP versions the cluster
is running, as well as how many nodes there are in the cluster. We now
have a table which lists this information, right under all singlestat
panels.
The singlestat panels have been re-organized to make room for 2 new
ones: Nodes & Publishers. Classic & Quorum Queues would be great to
have, as would VHosts. The last singlestats that I would add are Alarms
& Partitions. This would bring the total number of singlestat panels to
14 (we currently have 10). While 14 feels overwhelming, it captures all
the important information that I believe is worth knowing about any
RabbitMQ cluster.
All message-related sections now display 2 graph panels instead of 3.
While 3 panels look good on 27" screens, they don't work as well on 15"
screens, which is what the majority will be using. Also, the 3rd panel
would always be for anti-pattern graphs (e.g. unroutable messages,
polling operations) and would be empty in the majority of cases.
Fitting fewer panels per row not only helps with focusing on and
understanding what is being displayed, it also makes it easier to
compare 2 panels side-by-side on 27" screens. The Nodes & churn
sections still have 3 panels, which works well when 1 panel is more
important than the others. The compromise that we need to make is
between giving enough horizontal space to equally important panels vs
making the dashboard page too long. RabbitMQ-Overview has always been a
comprehensive dashboard which captures a lot of information; it has
always been tough balancing the important vs the complete.
[finishes #167836027]
9.313226 GiB is a lot harder to read than 9.31 GiB, and therefore less
useful. Observing other people use this made it obvious that limiting
the precision was the human-friendly thing to do.
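For illustration, in the panel JSON this is just a matter of setting the
decimals field next to the unit (a hypothetical fragment, using field
names as they appear in Grafana panel JSON):
    "format": "bytes",
    "decimals": 2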
* explains source of metrics via row names
* makes tables slightly wider to mitigate line wrapping of long names
* does not limit entries in tables, since refresh resets table pagination
[finishes #168734621]
The yardstick for all Grafana dashboards should be 1920 x 1200, the
screen format most common in our team. If the dashboards look good on
our screens, they will look good on other screens too. Smaller
resolutions won't look too crammed, and bigger resolutions can be split
in half (e.g. 27" iMacs).
Some take-aways from optimising the layout of this dashboard:
* limit horizontal graph panels to 3
* limit horizontal panels to 2 if the information is dense (e.g. table + graph)
* use the same width for graph panels that need comparing, stack vertically
While __inputs are required for the dashboards to work in environments
where Prometheus is not the default datasource, they break the local
development flow. In other words,
9aa22e1895
prevents `make metrics overview` from working as designed.
We are shortly going to add a simple way of converting the local
dashboards into a format that can be imported into Grafana and will
work when Prometheus is not the default datasource (e.g. when using
https://github.com/coreos/kube-prometheus).
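Until then, a rough sketch of the kind of conversion we have in mind,
assuming the exported dashboards reference the datasource via the
${DS_PROMETHEUS} placeholder (file paths hypothetical):
    # hard-code the Prometheus datasource & drop the __inputs section
    sed 's/\${DS_PROMETHEUS}/Prometheus/g' dashboards/rabbitmq-overview.json \
      | jq 'del(.__inputs)' > import/rabbitmq-overview.json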
Long-term, these dashboards will be available via grafana.com, which is
the preferred way of consuming them.
cc @mkuratczyk
As described in
https://prometheus.io/docs/instrumenting/writing_clientlibs/#process-metrics.
Until prometheus.erl has the prometheus_process_collector functionality
built in (this may not happen), we are exposing a subset of those
metrics via rabbitmq_core_metrics_collector, so we are going to stick to
the expected naming conventions.
This commit supersedes the thought process captured in
1e5f4de4cb
[#167846096]
While `process_open_fds` would have been the ideal name, the value is
cached within RabbitMQ and computed differently across platforms, so it
is important to keep the distinction from, say, what the kernel reports
just-in-time.
I am also capturing the Erlang context by adding `erlang_` to the
relevant metrics. The full context is: RabbitMQ observed this Erlang VM
process metric to be X, which is why some metrics are prefixed with
`rabbitmq_erlang_process_`.
This matters because there is a difference between what RabbitMQ limits are set to,
e.g. `rabbitmq_memory_used_limit_bytes`, vs. what RabbitMQ reports about
the Erlang process, e.g. `rabbitmq_erlang_process_memory_used_bytes`.
This is the best that we can do while staying honest about what is being
reported. cc @brian-brazil
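To make the distinction concrete, this is roughly how the two would
appear side by side on the metrics endpoint (values & HELP strings
illustrative):
# TYPE rabbitmq_memory_used_limit_bytes gauge
# HELP rabbitmq_memory_used_limit_bytes Memory limit configured in RabbitMQ
rabbitmq_memory_used_limit_bytes 6595218636
# TYPE rabbitmq_erlang_process_memory_used_bytes gauge
# HELP rabbitmq_erlang_process_memory_used_bytes Memory used by the Erlang VM process, as observed by RabbitMQ
rabbitmq_erlang_process_memory_used_bytes 97624064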
[#167846096]
This started in the context of prometheus/docs#1414, specifically
https://github.com/prometheus/docs/pull/1414#issuecomment-520505757
Rather than attaching the same set of labels to every metric, we are
introducing 2 new metrics: rabbitmq_build_info & rabbitmq_identity_info.
I suspect that we may want to revert deadtrickster/prometheus.erl#91
when we agree that the proposed alternative is better.
We have yet to follow through with changes to the Grafana dashboards. I
am most interested in what the updated queries will look like and, more
importantly, whether we will keep the same panels as we have now. More
commits to follow shortly, wanted to get this out the door first.
In summary, this commit changes this:
# TYPE erlang_mnesia_held_locks gauge
# HELP erlang_mnesia_held_locks Number of held locks.
erlang_mnesia_held_locks{node="rabbit@920f1e3272af",cluster="rabbit@920f1e3272af",rabbitmq_version="3.8.0-alpha.806",erlang_version="22.0.7"} 0
# TYPE erlang_mnesia_lock_queue gauge
# HELP erlang_mnesia_lock_queue Number of transactions waiting for a lock.
erlang_mnesia_lock_queue{node="rabbit@920f1e3272af",cluster="rabbit@920f1e3272af",rabbitmq_version="3.8.0-alpha.806",erlang_version="22.0.7"} 0
...
To this:
# TYPE erlang_mnesia_held_locks gauge
# HELP erlang_mnesia_held_locks Number of held locks.
erlang_mnesia_held_locks 0
# TYPE erlang_mnesia_lock_queue gauge
# HELP erlang_mnesia_lock_queue Number of transactions waiting for a lock.
erlang_mnesia_lock_queue 0
...
# TYPE rabbitmq_build_info untyped
# HELP rabbitmq_build_info RabbitMQ & Erlang/OTP version info
rabbitmq_build_info{rabbitmq_version="3.8.0-alpha.809",prometheus_plugin_version="3.8.0-alpha.809-2019.08.15",prometheus_client_version="4.4.0",erlang_version="22.0.7"} 1
# TYPE rabbitmq_identity_info untyped
# HELP rabbitmq_identity_info Node & cluster identity info
rabbitmq_identity_info{node="rabbit@bc7aeb0c2564",cluster="rabbit@bc7aeb0c2564"} 1
...
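To give a flavour of what the updated queries could look like, a
hypothetical PromQL join that re-attaches the identity labels to an
otherwise unlabelled metric:
erlang_mnesia_lock_queue
  * on(instance) group_left(node, cluster)
rabbitmq_identity_info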
[#167846096]
We want to use a consistent range for all metrics that use rate() and a
safe value (4x the Prometheus scrape interval):
https://www.robustperception.io/what-range-should-i-use-with-rate
This also prompted a change in RabbitMQ's default
collect_statistics_interval, so that we don't update metrics
unnecessarily. We are OK if the Management UI doesn't update on every 5s
auto-refresh.
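As a sketch, assuming the default 15s Prometheus scrape interval, every
rate() query would settle on a 60s range (metric name illustrative):
rate(rabbitmq_connections_opened_total[60s])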
Related: a929f22233
[#167846096]
Started as a Prometheus docs discussion in prometheus/docs#1414, mostly
based on https://prometheus.io/docs/instrumenting/writing_exporters/
Raft metrics are of type gauge, not counter: _If you care about the
absolute value rather than only how fast it's increasing, that's a
gauge_.
All node_persister_metrics are now counters - some were gauges before.
They are now named using metric naming best practices:
https://prometheus.io/docs/practices/naming/
All metric names that should have units now do. Some use microseconds,
others milliseconds and others bytes or ops (operations). We don't do
any unit conversion in the collector but simply expose the units that
are used when the metric value is written to ETS.
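As an example of the convention, using one of the names mentioned below
(value & HELP string illustrative):
# TYPE io_sync_time_microseconds_total counter
# HELP io_sync_time_microseconds_total Total time spent in I/O syncs
io_sync_time_microseconds_total 103252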
While some metrics such as io_sync_time_microseconds_total would be
better expressed as Summaries, the refactoring required to achieve that
is not worth the effort. Will keep things simple & imperfect for now,
especially since we don't have a dashboard that helps visualise these
metrics.
The next step is to address global labels - will submit as a separate
PR.
[#167846096]
Now that there is a 3.8 alpha build that includes
rabbitmq/rabbitmq-server#2075, let's make use of it!
Without this, when a new cluster was started, some nodes ended up with
`rabbit@localhost` for the cluster label, instead of e.g. `rmq-gcp-38`.
The main suspect was a race condition, where the rabbitmq_prometheus app
starts before the cluster name is set via `rabbitmqctl
set_cluster_name`.
[finishes #167835770]