Merge pull request #17233 from ADITYATIWARI342005/fix/functions.mdAndStorage.md

docs: fix typos in querying functions and storage
Bryan Boreham 2025-09-30 12:20:24 +01:00 committed by GitHub
commit 8f7e6644f0
2 changed files with 15 additions and 16 deletions

View File

@@ -4,8 +4,7 @@ nav_title: Functions
 sort_rank: 3
 ---
 
-Some functions have default arguments, e.g. `year(v=vector(time())
-instant-vector)`. This means that there is one argument `v` which is an instant
+Some functions have default arguments, e.g. `year(v=vector(time()) instant-vector)`. This means that there is one argument `v` which is an instant
 vector, which if not provided it will default to the value of the expression
 `vector(time())`.
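
For illustration, omitting `v` here is equivalent to passing `vector(time())` explicitly:

```
# Both return the year (in UTC) of the current evaluation time.
year()
year(vector(time()))
```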
@@ -106,14 +105,14 @@ vector are ignored silently.
 
 ## `day_of_month()`
 
-`day_of_month(v=vector(time()) instant-vector)` interpretes float samples in
+`day_of_month(v=vector(time()) instant-vector)` interprets float samples in
 `v` as timestamps (number of seconds since January 1, 1970 UTC) and returns the
 day of the month (in UTC) for each of those timestamps. Returned values are
 from 1 to 31. Histogram samples in the input vector are ignored silently.
 
 ## `day_of_week()`
 
-`day_of_week(v=vector(time()) instant-vector)` interpretes float samples in `v`
+`day_of_week(v=vector(time()) instant-vector)` interprets float samples in `v`
 as timestamps (number of seconds since January 1, 1970 UTC) and returns the day
 of the week (in UTC) for each of those timestamps. Returned values are from 0
 to 6, where 0 means Sunday etc. Histogram samples in the input vector are
@@ -121,7 +120,7 @@ ignored silently.
 
 ## `day_of_year()`
 
-`day_of_year(v=vector(time()) instant-vector)` interpretes float samples in `v`
+`day_of_year(v=vector(time()) instant-vector)` interprets float samples in `v`
 as timestamps (number of seconds since January 1, 1970 UTC) and returns the day
 of the year (in UTC) for each of those timestamps. Returned values are from 1
 to 365 for non-leap years, and 1 to 366 in leap years. Histogram samples in the
@@ -129,7 +128,7 @@ input vector are ignored silently.
 
 ## `days_in_month()`
 
-`days_in_month(v=vector(time()) instant-vector)` interpretes float samples in
+`days_in_month(v=vector(time()) instant-vector)` interprets float samples in
 `v` as timestamps (number of seconds since January 1, 1970 UTC) and returns the
 number of days in the month of each of those timestamps (in UTC). Returned
 values are from 28 to 31. Histogram samples in the input vector are ignored silently.
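
The date functions touched in these hunks all follow the same pattern; for example (`up` is used only as a commonly available metric, and `timestamp()` turns each sample's own timestamp into a value):

```
# Day of the month (UTC) at the current evaluation time.
day_of_month()

# Day of the week (UTC) of each sample's own timestamp.
day_of_week(timestamp(up))
```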
@@ -266,7 +265,7 @@ histograms, it is easy to accidentally pick lower or upper values that are very
 far away from any bucket boundary, leading to large margins of error. Rather than
 using `histogram_fraction()` with classic histograms, it is often a more robust approach
 to directly act on the bucket series when calculating fractions. See the
-[calculation of the Apdex scare](https://prometheus.io/docs/practices/histograms/#apdex-score)
+[calculation of the Apdex score](https://prometheus.io/docs/practices/histograms/#apdex-score)
 as a typical example.
 
 For example, the following expression calculates the fraction of HTTP requests
@@ -448,7 +447,7 @@ variance of observations for each histogram sample in `v`.
 
 ## `hour()`
 
-`hour(v=vector(time()) instant-vector)` interpretes float samples in `v` as
+`hour(v=vector(time()) instant-vector)` interprets float samples in `v` as
 timestamps (number of seconds since January 1, 1970 UTC) and returns the hour
 of the day (in UTC) for each of those timestamps. Returned values are from 0
 to 23. Histogram samples in the input vector are ignored silently.
@@ -612,7 +611,7 @@ spikes are hard to read.
 Note that when combining `irate()` with an
 [aggregation operator](operators.md#aggregation-operators) (e.g. `sum()`)
 or a function aggregating over time (any function ending in `_over_time`),
-always take a `irate()` first, then aggregate. Otherwise `irate()` cannot detect
+always take an `irate()` first, then aggregate. Otherwise `irate()` cannot detect
 counter resets when your target restarts.
 
 ## `label_join()`
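
As a sketch of that ordering rule (`http_requests_total` is only a placeholder counter name):

```
# Rate first, then aggregate: counter resets are handled per series.
sum by (job) (irate(http_requests_total[5m]))

# Aggregating across series before applying irate() would hide
# per-target counter resets.
```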
@@ -674,14 +673,14 @@ cases are equivalent to those in `ln`.
 
 ## `minute()`
 
-`minute(v=vector(time()) instant-vector)` interpretes float samples in `v` as
+`minute(v=vector(time()) instant-vector)` interprets float samples in `v` as
 timestamps (number of seconds since January 1, 1970 UTC) and returns the minute
 of the hour (in UTC) for each of those timestamps. Returned values are from 0
 to 59. Histogram samples in the input vector are ignored silently.
 
 ## `month()`
 
-`month(v=vector(time()) instant-vector)` interpretes float samples in `v` as
+`month(v=vector(time()) instant-vector)` interprets float samples in `v` as
 timestamps (number of seconds since January 1, 1970 UTC) and returns the month
 of the year (in UTC) for each of those timestamps. Returned values are from 1
 to 12, where 1 means January etc. Histogram samples in the input vector are
@@ -795,7 +794,7 @@ sorted by the values of the given labels in ascending order. In case these
 label values are equal, elements are sorted by their full label sets.
 `sort_by_label` acts on float and histogram samples in the same way.
 
-Please note that `sort_by_label` only affect the results of instant queries, as
+Please note that `sort_by_label` only affects the results of instant queries, as
 range query results always have a fixed output ordering.
 
 `sort_by_label` uses [natural sort
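
A minimal example of the instant-query sorting described here (on some Prometheus versions `sort_by_label` is experimental and needs the corresponding feature flag):

```
# Instant-query result sorted by the value of the "instance" label;
# ties fall back to the full label set.
sort_by_label(up, "instance")
```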

View File

@@ -97,7 +97,7 @@ Prometheus has several flags that configure local storage. The most important are:
   (m-mapped Head chunks) directory combined (peaks every 2 hours).
 - `--storage.tsdb.wal-compression`: Enables compression of the write-ahead log (WAL).
   Depending on your data, you can expect the WAL size to be halved with little extra
-  cpu load. This flag was introduced in 2.11.0 and enabled by default in 2.20.0.
+  CPU load. This flag was introduced in 2.11.0 and enabled by default in 2.20.0.
   Note that once enabled, downgrading Prometheus to a version below 2.11.0 will
   require deleting the WAL.
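
For reference, the flag is passed on the Prometheus command line like any other storage flag (the binary path shown is just an example):

```
# Explicitly enable WAL compression (already the default since 2.20.0).
./prometheus --storage.tsdb.wal-compression
```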
@@ -117,8 +117,8 @@ If your local storage becomes corrupted to the point where Prometheus will not
 start it is recommended to backup the storage directory and restore the
 corrupted block directories from your backups. If you do not have backups the
 last resort is to remove the corrupted files. For example you can try removing
-individual block directories or the write-ahead-log (wal) files. Note that this
-means losing the data for the time range those blocks or wal covers.
+individual block directories or the write-ahead-log (WAL) files. Note that this
+means losing the data for the time range those blocks or WAL covers.
 
 CAUTION: Non-POSIX compliant filesystems are not supported for Prometheus'
 local storage as unrecoverable corruptions may happen. NFS filesystems
@@ -213,7 +213,7 @@ procedure, as they cannot be represented in the OpenMetrics format.
 
 ### Usage
 
-Backfilling can be used via the Promtool command line. Promtool will write the blocks
+Backfilling can be used via the `promtool` command line. `promtool` will write the blocks
 to a directory. By default this output directory is ./data/, you can change it by
 using the name of the desired output directory as an optional argument in the sub-command.
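
For example, backfilling OpenMetrics data with an explicit output directory (the file and directory names below are placeholders):

```
# Write TSDB blocks from an OpenMetrics file; the trailing argument
# overrides the default ./data/ output directory.
promtool tsdb create-blocks-from openmetrics input.openmetrics ./backfill/
```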