Implements expand/collapse functionality for displaying final scrape
configuration (interval + timeout) in the targets page timing column.
- Add ScrapeDetails component with expand/collapse chevron
- Keep existing "Last Scrape" and "Scrape Duration" badges always visible
- Display "Scrape interval: every \<interval\>" and "Scrape timeout: after \<timeout\>" when expanded
- Use IconRepeat for interval and IconPlugConnectedX for timeout
- Follow TargetLabels.tsx pattern for consistency
- Optimize performance by rendering the expanded details into the DOM only when the row is expanded
- Maintain existing hover tooltip functionality
Signed-off-by: ADITYATIWARI342005 <142050150+ADITYATIWARI342005@users.noreply.github.com>
* Add anchored and smoothed to range vector selectors.
This adds "anchored" and "smoothed" keywords that can be used following a matrix selector.
"Anchored" selects the last point before the range (or the first one after the range) and adds it at the boundary of the matrix selector.
"Smoothed" applies linear interpolation at the edges using the points around the edges. In the absence of a point before or after the edge, the first or the last point is added to the edge, without interpolation.
*Example usage*
* `increase(caddy_http_requests_total[5m] anchored)` (equivalent to *caddy_http_requests_total - caddy_http_requests_total offset 5m* but takes counter resets into consideration)
* `rate(caddy_http_requests_total[step()] smoothed)`
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
* Update docs/feature_flags.md
Co-authored-by: Charles Korn <charleskorn@users.noreply.github.com>
Signed-off-by: Julien <291750+roidelapluie@users.noreply.github.com>
* Smoothed/Anchored rate: Add more tests
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
* Anchored/Smoothed modifier: error out with histograms
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
---------
Signed-off-by: Julien Pivotto <291750+roidelapluie@users.noreply.github.com>
Signed-off-by: Julien <291750+roidelapluie@users.noreply.github.com>
Co-authored-by: Charles Korn <charleskorn@users.noreply.github.com>
Reduce the resolution of histograms as needed and ignore invalid
schemas while emitting a warning log.
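A minimal sketch of the pairing idea behind resolution reduction, assuming dense, zero-based bucket counts (real native histograms use sparse span encodings, and the actual implementation lives in model/histogram): lowering the schema by one merges each pair of adjacent buckets.

```go
package main

import "fmt"

// reduceResolution lowers an exponential histogram's schema by one by
// merging adjacent bucket pairs: at schema s-1 each bucket covers the
// range of two schema-s buckets. Toy version over dense, zero-based
// counts; it ignores span encoding and exact index alignment.
func reduceResolution(counts []uint64) []uint64 {
	out := make([]uint64, (len(counts)+1)/2)
	for i, c := range counts {
		out[i/2] += c
	}
	return out
}

func main() {
	counts := []uint64{1, 2, 3, 4, 5}
	schema := 10
	const maxSchema = 8 // highest resolution the TSDB stores

	for schema > maxSchema {
		counts = reduceResolution(counts)
		schema--
	}
	fmt.Println(schema, counts) // 8 [10 5] after two reductions
}
```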
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Because the label API endpoints read from the TSDB indexes, they can
return information for series which are present in the index but have no
samples in the queried interval.
Add a similar note for the series endpoint.
Signed-off-by: Simon Pasquier <spasquie@redhat.com>
If a sample read through remote read has too high a resolution,
reduce it to the maximum allowed.
This is a slow path, but we only expect it to happen if the server
side is a newer version that allows higher resolutions.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Allow schemas -9..52 instead of just -4..8, but reduce the resolution to
schema 8 if it is above that.
The reduction code path will be slow, but we only expect to hit it if
the TSDB already has higher-resolution samples and we are in a rollback.
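A sketch of the gating described above (the names are mine, not the model/histogram API): schemas outside -9..52 are invalid, schemas in 9..52 are accepted but marked for reduction down to 8, and anything else passes through. The same check covers the remote-read slow path mentioned earlier.

```go
package main

import "fmt"

const (
	minSchema    = -9 // lowest schema now accepted
	maxSchema    = 52 // highest schema now accepted
	maxStoreable = 8  // resolution the TSDB actually stores
)

// checkSchema is a hypothetical helper mirroring the rule above: it reports
// whether a schema is valid at all and, if so, whether its resolution must
// be reduced to maxStoreable before storing.
func checkSchema(schema int32) (target int32, reduce bool, ok bool) {
	if schema < minSchema || schema > maxSchema {
		return 0, false, false // invalid: ignore and warn
	}
	if schema > maxStoreable {
		return maxStoreable, true, true // valid, but too fine to store as-is
	}
	return schema, false, true
}

func main() {
	for _, s := range []int32{-10, -9, 3, 9, 52, 53} {
		target, reduce, ok := checkSchema(s)
		fmt.Printf("schema %3d -> ok=%5t reduce=%5t target=%d\n", s, ok, reduce, target)
	}
}
```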
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
When remote read returns chunks, the validation happens in tsdb/chunkenc.
However, when it returns samples, we need to modify the iterator to
validate them.
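A self-contained sketch of that sample-path validation (the types here are simplified stand-ins for the real chunkenc interfaces): wrap the inner iterator and stop with an error on the first invalid sample instead of passing it up.

```go
package main

import "fmt"

// sample and iterator are simplified stand-ins for the real chunkenc types.
type sample struct {
	t      int64
	schema int32
}

type iterator interface {
	Next() bool
	At() sample
	Err() error
}

// validatingIterator wraps another iterator and fails on the first invalid
// sample, mirroring the validation that chunk-based remote read already
// gets from tsdb/chunkenc.
type validatingIterator struct {
	inner iterator
	err   error
}

func (it *validatingIterator) Next() bool {
	if it.err != nil || !it.inner.Next() {
		return false
	}
	if s := it.inner.At(); s.schema < -9 || s.schema > 52 {
		it.err = fmt.Errorf("invalid schema %d at t=%d", s.schema, s.t)
		return false
	}
	return true
}

func (it *validatingIterator) At() sample { return it.inner.At() }

func (it *validatingIterator) Err() error {
	if it.err != nil {
		return it.err
	}
	return it.inner.Err()
}

// sliceIter is a trivial inner iterator for the demo below.
type sliceIter struct {
	samples []sample
	i       int
}

func (s *sliceIter) Next() bool { s.i++; return s.i <= len(s.samples) }
func (s *sliceIter) At() sample { return s.samples[s.i-1] }
func (s *sliceIter) Err() error { return nil }

func main() {
	it := &validatingIterator{inner: &sliceIter{samples: []sample{
		{t: 1, schema: 3}, {t: 2, schema: 99}, {t: 3, schema: 3},
	}}}
	for it.Next() {
		fmt.Println("ok:", it.At())
	}
	fmt.Println("err:", it.Err()) // err: invalid schema 99 at t=2
}
```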
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Otherwise, higher-level code like PromQL needs to constantly check whether
it can handle the samples.
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
With the fixed commit order, we can now handle the conversion of float
staleness markers to histogram staleness markers in a more direct way.
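A toy of that direct conversion (staleNaNBits matches Prometheus's model/value.StaleNaN bit pattern; the rest of the names are mine, not the actual head-appender code): when a float staleness marker arrives for a series whose latest sample is a native histogram, append a histogram staleness marker instead.

```go
package main

import (
	"fmt"
	"math"
)

// staleNaNBits is the fixed NaN bit pattern Prometheus uses as a staleness
// marker (model/value.StaleNaN in the real code base).
const staleNaNBits = 0x7ff0000000000002

func isStaleNaN(v float64) bool { return math.Float64bits(v) == staleNaNBits }

// appendFloat sketches the conversion: a float staleness marker on a series
// that currently holds histograms becomes a histogram staleness marker
// (a histogram whose Sum carries the stale NaN), keeping the chunk type
// consistent. Hypothetical shape, not the real appender.
func appendFloat(lastWasHistogram bool, t int64, v float64) string {
	if isStaleNaN(v) && lastWasHistogram {
		return fmt.Sprintf("t=%d: histogram staleness marker (Sum=StaleNaN)", t)
	}
	return fmt.Sprintf("t=%d: float sample %v", t, v)
}

func main() {
	stale := math.Float64frombits(staleNaNBits)
	fmt.Println(appendFloat(false, 1, 42.0))
	fmt.Println(appendFloat(true, 2, stale)) // converted marker
}
```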
Signed-off-by: beorn7 <beorn@grafana.com>
Fixes https://github.com/prometheus/prometheus/issues/15177
The basic idea here is to divide the samples to be committed into
(sub)batches whenever we detect that the same series receives a sample of
a type different from the previous one. We then commit those batches one
after another, and we log them to the WAL one after another, so that
we hit both birds with the same stone. The cost of the stone is that
we have to track the sample type of each series in a map. Given the
number of things we already track in the appender, I hope that it
won't make a dent. Note that this even addresses the NHCB special case
in the WAL.
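A sketch of that batching rule under simplified types (not the actual appender code): walk the pending samples, remember each series' sample type in a map, and cut a new (sub)batch the moment a series flips type; each batch can then be committed and WAL-logged as a unit.

```go
package main

import "fmt"

type sampleType int

const (
	floatSample sampleType = iota
	histogramSample
)

type sample struct {
	ref int // series reference
	typ sampleType
	t   int64
}

// splitBatches cuts samples into sub-batches whenever a series receives a
// sample of a different type than its previous one in the current batch,
// at the cost of one map tracking the sample type per series.
func splitBatches(samples []sample) [][]sample {
	var (
		batches [][]sample
		current []sample
		types   = map[int]sampleType{}
	)
	for _, s := range samples {
		if prev, ok := types[s.ref]; ok && prev != s.typ {
			batches = append(batches, current) // type flip: close the batch
			current = nil
			types = map[int]sampleType{}
		}
		types[s.ref] = s.typ
		current = append(current, s)
	}
	if len(current) > 0 {
		batches = append(batches, current)
	}
	return batches
}

func main() {
	in := []sample{
		{ref: 1, typ: floatSample, t: 1},
		{ref: 2, typ: floatSample, t: 1},
		{ref: 1, typ: histogramSample, t: 2}, // series 1 flips type here
		{ref: 2, typ: floatSample, t: 2},
	}
	for i, b := range splitBatches(in) {
		fmt.Printf("batch %d: %v\n", i, b)
	}
}
```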
This does a few other things that I could not resist picking up along
the way:
- It adds more zeropool.Pools and uses the existing ones more
consistently. My understanding is that this was merely an oversight.
Maybe the additional pool usage will compensate for the increased
memory demand of the map.
- It creates the synthetic zero sample for histograms a bit more
carefully. So far, we created a sample that always went into its own
chunk. Now we create a sample that is compatible enough with the
following sample to go into the same chunk. This changed the test
results quite a bit, but IMHO it makes much more sense now.
- Continuing past efforts, I changed more names from `Samples` to
`Floats` to keep things consistent and less confusing. (Histogram
samples are also samples.) I still avoided changing names in other
packages.
- I added a few shortcuts like `h := a.head`, saving many characters.
TODOs:
- Address @krajorama's TODOs about commit order and staleness handling.
Signed-off-by: beorn7 <beorn@grafana.com>
control over time
The test became flaky after it was asked to run in parallel
and "fight" for resources.
Let's hide all of that.
Signed-off-by: machine424 <ayoubmrini424@gmail.com>
It's not possible to store a created timestamp at the same timestamp as
the current sample, so do not even try.
In the OpenTelemetry spec, if the start time is unknown, it will be set to
the same timestamp as the first sample.
https://opentelemetry.io/docs/specs/otel/metrics/data-model/#cumulative-streams-handling-unknown-start-time
This means that we would get a lot of duplicate-sample-for-timestamp
errors, and we should not log those.
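A sketch of the resulting rule (a hypothetical helper, not the actual OTLP-ingest code): only append the created timestamp when it strictly precedes the sample, and stay quiet when it coincides with it.

```go
package main

import "fmt"

// shouldAppendCT reports whether a created timestamp can be stored ahead of
// the sample at t. Per the OpenTelemetry spec, an unknown start time equals
// the first sample's timestamp, so ct == t is expected and silently skipped
// rather than logged as a duplicate-sample error. Hypothetical helper.
func shouldAppendCT(ct, t int64) bool {
	return ct < t // a CT at (or after) the sample's own timestamp cannot be stored
}

func main() {
	fmt.Println(shouldAppendCT(100, 200)) // true: CT precedes the sample
	fmt.Println(shouldAppendCT(200, 200)) // false: unknown start time, skip quietly
}
```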
Signed-off-by: György Krajcsovits <gyorgy.krajcsovits@grafana.com>
Signed-off-by: Andrew Hall <andrew.hall@grafana.com>
Co-authored-by: Charles Korn <charleskorn@users.noreply.github.com>
Co-authored-by: Arve Knudsen <arve.knudsen@gmail.com>