[[search-aggregations-pipeline]]
== Pipeline aggregations

Pipeline aggregations work on the outputs produced from other aggregations rather than from document sets, adding
information to the output tree. There are many different types of pipeline aggregation, each computing different information from
other aggregations, but these types can be broken down into two families:

_Parent_::
A family of pipeline aggregations that is provided with the output of its parent aggregation and is able
to compute new buckets or new aggregations to add to existing buckets.

_Sibling_::
Pipeline aggregations that are provided with the output of a sibling aggregation and are able to compute a
new aggregation which will be at the same level as the sibling aggregation.

Pipeline aggregations can reference the aggregations they need to perform their computation by using the `buckets_path`
parameter to indicate the paths to the required metrics. The syntax for defining these paths can be found in the
<<buckets-path-syntax, `buckets_path` Syntax>> section below.

Pipeline aggregations cannot have sub-aggregations, but depending on the type they can reference another pipeline in the `buckets_path`,
allowing pipeline aggregations to be chained. For example, you can chain together two derivatives to calculate the second derivative
(i.e. a derivative of a derivative).
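A minimal sketch of such a chain is shown below (the `timestamp` and `lemmings` fields are hypothetical, mirroring the example later in this section); the second `derivative` simply names the first one in its `buckets_path`:

[source,js]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "my_date_histo": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "the_sum": {
          "sum": { "field": "lemmings" }
        },
        "the_deriv": {
          "derivative": { "buckets_path": "the_sum" }
        },
        "the_2nd_deriv": {
          "derivative": { "buckets_path": "the_deriv" }
        }
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE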
NOTE: Because pipeline aggregations only add to the output, when chaining pipeline aggregations the output of each pipeline aggregation
will be included in the final output.

[[buckets-path-syntax]]
[discrete]
=== `buckets_path` Syntax

Most pipeline aggregations require another aggregation as their input. The input aggregation is defined via the `buckets_path`
parameter, which follows a specific format:

// https://en.wikipedia.org/wiki/Extended_Backus%E2%80%93Naur_Form
[source,ebnf]
--------------------------------------------------
AGG_SEPARATOR = `>` ;
METRIC_SEPARATOR = `.` ;
AGG_NAME = <the name of the aggregation> ;
METRIC = <the name of the metric (in case of multi-value metrics aggregation)> ;
MULTIBUCKET_KEY = `[<KEY_NAME>]` ;
PATH = <AGG_NAME><MULTIBUCKET_KEY>? ( <AGG_SEPARATOR>, <AGG_NAME> )* ( <METRIC_SEPARATOR>, <METRIC> ) ;
--------------------------------------------------
For example, the path `"my_bucket>my_stats.avg"` points to the `avg` value in the `"my_stats"` metric, which is
contained in the `"my_bucket"` bucket aggregation.

Paths are relative from the position of the pipeline aggregation; they are not absolute paths, and the path cannot go back "up" the
aggregation tree. For example, this derivative is embedded inside a `date_histogram` and refers to a "sibling"
metric `"the_sum"`:

[source,console,id=buckets-path-example]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "my_date_histo": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "the_sum": {
          "sum": { "field": "lemmings" } <1>
        },
        "the_deriv": {
          "derivative": { "buckets_path": "the_sum" } <2>
        }
      }
    }
  }
}
--------------------------------------------------
<1> The metric is called `"the_sum"`
<2> The `buckets_path` refers to the metric via a relative path `"the_sum"`

`buckets_path` is also used for Sibling pipeline aggregations, where the aggregation is "next" to a series of buckets
instead of embedded "inside" them. For example, the `max_bucket` aggregation uses the `buckets_path` to specify
a metric embedded inside a sibling aggregation:

[source,console,id=buckets-path-sibling-example]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "sales_per_month": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "month"
      },
      "aggs": {
        "sales": {
          "sum": {
            "field": "price"
          }
        }
      }
    },
    "max_monthly_sales": {
      "max_bucket": {
        "buckets_path": "sales_per_month>sales" <1>
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:sales]

<1> `buckets_path` instructs this `max_bucket` aggregation that we want the maximum value of the `sales` aggregation in the
`sales_per_month` date histogram.

If a Sibling pipeline agg references a multi-bucket aggregation, such as a `terms` agg, it also has the option to
select specific keys from the multi-bucket. For example, a `bucket_script` could select two specific buckets (via
their bucket keys) to perform the calculation:

[source,console,id=buckets-path-specific-bucket-example]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "sales_per_month": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "month"
      },
      "aggs": {
        "sale_type": {
          "terms": {
            "field": "type"
          },
          "aggs": {
            "sales": {
              "sum": {
                "field": "price"
              }
            }
          }
        },
        "hat_vs_bag_ratio": {
          "bucket_script": {
            "buckets_path": {
              "hats": "sale_type['hat']>sales", <1>
              "bags": "sale_type['bag']>sales" <1>
            },
            "script": "params.hats / params.bags"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:sales]

<1> `buckets_path` selects the hats and bags buckets (via `['hat']`/`['bag']`) to use in the script specifically,
instead of fetching all the buckets from the `sale_type` aggregation

[discrete]
=== Special Paths
Instead of pointing to a metric, `buckets_path` can use a special `"_count"` path. This instructs
the pipeline aggregation to use the document count as its input. For example, a derivative can be calculated
on the document count of each bucket, instead of a specific metric:

[source,console,id=buckets-path-count-example]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "my_date_histo": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "the_deriv": {
          "derivative": { "buckets_path": "_count" } <1>
        }
      }
    }
  }
}
--------------------------------------------------
<1> By using `_count` instead of a metric name, we can calculate the derivative of document counts in the histogram

The `buckets_path` can also use `"_bucket_count"` and point to a multi-bucket aggregation to use the number of buckets
returned by that aggregation in the pipeline aggregation instead of a metric. For example, a `bucket_selector` can be
used here to filter out buckets which contain no buckets for an inner terms aggregation:

[source,console,id=buckets-path-bucket-count-example]
--------------------------------------------------
POST /sales/_search
{
  "size": 0,
  "aggs": {
    "histo": {
      "date_histogram": {
        "field": "date",
        "calendar_interval": "day"
      },
      "aggs": {
        "categories": {
          "terms": {
            "field": "category"
          }
        },
        "min_bucket_selector": {
          "bucket_selector": {
            "buckets_path": {
              "count": "categories._bucket_count" <1>
            },
            "script": {
              "source": "params.count != 0"
            }
          }
        }
      }
    }
  }
}
--------------------------------------------------
// TEST[setup:sales]

<1> By using `_bucket_count` instead of a metric name, we can filter out `histo` buckets that contain no buckets
for the `categories` aggregation

[[dots-in-agg-names]]
[discrete]
=== Dealing with dots in agg names
An alternate syntax is supported to cope with aggregations or metrics which
have dots in the name, such as the ++99.9++th
<<search-aggregations-metrics-percentile-aggregation,percentile>>. This metric
may be referred to as:

[source,js]
---------------
"buckets_path": "my_percentile[99.9]"
---------------
// NOTCONSOLE
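
For instance, a sketch of the bracket syntax in context (the `load_over_time`, `timestamp`, and `load_time` names are hypothetical): a `percentiles` aggregation named `my_percentile` computes the 99.9th percentile, and a derivative references that single value with `my_percentile[99.9]`:

[source,js]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "load_over_time": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "my_percentile": {
          "percentiles": {
            "field": "load_time",
            "percents": [ 99.9 ]
          }
        },
        "the_deriv": {
          "derivative": { "buckets_path": "my_percentile[99.9]" }
        }
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE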
[[gap-policy]]
[discrete]
=== Dealing with gaps in the data
Data in the real world is often noisy and sometimes contains *gaps* -- places where data simply doesn't exist. This can
occur for a variety of reasons, the most common being:

* Documents falling into a bucket do not contain a required field
* There are no documents matching the query for one or more buckets
* The metric being calculated is unable to generate a value, likely because another dependent bucket is missing a value.

Some pipeline aggregations have specific requirements that must be met (e.g. a derivative cannot calculate a metric for the
first value because there is no previous value, a HoltWinters moving average needs "warmup" data to begin calculating, etc.).

Gap policies are a mechanism to inform the pipeline aggregation about the desired behavior when "gappy" or missing
data is encountered. All pipeline aggregations accept the `gap_policy` parameter. There are currently three gap policies
to choose from:

_skip_::
This option treats missing data as if the bucket does not exist. It will skip the bucket and continue
calculating using the next available value.

_insert_zeros_::
This option will replace missing values with a zero (`0`) and pipeline aggregation computation will
proceed as normal.

_keep_values_::
This option is similar to `skip`, except that if the metric provides a non-null, non-NaN value, that value is
used; otherwise the empty bucket is skipped.
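
As a brief sketch of how a gap policy is applied (reusing the hypothetical `timestamp` and `lemmings` fields from the earlier examples), the `gap_policy` parameter is set directly on the pipeline aggregation:

[source,js]
--------------------------------------------------
POST /_search
{
  "aggs": {
    "my_date_histo": {
      "date_histogram": {
        "field": "timestamp",
        "calendar_interval": "day"
      },
      "aggs": {
        "the_sum": {
          "sum": { "field": "lemmings" }
        },
        "the_deriv": {
          "derivative": {
            "buckets_path": "the_sum",
            "gap_policy": "insert_zeros"
          }
        }
      }
    }
  }
}
--------------------------------------------------
// NOTCONSOLE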
include::pipeline/avg-bucket-aggregation.asciidoc[]

include::pipeline/bucket-script-aggregation.asciidoc[]

include::pipeline/bucket-count-ks-test-aggregation.asciidoc[]

include::pipeline/bucket-correlation-aggregation.asciidoc[]

include::pipeline/bucket-selector-aggregation.asciidoc[]

include::pipeline/bucket-sort-aggregation.asciidoc[]

include::pipeline/cumulative-cardinality-aggregation.asciidoc[]

include::pipeline/cumulative-sum-aggregation.asciidoc[]

include::pipeline/derivative-aggregation.asciidoc[]

include::pipeline/extended-stats-bucket-aggregation.asciidoc[]

include::pipeline/inference-bucket-aggregation.asciidoc[]

include::pipeline/max-bucket-aggregation.asciidoc[]

include::pipeline/min-bucket-aggregation.asciidoc[]

include::pipeline/movfn-aggregation.asciidoc[]

include::pipeline/moving-percentiles-aggregation.asciidoc[]

include::pipeline/normalize-aggregation.asciidoc[]

include::pipeline/percentiles-bucket-aggregation.asciidoc[]

include::pipeline/serial-diff-aggregation.asciidoc[]

include::pipeline/stats-bucket-aggregation.asciidoc[]

include::pipeline/sum-bucket-aggregation.asciidoc[]
|