Adds documentation for the `minimum_should_match` parameter to the `bool` query docs. Includes docs for the default values:
- `1` if the `bool` query includes at least one `should` clause and no `must` or `filter` clauses
- `0` otherwise
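For illustration, a minimal sketch of a `bool` query that overrides the default (the `tags` field and its values are hypothetical):

```
GET /_search
{
  "query": {
    "bool": {
      "should": [
        { "term": { "tags": "search" } },
        { "term": { "tags": "lucene" } },
        { "term": { "tags": "performance" } }
      ],
      "minimum_should_match": 2
    }
  }
}
```

With three `should` clauses and no `must` or `filter` clauses the default would be `1`, so `minimum_should_match: 2` tightens the query to require at least two matching clauses.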
* Adds a title abbreviation
* Updates the description and adds a Lucene link
* Reformats the parameters section
* Adds analyze, custom analyzer, and custom filter snippets
Relates to #44726.
The documentation contained a small error, as bytes and duration values were not
properly converted to numbers and thus remained strings.
The documentation is now also properly tested by providing a full-blown
simulate pipeline example.
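As a rough sketch of what such a test-backed example could look like (assuming the `bytes` processor is the one under discussion; the field names are hypothetical):

```
POST /_ingest/pipeline/_simulate
{
  "pipeline": {
    "processors": [
      { "bytes": { "field": "file.size" } }
    ]
  },
  "docs": [
    { "_source": { "file": { "size": "1kb" } } }
  ]
}
```

Here the simulated document shows `file.size` converted to the number `1024` rather than remaining the string `"1kb"`.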
* Creates a prerequisites section in the cross-cluster replication (CCR)
overview.
* Adds concise definitions for local and remote cluster in a CCR context.
* Documents that the ES version of the local cluster must be the same as,
or a newer compatible version than, the ES version of the remote cluster.
When the enrich processor appends enrich data to an incoming document,
it adds a `target_field` to contain the enrich data.
This `target_field` contains both the `match_field` AND `enrich_fields`
specified in the enrich policy.
Previously, this was reflected in the documented example but not
explicitly stated. This adds several explicit statements to the docs.
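A minimal sketch of how the pieces fit together (policy, pipeline, index, and field names are hypothetical):

```
PUT /_enrich/policy/users-policy
{
  "match": {
    "indices": "users",
    "match_field": "email",
    "enrich_fields": [ "first_name", "last_name" ]
  }
}

PUT /_ingest/pipeline/user-lookup
{
  "processors": [
    {
      "enrich": {
        "policy_name": "users-policy",
        "field": "email",
        "target_field": "user"
      }
    }
  ]
}
```

With this setup, the `user` target field on an enriched document contains the `email` match field as well as the `first_name` and `last_name` enrich fields.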
The "Restore any snapshots as required" step is a trap: it's somewhere between
tricky and impossible to restore multiple clusters into a single one.
Also add a note about configuring discovery during a rolling upgrade to
rule out the rare cases where you might accidentally autobootstrap during the
upgrade.
Reindex sort never gave a guarantee about the order of documents being
indexed into the destination, though it could give a sense of locality
of source data.
It prevents us from implementing resilient reindex and other optimizations,
and it has therefore been deprecated.
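For reference, a sketch of the now-deprecated usage (index names are hypothetical):

```
POST /_reindex
{
  "source": {
    "index": "my-source-index",
    "sort": { "@timestamp": "desc" }
  },
  "dest": {
    "index": "my-dest-index"
  }
}
```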
Related to #47567
This adds a `_source` setting under the `source` setting of a data
frame analytics config. The new `_source` reuses the structure
of a `FetchSourceContext`, like `analyzed_fields` does. Specifying
includes and excludes for `_source` lets you select which fields
get reindexed and are available in the destination index.
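A minimal sketch of the new setting in a data frame analytics config (job, index, and field names are hypothetical):

```
PUT _ml/data_frame/analytics/customer-outliers
{
  "source": {
    "index": "customer-data",
    "_source": {
      "includes": [ "total_spend", "visit_count" ],
      "excludes": [ "internal.*" ]
    }
  },
  "dest": { "index": "customer-data-outliers" },
  "analysis": { "outlier_detection": {} }
}
```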
Closes #49531
This commit clarifies how to use JAVA_HOME to override the bundled JDK for
deb and rpm installs, which each have their own file that is sourced
upon service startup.
Closes #49068
This change adds a dynamic cluster setting named `indices.id_field_data.enabled`.
When set to `false`, any attempt to load fielddata for the `_id` field will fail
with an exception. The default value in this change is `false` in order to prevent
fielddata usage on this field in future versions, but it will be set to `true` when backporting
to 7.x. When the setting is `true` (manually or by default in 7.x), loading will also issue
a deprecation warning, since we want to disallow fielddata entirely once https://github.com/elastic/elasticsearch/issues/26472
is implemented.
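For example, the setting can be toggled dynamically (a sketch; whether you use `persistent` or `transient` is up to you):

```
PUT /_cluster/settings
{
  "persistent": {
    "indices.id_field_data.enabled": false
  }
}
```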
Closes #43599
Fix the reference to the uid:gid that Elasticsearch runs as inside
the Docker container and add a packaging test to ensure that bind
mounting a data dir with a random uid and gid:0 works as
expected.
Relates #49529
Closes #47929
This rewrites a long-field sort as a `DistanceFeatureQuery`, which can
efficiently skip non-competitive blocks and segments of documents.
Depending on the dataset, the speedup can be 2-10x.
The optimization can be disabled by setting the system property
`es.search.rewrite_sort` to `false`.
The optimization is skipped when 50% or more of an index's data has
the same value.
The optimization is done through:
1. Rewriting the sort as a `DistanceFeatureQuery`, which can
efficiently skip non-competitive blocks and segments of documents.
2. Sorting segments according to the primary numeric sort field (#44021).
This allows skipping non-competitive segments.
3. Using a collector manager.
When we optimize the sort, we sort segments by their min/max values.
As a collector expects to receive segments in order,
we cannot use a single collector for the sorted segments.
Instead we use a collector manager, where a dedicated collector
is created for every segment.
4. Using Lucene's shared TopFieldCollector manager.
This collector manager is able to exchange the minimum competitive
score between collectors, which allows us to efficiently skip
whole segments that don't contain competitive scores.
5. When an index is force-merged to a single segment, interleaving
old and new segments (#48533) allows this optimization as well,
as blocks with non-competitive docs can be skipped.
Closes #37043
Co-authored-by: Jim Ferenczi <jim.ferenczi@elastic.co>
* Optimize sort on numeric long and date fields (#39770)
Optimize sort on numeric long and date fields when
the system property `es.search.long_sort_optimized` is `true`.
* Skip optimization if the index has duplicate data (#43121)
Skip the sort optimization if the index has 50% or more data
with the same value.
When an index has a lot of docs with the same value, the sort
optimization doesn't make sense, as the DistanceFeatureQuery
will produce the same score for these docs, and Lucene
will use the second sort to break ties. This could be slower
than the usual sorting.
* Sort leaves on search according to the primary numeric sort field (#44021)
This change pre-sorts the index reader leaves (segments) prior to the search
when the primary sort is a numeric field eligible for the distance feature
optimization. It also adds a tie breaker on `_doc` to the rewritten sort
to account for the fact that leaves will be collected in a random order.
I ran this patch on the http_logs benchmark and the results are very promising:
```
| 50th percentile latency | desc_sort_timestamp | 220.706 | 136544 | 136324 | ms |
| 90th percentile latency | desc_sort_timestamp | 244.847 | 162084 | 161839 | ms |
| 99th percentile latency | desc_sort_timestamp | 316.627 | 172005 | 171688 | ms |
| 100th percentile latency | desc_sort_timestamp | 335.306 | 173325 | 172989 | ms |
| 50th percentile service time | desc_sort_timestamp | 218.369 | 1968.11 | 1749.74 | ms |
| 90th percentile service time | desc_sort_timestamp | 244.182 | 2447.2 | 2203.02 | ms |
| 99th percentile service time | desc_sort_timestamp | 313.176 | 2950.85 | 2637.67 | ms |
| 100th percentile service time | desc_sort_timestamp | 332.924 | 2959.38 | 2626.45 | ms |
| error rate | desc_sort_timestamp | 0 | 0 | 0 | % |
| Min Throughput | asc_sort_timestamp | 0.801824 | 0.800855 | -0.00097 | ops/s |
| Median Throughput | asc_sort_timestamp | 0.802595 | 0.801104 | -0.00149 | ops/s |
| Max Throughput | asc_sort_timestamp | 0.803282 | 0.801351 | -0.00193 | ops/s |
| 50th percentile latency | asc_sort_timestamp | 220.761 | 824.098 | 603.336 | ms |
| 90th percentile latency | asc_sort_timestamp | 251.741 | 853.984 | 602.243 | ms |
| 99th percentile latency | asc_sort_timestamp | 368.761 | 893.943 | 525.182 | ms |
| 100th percentile latency | asc_sort_timestamp | 431.042 | 908.85 | 477.808 | ms |
| 50th percentile service time | asc_sort_timestamp | 218.547 | 820.757 | 602.211 | ms |
| 90th percentile service time | asc_sort_timestamp | 249.578 | 849.886 | 600.308 | ms |
| 99th percentile service time | asc_sort_timestamp | 366.317 | 888.894 | 522.577 | ms |
| 100th percentile service time | asc_sort_timestamp | 430.952 | 908.401 | 477.45 | ms |
| error rate | asc_sort_timestamp | 0 | 0 | 0 | % |
```
So roughly 10x faster for the descending sort and 2-3x faster in the ascending case. Note
that I indexed the http_logs data with a single client in order to simulate real time-based indices
where documents are indexed in timestamp order.
Relates #37043
* Remove nested collector in docs response
As we don't use cancellableCollector anymore, it should be removed from
the expected docs response.
* Use collector manager for search when necessary (#45829)
When we optimize the sort, we sort segments by their min/max values.
As a collector expects to receive segments in order,
we cannot use a single collector for the sorted segments.
Thus, for such a case, we use a collector manager,
where a dedicated collector is created for every segment.
* Use shared TopFieldCollector manager
Use a shared TopFieldCollector manager for the sort optimization.
This collector manager is able to exchange the minimum competitive
score between collectors.
* Correct calculation of avg value to avoid overflow
* Optimize calculating if index has duplicate data
In case an exception occurs inside a pipeline processor,
the pipeline stack is kept around as a header in the exception.
Then, in the `on_failure` processor, the id of the pipeline in which the
exception occurred is made accessible via the `on_failure_pipeline`
ingest metadata field.
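A minimal sketch of how the new metadata could be consumed (pipeline and field names are hypothetical):

```
PUT /_ingest/pipeline/outer-pipeline
{
  "processors": [
    { "pipeline": { "name": "inner-pipeline" } }
  ],
  "on_failure": [
    {
      "set": {
        "field": "error.failed_pipeline",
        "value": "{{ _ingest.on_failure_pipeline }}"
      }
    }
  ]
}
```

If `inner-pipeline` throws, the `on_failure` handler records which pipeline the exception occurred in.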
Closes #44920
The default is set to `Integer.MAX_VALUE` but is reported to be `0` in the docs.
With the current implementation, a value of `0` would mean all terms are filtered
out, which is the opposite of "unbounded".
Closes #49520
* Adds a title abbreviation
* Relocates the older name deprecation warning
* Updates the description and adds a Lucene link
* Adds a note to explain payloads and how to store them
* Adds analyze and custom analyzer snippets
* Adds a 'Return stored payloads' example
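Assuming the filter in question is `delimited_payload`, an analyze snippet along these lines illustrates the behaviour (the sample text is hypothetical):

```
GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": [ "delimited_payload" ],
  "text": "the|0 brown|10 fox|5"
}
```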
The categorization job wizard in the ML UI will use this
information when showing the effect of the chosen categorization
analyzer on a sample of input.
This commit enhances the required pipeline functionality by changing it
so that default/request pipelines can also be executed, but the required
pipeline is always executed last. This gives users the flexibility to
execute their own indexing pipelines while also ensuring that any required
pipelines are executed. Since such pipelines are executed last, we
rename required pipelines to final pipelines.
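Assuming the feature is exposed as an index-level `index.final_pipeline` setting alongside `index.default_pipeline`, usage could look like this sketch (index and pipeline names are hypothetical):

```
PUT /my-index/_settings
{
  "index.default_pipeline": "my-default-pipeline",
  "index.final_pipeline": "my-final-pipeline"
}
```

The request-level or default pipeline runs first; the final pipeline always runs last.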
This commit replaces the _estimate_memory_usage API with
a new API, the _explain API.
The API consolidates information that is useful before
creating a data frame analytics job.
It includes:
- memory estimation
- field selection explanation
Memory estimation moves here from the _estimate_memory_usage API,
where it was previously calculated.
Field selection is a new feature that explains to the user
whether each available field was selected for inclusion in
the analysis. If a field was not included, it also
explains why.
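A minimal sketch of calling the new API (the index name and analysis are hypothetical):

```
POST _ml/data_frame/analytics/_explain
{
  "source": { "index": "customer-data" },
  "analysis": { "outlier_detection": {} }
}
```

The response is expected to contain both the memory estimation and the per-field selection explanation.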
Reformats the edge n-gram and n-gram token filter docs. Changes include:
* Adds title abbreviations
* Updates the descriptions and adds Lucene links
* Reformats parameter definitions
* Adds analyze and custom analyzer snippets
* Adds notes explaining differences between the edge n-gram and n-gram
filters
Additional changes:
* Switches titles to use "n-gram" throughout.
* Fixes a typo in the edge n-gram tokenizer docs
* Adds an explicit anchor for the `index.max_ngram_diff` setting
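For example, an analyze snippet along these lines shows the `edge_ngram` filter at work (parameters chosen arbitrarily):

```
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [
    { "type": "edge_ngram", "min_gram": 1, "max_gram": 3 }
  ],
  "text": "quick"
}
```

This returns the edge n-grams `q`, `qu`, and `qui`.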