Synced flush is going to be replaced by flush. This commit allows the `synced_flush` API only in v7 compatibility mode.
Worth noting: `sync_id` is gone and won't be available in v7 responses from `indices.stats`.
Relates removal PR #50882
Relates #51816
The node executing a shard level operation would in many cases communicate `null` for the shard state update,
leading to follow-up operations incorrectly assuming an empty shard snapshot directory and starting from scratch.
Closes #75598
Today when a task is cancelled we record the reason for the cancellation
but this information is very rarely exposed to users. This commit
centralises the construction of the `TaskCancellationException` and
includes the reason in the exception message.
Closes #74825
We only create a `ReceiveTimeoutTransportException` in one place, the
timeout handler for the corresponding transport request, so the stack
trace contains no useful information and just adds noise if it is ever
logged. With this commit we drop the stack trace from these exceptions.
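A minimal sketch of the standard Java idiom for this, using a hypothetical exception class (the real class has a different superclass):
```java
// Sketch only: an exception whose stack trace is never informative can skip
// stack-trace capture entirely by overriding fillInStackTrace().
public class NoTraceTimeoutException extends RuntimeException {
    public NoTraceTimeoutException(String message) {
        super(message);
    }

    @Override
    public synchronized Throwable fillInStackTrace() {
        return this; // always created in the one timeout handler, so no trace needed
    }
}
```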
In #75454 we changed our dynamic shadowing logic to check that an unmapped
field was truly shadowed by a runtime field before returning no-op mappers. However,
this does not handle the case where the runtime field can have multiple subfields, as
will be true for the upcoming composite field type. We instead need to check that
the field in question would not be shadowed by any field type returned by any
runtime field.
This commit abstracts this logic into a new isShadowed() method on
DocumentParserContext, which uses a set of runtime field type names built from
the mapping lookup at construction time. It also simplifies the no-op mapper
slightly by making it a singleton object, as we don't need to preserve field names
here.
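A minimal sketch of the check described above, assuming the set of runtime field type names has already been collected from the mapping lookup (class and field names here are illustrative, not the actual implementation):
```java
import java.util.Set;

final class ShadowingLookup {
    private final Set<String> runtimeFieldTypeNames; // built once at construction

    ShadowingLookup(Set<String> runtimeFieldTypeNames) {
        this.runtimeFieldTypeNames = Set.copyOf(runtimeFieldTypeNames);
    }

    /**
     * A dynamically-detected field is a no-op if any runtime field (including
     * a subfield of a composite runtime field) shadows it.
     */
    boolean isShadowed(String field) {
        return runtimeFieldTypeNames.contains(field);
    }
}
```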
This commit adds a new master transport action TransportGetShardSnapshotAction
that allows getting the last successful snapshot for a particular
shard in a set of repositories. It deals with the different
implementation details around BwC for repositories.
Relates #73496
The original PR #75264 made some test mistakes.
NXY significant terms heuristics have additional values that need to be set when testing
`basicScore` properties.
Additionally, the previous refactor kept the abstract test class in a package that other plugins
don't have access to.
Closes #75442, #75561
Nested objects are implemented via a Nested class directly on object mappers,
even though nested and non-nested objects have quite different semantics. In
addition, most call-sites that need to get an object mapper in fact need a nested
object mapper. To make it clearer that nested and object mappers are different
beasts with different implementations and different requirements, we should
split them into different classes.
* Fix up shard generations in `SnapshotsInProgress` during snapshot finalization (don't do it earlier because it's a really heavy computation and we have a ton of places where it would have to run).
* Adjust the finalization queue to be able to work with changing snapshot entries after they've been enqueued for finalization
* There is still one remaining bug after this (see TODO about leaking generations) that I don't feel confident fixing for `7.13.4` due to the complexity of a fix and how minor the blob leak is (+ it's cleaned up just fine during snapshot deletes)
Closes #75336
Adds a field usage API that reports shard-level statistics about which Lucene fields have been accessed, and which
parts of the Lucene data structures have been accessed.
Field usage statistics are automatically captured when queries are running on a cluster. A shard-level search request
that accesses a given field, even if it does so multiple times during that request, is counted as a single use.
In #74081 this test failed with a `NoNodeAvailableException` within the
`indexRandom()` call immediately after stopping a node. This could
happen if the `node-left` event wasn't fully applied before calling
`indexRandom()` with an empty list of docs but with `forceRefresh` set
to true: since there are no docs, the replica wouldn't be marked as stale,
so the final refresh would detect the missing node, failing its
`assertNoFailures` wrapper.
This commit avoids calling `indexRandom()` with no docs in this
location. It also enhances `assertNoFailures` to report the details of
each failure, rather than just the summary.
Closes #74081
This updates the `mapmatcher` test assertion library that we use to pick
up a fix for error messages when you expect a `Map` or a `List` but get
*nothing*. Now it says something sensible like:
```
key: expected a map but was <missing>
```
instead of the confusing
```
key: expected a map containing
<the stuff the map was expected to contain> but was missing
```
Relates to #74721
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
This is related to #73497. Currently replica requests are wrapped in a
concrete replica shard request. This leads to the transport layer not
identifying them as replica index_data requests and therefore not
compressing them properly. This commit resolves this bug.
ParseContext is used to parse documents. It was easily confused with ParserContext (now renamed to MappingParserContext) which is instead used to parse mappings.
To remove any confusion, this commit renames ParseContext to DocumentParserContext and adapts its subclasses accordingly.
We currently have one ParseContext class, which is used to parse incoming documents, not to be confused with the former ParserContext (now renamed to MappingParserContext) which is instead used to parse mappings.
There are a few implementations of ParseContext, but mostly InternalParseContext is used. There is also a FilterParseContext that delegates to a given context for all methods besides the ones it explicitly overrides.
This commit attempts to simplify ParseContext by extracting its InternalParseContext implementation, moving it into DocumentParser where it's used, and making it private, so that the super-class can be used elsewhere. This hides some implementation details that only InternalParseContext knows about, concerning nested documents and the way they are stored in Lucene.
Also, we introduce separate test implementations in place of reusing InternalParseContext in tests.
Additionally, FilterParseContext can be greatly simplified by relying on a copy constructor, which means it no longer has to override every single method to delegate to the provided context, at least for behaviour that can't be overridden (final methods).
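A small illustration of that pattern with made-up names: copying the delegate's state once means even final methods on the base class return correct values, with no per-method delegation.
```java
class Context {
    private final String indexName; // example of state shared via copying

    Context(String indexName) {
        this.indexName = indexName;
    }

    protected Context(Context in) {
        this.indexName = in.indexName;
    }

    final String indexName() { // final: cannot be overridden, yet still correct
        return indexName;
    }
}

class FilterContext extends Context {
    FilterContext(Context in) {
        super(in); // no need to override every method just to delegate
    }
}
```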
Today the master service processes pending tasks in priority order. If
high-priority tasks arrive too frequently then low-priority tasks are
starved of access to the master service and are not executed. This can
cause certain tasks to appear to be stuck due to apparently-unrelated
overloads elsewhere.
With this commit we measure the interval between times when the pending
task queue is empty; if this interval exceeds a configurable threshold
then we log a warning.
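A minimal sketch of the measurement, with hypothetical names and a stderr print standing in for the real logging infrastructure:
```java
final class PendingTasksStarvationWatcher {
    private final long warnThresholdMillis; // configurable threshold
    private long lastEmptyMillis;

    PendingTasksStarvationWatcher(long warnThresholdMillis) {
        this.warnThresholdMillis = warnThresholdMillis;
        this.lastEmptyMillis = System.currentTimeMillis();
    }

    // Called whenever the master service observes its pending task queue.
    void onQueueObserved(boolean queueIsEmpty) {
        final long now = System.currentTimeMillis();
        if (queueIsEmpty) {
            lastEmptyMillis = now;
        } else if (now - lastEmptyMillis > warnThresholdMillis) {
            System.err.printf("pending task queue has not been empty for [%dms]%n",
                now - lastEmptyMillis); // the real code emits a WARN log instead
        }
    }
}
```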
`o.e.c.coordination.DeterministicTaskQueue` is today used in various
places, not just for tests of the cluster coordination subsystem. It's
also a bit of a pain to construct, requiring a nonempty `Settings` and a
`Random` even though essentially everyone passes in the same values.
This commit moves this class to the more generic `o.e.c.util.concurrent`
package, adds some Javadoc, and makes it easier to construct.
Reading from translog during a realtime get requires special handling in some higher level components, e.g.
ShardGetService, where we're doing a bunch of tricks to extract other stored fields from the source. Another issue with
the current approach relates to #74227 where we introduce a new "field usage tracking" directory wrapper that's always
applied, and we want to make sure that we can still quickly do realtime gets from translog without creating an in-memory
index of the document, even when this directory wrapper exists.
This PR introduces a directory reader that contains a single translog indexing operation. This can be used during a
realtime get to access documents that haven't been refreshed yet. In the normal case, all information relevant to resolve
the realtime get is mocked out to provide fast access to _id and _source. In cases where more values are requested (e.g.
access to other stored fields), this reader will index the document into an in-memory Lucene segment that is
created on demand.
Relates #64504
This commit is related to #73497. It adds two new settings. The first setting
is transport.compression_scheme. This setting allows the user to
configure LZ4 or DEFLATE as the transport compression. Additionally, it
modifies transport.compress to support the value indexing_data. When
this setting is set to indexing_data, only messages which are primarily
composed of raw source data will be compressed. These are bulk, operations
recovery, and shard changes messages.
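As a sketch, the two settings might be combined like this (the exact value strings accepted, e.g. "lz4", are assumptions here):
```java
import org.elasticsearch.common.settings.Settings;

class TransportCompressionSettingsExample {
    static Settings example() {
        return Settings.builder()
            .put("transport.compression_scheme", "lz4")  // or "deflate"; value strings assumed
            .put("transport.compress", "indexing_data")  // only compress raw-source-heavy messages
            .build();
    }
}
```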
This PR adds a new API for doing streaming serialization writes to a repository to enable repository metadata of arbitrary size and at bounded memory during writing.
The existing write-APIs require knowledge of the eventual blob size beforehand. This forced us to materialize the serialized blob in memory before writing, costing a lot of memory in case of e.g. very large `RepositoryData` (and limiting us to `2G` max blob size).
With this PR the requirement to fully materialize the serialized metadata goes away and the memory overhead becomes completely bounded by the outbound buffer size of the repository implementation.
As we move to larger repositories this makes master node stability a lot more predictable since writing out `RepositoryData` does not take as much memory any longer (the same applies to shard-level metadata), enables aggregating multiple metadata blobs into a single larger blob without massive overhead, and removes the 2G size limit on `RepositoryData`.
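A hypothetical shape for such an API; the interface and method names below are illustrative, not the actual repository code:
```java
import java.io.IOException;
import java.io.OutputStream;

interface StreamingBlobWriter {
    @FunctionalInterface
    interface Writer {
        void writeTo(OutputStream out) throws IOException;
    }

    // The caller streams the serialized metadata directly into the blob, so
    // memory use is bounded by the repository's outbound buffer rather than
    // the full serialized size (no need to know the blob size up front).
    void writeBlob(String blobName, Writer writer) throws IOException;
}
```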
Added the `dimension` parameter to the following field types:
* keyword
* ip
* Numeric field types (integer, long, byte, short)
The dimension parameter is of type boolean (default: false) and is used
to mark that a field is a time series dimension field.
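For illustration, a mapping that marks a keyword field as a dimension might be built like this (the field name and builder usage are just an example):
```java
import java.io.IOException;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;

class DimensionMappingExample {
    static XContentBuilder mapping() throws IOException {
        return XContentFactory.jsonBuilder()
            .startObject()
                .startObject("properties")
                    .startObject("host.name")
                        .field("type", "keyword")
                        .field("dimension", true) // new parameter, defaults to false
                    .endObject()
                .endObject()
            .endObject();
    }
}
```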
Relates to #74014
This PR returns the get snapshots API to the 7.x format (and transport client behavior) and enhances it for requests that ask for multiple repositories.
The changes for requests that target multiple repositories are:
* Add `repository` field to `SnapshotInfo` and REST response
* Add `failures` map alongside `snapshots` list instead of returning just an exception response as done for single repo requests
* Pagination now works across repositories instead of being per repository for multi-repository requests
Closes #69108, closes #43462
There is no reason for Document to be an inner class of ParseContext, especially as it is public and accessed directly from many different places.
This commit moves it out to its own top-level class file, which also has the advantage of simplifying ParseContext, a class that could use some love too.
At the moment the thread context is passed via dispatchRequest, but in some
places it is fetched directly from the thread pool.
This is not a problem in production, because the thread pool is initialized
with the same thread context as the one passed to dispatchRequest via
AbstractHttpServerTransport.
It might be harder to understand, though, and might cause problems when
testing in smaller units.
SpanBoostQuery will be removed in Lucene 9.0. It is currently a no-op anyway,
unless it appears at the top level of a span query tree, in which case it is
equivalent to a standard BoostQuery. This commit removes references to
SpanBoostQuery from Elasticsearch SpanQueryBuilders, replacing it with
BoostQuery where appropriate.
It also adds a new, breaking, check to field_masking_span to ensure that
its inner query does not have a boost set on it, bringing it into line with all
other span queries that wrap inner spans.
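For example, a boosted top-level span query is now built with a plain BoostQuery; a sketch using Lucene APIs directly:
```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.BoostQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.spans.SpanTermQuery;

class SpanBoostExample {
    static Query boostedSpan() {
        // A SpanBoostQuery at the top of a span tree was equivalent to this:
        SpanTermQuery span = new SpanTermQuery(new Term("body", "quick"));
        return new BoostQuery(span, 2.0f);
    }
}
```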
ParserContext is an inner class of Mapper.TypeParser but is used outside of the context of parsing mappers, for instance also to parse runtime fields. Its purpose is to be used to parse mappings in general, and its name is confusing as it differs ever so slightly from ParseContext which is used for parsing incoming documents.
This commit moves ParserContext to be a top-level class, and renames it to MappingParserContext.
This will allow components to add custom metadata to deprecation issues.
This makes extracting additional details about deprecations more robust;
otherwise these details need to be parsed from the deprecation message field.
Adjusted the ml model snapshot deprecation to use custom metadata, and
included the job id and snapshot id as custom metadata.
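As a sketch, the metadata attached to the ml model snapshot deprecation might look like this (the map keys are assumptions):
```java
import java.util.Map;

class DeprecationMetaExample {
    // Structured metadata instead of parsing ids out of the message text.
    static Map<String, Object> modelSnapshotMeta(String jobId, String snapshotId) {
        return Map.of("job_id", jobId, "snapshot_id", snapshotId); // keys assumed
    }
}
```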
Closes #73089
We barely test the correct handling of user metadata directly.
With upcoming changes to how `SnapshotInfo` is stored it would be nice
to have better test coverage. This PR adds randomized coverage of serializing
user metadata to a large number of tests that all use the shared infrastructure
that is adjusted here.
I was helping some folks debug an issue with the terms agg and noticed
that we didn't always have the `total_buckets` debug information. I also
noticed that we can't tell how many buckets we build, so I added that
too as `built_buckets`.
Finally, I noticed that when we're using segment ords we count segments
without any values as "multi-valued". We can do better there and count
them as no-valued. That will, mostly, just improve the profiling. When
we collect from global ords we have no way to tell how many values are
on the segment so segments without any values will, sadly, in this case
still be miscounted as multi-valued.
Pagination and snapshots for the get snapshots API, built on top of the current implementation, to enable work that needs this API for testing. A follow-up will leverage the changes to make things more efficient via pagination.
Relates https://github.com/elastic/elasticsearch/pull/73570 which does part of the under-the-hood changes required to efficiently implement this API on the repository layer.
Change the formatter config to sort / order imports, and reformat the
codebase. We already had a config file for Eclipse users, so Spotless now
uses that.
The "Eclipse Code Formatter" plugin ought to be able to use this file as
well for import ordering, but in my experiments the results were poor.
Instead, use IntelliJ's `.editorconfig` support to configure import
ordering.
I've also added a config file for the formatter plugin.
Other changes:
* I've quietly enabled the `toggleOnOff` option for Spotless. It was
already possible to disable formatting for sections using the markers
for docs snippets, so enabling this option just accepts this reality
and makes it possible via `formatter:off` and `formatter:on` without
the restrictions around line length (see the sketch after this list). It
should still only be used as a very last resort and with good reason.
* I've removed mention of the `paddedCell` option from the contributing
guide, since I haven't had to use that option for a very long time. I
moved the docs to the spotless config.
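A sketch of the toggle markers in use (a contrived example; use sparingly):
```java
class ToggleExample {
    // formatter:off
    static final int[][] GRID = {
        { 1, 0, 0 },
        { 0, 1, 0 },
        { 0, 0, 1 },
    };
    // formatter:on
}
```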
This PR refactors the `Repository` API for fetching `SnapshotInfo` to enable implementations to optimize for bulk fetching multiple `SnapshotInfo` at once. This is a requirement for making use of a more efficient repository format that does not require loading individual blobs per snapshot to fetch a snapshot listing. Also, by enabling consuming `SnapshotInfo` instances as they are fetched on the snapshot meta thread, this allows for more memory-efficient snapshot listing.
Also, this commit makes use of the new API to make the snapshot status API run a little more in parallel when fetching multiple snapshots (though additional improvements are possible and useful here as far as fetching shard-level metadata in parallel).
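A hypothetical shape for the refactored fetch API, with simplified generic types standing in for the real ones:
```java
import java.util.Collection;
import java.util.function.Consumer;

interface SnapshotInfoReader<ID, INFO> {
    // Implementations may batch-load many snapshot metadata blobs at once and
    // push each INFO to the consumer as it is read on the snapshot meta thread,
    // instead of callers requesting them one blob at a time.
    void getSnapshotInfo(Collection<ID> snapshotIds,
                         Consumer<INFO> onSnapshotInfo,
                         Runnable onCompletion);
}
```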
The reverted commit contained a bug when merging deeply-nested mappers
which was causing errors downstream. Reverting to unblock while the bug
is fixed.
This reverts commit 29ee4202a2.
If a `NodeDisconnectedException` happens when sending a ban for a task
then today we log a message at `INFO` or `WARN` indicating that the ban
failed, but we don't indicate why. The message also uses a default
`toString()` for an inner class which is unhelpful.
Ban failures during disconnections are benign and somewhat expected, and
task cancellation respects disconnections anyway (#65443). There's not
much the user can do about these messages either, and they can be
confusing and draw attention away from the real problem.
With this commit we log the failure messages at `DEBUG` on
disconnections, and include the exception details. We also include the
exception message for other kinds of failures, and we fix up a few cases
where a useless default `toString()` implementation was used in log
messages.
Slightly relates #72968 in that these messages tend to obscure a
connectivity issue.
With work incoming in #73570 to make repository APIs more async,
we need a non-blocking way to run this check. This adds that async
check and removes the need to manually pass executors around as well.