Nested objects are implemented via a Nested class directly on object mappers,
even though nested and non-nested objects have quite different semantics. In
addition, most call-sites that need to get an object mapper in fact need a nested
object mapper. To make it clearer that nested and object mappers are different
beasts with different implementations and different requirements, we should
split them into different classes.
We run some cleanup logic to delete indices, data streams,
auto-follow patterns or searchable snapshot indices in some test
classes after a test case is executed. Today we either fail or log
a warning if the cleanup fails, but I think we should simply
ignore a 404 - Not Found response exception, like we do in
other places for regular indices.
Note that this change applies only:
- when cleaning up searchable snapshot indices in ESRestTestCase
- when cleaning up indices, data streams and auto-follow patterns in AutoFollowIT
We changed the default Joda behaviour in strict_date_optional_time to
allow a maximum of 4 digits in the year. The java.time implementation should
behave the same way. At the same time, date_optional_time should allow 9 digits
for the year part.
closes #52396, closes #72191
* Add new thread pool for critical operations
* Split critical thread pool into read and write
* Add POJO to hold thread pool names
* Add tests for critical thread pools
* Add thread pools to data streams
* Update settings for security plugin
* Retrieve ExecutorSelector from SystemIndices where possible
* Use a singleton ExecutorSelector
FieldTypeLookup and MappingLookup expose the getMatchingFieldTypes method to look up matching field type by a string pattern. We have migrated ExistsQueryBuilder to instead rely on getMatchingFieldNames, hence we can go ahead and remove the remaining usages and the method itself.
The remaining usages are to find specific field types from the mappings, specifically to eagerly load global ordinals and for the join field type. These are operations that are performed only once when loading the mappings, and may be refactored to work differently in the future. For now, we remove getMatchingFieldTypes and instead, for the two scenarios mentioned, call getMatchingFieldNames("*") and then getFieldType for each of the returned field names. This is a bit wasteful, but performance can be sacrificed in these scenarios in favour of less code to maintain.
The ensureNoWarnings method should assert that there are no warnings
other than the allowed "predefined" warnings in the filteredWarnings() method.
Fixes a bug introduced in #71207.
Sometimes our fancy "run this agg as a Query" optimizations end up
slower than running the aggregation in the old way. We know that and use
heuristics to disable the optimization in that case. But it turns out
that the process of running the heuristics itself can be slow, depending
on the query. Worse, changing the heuristics requires an upgrade, which
means waiting. If the heuristics make a terrible choice folks need a
quick way out. This adds such a way: a cluster level setting that
contains a list of queries that are considered "too expensive" to try
and optimize. If the top level query contains any of those queries we'll
disable the "run as Query" optimization.
The default for this setting is wildcard and term-in-set queries, which
is fairly conservative. There are certainly wildcard and term-in-set
queries that the optimization works well with, but there are other queries
of that type that it works very badly with. So we're being careful.
Better, you can modify this setting in a running cluster to disable the
optimization if we find a new type of query that doesn't work well.
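For illustration, toggling the list at runtime could look like the sketch below; note that the setting name and query type strings here are assumptions for illustration, not the exact names this change introduces:
```
# NOTE: the setting name below is hypothetical, for illustration only
PUT _cluster/settings
{
  "persistent": {
    "search.aggs.forbidden_query_types": ["wildcard", "term_in_set", "fuzzy"]
  }
}
```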
Closes #73426
This commit adds some cleanup logic to ESRestTestCase so
that searchable snapshot indices are deleted after test case
executions, before the snapshots and repositories are wiped out.
Backport of #73555
This template was added in #64978; however, there can be some test failures if we try to remove
built-in templates. It was missing from the list and now needs to be added back.
This allows indexing documents into a data stream alias.
Ingestion is then forwarded to the write index of the data stream
that is marked as the write data stream.
The `is_write_index` parameter can be used to indicate which data stream is the write data stream
when updating / adding a data stream alias.
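For example, marking the write data stream for an alias could look like this sketch (the data stream and alias names are made up):
```
POST _aliases
{
  "actions": [
    {
      "add": {
        "index": "logs-myapp",
        "alias": "logs",
        "is_write_index": true
      }
    }
  ]
}
```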
Relates to #66163
Ever since I wrote `NotEqualsMessageBuilder` I've thought to myself
"if this were a hamcrest matcher we could use it everywhere and get
nicer error messages." A few weeks ago I finally built a work-alike
hamcrest matcher that I think produces better error messages. This plugs
that matcher into the `MatchAssertion` used by our yaml and docs tests.
Upgrades to Lucene-8.9 snapshot which includes:
- LUCENE-9507: Custom order for leaves (/cc @mayya-sharipova)
- LUCENE-9935: Enable bulk merge for stored fields with index sort
MappingLookup has a method simpleMatchToFullName that attempts
to return all field names that match a given pattern; if no patterns match,
then it returns a single-valued collection containing just the pattern that
was originally passed in. This is a fairly confusing semantic.
This PR replaces simpleMatchToFullName with two new methods:
* getMatchingFieldNames(), which returns a set of all mapped field names
that match a pattern. Calling getFieldType() with a name returned by
this method is guaranteed to return a non-null MappedFieldType
* getMatchingFieldTypes(), which returns a collection of all MappedFieldTypes
in a mapping that match the passed-in pattern.
This allows us to clean up several call-sites because we know that
MappedFieldTypes returned from these calls will never be null. It also
simplifies object field exists query construction.
This commit serves two purposes. For one, we need the ability to dynamically
update a repository setting for the encrypted repository work.
Also, this allows dynamically updating repository rate limits while snapshots are
in progress. This has often been an issue in the past where a long running snapshot
made progress over a long period of time already but is going too slowly with the
current rate limit. This left no good options: either throw away the existing
partly-done snapshot's work and recreate the repo with a higher rate limit to speed
things up, or wait a long time at the current rate limit.
With this change the rate limit can simply be increased while a snapshot or restore
is running and will take effect immediately.
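As a sketch, re-registering the repository with higher limits now takes effect even mid-snapshot (repository name, location and values are illustrative):
```
PUT _snapshot/my-backup-repo
{
  "type": "fs",
  "settings": {
    "location": "backups",
    "max_snapshot_bytes_per_sec": "200mb",
    "max_restore_bytes_per_sec": "200mb"
  }
}
```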
Aliases to data streams can be defined via the existing update aliases api.
Aliases can either only refer to data streams or to indices (not both).
Also the existing get aliases api has been modified to support returning
aliases that refer to data streams.
Aliases for data streams are stored separately from data streams
and refer to data streams by name and not to the backing indices of
a data stream. This means that when backing indices are added to or removed
from a data stream, the data stream alias doesn't need to be
updated.
The authorization model for aliases that refer to data streams is the
same as for aliases that refer to indices. In security, privileges can
be defined on aliases, indices and data streams. When a privilege is
granted on an alias then access is also granted on the indices that
the alias refers to (regardless of whether privileges are granted or denied
on the actual indices). The same applies for aliases that refer
to data streams. See for more details:
https://github.com/elastic/elasticsearch/issues/66163#issuecomment-824709767
Relates to #66163
Use an iterator instead of a list when passing around what to delete.
In the case of very large deletes the iterator is much smaller than
the actual list of files to delete (since we save all the prefixes
which adds up if the individual shard folders contain lots of deletes).
Also, as a side effect, this commit adjusts a few spots in logging where the
log messages could be catastrophic in size when trace logging is activated.
There should be a singleton for the empty version of this.
All the copying to `String[]` or use as an iterator makes
no sense either when we can just use the list outright.
This commit removes the bootstrap.system_call_filter setting, as
starting in Elasticsearch 8.0.0 we are going to require that system call
filters be installed and that this is not user configurable. Note that
while we force bootstrap to attempt to install system call filters, we
only enforce that they are installed via a bootstrap check in production
environments. We can consider changing this behavior, but leave that for
future consideration and thus a potential follow-up change.
Makes the following changes:
- The node shutdown feature flag isn't set on the test runner, only the
cluster JVMs, so we can't use it to check here. Instead, the cleanup
now infers whether it's enabled from the shape of the first
GET `_nodes/shutdown` response.
- Now uses `adminClient()` instead of `client()`
- Removes the unnecessary `instanceof` check, which was *not* due to parsing
but to the fact that `nodes` is indeed a map if the feature flag isn't enabled.
Extract usage of internal API from TestClustersPlugin and PluginBuildPlugin and related plugins and build logic
This includes a refactoring of ElasticsearchDistribution to handle types
better, in a way that lets us differentiate between Elasticsearch
distribution types supported in TestClustersPlugin and types only supported
in internal plugins.
It also introduces a set of internal versions of public plugins.
As part of this we also generate the plugin descriptors now.
As a follow-up to this we can actually move these publicly used classes into
an extra project (declared as an included build).
We keep LoggedExec and VersionProperties effectively public, and keep a workaround for RestTestBase.
The query phase applies an optimization when sorting by a numeric field.
This optimization doesn't handle early termination correctly when `timeout`
and/or `terminate_after` are used. An IllegalArgumentException is thrown at the shard
level when the timeout is reached.
This commit fixes the bug: early-termination exceptions are correctly caught
and the result is computed from the documents that the shard was able to collect
before the termination.
Closes #72661
This change adds a test module called `error-query` that exposes a
query builder to simulate errors and warnings on shard search request.
The query accepts a list of indices and shard ids where errors or warnings
should be reported:
```
POST test*/_search
{
  "query": {
    "error_query": {
      "indices": [
        {
          "name": "test_exception",
          "shard_ids": [1],
          "error_type": "exception",
          "message": "boom"
        },
        {
          "name": "test_warn*",
          "error_type": "warning",
          "message": "Watch out!"
        }
      ]
    }
  }
}
```
The `error_type` can be set to `exception` or `warning`, and `name` accepts
simple patterns, aliases and fully qualified index names if the search targets remote shards.
This module is published only within snapshots, like the other test modules.
Relates #70784
Dries up the efficient way to hash a bytes reference and makes use
of it in a few other spots that were needlessly copying all bytes in
the bytes reference for hashing.
This commit ensures that node shutdown metadata is cleaned up between
tests, as it causes unrelated tests to fail if a test leaves node
shutdown metadata in place.
If this runs needlessly for large repositories (especially in timeout/retry situations)
it's a significant memory+cpu hit => made it cancellable like we recently did for many
other endpoints.
DocumentMapperForType is used to create a document mapper when no mappings exists for an index and we are indexing the first document in it. This is only to cover for the edge case of empty docs, without any fields to dynamically map, being indexed, as we need to ensure that any index with at least one document in it has some mappings.
We can replace using DocumentMapperForType with the same logic that MapperService#documentMapperWithAutoCreate includes. This also helps clean up the only case where we create a DocumentMapper from its public constructor, which can be removed and replaced by a more targeted static method.
We recently replaced some usages of DocumentMapper with MappingLookup in the search layer, as document mapper is mutable which can cause issues. In order to do that, MappingLookup grew and became quite similar to DocumentMapper in what it does and holds.
In many cases it makes sense to use MappingLookup instead of DocumentMapper, and we may even be able to remove DocumentMapper entirely in favour of MappingLookup in the long run.
This commit replaces some of its straight-forward usages.
The leak tracking can be run for every test while the existing solution would only work
with a very limited set of tests giving us no coverage on pages that weren't acquired through
the mock transport service.
MappingLookup has been introduced to expose a snapshot of the mappings to the search layer. We have been using it more and more over time as it is convenient and always non null.
This commit documents some of its semantics and makes it easier to trace when it is created with limited functionalities (without a document parser, index settings and index analyzers).
DocumentMapper contains some complicated logic to load
metadata fields so that it can build tombstone documents.
However, we only actually need three metadata mappers for
this purpose, and they are all stateless so this logic is
unnecessary. This commit adds two new static methods to
ParsedDocument to build no-op and delete tombstones,
and removes some ceremony elsewhere.
With the introduction of BKD-based geo shape indexing in #32039, the prefix tree indexing method has
been deprecated. From 8.0.0, creating new mappings using the deprecated parameters will no longer be allowed.
MapperTestCase has a check that if a field mapper supports stored fields,
those stored fields are available to index time scripts. Many of our mappers
do not support stored fields, and we try to catch this with an assumeFalse
so that those mappers do not run this test. However, this test is fragile - it
does not work for mappers created with an index version below 8.0, and it
misses mappers that always store their values, e.g. match_only_text.
This commit adds a new supportsStoredField method to MapperTestCase,
and overrides it for those mappers that do not support storing values. It
also adds a minimalStoredMapping method that defaults to the minimal
mapping plus a store parameter, which is overridden by match_only_text
because storing is not configurable and always available on this mapper.
This commit renames the nodeDataPaths method to be singular and return a
single Path instead of an array. This is done in isolation from other
NodeEnvironment methods to make it reviewable.
relates #71205
The `S3HttpHandler` reads the contents of the uploaded blob, but if the
upload used chunked encoding then the reader would skip one or more
`\r\n` sequences if they appeared at the start of a chunk.
This commit reworks the reader to be stricter about its interpretation
of chunks, and removes some indirection via streams since we can work
pretty much entirely on the underlying `BytesReference` instead.
Closes #72358
The path.data setting is now a singular string, but the method
dataFiles() that gives access to the path is still an array. This commit
renames the method and makes the return type a single Path.
relates #71205
This prevents the `date_histogram` from running out of memory allocating
empty buckets when you set the interval to something tiny like `seconds`
and aggregate over a very wide date range. Without this change we'd
allocate memory very quickly and throw an out-of-memory error, taking
down the node. With it we instead throw the standard "too many buckets"
error.
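For instance, a request along these lines over an index spanning years would previously try to allocate one empty bucket per second and could take down the node; it now trips the bucket limit instead (index and field names are made up):
```
POST my-index/_search?size=0
{
  "aggs": {
    "per_second": {
      "date_histogram": {
        "field": "@timestamp",
        "fixed_interval": "1s"
      }
    }
  }
}
```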
Relates to #71758
Since multiple data path support has been removed, the Setting no longer
needs to support multiple values. This commit converts the
PATH_DATA_SETTING to a String setting from List<String>.
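In other words, elasticsearch.yml now accepts only a single value here (the path below is made up), and the former list form is rejected:
```
path.data: /var/lib/elasticsearch
```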
relates #71205
We shouldn't loop over the listeners under the mutex in `done`, since in most use-cases we use `DirectExecutorService`
with this class.
Also, there is no need to create an `AbstractRunnable` for direct execution. We use this listener on the hot path in authentication,
making this a worthwhile optimization I think.
Lastly, there is no need to clear (and thus loop over) `listeners`; the list is not used again after the `done` call returns,
so there is no point in retaining it at all (especially since in a number of use cases we add listeners only after the `done` call,
so we can also save the instantiation by making the field non-final).
Related to #71593, we move all build logic that is for the Elasticsearch build only into
the org.elasticsearch.gradle.internal* packages.
This makes it clearer which build logic is considered usable by external projects.
Ultimately we want to only expose TestCluster and PluginBuildPlugin logic
to third party plugin authors.
This is a very first step towards that direction.
This commit converts the deprecation messages for multiple data paths
into errors. It effectively removes support for multiple data paths.
relates #71205
I broke composite early termination when reworking aggregations'
contract for `getLeafCollector` around early termination in #70320. We
didn't see it in our tests because we weren't properly emulating the
aggregation collection stage. This fixes early termination by adhering
to the new contract and adds more tests.
Closes #72078
Co-authored-by: Benjamin Trent <4357155+benwtrent@users.noreply.github.com>
Create a class for holding the large number of arguments to this method
and to dry up resource handling across snapshot shard service and the
source-only repository.
We have recently split DocumentMapper creation from parsing Mapping. There was one method leftover that exposed parsing mapping into DocumentMapper, which is generally not needed. Either you only need to parse into a Mapping instance, which is more lightweight, or like in some tests you need to apply a mapping update for which you merge new mappings and get the resulting document mapper. This commit addresses this and removes the method.
Today we track a couple of values for each snapshot in the top-level
`RepositoryData` blob: the overall snapshot state and the version of
Elasticsearch which took the snapshot. In the blob these values are
fields of the corresponding snapshot object, but in code they're kept in
independent maps. In the near future we would like to track some more
values in the same way, but adding a new field for every tracked value
is a little ugly. This commit refactors these values into a single
object, `SnapshotDetails`, so that it can be more easily extended in
future.
In the corner case of uploading a large (>5MB) metadata blob we did not set content validation
requirement on the upload request (we automatically have it for smaller requests that are not resumable
uploads). This change sets the relevant request option to enforce an MD5 hash check when writing
`BytesReference` to GCS (as is the case with all but data blob writes).
closes #72018
This adds a new `match_only_text` field, which indexes the same data as a `text`
field that has `index_options: docs` and `norms: false` and uses the `_source`
for positional queries like `match_phrase`. Unlike `text`, this field doesn't
support scoring.
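A minimal mapping sketch using the new field type (index and field names are made up):
```
PUT my-index
{
  "mappings": {
    "properties": {
      "message": {
        "type": "match_only_text"
      }
    }
  }
}
```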
#71696 introduced a regression to the various shape field mappers,
where they would no longer handle null values. This commit fixes
that regression and adds a testNullValues method to MapperTestCase
to ensure that all field mappers correctly handle nulls.
Fixes #71874
This commit adds support for system data streams and also the first use
of a system data stream with the fleet action results data stream. A
system data stream is one that is used to store system data that users
should not interact with directly. Elasticsearch will manage these data
streams. REST API access is available for external system data streams
so that other stack components can store system data within a system
data stream. System data streams will not use the system index read and
write threadpools.
This drops a few properties from the reproduction info printed when a
test fails because it is implied by the build:
* `tests.security.manager`
* `tests.rest.suite`
* `tests.rest.blacklist`
The two `tests.rest` properties are set by the build *and* duplicate the
`--test` output!
Closes #71290
The frozen tier only partially downloads shards. This commit
introduces an autoscaling decider that scales the total storage
on the tier according to a configurable percentage relative to
the total data set size.
As we started thinking about applying on_script_error to runtime fields, to handle script errors at search time, we would like to use the same parameter that was recently introduced for indexed fields. We decided that `continue` or `fail` gives a better indication of the behaviour compared to the current `ignore` or `reject`, which is too specific to indexing documents.
This commit applies such rename.
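After the rename, an indexed field with a script could be configured like this sketch (field names and script body are made up):
```
PUT my-index
{
  "mappings": {
    "properties": {
      "voltage_corrected": {
        "type": "double",
        "script": {
          "source": "emit(doc['voltage'].value * 1.1)"
        },
        "on_script_error": "continue"
      }
    }
  }
}
```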
Some functionality will no longer work with multiple data paths and in
order to run integration tests for that, we need the capability to
force a single data path for those tests.
Relates #71844
This commit adds some per-index statistics to the `SnapshotInfo` blob:
- number of shards
- total size in bytes
- maximum number of segments per shard
It also exposes these statistics in the get snapshot API.
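A sketch of requesting them via the get snapshot API; the exact response field names here are assumptions:
```
# per-index shard count, size in bytes and max segments per shard
# are returned in the response (field names assumed)
GET _snapshot/my-repo/my-snapshot?index_details=true
```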
This commit allows using the include_type_name parameter with the compatible REST API.
Support for include_type_name was previously removed in #48632.
relates #51816
types removal meta issue #54160
Adds support for the close_to assertion to YAML tests. The assertion can be called
the following way:
```
- close_to: { get.fields._routing: { value: 5.1, error: 0.00001 } }
```
Closes #71303
This PR makes sure MockSearchService collects ARS statistics. Before, if we
randomly chose to use MockSearchService then ARS information would be missing
and the test would fail.
Also makes the following fixes:
* Removes a test workaround for the bug #71022, which is now fixed.
* Handles the case where nodes have the same rank, to prevent random failures.
This speeds up the `terms` aggregation when it can't take the fancy
`filters` path, there is more than one segment, and any of those
segments have only a single value for the field. These three things are
super common.
Here are the performance change numbers:
```
| 50th percentile latency | date-histo-string-terms-via-global-ords | 3414.02 | 2632.01 | -782.015 | ms |
| 90th percentile latency | date-histo-string-terms-via-global-ords | 3470.91 | 2756.88 | -714.031 | ms |
| 100th percentile latency | date-histo-string-terms-via-global-ords | 3620.89 | 2875.79 | -745.102 | ms |
| 50th percentile service time | date-histo-string-terms-via-global-ords | 3410.15 | 2628.87 | -781.275 | ms |
| 90th percentile service time | date-histo-string-terms-via-global-ords | 3467.36 | 2752.43 | -714.933 | ms | 20%!!!!
| 100th percentile service time | date-histo-string-terms-via-global-ords | 3617.71 | 2871.63 | -746.083 | ms |
```
This works by hooking global ordinals into `DocValues.unwrapSingleton`.
Without this you could unwrap singletons *if* the segment's ordinals
aligned exactly with the global ordinals. If they didn't, we'd return a
doc values iterator that you can't unwrap, even if the segment ordinals
were singletons.
That speeds up the terms aggregator because we have a fast path we can
take if we have singletons. It was previously only working if we had a
single segment. Or if the segment's ordinals lined up exactly. Which,
for low cardinality fields is fairly common. So they might not benefit
from this quite as much as high cardinality fields.
Closes #71086
Revamps the integration tests for the `filter` agg to be more clear and
builds integration tests for the `filters` agg. Both of these
integration tests are fairly basic but they do assert that the aggs
work.
This PR introduces a new query called `combined_fields` for searching multiple
text fields. It takes a term-centric view, first analyzing the query string
into individual terms, then searching for each term in any of the fields as though
they were one combined field. It is based on Lucene's `CombinedFieldQuery`,
which takes a principled approach to scoring based on the BM25F formula.
This query provides an alternative to the `cross_fields` `multi_match` mode. It
has simpler behavior and a more robust approach to scoring.
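A typical query could look like this sketch (index and field names are made up):
```
GET my-index/_search
{
  "query": {
    "combined_fields": {
      "query": "distributed search",
      "fields": ["title", "abstract", "body"]
    }
  }
}
```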
Addresses #41106.
Normally there are only 3 parts to a YAML REST test
`api/name/test section name` where `api` is sourced
from the filesystem, a relative path from the root of
the tests. `name` is the filename of the test minus the `.yml`
and the `test section name` is from inside the .yml file.
Some tests use multiple directories to represent the `api`,
for example `foo/bar/10_basic/My test Name` where foo/bar is the
relative path from the root of the tests. All works fine in both
*nix and Windows. Except for when you need to reference that `api`
(aka path from root) under Windows. Under Windows that relative path
uses backslashes to represent the `api`. This means that under Windows
you need to use `foo\bar/10_basic/My test Name` to reproduce/execute a test.
Additionally, due to how the regex matching is done for blacklisting tests
the backslash will never match, so it is not possible to
blacklist a 4+ path YAML REST test for Windows.
This commit simply ensures that the API part is always represented as a
forward slash. This commit also removes a prior naive attempt to blacklist
on Windows.
closes #71475
This commit removes the ability for plugins to add roles. Roles are
fairly tightly coupled with the behavior of the system (as evidenced by
the fact that some roles from the default distribution leaked behavior
into the OSS distribution). We previously had this plugin extension
point so that we could support a difference in the set of roles between
the OSS and default distributions. We no longer need to maintain that
differentiation, and can therefore remove this plugin extension
point. This was technical debt that we were willing to accept to allow
the default distribution to have additional roles, but now we no longer
need to be encumbered with that technical debt.
copy_to is currently implemented at document parse time, and does not
work with values generated from index-time scripts. We may want to add
this functionality in future, but for now this commit ensures that we throw
an exception if copy_to and script are both set on a field mapper.
* Warn users if security is implicitly disabled
Elasticsearch has security features implicitly disabled by default for
Basic and Trial licenses, unless explicitly set in the configuration
file.
This may be good for onboarding, but it also leads to unintentionally insecure
clusters.
This change introduces clear warnings when security features are
implicitly disabled:
- a warning header in each REST response if security is implicitly
disabled;
- a log message during cluster boot.
This fixes the `global` aggregator when `profile` is enabled. It does so
by removing all of the special case handling for `global` aggs in
`AggregationPhase` and having the global aggregator itself perform the
scoped collection using the same trick that we use in filter-by-filter
mode of the `filters` aggregation.
Closes #71098
This adds a few tests for runtime field queries applied to
"filter-by-filter" style aggregations. We expect to still be able to
use filter-by-filter aggregations to speed up collection when the top
level query is a runtime field. You'd think that filter-by-filter would
be slow when the top level query is slow, like it is with runtime
fields, but we only run filter-by-filter when we can translate each
aggregation bucket into a quick query. So long as the results of those
queries don't "overlap" we shouldn't end up running the slower top level
query more times than we would during regular collection.
This also adds some javadoc to that effect to the two places where we
chose between filter-by-filter and a "native" aggregation
implementation.
Elasticsearch enumerates Lucene files extensions for various
purposes: grouping files in segment stats under a description,
mapping files in memory through HybridDirectory or adjusting
the caching strategy for Lucene files in searchable snapshots.
But when a new extension is handled somewhere (let's say,
added to the list of files to mmap) it is easy to forget to add it
in other places. This commit is an attempt to centralize in a
single place all known Lucene files extensions in Elasticsearch.
Multifields are built at the same time as their parent fields, using
a positioned xcontent parser to read information. Fields with index
time scripts are built entirely differently, and it does not make sense
to combine the two.
This commit adds a base test to MapperScriptTestCase that ensures
a field mapper defined with both multifields and a script parameter
throws a parse error.
This change adds an additional test to GeoIpDownloaderIT which tests that artifacts produced by the GeoIP CLI tool can be consumed by the cluster the same way as those from our original service.
It does so by running the tool from the fixture, which then simply serves the generated files (this is exactly the way users are supposed to use the tool as well).
Relates to #68920
When we added scripts to long and double mapped fields, we added tests
for the general scripting infrastructure, and also specific tests for those two
field types. This commit extracts those type-specific tests out into a new base
test class that we can use when adding scripts to more field mappers.
Adds support for "Default Application Credentials" for GCS repositories, making it easier to set up a repository on GCP,
as all relevant information to connect to the repository is retrieved from the environment, not necessitating complicated
keystore setups.
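With credentials picked up from the environment, registering a repository can be as simple as this sketch (repository and bucket names are made up):
```
PUT _snapshot/my_gcs_repository
{
  "type": "gcs",
  "settings": {
    "bucket": "my-snapshots-bucket"
  }
}
```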
In some scenarios where the read timeout is too tight it's possible
that the HTTP request times out before the response headers have
been received; in that case a URLHttpClientIOException is thrown.
This commit adds that exception type to the expected set of read timeout
exceptions.
Closes #70931
The test currently generates a list of random values and checks whether
retrieval of these values via doc values is equivalent to fetching them with a
value fetcher from source. If the random value array contains a duplicate value,
we will only get one back via doc values, but fetching from source will return
both, which is a case we should probably avoid in this test.
Closes #71053
Today when creating an internal test cluster, we allow the test to
supply the node settings that are applied. The extension point to
provide these settings has a single integer parameter, indicating the
index (zero-based) of the node being constructed. This allows the test
to make some decisions about the settings to return, but it is too
simplistic. For example, imagine a test that wants to provide a setting,
but some values for that setting are not valid on non-data nodes. Since
the only information the test has about the node being constructed is
its index, it does not have sufficient information to determine if the
node being constructed is a non-data node or not, since this is done by
the test framework externally by overriding the final settings with
specific settings that dictate the roles of the node. This commit changes
the test framework so that the test has information about what settings
are going to be overridden by the test framework after the test provides
its test-specific settings. This allows the test to make informed
decisions about what values it can return to the test framework.
This PR sets the default value of `action.destructive_requires_name`
to `true`. Fixes #61074. Additionally, we set this value explicitly in
test classes that rely on wildcard deletions to clear test state.
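Tests or clusters that still rely on wildcard deletions can flip the setting back explicitly, for example:
```
PUT _cluster/settings
{
  "persistent": {
    "action.destructive_requires_name": false
  }
}
```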
This commit moves the data tier roles to server. It is no longer
necessary to separate these roles from server as we no longer build
distributions that would not contain these roles. Moving these roles
will simplify many things. This is deliberately the smallest possible
commit that moves these roles. Other aspects related to the data tiers
can move in separate, also small, commits.
Air-gapped environments can't simply use the GeoIp database service provided by Infra, so they have to either use a proxy or recreate a similar service themselves.
This PR adds a tool to make this process easier. The basic workflow is:
- download databases from the MaxMind site to a single directory (either .mmdb files or gzipped tarballs with .tgz suffix)
- run the tool with `$ES_PATH/bin/elasticsearch-geoip -s directory/to/use [-t target/directory]`
- serve static files from that directory (for example with `docker run -v directory/to/use:/usr/share/nginx/html:ro nginx`)
- use the server above as the endpoint for GeoIpDownloader (the geoip.downloader.endpoint setting)
- to update to new databases, simply put new files in the directory and run the tool again
This change also adds support for relative paths in the overview json because the CLI tool doesn't know the address it will be served under.
Relates to #68920
This commit adds a script parameter to long and double fields that makes
it possible to calculate a value for these fields at index time. It uses the same
script context as the equivalent runtime fields, and allows for multiple index-time
scripted fields to cross-refer while still checking for indirection loops.
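For example, a field computed at index time from another field could be mapped like this sketch (field names and script are made up):
```
PUT my-index
{
  "mappings": {
    "properties": {
      "duration_ms": { "type": "long" },
      "duration_us": {
        "type": "long",
        "script": {
          "source": "emit(doc['duration_ms'].value * 1000)"
        }
      }
    }
  }
}
```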
Runtime fields currently live in their own java package. This is really
a leftover from when they were in their own module; now that they are
in core they should instead live in the common packages for classes of
their kind.
This commit makes the following moves:
org.elasticsearch.runtimefields.mapper => org.elasticsearch.index.mapper
org.elasticsearch.runtimefields.fielddata => org.elasticsearch.index.fielddata
org.elasticsearch.runtimefields.query => org.elasticsearch.search.runtime
The XFieldScript classes are moved out of the `mapper` package into
org.elasticsearch.scripts, and the `PARSE_FROM_SOURCE` default scripts
are moved from these Script classes directly into the field type classes that
use them.
We have to ship COPYRIGHT.txt and LICENSE.txt files alongside .mmdb files for legal compliance. Infra will pack these in a single .tgz (gzipped tar) archive provided by the GeoIP databases service.
This change adds support for that format to GeoIpDownloader and DatabaseRegistry
We use the `getDataPath` method to convert from a resource
location to a filesystem path in anticipation of eventually moving
the json files to a top-level directory. However, we were constructing
the resource locations using a filesystem concatenation, which,
on Windows, put backslashes in the path instead of slashes.
We will use a simple string concatenation to fix the Windows tests.
We've had a few bugs in the fields API where it doesn't behave like we'd
expect. Typically this happens because it isn't obvious what we expect. So
we'll try and use randomized testing to ferret out what we want. This adds
a test for most field types that asserts that `fields` works similarly
to `docvalue_fields`. We expect this to be true for most fields.
It does so by forcing all subclasses of `MapperTestCase` to define a
method that makes random values. It declares a few other hooks that
subclasses can override to further randomize the test.
We skip the test for a few field types that don't have doc values:
* `annotated_text`
* `completion`
* `search_as_you_type`
* `text`
We should come up with some way to test these without doc values, even
if it isn't as nice. But that is a problem for another time, I think.
We skip the test for a few more types just because I wanted to cut this
PR in half so we could get to reviewing it earlier. We'll get to those
in a follow up change.
I've filed a few bugs for things that are inconsistent with
`docvalue_fields`. Typically that means that we have to limit the
random values that we generate to those that *do* round trip properly.
Today nothing prevents CCR's auto-follow patterns from picking
up snapshot-backed indices on a remote cluster. This can
lead to various errors on the follower cluster that are not
obvious to troubleshoot for a user (ex: multiple engine
factories provided).
This commit adds verifications to CCR to make it fail faster
when a user tries to follow an index that is backed by a
snapshot, providing a more obvious error message.
The types removal effort removed the type from the Index API in #47671 and from the Get API in #46587.
This commit allows using 'typed' endpoints for both the Index and Get APIs.
relates compatible types-removal meta issue #54160
TypeParser.ParserContext exposes a ScriptService allowing type parsers to
compile scripts from runtime fields (and in future, index-time scripts). However,
ScriptService itself is a fairly heavyweight object, and you can only add script
parsers to it via the Plugin mechanism.
To make testing easier, this commit extracts a ScriptCompiler interface and makes
that available from ParserContext instead.
* Don't try to invoke the delete component/index template APIs if there are no templates to delete.
* Don't delete deprecation templates, by marking these as xpack templates.
Relates to #69973
Low-effort fix for #70482 that makes sure that no two slices get created with the
same name but different byte ranges in a single test run, which led to conflicts with
the assumption that cfs file ranges are fixed for given names for the frozen index input.
closes #70482
REST API specs define the APIs used at the REST level; however, these specs
only define the endpoint and the parameters. We are missing definitions for the
body, especially when it comes to rich bodies like those used in ML.
This change introduces an abstract testcase for json schema validation. This
allows developers to validate any object that is serializable to JSON - using
the `ToXContent` - to be tested against a json schema. You can use it for REST
inputs and outputs, but also for internal objects (documents) and
`ToXContentFragments`.
As the overall goal is to ensure it validates properly, the testcase enforces
strictness. A schema file must specify all properties. This ensures that once
a schema test has been added, it won't go out of sync. Every change to the
POJO enforces a schema update, as otherwise the test would fail.
Schemas can load sub-schemas from extra files. That way you can re-use schemas
e.g. in hierarchies or re-use a schema for similar but not same interfaces.
If you send `percentiles` or `percentile_ranks` over the transport
client or over cross-cluster search it breaks internal components
subtly. They mostly work so we hadn't yet noticed the break. But if you
send the request to the slow log then it'll fail to log. This fixes the
subtle internal break.
Add support to the delete component templates API to specify multiple template
names separated by a comma.
Change the cleanup template logic for REST tests to remove all component templates via a single delete component template request, to optimize the cleanup logic. After each REST test we delete all templates, so deleting templates via a single API call (and thus a single cluster state update) saves a lot of time considering the number of REST tests.
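After this change a single request can drop several component templates at once, e.g. (template names are made up):
```
DELETE _component_template/logs-settings,logs-mappings,metrics-settings
```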
Older versions don't support component / composable index templates
and/or data streams. Yet the test base class tries to remove objects
after each test, which adds a significant number of lines to the
log files (which slows the tests down). The ESRestTestCase will
now check whether all nodes have a specific version and then decide
whether data streams and component / composable index templates will
be deleted.
Also ensured that the logstash-index-template and security-index-template
aren't deleted between tests; these are builtin templates that
ES will install if missing. So if tests remove these templates between tests
then ES will add them back almost immediately. This causes
many log lines and a lot of cluster state updates, which slow tests down.
Relates to #69973
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
Add support to the delete composable index templates API to specify multiple template
names separated by a comma.
Change the cleanup template logic for REST tests to remove all composable index templates via a single delete composable index template request, to optimize the cleanup logic. After each REST test we delete all templates, so deleting these via a single API call (and thus a single cluster state update) saves a lot of time considering the number of REST tests.
If this pr is accepted then I will do the same change for the delete component template api.
Relates to #69973
This commit removes the metadata field _parent_join
that was needed to ensure that only one join field is used in a mapping.
It is replaced with a validation at the field level.
This change also fixes a [bug](https://github.com/elastic/kibana/issues/92960) in the handling of parent join fields in _field_caps.
This metadata field threw an unexpected exception in [7.11](https://github.com/elastic/elasticsearch/pull/63878)
when checking if the field is aggregatable.
That's now fixed since this unused field has been removed.
#68808 introduced the possibility to declare fields which will only be available to parsing when a compatible API is used.
This commit replaces deprecation logging with compatible logging when a 'compatible only' field is used. It also includes a refactoring of the LoggingDeprecationHandler method names.
relates #51816
This fixes an issue that `fields` has with dates sent in the
`###.######` format.
If you send a date in the format `#####.######` we'll parse the bit
before the decimal place as the number of milliseconds since epoch and
we'll parse the bit after the decimal as the number of nanoseconds since
the start of that millisecond. This works and is convenient for some
folks. Sadly, the code that backs the `fields` API for dates doesn't work
with the string format in this case - it works with a `double`. `double`
is bad for two reasons:
1. Its default string representation is scientific notation and our
parsers don't know how to deal with that.
2. It loses precision relative to the string representation. `double`
only has 52 bits of mantissa which can precisely store the number of
nanoseconds until about 6am on April 15th, 1970. After that it starts
to lose precision.
This fixes the first issue, getting us the correct string
representation in a "quick and dirty" way: it just converts the `double`
back to a string. But we still lose precision. Fixing that would require
a larger change.
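To illustrate the format in question, a document indexed like the sketch below (index/field names made up, assuming a `date` field whose format accepts epoch millis) is now returned by `fields` in its proper string form:
```
POST my-index/_doc
{
  "@timestamp": "1420070400000.123456"
}

POST my-index/_search
{
  "query": { "match_all": {} },
  "fields": ["@timestamp"]
}
```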
We only used this method in tests, and it's somewhat needless to have a potentially
slow `notifyAll` on the hot path for assertions only when we can just busy-assert in tests instead.
closes #69963
Before this change, upon wiping the cluster we would get a list of all legacy and composable index templates, and for each template first attempt to delete it as a legacy template and, if that returned a 404, then remove it as a composable index template. In the worst case this means that we would make double the number of delete requests for templates than is necessary.
This change first gets all composable index templates (if exist and if the cluster supports it) and then deletes these composable index templates. After this separately get a list of all legacy templates and then delete those legacy templates.
Relates to #69973
Today in tests we often use a utility method that creates and starts a
transport service, and then we start it again in the tests anyway. This
commit removes this unnecessary code and asserts that we only ever call
`TransportService#acceptIncomingRequests` once.
Unwrapping the `byte[]` in a `BytesArray` can be generalized
so that releasable bytes references can make use of the optimizations
that use the bytes directly as well.
Relates #67502 but already saves a few cycles here and there around
`BytesRequest` (publish and recovery actions)
Search cancellation currently does not work well in the context of searchable snapshot shards, as it requires search
tasks to fully enter the query phase (i.e. start execution on the node, loading up the searcher, which means loading up
the index on FrozenEngine and doing some actual work) to detect cancellation. This can take a while in the frozen tier,
blocking on file downloads.
Since #69861, CFS files read from FrozenIndexInput create
dedicated frozen shared cache files when they are sliced.
This does not play well with some tests that use
randomReadAndSlice to read files: this method can create
overlapping slice/clone read operations, which makes it
difficult to assert anything about CFS files with a partial cache.
This commit prevents the tests from generating a .cfs file name
when the partial cache is randomly picked. As a follow-up
we should rework those tests to make them more realistic
with the new behavior.
Closes #70000
The blob store cache is used to cache a variable length of the
beginning of Lucene files in the .snapshot-blob-cache system
index. This is useful to speed up Lucene directory opening
during shard recovery and to limit the number of bytes
downloaded from the blob store when a searchable snapshot
shard must be rebuilt.
This commit adds support for compound file segments (.cfs)
when they are partially cached (i.e., Storage.SHARED_CACHE)
so that the files they are composed of can also be cached in
the blob store cache index.
Co-authored-by: Yannick Welsch yannick@welsch.lu
Today every `BulkProcessor` creates two scheduler threads, both called
`[node-name][scheduler][T#1]`, which is also the name of the main
scheduler thread for the node. The duplicated thread names make it
harder to interpret a thread dump.
This commit makes the names of these threads distinct.
Closes #68470
Reading ops from the translog snapshot must not run on the transport thread.
When sending more than one batch of ops the listener (and thus `run`) would be
invoked on the transport thread for all but the first batch of ops.
=> Forking to the generic pool like we do for sending ops during recovery.
This change updates the GeoIP database service URL to the new https://geoip.elastic.co/v1/database and removes the (now optional) key/UUID parameter.
It also fixes geoip-fixture to provide 3 different test databases (City, Country and ASN).
It also unmutes GeoIpDownloaderIT.testGeoIpDatabasesDownload with additional logging and increased timeouts, which tries to address #69594.
MapperRegistry is the only class under the indices.mapper package. It fits better under index.mapper together with MapperService and friends, hence this commit moves it there and removes the indices.mapper package.
This commit adds support for two new REST test features:
warnings_regex and allowed_warnings_regex.
This is a near mirror of the warnings and allowed_warnings
features, where the test can be instructed to allow
or require HTTP warnings. The difference with these new features
is that they allow the match to be based on a regular expression.
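A sketch of how a test could use the new feature (the API call and regex are illustrative):
```
- do:
    warnings_regex:
      - "\\[types removal\\].*"   # any regex matching the expected warning
    indices.create:
      index: test_index
```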
This commit introduces system index types that will be used to
differentiate behavior. Previously system indices were all treated the
same regardless of whether they belonged to Elasticsearch, a stack
component, or one of our solutions. Upon further discussion and
analysis this decision was not in the best interest of the various
teams and instead a new type of system index was needed. These system
indices will be referred to as external system indices. Within external
system indices, an option exists for these indices to be managed by
Elasticsearch or to be managed by the external product.
In order to represent this within Elasticsearch, each system index will
have a type and this type will be used to control behavior.
Closes #67383
In 2a04118e88 we moved `ensureGreen()`
from `IndexingIT` to `ESRestTestCase`, including its `70s` timeout. This
timeout makes sense in the context of an `AbstractRollingTestCase` which
has a client timeout of `90s` (#26781) but general-purpose REST tests
only have a `60s` client timeout, so if `ensureGreen()` fails then it
fails with a `SocketTimeoutException`, bypassing the useful exception
handling that logs the cluster state at the time of failure.
This commit reduces the `ensureGreen()` timeout for most tests, leaving
it at `70s` only for `AbstractRollingTestCase`.
We should not be quietly ignoring and suppressing exceptions when releasing
network resources in these spots.
Co-authored-by: David Turner <david.turner@elastic.co>
If a shard is reassigned to a node, but it has open searches (could be
scrolls even), the current behavior is to throw a
ShardLockObtainFailedException. This commit changes the behavior to
close the search contexts, likely failing some of the searches. The
sentiment is to prefer restoring availability over trying to complete
those searches. A situation where this can happen is when master(s) are
restarted, which is likely to cause similar search issues anyway.
This speeds up the `terms` agg in a very specific case:
1. It has no child aggregations
2. It has no parent aggregations
3. There are no deleted documents
4. You are not using document level security
5. There is no top level query
6. The field has global ordinals
7. There are less than one thousand distinct terms
That is a lot of restrictions! But the speed up is pretty substantial because
in those cases we can serve the entire aggregation using metadata that
Lucene precomputes while it builds the index. In a real Rally track
we get a 92% speed improvement, but the index isn't *that* big:
```
| 90th percentile service time | keyword-terms-low-cardinality | 446.031 | 36.7677 | -409.263 | ms |
```
In a rally track with a larger index I ran some tests by hand and the
aggregation went from 2200ms to 8ms.
Even though there are 7 restrictions on this, I expect it to come into
play enough to matter. Restriction 6 just means you are aggregating on
a `keyword` field. Or an `ip`. And it's fairly common for `keyword`s to
have less than a thousand distinct values. Certainly not everywhere, but
some places.
I expect "cold tier" indices are very very likely not to have deleted
documents at all. And the optimization works segment by segment - so
it'll save some time on each segment without deleted documents. But more
time if the entire index doesn't have any.
The optimization builds on #68871 which translates `terms` aggregations
against low cardinality fields with global ordinals into a `filters`
aggregation. This teaches the `filters` aggregation to recognize when
it can get its results from the index metadata. Or rather, it creates the
infrastructure to make that fairly simple and applies it in the case of
the queries generated by the terms aggregation.
When we test the REST actions we assumed that they'd produce a result,
but one of the mocking/verification mechanisms didn't. This forces it to
produce a result. It uses some generics dancing to force the calling
code to mock things with the appropriate type, even though it's
only a compile-time guarantee. So long as callers aren't rude, it's safe.
This commit adds leak tracking infrastructure that enables assertions
about the state of objects at GC time (simplified version of what Netty
uses to track `ByteBuf` instances).
This commit uses the infrastructure to improve the quality of leak
checks for page recycling in the mock nio transport (the logic in
`org.elasticsearch.common.util.MockPageCacheRecycler#ensureAllPagesAreReleased`
does not run for all tests and tracks too little information to allow for debugging
what caused a specific leak in most cases due to the lack of an equivalent of the added
`#touch` logic).
Co-authored-by: David Turner <david.turner@elastic.co>
Makes sure that we assert every expected message in this test, because
if we don't then we might shut the appender down too early.
Reverts #68267. Closes #66630.
This change adds a query parameter confirming that we accept the ToS of the GeoIP database service provided by Infra.
It also changes integration test to use lower timeout when using local fixture.
Relates to #68920
This change adds a component that will download new GeoIP databases from the infra service:
- New databases are downloaded in chunks and stored in the .geoip_databases index
- Downloads are verified against the MD5 checksum provided by the server
- The current state of all stored databases is kept in the cluster state, in the persistent task state
Relates to #68920