This drops a few properties from the reproduction info printed when a
test fails, because they are implied by the build:
* `tests.security.manager`
* `tests.rest.suite`
* `tests.rest.blacklist`
The two `tests.rest` properties are set by the build *and* duplicate the
`--test` output!
Closes #71290
The frozen tier only partially downloads shards. This commit
introduces an autoscaling decider that scales the total storage
on the tier according to a configurable percentage relative to
the total data set size.
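A minimal sketch of the sizing arithmetic (the method and parameter names are illustrative, not the decider's actual API):
```java
// Illustrative arithmetic only; the decider's real name, settings and wiring
// are not shown here.
static long requiredFrozenStorageBytes(long totalDataSetSizeBytes, double percentage) {
    // e.g. 100 TB of partially downloaded shard data at 5% => 5 TB of tier storage
    return (long) (totalDataSetSizeBytes * percentage / 100.0);
}
```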
As we started thinking about applying `on_script_error` to runtime fields, to handle script errors at search time, we would like to use the same parameter that was recently introduced for indexed fields. We decided that `continue` or `fail` gives a better indication of the behaviour than the current `ignore` or `reject`, which are too specific to indexing documents.
This commit applies this rename. A rough sketch of the two behaviours behind the new names follows (the enum mirrors the parameter values; the surrounding wiring is illustrative):
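```java
// Values of the renamed parameter; `fail` remains the stricter default.
enum OnScriptError { CONTINUE, FAIL }

static void executeScript(Runnable script, OnScriptError onScriptError) {
    try {
        script.run();
    } catch (RuntimeException e) {
        if (onScriptError == OnScriptError.FAIL) {
            throw e; // fail: reject the document at index time, or the search at query time
        }
        // continue: leave the field empty and keep going
    }
}
```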
Some functionality will no longer work with multiple data paths, and in
order to run integration tests for that, we need the capability to
force a single data path for those tests.
Relates #71844
This commit adds some per-index statistics to the `SnapshotInfo` blob:
- number of shards
- total size in bytes
- maximum number of segments per shard
It also exposes these statistics in the get snapshot API.
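A minimal stand-in for the new per-index payload (these are not the actual `SnapshotInfo` types; the names are assumptions):
```java
import java.util.Map;

// Minimal stand-in for the new per-index statistics.
record IndexSnapshotDetails(int shardCount, long sizeInBytes, int maxSegmentsPerShard) {}

static Map<String, IndexSnapshotDetails> exampleIndexDetails() {
    // one entry per index, keyed by index name
    return Map.of("logs-2021.04", new IndexSnapshotDetails(3, 17_179_869_184L, 42));
}
```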
This commit allows the `include_type_name` parameter to be used with the compatible REST API.
Support for `include_type_name` was previously removed in #48632.
relates #51816
types removal meta issue #54160
Adds support for the `close_to` assertion in YAML tests. The assertion can be used
as follows:
```
- close_to: { get.fields._routing: { value: 5.1, error: 0.00001 } }
```
Closes #71303
This PR makes sure MockSearchService collects ARS statistics. Before, if we
randomly chose to use MockSearchService then ARS information would be missing
and the test would fail.
Also makes the following fixes:
* Removes a test workaround for bug #71022, which is now fixed.
* Handles the case where nodes have the same rank, to prevent random failures.
This speeds up the `terms` aggregation when it can't take the fancy
`filters` path, there is more than one segment, and any of those
segments have only a single value for the field. These three things are
super common.
Here are the performance change numbers:
```
| 50th percentile latency | date-histo-string-terms-via-global-ords | 3414.02 | 2632.01 | -782.015 | ms |
| 90th percentile latency | date-histo-string-terms-via-global-ords | 3470.91 | 2756.88 | -714.031 | ms |
| 100th percentile latency | date-histo-string-terms-via-global-ords | 3620.89 | 2875.79 | -745.102 | ms |
| 50th percentile service time | date-histo-string-terms-via-global-ords | 3410.15 | 2628.87 | -781.275 | ms |
| 90th percentile service time | date-histo-string-terms-via-global-ords | 3467.36 | 2752.43 | -714.933 | ms | 20%!!!!
| 100th percentile service time | date-histo-string-terms-via-global-ords | 3617.71 | 2871.63 | -746.083 | ms |
```
This works by hooking global ordinals into `DocValues.unwrapSingleton`.
Without this you could only unwrap singletons *if* the segment's ordinals
aligned exactly with the global ordinals. If they didn't, we'd return a
doc values iterator that you can't unwrap, even if the segment ordinals
were singletons.
That speeds up the terms aggregator because we have a fast path we can
take if we have singletons. Previously it only worked if we had a
single segment, or if the segment's ordinals lined up exactly, which
is fairly common for low-cardinality fields. So those might not benefit
from this quite as much as high-cardinality fields.
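For reference, the check this unlocks looks like the sketch below; `DocValues.unwrapSingleton` is Lucene's API, while the wrapper method is illustrative:
```java
import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.SortedDocValues;
import org.apache.lucene.index.SortedSetDocValues;

// Returns the single-valued view when every doc has at most one ordinal,
// or null when the general multi-valued path must be taken.
static SortedDocValues singletonOrNull(SortedSetDocValues globalOrds) {
    return DocValues.unwrapSingleton(globalOrds);
}
```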
Closes #71086
Revamps the integration tests for the `filter` agg to be clearer and
builds integration tests for the `filters` agg. Both of these
integration tests are fairly basic but they do assert that the aggs
work.
This PR introduces a new query called `combined_fields` for searching multiple
text fields. It takes a term-centric view, first analyzing the query string
into individual terms, then searching for each term in any of the fields as though
they were one combined field. It is based on Lucene's `CombinedFieldQuery`,
which takes a principled approach to scoring based on the BM25F formula.
This query provides an alternative to the `cross_fields` `multi_match` mode. It
has simpler behavior and a more robust approach to scoring.
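A sketch of the underlying Lucene query (the builder follows Lucene's sandbox `CombinedFieldQuery`; the fields, weights and terms here are made up):
```java
import org.apache.lucene.sandbox.search.CombinedFieldQuery;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.BytesRef;

// Each analyzed term is scored against "title" and "body" as though their
// contents lived in a single combined field (BM25F); weights are per-field boosts.
static Query combinedFieldsExample() {
    return new CombinedFieldQuery.Builder()
        .addField("title", 2.0f)
        .addField("body", 1.0f)
        .addTerm(new BytesRef("quick"))
        .addTerm(new BytesRef("fox"))
        .build();
}
```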
Addresses #41106.
Normally there are only 3 parts to a YAML REST test name,
`api/name/test section name`, where `api` is sourced
from the filesystem as a relative path from the root of
the tests, `name` is the filename of the test minus the `.yml`,
and the `test section name` comes from inside the .yml file.
Some tests use multiple directories to represent the `api`,
for example `foo/bar/10_basic/My test Name`, where `foo/bar` is the
relative path from the root of the tests. All works fine on both
*nix and Windows, except when you need to reference that `api`
(aka the path from the root) under Windows. There the relative path
uses backslashes to represent the `api`, which means that under Windows
you need `foo\bar/10_basic/My test Name` to reproduce or execute a test.
Additionally, due to how the regex matching is done for blacklisting tests,
the backslash will never match, so it is not possible to
blacklist a YAML REST test with a 4+ part path on Windows.
This commit simply ensures that the `api` part is always represented with
forward slashes. It also removes a prior naive attempt to blacklist
on Windows.
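A minimal sketch of the normalization, assuming nothing about the actual test-framework classes:
```java
import java.nio.file.Path;

// Build a platform-stable test identifier: the filesystem-derived "api" part
// always uses forward slashes, even on Windows.
static String testId(Path testRoot, Path ymlFile, String sectionName) {
    String api = testRoot.relativize(ymlFile.getParent()).toString().replace('\\', '/');
    String name = ymlFile.getFileName().toString().replace(".yml", "");
    return api + "/" + name + "/" + sectionName; // e.g. foo/bar/10_basic/My test Name
}
```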
Closes #71475
This commit removes the ability for plugins to add roles. Roles are
fairly tightly coupled with the behavior of the system (as evidenced by
the fact that some roles from the default distribution leaked behavior
into the OSS distribution). We previously had this plugin extension
point so that we could support a difference in the set of roles between
the OSS and default distributions. We no longer need to maintain that
differentiation, and can therefore remove this plugin extension
point. This was technical debt that we were willing to accept to allow
the default distribution to have additional roles, but now we no longer
need to be encumbered with that technical debt.
`copy_to` is currently implemented at document parse time, and does not
work with values generated from index-time scripts. We may want to add
this functionality in the future, but for now this commit ensures that we throw
an exception if `copy_to` and `script` are both set on a field mapper.
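A sketch of the new guard (the exception type and message are assumptions):
```java
import java.util.List;

// Reject mappings that combine copy_to with an index-time script.
static void validateCopyToAndScript(String field, List<String> copyToFields, Object script) {
    if (script != null && copyToFields.isEmpty() == false) {
        throw new IllegalArgumentException(
            "[copy_to] cannot be used in combination with [script] on field [" + field + "]");
    }
}
```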
* Warn users if security is implicitly disabled
Elasticsearch has security features implicitly disabled by default for
Basic and Trial licenses, unless explicitly set in the configuration
file.
This may be good for onboarding, but it can also lead to unintentionally insecure
clusters.
This change introduces clear warnings when security features are
implicitly disabled:
- a warning header in each REST response if security is implicitly
disabled;
- a log message during cluster boot.
This fixes the `global` aggregator when `profile` is enabled. It does so
by removing all of the special case handling for `global` aggs in
`AggregationPhase` and having the global aggregator itself perform the
scoped collection using the same trick that we use in filter-by-filter
mode of the `filters` aggregation.
Closes #71098
This adds a few tests for runtime field queries applied to
"filter-by-filter" style aggregations. We expect to still be able to
use filter-by-filter aggregations to speed up collection when the top
level query is a runtime field. You'd think that filter-by-filter would
be slow when the top level query is slow, like it is with runtime
fields, but we only run filter-by-filter when we can translate each
aggregation bucket into a quick query. So long as the results of those
queries don't "overlap", we shouldn't end up running the slower top level
query more times than we would during regular collection.
This also adds some javadoc to that effect to the two places where we
choose between filter-by-filter and a "native" aggregation
implementation.
Elasticsearch enumerates Lucene files extensions for various
purposes: grouping files in segment stats under a description,
mapping files in memory through HybridDirectory or adjusting
the caching strategy for Lucene files in searchable snapshots.
But when a new extension is handled somewhere (let's say,
added to the list of files to mmap) it is easy to forget to add it
in the other places. This commit is an attempt to centralize all known
Lucene file extensions in Elasticsearch in a single place.
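A sketch of what such a central registry could look like (the entries and flags shown are a small illustrative sample, not the real class):
```java
// One place that knows every extension, its description, and how it is handled.
public enum LuceneFilesExtensions {
    CFS("cfs", "Compound Files", true),
    DVD("dvd", "DocValues", true),
    FDT("fdt", "Stored Fields Data", false);

    private final String extension;
    private final String description;
    private final boolean mmap; // e.g. whether HybridDirectory memory-maps it

    LuceneFilesExtensions(String extension, String description, boolean mmap) {
        this.extension = extension;
        this.description = description;
        this.mmap = mmap;
    }
}
```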
Multifields are built at the same time as their parent fields, using
a positioned xcontent parser to read information. Fields with index
time scripts are built entirely differently, and it does not make sense
to combine the two.
This commit adds a base test to MapperScriptTestCase that ensures
a field mapper defined with both multifields and a script parameter
throws a parse error.
This change adds an additional test to GeoIpDownloaderIT which verifies that artifacts produced by the GeoIP CLI tool can be consumed by the cluster in the same way as those from our original service.
It does so by running the tool from a fixture which then simply serves the generated files (this is exactly the way users are supposed to use the tool as well).
Relates to #68920
When we added scripts to long and double mapped fields, we added tests
for the general scripting infrastructure, and also specific tests for those two
field types. This commit extracts those type-specific tests out into a new base
test class that we can use when adding scripts to more field mappers.
Adds support for "Default Application Credentials" for GCS repositories, making it easier to set up a repository on GCP,
as all relevant information to connect to the repository is retrieved from the environment, not necessitating complicated
keystore setups.
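For reference, the lookup relies on the standard Google auth library chain; `GoogleCredentials.getApplicationDefault()` is the real API, while the method wrapping it is illustrative:
```java
import java.io.IOException;
import com.google.auth.oauth2.GoogleCredentials;

// Resolves credentials from the environment: the GOOGLE_APPLICATION_CREDENTIALS
// file, the gcloud SDK configuration, or the GCE/GKE metadata server.
static GoogleCredentials defaultCredentials() throws IOException {
    return GoogleCredentials.getApplicationDefault();
}
```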
In some scenarios where the read timeout is too tight, it's possible
that the http request times out before the response headers have
been received; in that case a URLHttpClientIOException is thrown.
This commit adds that exception type to the expected set of read timeout
exceptions.
Closes #70931
The test currently generates a list of random values and checks whether
retrieval of these values via doc values is equivalent to fetching them with a
value fetcher from source. If the random value array contains a duplicate value,
we will only get one back via doc values, but fetching from source will return
both, which is a case we should probably avoid in this test.
Closes #71053
Today when creating an internal test cluster, we allow the test to
supply the node settings that are applied. The extension point to
provide these settings has a single integer parameter, indicating the
index (zero-based) of the node being constructed. This allows the test
to make some decisions about the settings to return, but it is too
simplistic. For example, imagine a test that wants to provide a setting,
but some values for that setting are not valid on non-data nodes. Since
the only information the test has about the node being constructed is
its index, it does not have sufficient information to determine if the
node being constructed is a non-data node or not, since this is done by
the test framework externally, by overriding the final settings with
specific settings that dictate the roles of the node. This commit changes
the test framework so that the test has information about which settings
are going to be overridden by the test framework after the test provides
its test-specific settings. This allows the test to make informed
decisions about what values it can return to the test framework.
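A sketch of how a test can now use the extra information (the hook shape follows the commit description; the setting names are made up):
```java
import org.elasticsearch.common.settings.Settings;

// Inside a hypothetical internal-cluster test: the hook now also receives the
// settings the framework will apply afterwards, enabling role-aware choices.
@Override
protected Settings nodeSettings(int nodeOrdinal, Settings otherSettings) {
    Settings.Builder builder = Settings.builder()
        .put(super.nodeSettings(nodeOrdinal, otherSettings));
    if (otherSettings.getAsList("node.roles").contains("data")) {
        builder.put("hypothetical.data.only.setting", true); // made-up setting
    }
    return builder.build();
}
```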
This PR sets the default value of `action.destructive_requires_name`
to `true`. Fixes #61074. Additionally, we set this value explicitly in
test classes that rely on wildcard deletions to clear test state.
This commit moves the data tier roles to server. It is no longer
necessary to separate these roles from server as we no longer build
distributions that would not contain these roles. Moving these roles
will simplify many things. This is deliberately the smallest possible
commit that moves these roles. Other aspects related to the data tiers
can move in separate, also small, commits.
Air-gapped environments can't simply use the GeoIP database service provided by Infra, so they have to either use a proxy or recreate a similar service themselves.
This PR adds a tool to make this process easier. The basic workflow is:
- download databases from the MaxMind site to a single directory (either `.mmdb` files or gzipped tarballs with a `.tgz` suffix)
- run the tool with `$ES_PATH/bin/elasticsearch-geoip -s directory/to/use [-t target/directory]`
- serve the static files from that directory (for example with `docker run -v directory/to/use:/usr/share/nginx/html:ro nginx`)
- use the server above as the endpoint for GeoIpDownloader (the `geoip.downloader.endpoint` setting)
- to pick up new databases, simply put the new files in the directory and run the tool again
This change also adds support for relative paths in the overview json, because the CLI tool doesn't know the address it will be served under.
Relates to #68920
This commit adds a script parameter to long and double fields that makes
it possible to calculate a value for these fields at index time. It uses the same
script context as the equivalent runtime fields, and allows for multiple index-time
scripted fields to cross-refer while still checking for indirection loops.
Runtime fields currently live in their own java package. This is really
a leftover from when they were in their own module; now that they are
in core they should instead live in the common packages for classes of
their kind.
This commit makes the following moves:
* `org.elasticsearch.runtimefields.mapper` => `org.elasticsearch.index.mapper`
* `org.elasticsearch.runtimefields.fielddata` => `org.elasticsearch.index.fielddata`
* `org.elasticsearch.runtimefields.query` => `org.elasticsearch.search.runtime`
The `XFieldScript` classes are moved out of the `mapper` package into
`org.elasticsearch.scripts`, and the `PARSE_FROM_SOURCE` default scripts
are moved from these script classes directly into the field type classes that
use them.
We have to ship COPYRIGHT.txt and LICENSE.txt files alongside the .mmdb files for legal compliance. Infra will pack these in a single .tgz (gzipped tar) archive provided by the GeoIP databases service.
This change adds support for that format to GeoIpDownloader and DatabaseRegistry.
We use the `getDataPath` method to convert from a resource
location to a filesystem path in anticipation of eventually moving
the json files to a top-level directory. However, we were constructing
the resource locations using a filesystem concatenation, which,
on Windows, put backslashes in the path instead of slashes.
We will use a simple string concatenation to fix the Windows tests.
We've had a few bugs in the fields API where it doesn't behave like we'd
expect. Typically this happens because it isn't obvious what we expect. So
we'll try to use randomized testing to ferret out what we want. This adds
a test for most field types that asserts that `fields` works similarly
to `docvalue_fields`. We expect this to be true for most fields.
It does so by forcing all subclasses of `MapperTestCase` to define a
method that makes random values. It declares a few other hooks that
subclasses can override to further randomize the test.
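That hook looks roughly like the sketch below (the exact name and signature in `MapperTestCase` are assumptions):
```java
// Subclasses return a random value that could legitimately be indexed into
// the field under test; the base test round-trips it through fields retrieval.
protected abstract Object generateRandomInputValue(MappedFieldType fieldType);
```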
We skip the test for a few field types that don't have doc values:
* `annotated_text`
* `completion`
* `search_as_you_type`
* `text`
We should come up with some way to test these without doc values, even
if it isn't as nice. But that is a problem for another time, I think.
We skip the test for a few more types just because I wanted to cut this
PR in half so we could get to reviewing it earlier. We'll get to those
in a follow up change.
I've filed a few bugs for things that are inconsistent with
`docvalue_fields`. Typically that means that we have to limit the
random values that we generate to those that *do* round trip properly.
Today nothing prevents CCR's auto-follow patterns from picking
up snapshot backed indices on a remote cluster. This can
lead to various errors on the follower cluster that are not
obvious to troubleshoot for a user (e.g. multiple engine
factories provided).
This commit adds verifications to CCR to make it fail faster
when a user tries to follow an index that is backed by a
snapshot, providing a more obvious error message.
The types removal effort removed the type from the Index API in #47671 and from the Get API in #46587.
This commit allows the use of 'typed' endpoints for both the Index and Get APIs.
Relates to the compatible types-removal meta issue #54160.