This introduces a basic public yaml rest test plugin that is intended to be used by external
Elasticsearch plugin authors. This is driven by #76215
- Rename yaml-rest-test to internal-yaml-rest-test
- Use public yaml plugin in example plugins
Co-authored-by: Mark Vieira <portugee@gmail.com>
This PR adds a new API for doing streaming serialization writes to a repository, enabling repository metadata of arbitrary size to be written with bounded memory.
The existing write-APIs require knowledge of the eventual blob size beforehand. This forced us to materialize the serialized blob in memory before writing, costing a lot of memory in case of e.g. very large `RepositoryData` (and limiting us to `2G` max blob size).
With this PR the requirement to fully materialize the serialized metadata goes away and the memory overhead becomes completely bounded by the outbound buffer size of the repository implementation.
As we move to larger repositories, this makes master node stability much more predictable, since writing out `RepositoryData` no longer takes as much memory (the same applies to shard-level metadata). It also enables aggregating multiple metadata blobs into a single larger blob without massive overhead and removes the 2G size limit on `RepositoryData`.
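As a rough illustration of the shape of such an API (the interfaces and method names below are hypothetical stand-ins, not the actual blob container additions), the caller serializes directly onto a stream supplied by the repository:

```java
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch: compare a size-up-front write API with a streaming one.
public class StreamingWriteSketch {

    // Old style: the blob must be fully materialized (and its length known) before writing,
    // which means serializing e.g. a very large RepositoryData into memory first.
    interface SizedBlobWriter {
        void writeBlob(String blobName, byte[] fullyMaterializedBlob) throws IOException;
    }

    // New style: the caller serializes directly onto the stream handed out by the repository,
    // so memory use is bounded by the implementation's outbound buffer, not by the blob size.
    interface StreamingBlobWriter {
        void writeBlob(String blobName, CheckedConsumer<OutputStream> writer) throws IOException;
    }

    @FunctionalInterface
    interface CheckedConsumer<T> {
        void accept(T value) throws IOException;
    }
}
```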
This PR returns the get snapshots API to the 7.x format (and transport client behavior) and enhances it for requests that ask for multiple repositories.
The changes for requests that target multiple repositories are:
* Add `repository` field to `SnapshotInfo` and REST response (see the sketch after this list)
* Add `failures` map alongside `snapshots` list instead of returning just an exception response as done for single repo requests
* Pagination now works across repositories instead of being per repository for multi-repository requests
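For illustration only, the multi-repository response can be thought of as having roughly the following shape (the record and field names here are stand-ins, not the actual `GetSnapshotsResponse` classes):

```java
import java.util.List;
import java.util.Map;

// Illustrative shape of a multi-repository get-snapshots response (names are stand-ins).
record SnapshotSummary(String repository, String snapshot, String state) {}

record MultiRepoGetSnapshotsSketch(
    List<SnapshotSummary> snapshots,     // each entry now carries its repository name
    Map<String, Exception> failures      // per-repository failures instead of failing the whole request
) {}
```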
Closes #69108, closes #43462
When libs/core was created, several classes were moved from server's
o.e.common package, but they were not moved to a new package. Split
packages need to go away long term, so that Elasticsearch can even think
about modularization. This commit moves all the classes under o.e.common
in core to o.e.core.
relates #73784
The recent upgrade of the Azure SDK has caused a few test failures that
have been difficult to debug and do not yet have a fix. In particular, one
failure involves a change to the netty reactor resolving
(https://github.com/reactor/reactor-netty/issues/1655). We need to wait
for a fix for that issue, so this reverts commit
6c4c4a0ecb.
relates #73493
This commit upgrades the Azure SDK to 12.11.0 and Jackson to 2.12.2. The
Jackson upgrade must happen at the same time due to Azure depending on
this new version of Jackson.
Closes #66555, closes #67214
Co-authored-by: Francisco Fernández Castaño <francisco.fernandez.castano@gmail.com>
Use an iterator instead of a list when passing around what to delete.
In the case of very large deletes the iterator is much smaller than
the actual list of files to delete (since we avoid storing all the prefixes,
which adds up if the individual shard folders contain lots of deletes).
As a side effect, this commit also adjusts a few spots in logging where the
log messages could become catastrophic in size when trace logging is activated.
There should be a singleton for the empty version of this.
All the copying to `String[]`, or the use as an iterator, makes
no sense either when we can just use the list outright.
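A self-contained sketch of the idea (not the actual repository code): the full `<shardPath>/<file>` strings are produced lazily while the delete runs instead of being materialized up front.

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

// Toy example: lazily expand shard folders into full blob paths to delete.
public class LazyDeleteSketch {

    static Iterator<String> blobsToDelete(List<String> shardPaths, Function<String, List<String>> filesInShard) {
        return shardPaths.stream()
            .flatMap(shardPath -> filesInShard.apply(shardPath).stream()
                .map(fileName -> shardPath + "/" + fileName))   // prefix applied lazily, never stored per file
            .iterator();
    }
}
```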
This commit upgrades the Azure SDK to 12.11.0 and Jackson to 2.12.2. The
Jackson upgrade must happen at the same time due to Azure depending on
this new version of Jackson.
Closes #66555, closes #67214
Since multiple data path support has been removed, the Setting no longer
needs to support multiple values. This commit converts the
PATH_DATA_SETTING to a String setting from List<String>.
relates #71205
Related to #71593, we move all build logic that is only for the Elasticsearch build itself into
the org.elasticsearch.gradle.internal* packages.
This makes it clearer whether build logic is intended to be used by external projects.
Ultimately we want to only expose TestCluster and PluginBuildPlugin logic
to third party plugin authors.
This is a very first step in that direction.
To write the contents of an InputStream we convert it into a
Flux<ByteBuffer>. Since we only have one IO thread for the Azure
SDK, the reads from the underlying InputStream are dispatched
to different threads in order to avoid blocking the IO thread.
This could cause visibility problems when the underlying InputStream
holds state that is not synchronized. To avoid this,
this commit introduces explicit synchronization barriers that
ensure visibility. The overhead should be fairly minimal, since
the contention should be small.
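A minimal, self-contained sketch of such a barrier (illustrative, not the actual AzureBlobStore code): routing every read through a single monitor gives each worker thread a happens-before edge on the stream state written by the previous one.

```java
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative only: every read enters and exits the same monitor, which is the visibility barrier.
final class SynchronizedInputStream extends FilterInputStream {

    private final Object lock = new Object();

    SynchronizedInputStream(InputStream delegate) {
        super(delegate);
    }

    @Override
    public int read(byte[] b, int off, int len) throws IOException {
        synchronized (lock) {
            return super.read(b, off, len);
        }
    }

    @Override
    public int read() throws IOException {
        synchronized (lock) {
            return super.read();
        }
    }
}
```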
1. Limit the number of blob deletes to execute in parallel to `100` (see the sketch after this list).
2. Use flat listing when deleting a directory to require fewer listings
in between deletes and keep the code simpler.
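A self-contained sketch of the bounded-concurrency delete mentioned in point 1 (illustrative only; the real code uses the repository's own async machinery):

```java
import java.util.Iterator;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Semaphore;
import java.util.function.Function;

// Toy example: never have more than maxInFlight asynchronous deletes outstanding at once.
final class BoundedDeletes {

    static void deleteAll(Iterator<String> blobs,
                          Function<String, CompletableFuture<Void>> deleteBlob,
                          int maxInFlight) throws InterruptedException {
        Semaphore permits = new Semaphore(maxInFlight);              // e.g. 100
        while (blobs.hasNext()) {
            String blob = blobs.next();
            permits.acquire();                                       // wait until a slot frees up
            deleteBlob.apply(blob).whenComplete((ignored, error) -> permits.release());
        }
        // Wait for the tail of outstanding deletes to finish.
        permits.acquire(maxInFlight);
        permits.release(maxInFlight);
    }
}
```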
Closes #71267
Today when creating an internal test cluster, we allow the test to
supply the node settings that are applied. The extension point to
provide these settings has a single integer parameter, indicating the
index (zero-based) of the node being constructed. This allows the test
to make some decisions about the settings to return, but it is too
simplistic. For example, imagine a test that wants to provide a setting,
but some values for that setting are not valid on non-data nodes. Since
the only information the test has about the node being constructed is
its index, it does not have sufficient information to determine if the
node being constructed is a non-data node or not, since this is done by
the test framework externally by overriding the final settings with
specific settings that dictate the roles of the node. This commit changes
the test framework so that the test has information about what settings
are going to be overridden by the test framework after the test provides
its test-specific settings. This allows the test to make informed
decisions about what values it can return to the test framework.
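A self-contained sketch of the idea (the interface and the `"node.roles"` key below are illustrative stand-ins for the real test-framework types): the settings the framework will apply afterwards are passed in, so the provider can check the node's roles before deciding what to return.

```java
import java.util.List;
import java.util.Map;

// Toy stand-in for the node-settings extension point.
interface NodeSettingsProvider {

    Map<String, String> nodeSettings(int nodeOrdinal, Map<String, List<String>> frameworkSettings);

    // Only emit a (hypothetical) data-node-only setting when the framework-supplied
    // settings say this node will actually be a data node.
    static NodeSettingsProvider dataNodeOnlySetting(String key, String value) {
        return (nodeOrdinal, frameworkSettings) -> {
            List<String> roles = frameworkSettings.getOrDefault("node.roles", List.of());
            return roles.contains("data") ? Map.of(key, value) : Map.of();
        };
    }
}
```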
Today we represent block IDs sent to Azure using the URL-safe base-64
encoding. This makes sense: these IDs appear in URLs. It turns out that
Azure rejects this encoding for block IDs and instead demands that they
are represented using the regular, URL-unsafe, base-64 encoding,
then further wrapped in %-encoding to deal with the URL-unsafe
characters that inevitably result.
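A small, self-contained sketch of the encoding difference (illustrative, not the actual client code):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Block ids must be regular base-64; any URL-unsafe characters ('+', '/') are then
// %-encoded when the id is placed in the request URL.
final class BlockIdEncoding {

    static String blockId(byte[] rawId) {
        // rejected by Azure: Base64.getUrlEncoder().encodeToString(rawId)
        return Base64.getEncoder().encodeToString(rawId);                    // regular base-64
    }

    static String blockIdForUrl(byte[] rawId) {
        return URLEncoder.encode(blockId(rawId), StandardCharsets.UTF_8);    // %-encode for the URL
    }
}
```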
Relates #66489
As per the new licensing change for Elasticsearch and Kibana this commit
moves existing Apache 2.0 licensed source code to the new dual license
SSPL+Elastic license 2.0. In addition, existing x-pack code now uses
the new version 2.0 of the Elastic license. Full changes include:
- Updating LICENSE and NOTICE files throughout the code base, as well
as those packaged in our published artifacts
- Update IDE integration to now use the new license header on newly
created source files
- Remove references to the "OSS" distribution from our documentation
- Update build time verification checks to no longer allow Apache 2.0
license header in Elasticsearch source code
- Replace all existing Apache 2.0 license headers for non-xpack code
with updated header (vendored code with Apache 2.0 headers obviously
remains the same).
- Replace all Elastic license 1.0 headers with new 2.0 header in xpack.
A blob store repository can be put in readonly mode by setting
`readonly: true` in its settings. In the codebase the setting key is
just the literal string `"readonly"` wherever it's used and it takes
some effort to determine what the right setting name is, in particular
to check each time that it's not spelled `"read_only"`.
This commit replaces those literal `"readonly"` strings with the
`BlobStoreRepository#READONLY_SETTING_KEY` constant to reduce this
trappiness.
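A tiny sketch of the pattern (the constant name comes from this commit; the surrounding class is illustrative):

```java
import java.util.Map;

// Illustrative only: the literal lives in exactly one place, call sites use the constant.
final class ReadonlySettingSketch {

    static final String READONLY_SETTING_KEY = "readonly";   // not "read_only"

    static boolean isReadonly(Map<String, String> repositorySettings) {
        return Boolean.parseBoolean(repositorySettings.getOrDefault(READONLY_SETTING_KEY, "false"));
    }
}
```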
We have an in-house rule to compare explicitly against `false` instead
of using the logical not operator (`!`). However, this hasn't
historically been enforced, meaning that there are many violations in
the source at present.
We now have a Checkstyle rule that can detect these cases, but before we
can turn it on, we need to fix the existing violations. This is being
done over a series of PRs, since there are a lot to fix.
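A minimal example of what the rule enforces:

```java
// House style: compare explicitly against false rather than using the logical not operator.
final class BooleanStyleExample {

    static void ensureOpen(boolean closed) {
        // disallowed: if (!closed) { return; }
        if (closed == false) {
            return;
        }
        throw new IllegalStateException("already closed");
    }
}
```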
Eclipse wasn't seeing the special shadow jars we were making for
repository-azure and repository-hdfs so it wasn't able to compile those
plugins. This points Eclipse at the project that we use to build the
shadow jar which gets it compiling. The tests don't pass because we
aren't pointing at the shadow jars but at least we compile.
The error message might change depending on the timing when we try
to read from the stream. Since we already check that we're not able
to read any data this assertion doesn't add much value.
We were too aggressive with retries, and in certain scenarios (CI) it
was possible that, by the time the SDK had retried n times, the http handler
still had some pending backlog that didn't account for all the performed
requests.
Closes #66865
This PR fixes the validation of the conversion from an input stream to a flux in the
AzureBlobStore's multipart update logic, which erroneously checked that the upload
input stream is exhausted after each part's flux is completed.
This finishes porting all tasks created in Gradle build scripts and plugins to use
the task avoidance API (see #56610)
* Port all task definitions to task avoidance api
* Fix last task created during configuration
* Fix test setup in :modules:reindex
* Declare proper task inputs
The client-side encrypted repository is a new type of snapshot repository that
internally delegates to the regular variants of snapshot repositories (of types
Azure, S3, GCS, FS, and maybe others but not yet tested). After the encrypted
repository is set up, it is transparent to the snapshot and restore APIs (i.e. all
snapshots stored in the encrypted repository are encrypted, no other parameters
required).
The encrypted repository is protected by a password stored on every node's
keystore (which must be the same across the nodes).
The password is used to generate a key encryption key (KEK), using the PBKDF2
function, which is used to encrypt (using the AES Wrap algorithm) other
symmetric keys (referred to as DEK - data encryption keys), which themselves
are generated randomly, and which are ultimately used to encrypt the snapshot
blobs.
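A self-contained sketch of that key hierarchy using the standard JCA APIs (the salt, iteration count, key sizes and PBKDF2 variant below are placeholders, not the repository's actual parameters):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;

// Illustrative KEK/DEK sketch; parameter choices here are placeholders.
final class KeyHierarchySketch {

    // Derive the key-encryption key (KEK) from the repository password with PBKDF2.
    static SecretKey deriveKek(char[] repositoryPassword, byte[] salt) throws GeneralSecurityException {
        SecretKeyFactory pbkdf2 = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512");
        PBEKeySpec spec = new PBEKeySpec(repositoryPassword, salt, 10_000, 256);
        return new SecretKeySpec(pbkdf2.generateSecret(spec).getEncoded(), "AES");
    }

    // Generate a random data-encryption key (DEK) and wrap it with the KEK (AES Wrap),
    // so only the wrapped DEK needs to be stored next to the encrypted blobs.
    static byte[] newWrappedDek(SecretKey kek) throws GeneralSecurityException {
        KeyGenerator aes = KeyGenerator.getInstance("AES");
        aes.init(256);
        SecretKey dek = aes.generateKey();
        Cipher wrap = Cipher.getInstance("AESWrap");
        wrap.init(Cipher.WRAP_MODE, kek);
        return wrap.wrap(dek);
    }
}
```

Keeping the DEKs wrapped rather than derived directly from the password is presumably what allows a later password change to only re-wrap keys instead of re-encrypting blobs.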
For example, here is how to set up an encrypted FS repository:
------
1) make sure that the cluster runs under at least a "platinum" license
(simplest test configuration is to put `xpack.license.self_generated.type: "trial"`
in the elasticsearch.yml file)
2) identical to the un-encrypted FS repository, specify the mount point of the
shared FS in the elasticsearch.yml conf file (on all the cluster nodes),
e.g. `path.repo: ["/tmp/repo"]`
3) store the repository password inside the elasticsearch.keystore, *on every cluster node*.
In order to support changing the password on an existing repository (implemented in a follow-up),
the password itself must be named, e.g. for the "test_enc_pass" repository password name:
`./bin/elasticsearch-keystore add repository.encrypted.test_enc_pass.password`
*type in the password*
4) start up the cluster and create the new encrypted FS repository, named "test_enc", by calling:
`
curl -X PUT "localhost:9200/_snapshot/test_enc?pretty" -H 'Content-Type: application/json' -d'
{
"type": "encrypted",
"settings": {
"location": "/tmp/repo/enc",
"delegate_type": "fs",
"password_name": "test_enc_pass"
}
}
'
`
5) the snapshot and restore APIs work unmodified when they refer to this new repository, e.g.
` curl -X PUT "localhost:9200/_snapshot/test_enc/snapshot_1?wait_for_completion=true"`
Related: #49896, #41910, #50846, #48221, #65768
We were depending on BouncyCastle FIPS' own mechanics to set
itself in approved-only mode since we run with the Security
Manager enabled. The check during startup seems to happen before we
set our restrictive SecurityManager in
org.elasticsearch.bootstrap.Elasticsearch, though, and this means that
BCFIPS would not be in approved-only mode unless explicitly
configured so.
This commit sets the appropriate JVM property to explicitly set
BCFIPS in approved only mode in CI and adds tests to ensure that we
will be running with BCFIPS in approved only mode when we expect to.
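A minimal sketch of that kind of assertion, assuming the BCFIPS system property and `CryptoServicesRegistrar` API are as I recall them for the BC-FJA provider (verify against the BCFIPS documentation):

```java
import org.bouncycastle.crypto.CryptoServicesRegistrar;

// Assumes the JVM was started with -Dorg.bouncycastle.fips.approved_only=true
// (the property name is my recollection of the BCFIPS switch, not quoted from this commit).
final class FipsApprovedOnlyCheck {

    static void assertApprovedOnlyMode() {
        if (CryptoServicesRegistrar.isInApprovedOnlyMode() == false) {
            throw new AssertionError("expected BCFIPS to run in approved-only mode");
        }
    }
}
```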
It also sets xpack.security.fips_mode.enabled to true for all test clusters
used in fips mode and sets the distribution to the default one. It adds a
password to the elasticsearch keystore for all test clusters that run in fips
mode.
Moreover, it changes a few unit tests where we would use bcrypt even in
FIPS 140 mode. These would still pass since we are bundling our own
bcrypt implementation, but are now changed to use FIPS 140 approved
algorithms instead for better coverage.
It also addresses a number of tests that would fail in approved-only mode.
Mainly:
Tests that use PBKDF2 with a password of less than 112 bits (14 characters). We
elected to change the passwords used everywhere to be at least 14
characters long instead of mandating
the use of pbkdf2_stretch because both pbkdf2 and
pbkdf2_stretch are supported and allowed in fips mode and it makes sense
to test with both. We could possibly figure out the password algorithm used
for each test and adjust password length accordingly only for pbkdf2 but
there is little value in that. It's good practice to use strong passwords so if
our docs and tests use longer passwords, then it's for the best. The approach
is brittle as there is no guarantee that the next test that will be added won't
use a short password, so we add some testing documentation too.
This leaves us with a possible coverage gap, since we do support passwords
as short as 6 characters but only test with more than 14 characters; then again,
the validation itself was not tested even before. Tests can be added in a follow-up,
outside of the FIPS-related context.
Tests that use a PKCS12 keystore and were not already muted.
Tests that depend on running test clusters with a basic license or
using the OSS distribution, as FIPS 140 support is not available in
either of these.
Finally, it adds some information around FIPS 140 testing in our testing
documentation reference so that developers can hopefully keep in
mind FIPS 140 related intricacies when writing/changing docs.
Except when writing actual segment files to the blob store
we always write `BytesReference` instead of a stream.
Only having the stream API available forces needless copies
on us. I fixed the straightforward needless copying for
HDFS and FS repos in this PR; we could do similar fixes for
GCS and Azure as well and thus significantly reduce the peak
memory use of these writes on master nodes in particular.
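A simplified sketch of the API difference (the `BytesHolder` type below stands in for a `BytesReference`-like container; it is not the real interface):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// With only the stream method available, implementations tend to copy into an intermediate
// buffer; a bytes-based method lets e.g. the FS repository write the existing pages straight
// to its target OutputStream.
interface BlobWriterSketch {

    void writeBlob(String name, InputStream in, long length) throws IOException;   // stream-only API

    void writeBlob(String name, BytesHolder bytes) throws IOException;             // bytes-based API

    interface BytesHolder {
        long length();
        void writeTo(OutputStream out) throws IOException;   // writes the backing pages without copying them
    }
}
```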
This commit moves the upload logic to the repository itself
instead of delegating into the SDK.
Multi-block uploads are done sequentially instead of in parallel,
which allows bounding the outstanding memory.
Additionally, the number of I/O threads and heap arenas has been
reduced to 1 to reduce the memory overhead.
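A rough sketch of a sequential block upload, using the Azure SDK's block-blob staging calls as I understand them (`stageBlock`/`commitBlockList`); the real repository code differs, this only illustrates why staging one block at a time keeps memory bounded to a single block buffer:

```java
import com.azure.storage.blob.specialized.BlockBlobClient;

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Base64;
import java.util.List;

// Illustrative sequential multi-block upload.
final class SequentialBlockUpload {

    static void upload(BlockBlobClient blob, InputStream data, int blockSize) throws IOException {
        List<String> blockIds = new ArrayList<>();
        byte[] buffer = new byte[blockSize];
        long block = 0;
        int read;
        while ((read = data.readNBytes(buffer, 0, buffer.length)) > 0) {
            // fixed-width ids, since Azure requires all block ids of a blob to have the same length
            String blockId = Base64.getEncoder()
                .encodeToString(String.format("block-%08d", block++).getBytes(StandardCharsets.UTF_8));
            blob.stageBlock(blockId, new ByteArrayInputStream(buffer, 0, read), read);  // one block in memory at a time
            blockIds.add(blockId);
        }
        blob.commitBlockList(blockIds);  // makes the staged blocks visible as the blob's content
    }
}
```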
Closes #66385
Make `#map` look and feel a little nicer, optimize chains of `#map`,
and replace `#delegateFailure` calls with `#map` calls where possible,
in order to enforce that callbacks do not throw.
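A self-contained sketch of why `#map` can enforce the non-throwing property while `#delegateFailure` cannot (`SketchListener` is a stand-in for the real `ActionListener`; semantics are simplified):

```java
import java.util.function.BiConsumer;
import java.util.function.Function;

// Toy listener illustrating the two composition styles.
interface SketchListener<T> {
    void onResponse(T response);
    void onFailure(Exception e);

    // map: the mapping function runs inside a try/catch, so a throwing mapper becomes
    // onFailure on the delegate instead of escaping the callback.
    default <R> SketchListener<R> map(Function<R, T> mapper) {
        SketchListener<T> delegate = this;
        return new SketchListener<R>() {
            @Override
            public void onResponse(R response) {
                T mapped;
                try {
                    mapped = mapper.apply(response);
                } catch (Exception e) {
                    delegate.onFailure(e);
                    return;
                }
                delegate.onResponse(mapped);
            }

            @Override
            public void onFailure(Exception e) {
                delegate.onFailure(e);
            }
        };
    }

    // delegateFailure: only failures are forwarded automatically; the success branch is
    // free-form and therefore not protected against throwing, which is why map is preferred.
    default <R> SketchListener<R> delegateFailure(BiConsumer<SketchListener<T>, R> onSuccess) {
        SketchListener<T> delegate = this;
        return new SketchListener<R>() {
            @Override
            public void onResponse(R response) {
                onSuccess.accept(delegate, response);
            }

            @Override
            public void onFailure(Exception e) {
                delegate.onFailure(e);
            }
        };
    }
}
```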
Many of the metadata blobs we handle in the changed spots can grow
in size up to `O(1M)`. Not using recycled bytes when working with
them causes significant spikes in memory use for larger repositories.
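A toy illustration of the recycling idea (a simple pool, not the actual recycler/BigArrays machinery used in the codebase):

```java
import java.util.ArrayDeque;

// Illustrative buffer pool: reuse ~1MB buffers instead of allocating a fresh one per metadata blob.
final class BufferRecycler {

    private static final int BUFFER_SIZE = 1 << 20;   // ~1MB, the order of magnitude mentioned above
    private final ArrayDeque<byte[]> free = new ArrayDeque<>();

    synchronized byte[] obtain() {
        byte[] buffer = free.poll();
        return buffer != null ? buffer : new byte[BUFFER_SIZE];
    }

    synchronized void recycle(byte[] buffer) {
        free.push(buffer);   // hand the buffer back for the next blob instead of letting it become garbage
    }
}
```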