This commit allows returning the correct requested response content type, which previously did not work for versioned media types.
It is done by adding new vendor-specific instances to the XContent and TextFormat enums. These instances can then "format" the response content type string when provided with parameters. This is similar to what the SQL plugin does with its media types.
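As a rough sketch of the mechanism (the enum constant and method names here are hypothetical, not the actual XContent/TextFormat members):

```java
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical sketch of a vendor media type that builds its response
// Content-Type string from request parameters; the real instances live
// in the XContent and TextFormat enums and may differ in shape.
public enum VendorMediaType {
    VND_JSON("application/vnd.elasticsearch+json");

    private final String baseType;

    VendorMediaType(String baseType) {
        this.baseType = baseType;
    }

    // e.g. parameters {compatible-with=7} yield
    // "application/vnd.elasticsearch+json;compatible-with=7"
    public String formatContentType(Map<String, String> parameters) {
        if (parameters == null || parameters.isEmpty()) {
            return baseType;
        }
        return baseType + ";" + parameters.entrySet().stream()
            .map(e -> e.getKey() + "=" + e.getValue())
            .collect(Collectors.joining(";"));
    }
}
```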
#51816
We were depending on BouncyCastle FIPS's own mechanics to set
itself in approved only mode, since we run with the Security
Manager enabled. The check during startup seems to happen before we
set our restrictive SecurityManager in
org.elasticsearch.bootstrap.Elasticsearch, though, and this means that
BCFIPS would not be in approved only mode unless explicitly
configured so.
This commit sets the appropriate JVM property to explicitly set
BCFIPS in approved only mode in CI and adds tests to ensure that we
will be running with BCFIPS in approved only mode when we expect to.
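For reference, a minimal sketch of such a check, assuming the JVM was started with -Dorg.bouncycastle.fips.approved_only=true and using the public BC FIPS registrar API (the class and wiring here are illustrative, not our actual test code):

```java
import org.bouncycastle.crypto.CryptoServicesRegistrar;

// Minimal sketch: fail fast unless BCFIPS reports approved only mode.
// Assumes the JVM was started with -Dorg.bouncycastle.fips.approved_only=true.
public class FipsApprovedOnlyCheck {
    public static void main(String[] args) {
        if (CryptoServicesRegistrar.isInApprovedOnlyMode() == false) {
            throw new IllegalStateException("BCFIPS is not in approved only mode");
        }
        System.out.println("BCFIPS is running in approved only mode");
    }
}
```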
It also sets xpack.security.fips_mode.enabled to true for all test clusters
used in FIPS mode and sets the distribution to the default one. It adds a
password to the elasticsearch keystore for all test clusters that run in FIPS
mode.
Moreover, it changes a few unit tests where we would use bcrypt even in
FIPS 140 mode. These would still pass since we are bundling our own
bcrypt implementation, but are now changed to use FIPS 140 approved
algorithms instead for better coverage.
It also addresses a number of tests that would fail in approved only mode. Mainly:

* Tests that use PBKDF2 with a password shorter than 112 bits (14 characters). We elected to change the passwords used everywhere to be at least 14 characters long, instead of mandating the use of pbkdf2_stretch, because both pbkdf2 and pbkdf2_stretch are supported and allowed in FIPS mode and it makes sense to test with both. We could possibly figure out the password algorithm used for each test and adjust the password length accordingly only for pbkdf2, but there is little value in that. It's good practice to use strong passwords, so if our docs and tests use longer passwords, it's for the best. The approach is brittle, as there is no guarantee that the next test to be added won't use a short password, so we add some testing documentation too. This leaves us with a possible coverage gap, since we do support passwords as short as 6 characters but only test with 14 or more; however, the validation itself was not tested even before. Tests can be added in a follow-up, outside of the FIPS-related context. A minimal sketch of the length requirement follows this list.
* Tests that use a PKCS12 keystore and were not already muted.
* Tests that depend on running test clusters with a basic license or using the OSS distribution, as FIPS 140 support is not available in either of these.
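As an aside, here is a minimal, self-contained illustration of the length requirement using the standard JCE PBKDF2 API (the password and parameters are illustrative):

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.SecureRandom;

// Illustrative only: in approved only mode BCFIPS rejects PBKDF2
// passwords shorter than 112 bits, i.e. fewer than 14 characters.
public class Pbkdf2LengthExample {
    public static void main(String[] args) throws Exception {
        char[] password = "at-least-14-chars".toCharArray(); // 17 chars >= 14
        byte[] salt = new byte[32];
        new SecureRandom().nextBytes(salt);
        PBEKeySpec spec = new PBEKeySpec(password, salt, 10_000, 256);
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA512");
        byte[] hash = factory.generateSecret(spec).getEncoded();
        System.out.println("derived " + hash.length + " bytes");
    }
}
```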
Finally, it adds some information around FIPS 140 testing in our testing
documentation reference so that developers can hopefully keep in
mind FIPS 140 related intricacies when writing/changing docs.
This makes sure that we only serve a hit from the request cache if it
was built using the same mapping, and that the same mapping is used for
the entire "query phase" of the search.
Closes #62033
In 8.x the default for `?wait_for_active_shards` changes from `NONE` to
`DEFAULT` on calls to `POST /index/_close`. This commit adds this change
to the breaking changes docs.
Relates #66419, #66542
Generating certificates with the cert sub-command now requires either (1) a CA,
provided with --ca or --ca-cert/--ca-key, or (2) the --self-signed option to
make them self-signed. Generating a CA on the fly is no longer supported. The
--keep-ca-key option is removed, and if it is specified the tool throws an
error saying the CA needs to be generated separately.
This is a follow-up PR for #61884, which deprecated the "ca-on-the-fly" usage.
There is little evidence of this endpoint being used
and there is quite a lot of code complexity associated
with the various formats that can be used to upload
data and the different errors that can occur when direct
data upload is open to end users.
In a future release we can make this endpoint internal
so that only datafeeds can use it, and remove all the
options and formats that are not used by datafeeds.
End users will have to store their input data for
anomaly detection in Elasticsearch indices (which we
believe they all do today) and use a datafeed to feed it
to anomaly detection jobs.
We lost the `logger.transport.name` line in #65169 and I incorrectly
extrapolated from what was left and mangled it further in #66318. This
commit fixes things.
Added documentation for the `aggregate_metric_double` field, which was merged in #56745
Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>
This commit fixes a potential bug that would arise if we support anomaly
detection results index rollover in the future.
In particular, we determine the current `data_counts` by sorting on the
latest record time. However, this is not correct if the job reverts
to an older model snapshot. To fix this we add `log_time` to `data_counts`
(similarly to `model_size_stats`) and sort on `log_time` to figure
out the current counts for the job.
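A rough sketch of the selection under the new scheme (the builder usage is illustrative, not the actual production code):

```java
import org.elasticsearch.search.builder.SearchSourceBuilder;
import org.elasticsearch.search.sort.SortOrder;

public class LatestDataCountsQuery {
    // Illustrative: fetch the single data_counts document with the
    // greatest log_time, so a revert to an older model snapshot does
    // not select counts based on the latest record time alone.
    public static SearchSourceBuilder latestDataCounts() {
        return new SearchSourceBuilder()
            .size(1)
            .sort("log_time", SortOrder.DESC);
    }
}
```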
Today the docs use `logger.org.elasticsearch.transport: TRACE` as the
example for adjusting the logger config. This is a dangerous thing to
suggest since that's one of the most verbose loggers we have. An
accidental copy-and-paste of this example into a busy cluster can
cause damage.
This commit suggests `logger.org.elasticsearch.discovery: DEBUG`
instead, which is much more benign.
It also corrects the order of the levels and notes that `DEBUG` and
`TRACE` are only for expert use.