An attempt to apply the spotless plugin everywhere and then disable it
where it wasn't appropriate didn't work; instead, everything was
formatted. Revert how we apply the plugin, and use a different approach
to applying extra configuration in build files.
Changes:
* Removes the limitation for multi-value fields.
* Adds a recommendation to avoid complex expressions for Boolean comparisons to the `string` fn.
Relates to #76610.
v7compatibilityNotSupportedTests was introduced to make it easier to
track tests that have been identified as not needing compatible changes
and those that still need to be checked.
We have checked all tests now and the separate list is no longer needed.
relates #51816
relates #73912
This PR fixes the generation of `UnassignedInfo` in
AllocationRoutedStepTests#testExecuteAllocateUnassigned to
ensure that it will not trigger consistency-check assertions.
Currently we use the custom lz4-block scheme when compressing data. This
scheme automatically calculates and writes a checksum when compressing.
We do not actually read this checksum when decompressing, so computing it is
unnecessary. This commit resolves this by writing a no-op checksum instead.
This will break arbitrary decompressors. However, since the lz4 block
format is not an official format anyway, this should be fine.
Relates to #73497.
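The no-op checksum idea can be sketched in plain Java with the standard `java.util.zip.Checksum` interface (an illustrative sketch only; the class name and shape are hypothetical, not the actual Elasticsearch code):

```java
import java.util.zip.Checksum;

// Hypothetical sketch: a Checksum that does no work, so a compression
// framing format can still emit its checksum field without paying for
// the computation. Any decompressor that verifies the checksum would
// reject the output, which is the trade-off noted above.
public class NoopChecksum implements Checksum {
    @Override
    public void update(int b) {
        // intentionally empty: nothing is accumulated
    }

    @Override
    public void update(byte[] b, int off, int len) {
        // intentionally empty
    }

    @Override
    public long getValue() {
        return 0L; // always the same constant "checksum"
    }

    @Override
    public void reset() {
        // nothing to reset
    }
}
```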
* Remove Node Shutdown API feature flag
This PR removes the Node Shutdown API feature flag.
The Node Shutdown API will now always be available.
* Check if xpack is enabled in cleanup
When I removed the feature flag, I assumed that we would always have the
Node Shutdown APIs, but that turns out not to be the case if xpack isn't
enabled. This case was accidentally caught by the logic that handled the
feature flag not being enabled.
This commit adds the check we always should have had.
* Also check version before trying cleanup
This change adds a new "augmented" annotation to the Painless allowlist parser. The first use of the
annotation supports adding static final fields to a specified allowlist class from another class. This
supports the fields API, as we can add additional field types from other classes and augment the Field
class with the new types.
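The core idea of pulling static final fields off one class to augment another can be sketched with plain reflection (illustrative only; the actual Painless allowlist parser works on its own configuration format, and this helper is hypothetical):

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: collect the names of static final fields declared
// on an "augmentation" class, as if adding them to another class's
// allowlist entry.
public class AugmentedFields {
    public static List<String> staticFinalFieldNames(Class<?> augmentation) {
        List<String> names = new ArrayList<>();
        for (Field f : augmentation.getDeclaredFields()) {
            int m = f.getModifiers();
            if (Modifier.isStatic(m) && Modifier.isFinal(m)) {
                names.add(f.getName());
            }
        }
        return names;
    }
}
```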
This commit extends the `inference_processor` to support allocated
models. In particular, the internal infer action is now checking
if the model is allocated, and if so, it redirects the request
to an instance of the allocated model. If not, then it proceeds
as previously to load the model through the `ModelLoadingService`.
In addition, we now check that a model is not allocated when
the `StopTrainedModelDeploymentAction` is executed.
Note that no new `InferenceConfigUpdate` objects are introduced
as there are no settings currently that can be set at inference time
for allocated models.
This introduces an optimisation of EQL requests when they target
one remote cluster only (i.e. no mixed local and remote indices or
multiple remote clusters). In this case, the EQL request is forwarded
to the remote cluster and executed there, instead of having the local
cluster perform multiple queries to the remote cluster.
* Reformatting to keep Checkstyle happy after formatting
* Configure spotless everywhere, and disable the tasks if necessary
* Add XContentBuilder helpers, fix test
* Tweaks
* Add a TODO
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
* Update Gradle wrapper to 7.2-rc-1
Fix deprecation warnings on the go
* Remove deprecated lambda based Gradle task actions
* Remove usage of deprecated BasePluginConvention
* Update wrapper to 7.2-rc-2
* Update gradle wrapper to 7.2-rc-3
* Update gradle wrapper to 7.2
**Parent ticket:** https://github.com/elastic/kibana/issues/101016
**Related to:** https://github.com/elastic/elasticsearch/pull/72181
## Summary
Similar to the previous PR (https://github.com/elastic/elasticsearch/pull/72181), we'd like to add privileges to a new set of indices to the `kibana_system` role.
The reason for that is we need to have different naming schemes for alerts-as-data index aliases and backing indices pointing to these aliases, which in turn is needed to support backwards compatibility, migrations and reindexing in the future.
We didn't want to prefix the backing indices with `.kibana-`, so we're adding a new `.internal.alerts` prefix. Prefixing with `.kibana-` would make them system indices, which means end users would not be supposed to read them, and that is not what we want.
`.internal` could become a universal prefix for hidden Kibana indices, but at this point I don't feel confident enough to generalise prematurely.
This PR fixes two situations where `NOT_STARTED` can appear as the shard migration status inappropriately:
1. When the node is actually shut down after having all the shards migrate away.
2. When a non-data-node is registered for shutdown.
It also adds tests to ensure these cases are handled correctly.
This commit fixes the parsing of allocation delay from XContent, which
was previously completely broken. Also adjusts the tests to exercise
that parsing.
* Adding base RestHandler class for Enrollment APIs
This change adds an abstract RestHandler class that the
enrollment API classes (node and Kibana enrollment) extend. It handles the
case where `enrollment.enabled` is not set to `true`, returning an
appropriate exception in that case.
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
* Gracefully handle very large sizes on terms
Folks often ask for the `terms` agg to have *very* large `size`
parameters in an effort to get everything they can. But they never fill
that large size. They can't! There isn't really enough heap to return
two billion buckets.
After #74096 we try to pre-allocate an array of `size + 1` length,
regardless of how many results are returned. When folks ask for
`MAX_INT` buckets this fails deep in Lucene land with an error
message that is fairly esoteric to someone reading it outside of
Elasticsearch.
This change handles that `MAX_INT` case by building a
`TopBucketsBuilder` designed to handle large max sizes that are rarely
filled. When you ask for 1024 or more buckets you get that one instead
and we don't preallocate the entire array for the reduction. You get the
old preallocated one when you have few buckets.
Closes #76492
* fixup skip
* Moar comment
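The "don't preallocate for rarely filled sizes" idea can be sketched with a bounded min-heap that only grows as buckets actually arrive (an illustrative sketch; the real `TopBucketsBuilder` internals are not shown here):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

// Illustrative sketch: keep the top `size` bucket counts without
// allocating `size + 1` slots up front. Memory grows with the number of
// buckets actually seen, so asking for MAX_INT buckets is harmless when
// only a few exist.
public class TopBuckets {
    public static List<Long> top(Iterable<Long> counts, int size) {
        PriorityQueue<Long> heap = new PriorityQueue<>(); // min-heap
        for (long c : counts) {
            heap.offer(c);
            if (heap.size() > size) {
                heap.poll(); // evict the smallest, keeping the top `size`
            }
        }
        List<Long> result = new ArrayList<>(heap);
        result.sort(Comparator.reverseOrder());
        return result;
    }
}
```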
The `InboundDecoderTests` use `PageCacheRecycler#NON_RECYCLING_INSTANCE`
for their recycler, which has no leak detection. This commit replaces it
with a `MockPageCacheRecycler` to catch leaks in this area, and fixes
the two (test-only) leaks that it found.
This fixes the case when the create-snapshot step fails because it treats
a partial snapshot as a failure, hence the step will be retried and every retry
will fail as the snapshot already exists in the repository.
This makes the step report "incomplete" and have ILM rewind to `cleanup-snapshot`
step in order to first delete the existing (partial) snapshot and create a fresh one.
The change is quite big due to changing the signature of AsyncActionStep#performAction
to not use ActionListener<Boolean> (as all steps should've returned true) but to use
ActionListener<Void>. This also deletes AsyncActionBranchingStep because it was
unfit for purpose given the new *explicit* binary state (success or failure) for async
steps - which was the source of the bug this commit is fixing.
Separate the stats collection for data tiers into two steps: nodes then index stats. Nodes are
summarized by all the configured tiers they are assigned to. Indices are then checked for
their most preferred tier, and are then summarized based on that, regardless of which node
an index's shards are hosted on.
* Script: ulong via fields API
Exposes unsigned long via the fields API.
Unsigned longs default to Java signed longs. That means the upper range
appears negative. Consumers should use `Long.compareUnsigned(long, long)`,
`Long.divideUnsigned(long, long)` and `Long.remainderUnsigned(long, long)`
to work correctly with values known to be unsigned longs.
Alternatively, users may treat the unsigned long type as `BigInteger` using
the field API, `field('ul').as(Field.BigInteger).getValue(BigInteger.ZERO)`.
```
field('ul').as(Field.BigInteger).getValue(BigInteger.valueOf(1000))
field('ul').getValue(1000L)
```
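The unsigned-long helpers mentioned above behave as follows in plain Java (a small illustration, not Painless; `-1L` has all 64 bits set, i.e. 2^64 - 1 when read as unsigned):

```java
import java.math.BigInteger;

// Illustration of treating a signed long's bit pattern as unsigned,
// as the JDK's Long.*Unsigned helpers do.
public class UnsignedLongDemo {
    // Compares the two bit patterns as unsigned 64-bit values.
    public static int compareUnsigned(long a, long b) {
        return Long.compareUnsigned(a, b);
    }

    // Widens to the full unsigned range via the unsigned decimal string,
    // mirroring the BigInteger view offered by the fields API.
    public static BigInteger asBigInteger(long ul) {
        return new BigInteger(Long.toUnsignedString(ul));
    }
}
```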
This change also implements the beginning of the converters for the fields
API. The following conversions have been added:
```
ulong <-> BigInteger
long <-> BigInteger
double -> BigInteger
String (parsed as long or double) -> BigInteger
double -> long
String (parsed as long or double) -> long
Date (epoch milliseconds) -> long
Nano Date (epoch nanoseconds) -> long
boolean (1L for true, 0L for false) -> long
```
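For instance, the `String (parsed as long or double) -> long` conversion can be sketched like this (illustrative only; the actual converter code is not shown here):

```java
// Illustrative sketch of the "String (parsed as long or double) -> long"
// conversion: try an exact long parse first, then fall back to parsing
// as a double and truncating, matching a plain double -> long conversion.
public class StringToLong {
    public static long convert(String s) {
        try {
            return Long.parseLong(s);
        } catch (NumberFormatException e) {
            return (long) Double.parseDouble(s);
        }
    }
}
```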
Adds multi-valued fields support:
- remove in-place optimizations (ranges, binary comparisons, negation
logic)
- functions support for different values of the same field in multi-use
scenarios
- null handling
- negation handling
- backwards compatibility checks and tests for multi-value fields support
This commit changes the Enroll Kibana API to create and return
a token for this service account, instead of setting and returning the
password of the kibana_system built-in user. Both the token name and
value are returned in the response of the API.
This commit is a bundle of changes to support the removal of X-Pack
SSL in favour of the ssl-config library.
The main changes are:
1. Migrating some certificate management in PKI and SAML realm to use
ssl-config
2. Updating a variety of test cases to use ssl-config for their SSL
setup and verification
This commit changes the implementation of the Realms class to listen
for license changes, and recompute the set of actively licensed realms
only when the license changes rather than each time the "asList" method
is called.
This is primarily a performance optimisation, but it also allows us to
turn off the "in use" license tracking for realms when they are
disabled by a change in license.
Relates: #76476
This PR adjusts assertions in testPagination so that more detailed messages
would be made available on failures. For example, if the sorted keys do not
match the expected order, it now shows all keys for better context instead of
just the single mismatched key.
Relates: #76542
This PR makes the delayed allocation infrastructure aware of registered node shutdowns, so that reallocation of shards will be further delayed for nodes which are known to be restarting.
To make this more configurable, the Node Shutdown APIs now support an `allocation_delay` parameter, which defaults to 5 minutes. For example:
```
PUT /_nodes/USpTGYaBSIKbgSUJR2Z9lg/shutdown
{
  "type": "restart",
  "reason": "Demonstrating how the node shutdown API works",
  "allocation_delay": "20m"
}
```
Will cause reallocation of shards assigned to that node to another node to be delayed by 20 minutes. Note that this delay will only be used if it's *longer* than the index-level allocation delay, set via `index.unassigned.node_left.delayed_timeout`.
The `allocation_delay` parameter is only valid for `restart`-type shutdown registrations, and the request will be rejected if it's used with another shutdown type.
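The interaction with `index.unassigned.node_left.delayed_timeout` reduces to taking the longer of the two delays; a minimal sketch, assuming a hypothetical helper (the method and names are made up for illustration):

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: the reallocation delay actually applied is the
// longer of the index-level node_left timeout and the shutdown's
// allocation_delay, which is only present for restart-type shutdowns.
public class EffectiveDelay {
    public static long effectiveDelayMillis(long indexDelayMillis, Long shutdownDelayMillis) {
        if (shutdownDelayMillis == null) {
            return indexDelayMillis; // no restart-type shutdown registered
        }
        return Math.max(indexDelayMillis, shutdownDelayMillis);
    }

    public static void main(String[] args) {
        // Index-level delay of 1m vs. a registered 20m allocation_delay:
        // the 20m shutdown delay wins because it is longer.
        long indexDelay = TimeUnit.MINUTES.toMillis(1);
        long shutdownDelay = TimeUnit.MINUTES.toMillis(20);
        System.out.println(effectiveDelayMillis(indexDelay, shutdownDelay));
    }
}
```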