This adds a new parameter to the start trained model deployment API,
namely `priority`. The available settings are `normal` and `low`.
For normal priority deployments the allocations get distributed so that
node processors are never oversubscribed.
Low priority deployments allow users to test model functionality even if there
are no node processors available. They are limited to 1 allocation with a single thread.
In addition, the process is executed at low priority, which limits the amount of
CPU it can use when the CPU is under pressure. The intention is to
limit the impact of low priority deployments on normal priority deployments.
When we rebalance model assignments we now:
1. compute a plan just for normal priority deployments
2. fix the resources used by normal deployments
3. compute a plan just for low priority deployments
4. merge the two plans
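The four rebalancing steps above can be sketched as follows. This is an illustrative stand-alone sketch, not the actual Elasticsearch assignment planner; all names (`rebalance`, the deployment/node dict shapes) are hypothetical:

```python
def rebalance(deployments, node_processors):
    """deployments: list of {"id", "priority", "threads"};
    node_processors: {node_name: free_processor_count}."""
    normal = [d for d in deployments if d["priority"] == "normal"]
    low = [d for d in deployments if d["priority"] == "low"]

    # 1. Compute a plan just for normal priority deployments:
    #    node processors are never oversubscribed.
    free = dict(node_processors)
    normal_plan = {}
    for d in normal:
        node = next((n for n, c in free.items() if c >= d["threads"]), None)
        if node is not None:
            normal_plan[d["id"]] = node
            # 2. Fix the resources used by normal deployments.
            free[node] -= d["threads"]

    # 3. Compute a plan just for low priority deployments: one allocation
    #    with a single thread, placed even when no processors remain free
    #    (the OS-level low process priority caps their CPU usage).
    low_plan = {d["id"]: max(free, key=free.get) for d in low} if free else {}

    # 4. Merge the two plans.
    return {**normal_plan, **low_plan}
```

A low priority deployment may land on a node whose processors are fully used by normal deployments; that is intentional, since it only needs to be testable, not fast.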
Closes #91024
We have some existing tests that verify that we throw errors when parsing certain special field names
in documents. This commit expands them to cover more edge cases, and to test the same scenarios
with subobjects:false and with dynamic:runtime. Furthermore, additional tests and checks are added
to verify that the behaviour is aligned between document parsing and mapping parsing, which is not
the case in many scenarios.
This commit does not aim to fix the anomalies found, but only to surface them and add tests
that can later be adapted once the different scenarios are fixed.
The data stream is created so that it includes a downsampled index and
a normal write index. The date histogram query uses a fixed_interval value
that is smaller than the downsample fixed_interval aggregation interval.
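The query shape under test can be sketched as a request body. This is a hypothetical example (field names and intervals are illustrative): the aggregation's `fixed_interval` is finer than the interval the downsampled index was rolled up with:

```python
# Hypothetical date_histogram request body for the scenario above:
# the query's fixed_interval (1h) is smaller than the downsample
# fixed_interval the index was rolled up with (e.g. 1d).
search_body = {
    "size": 0,
    "aggs": {
        "by_time": {
            "date_histogram": {
                "field": "@timestamp",
                "fixed_interval": "1h",  # finer than the downsample interval
            }
        }
    },
}
```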
We currently use unicon/shibboleth-idp:3.4.2 to help test our SAML integration.
That container is no longer actively supported and does not support
ARM architectures.
This commit is a partial clone from Unicon/shibboleth-idp-dockerized 3.4.3.
Changes from upstream include:
- Use openjdk:11.0.16-jre as the base image, for ARM architecture support
- Handle missing keystore download from Jetty
- Fix URL paths for artifacts to download
Changes to this repository include:
- Copied required Jetty configuration files from the upstream project
- Updates to docker compose
- Placed the missing keystore Jetty downloads in a separate location (jetty-custom)
The final result is a bit messy, mixing cloned files with custom files and mixing
Jetty and IDP concerns. However, it is not much messier than before, and now
that we control building the image we can more easily upgrade Shibboleth IdP.
The upgrade to the latest version is fairly involved, and as such we will need to
deviate more from the clone, which should allow some additional cleanup.
part of: #71378
related: #91144
supersedes: #89674
1. Moves integration tests from an `ESIntegTestCase` into a REST test so
we can do forwards compatibility tests with it later.
2. Moves the `bucket_sort` pipeline agg from `server` into
`modules:aggregations` so it'll be a little easier to test it.
add a filter to the frequent items agg that filters documents from the analysis while still calculating support on the full set
A filter is specified top-level in `frequent_items`:

```
"frequent_items": {
  "filter": {
    "term": {
      "host.name.keyword": "i-12345"
    }
  },
  ...
```
The above filters out documents that don't match, but still counts them when calculating support. That is in contrast to
specifying a query at the top level, in which case you find the same item sets but don't know their importance relative to the full
document set.
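The filter-versus-query distinction above can be sketched with a naive miner. This is illustrative stand-alone code, not the actual aggregation implementation: item sets are mined only from the filtered documents, but support is computed against the full document count:

```python
from collections import Counter
from itertools import combinations

def frequent_items(docs, doc_filter, min_support=0.1):
    """Naive sketch: mine item sets from filtered docs, but divide by the
    FULL doc count so support reflects importance in the whole data set."""
    filtered = [d for d in docs if doc_filter(d)]
    counts = Counter()
    for d in filtered:
        items = sorted(d["items"])
        for r in range(1, len(items) + 1):
            for combo in combinations(items, r):
                counts[combo] += 1
    total = len(docs)  # full set, not len(filtered)
    return {s: c / total for s, c in counts.items() if c / total >= min_support}
```

With a top-level query instead, `total` would effectively become `len(filtered)`, which is why the reported support would no longer reflect the full document set.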
We should not hold a reference to the cluster state on the listener when
we don't need it here. This led to significant unnecessary heap
consumption with many long-running tasks, as seen in a client issue.
The auditable field was meant to determine whether authorization should
be audited or not. However, in practice this field is always true, and
what actually enables auditing is the use of different AuditTrail
implementations. The field is hence not necessary and only adds clutter
to the code. It is also arguable whether auditability belongs in the
AuthorizationInfo class at all. As a result, this PR removes this field from
AuthorizationInfo and its subclass IndexAuthorizationInfo.
Relates: https://github.com/elastic/elasticsearch/pull/91180/files#r1011344119
There can be no exclusions if they don't follow wildcards;
i.e. names that start with a leading "-" are considered verbatim
index names. Also, using request options
(i.e. expand_wildcards=none), it's possible to disable
wildcard expansion completely. When wildcard
expansion is disabled, exclusions ought to be considered verbatim
index names, even if they follow wildcard expressions.
This is indeed Security's behavior, as well as Core's most of
the time (see explainer below).
In #90298 we introduced support for date-math exclusions
of the form *,-<index-{now/d}> in order to align the Core resolver
with Security's (the evaluated date-math index, e.g. index-2022.10.26,
is to be excluded). But we did not consider the request option
that completely disables wildcard expansion. This PR remedies that.
Therefore, if expand_wildcards=none, the expression *,-<index-{now/d}>
now evaluates to *,-<index-{now/d}> (unchanged): the
leading "-" is treated verbatim (not as an exclusion sign), and the
date-math is not evaluated either, because date-math expressions must start
with "<" while this one starts with "-". Before this PR, the expression
evaluated to *,-index-2022.10.26, and the
-index-2022.10.26 name was considered verbatim rather than as an exclusion,
which differed from Security's behavior of treating
-<index-{now/d}> verbatim.
This PR makes expression names that contain "*" always
be interpreted as wildcards, and names that start with "-"
and follow a wildcard always be interpreted as
exclusions. This behavior is aligned with Security's, which
has been the same since probably forever.
Indices and aliases starting with "-" or containing "*" cannot be
created, or reindexed into, since at least version 5.16 (tested).
But even if such names were possible, they would not work
correctly today for a variety of reasons.
Today, in Core's expression resolver, names that start
with "-" or contain "*" (if they were possible) are first checked
as explicit index names, and only afterwards is their
exclusion or wildcard trait considered. This can have surprising
results in practice (again, if such names were possible);
for example, a resource name starting with "-" can never be
excluded.
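The resolution rules this PR settles on can be sketched as a small classifier. This is a simplified illustration with hypothetical names, not the actual resolver: any name containing "*" is a wildcard, a leading "-" marks an exclusion only when it follows a wildcard, and with expand_wildcards=none every name is verbatim:

```python
def classify(expressions, expand_wildcards=True):
    """Return a list of (expression, kind) pairs, kind in
    {"wildcard", "exclusion", "verbatim"}."""
    result = []
    seen_wildcard = False
    for expr in expressions:
        if not expand_wildcards:
            # expand_wildcards=none: everything is a verbatim index name,
            # so date-math like -<index-{now/d}> is also left unevaluated
            # (date-math must start with "<", this starts with "-").
            result.append((expr, "verbatim"))
        elif "*" in expr:
            seen_wildcard = True
            result.append((expr, "wildcard"))
        elif expr.startswith("-") and seen_wildcard:
            result.append((expr, "exclusion"))
        else:
            result.append((expr, "verbatim"))
    return result
```

For example, `*,-<index-{now/d}>` yields a wildcard plus an exclusion, while a leading `-foo` with no preceding wildcard stays a verbatim name.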
This commit updates the c2id docker image to the latest released version.
It also introduces a multi-stage build with the openjdk image that
supports ARM architectures, so our OIDC tests now support multiple architectures.
related: #89526
part of: #71378
Simple implementation of chunked encoding for the snapshot status API.
Tested with 100 snapshots of 25k shards (all in-progress) where it
can produce the 1G+ response in less than 10s.
closes #78887
Currently, when setting a decay value on a score function, you can only set a value strictly between 0 and 1 (exclusive), independently of the score function you're using.
This makes sense for the gauss and exp score functions, since they use ln(decay) in their formula and ln(0) does not exist.
But when using the linear function, a decay of 0 should be allowed, since the formula would not result in a division by 0.
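The argument above can be made concrete with the three decay curves. This is a worked sketch of the formulas as I understand the documented function_score behavior (distances are already offset-adjusted; names are illustrative): gauss and exp both take ln(decay), which fails at decay == 0, while linear only divides by a decay-adjusted scale and is well defined there:

```python
import math

def gauss(dist, scale, decay):
    # sigma^2 = -scale^2 / (2 * ln(decay)); ln(0) raises an error
    sigma2 = -(scale ** 2) / (2.0 * math.log(decay))
    return math.exp(-(dist ** 2) / (2.0 * sigma2))

def exp_decay(dist, scale, decay):
    # lambda = ln(decay) / scale; ln(0) raises an error
    lam = math.log(decay) / scale
    return math.exp(lam * dist)

def linear(dist, scale, decay):
    # s = scale / (1 - decay); decay == 0 simply gives s == scale
    s = scale / (1.0 - decay)
    return max(0.0, (s - dist) / s)
```

All three score 0.5 at `dist == scale` with `decay = 0.5`, but only `linear` accepts `decay = 0`, where the score drops linearly to 0 exactly at `dist == scale`.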
Today we sort of assume that cleanups succeed in the
`JoinValidationService`. A failure in these places might explain the
leaks seen in #90576 and #89712. It's not obvious that anything can fail
here but let's make sure.
We've had a few reports of failed upgrades caused by deleting the last
ancient index just before stopping the first node, which may not have
allowed enough time for the deletion to complete. Today the resulting
error message doesn't say how to address this, which leaves users
struggling mid-upgrade. This commit makes the error message more
actionable.
When we launch Elasticsearch with the APM monitoring
agent, we create a temporary configuration file to
securely pass the API key or secret. This temporary
file is cleaned up on Elasticsearch Node creation.
After we renamed the APM module, the delete logic
didn't get updated, which means we never delete the file
anymore.
This commit:
- fixes the APM module pattern match when we delete
- adds additional delete safety net on failed node start
- adds tests for ensuring the naming dependency isn't
broken again.
Clean up the logic: since we allow only first neighbours, we can simplify it a bit and remove some unnecessary
allocations. In addition, we ported the method H3#areNeighborCells, which can be useful, for example, for aggregations
over geo_shape.
The canonical way to construct application privileges is to use
ApplicationPrivilege::get which accounts for stored privilege look-up.
This PR restricts and reduces the direct usage of the
ApplicationPrivilege constructor, to enforce this. To ease testing, the
PR introduces a new utility function that matches the constructor's
signature.
This PR is the 2nd half of updating DocumentPermissions and FieldPermissions
to support multiple levels of limiting, similar to LimitedRole (since #81403).
Instead of hard-coding fieldsDefinition and limitedByFieldsDefinition,
this PR replaces them with a list of fieldsDefinitions, which can accommodate
more than two of them.
Relates: #91151
Since #81403, the Role class has been able to support multiple levels of
limiting (intersections). However, it was an oversight that the
underlying DocumentPermissions and FieldPermissions still do not support
this: they are still hardcoded to support up to 2 levels of intersection.
This PR updates DocumentPermissions so it can support multiple levels of
intersection. The same change for FieldPermissions will be done in a
separate PR.
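The multi-level intersection idea can be sketched conceptually. This is an illustrative stand-in (sets of allowed document ids in place of real DLS queries), not the actual DocumentPermissions class: access is granted only when every level in the chain grants it, so a list of levels replaces the previous hardcoded base-plus-one-limited-by pair:

```python
class DocumentPermissionsSketch:
    """Conceptual sketch: each level is a set of allowed doc ids."""

    def __init__(self, levels):
        self.levels = levels

    def limit(self, other):
        # Chaining appends levels instead of overwriting a single
        # "limited-by" slot, so any number of intersections is supported.
        return DocumentPermissionsSketch(self.levels + other.levels)

    def allows(self, doc_id):
        # Access is granted only if EVERY level grants it.
        return all(doc_id in level for level in self.levels)
```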
An API key's permission is bounded by its owner user's permission. When
checking for DLS access, both the key's permission and the owner user's
permission must be consulted. The access is granted only when it is
granted by both. This PR ensures this logic is correctly enforced by the
termsEnum action.
When run-as fails because the target user does not exist, the
authentication is created with a null lookup realm. It is then rejected
at authorization time. But for authentication, it is treated as success.
This can lead to NPE when auditing the authenticationSuccess event.
This PR fixes the NPE by checking whether lookup realm is null before
using it.
Relates: https://github.com/elastic/elasticsearch/pull/91126#discussion_r1005472501
When delegated PKI authentication is used, the delegatee's realm name is
added as a metadata field. This realm name should be the effective
subject's realm instead of that of the authenticating subject. This PR
ensures this is the case.
The most common usage of field-caps is retrieving the field-caps of
group indices having the same index mappings. We can speed up the
merging process by performing bulk merges for index responses with the
same mapping hash.
This change reduces the response time by 10 times in the many_shards
benchmark.
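The bulk-merge idea can be sketched as follows. This is illustrative stand-alone code with hypothetical shapes, not the actual field-caps merger: per-index responses are grouped by mapping hash, and each group is merged once instead of merging every index response individually:

```python
from collections import defaultdict

def merge_field_caps(index_responses):
    """index_responses: list of (index_name, mapping_hash, {field: type}).
    Returns {field: {type: [indices]}}."""
    groups = defaultdict(list)
    for name, mapping_hash, fields in index_responses:
        groups[mapping_hash].append((name, fields))

    merged = defaultdict(lambda: defaultdict(list))
    for members in groups.values():
        names = sorted(n for n, _ in members)
        # Same mapping hash => identical mappings, so merge the group's
        # field caps once and attribute them to all member indices.
        fields = members[0][1]
        for field, field_type in fields.items():
            merged[field][field_type].extend(names)
    return {f: dict(t) for f, t in merged.items()}
```

With hundreds of indices sharing one mapping (the auditbeat case above), the merge work collapses from O(indices) to O(distinct mappings).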
GET /auditbeat*/_field_caps?fields=* (single index mapping)

| Metric | Task | Before | After | Diff | Unit | Change |
|---|---|---|---|---|---|---|
| 50th percentile latency | field-caps | 4420.91 | 374.729 | -4046.19 | ms | -91.52% |
| 90th percentile latency | field-caps | 5126.87 | 402.883 | -4723.98 | ms | -92.14% |
| 99th percentile latency | field-caps | 5529.41 | 576.324 | -4953.08 | ms | -89.58% |
| 100th percentile latency | field-caps | 6096.73 | 643.252 | -5453.48 | ms | -89.45% |

GET /*/_field_caps?fields=* (multiple index mappings)

| Metric | Task | Before | After | Diff | Unit | Change |
|---|---|---|---|---|---|---|
| 50th percentile latency | field-caps-all | 4475.04 | 395.844 | -4079.2 | ms | -91.15% |
| 90th percentile latency | field-caps-all | 5334.01 | 425.248 | -4908.76 | ms | -92.03% |
| 99th percentile latency | field-caps-all | 5628.16 | 606.959 | -5021.2 | ms | -89.22% |
| 100th percentile latency | field-caps-all | 6292.63 | 675.807 | -5616.82 | ms | -89.26% |
Since ignore_case is set to true in our custom stop words filter, the matching will be case-insensitive.
(cherry picked from commit a03fba9d77)
Co-authored-by: Siniša Subašić <68671543+sinisuba@users.noreply.github.com>
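The ignore_case behaviour noted above can be sketched as a minimal stand-alone token filter. This is hypothetical illustrative code, not the Lucene implementation: with ignore_case enabled, the comparison lowercases the token first, so "The" and "the" are both removed:

```python
def stop_filter(tokens, stop_words, ignore_case=True):
    """Drop stop words from a token stream; compare case-insensitively
    when ignore_case is True."""
    stops = {w.lower() for w in stop_words} if ignore_case else set(stop_words)

    def keep(token):
        t = token.lower() if ignore_case else token
        return t not in stops

    return [t for t in tokens if keep(t)]
```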