This introduces a new command-line tool that generates the security configuration
for a new node in a new cluster (as opposed to joining an existing cluster).
The security configuration consists of the TLS key and certificates, which
are stored in a directory inside the config path, as well as settings appended
to elasticsearch.yml that reference the aforementioned certs.
This commit upgrades the existing SSPL licensed "ssl-config" library
to include additional features that are supported by the X-Pack SSL
library.
This commit does not make any changes to X-Pack to use these new
features - it introduces them in preparation for their future use by
X-Pack.
The reindex module is updated to reflect API changes in ssl-config.
ParseField is part of the x-content lib, yet it doesn't exist under the
same root package as the rest of the lib. This commit moves the class to
the appropriate package.
relates #73784
When libs/core was created, several classes were moved from server's
o.e.common package, but they were not moved to a new package. Split
packages need to go away long term, so that Elasticsearch can even think
about modularization. This commit moves all the classes under o.e.common
in core to o.e.core.
relates #73784
The recent upgrade of the Azure SDK has caused a few test failures that
have been difficult to debug and do not yet have a fix. In particular, a
change in the netty reactor's resolver behaviour
(https://github.com/reactor/reactor-netty/issues/1655) appears to be
involved. We need to wait for a fix for that issue, so this reverts commit
6c4c4a0ecb.
relates #73493
The org.elasticsearch.bootstrap package exists in server with classes
for starting up Elasticsearch. The elasticsearch-core jar has a handful
of classes that were split out from there, namely java version parsing
and jarhell. This commit moves those classes to a new
org.elasticsearch.jdk package so as to not split the server owned
bootstrap package.
relates #73784
The plugin classloader exists in its own jar file for legacy reasons,
and while it should go away in the future, it currently duplicates the
package name of the rest of the plugin classes. This commit moves the
classloader into its own unique package.
relates #73784
Today, writing a Writeable value to XContent in Base64 format performs
these steps: (1) create a BytesStreamOutput, (2) write the Writeable to that
output, (3) encode a copy of the bytes from that output stream, (4) create a
string from the encoded bytes, (5) write the encoded string to XContent.
These steps allocate and use roughly five times as much memory as writing the
encoded chars directly to the output of XContent.
This API would help reduce memory usage when storing a large response
of an async search.
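A minimal sketch of the streaming idea in plain Java (illustrative only; the real API in this commit is Elasticsearch-specific, and `java.util.Base64`'s wrapping encoder is used here just to demonstrate encoding directly into the destination without intermediate copies):
```
import java.io.IOException;
import java.io.OutputStream;
import java.util.Base64;

// Sketch: stream the Base64-encoded form of a value directly into the
// destination stream instead of building an intermediate output, copying
// its bytes, encoding them, and creating a String first.
final class Base64Streaming {
    static void writeBase64(byte[] value, OutputStream xContentOut) throws IOException {
        OutputStream encoded = Base64.getEncoder().wrap(xContentOut);
        encoded.write(value); // encoded on the fly, no intermediate copies
        // close() flushes the final Base64 quantum; note it also closes the
        // underlying stream, so a real implementation would guard against that.
        encoded.close();
    }
}
```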
Relates #67594
This commit upgrades the Azure SDK to 12.11.0 and Jackson to 2.12.2. The
Jackson upgrade must happen at the same time due to Azure depending on
this new version of Jackson.
closes #66555 closes #67214
Co-authored-by: Francisco Fernández Castaño <francisco.fernandez.castano@gmail.com>
The majority of field mappers read a single value from their positioned
XContentParser, and do not need to call nextToken. There is a general
assumption that the same holds for any multifields defined on them, and
so the XContentParser is passed down to their multifields builder as-is.
This assumption does not hold for mappers that accept JSON objects,
and so we have a second mechanism for passing values around called
'external values', where a mapper can set a specific value on its context
and child mappers can then check for these external values before reading
from xcontent. The disadvantage of this is that every field mapper now
needs to check its context for external values. Because the values are
defined by their java class, we can also know that in the vast majority of
cases this functionality is unused. We have only two mappers that actually
make use of this, CompletionFieldMapper and GeoPointFieldMapper.
This commit removes external values entirely, and replaces them with the ability
to pass a modified XContentParser to multifields. FieldMappers can just check
the parser attached to their context for data and don't need to worry about
multiple sources.
Plugins implementing field mappers will need to take the removal of external
values into account. Implementations that are passing structured objects
as external values should instead use ParseContext.switchParser and
wrap the objects using MapXContentParser.wrapObject().
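As a rough illustration of this migration (the exact signatures of `switchParser` and `wrapObject` are assumptions inferred from this description, not verified API):
```
// Hedged sketch of the migration path described above.
//
// Before - plugin mapper passing a structured object as an external value:
//   context.externalValue(structuredObject);
//   multiFields.parse(this, context);
//
// After - wrap the object in a parser and switch the context to it:
//   XContentParser wrapped = MapXContentParser.wrapObject(structuredObject);
//   multiFields.parse(this, context.switchParser(wrapped));
```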
GeoPointFieldMapper passes on a fake parser that just wraps its input data
formatted as a geohash; CompletionFieldMapper has a slightly more complicated
parser that in general wraps its metadata but, if textOrNull() is called without
the parser being advanced, just returns its text input.
Relates to #56063
The various geo field mappers are organised in a hierarchy that shares
parsing and indexing code. This ends up over-complicating things,
particularly when we have some mappers that accept multiple values
and others that only accept singletons. It also leads to confusing
ignore_malformed behaviour: geo fields will ignore
all values if a single one is badly formed, while all other field mappers
will only ignore the problem value and index the rest. Finally, this
structure makes adding index-time scripts to geo_point needlessly
complex.
This commit refactors the indexing logic of the hierarchy to move the
individual value indexing logic into the concrete implementations,
and aligns the ignore_malformed behaviour with that of other mappers.
It contains two breaking changes:
* The geo field mappers no longer check for external field values on the
parse context. This added considerable complication to the refactored
parse methods, and is unused anywhere in our codebase, but may
impact plugin-based field mappers which expect to use geo fields
as multifields
* The geo_point field mapper now passes geohashes to its multifields
one-by-one, instead of formatting them into a comma-delimited
string and passing them all at once. Completion multifields using
this as an input should still behave as normal because by default
they would split this combined geohash string on the commas in any
case, but keyword subfields may look different.
Fixes #69601
We should not allow exceptions in the internal close, I think.
If the exceptions thrown here are fatal then they should just be errors
instead as well. If they are not and need handling then the handling should happen
in the `closeInternal` call. Otherwise, all callers of `decRef` must be aware of
necessary exception handling which would make reasoning about the behavior
of exceptions in `#closeInternal` extremely complicated.
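A self-contained sketch of the usual ref-counting shape this argument assumes (not the actual Elasticsearch class):
```
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: decRef() invokes closeInternal() when the count hits zero, so any
// exception escaping closeInternal() would surface at an arbitrary decRef()
// call site. Hence failures there should be errors, or be handled inside
// closeInternal() itself.
abstract class RefCountedSketch {
    private final AtomicInteger refCount = new AtomicInteger(1);

    final void incRef() {
        refCount.incrementAndGet();
    }

    final boolean decRef() {
        int remaining = refCount.decrementAndGet();
        if (remaining == 0) {
            closeInternal(); // must not throw: every decRef() caller would have to handle it
            return true;
        }
        return false;
    }

    protected abstract void closeInternal();
}
```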
This change adds the ability to call value on an XContentBuilder and consume a boolean[]. This was
missing from the set of other writers for the unknown value call.
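The shape of the added overload, roughly (illustrative fragment, assuming the existing scalar `value(boolean)`, `startArray()`, `endArray()` and `nullValue()` methods on XContentBuilder):
```
// Illustrative shape of the added overload, not copied from the source:
public XContentBuilder value(boolean[] values) throws IOException {
    if (values == null) {
        return nullValue();
    }
    startArray();
    for (boolean b : values) {
        value(b); // delegate to the existing scalar writer
    }
    return endArray();
}
```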
Passing in restApiVersion during creation of an XContentBuilder
makes it clear that this field is final.
This prevents accidental changes to the version during xcontent
creation.
The withCompatibleVersion method can also be removed, since the field
only needs to be set in the constructor.
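A minimal sketch of the resulting shape (placeholder types, not the real XContentBuilder):
```
enum RestApiVersion { V_7, V_8 } // placeholder for the real class

// Sketch: with the version passed at construction, the field can be final,
// so it cannot change mid-build and no withCompatibleVersion setter is needed.
final class XContentBuilderSketch {
    private final RestApiVersion restApiVersion;

    XContentBuilderSketch(RestApiVersion restApiVersion) {
        this.restApiVersion = restApiVersion;
    }

    RestApiVersion getRestApiVersion() {
        return restApiVersion;
    }
}
```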
relates #51816
It is possible that a developer accidentally declares two parsers for the
same field name.
This commit introduces validation to prevent that from happening.
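A minimal sketch of such a check, assuming a map from field name to parser (illustrative names, not the actual ObjectParser internals):
```
import java.util.HashMap;
import java.util.Map;

// Sketch: reject a second parser declared for the same field name by using
// putIfAbsent and failing when an entry already exists.
final class FieldParserTable<T> {
    private final Map<String, T> parsers = new HashMap<>();

    void declare(String fieldName, T parser) {
        if (parsers.putIfAbsent(fieldName, parser) != null) {
            throw new IllegalArgumentException(
                "Parser already registered for name=[" + fieldName + "]");
        }
    }
}
```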
The Checkstyle rule that bans unary negation in favour of an explicit
`== false` has a `maximumDepth` of 2 configured, which meant that it
didn't catch all violations. The `maximumDepth` isn't required (actually
it has a really high default), so this change removes the limit and
fixes the resulting violations.
#68808 introduced the possibility to declare fields which are only available to parsing when a compatible API is used.
This commit replaces the deprecation log with compatible logging when a 'compatible only' field is used. It also includes a refactoring of LoggingDeprecationHandler method names.
relates #51816
When renaming/removing a field, a new field might be declared which
should be parseable starting with the current version.
This commit changes the way ParseField is declared for a compatible
version. Instead of a concrete version, a boolean function is used
to indicate for which versions a field is parseable. The onOrAfter and
equalTo functions are declared on RestApiVersion to allow for
this.
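A self-contained sketch of the declaration style described here (the placeholder enum and the `forVersion` usage below are hypothetical; the real RestApiVersion/ParseField APIs differ in detail):
```
import java.util.function.Predicate;

// Placeholder enum standing in for the real RestApiVersion class, with
// boolean functions analogous to RestApiVersion.onOrAfter / equalTo.
enum ApiVersion {
    V_7, V_8;

    static Predicate<ApiVersion> onOrAfter(ApiVersion v) {
        return current -> current.compareTo(v) >= 0;
    }

    static Predicate<ApiVersion> equalTo(ApiVersion v) {
        return current -> current == v;
    }
}

// Hypothetical usage: when renaming a field, the old name stays parseable
// only on V_7, while the new name is parseable from V_8 onwards:
//   oldField.forVersion(ApiVersion.equalTo(ApiVersion.V_7));
//   newField.forVersion(ApiVersion.onOrAfter(ApiVersion.V_8));
```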
We should not be quietly ignoring and suppressing exceptions when releasing
network resources in these spots.
Co-authored-by: David Turner <david.turner@elastic.co>
Added this assertion to have an easier time debugging
work on #67502, and found that we were accessing `refcount == 0`
bytes in the `SSLOutboundBuffer`, so I fixed that buffer to not
keep references to released pages.
This commit adds leak tracking infrastructure that enables assertions
about the state of objects at GC time (simplified version of what Netty
uses to track `ByteBuf` instances).
This commit uses the infrastructure to improve the quality of leak
checks for page recycling in the mock nio transport (the logic in
`org.elasticsearch.common.util.MockPageCacheRecycler#ensureAllPagesAreReleased`
does not run for all tests and tracks too little information to allow for debugging
what caused a specific leak in most cases due to the lack of an equivalent of the added
`#touch` logic).
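A simplified, self-contained sketch of this kind of GC-time leak tracking (illustrative only; the real infrastructure differs), showing the weak-reference mechanism and `#touch`-style breadcrumbs:
```
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: a weak reference per tracked resource lands on a queue when the
// resource is garbage-collected; if it was never marked released, the
// recorded "touch" breadcrumbs describe its last uses. Run with -ea so the
// assertion fires.
final class LeakTracker {
    private static final ReferenceQueue<Object> QUEUE = new ReferenceQueue<>();
    // Keeps Leak objects strongly reachable until released or reported.
    private static final Set<Leak> LIVE = Collections.newSetFromMap(new ConcurrentHashMap<>());

    static final class Leak extends WeakReference<Object> {
        private final Deque<String> touches = new ArrayDeque<>();
        private volatile boolean released;

        private Leak(Object referent) {
            super(referent, QUEUE);
        }

        synchronized void touch(String hint) { // breadcrumb, like the added #touch logic
            touches.addLast(hint);
        }

        void close() {
            released = true;
            LIVE.remove(this);
        }
    }

    static Leak track(Object resource) {
        Leak leak = new Leak(resource);
        LIVE.add(leak);
        return leak;
    }

    // Poll references whose referents were collected and assert none leaked.
    static void assertNoLeaks() {
        Leak leak;
        while ((leak = (Leak) QUEUE.poll()) != null) {
            assert leak.released : "leaked resource, last touches: " + leak.touches;
        }
    }
}
```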
Co-authored-by: David Turner <david.turner@elastic.co>
In order to support compatible fields when parsing XContent, additional information has to be set during ParseField declaration.
This commit adds a set of RestApiCompatibleVersion on a ParseField in order to specify on which versions a field is supported. By default a ParseField is allowed to be parsed on both the current and previous major versions.
ObjectParser - which is used for constructing objects using 'setters' - has its fieldParsersMap changed to be a Map of Maps, with RestApiCompatibility as the key. This allows choosing the set of field parsers specified on a request.
Under the RestApiCompatibility.minimumSupported key, there is a map that contains field parsers for both the previous and current versions.
Under the RestApiCompatibility.current key, there are only current-version fields (compatible fields are not present).
ConstructingObjectParser - which is used for constructing objects using 'constructors' - is modified to contain a map of Version to constructorArgInfo: declarations of fields to be set on a constructor depending on the version.
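A sketch of the two-level lookup described above (placeholder types; the real ObjectParser internals differ):
```
import java.util.Map;

// Sketch: the outer key selects the set of field parsers matching the
// request's compatibility mode.
enum RestApiCompatibilitySketch { MINIMUM_SUPPORTED, CURRENT }

final class FieldParsers<P> {
    // MINIMUM_SUPPORTED -> parsers for previous AND current versions;
    // CURRENT -> current-version parsers only (no compatible-only fields).
    private final Map<RestApiCompatibilitySketch, Map<String, P>> fieldParsersMap;

    FieldParsers(Map<RestApiCompatibilitySketch, Map<String, P>> fieldParsersMap) {
        this.fieldParsersMap = fieldParsersMap;
    }

    P lookup(RestApiCompatibilitySketch compat, String fieldName) {
        return fieldParsersMap.getOrDefault(compat, Map.of()).get(fieldName);
    }
}
```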
relates #51816
Compatible API version is at the moment represented by both a Version and
a byte representing the major version. This can lead to confusion about which
representation to use, as well as to incorrect assumptions that minor
versions are supported (with the use of Version.V_7_0_0).
The current usage of XContentParser.useCompatible also does not allow
defining two compatible implementations. This is not about
supporting N-2 compatibility, but about allowing development to continue when a
major release is performed.
This commit introduces the CompatibleVersion object, responsible for
wrapping the major version of the compatible API.
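A minimal sketch of the idea (the actual class shape is an assumption):
```
// Sketch: a dedicated wrapper around the compatible major version, so
// callers cannot confuse it with a full Version or assume minor versions
// are supported.
final class CompatibleVersionSketch {
    private final byte major;

    private CompatibleVersionSketch(byte major) {
        this.major = major;
    }

    static CompatibleVersionSketch minimumRestCompatibilityVersion(byte currentMajor) {
        return new CompatibleVersionSketch((byte) (currentMajor - 1));
    }

    byte major() {
        return major;
    }
}
```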
relates #68100
Part 8.
We have an in-house rule to compare explicitly against `false` instead
of using the logical not operator (`!`). However, this hasn't
historically been enforced, meaning that there are many violations in
the source at present.
We now have a Checkstyle rule that can detect these cases, but before we
can turn it on, we need to fix the existing violations. This is being
done over a series of PRs, since there are a lot to fix.
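For reference, a trivial illustration of the style in question:
```
// Compare against false explicitly instead of using the '!' operator.
class StyleExample {
    static boolean shouldRetry(boolean closed) {
        // return !closed;        // violates the in-house rule
        return closed == false;   // required style
    }
}
```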
Part 7.
We have an in-house rule to compare explicitly against `false` instead
of using the logical not operator (`!`). However, this hasn't
historically been enforced, meaning that there are many violations in
the source at present.
We now have a Checkstyle rule that can detect these cases, but before we
can turn it on, we need to fix the existing violations. This is being
done over a series of PRs, since there are a lot to fix.
As per the new licensing change for Elasticsearch and Kibana this commit
moves existing Apache 2.0 licensed source code to the new dual license
SSPL+Elastic license 2.0. In addition, existing x-pack code now uses
the new version 2.0 of the Elastic license. Full changes include:
- Updating LICENSE and NOTICE files throughout the code base, as well
as those packaged in our published artifacts
- Update IDE integration to now use the new license header on newly
created source files
- Remove references to the "OSS" distribution from our documentation
- Update build time verification checks to no longer allow Apache 2.0
license header in Elasticsearch source code
- Replace all existing Apache 2.0 license headers for non-xpack code
with updated header (vendored code with Apache 2.0 headers obviously
remains the same).
- Replace all Elastic license 1.0 headers with new 2.0 header in xpack.
When parsing XContent objects, named objects can be registered and need to
be parsed as per N-1 rules.
Named objects are declared on a parser (PARSER.declareNamedObject) and
the corresponding parsing logic is registered in NamedObjectRegistry.
There is a need to use an old N-1 "configuration" of the registry in
order to support the compatible parsing logic.
This commit extends the NamedObjectRegistry with an additional set of
compatible entries, which provide the compatible parsing logic. Those
entries are then in turn used in parseNamedObject when the XContentParser is
using compatibility.
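A self-contained sketch of the fallback pattern described here (all names are illustrative, not the actual Elasticsearch classes):
```
import java.util.Map;
import java.util.function.Function;

// Sketch: a registry holding an extra set of "compatible" entries that are
// consulted only when the request was made against the compatible (N-1) API.
final class SketchNamedObjectRegistry {
    private final Map<String, Function<String, Object>> current;
    private final Map<String, Function<String, Object>> compatible;

    SketchNamedObjectRegistry(Map<String, Function<String, Object>> current,
                              Map<String, Function<String, Object>> compatible) {
        this.current = current;
        this.compatible = compatible;
    }

    Object parseNamedObject(String name, String input, boolean compatibleRequest) {
        // Compatible entries take precedence only for compatible requests.
        Function<String, Object> parser =
            compatibleRequest && compatible.containsKey(name)
                ? compatible.get(name)
                : current.get(name);
        if (parser == null) {
            throw new IllegalArgumentException("unknown named object [" + name + "]");
        }
        return parser.apply(input);
    }
}
```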
relates #51816
This adds a `grok` and a `dissect` method to runtime fields, which
return a `Matcher`-style object you can use to get the matched
patterns. A fairly simple script to extract the "verb" from an apache
log line with `grok` would look like this:
```
String verb = grok('%{COMMONAPACHELOG}').extract(doc["message"].value)?.verb;
if (verb != null) {
  emit(verb);
}
```
And `dissect` would look like:
```
String verb = dissect('%{clientip} %{ident} %{auth} [%{@timestamp}] "%{verb} %{request} HTTP/%{httpversion}" %{status} %{size}').extract(doc["message"].value)?.verb;
if (verb != null) {
  emit(verb);
}
```
We'll work later to get it down to a clean looking one liner, but for
now, this'll do.
The `grok` and `dissect` methods are special in that they only run at
script compile time. You can't pass non-constants to them. They'll
produce compile errors if you send in a bad pattern. This is nice
because they can be expensive to "compile" and there are many other
optimizations we can make when the patterns are available up front.
Closes #67825
Making XContentParser aware of the compatible API version. This is preparatory work for supporting compatible changes to named xcontent object parsing.
XContentParser#useCompatibleApi will not be used for surgical compatible implementations; it will only be used in the infrastructure to select a compatible namedxcontent registry. Therefore, information about the exact major version is not needed.
relates #51816
We have an in-house rule to compare explicitly against `false` instead
of using the logical not operator (`!`). However, this hasn't
historically been enforced, meaning that there are many violations in
the source at present.
We now have a Checkstyle rule that can detect these cases, but before we
can turn it on, we need to fix the existing violations. This is being
done over a series of PRs, since there are a lot to fix.
By accident, ParsedMediaType was not truly immutable and it was possible
to modify other ParsedMediaTypes when building.
ParsedMediaType#parseMediaType(XContentType,Map) was modifying
ParsedMediaTypes within XContentType. This should never happen.
This commit addresses this by making the Map of parameters immutable.
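The fix amounts to a defensive immutable copy, roughly (a sketch, not the actual constructor):
```
import java.util.Map;

// Sketch: copy the parameters into an unmodifiable map at construction so
// no caller can mutate a shared ParsedMediaType afterwards.
final class ParsedMediaTypeSketch {
    private final String type;
    private final String subType;
    private final Map<String, String> parameters;

    ParsedMediaTypeSketch(String type, String subType, Map<String, String> parameters) {
        this.type = type;
        this.subType = subType;
        this.parameters = Map.copyOf(parameters); // immutable defensive copy
    }

    Map<String, String> parameters() {
        return parameters;
    }
}
```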
closes #67545