The only reason this method throws an exception is that
ByteArrayOutputStream#close() declares it even though the method is a
no-op. It can therefore be safely ignored.
Thanks @romseygeek for bringing this to our attention.
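For illustration, a minimal example of why the checked exception is harmless here (the JDK documents `ByteArrayOutputStream#close()` as having no effect):
```
import java.io.ByteArrayOutputStream;
import java.io.IOException;

class CloseNoop {
    public static void main(String[] args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(42);
        try {
            out.close();
        } catch (IOException impossible) {
            // Cannot happen: close() is documented as a no-op for ByteArrayOutputStream.
            throw new AssertionError(impossible);
        }
    }
}
```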
Another round of automated fixes to this, marking things that can be
made static as static. Saves some JIT cycles but also turns some lambdas
from capturing to non-capturing and makes the "utilityness" of some
classes visible.
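For context, a minimal illustration of the capturing vs non-capturing distinction (not code from this change):
```
import java.util.function.Supplier;

class LambdaAllocation {
    private static final String CONSTANT = "constant";
    private final String field = "field";

    // Non-capturing: references only static state, so the JVM can reuse a
    // single cached lambda instance across calls.
    Supplier<String> nonCapturing() {
        return () -> CONSTANT;
    }

    // Capturing: closes over `this` to read an instance field, so in practice
    // each call allocates a fresh lambda object.
    Supplier<String> capturing() {
        return () -> field;
    }
}
```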
Adds @SuppressWarnings("this-escape") to all necessary places so that
Elasticsearch can compile with -Werror on JDK 21.
No investigation has been done to determine whether any of the cases
are a potential source of errors - we have simply suppressed all
existing occurrences.
Resolves: #99845
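For reference, a minimal example of the pattern that JDK 21's `-Xlint:this-escape` flags, and the suppression applied throughout:
```
class ThisEscapeExample {
    @SuppressWarnings("this-escape")
    ThisEscapeExample() {
        init(); // javac warns here: `this` escapes before subclass constructors run
    }

    protected void init() {
        // An override in a subclass would observe a partially constructed object.
    }
}
```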
* Use long in Centroid count
Centroids currently use integers to track how many samples their mean
tracks. This can overflow in case the digest tracks billions of samples
or more.
TDigestState already serializes the count as VLong, so it can be read as
VLong without compatibility issues.
Fixes #80153
* Update docs/changelog/99491.yaml
* More test fixes
* Bump TransportVersion
* Revert TransportVersion change
Jackson has a direct method for writing string arrays
that saves us some of the indirection we have when looping
over a string array. This normally doesn't gain much, but for extreme
cases like long index name lists in field caps it saves a couple percent
in CPU time.
This commit fixes a jarhell test to create an unnamed temp dir, instead
of the existing creation which uses the test method name. This causes
problems because, when running with many iterations, the test method
name is artificially adjusted to include seed information, using
special characters that are potentially invalid path characters.
closes #98949
When ingesting documents that contain nested objects while the
mapping property `subobjects` is set to `false`, instead of throwing
a mapping exception and dropping the document(s), we now map only
leaf field(s), using their full dot-separated path as the field name
(e.g. the nested object `{"foo": {"bar": 1}}` is mapped as a single leaf field `foo.bar`).
Lots of spots where we did weird things around streams: redundant stream creation, redundant collecting
before adding all the collected elements to another collection, redundant streams for joining strings,
use of the less efficient `Collectors.toList`, and in a few cases incorrectly relying on the result being mutable.
This commit updates the plugin cli and scanner components to use ASM 9.5.
The update is required to successfully test with JDK 21. Tests in this component programmatically run the java source compiler, which generates class files with major version 65, and then try to parse those generated class files. Without this change the tests fail with java.lang.IllegalArgumentException: Unsupported class file major version 65.
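For context, a minimal sketch of where that exception surfaces, using the standard ASM API (a ClassReader rejects class files whose major version is newer than the ASM release understands):
```
import java.nio.file.Files;
import java.nio.file.Path;
import org.objectweb.asm.ClassReader;
import org.objectweb.asm.ClassVisitor;
import org.objectweb.asm.Opcodes;

class ReadClassFileVersion {
    public static void main(String[] args) throws Exception {
        byte[] classBytes = Files.readAllBytes(Path.of(args[0]));
        // Pre-9.5 ASM throws IllegalArgumentException: Unsupported class file major version 65
        ClassReader reader = new ClassReader(classBytes);
        reader.accept(new ClassVisitor(Opcodes.ASM9) {
            @Override
            public void visit(int version, int access, String name, String signature,
                              String superName, String[] interfaces) {
                System.out.println(name + " -> class file major version " + (version & 0xFFFF));
            }
        }, 0);
    }
}
```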
This PR introduces downsampling configuration to the data stream lifecycle. Keep in mind downsampling implementation will come in a follow up PR. Configuration looks like this:
```
{
  "lifecycle": {
    "data_retention": "90d",
    "downsampling": [
      {
        "after": "1d",
        "fixed_interval": "2h"
      },
      { "after": "15d", "fixed_interval": "1d" },
      { "after": "30d", "fixed_interval": "1w" }
    ]
  }
}
```
We will also support using `null` to unset downsampling configuration during template composition:
```
{
  "lifecycle": {
    "data_retention": "90d",
    "downsampling": null
  }
}
```
* Skip SortingDigest when merging a large digest in HybridDigest.
This is a small performance optimization that avoids creating an
intermediate SortingDigest when merging a digest tracking many samples.
The current behavior is to keep adding values to SortingDigest until we
cross the threshold for switching to MergingDigest, at which point we
copy all values from SortingDigest to MergingDigest and release the
former.
As a side cleanup, remove the methods for adding a list of digests. They're
not used anywhere and can be tricky to get right - the current
implementation for HybridDigest is buggy.
* Update docs/changelog/97099.yaml
When a SortingDigest gets serialized, it's reconstructed by writing and
reading elements in sorted order. In this case, there's no need to sort
the elements again.
Fixes #96961
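A hypothetical sketch of the invariant (illustrative names, not the actual TDigest code): a sorted flag stays true while values arrive in non-decreasing order, as they do during deserialization, so quantile calls can skip re-sorting.
```
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class SortingDigestSketch {
    private final List<Double> values = new ArrayList<>();
    private boolean sorted = true;

    void add(double value) {
        // Appending a value below the current maximum breaks the invariant.
        sorted = sorted && (values.isEmpty() || value >= values.get(values.size() - 1));
        values.add(value);
    }

    double quantile(double q) {
        if (values.isEmpty()) {
            throw new IllegalStateException("empty digest");
        }
        if (!sorted) {
            Collections.sort(values); // only pay for sorting when actually needed
            sorted = true;
        }
        return values.get((int) Math.round(q * (values.size() - 1)));
    }
}
```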
The asserts were misfiring in this test:
```
REPRODUCE WITH: ./gradlew ':server:test' --tests "org.elasticsearch.search.aggregations.metrics.InternalMedianAbsoluteDeviationTests.testReduceRandom" -Dtests.seed=AA1D81AD056870F0 -Dtests.locale=en-CA -Dtests.timezone=US/Eastern -Druntime.java=20
org.elasticsearch.search.aggregations.metrics.InternalMedianAbsoluteDeviationTests > testReduceRandom FAILED
java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([AA1D81AD056870F0:6A202DBC2335E9A6]:0)
at org.elasticsearch.tdigest.MergingDigest.merge(MergingDigest.java:316)
at org.elasticsearch.tdigest.MergingDigest.mergeNewValues(MergingDigest.java:298)
at org.elasticsearch.tdigest.MergingDigest.mergeNewValues(MergingDigest.java:288)
at org.elasticsearch.tdigest.MergingDigest.quantile(MergingDigest.java:485)
at org.elasticsearch.tdigest.HybridDigest.quantile(HybridDigest.java:141)
at org.elasticsearch.search.aggregations.metrics.TDigestState.quantile(TDigestState.java:247)
at org.elasticsearch.search.aggregations.metrics.InternalMedianAbsoluteDeviation.computeMedianAbsoluteDeviation(InternalMedianAbsoluteDeviation.java:38)
at org.elasticsearch.search.aggregations.metrics.InternalMedianAbsoluteDeviation.<init>(InternalMedianAbsoluteDeviation.java:48)
at org.elasticsearch.search.aggregations.metrics.InternalMedianAbsoluteDeviationTests.createTestInstance(InternalMedianAbsoluteDeviationTests.java:33)
at org.elasticsearch.search.aggregations.metrics.InternalMedianAbsoluteDeviationTests.createTestInstance(InternalMedianAbsoluteDeviationTests.java:23)
```
They should have been removed earlier; it's possible for the first and
the last centroid to have weight > 1.
Related to #95903
* Use a long to properly track added samples.
This helps avoid warnings around implicit conversions from long to
integer values in SortingDigest.
* Update docs/changelog/96912.yaml
* Another fix for invalid type conversion.
* Update docs/changelog/96912.yaml
* Initial import for TDigest forking.
* Fix MedianTest.
More work needed for TDigestPercentile*Tests and the TDigestTest (and
the rest of the tests) in the tdigest lib to pass.
* Fix Dist.
* Fix AVLTreeDigest.quantile to match Dist for uniform centroids.
* Update docs/changelog/96086.yaml
* Fix `MergingDigest.quantile` to match `Dist` on uniform distribution.
* Add merging to TDigestState.hashCode and .equals.
Remove wrong asserts from tests and MergingDigest.
* Fix style violations for tdigest library.
* Fix typo.
* Fix more style violations.
* Fix more style violations.
* Fix remaining style violations in tdigest library.
* Update results in docs based on the forked tdigest.
* Fix YAML tests in aggs module.
* Fix YAML tests in x-pack/plugin.
* Skip failing V7 compat tests in modules/aggregations.
* Fix TDigest library unittests.
Remove redundant serializing interfaces from the library.
* Remove YAML test versions for older releases.
These tests don't address compatibility issues in mixed cluster tests as
the latter contain a mix of older and newer nodes, so the output depends
on which node is picked as a data node since the forked TDigest library
is not backwards compatible (produces slightly different results).
* Fix test failures in docs and mixed cluster.
* Reduce buffer sizes in MergingDigest to avoid oom.
* Exclude more failing V7 compatibility tests.
* Update results for JdbcCsvSpecIT tests.
* Update results for JdbcDocCsvSpecIT tests.
* Revert unrelated change.
* More test fixes.
* Use version skips instead of blacklisting in mixed cluster tests.
* Switch TDigestState back to AVLTreeDigest.
* Update docs and tests with AVLTreeDigest output.
* Update flaky test.
* Remove dead code, esp around tracking of incoming data.
* Update docs/changelog/96086.yaml
* Delete docs/changelog/96086.yaml
* Remove explicit compression calls.
This was added to prevent concurrency tests from failing, but it leads
to reduced precision. Submit this to see if the concurrency tests are
still failing.
* Revert "Remove explicit compression calls."
This reverts commit 5352c96f65.
* Remove explicit compression calls to MedianAbsoluteDeviation input.
* Add unittests for AVL and merging digest accuracy.
* Fix spotless violations.
* Delete redundant tests and benchmarks.
* Fix spotless violation.
* Use the old implementation of AVLTreeDigest.
The latest library version is 50% slower and less accurate, as verified
by ComparisonTests.
* Update docs with latest percentile results.
* Update docs with latest percentile results.
* Remove repeated compression calls.
* Update more percentile results.
* Use approximate percentile values in integration tests.
This helps with mixed cluster tests, where some of the tests were
blocked.
* Fix expected percentile value in test.
* Revert in-place node updates in AVL tree.
Update quantile calculations between centroids and min/max values to
match v.3.2.
* Add SortingDigest and HybridDigest.
The SortingDigest tracks all samples in an ArrayList that
gets sorted for quantile calculations. This approach
provides perfectly accurate results and is the most
efficient implementation for up to millions of samples,
at the cost of bloated memory footprint.
The HybridDigest uses a SortingDigest for small sample
populations, then switches to a MergingDigest. This
approach combines the best performance and results for
small sample counts with very good performance and
acceptable accuracy for effectively unbounded sample
counts.
* Remove deps to the 3.2 library.
* Remove unused licenses for tdigest.
* Revert changes for SortingDigest and HybridDigest.
These will be submitted in a follow-up PR for enabling MergingDigest.
* Remove unused Histogram classes and unit tests.
Delete dead and commented out code, make the remaining tests run
reasonably fast. Remove unused annotations, esp. SuppressWarnings.
* Remove Comparison class, not used.
* Revert "Revert changes for SortingDigest and HybridDigest."
This reverts commit 2336b11598.
* Use HybridDigest as default tdigest implementation
Add SortingDigest as a simple structure for percentile calculations that
tracks all data points in a sorted array. This is a fast and perfectly
accurate solution that leads to bloated memory allocation.
Add HybridDigest that uses SortingDigest for small sample counts, then
switches to MergingDigest. This approach delivers extreme
performance and accuracy for small populations while scaling
indefinitely and maintaining acceptable performance and accuracy with
constant memory allocation (15kB by default).
Provide knobs to switch back to AVLTreeDigest, either per query or
through ClusterSettings.
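A hypothetical sketch of the switch-over described above (illustrative names and threshold, not the actual Elasticsearch classes):
```
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.function.Supplier;

class HybridDigestSketch {
    interface Digest {
        void add(double value);
        double quantile(double q);
    }

    private final int switchThreshold;            // assumed knob, e.g. ~100k samples
    private final Supplier<Digest> mergingFactory;
    private List<Double> sortingBuffer = new ArrayList<>();
    private Digest mergingDigest;                 // non-null once we have switched

    HybridDigestSketch(int switchThreshold, Supplier<Digest> mergingFactory) {
        this.switchThreshold = switchThreshold;
        this.mergingFactory = mergingFactory;
    }

    void add(double value) {
        if (mergingDigest != null) {
            mergingDigest.add(value);
        } else if (sortingBuffer.size() < switchThreshold) {
            sortingBuffer.add(value);
        } else {
            mergingDigest = mergingFactory.get();
            for (double buffered : sortingBuffer) {
                mergingDigest.add(buffered);      // one-time replay of buffered samples
            }
            sortingBuffer = null;                 // release the memory-hungry buffer
            mergingDigest.add(value);
        }
    }

    double quantile(double q) {
        if (mergingDigest != null) {
            return mergingDigest.quantile(q);     // approximate, constant memory
        }
        Collections.sort(sortingBuffer);          // exact for small populations
        return sortingBuffer.get((int) Math.round(q * (sortingBuffer.size() - 1)));
    }
}
```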
* Small fixes.
* Add javadoc and tests.
* Add javadoc and tests.
* Remove special logic for singletons in the boundaries.
While this helps with the case where the digest contains only
singletons (perfect accuracy), it has a major problem
(non-monotonic quantile function) when the first singleton is followed
by a non-singleton centroid. It's preferable to revert to the old
version from 3.2; inaccuracies in a singleton-only digest should be
mitigated by using a sorted array for small sample counts.
* Revert changes to expected values in tests.
This is due to restoring quantile functions to match head.
* Revert changes to expected values in tests.
This is due to restoring quantile functions to match head.
* Tentatively restore percentile rank expected results.
* Use cdf version from 3.2
Update Dist.cdf to use interpolation, use the same cdf
version in AVLTreeDigest and MergingDigest.
* Revert "Tentatively restore percentile rank expected results."
This reverts commit 7718dbba59.
* Revert remaining changes compared to main.
* Revert excluded V7 compat tests.
* Exclude V7 compat tests still failing.
* Exclude V7 compat tests still failing.
* Remove ClusterSettings tentatively.
* Initial import for TDigest forking.
* Fix MedianTest.
More work needed for TDigestPercentile*Tests and the TDigestTest (and
the rest of the tests) in the tdigest lib to pass.
* Fix Dist.
* Fix AVLTreeDigest.quantile to match Dist for uniform centroids.
* Update docs/changelog/96086.yaml
* Fix `MergingDigest.quantile` to match `Dist` on uniform distribution.
* Add merging to TDigestState.hashCode and .equals.
Remove wrong asserts from tests and MergingDigest.
* Fix style violations for tdigest library.
* Fix typo.
* Fix more style violations.
* Fix more style violations.
* Fix remaining style violations in tdigest library.
* Update results in docs based on the forked tdigest.
* Fix YAML tests in aggs module.
* Fix YAML tests in x-pack/plugin.
* Skip failing V7 compat tests in modules/aggregations.
* Fix TDigest library unittests.
Remove redundant serializing interfaces from the library.
* Remove YAML test versions for older releases.
These tests don't address compatibility issues in mixed cluster tests as
the latter contain a mix of older and newer nodes, so the output depends
on which node is picked as a data node since the forked TDigest library
is not backwards compatible (produces slightly different results).
* Fix test failures in docs and mixed cluster.
* Reduce buffer sizes in MergingDigest to avoid oom.
* Exclude more failing V7 compatibility tests.
* Update results for JdbcCsvSpecIT tests.
* Update results for JdbcDocCsvSpecIT tests.
* Revert unrelated change.
* More test fixes.
* Use version skips instead of blacklisting in mixed cluster tests.
* Switch TDigestState back to AVLTreeDigest.
* Update docs and tests with AVLTreeDigest output.
* Update flaky test.
* Remove dead code, esp around tracking of incoming data.
* Remove explicit compression calls.
This was added to prevent concurrency tests from failing, but it leads
to reduced precision. Submit this to see if the concurrency tests are
still failing.
* Update docs/changelog/96086.yaml
* Delete docs/changelog/96086.yaml
* Revert "Remove explicit compression calls."
This reverts commit 5352c96f65.
* Remove explicit compression calls to MedianAbsoluteDeviation input.
* Add unittests for AVL and merging digest accuracy.
* Fix spotless violations.
* Delete redundant tests and benchmarks.
* Fix spotless violation.
* Use the old implementation of AVLTreeDigest.
The latest library version is 50% slower and less accurate, as verified
by ComparisonTests.
* Update docs with latest percentile results.
* Update docs with latest percentile results.
* Remove repeated compression calls.
* Update more percentile results.
* Use approximate percentile values in integration tests.
This helps with mixed cluster tests, where some of the tests were
blocked.
* Fix expected percentile value in test.
* Revert in-place node updates in AVL tree.
Update quantile calculations between centroids and min/max values to
match v.3.2.
* Add SortingDigest and HybridDigest.
The SortingDigest tracks all samples in an ArrayList that
gets sorted for quantile calculations. This approach
provides perfectly accurate results and is the most
efficient implementation for up to millions of samples,
at the cost of bloated memory footprint.
The HybridDigest uses a SortingDigest for small sample
populations, then switches to a MergingDigest. This
approach combines the best performance and results for
small sample counts with very good performance and
acceptable accuracy for effectively unbounded sample
counts.
* Remove deps to the 3.2 library.
* Remove unused licenses for tdigest.
* Revert changes for SortingDigest and HybridDigest.
These will be submitted in a follow-up PR for enabling MergingDigest.
* Remove unused Histogram classes and unit tests.
Delete dead and commented out code, make the remaining tests run
reasonably fast. Remove unused annotations, esp. SuppressWarnings.
* Remove Comparison class, not used.
* Revert "Revert changes for SortingDigest and HybridDigest."
This reverts commit 2336b11598.
* Use HybridDigest as default tdigest implementation
Add SortingDigest as a simple structure for percentile calculations that
tracks all data points in a sorted array. This is a fast and perfectly
accurate solution that leads to bloated memory allocation.
Add HybridDigest that uses SortingDigest for small sample counts, then
switches to MergingDigest. This approach delivers extreme
performance and accuracy for small populations while scaling
indefinitely and maintaining acceptable performance and accuracy with
constant memory allocation (15kB by default).
Provide knobs to switch back to AVLTreeDigest, either per query or
through ClusterSettings.
* Add javadoc and tests.
* Remove ClusterSettings tentatively.
* Restore bySize function in TDigest and subclasses.
* Update Dist.cdf to match the rest.
Update tests.
* Revert outdated test changes.
* Revert outdated changes.
* Small fixes.
* Update docs/changelog/96794.yaml
* Make HybridDigest the default implementation.
* Update boxplot documentation.
* Restore AVLTreeDigest as the default in TDigestState.
TDigest.createHybridDigest now returns the right type.
The switch in TDigestState will happen in a separate PR
as it requires many test updates.
* Use execution_hint in tdigest spec.
* Fix Dist.cdf for empty digest.
* Bump up TransportVersion.
* Bump up TransportVersion for real.
* HybridDigest uses its final implementation during deserialization.
* Restore the right TransportVersion in TDigestState.read
* Use TDigestExecutionHint instead of strings.
* Add link to TDigest javadoc.
* Spotless fix.
* Small fixes.
* Bump up TransportVersion.
* Bump up the TransportVersion, again.
* WIP Started geo_line for TSDB work
Starting with YAML tests (which currently pass) and AggregatorTests
(currently failing, likely due to a mistake in the tests)
* Update docs/changelog/94954.yaml
* WIP Refactoring to prepare for TSDB geo_line
* Created TimeSeries version of GeoLineAggregator, and wired it in so that time-series aggregations use it, but current behavior is still identical to non-time-series.
* Added both yaml and unit tests for testing that geo_line works with correct results in both time-series and non-time-series cases.
* Added additional tests to verify the grouping behaviour of time-series vs. terms aggs, and the combination of the two.
* WIP Refactoring to prepare for TSDB geo_line
* Started refactoring to re-use simplifier for all buckets
* Fixed bug with leaf collector not changing per segment
* Fixed bug with leaf collector not detecting bucket changes
The bucket id can change within a segment, so we need to detect this and save the geo_line.
* Renamed class since it no longer extends BucketedSort
The original geo_line relied on the BucketedSort for all intelligence.
The time-series geo_line uses none of that, and does its own memory management.
* Fixed bug with geo_point leaking between geo_line buckets
And enhanced unit tests to cover multiple groups
* Code review updates
* Verify that the sort field is specifically the TS timestamp
Only activate the time-series optimizations if the aggregation is both:
* Within a time-series aggregation (ie. tsid and @timestamp ordered)
* The geo_line sort field is @timestamp
* Allow geo_point time-series to skip sort config
Also disables the new geo_line for time-series, even if the correct
sort and point fields are used, when the point field is not explicitly
configured to be a position metric.
* Support geo_centroid and geo_bounds on position metric
* Update yaml tests for multi-terms tests
* Changed to disallow alternative sort-fields in ts-geo_line
Since the primary criterion for switching to the new algorithm is that
geo_line is within a time-series aggregation, we now disallow any other sort field.
We test the negative case in the yaml tests, but changed the unit tests to
use TermsAggregation to mimic the time-series aggregation and get comparable
results.
* For non-time-series check missing sort field early
The old code only threw an error if there was data, because the check was done
inside the leaf collector just before actually reading the sort field.
And there were no tests for a missing sort field.
This commit adds the tests, and performs the check early so it fails even if data is missing.
* Reviewed TODOs
* Test that behaviour is identical with or without POSITION metric
* Removed fallback code in builder (was switching to old geo_line without POSITION metric)
* Removed two TODOs that are no longer valid concerns
* Initial import for TDigest forking.
* Fix MedianTest.
More work needed for TDigestPercentile*Tests and the TDigestTest (and
the rest of the tests) in the tdigest lib to pass.
* Fix Dist.
* Fix AVLTreeDigest.quantile to match Dist for uniform centroids.
* Update docs/changelog/96086.yaml
* Fix `MergingDigest.quantile` to match `Dist` on uniform distribution.
* Add merging to TDigestState.hashCode and .equals.
Remove wrong asserts from tests and MergingDigest.
* Fix style violations for tdigest library.
* Fix typo.
* Fix more style violations.
* Fix more style violations.
* Fix remaining style violations in tdigest library.
* Update results in docs based on the forked tdigest.
* Fix YAML tests in aggs module.
* Fix YAML tests in x-pack/plugin.
* Skip failing V7 compat tests in modules/aggregations.
* Fix TDigest library unittests.
Remove redundant serializing interfaces from the library.
* Remove YAML test versions for older releases.
These tests don't address compatibility issues in mixed cluster tests as
the latter contain a mix of older and newer nodes, so the output depends
on which node is picked as a data node since the forked TDigest library
is not backwards compatible (produces slightly different results).
* Fix test failures in docs and mixed cluster.
* Reduce buffer sizes in MergingDigest to avoid oom.
* Exclude more failing V7 compatibility tests.
* Update results for JdbcCsvSpecIT tests.
* Update results for JdbcDocCsvSpecIT tests.
* Revert unrelated change.
* More test fixes.
* Use version skips instead of blacklisting in mixed cluster tests.
* Switch TDigestState back to AVLTreeDigest.
* Update docs and tests with AVLTreeDigest output.
* Update flaky test.
* Remove dead code, esp around tracking of incoming data.
* Update docs/changelog/96086.yaml
* Delete docs/changelog/96086.yaml
* Remove explicit compression calls.
This was added to prevent concurrency tests from failing, but it leads
to reduced precision. Submit this to see if the concurrency tests are
still failing.
* Revert "Remove explicit compression calls."
This reverts commit 5352c96f65.
* Remove explicit compression calls to MedianAbsoluteDeviation input.
* Add unittests for AVL and merging digest accuracy.
* Fix spotless violations.
* Delete redundant tests and benchmarks.
* Fix spotless violation.
* Use the old implementation of AVLTreeDigest.
The latest library version is 50% slower and less accurate, as verified
by ComparisonTests.
* Update docs with latest percentile results.
* Update docs with latest percentile results.
* Remove repeated compression calls.
* Update more percentile results.
* Use approximate percentile values in integration tests.
This helps with mixed cluster tests, where some of the tests were
blocked.
* Fix expected percentile value in test.
* Revert in-place node updates in AVL tree.
Update quantile calculations between centroids and min/max values to
match v.3.2.
* Add SortingDigest and HybridDigest.
The SortingDigest tracks all samples in an ArrayList that
gets sorted for quantile calculations. This approach
provides perfectly accurate results and is the most
efficient implementation for up to millions of samples,
at the cost of bloated memory footprint.
The HybridDigest uses a SortingDigest for small sample
populations, then switches to a MergingDigest. This
approach combines the best performance and results for
small sample counts with very good performance and
acceptable accuracy for effectively unbounded sample
counts.
* Remove deps to the 3.2 library.
* Remove unused licenses for tdigest.
* Revert changes for SortingDigest and HybridDigest.
These will be submitted in a follow-up PR for enabling MergingDigest.
* Remove unused Histogram classes and unit tests.
Delete dead and commented out code, make the remaining tests run
reasonably fast. Remove unused annotations, esp. SuppressWarnings.
* Remove Comparison class, not used.
* Small fixes.
* Add javadoc and tests.
* Remove special logic for singletons in the boundaries.
While this helps with the case where the digest contains only
singletons (perfect accuracy), it has a major problem
(non-monotonic quantile function) when the first singleton is followed
by a non-singleton centroid. It's preferable to revert to the old
version from 3.2; inaccuracies in a singleton-only digest should be
mitigated by using a sorted array for small sample counts.
* Revert changes to expected values in tests.
This is due to restoring quantile functions to match head.
* Revert changes to expected values in tests.
This is due to restoring quantile functions to match head.
* Tentatively restore percentile rank expected results.
* Use cdf version from 3.2
Update Dist.cdf to use interpolation, use the same cdf
version in AVLTreeDigest and MergingDigest.
* Revert "Tentatively restore percentile rank expected results."
This reverts commit 7718dbba59.
* Revert remaining changes compared to main.
* Revert excluded V7 compat tests.
* Exclude V7 compat tests still failing.
* Exclude V7 compat tests still failing.
* Restore bySize function in TDigest and subclasses.
It's legitimate to wrap the delegate twice, with two different
assertOnce calls, which would yield different objects if and only if
assertions are enabled. So we'd better not ever use these things as map
keys etc.
This class was quite hot in recent benchmarks of shared-cache-based
searches, and we can make instantiating the releasable locks a little cheaper.
Also, those same benchmarks showed a lot of visible time spent on
dealing with ref counts. I removed one layer of indirection in the atomics
used by both the release-once and the abstract ref count, which
should save a little in CPU caches as well.
This commit fixes the incorrect pattern for TChar defined in RFC 7230 section 3.2.6:
`a-zA-z` was accidentally used where the pattern `a-zA-Z` should be used instead.
Today `AbstractRefCounted` holds an `AtomicInteger` which holds the
actual ref count, which is an extra heap object and means that
acquiring/releasing refs always goes through that extra pointer lookup.
We use this utility extensively, on some pretty hot paths, so with this
commit we move to using a primitive `refCount` field with atomic
operations via a `VarHandle`.
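A minimal sketch of the pattern (not the actual Elasticsearch class): a primitive ref-count field updated atomically through a `VarHandle`, with no separate AtomicInteger object on the heap.
```
import java.lang.invoke.MethodHandles;
import java.lang.invoke.VarHandle;

abstract class RefCountedSketch {
    private static final VarHandle REF_COUNT;
    static {
        try {
            REF_COUNT = MethodHandles.lookup()
                .findVarHandle(RefCountedSketch.class, "refCount", int.class);
        } catch (ReflectiveOperationException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    @SuppressWarnings("unused") // written only through the VarHandle
    private volatile int refCount = 1;

    public final void incRef() {
        REF_COUNT.getAndAdd(this, 1);
    }

    public final boolean decRef() {
        int previous = (int) REF_COUNT.getAndAdd(this, -1);
        if (previous == 1) {
            closeInternal(); // last reference released
            return true;
        }
        return false;
    }

    protected abstract void closeInternal();
}
```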
Jackson 2.15 introduced a (rough) maximum limit on string length. This
commit relaxes that limit to its maximum size, leaving document size
constraints to other existing limits in the system. We can revisit
whether string length within a document should be independently
constrained later.
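A minimal sketch of relaxing the limit with Jackson 2.15's API (illustrative, not the exact wiring in this commit):
```
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.StreamReadConstraints;

class RelaxedStringLimit {
    // Raise Jackson 2.15's per-string length cap to its maximum so overall
    // document size stays governed by the system's other limits.
    static final JsonFactory FACTORY = JsonFactory.builder()
        .streamReadConstraints(
            StreamReadConstraints.builder().maxStringLength(Integer.MAX_VALUE).build())
        .build();
}
```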
Support geometry and streaming simplification
There are many opportunities to enable geometry simplification in Elasticsearch, both as an explicit feature available to users, and as an internal optimization technique for reducing memory consumption for complex geometries. For the latter case, it can even be considered a bug fix. This PR provides support for constraining Line and LinearRing sizes to a fixed number of points, and thereby a fixed amount of memory usage.
Consider, for example, the geo_line aggregation. This is similar to the top-10 aggregation, but allows the top-10k (ten thousand) points to be aggregated. This is not only a lot of memory, but can still cause unwanted line truncation for very large geometries. Line simplification is a solution to this. It is likely that a much smaller limit than 10k would suffice, while at the same time not truncating the geometry at all, so we fix a bug (truncation) while improving memory usage (pull limit from 10k down to perhaps just 1k).
This PR provides two APIs:
Streaming:
* By using the simplifier.consume(x, y) method on a stream of points, the total memory used is limited to a linear function of k, the total number of points to retain. This algorithm is at its heart based on the Visvalingam–Whyatt algorithm, with concepts from https://bost.ocks.org/mike/simplify/ and in particular the detailed streaming discussions in the paper at https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.7132&rep=rep1&type=pdf
Full-geometry:
* Simplifying full geometries using the simplifier.simplify(geometry) method can work with most geometry types, even GeometryCollection, but:
- Some geometries do not get simplified because it makes no sense to: Point, Circle, Rectangle
- The maxPoints parameter is applied as-is to the main component (shell for polygons, largest geometry for multi-polygons and geometry collections), and all other sub-components (holes in polygons, etc.) are simplified to a scaled-down version of maxPoints, in proportion to the size of the sub-component relative to the main component.
* The simplification itself is done on each Line and LinearRing component using the same streaming algorithm above. Since we use the Visvalingam–Whyatt algorithm, this approach is applicable to both streaming and full-geometry simplification with the same essential result, but with better control over memory than conventional full-geometry simplifiers.
The basic algorithm for simplification on a stream of points requires maintaining two data structures:
* an array of all currently simplified points (implicitly ordered in stream order)
* a priority queue of all but the two end points with an estimated error on each that expresses the cost of removing that point from the line
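A hypothetical sketch of the streaming idea (illustrative names; a linear rescan stands in for the real priority queue):
```
import java.util.ArrayList;
import java.util.List;

class StreamingSimplifierSketch {
    record Point(double x, double y) {}

    private final int maxPoints;
    private final List<Point> line = new ArrayList<>();

    StreamingSimplifierSketch(int maxPoints) { this.maxPoints = maxPoints; }

    void consume(double x, double y) {
        line.add(new Point(x, y));
        if (line.size() > maxPoints) {
            // Drop the interior point whose removal distorts the line the least:
            // the smallest triangle area with its neighbours (Visvalingam-Whyatt).
            // End points are never removed.
            int cheapest = 1;
            double minArea = Double.POSITIVE_INFINITY;
            for (int i = 1; i < line.size() - 1; i++) {
                double area = triangleArea(line.get(i - 1), line.get(i), line.get(i + 1));
                if (area < minArea) {
                    minArea = area;
                    cheapest = i;
                }
            }
            line.remove(cheapest);
        }
    }

    List<Point> simplified() {
        return line;
    }

    private static double triangleArea(Point a, Point b, Point c) {
        return Math.abs((b.x() - a.x()) * (c.y() - a.y())
                      - (c.x() - a.x()) * (b.y() - a.y())) / 2.0;
    }
}
```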
In #94884 the ability to add qualified exports and opens from jars
upstream of server was added. Some Elasticsearch components need to
qualify their exports to another component. This commit tweaks the
loading of the exports services so that each loaded plugin/component
has its qualified exports handled automatically.
I saw this in some hot-threads. Splitting by a pattern that isn't a single char is expensive
because it instantiates a `Pattern`. It also seems redundant to split the spaces and tabs away, since
we trim values and keys later on in the logic.
-> let's use the split fast path and keep this work off the transport thread.
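For illustration: String.split skips compiling a java.util.regex.Pattern when its argument is a single literal non-metacharacter, so a one-character separator stays off the regex machinery.
```
class SplitFastPath {
    public static void main(String[] args) {
        String header = "key :  value";
        // Fast path: single literal character, no Pattern is compiled.
        String[] fast = header.split(":");
        // Slow path: a real regex compiles a java.util.regex.Pattern on each call.
        String[] slow = header.split("\\s*:\\s*");
        // Keys and values get trimmed later anyway, so the cheap variant suffices:
        System.out.println(fast[0].trim() + " -> " + fast[1].trim());
        System.out.println(slow[0] + " -> " + slow[1]);
    }
}
```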
This refactor introduces a new data structure called `PatternBank`, an abstraction over the old `Map<String, String>` used all over the place. This data structure has handy methods to extend the pattern bank with new patterns, and it centralizes the validation of pattern banks in one place. Thanks to this, the repeated code for creating Grok pattern banks is gone; a sketch of the shape of the abstraction follows the trailer below.
---------
Co-authored-by: Joe Gallo <joe.gallo@elastic.co>
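A hypothetical sketch of the shape of such an abstraction (method names here are illustrative, not the actual PatternBank API):
```
import java.util.LinkedHashMap;
import java.util.Map;

final class PatternBankSketch {
    private final Map<String, String> patterns;

    PatternBankSketch(Map<String, String> patterns) {
        validate(patterns);
        this.patterns = Map.copyOf(patterns);
    }

    // Extend the bank with new patterns, returning a new validated bank.
    PatternBankSketch extend(Map<String, String> extra) {
        Map<String, String> merged = new LinkedHashMap<>(patterns);
        merged.putAll(extra);
        return new PatternBankSketch(merged);
    }

    String get(String name) {
        return patterns.get(name);
    }

    // Validation centralized in one place instead of at every call site.
    private static void validate(Map<String, String> patterns) {
        patterns.forEach((name, definition) -> {
            if (name == null || name.isEmpty() || definition == null) {
                throw new IllegalArgumentException("invalid pattern: " + name);
            }
        });
    }
}
```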
The preallocate module needs access to java.io internals. However, in
order to open java.io to a specific module, rather than the unnamed
module as was previously done, that module must be in the boot
layer.
This commit moves the preallocate module to libs. It adds it to the main
lib dir, though it does not add it as a compile dependency of server.
Pushes the chunking of `GET _nodes/stats` down to avoid creating
unboundedly large chunks. With this commit we yield one chunk per shard
(if `?level=shards`) or index (if `?level=indices`) and per HTTP client
and per transport action.
Closes #93985
Add two new methods to DissectParser, so that output keys and reference keys can be inferred in advance (given a pattern), without the need to parse an actual input.
These APIs return the keys in the order they are defined in the pattern; for example, from the pattern `%{clientip} %{ts} %{msg}` the keys `clientip`, `ts` and `msg` can be read off directly.
Fixes#82794. Upgrade the spotless plugin, which addresses the issue
around formatting `instanceof` expressions. Formatting of statements
including lambdas seems to have improved too.
When writing generic objects to x-content, the value may cause an error
if XContentBuilder does not know how to handle the concrete object
type. This commit adds a new helper method, similar to
StreamOutput.checkWriteable, which validates that the type of an object (and
any inner objects, if it is a collection) is writeable to x-content.
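A hypothetical sketch of the validation idea (illustrative name, not the actual helper): recursively walk a generic value and fail fast on any nested element whose type x-content cannot render.
```
import java.util.Collection;
import java.util.Map;

final class XContentWriteableCheck {
    static void ensureWriteable(Object value) {
        if (value == null || value instanceof String
                || value instanceof Number || value instanceof Boolean) {
            return; // simple scalar types are always writeable
        }
        if (value instanceof Collection<?> collection) {
            for (Object element : collection) {
                ensureWriteable(element); // check inner objects of collections
            }
            return;
        }
        if (value instanceof Map<?, ?> map) {
            for (Map.Entry<?, ?> entry : map.entrySet()) {
                ensureWriteable(entry.getKey());
                ensureWriteable(entry.getValue());
            }
            return;
        }
        throw new IllegalArgumentException(
            "cannot write x-content for unknown type [" + value.getClass().getName() + "]");
    }
}
```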