This adds a microbenchmark running our traditional `ScriptScoreQuery`
race. This races Lucene Expressions, Painless, and a hand-rolled
implementation of `ScoreScript`. Through the magic of the async
profiler, this revealed a few bottlenecks in Painless that we can
likely fix! Happy times.
Co-authored-by: Rene Groeschke <rene@breskeby.com>
As per the new licensing change for Elasticsearch and Kibana, this commit
moves existing Apache 2.0 licensed source code to the new dual license
SSPL+Elastic license 2.0. In addition, existing x-pack code now uses
the new version 2.0 of the Elastic license. Full changes include:
- Updating LICENSE and NOTICE files throughout the code base, as well
as those packaged in our published artifacts
- Update IDE integration to now use the new license header on newly
created source files
- Remove references to the "OSS" distribution from our documentation
- Update build time verification checks to no longer allow Apache 2.0
license header in Elasticsearch source code
- Replace all existing Apache 2.0 license headers for non-xpack code
with updated header (vendored code with Apache 2.0 headers obviously
remains the same).
- Replace all Elastic license 1.0 headers with new 2.0 header in xpack.
Upgrade JMH to latest (1.26) to pick up its async profiler integration
and update the documentation to include instructions for running the
async profiler and making pretty, pretty flame graphs.
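For reference, the updated docs boil down to an invocation shaped like this (the benchmark name and profiler options here are illustrative; see the benchmarks README for the exact flags):
```
./gradlew -p benchmarks run --args 'RoundingBenchmark -prof "async:output=flamegraph"'
```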
This lowers the contention on the `REQUEST` circuit breaker when building
many aggregations on many threads by preallocating a chunk of breaker
up front. This cuts down on the number of times we enter the busy loop
in `ChildMemoryCircuitBreaker.limit`. Now we hit it one time when building
aggregations. We still hit the busy loop if we collect many buckets.
We let the `AggregationBuilder` pick the size of the "chunk" that we
preallocate but it doesn't have much to go on - not even the field types.
But it is available in a convenient spot and the estimates don't have to
be particularly accurate.
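To make the mechanism concrete, here is a heavily simplified sketch of the idea; the real implementation also has to handle releasing the chunk when the request finishes:
```java
interface CircuitBreaker {
    void addEstimateBytesAndMaybeBreak(long bytes, String label);
}

/**
 * Sketch: claim one "chunk" from the contended parent breaker up front,
 * then serve per-aggregator requests from it without touching the parent
 * again until the chunk is exhausted. Owned by a single search thread, so
 * no synchronization is needed on the local fields.
 */
class PreallocatedBreaker implements CircuitBreaker {
    private final CircuitBreaker parent;  // the real, contended REQUEST breaker
    private final long preallocatedBytes;
    private long localUsedBytes;

    PreallocatedBreaker(CircuitBreaker parent, long preallocatedBytes) {
        this.parent = parent;
        this.preallocatedBytes = preallocatedBytes;
        // the only contended call in the common case
        parent.addEstimateBytesAndMaybeBreak(preallocatedBytes, "preallocate[aggregations]");
    }

    @Override
    public void addEstimateBytesAndMaybeBreak(long bytes, String label) {
        if (localUsedBytes + bytes <= preallocatedBytes) {
            localUsedBytes += bytes;  // plain field update, no contention
        } else {
            parent.addEstimateBytesAndMaybeBreak(bytes, label);  // spill to the real breaker
        }
    }
}
```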
The benchmarks on my 12 core desktop are interesting:
```
Benchmark        (breaker)  Mode  Cnt      Score     Error  Units
sum                   noop  avgt   10      1.672 ±   0.042  us/op
sum                   real  avgt   10      4.100 ±   0.027  us/op
sum            preallocate  avgt   10      4.230 ±   0.034  us/op
termsSixtySums        noop  avgt   10     92.658 ±   0.939  us/op
termsSixtySums        real  avgt   10    278.764 ±  39.751  us/op
termsSixtySums preallocate  avgt   10    120.896 ±  16.097  us/op
termsSum              noop  avgt   10      4.573 ±   0.095  us/op
termsSum              real  avgt   10      9.932 ±   0.211  us/op
termsSum       preallocate  avgt   10      7.695 ±   0.313  us/op
```
They show pretty clearly that not using the circuit breaker at all is
faster. But we can't do that because we don't want to bring the node
down on bad aggs. When there are many aggs (termsSixtySums) the
preallocation claws back much of the performance. It even helps
marginally when there are two aggs (termsSum). For a single agg (sum)
we see a 130 nanosecond hit. Fine.
But these values are all pretty small. At best we're seeing a 160
microsecond savings. Not so on a 160 vCPU machine:
```
Benchmark        (breaker)  Mode  Cnt       Score       Error  Units
sum                   noop  avgt   10      44.956 ±     8.851  us/op
sum                   real  avgt   10     118.008 ±    19.505  us/op
sum            preallocate  avgt   10     241.234 ±   305.998  us/op
termsSixtySums        noop  avgt   10    1339.802 ±    51.410  us/op
termsSixtySums        real  avgt   10   12077.671 ± 12110.993  us/op
termsSixtySums preallocate  avgt   10    3804.515 ±  1458.702  us/op
termsSum              noop  avgt   10      59.478 ±     2.261  us/op
termsSum              real  avgt   10     293.756 ±   253.854  us/op
termsSum       preallocate  avgt   10     197.963 ±    41.578  us/op
```
All of these numbers are larger because we're running all the CPUs
flat out and we're seeing more contention everywhere. Even the "noop"
breaker sees some contention, but I think it is mostly around memory
allocation. Anyway, with many many (termsSixtySums) aggs we're looking
at 8 milliseconds of savings by preallocating. Just by dodging the busy
loop as much as possible. The error in those measurements is
substantial, though. Here are the runs:
```
real:
Iteration 1: 8679.417 ±(99.9%) 273.220 us/op
Iteration 2: 5849.538 ±(99.9%) 179.258 us/op
Iteration 3: 5953.935 ±(99.9%) 152.829 us/op
Iteration 4: 5763.465 ±(99.9%) 150.759 us/op
Iteration 5: 14157.592 ±(99.9%) 395.224 us/op
Iteration 1: 24857.020 ±(99.9%) 1133.847 us/op
Iteration 2: 24730.903 ±(99.9%) 1107.718 us/op
Iteration 3: 18894.383 ±(99.9%) 738.706 us/op
Iteration 4: 5493.965 ±(99.9%) 120.529 us/op
Iteration 5: 6396.493 ±(99.9%) 143.630 us/op
preallocate:
Iteration 1: 5512.590 ±(99.9%) 110.222 us/op
Iteration 2: 3087.771 ±(99.9%) 120.084 us/op
Iteration 3: 3544.282 ±(99.9%) 110.373 us/op
Iteration 4: 3477.228 ±(99.9%) 107.270 us/op
Iteration 5: 4351.820 ±(99.9%) 82.946 us/op
Iteration 1: 3185.250 ±(99.9%) 154.102 us/op
Iteration 2: 3058.000 ±(99.9%) 143.758 us/op
Iteration 3: 3199.920 ±(99.9%) 61.589 us/op
Iteration 4: 3163.735 ±(99.9%) 71.291 us/op
Iteration 5: 5464.556 ±(99.9%) 59.034 us/op
```
That variability from 5.5ms to 25ms is terrible. It makes me not
particularly trust the 8ms savings from the report. But still,
the preallocating method has much less variability between runs
and almost all the runs are faster than all of the non-preallocated
runs. Maybe the savings is more like 2 or 3 milliseconds, but still.
Or maybe we should think of the savings as worst case vs worst case? If
so, it's 19 milliseconds.
Anyway, it's hard to measure how much this helps. But certainly some.
Closes #58647
This change adds a gradle task that builds a simplified dependency graph
of our runtime dependencies and pushes that to be monitored by a
software composition analysis service.
Determines the size of shards before allocating shards that are
recovering from snapshots. It ensures during shard allocation that the
target node that is selected as recovery target will have enough free
disk space for the recovery event. This applies to regular restores,
CCR bootstrap from remote, as well as mounting searchable snapshots.
The InternalSnapshotInfoService is responsible for fetching snapshot
shard sizes from repositories. It provides a getShardSize() method
to other components of the system that can be used to retrieve the
latest known shard size. If the latest snapshot shard size retrieval
failed, the getShardSize() returns
ShardRouting.UNAVAILABLE_EXPECTED_SHARD_SIZE. While
we'd like a better way to handle such failures, returning this value
allows us to keep the existing behavior for now.
Note that this PR does not address an issue (which we already have today)
where a replica is being allocated without knowing how much disk
space is being used by the primary.
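The allocation-time check boils down to something like this sketch (illustrative names, not the real decider code):
```java
class SnapshotRecoveryDiskCheck {
    /**
     * Sketch of the disk-space check for a shard recovering from a snapshot.
     * A negative expected size stands for UNAVAILABLE_EXPECTED_SHARD_SIZE:
     * the size fetch failed, so keep the existing behavior and allow it.
     */
    static boolean hasEnoughSpaceForRecovery(long freeBytesOnTargetNode,
                                             long expectedShardSizeBytes,
                                             long freeBytesThresholdBytes) {
        if (expectedShardSizeBytes < 0) {
            return true;
        }
        return freeBytesOnTargetNode - expectedShardSizeBytes >= freeBytesThresholdBytes;
    }
}
```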
This commit allows the coordinating node to account for the memory used to perform partial and final reduces of
aggregations in the request circuit breaker. The search coordinator adds the memory that it used to save
and reduce the results of shard aggregations to the request circuit breaker. Before any partial or final
reduce, the memory needed to reduce the aggregations is estimated and a CircuitBreakingException is thrown
if it exceeds the maximum memory allowed in this breaker.
This size is estimated as roughly 1.5 times the size of the serialized aggregations that need to be reduced.
This estimation can be completely off for some aggregations but it is corrected with the real size after
the reduce completes.
If the reduce is successful, we update the circuit breaker to remove the size of the source aggregations
and replace the estimation with the serialized size of the newly reduced result.
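In sketch form, the accounting around a reduce looks roughly like this (`addEstimateBytesAndMaybeBreak` and `addWithoutBreaking` are the real `CircuitBreaker` methods; the surrounding helpers are stand-ins):
```java
import java.util.function.Supplier;
import java.util.function.ToLongFunction;

class ReduceAccounting {
    // Stand-in for org.elasticsearch.common.breaker.CircuitBreaker; these two
    // methods exist on the real interface.
    interface CircuitBreaker {
        void addEstimateBytesAndMaybeBreak(long bytes, String label);
        void addWithoutBreaking(long bytes);
    }

    static <T> T reduceWithBreaker(CircuitBreaker breaker,
                                   long serializedSourcesBytes,  // size of the shard responses
                                   Supplier<T> reduce,           // the partial or final reduce
                                   ToLongFunction<T> serializedSizeOf) {
        long estimated = (long) (serializedSourcesBytes * 1.5);
        breaker.addEstimateBytesAndMaybeBreak(estimated, "<reduce_aggs>"); // may trip the breaker
        T reduced = reduce.get();
        long actual = serializedSizeOf.applyAsLong(reduced);
        // release the shard-level sources and swap the estimate for the real reduced size
        breaker.addWithoutBreaking(actual - estimated - serializedSourcesBytes);
        return reduced;
    }
}
```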
As a follow up we could trigger partial reduces based on the memory accounted in the circuit breaker instead
of relying on a static number of shard responses. A simpler follow up that could be done in the meantime is
to [reduce the default batch reduce size](https://github.com/elastic/elasticsearch/issues/51857) of blocking
search requests to a more sane number.
Closes #37182
This speeds up `StreamOutput#writeVInt` quite a bit which is nice
because it is *very* commonly called when serializing aggregations. Well,
when serializing anything. All "collections" serialize their size as a
vint. Anyway, I was examining the serialization speeds of `StringTerms`
and this saves about 30% of the write time for that. I expect it'll be
useful in other places.
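The usual way to speed up a vint loop, and my reading of what matters here (an assumption, not a quote of the committed code), is to encode into a small scratch buffer and issue one bulk write instead of up to five single-byte writes:
```java
// Inside StreamOutput (sketch). writeBytes(byte[], int, int) already exists;
// the loop it replaces called writeByte up to five times per value.
public void writeVInt(int i) throws IOException {
    final byte[] scratch = new byte[5];
    int index = 0;
    while ((i & ~0x7F) != 0) {
        scratch[index++] = (byte) ((i & 0x7F) | 0x80); // low 7 bits + continuation bit
        i >>>= 7;
    }
    scratch[index++] = (byte) i;                       // last byte, continuation bit clear
    writeBytes(scratch, 0, index);                     // one bulk write
}
```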
This approach is faster, saving about 8% on the microbenchmark that rounds to
the nearest month. That is in the hot path for `date_histogram` which is
a very popular aggregation so it seems worth it to at least try and
speed it up a little.
I've always been confused by the strange behavior that I saw when
working on #57304. Specifically, I saw switching from a bimorphic
invocation to a monomorphic invocation to give us a 7%-15% performance
bump. This felt *bonkers* to me. And, it also made me wonder whether
it'd be worth looking into doing it everywhere.
It turns out that, no, it isn't needed everywhere. This benchmark shows
that a bimorphic invocation like:
```
LongKeyedBucketOrds ords = new LongKeyedBucketOrds.ForSingle();
ords.add(0, 0); <------ this line
```
is 19% slower than a monomorphic invocation like:
```
LongKeyedBucketOrds.ForSingle ords = new LongKeyedBucketOrds.ForSingle();
ords.add(0, 0); <------ this line
```
But *only* when the reference is mutable. In the example above, if
`ords` is never changed then both perform the same. But if the `ords`
reference is assigned twice then we start to see the difference:
```
immutable  bimorphic    avgt  10  6.468 ± 0.045  ns/op
immutable  monomorphic  avgt  10  6.756 ± 0.026  ns/op
mutable    bimorphic    avgt  10  9.741 ± 0.073  ns/op
mutable    monomorphic  avgt  10  8.190 ± 0.016  ns/op
```
So the conclusion from all this is that we've done the right thing:
`auto_date_histogram` is the only aggregation in which `ords` isn't final
and it is the only aggregation that forces monomorphic invocations. All
other aggregations use an immutable bimorphic invocation. Which is fine.
Relates to #56487
- Use `java-library` instead of the `java` plugin to allow api configuration usage
- Remove explicit references to runtime configurations in dependency declarations
- Make the test runtime classpath an input for the testing convention
  - required as the java-library plugin will not have built the jar file by default
  - the jar file is now an explicit input of the task and gradle will ensure it's properly built
When an index spans a daylight savings time transition we can't use our
optimization that rewrites the requested time zone to a fixed time zone
and instead we used to fall back to a java.time based rounding
implementation. In #55559 we optimized "time unit" rounding. This
optimizes "time interval" rounding.
The java.time based implementation is about 1650% slower than the
rounding implementation for a fixed time zone. This replaces it with a
similar optimization that is only about 30% slower than the fixed time
zone. The java.time implementation allocates a ton of short lived
objects but the optimized implementation doesn't. So it *might* end up
being faster than the microbenchmarks imply.
Rounding dates on a shard that contains a daylight savings time transition
is currently something like 1400% slower than when a shard contains dates
only on one side of the DST transition. And it makes a ton of short lived
garbage. This replaces that implementation with one that benchmarks to
having around 30% overhead instead of the 1400%. And it doesn't generate
any garbage per search hit.
Some background:
There are two ways to round in ES:
* Round to the nearest time unit (Day/Hour/Week/Month/etc)
* Round to the nearest time *interval* (3 days/2 weeks/etc)
I'm only optimizing the first one in this change and plan to do the second
in a follow up. It turns out that rounding to the nearest unit really *is*
two problems: when the unit rounds to midnight (day/week/month/year) and
when it doesn't (hour/minute/second). Rounding to midnight is consistently
about 25% faster than rounding to individual hours or minutes.
This optimization relies on being able to *usually* figure out what the
minimum and maximum dates are on the shard. This is similar to an existing
optimization where we rewrite time zones that aren't fixed
(think America/New_York and its daylight savings time transitions) into
fixed time zones so long as there isn't a daylight savings time transition
on the shard (UTC-5 or UTC-4 for America/New_York). Once I implement
time interval rounding the time zone rewriting optimization *should* no
longer be needed.
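A sketch of the core idea using plain `java.time` types; the real implementation binary searches and is careful when a rounded value lands near a transition:
```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.zone.ZoneOffsetTransition;
import java.time.zone.ZoneRules;
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch: precompute the offsets in effect between the shard's min and max
 * dates, then round each value with plain arithmetic.
 */
class PrecomputedDayRounding {
    private final long[] transitionMillis; // start of each offset "era" in [min, max]
    private final long[] offsetMillis;     // utc offset in effect from that point on

    PrecomputedDayRounding(ZoneId zone, long minUtcMillis, long maxUtcMillis) {
        ZoneRules rules = zone.getRules();
        List<Long> points = new ArrayList<>();
        List<Long> offsets = new ArrayList<>();
        Instant t = Instant.ofEpochMilli(minUtcMillis);
        points.add(minUtcMillis);
        offsets.add(rules.getOffset(t).getTotalSeconds() * 1000L);
        ZoneOffsetTransition next;
        while ((next = rules.nextTransition(t)) != null
                && next.getInstant().toEpochMilli() <= maxUtcMillis) {
            t = next.getInstant();
            points.add(t.toEpochMilli());
            offsets.add(next.getOffsetAfter().getTotalSeconds() * 1000L);
        }
        this.transitionMillis = points.stream().mapToLong(Long::longValue).toArray();
        this.offsetMillis = offsets.stream().mapToLong(Long::longValue).toArray();
    }

    long roundToDay(long utcMillis) {
        int i = transitionMillis.length - 1;
        while (i > 0 && transitionMillis[i] > utcMillis) {
            i--;                            // linear here; binary search in practice
        }
        long local = utcMillis + offsetMillis[i];
        long localDayStart = local - Math.floorMod(local, 86_400_000L);
        return localDayStart - offsetMillis[i];
    }
}
```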
This optimization doesn't come into play for `composite` or
`auto_date_histogram` aggs because neither have been migrated to the new
`DATE` `ValuesSourceType` which is where that range lookup happens. When
they are they will be able to pick up the optimization without much work.
I expect this to be substantial for `auto_date_histogram` but less so for
`composite` because it deals with fewer values.
Note: My 30% overhead figure comes from small numbers of daylight savings
time transitions. That overhead gets higher when there are more
transitions in logarithmic fashion. When there are two thousand years
worth of transitions my algorithm ends up being 250% slower than rounding
without a time zone, but java time is 47000% slower at that point,
allocating memory as fast as it possibly can.
Currently forbidden apis accounts for 800+ tasks in the build. These
tasks are aggressively created by the plugin. In forbidden apis 3.0, we
will get task avoidance
(https://github.com/policeman-tools/forbidden-apis/pull/162), but we
need to ourselves use the same task avoidance mechanisms to not trigger
these task creations. This commit does that for our forbidden apis
usages, in preparation for upgrading to 3.0 when it is released.
This commit merges the searchable-snapshots feature branch into master.
See #54803 for the complete list of squashed commits.
Co-authored-by: David Turner <david.turner@elastic.co>
Co-authored-by: Yannick Welsch <yannick@welsch.lu>
Co-authored-by: Lee Hinman <dakrone@users.noreply.github.com>
Co-authored-by: Andrei Dan <andrei.dan@elastic.co>
This is a simple naming change PR, to fix the fact that "metadata" is a
single English word, and for too long we have not followed general
naming conventions for it. We are also not consistent about it, for
example, METADATA instead of META_DATA if we were trying to be
consistent with MetaData (although METADATA is correct when considered
in the context of "metadata"). This was a simple find and replace across
the code base, only taking a few minutes to fix this naming issue
forever.
IDEs can sometimes run annotation processors that leave files in
`src/main/generated/**/*.java`, causing Spotless to complain. Even
though this path ought not to exist, exclude it anyway in order to avoid
spurious failures.
Also, in the Allocators class, a number of methods declared exceptions that IntelliJ reported were never thrown. As far as I could see this is true, so I removed those declarations.
This commit is part of issue #40366 to remove disabled Xlint warnings
from gradle files. In particular, it removes the Xlint exclusions from
the following files:
- benchmarks/build.gradle
- client/client-benchmark-noop-api-plugin/build.gradle
- x-pack/qa/rolling-upgrade/build.gradle
- x-pack/qa/third-party/active-directory/build.gradle
- modules/transport-netty4/build.gradle
For the first three files no code adjustments were needed. For
x-pack/qa/third-party/active-directory the suppression was moved to the
code level. For transport-netty4 the variable arguments were replaced
with ArrayLists and any redundant casts removed.
Closes #48724. Update `.editorconfig` to make the Java settings the default
for all files, and then apply a 2-space indent to all `*.gradle` files.
Then reformat all the files.
This commit introduces a consistent and type-safe manner for handling
global build parameters throughout our build logic. Primarily this
replaces the existing usages of extra properties with static accessors.
It also introduces an explicit API for initialization and mutation of
any such parameters, as well as better error handling for uninitialized
or eager access of parameter values.
Closes #42042
This commit replaces the existing RandomizedTestingTask and supporting code with Gradle's built-in JUnit support via the Test task type. Additionally, the previous workaround to disable all tasks named "test" and create new unit testing tasks named "unitTest" has been removed such that the "test" task now runs unit tests as per the normal Gradle Java plugin conventions.
Many gradle projects specifically use the `-try` exclude flag, because
there are many cases where an auto-closeable resource is intentionally
never referenced in the body of the corresponding try statement.
Suppressing this warning specifically in each case that it happens using
`@SuppressWarnings("try")` would be very verbose.
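For reference, the warning fires on code shaped like this (names are illustrative):
```java
class TryWarningExample {
    static AutoCloseable acquireLock() { return () -> {}; }
    static void doWork() {}

    static void locked() throws Exception {
        // javac -Xlint:try warns here: auto-closeable resource 'lock' is never
        // referenced in the body of the corresponding try statement
        try (AutoCloseable lock = acquireLock()) {
            doWork();
        }
        // silencing this per-site would require @SuppressWarnings("try") on
        // every enclosing method
    }
}
```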
This change removes `-try` from every gradle project and adds it to the
build plugin. It also removes exclude flags from gradle projects that
are already specified in the build plugin (for example `-deprecation`).
Relates to #40366
This moves the joda based rounding classes into the test directory so
they do not get used accidentally. This allows us to keep the duel tests
running for now, which ensure that joda and java time implementations
produce the same results. This also removes the benchmarks class.
The benchmarks showed a sharp decrease in aggregation performance for
the UTC case.
This commit uses the same calculation as joda time, which requires no
conversion into any java time object. Also, the check for a fixed offset
has been moved into the ctor to reduce the need for runtime calculations,
as has the computation of the unit's length in milliseconds.
Closes #37826
This reduces object creation in the rounding class (used by aggs) by
creating the needed objects only once. Furthermore, a few unneeded
ZonedDateTime objects were being created just to create other objects out
of them. This was changed as well.
Running the benchmarks shows a much faster performance for all of the
java time based Rounding classes.
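One representative shape of the fix (a sketch, not the exact code):
```java
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneId;

class RoundingHelpers {
    static LocalDateTime toLocal(long utcMillis, ZoneId zone) {
        // before: Instant.ofEpochMilli(utcMillis).atZone(zone).toLocalDateTime()
        // allocated a ZonedDateTime just to throw it away; this doesn't:
        return LocalDateTime.ofInstant(Instant.ofEpochMilli(utcMillis), zone);
    }
}
```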
The existing implementation was slow due to exceptions being thrown if
an accessor did not have a time zone. This implementation queries the
accessor for a time zone, local time, and local date, and also checks for
an instant, avoiding exceptions and thus speeding up the conversion.
This removes the existing method and creates a new one named
DateFormatters.from(TemporalAccessor accessor) to resemble the naming of
the java time ones.
Before this change an epoch millis parser using the toZonedDateTime
method took approximately 50x longer.
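A sketch of the probing approach using plain `java.time` queries; the real `DateFormatters.from` handles more combinations:
```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoField;
import java.time.temporal.TemporalAccessor;
import java.time.temporal.TemporalQueries;

class Conversions {
    static ZonedDateTime from(TemporalAccessor accessor) {
        ZoneId zone = accessor.query(TemporalQueries.zone()); // null, not an exception
        if (zone == null) {
            zone = ZoneOffset.UTC;
        }
        if (accessor.isSupported(ChronoField.INSTANT_SECONDS)) {
            return ZonedDateTime.ofInstant(Instant.from(accessor), zone);
        }
        LocalDate date = accessor.query(TemporalQueries.localDate());
        LocalTime time = accessor.query(TemporalQueries.localTime());
        if (date != null && time != null) {
            return ZonedDateTime.of(date, time, zone);
        }
        // ... further fallbacks (date-only, time-only, year-only) elided
        throw new IllegalArgumentException("unsupported temporal accessor");
    }
}
```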
Relates #37826
This commit moves the aggregation and mapping code from joda time to
java time. This includes field mappers, root object mappers, aggregations with date
histograms, query builders and a lot of changes within tests.
The cut-over to java time is a requirement so that we can support nanoseconds
properly in a future field mapper.
Relates #27330
* Add benchmark
* Use java time API instead of exception handling
When several formatters are used, the existing way of parsing is to
throw an exception, catch it, and try the next one. This is
considerably slower than the approach taken in joda time, so indexing
slows down when a date format like `x||y` is used and `y` is the
date format that matches.
This commit now uses the java API to parse the date by appending the
date time formatters to each other and does not rely on exception
handling.
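A sketch of the chaining, with illustrative formats standing in for `x` and `y`:
```java
import java.time.format.DateTimeFormatter;
import java.time.format.DateTimeFormatterBuilder;
import java.time.temporal.TemporalAccessor;

class CombinedFormats {
    // combine the alternatives of "x||y" into one formatter: a single parse
    // attempt walks the optional sections instead of throwing and retrying
    static final DateTimeFormatter X_OR_Y = new DateTimeFormatterBuilder()
        .appendOptional(DateTimeFormatter.ISO_DATE_TIME)            // "x" (illustrative)
        .appendOptional(DateTimeFormatter.ofPattern("yyyy-MM-dd"))  // "y" (illustrative)
        .toFormatter();

    static TemporalAccessor parse(String input) {
        return X_OR_Y.parse(input); // no exception per candidate format
    }
}
```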
* fix benchmark
* fix tests by changing formatter, also expose printer
* restore optional printing logic to fix tests
* fix tests
* incorporate review comments
Stop passing `Settings` to `AbstractComponent`'s ctor. This allows us to
stop passing around `Settings` in a *ton* of places. While this change
touches many files, it touches them all in fairly small, mechanical
ways, doing a few things per file:
1. Drop the `super(settings);` line on everything that extends
`AbstractComponent`.
2. Drop the `settings` argument to the ctor if it is no longer used.
3. If the file doesn't use `logger` then drop `extends
AbstractComponent` from it.
4. Clean up all compilation failures caused by the `settings` removal
and drop any now unused `settings` instances and method arguments.
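In before/after form, steps 1 through 4 look something like this per file (`FooService` is hypothetical):
```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

// before (sketch):
//
//   public class FooService extends AbstractComponent {
//       public FooService(Settings settings) {
//           super(settings); // only needed so AbstractComponent could build `logger`
//       }
//   }

// after (sketch): plain class with its own logger; no Settings required
public class FooService {
    private static final Logger logger = LogManager.getLogger(FooService.class);
}
```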
I've intentionally *not* removed the `settings` argument from a few
files:
1. TransportAction
2. AbstractLifecycleComponent
3. BaseRestHandler
These files don't *need* `settings` either, but this change is large
enough as is.
Relates to #34488
ES is scanning for dangling indices on every cluster state update. For this, it lists the subfolders of
the indices directory to determine which extra index directories exist on the node where there's no
corresponding index in the cluster state. These are potential targets for dangling index import. On
certain machine types, and with a large number of indices, this subfolder listing can be horribly slow.
This means that every cluster state update will be slowed down by potentially hundreds of
milliseconds. One of the reasons for this poor performance is that Files.isDirectory() is a relatively
expensive call on some OS and JDK versions. There is no need though to do all these isDirectory
calls for folders which we know we are going to discard anyhow in the next step of the dangling
indices logic. This commit allows adding an exclusion predicate to the availableIndexFolders
methods which can dramatically speed up this method when scanning for dangling indices.
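A sketch of the shape of such a method (not the exact `NodeEnvironment` code):
```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

class IndexFolderScan {
    static List<Path> availableIndexFolders(Path indicesDir,
                                            Predicate<String> excludeIndexFolderName) throws IOException {
        List<Path> result = new ArrayList<>();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(indicesDir)) {
            for (Path candidate : stream) {
                String folderName = candidate.getFileName().toString();
                if (excludeIndexFolderName.test(folderName)) {
                    continue; // cheap string check; the expensive isDirectory call is skipped
                }
                if (Files.isDirectory(candidate)) {
                    result.add(candidate);
                }
            }
        }
        return result;
    }
}
```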
Removes shadowing from the benchmarks. It isn't *strictly* needed. We do
have to rework the documentation on how to run the benchmark, but it
still seems to work if you run everything through gradle.
First, some background: we have 15 different methods to get a logger in
Elasticsearch but they can be broken down into three broad categories
based on what information is provided when building the logger.
Just a class like:
```
private static final Logger logger = ESLoggerFactory.getLogger(ActionModule.class);
```
or:
```
protected final Logger logger = Loggers.getLogger(getClass());
```
The class and settings:
```
this.logger = Loggers.getLogger(getClass(), settings);
```
Or more information like:
```
Loggers.getLogger("index.store.deletes", settings, shardId)
```
The goal of the "class and settings" variant is to attach the node name
to the logger. Because we don't always have the settings available, we
often use the "just a class" variant and get loggers without node names
attached. There isn't any real consistency here. Some loggers get the
node name because it is convenient and some do not.
This change makes the node name available to all loggers all the time.
Almost. There are some caveats around testing that I'll get to. But in
*production* code the node name is now available to all loggers. This
means we can stop using the "class and settings" variants to fetch
loggers, which was the real goal here, but a pleasant side effect is that
the node name is now consistent on every log line and optional by editing
the logging pattern. This is all powered by setting the node name
statically on a logging formatter very early in initialization.
Now to tests: tests can't set the node name statically because
subclasses of `ESIntegTestCase` run many nodes in the same jvm, even in
the same class loader. Also, lots of tests don't run with a real node so
they don't *have* a node name at all. To support multiple nodes in the
same JVM, tests suss out the node name from the thread name, which works
surprisingly well and is easy to test in a nice way. For those threads
that are not part of an `ESIntegTestCase` node we stick whatever useful
information we can get from the thread name in the place of the node
name. This allows us to keep the logger format consistent.
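For the curious, here's a minimal sketch of the kind of Log4j2 pattern converter that makes this work; Elasticsearch's real converter differs in details:
```java
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.pattern.ConverterKeys;
import org.apache.logging.log4j.core.pattern.LogEventPatternConverter;
import org.apache.logging.log4j.core.pattern.PatternConverter;

@Plugin(category = PatternConverter.CATEGORY, name = "NodeName")
@ConverterKeys({"node_name"})
public final class NodeNameConverter extends LogEventPatternConverter {
    private static volatile String nodeName = "unknown";

    /** Called once, very early in node startup. */
    public static void setNodeName(String name) {
        nodeName = name;
    }

    /** Factory method Log4j2 looks for when wiring up pattern converters. */
    public static NodeNameConverter newInstance(final String[] options) {
        return new NodeNameConverter();
    }

    private NodeNameConverter() {
        super("NodeName", "node_name");
    }

    @Override
    public void format(LogEvent event, StringBuilder toAppendTo) {
        toAppendTo.append(nodeName);
    }
}
```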
Moves the customizations to the build to produce nice shadow jars and
javadocs into common build code, mostly BuildPlugin with a little into
the root build.gradle file. This means that any project that applies the
shadow plugin will automatically be set up just like the high level rest
client:
* The non-shadow jar will not be built
* The shadow jar will not have a "classifier"
* Tests will run against the shadow jar
* Javadoc will include all of the shadowed classes
* Service files in `META-INF/services` will be merged
With this commit we introduce a new circuit-breaking strategy to the parent
circuit breaker. Contrary to the current implementation which only accounts for
memory reserved via child circuit breakers, the new strategy measures real heap
memory usage at the time of reservation. This allows us to be much more
aggressive with the circuit breaker limit so we bump it to 95% by default. The
new strategy is turned on by default and can be controlled with the new cluster
setting `indices.breaker.total.use_real_memory`.
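In sketch form, the new strategy's check is roughly this (simplified; see the comment for what's elided):
```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;

class RealMemoryCheck {
    // Sketch: consult the JVM's real heap usage at reservation time instead of
    // only summing the child breakers' estimates. The real code throws a
    // CircuitBreakingException and also tracks the child reservations.
    static void checkParentLimit(long newBytesReserved, long parentLimitBytes) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        long realHeapUsedBytes = memory.getHeapMemoryUsage().getUsed();
        if (realHeapUsedBytes + newBytesReserved > parentLimitBytes) {
            throw new IllegalStateException("[parent] real usage [" + realHeapUsedBytes
                + "] plus new reservation [" + newBytesReserved
                + "] would exceed the limit [" + parentLimitBytes + "]");
        }
    }
}
```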
Note that we turn it off for all integration tests with an internal test cluster
because it leads to spurious test failures which are of no value (we cannot
fully control heap memory usage in tests). All REST tests, however, will make
use of the real memory circuit breaker.
Relates #31767
* Move to Gradle 4.8 RC1
* Use latest version of plugin
The current version does not work with Gradle 4.8 RC1
* Switch to Gradle GA
* Add and configure build compare plugin
* add work-around for https://github.com/gradle/gradle/issues/5692
* work around https://github.com/gradle/gradle/issues/5696
* Make use of Gradle build compare with reference project
* Make the manifest more compare friendly
* Clear the manifest in compare friendly mode
* Remove animalsniffer from buildscript classpath
* Fix javadoc errors
* Fix doc issues
* reference Gradle issues in comments
* Conditionally configure build compare
* Fix some more doclint issues
* fix typo in build script
* Add sanity check to make sure the test task was replaced
Relates to #31324. It seems like Gradle has inconsistent behavior and
the task is not always replaced.
* Include the number of non-conforming tasks in the exception.
* No longer replace test task, create implicit instead
Closes #31324. The issue has full context in comments.
With this change the `test` task becomes nothing more than an alias for `utest`.
Some of the stand-alone tests that had a `test` task now have `integTest`, and a
few of them that used to have `integTest` to run multiple tests now only
have `check`.
This will also help separate unit/micro tests from integration tests.
* Revert "No longer replace test task, create implicit instead"
This reverts commit f1ebaf7d93e4a0a19e751109bf620477dc35023c.
* Fix replacement of the test task
Based on information from gradle/gradle#5730 replace the task taking
into account the task providers.
Closes #31324.
* Only apply build compare plugin if needed
* Make sure test runs before integTest
* Fix doclint after merge
* PR review comments
* Switch to Gradle 4.8.1 and remove workaround
* PR review comments
* Consolidate task ordering
* Solve Gradle deprecation warnings around shadowJar
- Apply workaround for: johnrengelman/shadow#336
- bump plugin to 2.0.4
Changes between 2.0.2 and 2.0.4 of the plugin:
```
477db40 12 days ago john.engelman@target.com Release 2.0.4
3e3da37 3 weeks ago john.engelman@target.com Remove internal Gradle API and annotation internal getters on shadow jar.
31e2380 3 weeks ago john.engelman@target.com Close input streams. Closes#364
f712cc8 3 weeks ago john.engelman@target.com Upgrade ASM to 6.1.1 to address perf issues. Closes#374
2f94b2b 3 weeks ago john.engelman@target.com next version
23bbf3d 7 weeks ago john.r.engelman@gmail.com Add some gradle versions. Update changelog for 2.0.3
7435c74 7 weeks ago john.r.engelman@gmail.com Merge pull request #367 from ttsiebzehntt/366-java10
325c002 7 weeks ago info@martinsadowski.de Update ASM to 6.1
94550e5 3 months ago john.r.engelman@gmail.com Merge pull request #356 from sgnewson/update-file-to-files
66b691e 4 months ago john.r.engelman@gmail.com Merge pull request #358 from 3flex/patch-1
14761b1 4 months ago 3flex@users.noreply.github.com fix markdown for User Guide URL in issue template
a3f6984 4 months ago newson@synopsys.com update inputs.file to inputs.files, to remove warning
```
closes #30389
* Improve comment as suggested
With this commit we configure our microbenchmarks project to use the
configured RUNTIME_JAVA_HOME and to fall back on JAVA_HOME so this
behavior is consistent with the rest of the Elasticsearch build.
Closes #28961