Adds support for "Default Application Credentials" for GCS repositories, making it easier to set up a repository on GCP,
as all relevant information to connect to the repository is retrieved from the environment, not necessitating complicated
keystore setups.
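A minimal sketch of registering such a repository, assuming a local cluster on localhost:9200 and a hypothetical bucket name; with Default Application Credentials no keystore entries are required:

```python
# Register a GCS snapshot repository via the REST API. With "Default
# Application Credentials", the GCS client picks up credentials from the
# environment (e.g. the metadata server on GCE), so no keystore setup is
# needed. The host and bucket name below are assumptions for illustration.
import requests

resp = requests.put(
    "http://localhost:9200/_snapshot/my_gcs_repository",
    json={
        "type": "gcs",
        "settings": {"bucket": "my-gcs-bucket"},  # hypothetical bucket
    },
)
resp.raise_for_status()
print(resp.json())  # {'acknowledged': True} on success
```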
Currently, metadata fields like `_size` or `_doc_count` cannot be retrieved using
the fields API. With this change, we allow this if the field is explicitly
queried for by name, but we won't include metadata fields when e.g.
requesting all fields via "*".
This change does not make every metadata field retrievable by name: support is
added for `_size` and `_doc_count` (which is fetched from source), while support
for other metadata field types will need to be decided case by case, with an
appropriate ValueFetcher supplied.
Relates to #63569
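A minimal sketch of the new behavior, assuming a local cluster and a hypothetical index `my-index` (with the mapper-size plugin enabled for `_size`):

```python
# `_size` and `_doc_count` are returned only when named explicitly in the
# fields API; a wildcard pattern such as "*" still excludes metadata fields.
import requests

# Explicitly requesting metadata fields by name now works.
explicit = requests.post(
    "http://localhost:9200/my-index/_search",
    json={"fields": ["_size", "_doc_count"]},
).json()

# A wildcard matches regular mapped fields only; metadata fields are omitted.
wildcard = requests.post(
    "http://localhost:9200/my-index/_search",
    json={"fields": ["*"]},
).json()
```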
Closes #66476. Add support for removing multiple plugins at the
same time to `elasticsearch-plugin`. Also change references from
"plugin name" to "plugin id", to align better with the installer
class.
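A minimal sketch of the new multi-plugin removal, wrapped in Python for illustration; the plugin ids are examples:

```python
# With this change, a single `elasticsearch-plugin remove` invocation can
# take several plugin ids at once instead of one call per plugin.
import subprocess

subprocess.run(
    ["bin/elasticsearch-plugin", "remove", "analysis-icu", "analysis-kuromoji"],
    check=True,  # raise if the command exits non-zero
)
```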
We removed the global `repositories.s3.base_path` setting in 6.0 but it
is still mentioned in the docs for the S3 repository plugin. This commit
removes it from the docs.
Relates #24445
The bucket naming convention should be clear: underscores (which we use in all of our examples) are not allowed, so we should change all these examples.
* Clarify that the field data cache includes global ordinals
* Describe that the cache should be cleared once the limit is reached (see the sketch after this list)
* Clarify that the `_id` field no longer supports aggregations
* Fold the `fielddata` mapping parameter page into the `text` field docs
* Improve cross-linking
Closes #61145.
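A minimal sketch of clearing the field data cache (which includes global ordinals), assuming a local cluster:

```python
# The clear-cache API can drop field data once the cache approaches the
# configured limit (indices.fielddata.cache.size).
import requests

resp = requests.post(
    "http://localhost:9200/_cache/clear",
    params={"fielddata": "true"},  # clear only the field data cache
)
resp.raise_for_status()
```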
This PR adds a quota-aware filesystem plugin to Elasticsearch. This plugin
offers a way to provide user quota limits (specifically, total quota size
and available quota size) to Elasticsearch, in an implementation-agnostic
manner.
As part of this work, this PR also introduces the concept of "bootstrap
only" plugins, which are excluded from the normal plugin loading process.
Finally, note that this implementation supports `createLink(...)`, since ES
/ Lucene use hard links where possible.
Today in the `repository-s3` docs we say
> Other S3-compatible storage systems may also work with Elasticsearch,
> but these are not tested or supported.
Saying that they are explicitly not supported is a very strong
statement, implying that it is positively irresponsible to use anything
except S3 or Minio, even after extensive testing. S3-compatibility in
third-party systems has matured in recent years and users today report
success with a good number of them. In contrast, we effectively claim
support for any old NFS implementation when used with the
shared-filesystem repository.
This commit weakens this statement, removing the absolute claim of
unsupportedness and instead spelling out that the user is responsible
for ironing out any incompatibilities with the storage supplier.
* Add PGSync as a new community-supported tool (#62788)
* Removing an errant space in the Kafka link.
Co-authored-by: Tolu Aina <7848930+toluaina@users.noreply.github.com>
* Update URL for the camel.apache component integration
* Point the v5.0 code URL to the right branch
* Fix text around the link
Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>
Changes:
* Moves `Retrieve selected fields` to its own page and adds a title abbreviation.
* Adds existing script and stored fields content to `Retrieve selected fields`
* Adds an xref for `Retrieve selected fields` to `Search your data`
* Adds related redirects and updates existing xrefs
Add VPC endpoint as the recommended way of connecting to S3 from private subnets
Co-authored-by: Bill Mitchell <vocatan@users.noreply.github.com>
Co-authored-by: David Turner <david.turner@elastic.co>
Plugin discovery documentation contained information about installing
Elasticsearch 2.0 and installing an Oracle JDK, both of which are no
longer valid.
Since the instructions used cleartext HTTP to install packages, this
commit also replaces HTTP links with HTTPS where possible.
In addition, a few community links have been removed, as they no longer
appear to exist.
Removing these limits as they cause unnecessarily many objects in the blob stores.
We do not have to worry about BwC of this change since we do not support any 3rd-party
implementations of Azure or GCS.
Also, since there is currently no valid reason to set a maximum chunk size other than the
default, the documentation for the setting (which was incorrect in the case of Azure to
begin with) is removed as well.
Closes #56018
Restoring from a snapshot (which is a particular form of recovery) does not currently take recovery throttling into account
(i.e. the `indices.recovery.max_bytes_per_sec` setting). While restores are subject to their own throttling (repository
setting `max_restore_bytes_per_sec`), this repository setting does not allow for values to be configured differently on a
per-node basis. As restores are very similar in nature to peer recoveries (streaming bytes to the node), it makes sense to
configure throttling in a single place.
The `max_restore_bytes_per_sec` setting is also changed to default to unlimited, whereas previously it was set to
`40mb` (the current default of `indices.recovery.max_bytes_per_sec`). This means that no behavioral change
will be observed by clusters where the recovery and restore settings were not adapted.
Relates https://github.com/elastic/elasticsearch/issues/57023
Co-authored-by: James Rodewig <james.rodewig@elastic.co>
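A minimal sketch, assuming a local cluster: with restores now subject to recovery throttling, the dynamic `indices.recovery.max_bytes_per_sec` setting governs both peer recoveries and snapshot restores in one place:

```python
# Configure recovery (and now also restore) throttling via the cluster
# settings API; "40mb" here simply mirrors the default rate.
import requests

resp = requests.put(
    "http://localhost:9200/_cluster/settings",
    json={"persistent": {"indices.recovery.max_bytes_per_sec": "40mb"}},
)
resp.raise_for_status()
```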
Changes:
* Condenses and relocates the `docvalue_fields` example to the 'Run a search'
page.
* Adds docs for the `docvalue_fields` request body parameter.
* Updates several related xrefs.
Co-authored-by: debadair <debadair@elastic.co>
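A minimal sketch of the `docvalue_fields` request body parameter, assuming a local cluster and hypothetical index and field names:

```python
# Request doc values for selected fields; entries may be plain field names
# or objects with an explicit format.
import requests

resp = requests.post(
    "http://localhost:9200/my-index/_search",
    json={
        "query": {"match_all": {}},
        "docvalue_fields": [
            "user.id",  # plain field name
            {"field": "@timestamp", "format": "epoch_millis"},  # with format
        ],
    },
)
print(resp.json())
```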