This guide covers the core concepts you need to understand to get started with {es}.
If you'd prefer to start working with {es} right away, set up a <<run-elasticsearch-locally,local development environment>> and jump to <<quickstart,hands-on code examples>>.
This guide covers the following topics:
* <<elasticsearch-intro-what-is-es>>: Learn about {es} and some of its main use cases.
* <<elasticsearch-intro-deploy>>: Understand your options for deploying {es} in different environments, including a fast local development setup.
* <<documents-indices>>: Understand {es}'s most important primitives and how it stores data.
* <<es-ingestion-overview>>: Understand your options for ingesting data into {es}.
* <<search-analyze>>: Understand your options for searching and analyzing data in {es}.
* <<scalability>>: Understand the basic concepts required for moving your {es} deployment to production.
{es} is used for a wide and growing range of use cases. Here are a few examples:
**Observability**
* *Logs, metrics, and traces*: Collect, store, and analyze logs, metrics, and traces from applications, systems, and services.
* *Application performance monitoring (APM)*: Monitor and analyze the performance of business-critical software applications.
* *Real user monitoring (RUM)*: Monitor, quantify, and analyze user interactions with web applications.
* *OpenTelemetry*: Reuse your existing instrumentation to send telemetry data to the Elastic Stack using the OpenTelemetry standard.
**Search**
* *Full-text search*: Build a fast, relevant full-text search solution using inverted indexes, tokenization, and text analysis.
* *Vector database*: Store and search vectorized data, and create vector embeddings with built-in and third-party natural language processing (NLP) models.
* *Semantic search*: Understand the intent and contextual meaning behind search queries using tools like synonyms, dense vector embeddings, and learned sparse query-document expansion.
* *Hybrid search*: Combine full-text search with vector search using state-of-the-art ranking algorithms.
* *Build search experiences*: Add hybrid search capabilities to apps or websites, or build enterprise search engines over your organization's internal data sources.
* *Retrieval augmented generation (RAG)*: Use {es} as a retrieval engine to supplement generative AI models with more relevant, up-to-date, or proprietary data for a range of use cases.
* *Geospatial search*: Search for locations and calculate spatial relationships using geospatial queries.
**Security**
* *Security information and event management (SIEM)*: Collect, store, and analyze security data from applications, systems, and services.
* *Endpoint security*: Monitor and analyze endpoint security data.
* *Threat hunting*: Search and analyze data to detect and respond to security threats.
This is just a sample of search, observability, and security use cases enabled by {es}.
Refer to Elastic https://www.elastic.co/customers/success-stories[customer success stories] for concrete examples across a range of industries.
* <<run-elasticsearch-locally,*Local development*>>: Get started quickly with a minimal local Docker setup for development and testing.
**Hosted options**
* {cloud}/ec-getting-started-trial.html[*Elastic Cloud Hosted*]: {es} is available as part of the hosted Elastic Stack offering, deployed in the cloud with your provider of choice. Sign up for a https://cloud.elastic.co/registration[14-day free trial].
* {serverless-docs}/general/sign-up-trial[*Elastic Cloud Serverless*]: Create serverless projects for autoscaled and fully managed {es} deployments. Sign up for a https://cloud.elastic.co/serverless-registration[14-day free trial].
* <<elasticsearch-deployment-options,*Self-managed*>>: Install, configure, and run {es} on your own premises.
* {ece-ref}/Elastic-Cloud-Enterprise-overview.html[*Elastic Cloud Enterprise*]: Deploy Elastic Cloud on public or private clouds, virtual machines, or your own premises.
* {eck-ref}/k8s-overview.html[*Elastic Cloud on Kubernetes*]: Deploy Elastic Cloud on Kubernetes.
This index abstraction is optimized for append-only timestamped data, and is made up of hidden, auto-generated backing indices.
If you're working with timestamped data, we recommend the {observability-guide}[Elastic Observability] solution for additional tools and optimized content.
* <<mapping-dynamic, Dynamic mapping>>: Let {es} automatically detect the data types and create the mappings for you. Dynamic mapping helps you get started quickly, but might yield suboptimal results for your specific use case due to automatic field type inference.
* <<mapping-explicit, Explicit mapping>>: Define the mappings up front by specifying data types for each field. Recommended for production use cases, because you have full control over how your data is indexed to suit your specific use case. See the sketch after this list.
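For example, here's a minimal sketch of defining an explicit mapping at index creation time with the https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} Python client]. The index name, fields, endpoint, and API key are hypothetical placeholders; adjust them for your deployment:

[source,python]
----
from elasticsearch import Elasticsearch

# Hypothetical connection details; adjust for your deployment.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# Explicit mapping: declare each field's type up front instead of
# relying on dynamic field type inference.
client.indices.create(
    index="products",  # hypothetical index name
    mappings={
        "properties": {
            "name": {"type": "text"},     # analyzed for full-text search
            "sku": {"type": "keyword"},   # exact-match filtering and aggregations
            "price": {"type": "float"},
            "created_at": {"type": "date"},
        }
    },
)
----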
The option that you choose depends on whether you're working with timestamped data or non-timestamped data, where the data is coming from, its complexity, and more.
[TIP]
====
To get started quickly, you can load {kibana-ref}/connect-to-elasticsearch.html#_add_sample_data[sample data] into your {es} cluster using {kib}.
====
[discrete]
[[es-ingestion-overview-general-content]]
==== General content
General content is data that does not have a timestamp.
This could be data like vector embeddings, website content, product catalogs, and more.
For general content, you have the following options for adding data to {es} indices:
* <<docs,API>>: Use the {es} <<docs,Document APIs>> to index documents directly, for example using the Dev Tools {kibana-ref}/console-kibana.html[Console] or cURL.
+
If you're building a website or app, then you can call Elasticsearch APIs using an https://www.elastic.co/guide/en/elasticsearch/client/index.html[{es} client] in the programming language of your choice. If you use the Python client, then check out the `elasticsearch-labs` repo for various https://github.com/elastic/elasticsearch-labs/tree/main/notebooks/search/python-examples[example notebooks]. A minimal indexing sketch follows this list.
* {kibana-ref}/connect-to-elasticsearch.html#upload-data-kibana[File upload]: Use the {kib} file uploader to index single files for one-off testing and exploration. The GUI guides you through setting up your index and field mappings.
* https://github.com/elastic/crawler[Web crawler]: Extract and index web page content into {es} documents.
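As a minimal sketch of the API option, the following indexes one document with the Python client and retrieves it by ID. The index name, document fields, endpoint, and API key are hypothetical placeholders:

[source,python]
----
from elasticsearch import Elasticsearch

# Hypothetical connection details; adjust for your deployment.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# Index a single document. If the index doesn't exist yet,
# Elasticsearch creates it and infers the mappings dynamically.
client.index(
    index="products",
    id="1",
    document={"name": "Wireless headphones", "sku": "WH-1000", "price": 279.99},
)

# Retrieve the document by ID.
resp = client.get(index="products", id="1")
print(resp["_source"])
----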
Timestamped data in {es} refers to datasets that include a timestamp field. If you use the {ecs-ref}/ecs-reference.html[Elastic Common Schema (ECS)], this field is named `@timestamp`.
This could be data like logs, metrics, and traces.
For timestamped data, you have the following options for adding data to {es} data streams:
* {fleet-guide}/fleet-overview.html[Elastic Agent and Fleet]: The preferred way to index timestamped data. Each Elastic Agent-based integration includes default ingestion rules, dashboards, and visualizations to start analyzing your data right away.
You can use the Fleet UI in {kib} to centrally manage Elastic Agents and their policies.
* {beats-ref}/beats-reference.html[Beats]: If your data source isn't supported by Elastic Agent, use Beats to collect and ship data to Elasticsearch. You install a separate Beat for each type of data to collect.
* {logstash-ref}/introduction.html[Logstash]: Logstash is an open source data collection engine with real-time pipelining capabilities that supports a wide variety of data sources. You might use this option because neither Elastic Agent nor Beats supports your data source. You can also use Logstash to persist incoming data, or if you need to send the data to multiple destinations.
* {cloud}/ec-ingest-guides.html[Language clients]: The linked tutorials demonstrate how to use {es} programming language clients to ingest data from an application. In these examples, {es} is running on Elastic Cloud, but the same principles apply to any {es} deployment. A minimal sketch follows this list.
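As a minimal sketch of the language client option, the following bulk-indexes one timestamped document into a data stream with the Python client. It assumes a data stream name that matches the built-in `logs-*-*` index template; the name, fields, and connection details are hypothetical placeholders:

[source,python]
----
from datetime import datetime, timezone

from elasticsearch import Elasticsearch, helpers

# Hypothetical connection details; adjust for your deployment.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# Each document carries an ECS-style @timestamp field. The target is a
# hypothetical data stream named to match the built-in logs-*-* template.
actions = [
    {
        "_index": "logs-myapp-default",
        "_op_type": "create",  # data streams accept only create operations
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "message": "user login succeeded",
        "log.level": "info",
    }
]
helpers.bulk(client, actions)
----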
[TIP]
====
If you're interested in data ingestion pipelines for timestamped data, use the decision tree in the {cloud}/ec-cloud-ingest-data.html#ec-data-ingest-pipeline[Elastic Cloud docs] to understand your options.
====
Refer to <<getting-started,first steps with Elasticsearch>> for a hands-on example of using the `_search` endpoint, adding data to {es}, and running basic searches in Query DSL syntax.
{es} provides a number of query languages for interacting with your data.
*Query DSL* is the primary query language for {es} today.
*{esql}* is a new piped query language and compute engine which was first added in version *8.11*.
{esql} does not yet support all the features of Query DSL, like full-text search and semantic search.
Expect new {esql} features and functionality in each release.
Refer to <<search-analyze-query-languages>> for a full overview of the query languages available in {es}.
[discrete]
[[search-analyze-query-dsl]]
===== Query DSL
<<query-dsl, Query DSL>> is a full-featured JSON-style query language that enables complex searching, filtering, and aggregations.
It is the original and most powerful query language for {es} today.
The <<search-your-data, `_search` endpoint>> accepts queries written in Query DSL syntax.
[discrete]
[[search-analyze-query-dsl-search-filter]]
====== Search and filter with Query DSL
Query DSL supports a wide range of search techniques, including the following (see the sketch after this list):
* <<full-text-queries,*Full-text search*>>: Search text that has been analyzed and indexed to support phrase or proximity queries, fuzzy matches, and more.
* <<keyword,*Keyword search*>>: Search for exact matches using `keyword` fields.
* <<semantic-search-semantic-text,*Semantic search*>>: Search `semantic_text` fields using dense or sparse vector search on embeddings generated in your {es} cluster.
* <<knn-search,*Vector search*>>: Search for similar dense vectors using the kNN algorithm for embeddings generated outside of {es}.
* <<geo-queries,*Geospatial search*>>: Search for locations and calculate spatial relationships using geospatial queries.
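For example, here's a sketch of a Query DSL request that combines a full-text `match` query with an exact-match `term` filter in a `bool` query, sent to the `_search` endpoint through the Python client. The index and field names are hypothetical placeholders:

[source,python]
----
from elasticsearch import Elasticsearch

# Hypothetical connection details; adjust for your deployment.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# A bool query: full-text scoring on a text field, combined with a
# non-scoring exact-match filter on a keyword field.
resp = client.search(
    index="products",
    query={
        "bool": {
            "must": [{"match": {"name": "wireless headphones"}}],
            "filter": [{"term": {"sku": "WH-1000"}}],
        }
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["name"])
----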
<<esql,Elasticsearch Query Language ({esql})>> is a piped query language for filtering, transforming, and analyzing data.
{esql} is built on top of a new compute engine, where search, aggregation, and transformation functions are
directly executed within {es} itself.
{esql} syntax can also be used within various {kib} tools.
The <<esql-rest,`_query` endpoint>> accepts queries written in {esql} syntax.
Today, it supports a subset of the features available in Query DSL, like aggregations, filters, and transformations.
It does not yet support full-text search or semantic search.
It comes with a comprehensive set of <<esql-functions-operators,functions and operators>> for working with data and has robust integration with {kib}'s Discover, dashboards, and visualizations.
Learn more in <<esql-getting-started,Getting started with {esql}>>, or try https://www.elastic.co/training/introduction-to-esql[our training course].
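As a sketch of the syntax, the following runs a piped {esql} query against the `_query` endpoint through the Python client. The data stream and field names are hypothetical placeholders:

[source,python]
----
from elasticsearch import Elasticsearch

# Hypothetical connection details; adjust for your deployment.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# A piped ES|QL query: a source command, then processing commands
# separated by pipes.
resp = client.esql.query(
    query="""
    FROM logs-myapp-default
    | WHERE log.level == "info"
    | STATS count = COUNT(*) BY log.level
    | LIMIT 10
    """
)
print(resp["columns"])
print(resp["values"])
----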
The following table summarizes all available {es} query languages, to help you choose the right one for your use case.
[cols="1,2,2,1", options="header"]
|===
| Name | Description | Use cases | API endpoint
| <<query-dsl,Query DSL>>
| The primary query language for {es}. A powerful and flexible JSON-style language that enables complex queries.
| Full-text search, semantic search, keyword search, filtering, aggregations, and more.
| <<search-search,`_search`>>
| <<esql,{esql}>>
| Introduced in *8.11*, the Elasticsearch Query Language ({esql}) is a piped query language for filtering, transforming, and analyzing data.
| Initially tailored towards working with time series data like logs and metrics.
Robust integration with {kib} for querying, visualizing, and analyzing data.
Does not yet support full-text search.
| <<esql-rest,`_query`>>
| <<eql,EQL>>
| Event Query Language (EQL) is a query language for event-based time series data. Data must contain the `@timestamp` field to use EQL.
| Designed for the threat hunting security use case.
| <<eql-apis,`_eql`>>
| <<xpack-sql,Elasticsearch SQL>>
| Allows native, real-time SQL-like querying against {es} data. JDBC and ODBC drivers are available for integration with business intelligence (BI) tools.
| Enables users familiar with SQL to query {es} data using familiar syntax for BI and reporting.
| <<sql-apis,`_sql`>>
| {kibana-ref}/kuery-query.html[Kibana Query Language (KQL)]
| A simple text-based query language for filtering data when you access it through the {kib} UI.
| Filtering data in the {kib} UI, for example in Discover and dashboards. Not used for querying {es} APIs directly.
| N/A
|===
Many teams rely on {es} to run their key services. To keep these services running, you can design your {es} deployment
to keep {es} available, even in case of large-scale outages. To keep it running fast, you can also design your
deployment to be responsive to production workloads.
{es} is built to be always available and to scale with your needs. It does this using a distributed architecture.
By distributing your cluster, you can keep Elastic online and responsive to requests.
In case of failure, {es} offers tools for cross-cluster replication and cluster snapshots that can
help you fall back or recover quickly. You can also use cross-cluster replication to serve requests based on the
geographic location of your users and your resources.
{es} also offers security and monitoring tools to help you keep your cluster highly available.
[discrete]
[[use-multiple-nodes-shards]]
==== Use multiple nodes and shards
[NOTE]
====
Nodes and shards are what make {es} distributed and scalable.
These concepts aren’t essential if you’re just getting started. How you <<elasticsearch-intro-deploy,deploy {es}>> in production determines what you need to know:
* *Self-managed {es}*: You are responsible for setting up and managing nodes, clusters, shards, and replicas. This includes
managing the underlying infrastructure, scaling, and ensuring high availability through failover and backup strategies.
* *Elastic Cloud*: Elastic can autoscale resources in response to workload changes. Choose from different deployment types
to apply sensible defaults for your use case. A basic understanding of nodes, shards, and replicas is still important.
* *Elastic Cloud Serverless*: You don’t need to worry about nodes, shards, or replicas. These resources are 100% automated
on the serverless platform, which is designed to scale with your workload.
====
You can add servers (_nodes_) to a cluster to increase capacity, and {es} automatically distributes your data and query load
across all of the available nodes.
Elastic is able to distribute your data across nodes by subdividing an index into _shards_. Each index in {es} is a grouping
of one or more physical shards, where each shard is a self-contained Lucene index containing a subset of the documents in
the index. By distributing the documents in an index across multiple shards, and distributing those shards across multiple
nodes, {es} increases indexing and query capacity.
There are two types of shards: _primaries_ and _replicas_. Each document in an index belongs to one primary shard. A replica
shard is a copy of a primary shard. Replicas maintain redundant copies of your data across the nodes in your cluster.
This protects against hardware failure and increases capacity to serve read requests like searching or retrieving a document.
[TIP]
====
The number of primary shards in an index is fixed at the time that an index is created, but the number of replica shards can
be changed at any time, without interrupting indexing or query operations.
====
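For example, here's a sketch with the Python client: it fixes the primary shard count at index creation time, then increases the replica count on the live index. The index name and shard counts are hypothetical placeholders:

[source,python]
----
from elasticsearch import Elasticsearch

# Hypothetical connection details; adjust for your deployment.
client = Elasticsearch("http://localhost:9200", api_key="YOUR_API_KEY")

# The primary shard count is fixed once the index is created...
client.indices.create(
    index="my-index",
    settings={"number_of_shards": 3, "number_of_replicas": 1},
)

# ...but the replica count can be changed at any time.
client.indices.put_settings(
    index="my-index",
    settings={"number_of_replicas": 2},
)
----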
Shard copies in your cluster are automatically balanced across nodes to provide scale and high availability. All nodes are
aware of all the other nodes in the cluster and can forward client requests to the appropriate node. This allows {es}
to distribute indexing and query load across the cluster.
If you’re exploring {es} for the first time or working in a development environment, then you can use a cluster with a single node and create indices
with only one shard. However, in a production environment, you should build a cluster with multiple nodes and indices
with multiple shards to increase performance and resilience.
To learn about optimizing the number and size of shards in your cluster, refer to <<size-your-shards,Size your shards>>.
To learn about how read and write operations are replicated across shards and shard copies, refer to <<docs-replication,Reading and writing documents>>.
To adjust how shards are allocated and balanced across nodes, refer to <<shard-allocation-relocation-recovery,Shard allocation, relocation, and recovery>>.
The best way to determine the optimal configuration for your use case is through https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing[testing with your own data and queries].