Nik Everett 10336c950c
ESQL: Speed loading stored fields (#127348)
This speeds up loading from stored fields by opting more blocks into the
"sequential" strategy. This really kicks in when loading stored fields
like `text`, and when you need less than 100% of the documents but more
than, say, 10%. It's most useful when you need something like 99.9% of
the documents. Here are the perf numbers:
```
%100.0 {"took": 403 -> 401,"documents_found":1000000}
%099.9 {"took":3990 -> 436,"documents_found": 999000}
%099.0 {"took":4069 -> 440,"documents_found": 990000}
%090.0 {"took":3468 -> 421,"documents_found": 900000}
%030.0 {"took":1213 -> 152,"documents_found": 300000}
%020.0 {"took": 766 -> 104,"documents_found": 200000}
%010.0 {"took": 397 ->  55,"documents_found": 100000}
%009.0 {"took": 352 -> 375,"documents_found":  90000}
%008.0 {"took": 304 -> 317,"documents_found":  80000}
%007.0 {"took": 273 -> 287,"documents_found":  70000}
%005.0 {"took": 199 -> 204,"documents_found":  50000}
%001.0 {"took":  46 ->  46,"documents_found":  10000}
```

Let's explain this with an example. First, jump to `main` and load a
million documents:
```
rm -f /tmp/bulk
# Build a bulk request body with 1000 small `text` docs
for a in {1..1000}; do
    echo '{"index":{}}' >> /tmp/bulk
    echo '{"text":"text '$(printf %04d $a)'"}' >> /tmp/bulk
done

# Recreate the index and send that bulk 1000 times - a million docs total
curl -s -uelastic:password -HContent-Type:application/json -XDELETE localhost:9200/test
for a in {1..1000}; do
    echo -n $a:
    curl -s -uelastic:password -HContent-Type:application/json -XPOST localhost:9200/test/_bulk?pretty --data-binary @/tmp/bulk | grep errors
done
# Force-merge to one segment and refresh so the timings are stable
curl -s -uelastic:password -HContent-Type:application/json -XPOST localhost:9200/test/_forcemerge?max_num_segments=1
curl -s -uelastic:password -HContent-Type:application/json -XPOST localhost:9200/test/_refresh
echo
```

Now query them all. Run this a few times until it's stable:
```
echo -n "%100.0 "
curl -s -uelastic:password -HContent-Type:application/json -XPOST 'localhost:9200/_query?pretty' -d'{
    "query": "FROM test | STATS SUM(LENGTH(text))",
    "pragma": {
        "data_partitioning": "shard"
    }
}' | jq -c '{took, documents_found}'
```

Now fetch 99.9% of documents:
```
echo -n "%099.9 "
curl -s -uelastic:password -HContent-Type:application/json -XPOST 'localhost:9200/_query?pretty' -d'{
    "query": "FROM test | WHERE NOT text.keyword IN (\"text 0998\") | STATS SUM(LENGTH(text))",
    "pragma": {
        "data_partitioning": "shard"
    }
}' | jq -c '{took, documents_found}'
```

This should spit out something like:
```
%100.0 { "took":403,"documents_found":1000000}
%099.9 {"took":4098, "documents_found":999000}
```

We're loading *fewer* documents but it's slower! What in the world?!
If you dig into the profile you'll see the time is going to value loading:
```
$ curl -s -uelastic:password -HContent-Type:application/json -XPOST 'localhost:9200/_query?pretty' -d'{
    "query": "FROM test | STATS SUM(LENGTH(text))",
    "pragma": {
        "data_partitioning": "shard"
    },
    "profile": true
}' | jq '.profile.drivers[].operators[] | select(.operator | contains("ValuesSourceReaderOperator"))'
{
  "operator": "ValuesSourceReaderOperator[fields = [text]]",
  "status": {
    "readers_built": {
      "stored_fields[requires_source:true, fields:0, sequential: true]": 222,
      "text:column_at_a_time:null": 222,
      "text:row_stride:BlockSourceReader.Bytes": 1
    },
    "values_loaded": 1000000,
    "process_nanos": 370687157,
    "pages_processed": 222,
    "rows_received": 1000000,
    "rows_emitted": 1000000
  }
}
$ curl -s -uelastic:password -HContent-Type:application/json -XPOST 'localhost:9200/_query?pretty' -d'{
    "query": "FROM test | WHERE NOT text.keyword IN (\"text 0998\") | STATS SUM(LENGTH(text))",
    "pragma": {
        "data_partitioning": "shard"
    },
    "profile": true
}' | jq '.profile.drivers[].operators[] | select(.operator | contains("ValuesSourceReaderOperator"))'
{
  "operator": "ValuesSourceReaderOperator[fields = [text]]",
  "status": {
    "readers_built": {
      "stored_fields[requires_source:true, fields:0, sequential: false]": 222,
      "text:column_at_a_time:null": 222,
      "text:row_stride:BlockSourceReader.Bytes": 1
    },
    "values_loaded": 999000,
    "process_nanos": 3965803793,
    "pages_processed": 222,
    "rows_received": 999000,
    "rows_emitted": 999000
  }
}
```

It jumps from 370ms to almost four seconds while loading *fewer* values!
The other big difference is in the `stored_fields` marker: in the second
run it's `sequential: false` and in the first it's `sequential: true`.

`sequential: true` uses Lucene's "merge" stored fields reader instead of
the default one. It's much better optimized for decoding sequences of
documents.
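Under the hood this roughly maps to Lucene's stored fields API. Here's a
minimal sketch of the choice with a hypothetical `pick` helper; it's not
the exact Elasticsearch code path:
```java
import org.apache.lucene.codecs.StoredFieldsReader;
import org.apache.lucene.index.CodecReader;

class StoredFieldsReaders {
    // sequential == true picks Lucene's "merge" instance, which assumes doc
    // IDs arrive in (mostly) increasing order and can keep the current
    // decompressed chunk around instead of re-decompressing it per document.
    static StoredFieldsReader pick(CodecReader leaf, boolean sequential) {
        StoredFieldsReader reader = leaf.getFieldsReader();
        return sequential ? reader.getMergeInstance() : reader;
    }
}
```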

Previously we only enabled this reader when loading compact sequences of
documents, where the entire block looks like:
```
1, 2, 3, 4, 5, ... 1230, 1231
```

If there were any gaps we wouldn't enable it. That was a very
conservative choice we made long ago without running any experiments. We
knew it was faster without any gaps, but not otherwise. It turns out
it's a lot faster in a lot more cases: I've measured it as faster even
when we skip 99% of documents, at least for simple documents. I'm a bit
worried that's too aggressive, so I've made it configurable, with the
default being to use the "merge" loader whenever we load at least 10% of
the documents in the block's range. So we'd use the merge loader for a
block like:
```
1, 11, 21, 31, ..., 1231, 1241
```
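Here's a minimal sketch of that cutoff, assuming the decision is a simple
density check over the block's doc IDs; the names and the hard-coded 10%
are illustrative (the real threshold is the configurable default described
above), and this isn't the exact Elasticsearch implementation:
```java
class SequentialHeuristic {
    // Use the "merge" reader when the docs we're loading make up at least
    // minProportion (e.g. 0.10) of the doc ID range they span.
    static boolean useSequentialReader(int[] sortedDocs, double minProportion) {
        if (sortedDocs.length < 2) {
            return false;
        }
        long range = (long) sortedDocs[sortedDocs.length - 1] - sortedDocs[0] + 1;
        return sortedDocs.length >= range * minProportion;
    }
}
```
For the block above that's 125 docs spanning a range of 1241, a density of
about 10%, so the merge reader kicks in; at 1% density we'd stay on the
default reader.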