mirror of https://github.com/redis/redis.git
Commit 11947d8892: [Vector sets] fast JSON filter (#13959)
This PR replaces cJSON with a home-made parser designed for the kind of access pattern the FILTER option of VSIM performs on JSON objects. The main points are:

* cJSON forces us to parse the whole JSON, create a graph of cJSON objects, and then seek in O(N) to find the right field.
* The cJSON object associated with the value is not in the same format the expr.c virtual machine uses, so we needed a conversion function doing more allocation and work.
* Right now we only support top-level fields in the JSON object, so a full parser is not needed.

With all this in mind, and after carefully profiling the old code, I realized that a specialized parser able to process JSON in a zero-allocation fashion, and to actually parse only the value associated with our key, would be much more efficient. Moreover, after this change the dependencies of Vector Sets on external code drop to zero, and the code shrinks by about 3000 lines. The new line count (LOC) is 4200, making Vector Sets easily the smallest full-featured implementation of a vector store available.

# Speedup achieved

On a dataset of 1 million elements whose JSON objects have 30 fields each, the following query shows a 3.5x speedup:

    vsim vectors:million ele ele943903 FILTER ".field29 > 1000 and .field15 < 50"

Please note that we get a **3.5x speedup** in the VSIM command itself, which means the actual JSON parsing speedup is significantly greater than that. However, in Redis land, under my past kingdom of many years ago, the rule was that an improvement should produce speedups that are *user facing*. This PR definitely qualifies. What is interesting is that even with a JSON object containing a single field the speedup is about 70%, so we are faster even in the worst case.

# Further info

Note that the new skipping parser may happily process JSON objects that are not perfectly valid, as long as they look valid from the point of view of balancing [] and {} and so forth. This should not be an issue; in any case, invalid JSON produces random results (the element is simply skipped, even if it would have passed the filter).

Please feel free to ask me anything about the new implementation before merging.
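To illustrate the idea, here is a minimal hypothetical sketch (not the code in this PR) of a zero-allocation, skipping lookup of a top-level key: it walks the object, steps over values it does not care about by balancing braces, brackets and string quotes, and returns a pointer and length into the original buffer for the one field the filter needs, so nothing is allocated or copied. The names `skip_string`, `skip_value` and `find_field` are illustrative, not the actual Redis functions.

```c
/* Hypothetical sketch of a zero-allocation "skipping" lookup of a top-level
 * JSON key. Names and structure are illustrative, not the actual Redis code. */
#include <stddef.h>
#include <string.h>
#include <stdio.h>

/* Skip a JSON string literal starting at the opening quote.
 * Returns the position right after the closing quote, or NULL on error. */
static const char *skip_string(const char *p, const char *end) {
    p++; /* Skip opening quote. */
    while (p < end) {
        if (*p == '\\') p += 2;          /* Skip escaped character. */
        else if (*p == '"') return p+1;
        else p++;
    }
    return NULL;
}

/* Skip any JSON value (object, array, string, number, true/false/null)
 * without interpreting it: we only balance delimiters. */
static const char *skip_value(const char *p, const char *end) {
    if (*p == '"') return skip_string(p, end);
    if (*p == '{' || *p == '[') {
        int depth = 0;
        while (p < end) {
            if (*p == '"') { p = skip_string(p, end); if (!p) return NULL; continue; }
            if (*p == '{' || *p == '[') depth++;
            if (*p == '}' || *p == ']') { depth--; if (depth == 0) return p+1; }
            p++;
        }
        return NULL;
    }
    /* Number, true, false, null: scan until a delimiter. */
    while (p < end && *p != ',' && *p != '}' && *p != ']') p++;
    return p;
}

/* Find the raw value of 'key' in the top-level object 'json' of length 'len'.
 * On success, *vlen is set and a pointer inside 'json' is returned: nothing
 * is allocated or copied. Returns NULL if the key is not present. */
static const char *find_field(const char *json, size_t len,
                              const char *key, size_t *vlen) {
    const char *p = json, *end = json+len;
    size_t klen = strlen(key);
    while (p < end && *p != '{') p++;    /* Reach the top-level object. */
    if (p == end) return NULL;
    p++;
    while (p < end) {
        while (p < end && (*p == ' ' || *p == ',' || *p == '\n' || *p == '\t')) p++;
        if (p >= end || *p == '}') return NULL;
        if (*p != '"') return NULL;      /* Expect a key. */
        const char *kstart = p+1;
        p = skip_string(p, end);
        if (!p) return NULL;
        size_t thisklen = (size_t)(p-1-kstart);
        while (p < end && (*p == ' ' || *p == '\t' || *p == ':')) p++;
        const char *vstart = p;
        p = skip_value(p, end);          /* Step over the value, whatever it is. */
        if (!p) return NULL;
        if (thisklen == klen && memcmp(kstart, key, klen) == 0) {
            *vlen = (size_t)(p-vstart);
            return vstart;               /* Only this slice gets parsed by the caller. */
        }
    }
    return NULL;
}

int main(void) {
    const char *doc = "{\"field15\": 42, \"field29\": 2000}";
    size_t vlen;
    const char *v = find_field(doc, strlen(doc), "field29", &vlen);
    if (v) printf("field29 raw value: %.*s\n", (int)vlen, v);
    return 0;
}
```

Run against the example object in `main()`, this prints the raw text of `field29`; the caller would then parse only that slice into whatever value type the filter expression evaluator expects, instead of materializing the whole document.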