ollama/llama
Jesse Gross 8111c35be8 llm: New memory management
This changes the memory allocation strategy from upfront estimation to
tracking actual allocations done by the engine and reacting to them. The
goal is to avoid issues caused by both under-estimation (crashes) and
over-estimation (poor performance due to under-utilized GPUs).

It is currently opt-in and can be enabled for models running on the
Ollama engine by setting OLLAMA_NEW_ESTIMATES=1. Behavior in other
cases is unchanged and will continue to use the existing estimates.
2025-08-20 16:56:54 +02:00
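Per the commit message above, the new memory management is opt-in and controlled by an environment variable. A minimal way to enable it might look like the following (a sketch; it assumes the variable is read at server startup and that the server is launched with ollama serve):

```shell
# Opt in to the new tracking-based memory estimates (Ollama engine only).
export OLLAMA_NEW_ESTIMATES=1

# Then restart the server so it picks up the setting, e.g.:
# ollama serve
```

Models not running on the Ollama engine continue to use the existing upfront estimates regardless of this setting.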
| Name | Last commit | Date |
| --- | --- | --- |
| llama.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-20 16:41:49 +02:00 |
| patches | llm: New memory management | 2025-08-20 16:56:54 +02:00 |
| .gitignore | Re-introduce the `llama` package (#5034) | 2024-10-08 08:53:54 -07:00 |
| README.md | docs: improve syntax highlighting in code blocks (#8854) | 2025-02-07 09:55:07 -08:00 |
| build-info.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-20 16:41:49 +02:00 |
| build-info.cpp.in | chore: update gitattributes (#8860) | 2025-02-05 16:37:18 -08:00 |
| llama.go | llm: New memory management | 2025-08-20 16:56:54 +02:00 |
| llama_test.go | llama: move grammar tests to llama_test.go (#8411) | 2025-01-14 12:55:45 -08:00 |
| sampling_ext.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-20 16:41:49 +02:00 |
| sampling_ext.h | api: remove unused sampling parameters (#10581) | 2025-05-08 08:31:08 -07:00 |

README.md

llama

This package provides Go bindings to llama.cpp.

Vendoring

Ollama vendors llama.cpp and ggml. While we generally strive to contribute changes back upstream to avoid drift, we carry a small set of patches which are applied to the tracking commit.

If you update the vendoring code, start by running the following command to establish the tracking llama.cpp repo in the ./vendor/ directory.

make -f Makefile.sync apply-patches
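Under the hood this drives an ordinary git am patch series against the checkout in ./vendor/ (the README below refers to git am directly). The same mechanics can be illustrated with a self-contained toy; all repo, file, and commit names here are hypothetical, not Ollama's actual layout:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
G="git -c user.email=dev@example.com -c user.name=dev"

# "Upstream" repo with a base commit plus one carried change.
git init -q upstream && cd upstream
$G commit -q --allow-empty -m "base"
echo hello > file.txt && git add file.txt
$G commit -q -m "carried patch"
git format-patch -1 -o ../patches >/dev/null

# Clone at the base commit and replay the patch series,
# which is roughly what apply-patches automates.
cd .. && git clone -q upstream vendor && cd vendor
git checkout -q HEAD~1
$G am ../patches/*.patch
git log --oneline
```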

Updating Base Commit

Pin to new base commit

To change the base commit, update FETCH_HEAD in Makefile.sync.
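For illustration, the pin might look like a plain variable assignment (the variable name FETCH_HEAD comes from this README; the value shown is a placeholder, not a real commit):

```make
# Makefile.sync (fragment, illustrative)
FETCH_HEAD=<new-llama.cpp-commit-sha>
```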

When updating to a newer base commit, the existing patches may not apply cleanly and require manual merge resolution.

Start by applying the patches. If any of the patches have conflicts, git am will stop at the first failure.

make -f Makefile.sync apply-patches

If there are conflicts, git am will report an error. Resolve the conflicts in ./vendor/, continue the patch series with git am --continue, and rerun make -f Makefile.sync apply-patches. Repeat until all patches apply cleanly.
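As a concrete sketch of that resolve-and-continue loop, here is a self-contained toy conflict and its resolution. In the real workflow the failing step is make -f Makefile.sync apply-patches and the repo is ./vendor/; all names below are hypothetical:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
G="git -c user.email=dev@example.com -c user.name=dev"

git init -q repo && cd repo
echo base > f.txt && git add f.txt && $G commit -q -m "base"
echo patched > f.txt && $G commit -qa -m "carried patch"
git format-patch -1 -o ../patches >/dev/null

# Rewind to base and diverge so the patch no longer applies cleanly.
git checkout -q -b drifted HEAD~1
echo diverged > f.txt && $G commit -qa -m "upstream drift"

$G am ../patches/*.patch || echo "git am stopped; resolve by hand"
echo patched > f.txt        # resolve: keep the patched content
git add f.txt
$G am --continue            # then rerun apply-patches in the real workflow
```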

Once all patches are applied, commit the changes to the tracking repository.

make -f Makefile.sync format-patches sync

Generating Patches

When working on new fixes or features that impact vendored code, use the following workflow. First, get a clean tracking repo with all current patches applied:

make -f Makefile.sync clean apply-patches

Iterate until you're ready to submit PRs. Once your code is ready, commit a change in the ./vendor/ directory, then generate the patches for ollama with

make -f Makefile.sync format-patches

In the ./vendor/ directory, create a branch, cherry-pick the new commit onto that branch, and submit a PR upstream to llama.cpp.
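A toy version of that branch-and-cherry-pick step (branch and file names are illustrative; in practice the new branch would start from the upstream llama.cpp tracking point and be pushed to your fork):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
G="git -c user.email=dev@example.com -c user.name=dev"

git init -q vendor && cd vendor
$G commit -q --allow-empty -m "base"
git branch upstream-base          # stand-in for the upstream tracking point
echo fix > fix.txt && git add fix.txt
$G commit -q -m "my fix"
sha=$(git rev-parse HEAD)

# New branch from the upstream point, carrying only the new commit.
git checkout -q -b pr-branch upstream-base
$G cherry-pick "$sha"
git log --oneline
```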

Commit the changes in the ollama repo and submit a PR to Ollama, which will include the vendored code update with your change, along with the patches.

After your upstream PR is merged, follow the Updating Base Commit instructions above, but remove your patch before running apply-patches, since the new base commit already contains your change.