ollama/llama/patches
Jesse Gross 35fda7b4af ggml: Report ordinal IDs for AMD GPUs on Windows
We don't get valid UUIDs for AMD GPUs on Windows, so the best option
is to use ordinal IDs instead. This brings us in line with what we
currently do on the Ollama server - the one exception is AMD GPUs on
Linux, which fall back to using ordinal IDs. The GGML implementation
has no such fallback, but the problem doesn't appear to occur for any
of the GPUs that we support.

It's also possible that ordinal IDs collide between different
libraries; however, the only places where we use them are AMD on
Windows and Metal on Mac, which can never occur on the same system.
2025-07-09 10:35:31 -07:00
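The ID-selection logic described in the commit message can be sketched roughly as follows. This is a minimal illustration in Go, not the actual ollama or GGML code; `deviceID` and its signature are hypothetical:

```go
package main

import "fmt"

// deviceID returns a stable identifier for a GPU. It prefers the
// driver-reported UUID and falls back to the enumeration ordinal when
// no valid UUID is available (as with AMD GPUs on Windows).
func deviceID(uuid string, ordinal int) string {
	if uuid != "" {
		return uuid
	}
	// No UUID: use the ordinal. It is only unique within one backend
	// library, but the two backends that rely on it (AMD on Windows,
	// Metal on macOS) can never be loaded on the same system, so
	// cross-library collisions cannot happen in practice.
	return fmt.Sprintf("%d", ordinal)
}

func main() {
	fmt.Println(deviceID("GPU-7a2f0c9e", 0)) // UUID available: use it
	fmt.Println(deviceID("", 1))             // UUID missing: ordinal fallback
}
```

The collision argument in the message is what makes this simple fallback safe: ordinals only need to be unique per library, not globally.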
0001-ggml-backend-malloc-and-free-using-the-same-compiler.patch llama: update to commit de4c07f93 (#10655) 2025-05-12 12:17:26 -07:00
0002-pretokenizer.patch llama: update to commit de4c07f93 (#10655) 2025-05-12 12:17:26 -07:00
0003-embeddings.patch llama: update to commit de4c07f93 (#10655) 2025-05-12 12:17:26 -07:00
0004-clip-unicode.patch llama: update to commit de4c07f93 (#10655) 2025-05-12 12:17:26 -07:00
0005-solar-pro.patch add new gemma model (#11204) 2025-06-25 21:47:09 -07:00
0006-fix-deepseek-deseret-regex.patch chore: update mllama to use ollama engine (#10637) 2025-05-13 17:36:02 -07:00
0007-maintain-ordering-for-rules-for-grammar.patch chore: update mllama to use ollama engine (#10637) 2025-05-13 17:36:02 -07:00
0008-ensure-KV-cache-is-fully-defragmented.patch add new gemma model (#11204) 2025-06-25 21:47:09 -07:00
0009-sort-devices-by-score.patch chore: update mllama to use ollama engine (#10637) 2025-05-13 17:36:02 -07:00
0010-add-phony-target-ggml-cpu-for-all-cpu-variants.patch chore: update mllama to use ollama engine (#10637) 2025-05-13 17:36:02 -07:00
0011-remove-amx.patch chore: update mllama to use ollama engine (#10637) 2025-05-13 17:36:02 -07:00
0012-fix-string-arr-kv-loading.patch chore: update mllama to use ollama engine (#10637) 2025-05-13 17:36:02 -07:00
0013-ollama-debug-tensor.patch chore: update mllama to use ollama engine (#10637) 2025-05-13 17:36:02 -07:00
0014-add-ollama-vocab-for-grammar-support.patch chore: update mllama to use ollama engine (#10637) 2025-05-13 17:36:02 -07:00
0015-add-argsort-and-cuda-copy-for-i32.patch add new gemma model (#11204) 2025-06-25 21:47:09 -07:00
0016-graph-memory-reporting-on-failure.patch ggml: Report graph memory for failed allocations 2025-05-22 14:38:09 -07:00
0017-ggml-Export-GPU-UUIDs.patch ggml: Report ordinal IDs for AMD GPUs on Windows 2025-07-09 10:35:31 -07:00
0018-temporary-prevent-rocm-cuda-mixed-loading.patch Re-remove cuda v11 (#10694) 2025-06-23 14:07:00 -07:00
0019-metal-add-mean-kernel-14267.patch add new gemma model (#11204) 2025-06-25 21:47:09 -07:00
0020-CUDA-add-mean-operation-14313.patch add new gemma model (#11204) 2025-06-25 21:47:09 -07:00