| Name | Last commit message | Last commit date |
|------|---------------------|------------------|
| 0001-ggml-backend-malloc-and-free-using-the-same-compiler.patch | llama: update to commit de4c07f93 (#10655) | 2025-05-12 12:17:26 -07:00 |
| 0002-pretokenizer.patch | llama: update to commit de4c07f93 (#10655) | 2025-05-12 12:17:26 -07:00 |
| 0003-embeddings.patch | llama: update to commit de4c07f93 (#10655) | 2025-05-12 12:17:26 -07:00 |
| 0004-clip-unicode.patch | llama: update to commit de4c07f93 (#10655) | 2025-05-12 12:17:26 -07:00 |
| 0005-solar-pro.patch | add new gemma model (#11204) | 2025-06-25 21:47:09 -07:00 |
| 0006-fix-deepseek-deseret-regex.patch | chore: update mllama to use ollama engine (#10637) | 2025-05-13 17:36:02 -07:00 |
| 0007-maintain-ordering-for-rules-for-grammar.patch | chore: update mllama to use ollama engine (#10637) | 2025-05-13 17:36:02 -07:00 |
| 0008-ensure-KV-cache-is-fully-defragmented.patch | add new gemma model (#11204) | 2025-06-25 21:47:09 -07:00 |
| 0009-sort-devices-by-score.patch | chore: update mllama to use ollama engine (#10637) | 2025-05-13 17:36:02 -07:00 |
| 0010-add-phony-target-ggml-cpu-for-all-cpu-variants.patch | chore: update mllama to use ollama engine (#10637) | 2025-05-13 17:36:02 -07:00 |
| 0011-remove-amx.patch | chore: update mllama to use ollama engine (#10637) | 2025-05-13 17:36:02 -07:00 |
| 0012-fix-string-arr-kv-loading.patch | chore: update mllama to use ollama engine (#10637) | 2025-05-13 17:36:02 -07:00 |
| 0013-ollama-debug-tensor.patch | chore: update mllama to use ollama engine (#10637) | 2025-05-13 17:36:02 -07:00 |
| 0014-add-ollama-vocab-for-grammar-support.patch | chore: update mllama to use ollama engine (#10637) | 2025-05-13 17:36:02 -07:00 |
| 0015-add-argsort-and-cuda-copy-for-i32.patch | add new gemma model (#11204) | 2025-06-25 21:47:09 -07:00 |
| 0016-graph-memory-reporting-on-failure.patch | ggml: Report graph memory for failed allocations | 2025-05-22 14:38:09 -07:00 |
| 0017-ggml-Export-GPU-UUIDs.patch | Revert "Revert "ggml: Export GPU UUIDs" (#11115)" (#11117) | 2025-06-18 07:30:49 -07:00 |
| 0018-temporary-prevent-rocm-cuda-mixed-loading.patch | Re-remove cuda v11 (#10694) | 2025-06-23 14:07:00 -07:00 |
| 0019-metal-add-mean-kernel-14267.patch | add new gemma model (#11204) | 2025-06-25 21:47:09 -07:00 |
| 0020-CUDA-add-mean-operation-14313.patch | add new gemma model (#11204) | 2025-06-25 21:47:09 -07:00 |