| Patch | Commit message | Date |
|-------|----------------|------|
| 0001-ggml-backend-malloc-and-free-using-the-same-compiler.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0002-pretokenizer.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0003-embeddings.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0004-clip-unicode.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0005-solar-pro.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0006-conditional-fattn.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0007-add-mllama-support.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0008-add-unpad-operator.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0009-fix-deepseek-deseret-regex.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0010-Maintain-ordering-for-rules-for-grammar.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0011-ensure-KV-cache-is-fully-defragmented.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0012-sort-devices-by-score.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0013-add-phony-target-ggml-cpu-for-all-cpu-variants.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0014-remove-amx.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0015-fix-string-arr-kv-loading.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0016-ollama-debug-tensor.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0017-add-model-quantizations.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0018-add-op_neg.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0019-fix-compiler-error-in-clip.h.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| 0020-Revert-Simplify-and-improve-CUDA-graphs-through-use-.patch | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |