| Name | Last commit message | Last commit date |
| --- | --- | --- |
| llama-adapter.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-adapter.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-arch.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-arch.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-batch.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-batch.h | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-chat.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-chat.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-context.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-context.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-cparams.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-cparams.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-grammar.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-grammar.h | llama: remove model loading for grammar (#10096) | 2025-04-24 11:51:19 -07:00 |
| llama-graph.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-graph.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-hparams.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-hparams.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-impl.cpp | llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) | 2025-02-26 20:34:44 -08:00 |
| llama-impl.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-io.cpp | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| llama-io.h | llama: update to commit 71e90e88 (#10192) | 2025-04-16 15:14:01 -07:00 |
| llama-kv-cache-iswa.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-kv-cache-iswa.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-kv-cache.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-kv-cache.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-kv-cells.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-memory-hybrid.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-memory-hybrid.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-memory-recurrent.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-memory-recurrent.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-memory.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-memory.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-mmap.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-mmap.h | llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) | 2025-02-26 20:34:44 -08:00 |
| llama-model-loader.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-model-loader.h | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-model-saver.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| llama-model-saver.h | llama: update to commit de4c07f93 (#10655) | 2025-05-12 12:17:26 -07:00 |
| llama-model.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-model.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-quant.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-quant.h | next build (#8539) | 2025-01-29 15:03:38 -08:00 |
| llama-sampling.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-sampling.h | llama: update llama.cpp vendor code to commit d7cfe1ff (#9356) | 2025-02-26 20:34:44 -08:00 |
| llama-vocab.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama-vocab.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama.cpp | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |
| llama.go | Revert "cgo: use O3" | 2025-01-31 10:25:39 -08:00 |
| unicode-data.cpp | next build (#8539) | 2025-01-29 15:03:38 -08:00 |
| unicode-data.h | next build (#8539) | 2025-01-29 15:03:38 -08:00 |
| unicode.cpp | update vendored llama.cpp and ggml (#11823) | 2025-08-14 14:42:58 -07:00 |
| unicode.h | Update GGML to b6646 (#12245) | 2025-10-02 14:47:10 -07:00 |