ollama/model

Latest commit: ea85e27bbd by Oliver Simons, 2025-07-29 12:37:06 -07:00

Increase performance for Gemma3n models on NVGPUs by enabling CUDA Graph execution (#11525)

* Enable CUDA Graphs for gemma3n.

  Similar to https://github.com/ggml-org/llama.cpp/pull/14741, though ollama has a slightly different model graph than llama.cpp, which requires different workaround checks.

* Remove residual check by reshaping differently in gemma3n model.

  This should make the heuristics more robust.
Name                      Last commit message                                                                           Last commit date
imageproc                 imageproc mllama refactor (#7537)                                                             2024-12-14 19:50:15 -08:00
input                     ollamarunner: Separate text and multimodal graphs                                             2025-05-15 13:46:20 -07:00
models                    Increase performance for Gemma3n models on NVGPUs by enabling CUDA Graph execution (#11525)   2025-07-29 12:37:06 -07:00
testdata                  gemma2 impl                                                                                   2025-03-11 14:35:08 -07:00
bytepairencoding.go       add thinking support to the api and cli (#10584)                                              2025-05-28 19:38:52 -07:00
bytepairencoding_test.go  model: handle multiple eos tokens (#10577)                                                    2025-05-16 13:40:23 -07:00
model.go                  ml: Panic rather than return error on tensor allocation failure                               2025-05-22 14:38:09 -07:00
model_test.go             fs: move ml.Config to fs package                                                              2025-04-03 13:12:24 -07:00
sentencepiece.go          model: handle multiple eos tokens (#10577)                                                    2025-05-16 13:40:23 -07:00
sentencepiece_test.go     model: handle multiple eos tokens (#10577)                                                    2025-05-16 13:40:23 -07:00
textprocessor.go          model: handle multiple eos tokens (#10577)                                                    2025-05-16 13:40:23 -07:00
vocabulary.go             model: treat 'user defined' tokens as special tokens (#11077)                                 2025-06-16 16:03:16 -07:00
vocabulary_test.go        model: treat 'user defined' tokens as special tokens (#11077)                                 2025-06-16 16:03:16 -07:00