ollama/model/models

Latest commit ea85e27bbd by Oliver Simons (2025-07-29 12:37:06 -07:00):
Increase performance for Gemma3n models on NVGPUs by enabling CUDA Graph execution (#11525)

* Enable CUDA Graphs for gemma3n.

  Similar to https://github.com/ggml-org/llama.cpp/pull/14741, though ollama has a
  slightly different model graph than llama.cpp, which requires different
  workaround checks.

* Remove residual check by reshaping differently in gemma3n model.

  This should make the heuristics more robust.
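
For readers unfamiliar with the feature, below is a minimal, self-contained sketch of the generic CUDA Graphs capture-and-replay pattern this change turns on. It is not ollama or llama.cpp code: the kernel, step count, and buffer sizes are invented for illustration, and cudaGraphInstantiate is called with its three-argument CUDA 12 signature. The idea is that a fixed sequence of kernel launches is recorded once into a graph and then replayed with a single cudaGraphLaunch call, amortizing per-launch CPU overhead; replay is only valid while the captured topology stays static, which is why the model-graph workaround checks mentioned in the commit message matter.

#include <cuda_runtime.h>
#include <cstdio>

// Stand-in for one op in a model's forward pass (illustrative only).
__global__ void step_kernel(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = x[i] * 1.0001f + 1.0f;
}

int main() {
    const int n = 1 << 20;   // element count (assumed for the demo)
    const int steps = 8;     // kernels per captured graph (assumed)
    float *x;
    cudaMalloc(&x, n * sizeof(float));
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Capture: launches issued on the stream are recorded, not executed.
    cudaGraph_t graph;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    for (int s = 0; s < steps; ++s)
        step_kernel<<<(n + 255) / 256, 256, 0, stream>>>(x, n);
    cudaStreamEndCapture(stream, &graph);

    // Instantiate once, replay many times. Each replay assumes the same
    // kernel sequence and shapes as at capture time; a data-dependent
    // change in the graph forces a re-capture instead of a cheap replay.
    cudaGraphExec_t exec;
    cudaGraphInstantiate(&exec, graph, 0);  // CUDA 12+ signature
    for (int iter = 0; iter < 100; ++iter)
        cudaGraphLaunch(exec, stream);
    cudaStreamSynchronize(stream);

    printf("replayed %d captured kernels for 100 iterations\n", steps);
    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
    cudaStreamDestroy(stream);
    cudaFree(x);
    return 0;
}

Compiled with nvcc as a .cu file, the capture cost is paid once while the replay runs on every iteration; in an inference server the replay would happen on each decode step, which is roughly where a CUDA Graphs speedup on NVIDIA GPUs comes from.
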
Name       Last commit message                                                                           Last commit date
gemma2     ml: Panic rather than return error on tensor allocation failure                              2025-05-22 14:38:09 -07:00
gemma3     ml: Panic rather than return error on tensor allocation failure                              2025-05-22 14:38:09 -07:00
gemma3n    Increase performance for Gemma3n models on NVGPUs by enabling CUDA Graph execution (#11525)  2025-07-29 12:37:06 -07:00
llama      Only load supported models on new engine (#11362)                                            2025-07-11 12:21:54 -07:00
llama4     use nn.Linear in place of ml.Tensor (#11049)                                                 2025-06-11 12:10:15 -07:00
mistral3   ml: Panic rather than return error on tensor allocation failure                              2025-05-22 14:38:09 -07:00
mllama     ml: Panic rather than return error on tensor allocation failure                              2025-05-22 14:38:09 -07:00
qwen2      Only load supported models on new engine (#11362)                                            2025-07-11 12:21:54 -07:00
qwen3      use nn.Linear in place of ml.Tensor (#11049)                                                 2025-06-11 12:10:15 -07:00
qwen25vl   ml: Panic rather than return error on tensor allocation failure                              2025-05-22 14:38:09 -07:00
models.go  add new gemma model (#11204)                                                                 2025-06-25 21:47:09 -07:00