Mirror of https://github.com/ollama/ollama.git
Latest commit:

* Enable CUDA Graphs for gemma3n. Similar to https://github.com/ggml-org/llama.cpp/pull/14741, though ollama has a slightly different model graph than llama.cpp, which requires different workaround checks.
* Remove the residual check by reshaping differently in the gemma3n model. This should make the heuristics more robust.
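The workaround checks mentioned above are per-node heuristics that decide whether a model's compute graph is safe to capture as a CUDA graph. The following is a hypothetical, simplified sketch of that idea (not ollama's actual code): walk the graph once and decline capture if any node matches a pattern known to change between decode steps, which would invalidate a captured graph. All names here (`Node`, `useCUDAGraph`) are illustrative assumptions.

```go
package main

import "fmt"

// Node is a toy stand-in for one op in a model's compute graph.
type Node struct {
	Op    string // operation name, e.g. "MUL_MAT", "CPY"
	Shape []int  // output tensor shape
}

// useCUDAGraph reports whether the graph looks safe to capture.
// The specific checks are illustrative placeholders for the kind of
// heuristics a backend applies before enabling CUDA graph capture.
func useCUDAGraph(graph []Node, batchSize int) bool {
	// Captured graphs typically assume single-token decode.
	if batchSize > 1 {
		return false
	}
	for _, n := range graph {
		// Example heuristic: a copy whose leading dimension varies
		// across steps would invalidate a captured graph.
		if n.Op == "CPY" && len(n.Shape) > 0 && n.Shape[0] != 1 {
			return false
		}
	}
	return true
}

func main() {
	g := []Node{
		{Op: "MUL_MAT", Shape: []int{1, 4096}},
		{Op: "ADD", Shape: []int{1, 4096}},
	}
	fmt.Println(useCUDAGraph(g, 1)) // prints true
}
```

A model with a different graph layout (as gemma3n has relative to llama.cpp) can trip such checks spuriously; reshaping tensors so the graph matches the expected pattern, as the commit does, lets the heuristic stay simple.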
Directory contents:

* gemma2
* gemma3
* gemma3n
* llama
* llama4
* mistral3
* mllama
* qwen2
* qwen3
* qwen25vl
* models.go