ollama/llm
Jesse Gross aba1575315 llm: Don't try to load split vision models in the Ollama engine
If a model with a split vision projector is loaded in the Ollama
engine, the projector will be ignored and the model will hallucinate
a response. Instead, fall back and try to load the model in the llama
engine.
2025-09-11 11:41:55 -07:00
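A minimal sketch of the fallback decision this commit describes, under assumed names: `modelInfo`, `pickEngine`, and the field names are hypothetical illustrations, not the actual identifiers in server.go.

```go
package main

import "fmt"

// modelInfo is a hypothetical summary of what the loader knows about a model;
// the real structures in server.go differ.
type modelInfo struct {
	hasVision     bool   // model includes a vision (multimodal) component
	projectorPath string // non-empty when the vision projector ships as a separate file
}

type engine string

const (
	ollamaEngine engine = "ollama"
	llamaEngine  engine = "llama"
)

// pickEngine prefers the Ollama engine but falls back to the llama engine when
// the vision projector is split into its own file, since the Ollama engine
// would otherwise ignore the projector and the model would hallucinate.
func pickEngine(m modelInfo) engine {
	if m.hasVision && m.projectorPath != "" {
		return llamaEngine
	}
	return ollamaEngine
}

func main() {
	split := modelInfo{hasVision: true, projectorPath: "mmproj.gguf"}
	fmt.Println(pickEngine(split))                      // llama
	fmt.Println(pickEngine(modelInfo{hasVision: true})) // ollama
}
```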
File            Last commit                                                        Date
llm_darwin.go
llm_linux.go
llm_windows.go  win: lint fix (#10571)                                             2025-05-05 11:08:12 -07:00
memory.go       llm: Remove unneeded warning with flash attention enabled          2025-09-10 16:40:45 -07:00
memory_test.go  llm: New memory management                                         2025-08-14 15:24:01 -07:00
server.go       llm: Don't try to load split vision models in the Ollama engine    2025-09-11 11:41:55 -07:00
server_test.go  llm: New memory management                                         2025-08-14 15:24:01 -07:00
status.go       Improve crash reporting (#7728)                                    2024-11-19 16:26:57 -08:00