mirror of https://github.com/ollama/ollama.git
As we automatically enable flash attention for more models, there are likely some cases where we get it wrong. This allows setting OLLAMA_FLASH_ATTENTION=0 to disable it, even for models where it would normally be enabled.
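A minimal sketch of how such an override might look in Go, assuming a per-model default that an OLLAMA_FLASH_ATTENTION environment variable can override in either direction; the function and parameter names here are illustrative and not the actual ollama implementation:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// flashAttentionEnabled is an illustrative sketch (not ollama's real code):
// the per-model default is used unless OLLAMA_FLASH_ATTENTION is set, in
// which case the explicit setting wins, e.g. "0" disables flash attention.
func flashAttentionEnabled(modelDefault bool) bool {
	if v := os.Getenv("OLLAMA_FLASH_ATTENTION"); v != "" {
		if enabled, err := strconv.ParseBool(v); err == nil {
			return enabled
		}
	}
	return modelDefault
}

func main() {
	// With OLLAMA_FLASH_ATTENTION=0 in the environment, this prints "false"
	// even for a model whose default would be to enable flash attention.
	fmt.Println(flashAttentionEnabled(true))
}
```

In practice this corresponds to starting the server with the variable set, for example `OLLAMA_FLASH_ATTENTION=0 ollama serve`.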
llm_darwin.go
llm_linux.go
llm_windows.go
memory.go
memory_test.go
server.go
server_test.go
status.go