ollama/docs

Latest commit `1f50356e8e` by Daniel Hiltgen (2024-07-10 11:01:22 -07:00): Bump ROCm on windows to 6.1.2

> This also adjusts our algorithm to favor our bundled ROCm. I've confirmed VRAM reporting still doesn't work properly, so we can't yet enable concurrency by default.
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| tutorials/ | add embed model command and fix question invoke (#4766) | 2024-06-03 22:20:48 -07:00 |
| README.md | Doc container usage and workaround for nvidia errors | 2024-05-09 09:26:45 -07:00 |
| api.md | Update api.md | 2024-06-29 16:22:49 -07:00 |
| development.md | update llama.cpp submodule to `d7fd29f` (#5475) | 2024-07-05 13:25:58 -04:00 |
| docker.md | Doc container usage and workaround for nvidia errors | 2024-05-09 09:26:45 -07:00 |
| faq.md | Bump ROCm on windows to 6.1.2 | 2024-07-10 11:01:22 -07:00 |
| gpu.md | Update gpu.md (#5382) | 2024-06-30 21:48:51 -04:00 |
| import.md | Update import.md | 2024-06-17 19:44:14 -04:00 |
| linux.md | Add instructions to easily install specific versions on faq.md (#4084) | 2024-06-09 10:49:03 -07:00 |
| modelfile.md | Update 'llama2' -> 'llama3' in most places (#4116) | 2024-05-03 15:25:04 -04:00 |
| openai.md | OpenAI: /v1/models and /v1/models/{model} compatibility (#5007) | 2024-07-02 11:50:56 -07:00 |
| troubleshooting.md | Document older win10 terminal problems | 2024-07-03 17:32:14 -07:00 |
| tutorials.md | | |
| windows.md | Document older win10 terminal problems | 2024-07-03 17:32:14 -07:00 |