mirror of https://github.com/ollama/ollama.git
This also adjusts our algorithm to favor our bundled ROCm. I've confirmed VRAM reporting still doesn't work properly, so we can't yet enable concurrency by default.

| Name |
|---|
| tutorials |
| README.md |
| api.md |
| development.md |
| docker.md |
| faq.md |
| gpu.md |
| import.md |
| linux.md |
| modelfile.md |
| openai.md |
| troubleshooting.md |
| tutorials.md |
| windows.md |