ollama/llm
Jesse Gross a2cc8571c5 llm: Consistently track unassigned model data
In some cases, if we fail to assign a piece of the model to a GPU,
we lose track of that data. Although this doesn't change the memory
allocation, it does affect the total model size reported by tools
such as ollama ps (and therefore the percent offloaded).

This can make it look like setting num_gpu isn't reflected in
ollama ps. The setting does take effect, but the offload percentage
may appear not to change.

Spreading the model across more GPUs will still affect the reported
total size of the model.
2025-05-19 09:52:48 -07:00
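For illustration, a minimal Go sketch of the accounting idea (the names assignLayers, layerSizes, and gpuFree are hypothetical, not Ollama's actual memory.go API): data that cannot be placed on any GPU is still added to a running total, so the reported size and offload percentage stay consistent.

package main

import "fmt"

// assignLayers places each layer on the first GPU with room and,
// crucially, still counts layers that could not be placed.
func assignLayers(layerSizes, gpuFree []uint64) (gpuTotal, cpuTotal uint64) {
	for _, size := range layerSizes {
		placed := false
		for i := range gpuFree {
			if gpuFree[i] >= size {
				gpuFree[i] -= size
				gpuTotal += size // counted as offloaded
				placed = true
				break
			}
		}
		if !placed {
			// The fix described above: unassigned data is tracked too,
			// so gpuTotal+cpuTotal reflects the whole model.
			cpuTotal += size
		}
	}
	return gpuTotal, cpuTotal
}

func main() {
	gpu, cpu := assignLayers([]uint64{400, 400, 400}, []uint64{600})
	total := gpu + cpu
	fmt.Printf("offloaded %d/%d bytes (%.0f%%)\n",
		gpu, total, 100*float64(gpu)/float64(total))
}

Under this scheme, lowering num_gpu moves bytes from gpuTotal to cpuTotal rather than letting them vanish from the total, which is why the reported size no longer shrinks when assignment fails.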
llm_darwin.go Optimize container images for startup (#6547) 2024-09-12 12:10:30 -07:00
llm_linux.go Optimize container images for startup (#6547) 2024-09-12 12:10:30 -07:00
llm_windows.go win: lint fix (#10571) 2025-05-05 11:08:12 -07:00
memory.go llm: Consistently track unassigned model data 2025-05-19 09:52:48 -07:00
memory_test.go Move quantization to new backend (#10363) 2025-05-06 11:20:48 -07:00
server.go chore: update mllama to use ollama engine (#10637) 2025-05-13 17:36:02 -07:00
server_test.go lint: enable usetesting, disable tenv (#10594) 2025-05-08 11:42:14 -07:00
status.go Improve crash reporting (#7728) 2024-11-19 16:26:57 -08:00