ollama/fs
Gabe Goodhart 7b91c9ce51
Hybrid and recurrent memory estimates (#12186)
This PR updates the memory size estimate logic to better handle recurrent and hybrid-recurrent models, whose memory use is currently badly overestimated because the default logic assumes full attention for every layer.

The sizing logic for the recurrent layers comes from the llama.cpp implementation:

        // Recurrent caches scale with the number of cache cells (mem_size), not with context length.
        ggml_tensor * r = ggml_new_tensor_1d(ctx, type_r, hparams.n_embd_r()*mem_size); // per-cell conv/shift state
        ggml_tensor * s = ggml_new_tensor_1d(ctx, type_s, hparams.n_embd_s()*mem_size); // per-cell recurrent (SSM) state
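
For intuition, here is a minimal Go sketch of the two per-layer cache formulas. The names (recurrentLayerBytes, attentionLayerBytes, and their parameters) are hypothetical and not ollama's actual estimator; the point is only that the recurrent state is fixed-size while a KV cache grows with context length.

        package main

        import "fmt"

        // Recurrent layer: cache size depends on the fixed state dimensions and the
        // number of cache cells (mem_size above), not on the context length.
        func recurrentLayerBytes(nEmbdR, nEmbdS, memSize, bytesPerElem uint64) uint64 {
            r := nEmbdR * memSize * bytesPerElem // shift/conv state tensor
            s := nEmbdS * memSize * bytesPerElem // SSM state tensor
            return r + s
        }

        // Full-attention layer: the KV cache grows linearly with context length.
        func attentionLayerBytes(nEmbdK, nEmbdV, nCtx, bytesPerElem uint64) uint64 {
            return (nEmbdK + nEmbdV) * nCtx * bytesPerElem
        }

        func main() {
            // Illustrative numbers only: a single recurrent cell's state is small
            // next to a 32k-token KV cache, which is why assuming full attention
            // for every layer overestimates hybrid models.
            fmt.Println("recurrent bytes:", recurrentLayerBytes(8192, 32768, 1, 4))
            fmt.Println("attention bytes:", attentionLayerBytes(1024, 1024, 32768, 2))
        }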

Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
2025-09-08 14:53:22 -07:00
Name            Last commit message                                                  Last commit date
ggml            Hybrid and recurrent memory estimates (#12186)                       2025-09-08 14:53:22 -07:00
gguf            Reapply "feat: incremental gguf parser (#10822)" (#11114) (#11119)   2025-06-20 11:11:40 -07:00
util/bufioutil  next ollama runner (#7913)                                           2025-02-13 16:31:21 -08:00
config.go       add new gemma model (#11204)                                         2025-06-25 21:47:09 -07:00