mirror of https://github.com/ollama/ollama.git
This PR updates the memory size estimate logic to better handle recurrent and hybrid-recurrent models, whose memory use is currently badly overestimated because the default logic assumes full attention for all layers.
The sizing logic for the recurrent layers comes from the llama.cpp implementation:

```cpp
ggml_tensor * r = ggml_new_tensor_1d(ctx, type_r, hparams.n_embd_r()*mem_size);
ggml_tensor * s = ggml_new_tensor_1d(ctx, type_s, hparams.n_embd_s()*mem_size);
```
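To illustrate why the old estimate is too high: a recurrent layer keeps a fixed per-sequence state (`n_embd_r + n_embd_s` elements), while an attention layer's KV cache grows with the context length, so sizing every layer as attention overcharges recurrent and hybrid models. A minimal sketch of the per-layer-kind estimate (all struct and field names here are hypothetical, not ollama's actual types):

```go
package main

import "fmt"

// hparams holds hypothetical hyperparameters for illustration only;
// these field names do not match ollama's real config structs.
type hparams struct {
	nLayer       int
	nCtx         uint64       // context length in tokens (attention layers)
	nSeq         uint64       // parallel sequences (recurrent state is per sequence, not per token)
	nEmbdKV      uint64       // per-token K+V embedding size for an attention layer
	nEmbdR       uint64       // recurrent "r" state size per sequence
	nEmbdS       uint64       // recurrent "s" state size per sequence
	recurrent    map[int]bool // which layers are recurrent
	bytesPerElem uint64       // cache element size in bytes
}

// estimateCacheBytes sizes each layer by kind: recurrent layers cost
// (n_embd_r + n_embd_s) elements per sequence, attention layers cost
// the usual per-token KV footprint across the whole context.
func estimateCacheBytes(h hparams) uint64 {
	var total uint64
	for i := 0; i < h.nLayer; i++ {
		if h.recurrent[i] {
			total += (h.nEmbdR + h.nEmbdS) * h.nSeq * h.bytesPerElem
		} else {
			total += h.nEmbdKV * h.nCtx * h.bytesPerElem
		}
	}
	return total
}

func main() {
	h := hparams{
		nLayer:       2,
		nCtx:         100,
		nSeq:         1,
		nEmbdKV:      8,
		nEmbdR:       10,
		nEmbdS:       20,
		recurrent:    map[int]bool{0: true}, // layer 0 recurrent, layer 1 attention
		bytesPerElem: 2,
	}
	fmt.Println(estimateCacheBytes(h)) // 60 recurrent bytes + 1600 attention bytes = 1660
}
```

With numbers like these, an all-attention estimate would have charged layer 0 another 1600 bytes instead of 60, which is the kind of overestimate this PR corrects.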
Signed-off-by: Gabe Goodhart <ghart@us.ibm.com>
Files changed:

- ggml
- gguf
- util/bufioutil
- config.go