The GGML CUDA backend allocates additional memory for intermediate
results during calculation. This memory isn't currently allocated
during worst-case graph reservation and is therefore not included in
scheduling. Because these buffers can grow with context length, this
could cause a crash.
This change extends the memory allocation system down a layer, from the
GGML graph to the CUDA layer, preallocating the worst-case memory there
as well.
Fixes #11753
The GGML scale kernel uses signed 32-bit ints to represent
the number of elements in the tensor. For large images,
mistral-small3.2 overflows this, triggering CUDA errors due
to negative arguments.
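As a rough illustration (the dimensions here are hypothetical, not
taken from the actual model), computing the same element count in
int32 and int64 shows the wrap:

```go
package main

import "fmt"

func main() {
	// Hypothetical dimensions for a large image batch; the real
	// shapes come from mistral-small3.2's vision tower.
	dims := []int32{3, 4096, 4096, 48}

	var ne32 int32 = 1 // how the scale kernel counts elements
	var ne64 int64 = 1 // how it should count them
	for _, d := range dims {
		ne32 *= d
		ne64 *= int64(d)
	}
	fmt.Println(ne64) // 2415919104: the true element count
	fmt.Println(ne32) // -1879048192: wrapped negative, handed to CUDA
}
```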
Currently, this can happen when the user passes a large image
to mistral-small3.2. However, with upcoming changes to reserve
CUDA memory, it happens every time mistral-small is loaded, as
we reserve using a worst-case batch.
This patch is part of an upstream GGML commit and should be removed
after GGML is updated past 0a1b398 "ggml: add ops for WAN video model
(cuda && cpu) (#15669)".
Fixes #10388
For each memory allocation we report the size of the (attempted)
allocation and whether it succeeded or failed. The latter status
reporting has proved not very useful in practice, as systems
such as Windows can automatically overflow from VRAM into RAM,
resulting in successful allocations even when there isn't
enough memory where we wanted it.
As a result, this information is only used for debug logging,
which isn't worthwhile enough to justify the amount of code. It
also isn't fully accurate, as multiple allocations may result
in partial failures.
There are two bugs when using `/load <model>` for a model that doesn't exist, namely:
1. it will not restore the current model settings if the current model is a thinking model; and
2. it will crash if the current model is a non-thinking model
This bug fix saves the current runOptions and then restores them if the model load
doesn't happen. It also fixes the crash for non-thinking models.
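A minimal sketch of the save-and-restore pattern; `runOptions` and
`loadModel` are hypothetical stand-ins for the interactive-mode state
and loader:

```go
// Sketch only: runOptions and loadModel are placeholders, not the
// actual cmd package API.
func loadWithRestore(name string) error {
	saved := runOptions // snapshot the current model settings
	if err := loadModel(name); err != nil {
		runOptions = saved // load didn't happen: restore previous settings
		return err
	}
	return nil
}
```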
When trimming whitespace at the end of every chunk, we were iterating
backwards over the string byte-by-byte instead of rune-by-rune.
As an example of how this can cause corruption, suppose we have the
multi-byte character ✅ (`"\u2705"`), which is represented in utf-8 as
the three bytes `0xE2 0x9C 0x85`. It happens that `0x85` is NEL, which
passes `unicode.IsSpace()`. Because we were iterating byte-by-byte, this
caused us to mistakenly slice in the middle of the rune, removing `0x85`
and leaving `0xE2 0x9C`, which, beyond being the incorrect place to
slice, is not even a valid utf-8 sequence.
`trailingWhitespaceLen()` was modified to count from the end in a
rune-aware way. Tests with various multibyte unicode characters were
also added.
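A sketch of the rune-aware approach (the real implementation may
differ in details):

```go
package trim

import (
	"unicode"
	"unicode/utf8"
)

// trailingWhitespaceLen counts the trailing whitespace bytes of s,
// stepping backwards one rune at a time so a multi-byte rune such as
// ✅ is never split mid-sequence.
func trailingWhitespaceLen(s string) int {
	trimmed := 0
	for len(s) > 0 {
		r, size := utf8.DecodeLastRuneInString(s)
		if !unicode.IsSpace(r) {
			break
		}
		trimmed += size
		s = s[:len(s)-size]
	}
	return trimmed
}
```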
Fixes: #12414
* init deepseek model file
* temp removal of flash attention implementation
* shapes are proper, can make a pass
* query, key, value have good cosine similarity, but the max diff is a bit high
* Attention block is working! ** with eager for now, have not added the mask line
* working MoE at around 0.95 cosine sim
* added cosine similarity function
* Starting end to end structure
* Trying (and failing) to get rope to work, going to test full thing on tater
* running on tater36... just not the right outputs
* we have the right values for rope... but its still not working?
* change Extrapolation Factor to 1
* removed adding residuals twice, removed normalization from shared expert, refactored Norms (Attention, MLP) to be outside the (Attention, MLP) blocks and in the Transformer block instead, add cache setLayer
* Temporary modelfiles for cpu
* change kpass intermediate step to kv, two layer outputs [0,1] look fine
* this calls for 16 chicken nuggets
* whoops
* cleaning up code
* delete stuff we don't need
* getting rid of debug statements for llama cpp
* working with long contexts
* fix long context view error
* reverting some changes I made to files that are not part of the PR
* Added proper tokenizer for deepseek3
* clean up model and go test
* remove Modelfile
* not passing the tests
* whoops
* how to pass the ci tests
* resolving some of the comments
* rename
* linted and renamed deepseek3 -> deepseek2
* remove name go
* addressed changes - main change was adopting qwen3 naming scheme
* I cannot with linters
* clean up logs
---------
Co-authored-by: Grace Guo <graceguo@Graces-MBP.localdomain>
Co-authored-by: Grace Guo <graceguo@Graces-MacBook-Pro.local>
Co-authored-by: graceguo <graceguo@tater36.localdomain>
In #11910 we started sanitizing function names, but we were
accidentally modifying built-ins like `browser.open` to `browser_open`.
This was removing the special prompt rendering for built-ins, but it
wasn't immediately apparent since the models seem to be reasonably good
at remembering the built-ins even when presented with these slightly
renamed versions. This fix prevents built-ins from ever being renamed.
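A minimal sketch of the guard, with illustrative names (the actual
sanitizer may rewrite more than dots):

```go
package tools

import "strings"

// builtinTools is illustrative, not the full set of built-ins.
var builtinTools = map[string]bool{"browser.open": true}

// sanitizeToolName is a sketch of the guard: built-ins are returned
// unchanged so their special prompt rendering is preserved.
func sanitizeToolName(name string) string {
	if builtinTools[name] {
		return name
	}
	return strings.ReplaceAll(name, ".", "_")
}
```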
* auth: fix problems with the ollama keypairs
This change adds several fixes including:
- reading in the pubkey files correctly
- fixing the push unit test to create a keypair file in a temp directory
- not returning 500 errors for normal status errors
* tests: add single threaded history test
Also tidies up some existing tests to handle more model output variation
* test: add support for testing specific architectures
In <https://github.com/ollama/ollama/issues/12357> we saw that the
model will output tool calls such as
```
<function=shell>
<parameter=command>
pwd && ls -la
</parameter>
</function>
```
We parse this using the approach of transforming it into valid xml and
then using an xml parser. While we do transform the function and parameter
names, we weren't escaping the parameter values (which in this example
are invalid since `pwd && ls -la` contains unescaped ampersands).
This has been fixed by first transforming the tags in the same way, and
then walking the transformed string and escaping the text in between the
tags. This also fixes a case where `<` in the middle of a parameter
value would cause an xml parse failure.
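The escaping step alone can be sketched with the standard library;
the transformed tag name here is illustrative:

```go
package main

import (
	"bytes"
	"encoding/xml"
	"fmt"
)

// escapeText escapes raw character data so a value like
// "pwd && ls -la" becomes valid xml text.
func escapeText(s string) string {
	var buf bytes.Buffer
	xml.EscapeText(&buf, []byte(s)) // writes to a bytes.Buffer cannot fail
	return buf.String()
}

func main() {
	fmt.Println("<parameter_command>" + escapeText("pwd && ls -la") + "</parameter_command>")
	// <parameter_command>pwd &amp;&amp; ls -la</parameter_command>
}
```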
Fixes: #12357
With #12181, there is now support for embeddings in the ollama engine.
This is done by mutating the architecture, adding `_embed` when an
embedding model is detected. However, this introduced a bug: if an
embedding model is run on top of an existing ollama engine model
without an embedding implementation, e.g. llama4, it will pass the
initial arch support check but fail when actually loaded.
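A sketch of the tightened check, with hypothetical names
(`registeredModels` is a placeholder for the engine's registry):

```go
// Sketch only: resolve the architecture up front, including the
// _embed mutation, so unsupported combinations are rejected before
// any load.
func checkArch(arch string, embedding bool) error {
	if embedding {
		arch += "_embed"
	}
	if _, ok := registeredModels[arch]; !ok {
		return fmt.Errorf("unsupported architecture %q", arch)
	}
	return nil
}
```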
There are currently two entrypoints for creating a model. Previously
the second entrypoint was necessary because calling model.New would
also load the model. Since #11818, this is no longer the case, so merge
them to reduce complexity.
With the addition of CUDA v13, on a clean setup, the level of parallelism
was causing Docker Desktop to become overwhelmed and compilers
were crashing. This limits each build stage to 8 parallel jobs, with the
ability to override if you have many more cores available.
Now that we have a built-in parser abstraction, which was introduced in
<https://github.com/ollama/ollama/pull/12248>, we can modify our harmony
parser to match this and then get rid of nearly all of the
harmony-specific logic in routes.go. We do have a small amount of
code that turns the parser on by default if the architecture matches and
no other built-in parser was provided.
The built-in parser interface was modified in order to handle harmony's
prefill and tool name translation requirements.
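For orientation, a hypothetical sketch of what such an interface can
look like; the actual abstraction from #12248 may differ:

```go
// ToolCall and Parser are illustrative shapes only.
type ToolCall struct {
	Name      string
	Arguments map[string]any
}

type Parser interface {
	// Add consumes a chunk of streamed model output and returns the
	// user-visible content, thinking text, and any completed tool calls.
	Add(chunk string) (content, thinking string, calls []ToolCall, err error)
}
```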
This change moves back to converting bf16 vision weights to fp16,
specifically those whose names start with "v." (such as v.blk.0.attn_k.weight).
This fixes a bug where converted image models fail because they try
to call `im2col`, which doesn't have a bf16 kernel in ggml.
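A minimal sketch of the rule as described, with hypothetical converter
names:

```go
package convert

import "strings"

// castVisionWeight is a sketch: vision-tower tensors (the "v." prefix)
// are cast from bf16 to fp16 because ggml's im2col has no bf16 kernel.
func castVisionWeight(name, kind string) string {
	if strings.HasPrefix(name, "v.") && kind == "BF16" {
		return "F16"
	}
	return kind
}
```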