package llm

import (
	"bufio"
	"bytes"
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"log"
	"log/slog"
	"math/rand"
	"net"
	"net/http"
	"os"
	"os/exec"
	"path/filepath"
	"runtime"
	"slices"
	"sort"
	"strconv"
	"strings"
	"sync"
	"time"

	"golang.org/x/sync/semaphore"

	"github.com/ollama/ollama/api"
	"github.com/ollama/ollama/discover"
	"github.com/ollama/ollama/envconfig"
	"github.com/ollama/ollama/format"
	"github.com/ollama/ollama/fs/ggml"
	"github.com/ollama/ollama/llama"
	"github.com/ollama/ollama/logutil"
	"github.com/ollama/ollama/ml"
	"github.com/ollama/ollama/model"
)

// filteredEnv is a process environment that, when logged through slog, emits
// only the keys relevant to a runner: OLLAMA_* settings, GPU toolchain
// variables (CUDA_*, ROCR_*, ROCM_*, HIP_*, GPU_*, HSA_*, GGML_*), and the
// library search paths PATH, LD_LIBRARY_PATH, and DYLD_LIBRARY_PATH.
type filteredEnv []string

func (e filteredEnv) LogValue() slog.Value {
	var attrs []slog.Attr
	for _, env := range e {
		if key, value, ok := strings.Cut(env, "="); ok {
			switch {
			case strings.HasPrefix(key, "OLLAMA_"),
				strings.HasPrefix(key, "CUDA_"),
				strings.HasPrefix(key, "ROCR_"),
				strings.HasPrefix(key, "ROCM_"),
				strings.HasPrefix(key, "HIP_"),
				strings.HasPrefix(key, "GPU_"),
				strings.HasPrefix(key, "HSA_"),
				strings.HasPrefix(key, "GGML_"),
				slices.Contains([]string{
					"PATH",
					"LD_LIBRARY_PATH",
					"DYLD_LIBRARY_PATH",
				}, key):
				attrs = append(attrs, slog.String(key, value))
			}
		}
	}
	return slog.GroupValue(attrs...)
}
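
// Illustrative sketch (not part of the original source): attaching a
// filteredEnv to a log record. Because filteredEnv implements slog.LogValuer,
// only the recognized keys appear in the output:
//
//	cmd := exec.Command("runner")
//	cmd.Env = append(os.Environ(), "OLLAMA_DEBUG=1", "MY_SECRET=x")
//	slog.Debug("starting runner", "environment", filteredEnv(cmd.Env))
//	// logs environment.OLLAMA_DEBUG=1; MY_SECRET is filtered out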

// LlamaServer is the interface for driving a single model runner. It is
// implemented both by the llama.cpp based engine and by the new Ollama
// engine (see llamaServer and ollamaServer below).
type LlamaServer interface {
	ModelPath() string
	Load(ctx context.Context, gpus discover.GpuInfoList, requireFull bool) error
	Ping(ctx context.Context) error
	WaitUntilRunning(ctx context.Context) error
	Completion(ctx context.Context, req CompletionRequest, fn func(CompletionResponse)) error
	Embedding(ctx context.Context, input string) ([]float32, error)
	Tokenize(ctx context.Context, content string) ([]int, error)
	Detokenize(ctx context.Context, tokens []int) (string, error)
	Close() error
	VRAMSize() uint64 // Total VRAM across all GPUs
	TotalSize() uint64
	VRAMByGPU(gpuID string) uint64
	Pid() int
}
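
// Illustrative sketch (not part of the original source) of a typical call
// sequence against this interface. The Content field on CompletionResponse
// is an assumption; error handling is abbreviated:
//
//	srv, _ := NewLlamaServer(gpus, modelPath, f, nil, nil, opts, 1)
//	defer srv.Close()
//	if err := srv.WaitUntilRunning(ctx); err != nil {
//		return err
//	}
//	err := srv.Completion(ctx, req, func(r CompletionResponse) {
//		fmt.Print(r.Content) // stream partial responses as they arrive
//	})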

// llmServer is an instance of a runner hosting a single model
type llmServer struct {
	port        int
	cmd         *exec.Cmd
	done        chan error // Channel to signal when the process exits
	status      *StatusWriter
	options     api.Options
	numParallel int
	modelPath   string

	loadRequest LoadRequest // Parameters used to initialize the runner

	// llamaModel is an instance of the cgo llama.cpp model definition
	// nil if this server is running the new engine
	llamaModel     *llama.Model
	llamaModelLock *sync.Mutex

	// textProcessor handles text encoding/decoding for the model in the Ollama engine
	// nil if this server is running the llama.cpp based engine
	textProcessor model.TextProcessor

	totalLayers  uint64
	loadStart    time.Time // Record how long it took the model to load
	loadProgress float32

	sem *semaphore.Weighted
}
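
// Illustrative sketch (not in the original source): the weighted semaphore
// above typically gates concurrent requests, assuming it is created with one
// slot per parallel sequence:
//
//	s.sem = semaphore.NewWeighted(int64(s.numParallel))
//	...
//	if err := s.sem.Acquire(ctx, 1); err != nil {
//		return err // context cancelled while waiting for a slot
//	}
//	defer s.sem.Release(1)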

// llamaServer hosts a model on the llama.cpp based engine, sizing it with a
// precomputed memory estimate.
type llamaServer struct {
	llmServer

	ggml     *ggml.GGML
	gpus     discover.GpuInfoList // The set of GPUs covered by the memory estimate
	estimate MemoryEstimate
}

// ollamaServer hosts a model on the new Ollama engine.
type ollamaServer struct {
	llmServer

	mem *ml.BackendMemory
}
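
// Note: both concrete servers embed llmServer and therefore share the runner
// process handle, port, and load bookkeeping; they differ only in memory
// accounting (a precomputed MemoryEstimate versus live ml.BackendMemory).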

// LoadModel will load a model from disk. The model must be in the GGML format.
//
// It collects array values for arrays with a size less than or equal to
// maxArraySize. If maxArraySize is 0, the default value of 1024 is used. If
// the maxArraySize is negative, all arrays are collected.
func LoadModel(model string, maxArraySize int) (*ggml.GGML, error) {
						 
					
						
							
								
									
										
										
										
											2024-03-31 00:50:05 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
									if  _ ,  err  :=  os . Stat ( model ) ;  err  !=  nil  { 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
										return  nil ,  err 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
									} 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
								
							 
						 
					
						
							
								
									
										
										
										
											2024-03-15 01:24:13 +08:00 
										
									 
								 
							 
							
								
							 
							
								 
							
							
									f ,  err  :=  os . Open ( model ) 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
									if  err  !=  nil  { 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
										return  nil ,  err 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
									} 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
									defer  f . Close ( ) 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
								
							 
						 
					
						
							
								
									
										
										
										
											2025-04-18 04:42:40 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
									ggml ,  err  :=  ggml . Decode ( f ,  maxArraySize ) 
							 
						 
					
						
							
								
									
										
										
										
											2024-03-31 00:50:05 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
									return  ggml ,  err 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
								}  
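
// Usage sketch (hypothetical caller, not from this file): decode a model with
// the default array limit and inspect its metadata.
//
//	g, err := LoadModel(modelPath, 0)
//	if err != nil {
//		return nil, err
//	}
//	slog.Debug("model metadata", "n_ctx_train", g.KV().ContextLength(), "blocks", g.KV().BlockCount())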

// NewLlamaServer will run a server for the given GPUs
func NewLlamaServer(gpus discover.GpuInfoList, modelPath string, f *ggml.GGML, adapters, projectors []string, opts api.Options, numParallel int) (LlamaServer, error) {
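	// Pick an engine for this model: the Ollama engine tokenizes through a
	// TextProcessor, while the llama.cpp runner needs a vocab-only copy of
	// the model loaded in-process.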
	var llamaModel *llama.Model
	var textProcessor model.TextProcessor
	var err error
	if envconfig.NewEngine() || f.KV().OllamaEngineRequired() {
		if len(projectors) == 0 {
			textProcessor, err = model.NewTextProcessor(modelPath)
		} else {
			err = errors.New("split vision models aren't supported")
		}
		if err != nil {
			// To prepare for opt-out mode, instead of treating this as an error, we fall back to the old runner
			slog.Debug("model not yet supported by Ollama engine, switching to compatibility mode", "model", modelPath, "error", err)
		}
	}

	if textProcessor == nil {
		llamaModel, err = llama.LoadModelFromFile(modelPath, llama.ModelParams{VocabOnly: true})
		if err != nil {
			return nil, err
		}
	}

	// Verify the requested context size is <= the model training size
	trainCtx := f.KV().ContextLength()
	if opts.NumCtx > int(trainCtx) && trainCtx > 0 {
		slog.Warn("requested context size too large for model", "num_ctx", opts.NumCtx, "n_ctx_train", trainCtx)
		opts.NumCtx = int(trainCtx)
	}

	opts.NumBatch = min(opts.NumBatch, opts.NumCtx)
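
	// Assemble the request that will be sent to the runner once it is up;
	// the remaining fields are filled in below.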
	loadRequest := LoadRequest{LoraPath: adapters, KvSize: opts.NumCtx * numParallel, BatchSize: opts.NumBatch, Parallel: numParallel, MultiUserCache: envconfig.MultiUserCache()}
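
	// Prefer an explicit user thread count; otherwise fall back to the
	// system's optimal count when discovery reports one.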
	defaultThreads := discover.GetSystemInfo().GetOptimalThreadCount()
	if opts.NumThread > 0 {
		loadRequest.NumThreads = opts.NumThread
	} else if defaultThreads > 0 {
		loadRequest.NumThreads = defaultThreads
	}

	// TODO - NUMA support currently doesn't work properly

	if opts.MainGPU > 0 {
		loadRequest.MainGPU = opts.MainGPU
	}

	if len(projectors) > 0 && llamaModel != nil {
		loadRequest.ProjectorPath = projectors[0]
	}

	// This will disable flash attention unless all GPUs on the system support it, even if we end up selecting a subset
	// that can handle it.
	fa := envconfig.FlashAttention()
	if f.FlashAttention() {
		slog.Info("model wants flash attention")
		fa = true
	}

	if fa && !gpus.FlashAttentionSupported() {
		slog.Warn("flash attention enabled but not supported by gpu")
		fa = false
	}

	if fa && !f.SupportsFlashAttention() {
		slog.Warn("flash attention enabled but not supported by model")
		fa = false
	}
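
	// KV cache quantization only takes effect when flash attention ends up
	// enabled; otherwise warn and keep the f16 default.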
	kvct := strings.ToLower(envconfig.KvCacheType())

	if fa {
		slog.Info("enabling flash attention")
		loadRequest.FlashAttention = true

		// Flash Attention also supports kv cache quantization
		// Enable it if the requested kv cache type is supported by the model
		if f.SupportsKVCacheType(kvct) {
			loadRequest.KvCacheType = kvct
		} else {
			slog.Warn("kv cache type not supported by model", "type", kvct)
		}
	} else if kvct != "" && kvct != "f16" {
		slog.Warn("quantized kv cache requested but flash attention disabled", "type", kvct)
	}
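
	// Discover the runner libraries shipped with this build: keys are
	// directory names such as "cuda_v12" or "rocm", values their full paths.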
	availableLibs := make(map[string]string)
	if entries, err := os.ReadDir(discover.LibOllamaPath); err == nil {
		for _, entry := range entries {
			availableLibs[entry.Name()] = filepath.Join(discover.LibOllamaPath, entry.Name())
		}
	}

	var gpuLibs []string
	for _, gpu := range gpus {
		gpuLibs = append(gpuLibs, gpu.RunnerName())
	}

	requested := envconfig.LLMLibrary()
	if availableLibs[requested] != "" {
		slog.Info("using requested gpu library", "requested", requested)
		gpuLibs = []string{requested}
	}
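
	// Resolve each desired library against what is available, preferring an
	// exact name match and falling back to any library in the same family
	// (the prefix before '_').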
	var compatible []string
	for _, gpuLib := range gpuLibs {
		var matchingLibs []string
		for k := range availableLibs {
			// exact match first
			if k == gpuLib {
				matchingLibs = append([]string{k}, matchingLibs...)
				continue
			}

			// then match the family (e.g. 'cuda')
			if strings.Split(k, "_")[0] == strings.Split(gpuLib, "_")[0] {
				matchingLibs = append(matchingLibs, k)
			}
		}

		if len(matchingLibs) > 0 {
			compatible = append(compatible, matchingLibs[0])
		}
	}

	exe, err := os.Executable()
	if err != nil {
		return nil, fmt.Errorf("unable to lookup executable path: %w", err)
	}

	if eval, err := filepath.EvalSymlinks(exe); err == nil {
		exe = eval
	}

	// iterate through compatible GPU libraries such as 'cuda_v12', 'rocm', etc.,
	// adding each library's respective path to the LD_LIBRARY_PATH, until finally running
	// without any LD_LIBRARY_PATH flags
	for {
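		// Ask the kernel for a free port by briefly listening on localhost:0;
		// if that fails, fall back to a random port in the ephemeral range.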
		port := 0
		if a, err := net.ResolveTCPAddr("tcp", "localhost:0"); err == nil {
			var l *net.TCPListener
			if l, err = net.ListenTCP("tcp", a); err == nil {
				port = l.Addr().(*net.TCPAddr).Port
				l.Close()
			}
		}
		if port == 0 {
			slog.Debug("ResolveTCPAddr failed, using random port")
			port = rand.Intn(65535-49152) + 49152 // get a random port in the ephemeral range
		}

		params := []string{"runner"}
		if textProcessor != nil {
			// New engine
			// TODO - if we have failure to load scenarios, add logic to retry with the old runner
			params = append(params, "--ollama-engine")
		}
		params = append(params, "--model", modelPath)
		params = append(params, "--port", strconv.Itoa(port))
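
		// Each OS names its dynamic loader search path differently; pick the
		// variable to extend for the runner subprocess.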
		var pathEnv string
		switch runtime.GOOS {
		case "windows":
			pathEnv = "PATH"
		case "darwin":
			pathEnv = "DYLD_LIBRARY_PATH"
		default:
			pathEnv = "LD_LIBRARY_PATH"
		}

		// Note: we always put our dependency paths first
		// since these are the exact versions we compiled/linked against
		libraryPaths := []string{discover.LibOllamaPath}
		if libraryPath, ok := os.LookupEnv(pathEnv); ok {
			libraryPaths = append(libraryPaths, filepath.SplitList(libraryPath)...)
		}

		ggmlPaths := []string{discover.LibOllamaPath}
		for _, c := range compatible {
			if libpath, ok := availableLibs[c]; ok {
				slog.Debug("adding gpu library", "path", libpath)
				libraryPaths = append([]string{libpath}, libraryPaths...)
				ggmlPaths = append(ggmlPaths, libpath)
			}
		}

		for _, gpu := range gpus {
			if gpu.DependencyPath != nil {
				slog.Debug("adding gpu dependency paths", "paths", gpu.DependencyPath)
				libraryPaths = append(gpu.DependencyPath, libraryPaths...)
			}
		}

		// finally, add the root library path
		libraryPaths = append(libraryPaths, discover.LibOllamaPath)
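
		// Everything needed to launch the runner is now known; wrap the
		// subprocess in a server handle.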
		s := llmServer{
			port:           port,
			cmd:            exec.Command(exe, params...),
			status:         NewStatusWriter(os.Stderr),
			options:        opts,
			modelPath:      modelPath,
			loadRequest:    loadRequest,
			llamaModel:     llamaModel,
			llamaModelLock: &sync.Mutex{},
			textProcessor:  textProcessor,
			numParallel:    numParallel,
			sem:            semaphore.NewWeighted(int64(numParallel)),
			totalLayers:    f.KV().BlockCount() + 1,
			loadStart:      time.Now(),
			done:           make(chan error, 1),
		}

		s.cmd.Env = os.Environ()
		s.cmd.Stdout = os.Stdout
		s.cmd.Stderr = s.status
		s.cmd.SysProcAttr = LlamaServerSysProcAttr

		s.cmd.Env = append(s.cmd.Env, "OLLAMA_LIBRARY_PATH="+strings.Join(ggmlPaths, string(filepath.ListSeparator)))
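
		// Collect per-GPU environment workarounds and the device visibility
		// filter; they are merged into the subprocess environment below.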
		envWorkarounds := []string{}
		for _, gpu := range gpus {
			envWorkarounds = append(envWorkarounds, gpu.EnvWorkarounds...)
		}
		// Always filter down the set of GPUs in case there are any unsupported devices that might crash
		envWorkarounds = append(envWorkarounds, gpus.GetVisibleDevicesEnv()...)
		pathEnvVal := strings.Join(libraryPaths, string(filepath.ListSeparator))

		// Update or add the path variable with our adjusted version
		pathNeeded := true
		envWorkaroundDone := make([]bool, len(envWorkarounds))
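		// strings.EqualFold keeps the variable-name comparison case-insensitive,
		// which matters on Windows where "Path" and "PATH" are the same variable.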
							 
						 
					
						
							
								
									
										
										
										
											2024-05-11 13:53:21 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
		for i := range s.cmd.Env {
			cmp := strings.SplitN(s.cmd.Env[i], "=", 2)
			if strings.EqualFold(cmp[0], pathEnv) {
				s.cmd.Env[i] = pathEnv + "=" + pathEnvVal
				pathNeeded = false
			} else if len(envWorkarounds) != 0 {
				for j, kv := range envWorkarounds {
					tmp := strings.SplitN(kv, "=", 2)
					if strings.EqualFold(cmp[0], tmp[0]) {
						s.cmd.Env[i] = kv
						envWorkaroundDone[j] = true
					}
				}
			}
		}
		if pathNeeded {
			s.cmd.Env = append(s.cmd.Env, pathEnv+"="+pathEnvVal)
		}
		for i, done := range envWorkaroundDone {
			if !done {
				s.cmd.Env = append(s.cmd.Env, envWorkarounds[i])
			}
		}
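
		// Launch the runner. If it fails to start and other compatible GPU
		// libraries remain, fall through and retry with the next one.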
		slog.Info("starting runner", "cmd", s.cmd)
		slog.Debug("subprocess", "", filteredEnv(s.cmd.Env))

		if err = s.cmd.Start(); err != nil {
			var msg string
			if s.status != nil && s.status.LastErrMsg != "" {
				msg = s.status.LastErrMsg
			}
			err := fmt.Errorf("error starting runner: %v %s", err, msg)
			if len(compatible) == 0 {
				if llamaModel != nil {
					llama.FreeModel(llamaModel)
				}
				return nil, err
			}

			slog.Warn("unable to start runner with compatible gpu", "error", err, "compatible", compatible)
			compatible = compatible[1:]
			continue
		}

		// reap subprocess when it exits
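		// and forward the exit status on s.done, favoring the runner's own
		// more descriptive error message when one is available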
		go func() {
			err := s.cmd.Wait()
			// Favor a more detailed message over the process exit status
			if err != nil && s.status != nil && s.status.LastErrMsg != "" {
				slog.Error("llama runner terminated", "error", err)
				if strings.Contains(s.status.LastErrMsg, "unknown model") {
					s.status.LastErrMsg = "this model is not supported by your version of Ollama. You may need to upgrade"
				}
				s.done <- errors.New(s.status.LastErrMsg)
			} else {
				s.done <- err
			}
		}()
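
		// Wrap the shared llmServer in the engine-specific type: textProcessor
		// is only non-nil when the model runs on the Ollama engine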
		if textProcessor != nil {
			return &ollamaServer{llmServer: s}, nil
		} else {
			return &llamaServer{llmServer: s, ggml: f}, nil
		}
	}
}

func (s *llmServer) ModelPath() string {
	return s.modelPath
}

type LoadOperation int

// The order of these constants is significant because we iterate over the
// operations. They should be in order of increasingly loading the model.
const (
	LoadOperationFit    LoadOperation = iota // Return memory requirements but do not allocate
	LoadOperationAlloc                       // Allocate memory but do not load the weights
	LoadOperationCommit                      // Load weights - further changes cannot be made after this
	LoadOperationClose                       // Close model and free memory
)
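
// Callers step through these stages in order: ollamaServer.Load below
// iterates from LoadOperationFit toward LoadOperationCommit, and issues
// LoadOperationClose to unwind a load that did not succeed.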

func (o LoadOperation) String() string {
	switch o {
	case LoadOperationFit:
		return "fit"
	case LoadOperationAlloc:
		return "alloc"
	case LoadOperationCommit:
		return "commit"
	case LoadOperationClose:
		return "close"
	default:
		return "unknown"
	}
}

type LoadRequest struct {
	Operation LoadOperation

	LoraPath       []string
	Parallel       int
	BatchSize      int
	FlashAttention bool
	KvSize         int
	KvCacheType    string
	NumThreads     int
	GPULayers      ml.GPULayersList
	MultiUserCache bool

	// Legacy fields - not used with the Ollama engine
	ProjectorPath string
	MainGPU       int
	UseMmap       bool
}

type LoadResponse struct {
	Success bool
	Memory  ml.BackendMemory
}
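
// Memory is populated even when Success is false, so a caller can shrink its
// requested layout based on the measured requirements and try again.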

var ErrLoadRequiredFull = errors.New("unable to load full model on GPU")

func (s *llamaServer) Load(ctx context.Context, gpus discover.GpuInfoList, requireFull bool) error {
	systemInfo := discover.GetSystemInfo()
	systemTotalMemory := systemInfo.System.TotalMemory
	systemFreeMemory := systemInfo.System.FreeMemory
	systemSwapFreeMemory := systemInfo.System.FreeSwap
	slog.Info("system memory", "total", format.HumanBytes2(systemTotalMemory), "free", format.HumanBytes2(systemFreeMemory), "free_swap", format.HumanBytes2(systemSwapFreeMemory))

	g := pickBestFullFitByLibrary(s.ggml, s.modelPath, []string{s.loadRequest.ProjectorPath}, s.loadRequest.LoraPath, s.options, gpus, s.numParallel)
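	// Prefer a placement where the whole model fits in GPU memory. If none
	// exists, fall back to the best partial fit, unless the caller requires a
	// full fit, in which case fail so that a loaded model can be evicted.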
	if g == nil {
		if !requireFull {
			g = pickBestPartialFitByLibrary(s.ggml, []string{s.loadRequest.ProjectorPath}, s.loadRequest.LoraPath, s.options, gpus, s.numParallel)
		} else {
			slog.Info("model requires more memory than is currently available, evicting a model to make space", "estimate", s.estimate)
			return ErrLoadRequiredFull
		}
	}

	gpus = g
	s.estimate = estimateGPULayers(gpus, s.ggml, []string{s.loadRequest.ProjectorPath}, s.options, s.numParallel)

	if len(gpus) > 1 || gpus[0].Library != "cpu" {
		switch {
		case gpus[0].Library == "metal" && s.estimate.VRAMSize > systemInfo.System.TotalMemory:
			// disable partial offloading when model is greater than total system memory as this
			// can lead to locking up the system
			s.options.NumGPU = 0
		case gpus[0].Library != "metal" && s.estimate.Layers == 0:
			// Don't bother loading into the GPU if no layers can fit
			gpus = discover.GetCPUInfo()
		case s.options.NumGPU < 0 && s.estimate.Layers > 0 && gpus[0].Library != "cpu":
			s.options.NumGPU = s.estimate.Layers
		}
	}

	// On linux and windows, over-allocating CPU memory will almost always result in an error
	// Darwin has fully dynamic swap so has no direct concept of free swap space
	if runtime.GOOS != "darwin" {
		systemMemoryRequired := s.estimate.TotalSize - s.estimate.VRAMSize
		available := systemInfo.System.FreeMemory + systemInfo.System.FreeSwap
		if systemMemoryRequired > available {
			slog.Warn("model request too large for system", "requested", format.HumanBytes2(systemMemoryRequired), "available", format.HumanBytes2(available), "total", format.HumanBytes2(systemInfo.System.TotalMemory), "free", format.HumanBytes2(systemInfo.System.FreeMemory), "swap", format.HumanBytes2(systemInfo.System.FreeSwap))
			return fmt.Errorf("model requires more system memory (%s) than is available (%s)", format.HumanBytes2(systemMemoryRequired), format.HumanBytes2(available))
		}
	}

	slog.Info("offload", "", s.estimate)

	s.gpus = gpus
	s.loadRequest.GPULayers = createGPULayers(s.estimate, s.ggml, gpus, s.options.NumGPU)

	// Mmap is only supported on the llama engine
	if s.textProcessor == nil {
		s.loadRequest.UseMmap = true

		// mmap has issues with partial offloading on metal
		for _, g := range gpus {
			if g.Library == "metal" &&
				uint64(s.options.NumGPU) > 0 &&
				uint64(s.options.NumGPU) < s.ggml.KV().BlockCount()+1 {
				s.options.UseMMap = new(bool)
				*s.options.UseMMap = false
			}
		}

		// Windows CUDA should not use mmap for best performance
		// On Linux with a model larger than free memory, mmap leads to thrashing
		// For CPU loads we want the memory to be allocated, not FS cache
		if (runtime.GOOS == "windows" && gpus[0].Library == "cuda" && s.options.UseMMap == nil) ||
			(runtime.GOOS == "linux" && systemInfo.System.FreeMemory < s.estimate.TotalSize && s.options.UseMMap == nil) ||
			(gpus[0].Library == "cpu" && s.options.UseMMap == nil) ||
			(s.options.UseMMap != nil && !*s.options.UseMMap) {
			s.loadRequest.UseMmap = false
		}
	}

	if err := s.waitUntilRunnerLaunched(ctx); err != nil {
		return err
	}

	resp, err := s.initModel(ctx, s.loadRequest, LoadOperationCommit)
	if err != nil {
		return err
	}

	// On the Ollama engine, we can print out a summary of the memory allocations.
	// We don't have this for the llama engine but it does something similar itself.
	if s.textProcessor != nil {
		resp.Memory.Log(slog.LevelInfo)
	}

	if !resp.Success {
		slog.Warn("failed to allocate memory for model", "memory", resp.Memory)
		return errors.New("failed to allocate memory for model")
	}

	// The llama engine does its memory allocations together with model loading, so we
	// need to wait until it is done to ensure that we have accurate memory data before
	// loading the next model
	if s.textProcessor == nil {
		return s.WaitUntilRunning(ctx)
	} else {
		return nil
	}
}

// createGPULayers maps from the tensor splits assigned by the memory estimates to explicit assignment
// of particular layers onto GPUs
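//
// Worked example (illustrative only): with BlockCount() == 10 and numGPU == 4,
// layers 6-9 are offloaded. With normalized cumulative splits [0.5, 1.0] across
// two GPUs, layers 6 and 7 land on GPU 0 (fractions 0 and 0.25 are < 0.5) and
// layers 8 and 9 on GPU 1 (fractions 0.5 and 0.75 are < 1.0).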
func createGPULayers(estimate MemoryEstimate, ggml *ggml.GGML, gpus discover.GpuInfoList, numGPU int) ml.GPULayersList {
	if numGPU <= 0 {
		return nil
	}

	gpuLayers := make(ml.GPULayersList, len(gpus))
	for i := range gpuLayers {
		gpuLayers[i].ID = gpus[i].ID
	}

	var sum float32
	splits := make([]float32, len(estimate.TensorSplit))
	// cumulative sum of all splits
	for i := range splits {
		sum += float32(estimate.TensorSplit[i])
		splits[i] = sum
	}

	if sum <= 0 {
		return nil
	}

	// normalize splits
	for i := range splits {
		splits[i] /= sum
	}

	blocks := int(ggml.KV().BlockCount())
	gpuRangeStart := max(0, blocks-numGPU)
	gpuRangeStop := min(gpuRangeStart+numGPU, blocks+1)
	for i := range blocks + 1 {
		if i < gpuRangeStart || i >= gpuRangeStop {
			continue
		}

		index := slices.IndexFunc(splits, func(f float32) bool { return float32(i-gpuRangeStart)/float32(gpuRangeStop-gpuRangeStart) < f })
		if index < 0 || index >= len(gpus) {
			continue
		}

		gpuLayers[index].Layers = append(gpuLayers[index].Layers, i)
	}

	return gpuLayers
}

// Load finds the optimal layout of layers to offload on GPUs based on no initial information about the size of the model.
// It does this by:
// 1. Assigning the full model to the GPU with the largest available free memory
// 2. Attempting to allocate the layout and receiving the memory requirements in response
// 3. Creating a new layout based on the updated memory information
// 4. Going back to step 2 and looping until we either stabilize on a particular layout or discover that we have entered a cycle
//
// This process is repeated for higher levels of loading the model (fit, allocate, commit). The earlier levels are quicker,
// allowing for faster iteration, but may return less information.
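//
// Cycles are detected by hashing each attempted layout into pastAllocations:
// only layouts that have not been tried before and that do not increase the
// number of offloaded layers are retried.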
func (s *ollamaServer) Load(ctx context.Context, gpus discover.GpuInfoList, requireFull bool) error {
	var success bool
	defer func() {
		if !success {
			s.initModel(ctx, LoadRequest{}, LoadOperationClose)
		}
		if s.mem != nil {
			s.mem.Log(slog.LevelInfo)
		}
	}()

	slog.Info("loading model", "model layers", s.totalLayers, "requested", s.options.NumGPU)

	systemInfo := discover.GetSystemInfo()
	systemTotalMemory := systemInfo.System.TotalMemory
	systemFreeMemory := systemInfo.System.FreeMemory
	systemSwapFreeMemory := systemInfo.System.FreeSwap
	slog.Info("system memory", "total", format.HumanBytes2(systemTotalMemory), "free", format.HumanBytes2(systemFreeMemory), "free_swap", format.HumanBytes2(systemSwapFreeMemory))

	if !(len(gpus) == 1 && gpus[0].Library == "cpu") {
		for _, gpu := range gpus {
			available := gpu.FreeMemory - envconfig.GpuOverhead() - gpu.MinimumMemory
			if gpu.FreeMemory < envconfig.GpuOverhead()+gpu.MinimumMemory {
				available = 0
			}
			slog.Info("gpu memory", "id", gpu.ID,
				"available", format.HumanBytes2(available),
				"free", format.HumanBytes2(gpu.FreeMemory),
				"minimum", format.HumanBytes2(gpu.MinimumMemory),
				"overhead", format.HumanBytes2(envconfig.GpuOverhead()))
		}
	}

	pastAllocations := make(map[uint64]struct{})
	var backoff float32

	gpuLayers, err := s.createLayout(systemInfo, gpus, s.mem, requireFull, backoff)
	if err != nil {
		return err
	}

	if err := s.waitUntilRunnerLaunched(ctx); err != nil {
		return err
	}
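
	// Two labeled loops: nextOperation steps through the load stages (fit,
	// then alloc), while nextLoad retries a given stage with refined layouts
	// until one stabilizes or a previously seen layout comes back around.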
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
								
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
nextOperation:
	for operation := LoadOperationFit; operation < LoadOperationCommit; operation++ {
	nextLoad:
		for {
			s.loadRequest.GPULayers = gpuLayers

			resp, err := s.initModel(ctx, s.loadRequest, operation)
			if err != nil {
				return err
			}

			resp.Memory.Log(slog.LevelDebug)
			slog.Debug("memory", "success", resp.Success, "required", resp.Memory)

			pastAllocations[gpuLayers.Hash()] = struct{}{}
			s.mem = &resp.Memory

			for {
				newGPULayers, err := s.createLayout(systemInfo, gpus, s.mem, requireFull, backoff)
				if err != nil {
					return err
				}

				slog.Debug("new layout created", "layers", newGPULayers)

				// We get additional memory information over time, which will reduce the number of
				// layers that can fit, so fewer layers is actually better. As long as we haven't seen
				// this layout before and it doesn't have more layers than the last one, we can keep
				// trying to see if we can do better.
				if _, ok := pastAllocations[newGPULayers.Hash()]; !ok && newGPULayers.Sum() <= gpuLayers.Sum() {
					gpuLayers = newGPULayers
					continue nextLoad
				}

				// If we are looping around a few different layouts due to graphs moving off and on
				// GPUs, make sure that we try out the intermediate states. For example, if we are
				// looping between offloading 39 and 41 layers, we should also check 40.
				//
				// This switches strategies to force an incremental number of layers to be offloaded,
				// checking the memory layout at each step. If the allocation succeeds and creating a
				// new layout without forcing offload yields the same or greater number of layers
				// offloaded, then the trial is successful.
				//
				// This alternate strategy does not introduce the possibility of loops with the overall
				// state machine, as it exits this code block either with a successful result (moving
				// to the next operation) or with the original number of layers offloaded.
				if s.options.NumGPU < 0 && newGPULayers.Sum()-gpuLayers.Sum() > 1 {
					for i := newGPULayers.Sum() - 1; i >= gpuLayers.Sum(); i-- {
						slog.Debug("exploring intermediate layers", "layer", i)

						s.options.NumGPU = i
						newGPULayers, err = s.createLayout(systemInfo, gpus, s.mem, requireFull, backoff)
						s.options.NumGPU = -1
						if err != nil {
							return err
						}

						slog.Debug("new layout created", "layers", newGPULayers)

						s.loadRequest.GPULayers = newGPULayers
						resp, err = s.initModel(ctx, s.loadRequest, operation)
						if err != nil {
							return err
						}

						resp.Memory.Log(slog.LevelDebug)
						slog.Debug("memory", "success", resp.Success, "required", resp.Memory)

						if resp.Success {
							verifyGPULayers, err := s.createLayout(systemInfo, gpus, &resp.Memory, requireFull, backoff)
							if err != nil {
								return err
							}

							slog.Debug("verifying layout", "layers", verifyGPULayers)

							if newGPULayers.Sum() <= verifyGPULayers.Sum() {
								gpuLayers = newGPULayers

								// Since we are going backwards (increasing the number of layers), ensure
								// that we can come back down if needed.
								clear(pastAllocations)

								continue nextOperation
							}
						}
					}
				}

				// If we generate a layout a second time or go backwards, then we've converged. Use
				// the last layout before the repeat, which is already allocated.
				if resp.Success {
					continue nextOperation
				}

				if s.options.NumGPU >= 0 {
					return fmt.Errorf("memory layout cannot be allocated with num_gpu = %v", s.options.NumGPU)
				}

				// Memory allocation failed even though we created a layout that we thought should
				// fit in available memory. This could happen if either our free memory reports
				// are incorrect or if available memory is changing between layout and allocation
				// time. Apply an exponential backoff to try to find the real amount of available
				// space.
				if backoff > 1 {
					slog.Warn("memory layout cannot be allocated", "memory", resp.Memory)
					return errors.New("memory layout cannot be allocated")
				} else if backoff == 0 {
					backoff = 0.01
				} else {
					backoff *= 2
				}

				slog.Info("model layout did not fit, applying backoff", "backoff", fmt.Sprintf("%.2f", backoff))
			}
		}
	}
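
	// Every operation before Commit has converged on a stable layout; commit it and
	// verify that the final allocation actually succeeded.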
	s.loadRequest.GPULayers = gpuLayers
	resp, err := s.initModel(ctx, s.loadRequest, LoadOperationCommit)
	if err != nil {
		return err
	}

	success = resp.Success
	s.mem = &resp.Memory

	if !success {
		slog.Warn("failed to commit memory for model", "memory", resp.Memory)
		return errors.New("failed to commit memory for model")
	}

	return nil
}

// createLayout uses the current best view of memory requirements and creates a layout of model layers on GPUs.
// It does this by:
// - Calculating how much space each layer requires
// - Calculating how much space each GPU has available for layers, based on free memory and space occupied by the graph
// - Assigning layers
// - Ensuring that we don't exceed limits, such as requirements about partial offloading or system memory
func (s *ollamaServer) createLayout(systemInfo discover.SystemInfo, systemGPUs discover.GpuInfoList, memory *ml.BackendMemory, requireFull bool, backoff float32) (ml.GPULayersList, error) {
	if s.totalLayers == 0 || s.options.NumGPU == 0 || len(systemGPUs) == 0 || (len(systemGPUs) == 1 && systemGPUs[0].Library == "cpu") {
		return ml.GPULayersList{}, nil
	}

	gpus := append(make(discover.GpuInfoList, 0, len(systemGPUs)), systemGPUs...)
	sort.Sort(sort.Reverse(discover.ByFreeMemory(gpus)))

	if memory == nil {
		memory = &ml.BackendMemory{CPU: ml.DeviceMemory{
			Weights: make([]ml.Memory, s.totalLayers),
			Cache:   make([]ml.Memory, s.totalLayers),
		}}
	}
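
	// Total up the size of each layer, summing its weights and cache across every
	// device that currently holds a piece of it.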
	layers := make([]uint64, len(memory.CPU.Weights))
	for i := range layers {
		for j := range memory.GPUs {
			layers[i] += memory.GPUs[j].Weights[i].Size
			layers[i] += memory.GPUs[j].Cache[i].Size
		}
		layers[i] += memory.CPU.Weights[i].Size
		layers[i] += memory.CPU.Cache[i].Size
		logutil.Trace("layer to assign", "layer", i, "size", format.HumanBytes2(layers[i]))
	}
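
	// Work out how much VRAM each GPU can actually contribute to layers, reserving the
	// backoff fraction, the device minimum, the configured overhead, and any graph
	// already allocated on the device.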
	gpuLayers := ml.GPULayersList{}
	for _, gl := range gpus.ByLibrary() {
		// If a GPU already has a graph allocated on it, then we should continue to use it.
		// Otherwise, we lose information that we got from previous allocations, which can
		// cause cycling. Plus, we get more information about required allocation from each
		// iteration, so it doesn't make sense that a later iteration would use fewer GPUs.
		lastUsedGPU := 0
		for i := range gl {
			found := false
			for j := range memory.GPUs {
				if gl[i].ID == memory.GPUs[j].ID {
					if memory.GPUs[j].Graph.Size != 0 {
						lastUsedGPU = i
					}

					reserved := uint64(float32(gl[i].FreeMemory)*backoff) + gl[i].MinimumMemory + envconfig.GpuOverhead() + memory.GPUs[j].Graph.Size
					if gl[i].FreeMemory > reserved {
						gl[i].FreeMemory -= reserved
					} else {
						gl[i].FreeMemory = 0
					}

					slog.Debug("available gpu", "id", gl[i].ID,
						"available layer vram", format.HumanBytes2(gl[i].FreeMemory),
						"backoff", fmt.Sprintf("%.2f", backoff), "minimum", format.HumanBytes2(gl[i].MinimumMemory),
						"overhead", format.HumanBytes2(envconfig.GpuOverhead()),
						"graph", format.HumanBytes2(memory.GPUs[j].Graph.Size))

					found = true
					break
				}
			}
			if !found {
				// The runner doesn't report seeing this GPU
				gl[i].FreeMemory = 0
			}
		}

		libraryGpuLayers := assignLayers(layers, gl, s.options.NumGPU, lastUsedGPU)
		if libraryGpuLayers.Sum() > gpuLayers.Sum() {
			gpuLayers = libraryGpuLayers
		}
	}

	// These sizes will only increase as we go through additional iterations and get additional information.
	cpuSize := memory.InputWeights.Size + memory.CPU.Graph.Size
	var vramSize uint64
	for _, gl := range gpuLayers {
		for _, gpu := range memory.GPUs {
			if gl.ID == gpu.ID {
				vramSize += gpu.Graph.Size
				break
			}
		}
	}
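
	// Attribute each layer's size to VRAM if it was assigned to a GPU, otherwise to
	// system memory, so the limit checks below see the full cost of this layout.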
nextLayer:
	for i := range layers {
		for _, g := range gpuLayers {
			for _, gl := range g.Layers {
				if i == gl {
					vramSize += layers[i]
					continue nextLayer
				}
			}
		}
		cpuSize += layers[i]
	}
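
	// When the caller requires the model to be fully loaded, fail fast if we could not
	// offload every requested layer or if the CPU-resident portion exceeds free system
	// memory, rather than settling for a partial offload.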
	if requireFull {
		if gpuLayers.Sum() < len(layers) && (s.options.NumGPU < 0 || gpuLayers.Sum() < s.options.NumGPU) {
			return nil, ErrLoadRequiredFull
		}

		if cpuSize > systemInfo.System.FreeMemory {
			return nil, ErrLoadRequiredFull
		}
	}

	// On Linux and Windows, over-allocating CPU memory will almost always result in an error.
	// Darwin has fully dynamic swap, so it has no direct concept of free swap space.
	if runtime.GOOS != "darwin" {
		available := systemInfo.System.FreeMemory + systemInfo.System.FreeSwap
		if cpuSize > available {
			slog.Warn("model request too large for system", "requested", format.HumanBytes2(cpuSize), "available", format.HumanBytes2(available), "total", format.HumanBytes2(systemInfo.System.TotalMemory), "free", format.HumanBytes2(systemInfo.System.FreeMemory), "swap", format.HumanBytes2(systemInfo.System.FreeSwap))
			return nil, fmt.Errorf("model requires more system memory (%s) than is available (%s)", format.HumanBytes2(cpuSize), format.HumanBytes2(available))
		}
	} else {
		if vramSize > systemInfo.System.TotalMemory {
			// disable partial offloading when model is greater than total system memory as this
			// can lead to locking up the system
			s.options.NumGPU = 0
			gpuLayers = ml.GPULayersList{}
		}
	}

	if gpuLayers.Sum() == 0 {
		slog.Debug("insufficient VRAM to load any model layers")
	}

	return gpuLayers, nil
}
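
// Note: assignLayers below may make two attempts. If the whole model does not fit on
// the first pass, it drops the final entry in layers (the output layer) from
// consideration and retries, so the output layer is the first thing kept on the CPU.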

// assignLayers packs the maximum number of layers onto the smallest set of GPUs and comes up with a layer assignment.
func assignLayers(layers []uint64, gpus discover.GpuInfoList, requestedLayers int, lastUsedGPU int) (gpuLayers ml.GPULayersList) {
	// If we can't fit everything then prefer offloading layers other than the output layer
	for range 2 {
		// requestedLayers may be -1 if nothing was requested
		requestedLayers = min(len(layers), requestedLayers)

		if !envconfig.SchedSpread() {
			for i := lastUsedGPU; i < len(gpus); i++ {
				// Try to pack things into as few GPUs as possible
				forceRequest := i == len(gpus)-1
				gpuLayers = findBestFit(layers, gpus[:i+1], requestedLayers, forceRequest)
				if gpuLayers.Sum() == len(layers) || gpuLayers.Sum() == requestedLayers {
					break
				}
			}
		} else {
			gpuLayers = findBestFit(layers, gpus, requestedLayers, true)
		}

		// We only stop if we've gotten all of the layers - even if we got requestedLayers, we still
		// might want to try dropping the output layer.
		if gpuLayers.Sum() == len(layers) {
			return gpuLayers
		}

		layers = layers[:len(layers)-1]
	}

	return gpuLayers
}
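
// For illustration (hypothetical numbers): with two GPUs reporting 12 GiB and 6 GiB
// of usable free memory, a capacity factor of 1.0 lets greedyFit fill each device
// completely, while a factor of 0.5 caps them at 6 GiB and 3 GiB. findBestFit looks
// for the smallest factor that still fits as many layers as a factor of 1.0, which
// balances the layers across the GPUs as evenly as possible.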

// findBestFit binary searches to find the smallest capacity factor that can fit
// the max number of layers. The capacity factor is multiplied by the free space on
// each GPU and a small one will force even balancing.
func findBestFit(layers []uint64, gpus discover.GpuInfoList, requestedLayers int, forceRequest bool) (gpuLayers ml.GPULayersList) {
	var high float32 = 1
	var low float32 = 0

	// If we need to fulfill the requested number of layers, pretend we have almost infinite VRAM
	if requestedLayers >= 0 && forceRequest {
		high = 1000
	}

	bestAssignments := greedyFit(layers, gpus, high, requestedLayers)
	maxNumGPU := bestAssignments.Sum()
	if maxNumGPU == 0 {
		return bestAssignments
	}

	for high-low > 1e-6 {
		mid := (low + high) / 2
		assignments := greedyFit(layers, gpus, mid, requestedLayers)
		if assignments.Sum() == maxNumGPU {
			high = mid
			bestAssignments = assignments
		} else {
			low = mid
		}
	}

	return bestAssignments
}
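
// The search above maintains the invariant that a capacity factor of high always fits
// maxNumGPU layers while low does not, so it converges on the tightest factor to
// within the 1e-6 tolerance.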
// greedyFit assigns layers incrementally to GPUs, spilling over as each runs out of free space
func greedyFit(layers []uint64, gpus discover.GpuInfoList, capacity float32, requestedLayers int) (gpuLayers ml.GPULayersList) {
	device := len(gpus) - 1
	gpuLayers = ml.GPULayersList{{ID: gpus[device].ID}}
	freeSpace := uint64(float32(gpus[device].FreeMemory) * capacity)
	for i := len(layers) - 1; i >= 0; i-- {
		if requestedLayers >= 0 && len(layers)-1-i >= requestedLayers {
			break
		}

		for {
			if layers[i] <= freeSpace {
				gpuLayers[0].Layers = append([]int{i}, gpuLayers[0].Layers...)
				freeSpace -= layers[i]
				break
			}

			device--
			if device < 0 {
				return gpuLayers
			}

			gpuLayers = append(ml.GPULayersList{{ID: gpus[device].ID}}, gpuLayers...)
			freeSpace = uint64(float32(gpus[device].FreeMemory) * capacity)
		}
	}

	return gpuLayers
}
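
// The strategy, reduced to plain slices (a minimal standalone sketch with
// hypothetical per-layer sizes and per-GPU budgets; fit is illustrative and
// not part of this package):
//
//	func fit(sizes, budgets []uint64) [][]int {
//		out := [][]int{nil} // one layer bucket per used GPU, last GPU first
//		g := len(budgets) - 1
//		free := budgets[g]
//		for i := len(sizes) - 1; i >= 0; i-- {
//			for sizes[i] > free { // spill over to the previous GPU
//				if g--; g < 0 {
//					return out // out of GPUs; remaining layers stay behind
//				}
//				out = append([][]int{nil}, out...)
//				free = budgets[g]
//			}
//			out[0] = append([]int{i}, out[0]...)
//			free -= sizes[i]
//		}
//		return out
//	}
//
// With six layers of size 4 and budgets {10, 10}, layers 4-5 land on the last
// GPU, 2-3 on the one before it, and 0-1 are not offloaded at all.
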
// waitUntilRunnerLaunched polls until the runner subprocess is alive enough
// to respond to status requests
func (s *llmServer) waitUntilRunnerLaunched(ctx context.Context) error {
	for {
		_, err := s.getServerStatus(ctx)
		if err == nil {
			break
		}

		t := time.NewTimer(10 * time.Millisecond)
		select {
		case <-t.C:
			continue
		case <-ctx.Done():
			return ctx.Err()
		}
	}

	return nil
}
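
// The timer-plus-select pairing is the standard way to keep a poll loop
// cancellable; the same shape with a hypothetical ready predicate (sketch only):
//
//	for !ready() {
//		select {
//		case <-time.After(10 * time.Millisecond):
//			// poll again
//		case <-ctx.Done():
//			return ctx.Err()
//		}
//	}
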
// initModel sends a load request to the runner based on the request operation (fit, alloc, commit)
// and parameters
func (s *llmServer) initModel(ctx context.Context, req LoadRequest, operation LoadOperation) (*LoadResponse, error) {
	req.Operation = operation

	data, err := json.Marshal(req)
	if err != nil {
		return nil, fmt.Errorf("error marshaling load data: %w", err)
	}

	r, err := http.NewRequestWithContext(ctx, http.MethodPost, fmt.Sprintf("http://127.0.0.1:%d/load", s.port), bytes.NewBuffer(data))
	if err != nil {
		return nil, fmt.Errorf("error creating load request: %w", err)
	}
	r.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(r)
	if err != nil {
		return nil, fmt.Errorf("do load request: %w", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, fmt.Errorf("read load response: %w", err)
	}

	if resp.StatusCode >= 400 {
		log.Printf("llm load error: %s", body)
		return nil, fmt.Errorf("%s", body)
	}

	var llmResp LoadResponse
	if err := json.Unmarshal(body, &llmResp); err != nil {
		return nil, fmt.Errorf("unmarshal load response: %w", err)
	}

	return &llmResp, nil
}
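
// Typical call shape (a sketch; op stands for one of the fit/alloc/commit
// LoadOperation values, whose exact names live with the LoadOperation type):
//
//	resp, err := s.initModel(ctx, req, op)
//	if err != nil {
//		return err
//	}
//	_ = resp // the runner's answer for that operation
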
type ServerStatus int

const ( // iota is reset to 0
	ServerStatusReady ServerStatus = iota
	ServerStatusNoSlotsAvailable
	ServerStatusLaunched
	ServerStatusLoadingModel
	ServerStatusNotResponding
	ServerStatusError
)
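
// Since the constants are iota-numbered in declaration order, the integer
// values follow directly, e.g.:
//
//	var s ServerStatus = ServerStatusLaunched
//	fmt.Println(int(s), s) // 2 llm server launched
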
func (s ServerStatus) String() string {
	switch s {
	case ServerStatusReady:
		return "llm server ready"
	case ServerStatusNoSlotsAvailable:
		return "llm busy - no slots available"
	case ServerStatusLaunched:
		return "llm server launched"
	case ServerStatusLoadingModel:
		return "llm server loading model"
	case ServerStatusNotResponding:
		return "llm server not responding"
	default:
		return "llm server error"
	}
}

type ServerStatusResponse struct {
	Status   ServerStatus `json:"status"`
	Progress float32      `json:"progress"`
}
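
// On the wire this is plain JSON from the runner's health endpoint; a
// mid-load reply might look like (values illustrative):
//
//	{"status":3,"progress":0.42}
//
// where 3 is ServerStatusLoadingModel and progress is the fraction of the
// model loaded so far.
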
func (s *llmServer) getServerStatus(ctx context.Context) (ServerStatus, error) {
	// Fail fast if it's exited
	if s.cmd.ProcessState != nil {
		msg := ""
		if s.status != nil && s.status.LastErrMsg != "" {
			msg = s.status.LastErrMsg
		}
		if s.cmd.ProcessState.ExitCode() == -1 {
			// Most likely a signal killed it, log some more details to try to help troubleshoot
			slog.Warn("llama runner process no longer running", "sys", s.cmd.ProcessState.Sys(), "string", s.cmd.ProcessState)
		}
		return ServerStatusError, fmt.Errorf("llama runner process no longer running: %d %s", s.cmd.ProcessState.ExitCode(), msg)
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, fmt.Sprintf("http://127.0.0.1:%d/health", s.port), nil)
	if err != nil {
		return ServerStatusError, fmt.Errorf("error creating GET request: %v", err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		if errors.Is(err, context.DeadlineExceeded) {
			return ServerStatusNotResponding, errors.New("server not responding")
		}
		if strings.Contains(err.Error(), "connection refused") {
			return ServerStatusNotResponding, errors.New("connection refused")
		}
		return ServerStatusError, fmt.Errorf("health resp: %w", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return ServerStatusError, fmt.Errorf("read health response: %w", err)
	}

	var ssr ServerStatusResponse
	if err := json.Unmarshal(body, &ssr); err != nil {
		return ServerStatusError, fmt.Errorf("unmarshal health response: %w", err)
	}

	switch ssr.Status {
	case ServerStatusLoadingModel:
		s.loadProgress = ssr.Progress
		return ssr.Status, nil
	case ServerStatusLaunched, ServerStatusReady, ServerStatusNoSlotsAvailable:
		return ssr.Status, nil
	default:
		return ssr.Status, fmt.Errorf("server error: %+v", ssr)
	}
}

// getServerStatusRetry will retry if ServerStatusNoSlotsAvailable is received
func (s *llmServer) getServerStatusRetry(ctx context.Context) (ServerStatus, error) {
	var retries int
	for {
		status, err := s.getServerStatus(ctx)
		if err != nil {
			return status, err
		}

		if status == ServerStatusNoSlotsAvailable {
			if retries >= 10 {
				return status, fmt.Errorf("no slots available after %d retries", retries)
			}

			time.Sleep(5 * time.Millisecond)
			retries++
			continue
		}

		return status, nil
	}
}
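
// The retry budget is deliberately small: at most 10 retries with a 5ms sleep
// between them, i.e. roughly 10 * 5ms = 50ms of waiting (plus the status
// calls themselves) before giving up with the no-slots error.
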
func (s *llmServer) Ping(ctx context.Context) error {
	_, err := s.getServerStatus(ctx)
	if err != nil {
		slog.Debug("server unhealthy", "error", err)
		return err
	}
	return nil
}

func (s *llmServer) WaitUntilRunning(ctx context.Context) error {
	stallDuration := envconfig.LoadTimeout()    // give up if no progress happens for this long
	stallTimer := time.Now().Add(stallDuration) // give up if we stall

	slog.Info("waiting for llama runner to start responding")
	var lastStatus ServerStatus = -1
	fullyLoaded := false

	for {
		select {
		case <-ctx.Done():
			slog.Warn("client connection closed before server finished loading, aborting load")
			return fmt.Errorf("timed out waiting for llama runner to start: %w", ctx.Err())
		case err := <-s.done:
			return fmt.Errorf("llama runner process has terminated: %w", err)
		default:
		}
		if time.Now().After(stallTimer) {
			// timeout
			msg := ""
			if s.status != nil && s.status.LastErrMsg != "" {
				msg = s.status.LastErrMsg
			}
			return fmt.Errorf("timed out waiting for llama runner to start - progress %0.2f - %s", s.loadProgress, msg)
		}
		if s.cmd.ProcessState != nil {
			msg := ""
			if s.status != nil && s.status.LastErrMsg != "" {
				msg = s.status.LastErrMsg
			}
			return fmt.Errorf("llama runner process no longer running: %d %s", s.cmd.ProcessState.ExitCode(), msg)
		}
		ctx, cancel := context.WithTimeout(ctx, 200*time.Millisecond)
		defer cancel()
		priorProgress := s.loadProgress
		status, _ := s.getServerStatus(ctx)
		if lastStatus != status && status != ServerStatusReady {
			// Only log on status changes
			slog.Info("waiting for server to become available", "status", status)
		}
		switch status {
		case ServerStatusReady:
			slog.Info(fmt.Sprintf("llama runner started in %0.2f seconds", time.Since(s.loadStart).Seconds()))
			return nil
		default:
			lastStatus = status
			// Reset the timer as long as we're making forward progress on the load
			if priorProgress != s.loadProgress {
				slog.Debug(fmt.Sprintf("model load progress %0.2f", s.loadProgress))
				stallTimer = time.Now().Add(stallDuration)
			} else if !fullyLoaded && int(s.loadProgress*100.0) >= 100 {
				slog.Debug("model load completed, waiting for server to become available", "status", status)
				stallTimer = time.Now().Add(stallDuration)
				fullyLoaded = true
			}
			time.Sleep(time.Millisecond * 250)
			continue
		}
	}
}
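
// The stall timer is a sliding deadline rather than a fixed timeout: progress
// pushes it forward, silence lets it expire. The shape, reduced to a sketch
// with hypothetical helpers (errStalled, madeProgress are illustrative):
//
//	deadline := time.Now().Add(stall)
//	for {
//		if time.Now().After(deadline) {
//			return errStalled
//		}
//		if madeProgress() {
//			deadline = time.Now().Add(stall) // progress buys more time
//		}
//		time.Sleep(250 * time.Millisecond)
//	}
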
func (s *llmServer) Pid() int {
	if s.cmd != nil && s.cmd.Process != nil {
		return s.cmd.Process.Pid
	}
	return -1
}

var grammarJSON = `
root   ::= object
value  ::= object | array | string | number | ("true" | "false" | "null") ws

object ::=
  "{" ws (
            string ":" ws value
    ("," ws string ":" ws value)*
  )? ws "}"

array  ::=
  "[" ws (
            value
    ("," ws value)*
  )? ws "]"

string ::=
  "\"" (
    [^"\\\x7F\x00-\x1F] |
    "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F]) # escapes
  )* "\""

number ::= ("-"? ([0-9] | [1-9] [0-9]*)) ("." [0-9]+)? ([eE] [-+]? [0-9]+)?

# Optional space: by convention, applied in this grammar after literal chars when allowed
ws ::= ([ \t\n] ws)?
`
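
// Because root is pinned to object, this grammar constrains sampling to a
// top-level JSON object: {"n": 1, "ok": true} matches, while a bare array
// such as [1, 2] or a single-quoted string does not. Arrays, numbers, and the
// true/false/null literals are only reachable inside an object, via value.
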
const maxBufferSize = 512 * format.KiloByte

type ImageData struct {
	Data []byte `json:"data"`
	ID   int    `json:"id"`
}


type CompletionRequest struct {
	Prompt  string
	Format  json.RawMessage
	Images  []ImageData
	Options *api.Options

	Grammar string // set before sending the request to the subprocess
}
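
// A minimal sketch of how a caller might fill in a request (values are
// illustrative, not from this file); Grammar is left empty because
// Completion derives it from Format below:
//
//	req := CompletionRequest{
//		Prompt: "Describe this image as JSON.",
//		Format: json.RawMessage(`"json"`),
//		Images: []ImageData{{ID: 0, Data: pngBytes}}, // pngBytes: hypothetical
//	}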

// DoneReason represents the reason why a completion response is done
type DoneReason int

const (
	// DoneReasonStop indicates the completion stopped naturally
	DoneReasonStop DoneReason = iota
	// DoneReasonLength indicates the completion stopped due to length limits
	DoneReasonLength
	// DoneReasonConnectionClosed indicates the completion stopped due to the connection being closed
	DoneReasonConnectionClosed
)

func (d DoneReason) String() string {
	switch d {
	case DoneReasonLength:
		return "length"
	case DoneReasonStop:
		return "stop"
	default:
		return "" // closed
	}
}
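
// Illustrative mapping: DoneReasonLength.String() == "length",
// DoneReasonStop.String() == "stop", while DoneReasonConnectionClosed
// deliberately stringifies to "" (the "closed" default above).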

type CompletionResponse struct {
	Content            string        `json:"content"`
	DoneReason         DoneReason    `json:"done_reason"`
	Done               bool          `json:"done"`
	PromptEvalCount    int           `json:"prompt_eval_count"`
	PromptEvalDuration time.Duration `json:"prompt_eval_duration"`
	EvalCount          int           `json:"eval_count"`
	EvalDuration       time.Duration `json:"eval_duration"`
}
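
// For reference (payload shown is illustrative): each streamed line from the
// runner decodes into this struct, e.g.
//
//	{"content":"Hello","done":false}
//
// and the terminal event sets "done":true along with the prompt/eval counts
// and durations above.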

func (s *llmServer) Completion(ctx context.Context, req CompletionRequest, fn func(CompletionResponse)) error {
	slog.Debug("completion request", "images", len(req.Images), "prompt", len(req.Prompt), "format", string(req.Format))
	slog.Log(ctx, logutil.LevelTrace, "completion request", "prompt", req.Prompt)

	if len(req.Format) > 0 {
		switch string(req.Format) {
		case `null`, `""`:
			// Field was set, but "missing" a value. We accept
			// these as "not set".
			break
		case `"json"`:
			req.Grammar = grammarJSON
		default:
			if req.Format[0] != '{' {
				return fmt.Errorf("invalid format: %q; expected \"json\" or a valid JSON Schema object", req.Format)
			}

			// User provided a JSON schema
			g := llama.SchemaToGrammar(req.Format)
			if g == nil {
				return fmt.Errorf("invalid JSON schema in format")
			}
			req.Grammar = string(g)
		}
	}
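
	// To recap the branches above with example formats (illustrative):
	//
	//	Format                      Effect
	//	(empty), null, ""           no grammar constraint
	//	"json"                      req.Grammar = grammarJSON
	//	{"type":"object",...}       grammar compiled via llama.SchemaToGrammar
	//	"yaml", [1,2], 17           rejected as invalid formats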

	if req.Options == nil {
		opts := api.DefaultOptions()
		req.Options = &opts
	}

	if err := s.sem.Acquire(ctx, 1); err != nil {
		if errors.Is(err, context.Canceled) {
			slog.Info("aborting completion request due to client closing the connection")
		} else {
			slog.Error("Failed to acquire semaphore", "error", err)
		}
		return err
	}
	defer s.sem.Release(1)
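
	// s.sem bounds how many requests run against the subprocess at once
	// (presumably sized to the runner's parallel slots); Acquire blocks until
	// a slot frees up or ctx is cancelled, which the errors.Is check above
	// distinguishes from real failures.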

	// put an upper limit on num_predict to avoid the model running on forever
	if req.Options.NumPredict < 0 || req.Options.NumPredict > 10*s.options.NumCtx {
		req.Options.NumPredict = 10 * s.options.NumCtx
	}
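
	// Worked example: with NumCtx = 4096, a NumPredict of -1 ("no limit") or
	// anything above 40960 is clamped to 10*4096 = 40960 predicted tokens.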

	// Make sure the server is ready
	status, err := s.getServerStatusRetry(ctx)
	if err != nil {
		return err
	} else if status != ServerStatusReady {
		return fmt.Errorf("unexpected server status: %s", status)
	}

	// Handling JSON marshaling with special characters unescaped.
	buffer := &bytes.Buffer{}
	enc := json.NewEncoder(buffer)
	enc.SetEscapeHTML(false)

	if err := enc.Encode(req); err != nil {
		return fmt.Errorf("failed to marshal data: %v", err)
	}
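
	// Without SetEscapeHTML(false) above, encoding/json would rewrite "<",
	// ">" and "&" in the prompt as \u003c, \u003e and \u0026; disabling it
	// forwards the prompt to the runner byte-for-byte.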

	endpoint := fmt.Sprintf("http://127.0.0.1:%d/completion", s.port)
	serverReq, err := http.NewRequestWithContext(ctx, http.MethodPost, endpoint, buffer)
	if err != nil {
		return fmt.Errorf("error creating POST request: %v", err)
	}
	serverReq.Header.Set("Content-Type", "application/json")

	res, err := http.DefaultClient.Do(serverReq)
	if err != nil {
		slog.Error("post predict", "error", err)
		return errors.New("model runner has unexpectedly stopped, this may be due to resource limitations or an internal error, check ollama server logs for details")
	}
	defer res.Body.Close()

	if res.StatusCode >= 400 {
		bodyBytes, err := io.ReadAll(res.Body)
		if err != nil {
			return fmt.Errorf("failed reading llm error response: %w", err)
		}
		log.Printf("llm predict error: %s", bodyBytes)
		return fmt.Errorf("%s", bodyBytes)
	}

	scanner := bufio.NewScanner(res.Body)
	buf := make([]byte, 0, maxBufferSize)
	scanner.Buffer(buf, maxBufferSize)
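
	// bufio.Scanner otherwise caps a token at bufio.MaxScanTokenSize (64KiB);
	// a single streamed event can be larger than that, so the limit is raised
	// to maxBufferSize (512KiB) here.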

	// keep track of the last token generated, this is used to abort if the model starts looping
	var lastToken string
	var tokenRepeat int

	for scanner.Scan() {
		select {
		case <-ctx.Done():
			// This handles the request cancellation
			return ctx.Err()
		default:
			line := scanner.Bytes()
			if len(line) == 0 {
				continue
			}

			evt, ok := bytes.CutPrefix(line, []byte("data: "))
			if !ok {
				evt = line
			}
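
			// The runner streams SSE-style lines such as (illustrative)
			//
			//	data: {"content":"Hel","done":false}
			//
			// but a bare JSON line without the "data: " prefix is tolerated
			// and parsed as-is via the CutPrefix fallback above.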

			var c CompletionResponse
			if err := json.Unmarshal(evt, &c); err != nil {
				return fmt.Errorf("error unmarshalling llm prediction response: %v", err)
			}
			switch {
			case strings.TrimSpace(c.Content) == lastToken:
				tokenRepeat++
			default:
				lastToken = strings.TrimSpace(c.Content)
				tokenRepeat = 0
			}

			// 30 picked as an arbitrary max token repeat limit, modify as needed
			if tokenRepeat > 30 {
				slog.Debug("prediction aborted, token repeat limit reached")
				return ctx.Err()
			}
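
			// e.g. a model stuck emitting the same token trips this after
			// more than 30 consecutive repeats of identical trimmed content,
			// instead of running on until NumPredict is exhausted.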

			if c.Content != "" {
				fn(CompletionResponse{
					Content: c.Content,
				})
			}

			if c.Done {
				fn(c)
				return nil
			}
		}
	}
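
	// Net effect for callers: fn fires once per chunk with only Content set,
	// then one final time with the complete response (Done, DoneReason and
	// the eval counters) when the runner reports done.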

	if err := scanner.Err(); err != nil {
		if strings.Contains(err.Error(), "unexpected EOF") || strings.Contains(err.Error(), "forcibly closed") {
			s.Close()
			var msg string
			if s.status != nil && s.status.LastErrMsg != "" {
				msg = s.status.LastErrMsg
			} else {
				msg = err.Error()
			}
			return fmt.Errorf("an error was encountered while running the model: %s", msg)
		}

		return fmt.Errorf("error reading llm response: %v", err)
	}

	return nil
}
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
								
							 
						 
					
						
							
								
									
										
										
										
											2024-08-12 02:57:10 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
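// EmbeddingRequest and EmbeddingResponse mirror the JSON bodies exchanged
// with the runner's /embedding endpoint: the request carries the raw input
// text and the response carries the resulting vector.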
type EmbeddingRequest struct {
	Content string `json:"content"`
}

type EmbeddingResponse struct {
	Embedding []float32 `json:"embedding"`
}

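// Embedding computes an embedding vector for input by forwarding it to the
// runner process. A hypothetical caller, assuming a ready *llmServer s:
//
//	vec, err := s.Embedding(ctx, "The quick brown fox")
//	if err != nil {
//		return err
//	}
//	slog.Debug("embedding", "dims", len(vec))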
func (s *llmServer) Embedding(ctx context.Context, input string) ([]float32, error) {
	slog.Log(ctx, logutil.LevelTrace, "embedding request", "input", input)

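	// Admission control: the semaphore bounds how many requests the runner
	// handles at once; a canceled context here means the client went away.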
	if err := s.sem.Acquire(ctx, 1); err != nil {
		if errors.Is(err, context.Canceled) {
			slog.Info("aborting embedding request due to client closing the connection")
		} else {
			slog.Error("Failed to acquire semaphore", "error", err)
		}
		return nil, err
	}
	defer s.sem.Release(1)

	// Make sure the server is ready
	status, err := s.getServerStatusRetry(ctx)
	if err != nil {
		return nil, err
	} else if status != ServerStatusReady {
		return nil, fmt.Errorf("unexpected server status: %s", status)
	}

	data, err := json.Marshal(EmbeddingRequest{Content: input})
	if err != nil {
		return nil, fmt.Errorf("error marshaling embed data: %w", err)
	}

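	// The runner listens on loopback; address the request to its local port.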
	r, err := http.NewRequestWithContext(ctx, http.MethodPost, fmt.Sprintf("http://127.0.0.1:%d/embedding", s.port), bytes.NewBuffer(data))
	if err != nil {
		return nil, fmt.Errorf("error creating embed request: %w", err)
	}
	r.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(r)
	if err != nil {
		return nil, fmt.Errorf("do embedding request: %w", err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, fmt.Errorf("error reading embed response: %w", err)
	}

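	// On HTTP errors the runner puts its error text in the response body,
	// so surface the body verbatim.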
	if resp.StatusCode >= 400 {
		log.Printf("llm embedding error: %s", body)
		return nil, fmt.Errorf("%s", body)
	}

	var e EmbeddingResponse
	if err := json.Unmarshal(body, &e); err != nil {
		return nil, fmt.Errorf("unmarshal embed response: %w", err)
	}

	return e.Embedding, nil
}

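// TokenizeRequest and TokenizeResponse carry the tokenize payloads
// (content in, token IDs out); the Tokenize method below works in-process
// rather than calling out to the runner.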
type TokenizeRequest struct {
	Content string `json:"content"`
}

type TokenizeResponse struct {
	Tokens []int `json:"tokens"`
}

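// Tokenize converts content into model token IDs without adding BOS/EOS
// tokens, using whichever tokenizer backend the server was loaded with.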
func (s *llmServer) Tokenize(ctx context.Context, content string) ([]int, error) {
	s.llamaModelLock.Lock()
	defer s.llamaModelLock.Unlock()

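	// Prefer the CGo llama.cpp model when one is loaded; otherwise fall back
	// to the in-process Go text processor.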
	if s.llamaModel != nil {
		return s.llamaModel.Tokenize(content, false, true)
	}
	if s.textProcessor != nil {
		tokens, err := s.textProcessor.Encode(content, false)
		if err != nil {
			return nil, err
		}
		toks := make([]int, len(tokens))
		for i, t := range tokens {
			toks[i] = int(t)
		}
		return toks, nil
	}

	// not reached: a server should always have either llamaModel or
	// textProcessor configured
	return nil, fmt.Errorf("no tokenizer configured")
}

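// DetokenizeRequest and DetokenizeResponse are the inverse payload shapes:
// token IDs in, text out.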
type DetokenizeRequest struct {
	Tokens []int `json:"tokens"`
}

type DetokenizeResponse struct {
	Content string `json:"content"`
}

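// Detokenize converts token IDs back into text. A hypothetical round trip,
// assuming a ready *llmServer s:
//
//	toks, err := s.Tokenize(ctx, "hello world")
//	if err != nil {
//		return err
//	}
//	text, err := s.Detokenize(ctx, toks)
//	// text should match the input up to tokenizer normalization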
func (s *llmServer) Detokenize(ctx context.Context, tokens []int) (string, error) {
	s.llamaModelLock.Lock()
	defer s.llamaModelLock.Unlock()

	if s.llamaModel != nil {
cache miss due to worse GPU VRAM locality. This means that
performance is generally better overall for multi-user scenarios
(better input cache hit rate, locality was relatively poor already).
But worse for single users (input cache hit rate is about the same,
locality is now worse).
This defaults the policy back to the old one to avoid a regression
but keeps the new one available through an environment variable
OLLAMA_MULTIUSER_CACHE. This is left undocumented as the goal is
to improve this in the future to get the best of both worlds
without user configuration.
For inputs that result in cache misses, on Nvidia/Linux this
change improves performance by 31% for prompt processing and
13% for token generation.
* runner.go: Increase size of response channel
Generally the CPU can easily keep up with handling reponses that
are generated but there's no reason not to let generation continue
and handle things in larger batches if needed.
* llama: Add CI to verify all vendored changes have patches (#7066)
Make sure we don't accidentally merge changes in the vendored code
that aren't also reflected in the patches.
* llama: adjust clip patch for mingw utf-16 (#7065)
* llama: adjust clip patch for mingw utf-16
* llama: ensure static linking of runtime libs
Avoid runtime dependencies on non-standard libraries
* runner.go: Enable llamafile (all platforms) and BLAS (Mac OS)
These are two features that are shown on llama.cpp's system info
that are currently different between the two runners. On my test
systems the performance difference is very small to negligible
but it is probably still good to equalize the features.
* llm: Don't add BOS/EOS for tokenize requests
This is consistent with what server.cpp currently does. It affects
things like token processing counts for embedding requests.
* runner.go: Don't cache prompts for embeddings
Our integration with server.cpp implicitly disables prompt caching
because it is not part of the JSON object being parsed, this makes
the Go runner behavior similarly.
Prompt caching has been seen to affect the results of text completions
on certain hardware. The results are not wrong either way but they
are non-deterministic. However, embeddings seem to be affected even
on hardware that does not show this behavior for completions. For
now, it is best to maintain consistency with the existing behavior.
* runner.go: Adjust debug log levels
Add system info printed at startup and quiet down noisier logging.
* llama: fix compiler flag differences (#7082)
Adjust the flags for the new Go server to more closely match the
generate flow
* llama: refine developer docs (#7121)
* llama: doc and example clean up (#7122)
* llama: doc and example clean up
* llama: Move new dockerfile into llama dir
Temporary home until we fully transition to the Go server
* llama: runner doc cleanup
* llama.go: Add description for Tokenize error case
---------
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
											 
										 
										
											2024-10-08 23:53:54 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
										var  resp  string 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
										for  _ ,  token  :=  range  tokens  { 
							 
						 
					
						
							
								
									
										
										
										
											2025-03-05 01:03:46 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
											resp  +=  s . llamaModel . TokenToPiece ( token ) 
							 
						 
					
						
							
								
									
										
											 
										
											
												Re-introduce the `llama` package (#5034)
* Re-introduce the llama package
This PR brings back the llama package, making it possible to call llama.cpp and
ggml APIs from Go directly via CGo. This has a few advantages:
- C APIs can be called directly from Go without needing to use the previous
  "server" REST API
- On macOS and for CPU builds on Linux and Windows, Ollama can be built without
  a go generate ./... step, making it easy to get up and running to hack on
  parts of Ollama that don't require fast inference
- Faster build times for AVX,AVX2,CUDA and ROCM (a full build of all runners
  takes <5 min on a fast CPU)
- No git submodule making it easier to clone and build from source
This is a big PR, but much of it is vendor code except for:
- llama.go CGo bindings
- example/: a simple example of running inference
- runner/: a subprocess server designed to replace the llm/ext_server package
- Makefile an as minimal as possible Makefile to build the runner package for
  different targets (cpu, avx, avx2, cuda, rocm)
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
* cache: Clear old KV cache entries when evicting a slot
When forking a cache entry, if no empty slots are available we
evict the least recently used one and copy over the KV entries
from the closest match. However, this copy does not overwrite
existing values but only adds new ones. Therefore, we need to
clear the old slot first.
This change fixes two issues:
 - The KV cache fills up and runs out of space even though we think
   we are managing it correctly
 - Performance gets worse over time as we use new cache entries that
   are not hot in the processor caches
* doc: explain golang objc linker warning (#6830)
* llama: gather transitive dependencies for rocm for dist packaging (#6848)
* Refine go server makefiles to be more DRY (#6924)
This breaks up the monolithic Makefile for the Go based runners into a
set of utility files as well as recursive Makefiles for the runners.
Files starting with the name "Makefile" are buildable, while files that
end with ".make" are utilities to include in other Makefiles.  This
reduces the amount of nearly identical targets and helps set a pattern
for future community contributions for new GPU runner architectures.
When we are ready to switch over to the Go runners, these files should
move to the top of the repo, and we should add targets for the main CLI,
as well as a helper "install" (put all the built binaries on the local
system in a runnable state) and "dist" target (generate the various
tar/zip files for distribution) for local developer use.
* llama: don't create extraneous directories (#6988)
* llama: Exercise the new build in CI (#6989)
Wire up some basic sanity testing in CI for the Go runner.  GPU runners are not covered yet.
* llama: Refine developer docs for Go server (#6842)
This enhances the documentation for development focusing on the new Go
server.  After we complete the transition further doc refinements
can remove the "transition" discussion.
* runner.go: Allocate batches for all sequences during init
We should tell the model that we could have full batches for all
sequences. We already do this when we allocate the batches but it was
missed during initialization.
* llama.go: Don't return nil from Tokenize on zero length input
Potentially receiving nil in a non-error condition is surprising to
most callers - it's better to return an empty slice.
* runner.go: Remove stop tokens from cache
If the last token is EOG then we don't return this and it isn't
present in the cache (because it was never submitted to Decode).
This works well for extending the cache entry with a new sequence.
However, for multi-token stop sequences, we won't return any of the
tokens but all but the last one will be in the cache. This means
when the conversation continues the cache will contain tokens that
don't overlap with the new prompt.
This works (we will pick up the portion where there is overlap) but
it causes unnecessary cache thrashing because we will fork the original
cache entry as it is not a perfect match.
By trimming the cache to the tokens that we actually return this
issue can be avoided.
* runner.go: Simplify flushing of pending tokens
* runner.go: Update TODOs
* runner.go: Don't panic when processing sequences
If there is an error processing a sequence, we should return a
clean HTTP error back to Ollama rather than panicing. This will
make us more resilient to transient failures.
Panics can still occur during startup as there is no way to serve
requests if that fails.
Co-authored-by: jmorganca <jmorganca@gmail.com>
* runner.go: More accurately capture timings
Currently prompt processing time doesn't capture the that it takes
to tokenize the input, only decoding time. We should capture the
full process to more accurately reflect reality. This is especially
true once we start processing images where the initial processing
can take significant time. This is also more consistent with the
existing C++ runner.
* runner.go: Support for vision models
In addition to bringing feature parity with the C++ runner, this also
incorporates several improvements:
 - Cache prompting works with images, avoiding the need to re-decode
   embeddings for every message in a conversation
 - Parallelism is supported, avoiding the need to restrict to one
   sequence at a time. (Though for now Ollama will not schedule
   them while we might need to fall back to the old runner.)
Co-authored-by: jmorganca <jmorganca@gmail.com>
* runner.go: Move Unicode checking code and add tests
* runner.go: Export external cache members
Runner and cache are in the same package so the change doesn't
affect anything but it is more internally consistent.
* runner.go: Image embedding cache
Generating embeddings from images can take significant time (on
my machine between 100ms and 8s depending on the model). Although
we already cache the result of decoding these images, the embeddings
need to be regenerated every time. This is not necessary if we get
the same image over and over again, for example, during a conversation.
This currently uses a very small cache with a very simple algorithm
but it is easy to improve as is warranted.
* llama: catch up on patches
Carry forward solar-pro and cli-unicode patches
* runner.go: Don't re-allocate memory for every batch
We can reuse memory allocated from batch to batch since batch
size is fixed. This both saves the cost of reallocation as well
keeps the cache lines hot.
This results in a roughly 1% performance improvement for token
generation with Nvidia GPUs on Linux.
* runner.go: Default to classic input cache policy
The input cache as part of the go runner implemented a cache
policy that aims to maximize hit rate in both single and multi-
user scenarios. When there is a cache hit, the response is
very fast.
However, performance is actually slower when there is an input
cache miss due to worse GPU VRAM locality. This means that
performance is generally better overall for multi-user scenarios
(better input cache hit rate, locality was relatively poor already).
But worse for single users (input cache hit rate is about the same,
locality is now worse).
This defaults the policy back to the old one to avoid a regression
but keeps the new one available through an environment variable
OLLAMA_MULTIUSER_CACHE. This is left undocumented as the goal is
to improve this in the future to get the best of both worlds
without user configuration.
For inputs that result in cache misses, on Nvidia/Linux this
change improves performance by 31% for prompt processing and
13% for token generation.
* runner.go: Increase size of response channel
Generally the CPU can easily keep up with handling reponses that
are generated but there's no reason not to let generation continue
and handle things in larger batches if needed.
* llama: Add CI to verify all vendored changes have patches (#7066)
Make sure we don't accidentally merge changes in the vendored code
that aren't also reflected in the patches.
* llama: adjust clip patch for mingw utf-16 (#7065)
* llama: adjust clip patch for mingw utf-16
* llama: ensure static linking of runtime libs
Avoid runtime dependencies on non-standard libraries
* runner.go: Enable llamafile (all platforms) and BLAS (Mac OS)
These are two features that are shown on llama.cpp's system info
that are currently different between the two runners. On my test
systems the performance difference is very small to negligible
but it is probably still good to equalize the features.
* llm: Don't add BOS/EOS for tokenize requests
This is consistent with what server.cpp currently does. It affects
things like token processing counts for embedding requests.
* runner.go: Don't cache prompts for embeddings
Our integration with server.cpp implicitly disables prompt caching
because it is not part of the JSON object being parsed, this makes
the Go runner behavior similarly.
Prompt caching has been seen to affect the results of text completions
on certain hardware. The results are not wrong either way but they
are non-deterministic. However, embeddings seem to be affected even
on hardware that does not show this behavior for completions. For
now, it is best to maintain consistency with the existing behavior.
* runner.go: Adjust debug log levels
Add system info printed at startup and quiet down noisier logging.
* llama: fix compiler flag differences (#7082)
Adjust the flags for the new Go server to more closely match the
generate flow
* llama: refine developer docs (#7121)
* llama: doc and example clean up (#7122)
* llama: doc and example clean up
* llama: Move new dockerfile into llama dir
Temporary home until we fully transition to the Go server
* llama: runner doc cleanup
* llama.go: Add description for Tokenize error case
---------
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
											 
										 
										
											2024-10-08 23:53:54 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
										} 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
										return  resp ,  nil 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
									} 
							 
						 
					
						
							
								
									
										
										
										
											2025-03-05 01:03:46 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
									if  s . textProcessor  !=  nil  { 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
										toks  :=  make ( [ ] int32 ,  len ( tokens ) ) 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
										for  i ,  t  :=  range  tokens  { 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
											toks [ i ]  =  int32 ( t ) 
							 
						 
					
						
							
								
									
										
											 
										
											
												Re-introduce the `llama` package (#5034)
* Re-introduce the llama package
This PR brings back the llama package, making it possible to call llama.cpp and
ggml APIs from Go directly via CGo. This has a few advantages:
- C APIs can be called directly from Go without needing to use the previous
  "server" REST API
- On macOS and for CPU builds on Linux and Windows, Ollama can be built without
  a go generate ./... step, making it easy to get up and running to hack on
  parts of Ollama that don't require fast inference
- Faster build times for AVX,AVX2,CUDA and ROCM (a full build of all runners
  takes <5 min on a fast CPU)
- No git submodule making it easier to clone and build from source
This is a big PR, but much of it is vendor code except for:
- llama.go CGo bindings
- example/: a simple example of running inference
- runner/: a subprocess server designed to replace the llm/ext_server package
- Makefile an as minimal as possible Makefile to build the runner package for
  different targets (cpu, avx, avx2, cuda, rocm)
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
* cache: Clear old KV cache entries when evicting a slot
When forking a cache entry, if no empty slots are available we
evict the least recently used one and copy over the KV entries
from the closest match. However, this copy does not overwrite
existing values but only adds new ones. Therefore, we need to
clear the old slot first.
This change fixes two issues:
 - The KV cache fills up and runs out of space even though we think
   we are managing it correctly
 - Performance gets worse over time as we use new cache entries that
   are not hot in the processor caches
* doc: explain golang objc linker warning (#6830)
* llama: gather transitive dependencies for rocm for dist packaging (#6848)
* Refine go server makefiles to be more DRY (#6924)
This breaks up the monolithic Makefile for the Go based runners into a
set of utility files as well as recursive Makefiles for the runners.
Files starting with the name "Makefile" are buildable, while files that
end with ".make" are utilities to include in other Makefiles.  This
reduces the amount of nearly identical targets and helps set a pattern
for future community contributions for new GPU runner architectures.
When we are ready to switch over to the Go runners, these files should
move to the top of the repo, and we should add targets for the main CLI,
as well as a helper "install" (put all the built binaries on the local
system in a runnable state) and "dist" target (generate the various
tar/zip files for distribution) for local developer use.
* llama: don't create extraneous directories (#6988)
* llama: Exercise the new build in CI (#6989)
Wire up some basic sanity testing in CI for the Go runner.  GPU runners are not covered yet.
* llama: Refine developer docs for Go server (#6842)
This enhances the documentation for development focusing on the new Go
server.  After we complete the transition further doc refinements
can remove the "transition" discussion.
* runner.go: Allocate batches for all sequences during init
We should tell the model that we could have full batches for all
sequences. We already do this when we allocate the batches but it was
missed during initialization.
* llama.go: Don't return nil from Tokenize on zero length input
Potentially receiving nil in a non-error condition is surprising to
most callers - it's better to return an empty slice.
* runner.go: Remove stop tokens from cache
If the last token is EOG then we don't return this and it isn't
present in the cache (because it was never submitted to Decode).
This works well for extending the cache entry with a new sequence.
However, for multi-token stop sequences, we won't return any of the
tokens but all but the last one will be in the cache. This means
when the conversation continues the cache will contain tokens that
don't overlap with the new prompt.
This works (we will pick up the portion where there is overlap) but
it causes unnecessary cache thrashing because we will fork the original
cache entry as it is not a perfect match.
By trimming the cache to the tokens that we actually return this
issue can be avoided.
* runner.go: Simplify flushing of pending tokens
* runner.go: Update TODOs
* runner.go: Don't panic when processing sequences
If there is an error processing a sequence, we should return a
clean HTTP error back to Ollama rather than panicing. This will
make us more resilient to transient failures.
Panics can still occur during startup as there is no way to serve
requests if that fails.
Co-authored-by: jmorganca <jmorganca@gmail.com>
* runner.go: More accurately capture timings
Currently prompt processing time doesn't capture the that it takes
to tokenize the input, only decoding time. We should capture the
full process to more accurately reflect reality. This is especially
true once we start processing images where the initial processing
can take significant time. This is also more consistent with the
existing C++ runner.
* runner.go: Support for vision models
In addition to bringing feature parity with the C++ runner, this also
incorporates several improvements:
 - Cache prompting works with images, avoiding the need to re-decode
   embeddings for every message in a conversation
 - Parallelism is supported, avoiding the need to restrict to one
   sequence at a time. (Though for now Ollama will not schedule
   them while we might need to fall back to the old runner.)
Co-authored-by: jmorganca <jmorganca@gmail.com>
* runner.go: Move Unicode checking code and add tests
* runner.go: Export external cache members
Runner and cache are in the same package so the change doesn't
affect anything but it is more internally consistent.
* runner.go: Image embedding cache
Generating embeddings from images can take significant time (on
my machine between 100ms and 8s depending on the model). Although
we already cache the result of decoding these images, the embeddings
need to be regenerated every time. This is not necessary if we get
the same image over and over again, for example, during a conversation.
This currently uses a very small cache with a very simple algorithm
but it is easy to improve as is warranted.
* llama: catch up on patches
Carry forward solar-pro and cli-unicode patches
* runner.go: Don't re-allocate memory for every batch
We can reuse memory allocated from batch to batch since batch
size is fixed. This both saves the cost of reallocation as well
keeps the cache lines hot.
This results in a roughly 1% performance improvement for token
generation with Nvidia GPUs on Linux.
* runner.go: Default to classic input cache policy
The input cache as part of the go runner implemented a cache
policy that aims to maximize hit rate in both single and multi-
user scenarios. When there is a cache hit, the response is
very fast.
However, performance is actually slower when there is an input
cache miss due to worse GPU VRAM locality. This means that
performance is generally better overall for multi-user scenarios
(better input cache hit rate, locality was relatively poor already).
But worse for single users (input cache hit rate is about the same,
locality is now worse).
This defaults the policy back to the old one to avoid a regression
but keeps the new one available through an environment variable
OLLAMA_MULTIUSER_CACHE. This is left undocumented as the goal is
to improve this in the future to get the best of both worlds
without user configuration.
For inputs that result in cache misses, on Nvidia/Linux this
change improves performance by 31% for prompt processing and
13% for token generation.
* runner.go: Increase size of response channel
Generally the CPU can easily keep up with handling reponses that
are generated but there's no reason not to let generation continue
and handle things in larger batches if needed.
* llama: Add CI to verify all vendored changes have patches (#7066)
Make sure we don't accidentally merge changes in the vendored code
that aren't also reflected in the patches.
* llama: adjust clip patch for mingw utf-16 (#7065)
* llama: adjust clip patch for mingw utf-16
* llama: ensure static linking of runtime libs
Avoid runtime dependencies on non-standard libraries
* runner.go: Enable llamafile (all platforms) and BLAS (Mac OS)
These are two features that are shown on llama.cpp's system info
that are currently different between the two runners. On my test
systems the performance difference is very small to negligible
but it is probably still good to equalize the features.
* llm: Don't add BOS/EOS for tokenize requests
This is consistent with what server.cpp currently does. It affects
things like token processing counts for embedding requests.
* runner.go: Don't cache prompts for embeddings
Our integration with server.cpp implicitly disables prompt caching
because it is not part of the JSON object being parsed, this makes
the Go runner behavior similarly.
Prompt caching has been seen to affect the results of text completions
on certain hardware. The results are not wrong either way but they
are non-deterministic. However, embeddings seem to be affected even
on hardware that does not show this behavior for completions. For
now, it is best to maintain consistency with the existing behavior.
* runner.go: Adjust debug log levels
Add system info printed at startup and quiet down noisier logging.
* llama: fix compiler flag differences (#7082)
Adjust the flags for the new Go server to more closely match the
generate flow
* llama: refine developer docs (#7121)
* llama: doc and example clean up (#7122)
* llama: doc and example clean up
* llama: Move new dockerfile into llama dir
Temporary home until we fully transition to the Go server
* llama: runner doc cleanup
* llama.go: Add description for Tokenize error case
---------
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
											 
										 
										
											2024-10-08 23:53:54 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
										} 
							 
						 
					
						
							
								
									
										
										
										
											2025-03-05 01:03:46 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
										content ,  err  :=  s . textProcessor . Decode ( toks ) 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
										if  err  !=  nil  { 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
											return  "" ,  err 
							 
						 
					
						
							
								
									
										
											 
										
											
												Re-introduce the `llama` package (#5034)
* Re-introduce the llama package
This PR brings back the llama package, making it possible to call llama.cpp and
ggml APIs from Go directly via CGo. This has a few advantages:
- C APIs can be called directly from Go without needing to use the previous
  "server" REST API
- On macOS and for CPU builds on Linux and Windows, Ollama can be built without
  a go generate ./... step, making it easy to get up and running to hack on
  parts of Ollama that don't require fast inference
- Faster build times for AVX,AVX2,CUDA and ROCM (a full build of all runners
  takes <5 min on a fast CPU)
- No git submodule making it easier to clone and build from source
This is a big PR, but much of it is vendor code except for:
- llama.go CGo bindings
- example/: a simple example of running inference
- runner/: a subprocess server designed to replace the llm/ext_server package
- Makefile an as minimal as possible Makefile to build the runner package for
  different targets (cpu, avx, avx2, cuda, rocm)
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
* cache: Clear old KV cache entries when evicting a slot
When forking a cache entry, if no empty slots are available we
evict the least recently used one and copy over the KV entries
from the closest match. However, this copy does not overwrite
existing values but only adds new ones. Therefore, we need to
clear the old slot first.
This change fixes two issues:
 - The KV cache fills up and runs out of space even though we think
   we are managing it correctly
 - Performance gets worse over time as we use new cache entries that
   are not hot in the processor caches
* doc: explain golang objc linker warning (#6830)
* llama: gather transitive dependencies for rocm for dist packaging (#6848)
* Refine go server makefiles to be more DRY (#6924)
This breaks up the monolithic Makefile for the Go based runners into a
set of utility files as well as recursive Makefiles for the runners.
Files starting with the name "Makefile" are buildable, while files that
end with ".make" are utilities to include in other Makefiles.  This
reduces the amount of nearly identical targets and helps set a pattern
for future community contributions for new GPU runner architectures.
When we are ready to switch over to the Go runners, these files should
move to the top of the repo, and we should add targets for the main CLI,
as well as a helper "install" (put all the built binaries on the local
system in a runnable state) and "dist" target (generate the various
tar/zip files for distribution) for local developer use.
* llama: don't create extraneous directories (#6988)
* llama: Exercise the new build in CI (#6989)
Wire up some basic sanity testing in CI for the Go runner.  GPU runners are not covered yet.
* llama: Refine developer docs for Go server (#6842)
This enhances the documentation for development focusing on the new Go
server.  After we complete the transition further doc refinements
can remove the "transition" discussion.
* runner.go: Allocate batches for all sequences during init
We should tell the model that we could have full batches for all
sequences. We already do this when we allocate the batches but it was
missed during initialization.
* llama.go: Don't return nil from Tokenize on zero length input
Potentially receiving nil in a non-error condition is surprising to
most callers - it's better to return an empty slice.
* runner.go: Remove stop tokens from cache
If the last token is EOG then we don't return this and it isn't
present in the cache (because it was never submitted to Decode).
This works well for extending the cache entry with a new sequence.
However, for multi-token stop sequences, we won't return any of the
tokens but all but the last one will be in the cache. This means
when the conversation continues the cache will contain tokens that
don't overlap with the new prompt.
This works (we will pick up the portion where there is overlap) but
it causes unnecessary cache thrashing because we will fork the original
cache entry as it is not a perfect match.
By trimming the cache to the tokens that we actually return this
issue can be avoided.
* runner.go: Simplify flushing of pending tokens
* runner.go: Update TODOs
* runner.go: Don't panic when processing sequences
If there is an error processing a sequence, we should return a
clean HTTP error back to Ollama rather than panicing. This will
make us more resilient to transient failures.
Panics can still occur during startup as there is no way to serve
requests if that fails.
Co-authored-by: jmorganca <jmorganca@gmail.com>
* runner.go: More accurately capture timings
Currently prompt processing time doesn't capture the that it takes
to tokenize the input, only decoding time. We should capture the
full process to more accurately reflect reality. This is especially
true once we start processing images where the initial processing
can take significant time. This is also more consistent with the
existing C++ runner.
* runner.go: Support for vision models
In addition to bringing feature parity with the C++ runner, this also
incorporates several improvements:
 - Cache prompting works with images, avoiding the need to re-decode
   embeddings for every message in a conversation
 - Parallelism is supported, avoiding the need to restrict to one
   sequence at a time. (Though for now Ollama will not schedule
   them while we might need to fall back to the old runner.)
Co-authored-by: jmorganca <jmorganca@gmail.com>
* runner.go: Move Unicode checking code and add tests
* runner.go: Export external cache members
Runner and cache are in the same package so the change doesn't
affect anything but it is more internally consistent.
* runner.go: Image embedding cache
Generating embeddings from images can take significant time (on
my machine between 100ms and 8s depending on the model). Although
we already cache the result of decoding these images, the embeddings
need to be regenerated every time. This is not necessary if we get
the same image over and over again, for example, during a conversation.
This currently uses a very small cache with a very simple algorithm
but it is easy to improve as is warranted.
* llama: catch up on patches
Carry forward solar-pro and cli-unicode patches
* runner.go: Don't re-allocate memory for every batch
We can reuse memory allocated from batch to batch since batch
size is fixed. This both saves the cost of reallocation as well
keeps the cache lines hot.
This results in a roughly 1% performance improvement for token
generation with Nvidia GPUs on Linux.
* runner.go: Default to classic input cache policy
The input cache as part of the go runner implemented a cache
policy that aims to maximize hit rate in both single and multi-
user scenarios. When there is a cache hit, the response is
very fast.
However, performance is actually slower when there is an input
cache miss due to worse GPU VRAM locality. This means that
performance is generally better overall for multi-user scenarios
(better input cache hit rate, locality was relatively poor already).
But worse for single users (input cache hit rate is about the same,
locality is now worse).
This defaults the policy back to the old one to avoid a regression
but keeps the new one available through an environment variable
OLLAMA_MULTIUSER_CACHE. This is left undocumented as the goal is
to improve this in the future to get the best of both worlds
without user configuration.
For inputs that result in cache misses, on Nvidia/Linux this
change improves performance by 31% for prompt processing and
13% for token generation.
* runner.go: Increase size of response channel
Generally the CPU can easily keep up with handling reponses that
are generated but there's no reason not to let generation continue
and handle things in larger batches if needed.
* llama: Add CI to verify all vendored changes have patches (#7066)
Make sure we don't accidentally merge changes in the vendored code
that aren't also reflected in the patches.
* llama: adjust clip patch for mingw utf-16 (#7065)
* llama: adjust clip patch for mingw utf-16
* llama: ensure static linking of runtime libs
Avoid runtime dependencies on non-standard libraries
* runner.go: Enable llamafile (all platforms) and BLAS (Mac OS)
These are two features that are shown on llama.cpp's system info
that are currently different between the two runners. On my test
systems the performance difference is very small to negligible
but it is probably still good to equalize the features.
* llm: Don't add BOS/EOS for tokenize requests
This is consistent with what server.cpp currently does. It affects
things like token processing counts for embedding requests.
* runner.go: Don't cache prompts for embeddings
Our integration with server.cpp implicitly disables prompt caching
because it is not part of the JSON object being parsed, this makes
the Go runner behavior similarly.
Prompt caching has been seen to affect the results of text completions
on certain hardware. The results are not wrong either way but they
are non-deterministic. However, embeddings seem to be affected even
on hardware that does not show this behavior for completions. For
now, it is best to maintain consistency with the existing behavior.
* runner.go: Adjust debug log levels
Add system info printed at startup and quiet down noisier logging.
* llama: fix compiler flag differences (#7082)
Adjust the flags for the new Go server to more closely match the
generate flow
* llama: refine developer docs (#7121)
* llama: doc and example clean up (#7122)
* llama: doc and example clean up
* llama: Move new dockerfile into llama dir
Temporary home until we fully transition to the Go server
* llama: runner doc cleanup
* llama.go: Add description for Tokenize error case
---------
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
											 
										 
										
											2024-10-08 23:53:54 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
										} 
							 
						 
					
						
							
								
									
										
										
										
											2025-03-05 01:03:46 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
										return  content ,  nil 
							 
						 
					
						
							
								
									
										
										
										
											2024-06-01 09:54:21 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
									} 
							 
						 
					
						
							
								
									
										
										
										
											2025-03-05 01:03:46 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
									// not reached
 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
									return  "" ,  fmt . Errorf ( "no tokenizer configured" ) 
							 
						 
					
						
							
								
									
										
										
										
											2024-03-15 01:24:13 +08:00 
										
									 
								 
							 
							
								
							 
							
								 
							
							
								}  
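
// The two branches above cover the two runner backends: the CGo-backed
// llama.cpp model detokenizes token by token via TokenToPiece, while the Go
// textProcessor decodes the whole slice in one call. A minimal usage sketch,
// assuming this method is the server's Detokenize and that a matching
// Tokenize method exists (names assumed, not shown in this section):
//
//	tokens, err := s.Tokenize(ctx, "hello world")
//	if err != nil {
//		return err
//	}
//	text, err := s.Detokenize(ctx, tokens)
//	if err != nil {
//		return err
//	}
//	fmt.Println(text) // should approximate the original input
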
								func  ( s  * llmServer )  Close ( )  error  {  
						 
					
						
							
								
									
										
										
										
											2025-03-05 01:03:46 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
									s . llamaModelLock . Lock ( ) 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
									if  s . llamaModel  !=  nil  { 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
										llama . FreeModel ( s . llamaModel ) 
							 
						 
					
						
							
								
							 
							
								
							 
							
								 
							
							
										s . llamaModel  =  nil 
							 
						 
					
						
							
								
									
										
											 
										
											
												Re-introduce the `llama` package (#5034)
* Re-introduce the llama package
This PR brings back the llama package, making it possible to call llama.cpp and
ggml APIs from Go directly via CGo. This has a few advantages:
- C APIs can be called directly from Go without needing to use the previous
  "server" REST API
- On macOS and for CPU builds on Linux and Windows, Ollama can be built without
  a go generate ./... step, making it easy to get up and running to hack on
  parts of Ollama that don't require fast inference
- Faster build times for AVX,AVX2,CUDA and ROCM (a full build of all runners
  takes <5 min on a fast CPU)
- No git submodule making it easier to clone and build from source
This is a big PR, but much of it is vendor code except for:
- llama.go CGo bindings
- example/: a simple example of running inference
- runner/: a subprocess server designed to replace the llm/ext_server package
- Makefile an as minimal as possible Makefile to build the runner package for
  different targets (cpu, avx, avx2, cuda, rocm)
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
* cache: Clear old KV cache entries when evicting a slot
When forking a cache entry, if no empty slots are available we
evict the least recently used one and copy over the KV entries
from the closest match. However, this copy does not overwrite
existing values but only adds new ones. Therefore, we need to
clear the old slot first.
This change fixes two issues:
 - The KV cache fills up and runs out of space even though we think
   we are managing it correctly
 - Performance gets worse over time as we use new cache entries that
   are not hot in the processor caches
* doc: explain golang objc linker warning (#6830)
* llama: gather transitive dependencies for rocm for dist packaging (#6848)
* Refine go server makefiles to be more DRY (#6924)
This breaks up the monolithic Makefile for the Go based runners into a
set of utility files as well as recursive Makefiles for the runners.
Files starting with the name "Makefile" are buildable, while files that
end with ".make" are utilities to include in other Makefiles.  This
reduces the amount of nearly identical targets and helps set a pattern
for future community contributions for new GPU runner architectures.
When we are ready to switch over to the Go runners, these files should
move to the top of the repo, and we should add targets for the main CLI,
as well as a helper "install" (put all the built binaries on the local
system in a runnable state) and "dist" target (generate the various
tar/zip files for distribution) for local developer use.
* llama: don't create extraneous directories (#6988)
* llama: Exercise the new build in CI (#6989)
Wire up some basic sanity testing in CI for the Go runner.  GPU runners are not covered yet.
* llama: Refine developer docs for Go server (#6842)
This enhances the documentation for development focusing on the new Go
server.  After we complete the transition further doc refinements
can remove the "transition" discussion.
* runner.go: Allocate batches for all sequences during init
We should tell the model that we could have full batches for all
sequences. We already do this when we allocate the batches but it was
missed during initialization.
* llama.go: Don't return nil from Tokenize on zero length input
Potentially receiving nil in a non-error condition is surprising to
most callers - it's better to return an empty slice.
* runner.go: Remove stop tokens from cache
If the last token is EOG then we don't return this and it isn't
present in the cache (because it was never submitted to Decode).
This works well for extending the cache entry with a new sequence.
However, for multi-token stop sequences, we won't return any of the
tokens but all but the last one will be in the cache. This means
when the conversation continues the cache will contain tokens that
don't overlap with the new prompt.
This works (we will pick up the portion where there is overlap) but
it causes unnecessary cache thrashing because we will fork the original
cache entry as it is not a perfect match.
By trimming the cache to the tokens that we actually return this
issue can be avoided.
* runner.go: Simplify flushing of pending tokens
* runner.go: Update TODOs
* runner.go: Don't panic when processing sequences
If there is an error processing a sequence, we should return a
clean HTTP error back to Ollama rather than panicing. This will
make us more resilient to transient failures.
Panics can still occur during startup as there is no way to serve
requests if that fails.
Co-authored-by: jmorganca <jmorganca@gmail.com>
* runner.go: More accurately capture timings
Currently prompt processing time doesn't capture the that it takes
to tokenize the input, only decoding time. We should capture the
full process to more accurately reflect reality. This is especially
true once we start processing images where the initial processing
can take significant time. This is also more consistent with the
existing C++ runner.
* runner.go: Support for vision models
In addition to bringing feature parity with the C++ runner, this also
incorporates several improvements:
 - Cache prompting works with images, avoiding the need to re-decode
   embeddings for every message in a conversation
 - Parallelism is supported, avoiding the need to restrict to one
   sequence at a time. (Though for now Ollama will not schedule
   them while we might need to fall back to the old runner.)
Co-authored-by: jmorganca <jmorganca@gmail.com>
* runner.go: Move Unicode checking code and add tests
* runner.go: Export external cache members
Runner and cache are in the same package so the change doesn't
affect anything but it is more internally consistent.
* runner.go: Image embedding cache
Generating embeddings from images can take significant time (on
my machine between 100ms and 8s depending on the model). Although
we already cache the result of decoding these images, the embeddings
need to be regenerated every time. This is not necessary if we get
the same image over and over again, for example, during a conversation.
This currently uses a very small cache with a very simple algorithm
but it is easy to improve as is warranted.
* llama: catch up on patches
Carry forward solar-pro and cli-unicode patches
* runner.go: Don't re-allocate memory for every batch
We can reuse memory allocated from batch to batch since batch
size is fixed. This both saves the cost of reallocation as well
keeps the cache lines hot.
This results in a roughly 1% performance improvement for token
generation with Nvidia GPUs on Linux.
* runner.go: Default to classic input cache policy
The input cache as part of the go runner implemented a cache
policy that aims to maximize hit rate in both single and multi-
user scenarios. When there is a cache hit, the response is
very fast.
However, performance is actually slower when there is an input
cache miss due to worse GPU VRAM locality. This means that
performance is generally better overall for multi-user scenarios
(better input cache hit rate, locality was relatively poor already).
But worse for single users (input cache hit rate is about the same,
locality is now worse).
This defaults the policy back to the old one to avoid a regression
but keeps the new one available through an environment variable
OLLAMA_MULTIUSER_CACHE. This is left undocumented as the goal is
to improve this in the future to get the best of both worlds
without user configuration.
For inputs that result in cache misses, on Nvidia/Linux this
change improves performance by 31% for prompt processing and
13% for token generation.
* runner.go: Increase size of response channel
Generally the CPU can easily keep up with handling reponses that
are generated but there's no reason not to let generation continue
and handle things in larger batches if needed.
* llama: Add CI to verify all vendored changes have patches (#7066)
Make sure we don't accidentally merge changes in the vendored code
that aren't also reflected in the patches.
* llama: adjust clip patch for mingw utf-16 (#7065)
* llama: adjust clip patch for mingw utf-16
* llama: ensure static linking of runtime libs
Avoid runtime dependencies on non-standard libraries
* runner.go: Enable llamafile (all platforms) and BLAS (Mac OS)
These are two features that are shown on llama.cpp's system info
that are currently different between the two runners. On my test
systems the performance difference is very small to negligible
but it is probably still good to equalize the features.
* llm: Don't add BOS/EOS for tokenize requests
This is consistent with what server.cpp currently does. It affects
things like token processing counts for embedding requests.
* runner.go: Don't cache prompts for embeddings
Our integration with server.cpp implicitly disables prompt caching
because it is not part of the JSON object being parsed, this makes
the Go runner behavior similarly.
Prompt caching has been seen to affect the results of text completions
on certain hardware. The results are not wrong either way but they
are non-deterministic. However, embeddings seem to be affected even
on hardware that does not show this behavior for completions. For
now, it is best to maintain consistency with the existing behavior.
* runner.go: Adjust debug log levels
Add system info printed at startup and quiet down noisier logging.
* llama: fix compiler flag differences (#7082)
Adjust the flags for the new Go server to more closely match the
generate flow
* llama: refine developer docs (#7121)
* llama: doc and example clean up (#7122)
* llama: doc and example clean up
* llama: Move new dockerfile into llama dir
Temporary home until we fully transition to the Go server
* llama: runner doc cleanup
* llama.go: Add description for Tokenize error case
---------
Co-authored-by: Jesse Gross <jesse@ollama.com>
Co-authored-by: Daniel Hiltgen <daniel@ollama.com>
Co-authored-by: Daniel Hiltgen <dhiltgen@users.noreply.github.com>
											 
										 
										
											2024-10-08 23:53:54 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
									} 
							 
						 
					
						
							
								
									
										
										
										
											2025-03-05 01:03:46 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
									s . llamaModelLock . Unlock ( ) 
							 
						 
					
						
							
								
									
										
										
										
											2024-10-10 07:55:34 +08:00 
										
									 
								 
							 
							
								
									
										 
								
							 
							
								 
							
							
								
							 
						 
					
						
							
								
									
										
										
										
											2024-03-15 01:24:13 +08:00 
										
									 
								 
							 
							
								
							 
							
								 
							
							
	if s.cmd != nil {
		slog.Debug("stopping llama server", "pid", s.Pid())
		if err := s.cmd.Process.Kill(); err != nil {
			return err
		}

		// If ProcessState is already populated, Wait has already completed;
		// there is no need to wait again.
		if s.cmd.ProcessState == nil {
			slog.Debug("waiting for llama server to exit", "pid", s.Pid())
			// Block until the goroutine waiting on the process signals that it has exited.
			<-s.done
		}

		slog.Debug("llama server stopped", "pid", s.Pid())
	}

	return nil
}
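
// VRAMSize returns the VRAM the model is expected to need, taken from the
// memory estimate computed before the server was started.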
func (s *llamaServer) VRAMSize() uint64 {
	return s.estimate.VRAMSize
}
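
// TotalSize returns the model's expected total memory footprint (GPU and
// CPU combined), also taken from the precomputed estimate.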
func (s *llamaServer) TotalSize() uint64 {
	return s.estimate.TotalSize
}
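
// VRAMByGPU returns the estimated VRAM attributed to the GPU with the given
// ID, or 0 if the ID is unknown. The bounds check guards against s.gpus and
// s.estimate.GPUSizes having different lengths.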
func (s *llamaServer) VRAMByGPU(gpuID string) uint64 {
	for i, gpu := range s.gpus {
		if gpu.ID == gpuID {
			if i < len(s.estimate.GPUSizes) {
				return s.estimate.GPUSizes[i]
			}
		}
	}

	return 0
}
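
// The ollamaServer variants below differ from the llamaServer ones above:
// rather than returning precomputed estimates, they sum the allocations
// recorded in s.mem.

// VRAMSize sums the memory allocated on every GPU. When no layers were left
// on the CPU, the components that always live on the CPU (the input weights
// and the CPU compute graph) are counted as well, so that a fully offloaded
// model reports its complete size.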
func (s *ollamaServer) VRAMSize() uint64 {
	if s.mem == nil {
		return 0
	}

	var mem uint64

	for _, g := range s.mem.GPUs {
		mem += g.Allocated()
	}

	// Some elements are always on CPU. However, if we have allocated all layers
	// on the GPU then include the CPU components as well, to represent complete offloading.
	noCPULayers := true
	for i := range s.mem.CPU.Weights {
		if s.mem.CPU.Weights[i].Size != 0 || s.mem.CPU.Cache[i].Size != 0 {
			noCPULayers = false
			break
		}
	}
	if noCPULayers {
		mem += s.mem.InputWeights.Size
		mem += s.mem.CPU.Graph.Size
	}

	return mem
}
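
// TotalSize sums the input weights, the CPU allocation, and every GPU
// allocation, giving the model's full memory footprint.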
func (s *ollamaServer) TotalSize() uint64 {
	if s.mem == nil {
		return 0
	}

	mem := s.mem.InputWeights.Size
	mem += s.mem.CPU.Allocated()
	for _, g := range s.mem.GPUs {
		mem += g.Allocated()
	}

	return mem
}
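
// VRAMByGPU returns the memory allocated on the GPU with the given ID, or 0
// if that GPU is not in use by this server.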
func (s *ollamaServer) VRAMByGPU(gpuID string) uint64 {
	if s.mem == nil {
		return 0
	}

	for _, g := range s.mem.GPUs {
		if g.ID == gpuID {
			return g.Allocated()
		}
	}

	return 0
}
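
// A minimal usage sketch, not part of the original file: given a value
// implementing VRAMByGPU (for example, either server type above), pick the
// GPU holding the largest share of the model. The interface name and helper
// below are hypothetical, for illustration only:
//
//	type vramReporter interface {
//		VRAMByGPU(gpuID string) uint64
//	}
//
//	// busiestGPU returns the ID of the GPU carrying the most of the model,
//	// along with the number of bytes attributed to it.
//	func busiestGPU(r vramReporter, gpuIDs []string) (string, uint64) {
//		var bestID string
//		var best uint64
//		for _, id := range gpuIDs {
//			if v := r.VRAMByGPU(id); v > best {
//				bestID, best = id, v
//			}
//		}
//		return bestID, best
//	}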