Problem running GGUF
Trying to run the model with Ollama (version 0.13.5) results in:
ollama run hf.co/LiquidAI/LFM2.5-1.2B-Instruct-GGUF:Q8_0
pulling manifest
...
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'output_norm'
llama_model_load_from_file_impl: failed to load model
Hey! We’re aware of this issue. It’s related to a recent fix in llama.cpp and will be resolved in the next sync/version update for Ollama. For now, we recommend using v0.13.4.
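For anyone who wants to guard against this in a script, here is a minimal sketch that warns when the installed Ollama falls in the affected range. The 0.13.5 cutoff comes from the reports in this thread; the `OLLAMA_VERSION` variable is supported by the official Linux install script, but double-check Ollama's install docs for your platform.

```shell
#!/bin/sh
# Sketch: warn before pulling LFM2 GGUFs if the installed Ollama is in the
# range reported broken in this thread (0.13.5 and later, until the
# llama.cpp sync ships).

version_ge() {
  # True (exit 0) when $1 >= $2, comparing dotted version strings numerically.
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | tail -n1)" = "$1" ]
}

if command -v ollama >/dev/null 2>&1; then
  installed=$(ollama --version | awk '{print $NF}')
  if version_ge "$installed" "0.13.5"; then
    echo "Ollama $installed may hit the 'missing tensor' error on LFM2 GGUFs."
    echo "Downgrade with: curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.13.4 sh"
  fi
fi
```

The downgrade one-liner at the end assumes the standard Linux install script; on macOS or Windows you would instead grab the v0.13.4 release build directly.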
Will you get bartowski and/or unsloth quants done as well?
Found it, sorry, I did something wrong with the search 😁
Still getting this error:
ollama --version
ollama version is 0.15.2
Any workarounds for this? I just installed the newest copy of Ollama but ended up with the same error when trying to pull LFM2 models.
ping.
ollama --version
ollama version is 0.16.3
ollama run hf.co/LiquidAI/LFM2.5-Audio-1.5B-GGUF:F16
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'output_norm'
ollama run hf.co/LiquidAI/LFM2-24B-A2B-GGUF:Q8_0
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'output_norm.weight'
Same here:
ollama --version
ollama version is 0.17.0
(base) nise@localhost Documents % ollama run hf.co/LiquidAI/LFM2.5-Audio-1.5B-GGUF:F16
pulling manifest
pulling 60c8b3c36e52: 100% ▕██████████████████▏ 2.3 GB
...
verifying sha256 digest
writing manifest
success
Error: 500 Internal Server Error: llama runner process has terminated: error loading model: missing tensor 'output_norm'