
AttributeError: 'LlamaModel' object has no attribute 'sampler'

#2
by devops724 - opened

pip freeze | grep llama
llama_cpp_python==0.3.16

python main.py

ERROR: Traceback (most recent call last):
File "backend/.venv/lib/python3.12/site-packages/starlette/routing.py", line 694, in lifespan
async with self.lifespan_context(app) as maybe_state:
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/lib/python3.12/contextlib.py", line 210, in __aenter__
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/home/user/workspace/ai/hooshi/backend/main.py", line 49, in lifespan
llm_svc = LLMService()
^^^^^^^^^^^^
File "backend/services/llm_service.py", line 7, in __init__
self._llm = Llama(
^^^^^^
File "backend/.venv/lib/python3.12/site-packages/llama_cpp/llama.py", line 374, in __init__
internals.LlamaModel(
File "backend/.venv/lib/python3.12/site-packages/llama_cpp/_internals.py", line 58, in __init__
raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: backend/models/tiny-aya-earth-q4_k_m.gguf

Exception ignored in: <function LlamaModel.__del__ at 0x7fde0361e2a0>
Traceback (most recent call last):
File "backend/.venv/lib/python3.12/site-packages/llama_cpp/_internals.py", line 86, in __del__
self.close()
File "backend/.venv/lib/python3.12/site-packages/llama_cpp/_internals.py", line 78, in close
if self.sampler is not None:
^^^^^^^^^^^^
AttributeError: 'LlamaModel' object has no attribute 'sampler'
ERROR: Application startup failed. Exiting.
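
Note for anyone hitting this: the AttributeError about 'sampler' is only a side effect. LlamaModel.__init__ raised the ValueError before self.sampler was ever assigned, so the cleanup in __del__ then fails too. The real problem is the line above it: llama.cpp could not load the GGUF file, which in practice usually means a wrong relative path (resolved against the server's working directory, not the project root) or a corrupt/incompatible download. A minimal sketch of failing early with a clearer message before handing the path to Llama (resolve_model_path is a hypothetical helper name, not part of llama-cpp-python):

```python
from pathlib import Path

def resolve_model_path(path_str: str) -> Path:
    """Resolve a GGUF model path to an absolute path, failing early
    with a clear error instead of llama.cpp's generic load failure."""
    # Relative paths are resolved against the current working directory,
    # which under uvicorn/FastAPI may not be the directory you expect.
    path = Path(path_str).expanduser().resolve()
    if not path.is_file():
        raise FileNotFoundError(
            f"GGUF model not found at {path}; "
            "check the path relative to where the server was started, "
            "and that the download completed (file size > 0)."
        )
    return path

# Then, in the service (sketch):
# self._llm = Llama(model_path=str(resolve_model_path(
#     "backend/models/tiny-aya-earth-q4_k_m.gguf")))
```

If the file exists and is non-empty but loading still fails, re-download the GGUF (a truncated file produces the same ValueError) or update llama-cpp-python, since older builds cannot read newer GGUF architectures.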