Runtime error
Exit code: 1. Reason:

meta-llama-3.1-8b.Q4_K_M.gguf: 100%|██████████| 4.92G/4.92G [00:13<00:00, 366MB/s]

Traceback (most recent call last):
  File "/app/app.py", line 11, in <module>
    llm = Llama(
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/llama.py", line 318, in __init__
    self._n_vocab = self.n_vocab()
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/llama.py", line 1655, in n_vocab
    return self._model.n_vocab()
  File "/usr/local/lib/python3.10/site-packages/llama_cpp/_internals.py", line 67, in n_vocab
    assert self.model is not None
AssertionError
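The `AssertionError` on `assert self.model is not None` inside `llama_cpp/_internals.py` means the underlying llama.cpp model handle is null: the download finished, but llama.cpp could not actually load the file. Common causes are a truncated or corrupted GGUF file (for example, an HTML error page saved under the model's name) or a GGUF revision newer than the installed `llama-cpp-python` build supports. A minimal sketch of a pre-flight sanity check, run before constructing `Llama`, is below; the model path is taken from the log above, and `looks_like_gguf` is a hypothetical helper name, not part of `llama-cpp-python`. Valid GGUF files begin with the 4-byte magic `GGUF`.

```python
import os

# GGUF files start with the 4-byte magic b"GGUF"; a truncated or
# HTML-error-page download will usually fail this check.
GGUF_MAGIC = b"GGUF"

def looks_like_gguf(path: str) -> bool:
    """Cheap sanity check before handing the file to llama.cpp."""
    if not os.path.exists(path) or os.path.getsize(path) < len(GGUF_MAGIC):
        return False
    with open(path, "rb") as f:
        return f.read(len(GGUF_MAGIC)) == GGUF_MAGIC

model_path = "meta-llama-3.1-8b.Q4_K_M.gguf"  # path assumed from the log
if looks_like_gguf(model_path):
    from llama_cpp import Llama  # import only once the file checks out
    llm = Llama(model_path=model_path)
else:
    print(f"{model_path} is missing or not a valid GGUF file; re-download it")
```

If the magic check passes but the assertion still fires, the likelier culprit is a version mismatch, so upgrading `llama-cpp-python` (or re-quantizing the model with a matching llama.cpp release) is the next thing to try.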