runtime error
Exit code: 1. Reason:
100%|████████████████████████████████████████| 461M/461M [00:01<00:00, 310MiB/s]
Error initializing OmniInference:
Traceback (most recent call last):
  File "/app/serve_html.py", line 16, in <module>
    omni = OmniInference()
  File "/app/inference.py", line 382, in __init__
    self.fabric, self.model, self.text_tokenizer, self.snacmodel, self.whispermodel = load_model(ckpt_dir, device)
  File "/app/inference.py", line 354, in load_model
    text_tokenizer = Tokenizer(ckpt_dir)
  File "/app/litgpt/tokenizer.py", line 64, in __init__
    raise NotImplementedError
NotImplementedError
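The NotImplementedError above is raised by litgpt's Tokenizer constructor, which typically fails this way when the checkpoint directory contains neither a tokenizer.json (HF tokenizers backend) nor a tokenizer.model (SentencePiece backend) file. A minimal sketch to check which file, if any, would be found in a given checkpoint directory; the "/app/checkpoints" path in the usage line is an assumption, not taken from the log:

```python
from pathlib import Path

def check_tokenizer_files(ckpt_dir: str) -> str:
    """Report which tokenizer file is present in ckpt_dir, if any."""
    ckpt = Path(ckpt_dir)
    if (ckpt / "tokenizer.json").is_file():
        return "tokenizer.json"   # HF tokenizers backend
    if (ckpt / "tokenizer.model").is_file():
        return "tokenizer.model"  # SentencePiece backend
    return "missing"              # neither file found; Tokenizer(ckpt_dir) would fail

# Hypothetical checkpoint path; substitute the ckpt_dir your inference.py uses.
print(check_tokenizer_files("/app/checkpoints"))
```

If this reports "missing", the checkpoint download likely completed (the 461M/461M progress line suggests so) but the tokenizer files were never placed alongside the weights, or ckpt_dir points at the wrong directory.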