runtime error

Exit code: 1. Reason: 1.11G/2.30G [00:05<00:04, 284MB/s]
model-00007-of-00007.safetensors:  62%|██████▏   | 1.42G/2.30G [00:07<00:03, 265MB/s]
model-00007-of-00007.safetensors:  96%|█████████▌| 2.20G/2.30G [00:08<00:00, 415MB/s]
model-00007-of-00007.safetensors: 100%|█████████▉| 2.30G/2.30G [00:08<00:00, 265MB/s]
Downloading shards: 100%|██████████| 7/7 [00:59<00:00,  8.65s/it]
Downloading shards: 100%|██████████| 7/7 [00:59<00:00,  8.50s/it]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 35, in <module>
    model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(dtype=torch.float16)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 558, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3550, in from_pretrained
    model = cls(config, *model_args, **model_kwargs)
  File "/home/user/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-Llama3-V-2_5/fd7f352fac0e06d0d818b23f98e3ec8c64267a57/modeling_minicpmv.py", line 30, in __init__
    self.resampler = self.init_resampler(self.embed_dim, self.vision_dim)
  File "/home/user/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-Llama3-V-2_5/fd7f352fac0e06d0d818b23f98e3ec8c64267a57/modeling_minicpmv.py", line 45, in init_resampler
    return Resampler(
  File "/home/user/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-Llama3-V-2_5/fd7f352fac0e06d0d818b23f98e3ec8c64267a57/resampler.py", line 106, in __init__
    self._set_2d_pos_cache(self.max_size)
  File "/home/user/.cache/huggingface/modules/transformers_modules/openbmb/MiniCPM-Llama3-V-2_5/fd7f352fac0e06d0d818b23f98e3ec8c64267a57/resampler.py", line 111, in _set_2d_pos_cache
    pos_embed = torch.from_numpy(get_2d_sincos_pos_embed(self.embed_dim, max_size)).float().to(device)
RuntimeError: Numpy is not available
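The final `RuntimeError: Numpy is not available` comes from `torch.from_numpy`, which raises this message when PyTorch's compiled NumPy bindings cannot be used: most commonly the installed NumPy major version does not match the one the installed torch wheel was built against (e.g. NumPy 2.x with a torch wheel built for 1.x), or numpy is missing from the environment. A minimal reproduction of the check, assuming `torch` and `numpy` are importable; the `numpy<2` pin mentioned in the comments is an assumption about this environment, not something confirmed by the log:

```python
import numpy as np
import torch

# Report the versions first; a torch wheel built against NumPy 1.x
# combined with NumPy 2.x is a frequent cause of this error.
print("numpy:", np.__version__)
print("torch:", torch.__version__)

# This is the same call the resampler makes. If it raises
# "RuntimeError: Numpy is not available", pinning the version in the
# Space's requirements (e.g. `numpy<2`) is a common workaround.
t = torch.from_numpy(np.zeros((2, 2), dtype=np.float32))
print("numpy/torch interop OK:", tuple(t.shape))
```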
