runtime error
Exit code: 1. Reason:

preprocessor_config.json: 100%|██████████| 772/772 [00:00<00:00, 3.98MB/s]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
tokenizer_config.json: 100%|██████████| 2.01k/2.01k [00:00<00:00, 6.34MB/s]
tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 1.18MB/s]
tokenizer.json: 100%|██████████| 3.51M/3.51M [00:00<00:00, 115MB/s]
added_tokens.json: 100%|██████████| 41.0/41.0 [00:00<00:00, 221kB/s]
special_tokens_map.json: 100%|██████████| 552/552 [00:00<00:00, 3.78MB/s]
`torch_dtype` is deprecated! Use `dtype` instead!
Traceback (most recent call last):
  File "/app/app.py", line 11, in <module>
    model = LlavaForConditionalGeneration.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 277, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4900, in from_pretrained
    checkpoint_files, sharded_metadata = _get_resolved_checkpoint_files(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1148, in _get_resolved_checkpoint_files
    raise OSError(
OSError: Declan1/llava-v1.6-mistral-7b-sydneyfish-a100 does not appear to have a file named pytorch_model.bin, model.safetensors, tf_model.h5, model.ckpt or flax_model.msgpack.
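The final OSError means the repo `Declan1/llava-v1.6-mistral-7b-sydneyfish-a100` contains none of the weight filenames that `from_pretrained` accepts. A quick way to diagnose this before loading is to check which of those names are actually present. Below is a minimal sketch of such a check against a locally downloaded snapshot; the function name and directory argument are illustrative, not part of the transformers API, and the filename list is taken verbatim from the OSError above (it does not cover sharded `*.index.json` checkpoints):

```python
import os

# Weight filenames transformers' from_pretrained looks for,
# copied verbatim from the OSError in the log above.
EXPECTED_WEIGHT_FILES = (
    "pytorch_model.bin",
    "model.safetensors",
    "tf_model.h5",
    "model.ckpt",
    "flax_model.msgpack",
)

def find_weight_files(snapshot_dir):
    """Return the expected weight files present in a local model snapshot.

    An empty result suggests from_pretrained would raise the same OSError
    as in the traceback above (ignoring sharded checkpoints, which use an
    index file plus numbered shards instead).
    """
    names = set(os.listdir(snapshot_dir))
    return [f for f in EXPECTED_WEIGHT_FILES if f in names]
```

If the check comes back empty, the repo likely holds only config/tokenizer files (or an adapter rather than full weights), and the fix is to point `from_pretrained` at a repo that actually ships the model weights. Separately, the `torch_dtype is deprecated` warning indicates the call in `/app/app.py` should pass `dtype=` instead of `torch_dtype=` on recent transformers versions.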