runtime error
Exit code: 1. Reason:
state dictionary to their corresponding key in the module instead of copying them in place?)
  warnings.warn(
/usr/local/lib/python3.10/site-packages/torch/nn/modules/module.py:2409: UserWarning: for vision_model.head.mlp.fc2.bias: copying from a non-meta parameter in the checkpoint to a meta parameter in the current model, which is a no-op. (Did you mean to pass `assign=True` to assign items in the state dictionary to their corresponding key in the module instead of copying them in place?)
  warnings.warn(
preprocessor_config.json:   0%|          | 0.00/368 [00:00<?, ?B/s]
preprocessor_config.json: 100%|██████████| 368/368 [00:00<00:00, 2.34MB/s]
tokenizer_config.json:   0%|          | 0.00/711 [00:00<?, ?B/s]
tokenizer_config.json: 100%|██████████| 711/711 [00:00<00:00, 5.01MB/s]
spiece.model:   0%|          | 0.00/798k [00:00<?, ?B/s]
spiece.model: 100%|██████████| 798k/798k [00:00<00:00, 2.94MB/s]
special_tokens_map.json:   0%|          | 0.00/409 [00:00<?, ?B/s]
special_tokens_map.json: 100%|██████████| 409/409 [00:00<00:00, 3.18MB/s]
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]
Loading checkpoint shards:  33%|███▎      | 1/3 [00:02<00:04, 2.29s/it]
Loading checkpoint shards: 100%|██████████| 3/3 [00:02<00:00, 1.14it/s]
generation_config.json:   0%|          | 0.00/119 [00:00<?, ?B/s]
generation_config.json: 100%|██████████| 119/119 [00:00<00:00, 940kB/s]
Some parameters are on the meta device because they were offloaded to the cpu and disk.
Traceback (most recent call last):
  File "/app/app.py", line 24, in <module>
    model = model.to(dtype).eval()
  File "/usr/local/lib/python3.10/site-packages/accelerate/big_modeling.py", line 462, in wrapper
    raise RuntimeError("You can't move a model that has some modules offloaded to cpu or disk.")
RuntimeError: You can't move a model that has some modules offloaded to cpu or disk.
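The traceback shows the crash comes from calling `model.to(dtype)` after the checkpoint was loaded with accelerate offloading some modules to CPU/disk ("Some parameters are on the meta device..."). A minimal sketch of a likely fix, assuming the app loads the model via `transformers` with `device_map="auto"` — the `load_model` helper and model id are hypothetical, not from the app: set the dtype at load time and drop the post-hoc `.to()` call (`.eval()` alone is fine).

```python
def load_model(model_id: str, dtype_name: str = "float16"):
    """Hypothetical helper sketching the fix: fix the dtype when loading,
    and never call .to() on a model accelerate has offloaded.

    Imports are inside the function so this sketch stays importable even
    where torch/transformers are not installed.
    """
    import torch
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=getattr(torch, dtype_name),  # dtype set here, not via .to()
        device_map="auto",  # accelerate may offload modules to cpu/disk
    )
    # Calling model.to(dtype) here would raise the RuntimeError above;
    # .eval() does not move parameters, so it is safe.
    return model.eval()
```

The alternative, if the model must live entirely on one device so that `.to()` stays legal, is to load without `device_map` (which also avoids the meta-parameter warnings), at the cost of needing enough memory for the full checkpoint.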