runtime error
Exit code: 1. Reason:
model-00002-of-00002.safetensors: 100%|██████████| 3.35G/3.35G [00:18<00:00, 181MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 16, in <module>
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda", trust_remote_code=True, torch_dtype="auto")
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 585, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 313, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4619, in from_pretrained
    config = cls._autoset_attn_implementation(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2302, in _autoset_attn_implementation
    cls._check_and_enable_flash_attn_2(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 2453, in _check_and_enable_flash_attn_2
    raise ValueError(
ValueError: FlashAttention2 has been toggled on, but it cannot be used due to the following error: Flash Attention 2 is not available on CPU. Please make sure torch can access a CUDA device.
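The model weights download fine; the crash happens when transformers enables Flash Attention 2 while torch only sees a CPU, so `device_map="cuda"` at app.py line 16 has no CUDA device to map to. Below is a minimal sketch of one way to make the load hardware-aware, assuming the Space may run without a GPU and that the model's config requests `flash_attention_2` by default; `model_id` is a hypothetical placeholder for whatever app.py actually defines.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-model-id"  # hypothetical placeholder; app.py line 16 sets the real value

# Flash Attention 2 needs a CUDA device, so only request it when one is available;
# otherwise fall back to CPU with the standard SDPA attention implementation.
has_cuda = torch.cuda.is_available()

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda" if has_cuda else "cpu",
    torch_dtype="auto",
    trust_remote_code=True,
    attn_implementation="flash_attention_2" if has_cuda else "sdpa",
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```

Alternatively, if Flash Attention 2 is actually wanted, the Space needs GPU hardware assigned so that torch can see a CUDA device at startup.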