runtime error
Exit code: 1. Reason: ights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  state_dict = torch.load(model_path, map_location="cpu")
Download Vocos from huggingface charactr/vocos-mel-24khz
vocab : /usr/local/lib/python3.10/site-packages/f5_tts/infer/examples/vocab.txt
tokenizer : custom
model : /home/user/.cache/huggingface/hub/models--SWivid--F5-TTS/snapshots/84e5a410d9cead4de2f847e7c9369a6440bdfaca/F5TTS_Base/model_1200000.safetensors
vocab : /usr/local/lib/python3.10/site-packages/f5_tts/infer/examples/vocab.txt
tokenizer : custom
model : /home/user/.cache/huggingface/hub/models--SWivid--E2-TTS/snapshots/851141880b5ca38050025e98dfdee27dc553f86e/E2TTS_Base/model_1200000.safetensors
Downloading shards: 100%|██████████| 2/2 [00:11<00:00, 5.96s/it]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 47127.01it/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 488, in <module>
    chat_model_state = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4302, in from_pretrained
    dispatch_model(model, **device_map_kwargs)
  File "/usr/local/lib/python3.10/site-packages/accelerate/big_modeling.py", line 496, in dispatch_model
    raise ValueError(
ValueError: You are trying to offload the whole model to the disk. Please use the `disk_offload` function instead.