Why is this giving this error?
TypeError: load_model() missing 1 required positional argument: 'ckpt_path'
Please use pip install git+https://github.com/ai4bharat/IndicF5.git to avoid this error
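For context, the TypeError simply means load_model was called without its third positional argument. A minimal stub mirroring the signature from the tracebacks in this thread (the body is a placeholder, not the real f5_tts implementation, and the config/path values are made up):

```python
# Stub mirroring the f5_tts.infer.utils_infer.load_model signature seen in
# the tracebacks in this thread; the body is a placeholder, not the real code.
def load_model(model_cls, model_cfg, ckpt_path, vocab_file=None):
    return {"cls": model_cls, "cfg": model_cfg,
            "ckpt": ckpt_path, "vocab": vocab_file}

# Calling it without ckpt_path reproduces the error from this issue:
try:
    load_model("DiT", {"dim": 1024})
except TypeError as exc:
    print(exc)  # load_model() missing 1 required positional argument: 'ckpt_path'

# Passing the checkpoint path (and vocab file) explicitly fixes it:
model = load_model("DiT", {"dim": 1024},
                   ckpt_path="checkpoints/model_1176000.pt",
                   vocab_file="checkpoints/vocab.txt")
```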
Subject: Issue with Loading AI4Bharat Model - Meta Tensor Error in Colab/Kaggle
Dear AI4Bharat Team,
I am trying to use your model (ai4bharat/IndicF5) in Google Colab and Kaggle environments, but I keep encountering the following error during model loading:
NotImplementedError: Cannot copy out of meta tensor; no data!
Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
I have already tried several fixes:
- Downgrading/upgrading PyTorch versions (1.13, 2.0, 2.1, etc.)
- Trying different Transformers versions (v4.28 to v4.40+)
- Using both device_map="auto" and low_cpu_mem_usage=False
- Downloading the model locally and loading manually
- Running on both CPU and GPU environments
Even after trying all of these, the model still fails to load and gives the same meta tensor error. This suggests the model is never fully initialized and stays on the meta device, so .to() operations fail.
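The advice in the error message itself can be illustrated in isolation. The following is a minimal sketch with a toy nn.Linear, not IndicF5 itself, showing why .to() fails on a meta-device module and what to_empty() does instead:

```python
import torch
from torch import nn

# Modules created under the meta device have shapes but no storage; this is
# what low_cpu_mem_usage / device_map="auto" rely on for deferred loading.
with torch.device("meta"):
    layer = nn.Linear(4, 4)

# layer.to("cpu") would raise:
#   NotImplementedError: Cannot copy out of meta tensor; no data!

# to_empty() allocates real, uninitialized storage on the target device:
layer = layer.to_empty(device="cpu")

# After that, real weights must still be loaded from a checkpoint. A toy
# state_dict stands in for the actual checkpoint here:
layer.load_state_dict(nn.Linear(4, 4).state_dict())
```

If the loader never performs this to_empty + load_state_dict step, the module stays on meta and every subsequent .to() call fails the same way.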
This same model was working perfectly fine earlier. Could you please confirm:
- Is there any recent update to the model architecture or Hugging Face config?
- Is there a specific version of PyTorch or Transformers needed now?
- Is there any patch we can apply to make it work again?
Attached is a screenshot of the full error trace.
Thank you for your time and support. Looking forward to your response.
Best regards,
Vikas
NotImplementedError: Cannot copy out of meta tensor; no data!
Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
Here is a Colab notebook: https://colab.research.google.com/drive/1OXG2R3-AFGTcST5nucTWoUq-ZQvtel0a?usp=sharing. In that notebook you can easily run the model.
@svp19
C:\Users\Admin\Downloads\IndicF5>run_indicf5_gradio.bat
Moving to project folder...
Setting PYTHONPATH...
Activating Conda environment (indicf5)...
Starting the Gradio Web UI...
C:\Users\Admin\.conda\envs\indicf5\lib\site-packages\google\api_core\_python_version_support.py:275: FutureWarning: You are using a Python version (3.10.19) which Google will stop supporting in new releases of google.api_core once it reaches its end of life (2026-10-04). Please upgrade to the latest Python version, or at least Python 3.11, to continue receiving updates for google.api_core past that date.
warnings.warn(message, FutureWarning)
C:\Users\Admin\.conda\envs\indicf5\lib\site-packages\jieba\_compat.py:18: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
import pkg_resources
Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\Admin\AppData\Local\Temp\jieba.cache
Loading model cost 0.309 seconds.
Prefix dict has been built successfully.
Word segmentation module jieba initialized.
Download Vocos from huggingface charactr/vocos-mel-24khz
vocab : /home/tts/ttsteam/repos/F5-TTS/runs/indic_5/vocab.txt
token : custom
model : /home/tts/ttsteam/repos/F5-TTS/runs/indic_5/model_1176000.pt
Traceback (most recent call last):
File "C:\Users\Admin\Downloads\IndicF5\f5_tts\infer\infer_gradio.py", line 79, in <module>
F5TTS_ema_model = load_f5tts()
File "C:\Users\Admin\Downloads\IndicF5\f5_tts\infer\infer_gradio.py", line 60, in load_f5tts
return load_model(DiT, F5TTS_model_cfg, ckpt_path, vocab_file=vocab_path)
File "C:\Users\Admin\Downloads\IndicF5\f5_tts\infer\utils_infer.py", line 242, in load_model
vocab_char_map, vocab_size = get_tokenizer(vocab_file, tokenizer)
File "C:\Users\Admin\Downloads\IndicF5\f5_tts\model\utils.py", line 125, in get_tokenizer
with open(dataset_name, "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/home/tts/ttsteam/repos/F5-TTS/runs/indic_5/vocab.txt'
@svp19 After changing the paths:
Download Vocos from huggingface charactr/vocos-mel-24khz
vocab : C:\Users\Admin\Downloads\IndicF5\checkpoints\vocab.txt
token : custom
model : C:\Users\Admin\Downloads\IndicF5\model.safetensors
Traceback (most recent call last):
File "C:\Users\Admin\Downloads\IndicF5\f5_tts\infer\infer_gradio.py", line 79, in <module>
F5TTS_ema_model = load_f5tts()
File "C:\Users\Admin\Downloads\IndicF5\f5_tts\infer\infer_gradio.py", line 60, in load_f5tts
return load_model(DiT, F5TTS_model_cfg, ckpt_path, vocab_file=vocab_path)
File "C:\Users\Admin\Downloads\IndicF5\f5_tts\infer\utils_infer.py", line 260, in load_model
model = load_checkpoint(model, ckpt_path, device, dtype=dtype, use_ema=use_ema)
File "C:\Users\Admin\Downloads\IndicF5\f5_tts\infer\utils_infer.py", line 209, in load_checkpoint
model.load_state_dict(checkpoint["model_state_dict"])
File "C:\Users\Admin\.conda\envs\indicf5\lib\site-packages\torch\nn\modules\module.py", line 2593, in load_state_dict
raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for CFM:
Missing key(s) in state_dict: "transformer.time_embed.time_mlp.0.weight", "transformer.time_embed.time_mlp.0.bias", "transformer.time_embed.time_mlp.2.weight", "transformer.time_embed.time_mlp.2.bias", "transformer.text_embed.text_embed.weight", "transformer.text_embed.text_blocks.0.dwconv.weight", "transformer.text_embed.text_blocks.0.dwconv.bias", "transformer.text_embed.text_blocks.0.norm.weight", "transformer.text_embed.text_blocks.0.norm.bias", "transformer.text_embed.text_blocks.0.pwconv1.weight", "transformer.text_embed.text_blocks.0.pwconv1.bias", "transformer.text_embed.text_b
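A mismatch like this, where every transformer.* key is reported missing, often means the checkpoint stores its weights under a wrapper prefix (for example, EMA-wrapped F5-TTS checkpoints keep weights under ema_model.* keys). This is a sketch of the diagnostic remap under that assumption; whether a prefix mismatch is actually the cause here would need the real checkpoint's key list:

```python
def strip_prefix(state_dict, prefix):
    """Drop a wrapper prefix from checkpoint keys so they line up with the
    model's own parameter names. Entries without the prefix are left out."""
    return {k[len(prefix):]: v for k, v in state_dict.items()
            if k.startswith(prefix)}

# Toy checkpoint standing in for the real one (keys modeled on the
# missing-key list above; "ema_model." prefix is an assumption):
ckpt = {
    "ema_model.transformer.time_embed.time_mlp.0.weight": "w0",
    "ema_model.transformer.time_embed.time_mlp.0.bias": "b0",
    "step": 1176000,  # non-weight bookkeeping entries are dropped
}
remapped = strip_prefix(ckpt, "ema_model.")
# remapped keys now start with "transformer.", matching the CFM module
```

Printing the first few keys of the actual checkpoint (e.g. via torch.load or safetensors) and comparing them against model.state_dict().keys() is usually the fastest way to confirm which remap, if any, is needed.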