NotImplementedError: Cannot copy out of meta tensor when moving model — how to resolve?
I’m running into an issue while trying to move a model from a meta device to a specific device (like CPU/GPU). The error I get is:
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
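In plain terms, the weights are still on PyTorch's meta device (shape-only tensors with no storage), so `.to()` has nothing to copy. Here is a minimal sketch of the pattern the error message is pointing at, using a toy nn.Linear rather than IndicF5; `to_empty()` and `load_state_dict(..., assign=True)` are real PyTorch APIs, everything else is illustrative:

```python
import torch
import torch.nn as nn

# Modules built on the "meta" device have shapes and dtypes but no data,
# so .to("cpu") has nothing to copy and raises NotImplementedError.
with torch.device("meta"):
    layer = nn.Linear(4, 4)

# layer.to("cpu")  # -> NotImplementedError: Cannot copy out of meta tensor

# to_empty() allocates uninitialized storage on the target device instead
# of copying; real weights must then be loaded into it.
layer = layer.to_empty(device="cpu")
real_state = nn.Linear(4, 4).state_dict()  # stand-in for an actual checkpoint
layer.load_state_dict(real_state, assign=True)
```

In this thread, though, the meta tensors are created inside the model's remote code, so the practical fix discussed below is pinning the library version rather than calling to_empty() yourself.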
I got this error too, what should I do?
Is there any solution for it?
Yes, I have fixed it and the model is running as well.
Can you share the code for Jupyter or Colab?
Sure bro. The error came up because transformers got updated; install this library version and it will be fixed:
transformers==4.50.3
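In a Colab or Jupyter cell, that pin would look like the following (restart the runtime after installing so the downgraded version is actually imported):

```python
# Pin transformers to the version suggested above, then restart the
# runtime/kernel before loading the model again.
!pip install transformers==4.50.3
```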
Okay, thanks. Let me check.
It is still showing the error after that. Is there any way we can connect?
Hmm, wait. I will give you the full Jupyter notebook in some time.
This model is not yet ready; it has a lot of issues, as mentioned above.
I used this model a few days back.
Thanks bro, the error is solved.
Can you share the code?
Hmm, wait. I will give you the full Jupyter notebook in some time.
Can you send the Jupyter notebook? I have a project due.
@sumitt, can you please share your Colab file?
https://colab.research.google.com/drive/1ajsjzgKFBTfV9_aLhnG4q_H8U9GPpMId?usp=sharing
Hi everyone,
I’ve been trying to run the ai4bharat/IndicF5 model from Hugging Face. I successfully authenticated with my HF token and followed the instructions provided on the model’s page.
I have attempted to run it in different environments, including RunPod and Google Colab, but I keep running into the same errors (attached below).
Has anyone faced a similar issue and managed to resolve it? Any guidance or solutions would be greatly appreciated.
Thanks in advance!
Error Message:
NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
Full Error:
NotImplementedError                       Traceback (most recent call last)
Cell In[3], line 7
      5 # Load IndicF5 from Hugging Face
      6 repo_id = "ai4bharat/IndicF5"
----> 7 model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)

File /usr/local/lib/python3.11/dist-packages/transformers/models/auto/auto_factory.py:597, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    595     model_class.register_for_auto_class(auto_class=cls)
    596     model_class = add_generation_mixin_to_remote_model(model_class)
--> 597 return model_class.from_pretrained(
    598     pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
    599 )
    600 elif type(config) in cls._model_mapping:
    601     model_class = _get_model_class(config, cls._model_mapping)

File /usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py:288, in restore_default_dtype.<locals>._wrapper(*args, **kwargs)
    286 old_dtype = torch.get_default_dtype()
    287 try:
--> 288     return func(*args, **kwargs)
    289 finally:
    290     torch.set_default_dtype(old_dtype)

File /usr/local/lib/python3.11/dist-packages/transformers/modeling_utils.py:5106, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, weights_only, *model_args, **kwargs)
   5103 config = copy.deepcopy(config)  # We do not want to modify the config inplace in from_pretrained.
   5104 with ContextManagers(model_init_context):
   5105     # Let's make sure we don't run the init function of buffer modules
-> 5106     model = cls(config, *model_args, **model_kwargs)
   5108 # Make sure to tie the weights correctly
   5109 model.tie_weights()

File ~/.cache/huggingface/modules/transformers_modules/ai4bharat/IndicF5/b82d286220e3070e171f4ef4b4bd047b9a447c9a/model.py:43, in INF5Model.__init__(self, config)
     40 device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
     42 # Load vocoder
---> 43 self.vocoder = torch.compile(load_vocoder(vocoder_name="vocos", is_local=False, device=device))
     45 # Download and load model weights
     46 # safetensors_path = hf_hub_download(config.name_or_path, filename="model.safetensors")
     47 # print(f"Loading model weights from {safetensors_path} (safetensors)...")
     48 # state_dict = load_file(safetensors_path, device=str(device))
     50 # Download vocab.txt from HF Hub
     51 vocab_path = hf_hub_download(config.name_or_path, filename="checkpoints/vocab.txt")

File /usr/local/lib/python3.11/dist-packages/f5_tts/infer/utils_infer.py:115, in load_vocoder(vocoder_name, is_local, local_path, device, hf_cache_dir)
    113     state_dict.update(encodec_parameters)
    114     vocoder.load_state_dict(state_dict)
--> 115     vocoder = vocoder.eval().to(device)
    116 elif vocoder_name == "bigvgan":
    117     try:

File /usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py:1355, in Module.to(self, *args, **kwargs)
   1352 else:
   1353     raise
-> 1355 return self._apply(convert)

File /usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py:915, in Module._apply(self, fn, recurse)
    913 if recurse:
    914     for module in self.children():
--> 915         module._apply(fn)
    917 def compute_should_use_set_data(tensor, tensor_applied):
    918     if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):
    919         # If the new tensor has compatible tensor type as the existing tensor,
    920         # the current behavior is to change the tensor in-place using .data =,
    (...)
    925         # global flag to let the user control whether they want the future
    926         # behavior of overwriting the existing tensor or not.

File /usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py:915, in Module._apply(self, fn, recurse)
    913 if recurse:
    914     for module in self.children():
--> 915         module._apply(fn)
    917 def compute_should_use_set_data(tensor, tensor_applied):
    918     if torch._has_compatible_shallow_copy_type(tensor, tensor_applied):
    919         # If the new tensor has compatible tensor type as the existing tensor,
    920         # the current behavior is to change the tensor in-place using .data =,
    (...)
    925         # global flag to let the user control whether they want the future
    926         # behavior of overwriting the existing tensor or not.

File /usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py:942, in Module._apply(self, fn, recurse)
    938 # Tensors stored in modules are graph leaves, and we don't want to
    939 # track autograd history of param_applied, so we have to use
    940 # with torch.no_grad():
    941 with torch.no_grad():
--> 942     param_applied = fn(param)
    943 p_should_use_set_data = compute_should_use_set_data(param, param_applied)
    945 # subclasses may have multiple child tensors so we need to use swap_tensors

File /usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py:1348, in Module.to.<locals>.convert(t)
   1346 except NotImplementedError as e:
   1347     if str(e) == "Cannot copy out of meta tensor; no data!":
-> 1348         raise NotImplementedError(
   1349             f"{e} Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() "
   1350             f"when moving module from meta to a different device."
   1351         ) from None
   1352     else:
   1353         raise

NotImplementedError: Cannot copy out of meta tensor; no data! Please use torch.nn.Module.to_empty() instead of torch.nn.Module.to() when moving module from meta to a different device.
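For anyone landing here later, this is a sketch of a single Colab cell combining what was suggested in the thread: pin transformers==4.50.3 (and restart the runtime after installing), authenticate, then load the model. Only the version pin and the repo id come from this thread; the surrounding setup is an assumption about a fresh Colab environment:

```python
# Run once, then restart the runtime so the pinned version is picked up:
# !pip install transformers==4.50.3 f5-tts

from huggingface_hub import login
from transformers import AutoModel

login()  # paste your HF token when prompted, as the original poster did

repo_id = "ai4bharat/IndicF5"
model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
print("Loaded:", type(model).__name__)
```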