RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cuda:5)

#2
by freebsdx - opened
  • Hardware (GPU information from nvidia-smi):
    Fri Jan 23 15:46:26 2026
    +-----------------------------------------------------------------------------------------+
    | NVIDIA-SMI 590.48.01              Driver Version: 590.48.01      CUDA Version: 13.1     |
    +-----------------------------------------+------------------------+----------------------+
    | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
    |                                         |                        |               MIG M. |
    |=========================================+========================+======================|
    |   0  NVIDIA H100                    Off |   00000000:16:00.0 Off |                    0 |
    | N/A   31C    P0             77W /  700W |       4MiB /  81559MiB |      0%      Default |
    |                                         |                        |             Disabled |
    +-----------------------------------------+------------------------+----------------------+
    |   1  NVIDIA H100                    Off |   00000000:27:00.0 Off |                    0 |
    | N/A   32C    P0             81W /  700W |       4MiB /  81559MiB |      0%      Default |
    |                                         |                        |             Disabled |
    +-----------------------------------------+------------------------+----------------------+
    |   2  NVIDIA H100                    Off |   00000000:38:00.0 Off |                    0 |
    | N/A   32C    P0             81W /  700W |       4MiB /  81559MiB |      0%      Default |
    |                                         |                        |             Disabled |
    +-----------------------------------------+------------------------+----------------------+
    |   3  NVIDIA H100                    Off |   00000000:98:00.0 Off |                    0 |
    | N/A   31C    P0             77W /  700W |       4MiB /  81559MiB |      0%      Default |
    |                                         |                        |             Disabled |
    +-----------------------------------------+------------------------+----------------------+
    |   4  NVIDIA H100                    Off |   00000000:A8:00.0 Off |                    0 |
    | N/A   29C    P0             87W /  700W |       4MiB /  81559MiB |      0%      Default |
    |                                         |                        |             Disabled |
    +-----------------------------------------+------------------------+----------------------+
    |   5  NVIDIA H100                    Off |   00000000:B8:00.0 Off |                    0 |
    | N/A   31C    P0             73W /  700W |       4MiB /  81559MiB |      0%      Default |
    |                                         |                        |             Disabled |
    +-----------------------------------------+------------------------+----------------------+

    +-----------------------------------------------------------------------------------------+
    | Processes:                                                                              |
    |  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
    |        ID   ID                                                               Usage      |
    |=========================================================================================|
    |  No running processes found                                                             |
    +-----------------------------------------------------------------------------------------+

  • Model name: allenai/Molmo2-4B
  • Code: General Video QA (from here: https://huggingface.co/allenai/Molmo2-4B)
  • Issue (full output):
    python genernal_video_qa.py
    Using a slow image processor as use_fast is unset and a slow processor was saved with this model. use_fast=True will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with use_fast=False.
    Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4/4 [00:04<00:00, 1.16s/it]
    /home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/accelerate/utils/modeling.py:1598: UserWarning: The following device_map keys do not match any submodules in the model: ['model.vision_backbone.image_vit.positional_embedding']
    warnings.warn(
    2026-01-23 15:45:41.734 | DEBUG | __main__::29 - models loaded
    2026-01-23 15:45:41.734 | DEBUG | main::43 - input messages: [{'role': 'user', 'content': [{'type': 'text', 'text': 'Which animal appears in the video?'}, {'type': 'video', 'video': '/home/user/projects/molmo2/many_penguins.mp4'}]}]
    2026-01-23 15:45:42.324 | DEBUG | main::54 - processor.apply_chat_template: done
    Traceback (most recent call last):
    File "/home/user/projects/molmo2/genernal_video_qa.py", line 60, in <module>
    generated_ids = model.generate(**inputs, max_new_tokens=2048)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 124, in decorate_context
    return func(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/transformers/generation/utils.py", line 2564, in generate
    result = decoding_method(
    ^^^^^^^^^^^^^^^^
    File "/home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/transformers/generation/utils.py", line 2784, in _sample
    outputs = self(**model_inputs, return_dict=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
    return forward_call(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/accelerate/hooks.py", line 175, in new_forward
    output = module._old_forward(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/transformers/utils/generic.py", line 918, in wrapper
    output = func(self, *args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/.cache/huggingface/modules/transformers_modules/_07c77337853043b7e32909c8722a3db4253e0b13/modeling_molmo2.py", line 1652, in forward
    outputs = self.model(
    ^^^^^^^^^^^
    File "/home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
    return forward_call(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/transformers/utils/generic.py", line 918, in wrapper
    output = func(self, *args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/.cache/huggingface/modules/transformers_modules/_07c77337853043b7e32909c8722a3db4253e0b13/modeling_molmo2.py", line 1503, in forward
    inputs_embeds, image_features = self.build_input_embeddings(
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/.cache/huggingface/modules/transformers_modules/_07c77337853043b7e32909c8722a3db4253e0b13/modeling_molmo2.py", line 1444, in build_input_embeddings
    image_features = self.vision_backbone(images, token_pooling).to(x.device)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1776, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/anaconda3/envs/molmo2/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1787, in _call_impl
    return forward_call(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "/home/user/.cache/huggingface/modules/transformers_modules/_07c77337853043b7e32909c8722a3db4253e0b13/modeling_molmo2.py", line 456, in forward
    to_pool = image_features.reshape(batch_size, -1, dim)[batch_idx, torch.clip(pooled_patches_idx, 0)]
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cuda:5)
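The failing line does advanced indexing where the index tensors (`batch_idx`, `pooled_patches_idx`) apparently end up on a different GPU than `image_features`; PyTorch requires index tensors to live on the CPU or on the same device as the tensor being indexed. A minimal sketch of the indexing pattern, with the index tensors moved to the indexed tensor's device first (the `.to(flat.device)` calls are my hypothetical fix, not part of the original code):

```python
import torch

def gather_pooled_patches(image_features, batch_idx, pooled_patches_idx):
    """Sketch of the indexing done at modeling_molmo2.py line 456, plus a
    device move on the index tensors (the hypothetical fix)."""
    batch_size, dim = image_features.shape[0], image_features.shape[-1]
    flat = image_features.reshape(batch_size, -1, dim)
    # Advanced indexing needs indices on CPU or on flat.device; the reported
    # RuntimeError means they sat on a different CUDA device (cuda:5).
    batch_idx = batch_idx.to(flat.device)
    pooled_patches_idx = pooled_patches_idx.to(flat.device)
    return flat[batch_idx, torch.clip(pooled_patches_idx, 0)]

# CPU smoke test of the shapes involved
feats = torch.arange(2 * 4 * 3, dtype=torch.float32).reshape(2, 4, 3)
b_idx = torch.tensor([[0], [1]])
p_idx = torch.tensor([[2], [-1]])  # -1 is clipped to 0, as in the original
out = gather_pooled_patches(feats, b_idx, p_idx)
print(out.shape)  # torch.Size([2, 1, 3])
```

On a single device the move is a no-op, so a patch like this should be harmless when the model is not sharded.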

Note that the same code (General Video QA) runs perfectly with the allenai/Molmo2-8B model.
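In case it helps others hitting this: one possible workaround (a guess on my side, not verified with this model) is to avoid multi-GPU sharding entirely, either by restricting the process with CUDA_VISIBLE_DEVICES to a single GPU or by pinning the whole model to one device at load time. The model class and keyword arguments below are assumptions based on standard transformers usage, not taken from the Molmo2 model card:

```python
# Hypothetical single-GPU loading sketch; "cuda:0" and the kwargs are assumptions.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "allenai/Molmo2-4B",
    trust_remote_code=True,     # the model ships custom modeling code
    torch_dtype="auto",
    device_map={"": "cuda:0"},  # pin every submodule to a single device
)
```

With everything on one device, no cross-device index tensors can arise from accelerate's device_map placement.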
Thanks for any ideas. πŸ˜€
