ROBOMIMIC WARNING(
    No private macro file found!
    It is recommended to use a private macro file
    To setup, run: python /home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/robomimic/scripts/setup_macros.py
)
Warning: unknown parameter wm_ckpt
Warning: unknown parameter target_modules
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: yukizhang0527 (yukizhang0527-harbin-institute-of-technology). Use `wandb login --relogin` to force relogin
wandb: - Waiting for wandb.init()...
wandb: Tracking run with wandb version 0.18.3
wandb: Run data is saved locally in /home/zxn/Forewarn/wandb/run-20260409_101818-q6fxa9lv
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run vlm_lora_stack_cups
wandb: ⭐️ View project at https://wandb.ai/yukizhang0527-harbin-institute-of-technology/llama_recipes
wandb: 🚀 View run at https://wandb.ai/yukizhang0527-harbin-institute-of-technology/llama_recipes/runs/q6fxa9lv
Warning: custom_dataset does not accept parameter: custom_dataset.task_name
Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
Model /data/colosseum_dataset/wm_data/mllama_base/Llama-3.2-11B-Vision-Instruct/custom --> /data/colosseum_dataset/wm_data/mllama_base/Llama-3.2-11B-Vision-Instruct/custom has 9808.885776 Million params
loading world model from ckpt path None
Traceback (most recent call last):
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/serialization.py", line 540, in _check_seekable
    f.seek(f.tell())
AttributeError: 'NoneType' object has no attribute 'seek'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zxn/Forewarn/vlm/llama-recipes/recipes/quickstart/finetuning/finetuning_wm.py", line 9, in <module>
    fire.Fire(main)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/fire/core.py", line 135, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/fire/core.py", line 468, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/llama_recipes/finetuning_wm.py", line 355, in main
    model.initialize_vision_model()
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/llama_recipes/models/mllama_model.py", line 140, in initialize_vision_model
    self.wm_model = Dreamer.from_pretrained(path = wm_config.from_ckpt, obs_space = obs_space,
  File "/home/zxn/Forewarn/model_based_irl_torch/dreamer/dreamer.py", line 62, in from_pretrained
    ckpt = torch.load(path)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/serialization.py", line 997, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/serialization.py", line 449, in _open_file_like
    return _open_buffer_reader(name_or_buffer)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/serialization.py", line 434, in __init__
    _check_seekable(buffer)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/serialization.py", line 543, in _check_seekable
    raise_err_msg(["seek", "tell"], e)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/serialization.py", line 536, in raise_err_msg
    raise type(e)(msg)
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
wandb: 🚀 View run vlm_lora_stack_cups at: https://wandb.ai/yukizhang0527-harbin-institute-of-technology/llama_recipes/runs/q6fxa9lv
wandb: Find logs at: wandb/run-20260409_101818-q6fxa9lv/logs
E0409 10:18:37.779480 140104379844416 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 672684) of binary: /home/zxn/anaconda3/envs/dreamer/bin/python3.10
Traceback (most recent call last):
  File "/home/zxn/anaconda3/envs/dreamer/bin/torchrun", line 6, in <module>
    sys.exit(main())
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/home/zxn/Forewarn/vlm/llama-recipes/recipes/quickstart/finetuning/finetuning_wm.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2026-04-09_10:18:37
  host      : node0029
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 672684)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
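Reading the log: the script warns "unknown parameter wm_ckpt", then prints "loading world model from ckpt path None", and the crash is `Dreamer.from_pretrained` calling `torch.load(None)`, which surfaces as an opaque `AttributeError` deep inside `torch.serialization`. A sketch of a defensive check that fails early with a readable message is below; the helper name `resolve_wm_ckpt` is hypothetical (the real argument plumbing lives in `llama_recipes` and `Dreamer.from_pretrained`), and the intent is simply to validate the checkpoint path before it reaches `torch.load`.

```python
import os


def resolve_wm_ckpt(path):
    """Validate a world-model checkpoint path before torch.load.

    Hypothetical helper: turns the silent `path=None` case (a dropped
    `wm_ckpt` CLI argument) into an explicit, actionable error instead
    of the AttributeError raised inside torch.serialization.
    """
    if path is None:
        # Matches the failure mode in the log: the 'wm_ckpt' argument was
        # not accepted by the config, so the path fell through as None.
        raise ValueError(
            "world-model checkpoint path is None; check that the 'wm_ckpt' "
            "argument is spelled correctly and accepted by the config "
            "(the log shows 'Warning: unknown parameter wm_ckpt')"
        )
    if not os.path.isfile(path):
        raise FileNotFoundError(f"world-model checkpoint not found: {path}")
    return path
```

The validated path would then be passed to `torch.load(resolve_wm_ckpt(wm_config.from_ckpt))`, so a misconfigured run aborts with a message naming the bad argument rather than a traceback eleven frames deep.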