nohup: ignoring input
ROBOMIMIC WARNING(
    No private macro file found!
    It is recommended to use a private macro file
    To setup, run: python /home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/robomimic/scripts/setup_macros.py
)
Warning: unknown parameter wm_ckpt
Warning: unknown parameter target_modules
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: yukizhang0527 (yukizhang0527-harbin-institute-of-technology). Use `wandb login --relogin` to force relogin
wandb: - Waiting for wandb.init()...
wandb: \ Waiting for wandb.init()...
wandb: Tracking run with wandb version 0.18.3
wandb: Run data is saved locally in /home/zxn/Forewarn/wandb/run-20260409_104618-abl48zp6
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run eternal-sponge-63
wandb: ⭐️ View project at https://wandb.ai/yukizhang0527-harbin-institute-of-technology/llama_recipes
wandb: πŸš€ View run at https://wandb.ai/yukizhang0527-harbin-institute-of-technology/llama_recipes/runs/abl48zp6
Warning: custom_dataset does not accept parameter: custom_dataset.task_name

Loading checkpoint shards:   0%|          | 0/4 [00:00<?, ?it/s]
Loading checkpoint shards:  25%|β–ˆβ–ˆβ–Œ       | 1/4 [00:00<00:00,  5.82it/s]
Loading checkpoint shards:  50%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆ     | 2/4 [00:00<00:00,  4.85it/s]
Loading checkpoint shards:  75%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ  | 3/4 [00:00<00:00,  4.74it/s]
Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4/4 [00:00<00:00,  4.72it/s]
Loading checkpoint shards: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4/4 [00:00<00:00,  4.80it/s]
--> Model /data/colosseum_dataset/wm_data/mllama_base/Llama-3.2-11B-Vision-Instruct/custom

--> /data/colosseum_dataset/wm_data/mllama_base/Llama-3.2-11B-Vision-Instruct/custom has 9808.885776 Million params

loading world model from ckpt path None
Traceback (most recent call last):
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/serialization.py", line 540, in _check_seekable
    f.seek(f.tell())
AttributeError: 'NoneType' object has no attribute 'seek'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/zxn/Forewarn/vlm/llama-recipes/recipes/quickstart/finetuning/finetuning_wm.py", line 9, in <module>
    fire.Fire(main)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/fire/core.py", line 135, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/fire/core.py", line 468, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/fire/core.py", line 684, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/llama_recipes/finetuning_wm.py", line 355, in main
    model.initialize_vision_model()
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/llama_recipes/models/mllama_model.py", line 140, in initialize_vision_model
    self.wm_model = Dreamer.from_pretrained(path = wm_config.from_ckpt, obs_space = obs_space,
  File "/home/zxn/Forewarn/model_based_irl_torch/dreamer/dreamer.py", line 62, in from_pretrained
    ckpt = torch.load(path)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/serialization.py", line 997, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/serialization.py", line 449, in _open_file_like
    return _open_buffer_reader(name_or_buffer)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/serialization.py", line 434, in __init__
    _check_seekable(buffer)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/serialization.py", line 543, in _check_seekable
    raise_err_msg(["seek", "tell"], e)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/serialization.py", line 536, in raise_err_msg
    raise type(e)(msg)
AttributeError: 'NoneType' object has no attribute 'seek'. You can only torch.load from a file that is seekable. Please pre-load the data into a buffer like io.BytesIO and try to load from it instead.
wandb: πŸš€ View run eternal-sponge-63 at: https://wandb.ai/yukizhang0527-harbin-institute-of-technology/llama_recipes/runs/abl48zp6
wandb: Find logs at: wandb/run-20260409_104618-abl48zp6/logs
E0409 10:46:32.241449 139940508129088 torch/distributed/elastic/multiprocessing/api.py:826] failed (exitcode: 1) local_rank: 0 (pid: 688539) of binary: /home/zxn/anaconda3/envs/dreamer/bin/python3.10
Traceback (most recent call last):
  File "/home/zxn/anaconda3/envs/dreamer/bin/torchrun", line 6, in <module>
    sys.exit(main())
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 347, in wrapper
    return f(*args, **kwargs)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/run.py", line 879, in main
    run(args)
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/run.py", line 870, in run
    elastic_launch(
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/zxn/anaconda3/envs/dreamer/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 263, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError: 
============================================================
/home/zxn/Forewarn/vlm/llama-recipes/recipes/quickstart/finetuning/finetuning_wm.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2026-04-09_10:46:32
  host      : node0029
  rank      : 0 (local_rank: 0)
  exitcode  : 1 (pid: 688539)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
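Diagnosis note: the log shows `Warning: unknown parameter wm_ckpt`, so the checkpoint path flag was silently dropped, `wm_config.from_ckpt` stayed `None` ("loading world model from ckpt path None"), and `torch.load(None)` failed with the opaque `'NoneType' object has no attribute 'seek'`. A minimal sketch of a guard that would fail fast instead (the function name and message are hypothetical, not from the actual `Dreamer.from_pretrained` code):

```python
import os


def load_world_model_ckpt(path):
    # Hypothetical guard: validate the checkpoint path before handing it
    # to torch.load, so a dropped/misnamed CLI flag (e.g. wm_ckpt) surfaces
    # as a clear error instead of AttributeError on None.seek.
    if path is None:
        raise ValueError(
            "wm_config.from_ckpt is None -- pass a checkpoint path; "
            "the launcher warned 'unknown parameter wm_ckpt', so the flag "
            "name likely does not match the config field it should set."
        )
    if not os.path.isfile(path):
        raise FileNotFoundError(f"world-model checkpoint not found: {path}")
    import torch  # imported lazily so the None-path check needs no torch
    return torch.load(path, map_location="cpu")
```

With this guard, the run would abort at config time with an actionable message rather than deep inside `torch/serialization.py`.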