CHYang25 committed
Commit 5a8367f · verified · 1 Parent(s): 9dcbca9

Upload folder using huggingface_hub

Files changed (47)
  1. .gitattributes +1 -0
  2. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/.hydra/config.yaml +115 -0
  3. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/.hydra/hydra.yaml +154 -0
  4. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/.hydra/overrides.yaml +1 -0
  5. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/README.md +202 -0
  6. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/adapter_config.json +38 -0
  7. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/adapter_model.safetensors +3 -0
  8. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/README.md +202 -0
  9. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/adapter_config.json +38 -0
  10. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/adapter_model.safetensors +3 -0
  11. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/config.json +48 -0
  12. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/mlp_projector.bin +3 -0
  13. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/optimizer.pt +3 -0
  14. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/rng_state.pth +3 -0
  15. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/scheduler.pt +3 -0
  16. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/special_tokens_map.json +24 -0
  17. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/tokenizer.json +0 -0
  18. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/tokenizer.model +3 -0
  19. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/tokenizer_config.json +0 -0
  20. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/trainer_state.json +0 -0
  21. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/training_args.bin +3 -0
  22. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/README.md +202 -0
  23. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/adapter_config.json +38 -0
  24. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/adapter_model.safetensors +3 -0
  25. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/config.json +48 -0
  26. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/mlp_projector.bin +3 -0
  27. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/optimizer.pt +3 -0
  28. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/rng_state.pth +3 -0
  29. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/scheduler.pt +3 -0
  30. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/special_tokens_map.json +24 -0
  31. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/tokenizer.json +0 -0
  32. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/tokenizer.model +3 -0
  33. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/tokenizer_config.json +0 -0
  34. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/trainer_state.json +0 -0
  35. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/training_args.bin +3 -0
  36. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/normalizer.pt +3 -0
  37. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/train.log +11 -0
  38. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/debug-internal.log +19 -0
  39. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/debug.log +35 -0
  40. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/files/config.yaml +749 -0
  41. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/files/output.log +0 -0
  42. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/files/wandb-metadata.json +55 -0
  43. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/files/wandb-summary.json +1 -0
  44. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/logs/debug-core.log +16 -0
  45. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/logs/debug-internal.log +19 -0
  46. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/logs/debug.log +35 -0
  47. 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/run-x1m9280b.wandb +3 -0
.gitattributes CHANGED
@@ -171,3 +171,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
2026.03.17/12.46.50_train_llm_lowdim_box-close-v2/wandb/run-20260317_124654-kqfw3lgo/run-kqfw3lgo.wandb filter=lfs diff=lfs merge=lfs -text
2026.03.17/16.13.39_train_rbc_reward_model_box-close-v2/logs.json.txt filter=lfs diff=lfs merge=lfs -text
2026.03.17/16.13.39_train_rbc_reward_model_box-close-v2/wandb/run-20260317_161342-ydactfyz/run-ydactfyz.wandb filter=lfs diff=lfs merge=lfs -text
+ 2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/run-x1m9280b.wandb filter=lfs diff=lfs merge=lfs -text
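Each rule here routes the matching file through Git LFS, so the repository stores only a small pointer file (the `version`/`oid`/`size` stanzas visible for the binary files later in this diff). A minimal sketch of reading such a pointer; the `parse_lfs_pointer` helper is hypothetical, written just for illustration:

```python
# Minimal sketch: parse a Git LFS pointer file into its fields.
# The three-line "version / oid / size" layout matches the pointer
# stanzas shown for the binary files later in this commit.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:75753f1f0c86677fa1e6c69e014448ccb5af5655d115db5ac002e6bace0c9d71
size 154365949"""

info = parse_lfs_pointer(pointer)
print(info["oid"], int(info["size"]))  # sha256:7575... 154365949
```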
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/.hydra/config.yaml ADDED
@@ -0,0 +1,115 @@
+ name: train_llm_lowdim
+ _target_: llmbc.workspace.train_llm_workspace.TrainLLMWorkspace
+ obs_dim: ${task.obs_dim}
+ action_dim: ${task.action_dim}
+ horizon: 1
+ n_obs_steps: 1
+ n_action_steps: 1
+ task_name: ${task.name}
+ exp_name: train llm
+ model_name: ${llm.name}
+ use_quantization: ${llm.use_quantization}
+ lora_config: ${llm.lora_config}
+ dataset:
+   test_data_ratio: 0.01
+ debug: false
+ training:
+   seed: 42
+   per_device_train_batch_size: 32
+   per_device_eval_batch_size: 32
+   gradient_accumulation_steps: 4
+   optim: paged_adamw_32bit
+   num_train_epochs: 10
+   eval_strategy: steps
+   logging_steps: 1
+   warmup_steps: 1000
+   logging_strategy: steps
+   learning_rate: 1.0e-05
+   fp16: false
+   bf16: true
+   tf32: true
+   group_by_length: true
+   report_to: wandb
+   save_steps: 5000
+   eval_steps: 10
+   use_joint_mlp_projector: ${llm.use_joint_mlp_projector}
+   joint_obs_action_mlp_lr: 5.0e-06
+ trainer:
+   obs_dim: ${obs_dim}
+   action_dim: ${action_dim}
+   use_joint_mlp_projector: ${llm.use_joint_mlp_projector}
+   max_seq_length: ${llm.max_length}
+   dataset_text_field: text
+   packing: false
+ logging:
+   project: llm_module_finetuning
+   resume: true
+   mode: online
+   name: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
+   tags:
+   - ${name}
+   - ${task_name}
+   - ${exp_name}
+   id: null
+   group: null
+ multi_run:
+   run_dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
+   wandb_name_base: ${now:%Y.%m.%d-%H.%M.%S}_${name}_${task_name}
+ task:
+   name: box-close-v2
+   obs_dim: 9
+   action_dim: 4
+   env_runner:
+     _target_: llmbc.env_runner.metaworld_lowdim_runner.MetaworldLowdimRunner
+     env_name: llf-metaworld-box-close-v2
+     n_train: 10
+     n_test: 50
+     n_envs: 10
+     max_steps: 30
+     n_obs_steps: ${n_obs_steps}
+     n_action_steps: ${n_action_steps}
+     instruction_type: b
+     feedback_type:
+     - hp
+     - hn
+     - fp
+     visual: false
+     discount: 0.9
+   dataset:
+     _target_: llmbc.dataset.metaworld_lowdim_dataset.MetaworldLowdimDataset
+     data_path: datasets/box-close-v2-general.pt
+     data_path2: datasets/box-close-v2.pt
+     horizon: ${horizon}
+     pad_before: ${eval:'${n_obs_steps}-1'}
+     pad_after: ${eval:'${n_action_steps}-1'}
+     obs_eef_target: true
+     use_manual_normalizer: false
+     val_ratio: 0.05
+     dummy_normalizer: false
+   instructor:
+     _target_: llmbc.translator.instructor.metaworld_instructor.box_close_v2_instructor.BoxCloseV2Instructor
+ llm:
+   name: mistralai/Mistral-7B-Instruct-v0.3
+   model_name: Mistral-7B-Instruct-v0.3
+   config_target: llmbc.model.llm.mistral_lowdim_model.LowdimMistralConfig
+   causal_lm_target: llmbc.model.llm.mistral_lowdim_model.LowdimMistralForCausalLM
+   use_quantization: true
+   use_joint_mlp_projector: true
+   llm_mode: mlp-finetuned
+   finetune_mode: lora
+   checkpoint: data/outputs/2026.03.14/23.40.55_train_mlp_projector_box-close-v2/checkpoints/latest.ckpt
+   max_length: 100
+   lora_config:
+     r: 8
+     lora_alpha: 16
+     lora_dropout: 0.05
+     bias: none
+     task_type: CAUSAL_LM
+   prompter:
+     _target_: llmbc.translator.prompter.mistral_prompter.MistralPrompter
+     use_joint_mlp_projector: true
+   hydra:
+     job:
+       override_dirname: ${model_name}
+     run:
+       dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${model_name}
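The `pad_before`/`pad_after` entries use an `${eval:...}` interpolation, which is not an OmegaConf built-in; the project presumably registers a custom resolver before composing the config. A minimal sketch of how those values resolve, assuming the resolver is simply Python `eval`:

```python
# Minimal sketch: resolve the ${eval:'...'} interpolations used above.
# "eval" is not a built-in OmegaConf resolver; registering it this way
# is an assumption about how the project wires it up.
from omegaconf import OmegaConf

OmegaConf.register_new_resolver("eval", eval, replace=True)

cfg = OmegaConf.create(
    {
        "n_obs_steps": 1,
        "n_action_steps": 1,
        "dataset": {
            "pad_before": "${eval:'${n_obs_steps}-1'}",
            "pad_after": "${eval:'${n_action_steps}-1'}",
        },
    }
)

# With n_obs_steps = n_action_steps = 1, both paddings resolve to 0,
# matching the resolved values recorded in wandb/debug.log below.
print(cfg.dataset.pad_before, cfg.dataset.pad_after)  # 0 0
```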
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/.hydra/hydra.yaml ADDED
@@ -0,0 +1,154 @@
+ hydra:
+   run:
+     dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
+   sweep:
+     dir: data/outputs/${now:%Y.%m.%d}/${now:%H.%M.%S}_${name}_${task_name}
+     subdir: ${hydra.job.num}
+   launcher:
+     _target_: hydra._internal.core_plugins.basic_launcher.BasicLauncher
+   sweeper:
+     _target_: hydra._internal.core_plugins.basic_sweeper.BasicSweeper
+     max_batch_size: null
+     params: null
+   help:
+     app_name: ${hydra.job.name}
+     header: '${hydra.help.app_name} is powered by Hydra.
+
+       '
+     footer: 'Powered by Hydra (https://hydra.cc)
+
+       Use --hydra-help to view Hydra specific help
+
+       '
+     template: '${hydra.help.header}
+
+       == Configuration groups ==
+
+       Compose your configuration from those groups (group=option)
+
+
+       $APP_CONFIG_GROUPS
+
+
+       == Config ==
+
+       Override anything in the config (foo.bar=value)
+
+
+       $CONFIG
+
+
+       ${hydra.help.footer}
+
+       '
+   hydra_help:
+     template: 'Hydra (${hydra.runtime.version})
+
+       See https://hydra.cc for more info.
+
+
+       == Flags ==
+
+       $FLAGS_HELP
+
+
+       == Configuration groups ==
+
+       Compose your configuration from those groups (For example, append hydra/job_logging=disabled
+       to command line)
+
+
+       $HYDRA_CONFIG_GROUPS
+
+
+       Use ''--cfg hydra'' to Show the Hydra config.
+
+       '
+     hydra_help: ???
+   hydra_logging:
+     version: 1
+     formatters:
+       simple:
+         format: '[%(asctime)s][HYDRA] %(message)s'
+     handlers:
+       console:
+         class: logging.StreamHandler
+         formatter: simple
+         stream: ext://sys.stdout
+     root:
+       level: INFO
+       handlers:
+       - console
+     loggers:
+       logging_example:
+         level: DEBUG
+     disable_existing_loggers: false
+   job_logging:
+     version: 1
+     formatters:
+       simple:
+         format: '[%(asctime)s][%(name)s][%(levelname)s] - %(message)s'
+     handlers:
+       console:
+         class: logging.StreamHandler
+         formatter: simple
+         stream: ext://sys.stdout
+       file:
+         class: logging.FileHandler
+         formatter: simple
+         filename: ${hydra.runtime.output_dir}/${hydra.job.name}.log
+     root:
+       level: INFO
+       handlers:
+       - console
+       - file
+     disable_existing_loggers: false
+   env: {}
+   mode: RUN
+   searchpath: []
+   callbacks: {}
+   output_subdir: .hydra
+   overrides:
+     hydra:
+     - hydra.mode=RUN
+     task: []
+   job:
+     name: train
+     chdir: null
+     override_dirname: ''
+     id: ???
+     num: ???
+     config_name: llmdp_llm_box-close-v2_mistral-7b-instruct-v0.3.yaml
+     env_set: {}
+     env_copy: []
+     config:
+       override_dirname:
+         kv_sep: '='
+         item_sep: ','
+         exclude_keys: []
+   runtime:
+     version: 1.2.0
+     version_base: '1.2'
+     cwd: /tmp2/chyang/workspace/LLM-BC
+     config_sources:
+     - path: hydra.conf
+       schema: pkg
+       provider: hydra
+     - path: /tmp2/chyang/workspace/LLM-BC/config/llm_backbone
+       schema: file
+       provider: main
+     - path: ''
+       schema: structured
+       provider: schema
+     output_dir: /tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2
+   choices:
+     hydra/env: default
+     hydra/callbacks: null
+     hydra/job_logging: default
+     hydra/hydra_logging: default
+     hydra/hydra_help: default
+     hydra/help: default
+     hydra/sweeper: basic
+     hydra/launcher: basic
+     hydra/output: default
+   verbose: false
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/.hydra/overrides.yaml ADDED
@@ -0,0 +1 @@
+ []
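The `.hydra` snapshot makes the run reproducible: `config.yaml` is the fully composed config and `overrides.yaml` records the command-line overrides (empty for this run). A minimal sketch for reloading both from the run directory; it assumes the same `eval` resolver registration as in the sketch above:

```python
# Minimal sketch: reload this run's composed config and CLI overrides
# from the .hydra snapshot. The eval-resolver registration is the same
# assumption as in the previous sketch.
from omegaconf import OmegaConf

OmegaConf.register_new_resolver("eval", eval, replace=True)

run_dir = "2026.03.18/22.30.56_train_llm_lowdim_box-close-v2"
cfg = OmegaConf.load(f"{run_dir}/.hydra/config.yaml")
overrides = OmegaConf.load(f"{run_dir}/.hydra/overrides.yaml")

print(cfg.task.name)           # box-close-v2
print(cfg.llm.lora_config.r)   # 8
print(list(overrides))         # [] -- this run used no CLI overrides
```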
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/README.md ADDED
@@ -0,0 +1,202 @@
+ ---
+ base_model: mistralai/Mistral-7B-Instruct-v0.3
+ library_name: peft
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
+ ### Framework versions
+
+ - PEFT 0.14.0
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/adapter_config.json ADDED
@@ -0,0 +1,38 @@
+ {
+   "alpha_pattern": {},
+   "auto_mapping": null,
+   "base_model_name_or_path": "mistralai/Mistral-7B-Instruct-v0.3",
+   "bias": "none",
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 16,
+   "lora_bias": false,
+   "lora_dropout": 0.05,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "r": 8,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "down_proj",
+     "v_proj",
+     "k_proj",
+     "lm_head",
+     "up_proj",
+     "q_proj",
+     "gate_proj",
+     "o_proj"
+   ],
+   "task_type": "CAUSAL_LM",
+   "use_dora": false,
+   "use_rslora": false
+ }
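This adapter config maps one-to-one onto a PEFT `LoraConfig`. A sketch of rebuilding the same configuration in code; the values are copied from the JSON above, everything unspecified falls back to PEFT defaults, and this is an illustration rather than the project's actual training code:

```python
# Minimal sketch: the PEFT LoraConfig equivalent of adapter_config.json above.
# Rank-8 adapters with alpha 16 on all attention and MLP projections plus
# lm_head, trained for causal LM.
from peft import LoraConfig

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj", "lm_head",
    ],
    base_model_name_or_path="mistralai/Mistral-7B-Instruct-v0.3",
)
```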
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:75753f1f0c86677fa1e6c69e014448ccb5af5655d115db5ac002e6bace0c9d71
+ size 154365949
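To reuse the exported adapter, load the base model and attach these weights with PEFT. A minimal sketch assuming the folder layout from this commit; note that the project's custom `LowdimMistralForCausalLM` wrapper, `mlp_projector.bin`, and `normalizer.pt` are project-specific and are not restored by plain PEFT:

```python
# Minimal sketch: attach the exported LoRA adapter to the base model.
# The project's custom LowdimMistral wrapper, mlp_projector.bin and
# normalizer.pt are NOT handled here; plain PEFT only restores the adapter.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

adapter_dir = (
    "2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/"
    "mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2"
)

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
model = PeftModel.from_pretrained(base, adapter_dir)

# Tokenizer files live in the checkpoint subfolders, not the adapter root.
tokenizer = AutoTokenizer.from_pretrained(adapter_dir + "/checkpoint-5880")
```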
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/README.md ADDED
@@ -0,0 +1,202 @@
(Identical to the auto-generated PEFT model card in mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/README.md above.)
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/adapter_config.json ADDED
@@ -0,0 +1,38 @@
(Identical to mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/adapter_config.json above.)
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a1d3ccbc2d1bff91f403bf2f83d891e0947b255a9729ed4dfa7388db6da427bf
+ size 154365949
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/config.json ADDED
@@ -0,0 +1,48 @@
+ {
+   "_attn_implementation_autoset": true,
+   "_name_or_path": "mistralai/Mistral-7B-Instruct-v0.3",
+   "action_dim": 4,
+   "architectures": [
+     "MistralForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "head_dim": 128,
+   "hidden_act": "silu",
+   "hidden_size": 4096,
+   "initializer_range": 0.02,
+   "intermediate_size": 14336,
+   "max_position_embeddings": 32768,
+   "model_type": "mistral_lowdim",
+   "num_attention_heads": 32,
+   "num_hidden_layers": 32,
+   "num_key_value_heads": 8,
+   "obs_dim": 9,
+   "quantization_config": {
+     "_load_in_4bit": true,
+     "_load_in_8bit": false,
+     "bnb_4bit_compute_dtype": "bfloat16",
+     "bnb_4bit_quant_storage": "uint8",
+     "bnb_4bit_quant_type": "nf4",
+     "bnb_4bit_use_double_quant": true,
+     "llm_int8_enable_fp32_cpu_offload": false,
+     "llm_int8_has_fp16_weight": false,
+     "llm_int8_skip_modules": [
+       "joint_obs_action_projector"
+     ],
+     "llm_int8_threshold": 6.0,
+     "load_in_4bit": true,
+     "load_in_8bit": false,
+     "quant_method": "bitsandbytes"
+   },
+   "rms_norm_eps": 1e-05,
+   "rope_theta": 1000000.0,
+   "sliding_window": null,
+   "tie_word_embeddings": false,
+   "torch_dtype": "bfloat16",
+   "transformers_version": "4.47.1",
+   "use_cache": false,
+   "use_joint_mlp_projector": true,
+   "vocab_size": 32768
+ }
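The `quantization_config` block records 4-bit NF4 quantization with double quantization and bf16 compute, while the project's `joint_obs_action_projector` module is kept unquantized. A sketch of the equivalent `BitsAndBytesConfig`; the values are copied from the JSON above and this reconstructs the recorded settings, not the project's actual loading code:

```python
# Minimal sketch: the bitsandbytes config recorded in quantization_config
# above. llm_int8_skip_modules keeps the joint observation/action MLP
# projector in full precision while the rest of the model is 4-bit NF4.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_skip_modules=["joint_obs_action_projector"],
)
```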
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/mlp_projector.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2291fb207b9e8ff0aaab49e12bb914eecd0f0b0f11d5e31ec10dfeeaef263f63
+ size 67356864
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a17a019b5ad6b1540bb501e9647ba8fac1045e17e44f5a9a00e8fa4f47d05e3
+ size 305112506
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b066b3a37fb18b36ee0d4b205258257d4bcd65310f4c486b427394e2080d99f1
+ size 14244
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c50c3e97c48d246a11385d766ea14c9a26fa8602590afacafa00be824daaf5dc
+ size 1064
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "</s>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "</s>",
+   "unk_token": {
+     "content": "<unk>",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
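Note that `pad_token` is aliased to `</s>` (the EOS token), a common choice for Mistral models, which ship without a dedicated padding token. A quick check, assuming the tokenizer is loaded from this checkpoint directory:

```python
# Minimal sketch: confirm the pad token is aliased to EOS, as recorded in
# special_tokens_map.json (Mistral has no dedicated padding token).
from transformers import AutoTokenizer

ckpt = (
    "2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/"
    "mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000"
)
tokenizer = AutoTokenizer.from_pretrained(ckpt)
print(tokenizer.pad_token, tokenizer.eos_token)  # </s> </s>
assert tokenizer.pad_token_id == tokenizer.eos_token_id
```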
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37f00374dea48658ee8f5d0f21895b9bc55cb0103939607c8185bfd1c6ca1f89
+ size 587404
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5000/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f633cbc73694f0c37e178ad51e116e8cde4390accf0f74458b31e4225377613b
+ size 5944
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/README.md ADDED
@@ -0,0 +1,202 @@
(Identical to the auto-generated PEFT model card in mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/README.md above.)
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/adapter_config.json ADDED
@@ -0,0 +1,38 @@
(Identical to mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/adapter_config.json above.)
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:75753f1f0c86677fa1e6c69e014448ccb5af5655d115db5ac002e6bace0c9d71
+ size 154365949
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/config.json ADDED
@@ -0,0 +1,48 @@
(Identical to checkpoint-5000/config.json above.)
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/mlp_projector.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5d1a4f49ae341a396e725dc8e7e49e947e282b85ce828ef874ad875c9c7484c1
+ size 67356864
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f7e2c3c0158d370c6bf9b2a6c447ef9c35bad3f90bee2ac52476d24786e8a693
+ size 305112506
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c2a94c1a4456a30398e5ea527622f8e055b6818c0582b4ca87eb9e398f200712
+ size 14244
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8a5e7ff28843a09e39190ed8d73ee956ee41e68c89bd28c0be2d3ceff5c00ef6
+ size 1064
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
(Identical to checkpoint-5000/special_tokens_map.json above.)
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/tokenizer.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37f00374dea48658ee8f5d0f21895b9bc55cb0103939607c8185bfd1c6ca1f89
+ size 587404
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff
 
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
 
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/checkpoint-5880/training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f633cbc73694f0c37e178ad51e116e8cde4390accf0f74458b31e4225377613b
+ size 5944
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/normalizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8a1cb28cf65a776a736481b2a05aa3c4cf0d796a0ec84eaa0b9e4aa06bdc79aa
+ size 4514
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/train.log ADDED
@@ -0,0 +1,11 @@
+ [2026-03-18 22:30:56,869][numexpr.utils][INFO] - Note: NumExpr detected 24 cores but "NUMEXPR_MAX_THREADS" not set, so enforcing safe limit of 8.
+ [2026-03-18 22:30:56,869][numexpr.utils][INFO] - NumExpr defaulting to 8 threads.
+ [2026-03-18 22:31:00,129][datasets][INFO] - PyTorch version 2.2.2 available.
+ [2026-03-18 22:31:00,129][datasets][INFO] - TensorFlow version 2.15.1 available.
+ [2026-03-18 22:31:00,130][datasets][INFO] - JAX version 0.4.30 available.
+ [2026-03-18 22:31:24,560][datasets.arrow_dataset][WARNING] - Setting TOKENIZERS_PARALLELISM=false for forked processes.
+ [2026-03-18 22:31:35,877][datasets.arrow_dataset][WARNING] - Setting TOKENIZERS_PARALLELISM=false for forked processes.
+ [2026-03-18 22:31:41,531][root][INFO] - gcc -pthread -B /home/chyang/miniconda3/envs/llm-bc/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/chyang/miniconda3/envs/llm-bc/include -I/home/chyang/miniconda3/envs/llm-bc/include -fPIC -O2 -isystem /home/chyang/miniconda3/envs/llm-bc/include -fPIC -c /tmp/tmph6bq3krg/test.c -o /tmp/tmph6bq3krg/test.o
+ [2026-03-18 22:31:41,627][root][INFO] - gcc -pthread -B /home/chyang/miniconda3/envs/llm-bc/compiler_compat /tmp/tmph6bq3krg/test.o -laio -o /tmp/tmph6bq3krg/a.out
+ [2026-03-18 22:31:42,479][root][INFO] - gcc -pthread -B /home/chyang/miniconda3/envs/llm-bc/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /home/chyang/miniconda3/envs/llm-bc/include -I/home/chyang/miniconda3/envs/llm-bc/include -fPIC -O2 -isystem /home/chyang/miniconda3/envs/llm-bc/include -fPIC -c /tmp/tmpqkrxt68a/test.c -o /tmp/tmpqkrxt68a/test.o
+ [2026-03-18 22:31:42,552][root][INFO] - gcc -pthread -B /home/chyang/miniconda3/envs/llm-bc/compiler_compat /tmp/tmpqkrxt68a/test.o -L/usr/local/cuda -L/usr/local/cuda/lib64 -lcufile -o /tmp/tmpqkrxt68a/a.out
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/debug-internal.log ADDED
@@ -0,0 +1,19 @@
+ {"time":"2026-03-18T22:31:01.204991579+08:00","level":"INFO","msg":"using version","core version":"0.18.6"}
+ {"time":"2026-03-18T22:31:01.205010756+08:00","level":"INFO","msg":"created symlink","path":"/tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/logs/debug-core.log"}
+ {"time":"2026-03-18T22:31:01.312346847+08:00","level":"INFO","msg":"created new stream","id":"x1m9280b"}
+ {"time":"2026-03-18T22:31:01.312367301+08:00","level":"INFO","msg":"stream: started","id":"x1m9280b"}
+ {"time":"2026-03-18T22:31:01.312665109+08:00","level":"INFO","msg":"sender: started","stream_id":"x1m9280b"}
+ {"time":"2026-03-18T22:31:01.312430257+08:00","level":"INFO","msg":"handler: started","stream_id":{"value":"x1m9280b"}}
+ {"time":"2026-03-18T22:31:01.312656373+08:00","level":"INFO","msg":"writer: Do: started","stream_id":{"value":"x1m9280b"}}
+ {"time":"2026-03-18T22:31:02.758068808+08:00","level":"INFO","msg":"Starting system monitor"}
+ {"time":"2026-03-19T04:22:27.963176902+08:00","level":"INFO","msg":"api: retrying HTTP error","status":502,"url":"https://api.wandb.ai/files/chyang25-national-taiwan-university/llm_module_finetuning/x1m9280b/file_stream"}
+ {"time":"2026-03-19T05:59:47.962016684+08:00","level":"INFO","msg":"api: retrying HTTP error","status":502,"url":"https://api.wandb.ai/files/chyang25-national-taiwan-university/llm_module_finetuning/x1m9280b/file_stream"}
+ {"time":"2026-03-19T17:31:14.776552794+08:00","level":"INFO","msg":"Stopping system monitor"}
+ {"time":"2026-03-19T17:31:14.777008363+08:00","level":"INFO","msg":"Stopped system monitor"}
+ {"time":"2026-03-19T17:31:15.776985704+08:00","level":"INFO","msg":"handler: operation stats","stats":{"operations":[{"desc":"uploading wandb-summary.json","runtime_seconds":0.192680949,"progress":"515B/515B"},{"desc":"saving job artifact","runtime_seconds":0.014816469}],"total_operations":2}}
+ {"time":"2026-03-19T17:31:21.307731838+08:00","level":"INFO","msg":"fileTransfer: Close: file transfer manager closed"}
+ {"time":"2026-03-19T17:31:24.586012627+08:00","level":"INFO","msg":"stream: closing","id":"x1m9280b"}
+ {"time":"2026-03-19T17:31:24.586046207+08:00","level":"INFO","msg":"handler: closed","stream_id":{"value":"x1m9280b"}}
+ {"time":"2026-03-19T17:31:24.586077373+08:00","level":"INFO","msg":"writer: Close: closed","stream_id":{"value":"x1m9280b"}}
+ {"time":"2026-03-19T17:31:24.586176871+08:00","level":"INFO","msg":"sender: closed","stream_id":"x1m9280b"}
+ {"time":"2026-03-19T17:31:24.586233462+08:00","level":"INFO","msg":"stream: closed","id":"x1m9280b"}
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/debug.log ADDED
@@ -0,0 +1,35 @@
+ 2026-03-18 22:31:01,198 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Current SDK version is 0.18.6
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Configure stats pid to 2041405
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Loading settings from /home/chyang/.config/wandb/settings
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Loading settings from /tmp2/chyang/workspace/LLM-BC/wandb/settings
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Loading settings from environment variables: {}
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Applying setup settings: {'mode': 'online', '_disable_service': None}
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Inferring run settings from compute environment: {'program_relpath': 'train.py', 'program_abspath': '/tmp2/chyang/workspace/LLM-BC/train.py', 'program': '/tmp2/chyang/workspace/LLM-BC/./train.py'}
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Applying login settings: {}
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_init.py:_log_setup():533] Logging user logs to /tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/logs/debug.log
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_init.py:_log_setup():534] Logging internal logs to /tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/logs/debug-internal.log
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_init.py:init():619] calling init triggers
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_init.py:init():626] wandb.init called with sweep_config: {}
+ config: {'name': 'train_llm_lowdim', '_target_': 'llmbc.workspace.train_llm_workspace.TrainLLMWorkspace', 'obs_dim': 9, 'action_dim': 4, 'horizon': 1, 'n_obs_steps': 1, 'n_action_steps': 1, 'task_name': 'box-close-v2', 'exp_name': 'train llm', 'model_name': 'mistralai/Mistral-7B-Instruct-v0.3', 'use_quantization': True, 'lora_config': {'r': 8, 'lora_alpha': 16, 'lora_dropout': 0.05, 'bias': 'none', 'task_type': 'CAUSAL_LM'}, 'dataset': {'test_data_ratio': 0.01}, 'debug': False, 'training': {'seed': 42, 'per_device_train_batch_size': 32, 'per_device_eval_batch_size': 32, 'gradient_accumulation_steps': 4, 'optim': 'paged_adamw_32bit', 'num_train_epochs': 10, 'eval_strategy': 'steps', 'logging_steps': 1, 'warmup_steps': 1000, 'logging_strategy': 'steps', 'learning_rate': 1e-05, 'fp16': False, 'bf16': True, 'tf32': True, 'group_by_length': True, 'report_to': 'wandb', 'save_steps': 5000, 'eval_steps': 10, 'use_joint_mlp_projector': True, 'joint_obs_action_mlp_lr': 5e-06}, 'trainer': {'obs_dim': 9, 'action_dim': 4, 'use_joint_mlp_projector': True, 'max_seq_length': 100, 'dataset_text_field': 'text', 'packing': False}, 'logging': {'project': 'llm_module_finetuning', 'resume': True, 'mode': 'online', 'name': '2026.03.18-22.30.56_train_llm_lowdim_box-close-v2', 'tags': ['train_llm_lowdim', 'box-close-v2', 'train llm'], 'id': None, 'group': None}, 'multi_run': {'run_dir': 'data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2', 'wandb_name_base': '2026.03.18-22.30.56_train_llm_lowdim_box-close-v2'}, 'task': {'name': 'box-close-v2', 'obs_dim': 9, 'action_dim': 4, 'env_runner': {'_target_': 'llmbc.env_runner.metaworld_lowdim_runner.MetaworldLowdimRunner', 'env_name': 'llf-metaworld-box-close-v2', 'n_train': 10, 'n_test': 50, 'n_envs': 10, 'max_steps': 30, 'n_obs_steps': 1, 'n_action_steps': 1, 'instruction_type': 'b', 'feedback_type': ['hp', 'hn', 'fp'], 'visual': False, 'discount': 0.9}, 'dataset': {'_target_': 'llmbc.dataset.metaworld_lowdim_dataset.MetaworldLowdimDataset', 'data_path': 'datasets/box-close-v2-general.pt', 'data_path2': 'datasets/box-close-v2.pt', 'horizon': 1, 'pad_before': 0, 'pad_after': 0, 'obs_eef_target': True, 'use_manual_normalizer': False, 'val_ratio': 0.05, 'dummy_normalizer': False}, 'instructor': {'_target_': 'llmbc.translator.instructor.metaworld_instructor.box_close_v2_instructor.BoxCloseV2Instructor'}}, 'llm': {'name': 'mistralai/Mistral-7B-Instruct-v0.3', 'model_name': 'Mistral-7B-Instruct-v0.3', 'config_target': 'llmbc.model.llm.mistral_lowdim_model.LowdimMistralConfig', 'causal_lm_target': 'llmbc.model.llm.mistral_lowdim_model.LowdimMistralForCausalLM', 'use_quantization': True, 'use_joint_mlp_projector': True, 'llm_mode': 'mlp-finetuned', 'finetune_mode': 'lora', 'checkpoint': 'data/outputs/2026.03.14/23.40.55_train_mlp_projector_box-close-v2/checkpoints/latest.ckpt', 'max_length': 100, 'lora_config': {'r': 8, 'lora_alpha': 16, 'lora_dropout': 0.05, 'bias': 'none', 'task_type': 'CAUSAL_LM'}, 'prompter': {'_target_': 'llmbc.translator.prompter.mistral_prompter.MistralPrompter', 'use_joint_mlp_projector': True}, 'hydra': {'job': {'override_dirname': 'mistralai/Mistral-7B-Instruct-v0.3'}, 'run': {'dir': 'data/outputs/2026.03.18/22.30.56_mistralai/Mistral-7B-Instruct-v0.3'}}}}
14
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_init.py:init():669] starting backend
15
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_init.py:init():673] sending inform_init request
16
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [backend.py:_multiprocessing_setup():104] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
17
+ 2026-03-18 22:31:01,200 INFO MainThread:2041405 [wandb_init.py:init():686] backend started and connected
18
+ 2026-03-18 22:31:01,204 INFO MainThread:2041405 [wandb_init.py:init():781] updated telemetry
19
+ 2026-03-18 22:31:01,224 INFO MainThread:2041405 [wandb_init.py:init():814] communicating run to backend with 90.0 second timeout
20
+ 2026-03-18 22:31:02,754 INFO MainThread:2041405 [wandb_init.py:init():867] starting run threads in backend
21
+ 2026-03-18 22:31:02,923 INFO MainThread:2041405 [wandb_run.py:_console_start():2451] atexit reg
22
+ 2026-03-18 22:31:02,924 INFO MainThread:2041405 [wandb_run.py:_redirect():2299] redirect: wrap_raw
23
+ 2026-03-18 22:31:02,924 INFO MainThread:2041405 [wandb_run.py:_redirect():2364] Wrapping output streams.
24
+ 2026-03-18 22:31:02,924 INFO MainThread:2041405 [wandb_run.py:_redirect():2389] Redirects installed.
25
+ 2026-03-18 22:31:02,925 INFO MainThread:2041405 [wandb_init.py:init():911] run started, returning control to user process
26
+ 2026-03-18 22:31:51,682 INFO MainThread:2041405 [wandb_run.py:_config_callback():1389] config_cb None None {'peft_config': {'default': {'task_type': 'CAUSAL_LM', 'peft_type': <PeftType.LORA: 'LORA'>, 'auto_mapping': None, 'base_model_name_or_path': 'mistralai/Mistral-7B-Instruct-v0.3', 'revision': None, 'inference_mode': False, 'r': 8, 'target_modules': {'down_proj', 'v_proj', 'k_proj', 'lm_head', 'up_proj', 'q_proj', 'gate_proj', 'o_proj'}, 'exclude_modules': None, 'lora_alpha': 16, 'lora_dropout': 0.05, 'fan_in_fan_out': False, 'bias': 'none', 'use_rslora': False, 'modules_to_save': None, 'init_lora_weights': True, 'layers_to_transform': None, 'layers_pattern': None, 'rank_pattern': {}, 'alpha_pattern': {}, 'megatron_config': None, 'megatron_core': 'megatron.core', 'loftq_config': {}, 'eva_config': None, 'use_dora': False, 'layer_replication': None, 'runtime_config': {'ephemeral_gpu_offload': False}, 'lora_bias': False}}, 'obs_dim': 9, 'action_dim': 4, 'use_joint_mlp_projector': True, 'vocab_size': 32768, 'max_position_embeddings': 32768, 'hidden_size': 4096, 'intermediate_size': 14336, 'num_hidden_layers': 32, 'num_attention_heads': 32, 'sliding_window': None, 'head_dim': 128, 'num_key_value_heads': 8, 'hidden_act': 'silu', 'initializer_range': 0.02, 'rms_norm_eps': 1e-05, 'use_cache': False, 'rope_theta': 1000000.0, 'attention_dropout': 0.0, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'bfloat16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['MistralForCausalLM'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 1, 'pad_token_id': None, 'eos_token_id': 2, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'mistralai/Mistral-7B-Instruct-v0.3', '_attn_implementation_autoset': True, 'transformers_version': '4.47.1', 'model_type': 'mistral_lowdim', 'quantization_config': {'quant_method': 'BITS_AND_BYTES', '_load_in_8bit': False, '_load_in_4bit': True, 'llm_int8_threshold': 6.0, 'llm_int8_skip_modules': ['joint_obs_action_projector'], 'llm_int8_enable_fp32_cpu_offload': False, 'llm_int8_has_fp16_weight': False, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': True, 'bnb_4bit_compute_dtype': 'bfloat16', 'bnb_4bit_quant_storage': 'uint8', 'load_in_4bit': True, 'load_in_8bit': False}, 'output_dir': '/tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2', 'overwrite_output_dir': 
False, 'do_train': False, 'do_eval': True, 'do_predict': False, 'eval_strategy': 'steps', 'prediction_loss_only': False, 'per_device_train_batch_size': 32, 'per_device_eval_batch_size': 32, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 4, 'eval_accumulation_steps': None, 'eval_delay': 0, 'torch_empty_cache_steps': None, 'learning_rate': 1e-05, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 10, 'max_steps': -1, 'lr_scheduler_type': 'linear', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 1000, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': '/tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/runs/Mar18_22-31-41_A6000-2', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 1, 'logging_nan_inf_filter': True, 'save_strategy': 'steps', 'save_steps': 5000, 'save_total_limit': None, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': True, 'fp16': False, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': True, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': 10, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': '/tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2', 'disable_tqdm': False, 'remove_unused_columns': True, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'paged_adamw_32bit', 'optim_args': None, 'adafactor': False, 'group_by_length': True, 'length_column_name': 'length', 'report_to': ['wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': False, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': None, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'include_for_metrics': [], 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'evaluation_strategy': None, 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 
'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': False, 'eval_on_start': False, 'use_liger_kernel': False, 'eval_use_gather_object': False, 'average_tokens_across_devices': False, 'dataset_text_field': 'text', 'packing': False, 'max_seq_length': 100, 'dataset_num_proc': None, 'dataset_batch_size': 1000, 'model_init_kwargs': None, 'dataset_kwargs': {}, 'eval_packing': None, 'num_of_sequences': 1024, 'chars_per_token': '<CHARS_PER_TOKEN>', 'use_liger': False, 'joint_obs_action_mlp_lr': 5e-06, 'obs_mlp_lr': None, 'action_mlp_lr': None}
27
+ 2026-03-18 22:31:51,686 INFO MainThread:2041405 [wandb_config.py:__setitem__():154] config set model/num_parameters = 7286128640 - <bound method Run._config_callback of <wandb.sdk.wandb_run.Run object at 0x7fe2561236a0>>
28
+ 2026-03-18 22:31:51,686 INFO MainThread:2041405 [wandb_run.py:_config_callback():1389] config_cb model/num_parameters 7286128640 None
29
+ 2026-03-19 17:31:14,775 INFO MainThread:2041405 [wandb_run.py:_finish():2146] finishing run chyang25-national-taiwan-university/llm_module_finetuning/x1m9280b
30
+ 2026-03-19 17:31:14,776 INFO MainThread:2041405 [wandb_run.py:_atexit_cleanup():2414] got exitcode: 0
31
+ 2026-03-19 17:31:14,776 INFO MainThread:2041405 [wandb_run.py:_restore():2396] restore
32
+ 2026-03-19 17:31:14,776 INFO MainThread:2041405 [wandb_run.py:_restore():2402] restore done
33
+ 2026-03-19 17:31:24,580 INFO MainThread:2041405 [wandb_run.py:_footer_history_summary_info():3963] rendering history
34
+ 2026-03-19 17:31:24,581 INFO MainThread:2041405 [wandb_run.py:_footer_history_summary_info():3995] rendering summary
35
+ 2026-03-19 17:31:24,585 INFO MainThread:2041405 [wandb_run.py:_footer_sync_info():3922] logging synced files
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/files/config.yaml ADDED
@@ -0,0 +1,749 @@
+ _attn_implementation_autoset:
+   value: true
+ _name_or_path:
+   value: mistralai/Mistral-7B-Instruct-v0.3
+ _target_:
+   value: llmbc.workspace.train_llm_workspace.TrainLLMWorkspace
+ _wandb:
+   value:
+     cli_version: 0.18.6
+     m:
+       - "1": eval/runtime
+         "5": 2
+         "6":
+           - 1
+           - 3
+         "7": []
+       - "1": train/global_step
+         "6":
+           - 3
+         "7": []
+       - "1": eval/loss
+         "5": 2
+         "6":
+           - 1
+           - 3
+         "7": []
+       - "1": eval/steps_per_second
+         "5": 2
+         "6":
+           - 1
+           - 3
+         "7": []
+       - "1": train/grad_norm
+         "5": 2
+         "6":
+           - 1
+           - 3
+         "7": []
+       - "1": train/loss
+         "5": 2
+         "6":
+           - 1
+           - 3
+         "7": []
+       - "1": train/learning_rate
+         "5": 2
+         "6":
+           - 1
+           - 3
+         "7": []
+       - "1": train/epoch
+         "5": 2
+         "6":
+           - 1
+           - 3
+         "7": []
+       - "1": eval/samples_per_second
+         "5": 2
+         "6":
+           - 1
+           - 3
+         "7": []
+     python_version: 3.9.20
+     t:
+       "1":
+         - 1
+         - 2
+         - 3
+         - 5
+         - 11
+         - 12
+         - 41
+         - 49
+         - 50
+         - 51
+         - 53
+         - 55
+         - 71
+         - 84
+         - 98
+       "2":
+         - 1
+         - 2
+         - 3
+         - 5
+         - 11
+         - 12
+         - 41
+         - 49
+         - 50
+         - 51
+         - 53
+         - 55
+         - 71
+         - 84
+         - 98
+       "3":
+         - 2
+         - 7
+         - 13
+         - 15
+         - 16
+         - 19
+         - 23
+         - 55
+         - 62
+         - 66
+       "4": 3.9.20
+       "5": 0.18.6
+       "6": 4.47.1
+       "8":
+         - 5
+       "9":
+         "1": transformers_trainer
+       "12": 0.18.6
+       "13": linux-x86_64
+ accelerator_config:
+   value:
+     dispatch_batches: null
+     even_batches: true
+     gradient_accumulation_kwargs: null
+     non_blocking: false
+     split_batches: false
+     use_seedable_sampler: true
+ action_dim:
+   value: 4
+ action_mlp_lr:
+   value: null
+ adafactor:
+   value: false
+ adam_beta1:
+   value: 0.9
+ adam_beta2:
+   value: 0.999
+ adam_epsilon:
+   value: 1e-08
+ add_cross_attention:
+   value: false
+ architectures:
+   value:
+     - MistralForCausalLM
+ attention_dropout:
+   value: 0
+ auto_find_batch_size:
+   value: false
+ average_tokens_across_devices:
+   value: false
+ bad_words_ids:
+   value: null
+ batch_eval_metrics:
+   value: false
+ begin_suppress_tokens:
+   value: null
+ bf16:
+   value: true
+ bf16_full_eval:
+   value: false
+ bos_token_id:
+   value: 1
+ chars_per_token:
+   value: <CHARS_PER_TOKEN>
+ chunk_size_feed_forward:
+   value: 0
+ cross_attention_hidden_size:
+   value: null
+ data_seed:
+   value: null
+ dataloader_drop_last:
+   value: false
+ dataloader_num_workers:
+   value: 0
+ dataloader_persistent_workers:
+   value: false
+ dataloader_pin_memory:
+   value: true
+ dataloader_prefetch_factor:
+   value: null
+ dataset:
+   value:
+     test_data_ratio: 0.01
+ dataset_batch_size:
+   value: 1000
+ dataset_num_proc:
+   value: null
+ dataset_text_field:
+   value: text
+ ddp_backend:
+   value: null
+ ddp_broadcast_buffers:
+   value: null
+ ddp_bucket_cap_mb:
+   value: null
+ ddp_find_unused_parameters:
+   value: null
+ ddp_timeout:
+   value: 1800
+ debug:
+   value: []
+ decoder_start_token_id:
+   value: null
+ deepspeed:
+   value: null
+ disable_tqdm:
+   value: false
+ dispatch_batches:
+   value: null
+ diversity_penalty:
+   value: 0
+ do_eval:
+   value: true
+ do_predict:
+   value: false
+ do_sample:
+   value: false
+ do_train:
+   value: false
+ early_stopping:
+   value: false
+ encoder_no_repeat_ngram_size:
+   value: 0
+ eos_token_id:
+   value: 2
+ eval_accumulation_steps:
+   value: null
+ eval_delay:
+   value: 0
+ eval_do_concat_batches:
+   value: true
+ eval_on_start:
+   value: false
+ eval_packing:
+   value: null
+ eval_steps:
+   value: 10
+ eval_strategy:
+   value: steps
+ eval_use_gather_object:
+   value: false
+ evaluation_strategy:
+   value: null
+ exp_name:
+   value: train llm
+ exponential_decay_length_penalty:
+   value: null
+ finetuning_task:
+   value: null
+ forced_bos_token_id:
+   value: null
+ forced_eos_token_id:
+   value: null
+ fp16:
+   value: false
+ fp16_backend:
+   value: auto
+ fp16_full_eval:
+   value: false
+ fp16_opt_level:
+   value: O1
+ fsdp:
+   value: []
+ fsdp_config:
+   value:
+     min_num_params: 0
+     xla: false
+     xla_fsdp_grad_ckpt: false
+     xla_fsdp_v2: false
+ fsdp_min_num_params:
+   value: 0
+ fsdp_transformer_layer_cls_to_wrap:
+   value: null
+ full_determinism:
+   value: false
+ gradient_accumulation_steps:
+   value: 4
+ gradient_checkpointing:
+   value: false
+ gradient_checkpointing_kwargs:
+   value: null
+ greater_is_better:
+   value: null
+ group_by_length:
+   value: true
+ half_precision_backend:
+   value: auto
+ head_dim:
+   value: 128
+ hidden_act:
+   value: silu
+ hidden_size:
+   value: 4096
+ horizon:
+   value: 1
+ hub_always_push:
+   value: false
+ hub_model_id:
+   value: null
+ hub_private_repo:
+   value: null
+ hub_strategy:
+   value: every_save
+ hub_token:
+   value: <HUB_TOKEN>
+ id2label:
+   value:
+     "0": LABEL_0
+     "1": LABEL_1
+ ignore_data_skip:
+   value: false
+ include_for_metrics:
+   value: []
+ include_inputs_for_metrics:
+   value: false
+ include_num_input_tokens_seen:
+   value: false
+ include_tokens_per_second:
+   value: false
+ initializer_range:
+   value: 0.02
+ intermediate_size:
+   value: 14336
+ is_decoder:
+   value: false
+ is_encoder_decoder:
+   value: false
+ jit_mode_eval:
+   value: false
+ joint_obs_action_mlp_lr:
+   value: 5e-06
+ label_names:
+   value: null
+ label_smoothing_factor:
+   value: 0
+ label2id:
+   value:
+     LABEL_0: 0
+     LABEL_1: 1
+ learning_rate:
+   value: 1e-05
+ length_column_name:
+   value: length
+ length_penalty:
+   value: 1
+ llm:
+   value:
+     causal_lm_target: llmbc.model.llm.mistral_lowdim_model.LowdimMistralForCausalLM
+     checkpoint: data/outputs/2026.03.14/23.40.55_train_mlp_projector_box-close-v2/checkpoints/latest.ckpt
+     config_target: llmbc.model.llm.mistral_lowdim_model.LowdimMistralConfig
+     finetune_mode: lora
+     hydra:
+       job:
+         override_dirname: mistralai/Mistral-7B-Instruct-v0.3
+       run:
+         dir: data/outputs/2026.03.18/22.30.56_mistralai/Mistral-7B-Instruct-v0.3
+     llm_mode: mlp-finetuned
+     lora_config:
+       bias: none
+       lora_alpha: 16
+       lora_dropout: 0.05
+       r: 8
+       task_type: CAUSAL_LM
+     max_length: 100
+     model_name: Mistral-7B-Instruct-v0.3
+     name: mistralai/Mistral-7B-Instruct-v0.3
+     prompter:
+       _target_: llmbc.translator.prompter.mistral_prompter.MistralPrompter
+       use_joint_mlp_projector: true
+     use_joint_mlp_projector: true
+     use_quantization: true
+ load_best_model_at_end:
+   value: false
+ local_rank:
+   value: 0
+ log_level:
+   value: passive
+ log_level_replica:
+   value: warning
+ log_on_each_node:
+   value: true
+ logging:
+   value:
+     group: null
+     id: null
+     mode: online
+     name: 2026.03.18-22.30.56_train_llm_lowdim_box-close-v2
+     project: llm_module_finetuning
+     resume: true
+     tags:
+       - train_llm_lowdim
+       - box-close-v2
+       - train llm
+ logging_dir:
+   value: /tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/runs/Mar18_22-31-41_A6000-2
+ logging_first_step:
+   value: false
+ logging_nan_inf_filter:
+   value: true
+ logging_steps:
+   value: 1
+ logging_strategy:
+   value: steps
+ lora_config:
+   value:
+     bias: none
+     lora_alpha: 16
+     lora_dropout: 0.05
+     r: 8
+     task_type: CAUSAL_LM
+ lr_scheduler_type:
+   value: linear
+ max_grad_norm:
+   value: 1
+ max_length:
+   value: 20
+ max_position_embeddings:
+   value: 32768
+ max_seq_length:
+   value: 100
+ max_steps:
+   value: -1
+ metric_for_best_model:
+   value: null
+ min_length:
+   value: 0
+ model/num_parameters:
+   value: 7286128640
+ model_init_kwargs:
+   value: null
+ model_name:
+   value: mistralai/Mistral-7B-Instruct-v0.3
+ model_type:
+   value: mistral_lowdim
+ mp_parameters:
+   value: ""
+ multi_run:
+   value:
+     run_dir: data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2
+     wandb_name_base: 2026.03.18-22.30.56_train_llm_lowdim_box-close-v2
+ n_action_steps:
+   value: 1
+ n_obs_steps:
+   value: 1
+ name:
+   value: train_llm_lowdim
+ neftune_noise_alpha:
+   value: null
+ no_cuda:
+   value: false
+ no_repeat_ngram_size:
+   value: 0
+ num_attention_heads:
+   value: 32
+ num_beam_groups:
+   value: 1
+ num_beams:
+   value: 1
+ num_hidden_layers:
+   value: 32
+ num_key_value_heads:
+   value: 8
+ num_of_sequences:
+   value: 1024
+ num_return_sequences:
+   value: 1
+ num_train_epochs:
+   value: 10
+ obs_dim:
+   value: 9
+ obs_mlp_lr:
+   value: null
+ optim:
+   value: paged_adamw_32bit
+ optim_args:
+   value: null
+ optim_target_modules:
+   value: null
+ output_attentions:
+   value: false
+ output_dir:
+   value: /tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2
+ output_hidden_states:
+   value: false
+ output_scores:
+   value: false
+ overwrite_output_dir:
+   value: false
+ packing:
+   value: false
+ pad_token_id:
+   value: null
+ past_index:
+   value: -1
+ peft_config:
+   value:
+     default:
+       auto_mapping: null
+       base_model_name_or_path: mistralai/Mistral-7B-Instruct-v0.3
+       bias: none
+       eva_config: null
+       exclude_modules: null
+       fan_in_fan_out: false
+       inference_mode: false
+       init_lora_weights: true
+       layer_replication: null
+       layers_pattern: null
+       layers_to_transform: null
+       lora_alpha: 16
+       lora_bias: false
+       lora_dropout: 0.05
+       megatron_config: null
+       megatron_core: megatron.core
+       modules_to_save: null
+       peft_type: LORA
+       r: 8
+       revision: null
+       runtime_config:
+         ephemeral_gpu_offload: false
+       target_modules:
+         - down_proj
+         - v_proj
+         - k_proj
+         - lm_head
+         - up_proj
+         - q_proj
+         - gate_proj
+         - o_proj
+       task_type: CAUSAL_LM
+       use_dora: false
+       use_rslora: false
+ per_device_eval_batch_size:
+   value: 32
+ per_device_train_batch_size:
+   value: 32
+ per_gpu_eval_batch_size:
+   value: null
+ per_gpu_train_batch_size:
+   value: null
+ prediction_loss_only:
+   value: false
+ prefix:
+   value: null
+ problem_type:
+   value: null
+ push_to_hub:
+   value: false
+ push_to_hub_model_id:
+   value: null
+ push_to_hub_organization:
+   value: null
+ push_to_hub_token:
+   value: <PUSH_TO_HUB_TOKEN>
+ quantization_config:
+   value:
+     _load_in_4bit: true
+     _load_in_8bit: false
+     bnb_4bit_compute_dtype: bfloat16
+     bnb_4bit_quant_storage: uint8
+     bnb_4bit_quant_type: nf4
+     bnb_4bit_use_double_quant: true
+     llm_int8_enable_fp32_cpu_offload: false
+     llm_int8_has_fp16_weight: false
+     llm_int8_skip_modules:
+       - joint_obs_action_projector
+     llm_int8_threshold: 6
+     load_in_4bit: true
+     load_in_8bit: false
+     quant_method: BITS_AND_BYTES
+ ray_scope:
+   value: last
+ remove_invalid_values:
+   value: false
+ remove_unused_columns:
+   value: true
+ repetition_penalty:
+   value: 1
+ report_to:
+   value:
+     - wandb
+ restore_callback_states_from_checkpoint:
+   value: false
+ resume_from_checkpoint:
+   value: null
+ return_dict:
+   value: true
+ return_dict_in_generate:
+   value: false
+ rms_norm_eps:
+   value: 1e-05
+ rope_theta:
+   value: 1e+06
+ run_name:
+   value: /tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2
+ save_on_each_node:
+   value: false
+ save_only_model:
+   value: false
+ save_safetensors:
+   value: true
+ save_steps:
+   value: 5000
+ save_strategy:
+   value: steps
+ save_total_limit:
+   value: null
+ seed:
+   value: 42
+ sep_token_id:
+   value: null
+ skip_memory_metrics:
+   value: true
+ sliding_window:
+   value: null
+ split_batches:
+   value: null
+ suppress_tokens:
+   value: null
+ task:
+   value:
+     action_dim: 4
+     dataset:
+       _target_: llmbc.dataset.metaworld_lowdim_dataset.MetaworldLowdimDataset
+       data_path: datasets/box-close-v2-general.pt
+       data_path2: datasets/box-close-v2.pt
+       dummy_normalizer: false
+       horizon: 1
+       obs_eef_target: true
+       pad_after: 0
+       pad_before: 0
+       use_manual_normalizer: false
+       val_ratio: 0.05
+     env_runner:
+       _target_: llmbc.env_runner.metaworld_lowdim_runner.MetaworldLowdimRunner
+       discount: 0.9
+       env_name: llf-metaworld-box-close-v2
+       feedback_type:
+         - hp
+         - hn
+         - fp
+       instruction_type: b
+       max_steps: 30
+       n_action_steps: 1
+       n_envs: 10
+       n_obs_steps: 1
+       n_test: 50
+       n_train: 10
+       visual: false
+     instructor:
+       _target_: llmbc.translator.instructor.metaworld_instructor.box_close_v2_instructor.BoxCloseV2Instructor
+     name: box-close-v2
+     obs_dim: 9
+ task_name:
+   value: box-close-v2
+ task_specific_params:
+   value: null
+ temperature:
+   value: 1
+ tf_legacy_loss:
+   value: false
+ tf32:
+   value: true
+ tie_encoder_decoder:
+   value: false
+ tie_word_embeddings:
+   value: false
+ tokenizer_class:
+   value: null
+ top_k:
+   value: 50
+ top_p:
+   value: 1
+ torch_compile:
+   value: false
+ torch_compile_backend:
+   value: null
+ torch_compile_mode:
+   value: null
+ torch_dtype:
+   value: bfloat16
+ torch_empty_cache_steps:
+   value: null
+ torchdynamo:
+   value: null
+ torchscript:
+   value: false
+ tpu_metrics_debug:
+   value: false
+ tpu_num_cores:
+   value: null
+ trainer:
+   value:
+     action_dim: 4
+     dataset_text_field: text
+     max_seq_length: 100
+     obs_dim: 9
+     packing: false
+     use_joint_mlp_projector: true
+ training:
+   value:
+     bf16: true
+     eval_steps: 10
+     eval_strategy: steps
+     fp16: false
+     gradient_accumulation_steps: 4
+     group_by_length: true
+     joint_obs_action_mlp_lr: 5e-06
+     learning_rate: 1e-05
+     logging_steps: 1
+     logging_strategy: steps
+     num_train_epochs: 10
+     optim: paged_adamw_32bit
+     per_device_eval_batch_size: 32
+     per_device_train_batch_size: 32
+     report_to: wandb
+     save_steps: 5000
+     seed: 42
+     tf32: true
+     use_joint_mlp_projector: true
+     warmup_steps: 1000
+ transformers_version:
+   value: 4.47.1
+ typical_p:
+   value: 1
+ use_bfloat16:
+   value: false
+ use_cache:
+   value: false
+ use_cpu:
+   value: false
+ use_ipex:
+   value: false
+ use_joint_mlp_projector:
+   value: true
+ use_legacy_prediction_loop:
+   value: false
+ use_liger:
+   value: false
+ use_liger_kernel:
+   value: false
+ use_mps_device:
+   value: false
+ use_quantization:
+   value: true
+ vocab_size:
+   value: 32768
+ warmup_ratio:
+   value: 0
+ warmup_steps:
+   value: 1000
+ weight_decay:
+   value: 0
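
For reference, the quantization_config and peft_config recorded above can be reproduced with stock transformers/peft calls. This is a minimal sketch inferred from the logged values, not the repo's actual construction code in llmbc.model.llm (which additionally loads the MLP-projector checkpoint):

import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# NF4 double quantization with bf16 compute; the joint projector stays
# unquantized, matching llm_int8_skip_modules above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    llm_int8_skip_modules=["joint_obs_action_projector"],
)

# Rank-8 LoRA over all attention/MLP projections plus lm_head, as listed
# under peft_config.default.target_modules.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj", "lm_head"],
)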
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/files/output.log ADDED
The diff for this file is too large to render. See raw diff
 
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/files/wandb-metadata.json ADDED
@@ -0,0 +1,55 @@
+ {
+   "os": "Linux-6.8.0-101-generic-x86_64-with-glibc2.35",
+   "python": "3.9.20",
+   "startedAt": "2026-03-18T14:31:01.200411Z",
+   "args": [
+     "--config-path",
+     "config/llm_backbone",
+     "--config-name",
+     "llmdp_llm_box-close-v2_mistral-7b-instruct-v0.3.yaml"
+   ],
+   "program": "/tmp2/chyang/workspace/LLM-BC/./train.py",
+   "codePath": "train.py",
+   "git": {
+     "remote": "https://github.com/CHYang25/LLM-BC.git",
+     "commit": "2fc4d560b1122e967bf61b61fceac43ad9ddd080"
+   },
+   "email": "chris920325@gmail.com",
+   "root": "/tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2",
+   "host": "A6000-2",
+   "username": "chyang",
+   "executable": "/home/chyang/miniconda3/envs/llm-bc/bin/python3",
+   "codePathLocal": "train.py",
+   "cpu_count": 12,
+   "cpu_count_logical": 24,
+   "gpu": "NVIDIA RTX A6000",
+   "gpu_count": 2,
+   "disk": {
+     "/": {
+       "total": "1967317549056",
+       "used": "731233558528"
+     }
+   },
+   "memory": {
+     "total": "134538502144"
+   },
+   "cpu": {
+     "count": 12,
+     "countLogical": 24
+   },
+   "gpu_nvidia": [
+     {
+       "name": "NVIDIA RTX A6000",
+       "memoryTotal": "51527024640",
+       "cudaCores": 10752,
+       "architecture": "Ampere"
+     },
+     {
+       "name": "NVIDIA RTX A6000",
+       "memoryTotal": "51527024640",
+       "cudaCores": 10752,
+       "architecture": "Ampere"
+     }
+   ],
+   "cudaVersion": "12.6"
+ }
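
The disk, memory, and GPU sizes in wandb-metadata.json are raw byte counts serialized as strings. A quick illustrative conversion (not project code):

GIB = 1024 ** 3
print(f"{51527024640 / GIB:.1f} GiB")    # ~48.0 GiB per RTX A6000
print(f"{134538502144 / GIB:.1f} GiB")   # ~125.3 GiB system memory
print(f"{1967317549056 / GIB:.1f} GiB")  # ~1832.2 GiB total on /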
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/files/wandb-summary.json ADDED
@@ -0,0 +1 @@
+ {"_runtime":68413.573929076,"eval/steps_per_second":0.979,"train/learning_rate":0,"train_runtime":68363.1085,"train/global_step":5880,"eval/loss":0.426089346408844,"train/grad_norm":7.623011589050293,"eval/samples_per_second":31.043,"train/loss":1.6558,"train_runtime":68363.1085,"train_steps_per_second":0.086,"_wandb":{"runtime":68413},"train_samples_per_second":11.01,"total_flos":3.7409731557159076e+18,"train_loss":1.6857357866352514,"train/epoch":9.984275393115173,"_step":6468,"_timestamp":1.773912674774071e+09,"eval/runtime":24.5144}
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/logs/debug-core.log ADDED
@@ -0,0 +1,16 @@
+ {"time":"2026-03-18T22:31:00.544445136+08:00","level":"INFO","msg":"started logging, with flags","port-filename":"/tmp/tmpo9n3ruuy/port-2041405.txt","pid":2041405,"debug":false,"disable-analytics":false}
+ {"time":"2026-03-18T22:31:00.544808702+08:00","level":"INFO","msg":"FeatureState","shutdownOnParentExitEnabled":false}
+ {"time":"2026-03-18T22:31:00.546269629+08:00","level":"INFO","msg":"Will exit if parent process dies.","ppid":2041405}
+ {"time":"2026-03-18T22:31:00.546624189+08:00","level":"INFO","msg":"server is running","addr":{"IP":"127.0.0.1","Port":45249,"Zone":""}}
+ {"time":"2026-03-18T22:31:00.712094443+08:00","level":"INFO","msg":"connection: ManageConnectionData: new connection created","id":"127.0.0.1:54638"}
+ {"time":"2026-03-18T22:31:01.204499962+08:00","level":"INFO","msg":"handleInformInit: received","streamId":"x1m9280b","id":"127.0.0.1:54638"}
+ {"time":"2026-03-18T22:31:01.312370477+08:00","level":"INFO","msg":"handleInformInit: stream started","streamId":"x1m9280b","id":"127.0.0.1:54638"}
+ {"time":"2026-03-19T17:31:24.585911347+08:00","level":"INFO","msg":"handleInformFinish: finish message received","streamId":"x1m9280b","id":"127.0.0.1:54638"}
+ {"time":"2026-03-19T17:31:24.586245389+08:00","level":"INFO","msg":"handleInformFinish: stream closed","streamId":"x1m9280b","id":"127.0.0.1:54638"}
+ {"time":"2026-03-19T17:31:25.077239416+08:00","level":"INFO","msg":"handleInformTeardown: server teardown initiated","id":"127.0.0.1:54638"}
+ {"time":"2026-03-19T17:31:25.077301528+08:00","level":"INFO","msg":"handleInformTeardown: server shutdown complete","id":"127.0.0.1:54638"}
+ {"time":"2026-03-19T17:31:25.077315957+08:00","level":"INFO","msg":"server is shutting down"}
+ {"time":"2026-03-19T17:31:25.077382781+08:00","level":"INFO","msg":"connection: Close: initiating connection closure","id":"127.0.0.1:54638"}
+ {"time":"2026-03-19T17:31:25.077537714+08:00","level":"INFO","msg":"connection: Close: connection successfully closed","id":"127.0.0.1:54638"}
+ {"time":"2026-03-19T17:31:25.077594017+08:00","level":"INFO","msg":"connection: ManageConnectionData: connection closed","id":"127.0.0.1:54638"}
+ {"time":"2026-03-19T17:31:25.077618029+08:00","level":"INFO","msg":"server is closed"}
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/logs/debug-internal.log ADDED
@@ -0,0 +1,19 @@
+ {"time":"2026-03-18T22:31:01.204991579+08:00","level":"INFO","msg":"using version","core version":"0.18.6"}
+ {"time":"2026-03-18T22:31:01.205010756+08:00","level":"INFO","msg":"created symlink","path":"/tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/logs/debug-core.log"}
+ {"time":"2026-03-18T22:31:01.312346847+08:00","level":"INFO","msg":"created new stream","id":"x1m9280b"}
+ {"time":"2026-03-18T22:31:01.312367301+08:00","level":"INFO","msg":"stream: started","id":"x1m9280b"}
+ {"time":"2026-03-18T22:31:01.312665109+08:00","level":"INFO","msg":"sender: started","stream_id":"x1m9280b"}
+ {"time":"2026-03-18T22:31:01.312430257+08:00","level":"INFO","msg":"handler: started","stream_id":{"value":"x1m9280b"}}
+ {"time":"2026-03-18T22:31:01.312656373+08:00","level":"INFO","msg":"writer: Do: started","stream_id":{"value":"x1m9280b"}}
+ {"time":"2026-03-18T22:31:02.758068808+08:00","level":"INFO","msg":"Starting system monitor"}
+ {"time":"2026-03-19T04:22:27.963176902+08:00","level":"INFO","msg":"api: retrying HTTP error","status":502,"url":"https://api.wandb.ai/files/chyang25-national-taiwan-university/llm_module_finetuning/x1m9280b/file_stream"}
+ {"time":"2026-03-19T05:59:47.962016684+08:00","level":"INFO","msg":"api: retrying HTTP error","status":502,"url":"https://api.wandb.ai/files/chyang25-national-taiwan-university/llm_module_finetuning/x1m9280b/file_stream"}
+ {"time":"2026-03-19T17:31:14.776552794+08:00","level":"INFO","msg":"Stopping system monitor"}
+ {"time":"2026-03-19T17:31:14.777008363+08:00","level":"INFO","msg":"Stopped system monitor"}
+ {"time":"2026-03-19T17:31:15.776985704+08:00","level":"INFO","msg":"handler: operation stats","stats":{"operations":[{"desc":"uploading wandb-summary.json","runtime_seconds":0.192680949,"progress":"515B/515B"},{"desc":"saving job artifact","runtime_seconds":0.014816469}],"total_operations":2}}
+ {"time":"2026-03-19T17:31:21.307731838+08:00","level":"INFO","msg":"fileTransfer: Close: file transfer manager closed"}
+ {"time":"2026-03-19T17:31:24.586012627+08:00","level":"INFO","msg":"stream: closing","id":"x1m9280b"}
+ {"time":"2026-03-19T17:31:24.586046207+08:00","level":"INFO","msg":"handler: closed","stream_id":{"value":"x1m9280b"}}
+ {"time":"2026-03-19T17:31:24.586077373+08:00","level":"INFO","msg":"writer: Close: closed","stream_id":{"value":"x1m9280b"}}
+ {"time":"2026-03-19T17:31:24.586176871+08:00","level":"INFO","msg":"sender: closed","stream_id":"x1m9280b"}
+ {"time":"2026-03-19T17:31:24.586233462+08:00","level":"INFO","msg":"stream: closed","id":"x1m9280b"}
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/logs/debug.log ADDED
@@ -0,0 +1,35 @@
+ 2026-03-18 22:31:01,198 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Current SDK version is 0.18.6
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Configure stats pid to 2041405
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Loading settings from /home/chyang/.config/wandb/settings
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Loading settings from /tmp2/chyang/workspace/LLM-BC/wandb/settings
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Loading settings from environment variables: {}
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Applying setup settings: {'mode': 'online', '_disable_service': None}
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Inferring run settings from compute environment: {'program_relpath': 'train.py', 'program_abspath': '/tmp2/chyang/workspace/LLM-BC/train.py', 'program': '/tmp2/chyang/workspace/LLM-BC/./train.py'}
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_setup.py:_flush():79] Applying login settings: {}
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_init.py:_log_setup():533] Logging user logs to /tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/logs/debug.log
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_init.py:_log_setup():534] Logging internal logs to /tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/logs/debug-internal.log
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_init.py:init():619] calling init triggers
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_init.py:init():626] wandb.init called with sweep_config: {}
+ config: {'name': 'train_llm_lowdim', '_target_': 'llmbc.workspace.train_llm_workspace.TrainLLMWorkspace', 'obs_dim': 9, 'action_dim': 4, 'horizon': 1, 'n_obs_steps': 1, 'n_action_steps': 1, 'task_name': 'box-close-v2', 'exp_name': 'train llm', 'model_name': 'mistralai/Mistral-7B-Instruct-v0.3', 'use_quantization': True, 'lora_config': {'r': 8, 'lora_alpha': 16, 'lora_dropout': 0.05, 'bias': 'none', 'task_type': 'CAUSAL_LM'}, 'dataset': {'test_data_ratio': 0.01}, 'debug': False, 'training': {'seed': 42, 'per_device_train_batch_size': 32, 'per_device_eval_batch_size': 32, 'gradient_accumulation_steps': 4, 'optim': 'paged_adamw_32bit', 'num_train_epochs': 10, 'eval_strategy': 'steps', 'logging_steps': 1, 'warmup_steps': 1000, 'logging_strategy': 'steps', 'learning_rate': 1e-05, 'fp16': False, 'bf16': True, 'tf32': True, 'group_by_length': True, 'report_to': 'wandb', 'save_steps': 5000, 'eval_steps': 10, 'use_joint_mlp_projector': True, 'joint_obs_action_mlp_lr': 5e-06}, 'trainer': {'obs_dim': 9, 'action_dim': 4, 'use_joint_mlp_projector': True, 'max_seq_length': 100, 'dataset_text_field': 'text', 'packing': False}, 'logging': {'project': 'llm_module_finetuning', 'resume': True, 'mode': 'online', 'name': '2026.03.18-22.30.56_train_llm_lowdim_box-close-v2', 'tags': ['train_llm_lowdim', 'box-close-v2', 'train llm'], 'id': None, 'group': None}, 'multi_run': {'run_dir': 'data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2', 'wandb_name_base': '2026.03.18-22.30.56_train_llm_lowdim_box-close-v2'}, 'task': {'name': 'box-close-v2', 'obs_dim': 9, 'action_dim': 4, 'env_runner': {'_target_': 'llmbc.env_runner.metaworld_lowdim_runner.MetaworldLowdimRunner', 'env_name': 'llf-metaworld-box-close-v2', 'n_train': 10, 'n_test': 50, 'n_envs': 10, 'max_steps': 30, 'n_obs_steps': 1, 'n_action_steps': 1, 'instruction_type': 'b', 'feedback_type': ['hp', 'hn', 'fp'], 'visual': False, 'discount': 0.9}, 'dataset': {'_target_': 'llmbc.dataset.metaworld_lowdim_dataset.MetaworldLowdimDataset', 'data_path': 'datasets/box-close-v2-general.pt', 'data_path2': 'datasets/box-close-v2.pt', 'horizon': 1, 'pad_before': 0, 'pad_after': 0, 'obs_eef_target': True, 'use_manual_normalizer': False, 'val_ratio': 0.05, 'dummy_normalizer': False}, 'instructor': {'_target_': 'llmbc.translator.instructor.metaworld_instructor.box_close_v2_instructor.BoxCloseV2Instructor'}}, 'llm': {'name': 'mistralai/Mistral-7B-Instruct-v0.3', 'model_name': 'Mistral-7B-Instruct-v0.3', 'config_target': 'llmbc.model.llm.mistral_lowdim_model.LowdimMistralConfig', 'causal_lm_target': 'llmbc.model.llm.mistral_lowdim_model.LowdimMistralForCausalLM', 'use_quantization': True, 'use_joint_mlp_projector': True, 'llm_mode': 'mlp-finetuned', 'finetune_mode': 'lora', 'checkpoint': 'data/outputs/2026.03.14/23.40.55_train_mlp_projector_box-close-v2/checkpoints/latest.ckpt', 'max_length': 100, 'lora_config': {'r': 8, 'lora_alpha': 16, 'lora_dropout': 0.05, 'bias': 'none', 'task_type': 'CAUSAL_LM'}, 'prompter': {'_target_': 'llmbc.translator.prompter.mistral_prompter.MistralPrompter', 'use_joint_mlp_projector': True}, 'hydra': {'job': {'override_dirname': 'mistralai/Mistral-7B-Instruct-v0.3'}, 'run': {'dir': 'data/outputs/2026.03.18/22.30.56_mistralai/Mistral-7B-Instruct-v0.3'}}}}
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_init.py:init():669] starting backend
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [wandb_init.py:init():673] sending inform_init request
+ 2026-03-18 22:31:01,199 INFO MainThread:2041405 [backend.py:_multiprocessing_setup():104] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
+ 2026-03-18 22:31:01,200 INFO MainThread:2041405 [wandb_init.py:init():686] backend started and connected
+ 2026-03-18 22:31:01,204 INFO MainThread:2041405 [wandb_init.py:init():781] updated telemetry
+ 2026-03-18 22:31:01,224 INFO MainThread:2041405 [wandb_init.py:init():814] communicating run to backend with 90.0 second timeout
+ 2026-03-18 22:31:02,754 INFO MainThread:2041405 [wandb_init.py:init():867] starting run threads in backend
+ 2026-03-18 22:31:02,923 INFO MainThread:2041405 [wandb_run.py:_console_start():2451] atexit reg
+ 2026-03-18 22:31:02,924 INFO MainThread:2041405 [wandb_run.py:_redirect():2299] redirect: wrap_raw
+ 2026-03-18 22:31:02,924 INFO MainThread:2041405 [wandb_run.py:_redirect():2364] Wrapping output streams.
+ 2026-03-18 22:31:02,924 INFO MainThread:2041405 [wandb_run.py:_redirect():2389] Redirects installed.
+ 2026-03-18 22:31:02,925 INFO MainThread:2041405 [wandb_init.py:init():911] run started, returning control to user process
+ 2026-03-18 22:31:51,682 INFO MainThread:2041405 [wandb_run.py:_config_callback():1389] config_cb None None {'peft_config': {'default': {'task_type': 'CAUSAL_LM', 'peft_type': <PeftType.LORA: 'LORA'>, 'auto_mapping': None, 'base_model_name_or_path': 'mistralai/Mistral-7B-Instruct-v0.3', 'revision': None, 'inference_mode': False, 'r': 8, 'target_modules': {'down_proj', 'v_proj', 'k_proj', 'lm_head', 'up_proj', 'q_proj', 'gate_proj', 'o_proj'}, 'exclude_modules': None, 'lora_alpha': 16, 'lora_dropout': 0.05, 'fan_in_fan_out': False, 'bias': 'none', 'use_rslora': False, 'modules_to_save': None, 'init_lora_weights': True, 'layers_to_transform': None, 'layers_pattern': None, 'rank_pattern': {}, 'alpha_pattern': {}, 'megatron_config': None, 'megatron_core': 'megatron.core', 'loftq_config': {}, 'eva_config': None, 'use_dora': False, 'layer_replication': None, 'runtime_config': {'ephemeral_gpu_offload': False}, 'lora_bias': False}}, 'obs_dim': 9, 'action_dim': 4, 'use_joint_mlp_projector': True, 'vocab_size': 32768, 'max_position_embeddings': 32768, 'hidden_size': 4096, 'intermediate_size': 14336, 'num_hidden_layers': 32, 'num_attention_heads': 32, 'sliding_window': None, 'head_dim': 128, 'num_key_value_heads': 8, 'hidden_act': 'silu', 'initializer_range': 0.02, 'rms_norm_eps': 1e-05, 'use_cache': False, 'rope_theta': 1000000.0, 'attention_dropout': 0.0, 'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'bfloat16', 'use_bfloat16': False, 'tf_legacy_loss': False, 'pruned_heads': {}, 'tie_word_embeddings': False, 'chunk_size_feed_forward': 0, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'typical_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'exponential_decay_length_penalty': None, 'suppress_tokens': None, 'begin_suppress_tokens': None, 'architectures': ['MistralForCausalLM'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 1, 'pad_token_id': None, 'eos_token_id': 2, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': 'mistralai/Mistral-7B-Instruct-v0.3', '_attn_implementation_autoset': True, 'transformers_version': '4.47.1', 'model_type': 'mistral_lowdim', 'quantization_config': {'quant_method': 'BITS_AND_BYTES', '_load_in_8bit': False, '_load_in_4bit': True, 'llm_int8_threshold': 6.0, 'llm_int8_skip_modules': ['joint_obs_action_projector'], 'llm_int8_enable_fp32_cpu_offload': False, 'llm_int8_has_fp16_weight': False, 'bnb_4bit_quant_type': 'nf4', 'bnb_4bit_use_double_quant': True, 'bnb_4bit_compute_dtype': 'bfloat16', 'bnb_4bit_quant_storage': 'uint8', 'load_in_4bit': True, 'load_in_8bit': False}, 'output_dir': '/tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2', 'overwrite_output_dir': 
False, 'do_train': False, 'do_eval': True, 'do_predict': False, 'eval_strategy': 'steps', 'prediction_loss_only': False, 'per_device_train_batch_size': 32, 'per_device_eval_batch_size': 32, 'per_gpu_train_batch_size': None, 'per_gpu_eval_batch_size': None, 'gradient_accumulation_steps': 4, 'eval_accumulation_steps': None, 'eval_delay': 0, 'torch_empty_cache_steps': None, 'learning_rate': 1e-05, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 10, 'max_steps': -1, 'lr_scheduler_type': 'linear', 'lr_scheduler_kwargs': {}, 'warmup_ratio': 0.0, 'warmup_steps': 1000, 'log_level': 'passive', 'log_level_replica': 'warning', 'log_on_each_node': True, 'logging_dir': '/tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2/runs/Mar18_22-31-41_A6000-2', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 1, 'logging_nan_inf_filter': True, 'save_strategy': 'steps', 'save_steps': 5000, 'save_total_limit': None, 'save_safetensors': True, 'save_on_each_node': False, 'save_only_model': False, 'restore_callback_states_from_checkpoint': False, 'no_cuda': False, 'use_cpu': False, 'use_mps_device': False, 'seed': 42, 'data_seed': None, 'jit_mode_eval': False, 'use_ipex': False, 'bf16': True, 'fp16': False, 'fp16_opt_level': 'O1', 'half_precision_backend': 'auto', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': True, 'local_rank': 0, 'ddp_backend': None, 'tpu_num_cores': None, 'tpu_metrics_debug': False, 'debug': [], 'dataloader_drop_last': False, 'eval_steps': 10, 'dataloader_num_workers': 0, 'dataloader_prefetch_factor': None, 'past_index': -1, 'run_name': '/tmp2/chyang/workspace/LLM-BC/data/outputs/2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/mistralai/Mistral-7B-Instruct-v0.3-finetuned-box-close-v2', 'disable_tqdm': False, 'remove_unused_columns': True, 'label_names': None, 'load_best_model_at_end': False, 'metric_for_best_model': None, 'greater_is_better': None, 'ignore_data_skip': False, 'fsdp': [], 'fsdp_min_num_params': 0, 'fsdp_config': {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}, 'fsdp_transformer_layer_cls_to_wrap': None, 'accelerator_config': {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}, 'deepspeed': None, 'label_smoothing_factor': 0.0, 'optim': 'paged_adamw_32bit', 'optim_args': None, 'adafactor': False, 'group_by_length': True, 'length_column_name': 'length', 'report_to': ['wandb'], 'ddp_find_unused_parameters': None, 'ddp_bucket_cap_mb': None, 'ddp_broadcast_buffers': None, 'dataloader_pin_memory': True, 'dataloader_persistent_workers': False, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': False, 'resume_from_checkpoint': None, 'hub_model_id': None, 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'hub_private_repo': None, 'hub_always_push': False, 'gradient_checkpointing': False, 'gradient_checkpointing_kwargs': None, 'include_inputs_for_metrics': False, 'include_for_metrics': [], 'eval_do_concat_batches': True, 'fp16_backend': 'auto', 'evaluation_strategy': None, 'push_to_hub_model_id': None, 'push_to_hub_organization': None, 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', 'mp_parameters': '', 'auto_find_batch_size': False, 'full_determinism': False, 'torchdynamo': None, 'ray_scope': 
'last', 'ddp_timeout': 1800, 'torch_compile': False, 'torch_compile_backend': None, 'torch_compile_mode': None, 'dispatch_batches': None, 'split_batches': None, 'include_tokens_per_second': False, 'include_num_input_tokens_seen': False, 'neftune_noise_alpha': None, 'optim_target_modules': None, 'batch_eval_metrics': False, 'eval_on_start': False, 'use_liger_kernel': False, 'eval_use_gather_object': False, 'average_tokens_across_devices': False, 'dataset_text_field': 'text', 'packing': False, 'max_seq_length': 100, 'dataset_num_proc': None, 'dataset_batch_size': 1000, 'model_init_kwargs': None, 'dataset_kwargs': {}, 'eval_packing': None, 'num_of_sequences': 1024, 'chars_per_token': '<CHARS_PER_TOKEN>', 'use_liger': False, 'joint_obs_action_mlp_lr': 5e-06, 'obs_mlp_lr': None, 'action_mlp_lr': None}
+ 2026-03-18 22:31:51,686 INFO MainThread:2041405 [wandb_config.py:__setitem__():154] config set model/num_parameters = 7286128640 - <bound method Run._config_callback of <wandb.sdk.wandb_run.Run object at 0x7fe2561236a0>>
+ 2026-03-18 22:31:51,686 INFO MainThread:2041405 [wandb_run.py:_config_callback():1389] config_cb model/num_parameters 7286128640 None
+ 2026-03-19 17:31:14,775 INFO MainThread:2041405 [wandb_run.py:_finish():2146] finishing run chyang25-national-taiwan-university/llm_module_finetuning/x1m9280b
+ 2026-03-19 17:31:14,776 INFO MainThread:2041405 [wandb_run.py:_atexit_cleanup():2414] got exitcode: 0
+ 2026-03-19 17:31:14,776 INFO MainThread:2041405 [wandb_run.py:_restore():2396] restore
+ 2026-03-19 17:31:14,776 INFO MainThread:2041405 [wandb_run.py:_restore():2402] restore done
+ 2026-03-19 17:31:24,580 INFO MainThread:2041405 [wandb_run.py:_footer_history_summary_info():3963] rendering history
+ 2026-03-19 17:31:24,581 INFO MainThread:2041405 [wandb_run.py:_footer_history_summary_info():3995] rendering summary
+ 2026-03-19 17:31:24,585 INFO MainThread:2041405 [wandb_run.py:_footer_sync_info():3922] logging synced files
2026.03.18/22.30.56_train_llm_lowdim_box-close-v2/wandb/run-20260318_223101-x1m9280b/run-x1m9280b.wandb ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28702b88803b5c2c0e3359c7dded6327484703469450657739c49f1cb5b68fbe
+ size 35538433
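
The .wandb file is stored as a Git LFS pointer, which records the sha256 and byte size of the real blob. After downloading the object you can verify it against the pointer; a minimal sketch (local filename assumed):

import hashlib, os

path = "run-x1m9280b.wandb"  # hypothetical local copy of the LFS object
h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)
assert os.path.getsize(path) == 35538433
assert h.hexdigest() == "28702b88803b5c2c0e3359c7dded6327484703469450657739c49f1cb5b68fbe"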