Dataset (Hugging Face dataset viewer export)

Modalities: Tabular, Text
Format: Parquet (auto-converted)
Size: < 1K rows
Libraries: Datasets, pandas
Schema (column, dtype, viewer statistics):

timestamp             string (date)   min 2025-10-07 11:56:27, max 2025-10-07 17:18:39
end_timestamp         string (date)   min 2025-10-07 11:59:29, max 2025-10-08 11:06:30
stage_name            string          1 distinct value
stage_number          int64           min 1, max 1
level                 string          1 distinct value
message               string          1 distinct value
stdout_content        string          10 distinct values
stderr_content        string          lengths 1.12k to 23.8k
experiment_name       string          1 distinct value
elapsed_time_seconds  float64         min 56.9, max 64.1k
stage_complete        bool            1 class
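Since the card lists pandas as a supported library, the rows can be loaded and analyzed locally. A minimal sketch below builds an in-memory frame mirroring the schema (the two rows are copied from the data; in practice you would load the auto-converted parquet shard with `pd.read_parquet`, filename depending on the repo layout):

```python
import pandas as pd

# Two rows mirroring the schema above; replace with
# pd.read_parquet(...) on the actual auto-converted shard.
df = pd.DataFrame(
    {
        "timestamp": pd.to_datetime(["2025-10-07T11:56:27", "2025-10-07T12:01:55"]),
        "end_timestamp": pd.to_datetime(["2025-10-07T11:59:29", "2025-10-07T12:02:52"]),
        "stage_name": ["verl_rl", "verl_rl"],
        "elapsed_time_seconds": [182.413575, 56.910523],
        "stage_complete": [True, True],
    }
)

# Wall-clock duration per row, derived from the two timestamp columns.
df["duration_s"] = (df["end_timestamp"] - df["timestamp"]).dt.total_seconds()
print(df[["stage_name", "elapsed_time_seconds", "duration_s"]])
```

Note that `elapsed_time_seconds` roughly tracks the timestamp delta, so either column can be used to spot slow or stuck runs.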
Row 1
timestamp: 2025-10-07T11:56:27.345818
end_timestamp: 2025-10-07T11:59:29.759393
stage_name: verl_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: verl_rl
stdout_content:
[INFO] Starting stage: VeRL RL training - rl
[INFO] Data preparation succeeded
[INFO] Setting up ray cluster
[DEBUG] SLURM cluster info: 16 nodes, 1 GPUs/node
[INFO] Node list: c609-051,c610-[102,111-112,121-122,131-132],c622-[082,091-092,101-102,111-112,121]
[DEBUG] Head node: c609-051
[DEBUG] Ray head address: 129.114.17.42:6379
[INFO] Starting Ray head on c609-051...
[INFO] Waiting for head node to initialize...
[DEBUG] Starting 15 worker nodes...
[DEBUG] Starting worker 1: c610-102
[DEBUG] Starting worker 2: c610-111
[DEBUG] Starting worker 3: c610-112
[DEBUG] Starting worker 4: c610-121
[DEBUG] Starting worker 5: c610-122
[DEBUG] Starting worker 6: c610-131
[DEBUG] Starting worker 7: c610-132
[DEBUG] Starting worker 8: c622-082
[DEBUG] Starting worker 9: c622-091
[DEBUG] Starting worker 10: c622-092
[DEBUG] Starting worker 11: c622-101
[DEBUG] Starting worker 12: c622-102
[DEBUG] Starting worker 13: c622-111
[DEBUG] Starting worker 14: c622-112
[DEBUG] Starting worker 15: c622-121
[INFO] Waiting for Ray cluster to stabilize...
[INFO] Connecting to Ray cluster at 129.114.17.42:6379...
[INFO] Ray cluster connected successfully (stats from the connection): [INFO] Total GPUs: 16.0 [INFO] Available GPUs: 16.0 [INFO] Total CPUs: 512.0 [INFO] SLURM Ray cluster setup completed [INFO] Starting checkpoint monitoring for intermediate uploads...[INFO] Intermediate checkpoint upload enabled [DEBUG] Running verl command: python -m verl.trainer.main_ppo trainer.total_epochs=50 actor_rollout_ref.actor.optim.lr=1e-06 trainer.save_freq=20 trainer.test_freq=10 trainer.val_before_train=True algorithm.adv_estimator=grpo actor_rollout_ref.rollout.n=16 data.train_batch_size=512 actor_rollout_ref.actor.ppo_mini_batch_size=64 actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4 actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4 actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=4 custom_reward_function.reward_kwargs.response_or_sample=sample custom_reward_function.reward_kwargs.simple_format_reward_weight=0.0 custom_reward_function.reward_kwargs.complex_format_reward_weight=0.0 custom_reward_function.reward_kwargs.sample_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.verdict_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.reflection_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.final_answer_in_samples_reward_weight=0.0 custom_reward_function.reward_kwargs.transition_penalty_weight=0.0 custom_reward_function.reward_kwargs.similarity_penalty_weight=0.0 custom_reward_function.reward_kwargs.sample_count_penalty_weight=0.0 custom_reward_function.reward_kwargs.reward_min=0.0 custom_reward_function.reward_kwargs.reward_max=10.0 reward_model.reward_manager=batch custom_reward_function.name=compute_score_batch reward_model.launch_reward_fn_async=True actor_rollout_ref.model.enable_gradient_checkpointing=True actor_rollout_ref.model.enable_activation_offload=False actor_rollout_ref.rollout.gpu_memory_utilization=0.8 actor_rollout_ref.model.use_remove_padding=True actor_rollout_ref.actor.strategy=fsdp2 
actor_rollout_ref.actor.fsdp_config.forward_prefetch=True actor_rollout_ref.ref.fsdp_config.forward_prefetch=True reward_model.model.fsdp_config.forward_prefetch=True actor_rollout_ref.rollout.max_num_batched_tokens=32768 actor_rollout_ref.rollout.max_num_seqs=256 hydra.run.dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/hydra hydra.output_subdir=null hydra.job.chdir=False actor_rollout_ref.rollout.tensor_model_parallel_size=1 data.max_response_length=32768 actor_rollout_ref.model.torch_dtype=bfloat16 reward_model.model.torch_dtype=bfloat16 actor_rollout_ref.actor.fsdp_config.sharding_strategy=FULL_SHARD actor_rollout_ref.actor.fsdp_config.use_orig_params=True actor_rollout_ref.actor.fsdp_config.mixed_precision=True actor_rollout_ref.actor.fsdp_config.min_num_params=100000000.0 actor_rollout_ref.rollout.tensor_parallel_sync_buffers=True data.max_prompt_length=512 actor_rollout_ref.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct actor_rollout_ref.rollout.dtype=bfloat16 critic.optim.lr=1e-05 critic.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct critic.ppo_micro_batch_size_per_gpu=1 algorithm.kl_ctrl.kl_coef=0.001 trainer.logger=[console,wandb] trainer.project_name=jackrl trainer.experiment_name=rl_rlonly__32k_rl data.train_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/train.parquet data.val_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/test.parquet custom_reward_function.path=/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py 
trainer.default_local_dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints actor_rollout_ref.model.trust_remote_code=True critic.model.trust_remote_code=True trainer.nnodes=16 trainer.n_gpus_per_node=1
[DEBUG] Found 0 global_step directories
Could not override 'actor_rollout_ref.model.torch_dtype'. To append to your config use +actor_rollout_ref.model.torch_dtype=bfloat16
Key 'torch_dtype' is not in struct
    full_key: actor_rollout_ref.model.torch_dtype
    object_type=dict
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
[INFO] Extracting model from VeRL checkpoint at /scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints
[ERROR] No global_step directories found
EXTRACT OUT: False
[ERROR] Stage error: RuntimeError: Model extraction failed
stderr_content:
Fetching 10 files: 0%| | 0/10 [00:00<?, ?it/s]/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`. For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder. warnings.warn( LICENSE: 0.00B [00:00, ?B/s] LICENSE: 11.3kB [00:00, 63.7MB/s] README.md: 0.00B [00:00, ?B/s] README.md: 4.92kB [00:00, 37.0MB/s] .gitattributes: 0.00B [00:00, ?B/s] .gitattributes: 1.52kB [00:00, 14.8MB/s] Fetching 10 files: 10%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 1/10 [00:00<00:03, 2.66it/s] Fetching 10 files: 70%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰ | 7/10 [00:05<00:02, 1.26it/s] Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:05<00:00, 1.84it/s] Fetching 10 files: 0%| | 0/10 [00:00<?, ?it/s] Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 661.82it/s] 2025-10-07 11:59:02,478 INFO worker.py:1694 -- Connecting to existing Ray cluster at address: 129.114.17.42:6379... 
2025-10-07 11:59:02,483 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265 
experiment_name: rl_rlonly__32k
elapsed_time_seconds: 182.413575
stage_complete: true
Row 2
timestamp: 2025-10-07T12:01:55.877799
end_timestamp: 2025-10-07T12:02:52.788322
stage_name: verl_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: verl_rl
stdout_content:
[INFO] Starting stage: VeRL RL training - rl
[INFO] Data preparation succeeded
[INFO] Setting up ray cluster
[DEBUG] SLURM cluster info: 16 nodes, 1 GPUs/node
[INFO] Node list: c609-051,c610-[102,111-112,121-122,131-132],c622-[082,091-092,101-102,111-112,121]
[DEBUG] Head node: c609-051
[ERROR] SLURM Ray cluster setup failed: Command '['srun', '--nodes=1', '--ntasks=1', '-w', 'c609-051', 'hostname', '--ip-address']' timed out after 30 seconds
[ERROR] Stage error: RuntimeError: Failed to setup SLURM Ray cluster: Command '['srun', '--nodes=1', '--ntasks=1', '-w', 'c609-051', 'hostname', '--ip-address']' timed out after 30 seconds
stderr_content:
/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`. For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder. warnings.warn( Fetching 10 files: 0%| | 0/10 [00:00<?, ?it/s] Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 509.72it/s] Fetching 10 files: 0%| | 0/10 [00:00<?, ?it/s] Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 717.67it/s]
experiment_name: rl_rlonly__32k
elapsed_time_seconds: 56.910523
stage_complete: true
Row 3
timestamp: 2025-10-07T12:07:59.709374
end_timestamp: 2025-10-07T12:08:57.303753
stage_name: verl_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: verl_rl
stdout_content:
[INFO] Starting stage: VeRL RL training - rl
[INFO] Data preparation succeeded
[INFO] Setting up ray cluster
[DEBUG] SLURM cluster info: 16 nodes, 1 GPUs/node
[INFO] Node list: c609-051,c610-[102,111-112,121-122,131-132],c622-[082,091-092,101-102,111-112,121]
[DEBUG] Head node: c609-051
[ERROR] SLURM Ray cluster setup failed: Command '['srun', '--nodes=1', '--ntasks=1', '-w', 'c609-051', 'hostname', '--ip-address']' timed out after 30 seconds
[ERROR] Stage error: RuntimeError: Failed to setup SLURM Ray cluster: Command '['srun', '--nodes=1', '--ntasks=1', '-w', 'c609-051', 'hostname', '--ip-address']' timed out after 30 seconds
stderr_content:
/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`. For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder. warnings.warn( Fetching 10 files: 0%| | 0/10 [00:00<?, ?it/s] Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 452.22it/s] Fetching 10 files: 0%| | 0/10 [00:00<?, ?it/s] Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 734.32it/s]
experiment_name: rl_rlonly__32k
elapsed_time_seconds: 57.594379
stage_complete: true
Row 4
timestamp: 2025-10-07T12:11:48.517479
end_timestamp: 2025-10-07T12:14:20.048015
stage_name: verl_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: verl_rl
stdout_content:
[INFO] Starting stage: VeRL RL training - rl
[INFO] Data preparation succeeded
[INFO] Setting up ray cluster
[DEBUG] SLURM cluster info: 16 nodes, 1 GPUs/node
[INFO] Node list: c609-051,c610-[102,111-112,121-122,131-132],c622-[082,091-092,101-102,111-112,121]
[DEBUG] Head node: c609-051
[DEBUG] Ray head address: 129.114.17.42:6379
[INFO] Starting Ray head on c609-051...
[INFO] Waiting for head node to initialize...
[DEBUG] Starting 15 worker nodes...
[DEBUG] Starting worker 1: c610-102
[DEBUG] Starting worker 2: c610-111
[DEBUG] Starting worker 3: c610-112
[DEBUG] Starting worker 4: c610-121
[DEBUG] Starting worker 5: c610-122
[DEBUG] Starting worker 6: c610-131
[DEBUG] Starting worker 7: c610-132
[DEBUG] Starting worker 8: c622-082
[DEBUG] Starting worker 9: c622-091
[DEBUG] Starting worker 10: c622-092
[DEBUG] Starting worker 11: c622-101
[DEBUG] Starting worker 12: c622-102
[DEBUG] Starting worker 13: c622-111
[DEBUG] Starting worker 14: c622-112
[DEBUG] Starting worker 15: c622-121
[INFO] Waiting for Ray cluster to stabilize...
[INFO] Connecting to Ray cluster at 129.114.17.42:6379...
[INFO] Ray cluster connected successfully (stats from the connection): [INFO] Total GPUs: 16.0 [INFO] Available GPUs: 16.0 [INFO] Total CPUs: 512.0 [INFO] SLURM Ray cluster setup completed [INFO] Starting checkpoint monitoring for intermediate uploads...[INFO] Intermediate checkpoint upload enabled [DEBUG] Running verl command: python -m verl.trainer.main_ppo trainer.total_epochs=50 actor_rollout_ref.actor.optim.lr=1e-06 trainer.save_freq=20 trainer.test_freq=10 trainer.val_before_train=True algorithm.adv_estimator=grpo actor_rollout_ref.rollout.n=16 data.train_batch_size=512 actor_rollout_ref.actor.ppo_mini_batch_size=64 actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4 actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4 actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=4 custom_reward_function.reward_kwargs.response_or_sample=sample custom_reward_function.reward_kwargs.simple_format_reward_weight=0.0 custom_reward_function.reward_kwargs.complex_format_reward_weight=0.0 custom_reward_function.reward_kwargs.sample_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.verdict_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.reflection_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.final_answer_in_samples_reward_weight=0.0 custom_reward_function.reward_kwargs.transition_penalty_weight=0.0 custom_reward_function.reward_kwargs.similarity_penalty_weight=0.0 custom_reward_function.reward_kwargs.sample_count_penalty_weight=0.0 custom_reward_function.reward_kwargs.reward_min=0.0 custom_reward_function.reward_kwargs.reward_max=10.0 reward_model.reward_manager=batch custom_reward_function.name=compute_score_batch reward_model.launch_reward_fn_async=True actor_rollout_ref.model.enable_gradient_checkpointing=True actor_rollout_ref.model.enable_activation_offload=False actor_rollout_ref.rollout.gpu_memory_utilization=0.8 actor_rollout_ref.model.use_remove_padding=True actor_rollout_ref.actor.strategy=fsdp2 
actor_rollout_ref.actor.fsdp_config.forward_prefetch=True actor_rollout_ref.ref.fsdp_config.forward_prefetch=True reward_model.model.fsdp_config.forward_prefetch=True actor_rollout_ref.rollout.max_num_batched_tokens=32768 actor_rollout_ref.rollout.max_num_seqs=256 hydra.run.dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/hydra hydra.output_subdir=null actor_rollout_ref.rollout.tensor_model_parallel_size=1 data.max_response_length=32768 actor_rollout_ref.model.torch_dtype=bfloat16 reward_model.model.torch_dtype=bfloat16 actor_rollout_ref.actor.fsdp_config.sharding_strategy=FULL_SHARD actor_rollout_ref.actor.fsdp_config.use_orig_params=True actor_rollout_ref.actor.fsdp_config.mixed_precision=True actor_rollout_ref.actor.fsdp_config.min_num_params=100000000.0 actor_rollout_ref.rollout.tensor_parallel_sync_buffers=True data.max_prompt_length=512 actor_rollout_ref.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct actor_rollout_ref.rollout.dtype=bfloat16 critic.optim.lr=1e-05 critic.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct critic.ppo_micro_batch_size_per_gpu=1 algorithm.kl_ctrl.kl_coef=0.001 trainer.logger=[console,wandb] trainer.project_name=jackrl trainer.experiment_name=rl_rlonly__32k_rl data.train_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/train.parquet data.val_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/test.parquet custom_reward_function.path=/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py 
trainer.default_local_dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints actor_rollout_ref.model.trust_remote_code=True critic.model.trust_remote_code=True trainer.nnodes=16 trainer.n_gpus_per_node=1
[DEBUG] Found 0 global_step directories
Could not override 'actor_rollout_ref.model.torch_dtype'. To append to your config use +actor_rollout_ref.model.torch_dtype=bfloat16
Key 'torch_dtype' is not in struct
    full_key: actor_rollout_ref.model.torch_dtype
    object_type=dict
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
[INFO] Extracting model from VeRL checkpoint at /scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints
[ERROR] No global_step directories found
EXTRACT OUT: False
[ERROR] Stage error: RuntimeError: Model extraction failed
stderr_content:
/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`. For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder. warnings.warn( Fetching 10 files: 0%| | 0/10 [00:00<?, ?it/s] Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 574.65it/s] Fetching 10 files: 0%| | 0/10 [00:00<?, ?it/s] Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 717.23it/s] 2025-10-07 12:14:07,602 INFO worker.py:1694 -- Connecting to existing Ray cluster at address: 129.114.17.42:6379... 2025-10-07 12:14:07,607 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265 
experiment_name: rl_rlonly__32k
elapsed_time_seconds: 151.530536
stage_complete: true
Row 5
timestamp: 2025-10-07T12:16:11.389848
end_timestamp: 2025-10-07T12:18:42.016546
stage_name: verl_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: verl_rl
stdout_content:
[INFO] Starting stage: VeRL RL training - rl
[INFO] Data preparation succeeded
[INFO] Setting up ray cluster
[DEBUG] SLURM cluster info: 16 nodes, 1 GPUs/node
[INFO] Node list: c609-051,c610-[102,111-112,121-122,131-132],c622-[082,091-092,101-102,111-112,121]
[DEBUG] Head node: c609-051
[DEBUG] Ray head address: 129.114.17.42:6379
[INFO] Starting Ray head on c609-051...
[INFO] Waiting for head node to initialize...
[DEBUG] Starting 15 worker nodes...
[DEBUG] Starting worker 1: c610-102
[DEBUG] Starting worker 2: c610-111
[DEBUG] Starting worker 3: c610-112
[DEBUG] Starting worker 4: c610-121
[DEBUG] Starting worker 5: c610-122
[DEBUG] Starting worker 6: c610-131
[DEBUG] Starting worker 7: c610-132
[DEBUG] Starting worker 8: c622-082
[DEBUG] Starting worker 9: c622-091
[DEBUG] Starting worker 10: c622-092
[DEBUG] Starting worker 11: c622-101
[DEBUG] Starting worker 12: c622-102
[DEBUG] Starting worker 13: c622-111
[DEBUG] Starting worker 14: c622-112
[DEBUG] Starting worker 15: c622-121
[INFO] Waiting for Ray cluster to stabilize...
[INFO] Connecting to Ray cluster at 129.114.17.42:6379...
[INFO] Ray cluster connected successfully (stats from the connection): [INFO] Total GPUs: 16.0 [INFO] Available GPUs: 16.0 [INFO] Total CPUs: 512.0 [INFO] SLURM Ray cluster setup completed [INFO] Starting checkpoint monitoring for intermediate uploads...[INFO] Intermediate checkpoint upload enabled [DEBUG] Running verl command: python -m verl.trainer.main_ppo trainer.total_epochs=50 actor_rollout_ref.actor.optim.lr=1e-06 trainer.save_freq=20 trainer.test_freq=10 trainer.val_before_train=True algorithm.adv_estimator=grpo actor_rollout_ref.rollout.n=16 data.train_batch_size=512 actor_rollout_ref.actor.ppo_mini_batch_size=64 actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4 actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4 actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=4 custom_reward_function.reward_kwargs.response_or_sample=sample custom_reward_function.reward_kwargs.simple_format_reward_weight=0.0 custom_reward_function.reward_kwargs.complex_format_reward_weight=0.0 custom_reward_function.reward_kwargs.sample_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.verdict_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.reflection_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.final_answer_in_samples_reward_weight=0.0 custom_reward_function.reward_kwargs.transition_penalty_weight=0.0 custom_reward_function.reward_kwargs.similarity_penalty_weight=0.0 custom_reward_function.reward_kwargs.sample_count_penalty_weight=0.0 custom_reward_function.reward_kwargs.reward_min=0.0 custom_reward_function.reward_kwargs.reward_max=10.0 reward_model.reward_manager=batch custom_reward_function.name=compute_score_batch reward_model.launch_reward_fn_async=True actor_rollout_ref.model.enable_gradient_checkpointing=True actor_rollout_ref.model.enable_activation_offload=False actor_rollout_ref.rollout.gpu_memory_utilization=0.8 actor_rollout_ref.model.use_remove_padding=True actor_rollout_ref.actor.strategy=fsdp2 
actor_rollout_ref.actor.fsdp_config.forward_prefetch=True actor_rollout_ref.ref.fsdp_config.forward_prefetch=True reward_model.model.fsdp_config.forward_prefetch=True actor_rollout_ref.rollout.max_num_batched_tokens=32768 actor_rollout_ref.rollout.max_num_seqs=256 hydra.run.dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/hydra hydra.output_subdir=null actor_rollout_ref.rollout.tensor_model_parallel_size=1 data.max_response_length=32768 actor_rollout_ref.actor.fsdp_config.sharding_strategy=FULL_SHARD actor_rollout_ref.actor.fsdp_config.use_orig_params=True actor_rollout_ref.actor.fsdp_config.mixed_precision=True actor_rollout_ref.actor.fsdp_config.min_num_params=100000000.0 actor_rollout_ref.rollout.tensor_parallel_sync_buffers=True data.max_prompt_length=512 actor_rollout_ref.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct actor_rollout_ref.rollout.dtype=bfloat16 critic.optim.lr=1e-05 critic.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct critic.ppo_micro_batch_size_per_gpu=1 algorithm.kl_ctrl.kl_coef=0.001 trainer.logger=[console,wandb] trainer.project_name=jackrl trainer.experiment_name=rl_rlonly__32k_rl data.train_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/train.parquet data.val_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/test.parquet custom_reward_function.path=/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py trainer.default_local_dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints 
actor_rollout_ref.model.trust_remote_code=True critic.model.trust_remote_code=True trainer.nnodes=16 trainer.n_gpus_per_node=1
[DEBUG] Found 0 global_step directories
Could not override 'actor_rollout_ref.actor.fsdp_config.sharding_strategy'. To append to your config use +actor_rollout_ref.actor.fsdp_config.sharding_strategy=FULL_SHARD
Key 'sharding_strategy' is not in struct
    full_key: actor_rollout_ref.actor.fsdp_config.sharding_strategy
    object_type=dict
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
[INFO] Extracting model from VeRL checkpoint at /scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints
[ERROR] No global_step directories found
EXTRACT OUT: False
[ERROR] Stage error: RuntimeError: Model extraction failed
stderr_content:
/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`. For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder. warnings.warn( Fetching 10 files: 0%| | 0/10 [00:00<?, ?it/s] Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 645.61it/s] Fetching 10 files: 0%| | 0/10 [00:00<?, ?it/s] Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 771.92it/s] 2025-10-07 12:18:29,541 INFO worker.py:1694 -- Connecting to existing Ray cluster at address: 129.114.17.42:6379... 2025-10-07 12:18:29,546 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265 
experiment_name: rl_rlonly__32k
elapsed_time_seconds: 150.626698
stage_complete: true
Row 6
timestamp: 2025-10-07T12:22:22.467324
end_timestamp: 2025-10-07T12:24:54.932272
stage_name: verl_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: verl_rl
stdout_content:
[INFO] Starting stage: VeRL RL training - rl
[INFO] Data preparation succeeded
[INFO] Setting up ray cluster
[DEBUG] SLURM cluster info: 16 nodes, 1 GPUs/node
[INFO] Node list: c609-051,c610-[102,111-112,121-122,131-132],c622-[082,091-092,101-102,111-112,121]
[DEBUG] Head node: c609-051
[DEBUG] Ray head address: 129.114.17.42:6379
[INFO] Starting Ray head on c609-051...
[INFO] Waiting for head node to initialize...
[DEBUG] Starting 15 worker nodes...
[DEBUG] Starting worker 1: c610-102
[DEBUG] Starting worker 2: c610-111
[DEBUG] Starting worker 3: c610-112
[DEBUG] Starting worker 4: c610-121
[DEBUG] Starting worker 5: c610-122
[DEBUG] Starting worker 6: c610-131
[DEBUG] Starting worker 7: c610-132
[DEBUG] Starting worker 8: c622-082
[DEBUG] Starting worker 9: c622-091
[DEBUG] Starting worker 10: c622-092
[DEBUG] Starting worker 11: c622-101
[DEBUG] Starting worker 12: c622-102
[DEBUG] Starting worker 13: c622-111
[DEBUG] Starting worker 14: c622-112
[DEBUG] Starting worker 15: c622-121
[INFO] Waiting for Ray cluster to stabilize...
[INFO] Connecting to Ray cluster at 129.114.17.42:6379...
[INFO] Ray cluster connected successfully (stats from the connection): [INFO] Total GPUs: 16.0 [INFO] Available GPUs: 16.0 [INFO] Total CPUs: 512.0 [INFO] SLURM Ray cluster setup completed [INFO] Starting checkpoint monitoring for intermediate uploads...[INFO] Intermediate checkpoint upload enabled [DEBUG] Running verl command: python -m verl.trainer.main_ppo trainer.total_epochs=50 actor_rollout_ref.actor.optim.lr=1e-06 trainer.save_freq=20 trainer.test_freq=10 trainer.val_before_train=True algorithm.adv_estimator=grpo actor_rollout_ref.rollout.n=16 data.train_batch_size=512 actor_rollout_ref.actor.ppo_mini_batch_size=64 actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4 actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4 actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=4 custom_reward_function.reward_kwargs.response_or_sample=sample custom_reward_function.reward_kwargs.simple_format_reward_weight=0.0 custom_reward_function.reward_kwargs.complex_format_reward_weight=0.0 custom_reward_function.reward_kwargs.sample_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.verdict_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.reflection_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.final_answer_in_samples_reward_weight=0.0 custom_reward_function.reward_kwargs.transition_penalty_weight=0.0 custom_reward_function.reward_kwargs.similarity_penalty_weight=0.0 custom_reward_function.reward_kwargs.sample_count_penalty_weight=0.0 custom_reward_function.reward_kwargs.reward_min=0.0 custom_reward_function.reward_kwargs.reward_max=10.0 reward_model.reward_manager=batch custom_reward_function.name=compute_score_batch reward_model.launch_reward_fn_async=True actor_rollout_ref.model.enable_gradient_checkpointing=True actor_rollout_ref.model.enable_activation_offload=False actor_rollout_ref.rollout.gpu_memory_utilization=0.8 actor_rollout_ref.model.use_remove_padding=True actor_rollout_ref.actor.strategy=fsdp2 
actor_rollout_ref.actor.fsdp_config.forward_prefetch=True actor_rollout_ref.ref.fsdp_config.forward_prefetch=True reward_model.model.fsdp_config.forward_prefetch=True actor_rollout_ref.rollout.max_num_batched_tokens=32768 actor_rollout_ref.rollout.max_num_seqs=256 hydra.run.dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/hydra hydra.output_subdir=null actor_rollout_ref.rollout.tensor_model_parallel_size=1 data.max_response_length=32768 actor_rollout_ref.actor.fsdp_config.use_orig_params=True actor_rollout_ref.actor.fsdp_config.mixed_precision=True actor_rollout_ref.actor.fsdp_config.min_num_params=100000000.0 actor_rollout_ref.rollout.tensor_parallel_sync_buffers=True data.max_prompt_length=512 actor_rollout_ref.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct actor_rollout_ref.rollout.dtype=bfloat16 critic.optim.lr=1e-05 critic.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct critic.ppo_micro_batch_size_per_gpu=1 algorithm.kl_ctrl.kl_coef=0.001 trainer.logger=[console,wandb] trainer.project_name=jackrl trainer.experiment_name=rl_rlonly__32k_rl data.train_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/train.parquet data.val_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/test.parquet custom_reward_function.path=/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py trainer.default_local_dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints actor_rollout_ref.model.trust_remote_code=True critic.model.trust_remote_code=True trainer.nnodes=16 
trainer.n_gpus_per_node=1
[DEBUG] Found 0 global_step directories
Could not override 'actor_rollout_ref.actor.fsdp_config.use_orig_params'.
To append to your config use +actor_rollout_ref.actor.fsdp_config.use_orig_params=True
Key 'use_orig_params' is not in struct
    full_key: actor_rollout_ref.actor.fsdp_config.use_orig_params
    object_type=dict
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
[INFO] Extracting model from VeRL checkpoint at /scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints
[ERROR] No global_step directories found
EXTRACT OUT: False
[ERROR] Stage error: RuntimeError: Model extraction failed
/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as `local_dir`. For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder.
  warnings.warn(
Fetching 10 files: 100%|██████████| 10/10 [00:00<00:00, 690.23it/s]
Fetching 10 files: 100%|██████████| 10/10 [00:00<00:00, 906.43it/s]
2025-10-07 12:24:42,374 INFO worker.py:1694 -- Connecting to existing Ray cluster at address: 129.114.17.42:6379...
2025-10-07 12:24:42,379 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265
rl_rlonly__32k
152.464948
true
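Both failed runs above died on the same class of Hydra error: the launcher passed overrides such as `actor_rollout_ref.actor.fsdp_config.use_orig_params=True` for keys that do not exist in verl's config schema. Hydra configs are in struct mode, so a plain `key=value` override may only modify an existing key; a brand-new key must be introduced with a leading `+`, exactly as the error message suggests. A minimal sketch of that rule, using plain dicts rather than Hydra/OmegaConf themselves:

```python
# Minimal sketch of Hydra's struct-mode override rule (plain dicts, not Hydra itself).
# "key=value" may only modify keys already present in the config; a new key must
# be introduced with a leading "+", e.g. "+section.key=value". Without it, Hydra
# raises "Key '<leaf>' is not in struct", which is what aborted the runs above.

def apply_override(config: dict, override: str) -> dict:
    key_path, value = override.split("=", 1)
    append = key_path.startswith("+")
    key_path = key_path.lstrip("+")
    *parents, leaf = key_path.split(".")
    node = config
    for parent in parents:
        node = node.setdefault(parent, {}) if append else node[parent]
    if not append and leaf not in node:
        # Mirrors Hydra's hint: "To append to your config use +<full_key>=<value>"
        raise KeyError(f"Key '{leaf}' is not in struct; use +{key_path}={value}")
    node[leaf] = value
    return config

cfg = {"actor": {"fsdp_config": {"forward_prefetch": "False"}}}
apply_override(cfg, "actor.fsdp_config.forward_prefetch=True")   # existing key: allowed
apply_override(cfg, "+actor.fsdp_config.use_orig_params=True")   # new key: needs "+"
```

The third run in this dump (below) drops the unrecognized `use_orig_params` / `mixed_precision` / `min_num_params` / `tensor_parallel_sync_buffers` overrides and gets past Hydra parsing.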
2025-10-07T12:26:01.732677
2025-10-07T12:28:33.786408
verl_rl
1
INFO
Complete log capture for stage: verl_rl
[INFO] Starting stage: VeRL RL training - rl [INFO] Data preparation succeeded [INFO] Setting up ray cluster [DEBUG] SLURM cluster info: 16 nodes, 1 GPUs/node [INFO] Node list: c609-051,c610-[102,111-112,121-122,131-132],c622-[082,091-092,101-102,111-112,121] [DEBUG] Head node: c609-051 [DEBUG] Ray head address: 129.114.17.42:6379 [INFO] Starting Ray head on c609-051... [INFO] Waiting for head node to initialize... [DEBUG] Starting 15 worker nodes... [DEBUG] Starting worker 1: c610-102 [DEBUG] Starting worker 2: c610-111 [DEBUG] Starting worker 3: c610-112 [DEBUG] Starting worker 4: c610-121 [DEBUG] Starting worker 5: c610-122 [DEBUG] Starting worker 6: c610-131 [DEBUG] Starting worker 7: c610-132 [DEBUG] Starting worker 8: c622-082 [DEBUG] Starting worker 9: c622-091 [DEBUG] Starting worker 10: c622-092 [DEBUG] Starting worker 11: c622-101 [DEBUG] Starting worker 12: c622-102 [DEBUG] Starting worker 13: c622-111 [DEBUG] Starting worker 14: c622-112 [DEBUG] Starting worker 15: c622-121 [INFO] Waiting for Ray cluster to stabilize... [INFO] Connecting to Ray cluster at 129.114.17.42:6379... 
[INFO] Ray cluster connected successfully (stats from the connection): [INFO] Total GPUs: 16.0 [INFO] Available GPUs: 16.0 [INFO] Total CPUs: 512.0 [INFO] SLURM Ray cluster setup completed [INFO] Starting checkpoint monitoring for intermediate uploads...[INFO] Intermediate checkpoint upload enabled [DEBUG] Running verl command: python -m verl.trainer.main_ppo trainer.total_epochs=50 actor_rollout_ref.actor.optim.lr=1e-06 trainer.save_freq=20 trainer.test_freq=10 trainer.val_before_train=True algorithm.adv_estimator=grpo actor_rollout_ref.rollout.n=16 data.train_batch_size=512 actor_rollout_ref.actor.ppo_mini_batch_size=64 actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4 actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4 actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=4 custom_reward_function.reward_kwargs.response_or_sample=sample custom_reward_function.reward_kwargs.simple_format_reward_weight=0.0 custom_reward_function.reward_kwargs.complex_format_reward_weight=0.0 custom_reward_function.reward_kwargs.sample_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.verdict_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.reflection_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.final_answer_in_samples_reward_weight=0.0 custom_reward_function.reward_kwargs.transition_penalty_weight=0.0 custom_reward_function.reward_kwargs.similarity_penalty_weight=0.0 custom_reward_function.reward_kwargs.sample_count_penalty_weight=0.0 custom_reward_function.reward_kwargs.reward_min=0.0 custom_reward_function.reward_kwargs.reward_max=10.0 reward_model.reward_manager=batch custom_reward_function.name=compute_score_batch reward_model.launch_reward_fn_async=True actor_rollout_ref.model.enable_gradient_checkpointing=True actor_rollout_ref.model.enable_activation_offload=False actor_rollout_ref.rollout.gpu_memory_utilization=0.8 actor_rollout_ref.model.use_remove_padding=True actor_rollout_ref.actor.strategy=fsdp2 
actor_rollout_ref.actor.fsdp_config.forward_prefetch=True actor_rollout_ref.ref.fsdp_config.forward_prefetch=True reward_model.model.fsdp_config.forward_prefetch=True actor_rollout_ref.rollout.max_num_batched_tokens=32768 actor_rollout_ref.rollout.max_num_seqs=256 hydra.run.dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/hydra hydra.output_subdir=null actor_rollout_ref.rollout.tensor_model_parallel_size=1 data.max_response_length=32768 actor_rollout_ref.actor.fsdp_config.mixed_precision=True actor_rollout_ref.actor.fsdp_config.min_num_params=100000000.0 actor_rollout_ref.rollout.tensor_parallel_sync_buffers=True data.max_prompt_length=512 actor_rollout_ref.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct actor_rollout_ref.rollout.dtype=bfloat16 critic.optim.lr=1e-05 critic.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct critic.ppo_micro_batch_size_per_gpu=1 algorithm.kl_ctrl.kl_coef=0.001 trainer.logger=[console,wandb] trainer.project_name=jackrl trainer.experiment_name=rl_rlonly__32k_rl data.train_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/train.parquet data.val_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/test.parquet custom_reward_function.path=/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py trainer.default_local_dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints actor_rollout_ref.model.trust_remote_code=True critic.model.trust_remote_code=True trainer.nnodes=16 trainer.n_gpus_per_node=1 [DEBUG] Found 0 global_step 
directories
Could not override 'actor_rollout_ref.actor.fsdp_config.mixed_precision'.
To append to your config use +actor_rollout_ref.actor.fsdp_config.mixed_precision=True
Key 'mixed_precision' is not in struct
    full_key: actor_rollout_ref.actor.fsdp_config.mixed_precision
    object_type=dict
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
[INFO] Extracting model from VeRL checkpoint at /scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints
[ERROR] No global_step directories found
EXTRACT OUT: False
[ERROR] Stage error: RuntimeError: Model extraction failed
/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as `local_dir`. For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder.
  warnings.warn(
Fetching 10 files: 100%|██████████| 10/10 [00:00<00:00, 515.77it/s]
Fetching 10 files: 100%|██████████| 10/10 [00:00<00:00, 631.30it/s]
2025-10-07 12:28:21,461 INFO worker.py:1694 -- Connecting to existing Ray cluster at address: 129.114.17.42:6379...
2025-10-07 12:28:21,466 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265
rl_rlonly__32k
152.053731
true
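In both failed records, training aborted before the first save, so the post-stage extractor logged `[DEBUG] Found 0 global_step directories` and raised `RuntimeError: Model extraction failed`. verl saves checkpoints under the trainer's `default_local_dir` in `global_step_<N>` subdirectories (here every 20 steps, per `trainer.save_freq=20`). A hypothetical sketch of such a scan — the real verl extraction code may differ — that picks the highest-numbered step:

```python
# Hypothetical sketch of the "[DEBUG] Found N global_step directories" scan:
# look for global_step_<N> subdirectories under the checkpoint root and return
# the highest-step one. Returns None when training never saved a checkpoint,
# which is what produced "[ERROR] No global_step directories found" above.
import re
from pathlib import Path
from typing import Optional

STEP_RE = re.compile(r"^global_step_(\d+)$")

def latest_global_step(ckpt_root: str) -> Optional[Path]:
    root = Path(ckpt_root)
    if not root.is_dir():
        return None
    steps = [
        (int(m.group(1)), d)
        for d in root.iterdir()
        if d.is_dir() and (m := STEP_RE.match(d.name))
    ]
    if not steps:
        return None  # -> "No global_step directories found"
    return max(steps)[1]
```

Note the numeric sort: a lexicographic sort of directory names would rank `global_step_9` above `global_step_20`.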
2025-10-07T12:30:13.750674
2025-10-07T12:33:58.251941
verl_rl
1
INFO
Complete log capture for stage: verl_rl
[INFO] Starting stage: VeRL RL training - rl [INFO] Data preparation succeeded [INFO] Setting up ray cluster [DEBUG] SLURM cluster info: 16 nodes, 1 GPUs/node [INFO] Node list: c609-051,c610-[102,111-112,121-122,131-132],c622-[082,091-092,101-102,111-112,121] [DEBUG] Head node: c609-051 [DEBUG] Ray head address: 129.114.17.42:6379 [INFO] Starting Ray head on c609-051... [INFO] Waiting for head node to initialize... [DEBUG] Starting 15 worker nodes... [DEBUG] Starting worker 1: c610-102 [DEBUG] Starting worker 2: c610-111 [DEBUG] Starting worker 3: c610-112 [DEBUG] Starting worker 4: c610-121 [DEBUG] Starting worker 5: c610-122 [DEBUG] Starting worker 6: c610-131 [DEBUG] Starting worker 7: c610-132 [DEBUG] Starting worker 8: c622-082 [DEBUG] Starting worker 9: c622-091 [DEBUG] Starting worker 10: c622-092 [DEBUG] Starting worker 11: c622-101 [DEBUG] Starting worker 12: c622-102 [DEBUG] Starting worker 13: c622-111 [DEBUG] Starting worker 14: c622-112 [DEBUG] Starting worker 15: c622-121 [INFO] Waiting for Ray cluster to stabilize... [INFO] Connecting to Ray cluster at 129.114.17.42:6379... 
[INFO] Ray cluster connected successfully (stats from the connection): [INFO] Total GPUs: 16.0 [INFO] Available GPUs: 16.0 [INFO] Total CPUs: 512.0 [INFO] SLURM Ray cluster setup completed [INFO] Starting checkpoint monitoring for intermediate uploads...[INFO] Intermediate checkpoint upload enabled [DEBUG] Running verl command: python -m verl.trainer.main_ppo trainer.total_epochs=50 actor_rollout_ref.actor.optim.lr=1e-06 trainer.save_freq=20 trainer.test_freq=10 trainer.val_before_train=True algorithm.adv_estimator=grpo actor_rollout_ref.rollout.n=16 data.train_batch_size=512 actor_rollout_ref.actor.ppo_mini_batch_size=64 actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4 actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4 actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=4 custom_reward_function.reward_kwargs.response_or_sample=sample custom_reward_function.reward_kwargs.simple_format_reward_weight=0.0 custom_reward_function.reward_kwargs.complex_format_reward_weight=0.0 custom_reward_function.reward_kwargs.sample_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.verdict_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.reflection_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.final_answer_in_samples_reward_weight=0.0 custom_reward_function.reward_kwargs.transition_penalty_weight=0.0 custom_reward_function.reward_kwargs.similarity_penalty_weight=0.0 custom_reward_function.reward_kwargs.sample_count_penalty_weight=0.0 custom_reward_function.reward_kwargs.reward_min=0.0 custom_reward_function.reward_kwargs.reward_max=10.0 reward_model.reward_manager=batch custom_reward_function.name=compute_score_batch reward_model.launch_reward_fn_async=True actor_rollout_ref.model.enable_gradient_checkpointing=True actor_rollout_ref.model.enable_activation_offload=False actor_rollout_ref.rollout.gpu_memory_utilization=0.8 actor_rollout_ref.model.use_remove_padding=True actor_rollout_ref.actor.strategy=fsdp2 
actor_rollout_ref.actor.fsdp_config.forward_prefetch=True actor_rollout_ref.ref.fsdp_config.forward_prefetch=True reward_model.model.fsdp_config.forward_prefetch=True actor_rollout_ref.rollout.max_num_batched_tokens=32768 actor_rollout_ref.rollout.max_num_seqs=256 hydra.run.dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/hydra hydra.output_subdir=null actor_rollout_ref.rollout.tensor_model_parallel_size=1 data.max_response_length=32768 data.max_prompt_length=512 actor_rollout_ref.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct actor_rollout_ref.rollout.dtype=bfloat16 critic.optim.lr=1e-05 critic.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct critic.ppo_micro_batch_size_per_gpu=1 algorithm.kl_ctrl.kl_coef=0.001 trainer.logger=[console,wandb] trainer.project_name=jackrl trainer.experiment_name=rl_rlonly__32k_rl data.train_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/train.parquet data.val_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/test.parquet custom_reward_function.path=/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py trainer.default_local_dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints actor_rollout_ref.model.trust_remote_code=True critic.model.trust_remote_code=True trainer.nnodes=16 trainer.n_gpus_per_node=1 [DEBUG] Found 0 global_step directories 2025-10-07 12:32:41,672 INFO worker.py:1554 -- Using address 129.114.17.42:6379 set in the environment variable RAY_ADDRESS 2025-10-07 12:32:41,672 INFO worker.py:1694 -- 
Connecting to existing Ray cluster at address: 129.114.17.42:6379... 2025-10-07 12:32:41,677 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265  [DEBUG] Found 0 global_step directories (TaskRunner pid=518213) Generating train split: 0 examples [00:00, ? examples/s] (TaskRunner pid=518213) Generating train split: 1000 examples [00:00, 5692.53 examples/s] (TaskRunner pid=518213) Generating train split: 1000 examples [00:00, 3459.26 examples/s] (TaskRunner pid=518213) Generating train split: 0 examples [00:00, ? examples/s] (TaskRunner pid=518213) Generating train split: 4450 examples [00:00, 94632.50 examples/s] (TaskRunner pid=518213) DeprecationWarning: `ray.state.available_resources_per_node` is a private attribute and access will be removed in a future Ray version. (TaskRunner pid=518213) WARNING:2025-10-07 12:33:11,362:Waiting for register center actor H7gr2M_register_center to be ready. Elapsed time: 0 seconds out of 300 seconds. [DEBUG] Found 0 global_step directories (WorkerDict pid=1952306, ip=129.114.17.87) Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2ForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)` (WorkerDict pid=1952306, ip=129.114.17.87) You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`. 
(TaskRunner pid=518213) TaskRunner hostname: c609-051.vista.tacc.utexas.edu, PID: 518213 (TaskRunner pid=518213) {'actor_rollout_ref': {'actor': {'checkpoint': {'load_contents': ['model', (TaskRunner pid=518213) 'optimizer', (TaskRunner pid=518213) 'extra'], (TaskRunner pid=518213) 'save_contents': ['model', (TaskRunner pid=518213) 'optimizer', (TaskRunner pid=518213) 'extra']}, (TaskRunner pid=518213) 'clip_ratio': 0.2, (TaskRunner pid=518213) 'clip_ratio_c': 3.0, (TaskRunner pid=518213) 'clip_ratio_high': 0.2, (TaskRunner pid=518213) 'clip_ratio_low': 0.2, (TaskRunner pid=518213) 'entropy_checkpointing': False, (TaskRunner pid=518213) 'entropy_coeff': 0, (TaskRunner pid=518213) 'entropy_from_logits_with_chunking': False, (TaskRunner pid=518213) 'fsdp_config': {'forward_prefetch': True, (TaskRunner pid=518213) 'fsdp_size': -1, (TaskRunner pid=518213) 'offload_policy': False, (TaskRunner pid=518213) 'optimizer_offload': False, (TaskRunner pid=518213) 'param_offload': False, (TaskRunner pid=518213) 'reshard_after_forward': True, (TaskRunner pid=518213) 'wrap_policy': {'min_num_params': 0}}, (TaskRunner pid=518213) 'grad_clip': 1.0, (TaskRunner pid=518213) 'kl_loss_coef': 0.001, (TaskRunner pid=518213) 'kl_loss_type': 'low_var_kl', (TaskRunner pid=518213) 'loss_agg_mode': 'token-mean', (TaskRunner pid=518213) 'optim': {'lr': 1e-06, (TaskRunner pid=518213) 'lr_warmup_steps': -1, (TaskRunner pid=518213) 'lr_warmup_steps_ratio': 0.0, (TaskRunner pid=518213) 'min_lr_ratio': 0.0, (TaskRunner pid=518213) 'num_cycles': 0.5, (TaskRunner pid=518213) 'total_training_steps': -1, (TaskRunner pid=518213) 'warmup_style': 'constant', (TaskRunner pid=518213) 'weight_decay': 0.01}, (TaskRunner pid=518213) 'policy_loss': {'clip_cov_lb': 1.0, (TaskRunner pid=518213) 'clip_cov_ratio': 0.0002, (TaskRunner pid=518213) 'clip_cov_ub': 5.0, (TaskRunner pid=518213) 'kl_cov_ratio': 0.0002, (TaskRunner pid=518213) 'loss_mode': 'vanilla', (TaskRunner pid=518213) 'ppo_kl_coef': 0.1}, (TaskRunner 
pid=518213) 'ppo_epochs': 1, (TaskRunner pid=518213) 'ppo_max_token_len_per_gpu': 16384, (TaskRunner pid=518213) 'ppo_micro_batch_size': None, (TaskRunner pid=518213) 'ppo_micro_batch_size_per_gpu': 4, (TaskRunner pid=518213) 'ppo_mini_batch_size': 64, (TaskRunner pid=518213) 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig', (TaskRunner pid=518213) 'all_ranks': False, (TaskRunner pid=518213) 'discrete': False, (TaskRunner pid=518213) 'ranks': []}, (TaskRunner pid=518213) 'shuffle': False, (TaskRunner pid=518213) 'strategy': 'fsdp2', (TaskRunner pid=518213) 'ulysses_sequence_parallel_size': 1, (TaskRunner pid=518213) 'use_dynamic_bsz': False, (TaskRunner pid=518213) 'use_kl_loss': False, (TaskRunner pid=518213) 'use_torch_compile': True}, (TaskRunner pid=518213) 'hybrid_engine': True, (TaskRunner pid=518213) 'model': {'custom_chat_template': None, (TaskRunner pid=518213) 'enable_activation_offload': False, (TaskRunner pid=518213) 'enable_gradient_checkpointing': True, (TaskRunner pid=518213) 'exclude_modules': None, (TaskRunner pid=518213) 'external_lib': None, (TaskRunner pid=518213) 'fused_kernel_options': {'impl_backend': 'torch'}, (TaskRunner pid=518213) 'lora_alpha': 16, (TaskRunner pid=518213) 'lora_rank': 0, (TaskRunner pid=518213) 'override_config': {}, (TaskRunner pid=518213) 'path': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct', (TaskRunner pid=518213) 'target_modules': 'all-linear', (TaskRunner pid=518213) 'trust_remote_code': True, (TaskRunner pid=518213) 'use_fused_kernels': False, (TaskRunner pid=518213) 'use_liger': False, (TaskRunner pid=518213) 'use_remove_padding': True, (TaskRunner pid=518213) 'use_shm': False}, (TaskRunner pid=518213) 'ref': {'entropy_checkpointing': False, (TaskRunner pid=518213) 'entropy_from_logits_with_chunking': False, (TaskRunner pid=518213) 'fsdp_config': {'forward_prefetch': True, (TaskRunner pid=518213) 
'param_offload': False, (TaskRunner pid=518213) 'reshard_after_forward': True, (TaskRunner pid=518213) 'wrap_policy': {'min_num_params': 0}}, (TaskRunner pid=518213) 'log_prob_max_token_len_per_gpu': 16384, (TaskRunner pid=518213) 'log_prob_micro_batch_size': None, (TaskRunner pid=518213) 'log_prob_micro_batch_size_per_gpu': 4, (TaskRunner pid=518213) 'log_prob_use_dynamic_bsz': False, (TaskRunner pid=518213) 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig', (TaskRunner pid=518213) 'all_ranks': False, (TaskRunner pid=518213) 'discrete': False, (TaskRunner pid=518213) 'ranks': []}, (TaskRunner pid=518213) 'strategy': 'fsdp2', (TaskRunner pid=518213) 'ulysses_sequence_parallel_size': 1, (TaskRunner pid=518213) 'use_torch_compile': True}, (TaskRunner pid=518213) 'rollout': {'agent': {'custom_async_server': {'name': None, (TaskRunner pid=518213) 'path': None}, (TaskRunner pid=518213) 'num_workers': 8}, (TaskRunner pid=518213) 'calculate_log_probs': False, (TaskRunner pid=518213) 'disable_log_stats': True, (TaskRunner pid=518213) 'do_sample': True, (TaskRunner pid=518213) 'dtype': 'bfloat16', (TaskRunner pid=518213) 'enable_chunked_prefill': True, (TaskRunner pid=518213) 'enforce_eager': True, (TaskRunner pid=518213) 'engine_kwargs': {'sglang': {'attention_backend': None}, (TaskRunner pid=518213) 'vllm': {'disable_mm_preprocessor_cache': False, (TaskRunner pid=518213) 'swap_space': None}}, (TaskRunner pid=518213) 'free_cache_engine': True, (TaskRunner pid=518213) 'gpu_memory_utilization': 0.8, (TaskRunner pid=518213) 'ignore_eos': False, (TaskRunner pid=518213) 'layered_summon': False, (TaskRunner pid=518213) 'load_format': 'dummy_dtensor', (TaskRunner pid=518213) 'log_prob_max_token_len_per_gpu': 16384, (TaskRunner pid=518213) 'log_prob_micro_batch_size': None, (TaskRunner pid=518213) 'log_prob_micro_batch_size_per_gpu': 4, (TaskRunner pid=518213) 'log_prob_use_dynamic_bsz': False, (TaskRunner pid=518213) 'max_model_len': None, (TaskRunner pid=518213) 
'max_num_batched_tokens': 32768, (TaskRunner pid=518213) 'max_num_seqs': 256, (TaskRunner pid=518213) 'mode': 'sync', (TaskRunner pid=518213) 'multi_stage_wake_up': False, (TaskRunner pid=518213) 'multi_turn': {'completion_callback': None, (TaskRunner pid=518213) 'enable': False, (TaskRunner pid=518213) 'format': 'hermes', (TaskRunner pid=518213) 'interaction_config_path': None, (TaskRunner pid=518213) 'max_assistant_turns': None, (TaskRunner pid=518213) 'max_parallel_calls': 1, (TaskRunner pid=518213) 'max_tool_response_length': 256, (TaskRunner pid=518213) 'max_user_turns': None, (TaskRunner pid=518213) 'tokenization_sanity_check_mode': 'strict', (TaskRunner pid=518213) 'tool_config_path': None, (TaskRunner pid=518213) 'tool_response_truncate_side': 'middle', (TaskRunner pid=518213) 'use_inference_chat_template': False}, (TaskRunner pid=518213) 'n': 16, (TaskRunner pid=518213) 'name': 'vllm', (TaskRunner pid=518213) 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig', (TaskRunner pid=518213) 'all_ranks': False, (TaskRunner pid=518213) 'discrete': False, (TaskRunner pid=518213) 'ranks': []}, (TaskRunner pid=518213) 'prompt_length': 512, (TaskRunner pid=518213) 'response_length': 32768, (TaskRunner pid=518213) 'temperature': 1.0, (TaskRunner pid=518213) 'tensor_model_parallel_size': 1, (TaskRunner pid=518213) 'top_k': -1, (TaskRunner pid=518213) 'top_p': 1, (TaskRunner pid=518213) 'val_kwargs': {'do_sample': False, (TaskRunner pid=518213) 'n': 1, (TaskRunner pid=518213) 'temperature': 0, (TaskRunner pid=518213) 'top_k': -1, (TaskRunner pid=518213) 'top_p': 1.0}}}, (TaskRunner pid=518213) 'algorithm': {'_target_': 'verl.trainer.config.AlgoConfig', (TaskRunner pid=518213) 'adv_estimator': 'grpo', (TaskRunner pid=518213) 'gamma': 1.0, (TaskRunner pid=518213) 'kl_ctrl': {'_target_': 'verl.trainer.config.KLControlConfig', (TaskRunner pid=518213) 'horizon': 10000, (TaskRunner pid=518213) 'kl_coef': 0.001, (TaskRunner pid=518213) 'target_kl': 0.1, (TaskRunner 
pid=518213) 'type': 'fixed'}, (TaskRunner pid=518213) 'kl_penalty': 'kl', (TaskRunner pid=518213) 'lam': 1.0, (TaskRunner pid=518213) 'norm_adv_by_std_in_grpo': True, (TaskRunner pid=518213) 'pf_ppo': {'_target_': 'verl.trainer.config.PFPPOConfig', (TaskRunner pid=518213) 'reweight_method': 'pow', (TaskRunner pid=518213) 'weight_pow': 2.0}, (TaskRunner pid=518213) 'use_kl_in_reward': False, (TaskRunner pid=518213) 'use_pf_ppo': False}, (TaskRunner pid=518213) 'critic': {'checkpoint': {'load_contents': ['model', 'optimizer', 'extra'], (TaskRunner pid=518213) 'save_contents': ['model', 'optimizer', 'extra']}, (TaskRunner pid=518213) 'cliprange_value': 0.5, (TaskRunner pid=518213) 'forward_max_token_len_per_gpu': 32768, (TaskRunner pid=518213) 'forward_micro_batch_size': None, (TaskRunner pid=518213) 'forward_micro_batch_size_per_gpu': 1, (TaskRunner pid=518213) 'grad_clip': 1.0, (TaskRunner pid=518213) 'loss_agg_mode': 'token-mean', (TaskRunner pid=518213) 'model': {'enable_activation_offload': False, (TaskRunner pid=518213) 'enable_gradient_checkpointing': True, (TaskRunner pid=518213) 'external_lib': None, (TaskRunner pid=518213) 'fsdp_config': {'forward_prefetch': False, (TaskRunner pid=518213) 'fsdp_size': -1, (TaskRunner pid=518213) 'offload_policy': False, (TaskRunner pid=518213) 'optimizer_offload': False, (TaskRunner pid=518213) 'param_offload': False, (TaskRunner pid=518213) 'reshard_after_forward': True, (TaskRunner pid=518213) 'wrap_policy': {'min_num_params': 0}}, (TaskRunner pid=518213) 'lora_alpha': 16, (TaskRunner pid=518213) 'lora_rank': 0, (TaskRunner pid=518213) 'override_config': {}, (TaskRunner pid=518213) 'path': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct', (TaskRunner pid=518213) 'target_modules': 'all-linear', (TaskRunner pid=518213) 'tokenizer_path': 
'/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct', (TaskRunner pid=518213) 'trust_remote_code': True, (TaskRunner pid=518213) 'use_remove_padding': False, (TaskRunner pid=518213) 'use_shm': False}, (TaskRunner pid=518213) 'optim': {'lr': 1e-05, (TaskRunner pid=518213) 'lr_warmup_steps_ratio': 0.0, (TaskRunner pid=518213) 'min_lr_ratio': None, (TaskRunner pid=518213) 'total_training_steps': -1, (TaskRunner pid=518213) 'warmup_style': 'constant', (TaskRunner pid=518213) 'weight_decay': 0.01}, (TaskRunner pid=518213) 'ppo_epochs': 1, (TaskRunner pid=518213) 'ppo_max_token_len_per_gpu': 32768, (TaskRunner pid=518213) 'ppo_micro_batch_size': None, (TaskRunner pid=518213) 'ppo_micro_batch_size_per_gpu': 1, (TaskRunner pid=518213) 'ppo_mini_batch_size': 64, (TaskRunner pid=518213) 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig', (TaskRunner pid=518213) 'all_ranks': False, (TaskRunner pid=518213) 'discrete': False, (TaskRunner pid=518213) 'ranks': []}, (TaskRunner pid=518213) 'rollout_n': 16, (TaskRunner pid=518213) 'shuffle': False, (TaskRunner pid=518213) 'strategy': 'fsdp2', (TaskRunner pid=518213) 'ulysses_sequence_parallel_size': 1, (TaskRunner pid=518213) 'use_dynamic_bsz': False}, (TaskRunner pid=518213) 'custom_reward_function': {'name': 'compute_score_batch', (TaskRunner pid=518213) 'path': '/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py', (TaskRunner pid=518213) 'reward_kwargs': {'complex_format_reward_weight': 0.0, (TaskRunner pid=518213) 'final_answer_in_samples_reward_weight': 0.0, (TaskRunner pid=518213) 'reflection_correctness_reward_weight': 0.0, (TaskRunner pid=518213) 'response_or_sample': 'sample', (TaskRunner pid=518213) 'reward_max': 10.0, (TaskRunner pid=518213) 'reward_min': 0.0, (TaskRunner pid=518213) 'sample_correctness_reward_weight': 0.0, (TaskRunner pid=518213) 
'sample_count_penalty_weight': 0.0, (TaskRunner pid=518213) 'similarity_penalty_weight': 0.0, (TaskRunner pid=518213) 'simple_format_reward_weight': 0.0, (TaskRunner pid=518213) 'transition_penalty_weight': 0.0, (TaskRunner pid=518213) 'verdict_correctness_reward_weight': 0.0}}, (TaskRunner pid=518213) 'data': {'custom_cls': {'name': None, 'path': None}, (TaskRunner pid=518213) 'dataloader_num_workers': 8, (TaskRunner pid=518213) 'filter_overlong_prompts': False, (TaskRunner pid=518213) 'filter_overlong_prompts_workers': 1, (TaskRunner pid=518213) 'image_key': 'images', (TaskRunner pid=518213) 'max_prompt_length': 512, (TaskRunner pid=518213) 'max_response_length': 32768, (TaskRunner pid=518213) 'prompt_key': 'prompt', (TaskRunner pid=518213) 'return_full_prompt': False, (TaskRunner pid=518213) 'return_multi_modal_inputs': True, (TaskRunner pid=518213) 'return_raw_chat': False, (TaskRunner pid=518213) 'return_raw_input_ids': False, (TaskRunner pid=518213) 'reward_fn_key': 'data_source', (TaskRunner pid=518213) 'sampler': {'class_name': None, 'class_path': None}, (TaskRunner pid=518213) 'shuffle': True, (TaskRunner pid=518213) 'tokenizer': None, (TaskRunner pid=518213) 'train_batch_size': 512, (TaskRunner pid=518213) 'train_files': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/train.parquet', (TaskRunner pid=518213) 'truncation': 'error', (TaskRunner pid=518213) 'trust_remote_code': False, (TaskRunner pid=518213) 'use_shm': False, (TaskRunner pid=518213) 'val_batch_size': None, (TaskRunner pid=518213) 'val_files': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/test.parquet', (TaskRunner pid=518213) 'validation_shuffle': False, (TaskRunner pid=518213) 'video_key': 'videos'}, (TaskRunner pid=518213) 'ray_init': {'num_cpus': None, 'timeline_json_file': None}, (TaskRunner pid=518213) 'reward_model': {'enable': False, (TaskRunner pid=518213) 
'forward_max_token_len_per_gpu': 32768, (TaskRunner pid=518213) 'launch_reward_fn_async': True, (TaskRunner pid=518213) 'max_length': None, (TaskRunner pid=518213) 'micro_batch_size': None, (TaskRunner pid=518213) 'micro_batch_size_per_gpu': None, (TaskRunner pid=518213) 'model': {'external_lib': None, (TaskRunner pid=518213) 'fsdp_config': {'forward_prefetch': True, (TaskRunner pid=518213) 'fsdp_size': -1, (TaskRunner pid=518213) 'param_offload': False, (TaskRunner pid=518213) 'reshard_after_forward': True, (TaskRunner pid=518213) 'wrap_policy': {'min_num_params': 0}}, (TaskRunner pid=518213) 'input_tokenizer': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct', (TaskRunner pid=518213) 'path': '~/models/FsfairX-LLaMA3-RM-v0.1', (TaskRunner pid=518213) 'trust_remote_code': False, (TaskRunner pid=518213) 'use_fused_kernels': False, (TaskRunner pid=518213) 'use_remove_padding': False, (TaskRunner pid=518213) 'use_shm': False}, (TaskRunner pid=518213) 'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig', (TaskRunner pid=518213) 'all_ranks': False, (TaskRunner pid=518213) 'discrete': False, (TaskRunner pid=518213) 'ranks': []}, (TaskRunner pid=518213) 'reward_manager': 'batch', (TaskRunner pid=518213) 'sandbox_fusion': {'max_concurrent': 64, (TaskRunner pid=518213) 'memory_limit_mb': 1024, (TaskRunner pid=518213) 'url': None}, (TaskRunner pid=518213) 'strategy': 'fsdp2', (TaskRunner pid=518213) 'ulysses_sequence_parallel_size': 1, (TaskRunner pid=518213) 'use_dynamic_bsz': False}, (TaskRunner pid=518213) 'trainer': {'balance_batch': True, (TaskRunner pid=518213) 'controller_nsight_options': {'cuda-graph-trace': 'graph', (TaskRunner pid=518213) 'cuda-memory-usage': 'true', (TaskRunner pid=518213) 'trace': 'cuda,nvtx,cublas,ucx'}, (TaskRunner pid=518213) 'critic_warmup': 0, (TaskRunner pid=518213) 'default_hdfs_dir': None, (TaskRunner pid=518213) 
'default_local_dir': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints', (TaskRunner pid=518213) 'del_local_ckpt_after_load': False, (TaskRunner pid=518213) 'device': 'cuda', (TaskRunner pid=518213) 'esi_redundant_time': 0, (TaskRunner pid=518213) 'experiment_name': 'rl_rlonly__32k_rl', (TaskRunner pid=518213) 'log_val_generations': 0, (TaskRunner pid=518213) 'logger': ['console', 'wandb'], (TaskRunner pid=518213) 'max_actor_ckpt_to_keep': None, (TaskRunner pid=518213) 'max_critic_ckpt_to_keep': None, (TaskRunner pid=518213) 'n_gpus_per_node': 1, (TaskRunner pid=518213) 'nnodes': 16, (TaskRunner pid=518213) 'profile_steps': None, (TaskRunner pid=518213) 'project_name': 'jackrl', (TaskRunner pid=518213) 'ray_wait_register_center_timeout': 300, (TaskRunner pid=518213) 'resume_from_path': None, (TaskRunner pid=518213) 'resume_mode': 'auto', (TaskRunner pid=518213) 'rollout_data_dir': None, (TaskRunner pid=518213) 'save_freq': 20, (TaskRunner pid=518213) 'test_freq': 10, (TaskRunner pid=518213) 'total_epochs': 50, (TaskRunner pid=518213) 'total_training_steps': None, (TaskRunner pid=518213) 'val_before_train': True, (TaskRunner pid=518213) 'val_only': False, (TaskRunner pid=518213) 'validation_data_dir': None, (TaskRunner pid=518213) 'worker_nsight_options': {'capture-range': 'cudaProfilerApi', (TaskRunner pid=518213) 'capture-range-end': None, (TaskRunner pid=518213) 'cuda-graph-trace': 'graph', (TaskRunner pid=518213) 'cuda-memory-usage': 'true', (TaskRunner pid=518213) 'kill': 'none', (TaskRunner pid=518213) 'trace': 'cuda,nvtx,cublas,ucx'}}} (TaskRunner pid=518213) Registered source: longmult (TaskRunner pid=518213) Registered source: countdown (TaskRunner pid=518213) Registered source: gsm8k (TaskRunner pid=518213) Registered source: arc (TaskRunner pid=518213) Registered source: arc_challenge (TaskRunner pid=518213) Registered source: arc_easy (TaskRunner pid=518213) Registered source: piqa 
(TaskRunner pid=518213) Registered source: mmlu (TaskRunner pid=518213) Registered source: mmlu_pro (TaskRunner pid=518213) Registered source: csqa (TaskRunner pid=518213) Registered source: social_iqa (TaskRunner pid=518213) Registered source: strategy_qa (TaskRunner pid=518213) Registered source: winogrande (TaskRunner pid=518213) Registered source: bbh (TaskRunner pid=518213) Registered source: letter_countdown (TaskRunner pid=518213) Registered source: acronym (TaskRunner pid=518213) using customized reward function 'compute_score_batch' from '/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py' (TaskRunner pid=518213) using customized reward function 'compute_score_batch' from '/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py' (TaskRunner pid=518213) Using dataset class: RLHFDataset (TaskRunner pid=518213) dataset len: 1000 (TaskRunner pid=518213) Using dataset class: RLHFDataset (TaskRunner pid=518213) dataset len: 4450 (TaskRunner pid=518213) Using critic: False (TaskRunner pid=518213) [validate_config] All configuration checks passed successfully! 
(TaskRunner pid=518213) Size of train dataloader: 1, Size of val dataloader: 1 (TaskRunner pid=518213) Total training steps: 50 (TaskRunner pid=518213) {'08e5fdf8751708bd93e9e4513cb9a43f3f4c60b129b707630ae7e25d': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 162669962854.0, (TaskRunner pid=518213) 'node:129.114.18.49': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) '4282e36180b0406b9b46da9e4273695796b44d04d55780c81c652238': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 162420598374.0, (TaskRunner pid=518213) 'node:129.114.18.55': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) '5579ca55278ca018cd5c7d2a726332724305ec3322426f167bb5a01c': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 160389441126.0, (TaskRunner pid=518213) 'node:129.114.18.56': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) '566710352cf14f84565026ef3be7beda04e46b8210410a6a6476e445': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 160356279910.0, (TaskRunner pid=518213) 'node:129.114.17.86': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) '59bc3ca04c5779b63614dd76eb0988c220b87b9eca57caa642d5b6be': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 162651809382.0, (TaskRunner pid=518213) 'node:129.114.18.51': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) '6432975d2248111278763f679bb23ef38f9ddfd597df4dd17b839265': {'CPU': 32.0, 
(TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 162680448614.0, (TaskRunner pid=518213) 'node:129.114.18.52': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) '8055988e74bafae32d9e41ab6c9f26ee3da9cfc97af2d74cd4555ac2': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 162660853350.0, (TaskRunner pid=518213) 'node:129.114.18.50': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) '867228e604b7b771f78eb353e7baec9ad7660f5bc29c0fba1dd896d5': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 162413454950.0, (TaskRunner pid=518213) 'node:129.114.18.53': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) '9b7b4c6738d97aed80c06367effa4dc66065d3bb9ad6738f2730bc4f': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 162441373286.0, (TaskRunner pid=518213) 'node:129.114.18.54': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) 'c24668204fa163fde01dd020c0b26a1ff5babb4e40513621a668c4f7': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 160324494950.0, (TaskRunner pid=518213) 'node:129.114.17.87': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) 'ca58e7976672f4a913c7ae5385c596fa959a9f6c75247bb3f1778743': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 160717645414.0, (TaskRunner pid=518213) 'node:129.114.17.91': 1.0, (TaskRunner pid=518213) 
'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) 'd3678c66c09cf82b346e3e0d9073b75c4b109b8ce909c351a066ab59': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 161035039539.0, (TaskRunner pid=518213) 'node:129.114.17.85': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056095948.0}, (TaskRunner pid=518213) 'e26370653e277bddd0ddb452e4a0d1148775436e64591c37b2a674d7': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 160369518182.0, (TaskRunner pid=518213) 'node:129.114.17.88': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) 'e3c00b9ab3426e7c1120651f444f25ef01fc5930340a803fc78a5fc2': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 160893085286.0, (TaskRunner pid=518213) 'node:129.114.17.90': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) 'e5cb843c0347c49b588e423545a2c40c8ddab0f295dc08e7c9e6c7eb': {'CPU': 32.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 160851600998.0, (TaskRunner pid=518213) 'node:129.114.17.89': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60056033689.0}, (TaskRunner pid=518213) 'f61acc6bdc79f45aa384b3ce94b3de27d677eead09c8be00ae2a0381': {'CPU': 31.0, (TaskRunner pid=518213) 'GPU': 1.0, (TaskRunner pid=518213) 'accelerator_type:GH200': 1.0, (TaskRunner pid=518213) 'memory': 158839839129.0, (TaskRunner pid=518213) 'node:129.114.17.42': 1.0, (TaskRunner pid=518213) 'node:__internal_head__': 1.0, (TaskRunner pid=518213) 'object_store_memory': 60055971430.0}} (TaskRunner pid=518213) ('Resource pool to cls: {<verl.single_controller.ray.base.RayResourcePool ' (TaskRunner pid=518213) "object at 
0x400f27f66200>: {'actor_rollout': " (TaskRunner pid=518213) '<verl.single_controller.ray.base.RayClassWithInitArgs object at ' (TaskRunner pid=518213) '0x400f27f66230>}}') (TaskRunner pid=518213) colocated worker base class <class 'verl.single_controller.base.worker.Worker'> (WorkerDict pid=518986) Model config after override: Qwen2Config { (WorkerDict pid=518986) "_name_or_path": "/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct", (WorkerDict pid=518986) "architectures": [ (WorkerDict pid=518986) "Qwen2ForCausalLM" (WorkerDict pid=518986) ], (WorkerDict pid=518986) "attention_dropout": 0.0, (WorkerDict pid=518986) "eos_token_id": 151645, (WorkerDict pid=518986) "hidden_act": "silu", (WorkerDict pid=518986) "hidden_size": 1536, (WorkerDict pid=518986) "initializer_range": 0.02, (WorkerDict pid=518986) "intermediate_size": 8960, (WorkerDict pid=518986) "max_position_embeddings": 32768, (WorkerDict pid=518986) "max_window_layers": 21, (WorkerDict pid=518986) "model_type": "qwen2", (WorkerDict pid=518986) "num_attention_heads": 12, (WorkerDict pid=518986) "num_hidden_layers": 28, (WorkerDict pid=518986) "num_key_value_heads": 2, (WorkerDict pid=518986) "pad_token_id": 151643, (WorkerDict pid=518986) "rms_norm_eps": 1e-06, (WorkerDict pid=518986) "rope_scaling": null, (WorkerDict pid=518986) "rope_theta": 1000000.0, (WorkerDict pid=518986) "sliding_window": 32768, (WorkerDict pid=518986) "tie_word_embeddings": true, (WorkerDict pid=518986) "torch_dtype": "bfloat16", (WorkerDict pid=518986) "transformers_version": "4.49.0", (WorkerDict pid=518986) "use_cache": true, (WorkerDict pid=518986) "use_sliding_window": false, (WorkerDict pid=518986) "vocab_size": 151936 (WorkerDict pid=518986) } (WorkerDict pid=518986) (WorkerDict pid=1952306, ip=129.114.17.87) Monkey patch state_dict in AutoModelForCausalLMWithValueHead. 
(WorkerDict pid=1952306, ip=129.114.17.87) Monkey patch _flash_attention_forward in transformers.integrations.flash_attention (WorkerDict pid=1952306, ip=129.114.17.87) Skipping monkey patch for Qwen2ForCausalLM as use_fused_kernels is False or fused_kernels_backend is torch (WorkerDict pid=518986) Qwen2ForCausalLM contains 1.54B parameters (WorkerDict pid=518986) wrap_policy: functools.partial(<function _or_policy at 0x400ee5b25bd0>, policies=[functools.partial(<function transformer_auto_wrap_policy at 0x400ee5b25ab0>, transformer_layer_cls={<class 'transformers.models.qwen2.modeling_qwen2.Qwen2DecoderLayer'>})]) (WorkerDict pid=518986) NCCL version 2.21.5+cuda12.6 (WorkerDict pid=518986) Total steps: 50, num_warmup_steps: 0 (WorkerDict pid=518986) Actor use_remove_padding=True (WorkerDict pid=518986) Actor use_fused_kernels=False Error executing job with overrides: ['trainer.total_epochs=50', 'actor_rollout_ref.actor.optim.lr=1e-06', 'trainer.save_freq=20', 'trainer.test_freq=10', 'trainer.val_before_train=True', 'algorithm.adv_estimator=grpo', 'actor_rollout_ref.rollout.n=16', 'data.train_batch_size=512', 'actor_rollout_ref.actor.ppo_mini_batch_size=64', 'actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4', 'actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4', 'actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=4', 'custom_reward_function.reward_kwargs.response_or_sample=sample', 'custom_reward_function.reward_kwargs.simple_format_reward_weight=0.0', 'custom_reward_function.reward_kwargs.complex_format_reward_weight=0.0', 'custom_reward_function.reward_kwargs.sample_correctness_reward_weight=0.0', 'custom_reward_function.reward_kwargs.verdict_correctness_reward_weight=0.0', 'custom_reward_function.reward_kwargs.reflection_correctness_reward_weight=0.0', 'custom_reward_function.reward_kwargs.final_answer_in_samples_reward_weight=0.0', 'custom_reward_function.reward_kwargs.transition_penalty_weight=0.0', 
'custom_reward_function.reward_kwargs.similarity_penalty_weight=0.0', 'custom_reward_function.reward_kwargs.sample_count_penalty_weight=0.0', 'custom_reward_function.reward_kwargs.reward_min=0.0', 'custom_reward_function.reward_kwargs.reward_max=10.0', 'reward_model.reward_manager=batch', 'custom_reward_function.name=compute_score_batch', 'reward_model.launch_reward_fn_async=True', 'actor_rollout_ref.model.enable_gradient_checkpointing=True', 'actor_rollout_ref.model.enable_activation_offload=False', 'actor_rollout_ref.rollout.gpu_memory_utilization=0.8', 'actor_rollout_ref.model.use_remove_padding=True', 'actor_rollout_ref.actor.strategy=fsdp2', 'actor_rollout_ref.actor.fsdp_config.forward_prefetch=True', 'actor_rollout_ref.ref.fsdp_config.forward_prefetch=True', 'reward_model.model.fsdp_config.forward_prefetch=True', 'actor_rollout_ref.rollout.max_num_batched_tokens=32768', 'actor_rollout_ref.rollout.max_num_seqs=256', 'actor_rollout_ref.rollout.tensor_model_parallel_size=1', 'data.max_response_length=32768', 'data.max_prompt_length=512', 'actor_rollout_ref.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct', 'actor_rollout_ref.rollout.dtype=bfloat16', 'critic.optim.lr=1e-05', 'critic.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct', 'critic.ppo_micro_batch_size_per_gpu=1', 'algorithm.kl_ctrl.kl_coef=0.001', 'trainer.logger=[console,wandb]', 'trainer.project_name=jackrl', 'trainer.experiment_name=rl_rlonly__32k_rl', 'data.train_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/train.parquet', 'data.val_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/test.parquet', 
'custom_reward_function.path=/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py', 'trainer.default_local_dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints', 'actor_rollout_ref.model.trust_remote_code=True', 'critic.model.trust_remote_code=True', 'trainer.nnodes=16', 'trainer.n_gpus_per_node=1'] Traceback (most recent call last): File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/trainer/main_ppo.py", line 39, in main run_ppo(config) File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/trainer/main_ppo.py", line 69, in run_ppo ray.get(runner.run.remote(config)) File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper return fn(*args, **kwargs) File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper return func(*args, **kwargs) File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/ray/_private/worker.py", line 2822, in get values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout) File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/ray/_private/worker.py", line 930, in get_objects raise value.as_instanceof_cause() ray.exceptions.RayTaskError(AssertionError): ray::TaskRunner.run() (pid=518213, ip=129.114.17.42, actor_id=77b69e213e3405c0854e860c02000000, repr=<main_ppo.TaskRunner object at 0x400f05c10a30>) File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/trainer/main_ppo.py", line 232, in run trainer.init_workers() File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/trainer/ppo/ray_trainer.py", line 931, in init_workers self.actor_rollout_wg.init_model() File 
"/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/single_controller/ray/base.py", line 51, in __call__ output = ray.get(output) ray.exceptions.RayTaskError(AssertionError): ray::WorkerDict.actor_rollout_init_model() (pid=518986, ip=129.114.17.42, actor_id=372bd910031fdf401cfb3d8c02000000, repr=<verl.single_controller.ray.base.WorkerDict object at 0x401133014070>) File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/single_controller/ray/base.py", line 708, in func return getattr(self.worker_dict[key], name)(*args, **kwargs) File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/single_controller/base/decorator.py", line 549, in inner return func(*args, **kwargs) File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/workers/fsdp_workers.py", line 630, in init_model self.rollout, self.rollout_sharding_manager = self._build_rollout( File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/workers/fsdp_workers.py", line 499, in _build_rollout rollout = vllm_rollout_cls( File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py", line 121, in __init__ assert max_position_embeddings >= config.prompt_length + config.response_length, ( AssertionError: model context length should be greater than total sequence length Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace. (WorkerDict pid=460125, ip=129.114.17.90) Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2ForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. 
Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)` [repeated 15x across cluster] (WorkerDict pid=460125, ip=129.114.17.90) You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`. [repeated 15x across cluster] (WorkerDict pid=1033553, ip=129.114.17.89) Monkey patch state_dict in AutoModelForCausalLMWithValueHead.  [repeated 15x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.) (WorkerDict pid=1033553, ip=129.114.17.89) Monkey patch _flash_attention_forward in transformers.integrations.flash_attention [repeated 15x across cluster] (WorkerDict pid=1033553, ip=129.114.17.89) Skipping monkey patch for Qwen2ForCausalLM as use_fused_kernels is False or fused_kernels_backend is torch [repeated 15x across cluster] [INFO] Extracting model from VeRL checkpoint at /scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints [ERROR] No global_step directories found EXTRACT OUT: False [ERROR] Stage error: RuntimeError: Model extraction failed
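The assertion that killed this first run is simple arithmetic: the rollout init in `vllm_rollout_spmd.py` requires the model's `max_position_embeddings` (32768 for this Qwen2.5-1.5B-Instruct checkpoint, per the config dump above) to cover `data.max_prompt_length + data.max_response_length`, but 512 + 32768 = 33280 exceeds it. The retry below passes `data.max_response_length=28000`, which fits. A minimal sketch of the check — the config names and the assertion come from the log; the helper function itself is hypothetical, not verl's API:

```python
def rollout_lengths_fit(max_position_embeddings: int,
                        prompt_length: int,
                        response_length: int) -> bool:
    # Mirrors the assert logged from vllm_rollout_spmd.py __init__:
    # "model context length should be greater than total sequence length"
    return max_position_embeddings >= prompt_length + response_length

# First run: 512 + 32768 = 33280 > 32768, so the assert fires.
print(rollout_lengths_fit(32768, 512, 32768))  # False

# Retry with data.max_response_length=28000: 512 + 28000 = 28512 fits.
print(rollout_lengths_fit(32768, 512, 28000))  # True
```

Note that lowering `data.max_response_length` is the config-side fix visible in the second run's command line; raising the model context (e.g. via rope scaling) would be the alternative, but nothing in these logs attempts that.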
/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`. For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder. warnings.warn( Fetching 10 files: 0%| | 0/10 [00:00<?, ?it/s] Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 556.60it/s] Fetching 10 files: 0%| | 0/10 [00:00<?, ?it/s] Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 567.58it/s] 2025-10-07 12:32:34,806 INFO worker.py:1694 -- Connecting to existing Ray cluster at address: 129.114.17.42:6379... 2025-10-07 12:32:34,812 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265 
rl_rlonly__32k
224.501267
true
2025-10-07T12:38:08.446583
2025-10-07T12:43:40.312606
verl_rl
1
INFO
Complete log capture for stage: verl_rl
[INFO] Starting stage: VeRL RL training - rl [INFO] Data preparation succeeded [INFO] Setting up ray cluster [DEBUG] SLURM cluster info: 16 nodes, 1 GPUs/node [INFO] Node list: c609-051,c610-[102,111-112,121-122,131-132],c622-[082,091-092,101-102,111-112,121] [DEBUG] Head node: c609-051 [DEBUG] Ray head address: 129.114.17.42:6379 [INFO] Starting Ray head on c609-051... [INFO] Waiting for head node to initialize... [DEBUG] Starting 15 worker nodes... [DEBUG] Starting worker 1: c610-102 [DEBUG] Starting worker 2: c610-111 [DEBUG] Starting worker 3: c610-112 [DEBUG] Starting worker 4: c610-121 [DEBUG] Starting worker 5: c610-122 [DEBUG] Starting worker 6: c610-131 [DEBUG] Starting worker 7: c610-132 [DEBUG] Starting worker 8: c622-082 [DEBUG] Starting worker 9: c622-091 [DEBUG] Starting worker 10: c622-092 [DEBUG] Starting worker 11: c622-101 [DEBUG] Starting worker 12: c622-102 [DEBUG] Starting worker 13: c622-111 [DEBUG] Starting worker 14: c622-112 [DEBUG] Starting worker 15: c622-121 [INFO] Waiting for Ray cluster to stabilize... [INFO] Connecting to Ray cluster at 129.114.17.42:6379... 
[INFO] Ray cluster connected successfully (stats from the connection): [INFO] Total GPUs: 16.0 [INFO] Available GPUs: 16.0 [INFO] Total CPUs: 512.0 [INFO] SLURM Ray cluster setup completed [INFO] Starting checkpoint monitoring for intermediate uploads...[INFO] Intermediate checkpoint upload enabled [DEBUG] Found 0 global_step directories [DEBUG] Running verl command: python -m verl.trainer.main_ppo trainer.total_epochs=50 actor_rollout_ref.actor.optim.lr=1e-06 trainer.save_freq=20 trainer.test_freq=10 trainer.val_before_train=True algorithm.adv_estimator=grpo actor_rollout_ref.rollout.n=16 data.train_batch_size=512 actor_rollout_ref.actor.ppo_mini_batch_size=64 actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4 actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4 actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=4 custom_reward_function.reward_kwargs.response_or_sample=sample custom_reward_function.reward_kwargs.simple_format_reward_weight=0.0 custom_reward_function.reward_kwargs.complex_format_reward_weight=0.0 custom_reward_function.reward_kwargs.sample_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.verdict_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.reflection_correctness_reward_weight=0.0 custom_reward_function.reward_kwargs.final_answer_in_samples_reward_weight=0.0 custom_reward_function.reward_kwargs.transition_penalty_weight=0.0 custom_reward_function.reward_kwargs.similarity_penalty_weight=0.0 custom_reward_function.reward_kwargs.sample_count_penalty_weight=0.0 custom_reward_function.reward_kwargs.reward_min=0.0 custom_reward_function.reward_kwargs.reward_max=10.0 reward_model.reward_manager=batch custom_reward_function.name=compute_score_batch reward_model.launch_reward_fn_async=True actor_rollout_ref.model.enable_gradient_checkpointing=True actor_rollout_ref.model.enable_activation_offload=False actor_rollout_ref.rollout.gpu_memory_utilization=0.8 actor_rollout_ref.model.use_remove_padding=True 
actor_rollout_ref.actor.strategy=fsdp2 actor_rollout_ref.actor.fsdp_config.forward_prefetch=True actor_rollout_ref.ref.fsdp_config.forward_prefetch=True reward_model.model.fsdp_config.forward_prefetch=True actor_rollout_ref.rollout.max_num_batched_tokens=32768 actor_rollout_ref.rollout.max_num_seqs=256 hydra.run.dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/hydra hydra.output_subdir=null actor_rollout_ref.rollout.tensor_model_parallel_size=1 data.max_response_length=28000 data.max_prompt_length=512 actor_rollout_ref.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct actor_rollout_ref.rollout.dtype=bfloat16 critic.optim.lr=1e-05 critic.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct critic.ppo_micro_batch_size_per_gpu=1 algorithm.kl_ctrl.kl_coef=0.001 trainer.logger=[console,wandb] trainer.project_name=jackrl trainer.experiment_name=rl_rlonly__32k_rl data.train_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/train.parquet data.val_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/test.parquet custom_reward_function.path=/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py trainer.default_local_dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints actor_rollout_ref.model.trust_remote_code=True critic.model.trust_remote_code=True trainer.nnodes=16 trainer.n_gpus_per_node=1 2025-10-07 12:40:36,104 INFO worker.py:1554 -- Using address 129.114.17.42:6379 set in the environment variable RAY_ADDRESS 2025-10-07 12:40:36,104 INFO worker.py:1694 -- 
Connecting to existing Ray cluster at address: 129.114.17.42:6379... 2025-10-07 12:40:36,110 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265  (TaskRunner pid=527581) Generating train split: 0 examples [00:00, ? examples/s] (TaskRunner pid=527581) Generating train split: 1000 examples [00:00, 5695.81 examples/s] (TaskRunner pid=527581) Generating train split: 1000 examples [00:00, 3203.63 examples/s] (TaskRunner pid=527581) Generating train split: 0 examples [00:00, ? examples/s] (TaskRunner pid=527581) Generating train split: 4450 examples [00:00, 54866.99 examples/s] (TaskRunner pid=527581) DeprecationWarning: `ray.state.available_resources_per_node` is a private attribute and access will be removed in a future Ray version. (TaskRunner pid=527581) WARNING:2025-10-07 12:40:48,066:Waiting for register center actor AldW30_register_center to be ready. Elapsed time: 0 seconds out of 300 seconds. [DEBUG] Found 0 global_step directories (WorkerDict pid=1515761, ip=129.114.18.54) Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2ForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)` (WorkerDict pid=1515761, ip=129.114.18.54) You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`. 
[DEBUG] Found 0 global_step directories [DEBUG] Found 0 global_step directories [DEBUG] Found 0 global_step directories [DEBUG] Found 0 global_step directories [DEBUG] Found 0 global_step directories (WorkerDict pid=461409, ip=129.114.17.90) [rank6]:[E1007 12:43:33.100889892 ProcessGroupGloo.cpp:145] Gloo connectFullMesh failed with [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:144] no error (WorkerDict pid=528348) Flash Attention 2.0 only supports torch.float16 and torch.bfloat16 dtypes, but the current dype in Qwen2ForCausalLM is torch.float32. You should run training or inference using Automatic Mixed-Precision via the `with torch.autocast(device_type='torch_device'):` decorator, or load the model with the `torch_dtype` argument. Example: `model = AutoModel.from_pretrained("openai/whisper-tiny", attn_implementation="flash_attention_2", torch_dtype=torch.float16)` [repeated 15x across cluster] (WorkerDict pid=528348) You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`. 
[repeated 15x across cluster]
(TaskRunner pid=527581) Unhandled error (suppress with 'RAY_IGNORE_UNHANDLED_ERRORS=1'): ray::WorkerDict.actor_rollout_init_model() (pid=966336, ip=129.114.17.91, actor_id=7c07b357ea021d4412717dac02000000, repr=<verl.single_controller.ray.base.WorkerDict object at 0x4011441248b0>)
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/single_controller/ray/base.py", line 708, in func
    return getattr(self.worker_dict[key], name)(*args, **kwargs)
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/single_controller/base/decorator.py", line 549, in inner
    return func(*args, **kwargs)
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/workers/fsdp_workers.py", line 630, in init_model
    self.rollout, self.rollout_sharding_manager = self._build_rollout(
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/workers/fsdp_workers.py", line 499, in _build_rollout
    rollout = vllm_rollout_cls(
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py", line 152, in __init__
    self.inference_engine = LLM(
  File "/scratch/10416/zaynesprague/projects2/vllm/vllm/utils.py", line 1161, in inner
    return fn(*args, **kwargs)
  File "/scratch/10416/zaynesprague/projects2/vllm/vllm/entrypoints/llm.py", line 247, in __init__
    self.llm_engine = LLMEngine.from_engine_args(
  File "/scratch/10416/zaynesprague/projects2/vllm/vllm/engine/llm_engine.py", line 510, in from_engine_args
    return engine_cls.from_vllm_config(
  File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/engine/llm_engine.py", line 112, in from_vllm_config
    return cls(vllm_config=vllm_config,
  File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/engine/llm_engine.py", line 92, in __init__
    self.engine_core = EngineCoreClient.make_client(
  File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/engine/core_client.py", line 75, in make_client
    return InprocClient(vllm_config, executor_class, log_stats)
  File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/engine/core_client.py", line 198, in __init__
    self.engine_core = EngineCore(*args, **kwargs)
  File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/engine/core.py", line 71, in __init__
    self._initialize_kv_caches(vllm_config)
  File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/engine/core.py", line 129, in _initialize_kv_caches
    available_gpu_memory = self.model_executor.determine_available_memory()
  File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/executor/abstract.py", line 111, in determine_available_memory
    dist.all_reduce(memory_tensor, group=cpu_group, op=dist.ReduceOp.MIN)
  File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 81, in wrapper
    return func(*args, **kwargs)
  File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 2811, in all_reduce
    work.wait()
RuntimeError: [/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:525] Read error [129.114.17.90]:40841: Connection reset by peer
(TaskRunner pid=527581) TaskRunner hostname: c609-051.vista.tacc.utexas.edu, PID: 527581
(TaskRunner pid=527581)
{'actor_rollout_ref':
   {'actor':
      {'checkpoint': {'load_contents': ['model', 'optimizer', 'extra'], 'save_contents': ['model', 'optimizer', 'extra']},
       'clip_ratio': 0.2, 'clip_ratio_c': 3.0, 'clip_ratio_high': 0.2, 'clip_ratio_low': 0.2,
       'entropy_checkpointing': False, 'entropy_coeff': 0, 'entropy_from_logits_with_chunking': False,
       'fsdp_config': {'forward_prefetch': True, 'fsdp_size': -1, 'offload_policy': False, 'optimizer_offload': False,
                       'param_offload': False, 'reshard_after_forward': True, 'wrap_policy': {'min_num_params': 0}},
       'grad_clip': 1.0, 'kl_loss_coef': 0.001, 'kl_loss_type': 'low_var_kl', 'loss_agg_mode': 'token-mean',
       'optim': {'lr': 1e-06, 'lr_warmup_steps': -1, 'lr_warmup_steps_ratio': 0.0, 'min_lr_ratio': 0.0,
                 'num_cycles': 0.5, 'total_training_steps': -1, 'warmup_style': 'constant', 'weight_decay': 0.01},
       'policy_loss': {'clip_cov_lb': 1.0, 'clip_cov_ratio': 0.0002, 'clip_cov_ub': 5.0, 'kl_cov_ratio': 0.0002,
                       'loss_mode': 'vanilla', 'ppo_kl_coef': 0.1},
       'ppo_epochs': 1, 'ppo_max_token_len_per_gpu': 16384, 'ppo_micro_batch_size': None,
       'ppo_micro_batch_size_per_gpu': 4, 'ppo_mini_batch_size': 64,
       'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig', 'all_ranks': False, 'discrete': False, 'ranks': []},
       'shuffle': False, 'strategy': 'fsdp2', 'ulysses_sequence_parallel_size': 1,
       'use_dynamic_bsz': False, 'use_kl_loss': False, 'use_torch_compile': True},
    'hybrid_engine': True,
    'model':
      {'custom_chat_template': None, 'enable_activation_offload': False, 'enable_gradient_checkpointing': True,
       'exclude_modules': None, 'external_lib': None, 'fused_kernel_options': {'impl_backend': 'torch'},
       'lora_alpha': 16, 'lora_rank': 0, 'override_config': {},
       'path': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct',
       'target_modules': 'all-linear', 'trust_remote_code': True, 'use_fused_kernels': False,
       'use_liger': False, 'use_remove_padding': True, 'use_shm': False},
    'ref':
      {'entropy_checkpointing': False, 'entropy_from_logits_with_chunking': False,
       'fsdp_config': {'forward_prefetch': True, 'param_offload': False, 'reshard_after_forward': True,
                       'wrap_policy': {'min_num_params': 0}},
       'log_prob_max_token_len_per_gpu': 16384, 'log_prob_micro_batch_size': None,
       'log_prob_micro_batch_size_per_gpu': 4, 'log_prob_use_dynamic_bsz': False,
       'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig', 'all_ranks': False, 'discrete': False, 'ranks': []},
       'strategy': 'fsdp2', 'ulysses_sequence_parallel_size': 1, 'use_torch_compile': True},
    'rollout':
      {'agent': {'custom_async_server': {'name': None, 'path': None}, 'num_workers': 8},
       'calculate_log_probs': False, 'disable_log_stats': True, 'do_sample': True, 'dtype': 'bfloat16',
       'enable_chunked_prefill': True, 'enforce_eager': True,
       'engine_kwargs': {'sglang': {'attention_backend': None},
                         'vllm': {'disable_mm_preprocessor_cache': False, 'swap_space': None}},
       'free_cache_engine': True, 'gpu_memory_utilization': 0.8, 'ignore_eos': False, 'layered_summon': False,
       'load_format': 'dummy_dtensor', 'log_prob_max_token_len_per_gpu': 16384, 'log_prob_micro_batch_size': None,
       'log_prob_micro_batch_size_per_gpu': 4, 'log_prob_use_dynamic_bsz': False,
       'max_model_len': None, 'max_num_batched_tokens': 32768, 'max_num_seqs': 256, 'mode': 'sync',
       'multi_stage_wake_up': False,
       'multi_turn': {'completion_callback': None, 'enable': False, 'format': 'hermes',
                      'interaction_config_path': None, 'max_assistant_turns': None, 'max_parallel_calls': 1,
                      'max_tool_response_length': 256, 'max_user_turns': None,
                      'tokenization_sanity_check_mode': 'strict', 'tool_config_path': None,
                      'tool_response_truncate_side': 'middle', 'use_inference_chat_template': False},
       'n': 16, 'name': 'vllm',
       'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig', 'all_ranks': False, 'discrete': False, 'ranks': []},
       'prompt_length': 512, 'response_length': 28000, 'temperature': 1.0, 'tensor_model_parallel_size': 1,
       'top_k': -1, 'top_p': 1,
       'val_kwargs': {'do_sample': False, 'n': 1, 'temperature': 0, 'top_k': -1, 'top_p': 1.0}}},
 'algorithm':
   {'_target_': 'verl.trainer.config.AlgoConfig', 'adv_estimator': 'grpo', 'gamma': 1.0,
    'kl_ctrl': {'_target_': 'verl.trainer.config.KLControlConfig', 'horizon': 10000, 'kl_coef': 0.001,
                'target_kl': 0.1, 'type': 'fixed'},
    'kl_penalty': 'kl', 'lam': 1.0, 'norm_adv_by_std_in_grpo': True,
    'pf_ppo': {'_target_': 'verl.trainer.config.PFPPOConfig', 'reweight_method': 'pow', 'weight_pow': 2.0},
    'use_kl_in_reward': False, 'use_pf_ppo': False},
 'critic':
   {'checkpoint': {'load_contents': ['model', 'optimizer', 'extra'], 'save_contents': ['model', 'optimizer', 'extra']},
    'cliprange_value': 0.5, 'forward_max_token_len_per_gpu': 32768, 'forward_micro_batch_size': None,
    'forward_micro_batch_size_per_gpu': 1, 'grad_clip': 1.0, 'loss_agg_mode': 'token-mean',
    'model': {'enable_activation_offload': False, 'enable_gradient_checkpointing': True, 'external_lib': None,
              'fsdp_config': {'forward_prefetch': False, 'fsdp_size': -1, 'offload_policy': False,
                              'optimizer_offload': False, 'param_offload': False, 'reshard_after_forward': True,
                              'wrap_policy': {'min_num_params': 0}},
              'lora_alpha': 16, 'lora_rank': 0, 'override_config': {},
              'path': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct',
              'target_modules': 'all-linear',
              'tokenizer_path': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct',
              'trust_remote_code': True, 'use_remove_padding': False, 'use_shm': False},
    'optim': {'lr': 1e-05, 'lr_warmup_steps_ratio': 0.0, 'min_lr_ratio': None, 'total_training_steps': -1,
              'warmup_style': 'constant', 'weight_decay': 0.01},
    'ppo_epochs': 1, 'ppo_max_token_len_per_gpu': 32768, 'ppo_micro_batch_size': None,
    'ppo_micro_batch_size_per_gpu': 1, 'ppo_mini_batch_size': 64,
    'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig', 'all_ranks': False, 'discrete': False, 'ranks': []},
    'rollout_n': 16, 'shuffle': False, 'strategy': 'fsdp2', 'ulysses_sequence_parallel_size': 1,
    'use_dynamic_bsz': False},
 'custom_reward_function':
   {'name': 'compute_score_batch',
    'path': '/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py',
    'reward_kwargs': {'complex_format_reward_weight': 0.0, 'final_answer_in_samples_reward_weight': 0.0,
                      'reflection_correctness_reward_weight': 0.0, 'response_or_sample': 'sample',
                      'reward_max': 10.0, 'reward_min': 0.0, 'sample_correctness_reward_weight': 0.0,
                      'sample_count_penalty_weight': 0.0, 'similarity_penalty_weight': 0.0,
                      'simple_format_reward_weight': 0.0, 'transition_penalty_weight': 0.0,
                      'verdict_correctness_reward_weight': 0.0}},
 'data':
   {'custom_cls': {'name': None, 'path': None}, 'dataloader_num_workers': 8, 'filter_overlong_prompts': False,
    'filter_overlong_prompts_workers': 1, 'image_key': 'images', 'max_prompt_length': 512,
    'max_response_length': 28000, 'prompt_key': 'prompt', 'return_full_prompt': False,
    'return_multi_modal_inputs': True, 'return_raw_chat': False, 'return_raw_input_ids': False,
    'reward_fn_key': 'data_source', 'sampler': {'class_name': None, 'class_path': None}, 'shuffle': True,
    'tokenizer': None, 'train_batch_size': 512,
    'train_files': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/train.parquet',
    'truncation': 'error', 'trust_remote_code': False, 'use_shm': False, 'val_batch_size': None,
    'val_files': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/test.parquet',
    'validation_shuffle': False, 'video_key': 'videos'},
 'ray_init': {'num_cpus': None, 'timeline_json_file': None},
 'reward_model':
   {'enable': False, 'forward_max_token_len_per_gpu': 32768, 'launch_reward_fn_async': True, 'max_length': None,
    'micro_batch_size': None, 'micro_batch_size_per_gpu': None,
    'model': {'external_lib': None,
              'fsdp_config': {'forward_prefetch': True, 'fsdp_size': -1, 'param_offload': False,
                              'reshard_after_forward': True, 'wrap_policy': {'min_num_params': 0}},
              'input_tokenizer': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct',
              'path': '~/models/FsfairX-LLaMA3-RM-v0.1', 'trust_remote_code': False, 'use_fused_kernels': False,
              'use_remove_padding': False, 'use_shm': False},
    'profiler': {'_target_': 'verl.utils.profiler.ProfilerConfig', 'all_ranks': False, 'discrete': False, 'ranks': []},
    'reward_manager': 'batch',
    'sandbox_fusion': {'max_concurrent': 64, 'memory_limit_mb': 1024, 'url': None},
    'strategy': 'fsdp2', 'ulysses_sequence_parallel_size': 1, 'use_dynamic_bsz': False},
 'trainer':
   {'balance_batch': True,
    'controller_nsight_options': {'cuda-graph-trace': 'graph', 'cuda-memory-usage': 'true',
                                  'trace': 'cuda,nvtx,cublas,ucx'},
    'critic_warmup': 0, 'default_hdfs_dir': None,
    'default_local_dir': '/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints',
    'del_local_ckpt_after_load': False, 'device': 'cuda', 'esi_redundant_time': 0,
    'experiment_name': 'rl_rlonly__32k_rl', 'log_val_generations': 0, 'logger': ['console', 'wandb'],
    'max_actor_ckpt_to_keep': None, 'max_critic_ckpt_to_keep': None, 'n_gpus_per_node': 1, 'nnodes': 16,
    'profile_steps': None, 'project_name': 'jackrl', 'ray_wait_register_center_timeout': 300,
    'resume_from_path': None, 'resume_mode': 'auto', 'rollout_data_dir': None, 'save_freq': 20, 'test_freq': 10,
    'total_epochs': 50, 'total_training_steps': None, 'val_before_train': True, 'val_only': False,
    'validation_data_dir': None,
    'worker_nsight_options': {'capture-range': 'cudaProfilerApi', 'capture-range-end': None,
                              'cuda-graph-trace': 'graph', 'cuda-memory-usage': 'true', 'kill': 'none',
                              'trace': 'cuda,nvtx,cublas,ucx'}}}
(TaskRunner pid=527581) Registered source: longmult
Registered source: countdown
Registered source: gsm8k
Registered source: arc
Registered source: arc_challenge
Registered source: arc_easy
Registered source: piqa
(TaskRunner pid=527581) Registered source: mmlu
Registered source: mmlu_pro
Registered source: csqa
Registered source: social_iqa
Registered source: strategy_qa
Registered source: winogrande
Registered source: bbh
Registered source: letter_countdown
Registered source: acronym
using customized reward function 'compute_score_batch' from '/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py'
using customized reward function 'compute_score_batch' from '/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py'
Using dataset class: RLHFDataset
dataset len: 1000
Using dataset class: RLHFDataset
dataset len: 4450
Using critic: False
[validate_config] All configuration checks passed successfully!
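The step accounting in this run can be re-derived from values in the log: a dataset of 1000 examples (assuming the first `dataset len` reported is the train split), `data.train_batch_size=512`, and `trainer.total_epochs=50`. A minimal sketch, assuming drop-last batching, which is what the reported dataloader size of 1 and 50 total steps imply:

```python
# Values taken from the log above; drop-last batching is an assumption
# consistent with "Size of train dataloader: 1" and "Total training steps: 50".
train_len = 1000
train_batch_size = 512
total_epochs = 50

batches_per_epoch = train_len // train_batch_size  # drop-last -> 1 batch/epoch
total_training_steps = batches_per_epoch * total_epochs

print(batches_per_epoch, total_training_steps)  # 1 50
```

Note that with `rollout.n=16`, each of those 50 steps still generates 512 × 16 = 8192 rollouts.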
(TaskRunner pid=527581) Size of train dataloader: 1, Size of val dataloader: 1
Total training steps: 50
{'0a4bc2462d8b8aec1ee67cdcabec595fcfd5ad785b398123a853e3d0': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 162587453030.0, 'node:129.114.18.49': 1.0, 'object_store_memory': 60056033689.0},
 '1e1a2bbe4bc90938e3b3a3681146fb83cf649ea177668cca1132ef59': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 162377213542.0, 'node:129.114.18.53': 1.0, 'object_store_memory': 60056033689.0},
 '271bd8658bd12d56c919704a14f9cc465efea8d8f1f85c49adb4b1f5': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 160221996646.0, 'node:129.114.18.56': 1.0, 'object_store_memory': 60056033689.0},
 '39bfbb7070da1d2800c4e21777a3bd579ba6d5bd691de7f2cbfae05d': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 162578015846.0, 'node:129.114.18.50': 1.0, 'object_store_memory': 60056033689.0},
 '582061aacf73f92adff4aec6af07d0595e62981a57c87c29d67133c0': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 162629461606.0, 'node:129.114.18.52': 1.0, 'object_store_memory': 60056033689.0},
 '5ce80999c40d83d1777403af24b27be5b3bcce68bd14b351249f3a45': {'CPU': 31.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 158806219161.0, 'node:129.114.17.42': 1.0, 'node:__internal_head__': 1.0, 'object_store_memory': 60055971430.0},
 '63f08f6fe4ca654b44ad9617b91929c8c23ae3b046602e69eed8b725': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 162379179622.0, 'node:129.114.18.54': 1.0, 'object_store_memory': 60056033689.0},
 '77bf3152745dfda5bf0070485b80f4d43c1c145cd830a649d10a29ac': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 160166880870.0, 'node:129.114.17.86': 1.0, 'object_store_memory': 60056033689.0},
 '7cb70b88d74ec902b9c0dbc8ab35a1bb2a54548d7f36c3e8ef8443f7': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 160509175398.0, 'node:129.114.17.91': 1.0, 'object_store_memory': 60056033689.0},
 '800361fc3a41f7e68f6e3169ba4b00d1fe24681d528681a27fdc9501': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 160685598310.0, 'node:129.114.17.89': 1.0, 'object_store_memory': 60056033689.0},
 '83e78a16756c4eec10cf6f63b618303ece2d460ceeca6568f1890db4': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 162383504998.0, 'node:129.114.18.55': 1.0, 'object_store_memory': 60056033689.0},
 '86574ca383fdf391e77b6fdbdd7a95e1d44fe772050637b8294dd525': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 160642279014.0, 'node:129.114.17.90': 1.0, 'object_store_memory': 60056033689.0},
 'a2134df5479e558cca9f2f3d0f319336837f918bf1a3b72162145b9e': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 160325674598.0, 'node:129.114.17.87': 1.0, 'object_store_memory': 60056033689.0},
 'b0374490e000d4cad47517403aabc1ad52e9cf47605e802b303725e0': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 162623825510.0, 'node:129.114.18.51': 1.0, 'object_store_memory': 60056033689.0},
 'b400c9687a072d5835fb2b4a08c576778c5c6754c8f387b50a9b6463': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 160903377715.0, 'node:129.114.17.85': 1.0, 'object_store_memory': 60056095948.0},
 'ecda89016b0d73c0d5928e7eb78e5655371ac9b2308ef079dd51561d': {'CPU': 32.0, 'GPU': 1.0, 'accelerator_type:GH200': 1.0, 'memory': 160161048166.0, 'node:129.114.17.88': 1.0, 'object_store_memory': 60056033689.0}}
Resource pool to cls: {<verl.single_controller.ray.base.RayResourcePool object at 0x400f4728a080>: {'actor_rollout': <verl.single_controller.ray.base.RayClassWithInitArgs object at 0x400f4728a0b0>}}
colocated worker base class <class 'verl.single_controller.base.worker.Worker'>
(WorkerDict pid=528348) Model config after override: Qwen2Config {
  "_name_or_path": "/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct",
  "architectures": ["Qwen2ForCausalLM"],
  "attention_dropout": 0.0,
  "eos_token_id": 151645,
  "hidden_act": "silu",
  "hidden_size": 1536,
  "initializer_range": 0.02,
  "intermediate_size": 8960,
  "max_position_embeddings": 32768,
  "max_window_layers": 21,
  "model_type": "qwen2",
  "num_attention_heads": 12,
  "num_hidden_layers": 28,
  "num_key_value_heads": 2,
  "pad_token_id": 151643,
  "rms_norm_eps": 1e-06,
  "rope_scaling": null,
  "rope_theta": 1000000.0,
  "sliding_window": 32768,
  "tie_word_embeddings": true,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.49.0",
  "use_cache": true,
  "use_sliding_window": false,
  "vocab_size": 151936
}
(WorkerDict pid=1130466, ip=129.114.18.52) Monkey patch state_dict in AutoModelForCausalLMWithValueHead.
(WorkerDict pid=1130466, ip=129.114.18.52) Monkey patch _flash_attention_forward in transformers.integrations.flash_attention
(WorkerDict pid=1130466, ip=129.114.18.52) Skipping monkey patch for Qwen2ForCausalLM as use_fused_kernels is False or fused_kernels_backend is torch
(WorkerDict pid=528348) Qwen2ForCausalLM contains 1.54B parameters
(WorkerDict pid=528348) wrap_policy: functools.partial(<function _or_policy at 0x400eefdd5bd0>, policies=[functools.partial(<function transformer_auto_wrap_policy at 0x400eefdd5ab0>, transformer_layer_cls={<class 'transformers.models.qwen2.modeling_qwen2.Qwen2DecoderLayer'>})])
(WorkerDict pid=528348) NCCL version 2.21.5+cuda12.6
(WorkerDict pid=528348) Total steps: 50, num_warmup_steps: 0
(WorkerDict pid=528348) Actor use_remove_padding=True
(WorkerDict pid=528348) Actor use_fused_kernels=False
(WorkerDict pid=528348) WARNING 10-07 12:41:14 [cuda.py:93] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used
(WorkerDict pid=528348) Monkey patch state_dict in AutoModelForCausalLMWithValueHead. [repeated 15x across cluster] (Ray deduplicates logs by default. Set RAY_DEDUP_LOGS=0 to disable log deduplication, or see https://docs.ray.io/en/master/ray-observability/user-guides/configure-logging.html#log-deduplication for more options.)
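The "1.54B parameters" figure can be re-derived from the Qwen2Config fields printed earlier (hidden_size 1536, intermediate_size 8960, 28 layers, 12 heads, 2 KV heads, vocab 151936). A sketch assuming the standard Qwen2 layer layout (q/k/v projections with bias, o_proj and MLP without bias, tied input/output embeddings); the layout itself is an assumption about the architecture, not something the log states:

```python
# Parameter count for Qwen2.5-1.5B-Instruct from its printed config.
# Layer layout (qkv bias, no o/MLP bias, tied embeddings) is assumed.
hidden = 1536
intermediate = 8960
layers = 28
heads = 12
kv_heads = 2
vocab = 151936
head_dim = hidden // heads  # 128

attn = (
    hidden * hidden + hidden                                     # q_proj + bias
    + 2 * (hidden * kv_heads * head_dim + kv_heads * head_dim)   # k_proj, v_proj + bias
    + hidden * hidden                                            # o_proj (no bias)
)
mlp = 3 * hidden * intermediate   # gate_proj, up_proj, down_proj (no bias)
norms = 2 * hidden                # input / post-attention RMSNorm weights
per_layer = attn + mlp + norms

total = layers * per_layer + vocab * hidden + hidden  # + tied embedding + final norm
print(round(total / 1e9, 2))  # 1.54
```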
(WorkerDict pid=528348) Monkey patch _flash_attention_forward in transformers.integrations.flash_attention [repeated 15x across cluster]
(WorkerDict pid=528348) Skipping monkey patch for Qwen2ForCausalLM as use_fused_kernels is False or fused_kernels_backend is torch [repeated 15x across cluster]
(WorkerDict pid=1130466, ip=129.114.18.52) WARNING 10-07 12:41:20 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x40113fe09ba0>
(WorkerDict pid=947655, ip=129.114.18.50) WARNING 10-07 12:41:15 [cuda.py:93] To see benefits of async output processing, enable CUDA graph. Since, enforce-eager is enabled, async output processor cannot be used [repeated 15x across cluster]
(WorkerDict pid=1130466, ip=129.114.18.52) WARNING 10-07 12:41:21 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.
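The FlashInfer warning above is harmless for correctness: vLLM falls back to filtering logits in plain PyTorch. For reference, this is the general top-k / top-p ("nucleus") filtering idea; the sketch below is a generic pure-Python illustration of the technique, not vLLM's implementation:

```python
def top_k_top_p_filter(probs, top_k=-1, top_p=1.0):
    """Generic sketch of top-k / top-p filtering over a probability vector:
    zero out tokens outside the top-k and outside the smallest prefix whose
    cumulative mass reaches top_p, then renormalize."""
    keep = set(range(len(probs)))
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    if top_k > 0:
        keep &= set(order[:top_k])
    if top_p < 1.0:
        mass, nucleus = 0.0, set()
        for i in order:
            nucleus.add(i)
            mass += probs[i]
            if mass >= top_p:
                break
        keep &= nucleus
    filtered = [p if i in keep else 0.0 for i, p in enumerate(probs)]
    s = sum(filtered)
    return [p / s for p in filtered]

p = top_k_top_p_filter([0.5, 0.3, 0.1, 0.1], top_k=2)
# keeps only the two most likely tokens, renormalized: ~[0.625, 0.375, 0.0, 0.0]
```

This run uses `top_k=-1, top_p=1` for rollouts, so the fallback path does no filtering at all and the warning is purely about sampling throughput.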
Error executing job with overrides: ['trainer.total_epochs=50', 'actor_rollout_ref.actor.optim.lr=1e-06', 'trainer.save_freq=20', 'trainer.test_freq=10', 'trainer.val_before_train=True', 'algorithm.adv_estimator=grpo', 'actor_rollout_ref.rollout.n=16', 'data.train_batch_size=512', 'actor_rollout_ref.actor.ppo_mini_batch_size=64', 'actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4', 'actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4', 'actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=4', 'custom_reward_function.reward_kwargs.response_or_sample=sample', 'custom_reward_function.reward_kwargs.simple_format_reward_weight=0.0', 'custom_reward_function.reward_kwargs.complex_format_reward_weight=0.0', 'custom_reward_function.reward_kwargs.sample_correctness_reward_weight=0.0', 'custom_reward_function.reward_kwargs.verdict_correctness_reward_weight=0.0', 'custom_reward_function.reward_kwargs.reflection_correctness_reward_weight=0.0', 'custom_reward_function.reward_kwargs.final_answer_in_samples_reward_weight=0.0', 'custom_reward_function.reward_kwargs.transition_penalty_weight=0.0', 'custom_reward_function.reward_kwargs.similarity_penalty_weight=0.0', 'custom_reward_function.reward_kwargs.sample_count_penalty_weight=0.0', 'custom_reward_function.reward_kwargs.reward_min=0.0', 'custom_reward_function.reward_kwargs.reward_max=10.0', 'reward_model.reward_manager=batch', 'custom_reward_function.name=compute_score_batch', 'reward_model.launch_reward_fn_async=True', 'actor_rollout_ref.model.enable_gradient_checkpointing=True', 'actor_rollout_ref.model.enable_activation_offload=False', 'actor_rollout_ref.rollout.gpu_memory_utilization=0.8', 'actor_rollout_ref.model.use_remove_padding=True', 'actor_rollout_ref.actor.strategy=fsdp2', 'actor_rollout_ref.actor.fsdp_config.forward_prefetch=True', 'actor_rollout_ref.ref.fsdp_config.forward_prefetch=True', 'reward_model.model.fsdp_config.forward_prefetch=True', 'actor_rollout_ref.rollout.max_num_batched_tokens=32768', 'actor_rollout_ref.rollout.max_num_seqs=256', 'actor_rollout_ref.rollout.tensor_model_parallel_size=1', 'data.max_response_length=28000', 'data.max_prompt_length=512', 'actor_rollout_ref.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct', 'actor_rollout_ref.rollout.dtype=bfloat16', 'critic.optim.lr=1e-05', 'critic.model.path=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/prefetched_models/Qwen__Qwen2_5_1_5B_Instruct', 'critic.ppo_micro_batch_size_per_gpu=1', 'algorithm.kl_ctrl.kl_coef=0.001', 'trainer.logger=[console,wandb]', 'trainer.project_name=jackrl', 'trainer.experiment_name=rl_rlonly__32k_rl', 'data.train_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/train.parquet', 'data.val_files=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/data/test.parquet', 'custom_reward_function.path=/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/sf_scripts/skill_factory_rewards.py', 'trainer.default_local_dir=/scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints', 'actor_rollout_ref.model.trust_remote_code=True', 'critic.model.trust_remote_code=True', 'trainer.nnodes=16', 'trainer.n_gpus_per_node=1']
Traceback (most recent call last):
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/trainer/main_ppo.py", line 39, in main
    run_ppo(config)
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/trainer/main_ppo.py", line 69, in run_ppo
    ray.get(runner.run.remote(config))
  File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/ray/_private/auto_init_hook.py", line 21, in auto_init_wrapper
    return fn(*args, **kwargs)
  File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/ray/_private/client_mode_hook.py", line 103, in wrapper
    return func(*args, **kwargs)
  File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/ray/_private/worker.py", line 2822, in get
    values, debugger_breakpoint = worker.get_objects(object_refs, timeout=timeout)
  File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/ray/_private/worker.py", line 930, in get_objects
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(RuntimeError): ray::TaskRunner.run() (pid=527581, ip=129.114.17.42, actor_id=604b9b90d24c2827a46380df02000000, repr=<main_ppo.TaskRunner object at 0x400f34b7cbb0>)
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/trainer/main_ppo.py", line 232, in run
    trainer.init_workers()
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/trainer/ppo/ray_trainer.py", line 931, in init_workers
    self.actor_rollout_wg.init_model()
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/single_controller/ray/base.py", line 51, in __call__
    output = ray.get(output)
ray.exceptions.RayTaskError(RuntimeError): ray::WorkerDict.actor_rollout_init_model() (pid=461409, ip=129.114.17.90, actor_id=f0efc961a90c9e7fd0f8bd0f02000000, repr=<verl.single_controller.ray.base.WorkerDict object at 0x401123d46e30>)
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/single_controller/ray/base.py", line 708, in func
    return getattr(self.worker_dict[key], name)(*args, **kwargs)
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/single_controller/base/decorator.py", line 549, in inner
    return func(*args, **kwargs)
  File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/workers/fsdp_workers.py", line 630,
in init_model self.rollout, self.rollout_sharding_manager = self._build_rollout( File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/workers/fsdp_workers.py", line 499, in _build_rollout rollout = vllm_rollout_cls( File "/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/verl/verl/workers/rollout/vllm_rollout/vllm_rollout_spmd.py", line 152, in __init__ self.inference_engine = LLM( File "/scratch/10416/zaynesprague/projects2/vllm/vllm/utils.py", line 1161, in inner return fn(*args, **kwargs) File "/scratch/10416/zaynesprague/projects2/vllm/vllm/entrypoints/llm.py", line 247, in __init__ self.llm_engine = LLMEngine.from_engine_args( File "/scratch/10416/zaynesprague/projects2/vllm/vllm/engine/llm_engine.py", line 510, in from_engine_args return engine_cls.from_vllm_config( File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/engine/llm_engine.py", line 112, in from_vllm_config return cls(vllm_config=vllm_config, File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/engine/llm_engine.py", line 92, in __init__ self.engine_core = EngineCoreClient.make_client( File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/engine/core_client.py", line 75, in make_client return InprocClient(vllm_config, executor_class, log_stats) File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/engine/core_client.py", line 198, in __init__ self.engine_core = EngineCore(*args, **kwargs) File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/engine/core.py", line 64, in __init__ self.model_executor = executor_class(vllm_config) File "/scratch/10416/zaynesprague/projects2/vllm/vllm/executor/executor_base.py", line 52, in __init__ self._init_executor() File "/scratch/10416/zaynesprague/projects2/vllm/vllm/executor/uniproc_executor.py", line 121, in _init_executor self.collective_rpc("init_device") File "/scratch/10416/zaynesprague/projects2/vllm/vllm/executor/uniproc_executor.py", line 56, in collective_rpc answer = 
run_method(self.driver_worker, method, args, kwargs) File "/scratch/10416/zaynesprague/projects2/vllm/vllm/utils.py", line 2456, in run_method return func(*args, **kwargs) File "/scratch/10416/zaynesprague/projects2/vllm/vllm/worker/worker_base.py", line 604, in init_device self.worker.init_device() # type: ignore File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/worker/gpu_worker.py", line 135, in init_device init_worker_distributed_environment(self.vllm_config, self.rank, File "/scratch/10416/zaynesprague/projects2/vllm/vllm/v1/worker/gpu_worker.py", line 323, in init_worker_distributed_environment init_distributed_environment(parallel_config.world_size, rank, File "/scratch/10416/zaynesprague/projects2/vllm/vllm/distributed/parallel_state.py", line 909, in init_distributed_environment _WORLD = init_world_group(ranks, local_rank, backend) File "/scratch/10416/zaynesprague/projects2/vllm/vllm/distributed/parallel_state.py", line 771, in init_world_group return GroupCoordinator( File "/scratch/10416/zaynesprague/projects2/vllm/vllm/distributed/parallel_state.py", line 225, in __init__ cpu_group = torch.distributed.new_group(ranks, backend="gloo") File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/torch/distributed/c10d_logger.py", line 95, in wrapper func_return = func(*args, **kwargs) File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 4981, in new_group return _new_group_with_tag( File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 5071, in _new_group_with_tag pg, pg_store = _new_process_group_helper( File "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/torch/distributed/distributed_c10d.py", line 1953, in _new_process_group_helper backend_class = ProcessGroupGloo( RuntimeError: Gloo connectFullMesh failed with 
[/pytorch/third_party/gloo/gloo/transport/tcp/pair.cc:144] no error Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace. (WorkerDict pid=1465467, ip=129.114.18.55) WARNING 10-07 12:41:20 [utils.py:2522] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x40116fd69ba0> [repeated 15x across cluster] (WorkerDict pid=1465467, ip=129.114.18.55) WARNING 10-07 12:41:21 [topk_topp_sampler.py:69] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer. [repeated 12x across cluster] [INFO] Extracting model from VeRL checkpoint at /scratch/10416/zaynesprague/skill_inject_outputs/sf_experiments/sft_v2/rlonly__32k/rl_rlonly__32k/verl/checkpoints [ERROR] No global_step directories found EXTRACT OUT: False [ERROR] Stage error: RuntimeError: Model extraction failed
/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_download.py:980: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as `local_dir`. For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder.
  warnings.warn(
Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 613.61it/s]
Fetching 10 files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10/10 [00:00<00:00, 688.25it/s]
2025-10-07 12:40:29,128 INFO worker.py:1694 -- Connecting to existing Ray cluster at address: 129.114.17.42:6379...
2025-10-07 12:40:29,134 INFO worker.py:1879 -- Connected to Ray cluster. View the dashboard at 127.0.0.1:8265
experiment_name: rl_rlonly__32k
elapsed_time_seconds: 331.866023
stage_complete: true
timestamp: 2025-10-07T12:49:09.082205
end_timestamp: 2025-10-07T16:59:33.374609
stage_name: verl_rl
stage_number: 1
level: INFO
message: Complete log capture for stage: verl_rl
stdout_content: "[INFO] Starting stage: VeRL RL training - rl\n[INFO] Data preparation succeeded\n[INFO] Setting up (...TRUNCATED)
stderr_content: "/work/10416/zaynesprague/anaconda3/envs/verl2/lib/python3.10/site-packages/huggingface_hub/file_dow(...TRUNCATED)
experiment_name: rl_rlonly__32k
elapsed_time_seconds: 15024.292404
stage_complete: true