| timestamp | end_timestamp | stage_name | stage_number | level | message | stdout_content | stderr_content | experiment_name | elapsed_time_seconds | stage_complete |
|---|---|---|---|---|---|---|---|---|---|---|
2025-09-23T20:41:09.172841 | 2025-09-23T20:41:12.226356 | llamafactory_sft | 1 | INFO | Complete log capture for stage: llamafactory_sft | [INFO] Starting stage: LLaMAFactory training - sft
[INFO] Starting LLaMAFactory Training
[ERROR] LLaMAFactory stage 'sft' failed: [Errno 13] Permission denied: '/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/data/dataset_info.json'
[ERROR] Stage error: PermissionError: [Errno 13] Permission denied: '/scratch/10416/zaynesprague/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/data/dataset_info.json'
|
Creating parquet from Arrow format: 100%|██████████| 4/4 [00:00<00:00, 39.60ba/s]
Uploading...: 100%|██████████| 12.9M/12.9M [00:01<00:00, 7.40MB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:02<00:00, 2.07s/ shards]
| BASELINE_r1_distillation | 3.053515 | true |
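The first capture above fails before training even starts: the stage cannot append its dataset registration to LLaMA-Factory's `data/dataset_info.json`, which sits in another user's scratch tree. A minimal pre-flight sketch (the `writable` helper is a hypothetical addition, not part of skill-factory) that would surface the `[Errno 13]` before the stage launches:

```python
import os

def writable(path: str) -> bool:
    """True if `path` is an existing writable file, or could be created
    in its parent directory. Mirrors the access the sft stage needs when
    it appends an entry to LLaMA-Factory's data/dataset_info.json."""
    if os.path.exists(path):
        return os.access(path, os.W_OK)
    return os.access(os.path.dirname(path) or ".", os.W_OK)
```

Running this against the other user's `dataset_info.json` path at submit time would fail fast instead of surfacing mid-stage as a `PermissionError`.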
2025-09-23T20:46:34.883811 | 2025-09-23T20:46:45.960537 | llamafactory_sft | 1 | INFO | Complete log capture for stage: llamafactory_sft | [INFO] Starting stage: LLaMAFactory training - sft
[INFO] Starting LLaMAFactory Training
[INFO] Registered dataset: TAUR_dev__D_SFT_C_BASELINE_r1_distillation_sft_data__sft_train -> TAUR-dev/D-SFT_C-BASELINE_r1_distillation-sft-data (format: sharegpt)
[INFO] Created training config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
[INFO] Created merge config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/merge_config.yaml
[INFO] Starting LLaMAFactory training...
[DEBUG] Loaded 5 existing entries from metadata
[DEBUG] Successfully appended 1 entries to metadata (total: 6)
[DEBUG] Training Script #!/bin/bash
cd /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
source ~/.profile; source /opt/apps/lmod/lmod/init/bash;module load cuda/12.4 nccl/12.4 nvidia_math/12.4
source /work/10416/zaynesprague/vista/../anaconda3/etc/profile.d/conda.sh;conda activate verl2
# Verify python environment is working
PYTHON_PATH=$(which python)
echo "Python environment check: $PYTHON_PATH"
export HF_HOME="/scratch/10286/georgetsoukalas/hf_cache"
export TRITON_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/triton"
export OUTLINES_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/outlines"
export PYTHONPATH="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export CUDA_LAUNCH_BLOCKING="0"
export DISABLE_VERSION_CHECK="1"
export CC="gcc"
export CXX="g++"
export FORCE_TORCHRUN="1"
export NCCL_PROTO="simple"
export FI_EFA_FORK_SAFE="1"
export FI_LOG_LEVEL="1"
export FI_EFA_USE_DEVICE_RDMA="1"
export NCCL_NET_GDR_LEVEL="SYS"
export NCCL_NET_GDR_READ="1"
export PYTHONFAULTHANDLER="1"
export OMPI_MCA_mtl_base_verbose="1"
export FI_EFA_ENABLE_SHM_TRANSFER="0"
export FI_PROVIDER="efa"
export FI_EFA_TX_MIN_CREDITS="64"
export NCCL_TREE_THRESHOLD="0"
export NCCL_DEBUG="INFO"
export HF_DATASETS_DISABLE_MEMMAP="1"
export DATASETS_DISABLE_MEMMAP="1"
export HF_DATASETS_CACHE="/tmp/.sf_cache/datasets"
export HF_DATASETS_DISABLE_MEMMAP=1
export DATASETS_DISABLE_MEMMAP=1
# Master node coordination
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=12802
# Python path setup
export PYTHONPATH=$PWD:$PYTHONPATH
echo "Multi-node setup: MASTER_ADDR=$MASTER_ADDR, MASTER_PORT=$MASTER_PORT"
echo "Starting multi-node training with 1 GPUs per node across $SLURM_JOB_NUM_NODES nodes"
echo "Working directory: $(pwd)"
echo "Python path: $(which python)"
echo "Torchrun path: $(which torchrun)"
srun /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python -m torch.distributed.run \
--nproc-per-node 1 \
--nnodes $SLURM_JOB_NUM_NODES \
--rdzv_id=$SLURM_JOB_ID \
--rdzv_backend=c10d \
--rdzv_endpoint="$MASTER_ADDR:$MASTER_PORT" \
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
Starting training with real-time output...
================================================================================
Due to MODULEPATH changes, the following have been reloaded:
1) openmpi/5.0.5
The following have been reloaded with a version change:
1) cuda/12.8 => cuda/12.4
/scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/run_training.sh: line 7: /work/10416/zaynesprague/vista/../anaconda3/etc/profile.d/conda.sh: Permission denied
EnvironmentNameNotFound: Could not find conda environment: verl2
You can list all discoverable environments with `conda info --envs`.
Python environment check: /home1/10286/georgetsoukalas/miniconda3/bin/python
Multi-node setup: MASTER_ADDR=c619-101, MASTER_PORT=12802
Starting multi-node training with 1 GPUs per node across 4 nodes
Working directory: /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
Python path: /home1/10286/georgetsoukalas/miniconda3/bin/python
Torchrun path: /home1/10286/georgetsoukalas/miniconda3/bin/torchrun
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-101: task 0: Exited with exit code 1
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-102: task 1: Exited with exit code 1
srun: error: c619-112: task 3: Exited with exit code 1
srun: error: c619-111: task 2: Exited with exit code 1
================================================================================
[ERROR] Training failed with return code 1
[ERROR] LLaMAFactory stage 'sft' failed: Training failed
[ERROR] Stage error: RuntimeError: Training failed
|
Creating parquet from Arrow format: 100%|██████████| 4/4 [00:00<00:00, 43.94ba/s]
Uploading...: 100%|██████████| 12.9M/12.9M [00:00<00:00, 24.0MB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:01<00:00, 1.12s/ shards]
README.md: 100%|██████████| 391/391 [00:00<00:00, 5.90MB/s]
README.md: 1.25kB [00:00, 11.1MB/s]
metadata/train-00000-of-00001.parquet: 100%|██████████| 4.42k/4.42k [00:00<00:00, 12.9kB/s]
Generating train split: 100%|██████████| 5/5 [00:00<00:00, 1241.43 examples/s]
Creating parquet from Arrow format: 100%|██████████| 1/1 [00:00<00:00, 3026.19ba/s]
Uploading...: 100%|██████████| 5.86k/5.86k [00:00<00:00, 7.07kB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:01<00:00, 1.07s/ shards]
| BASELINE_r1_distillation | 11.076726 | true |
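The captures from this point on all share the same three-step failure chain: sourcing the other user's `conda.sh` is denied, `conda activate verl2` therefore fails, and the hard-coded `envs/vllm` interpreter then dies on a torch/CUPTI symbol mismatch (`cuptiActivityEnableDriverApi` unresolved against the loaded `cuda/12.4` module, which suggests that torch build expects a newer CUPTI). A sketch of automatic triage over such captures, using only the three failure signatures visible in this log (the patterns and labels are illustrative, not an existing skill-factory feature):

```python
import re

# The three distinct failure signatures observed in these log rows.
FAILURE_PATTERNS = [
    (re.compile(r"\[Errno 13\] Permission denied"), "filesystem permissions"),
    (re.compile(r"EnvironmentNameNotFound"), "conda env missing/unreadable"),
    (re.compile(r"undefined symbol: cupti\w+"), "torch/CUPTI version mismatch"),
]

def classify_failure(log_text: str) -> list[str]:
    """Return the distinct failure causes whose signature appears in the log."""
    return [label for pat, label in FAILURE_PATTERNS if pat.search(log_text)]
```

Applied to the rows below, every run after the first would classify as both a conda-env failure and a CUPTI mismatch, which matches the fact that the interpreter fell back to base miniconda while `srun` still pinned the `envs/vllm` python.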
2025-09-23T20:53:13.441740 | 2025-09-23T20:53:19.987384 | llamafactory_sft | 1 | INFO | Complete log capture for stage: llamafactory_sft | [INFO] Starting stage: LLaMAFactory training - sft
[INFO] Starting LLaMAFactory Training
[INFO] Found existing dataset registration: TAUR_dev__D_SFT_C_BASELINE_r1_distillation_sft_data__sft_train
[INFO] Created training config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
[INFO] Created merge config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/merge_config.yaml
[INFO] Starting LLaMAFactory training...
[DEBUG] Loaded 9 existing entries from metadata
[DEBUG] Successfully appended 1 entries to metadata (total: 10)
[DEBUG] Training Script #!/bin/bash
cd /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
source ~/.profile; source /opt/apps/lmod/lmod/init/bash;module load cuda/12.4 nccl/12.4 nvidia_math/12.4
source /work/10416/zaynesprague/vista/../anaconda3/etc/profile.d/conda.sh;conda activate verl2
# Verify python environment is working
PYTHON_PATH=$(which python)
echo "Python environment check: $PYTHON_PATH"
export HF_HOME="/scratch/10286/georgetsoukalas/hf_cache"
export TRITON_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/triton"
export OUTLINES_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/outlines"
export PYTHONPATH="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export CUDA_LAUNCH_BLOCKING="0"
export DISABLE_VERSION_CHECK="1"
export CC="gcc"
export CXX="g++"
export FORCE_TORCHRUN="1"
export NCCL_PROTO="simple"
export FI_EFA_FORK_SAFE="1"
export FI_LOG_LEVEL="1"
export FI_EFA_USE_DEVICE_RDMA="1"
export NCCL_NET_GDR_LEVEL="SYS"
export NCCL_NET_GDR_READ="1"
export PYTHONFAULTHANDLER="1"
export OMPI_MCA_mtl_base_verbose="1"
export FI_EFA_ENABLE_SHM_TRANSFER="0"
export FI_PROVIDER="efa"
export FI_EFA_TX_MIN_CREDITS="64"
export NCCL_TREE_THRESHOLD="0"
export NCCL_DEBUG="INFO"
export HF_DATASETS_DISABLE_MEMMAP="1"
export DATASETS_DISABLE_MEMMAP="1"
export HF_DATASETS_CACHE="/tmp/.sf_cache/datasets"
export HF_DATASETS_DISABLE_MEMMAP=1
export DATASETS_DISABLE_MEMMAP=1
# Master node coordination
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=12802
# Python path setup
export PYTHONPATH=$PWD:$PYTHONPATH
echo "Multi-node setup: MASTER_ADDR=$MASTER_ADDR, MASTER_PORT=$MASTER_PORT"
echo "Starting multi-node training with 1 GPUs per node across $SLURM_JOB_NUM_NODES nodes"
echo "Working directory: $(pwd)"
echo "Python path: $(which python)"
echo "Torchrun path: $(which torchrun)"
srun /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python -m torch.distributed.run \
--nproc-per-node 1 \
--nnodes $SLURM_JOB_NUM_NODES \
--rdzv_id=$SLURM_JOB_ID \
--rdzv_backend=c10d \
--rdzv_endpoint="$MASTER_ADDR:$MASTER_PORT" \
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
Starting training with real-time output...
================================================================================
Due to MODULEPATH changes, the following have been reloaded:
1) openmpi/5.0.5
The following have been reloaded with a version change:
1) cuda/12.6 => cuda/12.4
/scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/run_training.sh: line 7: /work/10416/zaynesprague/vista/../anaconda3/etc/profile.d/conda.sh: Permission denied
EnvironmentNameNotFound: Could not find conda environment: verl2
You can list all discoverable environments with `conda info --envs`.
Python environment check: /home1/10286/georgetsoukalas/miniconda3/bin/python
Multi-node setup: MASTER_ADDR=c619-101, MASTER_PORT=12802
Starting multi-node training with 1 GPUs per node across 4 nodes
Working directory: /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
Python path: /home1/10286/georgetsoukalas/miniconda3/bin/python
Torchrun path: /home1/10286/georgetsoukalas/miniconda3/bin/torchrun
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-101: task 0: Exited with exit code 1
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-102: task 1: Exited with exit code 1
srun: error: c619-112: task 3: Exited with exit code 1
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-111: task 2: Exited with exit code 1
================================================================================
[ERROR] Training failed with return code 1
[ERROR] LLaMAFactory stage 'sft' failed: Training failed
[ERROR] Stage error: RuntimeError: Training failed
|
Creating parquet from Arrow format: 100%|██████████| 4/4 [00:00<00:00, 44.16ba/s]
Uploading...: 100%|██████████| 12.9M/12.9M [00:00<00:00, 24.4MB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:01<00:00, 1.02s/ shards]
README.md: 1.25kB [00:00, 13.4MB/s]
metadata/train-00000-of-00001.parquet: 100%|██████████| 5.92k/5.92k [00:00<00:00, 18.2kB/s]
Generating train split: 100%|██████████| 9/9 [00:00<00:00, 2517.76 examples/s]
Creating parquet from Arrow format: 100%|██████████| 1/1 [00:00<00:00, 3315.66ba/s]
Uploading...: 100%|██████████| 6.05k/6.05k [00:00<00:00, 7.31kB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:01<00:00, 1.10s/ shards]
| BASELINE_r1_distillation | 6.545644 | true |
2025-09-23T20:59:18.023433 | 2025-09-23T20:59:24.260352 | llamafactory_sft | 1 | INFO | Complete log capture for stage: llamafactory_sft | [INFO] Starting stage: LLaMAFactory training - sft
[INFO] Starting LLaMAFactory Training
[INFO] Found existing dataset registration: TAUR_dev__D_SFT_C_BASELINE_r1_distillation_sft_data__sft_train
[INFO] Created training config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
[INFO] Created merge config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/merge_config.yaml
[INFO] Starting LLaMAFactory training...
[DEBUG] Loaded 13 existing entries from metadata
[DEBUG] Successfully appended 1 entries to metadata (total: 14)
[DEBUG] Training Script #!/bin/bash
cd /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
source ~/.profile; source /opt/apps/lmod/lmod/init/bash;module load cuda/12.4 nccl/12.4 nvidia_math/12.4
source /work/10416/zaynesprague/vista/../anaconda3/etc/profile.d/conda.sh;conda activate verl2
# Verify python environment is working
PYTHON_PATH=$(which python)
echo "Python environment check: $PYTHON_PATH"
export HF_HOME="/scratch/10286/georgetsoukalas/hf_cache"
export TRITON_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/triton"
export OUTLINES_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/outlines"
export PYTHONPATH="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export CUDA_LAUNCH_BLOCKING="0"
export DISABLE_VERSION_CHECK="1"
export CC="gcc"
export CXX="g++"
export FORCE_TORCHRUN="1"
export NCCL_PROTO="simple"
export FI_EFA_FORK_SAFE="1"
export FI_LOG_LEVEL="1"
export FI_EFA_USE_DEVICE_RDMA="1"
export NCCL_NET_GDR_LEVEL="SYS"
export NCCL_NET_GDR_READ="1"
export PYTHONFAULTHANDLER="1"
export OMPI_MCA_mtl_base_verbose="1"
export FI_EFA_ENABLE_SHM_TRANSFER="0"
export FI_PROVIDER="efa"
export FI_EFA_TX_MIN_CREDITS="64"
export NCCL_TREE_THRESHOLD="0"
export NCCL_DEBUG="INFO"
export HF_DATASETS_DISABLE_MEMMAP="1"
export DATASETS_DISABLE_MEMMAP="1"
export HF_DATASETS_CACHE="/tmp/.sf_cache/datasets"
export HF_DATASETS_DISABLE_MEMMAP=1
export DATASETS_DISABLE_MEMMAP=1
# Master node coordination
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=12802
# Python path setup
export PYTHONPATH=$PWD:$PYTHONPATH
echo "Multi-node setup: MASTER_ADDR=$MASTER_ADDR, MASTER_PORT=$MASTER_PORT"
echo "Starting multi-node training with 1 GPUs per node across $SLURM_JOB_NUM_NODES nodes"
echo "Working directory: $(pwd)"
echo "Python path: $(which python)"
echo "Torchrun path: $(which torchrun)"
srun /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python -m torch.distributed.run \
--nproc-per-node 1 \
--nnodes $SLURM_JOB_NUM_NODES \
--rdzv_id=$SLURM_JOB_ID \
--rdzv_backend=c10d \
--rdzv_endpoint="$MASTER_ADDR:$MASTER_PORT" \
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
Starting training with real-time output...
================================================================================
Due to MODULEPATH changes, the following have been reloaded:
1) openmpi/5.0.5
The following have been reloaded with a version change:
1) cuda/12.8 => cuda/12.4
/scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/run_training.sh: line 7: /work/10416/zaynesprague/vista/../anaconda3/etc/profile.d/conda.sh: Permission denied
EnvironmentNameNotFound: Could not find conda environment: verl2
You can list all discoverable environments with `conda info --envs`.
Python environment check: /home1/10286/georgetsoukalas/miniconda3/bin/python
Multi-node setup: MASTER_ADDR=c619-101, MASTER_PORT=12802
Starting multi-node training with 1 GPUs per node across 4 nodes
Working directory: /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
Python path: /home1/10286/georgetsoukalas/miniconda3/bin/python
Torchrun path: /home1/10286/georgetsoukalas/miniconda3/bin/torchrun
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-101: task 0: Exited with exit code 1
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-111: task 2: Exited with exit code 1
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-112: task 3: Exited with exit code 1
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-102: task 1: Exited with exit code 1
================================================================================
[ERROR] Training failed with return code 1
[ERROR] LLaMAFactory stage 'sft' failed: Training failed
[ERROR] Stage error: RuntimeError: Training failed
|
Creating parquet from Arrow format: 100%|██████████| 4/4 [00:00<00:00, 44.17ba/s]
Uploading...: 100%|██████████| 12.9M/12.9M [00:00<00:00, 24.2MB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:00<00:00, 1.15 shards/s]
README.md: 1.26kB [00:00, 10.6MB/s]
metadata/train-00000-of-00001.parquet: 100%|██████████| 6.12k/6.12k [00:00<00:00, 18.8kB/s]
Generating train split: 100%|██████████| 13/13 [00:00<00:00, 3006.84 examples/s]
Creating parquet from Arrow format: 100%|██████████| 1/1 [00:00<00:00, 2508.56ba/s]
Uploading...: 100%|██████████| 6.25k/6.25k [00:00<00:00, 7.55kB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:01<00:00, 1.14s/ shards]
| BASELINE_r1_distillation | 6.236919 | true |
2025-09-23T21:28:32.347147 | 2025-09-23T21:28:38.508036 | llamafactory_sft | 1 | INFO | Complete log capture for stage: llamafactory_sft | [INFO] Starting stage: LLaMAFactory training - sft
[INFO] Starting LLaMAFactory Training
[INFO] Found existing dataset registration: TAUR_dev__D_SFT_C_BASELINE_r1_distillation_sft_data__sft_train
[INFO] Created training config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
[INFO] Created merge config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/merge_config.yaml
[INFO] Starting LLaMAFactory training...
[DEBUG] Loaded 17 existing entries from metadata
[DEBUG] Successfully appended 1 entries to metadata (total: 18)
[DEBUG] Training Script #!/bin/bash
cd /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
source ~/.profile; source /opt/apps/lmod/lmod/init/bash;module load cuda/12.4 nccl/12.4 nvidia_math/12.4
source /work/10416/zaynesprague/vista/../anaconda3/etc/profile.d/conda.sh;conda activate verl2
# Verify python environment is working
PYTHON_PATH=$(which python)
echo "Python environment check: $PYTHON_PATH"
export HF_HOME="/scratch/10286/georgetsoukalas/hf_cache"
export TRITON_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/triton"
export OUTLINES_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/outlines"
export PYTHONPATH="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export CUDA_LAUNCH_BLOCKING="0"
export DISABLE_VERSION_CHECK="1"
export CC="gcc"
export CXX="g++"
export FORCE_TORCHRUN="1"
export NCCL_PROTO="simple"
export FI_EFA_FORK_SAFE="1"
export FI_LOG_LEVEL="1"
export FI_EFA_USE_DEVICE_RDMA="1"
export NCCL_NET_GDR_LEVEL="SYS"
export NCCL_NET_GDR_READ="1"
export PYTHONFAULTHANDLER="1"
export OMPI_MCA_mtl_base_verbose="1"
export FI_EFA_ENABLE_SHM_TRANSFER="0"
export FI_PROVIDER="efa"
export FI_EFA_TX_MIN_CREDITS="64"
export NCCL_TREE_THRESHOLD="0"
export NCCL_DEBUG="INFO"
export HF_DATASETS_DISABLE_MEMMAP="1"
export DATASETS_DISABLE_MEMMAP="1"
export HF_DATASETS_CACHE="/tmp/.sf_cache/datasets"
export HF_DATASETS_DISABLE_MEMMAP=1
export DATASETS_DISABLE_MEMMAP=1
# Master node coordination
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=12802
# Python path setup
export PYTHONPATH=$PWD:$PYTHONPATH
echo "Multi-node setup: MASTER_ADDR=$MASTER_ADDR, MASTER_PORT=$MASTER_PORT"
echo "Starting multi-node training with 1 GPUs per node across $SLURM_JOB_NUM_NODES nodes"
echo "Working directory: $(pwd)"
echo "Python path: $(which python)"
echo "Torchrun path: $(which torchrun)"
srun /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python -m torch.distributed.run \
--nproc-per-node 1 \
--nnodes $SLURM_JOB_NUM_NODES \
--rdzv_id=$SLURM_JOB_ID \
--rdzv_backend=c10d \
--rdzv_endpoint="$MASTER_ADDR:$MASTER_PORT" \
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
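The captured script above launches `torch.distributed.run` under `srun` with a c10d rendezvous keyed on the SLURM job ID. As a hedged illustration only (the helper name is hypothetical and not part of the pipeline), the flag assembly can be sketched as:

```python
def build_torchrun_args(nnodes, master_addr, master_port, job_id,
                        script, config, nproc_per_node=1):
    """Assemble the torch.distributed.run argument list mirroring the
    flags in the captured script. `script` is the training entry point
    and `config` its YAML config path."""
    return [
        "-m", "torch.distributed.run",
        "--nproc-per-node", str(nproc_per_node),
        "--nnodes", str(nnodes),
        f"--rdzv_id={job_id}",
        "--rdzv_backend=c10d",
        f"--rdzv_endpoint={master_addr}:{master_port}",
        script,
        config,
    ]
```

With the values echoed in this run (MASTER_ADDR=c619-101, MASTER_PORT=12802), this reproduces the rendezvous endpoint each node must agree on.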
Starting training with real-time output...
================================================================================
The following have been reloaded with a version change:
1) cuda/12.8 => cuda/12.4
/scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/run_training.sh: line 7: /work/10416/zaynesprague/vista/../anaconda3/etc/profile.d/conda.sh: Permission denied
EnvironmentNameNotFound: Could not find conda environment: verl2
You can list all discoverable environments with `conda info --envs`.
Python environment check: /home1/10286/georgetsoukalas/miniconda3/bin/python
Multi-node setup: MASTER_ADDR=c619-101, MASTER_PORT=12802
Starting multi-node training with 1 GPUs per node across 4 nodes
Working directory: /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
Python path: /home1/10286/georgetsoukalas/miniconda3/bin/python
Torchrun path: /home1/10286/georgetsoukalas/miniconda3/bin/torchrun
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-101: task 0: Exited with exit code 1
Traceback (most recent call last):
  File "<frozen runpy>", line 189, in _run_module_as_main
  File "<frozen runpy>", line 112, in _get_module_details
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
    from torch._C import * # noqa: F403
    ^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
Traceback (most recent call last):
  File "<frozen runpy>", line 189, in _run_module_as_main
  File "<frozen runpy>", line 112, in _get_module_details
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
    from torch._C import * # noqa: F403
    ^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-102: task 1: Exited with exit code 1
srun: error: c619-111: task 2: Exited with exit code 1
srun: error: c619-112: task 3: Exited with exit code 1
================================================================================
[ERROR] Training failed with return code 1
[ERROR] LLaMAFactory stage 'sft' failed: Training failed
[ERROR] Stage error: RuntimeError: Training failed
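The `undefined symbol: cuptiActivityEnableDriverApi` failure above means the `libcupti.so.12` visible at runtime (from the loaded `cuda/12.4` module) does not export a symbol the PyTorch build was linked against. A minimal sketch of a symbol-presence check, assuming a Linux host with `ctypes` (the function name is illustrative, not part of the pipeline):

```python
import ctypes


def shared_lib_has_symbol(lib, name):
    """Return True if the shared object exports `name`.

    `lib` may be a path to a .so file, or None to search the symbols
    already loaded into the current process. A missing symbol surfaces
    as an AttributeError on lookup, which hasattr turns into False.
    """
    try:
        handle = ctypes.CDLL(lib)
    except OSError:
        # Library not found or not loadable.
        return False
    return hasattr(handle, name)
```

For example, one could probe the CUPTI library the module system put on the library path, e.g. `shared_lib_has_symbol("/path/to/libcupti.so.12", "cuptiActivityEnableDriverApi")` (path illustrative); False would confirm the version mismatch.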
|
Creating parquet from Arrow format: 100%|██████████| 4/4 [00:00<00:00, 43.99ba/s]
Uploading...: 100%|██████████| 12.9M/12.9M [00:00<00:00, 24.0MB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:00<00:00, 1.16 shards/s]
README.md: 1.26kB [00:00, 10.6MB/s]
metadata/train-00000-of-00001.parquet: 100%|██████████| 6.33k/6.33k [00:00<00:00, 19.6kB/s]
Generating train split: 100%|██████████| 17/17 [00:00<00:00, 3979.19 examples/s]
Creating parquet from Arrow format: 100%|██████████| 1/1 [00:00<00:00, 2641.25ba/s]
Uploading...: 100%|██████████| 6.46k/6.46k [00:00<00:00, 7.76kB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:01<00:00, 1.05s/ shards]
| BASELINE_r1_distillation | 6.160889 | true |
2025-09-23T21:30:15.929135 | 2025-09-23T21:30:22.369206 | llamafactory_sft | 1 | INFO | Complete log capture for stage: llamafactory_sft | [INFO] Starting stage: LLaMAFactory training - sft
[INFO] Starting LLaMAFactory Training
[INFO] Found existing dataset registration: TAUR_dev__D_SFT_C_BASELINE_r1_distillation_sft_data__sft_train
[INFO] Created training config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
[INFO] Created merge config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/merge_config.yaml
[INFO] Starting LLaMAFactory training...
[DEBUG] Loaded 21 existing entries from metadata
[DEBUG] Successfully appended 1 entries to metadata (total: 22)
[DEBUG] Training Script #!/bin/bash
cd /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
source ~/.profile; source /opt/apps/lmod/lmod/init/bash;module load cuda/12.4 nccl/12.4 nvidia_math/12.4
source /work/10416/zaynesprague/vista/../anaconda3/etc/profile.d/conda.sh;conda activate verl2
# Verify python environment is working
PYTHON_PATH=$(which python)
echo "Python environment check: $PYTHON_PATH"
export HF_HOME="/scratch/10286/georgetsoukalas/hf_cache"
export TRITON_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/triton"
export OUTLINES_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/outlines"
export PYTHONPATH="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export CUDA_LAUNCH_BLOCKING="0"
export DISABLE_VERSION_CHECK="1"
export CC="gcc"
export CXX="g++"
export FORCE_TORCHRUN="1"
export NCCL_PROTO="simple"
export FI_EFA_FORK_SAFE="1"
export FI_LOG_LEVEL="1"
export FI_EFA_USE_DEVICE_RDMA="1"
export NCCL_NET_GDR_LEVEL="SYS"
export NCCL_NET_GDR_READ="1"
export PYTHONFAULTHANDLER="1"
export OMPI_MCA_mtl_base_verbose="1"
export FI_EFA_ENABLE_SHM_TRANSFER="0"
export FI_PROVIDER="efa"
export FI_EFA_TX_MIN_CREDITS="64"
export NCCL_TREE_THRESHOLD="0"
export NCCL_DEBUG="INFO"
export HF_DATASETS_DISABLE_MEMMAP="1"
export DATASETS_DISABLE_MEMMAP="1"
export HF_DATASETS_CACHE="/tmp/.sf_cache/datasets"
export HF_DATASETS_DISABLE_MEMMAP=1
export DATASETS_DISABLE_MEMMAP=1
# Master node coordination
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=12802
# Python path setup
export PYTHONPATH=$PWD:$PYTHONPATH
echo "Multi-node setup: MASTER_ADDR=$MASTER_ADDR, MASTER_PORT=$MASTER_PORT"
echo "Starting multi-node training with 1 GPUs per node across $SLURM_JOB_NUM_NODES nodes"
echo "Working directory: $(pwd)"
echo "Python path: $(which python)"
echo "Torchrun path: $(which torchrun)"
srun /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python -m torch.distributed.run \
--nproc-per-node 1 \
--nnodes $SLURM_JOB_NUM_NODES \
--rdzv_id=$SLURM_JOB_ID \
--rdzv_backend=c10d \
--rdzv_endpoint="$MASTER_ADDR:$MASTER_PORT" \
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
Starting training with real-time output...
================================================================================
The following have been reloaded with a version change:
1) cuda/12.8 => cuda/12.4
/scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/run_training.sh: line 7: /work/10416/zaynesprague/vista/../anaconda3/etc/profile.d/conda.sh: Permission denied
EnvironmentNameNotFound: Could not find conda environment: verl2
You can list all discoverable environments with `conda info --envs`.
Python environment check: /home1/10286/georgetsoukalas/miniconda3/bin/python
Multi-node setup: MASTER_ADDR=c619-101, MASTER_PORT=12802
Starting multi-node training with 1 GPUs per node across 4 nodes
Working directory: /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
Python path: /home1/10286/georgetsoukalas/miniconda3/bin/python
Torchrun path: /home1/10286/georgetsoukalas/miniconda3/bin/torchrun
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-111: task 2: Exited with exit code 1
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-102: task 1: Exited with exit code 1
srun: error: c619-112: task 3: Exited with exit code 1
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-101: task 0: Exited with exit code 1
================================================================================
[ERROR] Training failed with return code 1
[ERROR] LLaMAFactory stage 'sft' failed: Training failed
[ERROR] Stage error: RuntimeError: Training failed
|
Creating parquet from Arrow format: 100%|██████████| 4/4 [00:00<00:00, 43.94ba/s]
Uploading...: 100%|██████████| 12.9M/12.9M [00:00<00:00, 24.1MB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:00<00:00, 1.23 shards/s]
README.md: 1.26kB [00:00, 10.8MB/s]
metadata/train-00000-of-00001.parquet: 100%|██████████| 6.53k/6.53k [00:00<00:00, 19.1kB/s]
Generating train split: 100%|██████████| 21/21 [00:00<00:00, 5344.68 examples/s]
Creating parquet from Arrow format: 100%|██████████| 1/1 [00:00<00:00, 2770.35ba/s]
Uploading...: 100%|██████████| 6.66k/6.66k [00:00<00:00, 8.01kB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:01<00:00, 1.04s/ shards]
| BASELINE_r1_distillation | 6.440071 | true |
2025-09-23T21:32:03.212178 | 2025-09-23T21:32:13.302628 | llamafactory_sft | 1 | INFO | Complete log capture for stage: llamafactory_sft | [INFO] Starting stage: LLaMAFactory training - sft
[INFO] Starting LLaMAFactory Training
[INFO] Found existing dataset registration: TAUR_dev__D_SFT_C_BASELINE_r1_distillation_sft_data__sft_train
[INFO] Created training config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
[INFO] Created merge config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/merge_config.yaml
[INFO] Starting LLaMAFactory training...
[DEBUG] Loaded 25 existing entries from metadata
[DEBUG] Successfully appended 1 entries to metadata (total: 26)
[DEBUG] Training Script #!/bin/bash
cd /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
source ~/.profile; source /opt/apps/lmod/lmod/init/bash;module load gcc/14 cuda/12.8 nccl/12.4 nvidia_math/12.4
source /home1/10286/georgetsoukalas/miniconda3/etc/profile.d/conda.sh && conda deactivate && conda deactivate && conda activate && conda activate vllm
# Verify python environment is working
PYTHON_PATH=$(which python)
echo "Python environment check: $PYTHON_PATH"
export HF_HOME="/scratch/10286/georgetsoukalas/hf_cache"
export TRITON_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/triton"
export OUTLINES_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/outlines"
export PYTHONPATH="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export CUDA_LAUNCH_BLOCKING="0"
export DISABLE_VERSION_CHECK="1"
export CC="gcc"
export CXX="g++"
export FORCE_TORCHRUN="1"
export NCCL_PROTO="simple"
export FI_EFA_FORK_SAFE="1"
export FI_LOG_LEVEL="1"
export FI_EFA_USE_DEVICE_RDMA="1"
export NCCL_NET_GDR_LEVEL="SYS"
export NCCL_NET_GDR_READ="1"
export PYTHONFAULTHANDLER="1"
export OMPI_MCA_mtl_base_verbose="1"
export FI_EFA_ENABLE_SHM_TRANSFER="0"
export FI_PROVIDER="efa"
export FI_EFA_TX_MIN_CREDITS="64"
export NCCL_TREE_THRESHOLD="0"
export NCCL_DEBUG="INFO"
export HF_DATASETS_DISABLE_MEMMAP="1"
export DATASETS_DISABLE_MEMMAP="1"
export HF_DATASETS_CACHE="/tmp/.sf_cache/datasets"
export HF_DATASETS_DISABLE_MEMMAP=1
export DATASETS_DISABLE_MEMMAP=1
# Master node coordination
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=12802
# Python path setup
export PYTHONPATH=$PWD:$PYTHONPATH
echo "Multi-node setup: MASTER_ADDR=$MASTER_ADDR, MASTER_PORT=$MASTER_PORT"
echo "Starting multi-node training with 1 GPUs per node across $SLURM_JOB_NUM_NODES nodes"
echo "Working directory: $(pwd)"
echo "Python path: $(which python)"
echo "Torchrun path: $(which torchrun)"
srun /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python -m torch.distributed.run \
--nproc-per-node 1 \
--nnodes $SLURM_JOB_NUM_NODES \
--rdzv_id=$SLURM_JOB_ID \
--rdzv_backend=c10d \
--rdzv_endpoint="$MASTER_ADDR:$MASTER_PORT" \
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
Starting training with real-time output...
================================================================================
The following have been reloaded with a version change:
1) gcc/13.2.0 => gcc/14.2.0
Python environment check: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
Multi-node setup: MASTER_ADDR=c619-101, MASTER_PORT=12802
Starting multi-node training with 1 GPUs per node across 4 nodes
Working directory: /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
Python path: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
Torchrun path: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/torchrun
/home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python: can't open file '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py': [Errno 2] No such file or directory
/home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python: can't open file '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py': [Errno 2] No such file or directory
/home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python: can't open file '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py': [Errno 2] No such file or directory
/home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python: can't open file '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py': [Errno 2] No such file or directory
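Both this run's missing `src/train.py` and the earlier runs' unreadable `conda.sh` would have been caught before `srun` was invoked by a readability preflight. A minimal sketch, assuming POSIX file permissions (the helper name is illustrative, not part of the pipeline):

```python
import os


def preflight_missing(paths):
    """Return the subset of `paths` that are absent or unreadable.

    os.access(..., os.R_OK) covers both 'No such file or directory'
    and 'Permission denied' for the launching user.
    """
    return [p for p in paths if not os.access(p, os.R_OK)]
```

Running this over the script path, the training entry point, and the config before job submission would fail fast on the launch node instead of across all four ranks.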
E0923 21:32:13.045000 816862 /work/10286/georgetsoukalas/vista/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 2) local_rank: 0 (pid: 816867) of binary: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
E0923 21:32:13.046000 3667132 /work/10286/georgetsoukalas/vista/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 2) local_rank: 0 (pid: 3667135) of binary: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
E0923 21:32:13.045000 917863 /work/10286/georgetsoukalas/vista/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 2) local_rank: 0 (pid: 917866) of binary: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
E0923 21:32:13.045000 3461454 /work/10286/georgetsoukalas/vista/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 2) local_rank: 0 (pid: 3461457) of binary: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 896, in <module>
    main()
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 892, in main
    run(args)
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
    elastic_launch(
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2025-09-23_21:32:13
  host      : c619-101.vista.tacc.utexas.edu
  rank      : 0 (local_rank: 0)
  exitcode  : 2 (pid: 816867)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 896, in <module>
    main()
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 892, in main
    run(args)
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
    elastic_launch(
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2025-09-23_21:32:13
  host      : c619-102.vista.tacc.utexas.edu
  rank      : 1 (local_rank: 0)
  exitcode  : 2 (pid: 3667135)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 896, in <module>
    main()
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 892, in main
    run(args)
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
    elastic_launch(
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py FAILED
------------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2025-09-23_21:32:13
  host      : c619-112.vista.tacc.utexas.edu
  rank      : 3 (local_rank: 0)
  exitcode  : 2 (pid: 917866)
  error_file: <N/A>
  traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 896, in <module>
    main()
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 892, in main
    run(args)
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
    elastic_launch(
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-09-23_21:32:13
host : c619-111.vista.tacc.utexas.edu
rank : 2 (local_rank: 0)
exitcode : 2 (pid: 3461457)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
srun: error: c619-101: task 0: Exited with exit code 1
srun: error: c619-111: task 2: Exited with exit code 1
srun: error: c619-102: task 1: Exited with exit code 1
srun: error: c619-112: task 3: Exited with exit code 1
================================================================================
[ERROR] Training failed with return code 1
[ERROR] LLaMAFactory stage 'sft' failed: Training failed
[ERROR] Stage error: RuntimeError: Training failed
|
Uploading the dataset shards:   0%|          | 0/1 [00:00<?, ? shards/s]
Creating parquet from Arrow format: 100%|██████████| 4/4 [00:00<00:00, 43.94ba/s]
Uploading...: 100%|██████████| 12.9M/12.9M [00:00<00:00, 24.0MB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:00<00:00, 1.22 shards/s]
README.md: 1.26kB [00:00, 1.67MB/s]
metadata/train-00000-of-00001.parquet: 100%|██████████| 6.74k/6.74k [00:00<00:00, 20.3kB/s]
Generating train split: 100%|██████████| 25/25 [00:00<00:00, 5622.39 examples/s]
Uploading the dataset shards:   0%|          | 0/1 [00:00<?, ? shards/s]
Creating parquet from Arrow format: 100%|██████████| 1/1 [00:00<00:00, 2744.96ba/s]
Uploading...: 100%|██████████| 6.92k/6.92k [00:00<00:00, 8.29kB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:01<00:00, 1.03s/ shards]
| BASELINE_r1_distillation | 10.09045 | true |
2025-09-23T21:34:45.027978 | 2025-09-23T21:34:54.235864 | llamafactory_sft | 1 | INFO | Complete log capture for stage: llamafactory_sft | [INFO] Starting stage: LLaMAFactory training - sft
[INFO] Starting LLaMAFactory Training
[INFO] Found existing dataset registration: TAUR_dev__D_SFT_C_BASELINE_r1_distillation_sft_data__sft_train
[INFO] Created training config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
[INFO] Created merge config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/merge_config.yaml
[INFO] Starting LLaMAFactory training...
[DEBUG] Loaded 29 existing entries from metadata
[DEBUG] Successfully appended 1 entries to metadata (total: 30)
[DEBUG] Training Script #!/bin/bash
cd /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
source ~/.profile; source /opt/apps/lmod/lmod/init/bash;module load gcc/14 cuda/12.8 nccl/12.4 nvidia_math/12.4
source /home1/10286/georgetsoukalas/miniconda3/etc/profile.d/conda.sh && conda deactivate && conda deactivate && conda activate && conda activate vllm
# Verify python environment is working
PYTHON_PATH=$(which python)
echo "Python environment check: $PYTHON_PATH"
export HF_HOME="/scratch/10286/georgetsoukalas/hf_cache"
export TRITON_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/triton"
export OUTLINES_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/outlines"
export PYTHONPATH="/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory"
export CUDA_LAUNCH_BLOCKING="0"
export DISABLE_VERSION_CHECK="1"
export CC="gcc"
export CXX="g++"
export FORCE_TORCHRUN="1"
export NCCL_PROTO="simple"
export FI_EFA_FORK_SAFE="1"
export FI_LOG_LEVEL="1"
export FI_EFA_USE_DEVICE_RDMA="1"
export NCCL_NET_GDR_LEVEL="SYS"
export NCCL_NET_GDR_READ="1"
export PYTHONFAULTHANDLER="1"
export OMPI_MCA_mtl_base_verbose="1"
export FI_EFA_ENABLE_SHM_TRANSFER="0"
export FI_PROVIDER="efa"
export FI_EFA_TX_MIN_CREDITS="64"
export NCCL_TREE_THRESHOLD="0"
export NCCL_DEBUG="INFO"
export HF_DATASETS_DISABLE_MEMMAP="1"
export DATASETS_DISABLE_MEMMAP="1"
export HF_DATASETS_CACHE="/tmp/.sf_cache/datasets"
export HF_DATASETS_DISABLE_MEMMAP=1
export DATASETS_DISABLE_MEMMAP=1
# Master node coordination
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=12802
# Python path setup
export PYTHONPATH=$PWD:$PYTHONPATH
echo "Multi-node setup: MASTER_ADDR=$MASTER_ADDR, MASTER_PORT=$MASTER_PORT"
echo "Starting multi-node training with 1 GPUs per node across $SLURM_JOB_NUM_NODES nodes"
echo "Working directory: $(pwd)"
echo "Python path: $(which python)"
echo "Torchrun path: $(which torchrun)"
srun /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python -m torch.distributed.run \
--nproc-per-node 1 \
--nnodes $SLURM_JOB_NUM_NODES \
--rdzv_id=$SLURM_JOB_ID \
--rdzv_backend=c10d \
--rdzv_endpoint="$MASTER_ADDR:$MASTER_PORT" \
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
Starting training with real-time output...
================================================================================
The following have been reloaded with a version change:
1) gcc/13.2.0 => gcc/14.2.0
Python environment check: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
Multi-node setup: MASTER_ADDR=c619-101, MASTER_PORT=12802
Starting multi-node training with 1 GPUs per node across 4 nodes
Working directory: /scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory
Python path: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
Torchrun path: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/torchrun
/home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python: can't open file '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py': [Errno 2] No such file or directory
/home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python: can't open file '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py': [Errno 2] No such file or directory
/home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python: can't open file '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py': [Errno 2] No such file or directory
/home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python: can't open file '/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py': [Errno 2] No such file or directory
E0923 21:34:54.016000 3667164 /work/10286/georgetsoukalas/vista/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 2) local_rank: 0 (pid: 3667167) of binary: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
E0923 21:34:54.016000 818564 /work/10286/georgetsoukalas/vista/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 2) local_rank: 0 (pid: 818569) of binary: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
E0923 21:34:54.016000 3461485 /work/10286/georgetsoukalas/vista/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 2) local_rank: 0 (pid: 3461488) of binary: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
E0923 21:34:54.015000 917894 /work/10286/georgetsoukalas/vista/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/api.py:874] failed (exitcode: 2) local_rank: 0 (pid: 917897) of binary: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 896, in <module>
main()
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 896, in <module>
Traceback (most recent call last):
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 892, in main
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 896, in <module>
main()
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
run(args)
main()
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
Traceback (most recent call last):
elastic_launch(
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 896, in <module>
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
return f(*args, **kwargs)
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
^^^^^^^^^^^^^^^^^^
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 892, in main
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-09-23_21:34:54
host : c619-102.vista.tacc.utexas.edu
rank : 1 (local_rank: 0)
exitcode : 2 (pid: 3667167)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
main()
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
return f(*args, **kwargs)
run(args)
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
^^^^^^^^^^^^^^^^^^
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 892, in main
run(args)
elastic_launch(
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return f(*args, **kwargs)
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
^^^^^^^^^^^^^^^^^^
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 892, in main
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-09-23_21:34:54
host : c619-101.vista.tacc.utexas.edu
rank : 0 (local_rank: 0)
exitcode : 2 (pid: 818569)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
elastic_launch(
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
run(args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/run.py", line 883, in run
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-09-23_21:34:54
host : c619-111.vista.tacc.utexas.edu
rank : 2 (local_rank: 0)
exitcode : 2 (pid: 3461488)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
elastic_launch(
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 139, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/distributed/launcher/api.py", line 270, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
/scratch/10286/georgetsoukalas/skill_factory_dir/skill-factory/thirdparty/LLaMA-Factory/src/train.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2025-09-23_21:34:54
host : c619-112.vista.tacc.utexas.edu
rank : 3 (local_rank: 0)
exitcode : 2 (pid: 917897)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
srun: error: c619-101: task 0: Exited with exit code 1
srun: error: c619-102: task 1: Exited with exit code 1
srun: error: c619-111: task 2: Exited with exit code 1
srun: error: c619-112: task 3: Exited with exit code 1
================================================================================
[ERROR] Training failed with return code 1
[ERROR] LLaMAFactory stage 'sft' failed: Training failed
[ERROR] Stage error: RuntimeError: Training failed
|
Uploading the dataset shards:   0%|          | 0/1 [00:00<?, ? shards/s]
Creating parquet from Arrow format: 100%|██████████| 4/4 [00:00<00:00, 43.83ba/s]
Uploading...: 100%|██████████| 12.9M/12.9M [00:00<00:00, 23.9MB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:00<00:00, 1.14 shards/s]
README.md: 1.26kB [00:00, 10.6MB/s]
metadata/train-00000-of-00001.parquet: 100%|██████████| 6.99k/6.99k [00:00<00:00, 20.4kB/s]
Generating train split: 100%|██████████| 29/29 [00:00<00:00, 3827.16 examples/s]
Uploading the dataset shards:   0%|          | 0/1 [00:00<?, ? shards/s]
Creating parquet from Arrow format: 100%|██████████| 1/1 [00:00<00:00, 2853.27ba/s]
Uploading...: 100%|██████████| 7.13k/7.13k [00:00<00:00, 8.55kB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:01<00:00, 1.12s/ shards]
| BASELINE_r1_distillation | 9.207886 | true |
2025-09-23T21:35:46.741998 | 2025-09-23T21:35:53.832806 | llamafactory_sft | 1 | INFO | Complete log capture for stage: llamafactory_sft | [INFO] Starting stage: LLaMAFactory training - sft
[INFO] Starting LLaMAFactory Training
[INFO] Registered dataset: TAUR_dev__D_SFT_C_BASELINE_r1_distillation_sft_data__sft_train -> TAUR-dev/D-SFT_C-BASELINE_r1_distillation-sft-data (format: sharegpt)
[INFO] Created training config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
[INFO] Created merge config: /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/merge_config.yaml
[INFO] Starting LLaMAFactory training...
[DEBUG] Loaded 33 existing entries from metadata
[DEBUG] Successfully appended 1 entries to metadata (total: 34)
[DEBUG] Training Script #!/bin/bash
cd /scratch/10286/georgetsoukalas/skillfactory/skill-factory/thirdparty/LLaMA-Factory
source ~/.profile; source /opt/apps/lmod/lmod/init/bash;module load cuda/12.4 nccl/12.4 nvidia_math/12.4
source /work/10416/zaynesprague/vista/../anaconda3/etc/profile.d/conda.sh;conda activate verl2
# Verify python environment is working
PYTHON_PATH=$(which python)
echo "Python environment check: $PYTHON_PATH"
export HF_HOME="/scratch/10286/georgetsoukalas/hf_cache"
export TRITON_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/triton"
export OUTLINES_CACHE_DIR="/scratch/10286/georgetsoukalas/.cache/outlines"
export PYTHONPATH="/scratch/10286/georgetsoukalas/skillfactory/skill-factory"
export CUDA_LAUNCH_BLOCKING="0"
export DISABLE_VERSION_CHECK="1"
export CC="gcc"
export CXX="g++"
export FORCE_TORCHRUN="1"
export NCCL_PROTO="simple"
export FI_EFA_FORK_SAFE="1"
export FI_LOG_LEVEL="1"
export FI_EFA_USE_DEVICE_RDMA="1"
export NCCL_NET_GDR_LEVEL="SYS"
export NCCL_NET_GDR_READ="1"
export PYTHONFAULTHANDLER="1"
export OMPI_MCA_mtl_base_verbose="1"
export FI_EFA_ENABLE_SHM_TRANSFER="0"
export FI_PROVIDER="efa"
export FI_EFA_TX_MIN_CREDITS="64"
export NCCL_TREE_THRESHOLD="0"
export NCCL_DEBUG="INFO"
export HF_DATASETS_DISABLE_MEMMAP="1"
export DATASETS_DISABLE_MEMMAP="1"
export HF_DATASETS_CACHE="/tmp/.sf_cache/datasets"
export HF_DATASETS_DISABLE_MEMMAP=1
export DATASETS_DISABLE_MEMMAP=1
# Master node coordination
export MASTER_ADDR=$(scontrol show hostnames "$SLURM_JOB_NODELIST" | head -n 1)
export MASTER_PORT=12802
# Python path setup
export PYTHONPATH=$PWD:$PYTHONPATH
echo "Multi-node setup: MASTER_ADDR=$MASTER_ADDR, MASTER_PORT=$MASTER_PORT"
echo "Starting multi-node training with 1 GPUs per node across $SLURM_JOB_NUM_NODES nodes"
echo "Working directory: $(pwd)"
echo "Python path: $(which python)"
echo "Torchrun path: $(which torchrun)"
srun /home1/10286/georgetsoukalas/miniconda3/envs/vllm/bin/python -m torch.distributed.run \
--nproc-per-node 1 \
--nnodes $SLURM_JOB_NUM_NODES \
--rdzv_id=$SLURM_JOB_ID \
--rdzv_backend=c10d \
--rdzv_endpoint="$MASTER_ADDR:$MASTER_PORT" \
/scratch/10286/georgetsoukalas/skillfactory/skill-factory/thirdparty/LLaMA-Factory/src/train.py /scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/configs/training_config.yaml
Starting training with real-time output...
================================================================================
The following have been reloaded with a version change:
1) cuda/12.8 => cuda/12.4
/scratch/10286/georgetsoukalas/skill_inject_outputs/llamafactory/run_training.sh: line 7: /work/10416/zaynesprague/vista/../anaconda3/etc/profile.d/conda.sh: Permission denied
EnvironmentNameNotFound: Could not find conda environment: verl2
You can list all discoverable environments with `conda info --envs`.
Python environment check: /home1/10286/georgetsoukalas/miniconda3/bin/python
Multi-node setup: MASTER_ADDR=c619-101, MASTER_PORT=12802
Starting multi-node training with 1 GPUs per node across 4 nodes
Working directory: /scratch/10286/georgetsoukalas/skillfactory/skill-factory/thirdparty/LLaMA-Factory
Python path: /home1/10286/georgetsoukalas/miniconda3/bin/python
Torchrun path: /home1/10286/georgetsoukalas/miniconda3/bin/torchrun
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-101: task 0: Exited with exit code 1
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-112: task 3: Exited with exit code 1
srun: error: c619-102: task 1: Exited with exit code 1
Traceback (most recent call last):
File "<frozen runpy>", line 189, in _run_module_as_main
File "<frozen runpy>", line 112, in _get_module_details
File "/home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/__init__.py", line 409, in <module>
from torch._C import * # noqa: F403
^^^^^^^^^^^^^^^^^^^^^^
ImportError: /home1/10286/georgetsoukalas/miniconda3/envs/vllm/lib/python3.12/site-packages/torch/lib/libtorch_cpu.so: undefined symbol: cuptiActivityEnableDriverApi, version libcupti.so.12
srun: error: c619-111: task 2: Exited with exit code 1
================================================================================
[ERROR] Training failed with return code 1
[ERROR] LLaMAFactory stage 'sft' failed: Training failed
[ERROR] Stage error: RuntimeError: Training failed
|
Uploading the dataset shards:   0%|          | 0/1 [00:00<?, ? shards/s]
Creating parquet from Arrow format: 100%|██████████| 4/4 [00:00<00:00, 44.33ba/s]
Uploading...: 100%|██████████| 12.9M/12.9M [00:00<00:00, 23.9MB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:00<00:00, 1.22 shards/s]
README.md: 1.26kB [00:00, 29.5kB/s]
metadata/train-00000-of-00001.parquet: 100%|██████████| 7.21k/7.21k [00:00<00:00, 18.8kB/s]
Generating train split: 100%|██████████| 33/33 [00:00<00:00, 9715.85 examples/s]
Uploading the dataset shards:   0%|          | 0/1 [00:00<?, ? shards/s]
Creating parquet from Arrow format: 100%|██████████| 1/1 [00:00<00:00, 2861.05ba/s]
Uploading...: 100%|██████████| 7.37k/7.37k [00:00<00:00, 8.82kB/s]
Uploading the dataset shards: 100%|██████████| 1/1 [00:01<00:00, 1.08s/ shards]
| BASELINE_r1_distillation | 7.090808 | true |
2025-09-23T21:39:17.461189 | 2025-09-23T21:39:43.626183 | llamafactory_sft | 1 | INFO | Complete log capture for stage: llamafactory_sft | "[INFO] Starting stage: LLaMAFactory training - sft\n[INFO] Starting LLaMAFactory Training\n[INFO] F(...TRUNCATED) | "\rUploading the dataset shards: 0%| (...TRUNCATED) | BASELINE_r1_distillation | 26.164994 | true |
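The failure pattern in the rows above (a training subprocess exiting with return code 1, reported as `[ERROR] Training failed with return code 1` and re-raised as `RuntimeError: Training failed`) can be illustrated with a minimal sketch. This is an assumption-based reconstruction, not the actual pipeline code; the function name `run_stage` is hypothetical.

```python
import subprocess
import sys

def run_stage(cmd):
    """Hypothetical stage runner mirroring the log's failure path:
    a nonzero return code is reported, then surfaced as RuntimeError."""
    result = subprocess.run(cmd)
    if result.returncode != 0:
        print(f"[ERROR] Training failed with return code {result.returncode}")
        raise RuntimeError("Training failed")

# A stage that exits nonzero reproduces the two error lines seen in the log:
# run_stage([sys.executable, "-c", "raise SystemExit(1)"])
```

Under this assumption, the pipeline's stage-level `[ERROR] Stage error: RuntimeError: Training failed` line is simply the outer capture of this raised exception.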