torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
internvl/train/internvl_chat_finetune.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-10-22_17:32:36
host : 73F3-5xA6000-134
rank : 3 (local_rank: 3)
exitcode : 1 (pid: 2084214)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
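The `error_file : <N/A>` and the `traceback` hint above mean the child's actual Python traceback was not captured. Per the linked PyTorch elastic-errors page, the usual fix is to decorate the script's entrypoint with `record` so the failing rank writes its traceback to an error file that torchrun can report. A minimal sketch (the `main` body stands in for the training code in `internvl_chat_finetune.py` and is not taken from this log):

```python
# Sketch: let torchrun capture and report the child process's traceback.
from torch.distributed.elastic.multiprocessing.errors import record

@record
def main():
    ...  # training / fine-tuning logic goes here

if __name__ == "__main__":
    main()
```

With `@record` in place, a rank that raises an exception produces a populated `error_file` and an inline traceback in the ChildFailedError summary instead of `<N/A>`.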
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
(the three lines above are repeated verbatim by each of the remaining 11 worker processes)
/home/yunjie/anaconda3/envs/internvl/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 24 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
/home/yunjie/anaconda3/envs/internvl/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 20 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
[2024-10-23 14:41:01,322] torch.distributed.run: [WARNING]
[2024-10-23 14:41:01,322] torch.distributed.run: [WARNING] *****************************************
[2024-10-23 14:41:01,322] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-10-23 14:41:01,322] torch.distributed.run: [WARNING] *****************************************
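The warning above says torchrun pins `OMP_NUM_THREADS=1` per process by default. If CPU-side work (e.g. the dataloader) is thread-starved, the variable can be raised at launch time; a hypothetical invocation (the thread count and `--nproc_per_node` value here are placeholders, not taken from this log):

```shell
# Override torchrun's default of 1 OpenMP thread per worker.
OMP_NUM_THREADS=4 torchrun --nproc_per_node=4 \
    internvl/train/internvl_chat_finetune.py
```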
[2024-10-23 14:41:03,013] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-23 14:41:03,024] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-23 14:41:03,029] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-23 14:41:03,037] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
(the three lines above are printed once per rank; 4 ranks total)
[2024-10-23 14:41:06,003] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2024-10-23 14:41:06,003] [INFO] [comm.py:616:init_distributed] cdb=None
[2024-10-23 14:41:06,003] [INFO] [comm.py:643:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
10/23/2024 14:41:06 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: False
10/23/2024 14:41:06 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=True,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=4,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,