File "/mnt/SSD1_4TB/yunjie/Internvl_NLVR/InternVL/internvl_chat/internvl/train/internvl_chat_finetune.py", line 898, in <module>
main()
File "/mnt/SSD1_4TB/yunjie/Internvl_NLVR/InternVL/internvl_chat/internvl/train/internvl_chat_finetune.py", line 807, in main
train_dataset = build_datasets(
File "/mnt/SSD1_4TB/yunjie/Internvl_NLVR/InternVL/internvl_chat/internvl/train/internvl_chat_finetune.py", line 598, in build_datasets
dataset = LazySupervisedDataset(
File "/mnt/SSD1_4TB/yunjie/Internvl_NLVR/InternVL/internvl_chat/internvl/train/internvl_chat_finetune.py", line 251, in __init__
with open(meta['annotation'], 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'playground/opensource/sharegpt4v_instruct_gpt4-vision_cap100k.jsonl'
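The FileNotFoundError above means the annotation JSONL referenced by the dataset meta file does not exist at the given relative path. One way to catch this before relaunching is to validate every meta entry up front; the sketch below assumes the meta JSON maps dataset names to dicts carrying an 'annotation' path (as the traceback's meta['annotation'] access suggests), and META_PATH is a hypothetical placeholder for the actual meta file passed to the script.

import json
import os

META_PATH = 'path/to/meta.json'  # hypothetical placeholder for the actual meta file

with open(META_PATH, 'r') as f:
    meta = json.load(f)

# Report every dataset whose 'annotation' file is missing on disk.
missing = [(name, entry.get('annotation', ''))
           for name, entry in meta.items()
           if not os.path.isfile(entry.get('annotation', ''))]
for name, path in missing:
    print(f'[missing] {name}: {path}')
if not missing:
    print('All annotation files found.')

Note that the failing path is relative ('playground/opensource/...'), so it also resolves against the working directory from which torchrun is launched.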
[2024-10-22 17:03:18,980] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: 1) local_rank: 0 (pid: 2064903) of binary: /home/yunjie/anaconda3/envs/internvl/bin/python
Traceback (most recent call last):
  File "/home/yunjie/anaconda3/envs/internvl/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/run.py", line 806, in main
    run(args)
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/run.py", line 797, in run
    elastic_launch(
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/yunjie/anaconda3/envs/internvl/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 264, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
internvl/train/internvl_chat_finetune.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2024-10-22_17:03:18
host : 73F3-5xA6000-134
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 2064904)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[2]:
time : 2024-10-22_17:03:18
host : 73F3-5xA6000-134
rank : 2 (local_rank: 2)
exitcode : 1 (pid: 2064905)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
[3]:
time : 2024-10-22_17:03:18
host : 73F3-5xA6000-134
rank : 3 (local_rank: 3)
exitcode : 1 (pid: 2064906)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-10-22_17:03:18
host : 73F3-5xA6000-134
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 2064903)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
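Every failure entry above reports error_file: <N/A>, and the traceback field points to the PyTorch elastic errors documentation. According to that documentation, decorating the worker entrypoint with the record decorator from torch.distributed.elastic.multiprocessing.errors makes each worker write its exception traceback to an error file that the launcher can then surface in this summary. A minimal sketch, with main standing in for the script's real entrypoint:

from torch.distributed.elastic.multiprocessing.errors import record

@record
def main():
    ...  # training setup and loop go here

if __name__ == '__main__':
    main()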
[2024-10-22 17:03:38,251] torch.distributed.run: [WARNING]
[2024-10-22 17:03:38,251] torch.distributed.run: [WARNING] *****************************************
[2024-10-22 17:03:38,251] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-10-22 17:03:38,251] torch.distributed.run: [WARNING] *****************************************
[2024-10-22 17:03:39,947] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-22 17:03:39,964] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-22 17:03:39,972] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-22 17:03:39,993] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
[2024-10-22 17:03:43,075] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2024-10-22 17:03:43,075] [INFO] [comm.py:616:init_distributed] cdb=None
[2024-10-22 17:03:43,075] [INFO] [comm.py:643:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2024-10-22 17:03:43,238] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2024-10-22 17:03:43,239] [INFO] [comm.py:616:init_distributed] cdb=None
10/22/2024 17:03:43 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: False
10/22/2024 17:03:43 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=True,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=4,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
ddp_backend=None,