[2024-10-22 17:03:03,960] torch.distributed.run: [WARNING]
[2024-10-22 17:03:03,960] torch.distributed.run: [WARNING] *****************************************
[2024-10-22 17:03:03,960] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-10-22 17:03:03,960] torch.distributed.run: [WARNING] *****************************************
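torchrun's default of `OMP_NUM_THREADS=1` per worker is deliberately conservative, and the warning asks you to tune it yourself. One common heuristic, an assumption on my part rather than anything this log prescribes, is to split the machine's cores evenly across the local workers:

```python
import os

def omp_threads_per_worker(nproc_per_node, cpu_count=None):
    """Split CPU cores evenly across local workers.

    A tuning heuristic, not a rule from the torchrun warning: it avoids
    oversubscribing cores while giving each rank more than the default
    single OpenMP thread.
    """
    cpus = cpu_count if cpu_count is not None else (os.cpu_count() or 1)
    return max(1, cpus // nproc_per_node)

# Set before heavy libraries spin up their thread pools.
os.environ["OMP_NUM_THREADS"] = str(omp_threads_per_worker(4))
```

With 4 GPUs on this machine (the log shows four worker processes), each rank would get `cpu_count // 4` OpenMP threads instead of 1.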
[2024-10-22 17:03:05,665] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-22 17:03:05,676] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-22 17:03:05,680] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-22 17:03:05,686] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
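The repeated petrel_client lines (one per rank) come from an optional-dependency check: when the Ceph client is missing, the code falls back to reading images locally with PIL. A generic sketch of that pattern, where the helper name and structure are mine and not InternVL's actual code:

```python
def optional_import(name):
    """Try to import an optional dependency; return (module, available)."""
    try:
        return __import__(name), True
    except ImportError:
        return None, False

# petrel_client backs reads from Ceph object storage; without it, the
# loader falls back to local file reads (the real code then uses PIL).
_, has_petrel = optional_import("petrel_client")
image_backend = "petrel" if has_petrel else "local"
```

The warning is purely informational when data lives on local disk, which is why the log says it can be ignored.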
[2024-10-22 17:03:08,735] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2024-10-22 17:03:08,735] [INFO] [comm.py:616:init_distributed] cdb=None
[2024-10-22 17:03:08,735] [INFO] [comm.py:643:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
10/22/2024 17:03:09 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: False
10/22/2024 17:03:09 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=True,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=4,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=zero_stage1_config.json,
disable_tqdm=False,
dispatch_batches=None,
do_eval=False,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=None,
evaluation_strategy=no,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
gradient_checkpointing_kwargs=None,
greater_is_better=None,
group_by_length=True,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=4e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,
log_level=passive,
log_level_replica=warning,
log_on_each_node=True,
logging_dir=work_dirs/internvl_chat_v1_5/internvl_chat_v1_5_internlm2_1_8b_dynamic_res_2nd_finetune_lora/runs/Oct22_17-03-08_73F3-5xA6000-134,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=epoch,
lr_scheduler_kwargs={},
lr_scheduler_type=cosine,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
neftune_noise_alpha=None,
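The arguments above reference `deepspeed=zero_stage1_config.json`, but the file's contents are not shown in this log. A minimal ZeRO stage-1 configuration consistent with the flags above (`bf16=True`, `gradient_accumulation_steps=1`, `max_grad_norm=1.0`) might look like the following sketch; every value here is an assumption, not the actual file:

```json
{
  "zero_optimization": {
    "stage": 1
  },
  "bf16": {
    "enabled": true
  },
  "gradient_accumulation_steps": 1,
  "gradient_clipping": 1.0,
  "train_micro_batch_size_per_gpu": "auto"
}
```

ZeRO stage 1 shards only optimizer states across ranks, which is a common choice for LoRA fine-tuning runs like this one where the trainable parameter count is small.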