tensor([8.9710e-01, 1.2218e-02, 8.8825e-02, 1.0302e-03, 1.0371e-04, 2.4893e-04,
        1.4117e-05, 4.5579e-04], device='cuda:0', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([8.9710e-01, 1.2218e-02, 8.8825e-02, 1.0302e-03, 1.0371e-04, 2.4893e-04,
1.4117e-05, 4.5579e-04], device='cuda:0', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.8971, device='cuda:0', grad_fn=<UnbindBackward0>), False: tensor(0.0888, device='cuda:0', grad_fn=<UnbindBackward0>), 'Execute Error': tensor(0.0141, device='cuda:0', grad_fn=<SubBackward0>)}
tensor([5.1581e-01, 2.4645e-02, 4.5520e-01, 1.2764e-03, 1.6309e-04, 1.7475e-03,
1.2594e-04, 1.0319e-03], device='cuda:2', grad_fn=<SoftmaxBackward0>)
yes *************
['yes', 'congratulations', 'no', 'honey', 'solid', 'right', 'candle', 'chocolate'] tensor([5.1581e-01, 2.4645e-02, 4.5520e-01, 1.2764e-03, 1.6309e-04, 1.7475e-03,
1.2594e-04, 1.0319e-03], device='cuda:2', grad_fn=<SelectBackward0>)
The final probability distribution is: {True: tensor(0.5158, device='cuda:2', grad_fn=<DivBackward0>), False: tensor(0.4552, device='cuda:2', grad_fn=<DivBackward0>), 'Execute Error': tensor(0.0290, device='cuda:2', grad_fn=<DivBackward0>)}
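The printouts above suggest a softmax over eight candidate first tokens that is then collapsed into a three-way verdict: True takes the mass on 'yes', False the mass on 'no', and 'Execute Error' the leftover mass. A minimal sketch of that collapsing step (plain Python floats copied from the first printout; in the actual run these are CUDA tensors, and the function name `collapse` is hypothetical):

```python
# Candidate tokens as printed in the log above.
candidates = ['yes', 'congratulations', 'no', 'honey',
              'solid', 'right', 'candle', 'chocolate']

def collapse(probs):
    """Map per-candidate probabilities to {True, False, 'Execute Error'}.

    True  <- probability of 'yes'
    False <- probability of 'no'
    'Execute Error' <- remaining mass (1 - p_yes - p_no)
    """
    p_yes = probs[candidates.index('yes')]
    p_no = probs[candidates.index('no')]
    return {True: p_yes, False: p_no, 'Execute Error': 1.0 - p_yes - p_no}

# Softmax values from the cuda:0 printout above.
probs = [8.9710e-01, 1.2218e-02, 8.8825e-02, 1.0302e-03,
         1.0371e-04, 2.4893e-04, 1.4117e-05, 4.5579e-04]
dist = collapse(probs)
```

With these inputs the collapsed distribution reproduces the logged values (True ≈ 0.8971, False ≈ 0.0888, 'Execute Error' ≈ 0.0141), which is consistent with 'Execute Error' being the residual probability mass rather than a separately scored token.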
[2024-10-22 17:18:15,754] torch.distributed.run: [WARNING]
[2024-10-22 17:18:15,754] torch.distributed.run: [WARNING] *****************************************
[2024-10-22 17:18:15,754] torch.distributed.run: [WARNING] Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
[2024-10-22 17:18:15,754] torch.distributed.run: [WARNING] *****************************************
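The warning above notes that torchrun defaults OMP_NUM_THREADS to 1 per worker and suggests tuning it. One hedged way to do that is to set the variable before the launcher spawns workers, dividing available cores across the local ranks (the value 4 here is an assumption matching the four ranks visible in this log):

```python
import os

# Assumption: 4 local worker processes, as in this log.
nproc_per_node = 4

# Give each worker an equal share of the visible cores instead of
# torchrun's default of 1 OpenMP thread per process.
os.environ.setdefault(
    "OMP_NUM_THREADS",
    str(max(1, (os.cpu_count() or 1) // nproc_per_node)),
)
```

Whether more OpenMP threads help depends on the workload; the default of 1 exists to avoid oversubscribing the CPU when many ranks run on one node.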
[2024-10-22 17:18:17,475] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-22 17:18:17,483] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-22 17:18:17,485] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-10-22 17:18:17,485] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
petrel_client is not installed. If you read data locally instead of from ceph, ignore it.
Replace train sampler!!
petrel_client is not installed. Using PIL to load images.
[2024-10-22 17:18:20,648] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2024-10-22 17:18:20,648] [INFO] [comm.py:616:init_distributed] cdb=None
[2024-10-22 17:18:20,648] [INFO] [comm.py:643:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
[2024-10-22 17:18:20,872] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2024-10-22 17:18:20,872] [INFO] [comm.py:616:init_distributed] cdb=None
[2024-10-22 17:18:20,877] [WARNING] [comm.py:152:init_deepspeed_backend] NCCL backend in DeepSpeed not yet implemented
[2024-10-22 17:18:20,877] [INFO] [comm.py:616:init_distributed] cdb=None
10/22/2024 17:18:20 - WARNING - __main__ - Process rank: 0, device: cuda:0, n_gpu: 1, distributed training: True, 16-bits training: False
10/22/2024 17:18:20 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
auto_find_batch_size=False,
bf16=True,
bf16_full_eval=False,
data_seed=None,
dataloader_drop_last=False,
dataloader_num_workers=4,
dataloader_persistent_workers=False,
dataloader_pin_memory=True,
ddp_backend=None,
ddp_broadcast_buffers=None,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
ddp_timeout=1800,
debug=[],
deepspeed=zero_stage1_config.json,
disable_tqdm=False,
dispatch_batches=None,
do_eval=False,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_delay=0,
eval_steps=None,
evaluation_strategy=no,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
fsdp=[],
fsdp_config={'min_num_params': 0, 'xla': False, 'xla_fsdp_grad_ckpt': False},
fsdp_min_num_params=0,
fsdp_transformer_layer_cls_to_wrap=None,
full_determinism=False,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
gradient_checkpointing_kwargs=None,
greater_is_better=None,
group_by_length=True,
half_precision_backend=auto,
hub_always_push=False,
hub_model_id=None,
hub_private_repo=False,
hub_strategy=every_save,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
include_inputs_for_metrics=False,
include_num_input_tokens_seen=False,
include_tokens_per_second=False,
jit_mode_eval=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=4e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=0,