[2026-01-26 07:21:36,287] [WARNING] [py.warnings._showwarnmsg:110] [PID:300] /root/miniconda3/envs/py3.11/lib/python3.11/site-packages/torch/backends/__init__.py:46: UserWarning: Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = 'tf32' or torch.backends.cuda.matmul.fp32_precision = 'ieee'. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, torch.backends.cudnn.allow_tf32 = True, allowTF32CuDNN() and allowTF32CuBLAS() will be deprecated after Pytorch 2.9. Please see https://pytorch.org/docs/main/notes/cuda.html#tensorfloat-32-tf32-on-ampere-and-later-devices (Triggered internally at /pytorch/aten/src/ATen/Context.cpp:80.)
self.setter(val)
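The warning above notes that the legacy TF32 flags are deprecated after PyTorch 2.9 and names the replacement settings. A minimal migration sketch, using only the attribute names quoted in the warning itself (requires a PyTorch version that ships the new `fp32_precision` API):

```python
import torch

# Old flags (deprecated after PyTorch 2.9):
# torch.backends.cuda.matmul.allow_tf32 = True
# torch.backends.cudnn.allow_tf32 = True

# New API, as quoted in the warning: set the fp32 precision mode directly.
torch.backends.cuda.matmul.fp32_precision = "tf32"  # allow TF32 for matmuls
torch.backends.cudnn.conv.fp32_precision = "tf32"   # allow TF32 for cuDNN convolutions
```

Setting these explicitly at startup silences the warning on new PyTorch versions; see the TF32 notes linked in the warning for the precision trade-offs.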
Saving the dataset (0/3 shards):   0%|          | 0/1000 [00:00<?, ? examples/s]
Saving the dataset (1/3 shards):  33%|████      | 334/1000 [00:00<00:00, 3796.19 examples/s]
Saving the dataset (2/3 shards):  67%|███████   | 667/1000 [00:00<00:00, 7487.84 examples/s]
Saving the dataset (3/3 shards): 100%|██████████| 1000/1000 [00:00<00:00, 11109.06 examples/s]
Saving the dataset (3/3 shards): 100%|██████████| 1000/1000 [00:00<00:00, 6303.92 examples/s]
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards:  50%|█████     | 1/2 [00:00<00:00, 1.78it/s]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 2.40it/s]
Loading checkpoint shards: 100%|██████████| 2/2 [00:00<00:00, 2.28it/s]
[2026-01-26 07:21:58,152] [WARNING] [py.warnings._showwarnmsg:110] [PID:300] <string>:246: FutureWarning: The `max_prompt_length` argument is deprecated and will be removed in version 0.28.0. You should instead filter your dataset before training to ensure that prompts do not exceed your desired length.
[2026-01-26 07:21:58,799] [WARNING] [py.warnings._showwarnmsg:110] [PID:300] /workspace/axolotl/src/axolotl/core/trainers/mixins/optimizer.py:209: UserWarning: You are importing from 'rollout_func', which is an experimental feature. This API may change or be removed at any time without prior notice. Silence this warning by setting environment variable TRL_EXPERIMENTAL_SILENCE=1.
super().__init__(*args, **kwargs)
2026-01-26 07:22:41,128 - INFO - autotuner.py:256 - flashinfer.jit: [Autotuner]: Autotuning process starts ...
2026-01-26 07:22:41,141 - INFO - autotuner.py:262 - flashinfer.jit: [Autotuner]: Autotuning process ends
AlfWorld endpoint initialized on rank 1 at http://environment-server-1:8000