| root@c35059eec1c7:~/alpaca-lora# WORLD_SIZE=4 CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 --master_port=3192 finetune.py --base_model '/root/llama-7b-hf/' --data_path './alpaca_data_cleaned.json' --output_dir './lora-alpaca' --batch_size 1024 --micro_batch_size 128 |
| WARNING:torch.distributed.run: |
| ***************************************** |
| Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. |
| ***************************************** |
|
|
| ===================================BUG REPORT=================================== |
| Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues |
| ================================================================================ |
| /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:136: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/tmp/torchelastic__vgbqj6n/none_m7wuat9y/attempt_0/1/error.json')} |
| warn(msg) |
| CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64... |
| CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so |
| CUDA SETUP: Highest compute capability among GPUs detected: 8.0 |
| CUDA SETUP: Detected CUDA version 116 |
| CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda116.so... |
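
The CUDA SETUP lines above report compute capability 8.0 (A100-class GPUs) and a CUDA 11.6 runtime, which is why bitsandbytes loads `libbitsandbytes_cuda116.so`. A minimal sketch, using plain PyTorch calls (independent of bitsandbytes), for confirming the same facts before launching a run:

```python
import torch

# Report what torch was built against and what each visible GPU supports.
print("torch CUDA version:", torch.version.cuda)  # e.g. "11.6"
for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    print(f"GPU {i}: {torch.cuda.get_device_name(i)}, compute capability {major}.{minor}")
```
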
| Training Alpaca-LoRA model with params: |
| base_model: /root/llama-7b-hf/ |
| data_path: ./alpaca_data_cleaned.json |
| output_dir: ./lora-alpaca |
| batch_size: 1024 |
| micro_batch_size: 128 |
| num_epochs: 3 |
| learning_rate: 0.0003 |
| cutoff_len: 256 |
| val_set_size: 2000 |
| lora_r: 8 |
| lora_alpha: 16 |
| lora_dropout: 0.05 |
| lora_target_modules: ['q_proj', 'v_proj'] |
| train_on_inputs: True |
| group_by_length: False |
| wandb_project: |
| wandb_run_name: |
| wandb_watch: |
| wandb_log_model: |
| resume_from_checkpoint: None |
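
The params block shows `batch_size: 1024` and `micro_batch_size: 128` across a world size of 4. In `finetune.py` of this era, per-GPU gradient accumulation is derived from exactly these numbers; a minimal sketch of that arithmetic (variable names mirror the flags above, and `world_size` comes from the `WORLD_SIZE` env var set in the launch command):

```python
import os

batch_size = 1024        # global batch per optimizer step
micro_batch_size = 128   # per-GPU batch per forward/backward pass

# Each optimizer step must see batch_size examples in total.
gradient_accumulation_steps = batch_size // micro_batch_size   # 8

# Under DDP the global batch is already split across ranks,
# so each rank needs proportionally fewer accumulation passes.
world_size = int(os.environ.get("WORLD_SIZE", 1))              # 4 here
if world_size > 1:
    gradient_accumulation_steps //= world_size                 # 8 // 4 = 2

print(gradient_accumulation_steps)  # 2 micro-batches per rank per step
```
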
|
|
|
|
| Loading checkpoint shards: 100%|████████████████████| 2/2 [00:10<00:00, 5.12s/it] |
| Found cached dataset json (/root/.cache/huggingface/datasets/json/default-089f33be67988669/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51) |
| 100%|████████████████████| 1/1 [00:00<00:00, 536.56it/s] |
| trainable params: 4194304 || all params: 6742609920 || trainable%: 0.06220594176090199 |
| Loading cached split indices for dataset at /root/.cache/huggingface/datasets/json/default-089f33be67988669/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-014b126db45d32ac.arrow and /root/.cache/huggingface/datasets/json/default-089f33be67988669/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/cache-24bb91d28b1fd433.arrow |
| Map:   5%|█         | 2525/49759 [00:01<00:36, 1279.49 examples/s] |
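
The `trainable params: 4194304` line is exactly what LoRA with `lora_r: 8` on `q_proj` and `v_proj` predicts for LLaMA-7B: 32 decoder layers, hidden size 4096, and two rank-8 adapter matrices (A and B) per targeted projection. A worked check (the layer count and hidden size are LLaMA-7B's published shapes):

```python
# LLaMA-7B shapes
n_layers, hidden = 32, 4096
r = 8                      # lora_r from the params block
targets = 2                # q_proj and v_proj

# Each adapted Linear gets A (r x hidden) and B (hidden x r).
per_module = r * hidden + hidden * r        # 65,536
trainable = n_layers * targets * per_module
print(trainable)                            # 4,194,304

# trainable% against the 6,742,609,920 total reported above
print(100 * trainable / 6_742_609_920)      # ~0.0622
```
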
| wandb: Currently logged in as: jeffwan. Use `wandb login --relogin` to force relogin |
| wandb: Tracking run with wandb version 0.14.0 |
| wandb: Run data is saved locally in /root/alpaca-lora/wandb/run-20230328_174559-mzmf7gh8 |
| wandb: Run `wandb offline` to turn off syncing. |
| wandb: Syncing run curious-moon-1 |
| wandb: ⭐️ View project at https://wandb.ai/jeffwan/alpaca |
| wandb: 🚀 View run at https://wandb.ai/jeffwan/alpaca/runs/mzmf7gh8 |
|
|
|
|
| {'loss': 2.2763, 'learning_rate': 2.9999999999999997e-05, 'epoch': 0.2} |
| {'loss': 2.1957, 'learning_rate': 5.9999999999999995e-05, 'epoch': 0.41} |
| {'loss': 1.9686, 'learning_rate': 8.999999999999999e-05, 'epoch': 0.61} |
| {'loss': 1.5397, 'learning_rate': 0.00011999999999999999, 'epoch': 0.82} |
| {'loss': 1.2312, 'learning_rate': 0.00015, 'epoch': 1.02} |
| {'loss': 1.1251, 'learning_rate': 0.00017999999999999998, 'epoch': 1.22} |
| {'loss': 0.9606, 'learning_rate': 0.00020999999999999998, 'epoch': 1.43} |
| {'loss': 0.8537, 'learning_rate': 0.00023999999999999998, 'epoch': 1.63} |
| {'loss': 0.8412, 'learning_rate': 0.00027, 'epoch': 1.84} |
| {'loss': 0.838, 'learning_rate': 0.0003, 'epoch': 2.04} |
| {'loss': 0.8305, 'learning_rate': 0.00023617021276595742, 'epoch': 2.24} |
| {'loss': 0.8215, 'learning_rate': 0.0001723404255319149, 'epoch': 2.45} |
| {'loss': 0.8147, 'learning_rate': 0.00010851063829787234, 'epoch': 2.65} |
| {'loss': 0.8147, 'learning_rate': 4.468085106382978e-05, 'epoch': 2.86} |
| {'train_runtime': 3187.753, 'train_samples_per_second': 46.828, 'train_steps_per_second': 0.046, 'train_loss': 1.2030427374807344, 'epoch': 3.0} |
| 100%|████████████████████| 147/147 [53:05<00:00, 21.67s/it] |
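
The loss lines above are logged every 10 optimizer steps, and the learning-rate column follows linear warmup to 3e-4 over the first 100 steps, then linear decay to zero at step 147 (49,759 training examples after the 2,000-example validation split, at a global batch of 1,024, give ceil(49759/1024) = 49 steps per epoch, so 147 steps over 3 epochs). A small sketch that reproduces the logged values; `warmup = 100` is inferred from the numbers (it matches finetune.py's default `warmup_steps`), not read from the command line:

```python
import math

train_examples = 49_759                              # dataset minus val_set_size
steps_per_epoch = math.ceil(train_examples / 1024)   # 49
total_steps = steps_per_epoch * 3                    # 147

def lr(step, base=3e-4, warmup=100, total=147):
    # Linear warmup, then linear decay to zero (HF "linear" schedule).
    if step < warmup:
        return base * step / warmup
    return base * (total - step) / (total - warmup)

for s in (10, 100, 110, 140):
    print(s, lr(s))
# 10  -> 3e-05           (matches the epoch 0.2 line above)
# 110 -> 0.000236170...  (matches the epoch 2.24 line above)
```
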
|
|
| If there's a warning about missing keys above, please disregard :) |
| wandb: Waiting for W&B process to finish... (success). |
| wandb: 0.015 MB of 0.015 MB uploaded (0.000 MB deduped) |
| wandb: Run history: |
| wandb: train/epoch ▁▁▂▂▃▃▄▄▅▅▆▆▇▇█ |
| wandb: train/global_step ▁▁▂▂▃▃▄▄▅▅▆▆▇▇█ |
| wandb: train/learning_rate ▁▂▃▃▄▅▆▆▇█▆▅▃▁ |
| wandb: train/loss █▇▆▄▃▂▂▁▁▁▁▁▁▁ |
| wandb: train/total_flos ▁ |
| wandb: train/train_loss ▁ |
| wandb: train/train_runtime ▁ |
| wandb: train/train_samples_per_second ▁ |
| wandb: train/train_steps_per_second ▁ |
| wandb: |
| wandb: Run summary: |
| wandb: train/epoch 3.0 |
| wandb: train/global_step 147 |
| wandb: train/learning_rate 4e-05 |
| wandb: train/loss 0.8147 |
| wandb: train/total_flos 1.5159002374476923e+18 |
| wandb: train/train_loss 1.20304 |
| wandb: train/train_runtime 3187.753 |
| wandb: train/train_samples_per_second 46.828 |
| wandb: train/train_steps_per_second 0.046 |
| wandb: |
| wandb: 🚀 View run curious-moon-1 at: https://wandb.ai/jeffwan/alpaca/runs/mzmf7gh8 |
| wandb: Synced 5 W&B file(s), 0 media file(s), 2 artifact file(s) and 0 other file(s) |
| wandb: Find logs at: ./wandb/run-20230328_174559-mzmf7gh8/logs |
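
The run summary is internally consistent: 147 steps over 3,187.753 s is the reported 0.046 steps/s, and 3 epochs × 49,759 examples over the same wall time gives the 46.828 samples/s figure. A one-line check:

```python
runtime = 3187.753
print(147 / runtime)         # ~0.0461  train_steps_per_second
print(3 * 49_759 / runtime)  # ~46.83   train_samples_per_second
```
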
|
|
|
|