---
library_name: peft
tags:
- axolotl
- generated_from_trainer
- text-generation-inference
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: logic_magazine_jsonl
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.3.0`
```yaml
# Image: winglian/axolotl:main-py3.10-cu118-2.0.1
base_model: meta-llama/Llama-2-7b-hf
base_model_config: meta-llama/Llama-2-7b-hf
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
is_llama_derived_model: true

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: bentarnoff/logic_magazine_jsonl
    type: sharegpt
hub_model_id: bentarnoff/logic_magazine_jsonl
val_set_size: 0.01
output_dir: ./qlora-out

adapter: qlora
lora_model_dir:

sequence_len: 4096
sample_packing: false
pad_to_sequence_len: true

lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project: "logic_magazine"
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model: "checkpoint"

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: paged_adamw_32bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
eval_steps: 20
eval_table_size: 5
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```

</details><br>

# logic_magazine_jsonl

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the `bentarnoff/logic_magazine_jsonl` dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3642
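
Because this repo holds a PEFT (QLoRA) adapter rather than merged weights, inference typically loads the adapter on top of the base model. Below is a minimal, untested sketch, assuming the adapter lives at `bentarnoff/logic_magazine_jsonl` (the `hub_model_id` above) and that you have access to the gated Llama-2 base model; prompts should mirror the conversational (sharegpt-style) data used for fine-tuning.

```python
# Rough inference sketch: load the QLoRA adapter on top of the Llama-2-7b base model.
# The adapter repo id is the hub_model_id from the axolotl config (an assumption about
# where the trained adapter was pushed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "bentarnoff/logic_magazine_jsonl"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

prompt = "What is Logic Magazine about?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```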
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

The model was fine-tuned on the conversation-formatted (`sharegpt` type) dataset `bentarnoff/logic_magazine_jsonl`, with 1% of the examples held out as an evaluation split (`val_set_size: 0.01`).
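
For reference, axolotl's `sharegpt` dataset type generally expects each record to carry a `conversations` list of `from`/`value` turns. The example below only illustrates that assumed shape; it is not a row from the dataset.

```python
# Illustrative sharegpt-style record (assumed shape, not an actual row from the dataset).
example = {
    "conversations": [
        {"from": "human", "value": "<user message>"},
        {"from": "gpt", "value": "<assistant reply>"},
    ]
}
```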
## Training procedure

The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
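
These values correspond to a `transformers.BitsAndBytesConfig`; a rough equivalent is sketched below for reference (not taken from the training code).

```python
# Sketch of a BitsAndBytesConfig matching the values listed above.
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=False,
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```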
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- total_eval_batch_size: 8
- optimizer: paged AdamW 32-bit (`paged_adamw_32bit`) with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
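
The `total_train_batch_size` above is not set directly; it follows from the per-device micro-batch size, gradient accumulation, and device count:

```python
# How the reported total train batch size is obtained from the settings above.
micro_batch_size = 2             # train_batch_size per device
gradient_accumulation_steps = 4
num_devices = 4
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)    # 32
```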
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3553 | 0.15 | 20 | 2.4167 |
| 2.2767 | 0.31 | 40 | 2.3869 |
| 2.2854 | 0.46 | 60 | 2.3658 |
| 2.2849 | 0.61 | 80 | 2.3470 |
| 2.353 | 0.76 | 100 | 2.3337 |
| 2.2412 | 0.92 | 120 | 2.3363 |
| 2.1992 | 1.07 | 140 | 2.3240 |
| 2.1069 | 1.22 | 160 | 2.3404 |
| 2.2444 | 1.37 | 180 | 2.3403 |
| 2.1424 | 1.53 | 200 | 2.3446 |
| 2.1739 | 1.68 | 220 | 2.3404 |
| 2.1423 | 1.83 | 240 | 2.3382 |
| 2.1721 | 1.98 | 260 | 2.3378 |
| 2.1621 | 2.14 | 280 | 2.3630 |
| 2.0394 | 2.29 | 300 | 2.3623 |
| 2.0631 | 2.44 | 320 | 2.3665 |
| 2.0234 | 2.6 | 340 | 2.3632 |
| 2.1042 | 2.75 | 360 | 2.3654 |
| 2.02 | 2.9 | 380 | 2.3642 |

### Framework versions

- PEFT 0.7.0
- Transformers 4.37.0.dev0
- PyTorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0