---
license: apache-2.0
datasets:
- m-a-p/CodeFeedback-Filtered-Instruction
- m-a-p/Code-Feedback
language:
- en
library_name: transformers
tags:
- llama-factory
- unsloth
base_model: h2oai/h2o-danube2-1.8b-base
---
# h2o-danube2 with ChatML template
|
|
This model was first fine-tuned with [BAdam](https://arxiv.org/abs/2404.02827 "BAdam: A Memory Efficient Full Parameter Optimization Method for Large Language Models") on [m-a-p/CodeFeedback-Filtered-Instruction](https://huggingface.co/datasets/m-a-p/CodeFeedback-Filtered-Instruction) and [m-a-p/Code-Feedback](https://huggingface.co/datasets/m-a-p/Code-Feedback), taken unfiltered from the latest [dolphin dataset](https://huggingface.co/datasets/cognitivecomputations/dolphin-2.9.3), using LLaMA-Factory.
|
|
## Quants


Thanks to [mradermacher](https://huggingface.co/mradermacher)!


- [mradermacher/danube2-1.8b-CodeFeedback-GGUF](https://huggingface.co/mradermacher/danube2-1.8b-CodeFeedback-GGUF)


## Template
|
|
```jinja
<|im_start|>system
You are a helpful coding assistant.<|im_end|>
<|im_start|>user
{{instruction}}<|im_end|>
<|im_start|>assistant
{{response}}<|im_end|>
```
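
For inference, the same ChatML format can be rendered through the tokenizer's chat template. The snippet below is a minimal sketch using `transformers`, assuming the tokenizer ships with the ChatML chat template; the repository id is a placeholder, so substitute the actual model path.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository id: substitute the actual path of this model.
model_id = "path/to/danube2-1.8b-CodeFeedback"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]

# Render the ChatML turns and append the assistant prefix for generation.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```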
|
|
## BAdam config


**System:** You are a helpful coding assistant.
|
|
```yaml
### model
model_name_or_path: danube2-base-chatml

### method
stage: sft
do_train: true
finetuning_type: full
use_badam: true
badam_switch_mode: ascending
badam_switch_interval: 50
badam_verbose: 1
badam_start_block: 10
seed: 720

### dataset
dataset: codefeedback_instruct_unfiltered,codefeedback_unfiltered
template: hermes_chatml
cutoff_len: 8192
overwrite_cache: false
preprocessing_num_workers: 12

### output
output_dir: code-feedback-chatml-badam
logging_steps: 5
save_steps: 1
save_strategy: epoch
plot_loss: true
overwrite_output_dir: false

### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 8
learning_rate: 0.00001
num_train_epochs: 1
lr_scheduler_type: cosine
warmup_ratio: 0.01
bf16: true
flash_attn: fa2

### eval
val_size: 0.01
per_device_eval_batch_size: 1
eval_strategy: steps
eval_steps: 2000
```
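
With `badam_switch_mode: ascending` and `badam_switch_interval: 50`, BAdam performs block coordinate descent: it trains one transformer block at a time with full-parameter updates and advances to the next block every 50 optimizer steps, starting here at block 10. The PyTorch sketch below only illustrates that freezing/switching pattern; it is not the LLaMA-Factory integration, and the module-name prefix and block count are assumptions about the model's layout.

```python
import torch.nn as nn

def set_trainable_block(model: nn.Module, block_idx: int) -> None:
    """Freeze all parameters except those of transformer block `block_idx`.
    The "model.layers.<i>." prefix assumes a Llama/Mistral-style module layout."""
    prefix = f"model.layers.{block_idx}."
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(prefix)

def block_for_step(step: int, start_block: int = 10, switch_interval: int = 50,
                   num_blocks: int = 24) -> int:
    """Ascending switch schedule: begin at `start_block`, advance one block
    every `switch_interval` optimizer steps, wrapping around (block count assumed)."""
    return (start_block + step // switch_interval) % num_blocks

# Schematic training loop:
# for step, batch in enumerate(dataloader):
#     set_trainable_block(model, block_for_step(step))
#     loss = model(**batch).loss
#     loss.backward()
#     optimizer.step(); optimizer.zero_grad()
```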
|
|
### BAdam training results
|
|
| | Training Loss | Epoch | Step | Validation Loss | |
| |:-------------:|:------:|:-----:|:---------------:| |
| | 0.6181 | 0.1789 | 2000 | 0.6044 | |
| | 0.6835 | 0.3578 | 4000 | 0.5949 | |
| | 0.5649 | 0.5367 | 6000 | 0.5893 | |
| | 0.6559 | 0.7155 | 8000 | 0.5850 | |
| | 0.6591 | 0.8944 | 10000 | 0.5839 | |
|
|
|
|