---
license: apache-2.0
---
Llama 2 (7B) model fine-tuned on the CodeAlpaca 20k instruction dataset using the QLoRA method with the PEFT library.
## Training and evaluation data 📚
CodeAlpaca_20K: contains 20K instruction-following examples, used to fine-tune the Code Alpaca model.
- Data: https://huggingface.co/mrm8488/falcon-7b-ft-codeAlpaca_20k
- Adapter: https://huggingface.co/skar01/llama2-coder
- Base model: TinyPixel/Llama-2-7B-bf16-sharded
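A minimal sketch of how the adapter above might be loaded on top of the base model with PEFT and prompted. The card does not specify a prompt template, so the Alpaca-style format shown in `build_prompt` is an assumption, as are the generation settings; the model and adapter IDs come from the card itself.

```python
def build_prompt(instruction: str) -> str:
    """Format an instruction in the Alpaca style commonly used for
    CodeAlpaca fine-tunes (assumed template, not confirmed by the card)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )


def load_model():
    """Load the base model and attach the LoRA adapter.

    Heavyweight: downloads ~7B parameters of weights, so this is kept out
    of module import and shown for illustration only.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "TinyPixel/Llama-2-7B-bf16-sharded"  # base model from the card
    base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
    # Attach the fine-tuned QLoRA adapter on top of the frozen base weights.
    model = PeftModel.from_pretrained(base, "skar01/llama2-coder")
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    return model, tokenizer


if __name__ == "__main__":
    prompt = build_prompt("Write a Python function that reverses a string.")
    print(prompt)
```

Because the adapter holds only the low-rank LoRA deltas, `PeftModel.from_pretrained` keeps the full bf16 base weights untouched and layers the adapter on top, which is why the base model ID must match the one the adapter was trained against.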