---
license: llama2
base_model: codellama/CodeLlama-7b-hf
tags:
- code
- llama
- gguf
- merged
- python
---

# CodeLlama 7B Python AI Assistant (Merged GGUF)

This is a merged version of the QLoRA fine-tuned CodeLlama-7B model. The LoRA weights have been merged into the base model, and the result converted to GGUF format for easy deployment.
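
For reference, a merge like this can be reproduced with the standard transformers + peft APIs. The sketch below illustrates that workflow; it is not necessarily the exact script used for this repository.

```python
# Illustrative sketch of the merge step using standard transformers/peft APIs
# (an assumption, not the exact script used for this repo).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, "pranav-pvnn/codellama-7b-python-ai-assistant")
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
merged.save_pretrained("codellama-7b-merged")
AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf").save_pretrained("codellama-7b-merged")
# The merged HF checkpoint can then be converted to GGUF with llama.cpp's
# convert_hf_to_gguf.py script and quantized with the llama-quantize tool.
```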

## Model Details

- **Base Model**: CodeLlama-7b-hf
- **Original LoRA Adapter**: pranav-pvnn/codellama-7b-python-ai-assistant
- **Fine-tuning Method**: QLoRA (4-bit quantization with LoRA)
- **Format**: GGUF (self-contained, no separate adapter needed)
- **Training Framework**: Unsloth

## Available Quantizations

- `codellama-7b-merged-f16.gguf` - Full precision (FP16) - ~13 GB
- `codellama-7b-merged-Q4_K_M.gguf` - 4-bit quantization (recommended) - ~4 GB
- `codellama-7b-merged-Q5_K_M.gguf` - 5-bit quantization (higher quality) - ~5 GB
- `codellama-7b-merged-Q8_0.gguf` - 8-bit quantization (highest quality) - ~7 GB
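
To fetch a single quantization programmatically, something like the following works with `huggingface_hub`. The `repo_id` below is a placeholder, not a real repository id; substitute the id of this repository.

```python
# Download one GGUF file from the Hub. The repo_id is a placeholder —
# substitute the actual id of this repository.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="your-username/codellama-7b-merged-gguf",  # placeholder
    filename="codellama-7b-merged-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file
```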

## Usage

### With llama.cpp:

```bash
./llama-cli -m codellama-7b-merged-Q4_K_M.gguf -p "### Instruction:\nWrite a Python function to calculate factorial.\n### Response:\n"
```

### With Python (llama-cpp-python):

```python
from llama_cpp import Llama

# n_ctx matches the 2048-token context length the model was fine-tuned with
llm = Llama(model_path="codellama-7b-merged-Q4_K_M.gguf", n_ctx=2048)

prompt = "### Instruction:\nWrite a Python function to calculate factorial.\n### Response:\n"
# Stop at the next "###" marker so the model doesn't start a new instruction
output = llm(prompt, max_tokens=256, stop=["###"])
print(output["choices"][0]["text"])
```

### With Ollama:

1. Create a Modelfile:

```
FROM ./codellama-7b-merged-Q4_K_M.gguf
```

2. Create the model:

```bash
ollama create my-codellama -f Modelfile
ollama run my-codellama "Write a Python function to sort a list"
```

## Training Details

- **Quantization**: 4-bit QLoRA
- **LoRA Rank**: 64
- **Learning Rate**: 2e-4
- **Epochs**: 4
- **Max Seq Length**: 2048
- **Training Data**: Custom Python programming examples (~2,000 examples)
- **GPU**: NVIDIA Tesla T4
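
These hyperparameters map onto an Unsloth setup roughly like the sketch below. Only the values listed above come from this card; `lora_alpha`, `target_modules`, the batch size, and the dataset file are assumptions, and exact `SFTTrainer` keyword arguments vary across trl versions.

```python
# Rough sketch of the training setup implied by the hyperparameters above.
# lora_alpha, target_modules, batch size, and the dataset file are
# assumptions; the training data itself is not published.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="codellama/CodeLlama-7b-hf",
    max_seq_length=2048,   # from the card
    load_in_4bit=True,     # QLoRA: 4-bit base weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r=64,            # LoRA rank from the card
    lora_alpha=16,   # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)
dataset = load_dataset("json", data_files="train.json", split="train")  # hypothetical file

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes a pre-formatted "text" column
    max_seq_length=2048,
    args=TrainingArguments(
        learning_rate=2e-4,              # from the card
        num_train_epochs=4,              # from the card
        per_device_train_batch_size=2,   # assumption for a single T4
        output_dir="outputs",
    ),
)
trainer.train()
```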

## Prompt Format

```
### Instruction:
[Your instruction here]
### Response:
```
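
A small helper makes this format easy to apply consistently; the function name below is ours, not part of any library.

```python
# Tiny helper for building prompts in the format above.
def build_prompt(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n### Response:\n"

print(build_prompt("Write a Python function to reverse a string."))
```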

## License

Same as the base model (Llama 2 license).

## Acknowledgements

- Base Model: [Meta's CodeLlama](https://huggingface.co/codellama/CodeLlama-7b-hf)
- Training Framework: [Unsloth](https://github.com/unslothai/unsloth)