---
tags:
- gguf
- llama.cpp
- unsloth
---
# Fine-Tuned Unit Test Generator (Llama-3-8B)

This repository contains the fine-tuned weights for the Unit Test Generator project.
|
## Training Details
* **Base Model:** `unsloth/llama-3-8b-bnb-4bit`
* **Dataset:** `iamtarun/python_code_instructions_18k_alpaca`
* **Method:** QLoRA (Quantized Low-Rank Adaptation)
* **Framework:** Unsloth + PyTorch + Hugging Face TRL
|
## Files
* `llama-3-8b.Q4_K_M.gguf`: The quantized model, optimized for CPU inference (used in the [Live Demo](https://huggingface.co/spaces/nihardon/unit-test-generator)).
* `adapter_model.safetensors`: The raw LoRA adapters.
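The GGUF file can also be run locally, for example with the `llama-cpp-python` bindings. The sketch below is illustrative, not part of this repository: the Alpaca-style prompt template is an assumption based on the `python_code_instructions_18k_alpaca` training dataset, and the model path and generation parameters are placeholders.

```python
# Minimal local-inference sketch using llama-cpp-python (pip install llama-cpp-python).
# The prompt template is an ASSUMED Alpaca format inferred from the training
# dataset name; adjust it if the model was trained with a different template.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Input:\n{input}\n\n"
    "### Response:\n"
)

def build_prompt(code: str) -> str:
    """Wrap the code under test in the assumed Alpaca prompt format."""
    return ALPACA_TEMPLATE.format(
        instruction="Write pytest unit tests for the following Python function.",
        input=code,
    )

if __name__ == "__main__":
    # Imported lazily so build_prompt() is usable without llama-cpp installed.
    from llama_cpp import Llama

    llm = Llama(model_path="llama-3-8b.Q4_K_M.gguf", n_ctx=2048)  # local GGUF path
    out = llm(build_prompt("def add(a, b):\n    return a + b"), max_tokens=512)
    print(out["choices"][0]["text"])
```

The prompt builder is kept separate from the inference call so the same formatting can be reused with other runtimes (e.g. the llama.cpp CLI).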
|
## Live Demo
You can try the model in the interactive web app here:
[**Launch Unit Test Generator**](https://huggingface.co/spaces/nihardon/unit-test-generator)