# Fine-Tuned Unit Test Generator (Llama-3-8B)
This repository contains the fine-tuned weights for the Unit Test Generator project.
## Training Details
- **Base Model:** `unsloth/llama-3-8b-bnb-4bit`
- **Dataset:** `iamtarun/python_code_instructions_18k_alpaca`
- **Method:** QLoRA (Quantized Low-Rank Adaptation)
- **Framework:** Unsloth + PyTorch + Hugging Face TRL
## Files
- `llama-3-8b.Q4_K_M.gguf`: the quantized model, optimized for CPU inference (used in the Live Demo).
- `adapter_model.safetensors`: the raw LoRA adapters.
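The GGUF file can be run on CPU with `llama-cpp-python`. A minimal sketch is below; the local file path, context size, and prompt wording are assumptions, but the Alpaca-style instruction template matches the format of the training dataset:

```python
# Sketch: CPU inference with the quantized GGUF via llama-cpp-python.
# Assumptions: llama-cpp-python is installed (pip install llama-cpp-python)
# and the GGUF file has been downloaded locally.

def build_prompt(instruction: str) -> str:
    """Alpaca-style prompt, matching the training dataset's format."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def generate_tests(model_path: str, code: str, max_tokens: int = 256) -> str:
    """Load the GGUF model and generate unit tests for the given code."""
    from llama_cpp import Llama
    llm = Llama(model_path=model_path, n_ctx=2048)  # n_ctx is an assumption
    prompt = build_prompt(f"Write pytest unit tests for this function:\n{code}")
    out = llm(prompt, max_tokens=max_tokens, stop=["### Instruction:"])
    return out["choices"][0]["text"]

if __name__ == "__main__":
    # Prompt construction alone needs no model download:
    print(build_prompt("Write pytest unit tests for add(a, b)."))
```

Pointing `generate_tests("llama-3-8b.Q4_K_M.gguf", ...)` at the downloaded file runs fully offline; no GPU is required for the Q4_K_M quant.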
## Live Demo
You can try the model in the interactive web app here: Launch Unit Test Generator