---
title: Unit Test Generator
emoji: 🧪
colorFrom: indigo
colorTo: purple
sdk: docker
sdk_version: 6.3.0
app_file: app.py
pinned: false
license: openrail
short_description: Fine-tuned Llama-3 for generating Python unit tests
---
# 🧪 AI Unit Test Generator
**A Fine-Tuned Llama-3 Model for Automated QA**
This application uses a custom fine-tuned version of **Meta's Llama-3-8B** to automatically generate `pytest` unit tests for Python functions.
## How It Works
1. **Paste your Python function** into the box.
2. **Click Generate.**
3. The model produces a complete, runnable `pytest` test case.
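As a hypothetical illustration of the workflow, pasting a small function like `add` might yield a self-contained `pytest` test such as the one below (actual model output will vary):

```python
# Example input function pasted by the user:
def add(a, b):
    return a + b

# Example of the kind of pytest test case the model generates:
def test_add():
    assert add(2, 3) == 5
    assert add(-1, 1) == 0
    assert add(0, 0) == 0
```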
## ⚙️ Technical Architecture
To run a robust LLM on the free CPU tier, this project uses a split-architecture approach:
* **Model:** Llama-3-8B fine-tuned on the **Alpaca-Python-18k** dataset.
* **Quantization:** The model weights were converted to **GGUF (Q4_K_M)** format, reducing RAM usage from ~16 GB to ~5 GB.
* **Inference:** Running locally on CPU using `llama-cpp-python`.
* **Weights:** Hosted separately in the [Model Repository](https://huggingface.co/nihardon/fine-tuned-unit-test-generator).
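The inference path described above can be sketched as follows. This is a minimal illustration, not the app's actual code: the Alpaca-style prompt template, the model filename, and the generation parameters are assumptions.

```python
# Hypothetical sketch of CPU inference with llama-cpp-python over
# Q4_K_M GGUF weights. Prompt format and file name are illustrative.
INSTRUCTION = "Write pytest unit tests for the following Python function."

def build_prompt(source_code: str) -> str:
    """Wrap the user's function in an Alpaca-style instruction prompt."""
    return (
        "### Instruction:\n"
        f"{INSTRUCTION}\n\n"
        "### Input:\n"
        f"{source_code}\n\n"
        "### Response:\n"
    )

def generate_tests(source_code: str, model_path: str = "model-q4_k_m.gguf") -> str:
    # Deferred import so the prompt helper works without the native library.
    from llama_cpp import Llama

    # llama-cpp-python memory-maps the quantized GGUF weights and runs on CPU.
    llm = Llama(model_path=model_path, n_ctx=2048, n_threads=4)
    out = llm(build_prompt(source_code), max_tokens=512, stop=["### Instruction:"])
    return out["choices"][0]["text"]
```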
---
*Built by [Nihar Donthireddy](https://www.linkedin.com/in/nihar-donthireddy-048b96277/)*
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference