Update README.md
license: openrail
short_description: Fine-tuned Llama-3 for generating Python unit tests
---

# 🧪 AI Unit Test Generator

**A Fine-Tuned Llama-3 Model for Automated QA**

This application uses a custom fine-tuned version of **Meta's Llama-3-8B** to automatically generate `pytest` unit tests for Python functions.

## 🚀 How it Works

1. **Paste your Python function** into the box.
2. **Click Generate.**
3. The model produces a complete, runnable `pytest` test case.
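For a concrete picture of these steps, here is a hand-written input/output pair. Both the sample function and the generated tests below are illustrative only, not actual model output:

```python
# Illustrative example (hand-written, not actual model output).

# Step 1: a function you might paste into the box.
def divide(a, b):
    """Divide a by b, raising ValueError when b is zero."""
    if b == 0:
        raise ValueError("b must be non-zero")
    return a / b

# Step 3: the kind of runnable pytest module the model is expected
# to return. pytest collects top-level `test_*` functions automatically;
# no import is needed for plain asserts.
def test_divide_basic():
    assert divide(10, 2) == 5

def test_divide_negative():
    assert divide(-6, 3) == -2

def test_divide_by_zero_raises():
    # With pytest installed this would usually be `pytest.raises(ValueError)`.
    try:
        divide(1, 0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```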

## ⚙️ Technical Architecture

To run a robust LLM on the free CPU tier, this project uses a split-architecture approach:

* **Model:** Llama-3-8B fine-tuned on the **Alpaca-Python-18k** dataset.
* **Quantization:** The model weights were converted to **GGUF (Q4_K_M)** format to reduce RAM usage from 16GB to ~5GB.
* **Inference:** Running locally on CPU using `llama-cpp-python`.
* **Weights:** Hosted separately in the [Model Repository](https://huggingface.co/nihardon/fine-tuned-unit-test-generator).
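A minimal sketch of this inference path, using `llama-cpp-python`'s `Llama.from_pretrained` to pull the GGUF weights from the model repo. The prompt template and the `*Q4_K_M.gguf` filename pattern are assumptions; check the repo's file list for the actual name:

```python
def build_prompt(func_source: str) -> str:
    """Wrap a pasted Python function in a test-generation instruction.

    The exact instruction format is an assumption, not the app's real template.
    """
    return (
        "Below is a Python function. Write a complete, runnable pytest "
        "module that tests it, including edge cases.\n\n"
        f"### Function:\n{func_source}\n\n### Tests:\n"
    )


def generate_tests(func_source: str,
                   repo_id: str = "nihardon/fine-tuned-unit-test-generator") -> str:
    """Download the quantized weights and generate pytest code on CPU."""
    from llama_cpp import Llama  # pip install llama-cpp-python

    llm = Llama.from_pretrained(
        repo_id=repo_id,
        filename="*Q4_K_M.gguf",  # glob for the ~5 GB quantized weights (assumed name)
        n_ctx=2048,               # context window for prompt + generated tests
    )
    out = llm(build_prompt(func_source), max_tokens=256, temperature=0.2)
    return out["choices"][0]["text"]
```

Keeping the weights in a separate model repo and downloading them at startup keeps the Space's own repo small; the Q4_K_M quantization is what makes an 8B model fit in free-tier CPU RAM.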

---

*Built by [Nihar Donthireddy](https://www.linkedin.com/in/nihar-donthireddy-048b96277/)*