---
language:
- en
license: other
license_name: qianwen
license_link: https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE
base_model:
- Qwen/Qwen3-4B
tags:
- fine-tuned
- education
- python
- socratic
- qlora
- unsloth
---
# Fyve-AI
**Fyve-AI** is a fine-tuned version of [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) trained for one specific task: reading a student's broken Python code and responding with a Socratic 3-sentence hint, not the answer.
It is the AI model powering [PyFyve](https://github.com/Macmill-340/PyFyve), a fully offline Python tutoring application.
---
## What It Does
Given a task description, the student's buggy code, and the Python error it produced, the model outputs a JSON object with two fields:
- `reasoning`: an internal diagnosis of what went wrong and why
- `hint`: exactly 3 sentences following a fixed structure:
  1. **Diagnosis**: names the specific variable, expression, or construct that caused the error
  2. **Rule**: states the Python rule that was violated
  3. **Directive**: starts with *"Think about..."* or *"Consider..."* and guides without giving the fix
The model never gives corrected code. It never gives more than 3 sentences. It does not know how to do anything outside this task.
---
## Input Format
```
Task:
<what the student was asked to do>
Code:
<the student's broken code>
Error:
<the Python error message>
```
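In code, assembling that input block is a single string template. The `build_prompt` helper below is an illustrative sketch, not part of the released model or app:

```python
def build_prompt(task: str, code: str, error: str) -> str:
    """Assemble the model's expected input block from the three fields."""
    return f"Task:\n{task}\nCode:\n{code}\nError:\n{error}"
```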
---
## Output Format
```json
{
  "reasoning": "...",
  "hint": "Sentence 1.\nSentence 2.\nSentence 3."
}
```
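Because the contract is strict (two keys, three newline-separated sentences), a caller can validate the output before showing it to a student. The `parse_hint` helper below is a minimal sketch of such a check, assuming the raw model output is the JSON object alone:

```python
import json


def parse_hint(raw: str) -> dict:
    """Parse the model's JSON output and enforce the 3-sentence contract."""
    obj = json.loads(raw)
    if set(obj) != {"reasoning", "hint"}:
        raise ValueError("expected exactly the keys 'reasoning' and 'hint'")
    sentences = obj["hint"].split("\n")
    if len(sentences) != 3:
        raise ValueError(f"expected 3 sentences, got {len(sentences)}")
    return obj
```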
---
## Example
**Input:**
```
Task:
Create a variable score = 95 and print its value.
Code:
score = 95
print(Score)
Error:
NameError: name 'Score' is not defined at line 2
```
**Output:**
```json
{
  "reasoning": "score is defined lowercase but Score (capital S) is used in print. I name both spellings, explain case sensitivity, and direct toward comparing the two usages.",
  "hint": "You defined a variable called score on line 1 but referenced Score on line 2.\nIn Python, variable names are case-sensitive, so score and Score are treated as two completely different identifiers.\nConsider whether the capitalisation of the variable name is consistent between where it was defined and where it is used."
}
```
---
## Training Details
| Detail | Value |
|--------|-------|
| Base model | Qwen3-4B |
| Method | QLoRA via [Unsloth](https://github.com/unslothai/unsloth) |
| Hardware | Google Colab T4 (free tier) |
| Dataset | 555 curated (task, code, error, hint) tuples |
| Dataset source | Synthetic: generated using Qwen3-30B-A3B as the teacher model |
| Error types covered | SyntaxError, NameError, TypeError, IndexError, KeyError, ValueError, AttributeError, UnboundLocalError, RecursionError, ZeroDivisionError, and more |
The training data was generated by a 30B teacher model, manually reviewed for quality, and filtered through a validation pipeline that checks hint structure, sentence count, and semantic rules (e.g. AttributeError on strings must guide toward `+` or `+=`, not list conversion).
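The structural part of those checks is straightforward to express. The `hint_is_valid` function below is an illustrative sketch of the sentence-count and directive-opener rules, not the actual validation pipeline:

```python
def hint_is_valid(hint: str) -> bool:
    """Sketch of structural dataset filtering: exactly three
    newline-separated sentences, each ending with a period, with the
    required directive opener on the third."""
    sentences = hint.split("\n")
    if len(sentences) != 3:
        return False
    if not all(s.strip().endswith(".") for s in sentences):
        return False
    return sentences[2].lstrip().startswith(("Think about", "Consider"))
```

The semantic rules (such as the AttributeError-on-strings check) require inspecting the error type alongside the hint text and are not captured by this sketch.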
---
## Intended Use
This model is designed exclusively for use inside the PyFyve app. It is not a general-purpose assistant and will produce poor results for tasks outside its training distribution.
**It is not designed to:**
- Answer general Python questions
- Explain concepts freely
- Write or complete code
- Serve as a chatbot
---
## Limitations
- Trained on 555 examples: covers common beginner and intermediate Python errors well, but unusual or advanced errors may produce weaker hints
- No coverage of logic errors (code that runs but produces wrong output)
- Some uncommon syntax patterns (e.g. trailing comma creating a tuple) are outside the training distribution
- The 3-sentence format is enforced by the prompt at inference time; removing the few-shot examples from the prompt degrades output quality significantly
---
## Usage with Ollama
This model is distributed as a GGUF file for use with [Ollama](https://ollama.com). The `Modelfile` in this repository contains the Ollama model definition.
```bash
ollama create fyve-ai -f Modelfile
```
Or use the PyFyve app, which handles setup automatically.
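Once the model is created, it can also be queried directly over Ollama's local HTTP API (`/api/generate` on port 11434, with `"stream": false` and `"format": "json"` to request a complete JSON response). The helper names below are illustrative; only the endpoint and request fields come from Ollama's documented API:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_payload(task: str, code: str, error: str) -> dict:
    """Assemble a non-streaming request body for Ollama's generate API."""
    return {
        "model": "fyve-ai",  # must match the name used in `ollama create`
        "prompt": f"Task:\n{task}\nCode:\n{code}\nError:\n{error}",
        "stream": False,
        "format": "json",  # ask Ollama to constrain the output to valid JSON
    }


def ask_fyve(task: str, code: str, error: str) -> dict:
    """Send one tutoring request and parse the hint object out of the reply."""
    req = request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(task, code, error)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # The hint object itself is JSON inside Ollama's "response" string
    return json.loads(body["response"])
```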
---
## License
The fine-tuned weights are released under the same license as the base model: the [Qianwen license](https://huggingface.co/Qwen/Qwen3-4B/blob/main/LICENSE).
Please read it before redistributing: it permits research and personal use but restricts commercial use above certain usage thresholds.
---
## Citation
If you use this model in research or build on it, please link back to the [PyFyve repository](https://github.com/Macmill-340/PyFyve).