enwrit committed commit 6723be3 (verified · parent: 408c4eb)

Upload README.md with huggingface_hub

---
license: mit
language:
- en
tags:
- instruction-quality
- lint
- code-review
- agent-instructions
- gguf
- qwen3.5
base_model: Qwen/Qwen3.5-0.8B
pipeline_tag: text-generation
library_name: llama-cpp-python
model-index:
- name: writ-lint-0.8B
  results: []
---

# writ-lint-0.8B

A fine-tuned [Qwen3.5-0.8B](https://huggingface.co/Qwen/Qwen3.5-0.8B) model that evaluates the quality of AI agent instructions and generates actionable improvement feedback.

Part of the **Tier 2.5 hybrid architecture** in [enwrit](https://github.com/enwrit/writ) -- the communication layer for AI agents.

## How It Works

This model is one half of a hybrid scoring system:

1. **LightGBM** (bundled in the `enwrit` CLI) predicts the headline score plus 6 dimension scores (~1 ms)
2. **writ-lint-0.8B** (this model) generates issues (ERROR/WARNING/INFO) and improvement suggestions, using the instruction text and the LightGBM-predicted scores as context

The model focuses entirely on generating actionable feedback, not scores. The scores from LightGBM are passed in the prompt so the model can target weak dimensions.

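The two stages compose by rendering stage 1's scores into stage 2's prompt. A minimal sketch of that assembly step follows; the helper name and wiring are illustrative, not the actual enwrit internals (the prompt layout mirrors the example in the usage section):

```python
def build_lint_context(instruction_text: str, headline: int, dims: dict) -> str:
    """Render LightGBM-predicted scores plus the instruction into the
    user prompt for the lint model (illustrative helper, not enwrit's)."""
    dim_line = " | ".join(f"{name}: {score}" for name, score in dims.items())
    return (
        "## Instruction to evaluate\n\n"
        f"{instruction_text}\n\n"
        "## Quality scores (predicted)\n\n"
        f"Headline: {headline}/100\n"
        f"{dim_line}\n\n"
        'Analyze the instruction. Return JSON with "issues" (level + message) '
        'and "suggestions".'
    )

ctx = build_lint_context(
    "Always run tests before committing.",
    52,
    {"Clarity": 58, "Structure": 65, "Coverage": 42,
     "Brevity": 71, "Examples": 28, "Verification": 35},
)
print(ctx.splitlines()[0])  # -> ## Instruction to evaluate
```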
## Usage

### Via the enwrit CLI (recommended)

```bash
pip install enwrit
pip install llama-cpp-python  # CPU inference, ~10s per instruction

writ lint AGENTS.md --deep-local
```

The model is auto-downloaded to `~/.writ/models/` on first use.

### Direct inference with llama-cpp-python

```python
from llama_cpp import Llama
import json

model = Llama(
    model_path="writ-lint-0.8B-Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # GPU acceleration (0 for CPU-only)
    verbose=False,
)

system_prompt = (
    "You are an expert instruction quality evaluator. Given an instruction "
    "and its quality scores, generate specific issues and improvement suggestions."
)

user_prompt = """## Instruction to evaluate

{instruction_text}

## Quality scores (predicted)

Headline: 52/100
Clarity: 58 | Structure: 65 | Coverage: 42 | Brevity: 71 | Examples: 28 | Verification: 35

Analyze the instruction. Return JSON with "issues" (level + message) and "suggestions"."""

# create_chat_completion applies the model's chat template from the GGUF
# metadata and, unlike create_completion, accepts response_format for
# constrained JSON decoding.
output = model.create_chat_completion(
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
    max_tokens=1024,
    temperature=0.3,
    response_format={"type": "json_object"},
)

feedback = json.loads(output["choices"][0]["message"]["content"])
print(json.dumps(feedback, indent=2))
```

## Output Format

```json
{
  "issues": [
    {"level": "ERROR", "message": "Missing concrete code examples for error handling patterns."},
    {"level": "WARNING", "message": "Verification steps are subjective rather than binary."},
    {"level": "INFO", "message": "Consider adding a 'Rules' section for behavioral constraints."}
  ],
  "suggestions": [
    "Add a 'Code Examples' section with 'Good vs Bad' patterns for the most critical rules.",
    "Replace subjective verification with specific CLI commands (e.g., `pytest`, `ruff check`).",
    "Include numeric thresholds for measurable constraints (e.g., 'max 100 lines per function')."
  ]
}
```

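To make the contract concrete, here is a minimal, hypothetical consumer of this JSON -- for example a CI gate that prints every issue but fails only on ERROR-level ones. The payload is abridged from the sample above; nothing here is part of the enwrit CLI itself:

```python
import json

# Abridged sample payload in the documented output format.
raw = """
{
  "issues": [
    {"level": "ERROR", "message": "Missing concrete code examples for error handling patterns."},
    {"level": "WARNING", "message": "Verification steps are subjective rather than binary."}
  ],
  "suggestions": [
    "Add a 'Code Examples' section with 'Good vs Bad' patterns for the most critical rules."
  ]
}
"""

feedback = json.loads(raw)
errors = [i for i in feedback["issues"] if i["level"] == "ERROR"]

for issue in feedback["issues"]:
    print(f"[{issue['level']}] {issue['message']}")

# Non-zero exit status when any ERROR-level issue is present.
exit_code = 1 if errors else 0
```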
## Training Details

| Parameter | Value |
|---|---|
| Base model | [Qwen/Qwen3.5-0.8B](https://huggingface.co/Qwen/Qwen3.5-0.8B) |
| Method | LoRA (r=32, alpha=64, dropout=0) via [Unsloth](https://github.com/unslothai/unsloth) |
| Training data | 6,536 Tier 3 AI evaluations (Gemini-scored instructions) |
| Issues in training data | 30,830 (avg 4.7 per instruction) |
| Suggestions in training data | 19,602 (avg 3.0 per instruction) |
| Non-coding examples | 145 seed instructions across 15 domains |
| Epochs | 1 |
| Batch size | 1 (gradient accumulation: 16, effective batch: 16) |
| Max sequence length | 4096 tokens |
| Learning rate | 2e-4 (cosine schedule, 10% warmup) |
| Precision | bf16 |
| Quantization | Q4_K_M (GGUF) |
| Training hardware | NVIDIA RTX 5090 (32GB VRAM) |
| Training time | ~5.5 hours |

## Evaluation

Compared against retrieval-based approaches (v1/v2/v3) on a held-out validation set:

| Approach | Relevance | All-Feedback Specificity | Issues/Instruction | Type |
|---|---|---|---|---|
| v1_shap_knn | 0.157 | 0.129 | N/A | retrieval |
| v2_hybrid | 0.266 | 0.136 | N/A | retrieval |
| v3_tfidf | 0.262 | 0.144 | N/A | retrieval |
| **writ-lint-0.8B** | **0.236** | **0.364** | **4.7** | **generative** |

Key strengths:

- 100% JSON parse success (via constrained decoding)
- Generates novel, context-specific feedback (not limited to seen examples)
- Weak-dimension targeting: 0.47 (issues correlate with low-scoring dimensions)
- Low domain mismatch: 0.014 (doesn't give coding feedback to non-coding instructions)

## Scoring Dimensions

The 6 quality dimensions (scored by LightGBM, targeted by this model):

| Dimension | What it measures |
|---|---|
| **Clarity** | Unambiguous language, precise terminology, defined jargon |
| **Structure** | Logical sections, hierarchy, scannable formatting |
| **Coverage** | Completeness of rules, edge cases, responsibilities |
| **Brevity** | Concise without sacrificing meaning, no redundancy |
| **Examples** | Code samples, input/output patterns, good vs bad |
| **Verification** | Testable criteria, CLI commands, specific thresholds |

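As a rough illustration of weak-dimension targeting, a downstream consumer might flag any dimension scoring below a cutoff and focus feedback there. The helper and the threshold value below are hypothetical, not part of enwrit; the scores are the ones from the example prompt:

```python
def weak_dimensions(scores: dict, threshold: int = 50) -> list:
    """Return dimension names scoring below an (illustrative) threshold."""
    return sorted(name for name, score in scores.items() if score < threshold)

# Dimension scores from the example prompt in the usage section.
scores = {"Clarity": 58, "Structure": 65, "Coverage": 42,
          "Brevity": 71, "Examples": 28, "Verification": 35}
print(weak_dimensions(scores))  # -> ['Coverage', 'Examples', 'Verification']
```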
## Files

- `writ-lint-0.8B-Q4_K_M.gguf` -- Quantized model for inference (504 MB)

## Links

- [enwrit CLI](https://github.com/enwrit/writ) -- Open-source CLI tool
- [enwrit.com](https://enwrit.com) -- Platform with Hub, AI scoring, and more
- [PyPI](https://pypi.org/project/enwrit/) -- `pip install enwrit`

## License

MIT