---
library_name: transformers
base_model: akoumpa/Devstral-Small-2-24B-Instruct-2512-BF16
tags:
- text-generation-inference
- transformers
- unsloth
- mistral3
- code
- fsharp
- svelte
- typescript
- dotnet
- docker
- kubernetes
license: apache-2.0
language:
- en
datasets:
- odytrice/kenichi-sft
pipeline_tag: text-generation
---

# Kenichi Flash — Domain-Specialized Coding Assistant (24B)

Kenichi Flash is a fast, agentic coding model fine-tuned from [Devstral Small 2 24B](https://huggingface.co/akoumpa/Devstral-Small-2-24B-Instruct-2512-BF16) for domain-specialized code generation.

## Model Details

### Model Description

Kenichi Flash is a text-only coding model specialized in F#, .NET, Svelte 5, TypeScript, Docker, and Kubernetes development. It was created through multi-teacher distillation from five frontier models, with all F# samples verified by the F# compiler, and is optimized for fast agentic coding workflows.

- **Developed by:** [odytrice](https://huggingface.co/odytrice)
- **Model type:** Causal Language Model (Text Generation), LoRA fine-tuned
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** [akoumpa/Devstral-Small-2-24B-Instruct-2512-BF16](https://huggingface.co/akoumpa/Devstral-Small-2-24B-Instruct-2512-BF16)

### Model Sources

- **Repository:** [github.com/odytrice/models](https://github.com/odytrice/models)
- **Training Dataset:** [odytrice/kenichi-sft](https://huggingface.co/datasets/odytrice/kenichi-sft)
- **GGUF Quantizations:** [odytrice/kenichi-flash-GGUF](https://huggingface.co/odytrice/kenichi-flash-GGUF)

## Uses

### Direct Use

Kenichi Flash is designed as a coding assistant for the following domains:

- **F#** — core language, FsToolkit, Giraffe, Akka.NET, linq2db, Farmer, FAKE
- **.NET / ASP.NET** — web APIs, Minimal API, middleware, dependency injection
- **Svelte 5 / SvelteKit** — runes (`$state`, `$derived`, `$effect`), server routes, form actions
- **TypeScript** — type-safe patterns, generics, utility types
- **Docker & Kubernetes** — Dockerfiles, Compose, Helm charts, deployments, services
- **Agentic SWE** — tool use, multi-step reasoning, code review, debugging workflows

### Downstream Use

Suitable for integration into:

- AI coding assistants and IDE plugins
- Agentic coding pipelines
- Code review and refactoring tools
- Documentation generation from code

### Out-of-Scope Use

- General-purpose chat (the model is specialized for coding tasks)
- Languages and frameworks outside the training domains
- Safety-critical code generation without human review

## Bias, Risks, and Limitations

- The model is specialized for a narrow set of technologies. Performance on other programming languages or frameworks may be worse than the base Devstral model.
- Training data was generated by teacher models (MiniMax M2.7, Kimi K2.5, DeepSeek R1, GLM-5, Nvidia Nemotron) and may inherit their biases.
- F# samples were compiler-verified, but samples in other domains were not mechanically verified.
- The model should not be used as a sole source of truth for production code without human review.

### Recommendations

Users should validate all generated code, especially for security-sensitive applications. The model performs best when given detailed, domain-specific prompts within its specialization areas.

## How to Get Started with the Model

Use the following system prompt for best results:

> You are Kenichi, an expert coding assistant specialized in F#, .NET, Svelte 5, SvelteKit, TypeScript, Docker, and Kubernetes. You write clean, idiomatic, and well-structured code with clear explanations.

### Python

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "odytrice/kenichi-flash",
    dtype="bfloat16",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("odytrice/kenichi-flash")

messages = [
    {"role": "system", "content": "You are Kenichi, an expert coding assistant specialized in F#, .NET, Svelte 5, SvelteKit, TypeScript, Docker, and Kubernetes. You write clean, idiomatic, and well-structured code with clear explanations."},
    {"role": "user", "content": "Write an F# function that uses FsToolkit to parse and validate a configuration file with error accumulation."}
]

inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True).to(model.device)
outputs = model.generate(inputs, max_new_tokens=2048, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

### Ollama

```bash
ollama run odytrice/kenichi-flash:32gb
```

Available tags: `:24gb` (Q4_K_M), `:32gb` (Q5_K_M), `:48gb` (Q8_0), `:96gb` (Q8_0), `:full` (F16)

## Training Details

### Training Data

[odytrice/kenichi-sft](https://huggingface.co/datasets/odytrice/kenichi-sft) — 7,953 samples across 7 domains, generated via multi-teacher distillation.

| Domain | Samples | % |
|--------|---------|---|
| F# (core + libraries) | 3,913 | 49.2% |
| Svelte 5 / TypeScript | 1,200 | 15.1% |
| Docker / Kubernetes | 800 | 10.1% |
| .NET / ASP.NET | 750 | 9.4% |
| Agentic SWE | 640 | 8.0% |
| Cross-domain | 400 | 5.0% |
| General coding | 250 | 3.1% |
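
As a quick sanity check, the per-domain percentages follow directly from the sample counts in the table above:

```python
# Domain sample counts from the training-data table; total is 7,953.
counts = {
    "F# (core + libraries)": 3913,
    "Svelte 5 / TypeScript": 1200,
    "Docker / Kubernetes": 800,
    ".NET / ASP.NET": 750,
    "Agentic SWE": 640,
    "Cross-domain": 400,
    "General coding": 250,
}
total = sum(counts.values())
shares = {domain: round(100 * n / total, 1) for domain, n in counts.items()}
print(total)                            # 7953
print(shares["F# (core + libraries)"])  # 49.2
```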

#### Teacher Models

| Teacher | Contribution |
|---------|-------------|
| MiniMax M2.7 | 42.0% |
| Kimi K2.5 | 27.2% |
| DeepSeek R1 | 14.9% |
| GLM-5 | 9.6% |
| Nvidia Nemotron | 6.3% |

All F# samples were verified by the F# compiler (`dotnet fsi` / `dotnet build`).

### Training Procedure

#### Preprocessing

- Training data formatted in Mistral instruct format, with the system prompt injected at training time
- Chat template applied via Unsloth's `get_chat_template(tokenizer, chat_template="mistral")`
- Packing enabled for efficient sequence utilization

#### Training Hyperparameters

- **Training regime:** BF16 mixed precision
- **Method:** LoRA (rank 16, alpha 32, dropout 0.0)
- **Trainable parameters:** 101.4M (0.42% of 24.1B)
- **Epochs:** 1
- **Effective batch size:** 8 (micro batch 1 × gradient accumulation 8)
- **Learning rate:** 1e-4 (cosine schedule, 5% warmup)
- **Weight decay:** 0.01
- **Optimizer:** AdamW 8-bit
- **Max sequence length:** 131,072
- **Packing:** Enabled
- **Attention:** eager (flex_attention requires torch 2.6+)
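
The trainable-parameter fraction and effective batch size quoted above are easy to re-derive (a quick check, not part of the training code):

```python
# LoRA adapter size vs. base model size, as reported above.
trainable = 101.4e6  # 101.4M LoRA parameters
base = 24.1e9        # 24.1B total parameters
fraction = round(100 * trainable / base, 2)
print(fraction)  # 0.42

# Effective batch size: micro batch 1 with 8 gradient-accumulation steps.
effective_batch = 1 * 8
print(effective_batch)  # 8
```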

#### LoRA Target Modules

`q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`

#### Speeds, Sizes, Times

- **Training time:** 1 hour 44 minutes
- **Steps:** 945
- **Speed:** 6.63 seconds/step
- **Final train loss:** ~0.40
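
The wall-clock figure is consistent with the step count and per-step speed:

```python
# Total training time from steps × seconds per step.
steps = 945
seconds_per_step = 6.63
total_seconds = steps * seconds_per_step  # 6265.35 s
hours, rem = divmod(total_seconds, 3600)
minutes = rem / 60
print(int(hours), round(minutes))  # 1 44
```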

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

397 held-out validation samples from [odytrice/kenichi-sft](https://huggingface.co/datasets/odytrice/kenichi-sft) (`mistral_val` split).

#### Metrics

- **Training loss:** ~0.40 (1 epoch)

### Results

Formal evaluation on the held-out validation set is pending.

## Environmental Impact

- **Hardware Type:** NVIDIA A100 SXM 80GB
- **Hours used:** 1.7
- **Cloud Provider:** RunPod
- **Compute Region:** US
- **Carbon Emitted:** Estimated ~0.5 kg CO2eq
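
The carbon figure is in line with a back-of-the-envelope estimate. The ~400 W draw and ~0.4 kg CO2eq/kWh grid intensity below are assumptions, not measured values:

```python
# Rough estimate: GPU power draw × hours × grid carbon intensity.
gpu_kw = 0.4     # assumed A100 SXM draw (~400 W TDP)
hours = 1.7      # from the card above
intensity = 0.4  # assumed kg CO2eq per kWh (approximate US grid average)
emissions = gpu_kw * hours * intensity
print(round(emissions, 2))  # 0.27 kg CO2eq from the GPU alone
```

Adding datacenter overhead (PUE and host power) brings the total toward the stated ~0.5 kg.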

## Technical Specifications

### Model Architecture and Objective

Devstral Small 2 (Mistral 3 architecture):

- **40 layers**, 5120 hidden size, 32 attention heads, 8 KV heads
- **Total parameters:** 24.1B
- **Vocab size:** 131,072 tokens
- **Context length:** 262,144 tokens (base model)
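
With 8 KV heads against 32 query heads (4:1 grouped-query attention), the KV cache stays compact. The sketch below assumes `head_dim = hidden_size / num_heads` and a bf16 cache:

```python
# Per-token KV cache size under grouped-query attention.
layers, hidden, heads, kv_heads = 40, 5120, 32, 8
head_dim = hidden // heads  # 160, assuming head_dim = hidden / heads
bytes_per_elem = 2          # bf16

# K and V tensors, per token, across all layers
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_elem
print(kv_bytes_per_token // 1024)  # 200 KiB per token
```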

### Compute Infrastructure

#### Hardware

NVIDIA A100 SXM 80GB (single GPU)

#### Software

- PyTorch 2.5.1 + CUDA 12.4
- Transformers 5.3.0
- Unsloth 2026.3.11
- TRL 0.24

This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

## Related Models

- **[Kenichi Thinking](https://huggingface.co/odytrice/kenichi-thinking)** — a Qwen3.5-27B VL variant with vision capabilities, optimized for planning agents

## Model Card Authors

[odytrice](https://huggingface.co/odytrice)

## Model Card Contact

[odytrice](https://huggingface.co/odytrice)