Update README.md
README.md
CHANGED
metrics:
- code_eval
new_version: OpceanAI/Yuuki-v0.1
---

<div align="center">

<br>

<img src="https://img.shields.io/badge/%E2%9C%A6-YUUKI--BEST-000000?style=for-the-badge&labelColor=000000" alt="Yuuki Best" height="50">

<br><br>

# The Best Checkpoint of the $0 Phone-Trained LLM

**Strongest initial model trained entirely on a smartphone.**<br>
**GPT-2 architecture. Checkpoint 2000. 146% improvement over checkpoint 1400.**

<br>

<a href="#features"><img src="https://img.shields.io/badge/FEATURES-000000?style=for-the-badge" alt="Features"></a>
<a href="https://huggingface.co/spaces/OpceanAI/Yuuki"><img src="https://img.shields.io/badge/LIVE_DEMO-000000?style=for-the-badge" alt="Demo"></a>
<a href="https://github.com/sponsors/aguitauwu"><img src="https://img.shields.io/badge/SPONSOR-000000?style=for-the-badge" alt="Sponsor"></a>

<br><br>

[License](LICENSE)
[GPT-2 base model](https://huggingface.co/openai-community/gpt2)
[The Stack dataset](https://huggingface.co/datasets/bigcode/the-stack)
[PyTorch](https://pytorch.org/)
[Transformers](https://huggingface.co/docs/transformers)

<br>

---

<br>

</div>

## ⚠️ Important Disclaimer

**Yuuki-best** is the **strongest checkpoint** of the Yuuki project at checkpoint 2000 (5.3% training progress). This is an **early-stage research snapshot**, not a production-ready model.

- 🔬 **Research project** - Exploring mobile-based LLM training
- 📱 **Single-person effort** - Trained entirely on a smartphone
- 📄 **Research paper coming** - Full methodology and findings
- 🚧 **Early development** - Performance will improve significantly in v0.1

<br>

---

<br>

<div align="center">

## What is Yuuki-best?

</div>

<br>

**Yuuki-best** is the **best checkpoint** (step 2000) of the Yuuki code generation model: a multilingual LLM trained entirely on a **Redmi 12 smartphone** with **zero cloud budget**. This checkpoint shows major qualitative improvements over earlier versions, with a **146% average score increase** and clear evidence of real language learning.

The model is based on the **GPT-2 architecture** (82M parameters) and has been trained on **The Stack** dataset with 75,000 code examples. This checkpoint demonstrates:

- ✅ **Functional training pipeline** - Proven to work on mobile CPU
- ✅ **Real language learning** - Generates actual Agda imports and structures
- ✅ **Structured code outputs** - Syntactic scaffolding emerging
- ✅ **Measurable progress** - 146% improvement in just 1.6% more training

<br>

---
<br>

<div align="center">

## Features

</div>

<br>

<table>
<tr>
<td width="50%" valign="top">

**Best Initial Checkpoint**

Checkpoint 2000 represents the strongest model snapshot so far, with clear improvements in code structure, language awareness, and quality scores compared to earlier checkpoints.

<br>

**Real Language Learning**

Generates genuine Agda imports (Cubical, Data.Nat, Function) and shows early understanding of language-specific tokens and patterns across multiple programming languages.

<br>

**Transparent Evaluation**

Unfiltered generation samples showing both successes and limitations. Honest assessment of current capabilities at ~5% training progress with clear quality metrics.

<br>

**146% Quality Improvement**

Average evaluation score increased from 10/100 (checkpoint 1400) to 24.6/100 (checkpoint 2000) despite only 1.6% additional training, demonstrating rapid early learning.

</td>
<td width="50%" valign="top">

**Zero-Budget Training**

Trained on a $150 Android phone with no cloud compute, no GPU acceleration, and no data center infrastructure. Proof that AI training is accessible to everyone.

<br>

**Multiple Quantized Formats**

Available in GGUF format with multiple quantization levels (Q4_0, Q4_K_M, Q5_K_M, Q8_0, F32) for efficient CPU and mobile inference.

<br>

**Early Language Specialization**

Due to alphabetical dataset ordering, the model is strongest in Agda (55/100), with C, Assembly, and other languages progressively catching up.

<br>

**Part of a Complete Ecosystem**

Integrated with CLI tools (yuy, yuy-chat), web interfaces (Yuuki-chat, Yuuki-web), and comprehensive documentation for easy deployment.

</td>
</tr>
</table>

<br>

---
<br>

<div align="center">

## Checkpoint Comparison

</div>

<br>

### Performance Metrics

| Metric | Checkpoint 1400 | Checkpoint 2000 | Improvement |
|:-------|:----------------|:----------------|:------------|
| **Training Progress** | 1,400 / 37,500 (3.7%) | 2,000 / 37,500 (5.3%) | +1.6% |
| **Average Loss** | 1.70 → 2.23 | 1.69 → 2.31 | Similar |
| **Training Speed** | ~100 sec/step | ~86 sec/step | **14% faster** |
| **Model Size** | 988 MB | 988 MB | Same |
| **Evaluated Languages** | 5 languages | 5 languages | Same |

<br>

### Language Evaluation Scores

<table>
<tr>
<td width="50%" valign="top">

**Checkpoint 1400**

| Language | Score |
|:---------|:------|
| Agda | 20 / 100 |
| C | 8 / 100 |
| Assembly | 2 / 100 |
| **Average** | **10 / 100** |

</td>
<td width="50%" valign="top">

**Checkpoint 2000 (Yuuki-best)**

| Language | Score |
|:---------|:------|
| Agda | 55 / 100 |
| C | 20 / 100 |
| Assembly | 15 / 100 |
| **Average** | **24.6 / 100** |

</td>
</tr>
</table>

<br>

**📈 +146% average improvement** despite minimal additional training (≈1.6% more steps).
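The headline improvement figure can be checked with a few lines of arithmetic. This is a sketch: only three of the five evaluated language scores are listed in the tables, so it uses the reported averages (10 and 24.6) directly.

```python
# Per-language scores for the three languages listed in the tables above.
old_scores = {"Agda": 20, "C": 8, "Assembly": 2}    # checkpoint 1400
new_scores = {"Agda": 55, "C": 20, "Assembly": 15}  # checkpoint 2000

old_avg = sum(old_scores.values()) / len(old_scores)  # 10.0, matches the reported average
new_avg = 24.6  # reported average over all 5 evaluated languages

improvement = (new_avg - old_avg) / old_avg * 100
print(f"+{improvement:.0f}%")  # +146%
```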
<br>

---

<br>

<div align="center">

## Code Quality Evolution

</div>

<br>

### 🔻 Checkpoint 1400 – Early, Noisy, Unstable

<table>
<tr>
<td width="50%" valign="top">

**Agda Test**

```agda
BLAS : Bad memory unallocation! : 50 0x72ec90a000
module Main where

open import Data.PropositionalEquality using (PropositionalEquality;
data import Data.PropositionalEquality using (Propos;
open-Equality -PropositionalEquality) using (Prop-} (Prop-Prop-ElAdvertisements)
```

❌ Severe corruption, mixed tokens, runtime noise

</td>
<td width="50%" valign="top">

**C Test**

```c
int main() {
prostitution of '/*' (prostitution of '*'
prost = $0)
```

❌ No syntactic coherence, random token associations

</td>
</tr>
<tr>
<td width="50%" valign="top">

**Assembly Test**

```asm
mov eax,

the International Press S.
```

❌ No instruction continuation

</td>
<td width="50%" valign="top">

**Python Test**

```python
def hello():
open program
open import HUnit.H.P.B.P.P-E
```

❌ Python not learned, heavy cross-language contamination

</td>
</tr>
</table>

<br>

### 🔺 Checkpoint 2000 – Structured, Language-Aware (Best)

<table>
<tr>
<td width="50%" valign="top">

**Agda Test** ✅

```agda
module Main where
open import data
```

**Strong familiarity with real Agda modules and imports**

</td>
<td width="50%" valign="top">

**C Test** ⚠️

```c
int main() {
}
run run
```

**Syntax unstable, but structure and intent present**

</td>
</tr>
<tr>
<td width="50%" valign="top">

**Assembly Test** 🌱

```asm
mov eax, "v2.2"
ov eax, "v2._v1"
ov eax, "v3"
```

**Token-level familiarity with assembly-like patterns**

</td>
<td width="50%" valign="top">

**JavaScript Test** ⚠️

```javascript
function test() {
(a=0; i++) {
.-1 . A
```

**Logic broken, but syntactic scaffolding emerging**

</td>
</tr>
</table>

<br>

---
<br>

<div align="center">

## Interpretation

</div>

<br>

These outputs confirm that Yuuki at checkpoint 2000:

<table>
<tr>
<td width="50%" valign="top">

**What It Learned**

- 📚 **Real libraries** - Actual Agda modules (Data.Nat, Cubical, Function)
- 🏗️ **Early structure** - Syntactic scaffolding before correctness
- 🎯 **Language tokens** - Recognizes language-specific patterns
- 📈 **Gradual improvement** - Natural learning progression

</td>
<td width="50%" valign="top">

**What It Shows**

- ✅ **Dataset ordering effects** - Early languages learned first
- ✅ **Real progress** - 146% quality improvement
- ✅ **Functional pipeline** - Training works on mobile CPU
- ✅ **Healthy behavior** - Expected at ~5% training progress

</td>
</tr>
</table>

<br>

**This behavior is expected and healthy at ~5% total training.**

<br>

---

<br>

<div align="center">

## Key Takeaway

</div>

<br>

Between **3.7% → 5.3%** training progress, Yuuki shows:

- ✅ **Major qualitative gains** - 146% improvement in evaluation scores
- ✅ **Clear specialization trends** - Strong Agda performance emerging
- ✅ **Rapid early learning** - Despite CPU-only constraints

This validates the project's core claim:

> **Progress is real, measurable, and reproducible, even at $0 cost.**

<br>

---
<br>

<div align="center">

## Available Formats

</div>

<br>

### GGUF Quantized Models

Optimized for CPU inference with llama.cpp and Ollama.

| Format | Size | Use Case | Quality |
|:-------|:-----|:---------|:--------|
| **yuuki-best-f32.gguf** | ~328 MB | Full precision baseline | Best |
| **yuuki-best-q8_0.gguf** | ~87 MB | High quality, smaller size | Excellent |
| **yuuki-best-q5_k_m.gguf** | ~56 MB | Balanced quality/size | Very Good |
| **yuuki-best-q4_k_m.gguf** | ~47 MB | Good quality, efficient | Good |
| **yuuki-best-q4_0.gguf** | ~46 MB | Most efficient, fast | Good |

<br>
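As a rough guide to the size trade-off, here is a small sketch computing how much smaller each quantized file is relative to the full-precision F32 baseline, using the approximate sizes from the table above:

```python
# Approximate file sizes in MB, taken from the formats table above.
sizes_mb = {"f32": 328, "q8_0": 87, "q5_k_m": 56, "q4_k_m": 47, "q4_0": 46}

base = sizes_mb["f32"]
for name, mb in sizes_mb.items():
    # e.g. q4_0 is about 7.1x smaller than the F32 baseline
    print(f"{name:7s} {mb:4d} MB  ({base / mb:.1f}x smaller than F32)")
```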
---

<br>

<div align="center">

## Usage

</div>

<br>

### With Transformers (PyTorch)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("OpceanAI/Yuuki-best")
tokenizer = AutoTokenizer.from_pretrained("OpceanAI/Yuuki-best")

# Generate code (do_sample=True is needed for temperature to take effect)
prompt = "module Main where"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
code = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(code)
```

<br>

### With llama.cpp (GGUF)

```bash
# Run inference with a quantized model
./llama.cpp/main -m yuuki-best-q4_k_m.gguf \
  -p "module Main where" \
  -n 50 \
  -t 4 \
  --temp 0.7
```

<br>

### With Ollama

```bash
# Create a Modelfile
cat > Modelfile << EOF
FROM ./yuuki-best-q4_k_m.gguf

TEMPLATE """{{ .Prompt }}"""

PARAMETER temperature 0.7
PARAMETER top_p 0.9
EOF

# Import and run
ollama create yuuki-best -f Modelfile
ollama run yuuki-best "module Main where"
```

<br>

---
<br>

<div align="center">

## Training Configuration

</div>

<br>

<table>
<tr>
<td width="50%" valign="top">

**Hardware**

| Component | Specification |
|:----------|:--------------|
| Device | Redmi 12 (Android phone) |
| CPU | Snapdragon 685 (8-core ARM) |
| RAM | 6 GB |
| Storage | 128 GB |
| Training Mode | CPU only |
| Cost | **$0.00** |

</td>
<td width="50%" valign="top">

**Model Parameters**

| Parameter | Value |
|:----------|:------|
| Base Model | GPT-2 (82M parameters) |
| Dataset | The Stack + Yuuki-dataset |
| Checkpoint | 2000 / 37,500 steps |
| Progress | 5.3% |
| Training Speed | ~86 sec/step |
| Loss Range | 1.69 → 2.31 |

</td>
</tr>
</table>

<br>
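At the reported ~86 seconds per step, the remaining run length can be projected with simple arithmetic. This is a back-of-the-envelope sketch; real wall-clock time will vary with thermals, throttling, and interruptions:

```python
total_steps = 37_500
sec_per_step = 86  # reported speed at checkpoint 2000

total_seconds = total_steps * sec_per_step  # 3,225,000 s for the full run
days = total_seconds / 86_400               # 86,400 seconds per day
print(f"~{days:.0f} days of continuous CPU training")  # ~37 days
```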
---

<br>

<div align="center">

## Philosophy

</div>

<br>

> **"Progress is real, measurable, and reproducible, even at $0 cost."**

This checkpoint proves:

- ✅ **LLM training works on mobile** - Real learning on consumer hardware
- ✅ **Quality improves rapidly** - 146% gain with minimal training
- ✅ **$0 budget is viable** - No cloud compute needed
- ✅ **Anyone can contribute** - Breaking barriers to AI development

<br>

---
<br>

<div align="center">

## Related Projects

</div>

<br>

| Project | Description |
|:--------|:------------|
| [Yuuki-v0.1](https://huggingface.co/OpceanAI/Yuuki-v0.1) | Latest release version (coming soon) |
| [Yuuki-3.7](https://huggingface.co/OpceanAI/Yuuki-3.7) | Intermediate checkpoint model |
| [yuy](https://github.com/YuuKi-OS/yuy) | CLI for downloading, managing, and running Yuuki models |
| [yuy-chat](https://github.com/YuuKi-OS/yuy-chat) | TUI chat interface for local AI conversations |
| [Yuuki-chat](https://github.com/YuuKi-OS/Yuuki-chat) | Web-based chat interface with research modes |
| [Yuuki-web](https://github.com/YuuKi-OS/Yuuki-web) | Official landing page and project showcase |
| [yuuki-training](https://github.com/YuuKi-OS/yuuki-training) | Training code and scripts |
| [Yuuki Space](https://huggingface.co/spaces/OpceanAI/Yuuki) | Web-based interactive demo |

<br>

---

<br>

<div align="center">

## Links

</div>

<br>

<div align="center">

[Model on Hugging Face](https://huggingface.co/OpceanAI/Yuuki-best)
[Live Demo](https://huggingface.co/spaces/OpceanAI/Yuuki)
[Training Code](https://github.com/YuuKi-OS/yuuki-training)

<br>

[yuy CLI](https://github.com/YuuKi-OS/yuy)
[yuy-chat](https://github.com/YuuKi-OS/yuy-chat)
[Sponsor](https://github.com/sponsors/aguitauwu)

</div>

<br>

---

<br>

<div align="center">

## Community

</div>

<br>

Join the Yuuki community:

- 💬 [Discord Server](https://discord.gg/j8zV2u8k) - Chat with other users and contributors
- 🐦 [Twitter Updates](https://twitter.com/aguitauwu) - Follow development progress
- ⭐ [GitHub](https://github.com/aguitauwu) - Star repos and contribute
- 💖 [GitHub Sponsors](https://github.com/sponsors/aguitauwu) - Support the project

<br>

---

<br>

<div align="center">

## Acknowledgments

</div>

<br>

- **HuggingFace** - Infrastructure and the transformers library
- **BigCode** - The Stack dataset
- **The ML community** - For inspiration and support
- **Everyone following along** - Your interest makes this worthwhile

<br>

---

<br>

<div align="center">

## License

</div>

<br>

```
Apache License 2.0

Copyright (c) 2026 Yuuki Project

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

**You can use Yuuki commercially, modify it, and distribute it. Just give credit.** ✅

<br>

---

<br>

<div align="center">

**Built with patience, a phone, and zero budget.**

<br>

[OpceanAI on Hugging Face](https://huggingface.co/OpceanAI)

<br>

*The best checkpoint so far. More improvements coming soon.* 🌸

</div>