# Final Project README - MINDI 1.0 420M (Windows, RTX 4060 8GB)

## What This Project Is
This is a fully local coding-assistant model system built step by step from scratch.
It supports:
- a custom tokenizer for code
- a dataset cleaning and tokenization pipeline
- a 420M-parameter transformer model
- memory-optimized training
- evaluation and inference improvements
- a local chat UI
- LoRA fine-tuning
- INT8 export and a portable package

After setup, everything runs locally on your machine with no internet connection required.

---

## What You Built (High Level)
1. **Project setup** with reproducible environment and verification scripts.
2. **Custom code tokenizer** (Python + JavaScript aware).
3. **Dataset pipeline** with cleaning, dedupe, and tokenization.
4. **420M transformer architecture** (modular config).
5. **Training pipeline** (FP16, checkpointing, accumulation, resume, early stopping).
6. **Evaluation system** (val metrics + generation checks).
7. **Inference engine** (greedy mode, stop rules, syntax-aware retry).
8. **Local chat interface** with history, copy button, timing, and mode selector.
9. **LoRA fine-tuning pipeline** for your own examples.
10. **Export/quantization/packaging** with benchmark report and portable launcher.

---

## Most Important File Locations

### Core model and data
- Base checkpoint: `checkpoints/component5_420m/step_3200.pt`
- Tokenized training data: `data/processed/train_tokenized.jsonl`
- Tokenizer: `artifacts/tokenizer/code_tokenizer_v1/`

### LoRA
- Best LoRA adapter: `models/lora/custom_lora_v1/best.pt`
- LoRA metadata: `models/lora/custom_lora_v1/adapter_meta.json`

### Quantized model
- INT8 model: `models/quantized/model_step3200_int8_state.pt`
- Benchmark report: `artifacts/export/component10_benchmark_report.json`

### Chat interface
- Launcher: `scripts/launch_component8_chat.py`
- Chat config: `configs/component8_chat_config.yaml`

### Portable package
- Folder: `release/MINDI_1.0_420M`
- Double-click launcher: `release/MINDI_1.0_420M/Start_MINDI.bat`
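The file locations above can be sanity-checked in one pass. A minimal sketch, using only the standard library; the helper name `missing_paths` and the path list are illustrative, not part of the project's scripts:

```python
from pathlib import Path

# Key artifact paths listed above; adjust if your layout differs.
EXPECTED_PATHS = [
    "checkpoints/component5_420m/step_3200.pt",
    "data/processed/train_tokenized.jsonl",
    "artifacts/tokenizer/code_tokenizer_v1",
    "models/lora/custom_lora_v1/best.pt",
    "models/quantized/model_step3200_int8_state.pt",
    "configs/component8_chat_config.yaml",
]

def missing_paths(paths, root="."):
    """Return the subset of paths that do not exist under root."""
    base = Path(root)
    return [p for p in paths if not (base / p).exists()]

if __name__ == "__main__":
    missing = missing_paths(EXPECTED_PATHS)
    if missing:
        print("Missing artifacts:")
        for p in missing:
            print(f"  - {p}")
    else:
        print("All key artifacts found.")
```

Run it from the project root so the relative paths resolve correctly.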

---

## Launch the Main Chat UI
From project root (`C:\AI 2`):

```powershell
.\.venv\Scripts\Activate.ps1
python .\scripts\launch_component8_chat.py --config .\configs\component8_chat_config.yaml
```

Open in browser:
- `http://127.0.0.1:7860`

### Live model selector in UI
You can switch models without restarting:
- `base`
- `lora`
- `int8`

Status box shows:
- active mode
- mode load time
- live VRAM usage

---

## How to Add More Training Data (Future Improvement)

### A) Add more base-training pairs (full training path)
1. Put new JSONL/JSON files in `data/raw/`.
2. Run dataset processing scripts (Component 3 path).
3. Continue/refresh base training with Component 5.

### B) Add targeted improvements quickly (LoRA recommended)
1. Edit `data/raw/custom_finetune_pairs.jsonl` with your new prompt/code pairs.
   - Required fields per row: `prompt`, `code`
   - Optional: `language` (`python` or `javascript`)
2. Run LoRA fine-tuning:

```powershell
python .\scripts\run_component9_lora_finetune.py --config .\configs\component9_lora_config.yaml
```

3. Use updated adapter in chat by selecting `lora` mode.
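Each line of `custom_finetune_pairs.jsonl` is one JSON object with the fields described in step 1. A minimal sketch of appending a valid row; the helper `append_pair` and the example pair itself are illustrative:

```python
import json

def append_pair(path, prompt, code, language=None):
    """Append one prompt/code training pair as a single JSON line."""
    row = {"prompt": prompt, "code": code}
    if language is not None:
        row["language"] = language  # optional: "python" or "javascript"
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(row) + "\n")

if __name__ == "__main__":
    append_pair(
        "data/raw/custom_finetune_pairs.jsonl",
        prompt="Write a Python function that reverses a string.",
        code="def reverse_string(s):\n    return s[::-1]",
        language="python",
    )
```

Keeping the code value as a single JSON string (with `\n` for line breaks) guarantees each row stays on one line, which the JSONL format requires.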

---

## Recommended Next Habit
When quality is weak on specific tasks:
1. Add 20-200 clean examples of exactly that task style to `custom_finetune_pairs.jsonl`.
2. Re-run LoRA fine-tuning.
3. Test in chat `lora` mode.
4. Repeat in small cycles.

This gives faster improvement than retraining the full base model each time.
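To track how the fine-tuning set grows between cycles, a quick per-language count can help. A minimal sketch; the helper `language_counts` is illustrative, and rows without a `language` field are counted as "unspecified":

```python
import json
from collections import Counter

def language_counts(path):
    """Count examples per language in a prompt/code JSONL file."""
    counts = Counter()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            row = json.loads(line)
            counts[row.get("language", "unspecified")] += 1
    return counts

if __name__ == "__main__":
    for lang, n in language_counts("data/raw/custom_finetune_pairs.jsonl").items():
        print(f"{lang}: {n}")
```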

---

## Quick Health-Check Commands

```powershell
python .\scripts\verify_component1_setup.py
python .\scripts\verify_component4_model.py --config .\configs\component4_model_config.yaml --batch_size 1 --seq_len 256
python .\scripts\verify_component9_lora.py
```

---

## Current Status
The project is complete across Components 1-10 and verified on the target hardware.