# OktoEngine Examples

Complete working examples demonstrating OktoEngine capabilities.

---

## Table of Contents

1. [Basic Training](#basic-training)
2. [LoRA Fine-tuning](#lora-fine-tuning)
3. [Chatbot Training](#chatbot-training)
4. [Multi-format Export](#multi-format-export)

---

## Basic Training

**Location:** [`basic-training/`](./basic-training/)

Minimal working example for training a simple model.

**Files:**
- `scripts/train.okt` - Training configuration
- `dataset/train.jsonl` - Sample training data
- `dataset/val.jsonl` - Sample validation data

**Usage:**
```bash
cd basic-training
okto validate
okto train
```
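The exact record schema `okto` expects is not shown here; as a purely illustrative assumption, `dataset/train.jsonl` could hold one prompt/completion pair per line. A few lines of Python are enough to sanity-check that shape:

```python
import json

# Hypothetical records -- the actual schema OktoEngine expects may differ.
sample_lines = [
    '{"prompt": "Translate to French: Hello", "completion": "Bonjour"}',
    '{"prompt": "2 + 2 =", "completion": "4"}',
]

# JSONL means one standalone JSON object per line; verify each line parses
# and carries the fields assumed above.
records = [json.loads(line) for line in sample_lines]
for rec in records:
    assert "prompt" in rec and "completion" in rec
print(f"{len(records)} valid records")
```

The same check works on a real file by swapping `sample_lines` for `open("dataset/train.jsonl")`.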

---

## LoRA Fine-tuning

**Location:** [`lora-training/`](./lora-training/)

Example of parameter-efficient fine-tuning with LoRA (Low-Rank Adaptation) for large models.

**Files:**
- `scripts/train.okt` - LoRA configuration
- `dataset/train.jsonl` - Training data

**Usage:**
```bash
cd lora-training
okto validate
okto train
```
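What makes LoRA efficient is easy to see by counting parameters: instead of updating a full weight matrix, it trains two small low-rank factors. A quick back-of-the-envelope in Python (the dimensions and rank here are arbitrary, not tied to any particular base model):

```python
# Full fine-tuning updates every entry of a d x k weight matrix.
# LoRA instead trains factors A (d x r) and B (r x k) with small rank r.
d, k, r = 4096, 4096, 8  # illustrative sizes; r is the LoRA rank

full_params = d * k          # parameters touched by full fine-tuning
lora_params = d * r + r * k  # parameters in the low-rank factors

print(f"full: {full_params:,}  lora: {lora_params:,}")
print(f"reduction: {full_params // lora_params}x")
```

For these sizes the trainable-parameter count drops by a factor of 256, which is why LoRA fits on modest GPUs where full fine-tuning would not.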

---

## Chatbot Training

**Location:** [`chatbot/`](./chatbot/)

Complete example for training a conversational AI model.

**Files:**
- `scripts/train.okt` - Chatbot configuration
- `dataset/train.jsonl` - Conversation data
- `dataset/val.jsonl` - Validation conversations

**Usage:**
```bash
cd chatbot
okto validate
okto train
okto eval
```
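Conversation datasets are commonly stored as a list of role-tagged messages per line. The snippet below sketches that convention as an assumption, since the exact conversation schema OktoEngine expects in `dataset/train.jsonl` is not specified here:

```python
import json

# Hypothetical chat record -- OktoEngine's actual schema may differ.
line = json.dumps({
    "messages": [
        {"role": "user", "content": "What is OktoEngine?"},
        {"role": "assistant", "content": "A model-training toolkit."},
    ]
})

record = json.loads(line)
roles = [m["role"] for m in record["messages"]]
assert roles == ["user", "assistant"]  # turns alternate user/assistant
print("chat record OK")
```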

---

## Multi-format Export

**Location:** [`multi-export/`](./multi-export/)

Example showing how to export models to multiple formats.

**Files:**
- `scripts/train.okt` - Configuration with multiple export formats

**Usage:**
```bash
cd multi-export
okto train
okto export --format okm,onnx,gguf
```

---

## Running Examples

1. **Navigate to example directory:**
   ```bash
   cd examples/basic-training
   ```

2. **Validate configuration:**
   ```bash
   okto validate
   ```

3. **Train the model:**
   ```bash
   okto train
   ```

4. **Check results:**
   ```bash
   ls runs/
   ls export/
   ```

---

## Customizing Examples

All examples can be customized:

1. **Edit `scripts/train.okt`** - Modify training parameters
2. **Replace `dataset/*.jsonl`** - Use your own data
3. **Adjust `MODEL.base`** - Use a different base model
4. **Modify `EXPORT.format`** - Change export formats

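Putting those knobs together, an edited `scripts/train.okt` might look roughly like the fragment below. This is purely illustrative: the key names `MODEL.base` and `EXPORT.format` come from this page, but the surrounding `.okt` syntax is an assumption and may not match the real format.

```
# Hypothetical .okt fragment -- syntax is assumed, not authoritative
MODEL.base    = "my-org/my-base-model"   # swap in a different base model
EXPORT.format = ["okm", "onnx"]          # change the export formats
```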
---

## Example Output

**Training output:**
```
πŸ™ OktoEngine v0.1
πŸ“„ Reading: "scripts/train.okt"

πŸ“Š Environment Check:
  βœ” Runtime: Python 3.14.0
  βœ” GPU: NVIDIA GeForce RTX 4070
  βœ” RAM: 63GB (40GB available)

πŸš€ Starting training pipeline...

Epoch 1/5: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 500/500 [02:15<00:00, 3.70it/s]
  Loss: 2.345 β†’ 1.892

βœ… Training completed successfully!
πŸ“ Output: runs/MyModel/
```

---

**Need help?** Check the [Getting Started Guide](../docs/GETTING_STARTED.md) or [FAQ](../docs/FAQ.md).