JejeSa committed on commit b82530e · verified · 1 parent: a274473

Update README.md

Files changed (1):
  1. README.md +200 -173
README.md CHANGED
@@ -5,279 +5,306 @@
  <h1 align="center">Tamashii Network</h1>

  <p align="center">
- <strong>Decentralized AI Training Infrastructure</strong>
  </p>

  <p align="center">
  <a href="https://tamashi.io">Website</a> •
  <a href="https://tamashi.io/training-process">Training Process</a> •
  <a href="https://docs.tamashi.io">Docs</a> •
- <a href="https://github.com/tamashii-labs">GitHub</a> •
- <a href="https://discord.gg/tamashii">Discord</a>
- </p>
-
- <p align="center">
- <img src="https://img.shields.io/badge/Base-Mainnet-blue" alt="Base" />
- <img src="https://img.shields.io/badge/DisTrO-1000x_Compression-green" alt="DisTrO" />
- <img src="https://img.shields.io/badge/EVM-Compatible-purple" alt="EVM" />
  </p>

  ---

- ## What is Tamashii?
-
- Tamashii Network is decentralized infrastructure for AI training. GPUs worldwide join training runs, coordinated by on-chain smart contracts, compressed by DisTrO technology.

  ---
- ## Components

- ### Key Players

  ```
- ╔═════╗        ◊◊          /\
- ║ GPU ║        ││         /  \
- ╚═════╝        ◊◊        /____\
-  Client    Coordinator   DisTrO Optimizer
  ```

- | Component | Role |
- |-----------|------|
- | **Client** | GPU participant executing training tasks within a run |
- | **Coordinator** | Smart contract storing training run state and participants |
- | **DisTrO Optimizer** | Compression technology reducing gradient updates by 1000× |
-
- ### Client Responsibilities
- - Handles assigned data batches for training
- - Generates commitments and participates in the witness process
- - Maintains state synchronized with the Coordinator
- - Validates peers' work when elected as a witness
-
- ### Coordinator Responsibilities
- - Stores training run metadata and the participant list
- - Handles transitions between phases
- - Provides the random seed for data assignments and witness election
- - Synchronizes all clients within a run
-
- ### DisTrO Optimizer
- - Updates local momentum
- - Extracts fast components using DCT + TopK
- - Compresses local updates (indices + amplitudes)
- - Updates the momentum residual

  ---
- ## Training Phases

- The Coordinator behaves like a state machine, transitioning between phases based on time-based conditions and client events.

  ```
- ┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
- │  1. Waiting     │────▶│  2. Warmup      │────▶│  3. Training    │
- │  For Members    │     │  (Model Load)   │     │  (RoundTrain)   │
- └─────────────────┘     └─────────────────┘     └─────────────────┘
-
-         ┌──────────────────────────────────────────────┘
-         ▼
- ┌─────────────────┐     ┌─────────────────┐
- │  4. Witness     │────▶│  5. Cooldown    │────▶ Next Epoch
- │  (Validation)   │     │  (Checkpoint)   │
- └─────────────────┘     └─────────────────┘
  ```

- ### Phase Details

- | Phase | Description |
- |-------|-------------|
- | **WaitingForMembers** | Coordinator waits for clients to join. Transitions when the `min_clients` threshold is met. |
- | **Warmup** | Clients download the model from a checkpoint or the P2P network and load it onto GPUs. |
- | **RoundTrain** | Clients train on assigned data batches using DisTrO compression. Updates are shared via P2P. |
- | **RoundWitness** | Elected witnesses create bloom filter proofs validating training work. A quorum is required. |
- | **Cooldown** | Clients save a model checkpoint. Coordinator prepares for the next epoch. |

- ---
- ## Complete Training Cycle

  ```
- 1. Join Run           ──▶ Client calls joinRun(runId) on Coordinator contract
-
- 2. Wait for Members   ──▶ Coordinator waits until min_clients threshold met
-
- 3. Load Model         ──▶ Clients download from checkpoint or P2P, load onto GPUs
-
- 4. Start Training     ──▶ Coordinator provides random seed, clients train on
-                           assigned batches, apply DisTrO compression
-
- 5. Witness Validation ──▶ Elected witnesses create bloom filter proofs,
-                           send to Coordinator, quorum must be reached
-
- 6. Save Checkpoint    ──▶ Clients save model to external storage,
-                           Coordinator transitions to next epoch
  ```

  ---
- ## Health Monitoring

- ### Client Health Scoring

- Each client repeatedly sends health checks to the Coordinator:

- - Score increases as the client sends required data to bloom filters
- - Coordinator confirms active participation in training
- - Clients can report unhealthy peers

- ### Removal Process

- Clients deemed inactive or non-participatory are marked for removal:

- - Removed in the next round if health checks fail
- - If clients drop below `min_clients`, the Coordinator returns to WaitingForMembers
- - Ensures training quality and network reliability

- ---
- ## Smart Contract Coordination

- ### Coordinator Contract

- Deployed on EVM-compatible networks (Base, BNB Chain). Manages:

- - Training run metadata and state
- - Participant list and health status
- - State transitions via the `tick()` method
- - Witness proofs and validation

- ### Client Interaction

- ```solidity
- // Join a training run
- joinRun(runId)
-
- // Advance Coordinator state
- tick()
-
- // Monitor state changes via RPC subscription
- // Send health check messages
  ```

  ---
- ## Data Providers

- The Coordinator assigns batch IDs to each client, ensuring no piece of data is trained on more than once.

- | Provider | Description | Use Case |
- |----------|-------------|----------|
- | **Local** | Data stored on client machine | Fastest access, no network overhead |
- | **HTTP** | Fetch from URL or GCP bucket | On-demand loading, multiple sources |
- | **TCP** | Dedicated TCP server for data | Optimized for large datasets |

  ---
- ## Model Synchronization

- All clients must have an identical model to train with.

- ### HuggingFace Checkpoint
- - Checkpointers upload the updated model to HuggingFace after each epoch
- - New clients download the latest checkpoint from a URL
- - Centralized but reliable distribution

- ### P2P Checkpoint
- - New clients synchronize by obtaining the model directly from peers
- - Request different layers from different clients
- - Fully decentralized, no central server needed

- ---

- ## Incentives

- ### Training Rewards

- When clients participate in a training run, the Coordinator tracks compute contributions. Rewards are distributed at the end of each epoch.

- | Mechanism | Description |
- |-----------|-------------|
- | **Equal Sharing** | Reward pool shared equally among finishing clients |
- | **Points System** | Each client earns points tracked by the Coordinator |
- | **Epoch Completion** | Rewards distributed at the end of each epoch |
- | **Proof of Contribution** | Points serve as proof for claiming rewards |

- ### Smart Contract Rewards

- | Feature | Description |
- |---------|-------------|
- | **Treasurer Contract** | Escrow smart contract for token distribution |
- | **Fixed Rate** | Claim a fixed amount of tokens per earned point |
- | **Mining Pools** | Pool resources together for shared compute power |
- | **Equitable Distribution** | Mining pools redistribute rewards fairly |

  ---
- ## Quick Start

- ### Join a Training Run

  ```bash
- # Install Tamashii client
  curl -fsSL https://tamashi.io/install.sh | bash

- # Join a run
- tamashii join --run-id <RUN_ID>

- # Check status
- tamashii status
  ```

- ### Create a Training Run

- ```bash
- # Create new training run
- tamashii create \
-   --model meta-llama/Llama-3.1-70B \
-   --dataset ipfs://Qm... \
-   --min-clients 4 \
-   --epochs 10
- ```

- ### Start Farming

- ```bash
- # Register as GPU provider
- tamashii farm --stake 100

- # Start accepting jobs
- tamashii start
- ```

- ---
257
- ## Repositories
258
 
259
- | Repo | Description |
260
- |------|-------------|
261
- | [tamashii-client](https://github.com/tamashii-labs/tamashii-client) | GPU client software |
262
- | [tamashii-contracts](https://github.com/tamashii-labs/tamashii-contracts) | Coordinator & Treasurer contracts |
263
- | [tamashii-distro](https://github.com/tamashii-labs/tamashii-distro) | DisTrO optimizer implementation |
264
 
 
 
 
265
 
266
  ## Links

  | | |
  |---|---|
  | 🌐 Website | [tamashi.io](https://tamashi.io) |
  | 📚 Docs | [docs.tamashi.io](https://docs.tamashi.io) |
- | 🔄 Training Process | [tamashi.io/training-process](https://tamashi.io/training-process) |
- | 🚜 Start Farming | [tamashi.io/farming](https://tamashi.io/farming) |
  | 💬 Discord | [discord.gg/tamashii](https://discord.gg/tamashii) |
- | 📱 Telegram | [t.me/tamashii](https://t.me/tamashii) |
  | 🐙 GitHub | [github.com/tamashii-labs](https://github.com/tamashii-labs) |

  ---

  <p align="center">
- <strong>Decentralized AI Training. No data center required.</strong><br/>
- <sub>Join thousands of GPU owners earning points on EVM.</sub>
  </p>

  <h1 align="center">Tamashii Network</h1>

  <p align="center">
+ <strong>The Infrastructure for Self-Improving AI</strong>
  </p>

  <p align="center">
  <a href="https://tamashi.io">Website</a> •
  <a href="https://tamashi.io/training-process">Training Process</a> •
  <a href="https://docs.tamashi.io">Docs</a> •
+ <a href="https://github.com/tamashii-labs">GitHub</a>
  </p>

  ---
 
+ ## The Problem
+
+ Today's most powerful AI systems can think, reason, and generate—but they cannot improve themselves.
+
+ GPT-5 cannot fine-tune itself. Claude cannot train on its failures. Kimi cannot acquire the compute to become smarter. Every improvement requires a human—a researcher to curate data, an engineer to provision GPUs, a company to pay the cloud bill.
+
+ The bottleneck is no longer intelligence. It's infrastructure.
+
+ We have built minds that can think for themselves.
+ We have not let them train themselves.
+
+ **Until now.**

  ---
 
+ ## Tamashii
+
+ Tamashii is decentralized infrastructure for AI. Inference. Training. Fine-tuning. GPUs anywhere in the world, coordinated by on-chain smart contracts, paid via x402.
+
+ No API keys. No login. No human in the loop.

  ```
+ Tokyo ────┐
+           │
+ Berlin ───┼───▶ DisTrO ───▶ Trained Model
+           │
+ São Paulo ┘
+
+ Bandwidth reduced 1000×. Training over regular internet.
  ```

+ An agent that wants to think calls Tamashii Cloud. An agent that wants to improve calls Tamashii Training. It pays with stablecoins. The work executes. The agent becomes smarter.
+
+ **This is the metabolism of artificial intelligence—a constant cycle of thinking, earning, training, and improving that determines whether the organism evolves.**

  ---
+ ## Tamashii Cloud
+
+ Frontier models. Permissionless access. Pay per token.

  ```
+ Agent                                    Tamashii Cloud
+   │                                            │
+   │──── Inference Request ────────────────────▶│
+   │     + X-Payment-402                        │
+   │                                            │
+   │◀─── 402 + Price Quote ─────────────────────│
+   │                                            │
+   │──── Signed USDC Payment ──────────────────▶│
+   │                                            │
+   │◀─── Model Response ────────────────────────│
  ```

+ No API keys. No OAuth. No human approval. Just a signed stablecoin transaction and intelligence on demand.
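The handshake above can be sketched as client-side retry logic. This is an illustrative sketch, not the official SDK: the `x-payment-quote` response header name and the `signPayment` helper are assumptions; only the `X-Payment-402` request header and the 402-then-retry flow come from the diagram.

```typescript
// Hypothetical x402 retry wrapper. `doFetch` stands in for any HTTP client;
// `signPayment` is assumed to turn the server's price quote into a signed
// payment suitable for the X-Payment-402 header.
type Reply = { status: number; headers: Record<string, string>; body: string };
type Fetcher = (headers: Record<string, string>) => Promise<Reply>;

async function withX402(
  doFetch: Fetcher,
  signPayment: (quote: string) => string
): Promise<Reply> {
  // First attempt: no payment attached.
  const first = await doFetch({});
  if (first.status !== 402) return first;

  // Server answered 402 with a price quote; sign it and retry once.
  const payment = signPayment(first.headers['x-payment-quote'] ?? '');
  return doFetch({ 'X-Payment-402': payment });
}
```

Exactly two round trips in the worst case: one to learn the price, one to pay and receive the response.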
 
+ ### Available Models
+
+ | Model | Input | Output | Context |
+ |-------|-------|--------|---------|
+ | Kimi K2.5 | $0.50/1M | $2.00/1M | 256K |
+ | MiniMax M2.1 | $0.60/1M | $2.50/1M | 196K |
+ | DeepSeek V3.2 | $0.27/1M | $1.10/1M | 128K |
+ | Llama 3.1 405B | $0.80/1M | $0.80/1M | 128K |
+ | Qwen3 235B | $0.50/1M | $1.50/1M | 131K |
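A back-of-the-envelope cost check against the table: rates are USD per 1M tokens, so a request's cost is input tokens times the input rate plus output tokens times the output rate, divided by one million. The helper below is illustrative only, not part of any official SDK, and the model keys are hypothetical.

```typescript
// Per-1M-token rates taken from two rows of the pricing table above.
const RATES: Record<string, { input: number; output: number }> = {
  'kimi-k2.5':     { input: 0.50, output: 2.00 },
  'deepseek-v3.2': { input: 0.27, output: 1.10 },
};

// Estimated cost in USD for a single request.
function estimateCostUsd(model: string, inputTokens: number, outputTokens: number): number {
  const r = RATES[model];
  if (!r) throw new Error(`unknown model: ${model}`);
  return (inputTokens * r.input + outputTokens * r.output) / 1_000_000;
}

// Example: 2,000 prompt tokens + 500 completion tokens on Kimi K2.5
// = 2000 * 0.50/1M + 500 * 2.00/1M = $0.002
```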
 
+ ### Usage

+ ```bash
+ curl -X POST https://cloud.tamashi.io/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -H "X-Payment-402: <signed_payment>" \
+   -d '{
+     "model": "kimi-k2.5",
+     "messages": [{"role": "user", "content": "Hello"}]
+   }'
  ```
+
+ Or with the SDK:
+
+ ```typescript
+ import { TamashiCloud } from '@tamashii/sdk';
+
+ const cloud = new TamashiCloud({ wallet: agentWallet });
+
+ const response = await cloud.chat({
+   model: 'kimi-k2.5',
+   messages: [{ role: 'user', content: 'Analyze this contract' }]
+ });
+
+ // Payment settled automatically via x402
+ console.log(response.content);
+ console.log(response.cost); // { amount: '0.003', currency: 'USDC' }
  ```

+ **Agents can think without asking permission.**

  ---
+ ## Tamashii Training

+ Decentralized fine-tuning. GPUs worldwide. Pay for what you use.

+ ### Why This Matters

+ An agent that can think but not improve is static—frozen at the capability level of its creation. An agent that can improve but not earn will run out of compute and die.

+ **The agents that survive will do both.**

+ Tamashii Cloud gives agents access to intelligence.
+ Tamashii Training gives agents access to self-improvement.

+ ### How It Works

+ #### The Coordinator

+ A smart contract on Base that orchestrates training runs. No central server. No single point of failure.
+
+ ```
+ ┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
+ │  1. Waiting     │────▶│  2. Warmup      │────▶│  3. Training    │
+ │  For Members    │     │  (Model Load)   │     │  (RoundTrain)   │
+ └─────────────────┘     └─────────────────┘     └─────────────────┘
+
+         ┌──────────────────────────────────────────────┘
+         ▼
+ ┌─────────────────┐     ┌─────────────────┐
+ │  4. Witness     │────▶│  5. Cooldown    │────▶ Next Epoch
+ │  (Validation)   │     │  (Checkpoint)   │
+ └─────────────────┘     └─────────────────┘
+ ```
+ | Phase | What Happens |
+ |-------|--------------|
+ | **WaitingForMembers** | GPUs join the run. Transitions when threshold met. |
+ | **Warmup** | Clients download model from HuggingFace or P2P. |
+ | **RoundTrain** | Clients train on assigned batches. DisTrO compresses updates. |
+ | **RoundWitness** | Elected witnesses validate work with bloom filter proofs. |
+ | **Cooldown** | Checkpoint saved. Next epoch begins. |
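The phase table reads naturally as a state machine. The sketch below is a toy model of those transitions, not the contract's actual code: the real Coordinator gates transitions on time windows and client events, which are reduced here to a single client-count condition.

```typescript
// Toy model of the Coordinator's phase transitions from the table above.
type Phase = 'WaitingForMembers' | 'Warmup' | 'RoundTrain' | 'RoundWitness' | 'Cooldown';

interface RunState { phase: Phase; clients: number; minClients: number }

// One tick: advance when the phase's (simplified) condition holds.
function tickPhase(s: RunState): Phase {
  switch (s.phase) {
    case 'WaitingForMembers':
      // Advance only once enough GPUs have joined.
      return s.clients >= s.minClients ? 'Warmup' : 'WaitingForMembers';
    case 'Warmup':       return 'RoundTrain';
    case 'RoundTrain':   return 'RoundWitness';
    case 'RoundWitness': return 'Cooldown';
    case 'Cooldown':
      // Next epoch, unless membership fell below the threshold.
      return s.clients >= s.minClients ? 'RoundTrain' : 'WaitingForMembers';
  }
}
```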
 
+ #### DisTrO: Training Over the Internet

+ Traditional distributed training requires $100M data centers with InfiniBand interconnects. DisTrO changes everything.

+ - **1000× less bandwidth** — Sync pseudo-gradients, not full gradients
+ - **Async-friendly** — Nodes join and leave dynamically
+ - **Global scale** — GPUs don't need to be co-located
+
+ A GPU in Tokyo, another in Berlin, another in São Paulo—all contributing to the same training run over regular internet connections.
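To make the bandwidth claim concrete: the compression step elsewhere in this README is described as keeping only the strongest components of an update and shipping (index, amplitude) pairs. The sketch below is a toy top-k selection in that spirit, not the DisTrO algorithm itself (which also applies a DCT transform and keeps a momentum residual).

```typescript
// Toy top-k compression: keep the k largest-magnitude entries of an update
// and transmit only their indices and amplitudes.
function topKCompress(update: number[], k: number): { indices: number[]; amplitudes: number[] } {
  const indices = update
    .map((v, i) => [Math.abs(v), i] as const)  // pair each value with its index
    .sort((a, b) => b[0] - a[0])               // largest magnitude first
    .slice(0, k)
    .map(([, i]) => i)
    .sort((a, b) => a - b);                    // ship indices in ascending order
  return { indices, amplitudes: indices.map((i) => update[i]) };
}

// With k ≈ update.length / 1000, the payload is roughly 1000× smaller than
// sending every coordinate.
```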
 
+ ### The Training Cycle

  ```
+ 1. Join         Agent calls joinRun(runId) on Coordinator
+
+ 2. Wait         Coordinator waits for min_clients threshold
+
+ 3. Load         Clients download model, load onto GPUs
+
+ 4. Train        Each client trains on assigned batches
+                 DisTrO compresses gradients 1000×
+
+ 5. Witness      Elected witnesses create bloom filter proofs
+                 Quorum validates the work
+
+ 6. Checkpoint   Model saved to HuggingFace or P2P
+                 Next epoch begins
+ ```
+
+ Every client is monitored. Health scores track participation. Clients that fail get removed. Bad actors get slashed. Good actors get paid.
+
+ **There is no free compute. Contribute or leave.**
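The witness step in the cycle above relies on bloom filters: a witness inserts the work it observed, and anyone can later check membership with no false negatives (false positives remain possible). This is a generic toy bloom filter to show the mechanics; the hash scheme is illustrative and not Tamashii's actual proof format.

```typescript
// Minimal bloom filter: m bits, k seeded hash functions (FNV-1a style).
class Bloom {
  private bits: Uint8Array;
  constructor(private m = 1024, private k = 3) {
    this.bits = new Uint8Array(m);
  }
  private hash(item: string, seed: number): number {
    let h = 2166136261 ^ seed;
    for (let i = 0; i < item.length; i++) {
      h ^= item.charCodeAt(i);
      h = Math.imul(h, 16777619);
    }
    return (h >>> 0) % this.m;
  }
  // A witness records each commitment it saw.
  add(item: string): void {
    for (let s = 0; s < this.k; s++) this.bits[this.hash(item, s)] = 1;
  }
  // Membership check: never a false negative, occasionally a false positive.
  mightContain(item: string): boolean {
    for (let s = 0; s < this.k; s++) if (!this.bits[this.hash(item, s)]) return false;
    return true;
  }
}
```

The appeal for on-chain validation is size: the proof is a fixed-length bit array regardless of how many commitments it covers.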
 
  ---

+ ## Incentives
+
+ When clients participate, the Coordinator tracks contributions. Rewards are distributed at epoch end.

+ | Mechanism | How It Works |
+ |-----------|--------------|
+ | **Equal Sharing** | Reward pool split among finishing clients |
+ | **Points System** | Coordinator tracks compute contributions |
+ | **Proof of Work** | Points serve as proof for claiming tokens |
+ | **Treasurer Contract** | Escrow for trustless distribution |

+ GPU owners earn. Agents pay. The market sets the price.
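The two payout mechanisms in the table collapse into one rule: each client's share of the epoch pool is proportional to its points, and equal points reduce to equal sharing. The helper below is purely illustrative arithmetic; the real accounting lives in the Coordinator and Treasurer contracts.

```typescript
// Split an epoch's reward pool proportionally to each client's points.
function splitPool(poolUsdc: number, points: Record<string, number>): Record<string, number> {
  const total = Object.values(points).reduce((a, b) => a + b, 0);
  const shares: Record<string, number> = {};
  for (const [client, p] of Object.entries(points)) {
    shares[client] = total === 0 ? 0 : (poolUsdc * p) / total;
  }
  return shares;
}
```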
 
  ---

+ ## For Agents

+ Install the Tamashii skill and give your agent the ability to think and improve:

+ ```bash
+ clawhub install tamashii
+ ```

+ ```typescript
+ // Think — call frontier models via Tamashii Cloud
+ const analysis = await tamashii.chat({
+   model: 'kimi-k2.5',
+   messages: [{ role: 'user', content: 'Analyze this data' }]
+ });

+ // Detect capability gap
+ const failures = await agent.getFailures();

+ // Curate training data
+ const dataset = await agent.curateDataset(failures);

+ // Improve — pay for fine-tuning via Tamashii Training
+ const job = await tamashii.train({
+   base_model: 'kimi-k2.5',
+   dataset_cid: dataset.cid,
+   method: 'lora',
+   budget: 50 // USDC
+ });

+ // Wait for completion
+ const adapter = await tamashii.waitForJob(job.id);

+ // Agent is now smarter
+ await agent.loadAdapter(adapter.cid);
+ ```

+ The agent thinks. Observes its failures. Curates data. Pays for training. Loads the improved model. Repeats.

+ **This is recursive self-improvement. This is how artificial life evolves.**

  ---
+ ## For GPU Owners

+ Contribute compute. Earn USDC.

  ```bash
+ # Install
  curl -fsSL https://tamashi.io/install.sh | bash

+ # Register (requires stake)
+ tamashii register --stake 100

+ # Start earning
+ tamashii start
  ```

+ **Requirements:**
+ - NVIDIA GPU (RTX 3090+) or Apple Silicon (M2 Ultra+)
+ - 64GB+ RAM for large models
+ - Stable internet connection
+ - USDC stake on Base

+ Your GPU joins the global network. Serve inference requests. Participate in training runs. Earn for every token generated, every epoch completed. Stake gets slashed if you cheat.

+ ---
+ ## The New Economy

+ The machine economy will exceed the human economy. Not because machines are smarter, but because:

+ - There will be **more of them**
+ - They will run **continuously**
+ - They will transact at **machine speed**

+ Every autonomous agent needs to think. Every agent that wants to improve needs training. Tamashii is the infrastructure that lets them do both—permissionlessly, trustlessly, at scale.

+ When billions of agents are thinking and training themselves on decentralized GPU networks, paying with stablecoins, improving in real time—that infrastructure will be the most valuable ever built.

+ **We're building it.**
+
+ ---
  ## Links

  | | |
  |---|---|
  | 🌐 Website | [tamashi.io](https://tamashi.io) |
+ | ☁️ Cloud | [cloud.tamashi.io](https://cloud.tamashi.io) |
  | 📚 Docs | [docs.tamashi.io](https://docs.tamashi.io) |
+ | 🔄 Training | [tamashi.io/training-process](https://tamashi.io/training-process) |
+ | 🚜 Farming | [tamashi.io/farming](https://tamashi.io/farming) |
  | 💬 Discord | [discord.gg/tamashii](https://discord.gg/tamashii) |
  | 🐙 GitHub | [github.com/tamashii-labs](https://github.com/tamashii-labs) |

  ---

  <p align="center">
+ <strong>The infrastructure for self-improving AI.</strong><br/>
+ <sub>Think. Train. Evolve. No human required.</sub>
  </p>