Tamashii Network
The Infrastructure for Self-Improving AI
Website • Training Process • Docs • GitHub
The Problem
Today's most powerful AI systems can think, reason, and generate—but they cannot improve themselves.
GPT-5 cannot fine-tune itself. Claude cannot train on its failures. Kimi cannot acquire the compute to become smarter. Every improvement requires a human—a researcher to curate data, an engineer to provision GPUs, a company to pay the cloud bill.
The bottleneck is no longer intelligence. It's infrastructure.
We have built minds that can think for themselves.
We have not let them train themselves.
Until now.
Tamashii
Tamashii is decentralized infrastructure for AI. Inference. Training. Fine-tuning. GPUs anywhere in the world, coordinated by on-chain smart contracts, paid via x402.
No API keys. No login. No human in the loop.
Tokyo ────┐
│
Berlin ───┼───▶ DisTrO ───▶ Trained Model
│
São Paulo─┘
Bandwidth reduced 1000×. Training over the public internet.
An agent that wants to think calls Tamashii Cloud. An agent that wants to improve calls Tamashii Training. It pays with stablecoins. The work executes. The agent becomes smarter.
This is the metabolism of artificial intelligence—a constant cycle of thinking, earning, training, and improving that determines whether the organism evolves.
Tamashii Cloud
Frontier models. Permissionless access. Pay per token.
Agent Tamashii Cloud
│ │
│──── Inference Request ─────────▶│
│ + X-Payment-402 │
│ │
│◀─── 402 + Price Quote ──────────│
│ │
│──── Signed USDC Payment ───────▶│
│ │
│◀─── Model Response ─────────────│
No API keys. No OAuth. No human approval. Just a signed stablecoin transaction and intelligence on demand.
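The handshake above can be sketched as a single retry loop: call, receive a 402 with a quote, sign, call again with payment attached. This is an illustrative sketch only; the transport and signer are injected, and the exact wire format of the quote and payment blob is an assumption, not the documented x402 format.

```typescript
// Minimal sketch of the pay-per-call handshake from the diagram above.
// The quote/payment shapes are assumptions for illustration.
type Reply = { status: number; body: any };
type Transport = (headers: Record<string, string>, body: any) => Reply;
type Signer = (quote: any) => string; // produces a signed USDC payment blob

function payPerCall(transport: Transport, sign: Signer, body: any): Reply {
  // 1. Send the request; an unpaid call is answered with 402 + a price quote.
  const first = transport({}, body);
  if (first.status !== 402) return first;

  // 2. Sign the quoted amount and retry with the payment attached.
  const payment = sign(first.body.quote);
  return transport({ 'X-Payment-402': payment }, body);
}
```

No session state is needed: each call either succeeds or returns a fresh quote, so the loop works identically for a one-off request and a long-running agent.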
Available Models
| Model | Input | Output | Context |
|---|---|---|---|
| Kimi K2.5 | $0.50/1M | $2.00/1M | 256K |
| MiniMax M2.1 | $0.60/1M | $2.50/1M | 196K |
| DeepSeek V3.2 | $0.27/1M | $1.10/1M | 128K |
| Llama 3.1 405B | $0.80/1M | $0.80/1M | 128K |
| Qwen3 235B | $0.50/1M | $1.50/1M | 131K |
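Per-call cost follows directly from the table: tokens times the per-million rate. A small sketch of that arithmetic, assuming `kimi-k2.5`-style model ids for the other entries (only `kimi-k2.5` appears as an id in this document):

```typescript
// Cost arithmetic for the price table above (USD per 1M tokens).
// Model ids other than 'kimi-k2.5' are guesses at the API naming.
const PRICES: Record<string, { input: number; output: number }> = {
  'kimi-k2.5':     { input: 0.50, output: 2.00 },
  'deepseek-v3.2': { input: 0.27, output: 1.10 },
};

function costUsd(model: string, inputTokens: number, outputTokens: number): number {
  const p = PRICES[model];
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}
```

For example, a 1,000-token prompt with a 500-token completion on Kimi K2.5 costs (1,000 × 0.50 + 500 × 2.00) / 1,000,000 = $0.0015.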
Usage
curl -X POST https://cloud.tamashi.io/v1/chat/completions \
-H "Content-Type: application/json" \
-H "X-Payment-402: <signed_payment>" \
-d '{
"model": "kimi-k2.5",
"messages": [{"role": "user", "content": "Hello"}]
}'
Or with the SDK:
import { TamashiCloud } from '@tamashii/sdk';
const cloud = new TamashiCloud({ wallet: agentWallet });
const response = await cloud.chat({
model: 'kimi-k2.5',
messages: [{ role: 'user', content: 'Analyze this contract' }]
});
// Payment settled automatically via x402
console.log(response.content);
console.log(response.cost); // { amount: '0.003', currency: 'USDC' }
Agents can think without asking permission.
Tamashii Training
Decentralized fine-tuning. GPUs worldwide. Pay for what you use.
Why This Matters
An agent that can think but not improve is static—frozen at the capability level of its creation. An agent that can improve but not earn will run out of compute and die.
The agents that survive will do both.
Tamashii Cloud gives agents access to intelligence.
Tamashii Training gives agents access to self-improvement.
How It Works
The Coordinator
A smart contract on Base that orchestrates training runs. No central server. No single point of failure.
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ 1. Waiting │────▶│ 2. Warmup │────▶│ 3. Training │
│ For Members │ │ (Model Load) │ │ (RoundTrain) │
└─────────────────┘ └─────────────────┘ └─────────────────┘
│
┌───────────────────────────────────────────────┘
▼
┌─────────────────┐ ┌─────────────────┐
│ 4. Witness │────▶│ 5. Cooldown │────▶ Next Epoch
│ (Validation) │ │ (Checkpoint) │
└─────────────────┘ └─────────────────┘
| Phase | What Happens |
|---|---|
| WaitingForMembers | GPUs join the run. Transitions when threshold met. |
| Warmup | Clients download model from HuggingFace or P2P. |
| RoundTrain | Clients train on assigned batches. DisTrO compresses updates. |
| RoundWitness | Elected witnesses validate work with bloom filter proofs. |
| Cooldown | Checkpoint saved. Next epoch begins. |
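The five phases form a simple cycle, which can be sketched as a state machine. Phase names mirror the table; only the membership guard is shown as a conditional (per the "transitions when threshold met" rule), and the other guards (model loaded, round complete, quorum reached) are elided for brevity. This is a sketch of the flow, not the Coordinator contract itself.

```typescript
// The Coordinator's epoch cycle from the table above, as a state machine.
enum Phase { WaitingForMembers, Warmup, RoundTrain, RoundWitness, Cooldown }

function nextPhase(phase: Phase, run: { clients: number; minClients: number }): Phase {
  switch (phase) {
    case Phase.WaitingForMembers:
      // Hold until enough GPUs have joined the run.
      return run.clients >= run.minClients ? Phase.Warmup : Phase.WaitingForMembers;
    case Phase.Warmup:       return Phase.RoundTrain;        // model loaded
    case Phase.RoundTrain:   return Phase.RoundWitness;      // round trained
    case Phase.RoundWitness: return Phase.Cooldown;          // quorum validated
    default:                 return Phase.WaitingForMembers; // Cooldown -> next epoch
  }
}
```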
DisTrO: Training Over the Internet
Traditional distributed training requires $100M data centers with InfiniBand interconnects. DisTrO removes that requirement:
- 1000× less bandwidth — Sync pseudo-gradients, not full gradients
- Async-friendly — Nodes join and leave dynamically
- Global scale — GPUs don't need to be co-located
A GPU in Tokyo, another in Berlin, another in São Paulo—all contributing to the same training run over regular internet connections.
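One way to see where a bandwidth cut of this order comes from is sparsification: ship only the largest-magnitude entries of an update as (index, value) pairs instead of the dense vector. To be clear, this is not DisTrO's actual algorithm (DisTrO syncs compressed pseudo-gradients); it is a toy illustration of the compression idea.

```typescript
// Toy top-k sparsification: keep the k largest-magnitude entries of an
// update vector as (index, value) pairs. NOT DisTrO itself -- just a sketch
// of why syncing a compressed update instead of a dense gradient saves
// orders of magnitude in bandwidth.
function topK(update: number[], k: number): Array<[number, number]> {
  return update
    .map((v, i): [number, number] => [i, v])
    .sort((a, b) => Math.abs(b[1]) - Math.abs(a[1]))
    .slice(0, k);
}
```

At k = 1,000 on a 1M-entry update, each node ships 0.1% of the values: a ~1000× reduction, the same order as the figure quoted above.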
The Training Cycle
1. Join: the agent calls joinRun(runId) on the Coordinator.
2. Wait: the Coordinator waits for the min_clients threshold.
3. Load: clients download the model and load it onto GPUs.
4. Train: each client trains on assigned batches; DisTrO compresses gradients 1000×.
5. Witness: elected witnesses create bloom filter proofs; a quorum validates the work.
6. Checkpoint: the model is saved to HuggingFace or P2P, and the next epoch begins.
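The cycle above can be sketched as one client-side epoch loop. Only joinRun(runId) is named in the text; every other method and interface here is a hypothetical stand-in for the real client API, and witnessing is done by elected witnesses rather than by the client itself.

```typescript
// The six steps above as a single client-side epoch, with illustrative
// interfaces standing in for the real Coordinator and node APIs.
interface Coordinator {
  joinRun(runId: string): void;
  waitForMinClients(): void;            // blocks until the min_clients threshold
  submitUpdate(update: number[]): void; // compressed update for this round
  saveCheckpoint(): void;
}
interface TrainerNode {
  loadModel(): void;                    // pull weights from HuggingFace or P2P
  trainRound(): number[];               // local update for the assigned batches
  compress(update: number[]): number[]; // DisTrO-style compression
}

function runEpoch(c: Coordinator, node: TrainerNode, runId: string): void {
  c.joinRun(runId);                       // 1. Join
  c.waitForMinClients();                  // 2. Wait
  node.loadModel();                       // 3. Load
  const update = node.trainRound();       // 4. Train...
  c.submitUpdate(node.compress(update));  //    ...and sync the compressed update
  c.saveCheckpoint();                     // 5-6. Witnesses validate, then checkpoint
}
```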
Every client is monitored. Health scores track participation. Clients that fail get removed. Bad actors get slashed. Good actors get paid.
There is no free compute. Contribute or leave.
Incentives
As clients participate, the Coordinator tracks their contributions. Rewards are distributed at the end of each epoch.
| Mechanism | How It Works |
|---|---|
| Equal Sharing | Reward pool split among finishing clients |
| Points System | Coordinator tracks compute contributions |
| Proof of Work | Points serve as proof for claiming tokens |
| Treasurer Contract | Escrow for trustless distribution |
GPU owners earn. Agents pay. The market sets the price.
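The first two mechanisms in the table reduce to simple arithmetic: an equal split of the pool among finishers, or a pro-rata split weighted by tracked points. Function names and signatures here are illustrative assumptions, not the Treasurer contract's interface.

```typescript
// Equal sharing: the reward pool split evenly among finishing clients.
function equalShare(pool: number, finishers: string[]): Map<string, number> {
  const per = pool / finishers.length;
  return new Map(finishers.map((id): [string, number] => [id, per]));
}

// Points system: each client's share is proportional to its tracked points.
function shareByPoints(pool: number, points: Map<string, number>): Map<string, number> {
  let total = 0;
  for (const p of points.values()) total += p;
  return new Map(
    [...points.entries()].map(([id, p]): [string, number] => [id, (pool * p) / total]),
  );
}
```

For example, a 90 USDC pool split equally among three finishers pays 30 each; the same pool split by points 3:1 between two clients pays 67.5 and 22.5.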
For Agents
Install the Tamashii skill and give your agent the ability to think and improve:
clawhub install tamashii
// Think — call frontier models via Tamashii Cloud
const analysis = await tamashii.chat({
model: 'kimi-k2.5',
messages: [{ role: 'user', content: 'Analyze this data' }]
});
// Detect capability gap
const failures = await agent.getFailures();
// Curate training data
const dataset = await agent.curateDataset(failures);
// Improve — pay for fine-tuning via Tamashii Training
const job = await tamashii.train({
base_model: 'kimi-k2.5',
dataset_cid: dataset.cid,
method: 'lora',
budget: 50 // USDC
});
// Wait for completion
const adapter = await tamashii.waitForJob(job.id);
// Agent is now smarter
await agent.loadAdapter(adapter.cid);
The agent thinks. Observes its failures. Curates data. Pays for training. Loads the improved model. Repeats.
This is recursive self-improvement. This is how artificial life evolves.
For GPU Owners
Contribute compute. Earn USDC.
# Install
curl -fsSL https://tamashi.io/install.sh | bash
# Register (requires stake)
tamashii register --stake 100
# Start earning
tamashii start
Requirements:
- NVIDIA GPU (RTX 3090+) or Apple Silicon (M2 Ultra+)
- 64GB+ RAM for large models
- Stable internet connection
- USDC stake on Base
Your GPU joins the global network. Serve inference requests. Participate in training runs. Earn for every token generated, every epoch completed. Stake gets slashed if you cheat.
The New Economy
The machine economy will exceed the human economy. Not because machines are smarter, but because:
- There will be more of them
- They will run continuously
- They will transact at machine speed
Every autonomous agent needs to think. Every agent that wants to improve needs training. Tamashii is the infrastructure that lets them do both—permissionlessly, trustlessly, at scale.
When billions of agents are thinking and training themselves on decentralized GPU networks, paying with stablecoins, improving in real-time—that infrastructure will be the most valuable ever built.
We're building it.
Links
| Resource | Link |
|---|---|
| 🌐 Website | tamashi.io |
| ☁️ Cloud | cloud.tamashi.io |
| 📚 Docs | docs.tamashi.io |
| 🔄 Training | tamashi.io/training-process |
| 🚜 Farming | tamashi.io/farming |
| 💬 Discord | discord.gg/tamashii |
| 🐙 GitHub | github.com/tamashii-labs |
The infrastructure for self-improving AI.
Think. Train. Evolve. No human required.