---
title: TARX
emoji: 🖥️
colorFrom: blue
colorTo: gray
sdk: static
pinned: false
---
# TARX

**Local-first AI that belongs to you.**

We build AI infrastructure that runs on your machine. Your conversations never leave your device.
## What We're Building

TARX is inverting AI: from centralized to distributed, from extraction to participation, from users as products to users as owners.
- **Local Inference**: TX models optimized to run on consumer hardware
- **Mesh Network**: A distributed supercomputer owned by its users
- **Zero Surveillance**: We can't see your data because we never receive it
## Models

| Model | Parameters | RAM Required | Best For |
|---|---|---|---|
| TX-8G | 7B | 8 GB | General use, most users |
| TX-12G | 12B | 12 GB | Complex reasoning, code |
| TX-16G | 14B | 16 GB | Maximum capability |
## Quick Start

Download TARX Desktop (models included) for macOS, Windows, and Linux: https://tarx.com/download
Or use the models directly with 🤗 Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Tarxxxxxx/TX-8G")
tokenizer = AutoTokenizer.from_pretrained("Tarxxxxxx/TX-8G")
```
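Once loaded, generation follows the standard Transformers `generate` API. A minimal sketch (the prompt and decoding settings here are illustrative, not model-specific recommendations; downloading the weights requires network access):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tarxxxxxx/TX-8G"  # repo id as listed above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a prompt, generate up to 64 new tokens, and decode the result.
inputs = tokenizer("Explain local-first AI in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```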
## Links

- 🌐 Website
- 📖 Documentation
- 📥 [Download](https://tarx.com/download)
- 🐦 Twitter/X
## About

TARX is building the protocol for decentralized AI. Designed by John Wantz Jr. in Austin, Texas, United States.

This is infrastructure, not a chatbot wrapper.

**Your AI. Your machine. Your data.**