# TX-8G
Local AI model optimized for consumer hardware. Runs in 8 GB of RAM.
TX-8G is TARX's default model, designed to run efficiently on most modern computers while delivering strong performance across general tasks.
## Model Details
| Property | Value |
|---|---|
| Parameters | 7B |
| Quantization | 8-bit (GGUF) |
| RAM Required | 8 GB minimum |
| Context Length | 8,192 tokens |
| License | Apache 2.0 |
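The 8,192-token window covers the prompt plus the generated reply. A quick way to check that an input will fit (a minimal sketch; the helper name and the 512-token output reserve are illustrative, not part of the model's API):

```python
from transformers import AutoTokenizer

CONTEXT_LENGTH = 8192  # from the table above

tokenizer = AutoTokenizer.from_pretrained("Tarxxxxxx/TX-8G")

def fits_in_context(text: str, reserve_for_output: int = 512) -> bool:
    # Count prompt tokens and keep headroom for the model's reply.
    n_prompt_tokens = len(tokenizer(text)["input_ids"])
    return n_prompt_tokens + reserve_for_output <= CONTEXT_LENGTH

print(fits_in_context("Explain how local AI protects privacy."))  # True
```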
## Capabilities
- ✅ General conversation
- ✅ Writing assistance
- ✅ Code explanation & simple generation
- ✅ Document analysis
- ✅ Image understanding (vision)
- ✅ Research & summarization
## Performance
Benchmarks vs comparable models:
| Benchmark | TX-8G | Llama-3-8B | Qwen2.5-7B |
|---|---|---|---|
| MMLU | TBD | 66.6 | 74.2 |
| HumanEval | TBD | 62.2 | 75.6 |
| MT-Bench | TBD | 8.0 | 8.5 |
*Full benchmarks coming in Q1 2026.*
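In the meantime, the baseline numbers above can be reproduced (and TX-8G scored once results land) with EleutherAI's lm-evaluation-harness. A sketch; the task selection and few-shot count here are illustrative, not the official evaluation setup:

```python
import lm_eval

# Score the model on MMLU with the standard 5-shot setting.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Tarxxxxxx/TX-8G,dtype=auto",
    tasks=["mmlu"],
    num_fewshot=5,
)
print(results["results"]["mmlu"])
```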
## Usage
### With TARX Desktop (Recommended)
Download TARX Desktop; the model ships with the app:

https://tarx.com/download
### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Tarxxxxxx/TX-8G"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place layers on GPU/CPU automatically (requires accelerate)
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

messages = [
    {"role": "user", "content": "Explain how local AI protects privacy."}
]

# Format the conversation with the model's chat template and append the
# assistant prompt so generation starts at the right position.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
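For interactive use, the same setup can stream tokens to stdout as they are generated. This continues from the example above, using Transformers' `TextStreamer`:

```python
from transformers import TextStreamer

# Print tokens as they arrive instead of waiting for the full completion.
# skip_prompt hides the echoed input; decode kwargs are forwarded as-is.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    streamer=streamer,
)
```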
### With llama.cpp
```bash
# Download the GGUF weights
wget https://huggingface.co/Tarxxxxxx/TX-8G/resolve/main/tx-8g.Q8_0.gguf

# Run with llama.cpp (the CLI binary is named llama-cli in recent builds)
./llama-cli -m tx-8g.Q8_0.gguf -p "Hello, I'm TARX." -n 256
```
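The same GGUF file also works from Python via the llama-cpp-python bindings. A minimal sketch; it assumes the GGUF metadata includes a chat template:

```python
from llama_cpp import Llama

# Load the quantized weights; n_ctx matches the 8,192-token window.
llm = Llama(model_path="tx-8g.Q8_0.gguf", n_ctx=8192)

# Chat-style completion using the template embedded in the GGUF metadata.
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain how local AI protects privacy."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```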
### With Ollama
```bash
ollama run tarx/tx-8g
```
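Ollama also serves a local HTTP API (default `http://localhost:11434`), wrapped by the official `ollama` Python package. A sketch, assuming the model has been pulled as shown above:

```python
import ollama

# Sends the request to the local Ollama server; nothing leaves the machine.
response = ollama.chat(
    model="tarx/tx-8g",
    messages=[{"role": "user", "content": "Explain how local AI protects privacy."}],
)
print(response["message"]["content"])
```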
## Hardware Requirements
| Hardware | Performance |
|---|---|
| Apple M1/M2/M3 (8GB) | ⭐⭐⭐⭐⭐ Excellent |
| Apple M1/M2/M3 (16GB+) | ⭐⭐⭐⭐⭐ Excellent |
| Intel i5 + 16GB RAM | ⭐⭐⭐⭐ Good |
| Intel i7 + NVIDIA GPU | ⭐⭐⭐⭐⭐ Excellent |
| AMD Ryzen + 16GB | ⭐⭐⭐⭐ Good |
## Quantization Options
| Format | Size | RAM | Speed | Quality |
|---|---|---|---|---|
| Q8_0 | 7.2 GB | 8 GB | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ |
| Q6_K | 5.5 GB | 6 GB | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| Q4_K_M | 4.1 GB | 5 GB | ⭐⭐⭐⭐⭐ | ⭐⭐⭐ |
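To fetch one variant directly, `huggingface_hub` can download a single file from the repo. Only the Q8_0 filename is confirmed above; other variants are assumed to follow the same naming pattern:

```python
from huggingface_hub import hf_hub_download

# Downloads the file into the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="Tarxxxxxx/TX-8G",
    filename="tx-8g.Q8_0.gguf",  # swap in another variant's filename as needed
)
print(path)  # pass this path to llama.cpp or llama-cpp-python
```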
## Training
TX-8G is fine-tuned from Qwen2.5-7B-Instruct with:
- Additional instruction tuning for local-first use cases
- Optimization for consumer hardware inference
- Enhanced privacy-aware responses
Training data does not include any TARX user conversations (we don't have access to them).
## Ethical Considerations
TX-8G is designed for local, private use. Because it runs on user devices:
- No user data is collected
- No conversations are logged
- No usage is monitored
- Users have complete control
## Citation
```bibtex
@misc{tarx2026tx8g,
  title={TX-8G: Local-First Language Model for Consumer Hardware},
  author={TARX Team},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/Tarxxxxxx/TX-8G}
}
```
## Links
Built by TARX | tarx.com