# TX-16G

Maximum local capability. Runs on 16 GB RAM.

TX-16G is TARX's flagship model, offering the best reasoning and generation quality available for local inference.
## Model Details
| Property | Value |
|---|---|
| Parameters | 14B |
| Quantization | Minimal (near full precision) |
| RAM Required | 16 GB minimum |
| GPU VRAM | 12 GB+ recommended |
| Context Length | 32,768 tokens |
| License | Apache 2.0 |
## Capabilities
- ✅ Everything TX-12G does, plus:
- ✅ Long-context reasoning (32K tokens; see the sketch after this list)
- ✅ Complex creative writing
- ✅ Advanced code architecture
- ✅ Nuanced analysis with citations
- ✅ Multi-document synthesis
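Since long-context work is the headline capability, it is worth checking that a document set actually fits in the 32K window before prompting. Here is a minimal sketch, assuming the repo id from this card and the standard `transformers` tokenizer API; the helper name and the output-token reserve are illustrative:

```python
from transformers import AutoTokenizer

# Tokenizer for TX-16G (repo id from this card).
tokenizer = AutoTokenizer.from_pretrained("Tarxxxxxx/TX-16G")

MAX_CONTEXT = 32_768  # TX-16G's context length, per the table above

def fits_in_context(documents: list[str], reserve_for_output: int = 1024) -> bool:
    """Return True if the concatenated documents fit in the context window,
    leaving room for the model's generated answer."""
    prompt = "\n\n".join(documents)
    n_tokens = len(tokenizer(prompt).input_ids)
    return n_tokens + reserve_for_output <= MAX_CONTEXT
```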
## When to Use TX-16G
TX-16G is for users who:
- Have 16 GB+ RAM or a high-end GPU
- Work on complex, nuanced tasks
- Need maximum quality
- Process long documents
For most users, TX-8G or TX-12G is sufficient and faster.
## Usage

### With TARX Desktop

Settings → Model → TX-16G

### With Transformers
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model; device_map="auto" places weights on the GPU when available.
tokenizer = AutoTokenizer.from_pretrained("Tarxxxxxx/TX-16G")
model = AutoModelForCausalLM.from_pretrained(
    "Tarxxxxxx/TX-16G",
    device_map="auto",
    torch_dtype=torch.bfloat16,
)
```
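Once loaded, a minimal generation call looks like the following; the prompt and sampling settings are illustrative, and the calls are the standard `transformers` generate/decode interface:

```python
prompt = "Explain the difference between a process and a thread."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 256 new tokens; temperature and sampling values are illustrative defaults.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```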
## Hardware Requirements
| Hardware | Performance |
|---|---|
| Apple M2/M3 Pro/Max (32 GB+) | ★★★★★ Excellent |
| NVIDIA RTX 4080/4090 | ★★★★★ Excellent |
| NVIDIA RTX 3090 | ★★★★ Good |
| 32 GB+ System RAM | ★★★ Usable (slow) |
## Links
Built by TARX | tarx.com