# Taipei 3.1

By Tripplet AI (Tripplet Artificial General Intelligence Research Institute)
Taipei 3.1 is a 754B-parameter Mixture-of-Experts (MoE) language model built on the GLM-5.1 architecture, delivering frontier-level performance in both English and Chinese.
## Model Details
- Parameters: 753.9B (MoE)
- Architecture: GLM MoE with Dense-Sparse Attention (glm_moe_dsa)
- Languages: English, Chinese
- License: MIT
- Precision: bfloat16
- Model Size: ~1.51 TB
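The listed model size follows directly from the parameter count: a quick sanity check, assuming 2 bytes per parameter for bfloat16 and counting weights only:

```python
# Sanity check: 753.9B parameters at 2 bytes each (bfloat16)
# matches the listed ~1.51 TB (decimal terabytes, weights only).
params = 753.9e9
size_tb = params * 2 / 1e12
print(f"~{size_tb:.2f} TB")  # -> ~1.51 TB
```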
## Key Features
- Frontier-class MoE architecture with dense-sparse attention
- Bilingual (English + Chinese) with strong multilingual capabilities
- Optimized for instruction following and conversational tasks
- Competitive with leading closed-source models
## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tripplet-research/taipei-3.1", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "tripplet-research/taipei-3.1",
    trust_remote_code=True,
    torch_dtype="bfloat16",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Hello, tell me about yourself."}
]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt", add_generation_prompt=True)
output = model.generate(inputs.to(model.device), max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
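For setups with less VRAM, a quantized load may help. The following is a minimal sketch using the standard `transformers` + `bitsandbytes` path; it assumes the checkpoint is compatible with on-the-fly 4-bit quantization, which has not been verified for this model:

```python
# Sketch: 4-bit quantized loading to reduce VRAM (weights-only footprint
# drops roughly 4x vs. bfloat16). Compatibility with this checkpoint is assumed.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "tripplet-research/taipei-3.1",
    trust_remote_code=True,
    quantization_config=quant_config,
    device_map="auto",
)
```

Quantization trades some accuracy for memory; benchmark before deploying.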
## Hardware Requirements
This model requires significant compute resources:
- Minimum: 8x A100 80GB or equivalent (~640GB+ VRAM)
- Recommended: 16x A100 80GB or H100 cluster for full precision inference
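These VRAM figures can be cross-checked with a weights-only estimate per precision (ignoring KV cache, activations, and runtime overhead; the int8 and int4 rows are hypothetical deployment options, not official releases):

```python
# Weights-only memory footprint per precision for a 753.9B-parameter model.
# KV cache, activations, and framework overhead are not included.
PARAMS = 753.9e9
for precision, bytes_per_param in [("bfloat16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    print(f"{precision}: ~{gb:,.0f} GB")
# bfloat16: ~1,508 GB
# int8:     ~754 GB
# int4:     ~377 GB
```

Note that the bfloat16 weights alone (~1.5 TB) exceed the ~640 GB minimum configuration, so that setup presumably assumes a quantized (e.g. int4) variant.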
## About Tripplet AI
Tripplet Artificial General Intelligence Research Institute is dedicated to advancing the frontiers of artificial general intelligence through open research and model development.
## License
MIT
## Model tree for tripplet-research/taipei-3.1

Base model: zai-org/GLM-5.1

## Evaluation Results
| Benchmark | Dataset | Score |
|---|---|---|
| AIME 2026 (MathArena) | MathArena/aime_2026 | 95.3 |
| HMMT Feb 2026 (MathArena) | MathArena/hmmt_feb_2026 | 82.6 |
| GPQA Diamond | Idavidrein/gpqa | 86.2 |
| HLE | cais/hle | 31 |
| HLE (with tools) | cais/hle | 52.3 * |
| SWE-bench Pro | ScaleAI/SWE-bench_Pro | 58.4 * |
| Terminal-Bench 2.0 | harborframework/terminal-bench-2.0 | 63.5 |
| Terminal-Bench 2.0 (agent: Terminus 2, Claude Code) | harborframework/terminal-bench-2.0 | 69 * |