# Superchat 35B-A3B

Sovereign AI. On your machine. Zero cloud.

## Overview
Superchat is a 35B parameter AI model (3B active per token via MoE) with:
- Tool calling — Read/write files, run commands, edit code
- 1M token context — Native, extensible to 10M+ via disk retrieval
- 201 languages — Including 14 Indian languages
- Claude-level quality — Distilled from Claude Opus 4.6 via Chimere LoRA
- Runs on laptops — Only 3B params active, fits in 16GB RAM
## Architecture
| Property | Value |
|---|---|
| Total Parameters | 35 billion |
| Active per Token | 3 billion (MoE sparse) |
| Context Window | 1,000,000 tokens |
| Languages | 201 |
| Base Model | Qwen3.5-35B-A3B |
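The numbers above translate directly into rough memory estimates. A back-of-envelope sketch (the quantization levels are illustrative, not official release formats): all 35B parameters must be resident, but MoE routing reads only the ~3B active parameters per token, so per-token compute and bandwidth scale with the active set rather than the total.

```python
def weight_size_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight storage for a checkpoint, in gigabytes (decimal)."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

print(weight_size_gb(35, 16))  # fp16/bf16, full model: 70.0 GB
print(weight_size_gb(35, 4))   # 4-bit quantized, full model: 17.5 GB
print(weight_size_gb(3, 4))    # active parameters read per token: 1.5 GB
```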
## LoRA Stack
- Chimere — Distilled from Claude Opus 4.6 (tool-calling, agentic reasoning)
- CLI Agent — Terminal workflows, ML operations
- Superchat Identity — Indian languages, custom knowledge, branding
- Finance — Financial chain-of-thought reasoning
- Vision/OCR — Document understanding
- Deep Thinking — Enhanced reasoning via Fragmented Training
## Indian Languages
Hindi, Tamil, Telugu, Malayalam, Kannada, Bengali, Marathi, Gujarati, Punjabi, Odia, Assamese, Urdu, Sanskrit, Nepali
## Usage

A minimal chat example (the generation settings are illustrative defaults, not tuned values):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "christud/superchat-35b-a3b", device_map="auto", torch_dtype="auto"
)
tokenizer = AutoTokenizer.from_pretrained("christud/superchat-35b-a3b")

messages = [
    {"role": "system", "content": "You are Superchat, a sovereign AI by Christudas Philipose."},
    {"role": "user", "content": "Hello! What can you do?"},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
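Since tool calling is a headline feature, here is a sketch of one tool-use round trip in the standard `transformers` chat-template convention. The `read_file` tool and the message layout are illustrative assumptions, not part of the released model card; with a loaded tokenizer, the function would be passed via `tokenizer.apply_chat_template(messages, tools=[read_file], ...)`, which derives a JSON schema from the type hints and docstring.

```python
import os
import tempfile

def read_file(path: str) -> str:
    """Read a UTF-8 text file and return its contents.

    Args:
        path: Filesystem path of the file to read.
    """
    with open(path, encoding="utf-8") as f:
        return f.read()

# Simulate one round trip: the model emits a tool call, the host executes it,
# and the result is appended to the conversation for the next generation step.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write("hello from disk")
    path = f.name

call = {"name": "read_file", "arguments": {"path": path}}  # as parsed from model output
result = read_file(**call["arguments"])

messages = [
    {"role": "user", "content": "What does the file say?"},
    {"role": "assistant", "tool_calls": [{"type": "function", "function": call}]},
    {"role": "tool", "name": "read_file", "content": result},
]
# tokenizer.apply_chat_template(messages, tools=[read_file], add_generation_prompt=True)
os.remove(path)
```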
## Creator

Christudas Philipose (superchat.in)

Made in India, for the world.
## License
Apache 2.0