🧠 Opus Research
Training AI models from scratch, one parameter at a time.
"We stand at the right place at the right time."
👋 About Us
We're two teenage AI enthusiasts (ages 15 & 17) passionate about understanding AI from the ground up. Instead of just using pre-trained models, we build them ourselves.
🚀 Our Models
| Model | Parameters | Architecture | Status |
|---|---|---|---|
| Opus 1.5 | 0.88B | LLaMA-style | ✅ Released |
| Opus 2.0 | 3B+ | LLaMA + Reasoning | 🔜 Coming Soon |
🔬 Research Focus
- Training from scratch - No pre-trained weights, 100% original
- Chain-of-thought reasoning - Teaching models to think before answering (see the sketch after this list)
- Efficient architectures - Sub-3B models that run on consumer GPUs
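
To make the chain-of-thought bullet concrete, here's a rough sketch of what a think-before-answer training sample might look like. The field names and the `<think>`/`<answer>` tags below are just illustrative placeholders, not our actual data format:

```python
# Illustrative only: the <think>/<answer> tags and field names are placeholders,
# not the actual Opus training data format.
cot_example = {
    "prompt": "A train leaves at 3:40 pm and the trip takes 85 minutes. When does it arrive?",
    "completion": (
        "<think>85 minutes is 1 hour 25 minutes. "
        "3:40 pm + 1 hour = 4:40 pm, plus 25 minutes = 5:05 pm.</think>\n"
        "<answer>5:05 pm</answer>"
    ),
}

# The idea: supervise the model to emit its reasoning inside <think> before the
# final <answer>, so the answer tokens are conditioned on the reasoning tokens.
print(cot_example["completion"])
```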
📊 Opus 1.5 Highlights
- 0.88 billion parameters
- ~2 GB VRAM for inference (see the loading sketch below)
- 42 hours of training on 2× RTX 4090s
- LLaMA-style architecture with RoPE, SwiGLU, GQA, and FlashAttention-2
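
As a sanity check on the VRAM figure: at fp16, 0.88B parameters take roughly 0.88 × 2 bytes ≈ 1.8 GB of weights, before activations and KV cache. Here's a minimal loading sketch, assuming the checkpoint is published in standard Hugging Face Transformers format (the repo id below is a placeholder):

```python
# Minimal inference sketch. "opus-research/opus-1.5" is a placeholder repo id,
# and we assume the weights are in standard Transformers format.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "opus-research/opus-1.5"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,   # 0.88B params * 2 bytes ≈ 1.8 GB of weights
    device_map="auto",           # fits comfortably on a single consumer GPU
)

prompt = "Explain rotary position embeddings in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```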
🔗 Links
⚡ Powered By
- ❤️ Two teenagers who should probably be doing homework
- 🎮 Two RTX 4090s that haven't known rest since December 2025
- 🔥 Questionable GPU thermals
- 📊 A concerning amount of time staring at loss curves
- 💡 48 GB of VRAM
- ⚡ ~847 kWh of electricity we'll never get back
- 💸 Our parents' electricity bill
- 🤔 The firm belief that we could've just used ChatGPT
But where's the fun in that?