---
title: Opus Research
emoji: 🧠
colorFrom: purple
colorTo: blue
sdk: static
pinned: false
---
# 🧠 Opus Research

Training AI models from scratch, one parameter at a time.

> "We stand at the right place at the right time."
## 👋 About Us
We're two teenage AI enthusiasts (ages 15 & 17) passionate about understanding AI from the ground up. Instead of just using pre-trained models, we build them ourselves.
## 🚀 Our Models

| Model | Parameters | Architecture | Status |
|---|---|---|---|
| Opus 1.5 | 0.88B | LLaMA-style | ✅ Released |
| Opus 2.0 | 3B+ | LLaMA + Reasoning | 🔜 Coming Soon |
## 🔬 Research Focus
- Training from scratch - No pre-trained weights, 100% original
- Chain-of-thought reasoning - Teaching models to think before answering
- Efficient architectures - Sub-3B models that run on consumer GPUs
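The consumer-GPU claim is easy to sanity-check with back-of-envelope math: at fp16/bf16 precision, weights cost 2 bytes per parameter, so a sub-3B model's weights alone fit comfortably on a typical 8–24 GB card (a rough sketch only; the KV cache and activations add more on top):

```python
def weight_vram_gib(params: float, bytes_per_param: int = 2) -> float:
    """Approximate VRAM taken by model weights alone (fp16/bf16 = 2 bytes/param)."""
    return params * bytes_per_param / 1024**3

# Weights-only footprint; runtime memory (KV cache, activations) is extra.
print(f"0.88B params: {weight_vram_gib(0.88e9):.2f} GiB")  # ~1.64 GiB
print(f"3B params:    {weight_vram_gib(3e9):.2f} GiB")     # ~5.59 GiB
```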
## 🌟 Opus 1.5 Highlights
- 0.88 billion parameters
- ~2 GB VRAM for inference
- 42 hours of training on 2× RTX 4090s
- LLaMA-style architecture with RoPE, SwiGLU, GQA, and FlashAttention-2
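Of the components above, RoPE fits in a few lines; here is a minimal NumPy sketch using the split-half pairing convention (illustrative only, not Opus's actual code; real implementations vary in pairing layout and cache the angles):

```python
import numpy as np

def rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Rotary position embedding (RoPE), split-half pairing convention.

    x: (seq_len, head_dim). The channel pair (i, i + head_dim/2) at position p
    is rotated by angle p * base**(-2i/head_dim), so attention dot products
    end up depending on relative position.
    """
    seq_len, dim = x.shape
    half = dim // 2
    pos = np.arange(seq_len)[:, None]                  # (seq_len, 1)
    inv_freq = base ** (-np.arange(half) * 2.0 / dim)  # (half,)
    angles = pos * inv_freq                            # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```

Two properties make this easy to test: position 0 is left unchanged (all angles are zero), and pure rotations preserve vector norms.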
## 🔗 Links
## ⚡ Powered By
- ❤️ Two teenagers who should probably be doing homework
- 🎮 Two RTX 4090s that haven't known rest since December 2025
- 🔥 Questionable GPU thermals
- 📉 A concerning amount of time staring at loss curves
- 💡 48GB of VRAM
- ⚡ ~847 kWh of electricity we'll never get back
- 💸 Our parents' electricity bill
- 🤖 The firm belief that we could've just used ChatGPT
But where's the fun in that?