---
title: Opus Research
emoji: 🧠
colorFrom: purple
colorTo: blue
sdk: static
pinned: false
---
<div align="center">
  <h1>🧠 Opus Research</h1>
  <p><strong>Training AI models from scratch, one parameter at a time.</strong></p>
  <p><em>"We stand at the right place at the right time."</em></p>
</div>
## 📖 About Us

We're two teenage AI enthusiasts (ages 15 & 17) passionate about understanding AI from the ground up. Instead of just using pre-trained models, we build them ourselves.
## 🚀 Our Models

| Model | Parameters | Architecture | Status |
|-------|------------|--------------|--------|
| [Opus 1.5](https://huggingface.co/opus-research/opus-1.5) | 0.88B | LLaMA-style | ✅ Released |
| Opus 2.0 | 3B+ | LLaMA + Reasoning | 🔜 Coming Soon |
## 🔬 Research Focus

- **Training from scratch** - No pre-trained weights, 100% original
- **Chain-of-thought reasoning** - Teaching models to think before answering
- **Efficient architectures** - Sub-3B models that run on consumer GPUs
## 🌟 Opus 1.5 Highlights

- **0.88 billion parameters**
- **~2 GB VRAM** for inference
- **42 hours** of training on 2x RTX 4090
- **LLaMA architecture** with RoPE, SwiGLU, GQA, FlashAttention-2
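The ~2 GB VRAM figure checks out with a quick back-of-the-envelope estimate (a sketch assuming fp16/bf16 weights, i.e. 2 bytes per parameter; the KV cache and activations add a bit on top):

```python
# Rough inference memory footprint of Opus 1.5's weights alone.
# Assumes half-precision (fp16/bf16) storage: 2 bytes per parameter.
params = 0.88e9          # 0.88 billion parameters
bytes_per_param = 2      # fp16 / bf16
weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.2f} GB for weights")  # ~1.76 GB, in line with the ~2 GB figure
```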
## 🔗 Links

- 📦 [Opus 1.5 on HuggingFace](https://huggingface.co/opus-research/opus-1.5)
## ⚡ Powered By

- ❤️ Two teenagers who should probably be doing homework
- 🎮 Two RTX 4090s that haven't known rest since December 2025
- 🔥 Questionable GPU thermals
- 📉 A concerning amount of time staring at loss curves
- 💡 48 GB of VRAM
- ⚡ ~847 kWh of electricity we'll never get back
- 💸 Our parents' electricity bill
- 🤖 The firm belief that we could've just used ChatGPT
<div align="center">
  <p><em>But where's the fun in that?</em></p>
</div>