---
title: Opus Research
emoji: 🧠
colorFrom: purple
colorTo: blue
sdk: static
pinned: false
---

# 🧠 Opus Research

Training AI models from scratch, one parameter at a time.

> "We stand at the right place at the right time."

## 👋 About Us

We're two teenage AI enthusiasts (ages 15 & 17) passionate about understanding AI from the ground up. Instead of just using pre-trained models, we build them ourselves.

## 🚀 Our Models

| Model | Parameters | Architecture | Status |
|-------|------------|--------------|--------|
| Opus 1.5 | 0.88B | LLaMA-style | ✅ Released |
| Opus 2.0 | 3B+ | LLaMA + Reasoning | 🔜 Coming Soon |

## 🔬 Research Focus

- **Training from scratch** - no pre-trained weights, 100% original
- **Chain-of-thought reasoning** - teaching models to think before answering
- **Efficient architectures** - sub-3B models that run on consumer GPUs
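
As a quick sanity check on the consumer-GPU claim, weight memory at fp16 is roughly 2 bytes per parameter. A rough back-of-the-envelope sketch (weights only; real inference also needs room for the KV cache and activations):

```python
# Rough fp16 weight-memory estimate for the model sizes above.
# 2 bytes per parameter at fp16/bf16; overhead not included.
for name, params in [("Opus 1.5", 0.88e9), ("Opus 2.0", 3.0e9)]:
    gib = params * 2 / 1024**3
    print(f"{name}: ~{gib:.2f} GiB of weights at fp16")
```

For Opus 1.5 this comes out to roughly 1.6 GiB of weights, which is consistent with the ~2 GB VRAM figure quoted below once runtime overhead is added.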

## 📊 Opus 1.5 Highlights

- 0.88 billion parameters
- ~2 GB VRAM for inference
- 42 hours of training on 2× RTX 4090s
- LLaMA architecture with RoPE, SwiGLU, GQA, and FlashAttention-2
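
Of the components listed above, rotary position embeddings (RoPE) are the easiest to illustrate in a few lines. This is a generic NumPy sketch of the idea, not Opus's actual code: each pair of channels in a query/key vector is rotated by an angle that grows with the token position, so relative position falls out of the attention dot product. (The pairing below uses the split-halves layout; some implementations interleave channels instead.)

```python
import numpy as np

def rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embeddings to x of shape (seq_len, head_dim)."""
    seq_len, dim = x.shape
    half = dim // 2
    # One rotation frequency per channel pair, decaying geometrically.
    freqs = base ** (-np.arange(half) / half)        # (half,)
    angles = np.outer(np.arange(seq_len), freqs)     # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]                # channel pairs
    # 2-D rotation of each (x1, x2) pair by its position-dependent angle.
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)

# Position 0 is rotated by angle 0, i.e. left unchanged,
# and a rotation never changes a vector's length.
q = np.random.default_rng(0).normal(size=(16, 64))
assert np.allclose(rope(q)[0], q[0])
assert np.allclose(np.linalg.norm(rope(q), axis=-1),
                   np.linalg.norm(q, axis=-1))
```

Because only angles (not learned weights) encode position, the same function extrapolates to any sequence length at inference time.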

## 🔗 Links

## ⚡ Powered By

  • โค๏ธ Two teenagers who should probably be doing homework
  • ๐ŸŽฎ Two RTX 4090s that haven't known rest since December 2025
  • ๐Ÿ”ฅ Questionable GPU thermals
  • ๐Ÿ“Š A concerning amount of time staring at loss curves
  • ๐Ÿ’ก 48GB of VRAM
  • โšก ~847 kWh of electricity we'll never get back
  • ๐Ÿ’ธ Our parents' electricity bill
  • ๐Ÿค” The firm belief that we could've just used ChatGPT

But where's the fun in that?