---
title: README
emoji:
colorFrom: indigo
colorTo: pink
sdk: static
pinned: false
---
# Welcome to i3-lab
**"Chase the SOTA pipeline, not the MMLU slop."**
i3-lab is dedicated to extreme efficiency in LLM architecture. We develop the **i3** model family: state-of-the-art architectures designed to reach, in hours on consumer-grade hardware (such as the NVIDIA Quadro P100), performance levels that typically require days on massive GPU clusters.
---
## i3: High-Efficiency Training
We specialize in hybrid architectures, specifically **RWKV-Attention**, to bypass the quadratic scaling bottleneck of traditional Transformers; a toy sketch of the idea follows the list below.
* **Fast Iteration:** Trainable in hours, not weeks.
* **Accessible SOTA:** High performance on legacy/mid-range hardware.
* **Open Research:** Pushing the boundaries of what is possible with limited compute.
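To make the hybrid idea concrete, here is a minimal, hedged PyTorch sketch. This is **not** the actual open-i3 code, and `ToyRWKVMix` and `HybridBlock` are illustrative names: an RWKV-style linear-time mixing layer handles sequence memory, and a single standard attention layer sits on top of it, so the quadratic cost is paid only where full attention is actually used.

```python
# Hedged sketch only; NOT the open-i3 implementation. All names are
# illustrative. It shows the general shape of an RWKV-Attention hybrid:
# an O(T) recurrent time-mixing layer paired with a single standard
# (quadratic) attention layer inside one residual block.
import torch
import torch.nn as nn

class ToyRWKVMix(nn.Module):
    """RWKV-style time mixing, simplified to a per-channel exponential decay."""
    def __init__(self, dim: int):
        super().__init__()
        self.decay = nn.Parameter(torch.zeros(dim))      # parameterized so w stays in (0, 1)
        self.key, self.value = nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.receptance, self.out = nn.Linear(dim, dim), nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, C)
        k, v = self.key(x), self.value(x)
        r = torch.sigmoid(self.receptance(x))            # gate in (0, 1)
        w = torch.exp(-torch.exp(self.decay))            # per-channel decay in (0, 1)
        state = torch.zeros_like(x[:, 0])                # running weighted sum, (B, C)
        outs = []
        for t in range(x.size(1)):                       # linear-time scan over T
            state = w * state + torch.exp(k[:, t]) * v[:, t]
            outs.append(r[:, t] * state)
        return self.out(torch.stack(outs, dim=1))

class HybridBlock(nn.Module):
    """One hypothetical block: RWKV-style mixing, then causal self-attention."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mix = ToyRWKVMix(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.mix(self.norm1(x))                  # cheap O(T) mixing
        h = self.norm2(x)
        causal = torch.triu(                             # True = position masked out
            torch.ones(x.size(1), x.size(1), dtype=torch.bool, device=x.device), 1)
        return x + self.attn(h, h, h, attn_mask=causal, need_weights=False)[0]

if __name__ == "__main__":
    x = torch.randn(2, 16, 64)                           # (batch, seq, dim)
    print(HybridBlock(64)(x).shape)                      # torch.Size([2, 16, 64])
```

The real architecture lives in the [FlameF0X/open-i3](https://github.com/FlameF0X/open-i3) repository.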
### Quick Links
* **Source Code:** [FlameF0X/open-i3](https://github.com/FlameF0X/open-i3)
* **Community:** [Join our Discord](https://discord.gg/qtXApjpaJF)
---
## Roadmap / TODO
We are currently scaling our architecture through the following milestones:
- [ ] **i3-Ethan-it** — Specialized instruction-tuned variant.
- [ ] **i3-1B** — Our first major scale-up.
- [ ] **i3-7B-A1.6B** — Mixture of Experts / Sparsity testing.
---
## Usage & Attribution
The `open-i3` codebase is licensed under **Apache 2.0**. We believe in open source, but we value attribution.
If you use our architecture (RWKV-Attention) or our weights, **Sections 4(b)** and **4(d)** of the license require you to:
1. Carry prominent notices of any modifications.
2. Include a readable copy of the attribution notices from our **NOTICE** file.
> [!IMPORTANT]
> You **must** include the attribution link found in the [open-i3 GitHub](https://github.com/FlameF0X/open-i3) in your documentation or model card.
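For illustration only (copy the exact required wording from the repository), a downstream model card might carry something like:

```markdown
<!-- Hypothetical example; use the exact attribution text from the open-i3 repo. -->
Built on the **RWKV-Attention** architecture from
[open-i3](https://github.com/FlameF0X/open-i3) by FlameF0X (Apache 2.0).
```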
---
<p align="center">
Made with ❤️ and <b>DETERMINATION</b> by Daniel.
</p>