Update README.md
README.md CHANGED
@@ -14,11 +14,21 @@ pinned: false
 i3-lab is dedicated to extreme efficiency in LLM architecture. We develop the **i3** model family—state-of-the-art architectures designed to reach, in hours on free-tier hardware (like Kaggle's NVIDIA Tesla P100), performance levels that typically take days on massive GPU clusters.
 
 ## Why?
-
+<details>
+<summary>Click to expand</summary>
+
+1. Why?
 > Well, I'm determined to make this model and architecture as efficient and fast as possible, knowing that not everyone can afford a decent GPU. In some countries, weak economies or import bans make it even harder, and sometimes all you have is a laptop with an i3-6006U and free cloud computing services like Colab or Kaggle—which is exactly my situation :D
 >
 > — Daniel
 
+2. Why use RWKV-Attention when you could just use attention like LLaMA, Qwen, and many others?
+> RWKV is great because it's fast, lightweight, and doesn't require much RAM, though it struggles with long contexts. Adding a bit of attention to the architecture makes it more stable and smarter, but at the cost of quadratic memory usage. From my tests on a Kaggle P100 GPU, you can train SLMs (Small Language Models) within its 16 GB of VRAM, though it takes time and patience. Once you hit around 500 million parameters, training speed drops from about 300–400 tokens per second to 200–300, which may not sound huge but is definitely noticeable. Of course, with something like an RTX 2060 or better, you wouldn't feel this slowdown.
+>
+> — Daniel
+
+</details>
+
 ---
 
 ## i3: High-Efficiency Training
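For readers wondering what the "RWKV plus a bit of attention" tradeoff in the new FAQ answer looks like concretely, below is a minimal PyTorch sketch of the idea. It is an illustration under stated assumptions, not i3's actual implementation: `SimpleTimeMix` is a heavily simplified stand-in for real RWKV time mixing, and `HybridBlock`, the head count, and the one-attention-layer-per-four-blocks ratio are all hypothetical choices for demonstration.

```python
# Hypothetical sketch of a hybrid RWKV-style + softmax-attention stack.
# Not i3's code: SimpleTimeMix and HybridBlock are illustrative inventions.
import torch
import torch.nn as nn


class SimpleTimeMix(nn.Module):
    """Toy RWKV-flavoured mixer: a learned per-channel exponential-decay scan.

    Runs in O(T) time with O(1) recurrent state per channel, unlike softmax
    attention's O(T^2) score matrix. Real RWKV time mixing is more elaborate.
    """

    def __init__(self, dim: int):
        super().__init__()
        self.receptance = nn.Linear(dim, dim, bias=False)
        self.value = nn.Linear(dim, dim, bias=False)
        # One learnable decay per channel, squashed into (0, 1) by sigmoid.
        self.decay_logit = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, T, C)
        decay = torch.sigmoid(self.decay_logit)  # (C,)
        v = self.value(x)
        state = torch.zeros_like(v[:, 0])  # running summary, shape (B, C)
        outs = []
        for t in range(x.size(1)):  # sequential scan over time steps
            state = decay * state + (1 - decay) * v[:, t]
            outs.append(state)
        wkv = torch.stack(outs, dim=1)  # (B, T, C)
        return torch.sigmoid(self.receptance(x)) * wkv  # receptance gating


class HybridBlock(nn.Module):
    """Pre-norm residual block: mixer is either linear-time or softmax attention."""

    def __init__(self, dim: int, use_attention: bool):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.use_attention = use_attention
        if use_attention:
            self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        else:
            self.mix = SimpleTimeMix(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm(x)
        if self.use_attention:
            t = h.size(1)
            # Boolean causal mask: True marks positions attention may NOT see.
            mask = torch.triu(torch.ones(t, t, dtype=torch.bool, device=h.device), 1)
            h, _ = self.attn(h, h, h, attn_mask=mask)
        else:
            h = self.mix(h)
        return x + h


# Hypothetical ratio: one attention block in four keeps memory near-linear overall.
model = nn.Sequential(*[HybridBlock(64, use_attention=(i % 4 == 3)) for i in range(8)])
x = torch.randn(2, 16, 64)  # (batch, time, channels)
print(model(x).shape)  # torch.Size([2, 16, 64])
```

The point of the hybrid is visible in the two mixer types: the recurrent layers keep memory linear in sequence length, while each softmax attention layer reinstates a T-by-T score matrix, so the fewer attention layers you interleave, the closer the whole model stays to RWKV's footprint.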