<img src="./hxa079.png" style="border-radius: 15px; width: 60%; height: 60%; object-fit: cover; box-shadow: 10px 10px 20px rgba(0, 0, 0, 0.5); border: 2px solid white;" alt="PRWKV" />
</div>

> I'm simply exploring the possibility of linearizing existing Transformer models.
> It's still far from perfect,
> but I hope you'll bear with me as I continue this journey.

### Model Description

HRWKV7-Reka-Flash3-Preview is an experimental hybrid architecture model that combines RWKV v7's linear attention mechanism with Group Query Attention (GQA) layers. Built upon the Reka-flash3 21B foundation, this model replaces most Transformer attention blocks with RWKV blocks while strategically maintaining some GQA layers to enhance performance on specific tasks.
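As a rough illustration of the hybrid layout described above, the snippet below sketches a per-layer plan in which most positions are RWKV v7 blocks and a few retained indices stay as GQA. It is a minimal sketch only: the names (`HybridLayerPlan`, `build_layer_types`) and the layer counts/indices are hypothetical and do not reflect the model's actual configuration or code.

```python
# Illustrative sketch only -- hypothetical names and layer indices,
# not the real configuration of HRWKV7-Reka-Flash3-Preview.
from dataclasses import dataclass, field
from typing import List


@dataclass
class HybridLayerPlan:
    num_layers: int                                        # total decoder layers in the base model
    gqa_layers: List[int] = field(default_factory=list)    # indices kept as GQA


def build_layer_types(plan: HybridLayerPlan) -> List[str]:
    """Mark each layer as an RWKV v7 block unless it is one of the
    strategically retained GQA layers."""
    return ["gqa" if i in plan.gqa_layers else "rwkv7" for i in range(plan.num_layers)]


if __name__ == "__main__":
    # Hypothetical 32-layer stack that keeps every eighth layer as GQA.
    plan = HybridLayerPlan(num_layers=32, gqa_layers=[7, 15, 23, 31])
    print(build_layer_types(plan))
```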