---
license: apache-2.0
---

# HRWKV7-Reka-Flash3-Preview

## Model Description

HRWKV7-Reka-Flash3-Preview is an experimental hybrid-architecture model that combines RWKV v7's linear attention mechanism with Grouped-Query Attention (GQA) layers. Built on the Reka-flash3 21B foundation, it replaces most of the Transformer attention blocks with RWKV blocks while strategically retaining a small number of GQA layers to improve performance on specific tasks.

- **Developed by:** OpenMOSE
- **Model type:** Hybrid Linear-Attention Language Model
- **Language(s):** Multilingual (inherited from Reka-flash3 21B)
- **License:** Apache-2.0
- **Base Model:** Reka-flash3 21B
- **Year:** 2025

## Architecture Specifications

- **Architecture:** RWKV v7-based "hxa079" architecture + Grouped-Query Attention hybrid (layer layout sketched below)
- **Total Layers:** 44 (L44D6144)
  - 38 RWKV layers (with RoPE)
  - 6 GQA layers (no RoPE, no position embeddings)
- **Hidden Dimension:** 6144
- **Training Context Window:** 4096 tokens
- **Inference Context Window:** 32768 tokens
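
As a concrete illustration of the 38/6 split, here is a minimal sketch of how such a stack could be assembled. The card does not state which layer indices keep attention, so the evenly spaced `GQA_LAYER_IDS` below is purely an assumption:

```python
import torch.nn as nn

# 6 of the 44 layers are GQA per the spec above; the positions are hypothetical.
NUM_LAYERS = 44
GQA_LAYER_IDS = {7, 14, 21, 28, 35, 42}

def build_hybrid_stack(make_rwkv_block, make_gqa_block) -> nn.ModuleList:
    """Assemble the 44-layer stack: RWKV blocks everywhere except the GQA slots."""
    return nn.ModuleList(
        [make_gqa_block() if i in GQA_LAYER_IDS else make_rwkv_block()
         for i in range(NUM_LAYERS)]
    )

# Usage with placeholder blocks (real blocks would be RWKV7 / GQA modules):
stack = build_hybrid_stack(lambda: nn.Identity(), lambda: nn.Identity())
assert sum(i in GQA_LAYER_IDS for i in range(NUM_LAYERS)) == 6
```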

## Technical Innovation

### RWKV "hxa079" Architecture

The model implements several key improvements over standard RWKV architectures:

1. **Token Shift Removal**: Unlike traditional RWKV, the hxa079 variant removes the token-shift mechanism (illustrated below)
2. **GroupNorm Removal**: Eliminates GroupNorm layers to improve training stability
3. **k_first Introduction**: Introduces a novel k_first mechanism optimized for attention conversion
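
For context on item 1, a minimal sketch of the token shift that standard RWKV applies and that hxa079 omits; the 50/50 mixing ratio is illustrative, not a trained parameter value:

```python
import torch
import torch.nn.functional as F

def token_shift(x: torch.Tensor) -> torch.Tensor:
    """Standard RWKV token shift: pair each position with the previous token.
    x: (batch, seq_len, dim); position 0 is paired with zeros."""
    return F.pad(x, (0, 0, 1, -1))  # shift one step along the sequence axis

x = torch.randn(1, 8, 16)
mixed = 0.5 * x + 0.5 * token_shift(x)  # illustrative mixing ratio
# hxa079, as described above, skips this step and feeds x to the block directly.
```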

### Hybrid Design Benefits

- **Linear-Attention Inference**: RWKV blocks maintain a fixed-size recurrent state, giving O(1) memory in sequence length during inference (see the sketch after this list)
- **Enhanced Needle Tasks**: Strategic placement of GQA layers significantly improves performance on needle-in-a-haystack retrieval tasks, addressing a known limitation of pure linear-attention models
- **Implicit Position Encoding**: Interestingly, the model performs better when RoPE (Rotary Position Embedding) is *not* applied to the GQA layers, suggesting that the RWKV blocks provide implicit positional information
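
To make the O(1) claim in the first bullet concrete, here is a schematic linear-attention recurrence. It is deliberately simplified (RWKV-7's actual kernel uses a more involved decay-and-update rule): the point is that the state is a single dim x dim matrix whose size does not grow with sequence length.

```python
import torch

def linear_attention_step(state, q, k, v, decay):
    """One recurrent step of a generic linear-attention layer.
    state: (dim, dim) running summary; q, k, v: (dim,); decay: (dim,) in (0, 1).
    Schematic only: RWKV-7's real update also applies learned in-context edits."""
    state = state * decay.unsqueeze(-1) + torch.outer(v, k)  # constant-size update
    y = state @ q                                            # read-out for this token
    return state, y

dim = 16
state = torch.zeros(dim, dim)
decay = torch.full((dim,), 0.9)
for _ in range(1000):  # memory stays O(dim^2), independent of sequence length
    q, k, v = torch.randn(3, dim)
    state, y = linear_attention_step(state, q, k, v, decay)
```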

## Intended Use

This is an **experimental research model** designed to explore hybrid architectures combining linear and quadratic attention mechanisms. It is intended for:

- Research into efficient attention mechanisms
- Benchmarking hybrid-architecture performance
- Exploring the limitations of linear attention and possible solutions
- Academic and industrial R&D
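
For reference, a minimal loading sketch. This assumes the repository ships custom modeling code loadable through `transformers`' `trust_remote_code` path; the repo id and generation settings below are illustrative, not confirmed by this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "OpenMOSE/HRWKV7-Reka-Flash3-Preview"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(repo, trust_remote_code=True)

inputs = tokenizer("The hybrid RWKV/GQA design", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```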

## Limitations

- **Experimental Status**: This model is at an experimental stage and may exhibit unexpected behaviors
- **Context Window**: Training was limited to 4096 tokens; the RWKV architecture theoretically supports longer sequences, and the inference window is set to 32768 tokens
- **Performance Variability**: As a hybrid model, performance may vary significantly across task types

## Training Details

- **Training Context Window:** 4096 tokens
- **Base Model Initialization:** Weights initialized from Reka-flash3 21B
- **Architecture Conversion:** Transformer attention blocks systematically replaced with RWKV blocks, except for 6 strategically placed GQA layers

## Evaluation

Performance evaluation is ongoing. The model shows promising results in:

- Maintaining base-model capabilities while achieving linear-attention efficiency
- Significantly improved needle-in-a-haystack performance compared to pure RWKV architectures (a simple probe is sketched below)
- Competitive performance on standard language-modeling benchmarks
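
A sketch of the kind of needle-in-a-haystack probe such claims are usually based on; the needle wording, filler text, and substring scoring below are hypothetical, not the authors' actual harness.

```python
def make_needle_prompt(filler: str, needle: str, depth: float, length: int) -> str:
    """Hide `needle` at a relative `depth` (0.0-1.0) inside `length` chars of filler."""
    haystack = (filler * (length // len(filler) + 1))[:length]
    pos = int(depth * length)
    return (haystack[:pos] + " " + needle + " " + haystack[pos:]
            + "\n\nWhat is the secret number mentioned above? Answer:")

prompt = make_needle_prompt(
    filler="The sky was grey and the river kept moving. ",
    needle="The secret number is 7481.",
    depth=0.5,
    length=4000,
)
# Scoring: a generation counts as a hit if it contains "7481".
```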

## Thank you for the big help :)

- SmerkyG: inspired by RADLADS (https://arxiv.org/abs/2505.03005)

## Model Card Contact

OpenMOSE - 2025

---

*Note: This is an experimental model. Performance characteristics and behaviors may differ from both pure RWKV and standard Transformer architectures. Users should thoroughly evaluate the model for their specific use cases.*