---
license: apache-2.0
pipeline_tag: text-retrieval
---
# EmbeddingRWKV
EmbeddingRWKV is a high-efficiency text embedding and reranking model based on the RWKV architecture, introduced in the paper [EmbeddingRWKV: State-Centric Retrieval with Reusable States](https://huggingface.co/papers/2601.07861).
It implements **State-Centric Retrieval**, a unified retrieval paradigm in which recurrent "states" act as a bridge between the embedding model and the reranker, substantially speeding up inference for reranking tasks.
[**Paper**](https://huggingface.co/papers/2601.07861) | [**GitHub**](https://github.com/howard-hou/EmbeddingRWKV)
## 📦 Installation
```bash
pip install rwkv-emb
```
## 🤖 Models & Weights
You can download the weights from the [HuggingFace Repository](https://huggingface.co/howard-hou/EmbeddingRWKV/tree/main).
| Size / Level | Embedding Model (Main) | Matching Reranker (Paired) | Notes |
| :--- | :--- | :--- | :--- |
| **Tiny** | `rwkv0b1-emb-curriculum.pth` | `rwkv0b1-reranker.pth` | Ultra-fast, minimal memory. |
| **Base** | `rwkv0b4-emb-curriculum.pth` | `rwkv0b3-reranker.pth` | Balanced speed & performance. |
| **Large** | `rwkv1b4-emb-curriculum.pth` | `rwkv1b3-reranker.pth` | Best performance, higher VRAM usage. |
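To fetch a checkpoint programmatically rather than through the browser, `huggingface_hub` works as usual; a minimal sketch, assuming you want the Tiny pair (swap in any filename from the table above):
```python
from huggingface_hub import hf_hub_download

# Download the Tiny embedding model and its paired reranker
emb_path = hf_hub_download(repo_id="howard-hou/EmbeddingRWKV",
                           filename="rwkv0b1-emb-curriculum.pth")
rerank_path = hf_hub_download(repo_id="howard-hou/EmbeddingRWKV",
                              filename="rwkv0b1-reranker.pth")
```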
## 🚀 Quick Start (End-to-End)
Get text embeddings in just a few lines. The tokenizer and model are designed to work seamlessly together.
> **Note**: Always set `add_eos=True` during tokenization. The model relies on the EOS token (`65535`) to mark the end of a sentence for correct embedding generation.
```python
import os
from torch.nn import functional as F
# Set environment for JIT compilation (Optional, set to '1' for CUDA acceleration)
os.environ["RWKV_CUDA_ON"] = '1'
from rwkv_emb.tokenizer import RWKVTokenizer
from rwkv_emb.model import EmbeddingRWKV
# Fast retrieval, good for initial candidate filtering.
emb_model = EmbeddingRWKV(model_path='/path/to/model.pth')
tokenizer = RWKVTokenizer()
query = "What represents the end of a sequence?"
documents = [
    "The EOS token is used to mark the end of a sentence.",
    "Apples are red and delicious fruits.",
    "Machine learning requires large datasets."
]
# Encode Query
q_tokens = tokenizer.encode(query, add_eos=True)
q_emb, _ = emb_model.forward_text_only(q_tokens, None) # shape: [1, Dim]
# Encode Documents (Batch)
doc_batch = [tokenizer.encode(doc, add_eos=True) for doc in documents]
max_doc_len = max(len(t) for t in doc_batch)
for i in range(len(doc_batch)):
    pad_len = max_doc_len - len(doc_batch[i])
    # Prepend 0s (Left Padding)
    doc_batch[i] = [0] * pad_len + doc_batch[i]
d_embs, _ = emb_model.forward_text_only(doc_batch, None)
# Calculate Cosine Similarity
scores_emb = F.cosine_similarity(q_emb, d_embs)
print("
EmbeddingRWKV Cosine Similarity:")
for doc, score in zip(documents, scores_emb):
print(f"[{score.item():.4f}] {doc}")
```
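To turn these similarity scores into a candidate list for the reranker below, a top-k selection is usually enough; a minimal follow-up sketch (`top_k=2` is illustrative):
```python
import torch

top_k = 2
top_scores, top_idx = torch.topk(scores_emb, k=top_k)
candidates = [documents[i] for i in top_idx.tolist()]
print(candidates)
```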
### ⚠️ Critical Performance Tip: Pad to Same Length
While the model supports batches with variable sequence lengths, **we strongly recommend padding all sequences to the same length** for maximum GPU throughput (a reusable helper is sketched after this list).
- **Pad Token**: `0`
- **Performance**: Fixed-length batches allow the CUDA kernel to parallelize computation efficiently. Variable-length batches will trigger a slower execution path.
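The left-padding logic from the quick start can be factored into a small reusable helper; a sketch, assuming pad token `0` as above (`left_pad` is a name introduced here, not part of `rwkv_emb`):
```python
def left_pad(batch, pad_id=0):
    """Left-pad token lists to the longest sequence in the batch."""
    max_len = max(len(t) for t in batch)
    return [[pad_id] * (max_len - len(t)) + t for t in batch]

# Usage: doc_batch = left_pad([tokenizer.encode(d, add_eos=True) for d in documents])
```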
## 🎯 RWKVReRanker (State-based Reranker)
The `RWKVReRanker` uses the final hidden state produced by the main `EmbeddingRWKV` model to score the relevance of a document to a query.
### Online Mode Usage Example
```python
import torch
from rwkv_emb.tokenizer import RWKVTokenizer
from rwkv_emb.model import EmbeddingRWKV, RWKVReRanker
# 1. Load Models
emb_model = EmbeddingRWKV(model_path='/path/to/EmbeddingRWKV.pth')
reranker = RWKVReRanker(model_path='/path/to/RWKVReRanker.pth')
tokenizer = RWKVTokenizer()
# 2. Prepare Data (Query + Candidate Documents)
query = "What represents the end of a sequence?"
documents = [
    "The EOS token is used to mark the end of a sentence.",
    "Apples are red and delicious fruits.",
    "Machine learning requires large datasets."
]
# 3. Construct Input Pairs
pairs = []
online_template = "Instruct: Given a query, retrieve documents that answer the query
Document: {document}
Query: {query}"
for doc in documents:
text = online_template.format(document=doc, query=query)
pairs.append(text)
# 4. Tokenize & Pad
batch_tokens = [tokenizer.encode(p, add_eos=True) for p in pairs]
max_len = max(len(t) for t in batch_tokens)
for i in range(len(batch_tokens)):
    batch_tokens[i] = [0] * (max_len - len(batch_tokens[i])) + batch_tokens[i]
# 5. Get States from Embedding Model
_, state = emb_model.forward(batch_tokens, None)
# 6. Score with ReRanker
logits = reranker.forward(state[1])
scores = torch.sigmoid(logits)
# 7. Print Results
print("
RWKVReRanker Online Scores:")
for doc, score in zip(documents, scores):
print(f"[{score:.4f}] {doc}")
```
### Offline Mode (Cached Doc State)
For scenarios where documents are static but queries change (e.g., Search Engines, RAG), you can **pre-compute and cache the document states**. This reduces query-time latency from O(L_doc + L_query) to just O(L_query).
```python
# --- Phase 1: Indexing (Pre-computation) ---
doc_template = "Instruct: Given a query, retrieve documents that answer the query
Document: {document}
"
cached_states = []
for doc in documents:
text = doc_template.format(document=doc)
tokens = tokenizer.encode(text, add_eos=False)
_, state = emb_model.forward(tokens, None)
cpu_state = [s.cpu() for s in state]
cached_states.append(cpu_state)
# --- Phase 2: Querying (Fast Retrieval) ---
query_template = "Query: {query}"
query_text = query_template.format(query=query)
query_tokens = tokenizer.encode(query_text, add_eos=True)
# Move each cached document state back to the GPU, per state component
batch_states = [[], []]
for cpu_s in cached_states:
    batch_states[0].append(cpu_s[0].clone().cuda())
    batch_states[1].append(cpu_s[1].clone().cuda())
# Stack the per-document states into a single batched state
state_input = [
    torch.stack(batch_states[0], dim=2).squeeze(3),
    torch.stack(batch_states[1], dim=1).squeeze(2)
]
# Run the same query on top of every cached document state, then score
batch_query_tokens = [query_tokens] * len(documents)
_, final_state = emb_model.forward(batch_query_tokens, state_input)
logits = reranker.forward(final_state[1])
scores = torch.sigmoid(logits)
```
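Because the cached states are moved to CPU, they can be persisted between the indexing and querying phases with plain `torch.save`/`torch.load`; the `doc_states.pt` filename below is illustrative:
```python
# After Phase 1: persist the per-document states to disk
torch.save(cached_states, "doc_states.pt")

# Before Phase 2 (possibly in a different process): restore them
cached_states = torch.load("doc_states.pt")
```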
## Summary of Differences
| Feature | 1. Embedding (Cosine) | 2. Online Reranking | 3. Offline Reranking |
| :--- | :--- | :--- | :--- |
| **Accuracy** | Good | **Best** | **Best** (Identical to Online) |
| **Latency** | Extremely Fast | Slow O(L_doc + L_query) | Fast O(L_query) only |
| **Input** | Query & Doc separate | `Instruct + Doc + Query` | `Query` (on top of cached Doc) |
| **Storage** | Low (Vector only) | None | High (Stores Hidden States) |
| **Best For** | Initial Retrieval (Top-k) | Reranking few candidates | Reranking many candidates |
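The three modes compose naturally into a two-stage pipeline: use cheap embedding similarity to shortlist candidates, then rerank only the shortlist. A minimal sketch reusing objects from the examples above (`emb_model`, `tokenizer`, `reranker`, `online_template`, and the `left_pad` helper sketched earlier); `top_k` is an illustrative parameter:
```python
import torch
from torch.nn import functional as F

def retrieve_and_rerank(query, documents, top_k=10):
    # Stage 1: embedding retrieval over all documents
    q_emb, _ = emb_model.forward_text_only(tokenizer.encode(query, add_eos=True), None)
    d_embs, _ = emb_model.forward_text_only(
        left_pad([tokenizer.encode(d, add_eos=True) for d in documents]), None)
    _, top_idx = torch.topk(F.cosine_similarity(q_emb, d_embs),
                            k=min(top_k, len(documents)))
    candidates = [documents[i] for i in top_idx.tolist()]
    # Stage 2: state-based reranking of the shortlist only
    pairs = [online_template.format(document=d, query=query) for d in candidates]
    _, state = emb_model.forward(
        left_pad([tokenizer.encode(p, add_eos=True) for p in pairs]), None)
    scores = torch.sigmoid(reranker.forward(state[1])).flatten()
    order = torch.argsort(scores, descending=True)
    return [(candidates[i], scores[i].item()) for i in order.tolist()]
```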
## Citation
```bibtex
@article{hou2026embeddingrwkv,
title={EmbeddingRWKV: State-Centric Retrieval with Reusable States},
author={Hou, Howard and others},
journal={arXiv preprint arXiv:2601.07861},
year={2026}
}
```