HCAE (Hybrid Convolutional-Attention Encoder) is a next-generation family of lightweight text embedding models designed for extreme efficiency.
HCAE-21M is a mid-scale (21 Million parameters) text embedding model combining Depthwise Separable Convolutions and Self-Attention layers. It achieves high performance on Semantic Textual Similarity and Retrieval tasks while remaining extremely memory-efficient.
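Depthwise separable convolutions factor a standard convolution into a per-channel (depthwise) pass followed by a 1×1 (pointwise) mixing pass, which is the main source of the parameter savings. A minimal sketch of the parameter-count arithmetic, ignoring biases; the channel sizes below are illustrative and are not HCAE-21M's actual dimensions:

```python
def standard_conv1d_params(c_in: int, c_out: int, k: int) -> int:
    # A standard 1-D convolution mixes all input channels at every
    # kernel position: one (c_in x k) filter per output channel.
    return c_in * c_out * k

def depthwise_separable_params(c_in: int, c_out: int, k: int) -> int:
    # Depthwise: one k-wide filter per input channel (c_in * k),
    # then a pointwise 1x1 convolution to mix channels (c_in * c_out).
    return c_in * k + c_in * c_out

print(standard_conv1d_params(256, 256, 3))      # 196608
print(depthwise_separable_params(256, 256, 3))  # 66304
```

For these example sizes the factored form needs roughly a third of the parameters, and the gap widens with larger kernels.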
The table below compares performance across model revisions:
| Model Revision | STSBenchmark (Spearman) | SciFact (Recall@10) | Description |
|---|---|---|---|
| HCAE-21M-Base | 0.507 | 0.324 | Baseline configuration trained extensively on the MS MARCO dataset. |
| HCAE-21M-Instruct | 0.591 | 0.393 | Multi-stage tuning incorporating ArXiv, STS-B, and SQuAD instruction tuning paradigms. |
For optimal retrieval performance, prepend the instruction prefix to the query text:

```
Instruction: Retrieve the exact document that answers the following question. Query: [Your Query]
```
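As a minimal sketch, the prefix can be applied with a small helper before the text is handed to the encoder. The helper name is illustrative and not part of the model's API:

```python
# The retrieval instruction prefix from the usage notes above.
INSTRUCTION_PREFIX = (
    "Instruction: Retrieve the exact document that answers "
    "the following question. Query: "
)

def format_query(query: str) -> str:
    """Prepend the retrieval instruction to a raw query string."""
    return INSTRUCTION_PREFIX + query

print(format_query("What is the capital of France?"))
```

Only queries receive the prefix; candidate documents are typically encoded as-is.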