---
license: mit
tags:
- sentence-transformers
- sentence-similarity
- mteb
- retriever
- text-embeddings-inference
language:
- en
- zh
base_model:
- Qwen/Qwen3-Embedding-0.6B
library_name: adapter-transformers
---

# Jasper-Token-Compression-600M

## Introduction

Inspired by DeepSeek-OCR, this is the first embedding model in the Jasper and Stella series to use **dynamic text token
compression**. By combining vector distillation with contrastive learning, the model can compress text tokens by up to
10x while still achieving excellent performance!

Original training code: https://github.com/DunZhang/Jasper-Token-Compression-Training

Report: https://arxiv.org/abs/2511.14405

Wechat: zhdunt

X: https://x.com/dunn_zhang

## Features

- ⭐⭐⭐ Bilingual support (Chinese and English)
- ⭐⭐⭐⭐⭐⭐ Dynamic token compression: in our tests, the model achieves excellent results even when compressing text to
  0.33x of its original length
- ⭐⭐⭐ Combines vector distillation with contrastive learning to further improve performance on retrieval tasks
- ⭐⭐ Distilled on 12 million unsupervised examples
- ⭐⭐ 0.6B parameters

## Technical Details

### Dynamic Text Token Compression

My implementation is very simple: after the text passes through the `word_embedding` layer, it immediately goes through
a `Qwen3MLP` (approximately three fully connected layers). I then calculate the compressed length and use
`adaptive_avg_pool1d` to pool the tokens down to that length.

The compression length calculation logic is as follows:

```python
real_length = 1000      # actual token count of the text
length_threshold = 80   # compress only if the text exceeds this threshold
compression_ratio = 0.333

if real_length <= length_threshold:
    # Short texts are not compressed
    target_length = real_length
else:
    target_length = int(length_threshold + (real_length - length_threshold) * compression_ratio)
```

For implementation details, please refer to the `modeling_qwen3_jasper.py` file in this directory.
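To illustrate, the threshold logic and pooling step can be sketched as a single function. This is a minimal PyTorch sketch under the assumptions described above (the `compress_tokens` name and the `(seq_len, hidden)` layout are mine); the actual module in `modeling_qwen3_jasper.py` may differ in details:

```python
import torch
import torch.nn.functional as F


def compress_tokens(embeddings: torch.Tensor, compression_ratio: float = 0.333,
                    length_threshold: int = 80) -> torch.Tensor:
    """Compress a (seq_len, hidden) embedding sequence along the token axis."""
    real_length = embeddings.shape[0]
    if real_length <= length_threshold:
        return embeddings  # short texts are left uncompressed
    target_length = int(length_threshold + (real_length - length_threshold) * compression_ratio)
    # adaptive_avg_pool1d pools over the last dimension, so move the token axis there:
    # (seq_len, hidden) -> (1, hidden, seq_len) -> pool -> (1, hidden, target) -> back
    pooled = F.adaptive_avg_pool1d(embeddings.transpose(0, 1).unsqueeze(0), target_length)
    return pooled.squeeze(0).transpose(0, 1)


x = torch.randn(1000, 64)
y = compress_tokens(x)
print(y.shape)  # torch.Size([386, 64])
```

With 1000 input tokens, a threshold of 80, and a ratio of 0.333, the sequence is pooled down to int(80 + 920 * 0.333) = 386 tokens.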

### Vector Distillation + Contrastive Learning

First, we compute teacher vectors for the texts in the contrastive learning training set, then train with the following
three losses:

1. Cosine loss: standard vector distillation loss
2. InfoNCE (hard loss): standard contrastive learning loss
3. KL divergence (soft loss): KL divergence between the student score matrix and the teacher score matrix, where the
   score matrix contains the scores between each query and all documents (i.e., the positive doc, hard negative docs,
   and other in-batch docs)
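As a rough sketch of how these three losses fit together (the function name, shapes, and temperature below are my assumptions, not the actual training code, which is in the linked repository):

```python
import torch
import torch.nn.functional as F


def distillation_losses(student_q, student_d, teacher_q, teacher_d, temperature=0.05):
    """Sketch of the three losses: cosine distillation, InfoNCE (hard), KL (soft).

    student_q / teacher_q: (B, dim) query embeddings.
    student_d / teacher_d: (N, dim) document embeddings (positives, hard
    negatives, and other in-batch docs); document i is the positive for query i.
    """
    student_q, student_d = F.normalize(student_q, dim=-1), F.normalize(student_d, dim=-1)
    teacher_q, teacher_d = F.normalize(teacher_q, dim=-1), F.normalize(teacher_d, dim=-1)

    # 1) Cosine loss: push student vectors toward the precomputed teacher vectors
    cos_loss = (1 - (student_q * teacher_q).sum(-1)).mean() + \
               (1 - (student_d * teacher_d).sum(-1)).mean()

    # 2) InfoNCE (hard loss): positive document for query i sits at column i
    student_scores = student_q @ student_d.T / temperature
    labels = torch.arange(student_q.shape[0])
    infonce = F.cross_entropy(student_scores, labels)

    # 3) KL divergence (soft loss) between student and teacher score distributions
    teacher_scores = teacher_q @ teacher_d.T / temperature
    kl = F.kl_div(F.log_softmax(student_scores, dim=-1),
                  F.softmax(teacher_scores, dim=-1), reduction="batchmean")
    return cos_loss, infonce, kl
```

When the student embeddings equal the teacher embeddings, the cosine and KL terms vanish, leaving only the contrastive InfoNCE signal.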

#### Evaluation

My prompt strategy and specific content are consistent with the QZhou model. Please refer to their evaluation
script: https://github.com/Kingsoft-LLM/QZhou-Embedding

### Usage

```python
import torch
from sentence_transformers import SentenceTransformer

if __name__ == "__main__":
    model_name_or_path = "infgrad/Jasper-Token-Compression-600M"
    model = SentenceTransformer(
        model_name_or_path,
        model_kwargs={
            "torch_dtype": torch.bfloat16,
            "attn_implementation": "sdpa",  # We support flash_attention_2; sdpa; eager
            "trust_remote_code": True
        },
        trust_remote_code=True,
        tokenizer_kwargs={"padding_side": "left"},
        device="cpu",
    )

    queries = [
        "What is photosynthesis?",
        "Who invented the telephone?",
    ]
    documents = [
        "Photosynthesis is the process by which green plants use sunlight, carbon dioxide, and water to produce glucose and oxygen",
        "Alexander Graham Bell is credited with inventing the first practical telephone in 1876, receiving US patent number 174,465 for his device."
    ]
    # The smaller the compression_ratio, the faster the encoding, but quality decreases accordingly.
    # Based on our training settings and test results, we recommend values between 0.3 and 0.8.
    query_embeddings = model.encode(queries, prompt_name="query", normalize_embeddings=True, compression_ratio=0.3333)
    document_embeddings = model.encode(documents, normalize_embeddings=True, compression_ratio=0.3333)

    similarity = model.similarity(query_embeddings, document_embeddings)
    print(similarity)

```

### Limitations and TODO

#### Retrieval performance

I found that distilled models struggle to match the retrieval performance of their teacher models, which is why I
combined contrastive learning with distillation to enhance the student model. However, while the enhanced model improved
on retrieval test sets, a significant gap remains compared to mainstream models.
**Therefore, I believe that improving the retrieval performance of distilled models is a necessary and valuable research
direction.**

#### More reasonable text token compression modules

There is currently limited research on text token compression, and I have only tried the simplest approach. I believe
more effective text compression modules can be found.

#### Text length

I only distilled texts up to 1024 tokens long, so performance may degrade when text length exceeds 1024 tokens.

### Citation

If you find our work worth citing, please use the following citations.

```
@misc{zhang2025jasperstelladistillationsota,
      title={Jasper and Stella: distillation of SOTA embedding models},
      author={Dun Zhang and Jiacheng Li and Ziyang Zeng and Fulong Wang},
      year={2025},
      eprint={2412.19048},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2412.19048},
}
```

```
@misc{zhang2025jaspertokencompression600mtechnicalreport,
      title={Jasper-Token-Compression-600M Technical Report}, 
      author={Dun Zhang and Ziyang Zeng and Yudong Zhou and Shuyang Lu},
      year={2025},
      eprint={2511.14405},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2511.14405}, 
}
```