Update README.md
README.md CHANGED

@@ -186,14 +186,14 @@ documents = [
 input_texts = queries + documents
 model = LLM(model="Kingsoft-LLM/QZhou-Embedding")
 outputs = model.embed(input_texts)
-
+outputs = [F.normalize(torch.tensor(x.outputs.embedding), p=2, dim=0) for x in outputs]
 ```
 
 ### FAQs
 **1. Does the model support MRL?**<br>
 The model currently does not support MRL in this release due to observed performance degradation.<br>
 **2. Why not build upon the Qwen3 series models?**<br>
-Our initial research experiments commenced prior to the release of Qwen3.
+Our initial research experiments commenced prior to the release of Qwen3, and we retained the original base model throughout the study to keep our experiments consistent. We subsequently ran first-stage (retrieval) training on Qwen3, but after 32k steps its performance showed no significant improvement over Qwen2.5, so we discontinued further development on that architecture.
 
 ### Citation
 If you find our work worth citing, please use the following citation:<br>
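The added line L2-normalizes each returned embedding to unit length. As a minimal sketch of what `F.normalize(x, p=2, dim=0)` computes on a vector, in plain Python (the toy vector here is made up; torch's version additionally clamps the norm with a small `eps` to avoid division by zero):

```python
import math

def l2_normalize(vec):
    """Divide a vector by its Euclidean (L2) norm, so the result has length 1."""
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

embedding = [3.0, 4.0]          # toy stand-in for x.outputs.embedding
unit = l2_normalize(embedding)  # norm is 5.0, so this gives [0.6, 0.8]

# The normalized vector has unit L2 norm, which makes the dot product of two
# embeddings equal to their cosine similarity.
assert abs(math.sqrt(sum(v * v for v in unit)) - 1.0) < 1e-9
```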