Update README.md

- Long context: up to 8k context length
- 7B parameter size

## Model Refactoring

For the Qwen base model, we made the following modifications (sketched in code after this list):

1. Replaced causal attention with bidirectional attention and constructed a new QZhouModel module based on Qwen2Model;
2. Modified the tokenizer's padding_side to "left".
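
The refactored code itself is not shown on this page, so the following is a minimal sketch of modification 1, assuming the Hugging Face transformers implementation of Qwen2, whose attention layers expose an `is_causal` flag. Only the class name `QZhouModel` comes from the README; the body is illustrative, and depending on the transformers version a complete conversion may also need to override how the base model builds its causal attention mask.

```python
# Illustrative sketch only -- not the authors' implementation.
# Assumes transformers' Qwen2 classes; the QZhouModel name is from the README.
from transformers import Qwen2Config, Qwen2Model


class QZhouModel(Qwen2Model):
    """Qwen2Model with the per-layer causal flag disabled (encoder-style attention)."""

    def __init__(self, config: Qwen2Config):
        super().__init__(config)
        for layer in self.layers:
            # Each Qwen2 decoder layer holds a self-attention module with an
            # `is_causal` attribute; flipping it lets tokens attend in both
            # directions. NOTE: some transformers versions also construct an
            # explicit causal mask upstream, which would need to be
            # neutralized as well for fully bidirectional attention.
            layer.self_attn.is_causal = False
```

Modification 2 is a one-line tokenizer setting. With left padding, the final position of every sequence in a batch is a real token rather than padding, which is convenient for last-token pooling (whether QZhou-Embedding pools this way is an assumption on our part):

```python
# Standard transformers calls; the model ID is from the README.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Kingsoft-LLM/QZhou-Embedding")
tokenizer.padding_side = "left"

batch = tokenizer(
    ["a short query", "a much longer passage that we want to embed"],
    padding=True,
    return_tensors="pt",
)
# Pad tokens now sit at the start of the shorter sequence, so index -1 is
# always a real token for every row of the batch.
```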
## Usage
### Completely reproduce the benchmark results
We provide the detailed parameters and environment configuration (environment dependencies, model arguments, and so on) so that you can reproduce, on your own machine, results fully consistent with the MTEB leaderboard.
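
Those configuration files are not reproduced on this page; as an assumption-laden sketch, an evaluation with the mteb package would look roughly like the following. The model ID is from the README, while the task selection, the SentenceTransformer loading path, and the output folder are illustrative rather than the authors' exact setup.

```python
# A rough sketch of an MTEB evaluation run -- not the authors' exact
# parameters or environment; task choice and loading path are assumptions.
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Kingsoft-LLM/QZhou-Embedding", trust_remote_code=True)

tasks = mteb.get_tasks(tasks=["STS12"])  # substitute the tasks you want to check
evaluation = mteb.MTEB(tasks=tasks)
evaluation.run(model, output_folder="results/QZhou-Embedding")
```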