FronyAI committed
Commit 90fe623 · verified · 1 parent: faf38cd

Update README.md

Files changed (1):
  1. README.md (+1 −1)
README.md CHANGED
@@ -46,7 +46,7 @@ The overall training process was conducted with reference to snowflake-arctic-2.
 **In V2, a three-stage training process was introduced as a key component of the overall learning strategy.**<br>
 The training process consisted of three stages: Adaptation-training, Pre-training, and Post-training.
 
-* In the adaptation-training stage, we observed through preliminary experiments that multi-vector Retrieval consistently outperformed standard dense retrieval. To reflect this, we first trained the model using a multi-vector retrieval objective.
+* In the adaptation-training stage, we observed through preliminary experiments that multi-vector retrieval consistently outperformed standard dense retrieval. To reflect this, we first trained the model using a multi-vector retrieval objective.
 * In the pre-training stage, we introduced knowledge distillation, **where the multi-vector retrieval loss was distilled into the dense retrieval loss**. This allowed the model to capture fine-grained token-level similarity signals while being trained with in-batch negatives.
 * In the post-training stage, we utilized the multilingual-e5-large model to mine hard negatives—specifically, the top 4 samples with a similarity score below a 99% threshold—and fine-tuned the model further using these examples.
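
The pre-training stage's distillation of a multi-vector retrieval loss into a dense retrieval loss could be sketched as follows. This is a minimal illustration, not the repository's actual code: it assumes a ColBERT-style MaxSim scorer as the multi-vector teacher, a pooled single-vector dot product as the dense student, and a KL divergence between their in-batch score distributions; all function names and shapes are hypothetical.

```python
import torch
import torch.nn.functional as F

def maxsim_scores(q_tok: torch.Tensor, d_tok: torch.Tensor) -> torch.Tensor:
    # q_tok: (B, Lq, D) query token embeddings; d_tok: (B, Ld, D) doc token embeddings.
    # ColBERT-style MaxSim: for every query/doc pair in the batch, sum over query
    # tokens of the maximum similarity to any document token.
    sim = torch.einsum("qld,kmd->qklm", q_tok, d_tok)  # (B, B, Lq, Ld)
    return sim.max(dim=-1).values.sum(dim=-1)          # (B, B) score matrix

def dense_scores(q_vec: torch.Tensor, d_vec: torch.Tensor) -> torch.Tensor:
    # q_vec, d_vec: (B, D) pooled single-vector embeddings; in-batch dot products.
    return q_vec @ d_vec.T                             # (B, B) score matrix

def distill_loss(q_tok, d_tok, q_vec, d_vec, temperature: float = 1.0):
    # KL divergence from the multi-vector score distribution (teacher) to the
    # dense score distribution (student), computed over in-batch candidates,
    # so the dense head learns token-level similarity structure.
    teacher = F.softmax(maxsim_scores(q_tok, d_tok) / temperature, dim=-1)
    student = F.log_softmax(dense_scores(q_vec, d_vec) / temperature, dim=-1)
    return F.kl_div(student, teacher, reduction="batchmean")
```

In practice this term would be combined with a standard in-batch contrastive loss on the dense scores, matching the README's note that distillation ran alongside in-batch negatives.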
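
The post-training hard-negative mining could look roughly like this sketch. It assumes embeddings already computed with an encoder such as multilingual-e5-large, and it reads "below a 99% threshold" as a 0.99 cosine-similarity cap that filters out near-duplicates of the positive; that reading, along with every name here, is an assumption rather than the authors' actual pipeline.

```python
import numpy as np

def mine_hard_negatives(q_emb, d_emb, pos_idx, top_k=4, threshold=0.99):
    # q_emb: (Q, D) query embeddings; d_emb: (N, D) corpus embeddings,
    # both L2-normalized so dot products are cosine similarities.
    # For each query, keep the top_k most similar documents whose similarity
    # is below `threshold` (likely false positives are skipped), excluding
    # the annotated positive itself.
    sims = q_emb @ d_emb.T
    negatives = []
    for qi, pos in enumerate(pos_idx):
        order = np.argsort(-sims[qi])  # candidate docs, most similar first
        hard = [int(d) for d in order
                if d != pos and sims[qi, d] < threshold][:top_k]
        negatives.append(hard)
    return negatives
```

The mined quadruples (query, positive, four hard negatives) would then feed the fine-tuning stage described above.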