dragonkue committed
Commit a06ee89 · verified · 1 parent: 49021c9

Update README.md

Files changed (1)
  1. README.md +83 -9
README.md CHANGED
@@ -88,13 +88,42 @@ print(similarities.shape)
  # [3, 3]
  ```

- <!--
  ### Direct Usage (Transformers)

- <details><summary>Click to see the direct usage in Transformers</summary>

- </details>
- -->

  <!--
  ### Downstream Usage (Sentence Transformers)
@@ -114,15 +143,26 @@ You can finetune this model on your own dataset.

  ## Evaluation

  ### Metrics

  #### Information Retrieval

- <!--
- ## Bias, Risks and Limitations

- *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
- -->

  <!--
  ### Recommendations
@@ -133,7 +173,8 @@ You can finetune this model on your own dataset.
  ## Training Details

  ### Training Datasets
-

  ### Training Hyperparameters
  #### Non-Default Hyperparameters
@@ -278,6 +319,25 @@ You can finetune this model on your own dataset.
  - Datasets: 3.5.1
  - Tokenizers: 0.21.1

  ## Citation

  ### BibTeX
@@ -295,6 +355,20 @@ You can finetune this model on your own dataset.
  }
  ```

  <!--
  ## Glossary

  # [3, 3]
  ```

  ### Direct Usage (Transformers)

+ ```python
+ import torch.nn.functional as F
+ 
+ from torch import Tensor
+ from transformers import AutoTokenizer, AutoModel
+ 
+ 
+ def average_pool(last_hidden_states: Tensor,
+                  attention_mask: Tensor) -> Tensor:
+     last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
+     return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]
+ 
+ 
+ # Each input text should start with "query: " or "passage: ", even for non-English texts.
+ # For tasks other than retrieval, you can simply use the "query: " prefix.
+ input_texts = ["query: λΆν•œκ°€μ‘±λ²• λͺ‡ μ°¨ κ°œμ •μ—μ„œ 이혼판결 ν™•μ • ν›„ 3κ°œμ›” 내에 λ“±λ‘μ‹œμ—λ§Œ μœ νš¨ν•˜λ‹€λŠ” 쑰항을 ν™•μ‹€νžˆ ν–ˆμ„κΉŒ?",
+ "passage: 1990년에 μ œμ •λœ λΆν•œ 가쑱법은 μ§€κΈˆκΉŒμ§€ 4μ°¨λ‘€ κ°œμ •λ˜μ–΄ ν˜„μž¬μ— 이λ₯΄κ³  μžˆλ‹€. 1993년에 이루어진 제1μ°¨ κ°œμ •μ€ 주둜 κ·œμ •μ˜ 정확성을 κΈ°ν•˜κΈ° μœ„ν•˜μ—¬ λͺ‡λͺ‡ 쑰문을 μˆ˜μ •ν•œ 것이며, 싀체적인 λ‚΄μš©μ„ λ³΄μ™„ν•œ 것은 μƒμ†μ˜ 승인과 포기기간을 μ„€μ •ν•œ 제52μ‘° 정도라고 ν•  수 μžˆλ‹€. 2004년에 이루어진 제2차에 κ°œμ •μ—μ„œλŠ” 제20쑰제3항을 μ‹ μ„€ν•˜μ—¬ μž¬νŒμƒ ν™•μ •λœ μ΄ν˜ΌνŒκ²°μ„ 3κ°œμ›” 내에 등둝해야 이혼의 효λ ₯이 λ°œμƒν•œλ‹€λŠ” 것을 λͺ…ν™•ν•˜κ²Œ ν•˜μ˜€λ‹€. 2007년에 이루어진 제3μ°¨ κ°œμ •μ—μ„œλŠ” λΆ€λͺ¨μ™€ μžλ…€ 관계 λ˜ν•œ 신뢄등둝기관에 λ“±λ‘ν•œ λ•ŒλΆ€ν„° 법적 효λ ₯이 λ°œμƒν•œλ‹€λŠ” 것을 μ‹ μ„€(제25쑰제2ν•­)ν•˜μ˜€λ‹€. λ˜ν•œ λ―Έμ„±λ…„μž, 노동λŠ₯λ ₯ μ—†λŠ” 자의 λΆ€μ–‘κ³Ό κ΄€λ ¨(제37쑰제2ν•­)ν•˜μ—¬ κΈ°μ‘΄μ—λŠ” β€œλΆ€μ–‘λŠ₯λ ₯이 μžˆλŠ” 가정성원이 없을 κ²½μš°μ—λŠ” λ”°λ‘œ μ‚¬λŠ” λΆ€λͺ¨λ‚˜ μžλ…€, μ‘°λΆ€λͺ¨λ‚˜ μ†μžλ…€, ν˜•μ œμžλ§€κ°€ λΆ€μ–‘ν•œλ‹€β€κ³  κ·œμ •ν•˜κ³  μžˆμ—ˆλ˜ 것을 β€œλΆ€μ–‘λŠ₯λ ₯이 μžˆλŠ” 가정성원이 없을 κ²½μš°μ—λŠ” λ”°λ‘œ μ‚¬λŠ” λΆ€λͺ¨λ‚˜ μžλ…€κ°€ λΆ€μ–‘ν•˜λ©° 그듀이 없을 κ²½μš°μ—λŠ” μ‘°λΆ€λͺ¨λ‚˜ μ†μžλ…€, ν˜•μ œμžλ§€κ°€ λΆ€μ–‘ν•œλ‹€β€λ‘œ κ°œμ •ν•˜μ˜€λ‹€.",
+ "passage: ν™˜κ²½λ§ˆν¬ μ œλ„, 인증기쀀 λ³€κ²½μœΌλ‘œ κΈ°μ—…λΆ€λ‹΄ 쀄인닀\nν™˜κ²½λ§ˆν¬ μ œλ„ μ†Œκ°œ\nβ–‘ κ°œμš”\nβ—‹ 동일 μš©λ„μ˜ λ‹€λ₯Έ μ œν’ˆμ— λΉ„ν•΄ β€˜μ œν’ˆμ˜ ν™˜κ²½μ„±*’을 κ°œμ„ ν•œ μ œν’ˆμ— λ‘œκ³ μ™€ μ„€λͺ…을 ν‘œμ‹œν•  수 μžˆλ„λ‘ν•˜λŠ” 인증 μ œλ„\nβ€» μ œν’ˆμ˜ ν™˜κ²½μ„± : μž¬λ£Œμ™€ μ œν’ˆμ„ μ œμ‘°β€€μ†ŒλΉ„ νκΈ°ν•˜λŠ” μ „κ³Όμ •μ—μ„œ μ˜€μ—Όλ¬Όμ§ˆμ΄λ‚˜ μ˜¨μ‹€κ°€μŠ€ 등을 λ°°μΆœν•˜λŠ” 정도 및 μžμ›κ³Ό μ—λ„ˆμ§€λ₯Ό μ†ŒλΉ„ν•˜λŠ” 정도 λ“± ν™˜κ²½μ— λ―ΈμΉ˜λŠ” 영ν–₯λ ₯의 정도(γ€Œν™˜κ²½κΈ°μˆ  및 ν™˜κ²½μ‚°μ—… μ§€μ›λ²•γ€μ œ2쑰제5호)\nβ–‘ 법적근거\nβ—‹ γ€Œν™˜κ²½κΈ°μˆ  및 ν™˜κ²½μ‚°μ—… μ§€μ›λ²•γ€μ œ17μ‘°(ν™˜κ²½ν‘œμ§€μ˜ 인증)\nβ–‘ κ΄€λ ¨ κ΅­μ œν‘œμ€€\nβ—‹ ISO 14024(제1μœ ν˜• ν™˜κ²½λΌλ²¨λ§)\nβ–‘ μ μš©λŒ€μƒ\nβ—‹ 사무기기, κ°€μ „μ œν’ˆ, μƒν™œμš©ν’ˆ, κ±΄μΆ•μžμž¬ λ“± 156개 λŒ€μƒμ œν’ˆκ΅°\nβ–‘ μΈμ¦ν˜„ν™©\nβ—‹ 2,737개 κΈ°μ—…μ˜ 16,647개 μ œν’ˆ(2015.12월말 κΈ°μ€€)"]
+ 
+ tokenizer = AutoTokenizer.from_pretrained('dragonkue/multilingual-e5-small-ko')
+ model = AutoModel.from_pretrained('dragonkue/multilingual-e5-small-ko')
+ 
+ # Tokenize the input texts
+ batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
+ 
+ outputs = model(**batch_dict)
+ embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])
+ 
+ # normalize embeddings
+ embeddings = F.normalize(embeddings, p=2, dim=1)
+ scores = (embeddings[:1] @ embeddings[1:].T)
+ print(scores.tolist())
+ ```


  <!--
  ### Downstream Usage (Sentence Transformers)
 
  ## Evaluation

+ - This evaluation references the [KURE GitHub repository](https://github.com/nlpai-lab/KURE).
+ - We conducted an evaluation on all **Korean Retrieval Benchmarks** registered in [MTEB](https://github.com/embeddings-benchmark/mteb); a short reproduction sketch follows the benchmark list below.
+ 
+ ### Korean Retrieval Benchmark
+ - [Ko-StrategyQA](https://huggingface.co/datasets/taeminlee/Ko-StrategyQA): A Korean **ODQA multi-hop retrieval dataset**, translated from StrategyQA.
+ - [AutoRAGRetrieval](https://huggingface.co/datasets/yjoonjang/markers_bm): A **Korean document retrieval dataset** constructed by parsing PDFs from five domains: **finance, public, medical, legal, and commerce**.
+ - [MIRACLRetrieval](https://huggingface.co/datasets/miracl/miracl): A **Korean document retrieval dataset** based on Wikipedia.
+ - [PublicHealthQA](https://huggingface.co/datasets/xhluca/publichealth-qa): A **retrieval dataset** focused on **medical and public health domains** in Korean.
+ - [BelebeleRetrieval](https://huggingface.co/datasets/facebook/belebele): A **Korean document retrieval dataset** based on FLORES-200.
+ - [MrTidyRetrieval](https://huggingface.co/datasets/mteb/mrtidy): A **Wikipedia-based Korean document retrieval dataset**.
+ - [MultiLongDocRetrieval](https://huggingface.co/datasets/Shitao/MLDR): A **long-document retrieval dataset** covering various domains in Korean.
+ - [XPQARetrieval](https://huggingface.co/datasets/jinaai/xpqa): A **cross-domain Korean document retrieval dataset**.
+ 
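The snippet below is an editor's sketch, not part of the model card, of how one of these MTEB tasks could be run against this model. The task identifier `Ko-StrategyQA` and the output folder are assumptions to verify against the MTEB task registry.

```python
# Editor's sketch: run one Korean retrieval task from MTEB against this model.
# Assumes the `mteb` and `sentence-transformers` packages; task names can differ
# between mteb versions, so verify them with mteb.get_tasks(languages=["kor"]).
import mteb
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dragonkue/multilingual-e5-small-ko")

tasks = mteb.get_tasks(tasks=["Ko-StrategyQA"])  # assumed task identifier
evaluation = mteb.MTEB(tasks=tasks)
evaluation.run(model, output_folder="results/multilingual-e5-small-ko")
```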
  ### Metrics

+ * Standard metric: NDCG@10
+ 
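For intuition only (an editor's sketch, not from the card): NDCG@10 rewards placing relevant passages near the top of the ranked list. The toy example below computes it with `scikit-learn`, which is an assumed dependency; the benchmark scores themselves come from MTEB, not from this snippet.

```python
# Toy NDCG@10 illustration with scikit-learn, for intuition only.
import numpy as np
from sklearn.metrics import ndcg_score

# One query, ten candidate passages: binary ground-truth relevance and model scores.
true_relevance = np.asarray([[1, 0, 0, 1, 0, 0, 0, 0, 0, 0]])
model_scores = np.asarray([[0.92, 0.31, 0.28, 0.77, 0.15, 0.40, 0.22, 0.18, 0.05, 0.10]])

# Both relevant passages are ranked at the top here, so NDCG@10 is 1.0.
print(ndcg_score(true_relevance, model_scores, k=10))
```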
  #### Information Retrieval


  <!--
  ### Recommendations

  ## Training Details

  ### Training Datasets
+ This model was fine-tuned on the same dataset used for `dragonkue/snowflake-arctic-embed-l-v2.0-ko`, which consists of Korean query-passage pairs.
+ The training objective was to improve retrieval performance specifically for Korean-language tasks.

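As a generic illustration (an editor's sketch, not the card's actual recipe), fine-tuning on query-passage pairs with `sentence-transformers` typically looks like the following. The starting checkpoint, dataset contents, and loss choice here are assumptions; the real settings are listed under the non-default hyperparameters below.

```python
# Editor's sketch of generic query-passage fine-tuning; not the card's actual training recipe.
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

model = SentenceTransformer("intfloat/multilingual-e5-small")  # assumed starting checkpoint

# Placeholder pairs; the real data is Korean query-passage pairs with "query: "/"passage: " prefixes.
train_dataset = Dataset.from_dict({
    "anchor": ["query: example question about a Korean statute"],
    "positive": ["passage: example passage that answers the question"],
})

loss = MultipleNegativesRankingLoss(model)  # in-batch negatives, InfoNCE-style objective
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```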
  ### Training Hyperparameters
  #### Non-Default Hyperparameters

  - Datasets: 3.5.1
  - Tokenizers: 0.21.1

+ ## FAQ
+ 
+ 1. Do I need to add the prefixes "query: " and "passage: " to input texts?
+ 
+ Yes, this is how the model was trained; otherwise you will see a performance degradation.
+ 
+ Here are some rules of thumb (a usage sketch follows the list):
+ 
+ - Use "query: " and "passage: " correspondingly for asymmetric tasks such as passage retrieval in open QA and ad-hoc information retrieval.
+ - Use the "query: " prefix for symmetric tasks such as semantic similarity, bitext mining, and paraphrase retrieval.
+ - Use the "query: " prefix if you want to use embeddings as features, e.g. for linear-probing classification or clustering.
+ 
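As a usage illustration of these rules (an editor's sketch with placeholder sentences): note that if the model's Sentence Transformers configuration already applies prompts automatically, the manual prefixes below would be redundant.

```python
# Editor's sketch of the prefix rules above; the sentences are placeholders.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("dragonkue/multilingual-e5-small-ko")

# Asymmetric task (retrieval): queries and passages get different prefixes.
query_embeddings = model.encode(["query: How many times has the North Korean Family Law been amended?"])
passage_embeddings = model.encode(["passage: The Family Law, enacted in 1990, has been amended four times to date."])

# Symmetric task (semantic similarity) or feature extraction: use "query: " on both sides.
sentence_embeddings = model.encode(["query: first sentence", "query: a paraphrase of the first sentence"])
```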
+ 2. Why do the cosine similarity scores distribute around 0.7 to 1.0?
+ 
+ This is a known and expected behavior, since we use a low temperature of 0.01 for the InfoNCE contrastive loss.
+ 
+ For text embedding tasks like text retrieval or semantic similarity, what matters is the relative order of the scores rather than their absolute values, so this should not be an issue.
+ 
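To see why a low temperature compresses absolute similarities without affecting ranking, here is a small illustrative calculation (an editor's sketch, not part of the card):

```python
# Illustrative only: with temperature 0.01, tiny cosine-similarity gaps become large
# differences in the InfoNCE softmax, so training does not need to spread scores over [0, 1].
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    return [e / sum(exps) for e in exps]

cosine_scores = [0.95, 0.93, 0.90]  # query vs. the positive and two negative passages
temperature = 0.01
print(softmax([s / temperature for s in cosine_scores]))
# ~ [0.876, 0.118, 0.006] -- the 0.02 gap already dominates the softmax,
# even though the raw cosine scores all sit near 1.0.
```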
  ## Citation

  ### BibTeX

  }
  ```

+ #### Base Model
+ ```bibtex
+ @article{wang2024multilingual,
+   title={Multilingual E5 Text Embeddings: A Technical Report},
+   author={Wang, Liang and Yang, Nan and Huang, Xiaolong and Yang, Linjun and Majumder, Rangan and Wei, Furu},
+   journal={arXiv preprint arXiv:2402.05672},
+   year={2024}
+ }
+ ```
+ 
+ ## Limitations
+ 
+ Long texts will be truncated to at most 512 tokens.
+ 
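As an illustrative check (editor's sketch): you can count tokens before encoding to see whether a text will hit the 512-token limit; the input string here is a placeholder.

```python
# Editor's sketch: check whether an input would exceed the 512-token limit before encoding.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("dragonkue/multilingual-e5-small-ko")

text = "passage: " + "a very long document ..."  # placeholder for a long document
n_tokens = len(tokenizer(text, add_special_tokens=True)["input_ids"])
print(n_tokens, "tokens; anything beyond 512 will be truncated")
```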
  <!--
  ## Glossary