---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- ko
- en
widget:
  source_sentence: "대한민국의 수도는?"
  sentences:
    - "서울특별시는 한국의 정치, 경제, 문화 중심 도시이다."
    - "부산은 대한민국의 제2의 도시이자 최대의 해양 물류 도시이다."
    - "제주도는 대한민국에서 유명한 관광지이다"
    - "Seoul is the capital of Korea"
    - "울산광역시는 대한민국 남동부 해안에 있는 광역시이다"
---
# moco-sentencebertV2.0
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
- This model was built by converting the [bongsoo/mbertV2.0](https://huggingface.co/bongsoo/mbertV2.0) MLM model into a SentenceBERT model and then further training it with STS teacher-student distillation.
- **vocab: 152,537 entries** (32,989 new entries added to the original 119,548-entry vocab)
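As a quick sanity check of the expanded vocabulary (a minimal sketch; it assumes only that the tokenizer published with this model on the Hub is used):
```python
from transformers import AutoTokenizer

# len(tokenizer) reports the full vocabulary size, including added tokens;
# it should match the 152,537 figure above.
tokenizer = AutoTokenizer.from_pretrained('bongsoo/moco-sentencebertV2.0')
print(len(tokenizer))  # 152537
```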
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('bongsoo/moco-sentencebertV2.0')
embeddings = model.encode(sentences)
print(embeddings)
# Compute the cosine score with sklearn.
# => the inputs must be 2D, e.g. shape (1, 768), hence the reshape below.
from sklearn.metrics.pairwise import paired_cosine_distances

cosine_scores = 1 - paired_cosine_distances(embeddings[0].reshape(1, -1), embeddings[1].reshape(1, -1))
print(f'*cosine_score:{cosine_scores[0]}')
```
#### Outputs
```
[[ 0.16649279 -0.2933038 -0.00391259 ... 0.00720964 0.18175027 -0.21052675]
[ 0.10106096 -0.11454111 -0.00378215 ... -0.009032 -0.2111504 -0.15030429]]
*cosine_score:0.3352515697479248
```
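Alternatively, the `util.cos_sim` helper bundled with sentence-transformers avoids the manual reshape; a minimal sketch:
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('bongsoo/moco-sentencebertV2.0')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# util.cos_sim accepts 1D vectors directly and returns a 1x1 similarity
# matrix here; expect a value near the 0.3353 score above.
print(util.cos_sim(embeddings[0], embeddings[1]))
```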
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
- Mean pooling is used here; [cls pooling](https://huggingface.co/sentence-transformers/bert-base-nli-cls-token) and [max pooling](https://huggingface.co/sentence-transformers/bert-base-nli-max-tokens) are the usual alternatives (a max-pooling sketch follows the output below).
```python
from transformers import AutoTokenizer, AutoModel
import torch
# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bongsoo/moco-sentencebertV2.0')
model = AutoModel.from_pretrained('bongsoo/moco-sentencebertV2.0')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
# Compute the cosine score with sklearn.
# => the inputs must be 2D, e.g. shape (1, 768), hence the reshape below.
from sklearn.metrics.pairwise import paired_cosine_distances

cosine_scores = 1 - paired_cosine_distances(sentence_embeddings[0].reshape(1, -1), sentence_embeddings[1].reshape(1, -1))
print(f'*cosine_score:{cosine_scores[0]}')
```
#### Outputs
```
Sentence embeddings:
tensor([[ 0.1665, -0.2933, -0.0039, ..., 0.0072, 0.1818, -0.2105],
[ 0.1011, -0.1145, -0.0038, ..., -0.0090, -0.2112, -0.1503]])
*cosine_score:0.3352515697479248
```
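For reference, the max-pooling alternative linked above differs only in the pooling function. This model was trained with mean pooling, so the sketch below is purely illustrative:
```python
import torch

def max_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    # Push padding positions to a large negative value so they never win the max
    token_embeddings = token_embeddings.masked_fill(input_mask_expanded == 0, -1e9)
    return torch.max(token_embeddings, 1)[0]
```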
## Evaluation Results
- Performance was measured on the Korean (ko) and English (en) evaluation corpora below.
<br> Korean: **korsts (1,379 sentence pairs)** and **klue-sts (519 sentence pairs)**
<br> English: [stsb_multi_mt](https://huggingface.co/datasets/stsb_multi_mt) (1,376 sentence pairs) and [glue:stsb](https://huggingface.co/datasets/glue/viewer/stsb/validation) (1,500 sentence pairs)
- The metric is the Spearman rank correlation of cosine-similarity scores (cosine Spearman); a sketch of the computation follows below.
- See [here](https://github.com/kobongsoo/BERT/blob/master/sbert/sbert-test.ipynb) for the evaluation code.
|Model|korsts|klue-sts|korsts+klue-sts|stsb_multi_mt|glue(stsb)|
|:--------|------:|--------:|--------------:|------------:|-----------:|
|distiluse-base-multilingual-cased-v2|0.747|0.785|0.577|0.807|0.819|
|paraphrase-multilingual-mpnet-base-v2|0.820|0.799|0.711|0.868|0.890|
|bongsoo/sentencedistilbertV1.2|0.819|0.858|0.630|0.837|0.873|
|bongsoo/moco-sentencedistilbertV2.0|0.812|0.847|0.627|0.837|0.877|
|bongsoo/moco-sentencebertV2.0|0.824|0.841|0.635|0.843|0.879|
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=bongsoo/moco-sentencebertV2.0)
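The cosine Spearman number is simply the Spearman rank correlation between the model's cosine scores and the gold similarity labels. A minimal sketch; the three pairs and gold labels below are made up to stand in for a real test split such as korsts:
```python
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('bongsoo/moco-sentencebertV2.0')

# Toy stand-in for an STS test split; gold values are human similarity labels.
pairs = [("A man is eating food.", "A man eats something."),
         ("A plane is taking off.", "A bird is flying."),
         ("A woman plays the violin.", "A woman is playing an instrument.")]
gold = [4.2, 0.9, 3.6]

preds = [float(util.cos_sim(model.encode(s1), model.encode(s2)))
         for s1, s2 in pairs]

# Cosine Spearman: rank correlation between predicted and gold scores
rho, _ = spearmanr(preds, gold)
print(rho)
```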
## Training
The model was trained in the following four stages:

**1. MLM training**
- Input model: bert-base-multilingual-cased
- Corpus: training: bongsoo/moco-corpus-kowiki2022 (7.6M), evaluation: bongsoo/bongevalsmall
- Hyperparameters: learning rate 5e-5, epochs 8, batch size 32, max_token_len 128
- vocab: 152,537 entries (32,989 new entries added to the original 119,548)
- Output model: mbertV2.0 (size: 813MB)
- Training time: 90h on 1 GPU (24GB, 19.6GB used)
- Loss: training loss 2.258400, evaluation loss 3.102096, perplexity 19.78158 (bong_eval: 1,500)
- See [here](https://github.com/kobongsoo/BERT/blob/master/bert/bert-MLM-Trainer-V1.2.ipynb) for the training code (a minimal sketch of this stage follows this list).
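A compact sketch of stage 1 under the hyperparameters above, assuming the standard HF Trainer MLM recipe; the two inline sentences and the output path are placeholders, and the vocab-extension step is only indicated in a comment. The linked notebook is the authoritative version:
```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')
model = AutoModelForMaskedLM.from_pretrained('bert-base-multilingual-cased')

# The 32,989 new vocab entries would be added here (e.g. tokenizer.add_tokens([...]));
# the embedding matrix is then resized to match the tokenizer.
model.resize_token_embeddings(len(tokenizer))

# Toy two-sentence corpus standing in for bongsoo/moco-corpus-kowiki2022 (7.6M)
corpus = Dataset.from_dict({'text': ['첫 번째 예시 문장입니다.', '두 번째 예시 문장입니다.']})
tokenized = corpus.map(lambda b: tokenizer(b['text'], truncation=True, max_length=128),
                       batched=True, remove_columns=['text'])

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir='mbertV2.0', learning_rate=5e-5,
                         num_train_epochs=8, per_device_train_batch_size=32)
Trainer(model=model, args=args, data_collator=collator,
        train_dataset=tokenized).train()
```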
**2. STS training**
=> Converts the BERT model into a SentenceBERT model.
- Input model: mbertV2.0
- Corpus: korsts + kluestsV1.1 + stsb_multi_mt + mteb/sickr-sts (total: 33,093 pairs)
- Hyperparameters: learning rate 3e-5, epochs 200, batch size 32, max_token_len 128
- Output model: sbert-mbertV2.0 (size: 813MB)
- Training time: 9h20m on 1 GPU (24GB, 9.0GB used)
- Loss (cosine Spearman): 0.799 (corpus: korsts tune_test.tsv)
- See [here](https://github.com/kobongsoo/BERT/blob/master/sbert/sentece-bert-sts.ipynb) for the training code (a minimal sketch of this stage follows this list).
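Stage 2 follows the standard sentence-transformers STS recipe (transformer module + mean pooling, trained with CosineSimilarityLoss). A minimal sketch with one toy pair in place of the 33,093-pair corpus and 1 epoch instead of 200:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses, InputExample

# Build a SentenceBERT from the MLM checkpoint: transformer + mean pooling
word = models.Transformer('bongsoo/mbertV2.0', max_seq_length=128)
pool = models.Pooling(word.get_word_embedding_dimension(), pooling_mode='mean')
model = SentenceTransformer(modules=[word, pool])

# Toy pair standing in for korsts + kluestsV1.1 + stsb_multi_mt + sickr-sts;
# labels are gold similarity scores normalized to [0, 1].
train = [InputExample(texts=['서울은 대한민국의 수도이다', '서울은 한국의 수도이다'], label=0.9)]
loader = DataLoader(train, shuffle=True, batch_size=32)

model.fit(train_objectives=[(loader, losses.CosineSimilarityLoss(model))],
          epochs=1, optimizer_params={'lr': 3e-5})
```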
**3. Distillation training**
- Student model: sbert-mbertV2.0
- Teacher model: paraphrase-multilingual-mpnet-base-v2
- Corpus: en_ko_train.tsv (Korean-English parallel corpus, social-science domain: 1.1M pairs)
- Hyperparameters: learning rate 5e-5, epochs 40, batch size 128, max_token_len 128
- Output model: sbert-mlbertV2.0-distil
- Training time: 17h on 1 GPU (24GB, 18.6GB used)
- See [here](https://github.com/kobongsoo/BERT/blob/master/sbert/sbert-distillaton.ipynb) for the training code (a minimal sketch of this stage follows this list).
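Stage 3 mirrors the multilingual knowledge-distillation approach in sentence-transformers: the student is trained with an MSE loss so that its embeddings of both sides of each parallel pair match the teacher's embedding of the English source. A minimal sketch; the student path `sbert-mbertV2.0` is the local stage-2 output and is hypothetical here:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses, InputExample

teacher = SentenceTransformer('paraphrase-multilingual-mpnet-base-v2')
student = SentenceTransformer('sbert-mbertV2.0')  # stage-2 checkpoint (hypothetical local path)

# One toy parallel pair standing in for en_ko_train.tsv (1.1M pairs): the
# teacher embedding of the English sentence is the regression target for
# both the English and the Korean student embeddings.
en, ko = 'Seoul is the capital of Korea', '서울은 대한민국의 수도이다'
train = [InputExample(texts=[en, ko], label=teacher.encode(en))]
loader = DataLoader(train, batch_size=128)

student.fit(train_objectives=[(loader, losses.MSELoss(student))],
            epochs=1, optimizer_params={'lr': 5e-5})
```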
**4. STS training**
=> Fine-tunes the distilled SentenceBERT model on STS once more (same recipe as stage 2).
- Input model: sbert-mlbertV2.0-distil
- Corpus: korsts (5,749) + kluestsV1.1 (11,668) + stsb_multi_mt (5,749) + mteb/sickr-sts (9,927) + glue stsb (5,749) (total: 38,842 pairs)
- Hyperparameters: learning rate 3e-5, epochs 800, batch size 64, max_token_len 128
- Output model: moco-sentencebertV2.0
- Training time: 25h on 1 GPU (24GB, 13GB used)
- See [here](https://github.com/kobongsoo/BERT/blob/master/sbert/sentece-bert-sts.ipynb) for the training code.
<br>For more details on the model building process, see [here](https://github.com/kobongsoo/BERT/tree/master).
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1035 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Config**:
```
{
"_name_or_path": "../../data11/model/sbert/sbert-mbertV2.0-distil",
"architectures": [
"BertModel"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"directionality": "bidi",
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"pooler_fc_size": 768,
"pooler_num_attention_heads": 12,
"pooler_num_fc_layers": 3,
"pooler_size_per_head": 128,
"pooler_type": "first_token_transform",
"position_embedding_type": "absolute",
"torch_dtype": "float32",
"transformers_version": "4.21.2",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 152537
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
bongsoo |