---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
- ko
- en
widget:
- source_sentence: "What is the capital of South Korea?"
  sentences:
  - "Seoul is the political, economic, and cultural center of Korea."
  - "Busan is South Korea's second-largest city and its largest maritime logistics hub."
  - "Jeju Island is a famous tourist destination in South Korea."
  - "Seoul is the capital of Korea"
  - "Ulsan is a metropolitan city on the southeastern coast of South Korea."
---
# moco-sentencebertV2.0
This is a sentence-transformers model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
- This model was built by turning the bongsoo/mbertV2.0 MLM model into a SentenceBERT model and then further training it with STS teacher-student distillation.
- vocab: 152,537 entries (32,989 new entries added to the original 119,548-entry vocab; a quick check is sketched below)
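
As a quick sanity check of the extended vocabulary, the tokenizer's size can be inspected directly. This is a minimal sketch; the expected number comes from the figures above:

```python
from transformers import AutoTokenizer

# Load the tokenizer shipped with this model
tokenizer = AutoTokenizer.from_pretrained('bongsoo/moco-sentencebertV2.0')

# The extended vocab should report 152,537 entries
print(len(tokenizer))  # expected: 152537
```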
## Usage (Sentence-Transformers)
Using this model becomes easy when you have sentence-transformers installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('bongsoo/moco-sentencebertV2.0')
embeddings = model.encode(sentences)
print(embeddings)

# Compute cosine_scores with sklearn
# => the input embeddings must be 2D, e.g. shape (1, 768)
from sklearn.metrics.pairwise import paired_cosine_distances

cosine_scores = 1 - paired_cosine_distances(embeddings[0].reshape(1, -1), embeddings[1].reshape(1, -1))
print(f'*cosine_score:{cosine_scores[0]}')
```
Outputs:
```
[[ 0.16649279 -0.2933038  -0.00391259 ...  0.00720964  0.18175027 -0.21052675]
 [ 0.10106096 -0.11454111 -0.00378215 ... -0.009032   -0.2111504  -0.15030429]]
*cosine_score:0.3352515697479248
```
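
If sentence-transformers is already installed, its built-in `util.cos_sim` helper gives the same cosine score without going through sklearn. A minimal alternative sketch, reusing the `embeddings` from above:

```python
from sentence_transformers import util

# util.cos_sim accepts 1D or 2D inputs and returns a similarity matrix
cosine_score = util.cos_sim(embeddings[0], embeddings[1])
print(f'*cosine_score:{cosine_score.item()}')
```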
## Usage (HuggingFace Transformers)
Without sentence-transformers, you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
- Mean pooling (mean_pooling) is used here; CLS pooling and max pooling are alternatives (sketched after the outputs below).
```python
from transformers import AutoTokenizer, AutoModel
import torch

# Mean pooling - take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bongsoo/moco-sentencebertV2.0')
model = AutoModel.from_pretrained('bongsoo/moco-sentencebertV2.0')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)

# Compute cosine_scores with sklearn
# => the input embeddings must be 2D, e.g. shape (1, 768)
from sklearn.metrics.pairwise import paired_cosine_distances

cosine_scores = 1 - paired_cosine_distances(sentence_embeddings[0].reshape(1, -1), sentence_embeddings[1].reshape(1, -1))
print(f'*cosine_score:{cosine_scores[0]}')
```
Outputs:
```
Sentence embeddings:
tensor([[ 0.1665, -0.2933, -0.0039,  ...,  0.0072,  0.1818, -0.2105],
        [ 0.1011, -0.1145, -0.0038,  ..., -0.0090, -0.2112, -0.1503]])
*cosine_score:0.3352515697479248
```
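
For the CLS and max pooling variants mentioned above, here are minimal sketches written against the same `model_output` and `encoded_input`. Note that this model was trained with mean pooling, so these are for illustration only:

```python
import torch

def cls_pooling(model_output):
    # CLS pooling: take the hidden state of the first ([CLS]) token
    return model_output[0][:, 0]

def max_pooling(model_output, attention_mask):
    # Max pooling: element-wise max over tokens, ignoring padding positions
    token_embeddings = model_output[0]
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    token_embeddings[input_mask_expanded == 0] = -1e9  # exclude padding from the max
    return torch.max(token_embeddings, 1)[0]
```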
## Evaluation Results
- For performance measurement, the following Korean (kor) and English (en) evaluation corpora were used:
  - Korean: korsts (1,379 sentence pairs) and klue-sts (519 sentence pairs)
  - English: stsb_multi_mt (1,376 sentence pairs) and glue:stsb (1,500 sentence pairs)
- Performance is measured as the Spearman correlation of the cosine similarities (cosine.spearman).
- See here for the evaluation code; a rough reproduction sketch follows.
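
The metric can be reproduced roughly as follows: encode both sides of each STS pair, take pairwise cosine similarities, and correlate them with the gold scores via Spearman. This is a minimal sketch, not the author's evaluation script; the dataset choice (stsb_multi_mt, English test split) and its column names are assumptions:

```python
from datasets import load_dataset
from scipy.stats import spearmanr
from sklearn.metrics.pairwise import paired_cosine_distances
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('bongsoo/moco-sentencebertV2.0')

# Hypothetical example corpus: English STSb test split (gold labels 0-5)
data = load_dataset('stsb_multi_mt', 'en', split='test')
emb1 = model.encode(data['sentence1'])
emb2 = model.encode(data['sentence2'])

# Cosine similarity of each aligned sentence pair
cos_scores = 1 - paired_cosine_distances(emb1, emb2)

# Spearman rank correlation against the gold similarity labels
print(spearmanr(cos_scores, data['similarity_score']).correlation)
```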
| Model | korsts | klue-sts | korsts+klue-sts | stsb_multi_mt | glue(stsb) |
|---|---|---|---|---|---|
| distiluse-base-multilingual-cased-v2 | 0.747 | 0.785 | 0.577 | 0.807 | 0.819 |
| paraphrase-multilingual-mpnet-base-v2 | 0.820 | 0.799 | 0.711 | 0.868 | 0.890 |
| bongsoo/sentencedistilbertV1.2 | 0.819 | 0.858 | 0.630 | 0.837 | 0.873 |
| bongsoo/moco-sentencedistilbertV2.0 | 0.812 | 0.847 | 0.627 | 0.837 | 0.877 |
| bongsoo/moco-sentencebertV2.0 | 0.824 | 0.841 | 0.635 | 0.843 | 0.879 |
For an automated evaluation of this model, see the Sentence Embeddings Benchmark: https://seb.sbert.net
## Training
The model was trained in four stages with the following parameters:
1. MLM training
- Input model: bert-base-multilingual-cased
- Corpus: training: bongsoo/moco-corpus-kowiki2022 (7.6M), evaluation: bongsoo/bongevalsmall
- Hyperparameters: learning rate: 5e-5, epochs: 8, batch size: 32, max token length: 128
- Vocab: 152,537 entries (32,989 new entries added to the original 119,548)
- Output model: mbertV2.0 (size: 813MB)
- Training time: 90h / 1 GPU (24GB, 19.6GB used)
- Loss: train loss: 2.258400, eval loss: 3.102096, perplexity: 19.78158 (bong_eval: 1,500)
- See here for the training code; a minimal sketch follows.
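
A minimal sketch of this kind of continued MLM pretraining with the HF Trainer. The corpus column name, the tokenizer-extension step, and all paths are assumptions; the author's actual training code is linked above:

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Assumed: a tokenizer already extended with the 32,989 new vocab entries
tokenizer = AutoTokenizer.from_pretrained('bert-base-multilingual-cased')
model = AutoModelForMaskedLM.from_pretrained('bert-base-multilingual-cased')
model.resize_token_embeddings(len(tokenizer))  # match the extended vocab

corpus = load_dataset('bongsoo/moco-corpus-kowiki2022', split='train')

def tokenize(batch):
    return tokenizer(batch['text'], truncation=True, max_length=128)  # 'text' column is an assumption

tokenized = corpus.map(tokenize, batched=True, remove_columns=corpus.column_names)

args = TrainingArguments(output_dir='mbertV2.0', learning_rate=5e-5,
                         num_train_epochs=8, per_device_train_batch_size=32)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```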
2. STS training => turns the BERT model into a SentenceBERT model.
- Input model: mbertV2.0
- Corpus: korsts + kluestsV1.1 + stsb_multi_mt + mteb/sickr-sts (total: 33,093)
- Hyperparameters: learning rate: 3e-5, epochs: 200, batch size: 32, max token length: 128
- Output model: sbert-mbertV2.0 (size: 813MB)
- Training time: 9h 20m / 1 GPU (24GB, 9.0GB used)
- Loss (cosine Spearman): 0.799 (corpus: korsts (tune_test.tsv))
- See here for the training code; a minimal sketch follows.
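
A minimal sketch of this step with sentence-transformers: build a SentenceBERT model (transformer + mean pooling) from the MLM checkpoint and fine-tune it with CosineSimilarityLoss. The single-corpus data loading (kor_nlu/sts) and its column names are assumptions; the actual run combined four STS corpora:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from sentence_transformers import InputExample, SentenceTransformer, losses, models

# Wrap the MLM model as a SentenceBERT: transformer + mean pooling
word = models.Transformer('bongsoo/mbertV2.0', max_seq_length=128)
pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode='mean')
model = SentenceTransformer(modules=[word, pooling])

# Hypothetical example corpus; gold scores 0-5 are normalized to 0-1
sts = load_dataset('kor_nlu', 'sts', split='train')
train = [InputExample(texts=[r['sentence1'], r['sentence2']], label=r['score'] / 5.0)
         for r in sts]

loader = DataLoader(train, shuffle=True, batch_size=32)
loss = losses.CosineSimilarityLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=200)
```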
3. Distillation training
- Student model: sbert-mbertV2.0
- Teacher model: paraphrase-multilingual-mpnet-base-v2
- Corpus: en_ko_train.tsv (Korean-English parallel corpus, social sciences domain: 1.1M pairs)
- Hyperparameters: learning rate: 5e-5, epochs: 40, batch size: 128, max token length: 128
- Output model: sbert-mlbertV2.0-distil
- Training time: 17h / 1 GPU (24GB, 18.6GB used)
- See here for the training code; a minimal sketch follows.
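
A minimal sketch of the teacher-student distillation, following sentence-transformers' multilingual-distillation recipe with MSELoss: the student is trained to reproduce the teacher's embeddings on both sides of the parallel corpus. The local student path and the TSV layout (one English-Korean pair per line) are assumptions:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, losses
from sentence_transformers.datasets import ParallelSentencesDataset

teacher = SentenceTransformer('paraphrase-multilingual-mpnet-base-v2')
student = SentenceTransformer('sbert-mbertV2.0')  # local checkpoint from the previous step (assumed path)

# Each TSV line: an English sentence and its Korean translation
data = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
data.load_data('en_ko_train.tsv')

loader = DataLoader(data, shuffle=True, batch_size=128)
# MSE between student and teacher sentence embeddings
loss = losses.MSELoss(model=student)
student.fit(train_objectives=[(loader, loss)], epochs=40)
```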
4. STS training => STS fine-tuning of the distilled SentenceBERT model
- Input model: sbert-mlbertV2.0-distil
- Corpus: korsts (5,749) + kluestsV1.1 (11,668) + stsb_multi_mt (5,749) + mteb/sickr-sts (9,927) + glue stsb (5,749) (total: 38,842)
- Hyperparameters: learning rate: 3e-5, epochs: 800, batch size: 64, max token length: 128
- Output model: moco-sentencebertV2.0
- Training time: 25h / 1 GPU (24GB, 13GB used)
- See here for the training code; an evaluation sketch follows.
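
The cosine-Spearman numbers reported for the STS stages can be produced with the library's EmbeddingSimilarityEvaluator. A sketch against the korsts held-out file mentioned above; its tab-separated sentence1/sentence2/score layout is an assumption:

```python
import csv
from sentence_transformers import InputExample, SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer('bongsoo/moco-sentencebertV2.0')

examples = []
with open('tune_test.tsv', newline='', encoding='utf-8') as f:
    for row in csv.DictReader(f, delimiter='\t'):
        examples.append(InputExample(texts=[row['sentence1'], row['sentence2']],
                                     label=float(row['score']) / 5.0))

evaluator = EmbeddingSimilarityEvaluator.from_input_examples(examples, name='korsts-test')
print(evaluator(model))  # main score: Spearman correlation of cosine similarities
```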
For details of the model creation process, see here.
DataLoader:
```
torch.utils.data.dataloader.DataLoader of length 1035 with parameters:
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
Config:
```json
{
  "_name_or_path": "../../data11/model/sbert/sbert-mbertV2.0-distil",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "directionality": "bidi",
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "pooler_fc_size": 768,
  "pooler_num_attention_heads": 12,
  "pooler_num_fc_layers": 3,
  "pooler_size_per_head": 128,
  "pooler_type": "first_token_transform",
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.21.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 152537
}
```
## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
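
The same module stack can be assembled by hand from sentence-transformers building blocks, which is equivalent to loading the packaged model. A minimal sketch:

```python
from sentence_transformers import SentenceTransformer, models

# (0) Transformer module: BertModel, max_seq_length 128, no lower-casing
word = models.Transformer('bongsoo/moco-sentencebertV2.0', max_seq_length=128, do_lower_case=False)
# (1) Pooling module: mean pooling over the 768-dim token embeddings
pooling = models.Pooling(word.get_word_embedding_dimension(), pooling_mode='mean')

model = SentenceTransformer(modules=[word, pooling])
print(model)  # shows the Transformer + Pooling stack above
```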
## Citing & Authors
bongsoo