This model is part of the MIREI collection (Matched Investigation of Representation Embedding Insights). Code: https://github.com/iamtatsuki05/MIREI
Languages: English / Japanese
Sentence-Sarashina-Bi-0.5B-PT fine-tunes iamtatsuki05/sarashina2.2-Bi-0.5b with weakly supervised contrastive learning on cl-nagoya/ruri-dataset-v2-pt, delivering 1,280-dimensional Japanese embeddings.
Requirements:

```
sentence-transformers>=4.1.0
transformers>=4.51.0
accelerate>=1.6.0
sentencepiece>=0.2.0
flash-attn>=2.7.3
```
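As a quick sanity check that the environment matches these pins, the installed versions can be printed; a minimal sketch using only the standard library:

```python
from importlib.metadata import PackageNotFoundError, version

# Verify that the pinned dependencies are installed.
for pkg in ["sentence-transformers", "transformers", "accelerate", "sentencepiece", "flash-attn"]:
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg} is not installed")
```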
Usage:

```python
import torch
from sentence_transformers import SentenceTransformer

model_name = "iamtatsuki05/Sentence-Sarashina-Bi-0.5B-PT"
model_kwargs = {
    "torch_dtype": torch.bfloat16,
    "attn_implementation": "flash_attention_2",
}
model = SentenceTransformer(model_name, model_kwargs=model_kwargs)

# Japanese example query ("What kind of character is Hachiware?") and documents.
queries = ["ハチワレはどのようなキャラクターですか?"]
docs = [
    "ハチワレは、『ちいかわ』に登場する猫風のキャラクターで、明るく社交的、前向きな性格が特徴。ちいかわたちと共に日常を楽しみつつ、討伐などの冒険にも積極的に挑む存在です。",
    "うさぎは、天真爛漫でマイペースな性格が特徴のキャラクターで、突飛な行動力と鋭い直感でちいかわたちを引っ張る存在。自由気ままながらも仲間思いな一面を併せ持ちます。",
]

# Encode and L2-normalize, so the similarity scores below are cosine similarities.
q_emb = model.encode(queries, normalize_embeddings=True)
d_emb = model.encode(docs, normalize_embeddings=True)
scores = model.similarity(q_emb, d_emb)
print(scores)
```
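Beyond pairwise scores, the same embeddings can drive retrieval. A minimal sketch reusing the variables above with sentence-transformers' util.semantic_search (the top_k value and printing are illustrative):

```python
from sentence_transformers import util

# The card states 1,280-dimensional embeddings, so q_emb.shape should be (1, 1280).
print(q_emb.shape)

# Rank the corpus for each query by cosine similarity (embeddings are normalized above).
hits = util.semantic_search(q_emb, d_emb, top_k=len(docs))
for hit in hits[0]:
    print(f"{hit['score']:.4f}  {docs[hit['corpus_id']][:30]}...")
```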
All encoders listed below are trained on approximately two million weakly supervised pairs per subset of cl-nagoya/ruri-dataset-v2-pt; a minimal fine-tuning sketch follows the table.
| ID | Architecture | #Param. | #Param. w/o Emb. |
|---|---|---|---|
| iamtatsuki05/Sentence-ModernBERT-JP-0.5B-PT | ModernBERT | 679M | 548M |
| iamtatsuki05/Sentence-Llama-Bi-JP-0.5B-PT | Llama | 661M | 530M |
| iamtatsuki05/Sentence-Sarashina-Bi-0.5B-PT (this model) | Llama | 661M | 530M |
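The sketch below shows what this kind of weakly supervised contrastive fine-tuning looks like with sentence-transformers' trainer and in-batch-negatives loss. The subset name, column names, and hyperparameters are placeholder assumptions for illustration, not the authors' actual configuration; check the ruri-dataset-v2-pt dataset card for the real subsets and columns.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Base model to fine-tune (the backbone named in the model card).
model = SentenceTransformer("iamtatsuki05/sarashina2.2-Bi-0.5b")

# Hypothetical subset and column names; replace with the dataset's real ones.
dataset = load_dataset("cl-nagoya/ruri-dataset-v2-pt", "auto-wiki-qa", split="train")
dataset = dataset.select_columns(["anc", "pos"])  # (anchor, positive) pairs

# In-batch negatives: other positives in the batch serve as negatives,
# the standard contrastive loss for weakly supervised (anchor, positive) pairs.
loss = MultipleNegativesRankingLoss(model)

args = SentenceTransformerTrainingArguments(
    output_dir="sentence-sarashina-bi-0.5b-pt",
    per_device_train_batch_size=128,  # illustrative, not the reported setting
    num_train_epochs=1,
    bf16=True,
)

trainer = SentenceTransformerTrainer(
    model=model, args=args, train_dataset=dataset, loss=loss
)
trainer.train()
```

Larger batch sizes tend to help this loss, since every additional in-batch example contributes another negative per anchor.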
This model is distributed under the MIT License.
@article{MIREI,
title={同一条件下における Encoder/Decoder アーキテクチャによる文埋め込みの性能分析},
author={岡田 龍樹 and 杉本 徹},
journal={言語処理学会第 32 回年次大会 (NLP2026)},
year={2026}
}