Sentence-Sarashina-Bi-0.5B-PT

Overview

Sentence-Sarashina-Bi-0.5B-PT is a sentence-embedding model obtained by fine-tuning iamtatsuki05/sarashina2.2-Bi-0.5b with weakly supervised contrastive learning on cl-nagoya/ruri-dataset-v2-pt. It produces 1,280-dimensional Japanese sentence embeddings.
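Weakly supervised contrastive learning of this kind is typically an in-batch InfoNCE-style objective: each query is pulled toward its paired document while the other documents in the batch act as negatives. A minimal NumPy sketch with toy shapes and an assumed temperature (illustrative only, not the actual training code):

```python
import numpy as np

# Toy embeddings standing in for encoder outputs; real vectors are 1,280-dim.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))  # query embeddings (batch, dim)
d = rng.normal(size=(4, 8))  # paired positive document embeddings

# L2-normalize so dot products are cosine similarities.
q /= np.linalg.norm(q, axis=1, keepdims=True)
d /= np.linalg.norm(d, axis=1, keepdims=True)

temperature = 0.05  # assumed value for illustration
logits = (q @ d.T) / temperature  # all query-document similarities

# Row i's positive is column i; the other columns are in-batch negatives.
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))
print(float(loss) > 0.0)  # True: loss shrinks as positives outscore negatives
```

In practice this recipe corresponds to the in-batch-negatives ranking losses available in sentence-transformers; the hyperparameters here are placeholders.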

Concept

Usage

Requirements

sentence-transformers>=4.1.0
transformers>=4.51.0
accelerate>=1.6.0
sentencepiece>=0.2.0
flash-attn>=2.7.3

Sample Code

import torch
from sentence_transformers import SentenceTransformer

model_name = "iamtatsuki05/Sentence-Sarashina-Bi-0.5B-PT"
model_kwargs = {
    "torch_dtype": torch.bfloat16,
    "attn_implementation": "flash_attention_2",  # requires flash-attn
}
model = SentenceTransformer(model_name, model_kwargs=model_kwargs)

queries = ["ハチワレはどのようなキャラクターですか?"]
docs = [
    "ハチワレは、『ちいかわ』に登場する猫風のキャラクターで、明るく社交的、前向きな性格が特徴。ちいかわたちと共に日常を楽しみつつ、討伐などの冒険にも積極的に挑む存在です。",
    "うさぎは、天真爛漫でマイペースな性格が特徴のキャラクターで、突飛な行動力と鋭い直感でちいかわたちを引っ張る存在。自由気ままながらも仲間思いな一面を併せ持ちます。",
]

# Encode with L2 normalization so cosine similarity reduces to a dot product.
q_emb = model.encode(queries, normalize_embeddings=True)
d_emb = model.encode(docs, normalize_embeddings=True)
scores = model.similarity(q_emb, d_emb)  # shape: (num_queries, num_docs)
print(scores)

Model Details

  • Base model: iamtatsuki05/sarashina2.2-Bi-0.5b
  • Architecture: Llama
  • Maximum sequence length: 8,192 tokens
  • Embedding dimension: 1,280 (mean pooling)
  • Tokenizer: SentencePiece / vocabulary size 102,400
  • Positional encoding: RoPE
  • Supported languages: Japanese
  • Similarity metric: cosine
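
The mean-pooling and cosine-similarity choices above can be illustrated with a small NumPy sketch (toy dimensions; the real model pools 1,280-dimensional token states):

```python
import numpy as np

# Toy token embeddings: (seq_len, hidden); the real hidden size is 1,280.
rng = np.random.default_rng(0)
token_embs = rng.normal(size=(5, 8))

# Mean pooling: average the token states into a single sentence vector.
sent = token_embs.mean(axis=0)

# L2-normalize so the similarity metric (cosine) becomes a plain dot product.
sent /= np.linalg.norm(sent)
print(bool(np.isclose(np.linalg.norm(sent), 1.0)))  # True: unit-norm embedding
```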

Model Series

All encoders listed below are trained on approximately two million weakly supervised pairs per subset of cl-nagoya/ruri-dataset-v2-pt.

ID                                                        Architecture  #Param.  #Param. w/o Emb.
iamtatsuki05/Sentence-ModernBERT-JP-0.5B-PT               ModernBERT    679M     548M
iamtatsuki05/Sentence-Llama-Bi-JP-0.5B-PT                 Llama         661M     530M
iamtatsuki05/Sentence-Sarashina-Bi-0.5B-PT (this model)   Llama         661M     530M
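
As a quick consistency check on the table, the gap between total parameters and parameters without embeddings should roughly equal the embedding table size implied by the vocabulary size and embedding dimension listed under Model Details (an arithmetic sketch, not an official figure):

```python
vocab_size = 102_400  # SentencePiece vocabulary size (Model Details)
hidden_dim = 1_280    # embedding dimension (Model Details)

# Embedding table parameters = vocab_size x hidden_dim.
emb_params = vocab_size * hidden_dim
print(emb_params)  # 131072000, i.e. ~131M, matching 661M - 530M
```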

License

This model is distributed under the MIT License.

How to Cite

@article{MIREI,
  title={同一条件下における Encoder/Decoder アーキテクチャによる文埋め込みの性能分析},
  author={岡田 龍樹 and 杉本 徹},
  journal={言語処理学会第 32 回年次大会 (NLP2026)},
  year={2026}
}