Sentence-ModernBERT-JP-0.5B

Overview

Sentence-ModernBERT-JP-0.5B fine-tunes iamtatsuki05/Sentence-ModernBERT-JP-0.5B-PT with supervised contrastive learning on cl-nagoya/ruri-v3-dataset-ft, yielding 1,280-dimensional Japanese sentence embeddings.
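
The training recipe itself is not reproduced in this card. As a rough sketch, supervised contrastive fine-tuning of this kind can be set up with sentence-transformers; the loss choice (MultipleNegativesRankingLoss, i.e. in-batch negatives) and the toy anchor/positive pairs below are illustrative assumptions, not the documented configuration:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Start from the weakly supervised checkpoint.
model = SentenceTransformer("iamtatsuki05/Sentence-ModernBERT-JP-0.5B-PT")

# Toy (anchor, positive) pairs standing in for cl-nagoya/ruri-v3-dataset-ft;
# with this loss, the other in-batch positives act as negatives for each anchor.
train_dataset = Dataset.from_dict({
    "anchor": ["日本の首都はどこですか?", "富士山の高さは?"],
    "positive": ["日本の首都は東京です。", "富士山の標高は3,776メートルです。"],
})

loss = MultipleNegativesRankingLoss(model)
trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()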

Concept

Usage

Requirements

sentence-transformers>=4.1.0
transformers>=4.51.0
accelerate>=1.6.0
sentencepiece>=0.2.0
flash-attn>=2.7.3
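
For example, the pinned dependencies can be installed with pip (flash-attn additionally requires a CUDA toolchain to build):

pip install "sentence-transformers>=4.1.0" "transformers>=4.51.0" "accelerate>=1.6.0" "sentencepiece>=0.2.0" "flash-attn>=2.7.3"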

Sample Code

import torch
from sentence_transformers import SentenceTransformer

model_name = "iamtatsuki05/Sentence-ModernBERT-JP-0.5B"
model_kwargs = {
    "torch_dtype": torch.bfloat16,
    "attn_implementation": "flash_attention_2",  # needs flash-attn and a compatible GPU
}
model = SentenceTransformer(model_name, model_kwargs=model_kwargs)

queries = ["ハチワレはどのようなキャラクターですか?"]
docs = [
    "ハチワレは、『ちいかわ』に登場する猫風のキャラクターで、明るく社交的、前向きな性格が特徴。ちいかわたちと共に日常を楽しみつつ、討伐などの冒険にも積極的に挑む存在です。",
    "うさぎは、天真爛漫でマイペースな性格が特徴のキャラクターで、突飛な行動力と鋭い直感でちいかわたちを引っ張る存在。自由気ままながらも仲間思いな一面を併せ持ちます。",
]

# Encode with L2 normalization, then score query-document pairs by cosine similarity.
q_emb = model.encode(queries, normalize_embeddings=True)
d_emb = model.encode(docs, normalize_embeddings=True)
scores = model.similarity(q_emb, d_emb)
print(scores)
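
The flash_attention_2 setting requires the flash-attn package and a supported GPU. When either is missing, loading without the overrides should work as a fallback (an assumption, not part of the documented setup):

model = SentenceTransformer("iamtatsuki05/Sentence-ModernBERT-JP-0.5B")  # default attention, full precision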

Model Details

  • Base model: iamtatsuki05/Sentence-ModernBERT-JP-0.5B-PT
  • Architecture: ModernBERT
  • Maximum sequence length: 8,192 tokens
  • Embedding dimension: 1280 (mean pooling)
  • Tokenizer: SentencePiece / vocabulary size 102,400
  • Positional encoding: RoPE
  • Supported languages: Japanese
  • Similarity metric: cosine
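
As a rough illustration of the mean pooling and cosine settings listed above, the same embeddings can be computed with plain transformers. The SentenceTransformer wrapper in the sample code does all of this internally, so treat this as a sketch rather than the canonical path:

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "iamtatsuki05/Sentence-ModernBERT-JP-0.5B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

texts = ["猫はかわいい。", "犬もかわいい。"]
batch = tokenizer(texts, padding=True, truncation=True, max_length=8192, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, 1280)

# Mean pooling: average token states, masking out padding positions.
mask = batch["attention_mask"].unsqueeze(-1).to(hidden.dtype)
emb = (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Cosine similarity is the dot product of L2-normalized embeddings.
emb = F.normalize(emb, p=2, dim=1)
print(emb.shape)  # torch.Size([2, 1280])
print(emb @ emb.T)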

Model Series

These encoders start from weakly supervised checkpoints and undergo supervised fine-tuning on cl-nagoya/ruri-v3-dataset-ft.

| ID | Architecture | #Param. | #Param. w/o Emb. | JMTEB-Avg | JMTEB-Retrieval | JMTEB-STS | JMTEB-Classification | JMTEB-Reranking | JMTEB-Clustering |
|----|--------------|---------|------------------|-----------|-----------------|-----------|----------------------|-----------------|------------------|
| iamtatsuki05/Sentence-ModernBERT-JP-0.5B (this model) | ModernBERT | 679M | 548M | 65.31 | 57.95 | 80.78 | 71.73 | 75.50 | 50.03 |
| iamtatsuki05/Sentence-Llama-Bi-JP-0.5B | Llama | 661M | 530M | 61.02 | 51.55 | 78.01 | 68.51 | 71.96 | 48.69 |
| iamtatsuki05/Sentence-Sarashina-Bi-0.5B | Llama | 661M | 530M | 66.84 | 59.00 | 83.50 | 74.35 | 77.36 | 49.40 |

License

This model is distributed under the MIT License.

How to Cite

@article{MIREI,
  title={同一条件下における Encoder/Decoder アーキテクチャによる文埋め込みの性能分析},
  author={岡田 龍樹 and 杉本 徹},
  journal={言語処理学会第 32 回年次大会 (NLP2026)},
  year={2026}
}