This model is part of the MIREI collection (Matched Investigation of Representation Embedding Insights). Code: https://github.com/iamtatsuki05/MIREI
ModernBERT-JP-0.5B-init is a Japanese ModernBERT initialization with approximately 0.5B non-embedding parameters. It serves as a neutral starting point for downstream pre-training or fine-tuning rather than a model intended for direct deployment.
Requirements:

```
transformers>=4.51.0
accelerate>=1.6.0
sentencepiece>=0.2.0
flash-attn>=2.7.3
```
Usage:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "iamtatsuki05/ModernBERT-JP-0.5B-init"
model_kwargs = {
    "torch_dtype": torch.bfloat16,
    "attn_implementation": "flash_attention_2",
    "device_map": "auto",
}
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name, **model_kwargs)

# Fill-mask example: predict the masked token in a Japanese sentence.
text = f"ハチワレは{tokenizer.mask_token}のキャラクターです。"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs)

# Locate the mask position and decode the highest-scoring token.
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
print(tokenizer.decode(outputs.logits[0, masked_index].argmax(dim=-1)))
```
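Because this checkpoint is an untrained initialization, the predicted token will not be meaningful; the snippet mainly verifies that the model loads and runs. Note that `flash_attention_2` requires a compatible GPU and the `flash-attn` package; omitting that key falls back to the default attention implementation.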
These checkpoints correspond to parameter initializations prior to any domain-specific optimization. All models in the series share the sarashina2.2 tokenizer.
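Since the checkpoint is untrained, a typical next step is masked-language-model pre-training. The sketch below uses the Hugging Face Trainer with standard 15% random token masking; it additionally requires the `datasets` library, and the corpus (a Japanese Wikipedia subset) and all hyperparameters are illustrative placeholders, not the configuration used in MIREI.

```python
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "iamtatsuki05/ModernBERT-JP-0.5B-init"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

# Placeholder corpus: any Japanese text dataset with a "text" column works.
dataset = load_dataset("wikimedia/wikipedia", "20231101.ja", split="train[:1%]")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

# Standard MLM masking; the masking ratio used in MIREI is an assumption here.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="modernbert-jp-0.5b-mlm",
    per_device_train_batch_size=8,
    learning_rate=1e-4,  # placeholder hyperparameters
    num_train_epochs=1,
    bf16=True,
    logging_steps=100,
)

Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator).train()
```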
| ID | Architecture | #Param. | #Param. w/o Emb. |
|---|---|---|---|
| iamtatsuki05/ModernBERT-JP-0.5B-init (this model) | ModernBERT | 679M | 548M |
| iamtatsuki05/Llama-JP-0.5B-init | Llama | 661M | 530M |
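MIREI analyzes sentence embeddings produced by encoder and decoder architectures under matched conditions (see the citation below). One common way to obtain a sentence embedding from an encoder such as this one is mean pooling over the last hidden states; the sketch below illustrates that approach, which is not necessarily the pooling strategy used in the paper, and the example sentences are arbitrary.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "iamtatsuki05/ModernBERT-JP-0.5B-init"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

sentences = ["吾輩は猫である。", "今日は良い天気です。"]
inputs = tokenizer(sentences, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (batch, seq, hidden_size)

# Mean-pool over non-padding tokens (one common choice, assumed here).
mask = inputs["attention_mask"].unsqueeze(-1).to(hidden.dtype)
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)  # -> torch.Size([2, hidden_size])
```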
This model is released under the MIT License. See https://github.com/iamtatsuki05/MIREI/blob/main/LICENSE for details.
Citation:

```bibtex
@inproceedings{MIREI,
  title={同一条件下における Encoder/Decoder アーキテクチャによる文埋め込みの性能分析},
  author={岡田 龍樹 and 杉本 徹},
  booktitle={言語処理学会第 32 回年次大会 (NLP2026)},
  year={2026}
}
```