---
language:
  - ja
library_name: transformers
pipeline_tag: feature-extraction
tags:
  - BERT
  - encoder
  - embeddings
  - TiME
  - ja
  - size:m
license: apache-2.0
teacher_model: FacebookAI/xlm-roberta-large
datasets:
  - uonlp/CulturaX
---

# TiME Japanese (ja, m)

A monolingual BERT-style encoder that produces embeddings for Japanese text, distilled from FacebookAI/xlm-roberta-large.

## Specs

- Language: Japanese (ja)
- Size: m
- Architecture: BERT encoder
- Layers: 6
- Hidden size: 768
- Intermediate size: 3072
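
These dimensions can be verified without downloading the weights; a minimal sketch, assuming the checkpoint ships a standard `transformers` config with BERT-style attribute names:

```python
from transformers import AutoConfig

# Load only the config (no weights) and check the advertised dimensions.
cfg = AutoConfig.from_pretrained("dschulmeist/TiME-ja-m")
print(cfg.num_hidden_layers)   # expected: 6
print(cfg.hidden_size)         # expected: 768
print(cfg.intermediate_size)   # expected: 3072
```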

## Usage (mean pooled embeddings)

```python
from transformers import AutoTokenizer, AutoModel
import torch

repo = "dschulmeist/TiME-ja-m"
tok = AutoTokenizer.from_pretrained(repo)
mdl = AutoModel.from_pretrained(repo)
mdl.eval()

def mean_pool(last_hidden_state, attention_mask):
    # Average token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden_state)
    return (last_hidden_state * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

inputs = tok(["これは日本語の例文です。"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = mdl(**inputs)
emb = mean_pool(outputs.last_hidden_state, inputs["attention_mask"])
print(emb.shape)  # torch.Size([1, 768]) -> (batch_size, hidden_size)
```
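
The pooled vectors can be compared directly. A minimal sketch (not from the model card) that scores two sentences with cosine similarity, reusing `tok`, `mdl`, and `mean_pool` from above:

```python
import torch.nn.functional as F

sentences = ["今日は天気がいいです。", "本日は晴天です。"]
batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    out = mdl(**batch)
embs = mean_pool(out.last_hidden_state, batch["attention_mask"])

# Cosine similarity between the two sentence embeddings.
sim = F.cosine_similarity(embs[0:1], embs[1:2]).item()
print(sim)
```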