---
pipeline_tag: fill-mask
language: es
license: mit
tags:
  - trimmed
library_name: transformers
base_model: jhu-clsp/mmBERT-small
base_model_relation: quantized
datasets:
  - Lumberjackk/fineweb-2-trimming
---

# 🇪🇸 spanish-mmBERT-small

This model is a 61.0% smaller version of jhu-clsp/mmBERT-small for the Spanish language, created using vocabulary pruning on the fineweb-2-trimming dataset.

- **Vocabulary size:** 32,768 tokens (reduced from 256,000)
- **Tokenizer type:** BPE
- **Training samples:** 200,000 texts

This pruned model should perform comparably to the original on Spanish-language tasks while using a much smaller memory footprint. However, it may perform poorly on the other languages covered by the original multilingual model, since tokens rarely used in Spanish were removed from its vocabulary.

## Usage

You can use this model with the Transformers library:

```python
from transformers import AutoModel, AutoTokenizer

model_name = "mrm8488/spanish-mmBERT-small"
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
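Since the card tags the model as `fill-mask`, it can also be queried through the `fill-mask` pipeline. The sketch below is an assumption based on that tag, not an official example from the model authors; note it reads the mask token from the tokenizer rather than hard-coding it, since the exact string depends on the tokenizer:

```python
from transformers import pipeline

# Hypothetical usage sketch, assuming the model works with the standard
# fill-mask pipeline as its pipeline_tag suggests.
unmasker = pipeline("fill-mask", model="mrm8488/spanish-mmBERT-small")

# Use the tokenizer's own mask token instead of guessing its literal form.
mask = unmasker.tokenizer.mask_token
preds = unmasker(f"La capital de España es {mask}.", top_k=3)

for pred in preds:
    # Each prediction carries the filled-in token and its probability.
    print(pred["token_str"], round(pred["score"], 4))
```

Reading `tokenizer.mask_token` keeps the snippet robust if the pruned tokenizer uses a non-standard mask string.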