How to use mrm8488/spanish-mmBERT-small with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="mrm8488/spanish-mmBERT-small")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("mrm8488/spanish-mmBERT-small")
model = AutoModel.from_pretrained("mrm8488/spanish-mmBERT-small")
```

This model is a 61.0% smaller version of jhu-clsp/mmBERT-small for the Spanish language, created using vocabulary pruning on the Lumberjackk/fineweb-2-trimming dataset.
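Vocabulary pruning keeps only the token embeddings relevant to the target language. A toy sketch of the idea, assuming a frequency-based selection on a Spanish corpus (the actual procedure used for this model may differ):

```python
import numpy as np

# Illustrative sketch of vocabulary pruning (not the exact procedure
# used for this model): keep the tokens most frequent in the target
# language and slice the embedding matrix down to them.
rng = np.random.default_rng(0)

orig_vocab_size, hidden = 10, 4          # toy sizes for illustration
embeddings = rng.normal(size=(orig_vocab_size, hidden))

# Token frequencies counted on the target-language corpus (toy values).
freqs = np.array([50, 3, 0, 41, 7, 0, 19, 2, 88, 1])

keep = 5                                              # pruned vocab size
kept_ids = np.sort(np.argsort(freqs)[::-1][:keep])    # top-k, original order

pruned = embeddings[kept_ids]             # new, smaller embedding matrix
old_to_new = {int(o): n for n, o in enumerate(kept_ids)}  # id remapping

print(kept_ids)        # -> [0 3 4 6 8]
print(pruned.shape)    # -> (5, 4)
```

The same id remapping must also be applied to the tokenizer so that surviving tokens point at the correct rows of the pruned matrix.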
- Vocabulary size: 32,768 tokens (reduced from 256,000)
- Tokenizer type: BPE
- Training samples: 200,000 texts
This pruned model should perform comparably to the original on Spanish-language tasks while using a much smaller memory footprint. However, it may perform poorly on the other languages covered by the original multilingual model, since tokens uncommon in Spanish were removed from its vocabulary.
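Most of the savings come from the embedding matrix, whose row count equals the vocabulary size. A back-of-the-envelope calculation, assuming a hidden size of 384 (an illustrative value, not stated on this card):

```python
# Rough estimate of embedding-parameter savings from vocabulary pruning.
hidden_size = 384          # assumed embedding dimension, for illustration
orig_vocab = 256_000       # original mmBERT-small vocabulary
pruned_vocab = 32_768      # pruned Spanish vocabulary

orig_params = orig_vocab * hidden_size
pruned_params = pruned_vocab * hidden_size
saved = orig_params - pruned_params

print(f"embedding params: {orig_params:,} -> {pruned_params:,} "
      f"({saved / orig_params:.1%} fewer)")
# -> embedding params: 98,304,000 -> 12,582,912 (87.2% fewer)
```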
You can use this model with the Transformers library:
```python
from transformers import AutoModel, AutoTokenizer

model_name = "mrm8488/spanish-mmBERT-small"
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
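`AutoModel` returns token-level hidden states rather than a single sentence vector; mean pooling over non-padding tokens is one common way to collapse them. A minimal NumPy sketch of just the pooling arithmetic (random arrays stand in for real encoder output):

```python
import numpy as np

# Mean pooling over the last hidden states, ignoring padding positions.
rng = np.random.default_rng(0)

batch, seq_len, hidden = 2, 6, 8
last_hidden = rng.normal(size=(batch, seq_len, hidden))
# 1 = real token, 0 = padding (second sentence is shorter)
attention_mask = np.array([[1, 1, 1, 1, 1, 1],
                           [1, 1, 1, 0, 0, 0]])

mask = attention_mask[:, :, None]            # (batch, seq, 1)
summed = (last_hidden * mask).sum(axis=1)    # sum over real tokens only
counts = mask.sum(axis=1)                    # number of real tokens
sentence_vecs = summed / counts              # (batch, hidden)

print(sentence_vecs.shape)   # -> (2, 8)
```

With the real model, `last_hidden` would come from `model(**tokenizer(texts, return_tensors="pt", padding=True)).last_hidden_state`.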
Base model
jhu-clsp/mmBERT-small