Use with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="mlx-community/answerdotai-ModernBERT-Large-Instruct-4bit")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("mlx-community/answerdotai-ModernBERT-Large-Instruct-4bit")
model = AutoModelForMaskedLM.from_pretrained("mlx-community/answerdotai-ModernBERT-Large-Instruct-4bit")
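
If the checkpoint loads in the Transformers fill-mask pipeline, a minimal sketch of querying it looks like the following; the example sentence and the [MASK] placeholder are illustrative assumptions, not part of the card:

from transformers import pipeline

pipe = pipeline("fill-mask", model="mlx-community/answerdotai-ModernBERT-Large-Instruct-4bit")

# Predict the masked token; the pipeline returns the top candidates with scores
for prediction in pipe("Paris is the [MASK] of France."):
    print(prediction["token_str"], prediction["score"])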

mlx-community/answerdotai-ModernBERT-Large-Instruct-4bit

This model was converted to MLX format from answerdotai/ModernBERT-Large-Instruct using mlx-lm version 0.0.3.

Use with mlx

pip install mlx-embeddings
from mlx_embeddings import load, generate
import mlx.core as mx

model, tokenizer = load("mlx-community/answerdotai-ModernBERT-Large-Instruct-4bit")

# For text embeddings
output = generate(model, tokenizer, texts=["I like grapes", "I like fruits"])
embeddings = output.text_embeds  # Normalized embeddings

# Compute dot product between normalized embeddings
similarity_matrix = mx.matmul(embeddings, embeddings.T)

print("Similarity matrix between texts:")
print(similarity_matrix)
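
Because the embeddings are normalized, the dot products above are cosine similarities (the diagonal of the matrix is 1). As a minimal sketch under the same API, the embeddings can also rank candidate texts against a query; the query and candidate strings below are placeholders:

from mlx_embeddings import load, generate
import mlx.core as mx

model, tokenizer = load("mlx-community/answerdotai-ModernBERT-Large-Instruct-4bit")

query = "What fruit do I like?"
candidates = ["I like grapes", "I like fruits", "It is raining today"]

# Embed the query together with the candidates
output = generate(model, tokenizer, texts=[query] + candidates)
embeddings = output.text_embeds  # normalized, so dot product == cosine similarity

# Similarity of the query (row 0) to each candidate
scores = mx.matmul(embeddings[:1], embeddings[1:].T)
print("Query-to-candidate similarities:", scores)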
