How to use mlx-community/nomicai-modernbert-embed-base-6bit with sentence-transformers:
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("mlx-community/nomicai-modernbert-embed-base-6bit")
sentences = [
"That is a happy person",
"That is a happy dog",
"That is a very happy person",
"Today is a sunny day"
]
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [4, 4]

How to use mlx-community/nomicai-modernbert-embed-base-6bit with Transformers.js:
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';
// Allocate a feature-extraction pipeline (the task Transformers.js uses for embedding models)
const extractor = await pipeline('feature-extraction', 'mlx-community/nomicai-modernbert-embed-base-6bit');
// Compute mean-pooled, normalized sentence embeddings
const embeddings = await extractor(['That is a happy person', 'That is a happy dog'], { pooling: 'mean', normalize: true });

How to use mlx-community/nomicai-modernbert-embed-base-6bit with MLX:
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir nomicai-modernbert-embed-base-6bit mlx-community/nomicai-modernbert-embed-base-6bit
The model mlx-community/nomicai-modernbert-embed-base-6bit was converted to MLX format from nomic-ai/modernbert-embed-base using mlx-lm version 0.0.3.
pip install mlx-embeddings
from mlx_embeddings import load, generate
import mlx.core as mx
model, tokenizer = load("mlx-community/nomicai-modernbert-embed-base-6bit")
# For text embeddings
output = generate(model, tokenizer, texts=["I like grapes", "I like fruits"])  # use the tokenizer returned by load()
embeddings = output.text_embeds # Normalized embeddings
# Compute dot product between normalized embeddings
similarity_matrix = mx.matmul(embeddings, embeddings.T)
print("Similarity matrix between texts:")
print(similarity_matrix)
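For illustration, the dot-product step above is ordinary matrix math on normalized row vectors. A minimal NumPy sketch with made-up 3-dimensional vectors standing in for real model outputs (the actual embeddings are much higher-dimensional; the values below are invented for readability):

```python
import numpy as np

# Toy stand-ins for the normalized embeddings returned by the model
embeddings = np.array([
    [0.6, 0.8, 0.0],   # "I like grapes"
    [0.8, 0.6, 0.0],   # "I like fruits"
])
# Normalize rows so dot products become cosine similarities
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

# Same computation as mx.matmul(embeddings, embeddings.T)
similarity_matrix = embeddings @ embeddings.T
print(similarity_matrix.shape)                    # (2, 2)
print(round(float(similarity_matrix[0, 1]), 2))   # 0.96
```

Each diagonal entry is 1.0 (every vector matches itself), and off-diagonal entries give the cosine similarity between sentence pairs, just as in the MLX output above. Note also that the upstream nomic-ai/modernbert-embed-base card documents task prefixes (e.g. "search_query: ", "search_document: ") for retrieval workloads, which the short snippets here omit.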
Base model: answerdotai/ModernBERT-base (quantized)