# staedi/coref-llama-3.2

A coreference-resolution model fine-tuned with mlx-lm from the base model meta-llama/Llama-3.2-3B-Instruct.

The model staedi/coref-llama-3.2 was converted to MLX format from meta-llama/Llama-3.2-3B-Instruct using mlx-lm version 0.26.2.

## Use with mlx

```shell
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Load the fine-tuned model
model, tokenizer = load("staedi/coref-llama-3.2")

# Text to resolve coreferences in
text = "Apple announced its earnings. The company performed well."

# Build the instruction prompt
prompt = (
    "Resolve all coreferences in the following text by replacing pronouns and "
    "descriptive references with their original entities. Maintain the same "
    "meaning and structure while making all references explicit:\n" + text
)

# Wrap the prompt in the chat template if the tokenizer provides one
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
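To resolve several documents, the prompt construction above can be factored into a small helper. This is a sketch: the name `build_coref_prompt` is ours, not part of mlx-lm or this model card, and only the prompt-building step is shown (each prompt would still go through the chat template and `generate` as above).

```python
# Sketch of a reusable prompt builder for the instruction shown above.
# The helper name `build_coref_prompt` is illustrative, not part of mlx-lm.

INSTRUCTION = (
    "Resolve all coreferences in the following text by replacing pronouns and "
    "descriptive references with their original entities. Maintain the same "
    "meaning and structure while making all references explicit:\n"
)

def build_coref_prompt(text: str) -> str:
    """Return the full instruction prompt for a single input text."""
    return INSTRUCTION + text

texts = [
    "Apple announced its earnings. The company performed well.",
    "Maria met John before she left.",
]
# One prompt per input text; each would then be templated and generated.
prompts = [build_coref_prompt(t) for t in texts]
```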