---
license: apache-2.0
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
base_model: grimjim/Nemo-Instruct-2407-MPOA-v4-12B
pipeline_tag: text-generation
library_name: mlx
tags:
- mlx
---
# CallMcMargin/Nemo-Instruct-2407-MPOA-v4-12B-mlx-bf16-mxfp4-qgroup32-mixed_4_6
This model, CallMcMargin/Nemo-Instruct-2407-MPOA-v4-12B-mlx-bf16-mxfp4-qgroup32-mixed_4_6, was converted to MLX format from [grimjim/Nemo-Instruct-2407-MPOA-v4-12B](https://huggingface.co/grimjim/Nemo-Instruct-2407-MPOA-v4-12B) using mlx-lm version **0.28.4**.
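A conversion like this can be reproduced with the `mlx_lm.convert` API. The sketch below is an assumption, not the exact command used for this repository: the group size of 32 matches the `qgroup32` in the repo name, but the mxfp4 / mixed 4/6-bit recipe likely requires additional quantization options that are not shown here.

```python
# Hypothetical reproduction of the conversion step (assumed, not the exact
# recipe used for this repo). The mxfp4 / mixed_4_6 quantization likely needs
# extra options beyond the basic arguments below.
from mlx_lm import convert

convert(
    hf_path="grimjim/Nemo-Instruct-2407-MPOA-v4-12B",
    mlx_path="Nemo-Instruct-2407-MPOA-v4-12B-mlx",  # local output directory (assumed name)
    quantize=True,
    q_group_size=32,  # matches "qgroup32" in the repo name
    q_bits=4,         # base bit width; the mixed 4/6 recipe is not reproduced here
)
```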
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the model and tokenizer from the Hugging Face Hub (or load from cache).
model, tokenizer = load("CallMcMargin/Nemo-Instruct-2407-MPOA-v4-12B-mlx-bf16-mxfp4-qgroup32-mixed_4_6")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in a chat message.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
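For longer outputs you can also stream tokens as they are produced. This is a minimal sketch using `stream_generate` from the same package; in recent mlx-lm releases it yields response objects with a `.text` attribute, so check the behavior of the version you have installed.

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("CallMcMargin/Nemo-Instruct-2407-MPOA-v4-12B-mlx-bf16-mxfp4-qgroup32-mixed_4_6")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print tokens to stdout as they are generated (max_tokens of 256 is an arbitrary example value).
for chunk in stream_generate(model, tokenizer, prompt=prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```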