---
license: apache-2.0
pipeline_tag: text-generation
library_name: mlx
base_model: ai21labs/AI21-Jamba2-3B
tags:
- mlx
- safetensors
- jamba
- text-generation
---
# AI21-Jamba2-3B MLX
This repository contains a public MLX safetensors export of [ai21labs/AI21-Jamba2-3B](https://huggingface.co/ai21labs/AI21-Jamba2-3B) for Apple Silicon workflows with `mlx-lm`.
## Model Details
- Base model: [ai21labs/AI21-Jamba2-3B](https://huggingface.co/ai21labs/AI21-Jamba2-3B)
- Format: MLX safetensors
- Quantization: none
- Intended use: local text generation and chat on MLX-compatible Apple devices
## Quick Start
Install the runtime:

```bash
pip install -U mlx-lm
```

Run a one-shot generation:

```bash
mlx_lm.generate --model ssdataanalysis/AI21-Jamba2-3B-mlx-fp16 --prompt "Write a short haiku about the sea."
```

Start an interactive chat:

```bash
mlx_lm.chat --model ssdataanalysis/AI21-Jamba2-3B-mlx-fp16
```

Run the HTTP server:

```bash
mlx_lm.server --model ssdataanalysis/AI21-Jamba2-3B-mlx-fp16 --host 127.0.0.1 --port 8080
```
You can replace the model ID above with a local path if you have already downloaded the repository.
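Once the server is running, it accepts OpenAI-style chat-completions requests. A minimal sketch of the request body, assuming the server is listening at `127.0.0.1:8080` as started above (the `max_tokens` and `temperature` values here are illustrative, not recommendations):

```python
import json

# Build a chat-completions request body for the mlx_lm.server
# OpenAI-compatible endpoint. POST this as JSON to
# http://127.0.0.1:8080/v1/chat/completions
payload = {
    "model": "ssdataanalysis/AI21-Jamba2-3B-mlx-fp16",
    "messages": [
        {"role": "user", "content": "Write a short haiku about the sea."}
    ],
    "max_tokens": 128,      # illustrative cap on generated tokens
    "temperature": 0.7,     # illustrative sampling temperature
}
body = json.dumps(payload)
print(body)
```

The same body works with `curl -X POST -H "Content-Type: application/json" -d @-` against the endpoint.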
## Notes
- This is an MLX export intended for `mlx-lm`.
- The upstream model license remains Apache-2.0.
- For the original source checkpoint and upstream documentation, see [ai21labs/AI21-Jamba2-3B](https://huggingface.co/ai21labs/AI21-Jamba2-3B).