# AI21-Jamba2-3B MLX

This repository contains a public MLX safetensors export of ai21labs/AI21-Jamba2-3B for Apple Silicon workflows with `mlx-lm`.
## Model Details

- Base model: ai21labs/AI21-Jamba2-3B
- Format: MLX safetensors
- Quantization: none
- Intended use: local text generation and chat on MLX-compatible Apple devices
## Quick Start

Install the runtime:

```shell
pip install -U mlx-lm
```

Run a one-shot generation:

```shell
mlx_lm.generate --model ssdataanalysis/AI21-Jamba2-3B-mlx-fp16 --prompt "Write a short haiku about the sea."
```
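The same one-shot generation is available from Python. A minimal sketch using `mlx-lm`'s `load`/`generate` API, assuming the package is installed on an Apple Silicon machine (`max_tokens=64` is just an illustrative choice):

```python
MODEL_ID = "ssdataanalysis/AI21-Jamba2-3B-mlx-fp16"

def run_once(prompt: str, max_tokens: int = 64) -> str:
    # Imported lazily so this file can be read/loaded without mlx installed.
    from mlx_lm import load, generate
    # load() fetches (or reuses a cached copy of) the weights and tokenizer.
    model, tokenizer = load(MODEL_ID)
    # generate() returns the completion text as a string.
    return generate(model, tokenizer, prompt=prompt, max_tokens=max_tokens)
```

Calling `run_once("Write a short haiku about the sea.")` mirrors the CLI command above.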
Start an interactive chat:

```shell
mlx_lm.chat --model ssdataanalysis/AI21-Jamba2-3B-mlx-fp16
```
Run the HTTP server:

```shell
mlx_lm.server --model ssdataanalysis/AI21-Jamba2-3B-mlx-fp16 --host 127.0.0.1 --port 8080
```
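`mlx_lm.server` exposes an OpenAI-compatible chat endpoint, so any HTTP client can talk to it. A small stdlib-only sketch, assuming the `/v1/chat/completions` route and the host/port from the command above:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8080"

def build_chat_request(prompt: str, max_tokens: int = 128) -> dict:
    # Minimal OpenAI-style chat payload.
    return {
        "model": "ssdataanalysis/AI21-Jamba2-3B-mlx-fp16",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def chat(prompt: str) -> str:
    body = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        BASE_URL + "/v1/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    # OpenAI-style responses put the text under choices[0].message.content.
    return data["choices"][0]["message"]["content"]

# With the server running: print(chat("Write a short haiku about the sea."))
```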
You can replace the model ID above with a local path if you have already downloaded the repository.
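One way to obtain that local copy is `huggingface_hub`'s `snapshot_download` (installed alongside `mlx-lm`'s dependencies); a sketch, where the `local_dir` value is just an illustrative choice:

```python
def download_model(local_dir: str = "./AI21-Jamba2-3B-mlx-fp16") -> str:
    # Imported lazily so this file can be read/loaded without huggingface_hub.
    from huggingface_hub import snapshot_download
    # Returns the path to the downloaded snapshot; pass it to --model.
    return snapshot_download(
        "ssdataanalysis/AI21-Jamba2-3B-mlx-fp16",
        local_dir=local_dir,
    )
```

After downloading, for example: `mlx_lm.generate --model ./AI21-Jamba2-3B-mlx-fp16 --prompt "..."`.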
## Notes

- This is an MLX export intended for use with `mlx-lm`.
- The upstream model license remains Apache-2.0.
- For the original source checkpoint and upstream documentation, see ai21labs/AI21-Jamba2-3B.
## Model Stats

- Model size: 3B params
- Tensor type: BF16