---
license: apache-2.0
pipeline_tag: text-generation
library_name: mlx
base_model: ai21labs/AI21-Jamba2-3B
tags:
- mlx
- safetensors
- jamba
- text-generation
---

# AI21-Jamba2-3B MLX

This repository contains a public MLX `safetensors` export of
[`ai21labs/AI21-Jamba2-3B`](https://huggingface.co/ai21labs/AI21-Jamba2-3B),
intended for running on Apple Silicon with `mlx-lm`.

## Model Details

- Base model: `ai21labs/AI21-Jamba2-3B`
- Format: MLX `safetensors`
- Quantization: none (fp16 weights, per the repository name)
- Intended use: local text generation and chat on MLX-compatible Apple devices

## Quick Start

Install the runtime:

```bash
pip install -U mlx-lm
```

Run a one-shot generation:

```bash
mlx_lm.generate --model ssdataanalysis/AI21-Jamba2-3B-mlx-fp16 --prompt "Write a short haiku about the sea."
```

Start an interactive chat:

```bash
mlx_lm.chat --model ssdataanalysis/AI21-Jamba2-3B-mlx-fp16
```

Run the HTTP server:

```bash
mlx_lm.server --model ssdataanalysis/AI21-Jamba2-3B-mlx-fp16 --host 127.0.0.1 --port 8080
```
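Once the server is up, you can query it over HTTP. A minimal sketch, assuming the server exposes `mlx-lm`'s OpenAI-compatible `/v1/chat/completions` endpoint on the host and port used above:

```bash
# Send a single chat request to the local mlx_lm.server instance.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [{"role": "user", "content": "Write a short haiku about the sea."}],
        "max_tokens": 64
      }'
```

The response is a JSON object whose generated text is under `choices[0].message.content`.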

You can replace the model ID above with a local path if you have already
downloaded the repository.
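If you prefer calling the model from Python rather than the CLI, a minimal sketch using `mlx-lm`'s Python API (the `load`/`generate` functions; the `max_tokens` value here is an arbitrary example):

```python
from mlx_lm import load, generate

# Load the model and tokenizer; a local directory path works here too.
model, tokenizer = load("ssdataanalysis/AI21-Jamba2-3B-mlx-fp16")

# Format the request with the tokenizer's chat template, if one is defined.
messages = [{"role": "user", "content": "Write a short haiku about the sea."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Generate a completion and print it.
text = generate(model, tokenizer, prompt=prompt, max_tokens=100)
print(text)
```

This mirrors what `mlx_lm.generate` does on the command line, but gives you the output as a Python string for further processing.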

## Notes

- This is an MLX export intended for `mlx-lm`.
- The upstream model license remains Apache-2.0.
- For the original source checkpoint and upstream documentation, see
  [`ai21labs/AI21-Jamba2-3B`](https://huggingface.co/ai21labs/AI21-Jamba2-3B).