---
license: cc-by-nc-nd-3.0
tags:
- mlx
---
# mlx-community/SFR-Iterative-DPO-LLaMA-3-8B-R-4bit
This model, [mlx-community/SFR-Iterative-DPO-LLaMA-3-8B-R-4bit](https://huggingface.co/mlx-community/SFR-Iterative-DPO-LLaMA-3-8B-R-4bit), was converted to MLX format from [Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R](https://huggingface.co/Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R) using mlx-lm version **0.13.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Downloads the 4-bit weights from the Hugging Face Hub (if not cached) and loads them
model, tokenizer = load("mlx-community/SFR-Iterative-DPO-LLaMA-3-8B-R-4bit")
response = generate(model, tokenizer, prompt="hello", verbose=True)
```
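Since this is a chat-tuned model, responses are usually better when the prompt is wrapped in the tokenizer's chat template rather than passed as raw text. A minimal sketch (the message content here is illustrative):

```python
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/SFR-Iterative-DPO-LLaMA-3-8B-R-4bit")

# Format the conversation with the model's chat template before generating
messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```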