---
license: cc-by-nc-nd-3.0
tags:
- mlx
---
# mlx-community/SFR-Iterative-DPO-LLaMA-3-8B-R-unquantized
The model [mlx-community/SFR-Iterative-DPO-LLaMA-3-8B-R-unquantized](https://huggingface.co/mlx-community/SFR-Iterative-DPO-LLaMA-3-8B-R-unquantized) was converted to MLX format from [Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R](https://huggingface.co/Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R) using mlx-lm version **0.13.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download (if needed) and load the model and tokenizer from the Hugging Face Hub
model, tokenizer = load("mlx-community/SFR-Iterative-DPO-LLaMA-3-8B-R-unquantized")

# Generate a completion; verbose=True streams tokens as they are produced
response = generate(model, tokenizer, prompt="hello", verbose=True)
```