---
license: gemma
language:
- en
base_model: SicariusSicariiStuff/X-Ray_Alpha
datasets:
- SicariusSicariiStuff/UBW_Tapestries
widget:
- text: X-Ray_Alpha
  output:
    url: https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha/resolve/main/Images/X-Ray_Alpha.png
tags:
- mlx
---

# Daizee/X-Ray_Alpha-mlx-4Bit

The model [Daizee/X-Ray_Alpha-mlx-4Bit](https://huggingface.co/Daizee/X-Ray_Alpha-mlx-4Bit) was converted to MLX format from [SicariusSicariiStuff/X-Ray_Alpha](https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha) using mlx-lm version **0.29.1**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the 4-bit quantized model and its tokenizer from the Hugging Face Hub.
model, tokenizer = load("Daizee/X-Ray_Alpha-mlx-4Bit")

prompt = "hello"

# If the tokenizer ships a chat template, wrap the prompt in it so the model
# sees the conversation format it was trained on.
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
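
For interactive use you can also stream tokens as they are produced instead of waiting for the full completion. The sketch below is a minimal example, assuming the same model identifier as above; `stream_generate` is part of mlx-lm, though the exact fields of the yielded responses have varied across versions, and `max_tokens=256` is just an illustrative cap, not a value from this card.

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("Daizee/X-Ray_Alpha-mlx-4Bit")

# Stream the reply token by token; each yielded response carries the text
# segment generated in that step. max_tokens is an illustrative cap.
for response in stream_generate(model, tokenizer, prompt="hello", max_tokens=256):
    print(response.text, end="", flush=True)
print()
```

For chat-style prompting, apply the tokenizer's chat template to the prompt first, exactly as in the `generate` example above.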