Update README.md
README.md (CHANGED):

````diff
@@ -12,7 +12,7 @@
 license_link: https://ai.google.dev/gemma/terms
 ---
 
-# mlx-community/quantized-gemma
+# mlx-community/quantized-gemma-2b
 This model was converted to MLX format from [`google/gemma-2b`]().
 Refer to the [original model card](https://huggingface.co/google/gemma-2b) for more details on the model.
 ## Use with mlx
@@ -24,6 +24,6 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("mlx-community/quantized-gemma")
+model, tokenizer = load("mlx-community/quantized-gemma-2b")
 response = generate(model, tokenizer, prompt="hello", verbose=True)
 ```
````