Update README.md
README.md
CHANGED
@@ -8,9 +8,9 @@ tags:
 inference: true
 ---
 
-#
+# kkOracle v0.1
 
-
+kkOracle v0.1 is a LoRA fine-tune of [Meltemi 7B Instruct v1.5](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1.5), trained on a synthetic dataset built from text in the Greek daily newspaper "Rizospastis" spanning 2008 to 2024.
 
 
 # Running the model with mlx on a Mac
@@ -20,7 +20,7 @@ pip install mlx-lm
 ```
 
 ```
-python -m mlx_lm.generate --model
+python -m mlx_lm.generate --model model_kkOracle --prompt "Καλημέρα!" --temp 0.3
 ```
 
 
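The mlx hunk above only changes the CLI invocation. For reference, the same call can be driven from Python; this is a minimal sketch, assuming a recent mlx-lm release and the local model path `model_kkOracle` used in the command (keyword arguments such as sampling options have shifted between mlx-lm versions, so only widely supported ones are used here):

```python
# Minimal sketch: mlx-lm Python API equivalent of the CLI call above.
# Assumes mlx-lm is installed (pip install mlx-lm) and the model lives at ./model_kkOracle.
from mlx_lm import load, generate

model, tokenizer = load("model_kkOracle")

# Generate a short completion for a Greek prompt.
response = generate(model, tokenizer, prompt="Καλημέρα!", max_tokens=256)
print(response)
```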
@@ -31,8 +31,8 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 
 device = "cuda" # or "cpu"
 
-model = AutoModelForCausalLM.from_pretrained("
-tokenizer = AutoTokenizer.from_pretrained("
+model = AutoModelForCausalLM.from_pretrained("model_kkOracle")
+tokenizer = AutoTokenizer.from_pretrained("model_kkOracle")
 
 model.to(device)
 
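The transformers hunk shows only the lines around the changed `from_pretrained` calls. For readers who want a self-contained reference, a minimal end-to-end sketch follows; it assumes the local path `model_kkOracle` from the diff, that the tokenizer carries over Meltemi's chat template, and an illustrative prompt with sampling settings mirroring the `--temp 0.3` used in the mlx command:

```python
# Minimal sketch: loading and generating with transformers.
# Model path, prompt, and sampling settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # or "cpu"

model = AutoModelForCausalLM.from_pretrained("model_kkOracle")
tokenizer = AutoTokenizer.from_pretrained("model_kkOracle")
model.to(device)

# Assumes the tokenizer ships the chat template inherited from Meltemi 7B Instruct v1.5.
messages = [{"role": "user", "content": "Καλημέρα!"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(device)

with torch.no_grad():
    output_ids = model.generate(
        input_ids, max_new_tokens=256, do_sample=True, temperature=0.3
    )

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```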