ShipBuilding committed
Commit 04a2763 · verified · 1 Parent(s): d2a1a45

Update README.md

Files changed (1): README.md (+5 −5)
README.md CHANGED
@@ -8,9 +8,9 @@ tags:
 inference: true
 ---
 
-# kkLLM v0.1
+# kkOracle v0.1
 
-kkLLM v0.1 is a LORA fine-tuned version of [Meltemi 7B Instruct v1.5](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1.5) using a synthetic dataset based on text from the daily greek newspaper "Rizospastis" covering the timespan from 2008 to 2024.
+kkOracle v0.1 is a LORA fine-tuned version of [Meltemi 7B Instruct v1.5](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1.5) using a synthetic dataset based on text from the daily greek newspaper "Rizospastis" covering the timespan from 2008 to 2024.
 
 
 # Running the model with mlx on a Mac
@@ -20,7 +20,7 @@ pip install mlx-lm
 ```
 
 ```
-python -m mlx_lm.generate --model model_kkLLM --prompt "Καλημέρα!" --temp 0.3
+python -m mlx_lm.generate --model model_kkOracle --prompt "Καλημέρα!" --temp 0.3
 ```
 
 
@@ -31,8 +31,8 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 
 device = "cuda" # or "cpu"
 
-model = AutoModelForCausalLM.from_pretrained("model_kkLLM")
-tokenizer = AutoTokenizer.from_pretrained("model_kkLLM")
+model = AutoModelForCausalLM.from_pretrained("model_kkOracle")
+tokenizer = AutoTokenizer.from_pretrained("model_kkOracle")
 
 model.to(device)
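The transformers snippet in this README stops at `model.to(device)`. A minimal sketch of the remaining generation step might look like the following — assuming the fine-tuned weights have been saved to a local `model_kkOracle` directory (as the `--model model_kkOracle` flag above suggests); the prompt and sampling settings are illustrative, not taken from the repo:

```python
# Sketch: load the locally saved kkOracle weights and generate one reply.
# Assumption: ./model_kkOracle contains a merged model + tokenizer; the
# prompt and sampling parameters below are illustrative examples.
import os

MODEL_DIR = "model_kkOracle"


def main():
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
    model = AutoModelForCausalLM.from_pretrained(MODEL_DIR).to(device)

    # Meltemi is an instruct model, so wrap the prompt in its chat template.
    messages = [{"role": "user", "content": "Καλημέρα!"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(device)

    output = model.generate(
        inputs, max_new_tokens=256, do_sample=True, temperature=0.3
    )
    # Decode only the newly generated tokens, not the echoed prompt.
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))


if __name__ == "__main__":
    # Only attempt generation if the local model directory actually exists.
    if os.path.isdir(MODEL_DIR):
        main()
    else:
        print(f"model directory '{MODEL_DIR}' not found; skipping generation")
```

Decoding only the tokens after `inputs.shape[-1]` keeps the echoed prompt out of the printed reply.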