---
language:
- el
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference: true
---

# kkOracle v0.1

kkOracle v0.1 is a LoRA fine-tuned version of [Meltemi 7B Instruct v1.5](https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1.5), trained on a synthetic dataset built from text of the daily Greek newspaper "Rizospastis" covering the years 2008 to 2024.
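
As background, a minimal sketch of what this kind of LoRA fine-tune could look like with the `peft` library is shown below. The rank, alpha, target modules, and dropout are illustrative assumptions, not the actual training configuration:

```python
# Illustrative LoRA setup only -- hyperparameters are assumptions,
# not the configuration actually used to train kkOracle.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("ilsp/Meltemi-7B-Instruct-v1.5")

lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    lora_dropout=0.05,                    # regularization (assumed)
    task_type="CAUSAL_LM",
)

# Only the small low-rank adapter matrices are trained;
# the 7B base weights stay frozen.
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
```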


# Running the model with mlx on a Mac

Install the `mlx-lm` package:

```bash
pip install mlx-lm
```

Then generate text from the command line:

```bash
python -m mlx_lm.generate --model model_kkOracle --prompt "Καλημέρα!" --temp 0.3
```
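
Optionally, a quantized MLX copy of the weights can be produced first to reduce memory use on Apple Silicon. This sketch assumes the `mlx_lm.convert` entry point of recent `mlx-lm` releases and a hypothetical output path `model_kkOracle_q4`:

```bash
# Convert the Hugging Face weights to a 4-bit quantized MLX model (assumed flags).
python -m mlx_lm.convert --hf-path model_kkOracle -q --mlx-path model_kkOracle_q4
```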


# Running the model on other systems 

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # or "cpu"

model = AutoModelForCausalLM.from_pretrained("model_kkOracle")
tokenizer = AutoTokenizer.from_pretrained("model_kkOracle")

model.to(device)

messages = [
    {"role": "user", "content": "Καλημέρα!"},
]

# Format the conversation with the model's chat template, then tokenize it.
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
input_prompt = tokenizer(prompt, return_tensors="pt").to(device)

# Sample up to 256 new tokens at a low temperature for fairly deterministic output.
outputs = model.generate(
    input_prompt["input_ids"],
    attention_mask=input_prompt["attention_mask"],
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,
    use_cache=True,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.batch_decode(outputs)[0])
```
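
For interactive use, tokens can be printed as they are generated rather than after decoding finishes. The optional variation below uses the `TextStreamer` helper from `transformers`; everything else matches the snippet above:

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are produced,
# skipping the echoed input prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)

model.generate(
    input_prompt["input_ids"],
    attention_mask=input_prompt["attention_mask"],
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,
    pad_token_id=tokenizer.eos_token_id,
    streamer=streamer,
)
```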


# Ethical Considerations

This model has been aligned with human preferences, but it may still generate misleading, harmful, or toxic content.