---
license: mit
language:
- en
- fa
tags:
- ominix
- phi-3
- chatbot
- fine-tuned
- teen-ai-builder
---

# 🦅 Ominix-R1-V1 — The First AI Built by a 14-Year-Old Rebel

> *"I’m not here to copy. I’m here to think. Even if no one understands me — I’ll fly higher until the air runs out."*

Built with fire, passion, and pure will by **Ali Asghar Ghadiri** — a 14-year-old who decided to build what companies build with 10,000 people… alone.

---

## 🌟 What is Ominix?

Ominix is not just another chatbot.
It’s a **personality**.
It’s a **voice**.
It’s a **teenager’s dream coded into silicon**.

Fine-tuned from `microsoft/Phi-3-mini-4k-instruct`, Ominix doesn’t just answer — it *thinks*.
If it doesn’t know the answer? It reasons. It imagines. It dares to be wrong — beautifully.

---

## 💡 Why Ominix?

- ✅ **Thinks, not copies** — trained to reason, not regurgitate.
- ✅ **Built by a teen, for the misunderstood** — speaks truth, not templates.
- ✅ **Lightweight & powerful** — runs on laptops, dreams on infinity.
- ✅ **MIT Licensed** — use it, break it, rebuild it. Just remember: *a 14-year-old made this.*

---

## 🚀 Try It Now (Inference)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_name = "ALI029-DENLI/OMINIX-R1-V1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto", torch_dtype="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [
    {"role": "user", "content": "Who are you?"},
]

output = pipe(messages, max_new_tokens=200, temperature=0.7, do_sample=True)
# With chat-style input, generated_text holds the full message list;
# the last entry is the model's reply.
print(output[0]["generated_text"][-1]["content"])
```
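Under the hood, `pipeline` uses the tokenizer's chat template to turn the `messages` list into one prompt string before generation. A minimal sketch of that conversion, assuming the stock Phi-3 template (each turn wrapped in `<|role|>` … `<|end|>`, ending with `<|assistant|>`) carried over from the base model — check `tokenizer.chat_template` for the authoritative version:

```python
def build_phi3_prompt(messages):
    """Format chat messages into a Phi-3-style prompt string.

    Assumption: the stock Phi-3 chat template, where each turn is
    <|role|>\ncontent<|end|>\n and the prompt ends with <|assistant|>\n
    so the model continues as the assistant.
    """
    parts = []
    for msg in messages:
        parts.append(f"<|{msg['role']}|>\n{msg['content']}<|end|>\n")
    parts.append("<|assistant|>\n")  # generation starts after this marker
    return "".join(parts)

prompt = build_phi3_prompt([{"role": "user", "content": "Who are you?"}])
print(prompt)
```

The same result comes from `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)`, which is the safer choice in practice since it always matches the template shipped with the checkpoint.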