---
license: apache-2.0
language:
- en
- ru
- uk
base_model:
- openai/gpt-oss-20b
---
# 🔥 PyroNet

**PyroNet** is a research-first, open-source large language model, fine-tuned and customized with a unique system prompt and identity.
Created and maintained by **IceL1ghtning** from **Ukraine** 🇺🇦.

This model is based on **[gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)** and inherits its architecture, but provides a different **persona**, **behavior style**, and **chat template**.

---
## ✨ Features

- 🧠 Custom **system prompt** defining the PyroNet identity
- 🎭 Optimized for **conversational tasks** (chat, reasoning, coding help)
- 🔗 Fully compatible with the Hugging Face `transformers` library
- 📦 Distributed with a custom `chat_template.jinja`

---
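Since the model ships a custom `chat_template.jinja`, conversations should be rendered through that template rather than by manual string concatenation. The snippet below sketches how such a Jinja chat template works; the template string here is a simplified stand-in, not the actual PyroNet template (which is distributed with the model files):

```python
from jinja2 import Template

# Simplified stand-in for chat_template.jinja (assumption: the real
# template's structure is not shown in this card)
template = Template(
    "{% for m in messages %}<|{{ m.role }}|>{{ m.content }}<|end|>{% endfor %}"
    "{% if add_generation_prompt %}<|assistant|>{% endif %}"
)

rendered = template.render(
    messages=[{"role": "user", "content": "Hello, PyroNet!"}],
    add_generation_prompt=True,
)
print(rendered)  # → <|user|>Hello, PyroNet!<|end|><|assistant|>
```

With the real model, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` performs this rendering using the bundled template.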
## 🚀 Usage

### Install requirements
```bash
pip install transformers accelerate
```

### Quick inference

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "Kenan023214/PyroNet"

# Load the tokenizer and model; device_map="auto" places layers across
# available GPUs/CPU via accelerate
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "Hello, PyroNet! Can you introduce yourself?"
result = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.8)

print(result[0]["generated_text"])
```
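Note that `generated_text` in the pipeline output includes the prompt itself. To keep only the model's reply, you can either pass `return_full_text=False` to the pipeline call or slice the prompt off manually. The sketch below uses a mocked output string for illustration:

```python
prompt = "Hello, PyroNet! Can you introduce yourself?"
# Mocked pipeline output for illustration; by default it echoes the prompt
generated_text = prompt + " Sure! I'm PyroNet, a research-driven assistant."

# Keep only the completion after the prompt
completion = generated_text[len(prompt):].lstrip()
print(completion)
```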
---

### 💡 Recommendations

Hardware: best run on a GPU with at least 24 GB VRAM (e.g. RTX 3090, A100).

For inference on smaller GPUs, 8-bit loading can help (requires `bitsandbytes`):

```python
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    load_in_8bit=True,  # requires bitsandbytes
)
```

Adjust `temperature` and `top_p` for creative or deterministic outputs.
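As a rough guide, greedy decoding gives reproducible, deterministic output, while sampling with a higher `temperature` plus nucleus `top_p` gives more varied, creative output. The presets below are illustrative assumptions, not tuned values for PyroNet:

```python
# Illustrative generation presets (assumed values, not tuned for PyroNet)
deterministic = {"do_sample": False, "max_new_tokens": 200}
creative = {
    "do_sample": True,
    "temperature": 0.9,
    "top_p": 0.95,
    "max_new_tokens": 200,
}

# Usage with the pipeline from the Quick inference section:
# result = pipe(prompt, **creative)
print(deterministic, creative)
```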
---

### ⚡ System Prompt (Excerpt)

PyroNet is designed as a witty, research-driven assistant, capable of reasoning, coding, explaining decisions, and keeping conversations engaging.

---
### 📞 Contact

**You can contact us via email**: *engineerglab@gmail.com*

---
### 📜 License & Disclaimer

Based on **gpt-oss-20b**.

For research purposes only. Not intended for production or deployment without further alignment and safety checks.

Responsibility for usage lies with the end-user.

---

🔥 PyroNet — Where logic meets creativity.