---
license: apache-2.0
language:
- en
- ru
- uk
base_model:
- openai/gpt-oss-20b
---
# 🔥 PyroNet

**PyroNet** is an open-source large language model fine-tuned to carry a distinct system identity.  
Built on **[gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)**, it has been **further trained and specialized** to embody the **PyroNet persona**.  
Created and maintained by **IceL1ghtning** from **Ukraine** 🇺🇦.  

---

## ✨ Features
- 🧠 Fine-tuned on custom datasets to define the **PyroNet identity**  
- 🎭 Optimized for **chat, reasoning, coding, and explanation tasks**  
- 🔗 Fully compatible with the Hugging Face `transformers` ecosystem  
- 📦 Includes a custom **chat template** and structured **system prompt**  

---

## 🚀 Usage

### Install requirements
```bash
pip install transformers accelerate bitsandbytes
```

### Quick inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "Kenan023214/PyroNet"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

prompt = "Hello, PyroNet! Can you introduce yourself?"
result = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.8)

print(result[0]["generated_text"])
```
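Since the model card mentions a custom **chat template** and **system prompt**, chat-style input will generally format better than a raw string. A minimal sketch, assuming the tokenizer ships with a chat template (the system-prompt wording below is illustrative, not the model's actual bundled prompt):

```python
# OpenAI-style message dicts, the schema Hugging Face chat templates expect.
# The system-prompt text here is a placeholder assumption.
messages = [
    {"role": "system", "content": "You are PyroNet, a helpful assistant."},
    {"role": "user", "content": "Hello, PyroNet! Can you introduce yourself?"},
]

# With the tokenizer and pipe from the quick-inference example above:
# prompt = tokenizer.apply_chat_template(
#     messages, tokenize=False, add_generation_prompt=True
# )
# result = pipe(prompt, max_new_tokens=200, do_sample=True, temperature=0.8)
```

Using `apply_chat_template` ensures the model's bundled special tokens and role markers are inserted for you instead of hand-written into the prompt.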

---

### 💡 Recommendations

Runs best on a GPU with ≥24 GB of VRAM (e.g. an RTX 3090 or A100).

For smaller GPUs, use:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit quantization via bitsandbytes; the bare load_in_8bit= kwarg on
# from_pretrained is deprecated in recent transformers releases in favor
# of an explicit quantization_config.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
```
(requires `bitsandbytes`).

Adjust `temperature` and `top_p` to trade off creativity against determinism.
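One convenient pattern is to group generation kwargs into named presets and unpack them into the pipeline call. The values below are illustrative assumptions, not defaults tuned for PyroNet:

```python
# Illustrative sampling presets; the exact values are assumptions.
creative = {"do_sample": True, "temperature": 0.9, "top_p": 0.95}
balanced = {"do_sample": True, "temperature": 0.7, "top_p": 0.9}
deterministic = {"do_sample": False}  # greedy decoding, ignores temperature

# Usage with the pipe from the quick-inference example above:
# result = pipe(prompt, max_new_tokens=200, **creative)
```

Higher `temperature` flattens the token distribution (more varied output), while lower `top_p` restricts sampling to the most probable tokens.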



---


[💬 Telegram](https://t.me/LogovoOfEngineer)

📧 Contact: engineerglab@gmail.com


---

### 📜 License & Disclaimer

License: Apache 2.0

Based on gpt-oss-20b

For research purposes only. Not intended for production without further alignment and safety checks.

Responsibility for usage lies with the end-user.



---

🔥 PyroNet — Where logic meets creativity.