---
license: mit
language:
- ru
- en
- uk
- zh
---
# PyroNet-v2
**PyroNet-v2** is a fine-tuned conversational AI model based on [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct).
It is the successor to **PyroNet-v1.5**, which was built on top of [phi-2](https://huggingface.co/microsoft/phi-2).
Created by **IceL1ghtning (Artyom, Ukraine)**.
---
## πŸ”§ Model Details
- **Base model:** Qwen2.5-3B-Instruct
- **Parameters:** ~3B
- **Previous version:** PyroNet-v1.5 (phi-2)
- **Input format:** ChatML (`<|im_start|>role ... <|im_end|>`)
- **Multilingual support:** English, Russian, Ukrainian, Chinese, and more
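The ChatML layout above can be sketched in plain Python. This is an illustration only; in practice `tokenizer.apply_chat_template` (shown in the Quick Start below) builds this string for you, and the system message here is a hypothetical example:

```python
# A minimal sketch of the ChatML prompt layout PyroNet-v2 expects.
# Normally tokenizer.apply_chat_template constructs this for you.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Each turn is wrapped in <|im_start|>role ... <|im_end|> markers, and the
# prompt ends with an opened assistant turn so the model continues from there.
prompt = "".join(
    f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
)
prompt += "<|im_start|>assistant\n"
print(prompt)
```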
---
## πŸš€ Quick Start
### Installation
```bash
pip install torch transformers accelerate
```
### Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Kenan023214/PyroNet-v2"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

# Example conversation
messages = [
    {"role": "user", "content": "Hi! Can you solve the equation x^2 - 5x + 6 = 0?"}
]

# Apply the ChatML chat template and move the inputs to the model's device
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a response
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True,
)

# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
```
## πŸ“‚ Version History
* **PyroNet-v1.5** β€” based on Microsoft phi-2
* **PyroNet-v2** β€” upgraded to Qwen2.5-3B-Instruct with improved accuracy and longer context handling
## ⚠️ License & Limitations
This model is released under the MIT license and provided as-is, without warranty of any kind.
It must **not** be used for:
- harmful or malicious activities
- generating unsafe or illegal content
✦ Created by IceL1ghtning (Artyom, Ukraine)