---
license: mit
language:
- ru
- en
- uk
- zh
---
# PyroNet-v2
PyroNet-v2 is a fine-tuned conversational AI model based on Qwen2.5-3B-Instruct.
It is the successor to PyroNet-v1.5, which was built on top of phi-2.
Created by IceL1ghtning (Artyom, Ukraine).
## Model Details
- Base model: Qwen2.5-3B-Instruct
- Parameters: ~3B
- Previous version: PyroNet-v1.5 (phi-2)
- Input format: ChatML (`<|im_start|>role ... <|im_end|>`)
- Multilingual support: English, Russian, Ukrainian, and more
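The ChatML layout can be sketched by hand as below. This is purely illustrative: in practice `tokenizer.apply_chat_template` (shown in the usage example) produces the prompt for you, and the exact special tokens come from the base model's tokenizer.

```python
# Illustrative ChatML formatting; the tokenizer's chat template
# normally handles this. Token names follow the ChatML convention.
def to_chatml(messages):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
        for m in messages
    ]
    parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```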
## Quick Start

### Installation

```bash
pip install transformers accelerate
```

### Usage Example
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Kenan023214/PyroNet-v2"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto"
)

# Example conversation
messages = [
    {"role": "user", "content": "Hi! Can you solve the equation x^2 - 5x + 6 = 0?"}
]

# Apply chat template
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Generate output
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True
)

# Decode and print
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
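Note that `generate` returns the prompt tokens followed by the newly generated tokens, so the decoded `response` includes the prompt. To print only the model's reply, slice off the prompt length, e.g. `tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)`. A minimal sketch of that slicing, with plain lists standing in for token tensors (the IDs are made up for illustration):

```python
# generate() echoes the prompt tokens before the new ones;
# slicing off the prompt length isolates the reply.
prompt_ids = [101, 2023, 2003]          # hypothetical prompt token IDs
output_ids = prompt_ids + [42, 43, 44]  # hypothetical generate() output
reply_ids = output_ids[len(prompt_ids):]
print(reply_ids)  # only the newly generated IDs
```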
## Version History

- **PyroNet-v1.5**: based on Microsoft phi-2
- **PyroNet-v2**: upgraded to Qwen2.5-3B-Instruct with improved accuracy and longer context handling
## License & Limitations

This model is provided as is. It must not be used for:

- harmful or malicious activities
- generating unsafe or illegal content