PyroNet-v2

PyroNet-v2 is a fine-tuned conversational AI model based on Qwen2.5-3B-Instruct.
It is the successor to PyroNet-v1.5, which was built on top of phi-2.

Created by IceL1ghtning (Artyom, Ukraine).


πŸ”§ Model Details

  • Base model: Qwen2.5-3B-Instruct
  • Parameters: ~3B
  • Previous version: PyroNet-v1.5 (phi-2)
  • Input format: ChatML (<|im_start|>role ... <|im_end|>)
  • Multilingual support: English, Russian, Ukrainian, and more
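
The ChatML layout above wraps each conversation turn in role markers. As a minimal illustrative sketch of that layout (in practice `tokenizer.apply_chat_template` renders this for you, and the model's actual template may differ in details):

```python
# Hypothetical helper illustrating the ChatML turn layout; the real
# template applied by tokenizer.apply_chat_template may differ slightly.
def to_chatml(messages):
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Open an assistant turn so the model continues from here
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```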

πŸš€ Quick Start

Installation

pip install torch transformers accelerate

Usage Example

from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Kenan023214/PyroNet-v2"

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # use the checkpoint's native precision (BF16)
    device_map="auto"
)

# Example conversation
messages = [
    {"role": "user", "content": "Hi! Can you solve the equation x^2 - 5x + 6 = 0?"}
]

# Apply chat template
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

# Generate output
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True
)

# Decode only the newly generated tokens (the output also contains the prompt)
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
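
The `temperature=0.7` argument above scales the logits before sampling: values below 1.0 concentrate probability on the highest-scoring tokens, values above 1.0 flatten the distribution. A minimal sketch of the math (not Transformers' actual sampling code, which also supports top-k/top-p filtering):

```python
import math

def apply_temperature(logits, temperature):
    # Divide logits by the temperature, then softmax.
    # Lower temperature -> sharper distribution around the top token.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = apply_temperature(logits, 0.7)   # more greedy
flat = apply_temperature(logits, 1.5)    # more random
# The top token gets more probability mass at lower temperature:
print(sharp[0], flat[0])
```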

πŸ“‚ Version History

  • PyroNet-v1.5 β€” based on Microsoft phi-2

  • PyroNet-v2 β€” upgraded to Qwen2.5-3B-Instruct with improved accuracy and longer context handling

⚠️ License & Limitations

This model is provided as-is. It must not be used for:

  • harmful or malicious activities
  • generating unsafe or illegal content

