---
license: mit
language:
- en
- ru
- uk
---
# 🌟 PyroNet-v1: The First in a Series
### Model Description
**PyroNet-v1** is a specialized AI assistant designed for precise, professional, and pragmatic communication. It's the progenitor model in the PyroNet series, built on the compact and efficient [Qwen2.5](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) architecture.
Made by **IceL1ghtning**
Its persona is that of a serious, reliable mentor that excels at delivering accurate, fact-based information across scientific and technical domains.
---
### 🚀 Quick Start: How to Use the Model
To unlock the full potential of **PyroNet-v1** and activate its persona, you **must** use the provided `chat_template`. This template automatically adds the system prompt to your queries, allowing the model to work as intended straight out of the box.
1. **Install the Libraries**: Make sure you have `transformers`, `torch`, and `accelerate` installed.
```bash
pip install transformers torch accelerate
```
2. **Code Example**: Use the following code to start a conversation with the model; `model_id` already points at this repository.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model_id = "Kenan023214/PyroNet-v1"
# Load the tokenizer and model.
# The tokenizer automatically finds and loads chat_template.jinja from the repository.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto"
)
# Create the conversation messages
messages = [
    {"role": "user", "content": "Explain what gravity is."}
]
# Apply the chat template to activate the PyroNet-v1 persona
inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
inputs = inputs.to(model.device)
# Generate the response
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    pad_token_id=tokenizer.eos_token_id
)
# Decode only the newly generated tokens, skipping the echoed prompt and special tokens
response = outputs[0][inputs.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
```
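For multi-turn conversations, simply append each assistant reply back into `messages` before the next `apply_chat_template` call. A minimal sketch of the bookkeeping (pure Python, independent of the model; the helper name `add_turn` and the sample reply text are illustrative, not part of this repository):

```python
def add_turn(messages, role, content):
    """Append one chat turn to the running conversation history."""
    messages.append({"role": role, "content": content})
    return messages

# Start with the first user message
history = [{"role": "user", "content": "Explain what gravity is."}]

# After generating, store the assistant's reply, then ask a follow-up
add_turn(history, "assistant", "Gravity is the mutual attraction between masses...")
add_turn(history, "user", "How does it relate to spacetime curvature?")

# `history` can now be passed to tokenizer.apply_chat_template(...) again,
# so the model sees the full conversation on every turn.
```

Keeping the entire history in `messages` is what lets the chat template re-insert the system prompt and all prior turns on each generation.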
---
### βš™οΈ Model Details and License
* **Base Model**: [Qwen2.5](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* **Architecture**: Specialized Transformer Model
* **Languages**: Multilingual (English, Russian, Ukrainian)
* **License**: The [Qwen2.5](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) license applies to this model.
We are always open to improvements and welcome your feedback!