---
license: mit
language:
- en
- ru
- uk

---
# 🌟 PyroNet-v1: The First in a Series

### Model Description
**PyroNet-v1** is a specialized AI assistant designed for precise, professional, and pragmatic communication. It's the progenitor model in the PyroNet series, built on the compact and efficient [Qwen2.5](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) architecture.

Made by **IceL1ghtning**

Its persona is that of a serious, reliable mentor who excels at delivering accurate, fact-based information across scientific and technical domains.

---

### 🚀 Quick Start: How to Use the Model

To unlock the full potential of **PyroNet-v1** and activate its persona, you **must** use the provided `chat_template`. This template automatically adds the system prompt to your queries, allowing the model to work as intended straight out of the box.

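The template's effect can be illustrated with a plain-Python sketch. Note that the system prompt text and the `build_prompt` helper below are hypothetical stand-ins for illustration only; the actual persona prompt lives in the repository's `chat_template.jinja`:

```python
# Hypothetical sketch of what the chat template does: wrap each message in
# ChatML-style role markers (the format Qwen2.5 uses) and prepend a system
# prompt, so plain user queries reach the model already "in character".
# The prompt text here is an illustrative stand-in, not the real one.
SYSTEM_PROMPT = "You are PyroNet-v1, a precise and reliable mentor."

def build_prompt(messages, add_generation_prompt=True):
    # Inject the persona's system prompt if the caller didn't supply one
    if not any(m["role"] == "system" for m in messages):
        messages = [{"role": "system", "content": SYSTEM_PROMPT}] + messages
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")  # cue the model to answer
    return "".join(parts)

prompt = build_prompt([{"role": "user", "content": "Explain what gravity is."}])
print(prompt)
```

This is exactly the work that `tokenizer.apply_chat_template` performs for you in the steps below, which is why you never need to paste the system prompt manually.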
1.  **Install the Libraries**: Make sure you have `transformers`, `torch`, and `accelerate` installed.

    ```bash
    pip install transformers torch accelerate
    ```

2.  **Code Example**: Use this code to start a conversation with the model; `model_id` already points to this repository.

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer
    import torch

    model_id = "Kenan023214/PyroNet-v1"

    # Load the tokenizer and model.
    # The tokenizer will automatically find and load chat_template.jinja from your repo.
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",
        torch_dtype="auto"
    )

    # Create the conversation messages
    messages = [
        {"role": "user", "content": "Explain what gravity is."}
    ]

    # Apply the chat template to activate the PyroNet-v1 persona
    inputs = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
    inputs = inputs.to(model.device)

    # Generate the response
    outputs = model.generate(
        inputs,
        max_new_tokens=256,
        pad_token_id=tokenizer.eos_token_id
    )

    # Decode only the newly generated tokens, dropping the prompt and special tokens
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```

---

### โš™๏ธ Model Details and License

* **Base Model**: [Qwen2.5](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)
* **Architecture**: Decoder-only Transformer (Qwen2.5)
* **Languages**: Multilingual (English, Russian, Ukrainian)
* **License**: The [Qwen2.5](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) license applies to this model.

We are always open to improvements and welcome your feedback!