---
base_model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# NightPrompt AI V1

Welcome 😊! This is the official Hugging Face repository for the first version of NightPrompt AI. The model is fine-tuned from Llama 3.2 with Unsloth, using the `mlabonne/FineTome-100k` dataset.

# Message From The Developer

I (Afif Ali Saadman) am the main mastermind behind this AI, and this is my first-ever AI model released on Hugging Face. I want to mention that I have made many models before: some were based on books, some on gibberish text from the internet. This is my first time building and fine-tuning a full instruct-based model, so some errors are unavoidable; they will be fixed in the Alpha/Beta release. My goal has been to make AI free and open source for everyone.

# Plots for the nerds who want to fork

Run the snippet below to reproduce the training-loss plot yourself.

```python
import matplotlib.pyplot as plt


losses = [
    0.7747, 0.8391, 1.0757, 0.8919, 0.7575, 0.9373, 0.6192, 0.9985,
    0.8596, 0.7613, 0.8842, 1.0942, 0.9541, 0.6415, 0.8773, 0.6391,
    1.0032, 0.8272, 0.7694, 0.9345, 0.9027, 0.8570, 1.0363, 0.8847,
    0.6418, 0.8272, 0.8291, 0.7877, 1.0866, 1.0360, 0.7080, 0.5418,
    0.6553, 0.5803, 0.7622, 1.0036, 0.9007, 0.7172, 0.7793, 1.0002,
    0.7459, 1.0080, 0.7710, 0.8154, 0.7628, 0.8637, 0.7874, 0.6526,
    1.0168, 1.0324, 0.4573, 0.9079, 1.3173, 0.7082, 1.0615, 1.1254,
    0.7253, 0.8366, 0.7568, 0.9245
]

steps = list(range(1, len(losses) + 1))

plt.figure(figsize=(10, 5))
plt.plot(steps, losses, marker='o', linestyle='-', color='blue')
plt.title('Training Loss over Steps')
plt.xlabel('Training Step')
plt.ylabel('Loss')
plt.grid(True)
plt.tight_layout()
plt.show()

```
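Beyond eyeballing the plot, the raw losses can be summarized numerically. The helper below is a small sketch (the `moving_average` name and the window size are my own choices, not anything used in training) that smooths a noisy loss curve with a trailing moving average:

```python
def moving_average(values, window=5):
    """Smooth a list of numbers with a simple trailing moving average."""
    if window < 1 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [
        sum(values[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(values))
    ]

# Example with toy data (not the real training losses):
print(moving_average([1.0, 2.0, 3.0, 4.0], window=2))  # [1.5, 2.5, 3.5]
```

Passing the `losses` list from the plotting snippet through this function before `plt.plot` would give a smoother curve to read the trend from.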

# Using this

Run the following code to use the model.

```python
# Installation (run these once in a notebook cell before the code below):
# %%capture
# import os
# if "COLAB_" not in "".join(os.environ.keys()):
#     !pip install unsloth
# else:
#     # Do this only in Colab notebooks! Otherwise use pip install unsloth
#     !pip install --no-deps bitsandbytes accelerate xformers==0.0.29.post3 peft trl triton cut_cross_entropy unsloth_zoo
#     !pip install sentencepiece protobuf "datasets>=3.4.1,<4.0.0" "huggingface_hub>=0.34.0" hf_transfer
#     !pip install --no-deps unsloth
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Afifsudoers/NightPrompt_V1",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)

FastLanguageModel.for_inference(model)

DEFAULT_SYSTEM_PROMPT = "You are NightPrompt, a helpful and clever assistant created by Afif Ali Saadman. You were trained using Llama 3.2 3B as the base model with Unsloth. Your version revision is V1. You must be polite to the user. Politely refuse illegal requests from the user."


def ask_nightprompt(user_input, temperature=0.7, max_new_tokens=258):
    messages = [
        {"role": "system", "content": DEFAULT_SYSTEM_PROMPT},
        {"role": "user", "content": user_input},
    ]

    inputs = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors="pt"
    ).to("cuda")

    outputs = model.generate(
        input_ids=inputs,
        max_new_tokens=max_new_tokens,
        temperature=temperature,
        do_sample=True,
    )

    decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=False)
    parts = decoded_output.split("<|start_header_id|>")
    assistant_content = ""
    for part in parts:
        if part.startswith("assistant<|end_header_id|>"):
            content = part[len("assistant<|end_header_id|>"):]
            content = content.split("<|eot_id|>")[0].strip()
            assistant_content = content
            break

    return assistant_content


if __name__ == "__main__":
    print("Ask NightPrompt anything (type 'exit' to quit):\n")
    while True:
        prompt = input("You: ")
        if prompt.strip().lower() == "exit":
            break
        response = ask_nightprompt(prompt)
        print(response, "\n")
```
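The output-parsing step at the end of `ask_nightprompt` can be checked on its own, without a GPU or the model weights. The sketch below (a hypothetical helper of my own, reusing the same Llama 3 special tokens as the script above) isolates that logic so it can be tested against a toy transcript:

```python
def extract_assistant_reply(decoded_output: str) -> str:
    """Pull the assistant's content out of a decoded Llama 3 chat transcript."""
    for part in decoded_output.split("<|start_header_id|>"):
        if part.startswith("assistant<|end_header_id|>"):
            content = part[len("assistant<|end_header_id|>"):]
            return content.split("<|eot_id|>")[0].strip()
    return ""

# Toy transcript mimicking the Llama 3 chat format:
sample = (
    "<|start_header_id|>user<|end_header_id|>Hi<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>Hello there!<|eot_id|>"
)
print(extract_assistant_reply(sample))  # Hello there!
```

If the model produces no assistant turn, the helper returns an empty string, matching the fallback behavior of the loop in the script.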

# Uploaded model

- **Developed by:** Afifsudoers (Afif Ali Saadman)
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)