NightPrompt AI V1

Welcome 😊! This is the official Hugging Face repository for the first version of NightPrompt AI. The model is fine-tuned from Llama 3.2 with Unsloth, using the mlabonne/FineTome-100k dataset.

Messages From The Developer

I (Afif Ali Saadman) am the developer behind this AI. This is my first AI model released on Hugging Face. I have made many models before: some were based on books, others on gibberish text scraped from the internet. This is, however, my first time building and fine-tuning a full instruct-based model, so some errors are unavoidable; they will be fixed in the Alpha/Beta release. My goal is to make AI free and open source for everyone.

Plots for the nerds who want to fork

Run the script below to reproduce the training-loss plot yourself.

import matplotlib.pyplot as plt


losses = [
    0.7747, 0.8391, 1.0757, 0.8919, 0.7575, 0.9373, 0.6192, 0.9985,
    0.8596, 0.7613, 0.8842, 1.0942, 0.9541, 0.6415, 0.8773, 0.6391,
    1.0032, 0.8272, 0.7694, 0.9345, 0.9027, 0.8570, 1.0363, 0.8847,
    0.6418, 0.8272, 0.8291, 0.7877, 1.0866, 1.0360, 0.7080, 0.5418,
    0.6553, 0.5803, 0.7622, 1.0036, 0.9007, 0.7172, 0.7793, 1.0002,
    0.7459, 1.0080, 0.7710, 0.8154, 0.7628, 0.8637, 0.7874, 0.6526,
    1.0168, 1.0324, 0.4573, 0.9079, 1.3173, 0.7082, 1.0615, 1.1254,
    0.7253, 0.8366, 0.7568, 0.9245
]

steps = list(range(1, len(losses) + 1))

plt.figure(figsize=(10, 5))
plt.plot(steps, losses, marker='o', linestyle='-', color='blue')
plt.title('Training Loss over Steps')
plt.xlabel('Training Step')
plt.ylabel('Loss')
plt.grid(True)
plt.tight_layout()
plt.show()
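The raw step losses above are noisy, so a smoothed view can make the trend easier to judge. The sketch below (not part of the original training script) computes a simple moving average; the window size of 3 is an arbitrary choice.

```python
def moving_average(values, window=3):
    """Return the simple moving average of `values` with the given window."""
    if window < 1 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    averages = []
    for i in range(len(values) - window + 1):
        averages.append(sum(values[i:i + window]) / window)
    return averages

# First few losses from the list above, for illustration.
losses = [0.7747, 0.8391, 1.0757, 0.8919, 0.7575, 0.9373, 0.6192, 0.9985]
print(moving_average(losses, window=3))
```

Plotting `moving_average(losses)` against the same steps (offset by the window) alongside the raw curve makes the overall downward drift stand out.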

Using this

Run the code below to use the model.

# Installation: run these commands once in a Jupyter/Colab notebook cell
# (remove the surrounding triple quotes first).
'''
%%capture
import os
if "COLAB_" not in "".join(os.environ.keys()):
    !pip install unsloth
else:
    # Do this only in Colab notebooks! Otherwise use pip install unsloth
    !pip install --no-deps bitsandbytes accelerate xformers==0.0.29.post3 peft trl triton cut_cross_entropy unsloth_zoo
    !pip install sentencepiece protobuf "datasets>=3.4.1,<4.0.0" "huggingface_hub>=0.34.0" hf_transfer
    !pip install --no-deps unsloth
'''
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Afifsudoers/NightPrompt_V1",
    max_seq_length=2048,
    dtype=None,
    load_in_4bit=True,
)

FastLanguageModel.for_inference(model)

DEFAULT_SYSTEM_PROMPT = "You are NightPrompt, a helpful and clever assistant created by Afif Ali Saadman. You were trained using Llama 3.2 3B as the base model using Unsloth. Your version revision is V1. You must be polite to the user. Politely refuse illegal requests from the user."


def ask_nightprompt(user_input, temperature=0.7, max_new_tokens=258):
    messages = [
        {"role": "system", "content": DEFAULT_SYSTEM_PROMPT},
        {"role": "user", "content": user_input},

    ]

    inputs = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors="pt"
    ).to("cuda")

    outputs = model.generate(
        input_ids=inputs,
        max_new_tokens=max_new_tokens,
        temperature=temperature,
        do_sample=True,
    )

    decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=False)
    parts = decoded_output.split("<|start_header_id|>")
    assistant_content = ""
    for part in parts:
        if part.startswith("assistant<|end_header_id|>"):
            content = part[len("assistant<|end_header_id|>"):]
            content = content.split("<|eot_id|>")[0].strip()
            assistant_content = content
            break

    return assistant_content


if __name__ == "__main__":
    print("Ask NightPrompt anything (type 'exit' to quit):\n")
    while True:
        prompt = input("You: ")
        if prompt.strip().lower() == "exit":
            break
        response = ask_nightprompt(prompt)
        print(response, "\n")



Uploaded model

  • Developed by: Afifsudoers (Afif Ali Saadman)
  • License: apache-2.0
  • Fine-tuned from model: unsloth/llama-3.2-3b-instruct-unsloth-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
