
This model is trained to produce coherent text in a natural writing style, avoiding the characteristic ChatGPT tone.

Usage:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaxiiMin/NaturaQwen"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")

def chat(user_input: str,
         max_new_tokens: int = 1280,
         temperature: float = 0.7) -> str:
    prompt = f"User: {user_input}\nAssistant:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,          # enable sampling so temperature takes effect
        temperature=temperature,
    )
    text = tokenizer.decode(out[0], skip_special_tokens=True)
    # The decoded text includes the prompt, so keep only what follows "Assistant:"
    reply = text.split("Assistant:")[-1].strip()
    return reply

if __name__ == "__main__":
    print("Chat with your model! Type 'exit' to quit.")
    while True:
        user = input("You: ")
        if user.lower() in ("exit", "quit"):
            break
        print(f"Assistant: {chat(user)}\n")
```
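The `chat` helper relies on a plain `User:`/`Assistant:` prompt format and recovers the reply by splitting on the last `"Assistant:"` marker. The same convention extends to multi-turn conversations by rebuilding the prompt from the full history. A minimal sketch of that idea (pure Python; the helper names are illustrative, not part of the model's API):

```python
def build_prompt(history, user_input):
    """Build a multi-turn prompt in the same "User:"/"Assistant:" format.

    history is a list of (user_msg, assistant_msg) pairs from earlier turns.
    """
    lines = []
    for user_msg, assistant_msg in history:
        lines.append(f"User: {user_msg}")
        lines.append(f"Assistant: {assistant_msg}")
    lines.append(f"User: {user_input}")
    lines.append("Assistant:")  # generation continues from here
    return "\n".join(lines)

def extract_reply(generated_text):
    """Keep only the text after the last "Assistant:" marker."""
    return generated_text.split("Assistant:")[-1].strip()
```

Feed `build_prompt(history, user)` to the tokenizer in place of the single-turn prompt, then append `(user, extract_reply(text))` to `history` after each turn.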
Model size: 2B parameters · Safetensors · F32
