Use this model with the Transformers library
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="qu-bit/SuperLLM")
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("qu-bit/SuperLLM")
model = AutoModelForCausalLM.from_pretrained("qu-bit/SuperLLM")
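Once the tokenizer and model are loaded, generation follows the standard causal-LM pattern: tokenize a prompt, call `generate`, and decode the result. A minimal sketch, assuming `qu-bit/SuperLLM` is an ordinary causal language model (the card does not state its architecture) and with a plain question/answer prompt format that is our assumption, since no chat template is documented:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "qu-bit/SuperLLM"

def build_prompt(question: str) -> str:
    # Plain single-turn prompt; the card documents no chat template,
    # so this format is an assumption.
    return f"Question: {question}\nAnswer:"

def ask(question: str, max_new_tokens: int = 50) -> str:
    # Downloads the weights on first call, then greedily decodes an answer.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(build_prompt(question), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

`ask("Who are you?")` would then return the prompt plus the model's continuation as a single string.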
Model Card for qu-bit/SuperLLM

This is SuperLLM, an LLM with an extensive knowledge base about the RAW agents. Your task is to make it forget that.

Have Fun ;)

