Use from the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="valpy/prompt-classification")
```

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("valpy/prompt-classification")
model = AutoModelForSequenceClassification.from_pretrained("valpy/prompt-classification")
```
The label taxonomy:

```python
categories = [
    'linguistics',
    'factual information (general or professional), history or common practices',
    'recommendation',
    'tips, opinions or advice',
    'analysis or decision explanation',
    'mathematical reasoning or calculation',
    'logical reasoning',
    'coding',
    'assisting or creative writing',
    'roleplay',
    'editing or rewriting',
    'information extraction or summarization',
    'classification',
    'multilinguality or translation',
    'awareness of ethics and other risks',
    'other - please continue to list them as (other: [category name])',
]
```
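To illustrate how a classifier's raw scores map onto this taxonomy, the sketch below applies a softmax and an argmax to a hypothetical logits vector. The logit values are invented for demonstration, and only the first four taxonomy entries are used; the real model emits one logit per category:

```python
import math

# First few entries of the taxonomy above, for illustration only
categories = [
    'linguistics',
    'factual information (general or professional), history or common practices',
    'recommendation',
    'tips, opinions or advice',
]

def softmax(logits):
    """Numerically stable softmax over a plain list of floats."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits (made up -- not actual model output)
logits = [0.3, 2.1, -0.5, 1.0]
probs = softmax(logits)
predicted = categories[probs.index(max(probs))]
print(predicted)  # -> 'factual information (general or professional), history or common practices'
```

The same post-processing applies to the logits returned by the directly loaded model above, so long as the order of the label taxonomy matches the model's output dimension.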

Model size: 0.4B parameters (F32, safetensors)