```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("SajilAwale/FunnyModel")
model = AutoModelForSequenceClassification.from_pretrained("SajilAwale/FunnyModel")
```
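Since this is a multi-label model, each logit should be passed through a sigmoid independently and compared against a threshold, rather than taking a softmax over the classes. A minimal sketch of that post-processing step, using made-up logits and illustrative label names (the actual label names live in the model's `config.id2label` and may differ):

```python
import math

# Hypothetical label names for illustration only -- read the real ones
# from model.config.id2label after loading the model.
LABELS = ["humorous", "offensive", "positive_sentiment"]

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits, threshold: float = 0.5):
    """Return (label, probability) pairs whose sigmoid score clears the threshold."""
    probs = [sigmoid(z) for z in logits]
    return [(label, p) for label, p in zip(LABELS, probs) if p >= threshold]

# Made-up logits; in practice these come from
# model(**tokenizer(joke, return_tensors="pt")).logits
print(predict_labels([2.0, -1.5, 0.3]))
```

With real inputs, the logits tensor from the model replaces the hard-coded list above.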
# Model Card for Funny Model (fun-model-v0.1)
This model was fine-tuned for multi-label classification of jokes: whether a joke is humorous, whether it is offensive, and what sentiment it carries.
## Model Details
- Base Model: FacebookAI/roberta-base
- Tokenizer: FacebookAI/roberta-base
- Parameters: 125M
## Training Data
- 10% sample of the r/Jokes dataset (roughly 500k jokes in total) from https://github.com/orionw/rJokesData
## Dataset
- Can be found at https://huggingface.co/datasets/SajilAwale/FunnyData/
- Total Data Size: 573,410
- Train Data Size: 90% of 10% of total size
- Validation Data Size: 10% of 10% of total size
- Test Data Size: 90% of total size
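The split percentages above work out to the following approximate counts (a sketch of the arithmetic; exact counts may differ slightly depending on how the split rounds):

```python
# Split sizes implied by the percentages in the model card.
TOTAL = 573_410

sample = TOTAL // 10      # the 10% sample used for training/validation
train = sample * 9 // 10  # 90% of the sample
val = sample - train      # remaining 10% of the sample
test = TOTAL * 9 // 10    # 90% of the full dataset

print(train, val, test)
```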
## Evaluation

Alternatively, load the model through a pipeline:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="SajilAwale/FunnyModel")
```