How to use Suru/Distillbert-base-uncased-finetuned with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Suru/Distillbert-base-uncased-finetuned")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("Suru/Distillbert-base-uncased-finetuned")
model = AutoModelForSequenceClassification.from_pretrained("Suru/Distillbert-base-uncased-finetuned")
```
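Unless the checkpoint's config defines custom label names, the pipeline reports generic labels such as `"LABEL_0"` and `"LABEL_1"` (an assumption about this checkpoint). A minimal sketch of mapping those strings to the sentiment convention described below (0 = positive, 1 = negative):

```python
# Map the pipeline's generic label strings to human-readable sentiments,
# following this card's convention: label 0 = positive, label 1 = negative.
# "LABEL_0"/"LABEL_1" are the default names Transformers emits when the model
# config has no id2label mapping -- an assumption about this checkpoint.
LABEL_TO_SENTIMENT = {"LABEL_0": "positive", "LABEL_1": "negative"}

def to_sentiment(result: dict) -> str:
    """Convert one pipeline output dict, e.g. {"label": "LABEL_0", "score": 0.98}."""
    return LABEL_TO_SENTIMENT[result["label"]]

# Example with a hard-coded result standing in for pipe("Great day!")[0]:
print(to_sentiment({"label": "LABEL_0", "score": 0.98}))  # positive
```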
Sentiment Analysis Model: Fine-Tuned DistilBERT
Overview
This repository contains a fine-tuned version of the distilbert-base-uncased model, designed for sentiment analysis of tweets. The model is trained to classify the sentiment of a sentence into two categories: positive (label 0) and negative (label 1).
Model Description
The fine-tuned model uses the distilbert-base-uncased architecture and was trained on a dataset of GPT-3.5-generated tweets. It takes a sentence as input and outputs a binary sentiment label: 0 for positive, 1 for negative.
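When loading the model directly rather than through a pipeline, the predicted label is the argmax over the two output logits. A sketch with dummy logits standing in for `model(**tokenizer(text, return_tensors="pt")).logits`; the helper below is illustrative, not part of this repository:

```python
import math

def logits_to_sentiment(logits):
    # Softmax gives an interpretable confidence; argmax alone would suffice
    # for picking the label.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    label_id = max(range(len(probs)), key=probs.__getitem__)
    # Card convention: index 0 = positive, index 1 = negative.
    sentiment = "positive" if label_id == 0 else "negative"
    return sentiment, probs[label_id]

# Dummy logits favoring index 1, i.e. a negative prediction:
print(logits_to_sentiment([-0.4, 1.9]))
```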
Training Data
The model was trained on a dataset consisting of tweets generated and labeled with sentiments by GPT-3.5. Each tweet in the training set was labeled as either positive or negative to provide ground truth for training.
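The card does not publish the dataset itself, but a text-classification fine-tune like this typically consumes (text, label) pairs. The records below are hypothetical examples of that shape, with labels following the card's 0 = positive / 1 = negative convention:

```python
# Hypothetical examples of the (text, label) shape such a dataset would take;
# these tweets are illustrative, not drawn from the actual training set.
examples = [
    {"text": "Loving the sunshine today!", "label": 0},  # positive
    {"text": "Worst commute of the year.", "label": 1},  # negative
]

# Sanity check: every label matches the card's binary convention.
assert all(ex["label"] in (0, 1) for ex in examples)
print(len(examples), "examples validated")
```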