---
license: apache-2.0
datasets:
  - mteb/tweet_sentiment_extraction
base_model:
  - openai-community/gpt2
---

## Model Details

### Model Description

This is a fine-tuned version of the GPT-2 model for sentiment analysis on tweets. The model has been trained on the mteb/tweet_sentiment_extraction dataset to classify tweets into three sentiment categories: Positive, Neutral, and Negative. It uses the Hugging Face Transformers library and achieves an evaluation accuracy of 76%.

- **Developed by:** Pradeep Vepaada
- **Contact:** pradeep.vepada24@gmail.com
- **Model type:** GPT-2
- **Language(s):** English
- **License:** Apache-2.0
- **Finetuned from model:** OpenAI GPT-2

### Model Sources

- **Repository:** charlie1898/gpt2_finetuned_twitter_sentiment_analysis

## Uses

### Direct Use

This model is designed for sentiment analysis of tweets or other short social media text. Given an input text, it predicts the sentiment as Positive, Neutral, or Negative.
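As a sketch of the prediction step: the classification head emits one logit per class, and the predicted sentiment is the highest-probability class after a softmax. Note the `id2label` order below (0 → Negative, 1 → Neutral, 2 → Positive) is an assumption for illustration; the card does not state the actual index-to-label mapping.

```python
import math

# Assumed id-to-label mapping -- the card does not specify the actual order.
ID2LABEL = {0: "Negative", 1: "Neutral", 2: "Positive"}

def softmax(logits):
    """Convert raw classifier logits into probabilities (numerically stable)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_sentiment(logits):
    """Map a 3-way logit vector to a sentiment label and its probability."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[idx], probs[idx]

# Example: a logit vector such as the model head might emit for a positive tweet.
label, prob = predict_sentiment([-1.2, 0.3, 2.1])
```

In practice you would obtain the logits (or the label directly) via `pipeline("text-classification", model="charlie1898/gpt2_finetuned_twitter_sentiment_analysis")` from the Transformers library; the pure-Python version above only illustrates the logit-to-label step.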

### Out-of-Scope Use

- Not suitable for long-form text or non-English language analysis.
- Avoid deploying in sensitive or high-stakes applications without further validation.

## Bias, Risks, and Limitations

- **Bias:** The dataset may contain biased or harmful text, potentially influencing predictions.
- **Domain limitations:** The model is optimized for English tweets and may not perform well on other text types or languages.

### Recommendations

Users should validate outputs and account for biases in the training data before using the model in critical applications.

## Dataset

- **Name:** mteb/tweet_sentiment_extraction
- **Description:** A dataset for extracting and classifying sentiment in tweets.
- **Language:** English
- **Size:** 1,000 samples used for training and 1,000 for evaluation

## Training Configuration

- **Tokenizer:** GPT-2 tokenizer (EOS token used as pad token)
- **Optimizer:** AdamW
- **Learning rate:** 1e-5
- **Epochs:** 3
- **Batch size:** 1
- **Hardware:** NVIDIA A100
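The card names AdamW at learning rate 1e-5. As a minimal sketch of what one AdamW step does (decoupled weight decay applied directly to the parameter, which is what distinguishes it from Adam) for a single scalar parameter; this is an illustration, not the card's actual training script:

```python
import math

def adamw_step(theta, grad, state, lr=1e-5, betas=(0.9, 0.999),
               eps=1e-8, weight_decay=0.01):
    """One AdamW update for a single scalar parameter.

    `state` holds the running first/second moments (m, v) and step count t.
    Weight decay is applied directly to the parameter (decoupled from the
    gradient-based moment estimates).
    """
    b1, b2 = betas
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad * grad
    m_hat = state["m"] / (1 - b1 ** state["t"])   # bias-corrected 1st moment
    v_hat = state["v"] / (1 - b2 ** state["t"])   # bias-corrected 2nd moment
    return theta - lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * theta)

# Toy check: minimise f(x) = x^2 (gradient 2x) from x = 1.0.
# A larger lr than the card's 1e-5 is used so convergence is visible quickly.
x, st = 1.0, {"m": 0.0, "v": 0.0, "t": 0}
for _ in range(100):
    x = adamw_step(x, 2 * x, st, lr=0.05)
```

In the actual fine-tuning run this update is applied by `torch.optim.AdamW` across every model parameter, once per batch (here, batch size 1) for 3 epochs.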

## Performance

- **Accuracy:** 76%
- **Evaluation metric:** Accuracy
- **Validation split:** 10% of the dataset
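The reported metric is plain accuracy: correct predictions over total predictions. A minimal sketch, using made-up predictions (not the card's actual evaluation data) that happen to reproduce the 76% figure:

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the gold labels."""
    assert len(predictions) == len(labels) and labels
    correct = sum(p == g for p, g in zip(predictions, labels))
    return correct / len(labels)

# Toy example: 19 of 25 correct -> 0.76, matching the reported 76%.
preds = ["Positive"] * 19 + ["Neutral"] * 6
golds = ["Positive"] * 19 + ["Negative"] * 6
acc = accuracy(preds, golds)
```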