---
datasets:
  - yelp_review_full
language:
  - en
metrics:
  - accuracy
  - code_eval
library_name: adapter-transformers
---

# Model Card for SentimentTensor

This model card provides details about SentimentTensor, an LSTM-based sentiment analysis model developed by Saish Shinde.

## Model Details

### Model Description

The SentimentTensor model is a deep learning model based on LSTM architecture, developed by Saish Shinde, for sentiment analysis tasks. It achieves an accuracy of 81% on standard evaluation datasets. The model is designed to classify text data into three categories: negative, neutral, and positive sentiments.

- **Developed by:** Saish Shinde
- **Model type:** LSTM-based sequence classification
- **Language(s) (NLP):** English
- **License:** No specific license

### Dataset Used

Yelp dataset (4.04 GB compressed, 8.65 GB uncompressed).

## Uses

### Direct Use

The SentimentTensor model can be directly used for sentiment analysis tasks without fine-tuning.

### Downstream Use

This model can be fine-tuned for specific domains or integrated into larger NLP applications.

### Out-of-Scope Use

The model may not perform well on highly specialized or domain-specific text data.

## Bias, Risks, and Limitations

Like any LSTM-based model, SentimentTensor may carry biases and limitations inherent in its training data and architecture. In particular, it can struggle to capture long-range dependencies or to resolve context in complex sentences.

### Recommendations

Users should be aware of potential biases and limitations and evaluate results accordingly.

## How to Get Started with the Model

### Loading the Model

You can load the SentimentTensor model with the Hugging Face Transformers library:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("your-model-name")
tokenizer = AutoTokenizer.from_pretrained("your-tokenizer-name")
```

### Tokenization

```python
text = "Your text data here"
tokenized_input = tokenizer(text, return_tensors="pt")
```

### Sentiment Analysis

```python
# Forward pass through the model
outputs = model(**tokenized_input)

# Get the predicted sentiment label
predicted_label = outputs.logits.argmax().item()
```

### Example Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("your-model-name")
tokenizer = AutoTokenizer.from_pretrained("your-tokenizer-name")

# Tokenize text data
text = "This is a great movie!"
tokenized_input = tokenizer(text, return_tensors="pt")

# Perform sentiment analysis
outputs = model(**tokenized_input)
predicted_label = outputs.logits.argmax().item()

# Print the predicted sentiment
sentiment_labels = ["negative", "neutral", "positive"]
print(f"Predicted Sentiment: {sentiment_labels[predicted_label]}")
```
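If class probabilities are needed rather than a single label, the logits can be passed through a softmax. This is a small illustrative sketch; the logit values below are made up for demonstration, not real model output:

```python
import torch

# Hypothetical logits, standing in for outputs.logits from the model above
logits = torch.tensor([[-1.2, 0.3, 2.1]])

# Softmax turns logits into probabilities that sum to 1
probs = torch.softmax(logits, dim=-1)

sentiment_labels = ["negative", "neutral", "positive"]
predicted = sentiment_labels[probs.argmax(dim=-1).item()]
```

This is useful when a confidence threshold is applied before trusting a prediction.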

## Model Architecture and Objective

The SentimentTensor model is based on LSTM architecture, which is well-suited for sequence classification tasks like sentiment analysis. It uses long short-term memory cells to capture dependencies in sequential data.
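The model card does not publish the exact layer configuration, but the general pattern it describes (embedding layer, LSTM, and a linear head over three sentiment classes) can be sketched in PyTorch as follows. All hyperparameters here (vocabulary size, embedding and hidden dimensions) are illustrative assumptions, not the model's actual values:

```python
import torch
import torch.nn as nn

class LSTMSentimentClassifier(nn.Module):
    """Illustrative embedding -> LSTM -> linear classifier over 3 sentiment classes."""

    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_classes=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        _, (hidden, _) = self.lstm(embedded)   # hidden: (num_layers, batch, hidden_dim)
        return self.fc(hidden[-1])             # logits: (batch, num_classes)

# Forward a dummy batch of 2 sequences, 16 tokens each
model = LSTMSentimentClassifier()
logits = model(torch.randint(0, 10000, (2, 16)))
```

Using the final hidden state as the sentence representation is the simplest choice; pooling over all timesteps is a common alternative.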

## Model Card Authors

Saish Shinde