---
datasets:
- yelp_review_full
language:
- en
metrics:
- accuracy
- code_eval
pipeline_tag: text-classification
---
# Model Card for SentimentTensor

This model card provides details about the SentimentTensor model, developed by Saish Shinde, for sentiment analysis using an LSTM architecture.

## Model Details

### Model Description

The SentimentTensor model is a deep learning model based on LSTM architecture, developed by Saish Shinde, for sentiment analysis tasks. It achieves an accuracy of 81% on standard evaluation datasets. The model is designed to classify text data into three categories: negative, neutral, and positive sentiments.

- **Developed by:** Saish Shinde
- **Model type:** LSTM-based Sequence Classification
- **Language(s) (NLP):** English
- **License:** No specific license



# Dataset Used

Yelp dataset: 4.04 GB compressed, 8.65 GB uncompressed.

## Uses

### Direct Use

The SentimentTensor model can be directly used for sentiment analysis tasks without fine-tuning.

### Downstream Use

This model can be fine-tuned for specific domains or integrated into larger NLP applications.

### Out-of-Scope Use

The model may not perform well on highly specialized or domain-specific text data.

## Bias, Risks, and Limitations

The SentimentTensor model, like any LSTM-based model, may carry biases and limitations inherent in its training data and architecture. It can struggle to capture long-range dependencies or the context of complex sentences, and it places less emphasis on the neutral sentiment class.

### Recommendations

Users should be aware of potential biases and limitations and evaluate results accordingly.

## How to Get Started with the Model

### Loading the Model

You can load the SentimentTensor model using the Hugging Face library:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("your-model-name")
tokenizer = AutoTokenizer.from_pretrained("your-tokenizer-name")

# Tokenize the input text
text = "Your text data here"
tokenized_input = tokenizer(text, return_tensors="pt")

# Forward pass through the model
outputs = model(**tokenized_input)

# Get the predicted sentiment label
predicted_label = outputs.logits.argmax().item()
```

# Example Usage
```python

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("saishshinde15/SentimentTensor")
tokenizer = AutoTokenizer.from_pretrained("saishshinde15/SentimentTensor")

# Tokenize text data
text = "This is a great movie!"
tokenized_input = tokenizer(text, return_tensors="pt")

# Perform sentiment analysis
outputs = model(**tokenized_input)
predicted_label = outputs.logits.argmax().item()

# Print predicted sentiment
sentiment_labels = ["negative", "neutral", "positive"]
print(f"Predicted Sentiment: {sentiment_labels[predicted_label]}")


```
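The example above reports only the argmax label. If you also want a confidence score per class, you can apply a softmax to the logits. A minimal sketch (the logit values below are hypothetical, for illustration only):

```python
import math

def softmax(logits):
    """Convert raw logits to probabilities that sum to 1."""
    # Subtract the max before exponentiating for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for [negative, neutral, positive]
logits = [0.8, -1.2, 2.1]
probs = softmax(logits)

sentiment_labels = ["negative", "neutral", "positive"]
for label, p in zip(sentiment_labels, probs):
    print(f"{label}: {p:.3f}")
```

With real model outputs, replace the hypothetical list with `outputs.logits.squeeze().tolist()`.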
# Model Architecture and Objective

The SentimentTensor model is based on LSTM architecture, which is well-suited for sequence classification tasks like sentiment analysis. It uses long short-term memory cells to capture dependencies in sequential data.
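The model card does not publish the exact layer configuration, but the core LSTM update such a model relies on can be sketched as a single gated time step. The following is a minimal, illustrative NumPy implementation with randomly initialized toy weights, not the actual SentimentTensor parameters:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step with stacked gate parameters.

    x: input (d,); h_prev, c_prev: previous states (n,);
    W: (4n, d), U: (4n, n), b: (4n,) for the i, f, o, g gates.
    """
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[:n])            # input gate
    f = sigmoid(z[n:2 * n])       # forget gate
    o = sigmoid(z[2 * n:3 * n])   # output gate
    g = np.tanh(z[3 * n:])        # candidate cell state
    c = f * c_prev + i * g        # cell state carries long-range information
    h = o * np.tanh(c)            # hidden state exposed to the next layer
    return h, c

# Toy run: encode a sequence of 5 token embeddings (dim 8) into a
# hidden state (dim 16), then project to 3 sentiment classes.
rng = np.random.default_rng(0)
d, n, num_classes = 8, 16, 3
W = rng.normal(scale=0.1, size=(4 * n, d))
U = rng.normal(scale=0.1, size=(4 * n, n))
b = np.zeros(4 * n)

h = np.zeros(n)
c = np.zeros(n)
for x in rng.normal(size=(5, d)):   # stand-in for token embeddings
    h, c = lstm_step(x, h, c, W, U, b)

W_out = rng.normal(scale=0.1, size=(num_classes, n))
logits = W_out @ h                  # one logit per sentiment class
```

The forget gate `f` decides how much of the previous cell state survives each step, which is what lets the cell state accumulate information across a long review before the final hidden state is classified.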

# Model Card Authors
Saish Shinde