saishshinde15 committed on
Commit d6f61b6 · verified · 1 Parent(s): eb5132f

Update README.md

Files changed (1)
  1. README.md +3 -2
README.md CHANGED
@@ -46,7 +46,7 @@ The model may not perform well on highly specialized or domain-specific text dat
 
 ## Bias, Risks, and Limitations
 
-The SentimentTensor model, like any LSTM-based model, may have biases and limitations inherent in its training data and architecture. It might sometimes struggle with capturing long-range dependencies or understanding context in complex sentences.
+The SentimentTensor model, like any LSTM-based model, may have biases and limitations inherent in its training data and architecture. It might sometimes struggle with capturing long-range dependencies or understanding context in complex sentences, also it emphasizes less on neutral sentiment
 
 ### Recommendations
 
@@ -77,6 +77,7 @@ outputs = model(**tokenized_input)
 predicted_label = outputs.logits.argmax().item()
 
 # Example Usage
+```python
 
 #Load the model and tokenizer
 
@@ -97,7 +98,7 @@ predicted_label = outputs.logits.argmax().item()
 sentiment_labels = ["negative", "neutral", "positive"]
 print(f"Predicted Sentiment: {sentiment_labels[predicted_label]}")
 
-
+```
 # Model Architecture and Objective
 
 The SentimentTensor model is based on LSTM architecture, which is well-suited for sequence classification tasks like sentiment analysis. It uses long short-term memory cells to capture dependencies in sequential data.
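For context, the README fragments touched by this commit end with a logits-to-label step (`argmax` over three-class logits, then a lookup in `sentiment_labels`). Below is a minimal, self-contained sketch of just that post-processing step; the logits values are made up for illustration, and model/tokenizer loading is omitted because the exact checkpoint path is not shown in this diff.

```python
# Sketch of the README's final step: raw three-class logits -> argmax
# index -> human-readable sentiment label. The logits here are
# illustrative placeholders, not real model output.

sentiment_labels = ["negative", "neutral", "positive"]

def label_from_logits(logits):
    """Return the sentiment label for the highest-scoring class."""
    predicted_label = max(range(len(logits)), key=lambda i: logits[i])
    return sentiment_labels[predicted_label]

print(f"Predicted Sentiment: {label_from_logits([-1.2, 0.3, 2.1])}")
```

In the README's actual example the logits come from `outputs.logits` of the loaded model and the argmax is taken with `outputs.logits.argmax().item()`; the plain-Python version above mirrors the same mapping without requiring the model download.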