For questions/issues, open an issue in this repository.

**Model Achievements**

- Gated Recurrent Unit (GRU): Achieved an accuracy of 94.8%.
- Long Short-Term Memory (LSTM): Achieved an accuracy of 93.2%.
- BERT Base Model Fine-Tuning: Achieved an accuracy of 96.4% after fine-tuning.
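As an illustration of the recurrent baselines above, here is a minimal GRU classifier sketch in PyTorch. The class name, vocabulary size, embedding width, and hidden size are assumptions for the sketch, not the repository's actual configuration.

```python
import torch
import torch.nn as nn

class GRUSentimentClassifier(nn.Module):
    """Minimal GRU-based sentiment classifier (illustrative sketch)."""

    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)   # (batch, seq_len, embed_dim)
        _, hidden = self.gru(embedded)         # hidden: (1, batch, hidden_dim)
        return self.fc(hidden.squeeze(0))      # (batch, num_classes)

# Forward pass on a dummy batch of 4 reviews, each 32 token ids long
model = GRUSentimentClassifier()
logits = model(torch.randint(0, 10000, (4, 32)))
print(logits.shape)  # torch.Size([4, 2])
```

The final GRU hidden state summarizes the whole review; a single linear layer then maps it to positive/negative logits.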

**Training Details**

All experiments were performed on a single NVIDIA RTX 2070 GPU. The training times are as follows:

- GRU Model: Trained for 10 epochs, taking over 10 hours.
- LSTM Model: Trained for 10 epochs, taking over 10 hours.
- BERT Base Model Fine-Tuning: Trained for 10 epochs, taking over 10 hours.
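The per-epoch loop behind those training runs can be sketched as follows. The stand-in linear model, learning rate, and dummy batch are assumptions for illustration only; the actual runs use the GRU/LSTM/BERT models above on the real dataset.

```python
import torch
import torch.nn as nn

# Stand-in classifier and dummy data (not the repository's actual setup)
model = nn.Linear(64, 2)                      # placeholder sentiment classifier
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(8, 64)                 # dummy batch of 8 review vectors
labels = torch.randint(0, 2, (8,))            # binary sentiment labels

losses = []
for epoch in range(10):                       # the README reports 10 epochs
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)   # cross-entropy on the batch
    loss.backward()
    optimizer.step()
    losses.append(loss.item())
```

Each epoch repeats this zero-grad / forward / backward / step cycle over every batch, which is what makes 10 epochs over a large review corpus take many hours on a single GPU.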

**Acknowledgments:**

The sentiment analysis model uses the BERT architecture from Hugging Face Transformers, the Amazon Fine Food Reviews dataset for training, and Gradio for the interactive interface.