### Training Data

The model was trained on the [Twitter Financial News Sentiment](https://huggingface.co/datasets/zeroshot/twitter-financial-news-sentiment) dataset. The text data undergoes a comprehensive cleaning process implemented in `data_preprocessing.py`.
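As an illustration of what such a cleaning pass might look like (the function below is a hypothetical sketch, not the actual contents of `data_preprocessing.py` — the real steps may differ):

```python
import re


def clean_tweet(text: str) -> str:
    """Hypothetical cleaning pipeline for financial tweets.

    Lowercases the text and strips URLs and @mentions; cashtags like
    $AAPL are deliberately kept, since tickers carry sentiment signal.
    """
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # strip URLs
    text = re.sub(r"@\w+", " ", text)          # strip @mentions
    text = re.sub(r"\s+", " ", text)           # collapse whitespace
    return text.strip()


print(clean_tweet("Breaking: $AAPL beats estimates! https://t.co/abc @analyst"))
# → breaking: $aapl beats estimates!
```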
### Training Procedure

The model was trained using the `transformers` library in PyTorch. The training script (`model_development.py`) includes the following features:

- **Hyperparameter Optimization**: Optuna was used to find the best learning rate and batch size.
- **Optimizer**: AdamW with a linear learning rate scheduler and warmup.
- **Early Stopping**: Training stops if the validation accuracy does not improve for a set number of epochs.
- **Mixed-Precision Training**: `torch.amp` was used for faster training.
- **Gradient Accumulation**: Used to simulate a larger batch size.
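The early-stopping and gradient-accumulation behaviour above is plain control flow, sketched below in minimal form (the patience and accumulation values are illustrative assumptions, not the settings used in `model_development.py`):

```python
def early_stop_epoch(val_accuracies, patience=2):
    """Return the 1-based epoch at which training halts: either when
    validation accuracy fails to improve for `patience` consecutive
    epochs, or after the last epoch. (Patience value is illustrative.)"""
    best, wait = float("-inf"), 0
    for epoch, acc in enumerate(val_accuracies, start=1):
        if acc > best:
            best, wait = acc, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch
    return len(val_accuracies)


def optimizer_steps(num_micro_batches, accum_steps=4):
    """With gradient accumulation, the optimizer steps once every
    `accum_steps` micro-batches, simulating a batch `accum_steps`x larger."""
    return num_micro_batches // accum_steps


print(early_stop_epoch([0.70, 0.75, 0.74, 0.73, 0.80], patience=2))  # → 4
print(optimizer_steps(32, accum_steps=4))                            # → 8
```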