ToBeAndNotToBe

This model is part of a collection of datasets and models associated with the paper "To be and not to be: that is the question".
This model is a fine-tuned version of google-bert/bert-base-multilingual-uncased on an unknown dataset. It achieves the following results on the evaluation set (final epoch): Loss: 0.3110, Accuracy: 0.9064.

Model description: More information needed.

Intended uses & limitations: More information needed.

Training and evaluation data: More information needed.
The training hyperparameters are not reported in this card. The following results were logged during training:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| No log | 1.0 | 77 | 0.3969 | 0.9064 |
| No log | 2.0 | 154 | 0.3107 | 0.9064 |
| No log | 3.0 | 231 | 0.3110 | 0.9064 |
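As a quick sanity check, the step counts in the table are mutually consistent: validation is logged once at the end of each epoch, and each epoch spans 77 optimizer steps (the batch size and dataset size behind that figure are not reported in the card):

```python
# Sanity check on the training results table: the cumulative step column
# should equal epoch * steps_per_epoch, with 77 steps per epoch.
STEPS_PER_EPOCH = 77  # from the table: step 77 at epoch 1.0

logged = {1: 77, 2: 154, 3: 231}  # epoch -> cumulative step, from the table
for epoch, step in logged.items():
    assert step == epoch * STEPS_PER_EPOCH
print("step counts are consistent")
```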
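For reference, a minimal inference sketch, assuming the checkpoint is published on the Hugging Face Hub as a sequence-classification model; `your-org/ToBeAndNotToBe` is a placeholder repository id, not the model's actual Hub name:

```python
def build_classifier(model_id: str = "your-org/ToBeAndNotToBe"):
    """Load the fine-tuned checkpoint as a text-classification pipeline.

    The default model_id is a placeholder (assumption); substitute the
    model's real Hub repository id. Requires the `transformers` library.
    """
    from transformers import pipeline  # lazy import: only needed at call time
    return pipeline("text-classification", model=model_id)

# usage (downloads the checkpoint on first call):
#   clf = build_classifier()
#   clf("To be, or not to be, that is the question.")
```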