Update README.md
README.md
---
library_name: transformers
language: en
license: apache-2.0
datasets:
- stanfordnlp/imdb
base_model:
- google-bert/bert-base-uncased
---

# BERT Fine-Tuned on IMDb

A fine-tuned BERT model using the IMDb dataset.

## Model Details

### Description

This model is based on the [BERT base (uncased)](https://huggingface.co/google-bert/bert-base-uncased) architecture and has been fine-tuned on the [IMDb](https://huggingface.co/datasets/stanfordnlp/imdb) dataset.

- **Developed by:** [Cesar Gonzalez-Gutierrez](https://ceguel.es)
- **Funded by:** [ERC](https://erc.europa.eu)
### Training Data

The model was trained on the IMDb training partition, with validation performed on a random 20% split of the training data.
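The card does not pin down how the 20% hold-out was drawn, so the following is an illustration only of a seeded 80/20 split; the review strings are stand-ins for the actual IMDb examples, not real data:

```python
import random

# Stand-ins for the 25,000 examples in the IMDb training partition.
examples = [f"review {i}" for i in range(25000)]

random.seed(42)  # fixed seed so the split is reproducible across runs
random.shuffle(examples)

cut = int(0.8 * len(examples))
train_split = examples[:cut]   # 80% used for fine-tuning
val_split = examples[cut:]     # 20% held out for validation
```

In practice the same split can be obtained directly from a 🤗 `datasets` object with `train_test_split(test_size=0.2, seed=...)`.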
#### Training Hyperparameters
## Uses

This model can be used for classification tasks aligned with the structure and intent of the IMDb corpus.

For broader guidance, refer to the BERT base model’s [Intended Uses & Limitations](https://huggingface.co/google-bert/bert-base-uncased#intended-uses--limitations).
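At inference time a sequence-classification head like this one emits one logit per class, and the predicted label is the argmax after a softmax. A dependency-free sketch of that final step (the logit values and label names below are illustrative, not taken from this model):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative logits for one review from a two-class (neg/pos) head.
logits = [-1.2, 2.3]
probs = softmax(logits)
labels = ["negative", "positive"]
prediction = labels[probs.index(max(probs))]
```

With the `transformers` library, the same decision is handled for you by `pipeline("text-classification", model=...)` pointed at this checkpoint.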
This model inherits the potential risks and limitations of its base model. For more details, refer to the [Limitations and bias](https://huggingface.co/google-bert/bert-base-uncased#limitations-and-bias) section of the original model documentation.

Additionally, it may reflect or amplify patterns and biases present in the IMDb training data.

## Hardware
## Citation

If you use this model in your research, please cite both the base BERT model and the IMDb source.
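For convenience, minimal BibTeX entries for the two works (Devlin et al., NAACL 2019, and Maas et al., ACL 2011) are sketched below; please verify the fields against the ACL Anthology before use:

```bibtex
@inproceedings{devlin-etal-2019-bert,
  title     = {{BERT}: Pre-training of Deep Bidirectional Transformers for Language Understanding},
  author    = {Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
  booktitle = {Proceedings of NAACL-HLT 2019},
  year      = {2019}
}

@inproceedings{maas-etal-2011-learning,
  title     = {Learning Word Vectors for Sentiment Analysis},
  author    = {Maas, Andrew L. and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher},
  booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics},
  year      = {2011}
}
```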