Commit 563783c (parent: 9500fd3): typos in model card

README.md
### Training Procedure

#### Preprocessing

The dataset was filtered to keep only the English (`'en'`) examples: ```train_dataset = train_dataset.filter(lambda x: x['lang'] == 'en')```
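The filter above keeps only the examples whose `lang` field is `'en'`. A minimal plain-Python sketch of the same idea (the records below are hypothetical stand-ins for the multilingual dataset):

```python
# Hypothetical mini-dataset standing in for the multilingual training set.
train_examples = [
    {"lang": "en", "tokens": ["London", "is", "big"]},
    {"lang": "de", "tokens": ["Berlin", "ist", "gross"]},
    {"lang": "en", "tokens": ["Paris", "visited"]},
]

# Equivalent of datasets' .filter(lambda x: x['lang'] == 'en'):
# keep only the examples tagged as English.
english_only = [ex for ex in train_examples if ex["lang"] == "en"]

print(len(english_only))  # 2
```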

#### Training Hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
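As a hedged sketch, these values correspond to Hugging Face `Trainer`-style argument names; the mapping and the total step count below are illustrative assumptions, not taken from the actual training run:

```python
# Assumed mapping of the hyperparameters above onto Trainer-style names.
hyperparams = {
    "learning_rate": 5e-05,
    "per_device_train_batch_size": 32,
    "per_device_eval_batch_size": 32,
    "seed": 42,
    "lr_scheduler_type": "linear",
    "warmup_ratio": 0.1,
    "num_train_epochs": 1,
}

# With warmup_ratio=0.1, the linear scheduler warms up over the first
# 10% of optimizer steps; e.g. for a hypothetical 1,250 total steps:
total_steps = 1250
warmup_steps = int(hyperparams["warmup_ratio"] * total_steps)
print(warmup_steps)  # 125
```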

## Evaluation

For evaluation, the `seqeval` metric was used: ```metric = load_metric("seqeval")```.
|VEHI | 0.812 | 0.812 | 0.812 | 32 |
|**Overall** | **0.939** | **0.947** | **0.943** | |
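`seqeval` scores predictions at the entity level rather than per token. A minimal plain-Python illustration of that idea (not the library itself; the tag sequences below are made up):

```python
def extract_entities(tags):
    """Collect (type, start, end) spans from BIO tags."""
    entities, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes the last span
        if tag.startswith("B-") or tag == "O":
            if etype is not None:
                entities.append((etype, start, i))
                etype = None
            if tag.startswith("B-"):
                start, etype = i, tag[2:]
    return entities

gold = ["B-VEHI", "I-VEHI", "O", "B-LOC"]
pred = ["B-VEHI", "I-VEHI", "O", "O"]

# An entity counts as correct only if type and span both match exactly.
g, p = set(extract_entities(gold)), set(extract_entities(pred))
tp = len(g & p)
precision = tp / len(p)
recall = tp / len(g)
print(precision, recall)  # 1.0 0.5
```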

## Technical Specifications

### Model Architecture and Objective

The model follows the same architecture and objective as RoBERTa-BASE.
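For reference, the published RoBERTa-BASE configuration has the following dimensions (listed from the upstream model's documentation, not measured from this checkpoint):

```python
# RoBERTa-BASE reference dimensions, per the upstream model documentation.
roberta_base = {
    "num_hidden_layers": 12,
    "hidden_size": 768,
    "num_attention_heads": 12,
    "intermediate_size": 3072,
    "vocab_size": 50265,
}

# Per-head dimension: hidden size split evenly across attention heads.
head_dim = roberta_base["hidden_size"] // roberta_base["num_attention_heads"]
print(head_dim)  # 64
```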

Kaggle - GPU T4x2

Google Colab - GPU T4x1

#### Software

- pandas==1.5.3
- numpy==1.23.5
- seqeval==1.2.2
- datasets==2.15.0
- huggingface_hub==0.19.4
- transformers[torch]==4.35.2
- evaluate==0.4.1
- matplotlib==3.7.1
- collections (Python standard library)
- torch==2.0.0

## Model Card Contact

[jayant-yadav](https://huggingface.co/jayant-yadav)