hyperparameter tuning step 19519

Files changed:
- README.md (+7 -11)
- config.json (+2 -2)
- pytorch_model.bin (+1 -1)
README.md CHANGED

```diff
@@ -18,7 +18,7 @@ metrics:
 
 ## Model description
 
-[sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) fine-tuned for NER using the bengali of [WikiANN ](https://huggingface.co/datasets/wikiann).
+[sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) fine-tuned for NER using the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann).
 
 Named Entities predicted by the model:
 
@@ -60,7 +60,7 @@ WIP
 
 ## Training data
 
-The model was initialized it with pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) at step
+The model was initialized with the pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) at step 19519 and trained on the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann).
 
 ## Training procedure
 
@@ -73,16 +73,12 @@ Coming soon!
 
 ## Eval results
 
-accuracy: 0.9756540697674418
-
-f1: 0.9570102589154861
-
-loss: 0.13705264031887054
-
-precision: 0.9518950437317785
-
-recall: 0.962180746561886
+loss: 0.11714419722557068
+accuracy: 0.9772286821705426
+precision: 0.9585365853658536
+recall: 0.9651277013752456
+f1: 0.9618208516886931
 
 
 
 ### BibTeX entry and citation info
```
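The updated eval numbers are internally consistent: the reported F1 is the harmonic mean of the reported precision and recall. A quick sanity check in plain Python, with the values copied from the added lines of the diff:

```python
# Values copied from the updated "Eval results" section.
precision = 0.9585365853658536
recall = 0.9651277013752456
reported_f1 = 0.9618208516886931

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
assert abs(f1 - reported_f1) < 1e-9
```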
config.json CHANGED

```diff
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "albertvillanova/autonlp-wikiann-entity_extraction-
+  "_name_or_path": "albertvillanova/autonlp-baselines-wikiann-entity_extraction-1341171",
   "_num_labels": 7,
   "architectures": [
     "AlbertForTokenClassification"
@@ -36,7 +36,7 @@
     "6": 6
   },
   "layer_norm_eps": 1e-12,
-  "max_length":
+  "max_length": 128,
   "max_position_embeddings": 512,
   "model_type": "albert",
   "net_structure_type": 0,
```
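For context on the second hunk: in a Hugging Face model config, `max_length` is a default sequence cap and should stay within `max_position_embeddings`, the size of the positional-embedding table the model was trained with. A minimal sketch of that relationship, using an inline dict that stands in for the real config.json (only the fields shown in the diff are reproduced here):

```python
import json

# Stand-in for the updated config.json; only fields from the diff
# above are reproduced (the real file has many more).
config = json.loads("""
{
  "_name_or_path": "albertvillanova/autonlp-baselines-wikiann-entity_extraction-1341171",
  "_num_labels": 7,
  "layer_norm_eps": 1e-12,
  "max_length": 128,
  "max_position_embeddings": 512,
  "model_type": "albert"
}
""")

# A default sequence length longer than the positional-embedding table
# would break inference, so sanity-check the relationship.
assert config["max_length"] <= config["max_position_embeddings"]
```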
pytorch_model.bin CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:042f8d9caceb37cadfe054e9e78c8088f7ec98f8c62eb2db5238bf3a71de30f8
 size 67605209
```
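The pytorch_model.bin change is a Git LFS pointer update: the repository stores only the blob's SHA-256 and byte size, and the new oid reflects the retrained weights. A small sketch of how such a pointer file is produced for a local file (the throwaway temp file here is a stand-in, not the actual weights):

```python
import hashlib
import os
import tempfile

def lfs_pointer(path: str) -> str:
    """Build a Git LFS pointer file (spec v1) for a local file."""
    sha = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash in chunks so large weight files never load whole into memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            sha.update(chunk)
    return (
        "version https://git-lfs.github.com/spec/v1\n"
        f"oid sha256:{sha.hexdigest()}\n"
        f"size {os.path.getsize(path)}\n"
    )

# Demo with a throwaway file standing in for pytorch_model.bin.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"dummy weights")
pointer = lfs_pointer(tmp.name)
print(pointer)
```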