Update README.md
#### Metrics

The evaluation metrics include precision, recall, F1 score, and accuracy, which are standard for token classification tasks. Together, these metrics provide a comprehensive picture of the model's ability to identify scientific terms.

```python
import evaluate

metric = evaluate.load("seqeval")
```

On the test set, the model achieved an accuracy of 98.34% with an F1 score of 0.9
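Under the hood, seqeval scores entities rather than individual tokens: it extracts labeled spans from the BIO tag sequences and compares predicted spans against reference spans. The following is a minimal pure-Python sketch of that entity-level computation — the `B-TERM`/`I-TERM` labels and the helper names are illustrative, not necessarily this model's actual tag set:

```python
def extract_spans(tags):
    """Collect (start, end, type) entity spans from one BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # sentinel "O" flushes a trailing span
        # Close the open span on B-, O, or an I- whose type changed.
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and tag[2:] != etype):
            if start is not None:
                spans.append((start, i, etype))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            # A stray I- without a preceding B- is treated as opening a span,
            # mirroring seqeval's default (lenient) IOB handling.
            start, etype = i, tag[2:]
    return set(spans)

def span_f1(predictions, references):
    """Entity-level precision, recall, and F1 over parallel tag sequences."""
    tp = fp = fn = 0
    for pred, ref in zip(predictions, references):
        p, r = extract_spans(pred), extract_spans(ref)
        tp += len(p & r)   # spans matching exactly in boundaries and type
        fp += len(p - r)   # predicted spans with no exact reference match
        fn += len(r - p)   # reference spans the model missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

In practice, the loaded seqeval metric does all of this in one call — `metric.compute(predictions=..., references=...)` returns overall precision, recall, F1, and token-level accuracy together.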

## Citation

**BibTeX:**

```bibtex
@misc{scibert-NER-finetuned-improved,
  author = {JonyC},
  title = {SciBERT for Scientific Term Detection},
  year = {2025},
  url = {https://huggingface.co/JonyC/scibert-NER-finetuned-improved}
}
```

**APA:**

JonyC. (2025). SciBERT for Scientific Term Detection. Hugging Face. https://huggingface.co/JonyC/scibert-NER-finetuned-improved

Author: JonyC