---
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- medical
library_name: transformers
---
|
|
|
|
|
### Model summary |
|
|
|
|
|
`bert-sci-am` is a BERT-family model fine-tuned for argument mining in scientific literature. Under the hood it performs sequence classification: this version is trained for 3-class classification on [david-inf/am-nlp-abstrct](https://huggingface.co/datasets/david-inf/am-nlp-abstrct), a fork of the [pie/abstrct](https://huggingface.co/datasets/pie/abstrct/tree/main) dataset.
|
|
|
|
|
### How to use |
|
|
|
|
|
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

def load_model():
    """Load model and tokenizer from the Hugging Face Hub"""
    checkpoint = "david-inf/bert-sci-am"
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=3)
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    return model, tokenizer

model, tokenizer = load_model()
```
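
Once loaded, inference is standard `transformers` sequence classification. A minimal sketch (the example sentence is only illustrative; the class names are read from `model.config.id2label`, so inspect that mapping on the loaded model rather than assuming a label order):

```python
import torch

model, tokenizer = load_model()
model.eval()

# Illustrative input; the model expects sentences from scientific abstracts
text = "The treatment group showed a significant reduction in symptoms."

# Tokenize and run a forward pass without tracking gradients
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index to its label name
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])
```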