---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
language: de
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: academic_main_text_classifier_de
results: []
---
# Academic Main Text Classifier (de)
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on a labelled dataset of publications in the Bibliography of Linguistic Literature.
It achieves the following results on the evaluation set:
- Loss: 0.2342
- Accuracy: 0.9385
- Precision: 0.9385
- Recall: 0.9385
- F1: 0.9385
## Model description
The model is fine-tuned on academic publications in Linguistics to classify text from publications into four classes, serving as a filter for other tasks. Sentence-based data obtained from OCR-processed PDF files was annotated manually with the following classes (a sketch for inspecting the label mapping follows the list):
- 0: out of scope - material of low significance, e.g. page numbers and page headers, noise from OCR/PDF-to-text conversion
- 1: main text - the main body text of the publication, to be used for downstream tasks
- 2: examples - figure captions, quotes, or excerpts
- 3: references - the publication's references, excluding in-text citations
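The integer ids above correspond to the string labels returned by the pipeline. A minimal sketch for checking the mapping the model ships with, assuming the standard `id2label` field in the Hub config:
```python
from transformers import AutoConfig

# Load only the config to inspect the class-id-to-label mapping.
config = AutoConfig.from_pretrained("ubffm/academic_text_classifier_de")
print(config.id2label)
# expected, per the pipeline examples in this card:
# {0: 'OUT OF SCOPE', 1: 'MAIN TEXT', 2: 'EXAMPLE', 3: 'REFERENCE'}
```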
## Intended uses & limitations
Intended uses:
- filter out noise from OCR output of academic texts (conference papers, journals, books, etc.)
- extract the main text of academic publications for downstream NLP tasks (see the sketch after this list)
Limitations:
- training and evaluation data are limited to German academic texts in Linguistics (though the model may still be usable to a certain extent for English texts)
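As an illustration of the main-text-extraction use case, here is a minimal, hypothetical sketch that keeps only sentences the model labels as `MAIN TEXT` (label names as in the pipeline examples below; the input sentences are invented):
```python
from transformers import pipeline

# Hypothetical filtering sketch: classify OCR sentences, keep main text only.
model_name = "ubffm/academic_text_classifier_de"
classifier = pipeline("text-classification", model=model_name)

ocr_sentences = [
    "Seite 12",  # page header noise; likely OUT OF SCOPE
    "Die vorliegende Studie untersucht die Wortstellung im Deutschen.",  # likely MAIN TEXT
]
results = classifier(ocr_sentences)  # one top-label dict per input sentence
main_text = [s for s, r in zip(ocr_sentences, results) if r["label"] == "MAIN TEXT"]
print(main_text)
```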
## How to run
```python
from transformers import pipeline

# define the model name
model_name = "ubffm/academic_text_classifier_de"

# run the model with the HF pipeline
## returns only the best label, e.g.:
## [{'label': 'EXAMPLE', 'score': 0.9601941108703613}]
classifier = pipeline("text-classification", model=model_name, tokenizer=model_name)

## returns scores for all labels (top_k=None replaces the deprecated return_all_scores=True), e.g.:
## [[{'label': 'OUT OF SCOPE', 'score': 0.007808608002960682}, {'label': 'MAIN TEXT', 'score': 0.028077520430088043}, {'label': 'EXAMPLE', 'score': 0.9601941108703613}, {'label': 'REFERENCE', 'score': 0.003919811453670263}]]
classifier_all = pipeline("text-classification", model=model_name, tokenizer=model_name, top_k=None)

# perform inference on your input text
your_text = "your text here."
result = classifier(your_text)
print(result)
```
## Try it yourself with the following examples (not in training/evaluation data)
## Problematic cases
## Training and evaluation data
### Labelled dataset from open access publications of the Bibliography of Linguistic Literature (BLL)
- Manually labelled dataset on Hugging Face: [ubffm/academic_main_text_classifier_de_annotated](https://huggingface.co/datasets/ubffm/academic_main_text_classifier_de_annotated)
- The Bibliography of Linguistic Literature (BLL) is one of the most comprehensive sources of bibliographic information for general linguistics, its subdomains, and neighboring disciplines, as well as for English, German, and Romance linguistics. The subject bibliography is based mainly on the library's holdings on linguistics. It lists monographs, dissertations, articles from periodicals, collective works, conference contributions, unpublished research papers, etc. The printed edition is published annually (at the end of each year), covers the literature of the previous year plus some supplements, and usually includes about 10,000 references per year. (Frankfurt a. M.: Klostermann, 1.1971/75(1976) - 47.2021 (2022))
  (See more at https://www.ub.uni-frankfurt.de/linguistik/sammlung_en.html)
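To reproduce or extend the training, the annotated dataset should load with the standard `datasets` API (a sketch; split names and columns are not documented here, so inspect the object first):
```python
from datasets import load_dataset

# Load the manually labelled BLL dataset from the Hub.
dataset = load_dataset("ubffm/academic_main_text_classifier_de_annotated")
print(dataset)  # inspect available splits and columns
```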
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a code sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch_fused (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
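For reference, these hyperparameters map onto the Trainer API roughly as follows (a sketch, not the authors' original training script; `output_dir` is hypothetical):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="academic_main_text_classifier_de",  # hypothetical path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    seed=42,
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5,
)
```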
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.9745 | 1.0 | 268 | 0.4236 | 0.8629 | 0.8629 | 0.8629 | 0.8629 |
| 0.4594 | 2.0 | 536 | 0.2755 | 0.9193 | 0.9193 | 0.9193 | 0.9193 |
| 0.2734 | 3.0 | 804 | 0.2541 | 0.9287 | 0.9287 | 0.9287 | 0.9287 |
| 0.2288 | 4.0 | 1072 | 0.2300 | 0.9329 | 0.9329 | 0.9329 | 0.9329 |
| 0.1909 | 5.0 | 1340 | 0.2342 | 0.9385 | 0.9385 | 0.9385 | 0.9385 |
### Framework versions
- Transformers 4.57.1
- Pytorch 2.9.0+cu128
- Datasets 4.2.0
- Tokenizers 0.22.1