---
language:
- pt
license: cc-by-nc-4.0
model_name: BERTimbau fine-tuned on ClaimPT
tags:
- claim-detection
- portuguese
- bertimbau
- news
---
# 🇵🇹 BERTimbau fine-tuned on ClaimPT (Claim Extraction)
This model is a fine-tuned version of **[neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased)** on the **ClaimPT** dataset for **claim and non-claim detection** in Portuguese news articles.
It classifies each token as part of a *Claim* or *Non-Claim* span, following the guidelines described below. For more information, visit our [GitHub repository](https://github.com/LIAAD/ClaimPT).
---
## 🧠 Model Details
**Model type:** Transformer-based encoder (BERT)
**Base model:** [`neuralmind/bert-base-portuguese-cased`](https://huggingface.co/neuralmind/bert-base-portuguese-cased)
**Fine-tuning objective:** Token classification
**Task:** Claim Extraction
**Language:** Portuguese (pt)
**Framework:** 🤗 Transformers
**License:** CC BY-NC 4.0 *(non-commercial use)*
**Authors:** Ricardo Campos, Raquel Sequeira, Sara Nerea, Inês Cantante, Diogo Folques, Luís Filipe Cunha, João Canavilhas, António Branco, Alípio Jorge, Sérgio Nunes, Nuno Guimarães, Purificação Silvano
**Institution(s):** INESC TEC, University of Beira Interior, University of Porto, University of Lisbon
---
## 📘 Dataset
**Dataset:** [ClaimPT](https://rdm.inesctec.pt/dataset/cs-2025-008)
**Authors:** Ricardo Campos, Raquel Sequeira, Sara Nerea, Inês Cantante, Diogo Folques, Luís Filipe Cunha, João Canavilhas, António Branco, Alípio Jorge, Sérgio Nunes, Nuno Guimarães, Purificação Silvano
**ClaimPT** is a dataset of European Portuguese news articles annotated for **factual claims**, comprising **1,308 articles** and **6,875 individual annotations**.
---
## ⚙️ Training Details
- **Task formulation:** Token classification with labels
`{B-Claim, I-Claim, B-Non-Claim, I-Non-Claim, O}`
- **Loss:** Cross-entropy
- **Optimizer:** AdamW
- **Learning rate:** 2e-5
- **Batch size:** 16
- **Max sequence length:** 512
- **Truncation strategy:** Sentence-level segmentation
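Under this formulation, word-level labels must be propagated to the subword tokens the tokenizer produces, with special tokens and continuation subwords masked out of the loss. A minimal sketch of that alignment step, assuming the usual `-100` ignore-index convention; the `word_ids` list below stands in for the output of a fast tokenizer and is purely illustrative:

```python
# Label scheme used for token classification
LABELS = ["O", "B-Claim", "I-Claim", "B-Non-Claim", "I-Non-Claim"]
label2id = {label: i for i, label in enumerate(LABELS)}

def align_labels(word_labels, word_ids, ignore_index=-100):
    """Propagate word-level labels to subword tokens.

    Special tokens (word id None) and non-first subwords receive
    ignore_index so cross-entropy skips them.
    """
    aligned, previous = [], None
    for wid in word_ids:
        if wid is None:                 # [CLS], [SEP], padding
            aligned.append(ignore_index)
        elif wid != previous:           # first subword of a word
            aligned.append(label2id[word_labels[wid]])
        else:                           # continuation subword
            aligned.append(ignore_index)
        previous = wid
    return aligned

# Example: 3 words split into subwords, wrapped in [CLS]/[SEP]
print(align_labels(["B-Claim", "I-Claim", "O"], [None, 0, 0, 1, 2, None]))
# [-100, 1, -100, 2, 0, -100]
```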
---
## 📊 Evaluation
| **Model** | **Label** | **Precision (%)** | **Recall (%)** | **F1 (%)** |
|------------|------------|-------------------|----------------|-------------|
| **BERT-Sent (This model)** | Claim | 37.50 | 25.81 | 30.57 |
| | Non-Claim | 63.35 | 76.42 | 69.27 |
| | Micro Avg | 61.88 | 71.59 | 66.38 |
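For reference, the micro average pools true-positive, predicted, and gold counts across labels before computing precision, recall, and F1. A small sketch of that computation; the counts below are hypothetical and not the actual ClaimPT evaluation counts:

```python
def prf(tp, n_pred, n_gold):
    """Precision, recall and F1 from raw counts."""
    p = tp / n_pred if n_pred else 0.0
    r = tp / n_gold if n_gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Hypothetical per-label counts: (true positives, predicted, gold)
counts = {"Claim": (24, 64, 93), "Non-Claim": (405, 639, 530)}

# Micro average: sum counts over labels, then compute P/R/F1 once
tp = sum(c[0] for c in counts.values())
n_pred = sum(c[1] for c in counts.values())
n_gold = sum(c[2] for c in counts.values())
print(prf(tp, n_pred, n_gold))
```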
---
## 🧩 Usage
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("lfcc/bertimbau-claimpt-sent")
model = AutoModelForTokenClassification.from_pretrained("lfcc/bertimbau-claimpt-sent")

text = '"O governo vai reduzir o IVA dos alimentos", disse o ministro da economia.'
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Predicted label for each subword token (e.g. B-Claim, I-Claim, O)
labels = [model.config.id2label[i] for i in outputs.logits.argmax(dim=-1)[0].tolist()]
```
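To recover claim spans from per-token predictions, contiguous `B-`/`I-` tags of the same type can be merged. A minimal sketch of that aggregation; the tokens and labels below are illustrative, not actual model output:

```python
def bio_to_spans(tokens, labels):
    """Merge BIO token labels into (entity_type, token_list) spans."""
    spans, current_type, current_tokens = [], None, []
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if current_type:
                spans.append((current_type, current_tokens))
            current_type, current_tokens = lab[2:], [tok]
        elif lab.startswith("I-") and current_type == lab[2:]:
            current_tokens.append(tok)
        else:  # "O" or an inconsistent I- tag closes any open span
            if current_type:
                spans.append((current_type, current_tokens))
            current_type, current_tokens = None, []
    if current_type:
        spans.append((current_type, current_tokens))
    return spans

tokens = ["O", "governo", "vai", "reduzir", "o", "IVA", ",", "disse"]
labels = ["B-Claim", "I-Claim", "I-Claim", "I-Claim", "I-Claim",
          "I-Claim", "O", "B-Non-Claim"]
print(bio_to_spans(tokens, labels))
# [('Claim', ['O', 'governo', 'vai', 'reduzir', 'o', 'IVA']),
#  ('Non-Claim', ['disse'])]
```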
---
## Annotation Guidelines
Detailed annotation instructions, including procedures, quality-control measures, and schema definitions, are available in the document:
📄 [ClaimPT Annotation Manual (PDF)](https://github.com/LIAAD/ClaimPT/blob/main/ClaimPT%20Annotation%20Manual.pdf)
This manual describes:
* The annotation process and methodology
* The annotation scheme and entity structures
* The definition of a claim
* Metadata and label taxonomy
* Examples and boundary cases
Researchers interested in replicating the annotation or training models should refer to this guide.
---
## Citation
If you use this model or the ClaimPT dataset, please cite:
```bibtex
@dataset{claimpt2025,
author = {Ricardo Campos and Raquel Sequeira and Sara Nerea and Inês Cantante and Diogo Folques and Luís Filipe Cunha and João Canavilhas and António Branco and Alípio Jorge and Sérgio Nunes and Nuno Guimarães and Purificação Silvano},
title = {ClaimPT: A Portuguese Dataset of Annotated Claims in News Articles},
year = {2025},
  url         = {https://rdm.inesctec.pt/dataset/cs-2025-008},
institution = {INESC TEC}
}
```
---
## Credits and Acknowledgements
This model and the ClaimPT dataset were developed by **[INESC TEC – Institute for Systems and Computer Engineering, Technology and Science](https://www.inesctec.pt)**, specifically by the **[NLP Group](https://nlp.inesctec.pt/)** within the **[LIAAD – Laboratory of Artificial Intelligence and Decision Support](https://www.inesctec.pt/pt/centros/LIAAD)** research center.
### Affiliated Institutions
* [University of Beira Interior](https://www.ubi.pt/en/)
* [University of Porto](https://www.up.pt/portal/en/)
* [University of Lisbon](https://www.ulisboa.pt/en)
### Acknowledgements
This work was carried out as part of the project *Accelerat.AI* (Ref. C644865762-00000008), financed by IAPMEI and the European Union — Next Generation EU Fund, within the scope of call for proposals no. 02/C05-i01/2022 — submission of final proposals for project development under the Mobilizing Agendas for Business Innovation of the Recovery and Resilience Plan.
Ricardo Campos, Alípio Jorge, and Nuno Guimarães also acknowledge support from the *StorySense* project (Ref. 2022.09312.PTDC, DOI: [10.54499/2022.09312.PTDC](https://doi.org/10.54499/2022.09312.PTDC)).