|
|
--- |
|
|
language: |
|
|
- pt |
|
|
license: cc-by-nc-4.0 |
|
|
model_name: BERTimbau fine-tuned on ClaimPT |
|
|
tags: |
|
|
- claim-detection |
|
|
- portuguese |
|
|
- bertimbau |
|
|
- news |
|
|
--- |
|
|
|
|
|
# 🇵🇹 BERTimbau fine-tuned on ClaimPT (Claim Extraction) |
|
|
|
|
|
This model is a fine-tuned version of **[neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased)** on the **ClaimPT** dataset for **claim and non-claim detection** in Portuguese news articles. |
|
|
It classifies each token as part of a *Claim* or *Non-Claim* span, following the guidelines described below. For more information, visit our [GitHub repository](https://github.com/LIAAD/ClaimPT).
|
|
|
|
|
--- |
|
|
|
|
|
## 🧠 Model Details |
|
|
|
|
|
**Model type:** Transformer-based encoder (BERT) |
|
|
**Base model:** [`neuralmind/bert-base-portuguese-cased`](https://huggingface.co/neuralmind/bert-base-portuguese-cased) |
|
|
**Fine-tuning objective:** Token classification |
|
|
**Task:** Claim Extraction |
|
|
**Language:** Portuguese (pt) |
|
|
**Framework:** 🤗 Transformers |
|
|
**License:** CC BY-NC 4.0 *(non-commercial use)* |
|
|
**Authors:** Ricardo Campos, Raquel Sequeira, Sara Nerea, Inês Cantante, Diogo Folques, Luís Filipe Cunha, João Canavilhas, António Branco, Alípio Jorge, Sérgio Nunes, Nuno Guimarães, Purificação Silvano |
|
|
|
|
|
**Institution(s):** INESC TEC, University of Beira Interior, University of Porto, University of Lisbon |
|
|
|
|
|
--- |
|
|
|
|
|
## 📘 Dataset |
|
|
|
|
|
**Dataset:** [ClaimPT](https://rdm.inesctec.pt/dataset/cs-2025-008) |
|
|
**Authors:** Ricardo Campos, Raquel Sequeira, Sara Nerea, Inês Cantante, Diogo Folques, Luís Filipe Cunha, João Canavilhas, António Branco, Alípio Jorge, Sérgio Nunes, Nuno Guimarães, Purificação Silvano |
|
|
|
|
|
**ClaimPT** is a dataset of European Portuguese news articles annotated for **factual claims**, comprising **1,308 articles** and **6,875 individual annotations**.
|
|
|
|
|
|
|
|
--- |
|
|
|
|
|
## ⚙️ Training Details |
|
|
|
|
|
- **Task formulation:** Token classification with labels |
|
|
`{B-Claim, I-Claim, B-Non-Claim, I-Non-Claim, O}` |
|
|
- **Loss:** Cross-entropy |
|
|
- **Optimizer:** AdamW |
|
|
- **Learning rate:** 2e-5 |
|
|
- **Batch size:** 16 |
|
|
- **Max sequence length:** 512 |
|
|
- **Truncation strategy:** Chunking with 128-token overlap (stride) |
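
The chunking arithmetic behind the truncation strategy can be sketched as follows. This is a minimal illustration of the windowing logic only (the actual preprocessing lives in the GitHub repository); `chunk_with_stride` is a hypothetical helper, and in a 🤗 Transformers pipeline the equivalent behavior comes from passing `stride=128` and `return_overflowing_tokens=True` to the tokenizer.

```python
def chunk_with_stride(token_ids, max_len=512, stride=128):
    """Split a token-id sequence into overlapping windows.

    Each new chunk starts (max_len - stride) tokens after the previous
    one, so the last `stride` tokens of a chunk reappear at the start of
    the next. Spans near a boundary therefore fall fully inside at least
    one window.
    """
    step = max_len - stride
    chunks = []
    for start in range(0, len(token_ids), step):
        chunks.append(token_ids[start:start + max_len])
        if start + max_len >= len(token_ids):
            break
    return chunks


# A 1000-token article yields three overlapping 512-token windows.
chunks = chunk_with_stride(list(range(1000)), max_len=512, stride=128)
print(len(chunks))        # 3
print(chunks[1][0])       # 384 (second window starts 512 - 128 tokens in)
```

Note that consecutive windows share exactly 128 tokens, which matches the stated stride.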
|
|
|
|
|
--- |
|
|
|
|
|
## 📊 Evaluation |
|
|
|
|
|
|
|
|
| **Model** | **Label** | **Precision (%)** | **Recall (%)** | **F1 (%)** | |
|
|
|------------|------------|-------------------|----------------|-------------| |
|
|
| **BERT-Chunk** | Claim | 40.38 | 22.58 | 28.97 | |
|
|
| | Non-Claim | 55.96 | 68.71 | 61.68 | |
|
|
| | Micro Avg | 55.24 | 64.31 | 59.43 | |
|
|
|
|
|
--- |
|
|
|
|
|
## 🧩 Usage |
|
|
|
|
|
```python |
|
|
from transformers import AutoTokenizer, AutoModelForTokenClassification |
|
|
|
|
|
tokenizer = AutoTokenizer.from_pretrained("lfcc/bertimbau-claimpt-sent") |
|
|
model = AutoModelForTokenClassification.from_pretrained("lfcc/bertimbau-claimpt-sent") |
|
|
|
|
|
text = '"O governo vai reduzir o IVA dos alimentos", disse o ministro da economia.' |
|
|
inputs = tokenizer(text, return_tensors="pt") |
|
|
outputs = model(**inputs) |
|
|
logits = outputs.logits |
|
|
```
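
To turn the raw logits into per-token labels, take the argmax over the label dimension and map the resulting ids through the model's label map. The snippet below is a sketch with dummy logits standing in for `outputs.logits`; the `id2label` mapping shown is an assumption for illustration, so in practice read it from `model.config.id2label`.

```python
import torch

# Assumed id-to-label map for illustration; check model.config.id2label.
id2label = {0: "O", 1: "B-Claim", 2: "I-Claim", 3: "B-Non-Claim", 4: "I-Non-Claim"}

# Dummy logits shaped (batch, seq_len, num_labels), as outputs.logits would be.
logits = torch.tensor([[[0.1, 2.0, 0.2, 0.0, 0.0],
                        [0.0, 0.1, 3.0, 0.0, 0.0],
                        [2.5, 0.0, 0.0, 0.1, 0.0]]])

pred_ids = logits.argmax(dim=-1)[0].tolist()
labels = [id2label[i] for i in pred_ids]
print(labels)  # ['B-Claim', 'I-Claim', 'O']
```

With the real model, align these labels back to words via `tokenizer`'s `word_ids()` before reading off claim spans, since one word may split into several subword tokens.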
|
|
|
|
|
--- |
|
|
|
|
|
## Annotation Guidelines |
|
|
|
|
|
Detailed annotation instructions, including procedures, quality-control measures, and schema definitions, are available in the document: |
|
|
|
|
|
📄 [ClaimPT Annotation Manual (PDF)](https://github.com/LIAAD/ClaimPT/blob/main/ClaimPT%20Annotation%20Manual.pdf) |
|
|
|
|
|
This manual describes: |
|
|
|
|
|
* The annotation process and methodology |
|
|
* The annotation scheme and entity structures |
|
|
* The definition of a claim |
|
|
* Metadata and label taxonomy |
|
|
* Examples and boundary cases |
|
|
|
|
|
Researchers interested in replicating the annotation or training models should refer to this guide. |
|
|
|
|
|
|
|
|
--- |
|
|
|
|
|
|
|
|
## Citation |
|
|
|
|
|
If you use this model or the ClaimPT dataset, please cite:
|
|
|
|
|
```bibtex |
|
|
@dataset{claimpt2025, |
|
|
author = {Ricardo Campos and Raquel Sequeira and Sara Nerea and Inês Cantante and Diogo Folques and Luís Filipe Cunha and João Canavilhas and António Branco and Alípio Jorge and Sérgio Nunes and Nuno Guimarães and Purificação Silvano}, |
|
|
title = {ClaimPT: A Portuguese Dataset of Annotated Claims in News Articles}, |
|
|
year = {2025}, |
|
|
  url         = {https://rdm.inesctec.pt/dataset/cs-2025-008},
|
|
institution = {INESC TEC} |
|
|
} |
|
|
``` |
|
|
|
|
|
--- |
|
|
|
|
|
## Credits and Acknowledgements |
|
|
|
|
|
This dataset was developed by **[INESC TEC – Institute for Systems and Computer Engineering, Technology and Science](https://www.inesctec.pt)**, specifically by the **[NLP Group](https://nlp.inesctec.pt/)** within the **[LIAAD – Laboratory of Artificial Intelligence and Decision Support](https://www.inesctec.pt/pt/centros/LIAAD)** research center. |
|
|
|
|
|
### Affiliated Institutions |
|
|
|
|
|
* [University of Beira Interior](https://www.ubi.pt/en/) |
|
|
* [University of Porto](https://www.up.pt/portal/en/)
|
|
* [University of Lisbon](https://www.ulisboa.pt/en) |
|
|
|
|
|
### Acknowledgements |
|
|
|
|
|
This work was carried out as part of the project *Accelerat.AI* (Ref. C644865762-00000008), financed by IAPMEI and the European Union — Next Generation EU Fund, within the scope of call for proposals no. 02/C05-i01/2022 — submission of final proposals for project development under the Mobilizing Agendas for Business Innovation of the Recovery and Resilience Plan. |
|
|
Ricardo Campos, Alípio Jorge, and Nuno Guimarães also acknowledge support from the *StorySense* project (Ref. 2022.09312.PTDC, DOI: [10.54499/2022.09312.PTDC](https://doi.org/10.54499/2022.09312.PTDC)). |
|
|
|