---
license: cc-by-nc-sa-4.0
language:
- pt
size_categories:
- 10K<n<100K
tags:
- corpus
- pt-pt
- pt
- science
- academic
- llm
- high-quality
- markdown
- low-resource
pretty_name: CorEGe-PT
---

# CorEGe-PT: Corpus do Estudo Geral - Portuguese
CorEGe-PT is a large-scale corpus of academic texts written in Portuguese (mainly European Portuguese), extracted from Estudo Geral, the institutional repository of the University of Coimbra. It contains over 34,000 documents and approximately 1.1 billion tokens, making it the largest available corpus of its kind for the Portuguese language.
This dataset is designed to support linguistic research (Academic Discourse Studies) and the training or adaptation of Large Language Models (LLMs) for the academic domain.
## Dataset Details
- Total Documents: >34,000
- Total Tokens: ~1.1 Billion
- Languages: Portuguese (Primary), with a distinction between European (PT-PT) and Brazilian (PT-BR) varieties.
- Format: Markdown (extracted using Docling)
## Taxonomy
The corpus covers five main Fields of Science and Technology (FOS):
- Social Sciences (37.9% of docs)
- Medical and Health Sciences (24.8% of docs)
- Humanities (16.4% of docs)
- Engineering and Technology Sciences (12.4% of docs)
- Exact and Natural Sciences (7.1% of docs)

*(Some documents may belong to more than one FOS.)*
## Data Fields

Each record contains the full text and rich metadata. The metadata fields include original repository data and enriched fields added during post-processing.

### Core metadata (as released with each row)

- `Collection`: repository collection label used during compilation and for heuristic FOS mapping
- `dc.title`: title
- `dc.creator`: author(s) (separated with `||`)
- `dc.date.issued`: publication/issue date
- `dc.subject`: keywords (often separated with `||`)
- `dc.type`: document type (e.g., master thesis, article, book part)
- `dc.identifier.uri`: persistent handle/URI
- `dc.rights`: access rights label (e.g., open access/embargo labels)
- `dc.rights.uri`: license URI when present (often missing)
- `dc.subject.fos`: assigned Field of Science (may be single or multiple)
### Added post-processing fields

- `fos.assignment`: how `dc.subject.fos` was assigned: `heuristic` (mapped from `Collection`) or `classifier` (model-based, for records not mappable by heuristic)
- `pt.auto`: whether automatic language identification classified the extracted text as Portuguese
- `pt.mean.confidence.auto`: mean confidence across snippets for Portuguese language identification
- `pt.pt.auto`: whether automatic variety identification classified the text as European Portuguese (PT-PT)
- `pt.pt.mean.confidence.auto`: mean confidence across snippets for PT-PT classification
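As a minimal sketch of working with the `||`-separated multi-valued fields (`dc.creator`, `dc.subject`), the helper below splits them into Python lists. The sample record is invented for illustration; only the field names and separator come from the card.

```python
def split_multivalue(value: str, sep: str = "||") -> list[str]:
    """Split a ||-separated metadata string and trim whitespace."""
    if not value:
        return []
    return [part.strip() for part in value.split(sep) if part.strip()]


# Hypothetical record following the card's field names.
record = {
    "dc.creator": "Silva, Ana||Pereira, João",
    "dc.subject": "linguística||corpus||português europeu",
}

authors = split_multivalue(record["dc.creator"])    # ["Silva, Ana", "Pereira, João"]
keywords = split_multivalue(record["dc.subject"])   # 3 keywords
```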
## Download and text extraction
Extraction outcomes using Docling:
- At least one Markdown file was successfully extracted for 93.1% of the selected records.
- Download failures: 1,952 records could not be downloaded (e.g., imminent embargo expirations, internal server errors, withdrawn records).
- Conversion failures: Docling failed on 419 files (1.3%) (e.g., non-PDF formats, corrupted files).
Total: 34,285 Markdown files, totaling ~1.1B tokens.
## What to expect in text

The `text` field contains Markdown with section headings (`##` only) and may contain conversion artifacts such as:

- `<!-- image -->` placeholders (image-heavy PDFs)
- encoding/character issues from source PDFs/extraction errors
- mixed-language segments (e.g., abstracts in English)
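If the image placeholders and the blank-line runs they leave behind get in the way, a small cleanup pass can strip them. This is an illustrative sketch, not part of the release pipeline:

```python
import re


def clean_markdown(text: str) -> str:
    """Drop <!-- image --> placeholders and collapse runs of blank lines."""
    text = text.replace("<!-- image -->", "")
    text = re.sub(r"\n{3,}", "\n\n", text)  # at most one blank line in a row
    return text.strip()


sample = "Intro\n\n<!-- image -->\n\nMore text"
print(clean_markdown(sample))  # "Intro\n\nMore text"
```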
## Field of Science (FOS) assignment

### Heuristic mapping

Most records were mapped to one of the five OECD high-level FOS categories using the repository `Collection` structure (Agricultural Sciences is not represented due to the institutional organization of the source repository).
### Classifier for unmapped records

For records where `Collection` did not allow a reliable heuristic mapping (e.g., generic thesis/dissertation collections), supervised classification was used to assign FOS.

- Training data: ~27k records with heuristic FOS labels.
- Inputs: combinations of `dc.title`, `dc.subject`, abstract, and optionally `dc.description` (when available).
- Best-performing model used for assignment: BERTimbau, fine-tuned for FOS classification.
Reported model performance (macro scores):
| Model | dc.description | Epochs | Precision | Recall | F1 |
|---|---|---|---|---|---|
| mBERT | Yes | 5 | 0.934 | 0.936 | 0.935 |
| mBERT | No | 4 | 0.926 | 0.918 | 0.922 |
| BERTimbau | Yes | 2 | 0.938 | 0.935 | 0.937 |
| BERTimbau | No | 3 | 0.927 | 0.923 | 0.925 |
| Albertina-PTPT | Yes | 5 | 0.935 | 0.933 | 0.934 |
| Albertina-PTPT | No | 4 | 0.926 | 0.921 | 0.923 |
*(If you want to avoid model-assigned labels, filter to `fos.assignment == "heuristic"`.)*
## Language and Portuguese variety identification
Automatic language and variety labels were added to support stricter filtering and quality control.
### Procedure

- Language ID with `papluca/xlm-roberta-base-language-detection`.
- Variety ID (PT-PT vs. PT-BR) with `liaad/PtVId` (a Portuguese BERT fine-tuned for variety identification).
For each document:
- 31 random snippets of up to 800 characters were sampled.
- Language is considered Portuguese if mean confidence ≥ 0.5 across snippets.
- Variety is PT-PT if more than half of snippets are classified as PT-PT.
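The sampling and aggregation rules above can be sketched as follows. The model calls are mocked out: the per-snippet confidences and labels would come from the two classifiers, and the function names are illustrative, not from the release code.

```python
import random


def sample_snippets(text: str, n: int = 31, max_len: int = 800,
                    seed: int = 0) -> list[str]:
    """Draw n random character windows of up to max_len characters."""
    rng = random.Random(seed)
    if len(text) <= max_len:
        return [text]
    starts = [rng.randrange(0, len(text) - max_len + 1) for _ in range(n)]
    return [text[s:s + max_len] for s in starts]


def is_portuguese(confidences: list[float], threshold: float = 0.5):
    """pt.auto rule: mean per-snippet confidence >= 0.5."""
    mean_conf = sum(confidences) / len(confidences)
    return mean_conf >= threshold, mean_conf


def is_pt_pt(labels: list[bool]) -> bool:
    """pt.pt.auto rule: more than half of the snippets classified as PT-PT."""
    return sum(labels) > len(labels) / 2
```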
### Results

- Of the processed Markdown files, 32,137 (93.7%) were classified as Portuguese (`pt.auto`).
- Of those, 29,766 (92.6%) were classified as PT-PT (`pt.pt.auto`).
- Mean confidence: 0.91 ± 0.09 (Portuguese) and 0.88 ± 0.12 (PT-PT).
### Why `pt.auto` may be false even though the record was selected as Portuguese
A manual inspection of a sample of non-PT classifications found common causes:
- text extraction artifacts/image-heavy PDFs/encoding problems (~41%)
- metadata language errors (~28%)
- truly multilingual documents (~13%)
- model false negatives (~10%)
- incomplete PDFs in the repository (~7%)
*(For consistency, such documents were kept, but they can be excluded by filtering on `pt.auto`.)*
## Some useful filters

### Strict Portuguese subset

Keep only documents that the automatic language ID considers Portuguese: `pt.auto == "true"`

Optionally add a confidence threshold (example): `pt.auto == "true" AND float(pt.mean.confidence.auto) >= 0.8`

### Strict European Portuguese (PT-PT) subset

`pt.auto == "true" AND pt.pt.auto == "true"`

Optionally add a confidence threshold (example): `pt.pt.auto == "true" AND float(pt.pt.mean.confidence.auto) >= 0.8`

### Heuristic-only FOS subset (avoid classifier-assigned FOS)

`fos.assignment == "heuristic"`
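The strict PT-PT filter can be written as a plain Python predicate. Column names follow this card; storing the flags as the strings `"true"`/`"false"` is an assumption, so adjust the comparisons if they turn out to be booleans. The sample rows are invented.

```python
def strict_pt_pt(row: dict, min_conf: float = 0.8) -> bool:
    """Strict European Portuguese subset with a confidence threshold.

    Assumes string-valued flags ("true"/"false") per the card's examples.
    """
    return (
        row["pt.auto"] == "true"
        and row["pt.pt.auto"] == "true"
        and float(row["pt.pt.mean.confidence.auto"]) >= min_conf
    )


rows = [
    {"pt.auto": "true", "pt.pt.auto": "true",
     "pt.pt.mean.confidence.auto": "0.93"},
    {"pt.auto": "true", "pt.pt.auto": "false",
     "pt.pt.mean.confidence.auto": "0.95"},
]
subset = [r for r in rows if strict_pt_pt(r)]  # keeps only the first row
```

The same predicate can also be passed to `datasets.Dataset.filter` when loading the corpus with the Hugging Face `datasets` library.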
## Ethical Considerations

All documents included in CorEGe-PT were sourced from a publicly available repository. We ensured that the documents were in open access and, when such information was available, covered by permissive licenses. In the process, we excluded documents that were in closed access, under embargo, or covered by the most restrictive licenses (ND, no derivatives). In addition, we acknowledge all sources in the corpus metadata, thus recognizing the intellectual contributions of authors. To support reproducibility, we provide detailed documentation of the corpus construction process, including data sources, selection criteria, and pre- and post-processing steps. The corpus is intended for research use, and we encourage responsible usage in accordance with ethical research practices.
## Citation
If you find CorEGe-PT useful in your research, please consider citing:
```bibtex
@inproceedings{kuhn_etal:lrec2026,
  author    = {Tanara Zingano Kuhn and Jos{\'e} Matos and Bruno Neves and Daniela Pereira and Elisabete Ca{\c c}{\~a}o and Ivo Sim{\~o}es and Jacinto Estima and Delfim Le{\~a}o and Hugo {Gon{\c c}alo Oliveira}},
  booktitle = {Proceedings of the 15th Language Resources and Evaluation Conference},
  pages     = {Accepted},
  publisher = {ELRA},
  series    = {LREC 2026},
  title     = {{CorEGe-PT}: {C}ompiling a {L}arge {C}orpus of {A}cademic {T}exts in~{P}ortuguese},
  year      = {2026}
}
```
## Acknowledgments
This work was partially supported by the AMALIA project, funded by FCT/IP in the context of measure RE-C05-i08 of the Portuguese Recovery and Resilience Program; by the Portuguese Recovery and Resilience Plan through project C645008882-00000055, Center for Responsible AI; and by national funds through FCT – Foundation for Science and Technology I.P., in the framework of the Project CISUC (UIDB/00326/2025 and UIDP/00326/2025).
## License
This project is licensed under the CC BY-NC-SA 4.0 license.

