---
license: cc-by-nc-sa-4.0
language:
- pt
size_categories:
- 10K<n<100K
tags:
- corpus
- pt-pt
- pt
- science
- academic
- llm
- high-quality
- markdown
- low-resource
pretty_name: CorEGe-PT
---

# *CorEGe-PT*: Corpus do Estudo Geral - Portuguese


**CorEGe-PT** is a large-scale **corpus of academic texts written in Portuguese (mainly European Portuguese)**, extracted from [Estudo Geral](https://www.uc.pt/en/sibuc/Estudo_Geral), the institutional repository of the [University of Coimbra](https://www.uc.pt/). It contains over **34,000 documents** and approximately **1.1 billion tokens**, making it the **largest available corpus of its kind for the Portuguese language**.

This dataset is designed to **support linguistic research** (Academic Discourse Studies) and the **training or adaptation of Large Language Models (LLMs)** for the academic domain.

---

## Dataset Details

- **Total Documents**: >34,000 
- **Total Tokens**: ~1.1 Billion 
- **Languages**: Portuguese (Primary), with a distinction between European (PT-PT) and Brazilian (PT-BR) varieties.
- **Format**: Markdown (extracted using [Docling](https://docling-project.github.io/docling/))


### Taxonomy

The corpus covers **five main Fields of Science and Technology (FOS)**:

- **Social Sciences** (37.9% of docs) 

- **Medical and Health Sciences** (24.8% of docs) 

- **Humanities** (16.4% of docs) 

- **Engineering and Technology Sciences** (12.4% of docs) 

- **Exact and Natural Sciences** (7.1% of docs)

*(Some documents may belong to more than one FOS.)*


![scientific_area_pie_chart (1)](https://cdn-uploads.huggingface.co/production/uploads/662cf1ddfc20896bf10fc211/yyJLHU1JNad1cnKFpHGKS.png)


![temporal_fos](https://cdn-uploads.huggingface.co/production/uploads/662cf1ddfc20896bf10fc211/dDHHTo_cLlyS4aTff80RJ.png)

### Data Fields

Each record contains the full text and rich metadata. The metadata fields **include original repository data and enriched fields added during post-processing**:

#### Core metadata (as released with each row)

- `Collection`: repository collection label used during compilation and for heuristic FOS mapping
- `dc.title`: title
- `dc.creator`: author(s) (separated with `||`)
- `dc.date.issued`: publication/issue date
- `dc.subject`: keywords (often separated with `||`)
- `dc.type`: document type (e.g., master thesis, article, book part, etc.)
- `dc.identifier.uri`: persistent handle/URI
- `dc.rights`: access rights label (e.g., open access/embargo labels)
- `dc.rights.uri`: license URI when present (often missing)
- `dc.subject.fos`: assigned Field of Science (may be single or multiple)
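
Several of the fields above (e.g., `dc.creator` and `dc.subject`) pack multiple values into one string separated by `||`. A minimal sketch of how to split them, using a hypothetical record (not an actual corpus row):

```python
# Hypothetical record illustrating the '||'-separated metadata fields.
record = {
    "dc.creator": "Silva, Ana||Costa, Rui",
    "dc.subject": "linguistics||academic discourse||corpora",
}

def split_multivalue(value):
    """Split a '||'-separated metadata field into a clean list of values."""
    return [part.strip() for part in value.split("||") if part.strip()]

authors = split_multivalue(record["dc.creator"])
keywords = split_multivalue(record["dc.subject"])
```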

#### Added post-processing fields

- `fos.assignment`: how `dc.subject.fos` was assigned:
  - `heuristic` (mapped from `Collection`)
  - `classifier` (model-based, for records not mappable by heuristic)
- `pt.auto`: whether automatic language identification classified the extracted text as Portuguese
- `pt.mean.confidence.auto`: mean confidence across snippets for Portuguese language identification
- `pt.pt.auto`: whether automatic variety identification classified the text as **European Portuguese (PT-PT)**
- `pt.pt.mean.confidence.auto`: mean confidence across snippets for PT-PT classification

---

## Download and text extraction

Extraction outcomes using Docling:

- **At least one Markdown file was successfully extracted for 93.1% of the selected records**.
- **Download failures**: **1,952** records could not be downloaded (e.g., imminent embargo expirations, internal server errors, withdrawn records).
- **Conversion failures**: Docling failed on **419 files (1.3%)** (e.g., non-PDF formats, corrupted files).

Total: **34,285 Markdown files**, totaling **~1.1B tokens**.

### What to expect in `text`

The `text` field contains Markdown with section headings (`##` only) and may contain conversion artifacts such as:

- `<!-- image -->` placeholders (image-heavy PDFs)
- encoding/character issues from source PDFs/extraction errors
- mixed-language segments (e.g., abstracts in English)
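
If the image placeholders are not useful for your use case, they can be stripped before further processing. A minimal sketch (the sample string is illustrative, not a corpus excerpt):

```python
import re

def strip_artifacts(text):
    """Remove Docling '<!-- image -->' placeholders and collapse the
    runs of blank lines they leave behind."""
    text = text.replace("<!-- image -->", "")
    return re.sub(r"\n{3,}", "\n\n", text).strip()

sample = "## Introdução\n\n<!-- image -->\n\nTexto do documento."
cleaned = strip_artifacts(sample)
```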

---

## Field of Science (FOS) assignment

### Heuristic mapping

Most records were mapped to one of the five OECD high-level FOS categories using the repository `Collection` structure (Agricultural Sciences is not represented due to the institutional organization of the source repository).

### Classifier for unmapped records

For records where `Collection` did not allow a reliable heuristic mapping (e.g., generic thesis/dissertation collections), supervised classification was used to assign FOS.

- Training data: ~27k records with heuristic FOS labels.
- Inputs: combinations of `dc.title`, `dc.subject`, abstract, and optionally `dc.description` (when available).
- Best-performing model used for assignment: **[BERTimbau](https://huggingface.co/neuralmind/bert-base-portuguese-cased)**, fine-tuned for FOS classification.

Reported model performance (macro scores):

| Model          | dc.description | Epochs | Precision | Recall | F1    |
|----------------|----------------|--------|----------:|-------:|------:|
| mBERT          | Yes            | 5      | 0.934     | 0.936  | 0.935 |
| mBERT          | No             | 4      | 0.926     | 0.918  | 0.922 |
| **BERTimbau**  | **Yes**        | **2**  | **0.938** | 0.935  | **0.937** |
| BERTimbau      | No             | 3      | 0.927     | 0.923  | 0.925 |
| Albertina-PTPT | Yes            | 5      | 0.935     | 0.933  | 0.934 |
| Albertina-PTPT | No             | 4      | 0.926     | 0.921  | 0.923 |

*(If you want to avoid model-assigned labels, filter to `fos.assignment == "heuristic"`.)*

---

## Language and Portuguese variety identification

Automatic language and variety labels were added to support stricter filtering and quality control.

### Procedure

1) **Language ID** with [papluca/xlm-roberta-base-language-detection](https://huggingface.co/papluca/xlm-roberta-base-language-detection).
2) **Variety ID** (PT-PT vs. PT-BR) with [liaad/PtVId](https://huggingface.co/liaad/PtVId) (a Portuguese BERT model fine-tuned for variety identification).

For each document:
- **31 random snippets** of up to **800 characters** were sampled.
- Language is considered Portuguese if **mean confidence ≥ 0.5** across snippets.
- Variety is PT-PT if **more than half** of snippets are classified as PT-PT.
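
The snippet-voting scheme above can be sketched as follows; `classify_snippet` is a hypothetical stand-in for the actual model calls, not part of this dataset's tooling:

```python
import random

def classify_document(text, classify_snippet, n_snippets=31, max_chars=800,
                      pt_threshold=0.5):
    """Sketch of the per-document voting scheme described above.

    `classify_snippet` is assumed to return a tuple
    (is_pt: bool, pt_confidence: float, is_ptpt: bool) for one snippet.
    """
    # Sample 31 random snippets of up to 800 characters each.
    snippets = []
    for _ in range(n_snippets):
        start = random.randrange(max(1, len(text) - max_chars + 1))
        snippets.append(text[start:start + max_chars])

    results = [classify_snippet(s) for s in snippets]

    # Portuguese if the mean confidence across snippets is >= 0.5.
    mean_conf = sum(conf for _, conf, _ in results) / len(results)
    pt_auto = mean_conf >= pt_threshold

    # PT-PT if more than half of the snippets are classified as PT-PT.
    ptpt_votes = sum(1 for _, _, is_ptpt in results if is_ptpt)
    ptpt_auto = ptpt_votes > len(results) / 2

    return pt_auto, mean_conf, ptpt_auto
```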

### Results 

- Of the processed Markdown files, **32,137 (93.7%)** were classified as Portuguese (`pt.auto`).
- Of those, **29,766 (92.6%)** were classified as PT-PT (`pt.pt.auto`).
- Mean confidence: **0.91 ± 0.09** (Portuguese) and **0.88 ± 0.12** (PT-PT).

### Why `pt.auto` may be false even though the record was selected as Portuguese

A manual inspection of a sample of non-PT classifications found common causes:
- text extraction artifacts/image-heavy PDFs/encoding problems (~41%)
- metadata language errors (~28%)
- truly multilingual documents (~13%)
- model false negatives (~10%)
- incomplete PDFs in the repository (~7%)

*(For consistency, such documents were kept, but they can be excluded by filtering on `pt.auto`.)*

---

## Some useful filters

### Strict Portuguese subset
Keep only documents that the automatic language ID considers Portuguese:
- `pt.auto == "true"`

Optionally add a confidence threshold (example):
- `pt.auto == "true" AND float(pt.mean.confidence.auto) >= 0.8`

### Strict European Portuguese (pt-PT) subset
- `pt.auto == "true" AND pt.pt.auto == "true"`

Optionally add a confidence threshold (example):
- `pt.pt.auto == "true" AND float(pt.pt.mean.confidence.auto) >= 0.8`

### Heuristic-only FOS subset (avoid classifier-assigned FOS)
- `fos.assignment == "heuristic"`
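
The filters above can be applied with pandas, for example. The rows below are hypothetical and only mirror the field names and string-typed values described in this card:

```python
import pandas as pd

# Hypothetical rows mirroring the card's fields; boolean flags and
# confidences are stored as strings in the released parquet.
df = pd.DataFrame({
    "pt.auto": ["true", "true", "false"],
    "pt.mean.confidence.auto": ["0.95", "0.62", "0.40"],
    "pt.pt.auto": ["true", "false", "false"],
    "fos.assignment": ["heuristic", "classifier", "heuristic"],
})

# Strict European Portuguese subset with a confidence floor.
strict_ptpt = df[
    (df["pt.auto"] == "true")
    & (df["pt.pt.auto"] == "true")
    & (df["pt.mean.confidence.auto"].astype(float) >= 0.8)
]

# Heuristic-only FOS subset (avoids classifier-assigned labels).
heuristic_only = df[df["fos.assignment"] == "heuristic"]
```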

---

## Ethical Considerations

All documents included in CorEGe-PT were sourced from a publicly available repository. 
We ensured that the documents were open access and, when such information was available, covered by permissive licenses. In the process, we excluded documents that were closed access, embargoed, or covered by the most restrictive (ND) licenses.
In addition, we acknowledge all the sources in the corpus metadata, thus recognizing the intellectual contributions of authors.
To support reproducibility, we provide detailed documentation of the corpus construction process, including data sources, selection criteria, and pre- and post-processing steps. The corpus is intended for research use, and we encourage responsible usage in accordance with ethical research practices.

## Citation

If you find **CorEGe-PT** useful in your research, please consider citing:

```
@inproceedings{kuhn_etal:lrec2026,
	author = {Tanara Zingano Kuhn and Jos{\'e} Matos and Bruno Neves and Daniela Pereira and Elisabete Ca{\c c}{\~a}o and Ivo Sim{\~o}es and Jacinto Estima and Delfim Le{\~a}o and Hugo {Gon{\c c}alo Oliveira}},
	booktitle = {Proceedings of the 15th Language Resources and Evaluation Conference},
	pages = {Accepted},
	publisher = {ELRA},
	series = {LREC 2026},
	title = {{CorEGe-PT}: {C}ompiling a {L}arge {C}orpus of {A}cademic {T}exts in~{P}ortuguese},
	year = {2026}}
```

## Acknowledgments

This work was partially supported by the AMALIA project, funded by FCT/IP in the context of measure RE-C05-i08 of the Portuguese Recovery and Resilience Program;
by the Portuguese Recovery and Resilience Plan through project C645008882-00000055, Center for Responsible AI;
and by national funds through FCT – Foundation for Science and Technology I.P., in the framework of the Project  CISUC (UIDB/00326/2025 and UIDP/00326/2025).

## License

This project is licensed under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.en) license.