---
license: cc-by-4.0
language:
- pt
pretty_name: PT-BR SciELO Articles (Brazilian Open-Access Research)
size_categories:
- 100K<n<1M
task_categories:
- text-generation
tags:
- pt-br
- brazilian-portuguese
- academic
- scientific
- scielo
- research
- pretraining
configs:
- config_name: default
data_files:
- split: train
path: data/*.parquet
---

# PT-BR SciELO Articles (Brazilian Open-Access Research)
Part of the MagTina350m pretrain corpus release by Dataseek
under the Magestic.ai brand. This is one of nine silver-layer datasets that fed
`dataseek/magtina350m-base`.
## Summary
154 K full-text Brazilian Portuguese research and review articles from SciELO Brazil (post-2010), spanning health sciences, social sciences, humanities and engineering. Avg ~34 KB per article — the largest academic-prose corpus in the MagTina350m mix.
## Source and collection method
Source: SciELO articlemeta API → fulltext HTML scrape → HTML→text → PT-only gate → year ≥ 2010 → doctype ∈ {research-article, review-article}.
ETL script (in the MagTina1B repository): `scripts/etl/24_scielo_articles_v1.py` (public release of the ETL scripts is on the roadmap; until then this data card documents the recipe in full).
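The ETL script itself is not yet public; as a rough illustration of the HTML→text step, the scraped fulltext pages can be flattened with an off-the-shelf HTML parser. A minimal sketch, assuming BeautifulSoup (not confirmed as the library actually used in `24_scielo_articles_v1.py`):

```python
from bs4 import BeautifulSoup

def html_to_text(html: str) -> str:
    """Flatten a scraped SciELO fulltext page into plain text (illustrative)."""
    soup = BeautifulSoup(html, "html.parser")
    # Drop script/style noise before extracting the visible text.
    for tag in soup(["script", "style"]):
        tag.decompose()
    return soup.get_text(" ", strip=True)
```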
## Filters and deduplication
The following filters were applied before this dataset reached its silver (release-ready) state:
- doctype ∈ {research-article, review-article}
- year ≥ 2010
- lang = pt
- len(text) ≥ 500 chars
Global URL-normalised deduplication was applied across all web-derived corpora
(webpages, news, blogs) so the same article does not appear twice across
those three datasets.
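A minimal sketch of the row-level gates and a URL-based dedup key, assuming simple normalisation rules (the production normaliser may apply more aggressive canonicalisation, e.g. tracking-parameter stripping):

```python
from urllib.parse import urlsplit

VALID_DOCTYPES = {"research-article", "review-article"}

def passes_gates(row: dict) -> bool:
    """Silver-layer filters listed above."""
    return (row["doctype"] in VALID_DOCTYPES
            and row["year"] >= 2010
            and row["lang"] == "pt"
            and len(row["text"]) >= 500)

def dedup_key(url: str) -> str:
    """Normalised URL used to dedup across the web-derived corpora (illustrative)."""
    parts = urlsplit(url.strip().lower())
    host = parts.netloc.removeprefix("www.")
    return host + parts.path.rstrip("/")   # scheme, query string and fragment dropped
```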
## Schema
| Column | Type | Description |
|---|---|---|
| `text` | string | Article fulltext. |
| `source` | string | Always 'scielo.br'. |
| `lang` | string | Language code (typically 'pt'). |
| `year` | int32 | Publication year. |
| `doctype` | string | 'research-article' or 'review-article'. |
| `doc_id` | string | SciELO PID (links back to the article). |
| `n_chars` | int64 | Character count. |
Columns dropped at export (kept private as ETL internals): none
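For consumers reading the Parquet shards directly (outside the `datasets` library), the table above corresponds to roughly the following Arrow schema. This is a sketch for orientation only; the schema declared in the released Parquet files is authoritative:

```python
import pyarrow as pa

# Approximate Arrow schema matching the column table above.
schema = pa.schema([
    ("text",    pa.string()),   # article fulltext
    ("source",  pa.string()),   # always 'scielo.br'
    ("lang",    pa.string()),   # typically 'pt'
    ("year",    pa.int32()),    # publication year
    ("doctype", pa.string()),   # 'research-article' or 'review-article'
    ("doc_id",  pa.string()),   # SciELO PID
    ("n_chars", pa.int64()),    # character count
])
print(schema)
```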
## Size statistics
| Metric | Value |
|---|---|
| Rows | 154.2 K (154,218) |
| Characters | 5.29 B (5,291,892,295) |
| Estimated tokens (PT-BR, chars / 4.5) | 1.18 B |
| Compressed Parquet on disk | ~2.96 GB |
Used in MagTina350m pretrain: 1.176 B tokens (6.8 % of MagTina350m's 17.39 B-token pretrain budget).
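The token figure is derived rather than measured: characters divided by the 4.5 chars-per-token heuristic quoted in the table. A quick check of the numbers above:

```python
chars = 5_291_892_295                 # total characters (table above)
tokens = chars / 4.5                  # PT-BR heuristic divisor
print(f"{tokens / 1e9:.2f} B")        # ≈ 1.18 B estimated tokens
print(f"{tokens / 17.39e9:.1%}")      # ≈ 6.8 % of the 17.39 B-token budget
```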
## How to load
```python
from datasets import load_dataset

ds = load_dataset("dataseek/ptbr-scielo", split="train", streaming=True)
for row in ds.take(5):
    print(row["text"][:200])
```
Streaming is recommended for the larger datasets in this release, such as this one. For the smaller sibling datasets (`ptbr-dou`, `ptbr-books-publicos`), eager loading is fine.
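The metadata columns can also be used to take a slice of the corpus without materialising the full text. A small example using the standard `datasets` streaming filter (the predicate runs lazily as you iterate):

```python
from datasets import load_dataset

ds = load_dataset("dataseek/ptbr-scielo", split="train", streaming=True)

# Keep only recent review articles; rows are dropped on the fly.
recent_reviews = ds.filter(
    lambda row: row["doctype"] == "review-article" and row["year"] >= 2020
)

for row in recent_reviews.take(3):
    print(row["doc_id"], row["year"], row["text"][:120])
```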
## Licensing
CC-BY 4.0 — the dominant license across SciELO Brazil (open access). Individual articles may carry CC-BY-NC variants; downstream users should honour the per-article licenses available via the SciELO API. Attribution by article DOI is required for redistribution.
Upstream attribution: SciELO Brazil — https://scielo.br/
## Citation
If you use this dataset, please cite both the upstream source and MagTina350m:
```bibtex
@misc{magtina350m_pretrain_2026,
  title     = {MagTina350m pretrain corpus — PT-BR SciELO Articles (Brazilian Open-Access Research)},
  author    = {Frasson, Ricardo and {Dataseek Team}},
  year      = 2026,
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/dataseek/ptbr-scielo}
}
```
Please also honour the upstream license terms — for CC-BY-derived data, attribution to the upstream creators is mandatory; for CC-BY-SA, downstream derivatives must remain CC-BY-SA-compatible.
## Intended use
- Pre-training, continued pre-training, or domain adaptation of Brazilian Portuguese language models.
- PT-BR NLP research where statistically representative public-web / academic / legal / encyclopedic data is needed.
- Reproducing or improving on the MagTina350m result.
## Known limitations and PII statement
- Text was NOT PII-scrubbed. URLs, emails, phone numbers and personal names that occurred in the source data may still be present. We strip zero-width characters and normalise Unicode but we do not run an NER pass.
- Crawled data carries upstream biases of CommonCrawl, Wikipedia, news outlets and academic institutions present in the source. We have not audited these.
- No safety filtering beyond langid and basic alpha-ratio gates. Hate-speech, spam and adult content present in the source remain unless caught incidentally.
- Provenance preserved at row level. Every row has either a `url`, `source` or `doc_id` column that points back to upstream — this is intentional, so consumers can re-license, redact or filter.
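For example, a downstream redaction pass can drop rows by PID directly from locally downloaded Parquet shards. A sketch assuming a hypothetical `blocked` list and a local `data/` directory matching the `data/*.parquet` layout in the configs above:

```python
import pyarrow.compute as pc
import pyarrow.dataset as pads

blocked = ["S0000-00000000000000000"]   # hypothetical placeholder PIDs to redact
shards = pads.dataset("data", format="parquet")
kept = shards.to_table(filter=~pc.field("doc_id").isin(blocked))
```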
## Related releases
- Model: `dataseek/magtina350m-base` (354.6 M params, pretrained on this corpus + 8 sibling datasets)
- Instruct model: `dataseek/magtina350m-instruct`
- Sibling datasets: see `dataseek/ptbr-*` for all nine corpora