dataseek committed
Commit a25f42f · verified · 1 Parent(s): c600ae5

Initial dataset release — MagTina350m pretrain corpus slice

Files changed (2)
  1. README.md +147 -0
  2. data/data_0.parquet +3 -0
README.md ADDED
@@ -0,0 +1,147 @@
+ ---
+ license: cc-by-4.0
+ language:
+ - pt
+ pretty_name: PT-BR SciELO Articles (Brazilian Open-Access Research)
+ size_categories:
+ - 100K<n<1M
+ task_categories:
+ - text-generation
+ tags:
+ - pt-br
+ - brazilian-portuguese
+ - academic
+ - scientific
+ - scielo
+ - research
+ - pretraining
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/*.parquet
+ ---
+
+ # PT-BR SciELO Articles (Brazilian Open-Access Research)
+
+ Part of the **MagTina350m pretrain corpus release** by [Dataseek](https://dataseek.com.br)
+ under the Magestic.ai brand. This is one of nine silver-layer datasets that fed
+ [`dataseek/magtina350m-base`](https://huggingface.co/dataseek/magtina350m-base).
+
+ ## Summary
+
+ 154 K full-text Brazilian Portuguese research and review articles from SciELO Brazil (post-2010), spanning the health sciences, social sciences, humanities, and engineering. At an average of ~34 KB per article, this is the largest academic-prose corpus in the MagTina350m mix.
+
+ ## Source and collection method
+
+ Source pipeline: SciELO `articlemeta` API → fulltext HTML scrape → HTML-to-text conversion → PT-only language gate → year ≥ 2010 → doctype ∈ {research-article, review-article}.
+
+ **ETL script (in the MagTina1B repository):** [`scripts/etl/24_scielo_articles_v1.py`](https://huggingface.co/dataseek/magtina350m-base) *(public release of the ETL scripts is on the roadmap; until then, this data card documents the recipe in full).*
+
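+ Since the script itself is not yet public, here is a minimal sketch of the HTML-to-text and PT-only gate stages, for illustration only. BeautifulSoup and `langdetect` are assumptions; the real ETL may use different tooling:
+
+ ```python
+ # Hypothetical sketch of the HTML -> text -> PT-only stages (not the real ETL).
+ from bs4 import BeautifulSoup
+ from langdetect import detect
+
+ def gate_article(html: str) -> str | None:
+     """Convert scraped article HTML to plain text; return None if gated out."""
+     text = BeautifulSoup(html, "html.parser").get_text(separator="\n", strip=True)
+     if len(text) < 500:              # length gate (see the filter list below)
+         return None
+     if detect(text[:2000]) != "pt":  # language gate on a prefix, for speed
+         return None
+     return text
+ ```
+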
+ ## Filters and deduplication
+
+ The following filters were applied before this dataset reached its silver
+ (release-ready) state (a sketch of the equivalent predicate follows the list):
+
+ - doctype ∈ {research-article, review-article}
+ - year ≥ 2010
+ - lang = pt
+ - len(text) ≥ 500 chars
+
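+ In code, these gates amount to a row-level predicate like the following (a sketch; the field names match the schema below, but the helper itself is hypothetical):
+
+ ```python
+ # Hypothetical silver-layer gate mirroring the four filters listed above.
+ ALLOWED_DOCTYPES = {"research-article", "review-article"}
+
+ def passes_silver_gates(row: dict) -> bool:
+     return (
+         row["doctype"] in ALLOWED_DOCTYPES
+         and row["year"] >= 2010
+         and row["lang"] == "pt"
+         and len(row["text"]) >= 500
+     )
+ ```
+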
+ Global URL-normalised deduplication was applied across all web-derived corpora
+ (`webpages`, `news`, `blogs`) so the same article does not appear twice across
+ those three datasets.
+
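+ The exact normalisation rules are not published; a plausible minimal version (lower-case the host, drop the scheme, query string, fragment, and trailing slash, then dedupe on the normalised key) would look like this:
+
+ ```python
+ # Hypothetical URL-normalised dedup key, as one plausible reading of the recipe.
+ from urllib.parse import urlsplit
+
+ def normalise_url(url: str) -> str:
+     parts = urlsplit(url)
+     return parts.netloc.lower() + parts.path.rstrip("/")
+
+ seen: set[str] = set()
+
+ def is_duplicate(url: str) -> bool:
+     key = normalise_url(url)
+     if key in seen:
+         return True
+     seen.add(key)
+     return False
+ ```
+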
+ ## Schema
+
+ | Column | Type | Description |
+ |---|---|---|
+ | `text` | `string` | Article fulltext. |
+ | `source` | `string` | Always `scielo.br`. |
+ | `lang` | `string` | Language code (typically `pt`). |
+ | `year` | `int32` | Publication year. |
+ | `doctype` | `string` | `research-article` or `review-article`. |
+ | `doc_id` | `string` | SciELO PID (links back to the article). |
+ | `n_chars` | `int64` | Character count. |
+
+ Columns dropped at export (kept private as ETL internals): *none*.
+
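+ To sanity-check a downloaded shard against this schema without loading any rows (a sketch assuming `pyarrow` and the shard path in this repo):
+
+ ```python
+ # Reads only the Parquet footer metadata, not the row data.
+ import pyarrow.parquet as pq
+
+ schema = pq.read_schema("data/data_0.parquet")
+ expected = {"text", "source", "lang", "year", "doctype", "doc_id", "n_chars"}
+ assert expected.issubset(set(schema.names)), schema.names
+ print(schema)
+ ```
+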
+ ## Size statistics
+
+ | Metric | Value |
+ |---|---:|
+ | Rows | 154.2 K (154,218) |
+ | Characters | 5.29 B (5,291,892,295) |
+ | Estimated tokens (PT-BR, chars / 4.5) | 1.18 B |
+ | Compressed Parquet on disk | ~2.96 GB |
+
+ **Used in MagTina350m pretrain:** 1.176 B tokens
+ (6.8 % of MagTina350m's 17.39 B-token pretrain budget).
+
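+ The token estimate follows directly from the character count and the chars-per-token heuristic above:
+
+ ```python
+ chars = 5_291_892_295
+ tokens = chars / 4.5      # ≈ 1.176e9, the "1.18 B" row above
+ share = tokens / 17.39e9  # ≈ 0.068, the quoted 6.8 % of the pretrain budget
+ ```
+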
+ ## How to load
+
+ ```python
+ from datasets import load_dataset
+
+ # streaming=True avoids downloading the full ~3 GB Parquet shard up front
+ ds = load_dataset("dataseek/ptbr-scielo", split="train", streaming=True)
+ for row in ds.take(5):
+     print(row["text"][:200])
+ ```
+
+ Streaming is recommended for the larger configs. For the smaller datasets
+ (`ptbr-dou`, `ptbr-books-publicos`), eager loading is fine.
+
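+ Because the streaming dataset is an `IterableDataset`, rows can also be filtered on the fly by any schema column, e.g. to keep only recent review articles (continuing the snippet above):
+
+ ```python
+ recent_reviews = ds.filter(
+     lambda row: row["doctype"] == "review-article" and row["year"] >= 2020
+ )
+ for row in recent_reviews.take(3):
+     print(row["doc_id"], row["year"])
+ ```
+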
+ ## Licensing
+
+ CC-BY 4.0, the dominant license across SciELO Brazil (open access). Individual articles may carry CC-BY-NC variants; downstream users should honour the per-article licenses available via the SciELO API. Attribution by article DOI is required for redistribution.
+
+ **Upstream attribution:** SciELO Brazil, https://scielo.br/
+
+ ## Citation
+
+ If you use this dataset, please cite both the upstream source and MagTina350m:
+
+ ```bibtex
+ @misc{magtina350m_pretrain_2026,
+   title     = {MagTina350m pretrain corpus — PT-BR SciELO Articles (Brazilian Open-Access Research)},
+   author    = {Frasson, Ricardo and {Dataseek Team}},
+   year      = 2026,
+   publisher = {Hugging Face},
+   url       = {https://huggingface.co/datasets/dataseek/ptbr-scielo}
+ }
+ ```
+
+ Please also honour the upstream license terms: for CC-BY-derived data,
+ attribution to the upstream creators is mandatory; for CC-BY-SA, downstream
+ derivatives must remain CC-BY-SA-compatible.
+
+ ## Intended use
+
+ - Pre-training, continued pre-training, or domain adaptation of Brazilian Portuguese
+   language models.
+ - PT-BR NLP research where statistically representative public-web / academic /
+   legal / encyclopedic data is needed.
+ - Reproducing or improving on the MagTina350m result.
+
+ ## Known limitations and PII statement
+
+ - **Text was NOT PII-scrubbed.** URLs, emails, phone numbers, and personal names
+   that occurred in the source data may still be present. We strip zero-width
+   characters and normalise Unicode, but we do not run an NER pass.
+ - **Crawled data carries upstream biases** of CommonCrawl, Wikipedia, news outlets,
+   and academic institutions present in the source. We have not audited these.
+ - **No safety filtering** beyond langid and basic alpha-ratio gates. Hate speech,
+   spam, and adult content present in the source remain unless caught incidentally.
+ - **Provenance preserved at row level.** Every row has either a `url`, `source`, or
+   `doc_id` column that points back to upstream; this is intentional, so consumers
+   can re-license, redact, or filter (see the sketch after this list).
+
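+ Consumers who need PII redaction can run their own pass; below is a minimal regex-based sketch for emails and URLs (personal names and phone numbers need a proper NER pass, which this does not attempt):
+
+ ```python
+ # Hypothetical downstream redaction pass; intentionally conservative.
+ import re
+
+ EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
+ URL = re.compile(r"https?://\S+")
+
+ def redact(text: str) -> str:
+     text = EMAIL.sub("[EMAIL]", text)
+     return URL.sub("[URL]", text)
+ ```
+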
+ ## Related releases
+
+ - **Model:** [`dataseek/magtina350m-base`](https://huggingface.co/dataseek/magtina350m-base) (354.6 M params, pretrained on this corpus + 8 sibling datasets)
+ - **Instruct model:** [`dataseek/magtina350m-instruct`](https://huggingface.co/dataseek/magtina350m-instruct)
+ - **Sibling datasets:** see `dataseek/ptbr-*` for all nine corpora
+
+ ## License
+
+ [cc-by-4.0](https://spdx.org/licenses/cc-by-4.0.html)
data/data_0.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a6ae76bef0913215c5455f480acab264205a9ac5924e4b0a8172bc071b36a45
+ size 2835025250