mmontoya@ujaen.es committed
Commit e217c79 · 1 Parent(s): 39949cc
Readme Updates v2

Browse files:
- ALIA-biomedical-datatrove.parquet +2 -2
- README.md +14 -13
ALIA-biomedical-datatrove.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:ac7f3024b9fbff2512e94e94ede32951db72b540007d05c8618c87aecc7fd702
+size 7550802834
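The updated LFS pointer above references a parquet file of 7,550,802,834 bytes (about 7.5 GB). As an editor-added sketch, not part of the commit, the file can be inspected locally once `git lfs pull` has materialized it; the column layout is not shown in this diff, so only the metadata is printed:

```python
# Editor's sketch: inspect the updated parquet without loading it into memory.
# Assumes the file was fetched via `git lfs pull`; the column names are not
# specified anywhere in this diff, so we only print schema and row count.
import pyarrow.parquet as pq

pf = pq.ParquetFile("ALIA-biomedical-datatrove.parquet")
print(pf.metadata.num_rows)  # expected to be on the order of the instance count in the README
print(pf.schema_arrow)       # actual columns cannot be confirmed from this diff alone
```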
README.md CHANGED
@@ -105,30 +105,31 @@ Each instance in the corpus has the following structure:
 ### Data Splits
 
 The complete dataset contains the following main sources with their statistics:
+
 | Source Dataset | Num Tokens | Num Instances | Tokens Percentage | Link |
 | :--- | :---: | :---: | :---: | :--- |
 | SECOMCYC | 428,621 | 60 | 0.0077% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/SECOMCYC) |
 | SER | 441,098 | 12 | 0.0079% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/SER) |
-| SANGVA | 504,259 | 7 | 0.
+| SANGVA | 504,259 | 7 | 0.0091% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/SANGVA) |
 | Ministerio_Sanidad_Medic_Trans | 519,871 | 36 | 0.0093% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/Ministerio_Sanidad_Medic_Trans) |
 | CARMEN_I | 742,437 | 1,310 | 0.0133% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/CARMEN_I) |
 | Tox_Habits | 1,061,706 | 1,040 | 0.0191% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/Tox_Habits) |
 | SPA_Junta_De_Andalucia | 1,206,355 | 27 | 0.0217% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/SPA_Junta_De_Andalucia) |
 | AEPCP | 1,507,891 | 40 | 0.0271% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/AEPCP) |
-| RECCMI | 1,827,907 | 30 | 0.0328% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/RECCMI) |
 | AEPED | 1,946,814 | 788 | 0.0350% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/AEPED) |
 | Guia_Salud | 2,015,004 | 21 | 0.0362% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/Guia_Salud) |
-| BARR_2 | 2,108,752 | 2,858 | 0.
+| BARR_2 | 2,108,752 | 2,858 | 0.0379% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/BARR_2) |
 | Ministerio_Sanidad_Estrategias | 6,038,859 | 172 | 0.1084% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/Ministerio_Sanidad_Estrategias) |
-| Prod_Cient_AETSA | 11,169,765 | 303 | 0.
+| Prod_Cient_AETSA | 11,169,765 | 303 | 0.2006% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/Prod_Cient_AETSA) |
 | MedlinePlus | 11,259,425 | 5,531 | 0.2021% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/MedlinePlus) |
-| Multi_Clin_Sum | 38,208,297 | 53,691 | 0.
-| MESINESP_2 | 60,639,295 | 135,286 | 1.
-| Wikipedia_Biomedical | 64,881,930 | 39,601 | 1.
-| CIMA_AEMPS | 123,472,173 | 16,392 | 2.
-| Miscelanea_Roberta | 1,554,077,398 | 11,776 | 27.
-| Translated_Pubmed | 3,687,325,552 | 10,033,666 | 66.
-| **TOTAL** | **5,
+| Multi_Clin_Sum | 38,208,297 | 53,691 | 0.6860% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/Multi_Clin_Sum) |
+| MESINESP_2 | 60,639,295 | 135,286 | 1.0888% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/MESINESP_2) |
+| Wikipedia_Biomedical | 64,881,930 | 39,601 | 1.1650% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/Wikipedia_Biomedical) |
+| CIMA_AEMPS | 123,472,173 | 16,392 | 2.2169% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/CIMA_AEMPS) |
+| Miscelanea_Roberta | 1,554,077,398 | 11,776 | 27.9031% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/Miscelanea_Roberta) |
+| Translated_Pubmed | 3,687,325,552 | 10,033,666 | 66.2050% | [GitHub](https://github.com/sinai-uja/ALIA-UJA/tree/dev/data/llms/datasets/biomedical/Translated_Pubmed) |
+| **TOTAL** | **5,569,555,502** | **10,302,617** | **100.00%** | - |
+
 
 
 ### Example Usage
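In the hunk above the RECCMI row is dropped and the "Tokens Percentage" column and TOTAL row are recomputed. Each updated percentage follows directly from the source's token count divided by the new total of 5,569,555,502 tokens. A small editor-added check, not part of the commit:

```python
# Editor's sketch: recompute a few of the updated percentages from the table above.
total_tokens = 5_569_555_502  # updated TOTAL from the new table

sources = {
    "SANGVA": 504_259,
    "BARR_2": 2_108_752,
    "Translated_Pubmed": 3_687_325_552,
}

for name, tokens in sources.items():
    print(f"{name}: {tokens / total_tokens:.4%}")
# SANGVA: 0.0091%, BARR_2: 0.0379%, Translated_Pubmed: 66.2050%
```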
@@ -197,7 +198,7 @@ All data come from official and publicly accessible sources.
 
 #### Preprocessing system
 
-The corpus is based on a previous version of nearly
+The corpus is based on a previous version of nearly 6 billion tokens that was processed with an advanced cleaning methodology based on [datatrove](https://github.com/huggingface/datatrove). This system automates the cleaning and preparation of large volumes of text in Spanish, eliminating duplicate and low-quality content.
 
 **Step 1: Configuration and paths loading**
 - Loading YAML configuration files with parameters (language threshold, filters, etc.)
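Step 1 in the context above describes loading YAML configuration files that carry parameters such as a language threshold and filter settings. A minimal, editor-added sketch of what such a loading step could look like; the file name and keys are assumptions, not identifiers taken from the ALIA-UJA repository:

```python
# Editor's sketch of a Step-1-style configuration load. "config.yaml",
# "language_threshold" and "filters" are hypothetical names, not confirmed
# identifiers from the ALIA-UJA preprocessing pipeline.
import yaml  # PyYAML

with open("config.yaml", "r", encoding="utf-8") as fh:
    config = yaml.safe_load(fh)

language_threshold = config.get("language_threshold", 0.65)  # hypothetical key
active_filters = config.get("filters", [])                   # hypothetical key
print(language_threshold, active_filters)
```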
@@ -218,7 +219,7 @@ The corpus is based on a previous version of nearly 20 billion tokens that was p
 
 Token counting was performed using [tiktoken](https://github.com/openai/tiktoken).
 
-The final result is a corpus of **
+The final result is a corpus of **5,569,555,502 tokens** distributed across **10,302,617 instances**, optimized for language model training.
 
 ### Annotations
 
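The hunk above states that token counting was done with tiktoken. As an editor-added illustration only: the specific encoding behind the reported counts is not stated in the diff, so the `cl100k_base` encoding here is an assumption.

```python
# Editor's sketch: count tokens in a text with tiktoken.
# The encoding name is an assumption; the README does not specify which
# tiktoken encoding produced the 5,569,555,502-token figure.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
sample = "El corpus biomédico en español reúne textos clínicos y científicos."
print(len(enc.encode(sample)))  # token count for this sample string
```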