Add comprehensive dataset card
#1
by AbdulkareemOmer - opened
README.md ADDED
@@ -0,0 +1,55 @@
---
license: other
tags:
- biology
- genomics
- DNA
- huggingscience
- science
pretty_name: "Human DNA v0"
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- fill-mask
---

# Dataset Card for Human DNA v0
## Dataset Description
**Repository:** [simecek/Human_DNA_v0](https://huggingface.co/datasets/simecek/Human_DNA_v0)
### Dataset Summary

The `Human_DNA_v0` dataset provides a large corpus of nucleotide sequences from the human genome, designed as a foundational training resource for developing and evaluating language models in genomics. The data consists of long strings over the four DNA bases: **A** (adenine), **C** (cytosine), **G** (guanine), and **T** (thymine).

The dataset is intended for pre-training models that learn the underlying patterns, syntax, and long-range dependencies of the human genetic code. Such "Genomic Foundation Models" have applications in predicting gene locations, understanding regulatory regions, and identifying variants associated with disease. 🧬
### Supported Tasks and Leaderboards
This dataset is primarily intended for **self-supervised learning** tasks, similar to how models like BERT or GPT are trained on natural language.
* **Masked Language Modeling (Fill-Mask):** Models can be trained to predict a masked or missing nucleotide in a sequence. This helps the model learn contextual information within the DNA strand (see the masking sketch after this list).
* **Next-Token Prediction (Text Generation):** Models can be trained to predict the next nucleotide in a sequence. This is the basis for generative models that can produce realistic DNA sequences.
* **Downstream Fine-tuning:** Models pre-trained on this dataset can be fine-tuned for a variety of specific genomic tasks, such as promoter identification, splice site prediction, or classification of non-coding regions.
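
As a concrete illustration of the fill-mask setup, the sketch below builds character-level inputs and labels by hand. It is a minimal example assuming the plain A/C/G/T alphabet described above and the common `-100` ignore-index convention; it is not code shipped with the dataset.

```python
import random

# Character-level vocabulary over the four bases, plus a mask token
# (assumes sequences contain only A, C, G, T, as this card states).
VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3, "[MASK]": 4}

def mask_sequence(seq: str, mask_prob: float = 0.15, seed: int = 0):
    """Return (input_ids, labels) for masked language modeling.

    labels is -100 at unmasked positions, the usual convention for
    telling an MLM loss which tokens to ignore.
    """
    rng = random.Random(seed)
    input_ids, labels = [], []
    for base in seq:
        if rng.random() < mask_prob:
            input_ids.append(VOCAB["[MASK]"])
            labels.append(VOCAB[base])   # model must recover the original base
        else:
            input_ids.append(VOCAB[base])
            labels.append(-100)          # ignored by the loss
    return input_ids, labels

input_ids, labels = mask_sequence("ACGTACGTGGCA")
```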
### Data Fields
The dataset is provided in Parquet format and contains a single, straightforward column:
* `sequence`: (string) A long string of characters representing a segment of the human DNA sequence. Each character is one of `A`, `C`, `G`, or `T`.
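
For a quick look at this field, the snippet below loads the dataset with the 🤗 `datasets` library and prints the start of the first record (a sketch assuming the standard `load_dataset` API; the column name `sequence` is taken from this card):

```python
from datasets import load_dataset

# Fetch the dataset from the Hugging Face Hub; it exposes a single "train" split.
ds = load_dataset("simecek/Human_DNA_v0")

print(ds["train"])                       # row count and column names
print(ds["train"][0]["sequence"][:80])   # first 80 bases of the first record
```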
### Data Splits
* **`train`**: Contains 292,955 rows of DNA sequences for model training.

There are no pre-defined validation or test splits, so users should create their own from the training data as needed for their experiments, for example with `train_test_split` as sketched below.
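
A minimal way to carve out a held-out set with the `datasets` API (the 5% size and seed are arbitrary illustrative choices):

```python
from datasets import load_dataset

ds = load_dataset("simecek/Human_DNA_v0")

# Hold out a fraction of rows for evaluation; a fixed seed makes the
# split reproducible across runs.
splits = ds["train"].train_test_split(test_size=0.05, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```

Note that nearby or repeated genomic regions can be highly similar, so a random row-level split may give optimistic evaluation numbers compared with holding out whole chromosomes.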
### Curation and Rationale
While the original curation details are sparse, this dataset was created to provide a simple, large-scale resource for applying modern NLP techniques to genomics. By formatting genomic data in a way that is immediately accessible to machine learning libraries, it lowers the barrier for ML practitioners to contribute to computational biology. It has been successfully used as a pre-training corpus for models like [vesteinn/gpt2-dna](https://huggingface.co/vesteinn/gpt2-dna).
### Considerations for Use
* **Character-level Processing:** DNA sequences are inherently character-level, so tokenization strategy is a key consideration. Simple character-level tokenizers are common, but methods such as BPE (Byte-Pair Encoding) or k-mer based approaches may also be effective. A **k-mer** tokenizer, for example, treats each substring of *k* consecutive bases (e.g., 6-mers) as a single token, trading a larger vocabulary for shorter input sequences; a minimal sketch follows.
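
A toy k-mer splitter for illustration (the function name and defaults here are illustrative choices, not an established API):

```python
def kmer_tokenize(seq: str, k: int = 6, stride: int = 6) -> list[str]:
    """Split a DNA string into k-mers.

    Tokens are non-overlapping when stride == k and overlapping
    when stride < k.
    """
    return [seq[i:i + k] for i in range(0, len(seq) - k + 1, stride)]

print(kmer_tokenize("ACGTACGTACGT"))              # ['ACGTAC', 'GTACGT']
print(kmer_tokenize("ACGTACGT", k=4, stride=2))   # ['ACGT', 'GTAC', 'ACGT']
```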
|