---
annotations_creators:
- no-annotation
language_creators:
- found
task_categories:
- text-classification
tags:
- genomics
- dna
- dnabert
- bioinformatics
- human-dna
- tokenized
source_datasets:
- simecek/Human_DNA_v0
language:
- en
license: other
license_name: unspecified
---

# `Human_DNA_v0_DNABert6tokenized`
| |
## Dataset Description
| |
The `simecek/Human_DNA_v0_DNABert6tokenized` dataset is a processed version of the `simecek/Human_DNA_v0` dataset. It consists of human DNA sequences that have been tokenized with a 6-mer approach, making them directly compatible with models like DNABert for classification and other downstream tasks.

|
This dataset can be used alongside the `davidcechak/Worm_DNA_v0_DNABert6tokenized` dataset for comparative genomic analysis, or to build classifiers that distinguish human DNA from worm DNA. Together, the two datasets provide a valuable resource for cross-species machine learning tasks in bioinformatics.

|
## Dataset Structure

|
The dataset is available in `parquet` format and is split into training and testing subsets.

|
### Data Fields

|
The dataset includes the following fields:
* **tokens**: A list of integers representing the 6-mer token IDs.
* **text**: The original DNA sequence string, consisting of the nucleotides `A`, `T`, `C`, and `G`.

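For illustration, a single record pairs the two fields. The values below are made up, but show the shape of a record under the non-overlapping 6-mer scheme described later in this card, where each token ID covers six characters of `text`:

```python
# A hypothetical record illustrating the two fields (all values are made up).
record = {
    "text": "ATCGATCGATCG",  # raw nucleotide string over A/T/C/G
    "tokens": [412, 1287],   # integer 6-mer token IDs (illustrative values)
}

# The text uses only the four nucleotides.
assert set(record["text"]) <= set("ATCG")
# Under a non-overlapping 6-mer scheme, each token covers six characters.
assert len(record["tokens"]) == len(record["text"]) // 6
```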
|
## Dataset Creation

|
### Data Source

|
The base `Human_DNA_v0` dataset likely consists of DNA sequences from the human reference genome.

|
### Preprocessing and Tokenization

|
The raw sequences were processed using a 6-mer tokenization scheme:
1. **Splitting**: Original DNA sequences were split into non-overlapping 6-mer tokens.
2. **Mapping**: Each unique 6-mer was mapped to a unique integer ID to create a vocabulary.
3. **Encoding**: Each tokenized sequence was then represented as a list of these integer IDs.

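The three steps above can be sketched as follows. This is a minimal illustration of the described scheme, not the exact code used to build this dataset; note that a 6-mer vocabulary over `A`, `T`, `C`, `G` has at most 4^6 = 4096 entries (plus any special tokens a model adds):

```python
def split_into_kmers(sequence, k=6):
    """Step 1: split a DNA sequence into non-overlapping k-mers.
    A trailing fragment shorter than k is dropped in this sketch."""
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, k)]

def build_vocab(sequences, k=6):
    """Step 2: map each unique k-mer observed in the corpus to an integer ID."""
    vocab = {}
    for seq in sequences:
        for kmer in split_into_kmers(seq, k):
            vocab.setdefault(kmer, len(vocab))
    return vocab

def encode(sequence, vocab, k=6):
    """Step 3: represent a tokenized sequence as a list of integer IDs."""
    return [vocab[kmer] for kmer in split_into_kmers(sequence, k)]

corpus = ["ATCGATCGATCG", "ATCGATTTTTTT"]  # toy sequences
vocab = build_vocab(corpus)                # {'ATCGAT': 0, 'CGATCG': 1, 'TTTTTT': 2}
ids = encode("ATCGATCGATCG", vocab)        # [0, 1]
```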
|
## Intended Uses

|
The dataset can be used for:
* **Comparative Genomics**: Comparing genomic features and training models to distinguish between species (e.g., human vs. worm).
* **Genomic Classification**: Training and evaluating machine learning models on tasks like species identification from DNA sequences.
* **LLM Pre-training**: Providing a corpus for pre-training large language models on human DNA sequences, which can then be fine-tuned for more specific downstream tasks.

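As a concrete sketch of the cross-species classification use case, the snippet below assembles a labeled two-class training set. The sequences here are hypothetical stand-ins for examples drawn from this dataset and its worm counterpart:

```python
import random

# Hypothetical stand-ins for sequences from the human and worm datasets.
human_seqs = ["ATCGATCGATCG", "GGCCTTAAGGCC"]
worm_seqs = ["TTAACCGGTTAA", "CCGGAATTCCGG"]

# Label 0 = human, 1 = worm, then shuffle before training a classifier.
examples = [(seq, 0) for seq in human_seqs] + [(seq, 1) for seq in worm_seqs]
random.seed(0)
random.shuffle(examples)
```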
|
## Limitations and Ethical Considerations

|
* **Unspecified Origin**: Without an official dataset card from the author, the precise origin and collection methodology of the sequences are unknown, which limits reproducibility and makes potential biases hard to assess.
* **Licensing**: The license is currently unspecified. Before any public or commercial use, verify the terms with the author, Petr Simecek, on Hugging Face.

|
## How to Get the Dataset

|
You can load this dataset from the Hugging Face Hub using the `datasets` library:

|
```python
from datasets import load_dataset

# Load the tokenized dataset
dataset = load_dataset("simecek/Human_DNA_v0_DNABert6tokenized")

# Access the training split
train_dataset = dataset["train"]
```