---
annotations_creators:
- no-annotation
language_creators:
- found
task_categories:
- text-classification
tags:
- genomics
- dna
- dnabert
- bioinformatics
- human-dna
- tokenized
source_datasets:
- simecek/Human_DNA_v0
language:
- en
license: other
license_name: unspecified
---
# `Human_DNA_v0_DNABert6tokenized`
## Dataset Description
The `simecek/Human_DNA_v0_DNABert6tokenized` dataset is a processed version of the `simecek/Human_DNA_v0` dataset. It consists of human DNA sequences tokenized with a 6-mer approach, making it directly compatible with models such as DNABert for classification and other downstream tasks.
This dataset can be used alongside the `davidcechak/Worm_DNA_v0_DNABert6tokenized` dataset for comparative genomic analysis or to build classifiers that can distinguish between human and worm DNA. This provides a valuable resource for cross-species machine learning tasks in bioinformatics.
## Dataset Structure
The dataset is stored in the `parquet` format and is split into training and testing subsets.
### Data Fields
The dataset includes the following fields; the snippet after this list shows how to inspect them:
* **tokens**: A list of integers representing the 6-mer token IDs.
* **text**: The original DNA sequence string, consisting of the nucleotides `A`, `T`, `C`, and `G`.
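
To see these fields concretely, you can print a single record after loading the dataset. This sketch assumes the split is named `train`, as used in the loading example later in this card:

```python
from datasets import load_dataset

# Stream a single record to inspect the schema without downloading everything.
ds = load_dataset("simecek/Human_DNA_v0_DNABert6tokenized", split="train", streaming=True)
row = next(iter(ds))
print(row["text"][:24])   # first 24 nucleotides of the raw sequence
print(row["tokens"][:4])  # first four integer 6-mer token IDs
```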
## Dataset Creation
### Data Source
The base `Human_DNA_v0` dataset likely consists of DNA sequences from the human reference genome.
### Preprocessing and Tokenization
The raw sequences were processed using a 6-mer tokenization scheme (sketched in code after the steps below):
1. **Splitting**: Original DNA sequences were split into non-overlapping 6-mer tokens.
2. **Mapping**: Each unique 6-mer was mapped to a unique integer ID to create a vocabulary.
3. **Encoding**: The tokenized sequences were then represented as a list of these integer IDs.
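
As an illustration, here is a minimal sketch of that pipeline in Python. The function and variable names are hypothetical, and the toy vocabulary is built on the fly; DNABert-style tokenizers typically use a fixed vocabulary of all 4^6 = 4,096 possible 6-mers plus special tokens.

```python
def split_into_6mers(sequence: str, k: int = 6) -> list[str]:
    """Split a DNA sequence into non-overlapping k-mers, dropping any
    incomplete trailing fragment (one possible convention)."""
    return [sequence[i:i + k] for i in range(0, len(sequence) - k + 1, k)]

sequences = ["ATCGGATTACAG", "GGCATTCCAATA"]
vocab: dict[str, int] = {}     # 6-mer string -> integer ID
encoded: list[list[int]] = []

for seq in sequences:
    ids = []
    for kmer in split_into_6mers(seq):
        if kmer not in vocab:
            vocab[kmer] = len(vocab)  # assign the next free ID
        ids.append(vocab[kmer])
    encoded.append(ids)

print(encoded)  # [[0, 1], [2, 3]]
```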
## Intended Uses
The dataset can be used for:
* **Comparative Genomics**: Comparing genomic features and training models to distinguish between species (e.g., human vs. worm); a data-preparation sketch follows this list.
* **Genomic Classification**: Training and evaluating machine learning models on tasks like species identification from DNA sequences.
* **LLM Pre-training**: Providing a corpus for pre-training large language models on human DNA sequences, which can then be fine-tuned for more specific downstream tasks.
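
As an example of the comparative use case, here is a hedged sketch of assembling a binary human-vs-worm classification set from the two datasets named above. It assumes both datasets share the same schema and a `train` split:

```python
from datasets import load_dataset, concatenate_datasets

human = load_dataset("simecek/Human_DNA_v0_DNABert6tokenized", split="train")
worm = load_dataset("davidcechak/Worm_DNA_v0_DNABert6tokenized", split="train")

# Attach a species label to each example: 0 = human, 1 = worm.
human = human.map(lambda _: {"label": 0})
worm = worm.map(lambda _: {"label": 1})

# Merge and shuffle into a single binary classification dataset.
combined = concatenate_datasets([human, worm]).shuffle(seed=42)
print(combined.features)
```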
## Limitations and Ethical Considerations
* **Unspecified Origin**: Without an official dataset card from the author, the precise origin and collection methodology of the sequences are unknown, which limits reproducibility and makes potential biases hard to assess.
* **Licensing**: The license is currently unspecified. For any public or commercial use, it is necessary to verify the terms with the author, Petr Simecek, on Hugging Face.
## How to Get the Dataset
You can easily load this dataset from the Hugging Face Hub using the `datasets` library:
```python
from datasets import load_dataset
# Load the tokenized dataset
dataset = load_dataset("simecek/Human_DNA_v0_DNABert6tokenized")
# Access the training split
train_dataset = dataset["train"]
```
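
Continuing from the snippet above, a common next step is to have the dataset return the token IDs as tensors for model input; this assumes a PyTorch workflow:

```python
# Return the `tokens` column as PyTorch tensors when indexing.
train_dataset.set_format(type="torch", columns=["tokens"])
print(train_dataset[0]["tokens"].shape)
```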