---
license: mit
---
## Summary
This dataset provides a benchmark for evaluating a model's ability to leverage the richer genetic information in longer sequences to achieve more accurate inference. Using data from the Human Pangenome Reference Consortium (BioProject ID: PRJNA730823), we designed a population classification task covering African, East Asian, and European population groups. From each sample's VCF file and the reference genome sequence, we generated sample pseudo-sequences. Based on the variant site information recorded in the VCF files, we extracted a variant-dense region from chromosome 9. We used three sequence lengths: 8,192 bp (8K), 32,768 bp (32K), and 131,072 bp (128K). An XGBoost classifier was employed to perform classification on individual sequences.

## Usage
```python
from datasets import load_dataset

# Download the whole dataset
dataset = load_dataset("BGI-HangzhouAI/Benchmark_Dataset-Human_population_classification")

# Download a specific task
task_name = "Human_population_classification_8192"
dataset = load_dataset(
    "BGI-HangzhouAI/Benchmark_Dataset-Human_population_classification",
    data_files={
        "train": f"{task_name}/train.jsonl",
        "test": f"{task_name}/test.jsonl",
        "eval": f"{task_name}/eval.jsonl",
    },
)
```

## Benchmark tasks
| Task | `task_name` | Input fields | # Train Seqs | # Validation Seqs | # Test Seqs |
|------|-------------|--------------|--------------|-------------------|-------------|
| Human_population_classification 8k | `Human_population_classification_8192` | {seq, label} | 23,172 | 2,906 | 2,916 |
| Human_population_classification 32k | `Human_population_classification_32768` | {seq, label} | 23,207 | 2,913 | 2,925 |
| Human_population_classification 128k | `Human_population_classification_131072` | {seq, label} | 23,623 | 2,830 | 2,957 |

| Population | Sample counts | Label |
|------------|---------------|-------|
| CEU-European | 30 | 0 |
| AFR-African | 69 | 1 |
| EAS-East Asian | 50 | 2 |

## Data processing
### 1. Pseudo-sequence Generation
For each sample, pseudo-sequences (hap1 and hap2) were generated from its VCF file and the reference genome sequence using `bcftools`.
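What the consensus step does can be illustrated in pure Python; the sketch below (a hypothetical helper, not the actual pipeline code) applies simple SNVs to a reference window, ignoring indels and phasing, which `bcftools consensus` handles properly:

```python
def apply_snvs(ref_seq: str, variants: list[tuple[int, str, str]]) -> str:
    """Apply SNVs (0-based pos, ref allele, alt allele) to a reference string.

    A very simplified stand-in for `bcftools consensus` when building a
    haplotype pseudo-sequence; indels and phasing are ignored here.
    """
    seq = list(ref_seq)
    for pos, ref, alt in variants:
        # Sanity-check that the VCF ref allele matches the reference genome.
        assert seq[pos] == ref, f"ref allele mismatch at position {pos}"
        seq[pos] = alt
    return "".join(seq)
```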

### 2. VCF Variant Region Statistics
Using the sample VCF files, sliding windows of three different lengths (8K, 32K, 128K) were applied from the start of chromosome 9 of the reference genome, with consecutive windows overlapping by half the window length. The number of variants within each window was counted, and windows were ranked in descending order of variant count to identify variant-dense genomic coordinates. Chromosome 9 was chosen arbitrarily; other autosomes could also be used.
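The windowing and ranking above can be sketched as follows (a hypothetical helper; variant positions are assumed to be 0-based integers for one chromosome):

```python
from bisect import bisect_left

def rank_windows(variant_positions, chrom_len, window, step=None):
    """Count variants per half-overlapping window, ranked densest-first.

    Returns a list of ((start, end), count) tuples sorted by descending count.
    """
    step = step or window // 2  # consecutive windows overlap by half the window
    positions = sorted(variant_positions)
    ranked = []
    for start in range(0, chrom_len - window + 1, step):
        end = start + window
        # Number of variants with start <= pos < end, via binary search.
        count = bisect_left(positions, end) - bisect_left(positions, start)
        ranked.append(((start, end), count))
    ranked.sort(key=lambda w: w[1], reverse=True)
    return ranked
```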

### 3. Centromere Removal
Centromeric regions, which are repetitive and non-coding and thus unsuitable for variant or classification tasks, were filtered out according to a BED file. This yielded the final mapping of genomic windows to their corresponding variant counts.
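A minimal sketch of the BED-based filter (hypothetical helper; BED intervals are half-open and 0-based, and here restricted to a single chromosome):

```python
def remove_overlapping(windows, bed_intervals):
    """Drop any (start, end) window that overlaps a centromeric BED interval.

    `windows` maps (start, end) -> variant count; `bed_intervals` is a list of
    half-open (start, end) intervals from a BED file for one chromosome.
    """
    def overlaps(win):
        ws, we = win
        # Two half-open intervals overlap iff each starts before the other ends.
        return any(ws < be and bs < we for bs, be in bed_intervals)

    return {win: n for win, n in windows.items() if not overlaps(win)}
```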

### 4. Data Selection
The samples for each label were split into training, validation, and test sets in an 8:1:1 ratio. Based on the previously obtained window-variant count mapping, the hap1 pseudo-sequences for chromosome 9 of each sample were segmented. Regions were selected starting from the highest variant count downwards, while ensuring a roughly balanced number of sequences for each label.
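The per-label 8:1:1 split can be sketched like this (a hypothetical helper; the fixed seed is an assumption to keep the split reproducible):

```python
import random

def split_samples(sample_ids, seed=0):
    """Shuffle one label's sample IDs and split them 8:1:1 into train/eval/test."""
    ids = sorted(sample_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n_train = round(len(ids) * 0.8)
    n_val = round(len(ids) * 0.1)
    return {
        "train": ids[:n_train],
        "eval": ids[n_train:n_train + n_val],
        "test": ids[n_train + n_val:],
    }
```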

### 5. Final format
Datasets are saved in JSONL format. Each record contains:
- `"seq"` — the DNA sequence string (A/C/G/T, uppercase)
- `"label"` — ternary class indicator (0 = CEU, 1 = AFR, 2 = EAS)
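Records can be parsed line by line with the standard library; the sequence below is a made-up illustration of the record shape, not real data:

```python
import json

# One JSONL line in the shape described above (illustrative values only).
line = '{"seq": "ACGTACGT", "label": 1}'

record = json.loads(line)
print(record["seq"], record["label"])
```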

### 6. Additional information
The XGBoost model here used only the training and test sets. The reserved validation set is available for algorithms that require it, such as a Multilayer Perceptron (MLP), for hyperparameter tuning.
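A tabular classifier such as XGBoost needs fixed-length numeric features, so the DNA strings must be featurized first. One common choice is k-mer counting; this is an assumption for illustration, as the README does not specify the encoding used:

```python
from itertools import product

def kmer_counts(seq: str, k: int = 3) -> list[int]:
    """Count occurrences of every possible DNA k-mer, giving a 4**k-dim vector.

    The resulting fixed-length vector can be fed to an XGBoost (or any
    tabular) classifier; k=3 yields 64 features per sequence.
    """
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    counts = [0] * len(kmers)
    for i in range(len(seq) - k + 1):
        km = seq[i:i + k]
        if km in index:  # skip k-mers containing N or other ambiguity codes
            counts[index[km]] += 1
    return counts
```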