YADAV0206 committed on
Commit 7c45e8f · verified · 1 Parent(s): a9be772

Update README.md


---
license: mit
---

## This dataset is a sample of the data used to build PathoPreter: 12k rows for train and 1.2k for test

<a href="https://huggingface.co/YADAV0206/PathoPreter-4B-SNV-Pathogen-ClinVar-gnomAD">https://huggingface.co/YADAV0206/PathoPreter-4B-SNV-Pathogen-ClinVar-gnomAD</a>

The model's full training dataset contains approximately 144k pathogenic variants and 1.05 million benign variants, for a total of about 1.2 million samples. The test set contains 55k distinct samples; running the base test plus 11 separate ablation tests on the same 55k rows yields roughly 12 × 55k = 660k evaluations.

To get the dataset, contact
**Rohit Yadav**

<div>
<a href="mailto:yrohit1825@gmail.com">yrohit1825@gmail.com</a> | <a href="https://github.com/YADAV1825/PathoPreter">GitHub: https://github.com/YADAV1825/PathoPreter</a>
</div>

---

## 📦 Dataset Availability

<div>Dataset Construction and Availability:</div>

The datasets used to train and evaluate PathoPreter (including large-scale ClinVar-derived SNV corpora, controlled ablation test suites, and robustness evaluation datasets) were fully constructed in-house using publicly available, permissively licensed genomic resources such as ClinVar and gnomAD. All upstream sources are properly credited and explicitly permit commercial use and redistribution.

Significant original engineering and curation effort was applied beyond raw data usage. This included large-scale extraction, normalization, schema unification, quality control, deduplication, and strict train–test disjointness enforcement. Approximately 8 million raw ClinVar variants and ~250 GB of gnomAD VCF data were processed and merged into production-grade Parquet datasets optimized for large-scale analytics, machine learning training, and downstream integration.
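
To make the deduplication and train–test disjointness claims concrete, here is a minimal sketch of the kind of leakage check this implies. The Parquet file names and the variant-key columns (`chrom`, `pos`, `ref`, `alt`) are illustrative assumptions, not the dataset's confirmed schema:

```python
# Minimal sketch of a train/test disjointness check over the Parquet splits.
# File names and column names are assumptions for illustration only.
import pandas as pd

KEY = ["chrom", "pos", "ref", "alt"]  # canonical SNV identity key (assumed)

train = pd.read_parquet("train.parquet", columns=KEY)
test = pd.read_parquet("test.parquet", columns=KEY)

train_keys = set(map(tuple, train.drop_duplicates().itertuples(index=False)))
test_keys = set(map(tuple, test.drop_duplicates().itertuples(index=False)))

overlap = train_keys & test_keys
assert not overlap, f"Leakage: {len(overlap)} variants appear in both splits"
print("Train/test splits are disjoint at the variant level.")
```

Keying on the full (chrom, pos, ref, alt) tuple rather than position alone ensures that distinct alternate alleles at the same genomic site are treated as different variants.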

The end-to-end data construction process required approximately 150 hours of compute time and over 250 hours of expert engineering and curation work. The resulting datasets constitute a high-value derived data asset, distinct from the original source distributions.

These datasets are available for licensed distribution to startups, enterprises, and research organizations for use in applied genomics, AI/ML model development, benchmarking, variant prioritization workflows, and internal research. Commercial licensing, redistribution terms, and support options are available upon request.

Available components (in both Parquet and CSV) include:
- Large-scale ClinVar-style SNV training dataset
- Held-out test set with identical variants across ablations
- Controlled ablation datasets (signal-removal studies)
- Fake-variant robustness evaluation dataset (see the FAKE VARIANT ROBUSTNESS TEST section below for why this matters)
- Balanced CSV subsets suitable for classical ML training (see the baseline sketch after this list)
- Data audit and leakage-verification scripts
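
As a sketch of how a balanced CSV subset could feed a classical ML baseline, here is a minimal scikit-learn example. The file name, feature columns, and the binary `label` encoding are hypothetical:

```python
# Minimal sketch: a classical ML baseline on a balanced CSV subset.
# File name, feature columns, and label encoding are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("balanced_subset.csv")   # hypothetical file name
X = df.drop(columns=["label"])            # numeric feature columns assumed
y = df["label"]                           # assumed: 1 = pathogenic, 0 = benign

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```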

If you are interested in:
- dataset licensing
- research or industry use
- collaboration or benchmarking
- reproducing or extending this work

please contact:

**Rohit Yadav**

<div>
<a href="mailto:yrohit1825@gmail.com">yrohit1825@gmail.com</a> | <a href="https://github.com/YADAV1825/PathoPreter">GitHub: https://github.com/YADAV1825/PathoPreter</a>
</div>

Requests are evaluated on a case-by-case basis.