Hani Park committed on
Commit
4b9f728
·
1 Parent(s): 4fafdff

Upload v1.2.0 with new CSV files

Files changed (2)
  1. README.md +24 -52
  2. prepare_hf_upload.py +35 -0
README.md CHANGED
@@ -6,9 +6,9 @@ tags:
  - bioassay
  pretty_name: CHAFF
  size_categories:
- - 10K<n<100K
+ - 100K<n<1M
  dataset_info:
- - config_name: CHAFF
+ - config_name: ChAFF
  features:
  - name: Type
  dtype: string
@@ -19,7 +19,7 @@ dataset_info:
  - name: SMILES
  dtype: string
  splits:
- - name: train
+ - name: train
  num_bytes: 5834311
  num_examples: 69777
  download_size: 1913364
@@ -50,58 +50,30 @@ configs:
  - split: train
  path: ChAFF/train-*
  ---
- # Dataset description
- This dataset collection contains ~70K curated active compound lists from 73 different [PubChem](https://pubchem.ncbi.nlm.nih.gov) BioAssay (AID) datasets, focusing on known assay interference artifacts. We downloaded raw assay results from PubChem using their AID identifiers and extracted only the compounds labeled as "Active."
-
- We then applied SMILES standardization using RDKit and MolVS, including molecule sanitization and fragment removal.
-
- Each dataset includes the following columns:
-
- - Type: Interference type (e.g., Autofluorescence, REDOX)
- - AID: PubChem Assay ID
- - CID: PubChem Compound ID
-   (some CIDs may no longer match their SMILES because of the SMILES sanitization)
- - SMILES: Curated chemical representation
-
- The final dataset is suitable for training and evaluating machine learning models.
-
- - If you are looking for a large combined dataset with various AIDs, [click here](https://huggingface.co/datasets/maomlab/CHAFF/tree/1.0.0/CHAFF).
- - If you prefer individual datasets for each AID, [click here](https://huggingface.co/datasets/maomlab/CHAFF/tree/1.0.0/CHAFF_individual_AIDs).
- - Current version of our repository: 1.0.0
-
-
- # List of PubChem AIDs included:
- 632, 1641, 1730, 1857, 1926, 435026, 504689, 720541, 1159604,
- 587, 588, 589, 590, 591, 592, 593, 594, 709, 923, 1480, 1483, 1696, 1775, 1776, 2124, 2757,
- 588517, 588620, 624483, 720675, 720678, 720680, 720681, 720682, 720686, 720687,
- 584, 585, 1476, 1478, 485294, 485341,
- 411, 1006, 1269, 1379, 1891, 2515, 2530, 366887, 366889, 366891, 488838, 493175,
- 588342, 588498, 602357, 602358, 602364, 602474, 602475, 602476, 602477,
- 624030, 652016, 720522, 720835, 1224835, 1347047,
- 672, 682, 936,
- 878, 888, 929, 1234
-
-
- # Dataset processing
- If you are interested in our dataset curation process, follow these scripts in [our repository](https://huggingface.co/datasets/maomlab/CHAFF/tree/main/CHAFF_processing_scripts).
-
- ### st1_download_pubchem.py
- Download bioassay datasets from PubChem using a single AID (Assay ID) and save them in CSV format.
-
- ### st2_run_download_pubchem.py
- Automate the download process for multiple AIDs by taking a list of AIDs and sequentially downloading each dataset.
-
- ### st3_extract_active_compounds.py
- Parse each dataset and filter rows labeled as "Active".
-
- ### st4_smiles_curation.py
- Standardize and validate the 'CanonicalSMILES' column, applying sanitization, standardization, and fragment removal using RDKit and MolVS.
-
- ### st5_detergent_smiles_curation.py
- Special handling for AIDs 585, 584, 1476, 1478, 485341, 485294.
- These datasets are processed to remove overlapping compounds based on detergent-related assay pairs (without detergent vs. with detergent).
- This ensures non-specific binders (likely aggregators) are excluded.
-
- - AID 585 → removed 48 compounds also active in AID 584
- - AID 1476 → removed 439 compounds also active in AID 1478
- - AID 485341 → removed 44 compounds also active in AID 485294
+ # ChAFF datasets
+ This dataset collection contains ~200K curated Active compound lists from ~90 different BioAssay datasets, focusing on known assay interference artifacts. We applied SMILES standardization using RDKit and MolVS, including molecule sanitization and fragment removal. The final dataset is suitable for training and evaluating machine learning models.
+
+ ## Types
+ - Absorbance
+ - Artifact
+ - Autofluorescence
+ - ColloidalAggregators
+ - HeavyHitters
+ - LuciferaseInhibition
+ - Misannotation
+ - Reactivity
+ - REDOX
+
+ ## Dataset Columns
+
+ | Column      | Description                   |
+ |-------------|-------------------------------|
+ | Type        | Task domain (e.g. Absorbance) |
+ | DatasetName | Source dataset name           |
+ | AID         | PubChem Assay ID              |
+ | ID          | Identifier for the compound   |
+ | IDType      | Type of identifier (e.g. CID) |
+ | SMILES      | Curated SMILES                |
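Both README versions mention SMILES standardization with RDKit and MolVS (sanitization plus fragment removal). A minimal sketch of that kind of curation step using RDKit's built-in standardizer — the function name `curate_smiles` is illustrative, not one of the repository's scripts:

```python
from rdkit import Chem
from rdkit.Chem.MolStandardize import rdMolStandardize

def curate_smiles(smiles):
    """Sanitize a SMILES string, keep its largest fragment, and canonicalize it."""
    mol = Chem.MolFromSmiles(smiles)  # sanitizes by default; None on parse failure
    if mol is None:
        return None
    mol = rdMolStandardize.LargestFragmentChooser().choose(mol)  # drop salts/solvents
    mol = rdMolStandardize.Uncharger().uncharge(mol)             # neutralize charges
    return Chem.MolToSmiles(mol)

print(curate_smiles("CCO.[Na+].[Cl-]"))  # salt components stripped, ethanol kept
```

MolVS exposes equivalent operations (e.g. its `Standardizer`); the exact combination used for this dataset is in st4_smiles_curation.py.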
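The st5 detergent-pair step described in the old README removes, from each no-detergent assay, the actives that also appear in its with-detergent counterpart (e.g. AID 585 filtered against AID 584). The core operation is a set intersection over compound IDs; a hedged sketch, with function and variable names of our own choosing:

```python
def remove_detergent_overlap(no_detergent_actives, with_detergent_actives):
    """Keep actives from the no-detergent assay that are NOT also active
    with detergent; such overlapping hits are likely aggregators."""
    overlap = set(no_detergent_actives) & set(with_detergent_actives)
    kept = [cid for cid in no_detergent_actives if cid not in overlap]
    return kept, len(overlap)

# Hypothetical CIDs, not real assay data
kept, n_removed = remove_detergent_overlap([101, 102, 103, 104], [102, 104, 999])
print(kept, n_removed)  # [101, 103] 2
```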
prepare_hf_upload.py ADDED
@@ -0,0 +1,35 @@
+ import os
+ import shutil
+
+ # Paths
+ csv_source_folder = "/home/HuggingFaceFinal"
+ hf_upload_folder = "/home/HuggingFaceUpload"
+ data_folder = os.path.join(hf_upload_folder, "data")
+
+ os.makedirs(data_folder, exist_ok=True)
+
+ # Copy the curated CSV files into the upload folder
+ csv_files = []
+ for file in os.listdir(csv_source_folder):
+     if file.endswith(".csv"):
+         src = os.path.join(csv_source_folder, file)
+         dst = os.path.join(data_folder, file)
+         shutil.copy(src, dst)
+         csv_files.append(file)
+
+ print(f"Number of copied CSV files: {len(csv_files)}")
+
+ # Write a minimal dataset_info YAML describing the features and splits
+ yaml_path = os.path.join(hf_upload_folder, "dataset.yaml")
+ with open(yaml_path, "w") as f:
+     f.write("dataset_info:\n")
+     f.write("  features:\n")
+     for col in ["Type", "DatasetName", "AID", "ID", "IDType", "SMILES"]:
+         f.write(f"  - name: {col}\n")
+         f.write("    dtype: string\n")
+     f.write("  splits:\n")
+     for fname in csv_files:
+         split_name = os.path.splitext(fname)[0]
+         f.write(f"  - name: {split_name}\n")
+
+ print("dataset.yaml created")
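The YAML-writing half of prepare_hf_upload.py can be factored into a helper that writes to any file-like object, which makes its output easy to inspect in memory before uploading. A sketch — the helper name and the two-space indentation scheme are our assumptions, mirroring the script:

```python
import io

def write_dataset_yaml(f, columns, split_names):
    """Write a minimal dataset_info block (all-string features plus split names)."""
    f.write("dataset_info:\n")
    f.write("  features:\n")
    for col in columns:
        f.write(f"  - name: {col}\n")
        f.write("    dtype: string\n")
    f.write("  splits:\n")
    for name in split_names:
        f.write(f"  - name: {name}\n")

# Exercise the helper without touching the filesystem
buf = io.StringIO()
write_dataset_yaml(buf, ["Type", "SMILES"], ["train"])
print(buf.getvalue())
```

The same helper could then be called with the real open file handle and the full column list from the script.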