linfei-mise committed on
Commit e7c7fb4 · 1 Parent(s): 27169b2

Add dataset loading support

Files changed (3)
  1. README.md +76 -0
  2. dataset_infos.json +3 -0
  3. toxiMol.py +106 -0
README.md ADDED
@@ -0,0 +1,76 @@
+ # ToxiMol Benchmark Dataset
+
+ The ToxiMol Benchmark Dataset is a collection of molecules from various toxicity datasets, designed for toxicity prediction and molecular structure repair tasks.
+
+ ## Dataset Description
+
+ This dataset contains molecules from 11 different toxicity datasets:
+
+ 1. **Ames** - Molecules with Ames mutagenicity test results
+ 2. **Carcinogens_Lagunin** - Carcinogenic molecules from the Lagunin dataset
+ 3. **ClinTox** - Molecules with clinical toxicity data
+ 4. **DILI** - Molecules associated with drug-induced liver injury
+ 5. **hERG** - Molecules with hERG channel inhibition data
+ 6. **hERG_Central** - A complementary large-scale hERG inhibition dataset
+ 7. **hERG_Karim** - hERG data from Karim et al.
+ 8. **LD50_Zhu** - Molecules with acute toxicity (LD50) data
+ 9. **Skin_Reaction** - Molecules associated with adverse skin reactions
+ 10. **Tox21** - Molecules from the Tox21 dataset (nuclear receptor and stress response pathways)
+ 11. **ToxCast** - Molecules from the ToxCast dataset with pathway-specific toxicity data
+
+ Each dataset entry includes:
+ - Task identifier
+ - Molecule ID
+ - SMILES string representation of the molecule
+ - Molecular structure image (PNG format)
+
+ ## Usage
+
+ You can load the dataset using the Hugging Face `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load a specific subdataset
+ ames_dataset = load_dataset("treasurels/ToxiMol-benchmark", "ames")
+
+ # Load another subdataset
+ tox21_dataset = load_dataset("treasurels/ToxiMol-benchmark", "tox21")
+
+ # Access the data
+ for example in ames_dataset['train']:
+     print(f"Task: {example['task']}")
+     print(f"ID: {example['id']}")
+     print(f"SMILES: {example['smiles']}")
+     print("-" * 50)
+ ```
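
Once loaded, each split behaves like a sequence of dicts, so plain Python comprehensions work for filtering. A minimal sketch, shown on stand-in records with hypothetical values so it runs without downloading the dataset:

```python
# Stand-in records mimicking what load_dataset returns (hypothetical values)
examples = [
    {"task": "ames", "id": 0, "smiles": "CCO"},
    {"task": "ames", "id": 1, "smiles": "c1ccccc1"},
    {"task": "tox21", "id": 0, "smiles": "CC(=O)O"},
]

# Keep only the SMILES strings belonging to the "ames" task
ames_smiles = [ex["smiles"] for ex in examples if ex["task"] == "ames"]
print(ames_smiles)  # ['CCO', 'c1ccccc1']
```

The same comprehension applies unchanged to a real `ames_dataset['train']` split.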
+ ## Dataset Structure
+
+ Each subdataset follows the same structure:
+
+ ```
+ {
+     "task": string,   # The toxicity task identifier
+     "id": int32,      # Molecule ID
+     "smiles": string  # SMILES representation of the molecule
+ }
+ ```
+
+ The molecular structure images are available in the repository in their respective subdirectories.
+
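
A lightweight schema check for records of this shape can be sketched in plain Python (the record values below are hypothetical; `int32` maps to a Python `int` on the consumer side):

```python
# Expected field types, mirroring the schema above
SCHEMA = {"task": str, "id": int, "smiles": str}

def is_valid(record):
    """Return True if the record has exactly the documented fields with the right types."""
    return set(record) == set(SCHEMA) and all(
        isinstance(record[key], expected) for key, expected in SCHEMA.items()
    )

print(is_valid({"task": "ames", "id": 0, "smiles": "CCO"}))  # True
print(is_valid({"task": "ames", "id": "0"}))                 # False (wrong type, missing field)
```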
+ ## Citation
+
+ If you use this dataset in your research, please cite:
+
+ ```
+ @misc{toxiMol2023,
+   author    = {Lin, Jason},
+   title     = {ToxiMol: A benchmark dataset for toxicity prediction and molecular structure repair},
+   year      = {2023},
+   publisher = {GitHub},
+ }
+ ```
+
+ ## License
+
+ [License information]
dataset_infos.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ade559ae2c43c3fe340d531736a1b8d9454a52bd950335f085feca73c0e40793
+ size 10290
toxiMol.py ADDED
@@ -0,0 +1,106 @@
+ """ToxiMol dataset: A benchmark dataset for toxicity prediction and molecular structure repair."""
+
+ import json
+ import os
+
+ import datasets
+
+
+ _CITATION = """
+ @misc{toxiMol2023,
+   author    = {Lin, Jason},
+   title     = {ToxiMol: A benchmark dataset for toxicity prediction and molecular structure repair},
+   year      = {2023},
+   publisher = {GitHub},
+ }
+ """
+
+ _DESCRIPTION = """
+ ToxiMol is a benchmark dataset for toxicity prediction and molecular structure repair.
+ It contains molecules from various toxicity datasets, including Ames mutagenicity,
+ carcinogenicity, clinical toxicity, drug-induced liver injury, cardiotoxicity (hERG),
+ skin reactions, acute toxicity (LD50), and other pathway-specific toxicity endpoints.
+ """
+
+ _HOMEPAGE = "https://huggingface.co/datasets/treasurels/ToxiMol-benchmark"
+
+ # Names of all the subdatasets (one builder config per entry)
+ _SUBDATASETS = [
+     "ames",
+     "carcinogens_lagunin",
+     "clintox",
+     "dili",
+     "herg",
+     "herg_central",
+     "herg_karim",
+     "ld50_zhu",
+     "skin_reaction",
+     "tox21",
+     "toxcast",
+ ]
+
+ # Feature schema shared by every subdataset
+ _FEATURES = {
+     "task": datasets.Value("string"),
+     "id": datasets.Value("int32"),
+     "smiles": datasets.Value("string"),
+ }
+
+
+ class ToxiMolConfig(datasets.BuilderConfig):
+     """BuilderConfig for ToxiMol."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for ToxiMol.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super().__init__(**kwargs)
+
+
+ class ToxiMol(datasets.GeneratorBasedBuilder):
+     """ToxiMol benchmark dataset for toxicity prediction and molecular structure repair."""
+
+     BUILDER_CONFIGS = [
+         ToxiMolConfig(
+             name=subdataset,
+             version=datasets.Version("1.0.0"),
+             description=f"ToxiMol {subdataset} dataset",
+         )
+         for subdataset in _SUBDATASETS
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(_FEATURES),
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         # This dataset has no predefined splits, so everything goes into a single train split
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"subdataset": self.config.name},
+             ),
+         ]
+
+     def _generate_examples(self, subdataset):
+         """Yields examples."""
+         # Path to the JSON file for the selected subdataset,
+         # relative to the repository root (e.g. "ames/ames.json")
+         json_path = os.path.join(subdataset, f"{subdataset}.json")
+
+         with open(json_path, "r", encoding="utf-8") as f:
+             data = json.load(f)
+
+         for i, item in enumerate(data):
+             yield i, {
+                 "task": item["task"],
+                 "id": item["id"],
+                 "smiles": item["smiles"],
+             }
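
The read-and-yield logic in `_generate_examples` can be exercised standalone, without the `datasets` machinery. A minimal sketch, using a throwaway directory and hypothetical records that mimic the repository layout (`<subdataset>/<subdataset>.json`):

```python
import json
import os
import tempfile

# Build a throwaway "ames" subdataset directory with hypothetical records
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "ames"), exist_ok=True)
records = [
    {"task": "ames", "id": 0, "smiles": "CCO"},
    {"task": "ames", "id": 1, "smiles": "c1ccccc1"},
]
with open(os.path.join(tmp, "ames", "ames.json"), "w", encoding="utf-8") as f:
    json.dump(records, f)

# Same read-and-yield pattern as _generate_examples, with an explicit base directory
def generate_examples(base, subdataset):
    json_path = os.path.join(base, subdataset, f"{subdataset}.json")
    with open(json_path, "r", encoding="utf-8") as f:
        data = json.load(f)
    for i, item in enumerate(data):
        yield i, {"task": item["task"], "id": item["id"], "smiles": item["smiles"]}

examples = list(generate_examples(tmp, "ames"))
print(len(examples))  # 2
```

Each yielded item is a `(key, example)` pair, which is exactly what `GeneratorBasedBuilder` expects from `_generate_examples`.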