Hani Park committed on
Commit f6200be · 1 Parent(s): 9b2546c

Revise usage part

Files changed (1)
  1. README.md +19 -41
README.md CHANGED
@@ -9,45 +9,19 @@ size_categories:
 - 100K<n<1M
 dataset_info:
 - config_name: ChAFF
-  splits:
-  - name: assays
-    data_files: data/*.csv
-    features:
-    - name: Type
-      dtype: string
-    - name: DatasetName
-      dtype: string
-    - name: AID
-      dtype: int64
-    - name: ID
-      dtype: string
-    - name: IDType
-      dtype: string
-    - name: SMILES
-      dtype: string
-  - name: summary
-    data_files: ChAFF_dataset_summary.csv
-    features:
-    - name: Type
-      dtype: string
-    - name: DatasetName
-      dtype: string
-    - name: AID
-      dtype: int64
-    - name: AID_confirmatory
-      dtype: int64
-    - name: NumActiveCompounds
-      dtype: int64
-    - name: Paper Title
-      dtype: string
-    - name: Reference
-      dtype: string
-    - name: URL
-      dtype: string
-    - name: Assay Name
-      dtype: string
-    - name: Description
-      dtype: string
+  features:
+  - name: Type
+    dtype: string
+  - name: DatasetName
+    dtype: string
+  - name: AID
+    dtype: int64
+  - name: ID
+    dtype: string
+  - name: IDType
+    dtype: string
+  - name: SMILES
+    dtype: string
 ---
 
 # ChAFF datasets
@@ -114,8 +88,12 @@ then, from within python load the datasets library.
 
 Now load the 'ChAFF' datasets together,
 
->>> assays = datasets.load_dataset("maomlab/ChAFF", split="assays")
+>>> assays = datasets.load_dataset("maomlab/ChAFF", split="train")
+
 
 If you are interested in the summary file,
 
->>> summary = datasets.load_dataset("maomlab/ChAFF", split="summary")
+>>> summary = datasets.load_dataset("json", data_files="https://huggingface.co/datasets/maomlab/ChAFF/resolve/main/summary.json", split="train"
+)
+
+The default split is "train", as we did not split the datasets.
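After this change, rows loaded from the "train" split carry the features declared in the card metadata: Type, DatasetName, AID (int64), ID, IDType, and SMILES. As a minimal sketch of what records with that shape look like, the following parses ChAFF-style CSV text with only the standard library, casting AID to the declared integer dtype. The rows below are made up for illustration; real rows come from `datasets.load_dataset("maomlab/ChAFF", split="train")`.

```python
import csv
import io

# Illustrative CSV shaped like the "assays" features in the dataset card:
# Type, DatasetName, AID, ID, IDType, SMILES. The values are placeholders,
# not actual ChAFF records.
RAW = """Type,DatasetName,AID,ID,IDType,SMILES
primary,example_screen,1234,CID100,PubChem CID,CCO
primary,example_screen,1234,CID200,PubChem CID,c1ccccc1
"""

def parse_assay_rows(text):
    """Parse CSV text into dict records, casting AID per the declared schema."""
    records = []
    for row in csv.DictReader(io.StringIO(text)):
        row["AID"] = int(row["AID"])  # dtype: int64 in the card metadata
        records.append(row)
    return records

records = parse_assay_rows(RAW)
```

Because the dataset ships as a single unsplit table, every record comes back under "train"; any further train/test partitioning is left to the user.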