Hani Park committed on
Commit · 50ed0c8
1 Parent(s): c4a99a3
Change column type in Artifact.csv file
README.md
CHANGED
@@ -10,18 +10,18 @@ size_categories:
 dataset_info:
 - config_name: ChAFF
   features:
-
-
-
-
-
-
-
-
-
-
-
-
+  - name: Type
+    dtype: string
+  - name: DatasetName
+    dtype: string
+  - name: AID
+    dtype: int64
+  - name: ID
+    dtype: string
+  - name: IDType
+    dtype: string
+  - name: SMILES
+    dtype: string
 ---
 
 # ChAFF datasets
@@ -59,7 +59,6 @@ A summary file is uploaded, which lists:
 - Type
 - DatasetName
 - AID
-- AID_confirmatory
 - NumActiveCompounds
 - PaperTitle
 - Reference
@@ -67,7 +66,7 @@ A summary file is uploaded, which lists:
 - AssayName
 - Description
 
-Dataset summary file can be found: ChAFF_dataset_summary.
+Dataset summary file can be found: ChAFF_dataset_summary.json
 
 
 # License
@@ -85,14 +84,23 @@ First, from the command line install the `datasets` library
 then, from within python load the datasets library.
 
 >>> import datasets
-
-
-
-
-
-
-
-
-
-
-
+>>> from datasets import load_dataset, Features, Value
+
+Specify column types to prevent a pyarrow error.
+```python
+features = Features({
+    "Type": Value("string"),
+    "DatasetName": Value("string"),
+    "AID": Value("int64"),
+    "ID": Value("string"),
+    "IDType": Value("string"),
+    "SMILES": Value("string")
+})
+```
+
+Now load one of the 'ChAFF' datasets, e.g.,
+
+>>> dataset = datasets.load_dataset("maomlab/ChAFF", name = "default", data_files = "data/Absorbance.csv", split = "train", features = features)
+
+You can modify "data/Absorbance.csv" based on your interest (e.g., "data/Reactivity.csv").
+The default is split = "train" as we did not split the datasets.