# General information
- The purpose of this repository is to provide a dataset loading script for a milling-related dataset.
- Further information about the dataset is available at https://data.mendeley.com/datasets/zpxs87bjt8/3
- The dataset size is about 10 GB, and Hugging Face does not allow uploading that amount of data. This repository therefore contains only a small subset of the dataset (see the *data* folder) as a proof of concept. The loading script works for both the original dataset and the small subset.
# How to load the dataset
- First, download the full dataset from https://data.mendeley.com/datasets/zpxs87bjt8/3
- The loading script (milling_LUH_data.py) must be in the same directory as the *data* directory (the folder containing all the .h5 files).
- Load the data as shown in notebook.ipynb
- Define the train, test, and validation splits in the loading script (milling_LUH_data.py).
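The last step above can be sketched in code. This is a minimal illustration, not the repository's actual implementation: the function name `make_splits` and the `run_*.h5` file names are made up here, and the real split definition belongs inside milling_LUH_data.py.

```python
# Hypothetical sketch: deterministically partition the dataset's .h5 files
# into train/validation/test splits (file names below are illustrative).

def make_splits(files, train_frac=0.8, val_frac=0.1):
    """Partition a list of file paths into train/validation/test splits."""
    files = sorted(files)  # fixed order, so splits are reproducible
    n_train = int(len(files) * train_frac)
    n_val = int(len(files) * val_frac)
    return {
        "train": files[:n_train],
        "validation": files[n_train:n_train + n_val],
        "test": files[n_train + n_val:],
    }

# Example with made-up file names:
splits = make_splits([f"data/run_{i:03d}.h5" for i in range(10)])
```

Once the splits are defined in the script, the dataset can be loaded through the `datasets` library in the usual way, e.g. `load_dataset("milling_LUH_data.py", split="train")`, as demonstrated in notebook.ipynb.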