Sudarshan2002 committed
Commit aa90a84 · verified · Parent: 234bb15

Update README.md

Files changed (1): README.md (+3 −29)
README.md CHANGED
@@ -49,16 +49,6 @@ cat GenDS_part_* > GenDS.tar.gz
 tar -xzvf GenDS.tar.gz
 ```
 
-<!--Please follow these steps:
-git clone https://huggingface.co/datasets/Sudarshan2002/GenDS.git
-cd GenDS
-git lfs pull
-
-Once the dataset is dowloaded, unzip it by
-
-cat GenDS_part_* > GenDS.tar.gz
-tar -xzvf GenDS.tar.gz-->
-
 ## Dataset Structure
 
 The dataset includes:
@@ -70,8 +60,8 @@ Each JSON file contains a list of dictionaries with the following fields:
 
 ```json
 {
-"image_path": "/path/to/image",
-"target_path": "/path/to/ground_truth",
+"image_path": "/relpath/to/image",
+"target_path": "/relpath/to/ground_truth",
 "dataset": "Source dataset name",
 "degradation": "Original degradation type",
 "category": "real | synthetic",
@@ -80,8 +70,7 @@ Each JSON file contains a list of dictionaries with the following fields:
 "mu": "mu value used in GenDeg",
 "sigma": "sigma value used in GenDeg",
 "random_sampled": true | false,
-"sampled_dataset": "Dataset name if mu/sigma are not random",
-"check_tgt_path": "Not used"
+"sampled_dataset": "Dataset name if mu/sigma are not random"
 }
 ```
 
@@ -97,21 +86,6 @@ with open("/path/to/train_gends.json") as f:
 print(train_data[0])
 ```
 
-<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
-<!--Eveything required to use the dataset is present in the two .json files: train_gends.json and val_gends.json. train_gends,json contains the images for training and
-val_gends.json contains images for validation. Load the train json file using:
-json.load(open("/path/to/train_gends.json"))
-
-The json file is a list of dictionaries such that each dictionary contains:
-
-{
-'image_path': '/path/to/img',
-'target_path': '/path/to/gt',
-'dataset': Dataset from which image was taken, 'degradation': original degradation of the dataset, 'category': Dataset category (real or synthetic),
-'degradation_sub_type': GenDeg generated degradadtion on the image ('Original' if it is from existing datasets), 'split': Train or Test,
-'mu': value of mu used in GenDeg, 'sigma': value of sigma used in GenDeg, 'random_sampled': whether mu, sigma were randomly sampled,
-'sampled_dataset': If not randomly sampled, mu sigma taken from this dataset, 'check_tgt_path': not used
-}-->
 
 ## Citation
 
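For reference, the loading pattern shown in the diff can be sketched in Python. The record below is hypothetical: it only mirrors the fields visible in the schema above, and the dataset names and mu/sigma values are placeholder assumptions. In practice, `train_data` would come from `json.load` on your extracted `train_gends.json`.

```python
import json

# Hypothetical record mirroring the fields in the README's schema;
# real entries come from json.load(open("/path/to/train_gends.json")).
record = {
    "image_path": "relpath/to/image",
    "target_path": "relpath/to/ground_truth",
    "dataset": "Rain13k",          # assumed example source-dataset name
    "degradation": "rain",         # assumed example degradation type
    "category": "real",            # "real" | "synthetic"
    "mu": 0.5,                     # placeholder mu value used in GenDeg
    "sigma": 0.1,                  # placeholder sigma value used in GenDeg
    "random_sampled": False,
    "sampled_dataset": "Rain13k",  # set when mu/sigma are not random
}

train_data = [record]  # stand-in for the loaded JSON list

# Example: keep entries whose mu/sigma came from a reference dataset
# rather than being randomly sampled.
subset = [r for r in train_data if not r["random_sampled"]]
print(len(subset))  # → 1
```

Since the updated schema drops `check_tgt_path` and makes `image_path`/`target_path` relative, join them against your extraction root before opening files.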