lreal committed · Commit 7e69239 · verified · Parent: c70327c

Update README.md

Files changed (1): README.md (+21, −0)
- trash
---

**Abstract**

This dataset consists of images scraped from Bing Images using an iCrawler bot. Additional processing and cleanup was applied to remove duplicates, irrelevant images, banner watermarks, and other watermarks, followed by a final screening with a VLM to decide which imperfect images could still be used as test data and which had to be thrown out because they would 'poison' the dataset. The dataset started at 75k web-scraped images: 15k were removed as duplicates (identified by comparing MD5 hashes) and a further 20k were removed as too low quality, leaving roughly 40k images.
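The MD5-based duplicate check mentioned above can be sketched in a few lines of Python. This is a minimal illustration, not the actual script from the scraper repository, which may differ:

```python
import hashlib
from pathlib import Path

def find_duplicates(paths):
    """Group files by the MD5 of their raw bytes.

    Identical bytes produce identical hashes, so exact duplicates are
    caught reliably; near-duplicates (resizes, re-encodes) are not.
    """
    seen = {}         # digest -> first path that produced it
    duplicates = []   # later paths whose bytes were already seen
    for path in paths:
        digest = hashlib.md5(Path(path).read_bytes()).hexdigest()
        if digest in seen:
            duplicates.append(path)   # keep the first copy, flag the rest
        else:
            seen[digest] = path
    return duplicates
```

Because the hash is computed over raw file bytes, two visually identical images saved with different JPEG quality settings will not match; a perceptual hash would be needed for that.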

**Usage**

It is recommended to download individual files with wget rather than cloning the entire repository.

For the dataset .zip file:
```
wget https://huggingface.co/datasets/lreal/BingRecycle40k/resolve/main/BingRecycle40k_rev1.zip
```
For the dataset classes.txt file:
```
wget https://huggingface.co/datasets/lreal/BingRecycle40k/resolve/main/classes.txt
```

If you would like to create your own split, please refer to the [YOLO Conversion Repo](https://github.com/real6c/yolo-cls-dataset-converter) I made to automate this. You will also need to download both .zip files in the pre-split-dataset directory.
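The test-split routing described in the design process below can be sketched as follows. The flag names (`wrong_item`, `clipart`, `watermark`, `wrong_background`) are hypothetical placeholders; the actual JSON schema produced by the VLM screening step may differ:

```python
def route_image(flags):
    """Decide an image's fate from (hypothetical) VLM screening flags.

    Returns 'discard' for unusable images, 'test' for salvageable but
    imperfect images, and 'train_val' for clean images split normally.
    """
    if flags.get("wrong_item") or flags.get("clipart"):
        return "discard"                 # would 'poison' the dataset
    if flags.get("watermark") or flags.get("wrong_background"):
        return "test"                    # usable for evaluation only
    return "train_val"                   # clean image, split as usual
```

Keeping flagged images out of the val split is what preserves hyperparameter tuning quality while still salvaging them for testing.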

**Problems with web-scraped datasets**

Some of the problems that had to be addressed in this web-scraped image dataset (they are common to most web-scraped datasets):
- Watermarks (many stock photo sites use ones with symbols and lines that are difficult to detect)
- Banner watermarks (usually at the bottom of the image)
- Duplicate images (they can show up across different, similar queries)

**Pipeline diagram**

Below is a diagram of the entire pipeline used to gather this data:
![Processing pipeline diagram](BingRecycleDatasetDiagram.png)

**GitHub Repositories**

If you would like to use the same workflow I used, each repository is listed below:

[iCrawler scraper](https://github.com/real6c/RecycleImageDatasetWebScraper): Scrapes images from the web using queries.txt; contains a script to remove duplicates<br>

[Final YOLO cls conversion](https://github.com/real6c/yolo-cls-dataset-converter): Converts the dataset into YOLO classification format, using the JSON outputs from the previous step to decide whether to keep or discard each image

**Design process (Each step explained)**

- Web scraping: I used Bing Images because it tends to be less restrictive than Google Images (and it is supported by iCrawler). In a future revision, Google Image results could be added for a larger set of images.
- Identifying duplicates: MD5 hash comparison is a natural choice; because the hash is derived from the file's bytes, it has a very high likelihood of identifying exact duplicates.
- Removing watermark banners: This went through a few design iterations, from looking for large areas of color difference in OpenCV to using text detection. The final algorithm converts the image to greyscale and checks whether the bottom row of pixels exceeds a set average darkness value; it then computes the average darkness of each row above, and once a row falls below a set percentage of the bottom row's darkness, it crops the image at that height. A maximum crop height is also enforced for safety.
- Converting to YOLO: This last part was straightforward to implement. I originally planned a regular train/test/val split, but because the VLM screening can salvage some images that we can still run tests on, I kept those exclusively for the test split. After filtering out images with incorrect items or a clipart style, any image the VLM flagged as watermarked or as having an incorrect background was routed to the test split. This may introduce some bias when running tests, but since those images are not in the val split, model quality is unaffected: the hyperparameters do not depend on them.
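The banner-cropping heuristic above can be sketched with NumPy. The thresholds (`banner_darkness`, `stop_ratio`, `max_crop_frac`) are illustrative assumptions, not the values used for this dataset:

```python
import numpy as np

def crop_banner(gray, banner_darkness=128.0, stop_ratio=0.5, max_crop_frac=0.2):
    """Crop a dark banner off the bottom of a greyscale image (HxW uint8).

    darkness = 255 - brightness. If the bottom row is dark enough to look
    like a banner, scan upward until a row's darkness falls below
    stop_ratio of the bottom row's, and crop there. All threshold values
    are illustrative, not the dataset's actual parameters.
    """
    h = gray.shape[0]
    row_darkness = 255.0 - gray.mean(axis=1)   # average darkness per row
    if row_darkness[-1] < banner_darkness:     # bottom row too light:
        return gray                            # assume there is no banner
    limit = max(1, int(h * max_crop_frac))     # safety cap on crop height
    for row in range(h - 1, h - 1 - limit, -1):
        if row_darkness[row] < stop_ratio * row_darkness[-1]:
            return gray[:row + 1]              # crop at first non-banner row
    return gray                                # banner taller than cap: leave as-is
```

Here the safety cap simply refuses to crop when no transition is found within the allowed height; another valid design would be to crop at the cap itself.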

**What can be improved**

- Many of the watermarks (especially the more difficult ones made of lines or symbols) are not detected or removed well; training a custom segmentation model on synthetic data might prove successful.
- Fewer images than expected: I was aiming for 50k, but many were removed as duplicates or as incorrect. The next revision could expand the web-scrape queries, using synonyms to find the widest range of images possible; different image search engines or websites could also be explored.
- Optimization, of course: this is a rough first release just to get things working, and it can definitely be optimized to run faster.