If you would like to use the same workflow I used, here is each repository listed
[Final YOLO cls conversion](https://github.com/real6c/yolo-cls-dataset-converter): This converts the dataset into a YOLO classify format, using the JSON outputs from the step above to determine whether to keep or discard each image
 
  **Design process (Each step explained)**
- Web scraping: For this dataset, I used Bing Images, as it tends to be less restrictive than Google Images (and is supported by iCrawler). In a future dataset, Google image results could be added for a larger set of images.
- Identifying duplicates: MD5 hash comparison is a fairly obvious choice, as it has a very high likelihood of identifying exact duplicates because the hash is derived from the file's bytes.
- Removing watermark banners: This went through several design iterations, from looking for large areas of color difference in OpenCV to using text detection. Ultimately, the algorithm converts the image to greyscale and checks whether the bottom row of pixels exceeds a set average darkness value; if so, it computes the average darkness of each row above, and once a row falls below a percentage of the bottom row's darkness, it crops the image at that height. A maximum crop height is also enforced for safety.
- Watermark detection: This was inspired by [this](https://huggingface.co/spaces/fancyfeast/joycaption-watermark-detection) Hugging Face space and adapted to work on large datasets, with performance optimized for CUDA-enabled devices. I also implemented batching for parallel processing, speeding up inference many times over on datasets like this one.
- Watermark removal: This reuses a LaMa-based project called [IOPaint](https://www.iopaint.com/). I modified the source code so that the CLI command (`iopaint run`) works with large datasets, supports nested directories and a preserved directory structure in the output, and runs inpainting in parallel for quicker processing of the entire dataset.
- VLM Screening: This was originally not going to be included, but I noticed the methods above did not do enough to clean up the dataset. Since this inference is very lengthy (it took 30 hours in total on this dataset), it is run at the very end, after the quicker algorithms have made the VLM's job easier. I was already familiar with Ollama and its strong API, and had it running locally, so this was easy to implement. The first model I tried was LLaVA, but the results were lackluster: it did not seem to follow prompts and was either completely wrong or hesitant. I then used the qwen2.5vl model and found much better results, as it resolved the issues mentioned. Memory usage was also a problem as more and more images were base64-encoded, so I added several garbage-collection calls and deleted unused variables; in the end it used less than 4GB with 60k images processed.
- Converting to YOLO: This last part was straightforward to implement. I originally planned a regular train/test/val split, but because the VLM screening was able to salvage some images that we can still run tests on, I opted to keep those exclusively for the test split. Essentially, after filtering out images with incorrect items or a clipart style, any image the VLM flagged as having a watermark or an incorrect background is placed in the test split. This may cause some bias when running tests, but since those images are not in the val split, the quality of the model is not affected: the hyperparameters do not depend on the quality of those images.
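
The banner-cropping heuristic described above can be sketched roughly as follows. The function names, default thresholds, and plain-list row representation are my own illustration; the actual implementation presumably operates on OpenCV/NumPy greyscale arrays:

```python
def row_darkness(row):
    """Average darkness of one greyscale row (0 = white, 255 = black)."""
    return sum(255 - px for px in row) / len(row)

def banner_crop_height(grey_rows, dark_threshold=60.0, fade_ratio=0.5, max_crop=0.25):
    """Return how many bottom rows to crop off, or 0 if no banner is detected.

    grey_rows: the image as a list of rows of greyscale pixel values (0-255),
    top row first. If the bottom row is darker than `dark_threshold`, walk
    upward until a row falls below `fade_ratio` times the bottom row's
    darkness; crop everything below that row, capped at `max_crop` of the
    image height (the safety limit mentioned above).
    """
    h = len(grey_rows)
    bottom = row_darkness(grey_rows[-1])
    if bottom < dark_threshold:       # bottom row not dark enough: no banner
        return 0
    limit = int(h * max_crop)         # maximum crop height, for safety
    for crop in range(1, limit + 1):
        if row_darkness(grey_rows[h - 1 - crop]) < fade_ratio * bottom:
            return crop               # banner ends here
    return limit                      # banner taller than the cap: crop the max
```

The caller would then slice the image to `grey_rows[:h - crop]` (or the equivalent NumPy slice) when a nonzero crop is returned.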
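
The MD5 deduplication step amounts to something like the following sketch (not the repo's actual code): hash every file's bytes, keep the first file for each digest, and flag the rest as duplicates.

```python
import hashlib
from pathlib import Path

def md5_of_file(path, chunk_size=1 << 20):
    """MD5 of a file's raw bytes, read in chunks so large images fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(image_dir):
    """Map each MD5 digest to the files sharing it; only groups of 2+ are returned."""
    seen = {}
    for path in sorted(Path(image_dir).rglob("*")):
        if path.is_file():
            seen.setdefault(md5_of_file(path), []).append(path)
    return {digest: paths for digest, paths in seen.items() if len(paths) > 1}
```

As noted above, this only catches byte-identical duplicates; re-encoded or resized copies hash differently and would need perceptual hashing instead.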
 
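
The VLM screening step against a local Ollama instance might look roughly like this. The endpoint shape follows Ollama's documented non-streaming `/api/generate` call (which accepts base64 images for multimodal models); the prompt wording and helper names are my own, and the real pipeline batches requests rather than sending one at a time:

```python
import base64
import gc
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(image_bytes, model="qwen2.5vl",
                  prompt="Does this image contain a watermark? Answer yes or no."):
    """Build one non-streaming /api/generate request with a base64-encoded image."""
    return {
        "model": model,
        "prompt": prompt,
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }

def screen_image(image_path):
    """Send one image to the local VLM and return its text response."""
    with open(image_path, "rb") as f:
        payload = build_payload(f.read())
    req = request.Request(OLLAMA_URL, data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        answer = json.load(resp)["response"]
    del payload            # drop the large base64 string promptly
    gc.collect()           # mirrors the manual GC calls mentioned above
    return answer
```

Dropping each encoded image and collecting garbage between requests is what kept memory under 4GB across 60k images in the description above.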
  **What can be improved**
- A lot of the watermarks (especially the more difficult ones, like lines and symbols) are not detected well or removed properly; training a custom segmentation model on synthetic data might prove successful.
- Fewer images than expected: I was aiming for 50k, but many were removed as duplicates or as incorrect. The web-scrape queries could be expanded in the next revision, using synonyms to find the widest possible range of images; different image search engines or websites could also be explored.
- Optimization, of course: this is a rough first release just to get everything working, and it can definitely be optimized to run faster.
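
The synonym-expansion idea for the next revision could be as simple as crossing item names with modifier words before handing the queries to the scraper. Everything here is hypothetical (the item and modifier lists are placeholders); the commented crawler call uses icrawler's documented `BingImageCrawler` API mentioned in the design process:

```python
from itertools import product

def expand_queries(items, modifiers):
    """Cross item names with modifier words to widen image-search coverage."""
    return [f"{m} {it}".strip() for it, m in product(items, modifiers)]

# Example (run only with icrawler installed and when you actually want to scrape):
# from icrawler.builtin import BingImageCrawler
# for query in expand_queries(["soda can"], ["", "crushed", "aluminum"]):
#     BingImageCrawler(storage={"root_dir": f"raw/{query}"}).crawl(
#         keyword=query, max_num=500)
```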
 