lreal committed · Commit ffcf207 · verified · 1 Parent(s): f17c1c5

Update README.md

Files changed (1): README.md (+14 −11)

README.md CHANGED
@@ -8,14 +8,17 @@ Below you can see a diagram of the entire pipeline for gathering this data
  **Github Repositories**
  If you would like to use the same workflow I used, here is each repository listed below:
 
- [iCrawler scraper](https://github.com/real6c/RecycleImageDatasetWebScraper): Scrapes images from web using queries.txt, contains script to remove duplicates
-
- [Remove watermark banners from bottom of images](https://github.com/real6c/auto-banner-cropper): This will crop the image until it finds a sharp color difference using greyscale filters with darkness values
-
- [Watermark mask generation](https://github.com/real6c/yolo-watermark-detection): Generate the masks for watermarks with YOLO detect inference and OWLv2 for quick screening
-
- [Remove watermarks](https://github.com/real6c/IOPaintDataset): Adapted from the IOPaint pip package, allows user to use CLI (iopaint run) with large datasets (recursive directories and batching), uses LAMA model
-
- [Ollama VLM Screening](https://github.com/real6c/vlm-dataset-filtering): VLM classification of images based off different criteria with local Ollama server, determines which images are salvageable or should be removed
-
- [Final YOLO cls conversion](https://github.com/real6c/yolo-cls-dataset-converter): This converts the dataset into a YOLO classify format, and uses the JSON outputs from the above step to determine to keep or discard image
+ [iCrawler scraper](https://github.com/real6c/RecycleImageDatasetWebScraper): Scrapes images from the web using queries.txt; contains a script to remove duplicates<br>
+ [Remove watermark banners from bottom of images](https://github.com/real6c/auto-banner-cropper): Crops the image until it finds a sharp color difference, using greyscale darkness values<br>
+ [Watermark mask generation](https://github.com/real6c/yolo-watermark-detection): Generates masks for watermarks with YOLO detect inference, using OWLv2 for quick screening<br>
+ [Remove watermarks](https://github.com/real6c/IOPaintDataset): Adapted from the IOPaint pip package; lets the user run the CLI (iopaint run) on large datasets (recursive directories and batching); uses the LaMa model<br>
+ [Ollama VLM Screening](https://github.com/real6c/vlm-dataset-filtering): VLM classification of images based on different criteria via a local Ollama server; determines which images are salvageable and which should be removed<br>
+ [Final YOLO cls conversion](https://github.com/real6c/yolo-cls-dataset-converter): Converts the dataset into YOLO classify format, using the JSON outputs from the previous step to decide whether to keep or discard each image
+
+ **Design process (yap yap yap)**
+ - Web scraping: For this dataset, I used Bing Images, as it tends to be less restrictive than Google Images (and is supported by iCrawler); in a future dataset, Google Image results could be added for a larger set of images
+ - Identifying duplicates: This was a fairly obvious choice, as it has a very high likelihood of identifying exact duplicates because the hash is derived from the file's bytes
+ - Removing watermark banners: This went through several design iterations, from looking for large areas of color difference in OpenCV to using text detection. Ultimately, the algorithm converts the image to greyscale, checks whether the bottom row of pixels exceeds a set average darkness value, computes the average darkness of each row above it, and crops at the first row whose darkness falls below a percentage of the bottom row's value. A maximum crop height is also enforced for safety.
+ - Watermark detection: This was inspired by [this](https://huggingface.co/spaces/fancyfeast/joycaption-watermark-detection) Hugging Face repo and was adapted to work on large datasets, with performance optimized for CUDA-enabled devices. I also implemented batching for parallel processing, speeding up inference many times over on datasets like this one.
+ - Watermark removal: This reuses a LaMa-based project called [IOPaint](https://www.iopaint.com/), where I modified the source code so the CLI command (iopaint run) works with large datasets, supports nested directories with a preserved directory structure in the output, and runs inpainting in parallel for quicker processing of the entire dataset.
+ - VLM Screening: This was originally not going to be included, but I noticed the methods above did not do enough to clean up the dataset. Since this is very lengthy inference (it took 30 hours in total to run on this dataset), it runs at the very end, after the quicker algorithms and models have already made the VLM's job easier. I was already familiar with Ollama and its excellent API, and had it running locally, so this was easy to implement. The first model I tried was LLaVA, but the results were lackluster: it did not seem to follow prompts and was either completely wrong or hesitant. I then used the qwen2.5vl model and found much better results, which resolved those issues. CONTINUE HERE