sochastic committed (verified)
Commit a31265a · 1 parent: ca86ef0

Update README.md

Files changed (1): README.md (+381 −99)
README.md CHANGED (@@ -1,99 +1,381 @@)

Previous version (replaced in this commit; note the unresolved merge conflict markers):

````markdown
<<<<<<< HEAD
# Data Directory

This directory contains datasets and annotations for the SynSpill project.

## Structure

- `synthetic/` - Generated synthetic spill images and annotations
- `real/` - Real-world industrial CCTV footage (test set)
- `annotations/` - Ground truth labels and bounding boxes

## Synthetic Data

The synthetic dataset is generated using our AnomalInfusion pipeline:

- Stable Diffusion XL for base image generation
- IP adapters for style conditioning
- Inpainting for precise spill placement

## Citation

If you use this data in your research, please cite our ICCV 2025 paper.
=======
# SynSpill Data Directory

This directory contains datasets, annotations, and workflow configurations for the SynSpill project - a comprehensive dataset for industrial spill detection and synthesis.

## Directory Structure

```text
data/
├── README.md                   # This file
├── generation_workflow.json    # ComfyUI workflow for synthetic image generation
├── inpainting_workflow.json    # ComfyUI workflow for inpainting operations
├── release/                    # Full dataset release
│   ├── annotation_masks/       # Binary masks for spill regions (PNG format)
│   ├── annotations/            # Ground truth annotations and metadata
│   └── generated_images/       # Complete set of synthetic spill images
└── samples/                    # Sample data for preview and testing
    ├── annotation_masks/       # Sample binary masks
    ├── generated_images/       # Sample synthetic images
    └── inpainted_images/       # Sample inpainted results
```

## Dataset Contents

### Release Dataset (`release/`)

- **Generated Images**: High-quality synthetic industrial spill scenarios
- **Annotation Masks**: Pixel-perfect binary masks identifying spill regions
- **Annotations**: Structured metadata including bounding boxes, class labels, and scene descriptions

### Sample Dataset (`samples/`)

A subset of the full dataset for quick evaluation and testing purposes, containing:

- Representative examples from each category
- Various spill types and industrial environments
- Both generated and inpainted image samples

### Workflow Configurations

- **`generation_workflow.json`**: ComfyUI workflow for generating base synthetic images using Stable Diffusion XL
- **`inpainting_workflow.json`**: ComfyUI workflow for precise spill placement and inpainting operations

## Synthetic Data Generation

The synthetic dataset is created using our AnomalInfusion pipeline:

1. **Base Generation**: Stable Diffusion XL creates industrial environment images
2. **Style Conditioning**: IP adapters ensure consistent visual style across scenes
3. **Spill Synthesis**: Controlled inpainting places realistic spills in specified locations
4. **Mask Generation**: Automated creation of precise segmentation masks

## Usage

The data is organized for direct use with computer vision frameworks:

- Images are in standard formats (PNG/JPG)
- Masks are binary images (0 = background, 255 = spill)
- Annotations follow standard object detection formats

## Citation

If you use this dataset in your research, please cite our ICCV 2025 paper:

```bibtex
@inproceedings{baranwal2025synspill,
  title={SynSpill: Improved Industrial Spill Detection With Synthetic Data},
  author={Baranwal, Aaditya and Bhatia, Guneet and Mueez, Abdul and Voelker, Jason and Vyas, Shruti},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision - Workshops (ICCV-W)},
  year={2025}
}
```

## License

Please refer to the `LICENSE.md` file in the root directory for licensing information.
>>>>>>> 88ba51e64db7d06e72955aec9f7edc45fd937ec7
````
Updated version:

---
license: apache-2.0
task_categories:
- image-to-image
- object-detection
language:
- en
tags:
- industry
- synthetic
- spills
size_categories:
- 1K<n<10K
---

## SynSpill Reproduction Guide

Create a conda environment (Python 3.12), then install the dependencies as described below.

### 1. Environment Setup

```bash
# Clone ComfyUI and enter the repository
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI

# Install dependencies
pip install -r requirements.txt

# Install PyTorch (NVIDIA GPU, CUDA 12.8 wheels)
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu128
```

```bash
# Manual ComfyUI Manager installation
cd custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git
cd ..

# Install custom nodes
./install_custom_nodes.sh
```

### 2. Download Required Models

#### Model Directory Structure

```
models/
├── checkpoints/    # Base diffusion models (.safetensors)
├── vae/            # VAE models
├── loras/          # LoRA weights
├── controlnet/     # ControlNet models
├── clip_vision/    # CLIP vision models
└── ipadapter/      # IP-Adapter models
```
#### Required Models for Research Reproduction

**Base Models:**

```bash
# Create directories
mkdir -p models/checkpoints models/loras models/ipadapter models/clip_vision

# SDXL-Turbo checkpoint (used by the inpainting workflow)
wget -P models/checkpoints/ https://huggingface.co/stabilityai/sdxl-turbo/resolve/main/sd_xl_turbo_1.0_fp16.safetensors
```

**IP-Adapter Components:**

```bash
# IP Composition Adapter - download the SD 1.5 variant
wget -P models/ipadapter/ https://huggingface.co/ostris/ip-composition-adapter/resolve/main/ip_plus_composition_sd15.safetensors
# Or the SDXL variant:
wget -P models/ipadapter/ https://huggingface.co/ostris/ip-composition-adapter/resolve/main/ip_plus_composition_sdxl.safetensors

# CLIP ViT-H/14 LAION-2B - download the model weights and config
wget -P models/clip_vision/ https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/resolve/main/open_clip_pytorch_model.bin
wget -P models/clip_vision/ https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K/resolve/main/config.json
```
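After downloading, it is worth sanity-checking that every file landed where ComfyUI expects it. A minimal sketch (the file list mirrors the wget commands above; adjust it if you renamed anything or chose the SD 1.5 adapter variant):

```python
from pathlib import Path

# Expected files, relative to the ComfyUI models/ directory
# (mirrors the wget commands above)
EXPECTED = [
    "checkpoints/sd_xl_turbo_1.0_fp16.safetensors",
    "ipadapter/ip_plus_composition_sdxl.safetensors",
    "clip_vision/open_clip_pytorch_model.bin",
    "clip_vision/config.json",
]

def missing_models(models_root="models"):
    """Return the expected model files that are not present on disk."""
    root = Path(models_root)
    return [rel for rel in EXPECTED if not (root / rel).is_file()]

if __name__ == "__main__":
    for rel in missing_models():
        print(f"missing: {rel}")
```

Run it from the ComfyUI root; an empty result means all expected files are in place.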
**Manual Downloads Required:**

- **Interior Scene XL**: Visit https://civitai.com/models/715747/interior-scene-xl and download the model file to `models/checkpoints/`
- **Factory Model** (optional): Visit https://civitai.com/models/77373/factory for additional scene generation

**Note:** Some models from CivitAI require account registration and manual download due to licensing agreements.

### 3. Custom Nodes Installation

#### Automated Installation (Recommended)

We provide a comprehensive installation script that clones all the custom nodes used in this research:

```bash
# Make the script executable (if not already)
chmod +x install_custom_nodes.sh

# Run the installation script
./install_custom_nodes.sh
```

**Installed Custom Nodes Include:**

- **ComfyUI Manager** - Essential for managing nodes and models
- **ComfyUI IPAdapter Plus** - IP-Adapter functionality for composition
- **ComfyUI Impact Pack/Subpack** - Advanced image processing and segmentation
- **ComfyUI Inspire Pack** - Additional workflow utilities
- **ComfyUI Custom Scripts** - Workflow enhancements and UI improvements
- **ComfyUI Dynamic Prompts** - Dynamic prompt generation
- **ComfyUI KJNodes** - Various utility nodes for image processing
- **ComfyUI Ultimate SD Upscale** - Advanced upscaling capabilities
- **ComfyUI GGUF** - Support for the GGUF model format
- **ComfyUI Image Filters** - Comprehensive image filtering nodes
- **ComfyUI Depth Anything V2** - Depth estimation capabilities
- **ComfyUI RMBG** - Background removal functionality
- **ComfyUI FizzNodes** - Animation and scheduling nodes
- **RGThree ComfyUI** - Advanced workflow management
- **WAS Node Suite** - Comprehensive collection of utility nodes
- **And more...**

### 4. Using ComfyUI Manager

After installing ComfyUI Manager, you can easily install missing nodes and models:

```bash
# Start ComfyUI first
python main.py --listen 0.0.0.0 --port 8188
```

**In the ComfyUI Web Interface:**

1. **Access Manager**: Click the "Manager" button in the ComfyUI interface
2. **Install Missing Nodes**:
   - Load any workflow that uses custom nodes
   - Click "Install Missing Custom Nodes" to automatically install required nodes
3. **Install Models**:
   - Go to the "Model Manager" tab
   - Search for and install models directly from the interface
   - Supports HuggingFace, CivitAI, and other model repositories

**Alternative Model Installation via Manager:**

- **Checkpoints**: Search for "SDXL" or "Stable Diffusion" models
- **IP-Adapters**: Search for "IP-Adapter" in the model manager
- **ControlNets**: Browse and install ControlNet models as needed
- **LoRAs**: Install LoRA models directly through the interface

**Benefits of using ComfyUI Manager:**

- Automatic dependency resolution
- One-click installation of missing nodes
- Model browser with direct download
- Version management
- Automatic updates

### 5. Start ComfyUI Server

```bash
# Local access
python main.py

# Network access (for cluster/remote use)
python main.py --listen 0.0.0.0 --port 8188

# With the latest frontend
python main.py --front-end-version Comfy-Org/ComfyUI_frontend@latest
```

Access at: `http://localhost:8188`
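Once the server is up, a quick reachability check against ComfyUI's `/system_stats` endpoint confirms it is accepting requests. A small sketch, assuming the default host and port from the commands above:

```python
import json
import urllib.request

def server_stats(base_url="http://127.0.0.1:8188", timeout=2.0):
    """Return ComfyUI's /system_stats payload, or None if the server is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/system_stats", timeout=timeout) as resp:
            return json.load(resp)
    except OSError:
        return None

if __name__ == "__main__":
    stats = server_stats()
    print("ComfyUI is up" if stats is not None else "ComfyUI is not reachable")
```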
## Research-Specific Features

### Custom Guidance Methods

- **FreSca**: Frequency-dependent scaling guidance (`comfy_extras/nodes_fresca.py`)
- **PAG**: Perturbed Attention Guidance (`comfy_extras/nodes_pag.py`)
- **SAG**: Self-Attention Guidance (`comfy_extras/nodes_sag.py`)
- **SLG**: Skip Layer Guidance (`comfy_extras/nodes_slg.py`)
- **APG**: Adaptive Projected Guidance (`comfy_extras/nodes_apg.py`)
- **Mahiro**: Direction-based guidance scaling (`comfy_extras/nodes_mahiro.py`)

### Advanced Sampling

- Custom samplers and schedulers (`comfy_extras/nodes_custom_sampler.py`)
- Token merging optimization (`comfy_extras/nodes_tomesd.py`)
- Various diffusion model sampling methods

## Research Configuration

### Key Hyperparameters for Synthetic Image Generation

The following table summarizes the key hyperparameters used in our synthetic image generation pipeline:

| Parameter | Value / Configuration |
|-----------|----------------------|
| **Scene Generation Specifics** | |
| Base Model | Stable Diffusion XL 1.0 |
| Image Resolution | 1024 × 1024 |
| Sampler | DPM++ 2M SDE (GPU) |
| Scheduler | Karras |
| Sampling Steps | 64 |
| CFG Scale | 8 |
| LoRA Strength | 0.2–0.4 |
| IP-Adapter | IP Composition + CLIP-ViT-H |
| IP-Adapter Strength | 0.6 |
| **Inpainting Specifics** | |
| Inpainting Model | SDXL-Turbo Inpainting |
| Differential Diffusion | Enabled |
| Mask Feathering | 50 pixels |
| Mask Opacity | 75% |
| Denoise Strength | 0.5–0.6 |

### Model References

- **Interior Scene XL**: https://civitai.com/models/715747/interior-scene-xl
- **SDXL-Turbo**: https://huggingface.co/stabilityai/sdxl-turbo
- **IP Composition Adapter**: https://huggingface.co/ostris/ip-composition-adapter
- **CLIP ViT-H/14 LAION-2B**: https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K

### Configuration in ComfyUI

When setting up workflows in ComfyUI, ensure the following nodes are configured with the specified parameters:

**KSampler/KSampler Advanced:**

- Steps: 64
- CFG: 8.0
- Sampler: dpmpp_2m_sde_gpu (or dpmpp_2m_sde if the GPU version is unavailable)
- Scheduler: karras

**LoRA Loader:**

- Strength Model: 0.2–0.4 range
- Strength CLIP: 0.2–0.4 range

**IPAdapter:**

- Weight: 0.6
- Weight Type: composition (for the IP Composition Adapter)

**Inpainting Specific:**

- Denoise: 0.5–0.6
- Use differential diffusion when available
- Mask feathering: 50 pixels
- Mask opacity: 0.75
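For reference, here is how such node settings look in a ComfyUI API-format workflow fragment. This is an illustrative sketch, not one of our shipped workflows: the node IDs, `lora_name`, and graph links are placeholders, the denoise value is shown mid-range, and the sampler is given under ComfyUI's internal name for DPM++ 2M SDE (GPU).

```json
{
  "3": {
    "class_type": "KSampler",
    "inputs": {
      "seed": 0,
      "steps": 64,
      "cfg": 8.0,
      "sampler_name": "dpmpp_2m_sde_gpu",
      "scheduler": "karras",
      "denoise": 0.55,
      "model": ["10", 0],
      "positive": ["6", 0],
      "negative": ["7", 0],
      "latent_image": ["5", 0]
    }
  },
  "10": {
    "class_type": "LoraLoader",
    "inputs": {
      "lora_name": "interior_scene_xl.safetensors",
      "strength_model": 0.3,
      "strength_clip": 0.3,
      "model": ["4", 0],
      "clip": ["4", 1]
    }
  }
}
```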
## Running Experiments

### Load Research Workflows

1. Navigate to the ComfyUI interface
2. Load workflows from `user/default/workflows/`:
   - `IMG-SDTune-Lightning-RD.json`
   - `Inpaint.json`
   - `IP-Adapter.json`
   - `Test Factory.json`

**Using ComfyUI Manager with Workflows:**

- When loading workflows, if nodes are missing, ComfyUI Manager will show a popup
- Click "Install Missing Custom Nodes" to automatically install the required nodes
- Restart ComfyUI after installation
- Reload the workflow to verify all nodes are available

### For Cluster Usage

See `CLUSTER_ACCESS_README.md` for detailed SLURM cluster setup with SSH tunneling.

### API Usage

```bash
# Basic API example
python script_examples/basic_api_example.py

# WebSocket example
python script_examples/websockets_api_example.py
```
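Under the hood, both example scripts drive ComfyUI's HTTP API: a workflow exported in API format is POSTed as JSON to the `/prompt` endpoint. A minimal sketch (the file name and server address are assumptions):

```python
import json
import urllib.request

def queue_prompt(workflow, base_url="http://127.0.0.1:8188"):
    """Submit an API-format workflow dict to ComfyUI's /prompt endpoint."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # response includes the assigned prompt id

# Usage: export a workflow with "Save (API Format)" in the ComfyUI UI, then:
#   with open("workflow_api.json") as f:
#       workflow = json.load(f)
#   queue_prompt(workflow)
```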
## Key Research Nodes

| Node | Purpose | Location |
|------|---------|----------|
| FreSca | Frequency scaling | `_for_testing` category |
| PAG | Attention perturbation | `model_patches/unet` |
| SAG | Self-attention guidance | `model_patches` |
| Mahiro | Directional guidance | `_for_testing` |

## Troubleshooting

**CUDA Issues:**

```bash
pip uninstall torch
pip install torch --extra-index-url https://download.pytorch.org/whl/cu128
```

**Memory Issues:**

```bash
python main.py --cpu        # CPU fallback
python main.py --force-fp32 # Lower precision
```

**Custom Nodes Not Loading:**

- Check the `custom_nodes/` directory
- Restart ComfyUI after installing new nodes
- Check the logs for dependency issues
- Use ComfyUI Manager to reinstall problematic nodes
- Try "Update All" in ComfyUI Manager for compatibility fixes

**ComfyUI Manager Issues:**

- If the Manager button doesn't appear, restart ComfyUI
- Check that ComfyUI-Manager is properly cloned in `custom_nodes/`
- For model download failures, try the manual wget commands provided above
- Clear your browser cache if the Manager interface doesn't load properly

**Custom Nodes Installation Script Issues:**

- If the script fails with permission errors, run: `chmod +x install_custom_nodes.sh`
- For network issues, try running the script again (it will skip existing installations)
- If specific nodes fail to clone, check your internet connection and GitHub access
- Some nodes may require additional dependencies - check the individual node README files
- After running the script, restart ComfyUI to load all new nodes

## Directory Structure

After setup, your ComfyUI directory should look like this:

```
ComfyUI/
├── models/
│   ├── checkpoints/
│   │   ├── [SDXL models]
│   │   └── [Inpainting models]
│   ├── loras/
│   │   └── [LoRA models]
│   ├── controlnet/
│   │   └── [ControlNet models]
│   ├── ipadapter/
│   │   └── [IP-Adapter models]
│   └── [other model directories]
├── custom_nodes/
│   ├── ComfyUI-Manager/
│   ├── ComfyUI-IPAdapter-Plus/
│   └── [other extensions]
└── [other ComfyUI files]
```

## SynSpill Integration

After ComfyUI is set up:

1. Clone the SynSpill repository
2. Copy the provided ComfyUI workflows to your ComfyUI directory
3. Configure the data paths in the workflow files
4. Run the synthetic data generation pipeline
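Step 3 above can be scripted instead of editing JSON by hand. A minimal sketch that points every `SaveImage` node of an API-format workflow at a new output prefix (`SaveImage`/`filename_prefix` are ComfyUI built-ins; the file paths in the usage comment are assumptions):

```python
import json

def set_save_prefix(workflow, prefix):
    """Rewrite the filename_prefix of every SaveImage node in an API-format workflow."""
    for node in workflow.values():
        if node.get("class_type") == "SaveImage":
            node["inputs"]["filename_prefix"] = prefix
    return workflow

# Usage (paths are examples):
#   with open("generation_workflow.json") as f:
#       wf = json.load(f)
#   with open("generation_workflow.json", "w") as f:
#       json.dump(set_save_prefix(wf, "synspill/generated"), f, indent=2)
```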
360
+
361
+ ## Troubleshooting
362
+
363
+ ### Common Issues
364
+
365
+ - **CUDA out of memory**: Reduce batch size or use model offloading
366
+ - **Missing models**: Ensure all models are downloaded and placed in correct directories
367
+ - **Extension conflicts**: Check ComfyUI Manager for compatibility issues
368
+
369
+ ### Performance Optimization
370
+
371
+ - Use `--lowvram` flag if you have limited GPU memory
372
+ - Consider using `--cpu` for CPU-only inference (slower)
373
+ - Enable model offloading for better memory management
374
+
375
+ ## Next Steps
376
+
377
+ Once ComfyUI is properly set up, you can proceed with:
378
+ 1. Loading the SynSpill workflows
379
+ 2. Configuring dataset paths
380
+ 3. Running synthetic data generation
381
+ 4. Training adaptation models