---

# What is it?

**W-Bench is the first benchmark to evaluate watermarking robustness across four image editing techniques.**

Eleven representative watermarking methods are evaluated on W-Bench, which contains 10,000 images sourced from datasets such as COCO, Flickr, and ShareGPT4V.

GitHub Page: [https://github.com/Shilin-LU/VINE](https://github.com/Shilin-LU/VINE)

# Dataset Structure

The evaluation set consists of six subsets, each targeting a different type of AIGC-based image editing:
- 1,000 samples for stochastic regeneration
- 1,000 samples for deterministic regeneration
- 1,000 samples for global editing
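The subset layout above can be sketched in code. This is a minimal illustration, not an official loader: the subset names below are hypothetical identifiers invented for this example, and only the three subsets listed so far in this README are included (the full evaluation set has six).

```python
# Hedged sketch of the W-Bench evaluation subsets listed above.
# Subset keys are illustrative names, not confirmed directory or split names.
W_BENCH_SUBSETS = {
    "stochastic_regeneration": 1000,
    "deterministic_regeneration": 1000,
    "global_editing": 1000,
}

def listed_samples(subsets: dict[str, int]) -> int:
    """Total number of samples across the subsets listed so far."""
    return sum(subsets.values())

print(listed_samples(W_BENCH_SUBSETS))  # 3000 samples across the three listed subsets
```

Each subset isolates one editing technique, so per-technique robustness numbers can be computed independently before aggregating.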