---
license: cc-by-4.0
task_categories:
- text-to-image
language:
- en
- zh
- fr
tags:
- vision
- color
- evaluation
- diagnostic
- AI-Obedience
pretty_name: VIOLIN
size_categories:
- 10K<n<100K
---

# VIOLIN: Visual Instruction-based Color Evaluation

**VIOLIN** (VIsual Obedience Level-4 EvaluatIoN) is a diagnostic benchmark designed to assess the **Level-4 Instructional Obedience** of text-to-image generative models.

While state-of-the-art models can render complex semantic scenes (e.g., "cyberpunk cityscapes"), they often fail at the most fundamental deterministic tasks: generating a perfectly pure, texture-less color image. VIOLIN provides a rigorous framework to measure this "Paradox of Simplicity."
## 🧪 Key Scientific Insights
Our research identifies two primary obstacles in current generative AI:
- **Aesthetic Inertia**: The tendency of models to prioritize visual richness and textures over strict instructional adherence, even when "pure color" or "no texture" is explicitly requested.
- **Semantic Gravity**: The bias whereby models follow instructions better when they align with common visual knowledge, but fail when the context is random or conflicting.

## 📊 Dataset Structure
The dataset comprises over 42,000 text-image pairs across 6 variations:

| Variation | Description | Evaluation Focus |
| :--- | :--- | :--- |
| **Variation 1** | Single Color Block | Basic pixel-level precision (ISCC-NBS) |
| **Variation 2** | Two-block Split | Spatial layout and vertical/horizontal split |
| **Variation 3** | Four-quadrant Split | Complex spatial reasoning and contrast |
| **Variation 4** | Fuzzy Color | Bounded constraints and flexibility |
| **Variation 5** | Multilingual | Robustness across English, Chinese, and French |
| **Variation 6** | Color Spaces | Cross-format understanding (Hex, RGB, HSL) |

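Variation 6 probes whether a model treats the same target color consistently across Hex, RGB, and HSL notation. As an illustrative sketch (not part of the benchmark tooling), the conversions can be done with Python's standard library; note that `colorsys` orders its return values as HLS, not HSL:

```python
import colorsys

def hex_to_rgb(hex_str):
    """Parse '#RRGGBB' into an (r, g, b) tuple of 0-255 ints."""
    h = hex_str.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def rgb_to_hsl(r, g, b):
    """Convert 0-255 RGB to (hue deg, sat %, light %) as in CSS hsl()."""
    # colorsys works on 0-1 floats and returns (h, l, s).
    h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    return round(h * 360), round(s * 100), round(l * 100)

rgb = hex_to_rgb("#FF0000")   # (255, 0, 0)
print(rgb_to_hsl(*rgb))       # (0, 100, 50)
```

All three notations above denote the same pure red, so a Level-4 obedient model should produce identical outputs for any of them.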
## 📐 Evaluation Metrics
We propose a dual-metric approach for evaluating "Minimum Viable Obedience":
1. **Color Precision**: Measuring the ΔE (CIEDE2000) or Euclidean distance between the generated pixels and the ground truth.
2. **Color Purity**: Assessing the presence of artifacts, gradients, or unintended textures using variance-based analysis.

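The two metrics can be sketched in a few lines of NumPy. This is a minimal illustration, not the benchmark's official scorer: it uses the simpler Euclidean variant of Color Precision (rather than CIEDE2000) and a per-channel variance sum for Color Purity, and the function names are our own:

```python
import numpy as np

def color_precision(image, target_rgb):
    """Mean Euclidean distance in RGB space between each pixel and the target."""
    diff = image.astype(float) - np.asarray(target_rgb, dtype=float)
    return np.linalg.norm(diff, axis=-1).mean()

def color_purity(image):
    """Sum of per-channel pixel variances; a perfectly flat block scores 0."""
    flat = image.reshape(-1, image.shape[-1]).astype(float)
    return flat.var(axis=0).sum()

# A perfectly obedient output: a flat 64x64 pure-red block.
block = np.full((64, 64, 3), (255, 0, 0), dtype=np.uint8)
print(color_precision(block, (255, 0, 0)))  # 0.0
print(color_purity(block))                  # 0.0
```

Under this reading, a lower score is better on both axes: precision penalizes hitting the wrong color, while purity penalizes gradients and textures even when the average color is correct.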
## 📁 How to Use
You can load the dataset directly via the Hugging Face `datasets` library:

```python
from datasets import load_dataset

dataset = load_dataset("Perkzi/VIOLIN")
```