---
pretty_name: WeART
language:
- en
license: other
task_categories:
- image-retrieval
- image-classification
- text-image-retrieval
tags:
- icassp-2026
- art
- artistic-style
- style-retrieval
- benchmark
- computer-vision
- multimodal
- cultural-analysis
size_categories:
- 100K<n<1M
---

### A large-scale benchmark for artistic style analysis across diverse cultures, styles, and artists

**280K+ Artworks** • **152 Styles** • **1,556 Artists**

**Balanced Coverage** for **Style Retrieval**, **Clustering**, and **Generative Model Evaluation**

[📄 Paper](https://arxiv.org/abs/2601.17697)

---

## Overview

**WeART** is a large-scale benchmark for **artistic style analysis**. It addresses major limitations of existing art datasets, including narrow cultural coverage, incomplete annotations, and weak support for robust style generalization. Compared with prior resources, WeART provides broader coverage of artistic categories, stronger balance across styles and artists, and cleaner curation for large-scale evaluation.

It is designed for:

- artistic style retrieval
- artist-aware representation learning
- style clustering and visualization
- cross-cultural art analysis
- evaluation of generative models on stylistic fidelity

---

## Highlights

- **280,000+ artworks**
- **152 artistic styles**
- **1,556 artists**
- **~3× larger than WikiArt**
- **Only 3% artist overlap with WikiArt**
- Curated for **quality**, **balance**, and **broader cultural coverage**

---

## What makes WeART useful?

WeART is built to better reflect the real diversity of artistic expression.
Compared with existing benchmarks, it significantly improves coverage of underrepresented categories such as:

- children’s illustration
- digital art
- traditional Chinese painting

The dataset is also curated with:

- manual duplicate removal
- high-resolution scans
- a minimum of two works per artist
- strong artist coverage, with **88% of artists having five or more works**

This makes WeART a strong benchmark for evaluating whether visual representations truly capture **style** rather than only semantic content.

---

## Dataset Content

Each sample typically corresponds to an artwork and may include metadata such as:

- **image**
- **artist**
- **style**
- **category**
- **source information**

WeART is especially suitable for studying:

- style-sensitive visual retrieval
- artist-level clustering
- style manifold discovery
- generalization to underrepresented artistic traditions
- stylistic evaluation of generated images

---

## Why WeART?

Many existing visual models struggle to separate **style** from **content**, especially when evaluated beyond familiar Western art distributions. WeART was introduced as a more challenging benchmark for this setting.

It supports evaluation on a broader range of artistic forms and enables more reliable analysis of stylistic similarity, clustering structure, and model robustness across cultures and artistic traditions.
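To make the style-retrieval setting concrete, the standard nearest-neighbour protocol can be sketched in a few lines of NumPy. The embeddings and style labels below are toy placeholders; in practice they would come from a vision model applied to WeART images and from the dataset's style annotations.

```python
import numpy as np

def style_retrieval_topk(embeddings: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    """Fraction of queries whose top-k cosine-similarity neighbours
    (query itself excluded) share the query's style label."""
    # L2-normalise so the dot product equals cosine similarity
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)          # never retrieve the query itself
    topk = np.argsort(-sims, axis=1)[:, :k]  # indices of the k most similar items
    hits = (labels[topk] == labels[:, None]).any(axis=1)
    return float(hits.mean())

# Toy demo: two well-separated "styles" in a 2-D embedding space
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.8, 0.2],
                [0.0, 1.0], [0.1, 0.9], [0.2, 0.8]])
lab = np.array([0, 0, 0, 1, 1, 1])
print(style_retrieval_topk(emb, lab, k=2))  # → 1.0 (clusters are cleanly separated)
```

The same score computed over real WeART embeddings indicates how well a representation groups artworks by style rather than by subject matter.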
---

## Use Cases

WeART is intended for:

- benchmarking artistic style retrieval
- learning style-aware image representations
- analyzing artist and movement similarity
- studying cross-domain and cross-cultural generalization
- evaluating generative models for artistic style fidelity
- building new metrics for artistic similarity

---

## Paper

**StyleDecoupler: Generalizable Artistic Style Disentanglement**
**Zexi Jia, Jinchao Zhang, Jie Zhou**
**ICASSP 2026**

[arXiv: 2601.17697](https://arxiv.org/abs/2601.17697)

---

## Citation

If you use **WeART** in your research, please cite:

```bibtex
@article{jia2026styledecoupler,
  title={StyleDecoupler: Generalizable Artistic Style Disentanglement},
  author={Jia, Zexi and Zhang, Jinchao and Zhou, Jie},
  journal={arXiv preprint arXiv:2601.17697},
  year={2026}
}
```