Conference | Year | Title | DOI | Abstract | Accessible | Early | AuthorNames-Deduped | Award | Resources | ResourceLinks |
|---|---|---|---|---|---|---|---|---|---|---|
Vis | 2,025 | "It looks sexy but it's wrong." Tensions in creativity and accuracy using genAI for biomedical visualization | 10.1109/TVCG.2025.3633883 | We contribute an in-depth analysis of the workflows and tensions arising from generative AI (genAI) use in biomedical visualization (BioMedVis). Although genAI affords facile production of aesthetic visuals for biological and medical content, the architecture of these tools fundamentally limits the accuracy and trustworthiness of the depicted information, from imaginary (or fanciful) molecules to alien anatomy. Through 17 interviews with a diverse group of practitioners and researchers, we qualitatively analyze the concerns and values driving genAI (dis)use for the visual representation of spatially-oriented biomedical data. We find that BioMedVis experts, both in roles as developers and designers, use genAI tools at different stages of their daily workflows and hold attitudes ranging from enthusiastic adopters to skeptical avoiders of genAI. In contrasting the current use and perspectives on genAI observed in our study with predictions towards genAI in the visualization pipeline from prior work, we refocus the discussion of genAI's effects on projects in visualization in the here and now with its respective opportunities and pitfalls for future visualization research. At a time when public trust in science is in jeopardy, we are reminded to first do no harm, not just in biomedical visualization but in science communication more broadly. Our observations reaffirm the necessity of human intervention for empathetic design and assessment of accurate scientific visuals. Supplemental study materials are available at https://osf.io/genaixbiomedvis/. | false | true | [
"Roxanne Ziman",
"Shehryar Saharan",
"Gaël McGill",
"Laura Garrison"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.14494",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/mbw86/?view_only=e087ab5b90a6474abec7bfc42cd2b105",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_619269e9-f6be-4882-aec8-0ffe4cee1a96.html",
"icon": "other"
}
] |
Vis | 2,025 | "Mapping What I Feel": Understanding Affective Geovisualization Design Through the Lens of People-Place Relationships | 10.1109/TVCG.2025.3633878 | Affective visualization design is an emerging research direction focused on communicating and influencing emotion through visualization. However, as revealed by previous research, this area is highly interdisciplinary and involves theories and practices from diverse fields and disciplines, thus awaiting analysis from more fine-grained angles. To address this need, this work focuses on a pioneering and relatively mature sub-area, affective geovisualization design, to further the research in this direction and provide more domain-specific insights. Through an analysis of a curated corpus of affective geovisualization designs using the Person-Process-Place (PPP) model from geographic theory, we derived a design taxonomy that characterizes a variety of methods for eliciting and enhancing emotions through geographic visualization. We also identified four underlying high-level design paradigms of affective geovisualization design (e.g., computational, anthropomorphic) that guide distinct approaches to linking geographic information with human experience. By extending existing affective visualization design frameworks with geographic specificity, we provide additional design examples, domain-specific analyses, and insights to guide future research and practices in this underexplored yet highly innovative domain. | false | true | [
"Xingyu Lan",
"Yutong Yang",
"Yifan Wang"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.11841",
"icon": "paper"
},
{
"name": "Website",
"url": "https://affectivegeovis.github.io/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_33761c1b-3649-4d86-8e2f-41cdb361e6a6.html",
"icon": "other"
}
] |
Vis | 2,025 | "They Aren't Built For Me": An Exploratory Study of Strategies for Measurement of Graphical Primitives in Tactile Graphics | 10.1109/TVCG.2025.3633881 | Advancements in accessibility technologies such as low-cost swell form printers or refreshable tactile displays promise to allow blind or low-vision (BLV) people to analyze data by transforming visual representations directly to tactile representations. However, it is possible that design guidelines derived from experiments on the visual perception system may not be suited for the tactile perception system. We investigate the potential mismatch between familiar visual encodings and tactile perception in an exploratory study into the strategies employed by BLV people to measure common graphical primitives converted to tactile representations. First, we replicate the Cleveland and McGill study on graphical perception using swell form printing with eleven BLV subjects. Then, we present results from a group interview in which we describe the strategies used by our subjects to read four common chart types. While our results suggest that familiar encodings based on visual perception studies can be useful in tactile graphics, our subjects also expressed a desire to use encodings designed explicitly for BLV people. Based on this study, we identify gaps between the perceptual expectations of common charts and the perceptual tools available in tactile perception. Then, we present a set of guidelines for the design of tactile graphics that accounts for these gaps. Supplemental material is available at https://osf.io/3nsfp/?view_only=7b7b8dcbae1d4c9a8bb4325053d13d9f. | true | true | [
"Areen Khalaila",
"Lane Harrison",
"Nam Wook Kim",
"Dylan Cashman"
] | [
"BP"
] | [
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://osf.io/3nsfp/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_a005eefe-c93e-4d8d-9e90-468a635c8887.html",
"icon": "other"
}
] |
Vis | 2025 | A Design Space for Multiscale Visualization | 10.1109/TVCG.2025.3634790 | Designing multiscale visualizations, particularly when the ratio between the largest scale and the smallest item is large, can be challenging, and designers have developed many approaches to overcome this challenge. We present a design space for visualization with multiple scales. The design space includes three dimensions, with eight total subdimensions. We demonstrate its descriptive power by using it to code approaches from a corpus we compiled of 52 examples, created by a mix of academics and practitioners. We demonstrate descriptive power by analyzing and partitioning these examples into four high-level strategies for designing multiscale visualizations, which are shared approaches with respect to design space dimension choices. We demonstrate generative power by analyzing missed opportunities within the corpus of examples, identified through analysis of the design space, where we note how certain examples could have benefited from different choices. We discuss patterns in the use of different dimension and strategy choices in the different visualization contexts of analysis and presentation. Supplemental materials: https://osf.io/wbrdm/ Design space website: https://marasolen.github.io/multiscale-vis-ds/ | true | true | [
"Mara Solen",
"Matt Oddo",
"Tamara Munzner"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2404.01485",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/wbrdm/",
"icon": "other"
},
{
"name": "Website",
"url": "https://marasolen.github.io/multiscale-vis-ds/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_09fe1b80-d4a6-454e-bd5f-d49b53eae041.html",
"icon": "other"
}
] |
Vis | 2025 | A Multimodal Framework for Understanding Collaborative Design Processes | 10.1109/TVCG.2025.3634232 | An essential task in analyzing collaborative design processes, such as those that are part of workshops in design studies, is identifying design outcomes and understanding how the collaboration between participants formed the results and led to decision-making. However, findings are typically restricted to a consolidated textual form based on notes from interviews or observations. A challenge arises from integrating different sources of observations, leading to large amounts and heterogeneity of collected data. To address this challenge, we propose a practical, modular, and adaptable framework of workshop setup, multimodal data acquisition, AI-based artifact extraction, and visual analysis. Our interactive visual analysis system, reCAPit, allows the flexible combination of different modalities, including video, audio, notes, or gaze, to analyze and communicate important workshop findings. A multimodal streamgraph displays activity and attention in the working area, temporally aligned topic cards summarize participants' discussions, and drill-down techniques allow inspecting raw data of included sources. As part of our research, we conducted six workshops across different themes ranging from social science research on urban planning to a design study on band-practice visualization. The latter two are examined in detail and described as case studies. Further, we present considerations for planning workshops and challenges that we derive from our own experience and the interviews we conducted with workshop experts. Our research extends existing methodology of collaborative design workshops by promoting data-rich acquisition of multimodal observations, combined AI-based extraction and interactive visual analysis, and transparent dissemination of results. | true | true | [
"Maurice Koch",
"Nelusa Pathmanathan",
"Daniel Weiskopf",
"Kuno Kurzhals"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.06117",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://doi.org/10.18419/DARUS-5166",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_18f5b9fb-e80f-415d-a701-6c7256d9711b.html",
"icon": "other"
}
] |
Vis | 2025 | A Rigorous Behavior Assessment of CNNs Using a Data-Domain Sampling Regime | 10.1109/TVCG.2025.3633829 | We present a data-domain sampling regime for quantifying CNNs' graphic perception behaviors. This regime lets us evaluate CNNs' ratio estimation ability in bar charts from three perspectives: sensitivity to training-test distribution discrepancies, stability to limited samples, and relative expertise to human observers. After analyzing 16 million trials from 800 CNN models and 6,825 trials from 113 human participants, we arrived at a simple and actionable conclusion: CNNs can outperform humans and their biases simply depend on the training-test distance. We show evidence of this simple, elegant behavior of the machines when they interpret visualization images. osf.io/gfqc3 provides registration, the code for our sampling regime, and experimental results. | false | true | [
"Shuning Jiang",
"Wei-Lun Chao",
"Daniel Haehn",
"Hanspeter Pfister",
"Jian Chen"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://www.arxiv.org/abs/2507.03866",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/gfqc3/",
"icon": "other"
},
{
"name": "Preregistration 1",
"url": "https://osf.io/t65su",
"icon": "other"
},
{
"name": "Preregistration 2",
"url": "https://osf.io/mzcky",
"icon": "other"
},
{
"name": "Preregistration 3",
"url": "https://osf.io/myt7g",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_72c11016-caee-44a4-82cf-1939cf5d21d3.html",
"icon": "other"
}
] |
Vis | 2025 | Affective color scales for colormap data visualizations | 10.1109/TVCG.2025.3634775 | Research on affective visualization design has shown that color is an especially powerful feature for influencing the emotional connotation of visualizations. Associations between colors and emotions are largely driven by lightness (e.g., lighter colors are associated with positive emotions, whereas darker colors are associated with negative emotions). Designing visualizations to have all light or all dark colors to convey particular emotions may work well for visualizations in which colors represent categories and spatial channels encode data values. However, this approach poses a problem for visualizations that use color to represent spatial patterns in data (e.g., colormap data visualizations) because lightness contrast is needed to reveal fine details in spatial structure. In this study, we found it is possible to design colormaps that have strong lightness contrast to support spatial vision while communicating clear affective connotation. We also found that affective connotation depended not only on the color scales used to construct the colormaps, but also on the frequency with which colors appeared in the map, as determined by the underlying dataset (data-dependence hypothesis). These results emphasize the importance of data-aware design, which accounts for not only the design features that encode data (e.g., colors, shapes, textures), but also how those design features are instantiated in a visualization, given the properties of the data. | false | true | [
"Halle Braun",
"Kushin Mukherjee",
"Seth Gorelik",
"Karen Schloss"
] | [
"HM"
] | [
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/p3bva_v1",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/SchlossVRL/color_scales_affect",
"icon": "code"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_c9fcc45e-0f81-49be-817e-b813e0b61c89.html",
"icon": "other"
}
] |
Vis | 2025 | Algorithmically-Assisted Schematic Transit Map Design: A System and Algorithmic Core for Fast Layout Iteration | 10.1109/TVCG.2025.3633910 | London's famous “tube map” is an iconic piece of design and perhaps represents the schematic visualization style most well-known to the general public: its octolinearity has become the de facto standard for transit maps around the world. Making a good schematic transit map is challenging and labour-intensive, and has attracted the attention of the optimization community. Much of the literature has focused on mathematically defining an optimal drawing and algorithms to compute one. However, achieving these “optimal” layouts is computationally challenging, often requiring multiple minutes of runtime. Crucially, what it means for a map to be good is actually highly dependent on factors that evade a general formal definition, like unique landmarks within the network, the context in which a map will be displayed, and the preference of the designer and client. Rather than attempting to make an algorithm that produces a single high-quality and ready-to-use metro map at great cost, we propose it is more fruitful to support rapid layout iteration by a human designer, providing a workflow that enables efficient exploration of a wider range of designs than could be done by hand, and iterating on these designs. To this end we identify steps in the design process of schematic maps that are tedious to do by hand but are algorithmically feasible, and present a framework around a simple linear program that computes network layouts almost instantaneously given a fixed direction for every connection. These connection directions are decided by a designer in a graphical user interface with several interaction methods and a number of quality-of-life features demonstrating the flexibility of the framework; the implementation is available as open source. | false | true | [
"Thomas C. van Dijk",
"Soeren Terziadis"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_f53bf658-7b1b-4417-9c0c-0b74e7455702.html",
"icon": "other"
}
] |
Vis | 2025 | An Analysis of Text Functions in Information Visualization | 10.1109/TVCG.2025.3634632 | Text is an integral but understudied component of visualization design. Although recent studies have examined how text elements (e.g., titles and annotations) influence comprehension, preferences, and predictions, many questions remain about textual design and use in practice. This paper introduces a framework for understanding text functions in information visualizations, building on and filling gaps in prior classifications and taxonomies. Through an analysis of 120 real-world visualizations and 804 text elements, we identified ten distinct text functions, ranging from identifying data mappings to presenting valenced subtext. We further identify patterns in text usage and conduct a factor analysis, revealing four overarching text-informed design strategies: Attribution and Variables, Annotation-Centric Design, Visual Embellishments, and Narrative Framing. In addition to these factors, we explore features of title rhetoric and text multifunctionality, while also uncovering previously unexamined text functions, such as text replacing visual elements. Our findings highlight the flexibility of text, demonstrating how different text elements in a given design can combine to communicate, synthesize, and frame visual information. This framework adds important nuance and detail to existing frameworks that analyze the diverse roles of text in visualization. | true | true | [
"Chase Stokes",
"Anjana Arunkumar",
"Marti Hearst",
"Lace Padilla"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.12334",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/swqfc/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_9188996c-0f8f-420b-95eb-1cf466e3fa0d.html",
"icon": "other"
}
] |
Vis | 2025 | An Autoethnography on Visualization Literacy: A Wicked Measurement Problem | 10.1109/TVCG.2025.3634792 | We contribute an autoethnographic reflection on the complexity of defining and measuring visualization literacy (i.e., the ability to interpret and construct visualizations) to expose our tacit thoughts that often exist in-between polished works and remain unreported in individual research papers. Our work is inspired by the growing number of empirical studies in visualization research that rely on visualization literacy as a basis for developing effective data representations or educational interventions. Researchers have already made various efforts to assess this construct, yet it is often hard to pinpoint either what we want to measure or what we are effectively measuring. In this autoethnography, we gather insights from 14 internal interviews with researchers who are users or designers of visualization literacy tests. We aim to identify what makes visualization literacy assessment a “wicked” problem. We further reflect on the fluidity of visualization literacy and discuss how this property may lead to misalignment between what the construct is and how measurements of it are used or designed. We also examine potential threats to measurement validity from conceptual, operational, and methodological perspectives. Based on our experiences and reflections, we propose several calls to action aimed at tackling the wicked problem of visualization literacy measurement, such as by broadening test scopes and modalities, improving test ecological validity, making it easier to use tests, seeking interdisciplinary collaboration, and drawing from continued dialogue on visualization literacy to expect and be more comfortable with its fluidity. | true | true | [
"Lily W. Ge",
"Anne-Flore Cabouat",
"Karen Bonilla",
"Yuan Cui",
"Yiren Ding",
"Noëlle Rakotondravony",
"Mackenzie Creamer",
"Jasmine Otto",
"Maryam Hedayati",
"Bum Chul Kwon",
"Angela Locoro",
"Lane Harrison",
"Petra Isenberg",
"Michael Correll",
"Matthew Kay"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/dfr4p_v2",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/xwr4c/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_7c5cabc5-b76c-4eba-be3a-c6fcfd508888.html",
"icon": "other"
}
] |
Vis | 2025 | An Intelligent Interactive Visual Analytics System for Exploring Large and Multi-Scale Pathology Images | 10.1109/TVCG.2025.3634833 | Pathology images are crucial for cancer diagnosis and treatment. Although artificial intelligence has driven rapid advancements in pathology image analysis, the interpretation of ultra-large and multi-scale pathology images in clinical practice still heavily relies on physicians' experience. Clinicians need to repeatedly zoom in and out on individual slides to compare and assess pathological details, a process that is both time-consuming and prone to visual fatigue. To address this, we present an intelligent interactive visual analytics system for pathology images. The system first employs a diffusion model to perform tissue segmentation on pathology images, then calculates pathological tissue proportions and morphological metrics. Finally, through multi-scale dynamic comparison and multi-level visual evaluation, the system facilitates comprehensive and precise analysis of pathology images. The system provides clinicians with an intelligent and interactive tool for pathology image interpretation, enabling efficient visualization and precise analysis of pathological details, thereby reducing the effort required for detailed analysis. | false | true | [
"Chaoqing Xu",
"Ruiqi Yang",
"Weihan Li",
"Xinyuan Fu",
"Liting Fang",
"Zunlei Feng",
"Can Wang",
"Mingli Song",
"Wei Chen"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_3ca02fd1-d387-44fa-b576-413ea269d949.html",
"icon": "other"
}
] |
Vis | 2025 | AortaDiff: Volume-Guided Conditional Diffusion Models for Multi-Branch Aortic Surface Generation | 10.1109/TVCG.2025.3634652 | Accurate 3D aortic construction is crucial for clinical diagnosis, preoperative planning, and computational fluid dynamics (CFD) simulations, as it enables the estimation of critical hemodynamic parameters such as blood flow velocity, pressure distribution, and wall shear stress. Existing construction methods often rely on large annotated training datasets and extensive manual intervention. While the resulting meshes can serve for visualization purposes, they struggle to produce geometrically consistent, well-constructed surfaces suitable for downstream CFD analysis. To address these challenges, we introduce AortaDiff, a diffusion-based framework that generates smooth aortic surfaces directly from CT/MRI volumes. AortaDiff first employs a volume-guided conditional diffusion model (CDM) to iteratively generate aortic centerlines conditioned on volumetric medical images. Each centerline point is then automatically used as a prompt to extract the corresponding vessel contour, ensuring accurate boundary delineation. Finally, the extracted contours are fitted into a smooth 3D surface, yielding a continuous, CFD-compatible mesh representation. AortaDiff offers distinct advantages over existing methods, including an end-to-end workflow, minimal dependency on large labeled datasets, and the ability to generate CFD-compatible aorta meshes with high geometric fidelity. Experimental results demonstrate that AortaDiff performs effectively even with limited training data, successfully constructing both normal and pathologically altered aorta meshes, including cases with aneurysms or coarctation. This capability enables the generation of high-quality visualizations and positions AortaDiff as a practical solution for cardiovascular research. | true | true | [
"Delin An",
"Pan Du",
"Jian-Xun Wang",
"Chaoli Wang"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/abs/2507.13404",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_8330bc4e-c9eb-41e0-8fa0-8043f0518241.html",
"icon": "other"
}
] |
Vis | 2025 | Automatic Semantic Alignment of Flow Pattern Representations for Exploration with Large Language Models | 10.1109/TVCG.2025.3634650 | Explorative flow visualization allows domain experts to analyze complex flow structures by interactively investigating flow patterns. However, traditional visual interfaces often rely on specialized graphical representations and interactions, which require additional effort to learn and use. Natural language interaction offers a more intuitive alternative, but teaching machines to recognize diverse scientific concepts and extract corresponding structures from flow data poses a significant challenge. In this paper, we introduce an automated framework that aligns flow pattern representations with the semantic space of large language models (LLMs), eliminating the need for manual labeling. Our approach encodes streamline segments using a denoising autoencoder and maps the generated flow pattern representations to LLM embeddings via a projector layer. This alignment empowers semantic matching between textual embeddings and flow representations through an attention mechanism, enabling the extraction of corresponding flow patterns based on textual descriptions. To enhance accessibility, we develop an interactive interface that allows users to query and visualize flow structures using natural language. Through case studies, we demonstrate the effectiveness of our framework in enabling intuitive and intelligent flow exploration. | false | true | [
"Weihan Zhang",
"Jun Tao"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.06300",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_d207baab-2afc-4d7a-bbfd-cf81768ec83c.html",
"icon": "other"
}
] |
Vis | 2025 | BDIViz: An Interactive Visualization System for Biomedical Schema Matching with LLM-Powered Validation | 10.1109/TVCG.2025.3634843 | Biomedical data harmonization is essential for enabling exploratory analyses and meta-studies, but the process of schema matching, identifying semantic correspondences between elements of disparate datasets (schemas), remains a labor-intensive and error-prone task. Even state-of-the-art automated methods often yield low accuracy when applied to biomedical schemas due to the large number of attributes and nuanced semantic differences between them. We present BDIViz, a novel visual analytics system designed to streamline the schema matching process for biomedical data. Through formative studies with domain experts, we identified key requirements for an effective solution and developed interactive visualization techniques that address both scalability challenges and semantic ambiguity. BDIViz employs an ensemble approach that combines multiple matching methods with LLM-based validation, summarizes matches through interactive heatmaps, and provides coordinated views that enable users to quickly compare attributes and their values. Our method-agnostic design allows the system to integrate various schema matching algorithms and adapt to application-specific needs. Through two biomedical case studies and a within-subject user study with domain experts, we demonstrate that BDIViz significantly improves matching accuracy while reducing cognitive load and curation time compared to baseline approaches. | true | true | [
"Eden Wu",
"Dishita Turakhia",
"Guande Wu",
"Christos Koutras",
"Sarah Keegan",
"Wenke Liu",
"Beata Szeitz",
"David Fenyo",
"Claudio Silva",
"Juliana Freire"
] | [] | [
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.16117",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/VIDA-NYU/bdi-viz",
"icon": "code"
},
{
"name": "Website",
"url": "https://bdiviz.users.hsrn.nyu.edu/dashboard/",
"icon": "project_webste"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_e8f720e9-fa87-43c1-9d57-b8cc150f3cd3.html",
"icon": "other"
}
] |
Vis | 2025 | Beyond Log Scales: Toward Cognitively Informed Bar Charts for Orders of Magnitude Values | 10.1109/TVCG.2025.3634795 | In this work, we challenge the dominant use of logarithmic scales to communicate values spanning multiple orders of magnitude, Orders of Magnitude Values (OMVs), to the general public. Focusing on bar charts, we incorporate cognitive insights into visualization design to better align with how humans perceive OMVs. Studies in cognitive psychology suggest that, for large numerical ranges such as millions and billions, people do not think logarithmically. Instead, they perceive numbers in a piecewise linear manner, grouping values into scale words (e.g., millions) and applying linear reasoning within each group. We build upon a recently introduced piecewise linear scale, EplusM, and validate its use in bar charts, which we refer to as EplusM bar charts. We also introduce two novel variants of the EplusM bar chart informed by findings in numerical perception: Bricks, which builds on the concepts of round numbers and subitizing, and Multi-Magnitude, which leverages categorical perception of large numbers. In a crowdsourced experiment, we evaluate four bar chart designs: 1) Log, 2) EplusM, 3) Bricks, and 4) Multi-Magnitude, across value retrieval and quantitative comparison tasks. Our results show that EplusM bar charts are significantly preferred over logarithmic designs, increase user confidence, and reduce perceived mental demand, while maintaining task performance. These findings suggest that EplusM bar charts can serve as effective alternatives to logarithmic ones when visualizing OMVs for general audiences. | true | true | [
"Katerina Batziakoudi",
"Stéphanie Rey",
"Jean-Daniel Fekete"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://hal.science/hal-05171203",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/ hybvp/?view_only=5cd17943e9ba46deb66a8f7f4eeeb4da",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_80f2d2e8-877c-469a-8ce2-bb3e5bea1f93.html",
"icon": "other"
}
] |
Vis | 2025 | Beyond Problem Solving: Framing and Problem-Solution Co-Evolution in Data Visualization Design | 10.1109/TVCG.2025.3633866 | Visualization design is often described as a process of solving a well-defined problem by navigating a design space. While existing visualization design models have provided valuable structure and guidance, they tend to foreground technical problem-solving and underemphasize the interpretive, judgment-based aspects of design. In contrast, research in other design disciplines has emphasized the importance of framing (how designers define and redefine what the problem is) and the co-evolution of problem and solution spaces through reflective practice. These dimensions remain underexplored in visualization research, particularly from the perspective of expert practitioners. This paper investigates how visualization designers frame problems and navigate the interplay between problem understanding and solution development. We conducted a mixed-methods study with 11 expert design practitioners using design challenges, diary entries, and semi-structured interviews. Through reflexive thematic analysis, we identified key strategies that participants used to frame design problems, reframe them in response to evolving constraints or insights, and construct bridges between problem and solution spaces. These included the use of metaphors, heuristics, sketching, primary generators, and reflective evaluation of failed or incomplete ideas. Our findings contribute an empirically grounded account of visualization design as a reflective, co-evolutionary practice. We show that framing is not a preliminary step, but a continuous activity embedded in the act of designing. Participants frequently shifted their understanding of the problem based on solution attempts, feedback from tools, and ethical or narrative concerns. These insights extend current visualization design models and highlight the need for frameworks that better account for framing and interpretive judgment. We conclude with implications for visualization research, education, and practice. In particular, we discuss how design education can better support framing and co-evolutionary thinking, and how visualization research can benefit from greater attention to the cognitive strategies and reflective processes that underpin expert design. | false | true | [
"Paul Parsons",
"Prakash Chandra Shukla"
] | [
"BP"
] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_930062ee-90b2-4c48-9896-626047d01771.html",
"icon": "other"
}
] |
Vis | 2025 | Beyond the Broadcast: Enhancing VR Tennis Broadcasting through Embedded Visualizations and Camera Techniques | 10.1109/TVCG.2025.3634638 | Virtual Reality (VR) broadcasting has emerged as a promising medium for providing immersive viewing experiences of major sports events such as tennis. However, current VR broadcast systems often lack an effective camera language and do not adequately incorporate dynamic, in-game visualizations, limiting viewer engagement and narrative clarity. To address these limitations, we analyze 400 out-of-play segments from eight major tennis broadcasts to develop a tennis-specific design framework that effectively combines cinematic camera movements with embedded visualizations. We further refine our framework by examining 25 cinematic VR animations, comparing their camera techniques with traditional tennis broadcasts to identify key differences and inform adaptations for VR. Based on data extracted from the broadcast videos, we reconstruct a simulated game that captures the players' and ball's motion and trajectories. Leveraging this design framework and processing pipeline, we develop Beyond the Broadcast, a VR tennis viewing system that integrates embedded visualizations with adaptive camera motions to construct a comprehensive and engaging narrative. Our system dynamically overlays tactical information and key match events onto the simulated environment, enhancing viewer comprehension and narrative engagement while ensuring perceptual immersion and viewing comfort. A user study involving tennis viewers demonstrates that our approach outperforms traditional VR broadcasting methods in delivering an immersive, informative viewing experience. | true | true | [
"Jun-Hsiang Yao",
"Jielin Feng",
"Xinfang Tian",
"Kai Xu",
"Gulshat Amirkhanova",
"Siming Chen"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.20006",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_d2d68e75-88a1-4f4a-a1c5-7d8cb6706524.html",
"icon": "other"
}
] |
Vis | 2025 | BondMatcher: H-Bond Stability Analysis in Molecular Systems | 10.1109/TVCG.2025.3634636 | This application paper investigates the stability of hydrogen bonds (H-bonds), as characterized by the Quantum Theory of Atoms in Molecules (QTAIM). First, we contribute a database of 4544 electron densities associated with four isomers of water hexamers (the so-called Ring, Book, Cage and Prism), generated by distorting their equilibrium geometry under various structural perturbations, modeling the natural dynamic behavior of molecular systems. Second, we present a new stability measure, called bond occurrence rate, associating each bond path present at equilibrium with its rate of occurrence within the input ensemble. We also provide an algorithm, called BondMatcher, for its automatic computation, based on a tailored, geometry-aware partial isomorphism estimation between the extremum graphs of the considered electron densities. Our new stability measure allows for the automatic identification of densities lacking H-bond paths, enabling further visual inspections. Specifically, the topological analysis enabled by our framework corroborates experimental observations and provides refined geometrical criteria for characterizing the disappearance of H-bond paths. Our electron density database and our C++ implementation are available at this address: https://github.com/thom-dani/BondMatcher. | false | true | [
"Thomas Daniel",
"Malgorzata Olejniczak",
"Julien Tierny"
] | [
"HM"
] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2504.03205",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_981234e2-bdbe-42ef-863d-00d6ea28561d.html",
"icon": "other"
}
] |
Vis | 2025 | Calli-VA: A Visual Analytics System for Analyzing and Comparing Chinese Calligraphic Styles | 10.1109/TVCG.2025.3634633 | Chinese calligraphy is a quintessential element of Chinese cultural heritage. Analyzing and comparing calligraphic styles not only enhances the appreciation, learning, and advancement of calligraphy but also provides valuable insights into ancient China. However, such analysis remains challenging due to the limited scalability and possible inconsistencies of qualitative methods, as well as usability and misalignment issues in conventional quantitative approaches. We propose Calli-VA, a visual analytics system, to address these challenges. Calli-VA extracts character images and their corresponding strokes from original works and characterizes each character using systematic criteria. During analysis, the system defines the analysis scope through an overview and uncovers relationships between characters. Explanation and recommendation mechanisms are integrated to help users understand patterns and guide further exploration. A documentation feature allows users to record and share their findings. We demonstrate the effectiveness of Calli-VA through three case studies and expert feedback. | false | true | [
"Jincheng Li",
"Jinpeng Wu",
"Shaocong Tan",
"Lin Du",
"Yu Zhang",
"Chaofan Yang",
"Jiadi Zhang",
"Rebecca Ruige Xu",
"Rui Shi",
"Lu Bai",
"Xiaoru Yuan"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_e61c0f73-21a5-4e16-bdb6-4038e74770ae.html",
"icon": "other"
}
] |
Vis | 2025 | Can LLMs Bridge Domain and Visualization? A Case Study on High-Dimension Data Visualization in Single-Cell Transcriptomics | 10.1109/TVCG.2025.3633869 | While many visualizations are built for domain users (e.g., biologists, machine learning developers), understanding how visualizations are used in the domain has long been a challenging task. Previous research has relied on either interviewing a limited number of domain users or reviewing relevant application papers in the visualization community, neither of which provides comprehensive insight into visualizations “in the wild” of a specific domain. This paper aims to fill this gap by examining the potential of using Large Language Models (LLM) to analyze visualization usage in domain literature. We use high-dimension (HD) data visualization in single-cell transcriptomics as a test case, analyzing 1,203 papers that describe 2,056 HD visualizations with highly specialized domain terminologies (e.g., biomarkers, cell lineage). To facilitate this analysis, we introduce a human-in-the-loop LLM workflow that can effectively analyze a large collection of papers and translate domain-specific terminology into standardized data and task abstractions. Instead of relying solely on LLMs for end-to-end analysis, our workflow enhances analytical quality through 1) integrating image processing and traditional NLP methods to prepare well-structured inputs for three targeted LLM subtasks (i.e., translating domain terminology, summarizing analysis tasks, and performing categorization), and 2) establishing checkpoints for human involvement and validation throughout the process. The analysis results, validated with expert interviews and a test set, revealed three often overlooked aspects in HD visualization: trajectories in HD spaces, inter-cluster relationships, and dimension clustering. This research provides a stepping stone for future studies seeking to use LLMs to bridge the gap between visualization design and domain-specific usage. | false | true | [
"Qianwen Wang",
"Xinyi Liu",
"Nils Gehlenborg"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/qtsak_v2?view_only=",
"icon": "paper"
},
{
"name": "Website",
"url": "https://hdvis.github.io",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_32bd145e-06bf-411a-af2e-109192108a7e.html",
"icon": "other"
}
] |
Vis | 2025 | Causality-based Visual Analytics of Sentiment Contagion in Social Media Topics | 10.1109/TVCG.2025.3633839 | Sentiment contagion occurs when attitudes toward one topic are influenced by attitudes toward others. Detecting and understanding this phenomenon is essential for analyzing topic evolution and informing social policies. Prior research has developed models to simulate the contagion process through hypothesis testing and has visualized user-topic correlations to aid comprehension. Nevertheless, the vast volume of topics and the complex interrelationships on social media present two key challenges: (1) efficient construction of large-scale sentiment contagion networks, and (2) in-depth explorations of these networks. To address these challenges, we introduce a causality-based framework that efficiently constructs and explains sentiment contagion. We further propose a map-like visualization technique that encodes time using a horizontal axis, enabling efficient visualization of causality-based sentiment flow while maintaining scalability through limitless spatial segmentation. Based on the visualization, we develop CausalMap, a system that supports analysts in tracing sentiment contagion pathways and assessing the influence of different demographic groups. Furthermore, we conduct comprehensive evaluations—including two use cases, a task-based user study, an expert interview, and an algorithm evaluation—to validate the usability and effectiveness of our approach. | false | true | [
"Renzhong Li",
"Shuainan Ye",
"Yuchen Lin",
"Buwei Zhou",
"Zhining Kang",
"Tai-Quan Peng",
"Wenhao Fu",
"Tan Tang",
"Yingcai Wu"
] | [
"BP"
] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_bf2f93ab-ab67-4a81-8726-6819e84553be.html",
"icon": "other"
}
] |
Vis | 2025 | CD-TVD: Contrastive Diffusion for 3D Super-Resolution with Scarce High-Resolution Time-Varying Data | 10.1109/TVCG.2025.3634787 | Large-scale scientific simulations require significant resources to generate high-resolution (HR) time-varying data (TVD). While super-resolution is an efficient post-processing strategy to reduce costs, existing methods rely on a large amount of HR training data, limiting their applicability to diverse simulation scenarios. To address this constraint, we propose CD-TVD, a novel framework that combines contrastive learning and an improved diffusion-based super-resolution model to achieve accurate 3D super-resolution from limited time-step high-resolution data. During pre-training on historical simulation data, the contrastive encoder and diffusion super-resolution modules learn degradation patterns and detailed features of high-resolution and low-resolution samples. In the training phase, the improved diffusion model with a local attention mechanism is fine-tuned using only one newly generated high-resolution timestep, leveraging the degradation knowledge learned by the encoder. This design minimizes the reliance on large-scale high-resolution datasets while maintaining the capability to recover fine-grained details. Experimental results on fluid and atmospheric simulation datasets confirm that CD-TVD delivers accurate and resource-efficient 3D super-resolution, marking a significant advancement in data augmentation for large-scale scientific simulations. The code is available at https://github.com/Xin-Gao-private/CD-TVD. | true | true | [
"Chongke Bi",
"Xin Gao",
"Jiakang Deng",
"Guan Li",
"Jun Han"
] | [
"HM"
] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.08173",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_4161480d-1e23-4eed-be76-3953fac207b7.html",
"icon": "other"
}
] |
Vis | 2025 | Characterizing Visualization Perception with Psychological Phenomena: Uncovering the Role of Subitizing in Data Visualization | 10.1109/TVCG.2025.3634807 | Understanding how people perceive visualizations is crucial for designing effective visual data representations; however, many heuristic design guidelines are derived from specific tasks or visualization types, without considering the constraints or conditions under which those guidelines hold. In this work, we aimed to assess existing design heuristics for categorical visualization using well-established psychological knowledge. Specifically, we examine the impact of the subitizing phenomenon in cognitive psychology (people's ability to automatically recognize a small set of objects instantly without counting) in data visualizations. We conducted three experiments with multi-class scatterplots (between 2 and 15 classes with varying design choices) across three different tasks (class estimation, correlation comparison, and clustering judgments) to understand how performance changes as the number of classes (and therefore set size) increases. Our results indicate that if the category number is smaller than six, people tend to perform well at all tasks, providing empirical evidence of subitizing in visualization. When category numbers increased, performance fell, with the magnitude of the performance change depending on task and encoding. Our study bridges the gap between heuristic guidelines and empirical evidence by applying well-established psychological theories, suggesting future opportunities for using psychological theories and constructs to characterize visualization perception. | false | true | [
"Arran Zeyu Wang",
"Ghulam Jilani Quadri",
"Mengyuan Zhu",
"Chin Tseng",
"Danielle Szafir"
] | [
"HM"
] | [
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://osf.io/y3z2b/?view_only=7f6569187b344fadbd11cc09a6e63d24",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_8aa31c36-374d-4155-b9c3-fd20d8deeec2.html",
"icon": "other"
}
] |
Vis | 2025 | Charts-of-Thought: Enhancing LLM Visualization Literacy Through Structured Data Extraction | 10.1109/TVCG.2025.3634813 | This paper evaluates the visualization literacy of modern Large Language Models (LLMs) and introduces a novel prompting technique called Charts-of-Thought. We tested three state-of-the-art LLMs (Claude-3.7-sonnet, GPT-4.5-preview, and Gemini-2.0-pro) on the Visualization Literacy Assessment Test (VLAT) using standard prompts and our structured approach. The Charts-of-Thought method guides LLMs through a systematic data extraction, verification, and analysis process before answering visualization questions. Our results show Claude-3.7-sonnet achieved a score of 50.17 using this method, far exceeding the human baseline of 28.82. This approach improved performance across all models, with score increases of 21.8% for GPT-4.5, 9.4% for Gemini-2.0, and 13.5% for Claude-3.7 compared to standard prompting. The performance gains were consistent across original and modified VLAT charts, with Claude correctly answering 100% of questions for several chart types that previously challenged LLMs. Our study reveals that modern multimodal LLMs can surpass human performance on visualization literacy tasks when given the proper analytical framework. These findings establish a new benchmark for LLM visualization literacy and demonstrate the importance of structured prompting strategies for complex visual interpretation tasks. Beyond improving LLM visualization literacy, Charts-of-Thought could also enhance the accessibility of visualizations, potentially benefiting individuals with visual impairments or lower visualization literacy. | false | true | [
"Amit Kumar Das",
"Mohammad Tarun",
"Klaus Mueller"
] | [] | [
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.04842",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/vhcailab/Charts-of-Thought",
"icon": "code"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_fcafb37e-70f2-4d68-b9a4-e92902d9d2bd.html",
"icon": "other"
}
] |
Vis | 2025 | ClimateSOM: A Visual Analysis Workflow for Climate Ensemble Datasets | 10.1109/TVCG.2025.3634788 | Ensemble datasets are ever more prevalent in various scientific domains. In climate science, ensemble datasets are used to capture variability in projections under plausible future conditions including greenhouse and aerosol emissions. Each ensemble model run produces projections that are fundamentally similar yet meaningfully distinct. Understanding this variability among ensemble model runs and analyzing its magnitude and patterns is a vital task for climate scientists. In this paper, we present ClimateSOM, a visual analysis workflow that leverages a self-organizing map (SOM) and Large Language Models (LLMs) to support interactive exploration and interpretation of climate ensemble datasets. The workflow abstracts climate ensemble model runs (spatiotemporal time series) into a distribution over a 2D space that captures the variability among the ensemble model runs using a SOM. LLMs are integrated to assist in sensemaking of this SOM-defined 2D space, the basis for the visual analysis tasks. In all, ClimateSOM enables users to explore the variability among ensemble model runs, identify patterns, compare and cluster the ensemble model runs. To demonstrate the utility of ClimateSOM, we apply the workflow to an ensemble dataset of precipitation projections over California and the Northwestern United States. Furthermore, we conduct a short evaluation of our LLM integration and an expert review of the visual workflow and the insights from the case studies with six domain experts to evaluate our approach and its utility. | false | true | [
"Yuya Kawakami",
"Daniel Cayan",
"Dongyu Liu",
"Kwan-Liu Ma"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_34e579da-1d6a-4ba7-a6e1-4644a0d2c0b7.html",
"icon": "other"
}
] |
Vis | 2025 | Cluster-Based Random Forest Visualization and Interpretation | 10.1109/TVCG.2025.3634260 | Random forests are a machine learning method used to automatically classify datasets and consist of a multitude of decision trees. While these random forests often have higher performance and generalize better than a single decision tree, they are also harder to interpret. This paper presents a visualization method and system to increase interpretability of random forests. We cluster similar trees which enables users to interpret how the model performs in general without needing to analyze each individual decision tree in detail, or interpret an oversimplified summary of the full forest. To meaningfully cluster the decision trees, we introduce a new distance metric that takes into account both the decision rules as well as the predictions of a pair of decision trees. We also propose two new visualization methods that visualize both clustered and individual decision trees: (1) The Feature Plot, which visualizes the topological position of features in the decision trees, and (2) the Rule Plot, which visualizes the decision rules of the decision trees. We demonstrate the efficacy of our approach through a case study on the “Glass” dataset, which is a relatively complex standard machine learning dataset, as well as a small user study. | true | true | [
"Max Sondag",
"Christofer Meinecke",
"Dennis Collaris",
"Tatiana von Landesberger",
"Stef van den Elzen"
] | [] | [
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.22665",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/maxie12/RandomForestVis",
"icon": "code"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_d3492f20-d356-4c55-992d-dbc0fa0fe0af.html",
"icon": "other"
}
] |
Vis | 2025 | Collaborating across Domains and Roles: An Interview Study of Visualization Design Practices | 10.1109/TVCG.2025.3634711 | Visualization design study is a widely adopted approach for developing tailored visual solutions to domain-specific problems through close interdisciplinary collaboration. While the visualization community has proposed generalizable frameworks, there is a growing need for domain-aware methodologies that address discipline-specific challenges and refine design study practices. To investigate how domain characteristics and collaborator roles influence the design study process, we conducted interviews with 15 experts, including domain specialists from the humanities, arts, applied sciences, and artificial intelligence, as well as visualization researchers and developers, with direct experience in design studies. Our findings reveal tensions and opportunities that arise from differing expectations, communication styles, and levels of engagement among collaborators at various stages of the design process, including problem formulation, co-design, and evaluation. We highlight how domain-specific norms and role dynamics shape collaboration and influence the trajectory of visualization projects. Based on these insights, we offer practical considerations to help visualization researchers anticipate domain-specific challenges, foster mutual understanding, and adapt their methods accordingly. Our study contributes to ongoing efforts to support more context-sensitive, sustainable, and inclusive design study practices across diverse application domains. | false | true | [
"Yiwen Xing",
"Maria Teresa Ortoleva",
"Rita Borgo",
"Alfie Abdul-Rahman"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://kclpure.kcl.ac.uk/portal/en/publications/collaborating-across-domains-and-roles-an-interview-study-of-visu",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_c7fde6c0-1dc4-4ad4-8413-50cc367ed451.html",
"icon": "other"
}
] |
Vis | 2025 | ConceptViz: A Visual Analytics Approach for Exploring Concepts in Large Language Models | 10.1109/TVCG.2025.3634806 | Large language models (LLMs) have achieved remarkable performance across a wide range of natural language tasks. Understanding how LLMs internally represent knowledge remains a significant challenge. Although Sparse Autoencoders (SAEs) have emerged as a promising technique for extracting interpretable features from LLMs, SAE features do not inherently align with human-understandable concepts, making their interpretation cumbersome and labor-intensive. To bridge the gap between SAE features and human concepts, we present ConceptViz, a visual analytics system designed for exploring concepts in LLMs. ConceptViz implements a novel Identification ⇒ Interpretation ⇒ Validation pipeline, enabling users to query SAEs using concepts of interest, interactively explore concept-to-feature alignments, and validate the correspondences through model behavior verification. We demonstrate the effectiveness of ConceptViz through two usage scenarios and a user study. Our results show that ConceptViz enhances interpretability research by streamlining the discovery and validation of meaningful concept representations in LLMs, ultimately aiding researchers in building more accurate mental models of LLM features. Our code and user guide are publicly available at https://github.com/Happy-Hippo209/conceptViz. | false | true | [
"Haoxuan Li",
"Zhen Wen",
"Qiqi Jiang",
"Chenxiao Li",
"Yuwei Wu",
"Yuchen Yang",
"Yiyao Wang",
"Xiuqi Huang",
"Minfeng Zhu",
"Wei Chen"
] | [
"HM"
] | [
"C",
"O"
] | [
{
"name": "Code",
"url": "https://github.com/Happy-Hippo209/ConceptViz",
"icon": "code"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_1f23efb5-deca-4391-a7c4-e455fce74cb9.html",
"icon": "other"
}
] |
Vis | 2,025 | Conch: Competitive Debate Analysis via Visualizing Clash Points and Hierarchical Strategies | 10.1109/TVCG.2025.3634629 | In-depth analysis of competitive debates is essential for participants to develop argumentative skills, refine strategies, and further improve their debating performance. However, manual analysis of unstructured and unlabeled textual records of debating is time-consuming and ineffective, as it is challenging to reconstruct contextual semantics and track logical connections from raw data. To address this, we propose Conch, an interactive visualization system that systematically analyzes both what is debated and how it is debated. In particular, we propose a novel parallel spiral visualization that compactly traces the multidimensional evolution of clash points and participant interactions throughout the debate process. In addition, we leverage large language models with well-designed prompts to automatically identify critical debate elements such as clash points, disagreements, viewpoints, and strategies, enabling participants to understand the debate context comprehensively. Finally, through two case studies on real-world debates and a carefully designed user study, we demonstrate Conch's effectiveness and usability for competitive debate analysis. | true | true | [
"Qianhe Chen",
"Yong WANG",
"Yixin Yu",
"Xiyuan Zhu",
"Xuerou Yu",
"Ran Wang"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.14482/",
"icon": "paper"
},
{
"name": "Website",
"url": "https://debate.datavizu.app/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_da138859-6a6c-4df2-813b-6a70e20d04a1.html",
"icon": "other"
}
] |
Vis | 2,025 | Correcting Misperceptions at a Glance: Using Data Visualizations to Reduce Political Sectarianism | 10.1109/TVCG.2025.3634777 | Political sectarianism is fueled in part by misperceptions of political opponents: People commonly overestimate the support for extreme policies among members of the other party. These misperceptions inflame partisan animosity and may be used to justify extremism among one's own party. Research suggests that correcting partisan misperceptions, by informing people about the actual views of outparty members, may reduce one's own expressed support for political extremism, including partisan violence and antidemocratic actions. However, there remains a limited understanding of how the design of correction interventions drives these effects. The present study investigated how correction effects depend on different representations of outparty views communicated through data visualizations. Building on prior interventions that present the average outparty view, we consider the impact of visualizations that more fully convey the range of views among outparty members. We conducted an experiment with U.S.-based participants from Prolific (N=239 Democrats, N=244 Republicans). Participants made predictions about support for political violence and undemocratic practices among members of their political outparty. They were then presented with data from an earlier survey on the actual views of outparty members. Some participants viewed only the average response (Mean-Only condition), while other groups were shown visual representations of the range of views from 75% of the outparty (Mean+Interval condition) or the full distribution of responses (Mean+Points condition). Compared to a control group that was not informed about outparty views, we observed the strongest correction effects (i.e., lower support for political violence and undemocratic practices) among participants in the Mean-Only and Mean+Points conditions, while correction effects were weaker in the Mean+Interval condition. In addition, participants who observed the full distribution of outparty views (Mean+Points condition) were most accurate at later recalling the degree of support among the outparty. Our findings suggest that data visualizations can be an important tool for correcting pervasive distortions in beliefs about other groups. However, the way in which variability in outparty views is visualized can significantly shape how people interpret and respond to corrective information. Supplemental materials for this paper are available at https://osf.io/8crsp/. | false | true | [
"Doug Markant",
"Subham Sah",
"Alireza Karduni",
"Milad Rogha",
"My Thai",
"Wenwen Dou"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/pdf/2508.00233",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/8crsp/",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/swt4c",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_ad323fdf-97c1-4e09-a814-a643243d280b.html",
"icon": "other"
}
] |
Vis | 2,025 | Critical Design Strategy: a Method for Heuristically Evaluating Visualisation Designs | 10.1109/TVCG.2025.3634783 | We present the Critical Design Strategy (CDS), a structured method designed to facilitate the examination of visualisation designs through reflection and critical thought. The CDS helps designers think critically and make informed improvements using heuristic evaluation. When developing a visual tool or pioneering a novel visualisation approach, identifying areas for enhancement can be challenging. Critical thinking is particularly crucial for visualisation designers and tool developers, especially those new to the field, such as students of visualisation in higher education. The CDS consists of three stages across six perspectives: Stage 1 captures the essence of the idea by assigning an indicative title and selecting five adjectives (from twenty options) to form initial impressions of the design. Stage 2 involves an in-depth critique using 30 heuristic questions spanning six key perspectives: user, environment, interface, components, design, and visual marks. Stage 3 focuses on synthesising insights, reflecting on design decisions, and determining the next steps forward. We introduce the CDS and explore its use across three visualisation modules in both undergraduate and postgraduate courses. Our longstanding experience with the CDS has allowed us to refine and develop it over time: from its initial creation through workshops in 2017/18 to improvements in wording and the development of two applications by 2020, followed by the expansion of support notes and refinement of heuristics through 2023, all while using it in our teaching each year. This sustained use allows us to reflect on its practical application and offer guidance on how others can incorporate it into their own work. | true | true | [
"Jonathan C. Roberts",
"Hanan Alnjar",
"Aron Owen",
"Panagiotis D. Ritsos"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.05325",
"icon": "paper"
},
{
"name": "Website",
"url": "https://cds-critical-design-strategy.github.io/index.html",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_70fe9706-bf4e-4d28-a8e4-fc5b25dca854.html",
"icon": "other"
}
] |
Vis | 2,025 | CrossSet: Unveiling the Complex Interplay of Two Set-typed Dimensions in Multivariate Data | 10.1109/TVCG.2025.3633897 | The interactive visual analysis of set-typed data, i.e., data with attributes that are of type set, is a rewarding area of research and applications. Valuable prior work has contributed solutions that enable the study of such data with individual set-typed dimensions. In this paper, we present CrossSet, a novel method for the joint study of two set-typed dimensions and their interplay. Based on a task analysis, we describe a new, multi-scale approach to the interactive visual exploration and analysis of such data. Two set-typed data dimensions are jointly visualized using a hierarchical matrix layout, enabling the analysis of the interactions between two set-typed attributes at several levels, in addition to the analysis of individual such dimensions. CrossSet is anchored at a compact, large-scale overview that is complemented by drill-down opportunities to study the relations between and within the set-typed dimensions, enabling an interactive visual multi-scale exploration and analysis of bivariate set-typed data. Such an interactive approach makes it possible to study single set-typed dimensions in detail, to gain an overview of the interaction and association between two such dimensions, to refine one of the dimensions to gain additional details at several levels, and to drill down to the specific interactions of individual set-elements from the set-typed dimensions. To demonstrate the effectiveness and efficiency of CrossSet, we have evaluated the new method in the context of several application scenarios. | false | true | [
"Kresimir Matkovic",
"Rainer Splechtna",
"Denis Gracanin",
"Helwig Hauser"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.00424",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_c663a254-dd80-4021-88aa-f0840fc3024e.html",
"icon": "other"
}
] |
Vis | 2,025 | Data Augmentation for Visualization Design Knowledge Bases | 10.1109/TVCG.2025.3634811 | Visualization knowledge bases enable computational reasoning and recommendation over a visualization design space. These systems evaluate design trade-offs using numeric weights assigned to different features (e.g., binning a variable). Feature weights can be learned automatically by fitting a model to a collection of chart pairs, in which one chart is deemed preferable to the other. To date, labeled chart pairs have been drawn from published empirical research results; however, such pairs are not comprehensive, resulting in a training corpus that lacks many design variants and fails to systematically assess potential trade-offs. To improve knowledge base coverage and accuracy, we contribute data augmentation techniques for generating and labeling chart pairs. We present methods to generate novel chart pairs based on design permutations and by identifying under-assessed features, leading to an expanded corpus with thousands of new chart pairs, now in need of labels. Accordingly, we next compare varied methods to scale labeling efforts to annotate chart pairs, in order to learn updated feature weights. We evaluate our methods in the context of the Draco knowledge base, demonstrating improvements to both feature coverage and chart recommendation performance. | true | true | [
"Hyeok Kim",
"Jeffrey Heer"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.02216",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/fqpdh/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_cf7e7577-abbd-426a-bf21-3f2fbffe5aad.html",
"icon": "other"
}
] |
Vis | 2,025 | Data Speaks, But Who Gives It a Voice? Understanding Persuasive Strategies in Data-Driven News Articles | 10.1109/TVCG.2025.3642509 | Data-driven news articles combine narrative storytelling with data visualizations to inform and influence public opinion on pressing societal issues. These articles often employ persuasive strategies, which are rhetorical techniques in narrative framing, visual rhetoric, or data presentation, to influence audience interpretation and opinion formation about the communicated information. While previous research has examined whether and when data visualizations persuade, the strategic choices made by persuaders remain largely unexplored. Addressing this gap, our work presents a taxonomy of persuasive strategies grounded in psychological theories and expert insights, categorizing 15 strategies across five dimensions: Credibility, Guided Interpretation, Reference-based Framing, Emotional Appeal, and Participation Invitation. To facilitate large-scale analysis, we curated a dataset of 936 data-driven news articles annotated with both persuasive strategies and their perceived effects. Leveraging this corpus, we developed a multimodal, multi-task learning model that jointly predicts the presence of persuasive strategies and their persuasive effects by incorporating both embedded (text and visualization) and explicit (visual narrative and psycholinguistic) features. Our evaluation demonstrates that our model outperforms state-of-the-art baselines in identifying persuasive strategies and measuring their effects. | false | true | [
"Zikai Li",
"Chuyi Zheng",
"Ziang Li",
"Yang Shi"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_9e9a15b7-6ccb-4419-b391-e82e3bd8b907.html",
"icon": "other"
}
] |
Vis | 2,025 | Dataset-Adaptive Dimensionality Reduction | 10.1109/TVCG.2025.3634784 | Selecting the appropriate dimensionality reduction (DR) technique and determining its optimal hyperparameter settings that maximize the accuracy of the output projections typically involves extensive trial and error, often resulting in unnecessary computational overhead. To address this challenge, we propose a dataset-adaptive approach to DR optimization guided by structural complexity metrics. These metrics quantify the intrinsic complexity of a dataset, predicting whether higher-dimensional spaces are necessary to represent it accurately. Since complex datasets are often inaccurately represented in two-dimensional projections, leveraging these metrics enables us to predict the maximum achievable accuracy of DR techniques for a given dataset, eliminating redundant trials in optimizing DR. We introduce the design and theoretical foundations of these structural complexity metrics. We quantitatively verify that our metrics effectively approximate the ground truth complexity of datasets and confirm their suitability for guiding dataset-adaptive DR workflow. Finally, we empirically show that our dataset-adaptive workflow significantly enhances the efficiency of DR optimization without compromising accuracy. | false | true | [
"Hyeon Jeon",
"Jeongin Park",
"Soohyun Lee",
"Dae Hyun Kim",
"Sungbok Shin",
"Jinwook Seo"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.11984",
"icon": "paper"
},
{
"name": "Website",
"url": "https://hyeonword.com/dadr/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_3d30c2f2-5e29-4f5f-a3bc-05c9fb3e4ea0.html",
"icon": "other"
}
] |
Vis | 2,025 | DataWink: Reusing and Adapting SVG-based Visualization Examples with Large Multimodal Models | 10.1109/TVCG.2025.3634635 | Creating aesthetically pleasing data visualizations remains challenging for users without design expertise or familiarity with visualization tools. To address this gap, we present DataWink, a system that enables users to create custom visualizations by adapting high-quality examples. Our approach combines large multimodal models (LMMs) to extract data encoding from existing SVG-based visualization examples, featuring an intermediate representation of visualizations that bridges primitive SVG and visualization programs. Users may express adaptation goals to a conversational agent and control the visual appearance through widgets generated on demand. With an interactive interface, users can modify both data mappings and visual design elements while maintaining the original visualization's aesthetic quality. To evaluate DataWink, we conduct a user study (N=12) with replication and free-form exploration tasks. As a result, DataWink is recognized for its learnability and effectiveness in personalized authoring tasks. Our results demonstrate the potential of example-driven approaches for democratizing visualization creation. | false | true | [
"Liwenhan Xie",
"Yanna Lin",
"Can Liu",
"Huamin Qu",
"Xinhuan Shu"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.17734",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_30cf9291-b159-4a33-95bd-47f194b893a0.html",
"icon": "other"
}
] |
Vis | 2,025 | Deconstructing Implicit Beliefs in Visual Data Journalism: Unstable Meanings Behind Data as Truth & Design for Insight | 10.1109/TVCG.2025.3634830 | We conduct a deconstructive reading of a qualitative interview study with 17 visual data journalists from newsrooms across the globe. We borrow a deconstruction approach from literary critique to explore the instability of meaning in language and reveal implicit beliefs in words and ideas. Through our analysis we surface two sets of opposing implicit beliefs in visual data journalism: objectivity/subjectivity and humanism/mechanism. We contextualize these beliefs through a genealogical analysis, which brings deconstruction theory into practice by providing a historic backdrop for these opposing perspectives. Our analysis shows that these beliefs held within visual data journalism are not self-enclosed but rather a product of external societal forces and paradigm shifts over time. Through this work, we demonstrate how thinking with critical theories such as deconstruction and genealogy can reframe “success” in visual data storytelling and diversify visualization research outcomes. These efforts push the ways in which we as researchers produce domain knowledge to examine the sociotechnical issues of today's values towards datafication and data visualization. All supplemental materials for this work are available at osf.io/5fr48. | false | true | [
"Ke Er Amy Zhang",
"Jodie Jenkinson",
"Laura Garrison"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.12377",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/5fr48/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_9d0c979c-02fc-47d5-9f8f-50fc27cf9c35.html",
"icon": "other"
}
] |
Vis | 2,025 | DeepVIS: Bridging Natural Language and Data Visualization Through Step-wise Reasoning | 10.1109/TVCG.2025.3634645 | Although data visualization is powerful for revealing patterns and communicating insights, creating effective visualizations requires familiarity with authoring tools and often disrupts the analysis flow. While large language models show promise for automatically converting analysis intent into visualizations, existing methods function as black boxes without transparent reasoning processes, which prevents users from understanding design rationales and refining suboptimal outputs. To bridge this gap, we propose integrating Chain-of-Thought (CoT) reasoning into the Natural Language to Visualization (NL2VIS) pipeline. First, we design a comprehensive CoT reasoning process for NL2VIS and develop an automatic pipeline to equip existing datasets with structured reasoning steps. Second, we introduce nvBench-CoT, a specialized dataset capturing detailed step-by-step reasoning from ambiguous natural language descriptions to finalized visualizations, which enables state-of-the-art performance when used for model fine-tuning. Third, we develop DeepVIS, an interactive visual interface that tightly integrates with the CoT reasoning process, allowing users to inspect reasoning steps, identify errors, and make targeted adjustments to improve visualization outcomes. Quantitative benchmark evaluations, two use cases, and a user study collectively demonstrate that our CoT framework effectively enhances NL2VIS quality while providing insightful reasoning steps to users. | false | true | [
"Zhihao Shuai",
"Boyan LI",
"siyu yan",
"Yuyu Luo",
"Weikai Yang"
] | [] | [
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/pdf/2508.01700",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/Bvivib-shuai/DeepVIS",
"icon": "code"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_f4073f80-c0bd-4d83-9c72-3dcfd060ef7f.html",
"icon": "other"
}
] |
Vis | 2,025 | Design Space and Declarative Grammar for 3D Genomic Data Visualization | 10.1109/TVCG.2025.3634654 | Various computational approaches predict chromatin structure, yielding concrete models that position genomic loci in physical space and help reveal genome organization and function. While prior visualization research has explored data and task abstractions for genomics, the design space for depicting these three-dimensional (3D) genome models—and associated genome-mapped data—remains unclear. In this paper, we investigate the visualization of genomic data with a spatial component. First, we systematically survey how 3D genome models are used and depicted in computational biology. We analyze over 300 papers with figures that visualize 3D genomic data and categorize the methods for visual representation. From this survey, we derive a design space for visualizing 3D genome data, identifying common patterns and key properties such as representation, visual channels, and composition. We position these findings within an existing genomics visualization taxonomy, refining and extending existing classifications. Second, we augment Gosling, a declarative visualization grammar for genomics, to support 3D genomic data. Our integration enables expressive authoring of visualizations that connect traditional genome-mapped information with 3D genome models, emphasizing their spatial characteristics. To demonstrate its utility, we employ our extended grammar to recreate interactive examples, showcasing its ability to represent complex visual designs. Comprehensive examples and an interactive editor are available at 3d.gosling-lang.org. | true | true | [
"David Kouřil",
"Trevor Manz",
"Sehi L'Yi",
"Nils Gehlenborg"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://doi.org/10.31219/osf.io/dtr6u_v1",
"icon": "paper"
},
{
"name": "Live Editor",
"url": "http://3d.gosling-lang.org",
"icon": "project_website"
},
{
"name": "3D Genome Survey",
"url": "https://manzt.github.io/3d-genome-survey/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_1af413bb-b7e1-4e7a-bbba-ec896ec760e7.html",
"icon": "other"
}
] |
Vis | 2,025 | Designing for Disclosure in Data Visualizations | 10.1109/TVCG.2025.3634781 | Visualizing data often entails data transformations that can reveal and hide information, operations we dub disclosure tactics. Whether designers hide information intentionally or as an implicit consequence of other design choices, tools and frameworks for visualization offer little explicit guidance on disclosure. To systematically characterize how visualizations can limit access to an underlying dataset, we contribute a content analysis of 425 examples of visualization techniques sampled from academic papers in the visualization literature, resulting in a taxonomy of disclosure tactics. Our taxonomy organizes disclosure tactics based on how they change the data representation underlying a chart, providing a systematic way to reason about design trade-offs in terms of what information is revealed, distorted, or hidden. We demonstrate the benefits of using our taxonomy by showing how it can guide reasoning in design scenarios where disclosure is a first-order consideration. Adopting disclosure as a framework for visualization research offers new perspective on authoring tools, literacy, uncertainty communication, personalization, and ethical design. | false | true | [
"Krisha Mehta",
"Gordon Kindlmann",
"Alex Kale"
] | [] | [
"C",
"O"
] | [
{
"name": "Code",
"url": "https://github.com/krisha-mehta/DisclosureInDataVis",
"icon": "code"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_184d62a7-fb82-4d6f-9819-6f552e1c6c28.html",
"icon": "other"
}
] |
Vis | 2,025 | DKMap: Interactive Exploration of Vision-Language Alignment in Multimodal Embeddings via Dynamic Kernel Enhanced Projection | 10.1109/TVCG.2025.3642641 | Examining vision-language alignment in multimodal embeddings is crucial for various tasks, such as evaluating generative models and filtering pretraining data. The intricate nature of high-dimensional features necessitates dimensionality reduction (DR) methods to explore alignment of multimodal embeddings. However, existing DR methods fail to account for cross-modal alignment metrics, resulting in severe occlusion of points with divergent metrics clustered together, inaccurate contour maps from over-aggregation, and insufficient support for multi-scale exploration. To address these problems, this paper introduces DKMap, a novel DR visualization technique for interactive exploration of multimodal embeddings through Dynamic Kernel enhanced projection. First, rather than performing dimensionality reduction and contour estimation sequentially, we introduce a kernel regression supervised t-SNE that directly integrates post-projection contour mapping into the projection learning process, ensuring cross-modal alignment mapping accuracy. Second, to enable multi-scale exploration with dynamic zooming and progressively enhanced local detail, we integrate a validation-constrained refinement of a generalized t-kernel with a quad-tree-based multi-resolution technique, ensuring reliable kernel parameter tuning without overfitting. DKMap is implemented as a multi-platform visualization tool, featuring a web-based system for interactive exploration and a Python package for computational notebook analysis. Quantitative comparisons with baseline DR techniques demonstrate DKMap's superiority in accurately mapping cross-modal alignment metrics. We further demonstrate the generalizability and scalability of DKMap with three usage scenarios, including visualizing a million-scale text-to-image corpus, comparatively evaluating generative models, and exploring a billion-scale pretraining dataset. | false | true | [
"Yilin Ye",
"Chenxi Ruan",
"Yu Zhang",
"Zikun Deng",
"Wei Zeng"
] | [] | [
"C",
"O"
] | [
{
"name": "Code",
"url": "https://github.com/HKUST-CIVAL/DKMap",
"icon": "code"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_3d649242-7596-4300-84fe-793b6a69b69c.html",
"icon": "other"
}
] |
Vis | 2,025 | Embodied Natural Language Interaction (NLI): Speech Input Patterns in Immersive Analytics | 10.1109/TVCG.2025.3634798 | Embodiment shapes how users verbally express intent when interacting with data through speech interfaces in immersive analytics. Despite growing interest in Natural Language Interactions (NLIs) for visual analytics in immersive environments, users' speech patterns and their use of embodiment cues in speech remain underexplored. Understanding their interplay is crucial to bridging the gap between users' intent and an immersive analytic system. To address this, we report the results from 15 participants in a user study conducted using the Wizard of Oz method. We performed axial coding on 1,280 speech acts derived from 734 utterances, examining how analysis tasks are carried out with embodiment and linguistic features. Next, we measured Speech Input Uncertainty for each analysis task using the semantic entropy of utterances, estimating how uncertain users' speech inputs appear to an analytic system. Through these analyses, we identified five speech input patterns, showing that users dynamically blend embodied and non-embodied speech acts depending on data analysis tasks, phases, and Embodiment Reliance driven by the counts and types of embodiment cues in each utterance. We then examined how these patterns align with user reflections on factors that challenge speech interaction during the study. Finally, we propose design implications aligned with the five patterns. | false | true | [
"Hyemi Song",
"Matthew Johnson",
"Kirsten Whitley",
"Eric Krokos",
"Amitabh Varshney"
] | [] | [
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://osf.io/sv4fn/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_33454738-a9a9-4944-a27f-262d1da9f35d.html",
"icon": "other"
}
] |
Vis | 2,025 | EmbryoProfiler: A Visual Clinical Decision Support System for IVF | 10.1109/TVCG.2025.3634780 | In-vitro fertilization (IVF) has become standard practice to address infertility, which affects more than one in ten couples in the US. However, current protocols yield relatively low success rates of about 20% per treatment cycle. A critical but complex and time-consuming step is the grading and selection of embryos for implantation. Although incubators with time-lapse microscopy have enabled computational analysis of embryo development, existing automated approaches either require extensive manual annotations or use opaque deep learning models that are hard for clinicians to validate and trust. We present EmbryoProfiler, a visual analytics system collaboratively developed with embryologists, biologists, and machine learning researchers to support clinicians in visually assessing embryo viability from time-lapse microscopy imagery. Our system incorporates a deep learning pipeline that automatically annotates microscopy images and extracts clinically interpretable features relevant for embryo grading. Our contributions include: (1) a semi-automatic, visualization-based workflow that guides clinicians through fertilization assessment, developmental timing evaluation, morphological inspection, and comparative analysis of embryos; (2) innovative interactive visualizations, such as cell-shape plots, designed to facilitate efficient analysis of morphological and developmental characteristics; and (3) an integrated, explainable machine learning classifier offering transparent, clinically-informed embryo viability scoring to predict live birth outcomes. Quantitative evaluation of our classifier and qualitative case studies conducted with practitioners demonstrate that EmbryoProfiler enables clinicians to make better-informed embryo selection decisions, potentially leading to improved clinical outcomes in IVF treatments. | false | true | [
"Johannes Knittel",
"Simon Warchol",
"Jakob Troidl",
"Camelia D. Brumar",
"Helen Yang",
"Eric Mörth",
"Robert Krüger",
"Daniel Needleman",
"Dalit Ben-Yosef",
"Hanspeter Pfister"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_28a70097-2b4f-4ac5-9bb0-7b452093c16b.html",
"icon": "other"
}
] |
Vis | 2,025 | EncQA: Benchmarking Vision-Language Models on Visual Encodings for Charts | 10.1109/TVCG.2025.3634249 | Multimodal vision-language models (VLMs) continue to achieve ever-improving scores on chart understanding benchmarks. Yet, we find that this progress does not fully capture the breadth of visual reasoning capabilities essential for interpreting charts. We introduce EncQA, a novel benchmark informed by the visualization literature, designed to provide systematic coverage of visual encodings and analytic tasks that are crucial for chart understanding. EncQA provides 2,076 synthetic question-answer pairs, enabling balanced coverage of six visual encoding channels (position, length, area, color quantitative, color nominal, and shape) and eight tasks (find extrema, retrieve value, find anomaly, filter values, compute derived value exact, compute derived value relative, correlate values, and correlate values relative). Our evaluation of 9 state-of-the-art VLMs reveals that performance varies significantly across encodings within the same task, as well as across tasks. Contrary to expectations, we observe that performance does not improve with model size for many task-encoding pairs. Our results suggest that advancing chart understanding requires targeted strategies addressing specific visual reasoning gaps, rather than solely scaling up model or dataset size. | false | true | [
"Kushin Mukherjee",
"Donghao Ren",
"Dominik Moritz",
"Yannick Assogba"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/pdf/2508.04650v1",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_9942014b-e0f8-4cae-9b2a-81167d226c7b.html",
"icon": "other"
}
] |
Vis | 2,025 | Enhancing Data Visualization Literacy: A Comparative Study of Learning Materials in Schools | 10.1109/TVCG.2025.3634817 | Interpreting data visualizations is an essential skill in today's education, yet students often struggle with understanding unfamiliar formats. This study investigates how four learning materials (textbook, comic, video, and game) affect middle- and high-school students' ability to interpret line charts, area charts, stacked area charts, and stream graphs. We conducted a comparative classroom study with 68 students, using pre- and post-tests, worksheet activities, and group discussions to assess learning outcomes and understanding. Our results show statistically significant improvement in students' understanding of stacked area charts and stream graphs, while no significant differences between the learning materials were found. This suggests that more factors than initially anticipated, such as engagement, motivation, and active learning strategies, influence the learning outcome. The analysis of the worksheets revealed that while students could infer surface-level insights from charts, over 70% struggled to identify underlying patterns or relationships. Additionally, a common challenge across all learning materials was reading fatigue, which often led students to skim content, disengage, or misinterpret key information. These findings highlight the need for educational tools and approaches that foster deeper understanding of unfamiliar visualizations, reduce cognitive load, and encourage active engagement. | true | true | [
"Magdalena Boucher",
"Magdalena Kejstova",
"Christina Stoiber",
"Martin Kandlhofer",
"Alena Boucher",
"Simone Kriglstein",
"Shelley Buchinger",
"Wolfgang Aigner"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://phaidra.fhstp.ac.at/detail/o:7302",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://phaidra.fhstp.ac.at/detail/o:7304",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_851f1b42-91da-4dcc-a08e-c6d8e564d251.html",
"icon": "other"
}
] |
Vis | 2,025 | Envisage: Towards Expressive Visual Graph Querying | 10.1109/TVCG.2025.3634234 | Graph querying is the process of retrieving information from graph data using specialized languages (e.g., Cypher), often requiring programming expertise. Visual Graph Querying (VGQ) streamlines this process by enabling users to construct and execute queries via an interactive interface without resorting to complex coding. However, current VGQ tools only allow users to construct simple and specific query graphs, limiting users' ability to interactively express their query intent, especially when that intent is underspecified. To address these limitations, we propose Envisage, an interactive visual graph querying system that enhances the expressiveness of VGQ in complex query scenarios by supporting intuitive graph structure construction and flexible parameterized rule specification. Specifically, Envisage comprises four stages: Query Expression allows users to interactively construct graph queries through intuitive operations; Query Verification enables the validation of constructed queries via rule verification and query instantiation; Progressive Query Execution executes queries progressively to ensure meaningful query results; and Result Analysis facilitates result exploration and interpretation. To evaluate Envisage, we conducted two case studies and in-depth user interviews with 14 graph analysts. The results demonstrate its effectiveness and usability in constructing, verifying, and executing complex graph queries. | true | true | [
"Xiaolin Wen",
"Qishuang Fu",
"Shuangyue Han",
"Yichen Guo",
"Joseph Liu",
"Yong WANG"
] | [] | [
"PW",
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.11999",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/Selvalim/VGQ-front",
"icon": "code"
},
{
"name": "Website",
"url": "https://wenxiaolin.com/_pages/envisage.html",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_ae49090b-23a7-453b-8d5d-eb0ad8893a08.html",
"icon": "other"
}
] |
Vis | 2,025 | Evaluating Judgment of Spatial Correlation in Visual Displays of Scalar Field Distributions | 10.1109/TVCG.2025.3633830 | In this work we study the identification of spatial correlation in distributions of 2D scalar fields, presented across different forms of visual displays. We study simple visual displays that directly show color-mapped scalar fields, namely those drawn from a distribution, and whether humans can identify strongly correlated spatial regions in these displays. In this setting, the recognition of correlation requires making judgments on a set of fields, rather than just one field. Thus, in our experimental design we compare two basic visualization designs: animation-based displays against juxtaposed views of scalar fields, across different choices of color scales. Moreover, we investigate the impacts of the distribution itself, controlling for the level of spatial correlation and discriminability in spatial scales. Our study's results illustrate the impacts of these distribution characteristics, while also highlighting how different visual displays impact the types of judgments made in assessing spatial correlation. Supplemental material is available at https://osf.io/zn4qy/ | false | true | [
"Yayan Zhao",
"Matthew Berger"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.17997",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/zn4qy",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_14e8969b-dbaf-41b3-8330-a0decc6b53b0.html",
"icon": "other"
}
] |
Vis | 2,025 | EventBox: A Novel Visual Encoding for Interactive Analysis of Temporal and Multivariate Attributes in Event Sequences | 10.1109/TVCG.2025.3633904 | The rapid growth and availability of event sequence data across domains require effective analysis and exploration methods to facilitate decision-making. Visual analytics combines computational techniques with interactive visualizations, enabling the identification of patterns, anomalies, and attribute interactions. However, existing approaches frequently overlook the interplay between temporal and multivariate attributes. We introduce EventBox, a novel data representation and visual encoding approach for analyzing groups of events and their multivariate attributes. We have integrated EventBox into Sequen-C, a visual analytics system for the analysis of event sequences. To enable the agile creation of EventBoxes in Sequen-C, we have added user-driven transformations, including alignment, sorting, substitution, and aggregation. To enhance analytical depth, we incorporate automatically generated statistical analyses, providing additional insight into the significance of attribute interactions. We evaluated our approach with 21 participants (3 domain experts, 18 novice data analysts), using the ICE-T framework to assess visualization value, user performance metrics on a series of tasks, and interactive sessions with domain experts. We also present three case studies with real-world healthcare data demonstrating how EventBox and its integration into Sequen-C reveal meaningful patterns, anomalies, and insights. These results demonstrate that our work advances visual analytics by providing a flexible solution for exploring temporal and multivariate attributes in event sequences. | false | true | [
"Luis Rene Montana Gonzalez",
"Jessica Magallanes",
"Miguel A Juarez",
"Suzanne Mason",
"Andrew Narracott",
"Lindsey van Gemeren",
"Steven Wood",
"Maria-Cruz Villa-Uriol"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.14685",
"icon": "paper"
},
{
"name": "Website",
"url": "http://bit.ly/3IyEUI6",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_29b8a92c-ef97-44df-b315-a903fe54fa53.html",
"icon": "other"
}
] |
Vis | 2,025 | Exploring 3D Unsteady Flow using 6D Observer Space Interactions | 10.1109/TVCG.2025.3642506 | Visualizing and analyzing 3D unsteady flow fields is a very challenging task. We approach this problem by leveraging the mathematical foundations of 3D observer fields to explore and analyze 3D flows in reference frames that are more suitable to visual analysis than the input reference frame. We design novel interactive tools for determining, filtering, and combining reference frames for observer-aware 3D unsteady flow visualization. We represent the space of reference frame motions in a 3D spatial domain via a 6D parameter space, in which every observer is a time-dependent curve. Our framework supports operations in this 6D observer space by separately focusing on two 3D subspaces, for 3D translations, and 3D rotations, respectively. We show that this approach facilitates a variety of interactions with 3D flow fields. Building on the interactive selection of observers, we furthermore introduce novel techniques such as observer-aware streamline- and pathline-filtering as well as observer-aware isosurface animations of scalar fluid properties for the enhanced visualization and analysis of 3D unsteady flows. We discuss the theoretical underpinnings as well as practical implementation considerations of our approach, and demonstrate the benefits of its 6+1D observer-based methodology on several 3D unsteady flow datasets. | true | true | [
"Xingdi Zhang",
"Amani Ageeli",
"Thomas Theußl",
"Markus Hadwiger",
"Peter Rautek"
] | [
"HM"
] | [
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://vccvisualization.org/research/observerspaces/",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/Cindy-xdZhang/PyflowVis",
"icon": "code"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_fb22d5c6-8a2f-4d98-80ec-1fbbfed18538.html",
"icon": "other"
}
] |
Vis | 2,025 | Eye of the Beholder: Towards Measuring Visualization Complexity | 10.1109/TVCG.2025.3634789 | Constructing expressive and legible visualizations is a key activity for visualization designers. While numerous design guidelines exist, research on how specific graphical features affect perceived visual complexity remains limited. In this paper, we report on a crowdsourced study to collect human ratings of perceived complexity for diverse visualizations. Using these ratings as ground truth, we then evaluated three methods to estimate this perceived complexity: image analysis metrics, multilinear regression using manually coded visualization features, and automated feature extraction using a large language model (LLM). Image complexity metrics showed no correlation with human-perceived visualization complexity. Manual feature coding produced a reasonable predictive model but required substantial effort. In contrast, a zero-shot LLM (GPT-4o mini) demonstrated strong capabilities in both rating complexity and extracting relevant features. Our findings suggest that visualization complexity is truly in the eye of the beholder, yet can be effectively approximated using zero-shot LLM prompting, offering a scalable approach for evaluating the complexity of visualizations. The dataset and code for the study and data analysis can be found at https://osf.io/w85a4/. | true | true | [
"Johannes Ellemose",
"Niklas Elmqvist"
] | [] | [
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://osf.io/w85a4/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_a610d354-3cae-4df3-ac69-db5a5a7572a6.html",
"icon": "other"
}
] |
Vis | 2,025 | F-Hash: Feature-Based Hash Design for Time-Varying Volume Visualization via Multi-Resolution Tesseract Encoding | 10.1109/TVCG.2025.3634812 | Interactive time-varying volume visualization is challenging due to complex spatiotemporal features and the sheer size of the datasets. Recent works transform the original discrete time-varying volumetric data into continuous Implicit Neural Representations (INR) to address the issues of compression, rendering, and super-resolution in both spatial and temporal domains. However, training the INR takes a long time to converge, especially when handling large-scale time-varying volumetric datasets. In this work, we propose F-Hash, a novel feature-based multi-resolution Tesseract encoding architecture that greatly enhances convergence speed compared with existing input encoding methods for modeling time-varying volumetric data. The proposed design incorporates multi-level collision-free hash functions that map dynamic 4D multi-resolution embedding grids without bucket waste, achieving high encoding capacity with compact encoding parameters. Our encoding method is agnostic to time-varying feature detection methods, making it a unified encoding solution for feature tracking and evolution visualization. Experiments show that F-Hash achieves state-of-the-art convergence speed in training on various time-varying volumetric datasets with diverse features. We also propose an adaptive ray marching algorithm to optimize sample streaming for faster rendering of the time-varying neural representation. | true | true | [
"Jianxin Sun",
"David Lenz",
"Hongfeng Yu",
"Tom Peterka"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.03836",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_72737da9-0932-41f6-99a2-803817cf8c15.html",
"icon": "other"
}
] |
Vis | 2,025 | F2Stories: A Modular Framework for Multi-Objective Optimization of Storylines with a Focus on Fairness | 10.1109/TVCG.2025.3634228 | Storyline visualizations represent character interactions over time. When these characters belong to different groups, a new research question emerges: how can we balance optimization of readability across the groups while preserving the overall narrative structure of the story? Traditional algorithms that optimize global readability metrics (like minimizing crossings) can introduce quality biases between the different groups based on their cardinality and other aspects of the data. These biases have visible consequences: characters from minority groups become disproportionately harder to follow, and important characters are visually deprioritized when their curves become entangled with numerous secondary characters. We present F2Stories, a modular framework that addresses these challenges in storylines by offering three complementary optimization modes: (1) fairnessMode ensures that no group bears a disproportionate burden of visualization complexity regardless of their representation in the story; (2) focusMode allows prioritizing a group of characters while maintaining good readability for secondary characters; and (3) standardMode globally optimizes classical aesthetic metrics. Our approach is based on Mixed Integer Linear Programming (MILP), offering optimality guarantees, precise balancing of competing metrics through weighted objectives, and the flexibility to incorporate complex fairness concepts as additional constraints without the need to redesign the entire algorithm. We conducted an extensive experimental analysis to demonstrate how F2Stories enables fairer or focus-group-prioritized storyline visualizations while maintaining adherence to established layout constraints. Our evaluation includes comprehensive results from a detailed case study that shows the effectiveness of our approach in real-world narrative contexts. An open access copy of this paper and all supplemental materials are available at osf.io/e2qvy. | false | true | [
"Tommaso Piselli",
"Giuseppe Liotta",
"Fabrizio Montecchiani",
"Martin Nöllenburg",
"Sara Di Bartolomeo"
] | [
"HM"
] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://www.ac.tuwien.ac.at/files/tr/ac-tr-25-001.pdf",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/e2qvy/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_bbfc6b4e-96d6-4981-87da-8bb285d256dc.html",
"icon": "other"
}
] |
Vis | 2,025 | FlowForge: Guiding the Creation of Multi-Agent Workflows with Design Space Visualization as a Thinking Scaffold | 10.1109/TVCG.2025.3634627 | Multi-agent workflows have become an effective strategy for tackling complicated tasks by decomposing them into multiple sub-tasks and assigning them to specialized agents. However, designing optimal workflows remains challenging due to the vast and intricate design space. Current practices rely heavily on the intuition and expertise of practitioners, often resulting in design fixation or unstructured, time-consuming trial-and-error exploration. To address these challenges, this work introduces FlowForge, an interactive visualization tool that facilitates the creation of multi-agent workflows through i) a structured visual exploration of the design space and ii) in-situ guidance informed by established design patterns. Based on formative studies and a literature review, FlowForge organizes the workflow design process into three hierarchical levels (i.e., task planning, agent assignment, and agent optimization), ranging from abstract to concrete. This structured visual exploration enables users to seamlessly move from high-level planning to detailed design decisions and implementations, while comparing alternative solutions across multiple performance metrics. Additionally, drawing from established workflow design patterns, FlowForge provides context-aware, in-situ suggestions at each level as users navigate the design space, enhancing the workflow creation process with practical guidance. Use cases and user studies demonstrate the usability and effectiveness of FlowForge, while also yielding valuable insights into how practitioners explore design spaces and leverage guidance during workflow development. | false | true | [
"Pan Hao",
"Dongyeop Kang",
"Nicholas Hinds",
"Qianwen Wang"
] | [] | [
"PW",
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.15559",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/Visual-Intelligence-UMN/FlowForge",
"icon": "code"
},
{
"name": "Website",
"url": "https://vis-flow-forge-demo.vercel.app",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_ed3195e2-8726-4d85-acb7-c5ed2dc361bb.html",
"icon": "other"
}
] |
Vis | 2,025 | From Vision to Touch: Bridging Visual and Tactile Principles for Accessible Data Representation | 10.1109/TVCG.2025.3634254 | Tactile graphics are widely used to present maps and statistical diagrams to blind and low vision (BLV) people, with accessibility guidelines recommending their use for graphics where spatial relationships are important. Their use is expected to grow with the advent of commodity refreshable tactile displays. However, in stark contrast to visual information graphics, we lack a clear understanding of the benefits that well-designed tactile information graphics offer over text descriptions for BLV people. To address this gap, we introduce a framework considering the three components of encoding, perception and cognition to examine the known benefits for visual information graphics and explore their applicability to tactile information graphics. This work establishes a preliminary theoretical foundation for the tactile-first design of information graphics and identifies future research avenues. | true | true | [
"Kim Marriott",
"Matthew Butler",
"Leona Holloway",
"William Jolley",
"Bongshin Lee",
"Bruce Maguire",
"Danielle Szafir"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_544233de-86be-4bc4-ab7e-2e5de7d366b2.html",
"icon": "other"
}
] |
Vis | 2,025 | GALE: Leveraging Heterogeneous Systems for Efficient Unstructured Mesh Data Analysis | 10.1109/TVCG.2025.3634637 | Unstructured meshes present challenges in scientific data analysis due to irregular distribution and complex connectivity. Computing and storing connectivity information is a major bottleneck for visualization algorithms, affecting both time and memory performance. Recent task-parallel data structures address this by precomputing connectivity information at runtime while the analysis algorithm executes, effectively hiding computation costs and improving performance. However, existing approaches are CPU-bound, forcing the data structure and analysis algorithm to compete for the same computational resources, limiting potential speedups. To overcome this limitation, we introduce a novel task-parallel approach optimized for heterogeneous CPU-GPU systems. Specifically, we offload the computation of mesh connectivity information to GPU threads, enabling CPU threads to focus on executing the visualization algorithm. Following this paradigm, we propose GPU-Aided Localized data structurE (GALE), the first open-source CUDA-based data structure designed for heterogeneous task parallelism. Experiments on two 20-core CPUs and an NVIDIA V100 GPU show that GALE achieves up to $2.7\times$ speedup over state-of-the-art localized data structures while maintaining memory efficiency. | false | true | [
"Guoxi Liu",
"Thomas Randall",
"Rong Ge",
"Federico Iuricich"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.15230",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/zxm4w",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_5cec6278-7356-4658-8125-0dc6b65f8b08.html",
"icon": "other"
}
] |
Vis | 2,025 | GhostUMAP2: Measuring and Analyzing (r,d)-Stability of UMAP | 10.1109/TVCG.2025.3633894 | Despite the widespread use of Uniform Manifold Approximation and Projection (UMAP), the impact of its stochastic optimization process on the results remains underexplored. We observed that it often produces unstable results where the projections of data points are determined mostly by chance rather than reflecting neighboring structures. To address this limitation, we introduce $(r,d)$-stability to UMAP: a framework that analyzes the stochastic positioning of data points in the projection space. To assess how stochastic elements—specifically, initial projection positions and negative sampling—impact UMAP results, we introduce “ghosts”, or duplicates of data points representing potential positional variations due to stochasticity. We define a data point's projection as $(r,d)$-stable if its ghosts perturbed within a circle of radius $r$ in the initial projection remain confined within a circle of radius $d$ for their final positions. To efficiently compute the ghost projections, we develop an adaptive dropping scheme that reduces runtime by up to 60% compared to an unoptimized baseline while maintaining approximately 90% of unstable points. We also present a visualization tool that supports the interactive exploration of the $(r,d)$-stability of data points. Finally, we demonstrate the effectiveness of our framework by examining the stability of projections of real-world datasets and present usage guidelines for the effective use of our framework. | true | true | [
"Myeongwon Jung",
"Takanori Fujiwara",
"Jaemin Jo"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.17174",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_28a61ed9-3f15-4835-a688-e72b1fd6fa0c.html",
"icon": "other"
}
] |
Vis | 2,025 | GoFish: a Grammar of More Graphics! | 10.1109/TVCG.2025.3634250 | Visualization grammars from ggplot2 to Vega-Lite are based on the Grammar of Graphics (GoG), our most comprehensive formal theory of visualization. The GoG helped expand the expressive gamut of visualization by moving beyond fixed chart types and towards a design space of composable operators. Yet, the resultant design space has surprising limitations, inconsistencies, and cliffs; even seemingly simple charts like mosaics, waffles, and ribbons fall outside the scope of most GoG implementations. To author such charts, visualization designers must either rely on overburdened grammar developers to implement purpose-built mark types (thus reintroducing the issues of typologies) or drop to lower-level frameworks. In response, we present GoFish: a declarative visualization grammar that formalizes Gestalt principles (e.g., uniform spacing, containment, and connection) that have heretofore been complected in GoG constructs. These graphical operators achieve greater expressive power than their predecessors by enabling recursive composition: they can be nested and overlapped arbitrarily. Through a diverse example gallery, we demonstrate how graphical operators free users to arrange shapes in many different ways while retaining the benefits of high-level grammars like scale resolution and coordinate transform management. Recursive composition naturally yields an infinite design space that blurs the boundary between an expressive, low-level grammar and a concise, high-level one. In doing so, we point towards an updated theory of visualization, one that is open to an innumerable space of graphic representations instead of limited to a fixed set of good designs. | true | true | [
"Josh Pollock",
"Arvind Satyanarayan"
] | [] | [
"PW",
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://github.com/starfish-graphics/gofish-graphics",
"icon": "other"
},
{
"name": "Website",
"url": "https://gofish.graphics/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_12e7281f-5e70-4d16-a9a0-a87a0d46df5c.html",
"icon": "other"
}
] |
Vis | 2,025 | Graphical Perception of Icon Arrays versus Bar Charts for Value Comparisons in Health Risk Communication | 10.1109/TVCG.2025.3634630 | Visualizations support critical decision making in domains like health risk communication. This is particularly important for those at higher health risks and their care providers, allowing for better risk interpretation which may lead to more informed decisions. However, the kinds of visualizations used to represent data may impart biases that influence data interpretation and decision making. Both continuous representations using bar charts and discrete representations using icon arrays are pervasive in health risk communication, but express the same quantities using fundamentally different visual paradigms. We conducted a series of studies to investigate how bar charts, icon arrays, and their layout (juxtaposed, explicit encoding, explicit encoding plus juxtaposition) affect the perception of value comparison and subsequent decision-making in health risk communication. Our results suggest that icon arrays and explicit encoding combined with juxtaposition can optimize for both accurate difference estimation and perceptual biases in decision making. We also found misalignment between estimation accuracy and decision making, as well as between low and high literacy groups, emphasizing the importance of tailoring visualization approaches to specific audiences and evaluating visualizations beyond perceptual accuracy alone. This research contributes empirically-grounded design recommendations to improve comparison in health risk communication and support more informed decision-making across domains. | false | true | [
"Jade Kandel",
"Jiayi Liu",
"Arran Zeyu Wang",
"Chin Tseng",
"Danielle Szafir"
] | [] | [
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://osf.io/4nhtf/?view_only=b5168623250a44aa826ab027df42d87b",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_1e9e6140-6939-402f-8781-8420814824be.html",
"icon": "other"
}
] |
Vis | 2,025 | GSCache: Real-Time Radiance Caching for Volume Path Tracing using 3D Gaussian Splatting | 10.1109/TVCG.2025.3634634 | Real-time path tracing is rapidly becoming the standard for rendering in entertainment and professional applications. In scientific visualization, volume rendering plays a crucial role in helping researchers analyze and interpret complex 3D data. Recently, photorealistic rendering techniques have gained popularity in scientific visualization, yet they face significant challenges. One of the most prominent issues is slow rendering performance and high pixel variance caused by Monte Carlo integration. In this work, we introduce a novel radiance caching approach for path-traced volume rendering. Our method leverages advances in volumetric scene representation and adapts 3D Gaussian splatting to function as a multi-level, path-space radiance cache. This cache is designed to be trainable on the fly, dynamically adapting to changes in scene parameters such as lighting configurations and transfer functions. By incorporating our cache, we achieve less noisy, higher-quality images without increasing rendering costs. To evaluate our approach, we compare it against a baseline path tracer that supports uniform sampling and next-event estimation and the state-of-the-art for neural radiance caching. Through both quantitative and qualitative analyses, we demonstrate that our path-space radiance cache is a robust solution that is easy to integrate and significantly enhances the rendering quality of volumetric visualization applications while maintaining comparable computational efficiency. | false | true | [
"David Bauer",
"Qi Wu",
"Hamid Gadirov",
"Kwan-Liu Ma"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.19718",
"icon": "paper"
},
{
"name": "Website",
"url": "https://dbauer15.github.io/papers/gscache/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_99d47ba0-22e8-4d68-9484-346e6fc722b8.html",
"icon": "other"
}
] |
Vis | 2,025 | Here's what you need to know about my data: Exploring Expert Knowledge's Role in Data Analysis | 10.1109/TVCG.2025.3634821 | Data-driven decision making has become a popular practice in science, industry, and public policy. Yet data alone, as an imperfect and partial representation of reality, is often insufficient to make good analysis decisions. Knowledge about the context of a dataset, its strengths and weaknesses, and its applicability for certain tasks is essential. Analysts are often not only familiar with the data itself, but also have data hunches about their analysis subject. In this work, we present an interview study with analysts from a wide range of domains and with varied expertise and experience, inquiring about the role of contextual knowledge. We provide insights into how data is insufficient in analysts' workflows and how they incorporate other sources of knowledge into their analysis. We analyzed how knowledge of data shaped their analysis outcome. Based on the results, we suggest design opportunities to better and more robustly consider both knowledge and data in analysis processes. | true | true | [
"Haihan Lin",
"Maxim Lisnic",
"Derya Akbaba",
"Miriah Meyer",
"Alexander Lex"
] | [
"HM"
] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/dn32z_v2",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/f89jp/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_849ca631-398a-4087-9f66-a0837fc3af86.html",
"icon": "other"
}
] |
Vis | 2,025 | How do Data Journalists Design Maps to Tell Stories? | 10.1109/TVCG.2025.3633911 | Maps are essential to news media as they provide a familiar way to convey spatial context and present engaging narratives. However, the design of journalistic maps may be challenging, as editorial teams need to balance multiple aspects, such as aesthetics, the audience's expected data literacy, tight publication deadlines, and the team's technical skills. Data journalists often come from multiple areas and lack a cartography, data visualization, and data science background, limiting their competence in creating maps. While previous studies have examined spatial visualizations in data stories, this research seeks to gain a deeper understanding of the map design process employed by news outlets. To achieve this, we strive to answer two specific research questions: What is the design space of journalistic maps? And how do editorial teams produce journalistic map articles? To answer the first one, we collected and analyzed a large corpus of 462 journalistic maps used in news articles from five major news outlets published over three months. As a result, we created a design space comprised of eight dimensions that involved both properties describing the articles' aspects and the visual/interactive features of maps. We approach the second research question via semi-structured interviews with four data journalists who create data-driven articles daily. Through these interviews, we identified the most common design rationales made by editorial teams and potential gaps in current practices. We also collected the practitioners' feedback on our design space to externally validate it. With these results, we aim to provide researchers and journalists with empirical data to design and study journalistic maps. | false | true | [
"Arlindo Gomes",
"Emilly Brito",
"Luiz Morais",
"Nivan Ferreira"
] | [] | [
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://osf.io/bhn6e/?view_only=6a130b6323364a1db514a3df45b94647",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_6c986f26-eabe-4109-b6ab-7ba4c489451d.html",
"icon": "other"
}
] |
Vis | 2,025 | HypoChainer: A Collaborative System Combining LLMs and Knowledge Graphs for Hypothesis-Driven Scientific Discovery | 10.1109/TVCG.2025.3633887 | Modern scientific discovery faces challenges in integrating the rapidly expanding and diverse knowledge required for exploring novel knowledge in biology. While traditional hypothesis-driven research has proven effective, it is constrained by human cognitive limitations, knowledge complexity, and the high costs of trial-and-error experimentation. Deep learning models, particularly graph neural networks (GNNs), have accelerated scientific progress. However, the vast predictions generated make manual selection for experimental validation impractical. Attempts to leverage large language models (LLMs) for filtering predictions and generating novel hypotheses have been impeded by issues such as hallucinations and the lack of structured knowledge grounding, which undermine their reliability. To address these challenges, we propose HypoChainer, a collaborative visualization framework that integrates human expertise, LLM-driven reasoning, and knowledge graphs (KGs) to enhance scientific discovery visually. HypoChainer operates through three key stages: (1) Contextual Exploration: Domain experts employ retrieval-augmented LLMs (RAGs) and visualizations to extract insights and research focuses from vast GNN predictions, supplemented by interactive explanations for in-depth understanding; (2) Hypothesis Construction: Experts iteratively explore the KG information relevant to the predictions and hypothesis-aligned entities, gaining knowledge and insights while refining the hypothesis through suggestions from LLMs; and (3) Validation Selection: Predictions are prioritized based on the refined hypothesis chains and KG-supported evidence, identifying high-priority candidates for validation. The hypothesis chains are further optimized through visual analytics of the retrieval results. We evaluated the effectiveness of HypoChainer in hypothesis construction and scientific discovery through a case study and expert interviews. | false | true | [
"Shaohan Shi",
"Haoran Jiang",
"Yunjie Yao",
"Chang Jiang",
"Quan Li"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.17209",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_c06df49a-9aa2-40f9-94b9-23789ec137b8.html",
"icon": "other"
}
] |
Vis | 2,025 | InsightChaser: Enhancing Visual Reasoning of Sports Tactical Visualization with Visual-Text Linking | 10.1109/TVCG.2025.3634639 | In sports analytics, tactical visualization is widely used to convey valuable insights. However, due to the complex domain knowledge and contextual information involved in tactical visualizations, it is challenging for users to connect high-level tactical insights to corresponding visual patterns. This requires users to engage in a reasoning process to interpret insights within game contexts, which remains insufficiently supported in existing visual-text linking studies. In this work, we propose InsightChaser, a novel approach to bridge tactical insights and soccer visualizations through visual-text linking and visual reasoning enhancement. InsightChaser constructs knowledge graphs to represent both visual elements and contextual game information. Integrating large language models (LLMs), our approach retrieves relevant visual elements and establishes explicit links with insights. Moreover, InsightChaser utilizes LLMs to enhance these visual-text links by providing reasoning explanations and visual effects. We further develop an interactive visualization system that supports navigation and explanation of enhanced visual-text links. Users can explore linked tactical insights interactively and reason through enhanced visual explanations. We conduct two case studies using real-world soccer data and a user study to demonstrate the effectiveness of our approach. | false | true | [
"Ziao Liu",
"Wenshuo Zhao",
"Xiao Xie",
"Anqi Cao",
"Yihong Wu",
"Hui Zhang",
"Yingcai Wu"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_ce206043-1fa9-45b7-801b-2910c1dec585.html",
"icon": "other"
}
] |
Vis | 2,025 | Interactive Composition Operators: An Alternative Approach for Selecting Linear Embedding Parameters | 10.1109/TVCG.2025.3634874 | Linear embeddings support interactive visual exploration by mapping high-dimensional (nD) data into a two-dimensional space. Despite their popularity, selecting meaningful projection parameters remains a key challenge due to the infinite $2n$-dimensional parameter space. Once an informative projection is found, users often seek similar ones that emphasize specific items differently while preserving global structure. For instance: Do clusters become outliers under slight changes? Can grouped items separate, or merge, through parameter adjustments? Which changes to the embedding parameters lead to such projections, and do they exist at all? Answering these questions efficiently is critical for effective visual search. Yet, current methods, such as projection tours or manual parameter tuning, are time-consuming and risk overlooking important views, including those of specific interest. We propose Composition Operators, a mathematical foundation for a novel set-of-point manipulation concept for linear embeddings, such as Star Coordinates, as an alternative approach to selecting informative embedding parameters in a more controllable manner with respect to the desired outcome. Users specify item-based constraints on the projection result; the corresponding $2n$ parameters are then derived automatically, eliminating the need to exhaustively search the entire parameter space to get a similar outcome. Neither the embedding space nor the set of parameters is altered; only the mechanism for navigating and selecting parameters is redefined. We provide closed-form solutions for this and demonstrate our interactive prototype on nD datasets from the UCI repository. | true | true | [
"Dirk Lehmann",
"Kai M. Blum",
"Manuel Rubio-Sánchez",
"Konrad Simon"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_b43c7d8b-fdda-4a75-818d-e1fb4e4dfe31.html",
"icon": "other"
}
] |
Vis | 2,025 | Interactive Hybrid Rice Breeding with Parametric Dual Projection | 10.1109/TVCG.2025.3634640 | Hybrid rice breeding crossbreeds different rice lines and cultivates the resulting hybrids in fields to select those with desirable agronomic traits, such as higher yields. Recently, genomic selection has emerged as an efficient way for hybrid rice breeding. It predicts the traits of hybrids based on their genes, which helps exclude many undesired hybrids, largely reducing the workload of field cultivation. However, due to the limited accuracy of genomic prediction models, breeders still need to combine their experience with the models to identify regulatory genes that control traits and select hybrids, which remains a time-consuming process. To ease this process, in this paper, we proposed a visual analysis method to facilitate interactive hybrid rice breeding. Regulatory gene identification and hybrid selection naturally form a dual-analysis task. Therefore, we developed a parametric dual projection method with theoretical guarantees to facilitate interactive dual analysis. Based on this dual projection method, we further developed a gene visualization and a hybrid visualization to verify the identified regulatory genes and hybrids. The effectiveness of our method is demonstrated through the quantitative evaluation of the parametric dual projection method, identified regulatory genes and desired hybrids in the case study, and positive feedback from breeders. | false | true | [
"Changjian Chen",
"Pengcheng Wang",
"Fei Lyu",
"Zhuo Tang",
"Li Yang",
"Long Wang",
"Yong Cai",
"Feng Yu",
"Kenli Li"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.11848",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_4009fec8-7d38-4e00-9e7d-f9493fb32941.html",
"icon": "other"
}
] |
Vis | 2,025 | Investigating the Effects of Augmented Reality on Message Credibility When Visualizing Environmental Impacts | 10.1109/TVCG.2025.3634786 | Augmented reality (AR) has increasingly been used to communicate environmental impacts, offering greater engagement than conventional displays. However, its effect on message credibility (how much people believe in the content of the communication) remains unclear. In a preregistered study, we compared the perceived credibility of environmental information presented via visualizations on an AR headset or a desktop display. We created display-specific visual encodings (3D concrete for AR, 2D bar charts for desktop) and added two control conditions to cross display and encoding. We found no difference in message credibility between AR and desktop, though concrete AR was rated most engaging. Supplementary material is available at https://osf.io/n4p5c/. | false | true | [
"Aymeric Ferron",
"Ambre Assor",
"Pierre Dragicevic",
"Yvonne Jansen"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://hal.science/hal-05200516/document",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/n4p5c/files/osfstorage",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/3djhs",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_86dbd3cf-0467-4e0e-8612-b70910cdefbc.html",
"icon": "other"
}
] |
Vis | 2,025 | Locally Adapted Reference Frame Fields using Moving Least Squares | 10.1109/TVCG.2025.3634845 | The detection and analysis of features in fluid flow are important tasks in fluid mechanics and flow visualization. One recent class of methods to approach this problem is to first compute objective optimal reference frames, relative to which the input vector field becomes as steady as possible. However, existing methods either optimize locally over a fixed neighborhood, which might not match the extent of interesting features well, or perform global optimization, which is costly. We propose a novel objective method for the computation of optimal reference frames that automatically adapts to the flow field locally, without having to choose neighborhoods a priori. We enable adaptivity by formulating this problem as a moving least squares approximation, through which we determine a continuous field of reference frames. To incorporate fluid features into the computation of the reference frame field, we introduce the use of a scalar guidance field into the moving least squares approximation. The guidance field determines a curved manifold on which a regularly sampled input vector field becomes a set of irregularly spaced samples, which then forms the input to the moving least squares approximation. Although the guidance field can be any scalar field, by using a field that corresponds to flow features the resulting reference frame field will adapt accordingly. We show that using an FTLE field as the guidance field results in a reference frame field that adapts better to local features in the flow than prior work. However, our moving least squares framework is formulated in a very general way, and therefore other types of guidance fields could be used in the future to adapt to local fluid features. | false | true | [
"Julio Rey Ramirez",
"Peter Rautek",
"Tobias Günther",
"Markus Hadwiger"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_b71e9ec2-8579-410a-94c0-a9e2812a2712.html",
"icon": "other"
}
] |
Vis | 2,025 | MisVisFix: An Interactive Dashboard for Detecting, Explaining, and Correcting Misleading Visualizations using Large Language Models | 10.1109/TVCG.2025.3633884 | Misleading visualizations pose a significant challenge to accurate data interpretation. While recent research has explored the use of Large Language Models (LLMs) for detecting such misinformation, practical tools that also support explanation and correction remain limited. We present MisVisFix, an interactive dashboard that leverages both Claude and GPT models to support the full workflow of detecting, explaining, and correcting misleading visualizations. MisVisFix correctly identifies 96% of visualization issues and addresses all 74 known visualization misinformation types, classifying them as major, minor, or potential concerns. It provides detailed explanations, actionable suggestions, and automatically generates corrected charts. An interactive chat interface allows users to ask about specific chart elements or request modifications. The dashboard adapts to newly emerging misinformation strategies through targeted user interactions. User studies with visualization experts and developers of fact-checking tools show that MisVisFix accurately identifies issues and offers useful suggestions for improvement. By transforming LLM-based detection into an accessible, interactive platform, MisVisFix advances visualization literacy and supports more trustworthy data communication. | false | true | [
"Amit Kumar Das",
"Klaus Mueller"
] | [] | [
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.04679",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/vhcailab/MisVisFix",
"icon": "code"
},
{
"name": "Website",
"url": "http://167.71.222.168/",
"icon": "project_webste"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_e7cc7c10-4310-4753-b586-467106cdc883.html",
"icon": "other"
}
] |
Vis | 2,025 | Mixture of Cluster-guided Experts for Retrieval-Augmented Label Placement | 10.1109/TVCG.2025.3642518 | Text labels are widely used to convey auxiliary information in visualization and graphic design. The substantial variability in the categories and structures of labeled objects leads to diverse label layouts. Recent single-model learning-based solutions in label placement struggle to capture fine-grained differences between these layouts, which in turn limits their performance. In addition, although human designers often consult previous works to gain design insights, existing label layouts typically serve merely as training data, limiting the extent to which embedded design knowledge can be exploited. To address these challenges, we propose a mixture of cluster-guided experts (MoCE) solution for label placement. In this design, multiple experts jointly refine layout features, with each expert responsible for a specific cluster of layouts. A cluster-based gating function assigns input samples to experts based on representation clustering. We implement this idea through the Label Placement Cluster-guided Experts (LPCE) model, in which a MoCE layer integrates multiple feed-forward networks (FFNs), with each expert composed of a pair of FFNs. Furthermore, we introduce a retrieval augmentation strategy into LPCE, which retrieves and encodes reference layouts for each input sample to enrich its representations. Extensive experiments demonstrate that LPCE achieves superior performance in label placement, both quantitatively and qualitatively, surpassing a range of state-of-the-art baselines. Our algorithm is available at https://github.com/PingshunZhang/LPCE. | true | true | [
"Pingshun Zhang",
"Enyu Che",
"Yinan Chen",
"Bingyao Huang",
"Haibin Ling",
"Jingwei Qu"
] | [] | [
"PW",
"O"
] | [
{
"name": "Website",
"url": "https://jingweiqu.github.io/project/LPCE",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_4becbe1b-2bde-4b12-adfd-e45a36678884.html",
"icon": "other"
}
] |
Vis | 2,025 | MoE-INR: Implicit Neural Representation with Mixture-of-Experts for Time-Varying Volumetric Data Compression | 10.1109/TVCG.2025.3633893 | Implicit neural representations (INRs) have emerged as a transformative paradigm for time-varying volumetric data compression and representation, owing to their ability to model high-dimensional signals effectively. INRs represent scalar fields based on sampled coordinates, typically using either a single network for the entire field or multiple networks across different spatial domains. However, these approaches often face challenges in modeling complex patterns and introducing boundary artifacts. To address these limitations, we propose MoE-INR, an INR architecture based on a mixture-of-experts (MoE) framework. MoE-INR automates irregular subdivisions of spatiotemporal fields and dynamically assigns them to different expert networks. The architecture comprises three key components: a policy network, a shared encoder, and multiple expert decoders. The policy network subdivides the field and determines which expert decoder is responsible for a given input coordinate. The shared encoder extracts hidden representations from the input coordinates, and the expert decoders transform these high-dimensional features into scalar values. This design results in a unified framework accommodating diverse INR types, including conventional, grid-based, and ensemble. We evaluate the effectiveness of MoE-INR on multiple time-varying datasets with varying characteristics. Experimental results demonstrate that MoE-INR significantly outperforms existing non-MoE and MoE-based INRs and traditional lossy compression methods across quantitative and qualitative metrics under various compression ratios. | false | true | [
"Jun Han",
"Kaiyuan Tang",
"Chaoli Wang"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_9af6703f-fc27-4be6-a995-f84e8091b517.html",
"icon": "other"
}
] |
Vis | 2,025 | MoMo - Combining Neuron Morphology and Connectivity for Interactive Motif Analysis in Connectomes | 10.1109/TVCG.2025.3634808 | Connectomics, a subfield of neuroscience, reconstructs structural and functional brain maps at synapse-level resolution. These complex spatial maps consist of tree-like neurons interconnected by synapses. Motif analysis is a widely used method for identifying recurring subgraph patterns in connectomes. These motifs, thus, potentially represent fundamental units of information processing. However, existing computational tools often oversimplify neurons as mere nodes in a graph, disregarding their intricate morphologies. In this paper, we introduce MoMo, a novel interactive visualization framework for analyzing neuron morphology-aware motifs in large connectome graphs. First, we propose an advanced graph data structure that integrates both neuronal morphology and synaptic connectivity. This enables highly efficient, parallel subgraph isomorphism searches, allowing for interactive morphological motif queries. Second, we develop a sketch-based interface that facilitates the intuitive exploration of morphology-based motifs within our new data structure. Users can conduct interactive motif searches on state-of-the-art connectomes and visualize results as interactive 3D renderings. We present a detailed goal and task analysis for motif exploration in connectomes, incorporating neuron morphology. Finally, we evaluate MoMo through case studies with four domain experts, who assess the tool's usefulness and effectiveness in motif exploration, and its relevance to real-world neuroscience research. The source code for MoMo is available here. | false | true | [
"Michael Shewarega",
"Jakob Troidl",
"Oliver Alvarado Rodriguez",
"Mohammad Dindoost",
"Philipp Harth",
"Hannah Haberkern",
"Johannes Stegmaier",
"David Bader",
"Hanspeter Pfister"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://www.biorxiv.org/content/10.1101/2025.07.02.662847v1",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/VCG/momo",
"icon": "codd"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_bba75087-a544-4e3c-85f9-947b8b3537dc.html",
"icon": "other"
}
] |
Vis | 2,025 | Mosaic Selections: Managing and Optimizing User Selections for Scalable Data Visualization Systems | 10.1109/TVCG.2025.3634649 | Though powerful tools for analysis and communication, interactive visualizations often fail to support real-time interaction with large datasets with millions or more records. To highlight and filter data, users indicate values or intervals of interest. Such selections may span multiple components, combine in complex ways, and require optimizations to ensure low-latency updates. We describe Mosaic Selections, a model for representing, managing, and optimizing user selections, in which one or more filter predicates are added to queries that request data for visualizations and input widgets. By analyzing both queries and selection predicates, Mosaic Selections enable automatic optimizations, including pre-aggregating data to rapidly compute selection updates. We contribute a formal description of our selection model and optimization methods, and their implementation in the open-source Mosaic architecture. Benchmark results demonstrate orders-of-magnitude latency improvements for selection-based optimizations over unoptimized queries and existing optimizers for the Vega language. The Mosaic Selection model provides infrastructure for flexible, interoperable filtering across multiple visualizations, alongside automatic optimizations to scale to millions and even billions of records. | true | true | [
"Jeffrey Heer",
"Dominik Moritz",
"Ron Pechuk"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.19690",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://github.com/uwdata/mosaic-selection-benchmarks",
"icon": "other"
},
{
"name": "Website",
"url": "https://idl.uw.edu/mosaic",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_603070d4-693a-4aec-8b08-632a59a24c59.html",
"icon": "other"
}
] |
Vis | 2,025 | Motif Simplification for BioFabric Network Visualizations: Improving Pattern Recognition and Interpretation | 10.1109/TVCG.2025.3634266 | Detecting and interpreting common patterns in relational data is crucial for understanding complex topological structures across various domains. These patterns, or network motifs, can often be detected algorithmically. However, visual inspection remains vital for exploring and discovering patterns. This paper focuses on presenting motifs within BioFabric network visualizations—a unique technique that opens opportunities for research on scaling to larger networks, design variations, and layout algorithms to better expose motifs. Our goal is to show how highlighting motifs can assist users in identifying and interpreting patterns in BioFabric visualizations. To this end, we leverage existing motif simplification techniques. We replace edges with glyphs representing fundamental motifs such as staircases, cliques, paths, and connector nodes. The results of our controlled experiment and usage scenarios demonstrate that motif simplification for BioFabric is useful for detecting and interpreting network patterns. Our participants were faster and more confident using the simplified view without sacrificing accuracy. The efficacy of our current motif simplification approach depends on which extant layout algorithm is used. We hope our promising findings on user performance will motivate future research on layout algorithms tailored to maximizing motif presentation. Our supplemental material is available at https://osf.io/f8s3g/?view_only=7e2df9109dfd4e6c85b89ed828320843. | false | true | [
"Johannes Fuchs",
"Cody Dunne",
"Maria-Viktoria Heinle",
"Daniel Keim",
"Sara Di Bartolomeo"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/5d9q6_v1",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/f8s3g/?view_only=7e2df9109dfd4e6c85b89ed828320843",
"icon": "other"
},
{
"name": "Website",
"url": "https://biofabric.dbvis.de/motifs/",
"icon": "project_webste"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_e14aa8da-3794-45ba-9dd7-5e0ced3449c7.html",
"icon": "other"
}
] |
Vis | 2,025 | Natural Language-Driven Viewpoint Navigation for Volume Exploration via Semantic Block Representation | 10.1109/TVCG.2025.3634651 | Exploring volumetric data is crucial for interpreting scientific datasets. However, selecting optimal viewpoints for effective navigation can be challenging, particularly for users without extensive domain expertise or familiarity with 3D navigation. In this paper, we propose a novel framework that leverages natural language interaction to enhance volumetric data exploration. Our approach encodes volumetric blocks to capture and differentiate underlying structures. It further incorporates a CLIP Score mechanism, which provides semantic information to the blocks to guide navigation. The navigation is empowered by a reinforcement learning framework that leverages these semantic cues to efficiently search for and identify desired viewpoints that align with the user's intent. The selected viewpoints are evaluated using CLIP Score to ensure that they best reflect the user queries. By automating viewpoint selection, our method improves the efficiency of volumetric data navigation and enhances the interpretability of complex scientific phenomena. | false | true | [
"Xuan Zhao",
"Jun Tao"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_3f463b80-5efc-4150-8b6f-3f5fbb1aed72.html",
"icon": "other"
}
] |
Vis | 2,025 | Neighborhood-Preserving Voronoi Treemaps | 10.1109/TVCG.2025.3633905 | Voronoi treemaps are used to depict nodes and their hierarchical relationships simultaneously. However, in addition to the hierarchical structure, data attributes, such as co-occurring features or similarities, frequently exist. Examples include geographical attributes like shared borders between countries or contextualized semantic information such as embedding vectors derived from large language models. In this work, we introduce a Voronoi treemap algorithm that leverages data similarity to generate neighborhood-preserving treemaps. First, we extend the treemap layout pipeline to consider similarity during data preprocessing. We then use a Kuhn-Munkres matching of similarities to centroidal Voronoi tessellation (CVT) cells to create initial Voronoi diagrams with equal cell sizes for each level. Greedy swapping is used to improve the neighborhoods of cells to match the data's similarity further. During optimization, cell areas are iteratively adjusted to their respective sizes while preserving the existing neighborhoods. We demonstrate the practicality of our approach through multiple real-world examples drawn from infographics and linguistics. To quantitatively assess the resulting treemaps, we employ treemap metrics and measure neighborhood preservation. | false | true | [
"Patrick Paetzold",
"Rebecca Kehlbeck",
"Yumeng Xue",
"Bin Chen",
"Yunhai Wang",
"Oliver Deussen"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.03445",
"icon": "paper"
},
{
"name": "Website",
"url": "https://graphics.uni-konstanz.de/voronoitreemap/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_093d9538-804e-4951-a5bf-dab72e99c35d.html",
"icon": "other"
}
] |
Vis | 2,025 | NLI4VolVis: Natural Language Interaction for Volume Visualization via LLM Multi-Agents and Editable 3D Gaussian Splatting | 10.1109/TVCG.2025.3633888 | Traditional volume visualization (VolVis) methods, like direct volume rendering, suffer from rigid transfer function designs and high computational costs. Although novel view synthesis approaches enhance rendering efficiency, they require additional learning effort for non-experts and lack support for semantic-level interaction. To bridge this gap, we propose NLI4VolVis, an interactive system that enables users to explore, query, and edit volumetric scenes using natural language. NLI4VolVis integrates multi-view semantic segmentation and vision-language models to extract and understand semantic components in a scene. We introduce a multi-agent large language model architecture equipped with extensive function-calling tools to interpret user intents and execute visualization tasks. The agents leverage external tools and declarative VolVis commands to interact with the VolVis engine powered by 3D editable Gaussians, enabling open-vocabulary object querying, real-time scene editing, best-view selection, and 2D stylization. We validate our system through case studies and a user study, highlighting its improved accessibility and usability in volumetric data exploration. We strongly recommend readers check out our case studies, demo video, and source code at https://nli4volvis.github.io/. | true | true | [
"Kuangshi Ai",
"Kaiyuan Tang",
"Chaoli Wang"
] | [
"BP"
] | [
"PW",
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.12621",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/KuangshiAi/nli4volvis",
"icon": "code"
},
{
"name": "Website",
"url": "https://nli4volvis.github.io/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_d22e24df-9562-498b-96d4-e9b19056c72d.html",
"icon": "other"
}
] |
Vis | 2,025 | OW-CLIP: Data-Efficient Visual Supervision for Open-World Object Detection via Human-AI Collaboration | 10.1109/TVCG.2025.3633870 | Open-world object detection (OWOD) extends traditional object detection to identifying both known and unknown objects, necessitating continuous model adaptation as new annotations emerge. Current approaches face significant limitations: 1) data-hungry training due to reliance on a large number of crowdsourced annotations, 2) susceptibility to “partial feature overfitting,” and 3) limited flexibility due to required model architecture modifications. To tackle these issues, we present OW-CLIP, a visual analytics system that provides curated data and enables data-efficient OWOD model incremental training. OW-CLIP implements plug-and-play multimodal prompt tuning tailored for OWOD settings and introduces a novel “Crop-Smoothing” technique to mitigate partial feature overfitting. To meet the data requirements for the training methodology, we propose dual-modal data refinement methods that leverage large language models and cross-modal similarity for data generation and filtering. Simultaneously, we develop a visualization interface that enables users to explore and deliver high-quality annotations—including class-specific visual feature phrases and fine-grained differentiated images. Quantitative evaluation demonstrates that OW-CLIP achieves competitive performance at 89% of state-of-the-art performance while requiring only 3.8% self-generated data, while outperforming the SOTA approach when trained with equivalent data volumes. A case study shows the effectiveness of the developed method and the improved annotation quality of our visualization system. | false | true | [
"Junwen Duan",
"Wei Xue",
"Ziyao Kang",
"Shixia Liu",
"Jiazhi Xia"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.19870",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_a79b8861-5ff1-41ab-ab84-c564496de8de.html",
"icon": "other"
}
] |
Vis | 2,025 | OwnershipTracker: A Visual Analytics Approach to Uncovering Historical Book Ownership Patterns | 10.1109/TVCG.2025.3634653 | Ownership relationships of early printed books from the 15th century reveal complex patterns of distribution and possession, offering valuable insights for historical research. This paper presents OwnershipTracker, a visual analytics application developed to explore and trace these relationships using data from the Material Evidence in Incunabula (MEI) database. OwnershipTracker integrates bibliographic records, copy-specific data, and book provenance and ownership details, enabling users to uncover intricate ownership sequences over time. The application combines several visualization techniques, including network graphs to map connections between owners, timelines for temporal analysis, chord diagrams to quantify transfer patterns, and a distinctive, collaboratively designed spiderweb-like diagram highlighting converging and dispersing ownership transfers through specific owners. Developed iteratively with input from historical book researchers, the application underwent multiple refinements to align with domain research requirements. A summative evaluation with domain experts showcased the tool's ability to address the defined requirements and tasks. The final version of OwnershipTracker is deployed and accessible at: https://booktracker.nms.kcl.ac.uk/ownership. | false | true | [
"Yiwen Xing",
"Meilai Ji",
"Cristina Dondi",
"Alfie Abdul-Rahman"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://kclpure.kcl.ac.uk/portal/en/publications/ownershiptracker-a-visual-analytics-approach-to-uncovering-histor",
"icon": "paper"
},
{
"name": "Website",
"url": "https://booktracker.nms.kcl.ac.uk/ownership",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_4e91945c-334a-4953-87ed-1564920f05dc.html",
"icon": "other"
}
] |
Vis | 2,025 | Perceiving Slope and Acceleration: Evidence for Variable Tempo Sampling in Pitch-Based Sonification of Functions | 10.1109/TVCG.2025.3633835 | Sonification offers a non-visual way to understand data, with pitch-based encodings being the most common. Yet, how well people perceive slope and acceleration—key features of data trends—remains poorly understood. Drawing on people's natural abilities to perceive tempo, we introduce a novel sampling method for pitch-based sonification to enhance the perception of slope and acceleration in univariate functions. While traditional sonification methods often sample data at uniform x-spacing, yielding notes played at a fixed tempo with variable pitch intervals (Variable Pitch Interval), our approach samples at uniform y-spacing, producing notes with consistent pitch intervals but variable tempo (Variable Tempo). We conducted psychoacoustic experiments to understand slope and acceleration perception across three sampling methods: Variable Pitch Interval, Variable Tempo, and a Continuous (no sampling) baseline. In slope comparison tasks, Variable Tempo was more accurate than the other methods when modulated by the magnitude ratio between slopes. For acceleration perception, just-noticeable differences under Variable Tempo were over 13 times finer than with other methods. Participants also commonly reported higher confidence, lower mental effort, and a stronger preference for Variable Tempo compared to other methods. This work contributes models of slope and acceleration perception across pitch-based sonification techniques, introduces Variable Tempo as a novel and preferred sampling method, and provides promising initial evidence that leveraging timing can lead to more sensitive, accurate, and precise interpretation of derivative-based data features. | true | true | [
"Danyang Fan",
"Walker Smith",
"Takako Fujioka",
"Chris Chafe",
"Sile O'Modhrain",
"Diana Deutsch",
"Sean Follmer"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://shape.stanford.edu/research/TempoSonification/Fan25PerceivingSlopeAndAccelerationVariableTempoSamplingSonification.pdf",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/a4cth/?view_only=784965f9e9f64a3ea720cf53ff241481",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_9e3dfe2c-bbe4-4431-ae8b-1ee64a399c8b.html",
"icon": "other"
}
] |
Vis | 2,025 | PiCCL: Data-Driven Composition of Bespoke Pictorial Charts | 10.1109/TVCG.2025.3634264 | We present PiCCL (Pictorial Chart Composition Language), a new language that enables users to easily create pictorial charts using a set of simple operators. To support systematic construction while addressing the main challenge of expressive pictorial chart authoring (manual composition and fine-tuning of visual properties), PiCCL introduces a parametric representation that integrates data-driven chart generation with graphical composition. It also employs a lazy data-binding mechanism that automatically synthesizes charts. PiCCL is grounded in a comprehensive analysis of real-world pictorial chart examples. We describe PiCCL's design and its implementation as piccl.js, a JavaScript-based library. To evaluate PiCCL, we showcase a gallery that demonstrates its expressiveness and report findings from a user study assessing the usability of piccl.js. We conclude with a discussion of PiCCL's limitations and potential, as well as future research directions. | false | true | [
"Haoyan Shi",
"Yunhai Wang",
"Junhao Chen",
"Chenglong Wang",
"Bongshin Lee"
] | [] | [
"PW",
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://osf.io/5eqb7",
"icon": "other"
},
{
"name": "Website",
"url": "https://piccl.github.io/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_1513daf8-e0da-48f1-b800-f7b0975869ff.html",
"icon": "other"
}
] |
Vis | 2,025 | PixelatedScatter: Arbitrary-level Visual Abstraction for Large-scale Multiclass Scatterplots | 10.1109/TVCG.2025.3633908 | Overdraw is inevitable in large-scale scatterplots. Current scatterplot abstraction methods lose features in medium-to-low density regions. We propose a visual abstraction method designed to provide better feature preservation across arbitrary abstraction levels for large-scale scatterplots, particularly in medium-to-low density regions. The method consists of three closely interconnected steps: first, we partition the scatterplot into iso-density regions and equalize visual density; then, we allocate pixels for different classes within each region; finally, we reconstruct the data distribution based on pixels. User studies, quantitative and qualitative evaluations demonstrate that, compared to previous methods, our approach better preserves features and exhibits a special advantage when handling ultra-high dynamic range data distributions. | false | true | [
"Ziheng Guo",
"Tianxiang Wei",
"Zeyu Li",
"Lianghao Zhang",
"Sisi Li",
"Jiawan Zhang"
] | [] | [
"PW",
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://github.com/Guozihengwww/PixelatedScatter",
"icon": "other"
},
{
"name": "Website",
"url": "https://guozihengwww.github.io/PixelatedScatter-Demo",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_76560e50-f8e7-446f-943e-2ba7ff188c86.html",
"icon": "other"
}
] |
Vis | 2,025 | ProactiveVA: Proactive Visual Analytics with LLM-based UI Agent | 10.1109/TVCG.2025.3642628 | Visual analytics (VA) is typically applied to complex data, thus requiring complex tools. While visual analytics empowers analysts in data analysis, analysts may get lost in the complexity occasionally. This highlights the need for intelligent assistance mechanisms. However, even the latest LLM-assisted VA systems only provide help when explicitly requested by the user, making them insufficiently intelligent to offer suggestions when analysts need them the most. We propose a ProactiveVA framework in which an LLM-powered UI agent monitors user interactions and delivers context-aware assistance proactively. To design effective proactive assistance, we first conducted a formative study analyzing help-seeking behaviors in user interaction logs, identifying when users need proactive help, what assistance they require, and how the agent should intervene. Based on this analysis, we distilled key design requirements in terms of intent recognition, solution generation, interpretability, and controllability. Guided by these requirements, we develop a three-stage UI agent pipeline including perception, reasoning, and acting. The agent autonomously perceives users' needs from VA interaction logs, providing tailored suggestions and intuitive guidance through interactive exploration of the system. We implemented the framework in two representative types of VA systems, demonstrating its generalizability, and evaluated the effectiveness through an algorithm evaluation, a case and expert study, and a user study. We also discuss current design trade-offs of proactive VA and areas for further exploration. | false | true | [
"Yuheng Zhao",
"Xueli Shu",
"Liwen Fan",
"Lin Gao",
"Yu Zhang",
"Siming Chen"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://www.arxiv.org/abs/2507.18165",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_47c46990-8edb-4014-be27-c1c07f128d60.html",
"icon": "other"
}
] |
Vis | 2,025 | Probing the Visualization Literacy of Vision Language Models: the Good, the Bad, and the Ugly | 10.1109/TVCG.2025.3634791 | Vision Language Models (VLMs) demonstrate promising chart comprehension capabilities. Yet, prior explorations of their visualization literacy have been limited to assessing their response correctness and fail to explore their internal reasoning. To address this gap, we adapted attention-guided class activation maps (AG-CAM) for VLMs to visualize the influence and importance of input features (image and text) on model responses. Using this approach, we conducted an examination of four open-source (ChartGemma, Janus 1B and 7B, and LLaVA) and two closed-source (GPT-4o, Gemini) models, comparing their performance and, for the open-source models, their AG-CAM results. Overall, we found that ChartGemma, a 3B parameter VLM fine-tuned for chart question-answering (QA), outperformed other open-source models and exhibited performance on par with significantly larger closed-source VLMs. We also found that VLMs exhibit spatial reasoning by accurately localizing key chart features, and semantic reasoning by associating visual elements with corresponding data values and query tokens. Our approach is the first to demonstrate the use of AG-CAM on early fusion VLM architectures, which are widely used, and for chart QA. We also show preliminary evidence that these results can align with human reasoning. Our promising open-source VLM results pave the way for transparent and reproducible research in AI visualization literacy. Code and Supplemental Materials: https://osf.io/fp3rg | false | true | [
"Lianghan Dong",
"Anamaria Crisan"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2504.05445",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/fp3rg/",
"icon": "other"
},
{
"name": "Website",
"url": "https://huggingface.co/spaces/uw-insight-lab/Probing-Vis-Literacy-of-VLMs",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_6f935187-a32b-477d-90a0-1a7e959c6222.html",
"icon": "other"
}
] |
Vis | 2,025 | Qualitative Study for LLM-assisted Design Study Process: Strategies, Challenges, and Roles | 10.1109/TVCG.2025.3634820 | Design studies aim to develop visualization solutions for real-world problems across various application domains. Recently, the emergence of large language models (LLMs) has introduced new opportunities to enhance the design study process, providing capabilities such as creative problem-solving, data handling, and insightful analysis. However, despite their growing popularity, there remains a lack of systematic understanding of how LLMs can effectively assist researchers in visualization-specific design studies. In this paper, we conducted a multi-stage qualitative study to fill this gap, which involved 30 design study researchers from diverse backgrounds and expertise levels. Through in-depth interviews and carefully-designed questionnaires, we investigated strategies for utilizing LLMs, the challenges encountered, and the practices used to overcome them. We further compiled the roles that LLMs can play across different stages of the design study process. Our findings highlight practical implications to inform visualization practitioners, and also provide a framework for leveraging LLMs to facilitate the design study process in visualization research. | true | true | [
"Shaolun Ruan",
"Rui Sheng",
"Xiaolin Wen",
"Jiachen Wang",
"Tianyi Zhang",
"Yong WANG",
"Tim Dwyer",
"Jiannan Li"
] | [
"HM"
] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.10024",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_29656e00-00d7-4080-b589-2d7c838c72be.html",
"icon": "other"
}
] |
Vis | 2,025 | Quantifying Visualization Vibes: Measuring Socio-Indexicality at Scale | 10.1109/TVCG.2025.3634819 | What impressions might readers form with visualizations that go beyond the data they encode? In this paper, we build on recent work that demonstrates the socio-indexical function of visualization, showing that visualizations communicate more than the data they explicitly encode. Bridging this with prior work examining public discourse about visualizations, we contribute an analytic framework for describing inferences about an artifact's social provenance. Via a series of attribution-elicitation surveys, we offer descriptive evidence that these social inferences: (1) can be studied asynchronously, (2) are not unique to a particular sociocultural group or a function of limited data literacy, and (3) may influence assessments of trust. Further, we demonstrate (4) how design features act in concert with the topic and underlying messages of an artifact's data to give rise to such ‘beyond-data’ readings. We conclude by discussing the design and research implications of inferences about social provenance, and why we believe broadening the scope of research on human factors in visualization to include sociocultural phenomena can yield actionable design recommendations to address urgent challenges in public data communication. | true | true | [
"Amy Rae Fox",
"Michelle Morgenstern",
"Graham Jones",
"Arvind Satyanarayan"
] | [
"HM"
] | [
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://doi.org/10.17605/OSF.IO/23HYX",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_c35d8073-d3b1-4978-bdd4-0b428299f354.html",
"icon": "other"
}
] |
Vis | 2,025 | Reframing Pattern: A Comprehensive Approach to a Composite Visual Variable | 10.1109/TVCG.2025.3633909 | We present a new comprehensive theory for explaining, exploring, and using pattern as a visual variable in visualization. Although patterns have long been used for data encoding and continue to be valuable today, their conceptual foundations are precarious: the concepts and terminology used across the research literature and in practice are inconsistent, making it challenging to use patterns effectively and to conduct research to inform their use. To address this problem, we conduct a comprehensive cross-disciplinary literature review that clarifies ambiguities around the use of “pattern” and “texture”. As a result, we offer a new consistent treatment of pattern as a composite visual variable composed of structured groups of graphic primitives that can serve as marks for encoding data individually and collectively. This new and widely applicable formulation opens a sizable design space for the visual variable pattern, which we formalize as a new system comprising three sets of variables: the spatial arrangement of primitives, the appearance relationships among primitives, and the retinal visual variables that characterize individual primitives. We show how our pattern system relates to existing visualization theory and highlight opportunities for visualization design. We further explore patterns based on complex spatial arrangements, demonstrating explanatory power and connecting our conceptualization to broader theory on maps and cartography. An author version and additional materials are available on OSF: osf.io/z7ae2. | false | true | [
"Tingying He",
"Jason Dykes",
"Petra Isenberg",
"Tobias Isenberg"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.02639",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/z7ae2/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_625a86de-1529-4724-8869-f081c7776f88.html",
"icon": "other"
}
] |
Vis | 2,025 | RelMap: Reliable Spatiotemporal Sensor Data Visualization via Imputative Spatial Interpolation | 10.1109/TVCG.2025.3633901 | Accurate and reliable visualization of spatiotemporal sensor data such as environmental parameters and meteorological conditions is crucial for informed decision-making. Traditional spatial interpolation methods, however, often fall short of producing reliable interpolation results due to the limited and irregular sensor coverage. This paper introduces a novel spatial interpolation pipeline that achieves reliable interpolation results and produces a novel heatmap representation with uncertainty information encoded. We leverage imputation reference data from Graph Neural Networks (GNNs) to enhance visualization reliability and temporal resolution. By integrating Principal Neighborhood Aggregation (PNA) and Geographical Positional Encoding (GPE), our model effectively learns the spatiotemporal dependencies. Furthermore, we propose an extrinsic, static visualization technique for interpolation-based heatmaps that effectively communicates the uncertainties arising from various sources in the interpolated map. Through a set of use cases, extensive evaluations on real-world datasets, and user studies, we demonstrate our model's superior performance for data imputation, the improvements to the interpolant with reference data, and the effectiveness of our visualization design in communicating uncertainties. | false | true | [
"Juntong Chen",
"Huayuan Ye",
"He Zhu",
"Siwei Fu",
"Changbo Wang",
"Chenhui Li"
] | [] | [
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.01240",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/jtchen2k/relmap/",
"icon": "code"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_10233019-3fcc-4f01-b478-96b0ec01cf29.html",
"icon": "other"
}
] |
Vis | 2,025 | ReVISit 2: A Full Experiment Life Cycle User Study Framework | 10.1109/TVCG.2025.3633896 | Online user studies of visualizations, visual encodings, and interaction techniques are ubiquitous in visualization research. Yet, designing, conducting, and analyzing studies effectively is still a major burden. Although various packages support such user studies, most solutions address only facets of the experiment life cycle, make reproducibility difficult, or do not cater to nuanced study designs or interactions. We introduce reVISit 2, a software framework that supports visualization researchers at all stages of designing and conducting browser-based user studies. ReVISit supports researchers in the design, debug & pilot, data collection, analysis, and dissemination experiment phases by providing both technical affordances (such as replay of participant interactions) and sociotechnical aids (such as a mindfully maintained community of support). It is a proven system that can be (and has been) used in publication-quality studies, which we demonstrate through a series of experimental replications. We reflect on the design of the system via interviews and an analysis of its technical dimensions. Through this work, we seek to elevate the ease with which studies are conducted, improve the reproducibility of studies within our community, and support the construction of advanced interactive studies. | true | true | [
"Zach Cutler",
"Jack Wilburn",
"Hilson Shrestha",
"Yiren Ding",
"Brian Bollen",
"Khandaker Abrar Nadib",
"Tingying He",
"Andrew McNutt",
"Lane Harrison",
"Alexander Lex"
] | [
"BP"
] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.03876",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/e8anx/",
"icon": "other"
},
{
"name": "Preregistration 1",
"url": "https://osf.io/7baqy",
"icon": "other"
},
{
"name": "Preregistration 2",
"url": "https://osf.io/w6csx",
"icon": "other"
},
{
"name": "Preregistration 3",
"url": "https://osf.io/t6gp2",
"icon": "other"
},
{
"name": "ReVISit Homepage",
"url": "revisit.dev",
"icon": "project_website"
},
{
"name": "Replication Studies",
"url": "revisit.dev/replication-studies",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_ebdd5b65-b5cb-4d4b-bd78-92ff8cc2f243.html",
"icon": "other"
}
] |
Vis | 2,025 | Running with Data: a Survey of the Current Research and a Design Exploration of Future Immersive Visualisations | 10.1109/TVCG.2025.3634631 | This work investigates the current research on in-situ visualisations for running: visualisations about data that are referred to during the running activity. We analyse 47 papers from 33 Human-Computer Interaction and Visualisation venues and identify six dimensions of a design space of in-situ running visualisations. Our analysis of this design space highlights an emerging trend: a shift from on-body, peripersonal visualisations (i.e., in the space within direct reach, such as visualisations on a smartwatch or a mobile phone display) towards extrapersonal displays (i.e., in the space beyond immediate reach, such as visualisations in immersive augmented reality displays) that integrate data in the runner's surrounding environment. We explore this opportunity by conducting a series of workshops with 10 active runners in total, eliciting design concepts for running visualisations and interactions beyond conventional 2D displays. We find that runners show a strong interest in visualisation designs that favour more context-aware, interactive, and unobtrusive experiences that seamlessly integrate with their run. These findings inform a set of design considerations for future immersive running visualisations and highlight directions for further research. | true | true | [
"Ang Li",
"Charles Perin",
"Gianluca Demartini",
"Stephen Viller",
"Jarrod Knibbe",
"Maxime Cordeil"
] | [] | [
"PW",
"O"
] | [
{
"name": "Website",
"url": "https://runningwithdata.github.io/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_54daa89a-c0e5-4d1e-9891-4ce10711d2f9.html",
"icon": "other"
}
] |
Vis | 2,025 | SceneLoom: Communicating Data with Scene Context | 10.1109/TVCG.2025.3634816 | In data-driven storytelling contexts such as data journalism and data videos, data visualizations are often presented alongside real-world imagery to support narrative context. However, these visualizations and contextual images typically remain separated, limiting their combined narrative expressiveness and engagement. Achieving this is challenging due to the need for fine-grained alignment and creative ideation. To address this, we present SceneLoom, a Vision-Language Model (VLM)-powered system that facilitates the coordination of data visualization with real-world imagery based on narrative intents. Through a formative study, we investigated the design space of coordination relationships between data visualization and real-world scenes from the perspectives of visual alignment and semantic coherence. Guided by the derived design considerations, SceneLoom leverages VLMs to extract visual and semantic features from scene images and data visualization, and perform design mapping through a reasoning process that incorporates spatial organization, shape similarity, layout consistency, and semantic binding. The system generates a set of contextually expressive, image-driven design alternatives that achieve coherent alignments across visual, semantic, and data dimensions. Users can explore these alternatives, select preferred mappings, and further refine the design through interactive adjustments and animated transitions to support expressive data communication. A user study and an example gallery validate SceneLoom's effectiveness in inspiring creative design and facilitating design externalization. | false | true | [
"Lin Gao",
"Leixian Shen",
"Yuheng Zhao",
"Jiexiang Lan",
"Huamin Qu",
"Siming Chen"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.16466",
"icon": "paper"
},
{
"name": "Website",
"url": "https://lynnegao.me/scene-loom/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_2b7f7862-26fb-4ce7-81a9-01537d804ead.html",
"icon": "other"
}
] |
Vis | 2,025 | SEAL: Spatially-resolved Embedding Analysis with Linked Imaging Data | 10.1109/TVCG.2025.3634794 | Dimensionality reduction techniques help analysts make sense of complex, high-dimensional spatial datasets, such as multiplexed tissue imaging, satellite imagery, and astronomical observations, by projecting data attributes into a two-dimensional space. However, these techniques typically abstract away crucial spatial, positional, and morphological contexts, complicating interpretation and limiting insights. To address these limitations, we present SEAL, an interactive visual analytics system designed to bridge the gap between abstract 2D embeddings and their rich spatial imaging context. SEAL introduces a novel hybrid-embedding visualization that preserves image and morphological information while integrating critical high-dimensional feature data. By adapting set visualization methods, SEAL allows analysts to identify, visualize, and compare selections (defined manually or algorithmically) in both the embedding and original spatial views, facilitating a deeper understanding of the spatial arrangement and morphological characteristics of entities of interest. To elucidate differences between selected sets of items, SEAL employs a scalable surrogate model to calculate feature importance scores, identifying the most influential features governing the position of objects within embeddings. These importance scores are visually summarized across selections, with mathematical set operations enabling detailed comparative analyses. We demonstrate SEAL's effectiveness and versatility through three case studies: colorectal cancer tissue analysis with a pharmacologist, melanoma investigation with a cell biologist, and exploration of sky survey data with an astronomer. These studies underscore the importance of integrating image context into embedding spaces when interpreting complex imaging datasets. Implemented as a standalone tool while also integrating seamlessly with computational notebooks, SEAL provides an interactive platform for spatially informed exploration of high-dimensional datasets, significantly enhancing interpretability and insight generation. | false | true | [
"Simon Warchol",
"Grace Guo",
"Johannes Knittel",
"Dan Freeman",
"Usha Bhalla",
"Jeremy Muhlich",
"Peter Sorger",
"Hanspeter Pfister"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://www.biorxiv.org/content/10.1101/2025.07.19.665696",
"icon": "paper"
},
{
"name": "Website",
"url": "https://sealvis.com/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_3eefd0dc-4542-4d3b-8a9e-bd1530cdebab.html",
"icon": "other"
}
] |
Vis | 2,025 | Sel3DCraft: Interactive Visual Prompts for User-Friendly Text-to-3D Generation | 10.1109/TVCG.2025.3633875 | Text-to-3D (T23D) generation has transformed digital content creation, yet remains bottlenecked by blind trial-and-error prompting processes that yield unpredictable results. While visual prompt engineering has advanced in text-to-image domains, its application to 3D generation presents unique challenges requiring multi-view consistency evaluation and spatial understanding. We present Sel3DCraft, a visual prompt engineering system for T23D that transforms unstructured exploration into a guided visual process. Our approach introduces three key innovations: a dual-branch structure combining retrieval and generation for diverse candidate exploration; a multi-view hybrid scoring approach that leverages MLLMs with innovative high-level metrics to assess 3D models with human-expert consistency; and a prompt-driven visual analytics suite that enables intuitive defect identification and refinement. Extensive testing and a user study demonstrate that Sel3DCraft surpasses other T23D systems in supporting creativity for designers. | false | true | [
"Nan Xiang",
"Tianyi Liang",
"Haiwen Huang",
"Shiqi Jiang",
"Hao Huang",
"Yifei Huang",
"Liangyu Chen",
"Changbo Wang",
"Chenhui Li"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.00428",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_84ef8d69-bdba-43e8-a078-ae651a3de164.html",
"icon": "other"
}
] |
Vis | 2,025 | Self-Supervised Continuous Colormap Recovery from a 2D Scalar Field Visualization without a Legend | 10.1109/TVCG.2025.3633886 | Recovering a continuous colormap from a single 2D scalar field visualization can be quite challenging, especially in the absence of a corresponding color legend. In this paper, we propose a novel colormap recovery approach that extracts the colormap from a color-encoded 2D scalar field visualization by simultaneously predicting the colormap and underlying data using a decoupling-and-reconstruction strategy. Our approach first separates the input visualization into colormap and data using a decoupling module, then reconstructs the visualization with a differentiable color-mapping module. To guide this process, we design a reconstruction loss between the input and reconstructed visualizations, which serves both as a constraint to ensure strong correlation between colormap and data during training, and as a self-supervised optimizer for fine-tuning the predicted colormap of unseen visualizations during inference. To ensure smoothness and correct color ordering in the extracted colormap, we introduce a compact colormap representation using cubic B-spline curves and an associated color order loss. We evaluate our method quantitatively and qualitatively on a synthetic dataset and a collection of real-world visualizations from the VIS30K dataset [9]. Additionally, we demonstrate its utility in two prototype applications (colormap adjustment and colormap transfer) and explore its generalization to visualizations with color legends and ones encoded using discrete color palettes. | true | true | [
"Hongxu Liu",
"Xinyu Chen",
"Haoyang Zheng",
"Manyi Li",
"Zhenfan Liu",
"Fumeng Yang",
"Yunhai Wang",
"Changhe Tu",
"Qiong Zeng"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/abs/2507.20632",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "osf.io/gb2tx/?view_only=4d2a269b59bc4144b2806ec2d1a34e11",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_c1893c22-b83f-4f15-82d7-c8f37bd65213.html",
"icon": "other"
}
] |
Vis | 2,025 | Set Size Matters: Capacity-Limited Perception of Grouped Spatial-Frequency Glyphs | 10.1109/TVCG.2025.3634876 | Recent work suggests that shape can encode quantitative data via a mapping between value and spatial frequency (SF). However, the set-size effect when perceiving multiple SF-based items remains unclear. While automatic feature extraction has been found to be less affected by set size (number of items in a group), higher-level processes for making perceptual decisions tend to require increased cognitive demand. To investigate the set-size effect on comparing integrated SF-based items, we used a risk-based scenario to assess discrimination performance. Participants were asked to discriminate between pairs of maps containing multiple SF glyphs, in which each glyph represents one of four discrete levels (none, low, medium, high), forming an aggregate “risk strength” per map. The set size was also adjusted across conditions, ranging from small (3 items) to large (7 items). Discrimination sensitivity is modeled with a logistic function and response time with a mixed-effect linear model. Results show that smaller set sizes and lower overall strength enable more precise discrimination, with faster response times for larger differences between maps. Incorporating set size and overall strength into the logistic model, we found that these variables both independently and jointly influence discrimination sensitivity. We suggest these results point towards capacity-limited processes rather than purely automatic ensemble coding. Our findings highlight the importance of set size and overall signal strength when presenting multiple SF glyphs in data visualization. | true | true | [
"Yiran Li",
"Shan Shao",
"Peter Baudains",
"Andrew Meso",
"Nick Holliman",
"Alfie Abdul-Rahman",
"Rita Borgo"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://kclpure.kcl.ac.uk/portal/en/publications/set-size-matters-capacity-limited-perception-of-grouped-spatial-f",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_1acd711b-69a4-486f-9955-0f6adb58292b.html",
"icon": "other"
}
] |
Vis | 2,025 | Shifting Expectations for Encoding Rules Mitigates Misinterpretation of Connected Scatterplots | 10.1109/TVCG.2025.3634225 | Connected scatterplots visualize time-series data by connecting the points on a scatterplot based on temporal sequence. Viewers are prone to misinterpret the direction of time in these visualizations, possibly because they encode time with an unexpected rule: along the connected line (TIME IS A LINE) instead of from left to right (RIGHT IS LATER) as conventional in line charts. In this paper, we use the connected scatterplot to illustrate a perspective on visualization comprehension centered around expectations of encoding rules. People have initial expectations of encoding rules for visualizations that can stem from conventional practices or metaphors, and these expectations have been recognized as a potential factor influencing visualization comprehension. We present three preregistered experiments (n = 1429 in total) demonstrating two kinds of design interventions to strengthen the correct expectation for time and testing their effectiveness in reducing errors for understanding realistic connected scatterplots. We found that visual treatments that suppress the incorrect chart-type expectation and directional cues that emphasize the correct expectation both led viewers to expect TIME IS A LINE more. An explicit directional cue (arrows), ideally redundantly encoded with another cue (trace-line effect or animation), was most effective for reducing misinterpretations. Our findings not only provide practical guidelines for designing connected scatterplots but also contribute theoretical insights to inform the design of novel visualizations that challenge interpretability by defying expectations. | false | true | [
"Wen Xu",
"Lace Padilla"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/4mc2b",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/k45es/",
"icon": "other"
},
{
"name": "Experiment 1",
"url": "https://osf.io/2d3hm",
"icon": "other"
},
{
"name": "Experiment 2",
"url": "https://osf.io/a6hnv",
"icon": "other"
},
{
"name": "Experiment 3",
"url": "https://osf.io/4jpdz",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_cdeb2b03-68fd-4816-8611-544145e35ba2.html",
"icon": "other"
}
] |
Vis | 2,025 | Stitching Meaning: Practices of Data Textile Creators | 10.1109/TVCG.2025.3634255 | Tens of thousands of people have represented data by creating data-encoding textile pieces like blankets, scarves, and more. A prototypical example is the temperature blanket, which represents the weather through rows or blocks of different colors mapped to temperature ranges. While researchers have used fiber arts mediums to create exploratory projects, data visualization and physicalization research has largely not engaged with examples from this enormous and diverse community. We explore the space of data textiles, or fiber arts that encode information, by surveying creators (i.e., data fiber artists) on their projects and processes. We create a corpus of 159 examples of data textiles and present a schema characterizing the data encoding methods used in these projects. We also gather insights into creators' data workflows as well as their motives and discoveries through making with their data. Creators of data textiles use distinct processes to map their data, building fabric from component structures and substructures while using material properties like color and texture. From diverse data-tracking procedures, creators use and relate to data in varied ways. Working on these pieces also contributes to the creators' personal growth and data understanding. Our findings point to new opportunities for visualization, including opportunities to support fiber artists with tools formatted to their needs and opportunities to incorporate concepts from data textiles into other types of visualization (e.g., using texture, structural layouts, colorways). | true | true | [
"Sydney Purdue",
"Eduardo Puerta",
"Enrico Bertini",
"Melanie Tory"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/eyw2r_v1",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/t9jhu/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_2b91d1cf-ebf6-46b2-9cbc-cc51da3a56ed.html",
"icon": "other"
}
] |
Vis | 2,025 | Story Ribbons: Reimagining Storyline Visualizations with Large Language Models | 10.1109/TVCG.2025.3634265 | Analyzing literature involves tracking interactions between characters, locations, and themes. Visualization has the potential to facilitate the mapping and analysis of these complex relationships, but capturing structured information from unstructured story data remains a challenge. As large language models (LLMs) continue to advance, we see an opportunity to use their text processing and analysis capabilities to augment and reimagine existing storyline visualization techniques. Toward this goal, we introduce an LLM-driven data parsing pipeline that automatically extracts relevant narrative information from novels and scripts. We then apply this pipeline to create Story Ribbons, an interactive visualization system that helps novice and expert literary analysts explore detailed character and theme trajectories at multiple narrative levels. Through pipeline evaluations and user studies with Story Ribbons on 36 literary works, we demonstrate the potential of LLMs to streamline narrative visualization creation and reveal new insights about familiar stories. We also describe current limitations of AI-based systems, and interaction motifs designed to address these issues. | true | true | [
"Catherine Yeh",
"Tara Menon",
"Robin Singh Arya",
"Helen He",
"Moira Weigel",
"Fernanda Viegas",
"Martin Wattenberg"
] | [
"HM"
] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.06772",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_f9fb5a2a-c9bd-48f5-b5be-8a75e1b77b02.html",
"icon": "other"
}
] |
Vis | 2,025 | StreetWeave: A Declarative Grammar for Street-Overlaid Visualization of Multivariate Data | 10.1109/TVCG.2025.3634647 | The visualization and analysis of street and pedestrian networks are important to various domain experts, including urban planners, climate researchers, and health experts. This has led to the development of new techniques for street and pedestrian network visualization, expanding possibilities for effective data presentation and interpretation. Despite their increasing adoption, there is no established design framework to guide the creation of these visualizations while addressing the diverse requirements of various domains. When exploring a feature of interest, domain experts often need to transform, integrate, and visualize a combination of thematic data (e.g., demographic, socioeconomic, pollution) and physical data (e.g., zip codes, street networks), often spanning multiple spatial and temporal scales. This not only complicates the process of visual data exploration and system implementation for developers but also creates significant entry barriers for experts who lack a background in programming. With this in mind, in this paper, we reviewed 45 studies utilizing street-overlaid visualizations to understand how they are applied in practice. Through qualitative coding of these visualizations, we analyzed three key aspects of street and pedestrian network visualization usage: their analytical purposes, the visualization approaches employed, and the data sources used in their creation. Building on this design space, we introduce StreetWeave, a declarative grammar for designing custom visualizations of multivariate spatial network data across multiple resolutions. We demonstrate how StreetWeave can be used to create various street-overlaid visualizations, enabling effective exploration and analysis of spatial data. StreetWeave is available at urbantk.org/streetweave. | false | true | [
"Sanjana Srabanti",
"G. Elisabeta Marai",
"Fabio Miranda"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.07496",
"icon": "paper"
},
{
"name": "Website",
"url": "http://urbantk.org/streetweave",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_5a5e8c22-ecab-4548-b9f8-3b84454ee079.html",
"icon": "other"
}
] |
Vis | 2,025 | StressDiffVis: Visual Analytics for Multi-Model Stress Comparison | 10.1109/TVCG.2025.3634642 | Structural analysis is essential in modern industrial design, where engineers iteratively refine geometry models based on stress simulations to achieve optimized designs. However, comparing stress distributions across multiple model variants remains challenging due to the complexity of stress fields, which are high-dimensional, unevenly distributed, and dependent on intricate geometric structures. Existing tools primarily support single-model analysis and lack dedicated functionalities for multi-model comparison. As a result, engineers must rely on manual, cognitively demanding visual inspections, making it difficult to systematically identify and interpret stress variations across design iterations. To address these limitations, we propose StressDiffVis, a visual analytics approach that facilitates stress field comparison across multiple structural models. StressDiffVis employs a volumetric representation to encode stress distributions while minimizing occlusion, enabling voxel-wise difference analysis for model comparison. To support localized analysis, we introduce model segmentation, grouping voxels with similar stress patterns across models. StressDiffVis integrates these techniques into an interactive interface with a tree view, organizing models by the iterative design process, and a comparison view, using a matrix layout for detailed comparisons. We demonstrate the effectiveness of StressDiffVis through two case studies illustrating its utility in comparative stress analysis. In addition, expert interviews confirm its potential to enhance engineering workflows. | false | true | [
"Jiabao Huang",
"Zikun Deng",
"Hanlin Song",
"Xiang Chen",
"Shaowu Gao",
"Yi Cai"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_39db4241-5a8e-4bf7-abaf-b638cfbfee08.html",
"icon": "other"
}
] |
Vis | 2,025 | SynAnno: Interactive Guided Proofreading of Synaptic Annotations | 10.1109/TVCG.2025.3634824 | Connectomics, a subfield of neuroscience, aims to map and analyze synapse-level wiring diagrams of the nervous system. While recent advances in deep learning have accelerated automated neuron and synapse segmentation, reconstructing accurate connectomes still demands extensive human proofreading to correct segmentation errors. We present SynAnno, an interactive tool designed to streamline and enhance the proofreading of synaptic annotations in large-scale connectomics datasets. SynAnno integrates into existing neuroscience workflows by enabling guided, neuron-centric proofreading. To address the challenges posed by the complex spatial branching of neurons, it introduces a structured workflow with an optimized traversal path and a 3D mini-map for tracking progress. In addition, SynAnno incorporates fine-tuned machine learning models to assist with error detection and correction, reducing the manual burden and increasing proofreading efficiency. We evaluate SynAnno through a user and case study involving seven neuroscience experts. Results show that SynAnno significantly accelerates synapse proofreading while reducing cognitive load and annotation errors through structured guidance and visualization support. The source code and interactive demo are available at: https://github.com/PytorchConnectomics/SynAnno. | true | true | [
"Leander Lauenburg",
"Jakob Troidl",
"Adam Gohain",
"Zudi Lin",
"Hanspeter Pfister",
"Donglai Wei"
] | [] | [
"PW",
"C",
"O"
] | [
{
"name": "Code",
"url": "https://github.com/PytorchConnectomics/SynAnno",
"icon": "code"
},
{
"name": "Demo 1",
"url": "http://54.210.88.222/demo",
"icon": "project_website"
},
{
"name": "Demo 2",
"url": "http://54.210.88.222/reset",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_b5d2c402-def7-4b9d-a6df-ac4ad7572111.html",
"icon": "other"
}
] |