| Conference | Year | Title | DOI | Abstract | Accessible | Early | AuthorNames-Deduped | Award | Resources | ResourceLinks |
|---|---|---|---|---|---|---|---|---|---|---|
Vis | 2025 | TactiVis: Towards Better Understanding of Team-based Combat Tactics | 10.1109/TVCG.2025.3634656 | Team-based combat scenarios are prevalent in various real-world applications like video gaming. Analyzing tactics in these scenarios is essential for gaining insights into game processes and improving combat behaviors. The decision-making data in team-based combat include character actions, movement trajectories, and event sequences. Existing studies face challenges in visualizing and analyzing combat tactics due to the complexity and the multifaceted characteristics of the decision-making data. To address these challenges, we introduce TactiVis, a visual analytics system designed for analyzing combat decision-making behavior. Using a MOBA game as a representative case of team-based combat, TactiVis adopts a macro-to-micro tactics visual analytics framework consisting of three stages: match-level analysis, event-level understanding, and character-level comparison. In the TactiVis system, we introduce the v-storyline visualization, which encodes positions along the vertical axis to reveal tactical patterns. Case studies and a usability study demonstrate the utility and usability of TactiVis for helping analysts understand combat patterns and analyze tactics. | true | true | [
"Hancheng Zhang",
"Guozheng Li",
"Min Lu",
"Jincheng Li",
"Chi Harold Liu"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_5f806c3f-27b1-47e9-81ce-9255b7532d0f.html",
"icon": "other"
}
] |
Vis | 2025 | Tell Me Without Telling Me: Two-Way Prediction of Visualization Literacy and Visual Attention | 10.1109/TVCG.2025.3634815 | Accounting for individual differences can improve the effectiveness of visualization design. While the role of visual attention in visualization interpretation is well recognized, existing work often overlooks how this behavior varies based on visual literacy levels. Based on data from a 235-participant user study covering three visualization tests (mini-VLAT, CALVI, and SGL), we show that distinct attention patterns in visual data exploration can correlate with participants' literacy levels: while experts (high-scorers) generally show a strong attentional focus, novices (low-scorers) focus less and explore more. We then propose two computational models leveraging these insights: Lit2Sal - a novel visual saliency model that predicts observer attention given their visualization literacy level, and Sal2Lit - a model to predict visual literacy from human visual attention data. Our quantitative and qualitative evaluation demonstrates that Lit2Sal outperforms state-of-the-art saliency models with literacy-aware considerations. Sal2Lit predicts literacy with 86% accuracy using a single attention map, providing a time-efficient supplement to literacy assessment that takes less than a minute. Taken together, our unique approach to considering individual differences in salience models and visual attention in literacy assessments paves the way for new directions in personalized visual data communication to enhance understanding. | false | true | [
"Minsuk Chang",
"Yao Wang",
"Huichen Wang",
"Yuanhong Zhou",
"Andreas Bulling",
"Cindy Xiong Bearfield"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.03713",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/2crb9/",
"icon": "other"
},
{
"name": "Website",
"url": "https://minsukchang.info/Lit-Att/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_22ccb527-7a1d-49e1-81b8-9e4f335089e1.html",
"icon": "other"
}
] |
Vis | 2025 | TexGS-VolVis: Expressive Scene Editing for Volume Visualization via Textured Gaussian Splatting | 10.1109/TVCG.2025.3634643 | Advancements in volume visualization (VolVis) focus on extracting insights from 3D volumetric data by generating visually compelling renderings that reveal complex internal structures. Existing VolVis approaches have explored non-photorealistic rendering techniques to enhance the clarity, expressiveness, and informativeness of visual communication. While effective, these methods often rely on complex predefined rules and are limited to transferring a single style, restricting their flexibility. To overcome these limitations, we advocate the representation of VolVis scenes using differentiable Gaussian primitives combined with pretrained large models to enable arbitrary style transfer and real-time rendering. However, conventional 3D Gaussian primitives tightly couple geometry and appearance, leading to suboptimal stylization results. To address this, we introduce TexGS-VolVis, a textured Gaussian splatting framework for VolVis. TexGS-VolVis employs 2D Gaussian primitives, extending each Gaussian with additional texture and shading attributes, resulting in higher-quality, geometry-consistent stylization and enhanced lighting control during inference. Despite these improvements, achieving flexible and controllable scene editing remains challenging. To further enhance stylization, we develop image- and text-driven non-photorealistic scene editing tailored for TexGS-VolVis and 2D-lift-3D segmentation to enable partial editing with fine-grained control. We evaluate TexGS-VolVis both qualitatively and quantitatively across various volume rendering scenes, demonstrating its superiority over existing methods in terms of efficiency, visual quality, and editing flexibility. | false | true | [
"Kaiyuan Tang",
"Kuangshi Ai",
"Jun Han",
"Chaoli Wang"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.13586",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_77c7ae29-07ac-4a7b-90d1-5ca6794c6632.html",
"icon": "other"
}
] |
Vis | 2025 | TFZ: Topology-Preserving Compression of 2D Symmetric and Asymmetric Second-Order Tensor Fields | 10.1109/TVCG.2025.3634844 | In this paper, we present a novel compression framework, TFZ, that preserves the topology of 2D symmetric and asymmetric second-order tensor fields defined on flat triangular meshes. A tensor field assigns a tensor—a multi-dimensional array of numbers—to each point in space. Tensor fields, such as the stress and strain tensors, and the Riemann curvature tensor, are essential to both science and engineering. The topology of tensor fields captures the core structure of data, and is useful in various disciplines, such as graphics (for manipulating shapes and textures) and neuroscience (for analyzing brain structures from diffusion MRI). Lossy data compression may distort the topology of tensor fields, thus hindering downstream analysis and visualization tasks. TFZ ensures that certain topological features are preserved during lossy compression. Specifically, TFZ preserves degenerate points essential to the topology of symmetric tensor fields and retains eigenvector and eigenvalue graphs that represent the topology of asymmetric tensor fields. TFZ scans through each cell, preserving the local topology of each cell, and thereby ensuring certain global topological guarantees. We showcase the effectiveness of our framework in enhancing the lossy scientific data compressors SZ3 and SPERR. | false | true | [
"Nathaniel Gorski",
"Xin Liang",
"Hanqi Guo",
"Bei Wang"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_84ab0ade-9529-4942-b5e1-ca953e56c880.html",
"icon": "other"
}
] |
Vis | 2025 | The Hue-Man Factor: An Empirical Evaluation of Visualization Perception and Accessibility Across Color Vision Profiles | 10.1109/TVCG.2025.3634261 | Color is a powerful tool in data visualization, but for individuals with color vision deficiencies (CVD), hue can become a barrier rather than an aid. In this paper, we examine how real-world visualizations are perceived across vision profiles through three complementary studies. Study 1 assessed how normal vision participants rated 46 visualizations shown in original and simulated red/green colorblind versions. Study 2 collected matched responses from participants with diagnosed CVD. Study 3 involved in-depth interviews exploring how users interpret, adapt to, and evaluate inaccessible designs. Across studies, we find that simulations capture directional perceptual shifts but fail to reflect the interpretive breakdowns and emotional work described by real CVD users. Factor analysis reveals two dominant perceptual dimensions: functional utility and affective experience. While normal vision participants prioritize functional clarity, CVD users rely more on structural cues and emotional resonance, particularly when color is unreliable. Qualitative insights show that perceptual breakdowns occur not only in high-interference charts but also when redundant encoding or layout scaffolding is missing. We synthesize these findings and offer empirically grounded design recommendations to guide inclusive visualization practices. Our results argue that accessibility must go beyond color correction, embracing structural clarity, redundancy, and real-user validation to ensure inclusive visual communication. | false | true | [
"Zhuojun Jiang",
"Anjana Arunkumar",
"Chris Bryan"
] | [] | [
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://osf.io/ymucn/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_093872c4-bec7-4c27-bb8e-c89465a4381b.html",
"icon": "other"
}
] |
Vis | 2025 | The Impact of Visual Segmentation on Lexical Word Recognition | 10.1109/TVCG.2025.3634641 | When a reader encounters a word in English, they split the word into smaller orthographic units in the process of recognizing its meaning. For example, “rough”, when split according to phonemes, is decomposed as r-ou-gh (not as r-o-ugh or r-ough), where each group of letters corresponds to a sound. Since there are many ways to segment a group of letters, this constitutes a computational operation that has to be solved by the reading brain, many times per minute, in order to achieve the recognition of words in text necessary for reading. In English, the irregular relationships between groups of letters and sounds, and the wide variety of possible groupings make this operation harder than in more regular languages such as Italian. If this segmentation takes a significant amount of time in the process of recognizing a word, it is conceivable that providing segmentation information in the text itself could help the reading process by reducing its computational cost. In this paper we explore whether and how different visual interventions from the visualization literature could communicate segmentation information for reading and word recognition. We ran a series of pre-registered lexical decision experiments with 192 participants that tested five main types of visual segmentations: outlines, spacing, connections, underlines and color. The evidence indicates that, even with a moderate amount of training, these visual interventions always slow down word identification, but each to a different extent (between 32.7ms—color technique—and 70.7ms—connection technique). These findings are important because they indicate that, at least for typical adult readers with a moderate amount of specific training in these visual interventions, accelerating the lexical decision task is unlikely.
Importantly, the results also offer an empirical measurement of the cost of a common set of visual manipulations of text, which can be useful for practitioners seeking to visualize data alongside or within text without impacting reading performance. Finally, the interaction between typographically encoded information and visual variables presented unique patterns that deviate from existing theories, suggesting new directions for future inquiry. | true | true | [
"Matthew Termuende",
"Kevin Larson",
"Miguel Nacenta"
] | [] | [
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://doi.org/10.17605/OSF.IO/79Y5S",
"icon": "other"
},
{
"name": "Preregistration 1",
"url": "https://doi.org/10.17605/OSF.IO/RG7FH",
"icon": "other"
},
{
"name": "Preregistration 2",
"url": "https://doi.org/10.17605/OSF.IO/GC7QX",
"icon": "other"
},
{
"name": "Preregistration 3",
"url": "https://doi.org/10.17605/OSF.IO/F5397",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_bf10121c-1a25-4391-b766-450e53b2348e.html",
"icon": "other"
}
] |
Vis | 2025 | TiVy: Time Series Visual Summary for Scalable Visualization | 10.1109/TVCG.2025.3633882 | Visualizing multiple time series presents fundamental tradeoffs between scalability and visual clarity. Time series capture the behavior of many large-scale real-world processes, from stock market trends to urban activities. Users often gain insights by visualizing them as line charts, juxtaposing or superposing multiple time series to compare them and identify trends and patterns. However, existing representations struggle with scalability when covering long time spans, leading to visual clutter from too many small multiples or overlapping lines. We propose TiVy, a new algorithm that summarizes time series using sequential patterns. It transforms the series into a set of symbolic sequences based on subsequence visual similarity using Dynamic Time Warping (DTW), then constructs a disjoint grouping of similar subsequences based on the frequent sequential patterns. The grouping result, a visual summary of time series, provides uncluttered superposition with fewer small multiples. Unlike common clustering techniques, TiVy extracts similar subsequences (of varying lengths) aligned in time. We also present an interactive time series visualization that renders large-scale time series in real-time. Our experimental evaluation shows that our algorithm (1) extracts clear and accurate patterns when visualizing time series data, and (2) achieves a significant speed-up (1000×) compared to straightforward DTW clustering. We also demonstrate the efficiency of our approach in exploring hidden structures in massive time series data in two usage scenarios. | false | true | [
"Gromit Yeuk-Yin Chan",
"Luis Gustavo Nonato",
"Themis Palpanas",
"Claudio Silva",
"Juliana Freire"
] | [] | [
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/pdf/2507.18972",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/GromitC/TiVy",
"icon": "code"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_8347663a-7e88-43e2-a557-e7b2179fd4e5.html",
"icon": "other"
}
] |
Vis | 2025 | TrajLens: Visual Analysis for Constructing Cell Developmental Trajectories in Cross-Sample Exploration | 10.1109/TVCG.2025.3634875 | Constructing cell developmental trajectories is a critical task in single-cell RNA sequencing (scRNA-seq) analysis, enabling the inference of potential cellular progression paths. However, current automated methods are limited to establishing cell developmental trajectories within individual samples, necessitating biologists to manually link cells across samples to construct complete cross-sample evolutionary trajectories that consider cellular spatial dynamics. This process demands substantial human effort due to the complex spatial correspondence between each pair of samples. To address this challenge, we first proposed a GNN-based model to predict cross-sample cell developmental trajectories. We then developed TrajLens, a visual analytics system that supports biologists in exploring and refining the cell developmental trajectories based on predicted links. Specifically, we designed a visualization that integrates features of cell distribution and developmental direction across multiple samples, providing an overview of the spatial evolutionary patterns of cell populations along trajectories. Additionally, we included contour maps superimposed on the original cell distribution data, enabling biologists to explore them intuitively. To demonstrate our system's performance, we conducted quantitative evaluations of our model, along with two case studies and expert interviews, to validate its usefulness and effectiveness. | true | true | [
"Qipeng Wang",
"Shaolun Ruan",
"Rui Sheng",
"Yong WANG",
"Min Zhu",
"Huamin Qu"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/abs/2507.15620",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://doi.org/10.17605/OSF.IO/ETFJ2",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_9d6c7874-e4fa-449b-88aa-4b3937241e7d.html",
"icon": "other"
}
] |
Vis | 2025 | TrialCompass: Visual Analytics for Enhancing the Eligibility Criteria Design of Clinical Trials | 10.1109/TVCG.2025.3634803 | Eligibility criteria play a critical role in clinical trials by determining the target patient population, which significantly influences the outcomes of medical interventions. However, current approaches for designing eligibility criteria offer limited support for interactive exploration of the large space of eligibility criteria. They also fail to incorporate detailed characteristics from the original electronic health record (EHR) data for criteria refinement. To address these limitations, we proposed TrialCompass, a visual analytics system integrating a novel workflow, which can empower clinicians to iteratively explore the vast space of eligibility criteria through knowledge-driven and outcome-driven approaches. TrialCompass supports history-tracking to help clinicians trace the evolution of their adjustments and decisions when exploring various forms of data (i.e., eligibility criteria, outcome metrics, and detailed characteristics of original EHR data) through these two approaches. This feature can help clinicians comprehend the impact of eligibility criteria on outcome metrics and patient characteristics, which facilitates systematic refinement of eligibility criteria. Using a real-world dataset, we demonstrated the effectiveness of TrialCompass in providing insights into designing eligibility criteria for septic shock and sepsis-associated acute kidney injury. We also discussed the research prospects of applying visual analytics to clinical trials. | false | true | [
"Rui Sheng",
"Xingbo Wang",
"Jiachen Wang",
"Xiaofu Jin",
"Zhonghua SHENG",
"Zhenxing Xu",
"Suraj Rajendran",
"Huamin Qu",
"Fei Wang"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/pdf/2507.12298",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_3f0a5ad9-2440-47b4-a741-f6b92c58bf8d.html",
"icon": "other"
}
] |
Vis | 2025 | Uncertainty-Aware PCA Revisited | 10.1109/TVCG.2025.3633868 | Principal Component Analysis (PCA) is perhaps the most popular linear projection technique for dimensionality reduction. We consider PCA under the assumption that the high-dimensional data points are equipped with Gaussian uncertainty. Several approaches to such uncertainty-aware PCA have been developed recently in the visualization community. Since PCA is a discontinuous map, a small uncertainty in the data points can result in a huge uncertainty in the projected points. We show that the uncertainty of the data points also creates uncertainty in the eigenvectors of the covariance matrix that defines the PCA projection. We present a closed-form expression to quantify eigenvector uncertainty. Based on this, we propose a 3D glyph that supports the decision whether existing solutions for uncertainty-aware PCA are sufficient, or whether a more expensive sampling-based approach is required. We apply our approach to several test data sets. | true | true | [
"Lukas Friesecke",
"Christian Braune",
"Christian Roessl",
"Holger Theisel"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://vc.cs.ovgu.de/assets/publications/2025/Friesecke_2025_VIS.pdf",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://github.com/lfriesecke/uncertainty-aware-pca-revisited",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_367cd3fe-b8ff-4bee-bd9f-ae8e51feeb9e.html",
"icon": "other"
}
] |
Vis | 2025 | Understanding Aortic Dissection Hemodynamics: Evaluating Adapted Smoke Surfaces Against Streakline-Based Techniques | 10.1109/TVCG.2025.3634823 | Aortic dissection is a life-threatening cardiovascular disease characterized by blood entering the media layer of the aortic vessel wall. This creates a second flow channel, known as the false lumen, which weakens the aortic wall and can potentially lead to fatal aortic rupture. Current risk stratification of aortic dissections is primarily based on morphological features of the aorta. However, hemodynamics also play a significant role in disease progression, though their investigation and visualization remain challenging. Common flow visualizations often experience visual clutter, especially when dealing with the intricate morphologies of aortic dissections. In this work, we implement and evaluate different approaches to visualizing the flow in aortic dissections effectively. We employ three techniques, namely streaklines with depth-dependent halos, transparent streaklines, and smoke surfaces. The latter is a technique based on streak surfaces, enhanced with opacity modulations, to produce a smoke-like appearance that improves visual clarity. We adapt the original opacity modulation of smoke surfaces to visualize flow even within the complex geometries of aortic dissections, thereby enhancing visual fidelity. To effectively capture dissection hemodynamics, we developed customized seeding structures that adapt to the shape of the surrounding lumen. Our evaluation, conducted via an online questionnaire, included medical professionals, fluid simulation experts, and visualization specialists. By analyzing results across these groups, we highlight differences in preference and interpretability, offering insight into domain-specific needs. No single visualization technique emerged as the best overall. Smoke surfaces provide the best overall clarity and visual realism.
However, participants found streaklines with halos to be the best for quantifying flow, despite the significant visual clutter they introduce. Transparent streaklines serve as a middle ground, offering improved clarity over halos while maintaining some level of detail. Across all participant groups, smoke surfaces were rated as the most visually appealing and lifelike, with medical professionals highlighting their resemblance to contrast-agent injections used in clinical practice. | false | true | [
"Aaron Schroeder",
"Kai Ostendorf",
"Kathrin Baeumler",
"Domenico Mastrodicasa",
"Dominik Fleischmann",
"Bernhard Preim",
"Holger Theisel",
"Gabriel Mistelbauer"
] | [] | [
"C",
"O"
] | [
{
"name": "Code",
"url": "https://github.com/aaschr/vis_2025_smoke_surfaces_for_AD",
"icon": "code"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_7b762da7-ec92-461b-ab0a-95e7e278234d.html",
"icon": "other"
}
] |
Vis | 2025 | Understanding Large Language Model Behaviors through Interactive Counterfactual Generation and Analysis | 10.1109/TVCG.2025.3634646 | Understanding the behavior of large language models (LLMs) is crucial for ensuring their safe and reliable use. However, existing explainable AI (XAI) methods for LLMs primarily rely on word-level explanations, which are often computationally inefficient and misaligned with human reasoning processes. Moreover, these methods often treat explanation as a one-time output, overlooking its inherently interactive and iterative nature. In this paper, we present LLM Analyzer, an interactive visualization system that addresses these limitations by enabling intuitive and efficient exploration of LLM behaviors through counterfactual analysis. Our system features a novel algorithm that generates fluent and semantically meaningful counterfactuals via targeted removal and replacement operations at user-defined levels of granularity. These counterfactuals are used to compute feature attribution scores, which are then integrated with concrete examples in a table-based visualization, supporting dynamic analysis of model behavior. A user study with LLM practitioners and interviews with experts demonstrate the system's usability and effectiveness, emphasizing the importance of involving humans in the explanation process as active participants rather than passive recipients. | true | true | [
"Furui Cheng",
"Vilém Zouhar",
"Robin Chan",
"Daniel Fürst",
"Hendrik Strobelt",
"Mennatallah El-Assady"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2405.00708",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_7a68a26e-4b20-444f-a76a-eee7a08ab3be.html",
"icon": "other"
}
] |
Vis | 2025 | Unveiling the Visual Rhetoric of Persuasive Cartography: A Case Study of the Design of Octopus Maps | 10.1109/TVCG.2025.3634776 | When designed deliberately, data visualizations can become powerful persuasive tools, influencing viewers' opinions, values, and actions. While researchers have begun studying this issue (e.g., to evaluate the effects of persuasive visualization), we argue that a fundamental mechanism of persuasion resides in rhetorical construction, a perspective inadequately addressed in current visualization research. To fill this gap, we present a focused analysis of octopus maps, a visual genre that has maintained persuasive power across centuries and achieved significant social impact. Employing rhetorical schema theory, we collected and analyzed 90 octopus maps spanning from the 19th century to contemporary times. We closely examined how octopus maps implement their persuasive intents and constructed a design space that reveals how visual metaphors are strategically constructed and what common rhetorical strategies are applied to components such as maps, octopus imagery, and text. Through the above analysis, we also uncover a set of interesting findings. For instance, contrary to the common perception that octopus maps are primarily a historical phenomenon, our research shows that they remain a lively design convention in today's digital age. Additionally, while most octopus maps stem from Western discourse that views the octopus as an evil symbol, some designs offer alternative interpretations, highlighting the dynamic nature of rhetoric across different sociocultural settings. Lastly, drawing from the lessons provided by octopus maps, we discuss the associated ethical concerns of persuasive visualization. | true | true | [
"Daocheng Lin",
"Yifan Wang",
"Yutong Yang",
"Xingyu Lan"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.11903",
"icon": "paper"
},
{
"name": "Website",
"url": "https://octopusmap.github.io",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_19140b17-2652-4f77-a749-4088ec870052.html",
"icon": "other"
}
] |
Vis | 2025 | Urbanite: A Dataflow-Based Framework for Human-AI Interactive Alignment in Urban Visual Analytics | 10.1109/TVCG.2025.3634644 | With the growing availability of urban data and the increasing complexity of societal challenges, visual analytics has become essential for deriving insights into pressing real-world problems. However, analyzing such data is inherently complex and iterative, requiring expertise across multiple domains. The need to manage diverse datasets, distill intricate workflows, and integrate various analytical methods presents a high barrier to entry, especially for researchers and urban experts who lack proficiency in data management, machine learning, and visualization. Advancements in large language models offer a promising solution to lower the barriers to the construction of analytics systems by enabling users to specify intent rather than define precise computational operations. However, this shift from explicit operations to intent-based interaction introduces challenges in ensuring alignment throughout the design and development process. Without proper mechanisms, gaps can emerge between user intent, system behavior, and analytical outcomes. To address these challenges, we propose Urbanite, a framework for human-AI collaboration in urban visual analytics. Urbanite leverages a dataflow-based model that allows users to specify intent at multiple scopes, enabling interactive alignment across the specification, process, and evaluation stages of urban analytics. Based on findings from a survey to uncover challenges, Urbanite incorporates features to facilitate explainability, multi-resolution definition of tasks across dataflows, nodes, and parameters, while supporting the provenance of interactions. We demonstrate Urbanite's effectiveness through usage scenarios created in collaboration with urban experts. Urbanite is available at urbantk.org/urbanite. | false | true | [
"Gustavo Moreira",
"Leonardo Ferreira",
"Carolina Veiga",
"Maryam Hosseini",
"Fabio Miranda"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.07390",
"icon": "paper"
},
{
"name": "Website",
"url": "http://urbantk.org/urbanite",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_f8717d69-b5a1-497d-9241-6b1931d4c195.html",
"icon": "other"
}
] |
Vis | 2025 | Using Tactile Charts to Support Comprehension and Learning of Complex Visualizations for Blind and Low-Vision Individuals | 10.1109/TVCG.2025.3633874 | We investigate whether tactile charts support comprehension and learning of complex visualizations for blind and low-vision (BLV) individuals and contribute four tactile chart designs and an interview study. Visualizations are powerful tools for conveying data, yet BLV individuals typically can rely only on assistive technologies—primarily alternative texts—to access this information. Prior research shows the importance of mental models of chart types for interpreting these descriptions, yet BLV individuals have no means to build such a mental model based on images of visualizations. Tactile charts show promise to fill this gap in supporting the process of building mental models. Yet studies on tactile data representations mostly focus on simple chart types, and it is unclear whether they are also appropriate for more complex charts as would be found in scientific publications. Working with two BLV researchers, we designed 3D-printed tactile template charts with exploration instructions for four advanced chart types: UpSet plots, violin plots, clustered heatmaps, and faceted line charts. We then conducted an interview study with 12 BLV participants comparing whether using our tactile templates improves mental models and understanding of charts and whether this understanding translates to novel datasets experienced through alt texts. Thematic analysis shows that tactile models support chart type understanding and are the preferred learning method by BLV individuals. We also report participants' opinions on tactile chart design and their role in BLV education. | true | true | [
"Tingying He",
"Maggie McCracken",
"Daniel Hajas",
"Sarah Creem-Regehr",
"Alexander Lex"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.21462",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/9dwgq/",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/uhq68",
"icon": "other"
},
{
"name": "Website",
"url": "https://vdl.sci.utah.edu/tactile-charts/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_055a075f-b15e-43ec-becb-cc09ad944d2c.html",
"icon": "other"
}
] |
Vis | 2,025 | VA-Blueprint: Uncovering Building Blocks for Visual Analytics System Design | 10.1109/TVCG.2025.3634809 | Designing and building visual analytics (VA) systems is a complex, iterative process that requires the seamless integration of data processing, analytics capabilities, and visualization techniques. While prior research has extensively examined the social and collaborative aspects of VA system authoring, the practical challenges of developing these systems remain underexplored. As a result, despite the growing number of VA systems, there are only a few structured knowledge bases to guide their design and development. To tackle this gap, we propose VA-Blueprint, a methodology and knowledge base that systematically reviews and categorizes the fundamental building blocks of urban VA systems, a domain particularly rich and representative due to its intricate data and unique problem sets. Applying this methodology to an initial set of 20 systems, we identify and organize their core components into a multi-level structure, forming an initial knowledge base with a structured blueprint for VA system development. To scale this effort, we leverage a large language model to automate the extraction of these components for the other 81 papers (completing a corpus of 101 papers), assessing its effectiveness in scaling knowledge base construction. We evaluate our method through interviews with experts and a quantitative analysis of annotation metrics. Our contributions provide a deeper understanding of VA systems' composition and establish a practical foundation to support more structured, reproducible, and efficient system development. VA-Blueprint is available at urbantk.org/va-blueprint. | false | true | [
"Leonardo Ferreira",
"Gustavo Moreira",
"Fabio Miranda"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.07497",
"icon": "paper"
},
{
"name": "Website",
"url": "http://urbantk.org/va-blueprint",
"icon": "project_website"
},
{
"name": "Website",
"url": "https://leovsferreira.github.io/va-building-blocks/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_f2b6c739-7eed-4161-a955-230c614bae12.html",
"icon": "other"
}
] |
Vis | 2,025 | VisAnatomy: An SVG Chart Corpus with Fine-Grained Semantic Labels | 10.1109/TVCG.2025.3634263 | Chart corpora, which comprise data visualizations and their semantic labels, are crucial for advancing visualization research. However, the labels in most existing corpora are high-level (e.g., chart types), hindering their utility for broader applications in the era of AI. In this paper, we contribute VisAnatomy, a corpus containing 942 real-world SVG charts produced by over 50 tools, encompassing 40 chart types and featuring structural and stylistic design variations. Each chart is augmented with multi-level fine-grained labels on its semantic components, including each graphical element's type, role, and position, hierarchical groupings of elements, group layouts, and visual encodings. In total, VisAnatomy provides labels for more than 383k graphical elements. We demonstrate the richness of the semantic labels by comparing VisAnatomy with existing corpora. We illustrate its usefulness through four applications: semantic role inference for SVG elements, chart semantic decomposition, chart type classification, and content navigation for accessibility. Finally, we discuss research opportunities to further improve VisAnatomy. | false | true | [
"Chen Chen",
"Hannah Bako",
"Peihong Yu",
"John Hooker",
"Jeffrey Joyal",
"Simon Wang",
"Samuel Kim",
"Jessica Wu",
"Aoxue Ding",
"Lara Sandeep",
"Alex Chen",
"Chayanika Sinha",
"Zhicheng Liu"
] | [] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2410.12268",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/962xc/?view_only=adbb315fd8794f6dac6b9625d385900f",
"icon": "other"
},
{
"name": "Website",
"url": "https://visanatomy.github.io/",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_7ecd3f2d-f7b0-4b86-aa66-248cc17c0720.html",
"icon": "other"
}
] |
Vis | 2,025 | VisGuard: Securing Visualization Dissemination through Tamper-Resistant Data Retrieval | 10.1109/TVCG.2025.3634818 | The dissemination of visualizations is primarily in the form of raster images, which often results in the loss of critical information such as source code, interactive features, and metadata. While previous methods have proposed embedding metadata into images to facilitate Visualization Image Data Retrieval (VIDR), most existing methods lack practicability since they are fragile to common image tampering during online distribution such as cropping and editing. To address this issue, we propose VisGuard, a tamper-resistant VIDR framework that reliably embeds a metadata link into visualization images. The embedded data link remains recoverable even after substantial image tampering. We propose several techniques to enhance robustness, including repetitive data tiling, invertible information broadcasting, and an anchor-based scheme for crop localization. VisGuard enables various applications, including interactive chart reconstruction, tampering detection, and copyright protection. We conduct comprehensive experiments on VisGuard's superior performance in data retrieval accuracy, embedding capacity, and security against tampering and steganalysis, demonstrating VisGuard's competence in facilitating and safeguarding visualization dissemination and information conveyance. | false | true | [
"Huayuan Ye",
"Juntong Chen",
"Shenzhuo Zhang",
"Yipeng Zhang",
"Changbo Wang",
"Chenhui Li"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/pdf/2507.14459",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_88a99aaa-b89c-494c-b810-2113f43914ab.html",
"icon": "other"
}
] |
Vis | 2,025 | VisMoDAl: Visual Analytics for Evaluating and Improving Corruption Robustness of Vision-Language Models | 10.1109/TVCG.2025.3634257 | Vision-language (VL) models have shown transformative potential across various critical domains due to their capability to comprehend multi-modal information. However, their performance frequently degrades under distribution shifts, making it crucial to assess and improve robustness against real-world data corruption encountered in practical applications. While advancements in VL benchmark datasets and data augmentation (DA) have contributed to robustness evaluation and improvement, there remain challenges due to a lack of in-depth comprehension of model behavior as well as the need for expertise and iterative efforts to explore data patterns. Given the achievement of visualization in explaining complex models and exploring large-scale data, understanding the impact of various data corruption on VL models aligns naturally with a visual analytics approach. To address these challenges, we introduce VisMoDAl, a visual analytics framework designed to evaluate VL model robustness against various corruption types and identify underperformed samples to guide the development of effective DA strategies. Grounded in the literature review and expert discussions, VisMoDAl supports multi-level analysis, ranging from examining performance under specific corruptions to task-driven inspection of model behavior and corresponding data slice. Unlike conventional works, VisMoDAl enables users to reason about the effects of corruption on VL models, facilitating both model behavior understanding and DA strategy formulation. The utility of our system is demonstrated through case studies and quantitative evaluations focused on corruption robustness in the image captioning task. | false | true | [
"Huanchen Wang",
"Wencheng Zhang",
"Zhiqiang Wang",
"Zhicong Lu",
"Yuxin Ma"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_e8ed00a7-2d3b-46b8-9edd-6853dd9c2f8c.html",
"icon": "other"
}
] |
Vis | 2,025 | Visual Analytics Using Tensor Unified Linear Comparative Analysis | 10.1109/TVCG.2025.3633912 | Comparing tensors and identifying their (dis)similar structures is fundamental in understanding the underlying phenomena for complex data. Tensor decomposition methods help analysts extract tensors' essential characteristics and aid in visual analytics for tensors. In contrast to dimensionality reduction (DR) methods designed only for analyzing a matrix (i.e., second-order tensor), existing tensor decomposition methods do not support flexible comparative analysis. To address this analysis limitation, we introduce a new tensor decomposition method, named tensor unified linear comparative analysis (TULCA), by extending its DR counterpart, ULCA, for tensor analysis. TULCA integrates discriminant analysis and contrastive learning schemes for tensor decomposition, enabling flexible comparison of tensors. We also introduce an effective method to visualize a core tensor extracted from TULCA into a set of 2D visualizations. We integrate TULCA's functionalities into a visual analytics interface to support analysts in interpreting and refining the TULCA results. We demonstrate the efficacy of TULCA and the visual analytics interface with computational evaluations and two case studies, including an analysis of log data collected from a supercomputer. | true | true | [
"Naoki Okami",
"Kazuki Miyake",
"Naohisa Sakamoto",
"Jorji Nonaka",
"Takanori Fujiwara"
] | [] | [
"P",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.19988",
"icon": "paper"
},
{
"name": "Code",
"url": "https://github.com/vizlab-kobe/tulca",
"icon": "code"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_b5fec96f-039e-4af2-93d6-898b9a2caece.html",
"icon": "other"
}
] |
Vis | 2,025 | Visual Extraction of Interaction Patterns Guided by Hierarchical Clustering and Process Mining | 10.1109/TVCG.2025.3634837 | Understanding user interactions in digital systems is essential in analyzing user behaviors and improving system usability. However, a collection of interaction sequences is often large and unstructured, making it challenging to uncover interaction patterns. To address this challenge, we introduce a visual analytics approach that integrates hierarchical clustering and process mining techniques to support analysts in exploring unstructured, large interaction sequence data. Our system employs a tailored dynamic time warping-based similarity measure to enable comparison of interaction sequences. Based on the sequence similarities, we provide stepwise, interactive navigation of clustering results with contextual visual cues for refinement and validation. We further apply process mining to characterize derived clusters. Through these hierarchical clustering and process mining steps, analysts can progressively uncover meaningful interaction patterns while utilizing visual guidance and incorporating domain expertise. We demonstrate our system's effectiveness and applicability through two case studies involving system designers, developers, and domain experts. | true | true | [
"Peilin Yu",
"Aida Nordman",
"Takanori Fujiwara",
"Marta Koc-Januchta",
"Konrad Schönborn",
"Lonni Besançon",
"Katerina Vrotsou"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://doi.org/10.31219/osf.io/n5dxe_v1",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/6az29/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_440d12ea-914c-49b0-a380-163dab6a3e41.html",
"icon": "other"
}
] |
Vis | 2,025 | Visualization Badges: Communicating Design and Provenance through Graphical Labels Alongside Visualizations | 10.1109/TVCG.2025.3634782 | This paper presents Visualization Badges, graphical labels shown alongside visualizations to communicate provenance and design considerations to enhance understandability and transparency. Badges may, for example, highlight a major finding, disclose that an axis has been truncated, or warn of possible visual artifacts. Inspired by nutrition and energy labels on product packaging, visualization badges aim (i) to allow visualization authors to justify and disclose analysis and design decisions and (ii) to make readers aware of important information when viewing and interpreting visualizations. Collectively, visualization badges aim to foster trust in visualizations and prevent readers from drawing incorrect conclusions. Based on a series of co-design workshops, we define and evaluate the concept of visualization badges and formulate a conceptual framework for analysis, application, and further research. Our framework includes a catalog of 132 visualization badges, categorization schemes, design options for their visual representations, applied visualization examples, and guidelines for their use. We hope that visualization badges will help communicate data and collectively improve communication, visualization literacy, and the quality of visualization techniques. Our badges, workshops, and guidelines can be found online https://vis-badges.github.io. | false | true | [
"Valentin Edelsbrunner",
"Jinrui Wang",
"Alexis Pister",
"Tomas Vancisin",
"Sian Phillips",
"Min Chen",
"Benjamin Bach"
] | [
"HM"
] | [
"PW",
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://hal.science/view/index/docid/5199752",
"icon": "paper"
},
{
"name": "Website",
"url": "https://vis-badges.github.io/#/about",
"icon": "project_website"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_49b63c40-81c7-41ec-a38a-9d433769c1c6.html",
"icon": "other"
}
] |
Vis | 2,025 | Visualization Vibes: The Socio-Indexical Function of Visualization Design | 10.1109/TVCG.2025.3634814 | In contemporary information ecologies saturated with misinformation, disinformation, and a distrust of science itself, public data communication faces significant hurdles. Although visualization research has broadened criteria for effective design, governing paradigms privilege the accurate and efficient transmission of data. Drawing on theory from linguistic anthropology, we argue that such approaches, focused on encoding and decoding propositional content, cannot fully account for how people engage with visualizations and why particular visualizations might invite adversarial or receptive responses. In this paper, we present evidence that data visualizations communicate not only semantic, propositional meaning (meaning about data) but also social, indexical meaning (meaning beyond data). From a series of ethnographically-informed interviews, we document how readers make rich and varied assessments of a visualization's “vibes”: inferences about the social provenance of a visualization based on its design features. Furthermore, these social attributions have the power to influence reception, as readers' decisions about how to engage with a visualization concern not only content, or even aesthetic appeal, but also their sense of alignment or disalignment with the entities they imagine to be involved in its production and circulation. We argue these inferences hinge on a function of human sign systems that has thus far been little studied in data visualization: socio-indexicality, whereby the formal features (rather than the content) of communication evoke social contexts, identities, and characteristics. Demonstrating the presence and significance of this socio-indexical function in visualization, this paper offers both a conceptual foundation and practical intervention for troubleshooting breakdowns in public data communication. | true | true | [
"Michelle Morgenstern",
"Amy Fox",
"Graham Jones",
"Arvind Satyanarayan"
] | [] | [
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://doi.org/10.17605/OSF.IO/ERC6P",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_6a8c1148-d065-4251-9183-2955b9445817.html",
"icon": "other"
}
] |
Vis | 2,025 | Visualizing Trust: How Chart Embellishments Influence Perceptions of Credibility | 10.1109/TVCG.2025.3634785 | Effective data visualizations enhance perception, support cognitive processing, and facilitate informed decision-making by aligning with human perceptual strengths. Conversely, poorly designed visualizations can impede comprehension, introduce interpretive bias, and diminish the perceived credibility of the conveyed message. This paper investigates the extent to which visual embellishments influence perceived message credibility in data visualizations. We conducted two crowdsourced experiments to examine both holistic and component-level effects of embellishment. In the first experiment, participants evaluated the relative credibility of plain bar charts versus two embellished variants (cartoon-style and image-style) across topics. Participants provided both comparative judgments and qualitative feedback. In the second experiment, we systematically isolated the influence of specific design elements (color, font, and bar style) on credibility perceptions through controlled variations. Our findings reveal that the impact of embellishments on perceived message credibility is complex and context-dependent. While certain embellishments, such as the use of color and image-style bars, enhanced credibility, others, most notably hand-drawn fonts and cartoon-style bars, significantly undermined it. By operationalizing trust through the lens of message credibility, this work offers empirical insight into the design factors that shape viewers' perceptions. We conclude by proposing actionable design guidelines to support the creation of visualizations that are effective for communication and credible. | false | true | [
"Hayeong Song",
"Aeree Cho",
"Cindy Xiong Bearfield",
"John Stasko"
] | [] | [
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://osf.io/ph4b8/files/osfstorage",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_de570d65-b80f-4e25-9d54-88f11281e2c5.html",
"icon": "other"
}
] |
Vis | 2,025 | VizGenie: Toward Self-Refining, Domain-Aware Workflows for Next-Generation Scientific Visualization | 10.1109/TVCG.2025.3634655 | We present VizGenie, a self-improving, agentic framework that advances scientific visualization through large language models (LLMs) by orchestrating a collection of domain-specific and dynamically generated modules. Users initially access core functionalities—such as threshold-based filtering, slice extraction, and statistical analysis—through pre-existing tools. For tasks beyond this baseline, VizGenie autonomously employs LLMs to generate new visualization scripts (e.g., VTK Python code), expanding its capabilities on-demand. Each generated script undergoes automated backend validation and is seamlessly integrated upon successful testing, continuously enhancing the system's adaptability and robustness. A distinctive feature of VizGenie is its intuitive natural language interface, allowing users to issue high-level feature-based queries (e.g., “visualize the skull” or “highlight tissue boundaries”). The system leverages image-based analysis and visual question answering (VQA) via fine-tuned vision models to interpret these queries precisely, bridging domain expertise and technical implementation. Additionally, users can interactively query generated visualizations through VQA, facilitating deeper exploration. Reliability and reproducibility are further strengthened by Retrieval-Augmented Generation (RAG), providing context-driven responses while maintaining comprehensive provenance records. Evaluations on complex volumetric datasets demonstrate significant reductions in cognitive overhead for iterative visualization tasks. By integrating curated domain-specific tools with LLM-driven flexibility, VizGenie not only accelerates insight generation but also establishes a sustainable, continuously evolving visualization practice.
The resulting platform dynamically learns from user interactions, consistently enhancing support for feature-centric exploration and reproducible research in scientific visualization. | false | true | [
"Ayan Biswas",
"Terece Turton",
"Nishath Ranasinghe",
"Shawn Jones",
"Bradley Love",
"William Jones",
"Aric Hagberg",
"Han-Wei Shen",
"Nathan Debardeleben",
"Earl Lawrence"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.21124",
"icon": "paper"
},
{
"name": "Video",
"url": "https://www.youtube.com/watch?v=P8InRaOBnKI",
"icon": "video"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_9e05775a-056c-42f7-a3c2-e81281c5045a.html",
"icon": "other"
}
] |
Vis | 2,025 | VolMoVis: Real-Time Volume Generation and Motion Visualization with Dynamic Tomographic Reconstruction | 10.1109/TVCG.2025.3634648 | We present VolMoVis, a method for dynamic tomographic reconstruction that supports real-time volume generation and volumetric motion visualization from 2D projections. Visualizing the motion of 3D anatomical structures, such as organs and tumors, is critical for computer-aided interventions. However, conventional 4D volumetric reconstruction methods typically produce a limited set of volumes at discrete phases, suffering from low temporal resolution. Moreover, visualizing volumetric data often requires extensive segmentation of 3D structures or regions, making it challenging to segment and visualize dynamic volumes in real-time. To address these challenges, the VolMoVis framework employs a continuous implicit neural representation that decomposes the dynamic volumetric data into a static reference volume and a continuous deformation field. This decomposition, along with an efficient deformation network, enables our framework to achieve real-time volume generation and volumetric visualization of continuous anatomical motions. We evaluate VolMoVis on both 4D digital phantoms and real patient datasets, demonstrating its effectiveness for accurate anatomical reconstruction and motion tracking. Furthermore, we highlight its capabilities in real-time simultaneous volume generation and tumor segmentation for visualizing dynamic volumes and 4D tumor tracking, showcasing its potential in image-guided radiation therapy. | false | true | [
"Gaofeng Deng",
"Arie Kaufman"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_f84ebbf0-98bb-4a65-9640-3161c7449f60.html",
"icon": "other"
}
] |
Vis | 2,025 | VolSegGS: Segmentation and Tracking in Dynamic Volumetric Scenes via Deformable 3D Gaussians | 10.1109/TVCG.2025.3642516 | Visualization of large-scale time-dependent simulation data is crucial for domain scientists to analyze complex phenomena, but it demands significant I/O bandwidth, storage, and computational resources. To enable effective visualization on local, low-end machines, recent advances in view synthesis techniques, such as neural radiance fields, utilize neural networks to generate novel visualizations for volumetric scenes. However, these methods focus on reconstruction quality rather than facilitating interactive visualization exploration, such as feature extraction and tracking. We introduce VolSegGS, a novel Gaussian splatting framework that supports interactive segmentation and tracking in dynamic volumetric scenes for exploratory visualization and analysis. Our approach utilizes deformable 3D Gaussians to represent a dynamic volumetric scene, allowing for real-time novel view synthesis. For accurate segmentation, we leverage the view-independent colors of Gaussians for coarse-level segmentation and refine the results with an affinity field network for fine-level segmentation. Additionally, by embedding segmentation results within the Gaussians, we ensure that their deformation enables continuous tracking of segmented regions over time. We demonstrate the effectiveness of VolSegGS with several time-varying datasets and compare our solutions against state-of-the-art methods. With the ability to interact with a dynamic scene in real time and provide flexible segmentation and tracking capabilities, VolSegGS offers a powerful solution under low computational demands. This framework unlocks exciting new possibilities for time-varying volumetric data analysis and visualization. | false | true | [
"Siyuan Yao",
"Chaoli Wang"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.12667",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_3d97e8c0-6d8b-4fe5-aa06-3ddf284b1a45.html",
"icon": "other"
}
] |
Vis | 2,025 | What is the Color of Serendipity? Investigating the Use of Language Models for Semantically Resonant Color Generation | 10.1109/TVCG.2025.3634243 | Humans inherently connect certain colors with particular concepts in semantically meaningful ways that facilitate visual communication. These colors are known as semantically resonant colors. For instance, we associate “sky” and “ocean” with shades of blue, and “cherry” with red. In this paper, we investigate how language models, including Word2Vec, RoBERTa, GPT-4o mini and the vision language model CLIP generate and represent nuanced semantically resonant colors for diverse concepts. To achieve this, we utilized a large dataset of color names and concepts, tailored models for the structure of each language model, and developed an interactive web interface, Concept2Color, as a use case. Additionally, we conducted experiments and a detailed analysis to assess the ability of these models to generate meaningful colors. Through these experiments, we examined how factors such as model design, training data and context affect the color output. Our findings reveal the capabilities and limitations of language models in processing and generating semantically resonant colors for concepts, thus contributing insights into how they depict semantic color-concept connections. These insights have implications for data visualization, design, and human-computer interaction, where leveraging effective semantic color generation can enhance communication and user experience. | false | true | [
"Shahreen Salim",
"Tanzir Pial",
"Klaus Mueller"
] | [] | [
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_b00813fe-c0fb-4bdb-8040-b6e338088247.html",
"icon": "other"
}
] |
Vis | 2,025 | What Makes a Visualization Image Complex? | 10.1109/TVCG.2025.3633827 | We investigate the perceived visual complexity (VC) in data visualizations using objective image-based metrics. We collected VC scores through a large-scale crowdsourcing experiment involving 349 participants and 1,800 visualization images. We then examined how these scores align with 12 image-based metrics spanning pixel-based and statistic-information-theoretic (clutter), color, shape, and our two new object-based metrics (meaningful-color-count (MeC) and text-to-ink ratio (TiR)). Our results show that both low-level edges and high-level elements affect perceived VC in visualization images; the number of corners and distinct colors are robust metrics across visualizations. Second, feature congestion, a statistical information-theoretic metric capturing color and texture patterns, is the strongest predictor of perceived complexity in visualizations rich in the same continuous color/texture stimuli; edge density effectively explains VC in node-link diagrams. Additionally, we observe a bell-curve effect for texts: increasing TiR initially reduces complexity, reaching an optimal point, beyond which further text increases VC. Our quantification model is also interpretable, enabling metric-based explanations, grounded in the VisComplexity2K dataset, bridging computational metrics with human perceptual responses. The preregistration is available at osf.io/5xe8a. osf.io/bdet6 has the dataset and analysis code. | false | true | [
"Mengdi Chu",
"Zefeng Qiu",
"Meng Ling",
"Shuning Jiang",
"Robert Laramee",
"Michael Sedlmair",
"Jian Chen"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/ypez4",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/bdet6/",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/5xe8a",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_5cf45946-05e6-4d9c-876e-abeacdf5b8e1.html",
"icon": "other"
}
] |
Vis | 2,025 | Write, Rank, or Rate: Comparing Methods for Studying Visualization Affordances | 10.1109/TVCG.2025.3633872 | A growing body of work on visualization affordances highlights how specific design choices shape reader takeaways from information visualizations. However, mapping the relationship between design choices and reader conclusions often requires labor-intensive crowdsourced studies, generating large corpora of free-response text for analysis. To address this challenge, we explored alternative scalable research methodologies to assess chart affordances. We test four elicitation methods from human-subject studies: free response, visualization ranking, conclusion ranking, and salience rating, and compare their effectiveness in eliciting reader interpretations of line charts, dot plots, and heatmaps. Overall, we find that while no method fully replicates affordances observed in free-response conclusions, combinations of ranking and rating methods can serve as an effective proxy at a broad scale. The two ranking methodologies were influenced by participant bias towards certain chart types and the comparison of suggested conclusions. Rating conclusion salience could not capture the specific variations between chart types observed in the other methods. To supplement this work, we present a case study with GPT-4o, exploring the use of large language models (LLMs) to elicit human-like chart interpretations. This aligns with recent academic interest in leveraging LLMs as proxies for human participants to improve data collection and analysis efficiency. GPT-4o performed best as a human proxy for the salience rating methodology but suffered from severe constraints in other areas. Overall, the discrepancies in affordances we found between various elicitation methodologies, including GPT-4o, highlight the importance of intentionally selecting and combining methods and evaluating trade-offs. | true | true | [
"Chase Stokes",
"Kylie Lin",
"Cindy Xiong Bearfield"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2507.17024",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_4cf2c4a7-98c7-4d49-a5c2-410ecde134cb.html",
"icon": "other"
}
] |
Vis | 2,025 | Your Model Is Unfair, Are You Even Aware? Inverse Relationship Between Comprehension and Trust in Explainability Visualizations of Biased ML Models | 10.1109/TVCG.2025.3634245 | Systems relying on ML have become ubiquitous, but so has biased behavior within them. Research shows that bias significantly affects stakeholders' trust in systems and how they use them. Further, stakeholders of different backgrounds view and trust the same systems differently. Thus, how ML models' behavior is explained plays a key role in comprehension and trust. We survey explainability visualizations, creating a taxonomy of design characteristics. We conduct user studies to evaluate five state-of-the-art visualization tools (LIME, SHAP, CP, Anchors, and ELI5) for model explainability, measuring how taxonomy characteristics affect comprehension, bias perception, and trust for non-expert ML users. Surprisingly, we find an inverse relationship between comprehension and trust: the better users understand the models, the less they trust them. We investigate the cause and find that this relationship is strongly mediated by bias perception: more comprehensible visualizations increase people's perception of bias, and increased bias perception reduces trust. We confirm this relationship is causal: Manipulating explainability visualizations to control comprehension, bias perception, and trust, we show that visualization design can significantly (p < 0.001) increase comprehension, increase perceived bias, and reduce trust. Conversely, reducing perceived model bias, either by improving model fairness or by adjusting visualization design, significantly increases trust even when comprehension remains high. Our work advances understanding of how comprehension affects trust and systematically investigates visualization's role in facilitating responsible ML applications. | true | true | [
"Zhanna Kaufman",
"Madeline Endres",
"Cindy Xiong Bearfield",
"Yuriy Brun"
] | [] | [
"P",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2508.00140",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/c87xm/?view_only=31dfc1f2a7624f5cb20b0f07d3730df3",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2025/program/paper_1b3b1b15-e378-4c65-8984-717a7f7baa3a.html",
"icon": "other"
}
] |
EuroVis | 2,025 | A Process-Oriented Approach to Analyze Analysts' Use of Visualizations: Revealing Insights into the What, When, and How | 10.1111/cgf.70104 | Despite Visual Analytics (VA) tools being essential for supporting data analysis, evaluating their use in real-world analytical processes remains challenging. Traditional evaluation methods often overlook the nuanced and evolving nature of analysis processes and are not always suitable for investigating scenarios in which analysts combine multiple tools and visualization types. In this paper, we propose a flexible analysis approach for studying analysts' use of visualizations within and across VA tools. Our qualitative method allows researchers to extract user behavior and cognitive steps from screen recordings and think-aloud data and generate event sequences that capture analytic processes. This enables the analysis of usage patterns from multiple perspectives and levels of granularity and allows for the evaluation of effectiveness measures, such as efficiency and accuracy. We demonstrate our approach in the domain of process mining, where our findings provide insights into the use of existing visualizations, and we reflect on lessons learned from this application. | false | false | [
"Lisa Zimmermann 0002",
"Francesca Zerbato",
"Katerina Vrotsou",
"Barbara Weber"
] | [] | [] | [] |
EuroVis | 2,025 | Accessible Text Descriptions for UpSet Plots | 10.1111/cgf.70102 | Data visualizations are typically not accessible to blind and low-vision (BLV) users. Automatically generating text descriptions offers an enticing mechanism for democratizing access to the information held in complex scientific charts, yet appropriate procedures for generating those texts remain elusive. Pursuing this issue, we study a single complex chart form: UpSet plots. UpSet Plots are a common way to analyze set data, an area largely unexplored by prior accessibility literature. By analyzing the patterns present in real-world examples, we develop a system for automatically captioning any UpSet plot. We evaluated the utility of our captions via semi-structured interviews with (N=11) BLV users and found that BLV users find them informative. In extensions, we find that sighted users can use our texts similarly to UpSet plots and that they are better than naive LLM usage. | false | false | [
"Andrew M. McNutt",
"Maggie K. McCracken",
"Ishrat Jahan Eliza 0001",
"Daniel Hajas",
"Jake Wagoner",
"Nate Lanza",
"Jack Wilburn",
"Sarah H. Creem-Regehr",
"Alexander Lex"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2503.17517v1",
"icon": "paper"
}
] |
EuroVis | 2,025 | An Interactive Visual Enhancement for Prompted Programmatic Weak Supervision in Text Classification | 10.1111/cgf.70131 | Programmatic Weak Supervision (PWS) has emerged as a powerful technique for text classification. By aggregating weak labels provided by manually written label functions, it allows training models on large-scale unlabeled data without the need for costly manual annotations. As an improvement, Prompted PWS incorporates pre-trained large language models (LLMs) as part of the label function, replacing programs coded by experts with natural language prompts. This allows for the more accessible expression of complex and ambiguous concepts. However, the existing workflow does not fully utilize the advantages of Prompted PWS: annotators have difficulty converging their ideas effectively to develop high-quality LFs and lack support during iterations. To address this issue, this study improves the existing PWS workflow through interactive visualization. We first propose a collaborative LF development workflow between humans and LLMs, where the large language model assists humans in creating a structured development space for exploration and automatically generates prompted LFs based on human selections. Annotators can integrate their knowledge through informed selection and judgment. Then, we present an interactive visual system that supports efficient development, in-depth exploration, and iteration of LFs. Our evaluation, comprising a quantitative evaluation on the benchmark, a case study, and a user study, demonstrates the effectiveness of our approach. | false | false | [
"Yiming Lin",
"S. Wei",
"Huijie Zhang",
"Dezhan Qu",
"Jinghan Bai"
] | [] | [] | [] |
EuroVis | 2,025 | Benchmarking Visual Language Models on Standardized Visualization Literacy Tests | 10.1111/cgf.70137 | The increasing integration of Visual Language Models (VLMs) into visualization systems demands a comprehensive understanding of their visual interpretation capabilities and constraints. While existing research has examined individual models, systematic comparisons of VLMs' visualization literacy remain unexplored. We bridge this gap through a rigorous, first-of-its-kind evaluation of four leading VLMs (GPT-4, Claude, Gemini, and Llama) using standardized assessments: the Visualization Literacy Assessment Test (VLAT) and Critical Thinking Assessment for Literacy in Visualizations (CALVI). Our methodology uniquely combines randomized trials with structured prompting techniques to control for order effects and response variability - a critical consideration overlooked in many VLM evaluations. Our analysis reveals that while specific models demonstrate competence in basic chart interpretation (Claude achieving 67.9% accuracy on VLAT), all models exhibit substantial difficulties in identifying misleading visualization elements (maximum 30.0% accuracy on CALVI). We uncover distinct performance patterns: strong capabilities in interpreting conventional charts like line charts (76-96% accuracy) and detecting hierarchical structures (80-100% accuracy), but consistent difficulties with data-dense visualizations involving multiple encodings (bubble charts: 18.6-61.4%) and anomaly detection (25-30% accuracy). Significantly, we observe distinct uncertainty management behavior across models, with Gemini displaying heightened caution (22.5% question omission) compared to others (7-8%). These findings provide crucial insights for the visualization community by establishing reliable VLM evaluation benchmarks, identifying areas where current models fall short, and highlighting the need for targeted improvements in VLM architectures for visualization tasks. 
To promote reproducibility, encourage further research, and facilitate benchmarking of future VLMs, our complete evaluation framework, including code, prompts, and analysis scripts, is available at https://github.com/washuvis/VisLit-VLM-Eval. | false | false | [
"Saugat Pandey",
"Alvitta Ottley"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2503.16632v1",
"icon": "paper"
}
] |
EuroVis | 2,025 | Beyond Entertainment: An Investigation of Externalization Design in Video Games | 10.1111/cgf.70124 | This article investigates when and how video games enable players to create externalizations in a diverse sample of 388 video games. We follow a grounded-theory approach, extracting externalizations from video games to explore design ideas and relate them to practices in visualization. Video games often engage players in problem-solving activities, like solving a murder mystery or optimizing a strategy, requiring players to interpret heterogeneous data—much like tasks in the visualization domain. In many cases, externalizations can help reduce a user's mental load by making tangible what otherwise only lives in their head, acting as external storage or a visual playground. Over five coding phases, we created a hierarchy of 277 tags to describe the video games in our collection, from which we extracted 169 externalizations. We characterize these externalizations along nine dimensions like mental load, visual encodings, and motivations, resulting in 13 categories divided into four clusters: quick access, storage, sensemaking, and communication. We formulate considerations to guide future work, looking at tasks and challenges, naming potentials for inspiration, and discussing which topics could advance the state of externalization. | false | false | [
"Franziska Becker",
"Rene P. Warnking",
"Hendrik Brückler",
"Tanja Blascheck"
] | [] | [] | [] |
EuroVis | 2,025 | Coupling Guidance and Progressiveness in Visual Analytics | 10.1111/cgf.70115 | Data size and complexity in Visual Analytics (VA) pose significant challenges for VA systems and VA users. Two recent developments address these challenges: progressive VA (PVA) and guidance for VA (GVA). Both share the goal of supporting the analysis flow. PVA primarily considers the system perspective and incrementally generates partial results during long computations to avoid an unresponsive VA system. GVA is primarily concerned with the user perspective and strives to mitigate knowledge gaps during VA activities to prevent the analysis from stalling. Although PVA and GVA share the same goal, it has not yet been studied how PVA and GVA can join forces to achieve it. Our paper investigates this in detail. We structure our research around two questions: How can guidance enhance PVA and how can progressiveness enhance GVA? This leads to two main themes: Guidance for Progressiveness (G4P) and Progressiveness for Guidance (P4G). By exploring both themes, we arrive at a conceptual model of how progressiveness and guidance can work together. We illustrate the practical value of our theoretical considerations in two case studies of G4P and P4G. | false | false | [
"Ignacio Pérez-Messina",
"Marco Angelini",
"Davide Ceneda",
"Christian Tominski",
"Silvia Miksch"
] | [] | [] | [] |
EuroVis | 2,025 | DashGuide: Authoring Interactive Dashboard Tours for Guiding Dashboard Users | 10.1111/cgf.70107 | Dashboard guidance helps dashboard users better navigate interactive features, understand the underlying data, and assess insights they can potentially extract from dashboards. However, authoring dashboard guidance is a time consuming task, and embedding guidance into dashboards for effective delivery is difficult to realize. In this work, we contribute DashGuide, a framework and system to support the creation of interactive dashboard guidance with minimal authoring input. Given a dashboard and a communication goal, DashGuide captures a sequence of author-performed interactions to generate guidance materials delivered as playable step-by-step overlays, a.k.a., dashboard tours. Authors can further edit and refine individual tour steps while receiving generative assistance. We also contribute findings from a formative assessment with 9 dashboard creators, which helped inform the design of DashGuide; and findings from an evaluation of DashGuide with 12 dashboard creators, suggesting it provides an improved authoring experience that balances efficiency, expressiveness, and creative freedom. | false | false | [
"Md. Naimul Hoque",
"Nicole Sultanum"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2504.17150v1",
"icon": "paper"
}
] |
EuroVis | 2,025 | DataWeaver: Authoring Data-Driven Narratives through the Integrated Composition of Visualization and Text | 10.1111/cgf.70098 | Data-driven storytelling has gained prominence in journalism and other data reporting fields. However, the process of creating these stories remains challenging, often requiring the integration of effective visualizations with compelling narratives to form a cohesive, interactive presentation. To help streamline this process, we present an integrated authoring framework and system, DataWeaver, that supports both visualization-to-text and text-to-visualization composition. DataWeaver enables users to create data narratives anchored to data facts derived from "call-out" interactions, i.e., user-initiated highlights of visualization elements that prompt relevant narrative content. In addition to this "vis-to-text" composition, DataWeaver also supports a "text-initiated" approach, generating relevant interactive visualizations from existing narratives. Key findings from an evaluation with 13 participants highlighted the utility and usability of DataWeaver and the effectiveness of its integrated authoring framework. The evaluation also revealed opportunities to enhance the framework by refining filtering mechanisms and visualization recommendations and better support authoring creativity by introducing advanced customization options. | false | false | [
"Yu Fu 0010",
"Dennis Bromley",
"Vidya Setlur"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2503.22946v1",
"icon": "paper"
}
] |
EuroVis | 2,025 | Either Or: Interactive Articles or Videos for Climate Science Communication | 10.1111/cgf.70129 | Effective communication of climate science is critical as climate-related disasters become more frequent and severe. Translating complex information, such as uncertainties in climate model predictions, into formats accessible to diverse audiences is key to informed decision-making and public engagement. This study investigates how different teaching formats can enhance understanding of these uncertainties. This study compares two multimodal strategies: (1) a text-image format with interactive components and (2) an explainer video combining dynamic visuals with narration. Participants' immediate and delayed retention (one week) and engagement are assessed to determine which format offers greater saliency. | false | false | [
"Jeran Poehls",
"Monique Meuschke",
"Nuno Carvalhais",
"Kai Lawonn"
] | [] | [] | [] |
EuroVis | 2,025 | Embedded and Situated Visualisation in Mixed Reality to Support Interval Running | 10.1111/cgf.70133 | We investigate the use of mixed reality visualisations to help pace tracking for interval running. We introduce three immersive visual designs to support pace tracking. Our designs leverage two properties afforded by mixed reality environments to display information: the space in front of the user and the physical environment to embed pace visualisation. In this paper, we report on the first design exploration and controlled study of mixed reality technology to support pacing tracking during interval running on an outdoor running track. Our results show that mixed reality and immersive visualisation designs for interval training offer a viable option to help runners (a) maintain regular pace, (b) maintain running flow, and (c) reduce mental task load. | false | false | [
"Ang Li 0007",
"Charles Perin",
"Jarrod Knibbe",
"Gianluca Demartini",
"Stephen Viller",
"Maxime Cordeil"
] | [] | [] | [] |
EuroVis | 2,025 | Enhancing Material Boundary Visualizations in 2D Unsteady Flow through Local Reference Frame Transformations | 10.1111/cgf.70128 | We present a novel technique for the extraction, visualization, and analysis of material boundaries and Lagrangian coherent structures (LCS) in 2D unsteady flow fields relative to local reference frame transformations. In addition to the input flow field, we leverage existing methods for computing reference frames adapted to local fluid features, in particular those that minimize the observed time derivative. Although, by definition, transforming objective tensor fields between reference frames does not change the tensor field, we show that transforming objective tensors, such as the finite-time Lyapunov exponent (FTLE) or Lagrangian-averaged vorticity deviation (LAVD), or the second-order rate-of-strain tensor, into local reference frames that are naturally adapted to coherent fluid structures has several advantages: (1) The transformed fields enable analyzing LCS in space-time visualizations that are adapted to each structure; (2) They facilitate extracting geometric features, such as iso-surfaces and ridge lines, in a straightforward manner with high accuracy. The resulting visualizations are characterized by lower geometric complexity and enhanced topological fidelity. To demonstrate the effectiveness of our technique, we measure geometric complexity and compare it with iso-surfaces extracted in the conventional reference frame. We show that the decreased geometric complexity of the iso-surfaces in the local reference frame, not only leads to improved geometric and topological results, but also to a decrease in computation time. | false | false | [
"Xingdi Zhang",
"Peter Rautek",
"Thomas Theußl",
"Markus Hadwiger"
] | [] | [] | [] |
EuroVis | 2,025 | Euclidean, Hyperbolic, and Spherical Networks: An Empirical Study of Matching Network Structure to Best Visualizations | 10.1111/cgf.70126 | We investigate the usability of Euclidean, spherical and hyperbolic geometries for network visualization. Several techniques have been proposed for both spherical and hyperbolic network visualization tools, based on the fact that some networks admit lower embedding error (distortion) in such non-Euclidean geometries. However, it is not yet known whether a lower embedding error translates to human subject benefits, e.g., better task accuracy or lower task completion time. We design, implement, conduct, and analyze a human subjects study to compare Euclidean, spherical and hyperbolic network visualizations using tasks that span the network task taxonomy. While in some cases accuracy and response times are negatively impacted when using non-Euclidean visualizations, the evaluation shows that differences in accuracy for hyperbolic and spherical visualizations are not statistically significant when compared to Euclidean visualizations. Additionally, differences in response times for spherical visualizations are not statistically significant compared to Euclidean visualizations. | false | false | [
"Jacob Miller 0001",
"Dhruv Bhatia",
"Helen C. Purchase",
"Stephen G. Kobourov"
] | [] | [] | [] |
EuroVis | 2,025 | FairSpace: An Interactive Visualization System for Constructing Fair Consensus from Many Rankings | 10.1111/cgf.70132 | Decisions involving algorithmic rankings affect our lives in many ways, from product recommendations, receiving scholarships, to securing jobs. While tools have been developed for interactively constructing fair consensus rankings from a handful of rankings, addressing the more complex real-world scenario—where diverse opinions are represented by a larger collection of rankings—remains a challenge. In this paper, we address these challenges by reformulating the exploration of rankings as a dimension reduction problem in a system called FairSpace. FairSpace provides new views, including Fair Divergence View and Cluster Views, by juxtaposing fairness metrics of different local and alternative global consensus rankings to aid ranking analysis tasks. We illustrate the effectiveness of FairSpace through a series of use cases, demonstrating via interactive workflows that users are empowered to create local consensuses by grouping rankings similar in their fairness or utility properties, followed by hierarchically aggregating local consensuses into a global consensus through direct manipulation. We discuss how FairSpace opens the possibility for advances in dimension reduction visualization to benefit the research area of supporting fair decision-making in ranking-based decision-making contexts. | false | false | [
"Hilson Shrestha",
"Kathleen Cachel",
"Mallak Alkhathlan",
"Elke A. Rundensteiner",
"Lane Harrison"
] | [] | [] | [] |
EuroVis | 2,025 | Fast and Invertible Simplicial Approximation of Magnetic-Following Interpolation for Visualizing Fusion Plasma Simulation Data | 10.1111/cgf.70120 | We introduce a fast and invertible approximation for fusion plasma simulation data represented as 2D planar meshes with connectivities approximating magnetic field lines along the toroidal dimension in deformed 3D toroidal spaces. Scientific variables (e.g., density and temperature) in these fusion data are interpolated following a complex magnetic-field-line-following scheme in the toroidal space represented by a cylindrical coordinate system. This deformation in the 3D space poses challenges for root-finding and interpolation. To this end, we propose a novel paradigm for visualizing and analyzing such data based on a newly developed algorithm for constructing a 3D simplicial mesh within the deformed 3D space. Our algorithm generates a tetrahedral mesh that connects the 2D meshes using tetrahedra while adhering to the constraints on node connectivities imposed by the magnetic field-line scheme. Specifically, we first divide the space into smaller partitions to reduce complexity based on the input geometries and constraints on connectivities. Then, we independently search for a feasible tetrahedralization of each partition, considering nonconvexity. We demonstrate our method with two X-Point Gyrokinetic Code (XGC) simulation datasets on the International Thermonuclear Experimental Reactor (ITER) and Wendelstein 7-X (W7-X), and use an ocean simulation dataset to substantiate broader applicability of our method. An open source implementation of our algorithm is available at https://github.com/rcrcarissa/DeformedSpaceTet. | false | false | [
"Congrong Ren",
"Robert Hager",
"Randy Michael Churchill",
"Albert Mollén",
"Seung-Hoe Ku",
"Choong-Seock Chang",
"Hanqi Guo 0001"
] | [] | [] | [] |
EuroVis | 2,025 | Fast HARDI Uncertainty Quantification and Visualization with Spherical Sampling | 10.1111/cgf.70138 | In this paper, we study uncertainty quantification and visualization of orientation distribution functions (ODF), which corresponds to the diffusion profile of high angular resolution diffusion imaging (HARDI) data. The shape inclusion probability (SIP) function is the state-of-the-art method for capturing the uncertainty of ODF ensembles. The current method of computing the SIP function with a volumetric basis exhibits high computational and memory costs, which can be a bottleneck to integrating uncertainty into HARDI visualization techniques and tools. We propose a novel spherical sampling framework for faster computation of the SIP function with lower memory usage and increased accuracy. In particular, we propose direct extraction of SIP isosurfaces, which represent confidence intervals indicating spatial uncertainty of HARDI glyphs, by performing spherical sampling of ODFs. Our spherical sampling approach requires much less sampling than the state-of-the-art volume sampling method, thus providing significantly enhanced performance, scalability, and the ability to perform implicit ray tracing. Our experiments demonstrate that the SIP isosurfaces extracted with our spherical sampling approach can achieve up to 8164× speedup, 37282× memory reduction, and 50.2% less SIP isosurface error compared to the classical volume sampling approach. We demonstrate the efficacy of our methods through experiments on synthetic and human-brain HARDI datasets. | false | false | [
"Tark Patel",
"Tushar M. Athawale",
"Timbwaoga A. J. Ouermi",
"Chris R. Johnson 0001"
] | [] | [] | [] |
EuroVis | 2,025 | Fluidly Revealing Information: A Survey of Un/foldable Data Visualizations | 10.1111/cgf.70152 | Revealing relevant information on demand is an essential requirement for visual data exploration. In this state-of-the-art report, we review and classify techniques that are inspired by the physical metaphor of un/folding to reveal relevant information or, conversely, to reduce irrelevant information in data visualizations. Similar to focus+context approaches, un/foldable visualizations transform the visual data representation, often between different granularities, in an integrated manner while preserving the overall context. This typically involves switching between different visibility states of data elements or adjusting the graphical abstraction linked by gradual display transitions. We analyze a literature corpus of 101 visualization techniques specifically with respect to their use of the un/folding metaphor. In particular, we consider the type of data, the focus scope and the effect scope, the number of un/folding states, the transformation type, and the controllability and interaction directness of un/folding. The collection of un/foldables is available as an online catalog that includes classic focus+context, semantic zooming, and multi-scale visualizations as well as contemporary un/foldable visualizations. From our literature analysis, we further extract families of un/folding techniques, summarize empirical findings to date, and identify promising research directions for un/foldable data visualization. | false | false | [
"Mark-Jan Bludau",
"Marian Dörk",
"Stefan Bruckner",
"Christian Tominski"
] | [] | [] | [] |
EuroVis | 2,025 | Gridded Visualization of Statistical Trees for High-Dimensional Multipartite Data in Systems Genetics | 10.1111/cgf.70113 | In systems genetics and other multi-omics research, exploring high-dimensional relationships among molecular and physiological variables across individuals poses significant challenges. We present the Gridded Trees interface, a novel interactive visualization tool designed to facilitate the exploration of conditional inference trees, which are hierarchical models of relationships in these complex datasets. Traditional static tools struggle to reveal patterns in tree-structured data, but the Gridded Trees interface provides interactive, coordinated views, allowing users to navigate between overview and detail, filter data dynamically, and compare molecular-physiological relationships across subgroups. By combining filtering techniques, strip plots, Sankey diagrams, and small multiples, the Gridded Trees interface enhances exploratory data analysis and supports hypothesis generation. In our systems genetics research use case, this tool has revealed significant associations among microbial populations and addiction-related behavioral traits in genetically diverse mice. The Gridded Trees interface suggests broad potential for visualizing hierarchical and multipartite data across domains. A preprint of this paper as well as Supplemental Materials are available on OSF at https://osf.io/9emn5/. | false | false | [
"Jane Lydia Adams",
"Robyn L. Ball",
"Jason A. Bubier",
"Elissa J. Chesler",
"Melanie Tory",
"Michelle A. Borkin"
] | [
"HM"
] | [] | [] |
EuroVis | 2,025 | HyperFLINT: Hypernetwork-based Flow Estimation and Temporal Interpolation for Scientific Ensemble Visualization | 10.1111/cgf.70134 | We present HyperFLINT (Hypernetwork-based FLow estimation and temporal INTerpolation), a novel deep learning-based approach for estimating flow fields, temporally interpolating scalar fields, and facilitating parameter space exploration in spatio-temporal scientific ensemble data. This work addresses the critical need to explicitly incorporate ensemble parameters into the learning process, as traditional methods often neglect these, limiting their ability to adapt to diverse simulation settings and provide meaningful insights into the data dynamics. HyperFLINT introduces a hypernetwork to account for simulation parameters, enabling it to generate accurate interpolations and flow fields for each timestep by dynamically adapting to varying conditions, thereby outperforming existing parameter-agnostic approaches. The architecture features modular neural blocks with convolutional and deconvolutional layers, supported by a hypernetwork that generates weights for the main network, allowing the model to better capture intricate simulation dynamics. A series of experiments demonstrates HyperFLINT's significantly improved performance in flow field estimation and temporal interpolation, as well as its potential in enabling parameter space exploration, offering valuable insights into complex scientific ensembles. | false | false | [
"Hamid Gadirov",
"Qi Wu 0015",
"David Bauer",
"Kwan-Liu Ma",
"Jos B. T. M. Roerdink",
"Steffen Frey"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2412.04095v2",
"icon": "paper"
}
] |
EuroVis | 2,025 | In Situ Workload Estimation for Block Assignment and Duplication in Parallelization-Over-Data Particle Advection | 10.1111/cgf.70108 | Particle advection is a foundational algorithm for analyzing a flow field. The commonly used Parallelization-Over-Data (POD) strategy for particle advection can become slow and inefficient when there are unbalanced workloads, which are particularly prevalent in in situ workflows. In this work, we present an in situ workflow containing workload estimation for block assignment and duplication in a parallelization-over-data algorithm. With tightly coupled workload estimation and load-balanced block assignment strategy, our workflow offers a considerable improvement over the traditional round-robin block assignment strategy. Our experiments demonstrate that particle advection is up to 3X faster and associated workflow saves approximately 30% of execution time after adopting strategies presented in this work. | false | false | [
"Zhe Wang 0059",
"Kenneth Moreland",
"Matthew Larsen",
"James Kress",
"Hank Childs",
"Guan Li",
"Guihua Shan",
"David Pugmire"
] | [] | [] | [] |
EuroVis | 2,025 | Instructional Comics for Self-Paced Learning of Data Visualization Tools and Concepts | 10.1111/cgf.70130 | In this paper, we introduce instructional comics to explain concepts and routines in data visualization tools. As tools for visual data exploration proliferate, there is a growing need for tailored training and onboarding demonstrating interfaces, concepts, and interactions. Building on recent research in visualization education, we detail our iterative process of designing instructional comics for four different types of instructional content. Through a mixed-method eye-tracking study involving 20 participants, we analyze how people engage with these comics when using a new visualization tool, and validate our design choices. We interpret observed behaviors as unique affordances of instructional comics, supporting their use during tasks and complementing traditional instructional methods like video tutorials and workshops, and formulate six guidelines to inform the design of future instructional comics for visualization. | false | false | [
"Magdalena Boucher",
"Mashael AlKadi",
"Benjamin Bach",
"Wolfgang Aigner"
] | [] | [] | [] |
EuroVis | 2,025 | IntelliCircos: A Data-driven and AI-powered Authoring Tool for Circos Plots | 10.1111/cgf.70118 | Genomics data is essential in biological and medical domains, and bioinformatics analysts often manually create circos plots to analyze the data and extract valuable insights. However, creating circos plots is complex, as it requires careful design for multiple track attributes and positional relationships between them. Typically, analysts often seek inspiration from existing circos plots, and they have to iteratively adjust and refine the plot to achieve a satisfactory final design, making the process both tedious and time-intensive. To address these challenges, we propose IntelliCircos, an AI-powered interactive authoring tool that streamlines the process from initial visual design to the final implementation of circos plots. Specifically, we build a new dataset containing 4396 circos plots with corresponding annotations and configurations, which are extracted and labeled from published papers. With the dataset, we further identify track combination patterns, and utilize Large Language Model (LLM) to provide domain-specific design recommendations and configuration references to navigate the design of circos plots. We conduct a user study with 8 bioinformatics analysts to evaluate IntelliCircos, and the results demonstrate its usability and effectiveness in authoring circos plots. | false | false | [
"Mingyang Gu",
"Jiamin Zhu",
"Qipeng Wang",
"Fengjie Wang",
"Xiaolin Wen",
"Yong Wang 0021",
"Min Zhu 0005"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2503.24021v1",
"icon": "paper"
}
] |
EuroVis | 2,025 | Interactive Discovery and Exploration of Visual Bias in Generative Text-to-Image Models | 10.1111/cgf.70135 | Bias in generative Text-to-Image (T2I) models is a known issue, yet systematically analyzing such models' outputs to uncover it remains challenging. We introduce the Visual Bias Explorer (ViBEx) to interactively explore the output space of T2I models to support the discovery of visual bias. ViBEx introduces a novel flexible prompting tree interface in combination with zero-shot bias probing using CLIP for quick and approximate bias exploration. It additionally supports in-depth confirmatory bias analysis through visual inspection of forward, intersectional, and inverse bias queries. ViBEx is model-agnostic and publicly available. In four case study interviews, experts in AI and ethics were able to discover visual biases that have so far not been described in literature. | false | false | [
"Johannes Eschner",
"Roberto Labadie Tamayo",
"Matthias Zeppelzauer",
"Manuela Waldner"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2504.19703v1",
"icon": "paper"
}
] |
EuroVis | 2,025 | InterChat: Enhancing Generative Visual Analytics using Multimodal Interactions | 10.1111/cgf.70112 | The rise of Large Language Models (LLMs) and generative visual analytics systems has transformed data-driven insights, yet significant challenges persist in accurately interpreting users' analytical and interaction intents. While language inputs offer flexibility, they often lack precision, making the expression of complex intents inefficient, error-prone, and time-intensive. To address these limitations, we investigate the design space of multimodal interactions for generative visual analytics through a literature review and pilot brainstorming sessions. Building on these insights, we introduce a highly extensible workflow that integrates multiple LLM agents for intent inference and visualization generation. We develop InterChat, a generative visual analytics system that combines direct manipulation of visual elements with natural language inputs. This integration enables precise intent communication and supports progressive, visually driven exploratory data analyses. By employing effective prompt engineering and contextual interaction linking, alongside intuitive visualization and interaction designs, InterChat bridges the gap between user interactions and LLM-driven visualizations, enhancing both interpretability and usability. Extensive evaluations, including two usage scenarios, a user study, and expert feedback, demonstrate the effectiveness of InterChat. Results show significant improvements in the accuracy and efficiency of handling complex visual analytics tasks, highlighting the potential of multimodal interactions to redefine user engagement and analytical depth in generative visual analytics. | false | false | [
"Juntong Chen",
"Jiang Wu 0012",
"Jiajing Guo",
"Vikram Mohanty",
"Xueming Li",
"Jorge Piazentin Ono",
"Wenbin He",
"Liu Ren",
"Dongyu Liu"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2503.04110v2",
"icon": "paper"
}
] |
EuroVis | 2,025 | Lactea: Web-Based Spectrum-Preserving Multi-Resolution Visualization of the GAIA Star Catalog | 10.1111/cgf.70117 | The explosion of data in astronomy has resulted in an era of unprecedented opportunities for discovery. The GAIA mission's catalog, containing a large number of light sources (mostly stars) with several parameters such as sky position and proper motion, is playing a significant role in advancing astronomy research and has been crucial in various scientific breakthroughs over the past decade. In its current release, more than 200 million stars contain a calibrated continuous spectrum, which is essential for characterizing astronomical information such as effective temperature and surface gravity, and enabling complex tasks like interstellar extinction detection and narrow-band filtering. Even though numerous studies have been conducted to visualize and analyze the data in the SciVis and AstroVis communities, no work has attempted to leverage spectral information for visualization in real-time. Interactive exploration of such complex, massive data presents several challenges for visualization. This paper introduces a novel multi-resolution, spectrum-preserving data structure and a progressive, real-time visualization algorithm to handle the sheer volume of the data efficiently, enabling interactive visualization and exploration of the whole catalog's spectra. We show the efficiency of our method with our open-source, interactive, web-based tool for exploring the GAIA catalog, and discuss astronomically relevant use cases of our system. | false | false | [
"Reem Alghamdi",
"Markus Hadwiger",
"Guido Reina",
"Alberto Jaspe-Villanueva"
] | [] | [] | [] |
EuroVis | 2,025 | LayerFlow: Layer-wise Exploration of LLM Embeddings using Uncertainty-aware Interlinked Projections | 10.1111/cgf.70123 | Large language models (LLMs) represent words through contextual word embeddings encoding different language properties like semantics and syntax. Understanding these properties is crucial, especially for researchers investigating language model capabilities, employing embeddings for tasks related to text similarity, or evaluating the reasons behind token importance as measured through attribution methods. Applications for embedding exploration frequently involve dimensionality reduction techniques, which reduce high-dimensional vectors to two dimensions used as coordinates in a scatterplot. This data transformation step introduces uncertainty that can be propagated to the visual representation and influence users' interpretation of the data. To communicate such uncertainties, we present LayerFlow – a visual analytics workspace that displays embeddings in an interlinked projection design and communicates the transformation, representation, and interpretation uncertainty. In particular, to hint at potential data distortions and uncertainties, the workspace includes several visual components, such as convex hulls showing 2D and HD clusters, data point pairwise distances, cluster summaries, and projection quality metrics. We show the usability of the presented workspace through replication and expert case studies that highlight the need to communicate uncertainty through multiple visual components and different data perspectives. | false | false | [
"Rita Sevastjanova",
"Robin Gerling",
"Thilo Spinner",
"Mennatallah El-Assady"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2504.10504v1",
"icon": "paper"
}
] |
EuroVis | 2,025 | Mapping Mental Models of Uncertainty to Parallel Coordinates by Probabilistic Brushing | 10.1111/cgf.70103 | Through training and gathered experience, domain experts attain a mental model of the uncertainties inherent in the visual analytics processes for their respective domain. For an accurate data analysis and trustworthiness of the analysis results, it is essential to include this knowledge and consider this model of uncertainty during the analytical process. For multi-dimensional data analysis, Parallel Coordinates are a widely used approach due to their linear scalability with the number of dimensions and bijective (i.e., loss-less) data transformation. However, selections in Parallel Coordinates are typically achieved by a binary brushing operation on the axes, which does not allow the users to map their mental model of uncertainties to their selection. We, therefore, propose Probabilistic Parallel Coordinates as a natural extension of the classical Parallel Coordinates approach that integrates probabilistic brushing on the axes. It supports the interactive modeling of a probability distribution for each parallel coordinate. The selections on multiple axes are combined accordingly. An efficient rendering on a compute shader facilitates interactive frame rates. We evaluated our open-source tool with practitioners and compared it to classical Parallel Coordinates on multiple regression and uncertain selection tasks in user studies. | false | false | [
"Gabriel Borrelli",
"Till Ittermann",
"Lars Linsen"
] | [] | [] | [] |
EuroVis | 2,025 | MatplotAlt: A Python Library for Adding Alt Text to Matplotlib Figures in Computational Notebooks | 10.1111/cgf.70119 | We present MatplotAlt, an open-source Python package for easily adding alternative text to Matplotlib figures. MatplotAlt equips Jupyter notebook authors to automatically generate and surface chart descriptions with a single line of code or command, and supports a range of options that allow users to customize the generation and display of captions based on their preferences and accessibility needs. Our evaluation indicates that MatplotAlt's heuristic and LLM-based methods to generate alt text can create accurate long-form descriptions of both simple univariate and complex Matplotlib figures. We find that state-of-the-art LLMs still struggle with factual errors when describing charts, and improve the accuracy of our descriptions by prompting GPT4-turbo with heuristic-based alt text or data tables parsed from the Matplotlib figure. | false | false | [
"Kai Nylund",
"Jennifer Mankoff",
"Venkatesh Potluri"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2503.20089v1",
"icon": "paper"
}
] |
EuroVis | 2,025 | Modeling and Measuring the Chart Communication Recall Process | 10.1111/cgf.70099 | Understanding memory in the context of data visualizations is paramount for effective design. While immediate clarity in a visualization is crucial, retention of its information determines its long-term impact. While extensive research has underscored the elements enhancing visualization memorability, a limited body of work has delved into modeling the recall process. This study investigates the temporal dynamics of visualization recall, focusing on factors influencing recollection, shifts in recall veracity, and the role of participant demographics. Using data from an empirical study (n = 104), we propose a novel approach combining temporal clustering and handcrafted features to model recall over time. A long short-term memory (LSTM) model with attention mechanisms predicts recall patterns, revealing alignment with informativeness scores and participant characteristics. Our findings show that perceived informativeness dictates recall focus, with more informative visualizations eliciting narrative-driven insights and less informative ones prompting aesthetic-driven responses. Recall accuracy diminishes over time, particularly for unfamiliar visualizations, with age and education significantly shaping recall emphases. These insights advance our understanding of visualization recall, offering practical guidance for designing visualizations that enhance retention and comprehension. All data and materials are available at: https://osf.io/ghe2j/. | false | false | [
"Anjana Arunkumar",
"Lace M. K. Padilla",
"Chris Bryan"
] | [] | [] | [] |
EuroVis | 2,025 | Multipla: Multiscale Pangenomic Locus Analysis | 10.1111/cgf.70147 | Comparing gene organization across genomic sequences reveals insights into evolutionary and functional diversity among different organisms and varieties. Performing this task across many sequences, such as from a pangenome, is challenging because of the scale, the density of information, and the inherent variation. Often, analyses are centered on a genomic region of interest—a locus that might be associated with a trait or contain genes within the same family or biological pathway. Within these regions, researchers examine the conservation of gene order and orientation across organisms and assess sequence similarity, along with other gene content features such as gene size, to find biological variations or potential errors in the data. Automated methods in comparative genomics struggle to identify meaningful patterns due to varying and often unknown features of interest, leaving manual, time-intensive, and scalability-challenged visualization as the primary alternative. To address these challenges, we present a multiscale design for studying gene organization within pangenomes, developed in close collaboration with domain experts. Our tool, Multipla, enables users to explore organization at multiple levels of detail in a decluttered manner through layout abstractions, semantic zooming, and layouts with flexible distance definitions and feature selections, combining the advantages of manual and automated methods used in practice. We evaluate the design of Multipla through two pangenomic use cases and conclude with lessons learned from designing multiscale views for pangenomic locus analysis. | false | false | [
"Astrid van den Brandt",
"Emilia Ståhlbom",
"Fredericus Johannes Maria van Workum",
"Huub van de Wetering",
"Claes Lundström",
"Sandra Smit",
"Anna Vilanova"
] | [] | [] | [] |
EuroVis | 2,025 | Necessary but not Sufficient: Limitations of Projection Quality Metrics | 10.1111/cgf.70101 | High-dimensional data analysis often uses dimensionality reduction (DR, also called projection) to map data patterns to human-digestible visual patterns in a 2D scatterplot. Yet, DR methods may fail to show true data patterns and/or create visual patterns that do not represent any data patterns. Projection Quality Metrics (PQMs) are used as objective measures to gauge the above process: the higher a projection's scores in PQMs, the more it is deemed faithful to the data it represents. We show that, while PQMs can be used as exclusion criteria — low values usually mean poor projections — the converse does not always hold. For this, we develop a technique to automatically generate projections that score similar or even higher PQM values than projections created by well-known techniques, but show different, often confusing, visual patterns. Our results show that accepted PQMs cannot be used as an exclusive way to tell whether a projection yields accurate and interpretable visual patterns — in this sense, PQMs play a role akin to that of summary statistics in exploratory data analysis. We also show that not all studied metrics can be fooled equally well, suggesting a ranking of metrics in their ability to reliably capture quality. | false | false | [
"Alister Machado",
"Michael Behrisch 0001",
"Alexandru C. Telea"
] | [] | [] | [] |
EuroVis | 2,025 | Nodes, Edges, and Artistic Wedges: A Survey on Network Visualization in Art History | 10.1111/cgf.70154 | Art history traditionally relies on qualitative methods. However, the increasing availability of digitized archives has opened new possibilities for research by integrating visual analytics. This survey presents a comprehensive review of the intersection between art history and visual analytics, focusing on network visualization and how it supports researchers in analyzing and understanding complex art historical relationships through nodes (e.g., artists, artworks, institutions) and edges (the relationships between them). We explore how these approaches enable dynamic analysis, offering novel perspectives on artistic influence, stylistic evolution, and social interactions within the art world. Through this, we also examine wedges, a metaphor for the friction often present in art history between individuals and institutions. These tensions, which have historically played a pivotal role in shaping artistic movements, are now better understood through the lens of network visualization, revealing how conflicts and power dynamics influenced the development of art. Through a hierarchical categorization of the literature, we outline saturated problems and research areas as well as ongoing challenges in art historical research. Furthermore, we highlight the potential of visual analytics to bridge the gap between traditional qualitative research and modern computational analysis, offering interactive exploration, temporal analysis, and complex network visualization. We provide a structured foundation for future research in art history, emphasizing the value of network visualization in enriching the understanding of art history. | false | false | [
"Michaela Tuscher",
"Velitchko Filipov",
"Teresa Kamencek",
"Raphael Rosenberg",
"Silvia Miksch"
] | [] | [] | [] |
EuroVis | 2,025 | NODKANT: Exploring Constructive Network Physicalization | 10.1111/cgf.70140 | Physicalizations, which combine perceptual and sensorimotor interactions, offer an immersive way to comprehend complex data visualizations by stimulating active construction and manipulation. This study investigates the impact of personal construction on the comprehension of physicalized networks. We propose a physicalization toolkit—NODKANT—for constructing modular node-link diagrams consisting of a magnetic surface, 3D printable and stackable node labels, and edges of adjustable length. In a mixed-methods between-subject lab study with 27 participants, three groups of people used NODKANT to complete a series of low-level analysis tasks in the context of an animal contact network. The first group was tasked with freely constructing their network using a sorted edge list, the second group received step-by-step instructions to create a predefined layout, and the third group received a pre-constructed representation. While free construction proved on average more time-consuming, we show that users extract more insights from the data during construction and interact with their representation more frequently, compared to those presented with step-by-step instructions. Interestingly, the increased time demand cannot be measured in users' subjective task load. Finally, our findings indicate that participants who constructed their own representations were able to recall more detailed insights after a period of 10–14 days compared to those who were given a pre-constructed network physicalization. All materials, data, code for generating instructions, and 3D printable meshes are available on https://osf.io/tk3g5/. | false | false | [
"Daniel Pahr",
"Sara Di Bartolomeo",
"Henry Ehlers",
"Velitchko Andreev Filipov",
"Christina Stoiber",
"Wolfgang Aigner",
"Hsiang-Yun Wu",
"Renata G. Raidou"
] | [
"BP"
] | [] | [] |
EuroVis | 2,025 | Optimizing Staircase Motifs in Biofabric Network Layouts | 10.1111/cgf.70139 | Biofabric is a novel method for network visualization, with promising potential to highlight specific network features. Recent studies emphasize the importance of staircase motifs — equivalent to fans or stars in node-link diagrams — within Biofabric. However, to effectively showcase these motifs, we need to formulate specialized layout algorithms. This paper introduces a method to compute optimal layouts for Biofabric, focusing on maximizing staircase formation. We present an Integer Linear Programming (ILP) model for this task and evaluate its performance in terms of scalability and output quality against a leading heuristic method, Degreecending. Our results demonstrate that the ILP approach identifies significantly more, and often longer, staircases compared to Degreecending, albeit with the trade-off of higher computation times. Our supplemental material, including a full copy of the paper, code, and results, is available on osf.io. | false | false | [
"Sara Di Bartolomeo",
"Markus Wallinger",
"Martin Nöllenburg"
] | [
"HM"
] | [] | [] |
EuroVis | 2,025 | Player-Centric Shot Maps in Table Tennis | 10.1111/cgf.70109 | Shot maps are popular in many sports as they typically plot events and player positions in the way they are collected, using a pitch or a table as an absolute coordinate system. We introduce a variation of a table tennis shot map that shifts the point of view from the table to the player. This results in a new reference system to plot incoming balls relative to the player's position rather than on the table. This approach aligns with how table tennis tactical analysis is conducted, focusing on identifying empty spaces and weak spots around the players. We describe the motivation behind this work, built through close collaboration with two table tennis experts, and demonstrate how this approach aligns with the way they analyze games to reveal key tactical aspects. We also present the design rationale and the computer vision pipeline used to accurately collect data from broadcast videos. Our findings show that the technique enables capturing insights that were not visible with the absolute coordinate system, particularly in understanding regions that are reachable and those close to the pivot area of the player. | false | false | [
"Aymeric Erades",
"Romain Vuillemot"
] | [] | [] | [] |
EuroVis | 2,025 | PrismBreak: Exploration of Multi-Dimensional Mixture Models | 10.1111/cgf.70121 | In data science, visual data exploration becomes increasingly more challenging due to the continued rapid increase of data dimensionality and data sizes. To manage complexity, two orthogonal approaches are commonly used in practice: First, data is frequently clustered in high-dimensional space by fitting mixture models composed of normal distributions or Student t-distributions. Second, dimensionality reduction is employed to embed high-dimensional point clouds in a two- or three-dimensional space. Those algorithms determine the spatial arrangement in low-dimensional space without further user interaction. This leaves little room for a guided exploration and data analysis. In this paper, we propose a novel visualization system for the effective exploration and construction of potential subspaces onto which mixture models can be projected. The subspaces are spanned linearly via basis vectors, for which a vast number of basis vector combinations is theoretically imaginable. Our system guides the user step-by-step through the selection process by letting users choose one basis vector at a time. To guide the process, multiple choices are pre-visualized at once on a multi-faceted prism. In addition to the qualitative visualization of the distributions, multiple quantitative metrics are calculated by which subspaces can be compared and reordered, including variance, sparsity, and visibility. Further, a bookmarking tool lets users record and compare different basis vector combinations. The usability of the system is evaluated by data scientists and is tested on several high-dimensional data sets. | false | false | [
"Brian Zahoransky",
"Tobias Günther",
"Kai Lawonn"
] | [] | [] | [] |
EuroVis | 2,025 | Random Access Segmentation Volume Compression for Interactive Volume Rendering | 10.1111/cgf.70116 | Segmentation volumes are voxel data sets often used in machine learning, connectomics, and natural sciences. Their large sizes make compression indispensable for storage and processing, including GPU video memory constrained real-time visualization. Fast Compressed Segmentation Volumes (CSGV) [PD24] provide strong brick-wise compression and random access at the brick level. Voxels within a brick, however, have to be decoded serially and thus rendering requires caching of visible full bricks, consuming extra memory. Without caching, accessing voxels can have a worst-case decoding overhead of up to a full brick (typically over 32,000 voxels). We present CSGV-R, which provides true multi-resolution random access on a per-voxel level. We leverage Huffman-shaped Wavelet Trees for random accesses to variable bit-length encoding and their rank operation to query label palette offsets in bricks. Our real-time segmentation volume visualization removes decoding artifacts from CSGV and renders CSGV-R volumes without caching bricks at faster render times. CSGV-R has slightly lower compression rates than CSGV, but outperforms Neuroglancer, the state-of-the-art compression technique with true random access, with 2× to 4× smaller data sets at rates between 0.648% and 4.411% of the original volume sizes. | false | false | [
"Max Piochowiak",
"Florian Kurpicz",
"Carsten Dachsbacher"
] | [] | [] | [] |
EuroVis | 2,025 | Sca2Gri: Scalable Gridified Scatterplots | 10.1111/cgf.70141 | Scatterplots are widely used in exploratory data analysis. Representing data points as glyphs is often crucial for in-depth investigation, but this can lead to significant overlap and visual clutter. Recent post-processing techniques address this issue, but their computational and/or visual scalability is generally limited to thousands of points and unable to effectively deal with large datasets in the order of millions. This paper introduces Sca2Gri (Scalable Gridified Scatterplots), a grid-based post-processing method designed for analysis scenarios where the number of data points substantially exceeds the number of glyphs that can be reasonably displayed. Sca2Gri enables interactive grid generation for large datasets, offering flexible user control of glyph size, maximum displacement for point to cell mapping, and scatterplot focus area. While Sca2Gri's computational complexity scales cubically with the number of cells (which is practically bound to thousands for legible glyph sizes), its complexity is linear with respect to the number of data points, making it highly scalable beyond millions of points. | false | false | [
"Steffen Frey"
] | [] | [] | [] |
EuroVis | 2,025 | SUPQA: LLM-based Geo-Visualization for Subjective Urban Performance Question-Answering | 10.1111/cgf.70106 | As urbanization accelerates, urban performance has become a growing concern, impacting every aspect of residents' lives. However, urban performance exploration is a tedious and highly subjective process for users. Users need to manually collect and integrate various information, or spend a large amount of time and effort due to the steep learning curves of existing specialized tools. To address these challenges, we introduce SUPQA, a novel approach for urban performance exploration using natural language as input and interactive geographic visualizations as output. Our approach leverages Large Language Models (LLMs) to effectively interpret user intents and quantify various urban performance measures. We integrate progressive navigation and multi-geographic scale analysis in our visualization system, explaining the reasoning process and streamlining users' decision-making workflow. Two usage scenarios and evaluations demonstrate the effectiveness of SUPQA in helping residents and planners acquire desired information more efficiently and enhancing the quality of decision-making. | false | false | [
"Haiwen Huang",
"Juntong Chen",
"Changbo Wang",
"Chenhui Li 0001"
] | [] | [] | [] |
EuroVis | 2,025 | SurpriseExplora: Tuning and Contextualizing Model-derived Maps with Interactive Visualizations | 10.1111/cgf.70114 | People craft choropleth maps to monitor, analyze, and understand spatially distributed data. Recent visualization work has addressed several known biases in choropleth maps by developing new model- and metrics-based approaches (e.g. Bayesian surprise). However, effective use of these techniques requires extensive parameter setting and tuning, making them difficult or impossible for users without substantial technical skills. In this paper we describe SurpriseExplora, which addresses this gap through direct manipulation techniques for re-targeting a Bayesian surprise model's scope and parameters. We present three use cases to illustrate the capabilities of SurpriseExplora, showing for example how models calculated at a national level can obscure key findings that can be revealed through interaction sequences common to map visualizations (e.g. zooming), and how augmenting funnel-plot visualizations with interactions that adjust underlying models can account for outliers or skews in spatial datasets. We evaluate SurpriseExplora through an expert review with visualization researchers and practitioners. We conclude by discussing how SurpriseExplora uncovers new opportunities for sense-making within the broader ecosystem of map visualizations, as well as potential empirical studies with non-expert populations. | false | false | [
"Akim Ndlovu",
"Hilson Shrestha",
"Evan Peck",
"Lane Harrison"
] | [] | [] | [] |
EuroVis | 2,025 | Tasks and Visual Abstractions for 3D Chromatin Representation | 10.1111/cgf.70142 | The spatial organization of chromatin fiber directly influences its function. However, the high visual complexity of chromatin spatial models makes the understanding of the structure extremely challenging. Therefore, genomic researchers still primarily rely on indirect analysis of chromatin through 2D views, missing the advantages that 3D visualization can offer. In this paper, we first analyze the task space of genomic research and identify biological domain tasks that can benefit from dedicated spatial representations. We organize these tasks into four categories: tasks related to structural features, additional meta-data, structural relationships, and comparative tasks. We analyze these tasks in terms of their complexity, co-dependence, and potential benefits of 3D-based solutions. Secondly, we present four newly designed visual representations of chromatin 3D structure, focused on enhancing the understanding of structural features and solving relationships tasks. These include the hierarchical nature of spatial chromatin sub-units, their visual abstractions, spatial interactions, and a cumulative representation of chromatin dynamic behavior. We also include feedback from four domain researchers and discuss future steps necessary to make spatial representations valid and valuable part of genomic research. | false | false | [
"Adam Rychlý",
"Jan Byska",
"Barbora Kozlíková",
"Katarína Furmanová"
] | [] | [] | [] |
EuroVis | 2,025 | The Geometry of Color in the Light of a Non-Riemannian Space | 10.1111/cgf.70136 | We formalize Schrödinger's definitions of hue, saturation, and lightness, building on the foundational idea from Helmholtz that these perceptual attributes can be derived solely from the perceptual metric. We identify three shortcomings in Schrödinger's approach and propose solutions to them. First, to encompass the Bezold-Brücke effect, we replace the straight-line definition of stimulus quality between a color and black with the geodesic path in perceptual color space. Second, to model diminishing returns in color perception, we employ a non-Riemannian perceptual metric, which introduces a potential ambiguity in defining lightness, but our experiments show that this ambiguity is inconsequential. Third, we provide a geometric definition of the neutral axis as the closest color to black within each equal-lightness surface—a definition feasible only in a non-Riemannian framework. Collectively, our solutions provide the first comprehensive realization of Helmholtz's vision: formal geometric definitions of hue, saturation, and lightness derived entirely from the metric of perceptual similarity, without reliance on external constructs. | false | false | [
"Roxana Bujack",
"Emily N. Stark",
"Terece L. Turton",
"Jonah M. Miller",
"David H. Rogers 0001"
] | [] | [] | [] |
EuroVis | 2,025 | Towards a Better Evaluation of 3D CVML Algorithms: Immersive Debugging of a Localization Model | 10.1111/cgf.70111 | As advancements in robotics, autonomous driving, and spatial computing continue to unfold, a growing number of Computer Vision and Machine Learning (CVML) algorithms are incorporating three-dimensional data into their frameworks. Debugging these 3D CVML models often requires going beyond traditional performance evaluation methods, necessitating a deeper understanding of an algorithm's behavior within its spatio-temporal context. However, the lack of appropriate visualization tools presents a significant obstacle to effectively exploring 3D data and spatial features in relation to key performance indicators (KPIs). To address this challenge, we explore the application of Immersive Analytics (IA) methodologies to enhance the debugging process of 3D CVML models. Through in-depth interviews with eight CVML engineers, we identify common tasks and challenges faced during the development of spatial algorithms, and establish a set of design principles for creating tools tailored to spatial model evaluation. Building on these insights, we propose a novel immersive analytics system for debugging an indoor localization algorithm. The system is built using web technologies and integrates WebXR to enable fluid transitions across the reality-virtuality continuum. We conduct a qualitative study with six CVML engineers using our system on Apple Vision Pro, observing their analytical workflow as they debug an indoor localization sequence. We discuss the advantages of employing immersive analytics in the model evaluation workflow, emphasizing the role of seamlessly integrating 2D and 3D visualizations across varying levels of immersion to facilitate more effective model assessment. Finally, we reflect on the implementation trade-offs and discuss the generalizability of our findings for future efforts in immersive 3D CVML model debugging. | false | false | [
"Tica Lin",
"Jun Yuan",
"Kevin Miao",
"Tigran Katolikyan",
"Isaac Walker",
"Marco Cavallo"
] | [] | [] | [] |
EuroVis | 2,025 | Uncertainty-Aware Visualization of Biomolecular Structures | 10.1111/cgf.70155 | null | false | false | [
"Anna Sterzik",
"Christina Gillmann",
"Michael Krone",
"Kai Lawonn"
] | [] | [] | [] |
EuroVis | 2,025 | Viewpoint Optimization for 3D Graph Drawings | 10.1111/cgf.70127 | Graph drawings using a node-link metaphor and straight edges are widely used to represent and understand relational data. While such drawings are typically created in 2D, 3D representations have also gained popularity. When exploring 3D drawings, finding viewpoints that help understanding the graph's structure is crucial. Finding good viewpoints also allows using the 3D drawings to generate good 2D graph drawings. In this work, we tackle the problem of automatically finding high-quality viewpoints for 3D graph drawings. We propose and evaluate strategies based on sampling, gradient descent, and evolutionary-inspired meta-heuristics. Our results show that most strategies quickly converge to high-quality viewpoints within a few dozen function evaluations, with meta-heuristic approaches showing robust performance regardless of the quality metric. | false | false | [
"Simon van Wageningen",
"Tamara Mchedlidze",
"Alexandru C. Telea"
] | [] | [] | [] |
EuroVis | 2,025 | VISLIX: An XAI Framework for Validating Vision Models with Slice Discovery and Analysis | 10.1111/cgf.70125 | Real-world machine learning models require rigorous evaluation before deployment, especially in safety-critical domains like autonomous driving and surveillance. The evaluation of machine learning models often focuses on data slices, which are subsets of the data that share a set of characteristics. Data slice finding automatically identifies conditions or data subgroups where models underperform, aiding developers in mitigating performance issues. Despite its popularity and effectiveness, data slicing for vision model validation faces several challenges. First, data slicing often needs additional image metadata or visual concepts, and falls short in certain computer vision tasks, such as object detection. Second, understanding data slices is a labor-intensive and mentally demanding process that heavily relies on the expert's domain knowledge. Third, data slicing lacks a human-in-the-loop solution that allows experts to form hypotheses and test them interactively. To overcome these limitations and better support the machine learning operations lifecycle, we introduce VISLIX, a novel visual analytics framework that employs state-of-the-art foundation models to help domain experts analyze slices in computer vision models. Our approach does not require image metadata or visual concepts, automatically generates natural language insights, and allows users to test data slice hypotheses interactively. We evaluate VISLIX with an expert study and three use cases that demonstrate the effectiveness of our tool in providing comprehensive insights for validating object detection models. | false | false | [
"Xinyuan Yan",
"Xiwei Xuan",
"Jorge Piazentin Ono",
"Jiajing Guo",
"Vikram Mohanty",
"Arvind Kumar Shekar",
"Liang Gou",
"Bei Wang 0001",
"Liu Ren"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2505.03132v1",
"icon": "paper"
}
] |
EuroVis | 2,025 | Visually Assessing 1-D Orderings of Contiguous Spatial Polygons | 10.1111/cgf.70100 | One-dimensional orderings of spatial entities have been researched in many contexts, e.g. spatial indexing structures or visualizations for spatiotemporal trend analysis. While plenty of studies have been conducted to evaluate orderings of point-based data, polygonal shapes, despite their different topological properties, have received less attention. Existing measures to quantify errors in projections or orderings suffer from generic neighborhood definitions and over-simplification of distances when applied to polygonal data. In this work, we address these shortcomings by introducing measures that adapt to a varying neighborhood size depending on the number of contiguous neighbors and thus, address the limitations of existing measures for polygonal shapes. To guide experts in determining a suitable ordering, we propose a user-steerable visual analytics prototype capable of locally and globally inspecting ordering errors, investigating the impact of geographic obstacles, and comparing ordering strategies using our measures. We demonstrate the effectiveness of our approach through a use case and conducted an expert study with 8 data scientists as a qualitative evaluation of our approach. Our results show that users are capable of identifying ordering errors, comparing ordering strategies on a global and local scale, as well as assessing the impact of semantically relevant geographic obstacles. | false | false | [
"Julius Rauscher",
"Frederik L. Dennig",
"Udo Schlegel",
"Daniel A. Keim",
"Johannes Fuchs 0001"
] | [] | [] | [] |
EuroVis | 2,025 | VizTA: Enhancing Comprehension of Distributional Visualization with Visual-Lexical Fused Conversational Interface | 10.1111/cgf.70110 | Comprehending visualizations requires readers to interpret visual encoding and the underlying meanings actively. This poses challenges for visualization novices, particularly when interpreting distributional visualizations that depict statistical uncertainty. Advancements in LLM-based conversational interfaces show promise in promoting visualization comprehension. However, they fail to provide contextual explanations at fine-grained granularity, and chart readers are still required to mentally bridge visual information and textual explanations during conversations. Our formative study highlights the expectations for both lexical and visual feedback, as well as the importance of explicitly linking these two modalities throughout the conversation. The findings motivate the design of VizTA, a visualization teaching assistant that leverages the fusion of visual and lexical feedback to help readers better comprehend visualization. VizTA features a semantic-aware conversational agent capable of explaining contextual information within visualizations and employs a visual-lexical fusion design to facilitate chart-centered conversation. A between-subject study with 24 participants demonstrates the effectiveness of VizTA in supporting the understanding and reasoning tasks of distributional visualization across multiple scenarios. | false | false | [
"Liangwei Wang 0001",
"Zhan Wang 0001",
"Shishi Xiao",
"Le Liu 0008",
"Fugee Tsung",
"Wei Zeng 0004"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2504.14507v1",
"icon": "paper"
}
] |
EuroVis | 2,025 | Voronoi Cell Interface-Based Parameter Sensitivity Analysis for Labeled Samples | 10.1111/cgf.70122 | Varying the input parameters of simulations or experiments often leads to different classes of results. Parameter sensitivity analysis in this context includes estimating the sensitivity to the individual parameters, that is, to understand which parameters contribute most to changes in output classifications and for which parameter ranges these occur. We propose a novel visual parameter sensitivity analysis approach based on Voronoi cell interfaces between the sample points in the parameter space to tackle the problem. The Voronoi diagram of the sample points in the parameter space is first calculated. We then extract Voronoi cell interfaces which we use to quantify the sensitivity to parameters, considering the class label information of each sample's corresponding output. Multiple visual encodings are then utilized to represent the cell interface transitions and class label distribution, including stacked graphs for local parameter sensitivity. We evaluate the approach's expressiveness and usefulness with case studies for synthetic and real-world datasets. | false | false | [
"Ruben Bauer",
"Marina Evers",
"Quynh Quang Ngo",
"Guido Reina",
"Steffen Frey",
"Michael Sedlmair"
] | [] | [] | [] |
EuroVis | 2,025 | When Dimensionality Reduction Meets Graph (Drawing) Theory: Introducing a Common Framework, Challenges and Opportunities | 10.1111/cgf.70105 | In the vast landscape of visualization research, Dimensionality Reduction (DR) and graph analysis are two popular subfields, often essential to most visual data analytics setups. DR aims to create representations to support neighborhood and similarity analysis on complex, large datasets. Graph analysis focuses on identifying the salient topological properties and key actors within network data, with specialized research investigating how such features could be presented to users to ease the comprehension of the underlying structure. Although these two disciplines are typically regarded as disjoint subfields, we argue that both fields share strong similarities and synergies that can potentially benefit both. Therefore, this paper discusses and introduces a unifying framework to help bridge the gap between DR and graph (drawing) theory. Our goal is to use the strongly math-grounded graph theory to improve the overall process of creating DR visual representations. We propose how to break the DR process into well-defined stages, discuss how to match some of the DR state-of-the-art techniques to this framework, and present ideas on how graph drawing, topology features, and some popular algorithms and strategies used in graph analysis can be employed to improve DR topology extraction, embedding generation, and result validation. We also discuss the challenges and identify opportunities for implementing and using our framework, opening directions for future visualization research. | false | false | [
"Fernando V. Paulovich",
"Alessio Arleo",
"Stef van den Elzen"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2412.06555v1",
"icon": "paper"
}
] |
CHI | 2,025 | "Housing Diversity Means Diverse Housing": Blending Generative AI into Speculative Design in Rural Co-Housing Communities | 10.1145/3706598.3713906 | In response to various environmental and societal challenges, co-housing has emerged to support social cohesion, grassroots innovation and ecological regeneration. Co-housing communities typically have smaller personal spaces, closer neighbourly relationships, and engage in more mutually supportive sustainable practices. To understand such communities' motivations and visions, we developed a speculative design tool that harnesses Generative Artificial Intelligence (GenAI) to facilitate the envisioning of alternative future scenarios that challenge prevailing values, beliefs, lifestyles, and ways of knowing in contemporary society. Within the context of co-housing communities, we conducted a participatory design study with participants in co-creating their future communities. This paper unpacks implications and also reflects on the co-design approach employing GenAI. Our main findings highlight that GenAI, as a catalyst for imagination, empowers individuals to create visualisations that pose questions through a plural and situated speculative discourse. | false | false | [
"Hongyi Tao",
"Dhaval Vyas"
] | [] | [] | [] |
CHI | 2,025 | "I can run at night!": Using Augmented Reality to Support Nighttime Guided Running for Low-vision Runners | 10.1145/3706598.3714284 | Dark environments challenge low-vision (LV) individuals' ability to engage in running by following a sighted guide—Caller-style guided running—because insufficient illumination prevents them from using their residual vision to follow the guide and stay aware of their environment. We design, develop, and evaluate RunSight, an augmented reality (AR)-based assistive tool to support LV individuals in running at night. RunSight combines a see-through HMD and image processing to enhance one's visual awareness of the surrounding environment (e.g., potential hazards) and visualize the guide's position with AR-based visualization. To demonstrate RunSight's efficacy, we conducted a user study with 8 LV runners. The results showed that all participants could run at least 1km (mean = 3.44 km) using RunSight, while none could engage in Caller-style guided running without it. Our participants could run safely because they effectively synthesized RunSight-provided cues and information gained from runner-guide communication. | false | false | [
"Yuki Abe",
"Keisuke Matsushima",
"Kotaro Hara",
"Daisuke Sakamoto",
"Tetsuo Ono"
] | [] | [] | [] |
CHI | 2,025 | "Piecing Data Connections Together Like a Puzzle": Effects of Increasing Task Complexity on the Effectiveness of Data Storytelling Enhanced Visualisations | 10.1145/3706598.3714270 | The emerging concept of data storytelling (DS) suggests that enhancing visualisations with annotations and narratives can make complex data more insightful than conventional visualisations. Previous works found that DS-enhanced visualisations are more effective than conventional visualisations for simple tasks like identifying key data points or the main message. However, no previous work has explored the extent to which DS enhancements influence task completion across different levels of cognitive complexity. We address this gap by presenting the results of a study where 128 participants completed tasks based on four visualisations (two line charts and two choropleth maps, either with or without DS elements) spanning a range of complexity based on Bloom's taxonomy, which has been applied in data visualisation to categorise tasks hierarchically from lower to higher-order thinking. Results suggest that while DS-enhanced visualisations effectively support lower-order tasks (finding data points and understanding insights), they don't necessarily aid the correct completion of higher-order tasks (application, analysis, evaluation and creation). However, DS enhancements improve how efficiently participants complete complex tasks. | false | false | [
"Mikaela Elizabeth Milesi",
"Paola Mejia-Domenzain",
"Laura Brandl",
"Vanessa Echeverría",
"Yueqiao Jin",
"Dragan Gasevic",
"Yi-Shan Tsai",
"Tanja Käser",
"Roberto Martínez-Maldonado"
] | [] | [] | [] |
CHI | 2,025 | A Novel Lens on Metacognition in Visualization | 10.1145/3706598.3714400 | Metacognition, or the awareness and regulation of one's own cognitive processes, allows individuals to take command of their learning and decision making in various contexts. In tasks that require problem-solving and adaptive learning, individuals with heightened metacognitive awareness tend to outperform others, as they are better equipped to regulate cognition, leading to more effective processes. On the other hand, visualization research facilitates exploration and decision making with data. We posit that metacognitive frameworks that examine how individuals think about their own thinking processes can likewise enhance visualization processes. In this paper, we review metacognition literature from the cognitive and learning sciences to identify opportunities in visualization to improve people's ability to reason with data. We propose the use of a metacognitive framework, serving as a starting point to inspire future research to improve visualization practices and outcomes. | false | false | [
"Mengyu Chen",
"Andrew Yang",
"Seungchan Min",
"Kristy A. Hamilton",
"Emily Wall 0001"
] | [] | [] | [] |
CHI | 2,025 | A Placebo Concert: The Placebo Effect for Visualization of Physiological Audience Data during Experience Recreation in Virtual Reality | 10.1145/3706598.3713594 | A core use case for Virtual Reality applications is recreating real-life scenarios for training or entertainment. Promoting physiological responses for users in VR that match those of real-life spectators can maximize engagement and contribute to more co-presence. Current research focuses on visualizations and measurements of physiological data to ensure experience accuracy. However, placebo effects are known to influence performance and self-perception in HCI studies, creating a need to investigate the effect of visualizing different types of data (real, unmatched, and fake) on user perception during event recreation in VR. We investigate these conditions through a balanced between-groups study (n=44) of uninformed and informed participants. The informed group was provided with the information that the data visualizations represented previously recorded human physiological data. Our findings reveal a placebo effect, where the informed group demonstrated enhanced engagement and co-presence. Additionally, the fake data condition in the informed group evoked a positive emotional response. | false | false | [
"Xiaru Meng",
"Yulan Ju",
"Christopher Changmok Kim",
"Yan He",
"Giulia Barbareschi",
"Kouta Minamizawa",
"Kai Kunze",
"Matthias Hoppe 0003"
] | [] | [] | [] |
CHI | 2,025 | Accessibility for Whom? Perceptions of Mobility Barriers Across Disability Groups and Implications for Designing Personalized Maps | 10.1145/3706598.3713421 | Today's mapping tools fail to address the varied experiences of different mobility device users. This paper presents a large-scale online survey exploring how five mobility groups—users of canes, walkers, mobility scooters, manual wheelchairs, and motorized wheelchairs—perceive sidewalk barriers and differences therein. Using 52 sidewalk barrier images, respondents evaluated their confidence in navigating each scenario. Our findings (N=190) reveal variations in barrier perceptions across groups, while also identifying shared concerns. To further demonstrate the value of this data, we showcase its use in two custom prototypes: a visual analytics tool and a personalized routing tool. Our survey findings and open dataset advance work in accessibility-focused maps, routing algorithms, and urban planning. | false | false | [
"Chu Li",
"Rock Yuren Pang",
"Delphine Labbé",
"Yochai Eisenberg",
"Maryam Hosseini",
"Jon E. Froehlich"
] | [] | [] | [] |
CHI | 2,025 | Ad-Blocked Reality: Evaluating User Perceptions of Content Blocking Concepts Using Extended Reality | 10.1145/3706598.3713230 | Inspired by the concepts of diminishing reality and ad-blocking in browsers, this study investigates the perceived benefits and concerns of blocking physical, real-world content, particularly ads, through Extended Reality (XR). To understand how users perceive this concept, we first conducted a user study (N = 18) with an ad-blocking prototype to gather initial insights. The results revealed a mixed willingness to adopt XR blockers, with participants appreciating aspects such as customizability, convenience, and privacy. Expected benefits included enhanced focus and reduced stress, while concerns centered on missing important information and increased feelings of isolation. Hence, we investigated the user acceptance of different ad-blocking visualizations through a follow-up online survey (N = 120), comparing six concepts based on related work. The results indicated that the XR ad-blocker visualizations play a significant role in how and for what kinds of advertisements such a concept might be used, paving the path for future feedback-driven prototyping. | false | false | [
"Christopher Katins",
"Jannis Strecker",
"Jan Hinrichs",
"Pascal Knierim",
"Bastian Pfleging",
"Thomas Kosch"
] | [] | [] | [] |
CHI | 2,025 | AIFiligree: A Generative AI Framework for Designing Exquisite Filigree Artworks | 10.1145/3706598.3713281 | Filigree art, which represents typical intricate metalwork, has been captivating audiences worldwide with its delicate lace-like patterns and interwoven metal wires' refined aesthetics. Particularly, Chinese Intangible Cultural Heritage filigree craftsmanship has a unique aesthetic value in fine patterns and complex three-dimensional shapes. However, designing and creating filigree artworks is a labor-intensive and technically complex task and often requires extensive training and a deep understanding of the craft, which limits its design aesthetic and cultural continuity. Aiming to overcome these challenges, this study proposes an artificial intelligence (AI)-aided method that uses AI-generated content (AIGC) technology to accelerate the visualization process of this time-consuming and intricate craft by investigating the role of AI in craft design. First, a comprehensive study of filigree art culture is conducted to identify more than ten historic filigree techniques to obtain AI opportunities. Then, an AI-powered framework called AIFiligree is developed by optimizing culture-based labels and training parameters, enabling the generation of highly authentic fine filigree structures. Further, user workflows are introduced to support diverse design scenarios. Through user studies involving 22 filigree experts and 16 designers, we finally gained insights into AI's opportunities and challenges in cultural learning, expression, and design. | false | false | [
"Ye Tao 0001",
"Xiaohui Fu",
"Jiaying Wu",
"Ze Bian",
"Aiyu Zhu",
"Qi Bao",
"Weiyue Zheng",
"Yubo Wang 0009",
"Bin Zhu 0013",
"Cheng Yang 0014",
"Chuyi Zhou"
] | [] | [] | [] |
CHI | 2,025 | An Investigation of Interaction and Information Needs for Protocol Reverse Engineering Automation | 10.1145/3706598.3713630 | Protocol reverse engineering (ProtocolREing) consists of taking streams of network data and inferring the communication protocol. ProtocolREing is a critical task in malware and system security analysis. Several ProtocolREing automation tools have been developed; however, in practice, they are not used because they offer limited interaction. Instead, reverse engineers (ProtocolREs) perform this task manually or use less complex visualization tools. To give ProtocolREs the power of more complex automation, we must first understand ProtocolREs' processes and their information and interaction needs to design better interfaces. We interviewed 16 ProtocolREs, presenting a paper prototype ProtocolREing automation interface, and asked them to discuss their approach to ProtocolREing while using the tool and to suggest missing information and interactions. We designed our prototype based on existing ProtocolREing tool features and the usability guidelines of prior reverse engineering research. We found that ProtocolREs follow a flexible, hypothesis-driven process, and we identified multiple information and interaction needs when validating the automation's inferences. We provide suggestions for future interaction design. | false | false | [
"Samantha Katcher",
"James Mattei",
"Jared Chandler",
"Daniel Votipka"
] | [] | [] | [] |
CHI | 2,025 | ARticulate: Interactive Visual Guidance for Demonstrated Rotational Degrees of Freedom in Mobile AR | 10.1145/3706598.3713179 | Mobile Augmented Reality (AR) offers a powerful way to provide spatially-aware guidance for real-world applications. In many cases, these applications involve the configuration of a camera or articulated subject, asking users to navigate several spatial degrees of freedom (DOF) at once. Most guidance for such tasks relies on decomposing available DOF into subspaces that can be more easily mapped to simple 1D or 2D visualizations. Unfortunately, different factorizations of the same motion often map to very different visual feedback, and finding the factorization that best matches a user's intuition can be difficult. We propose an interactive approach that infers rotational degrees of freedom from short user demonstrations. Users select one or two DOFs at a time by demonstrating a small range of motion, which we use to learn a rotational frame that best aligns with user control of the object. We show that deriving visual feedback from this inferred learned rotational frame leads to improved task completion times on 6DOF guidance tasks compared to standard default reference frames used in most mixed reality applications. | false | false | [
"Nhan (Nathan) Tran",
"Ethan Yang",
"Abe Davis"
] | [] | [] | [] |
CHI | 2,025 | Augmented Journeys: Interactive Points of Interest for In-Car Augmented Reality | 10.1145/3706598.3714323 | As passengers spend more time in vehicles, the demand for non-driving related tasks (NDRTs) increases. In-car Augmented Reality (AR) has the potential to enhance passenger experiences by enabling interaction with the environment through NDRTs using world-fixed Points of Interest (POIs). However, the effectiveness of existing interaction techniques and visualization methods for in-car AR remains unclear. Based on a survey (N=110) and a pre-study (N=10), we developed an interactive in-car AR system using a video see-through head-mounted display to engage with POIs via eye-gaze and pinch. Users could explore passed and upcoming POIs using three visualization techniques: List, Timeline, and Minimap. We evaluated the system's feasibility in a field study (N=21). Our findings indicate general acceptance of the system, with the List visualization being the preferred method for exploring POIs. Additionally, the study highlights limitations of current AR hardware, particularly the impact of vehicle movement on 3D interaction. | false | false | [
"Robin Connor Schramm",
"Ginevra Fedrizzi",
"Markus Sasalovici",
"Jann Philipp Freiwald",
"Ulrich Schwanecke"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2502.08437v1",
"icon": "paper"
}
] |
CHI | 2,025 | AVEC: An Assessment of Visual Encoding Ability in Visualization Construction | 10.1145/3706598.3713364 | Visualization literacy is the ability to both interpret and construct visualizations. Yet existing assessments focus solely on visualization interpretation. A lack of construction-related measurements hinders efforts in understanding and improving literacy in visualizations. We design and develop avec, an assessment of a person's visual encoding ability—a core component of the larger process of visualization construction—by: (1) creating an initial item bank using a design space of visualization tasks and chart types, (2) designing an assessment tool to support the combinatorial nature of selecting appropriate visual encodings, (3) building an autograder from expert scores of answers to our items, and (4) refining and validating the item bank and autograder through an analysis of test tryout data with 95 participants and feedback from the expert panel. We discuss recommendations for using avec, potential alternative scoring strategies, and the challenges in assessing higher-level visualization skills using constructed-response tests. Supplemental materials are available at: https://osf.io/hg7kx/. | false | false | [
"Lily W. Ge",
"Yuan Cui",
"Matthew Kay 0001"
] | [] | [] | [] |
CHI | 2,025 | Becoming One with Kuddi: Touching Data through an Intimate Data Physicalisation | 10.1145/3706598.3713221 | Kuddi is a haptic data physicalisation in the form of a soft pillow which combines 12 inflatable pockets to dynamically touch and be touched in relation to the changing menstruating body. This paper presents the soma design process that led to Kuddi's design, as well as Kuddi's evaluation through an auto-ethnographic approach, where the first author lived with Kuddi for two menstrual cycles. The resulting dataset was analysed by the research team using a narrative-led approach. Based on this analysis, we present five thick descriptions that capture how the experience of living with Kuddi led to a changing relation with menstrual pain. We contribute a design case of a haptic data physicalisation intended to touch the body and discuss how the material and interaction design choices embodied in Kuddi led to data visceralisation - a way of feeling data in ways which promote new somatic knowledge and experience. | false | false | [
"Guðrún Margrét Ívansdóttir",
"Joo Young Park",
"Anna Ståhl",
"Madeline Balaam"
] | [] | [] | [] |
CHI | 2,025 | Beyond Time and Accuracy: Strategies in Visual Problem-Solving | 10.1145/3706598.3714024 | In this paper, we explore viewers' strategies in visual problem-solving tasks. We build on the traditional metrics of accuracy and time to better understand the learning that occurs as individuals interact with visualizations. We conducted an in-lab eye-tracking user study with 53 participants from diverse demographic backgrounds. Using questions from the Visualization Literacy Assessment Test (VLAT), we examined participants' problem-solving strategies. We employed a mixed-methods approach capturing quantitative data on performance and gaze patterns, as well as qualitative data through think-alouds and sketches by participants as they reported on their problem-solving approach. Our analysis reveals not only the various cognitive strategies leading to correct answers but also the nature of mistakes and the conceptual misunderstandings that underlie them. This research contributes to the enhancement of visualization design guidelines by incorporating insights into the diverse strategies and cognitive processes employed by users. | false | false | [
"Eric Mörth",
"Zona Kostic",
"Nils Gehlenborg",
"Hanspeter Pfister",
"Johanna Beyer",
"Carolina Nobre"
] | [] | [] | [] |
CHI | 2,025 | Blending the Worlds: An evaluation of World-Fixed Visual Appearances in Automotive Augmented Reality | 10.1145/3706598.3713185 | With the transition to fully autonomous vehicles, non-driving related tasks (NDRTs) become increasingly important, allowing passengers to use their driving time more efficiently. In-car Augmented Reality (AR) gives the possibility to engage in NDRTs while also allowing passengers to engage with their surroundings, for example, by displaying world-fixed points of interest (POIs). This can lead to new discoveries, provide information about the environment, and improve locational awareness. To explore the optimal visualization of POIs using in-car AR, we conducted a field study (N = 38) examining six parameters: positioning, scaling, rotation, render distance, information density, and appearance. We also asked for intention of use, preferred seat positions and preferred automation level for the AR function in a post-study questionnaire. Our findings reveal user preferences and general acceptance of the AR functionality. Based on these results, we derived UX-guidelines for the visual appearance and behavior of location-based POIs in in-car AR. | false | false | [
"Robin Connor Schramm",
"Markus Sasalovici",
"Jann Philipp Freiwald",
"Michael Martin Otto",
"Melissa Reinelt",
"Ulrich Schwanecke"
] | [] | [] | [] |
CHI | 2,025 | CalliSense: An Interactive Educational Tool for Process-based Learning in Chinese Calligraphy | 10.1145/3706598.3714176 | Process-based learning is crucial for the transmission of intangible cultural heritage, especially in complex arts like Chinese calligraphy, where mastering techniques cannot be achieved by merely observing the final work. To explore the challenges faced in calligraphy heritage transmission, we conducted semi-structured interviews (N=8) as a formative study. Our findings indicate that the lack of calligraphy instructors and tools makes it difficult for students to master brush techniques, and teachers struggle to convey the intricate details and rhythm of brushwork. To address this, we collaborated with calligraphy instructors to develop an educational tool that integrates writing process capture and visualization, showcasing the writing rhythm, hand force, and brush posture. Through empirical studies conducted in multiple teaching workshops, we evaluated the system's effectiveness with teachers (N=4) and students (N=12). The results show that the tool significantly enhances teaching efficiency and aids learners in better understanding brush techniques. | false | false | [
"Xinya Gong",
"Wenhui Tao",
"Yuxin Ma"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2502.15883v1",
"icon": "paper"
}
] |
CHI | 2025 | Chartist: Task-driven Eye Movement Control for Chart Reading | 10.1145/3706598.3713128 | To design data visualizations that are easy to comprehend, we need to understand how people with different interests read them. Computational models of predicting scanpaths on charts could complement empirical studies by offering estimates of user performance inexpensively; however, previous models have been limited to gaze patterns and overlooked the effects of tasks. Here, we contribute Chartist, a computational model that simulates how users move their eyes to extract information from the chart in order to perform analysis tasks, including value retrieval, filtering, and finding extremes. The novel contribution lies in a two-level hierarchical control architecture. At the high level, the model uses LLMs to comprehend the information gained so far and applies this representation to select a goal for the lower-level controllers, which, in turn, move the eyes in accordance with a sampling policy learned via reinforcement learning. The model is capable of predicting human-like task-driven scanpaths across various tasks. It can be applied in fields such as explainable AI, visualization design evaluation, and optimization. While it displays limitations in terms of generalizability and accuracy, it takes modeling in a promising direction, toward understanding human behaviors in interacting with charts. | false | false | ["Danqing Shi", "Yao Wang 0018", "Yunpeng Bai", "Andreas Bulling", "Antti Oulasvirta"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2502.03575v1", "icon": "paper"}] |
CHI | 2025 | Comparing Native and Non-native English Speakers' Behaviors in Collaborative Writing through Visual Analytics | 10.1145/3706598.3713693 | Understanding collaborative writing dynamics between native speakers (NS) and non-native speakers (NNS) is critical for enhancing collaboration quality and team inclusivity. In this paper, we partnered with communication researchers to develop visual analytics solutions for comparing NS and NNS behaviors in 162 writing sessions across 27 teams. The primary challenges in analyzing writing behaviors are data complexity and the uncertainties introduced by automated methods. In response, we present COALA, a novel visual analytics tool that improves model interpretability by displaying uncertainties in author clusters, generating behavior summaries using large language models, and visualizing writing-related actions at multiple granularities. We validated the effectiveness of COALA through user studies with domain experts (N=2+2) and researchers with relevant experience (N=8). We present the insights discovered by participants using COALA, suggest features for future AI-assisted collaborative writing tools, and discuss the broader implications for analyzing collaborative processes beyond writing. | false | false | ["Yuexi Chen", "Yimin Xiao", "Kazi Tasnim Zinat", "Naomi Yamashita", "Ge Gao 0001", "Zhicheng Liu 0001"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2502.18681v1", "icon": "paper"}] |
CHI | 2025 | Confirmation Bias: The Double-Edged Sword of Data Facts in Visual Data Communication | 10.1145/3706598.3713831 | Incorporating data facts, which are natural language descriptions of data patterns, alongside visualizations can guide readers and enhance the visibility of data patterns. However, data facts might also induce confirmation bias in visual analysis. We conducted a series of crowdsourced experiments to explore the biasing effects of data facts. Our findings show that the presentation style, strength, and alignment of data facts with pre-existing beliefs significantly impact confirmation bias. Data facts that support prior beliefs can exacerbate confirmation bias, whereas those that refute an individual's beliefs can mitigate it. This effect is amplified when data facts are used in combination with visual annotations. Data facts describing variable correlations are perceived to be more compelling than ones describing average values and are associated with higher levels of confirmation bias. We underscore the persuasive influence of data facts in visualizations and caution against their indiscriminate use in efforts to mitigate bias. | false | false | ["Shiyao Li", "Thomas James Davidson", "Cindy Xiong Bearfield", "Emily Wall 0001"] | [] | [] | [] |
CHI | 2025 | CPVis: Evidence-based Multimodal Learning Analytics for Evaluation in Collaborative Programming | 10.1145/3706598.3713353 | As programming education becomes more widespread, many college students from non-computer science backgrounds begin learning programming. Collaborative programming emerges as an effective method for instructors to support novice students in developing coding and teamwork abilities. However, due to limited class time and attention, instructors face challenges in monitoring and evaluating the progress and performance of groups or individuals. To address this issue, we collect multimodal data from real-world settings and develop CPVis, an interactive visual analytics system designed to assess student collaboration dynamically. Specifically, CPVis enables instructors to evaluate both group and individual performance efficiently. CPVis employs a novel flower-based visual encoding to represent performance and provides time-based views to capture the evolution of collaborative behaviors. A within-subject experiment (N=22), comparing CPVis with two baseline systems, reveals that users gain more insights, find the visualization more intuitive, and report increased confidence in their assessments of collaboration. | false | false | ["Gefei Zhang", "Shenming Ji", "Yicao Li", "Jingwei Tang", "Jihong Ding", "Meng Xia 0002", "Guodao Sun", "Ronghua Liang"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2502.17835v1", "icon": "paper"}] |