Conference: string, 6 distinct values
Year: int64, range 1.99k to 2.03k
Title: string, length 8 to 187
DOI: string, length 16 to 32
Abstract: string, length 128 to 7.15k
Accessible: bool, 2 classes
Early: bool, 2 classes
AuthorNames-Deduped: list, length 1 to 24
Award: list, length 0 to 2
Resources: list, length 0 to 5
ResourceLinks: list, length 0 to 10
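The column summary above reads as a record schema for the rows that follow. As a minimal sketch (the `Paper` dataclass, its field names, and the `from_record` helper are assumptions for illustration, not part of the dataset), each row could be modeled like this:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Paper:
    # Field names mirror the column summary above; "AuthorNames-Deduped"
    # is not a valid Python identifier, so it is mapped to snake_case.
    conference: str            # string, 6 distinct values
    year: int                  # int64
    title: str                 # 8 to 187 chars
    doi: str                   # 16 to 32 chars
    abstract: str              # 128 to 7.15k chars
    accessible: bool
    early: bool
    authors: List[str]         # 1 to 24 names
    award: List[str] = field(default_factory=list)      # 0 to 2 entries
    resources: List[str] = field(default_factory=list)  # 0 to 5 entries
    resource_links: List[Dict[str, str]] = field(default_factory=list)  # 0 to 10 entries

    @classmethod
    def from_record(cls, rec: Dict) -> "Paper":
        # Map one raw row (keyed by the original column names) to a Paper.
        return cls(
            conference=rec["Conference"],
            year=int(rec["Year"]),
            title=rec["Title"],
            doi=rec["DOI"],
            abstract=rec["Abstract"],
            accessible=bool(rec["Accessible"]),
            early=bool(rec["Early"]),
            authors=list(rec["AuthorNames-Deduped"]),
            award=list(rec.get("Award", [])),
            resources=list(rec.get("Resources", [])),
            resource_links=list(rec.get("ResourceLinks", [])),
        )

# Example record, abridged from the first row below.
record = {
    "Conference": "Vis",
    "Year": 2023,
    "Title": "RL-LABEL: A Deep Reinforcement Learning Approach Intended "
             "for AR Label Placement in Dynamic Scenarios",
    "DOI": "10.1109/TVCG.2023.3326568",
    "Abstract": "Labels are widely used in augmented reality (AR) ...",
    "Accessible": False,
    "Early": False,
    "AuthorNames-Deduped": ["Zhutian Chen", "Daniele Chiappalupi", "Tica Lin",
                            "Yalong Yang 0001", "Johanna Beyer",
                            "Hanspeter Pfister"],
    "Award": [],
    "Resources": ["P", "V"],
    "ResourceLinks": [{"name": "Paper Preprint",
                       "url": "http://arxiv.org/pdf/2308.13540v2",
                       "icon": "paper"}],
}
paper = Paper.from_record(record)
print(paper.year, len(paper.authors))  # -> 2023 6
```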
Vis
2023
RL-LABEL: A Deep Reinforcement Learning Approach Intended for AR Label Placement in Dynamic Scenarios
10.1109/TVCG.2023.3326568
Labels are widely used in augmented reality (AR) to display digital information. Ensuring the readability of AR labels requires placing them in an occlusion-free manner while keeping visual links legible, especially when multiple labels exist in the scene. Although existing optimization-based methods, such as force-based methods, are effective in managing AR labels in static scenarios, they often struggle in dynamic scenarios with constantly moving objects. This is due to their focus on generating layouts optimal for the current moment, neglecting future moments and leading to sub-optimal or unstable layouts over time. In this work, we present RL-Label, a deep reinforcement learning-based method intended for managing the placement of AR labels in scenarios involving moving objects. RL-Label considers both the current and predicted future states of objects and labels, such as positions and velocities, as well as the user's viewpoint, to make informed decisions about label placement. It balances the trade-offs between immediate and long-term objectives. We tested RL-Label in simulated AR scenarios on two real-world datasets, showing that it effectively learns the decision-making process for long-term optimization, outperforming two baselines (i.e., no view management and a force-based method) by minimizing label occlusions, line intersections, and label movement distance. Additionally, a user study involving 18 participants indicates that, within our simulated environment, RL-Label excels over the baselines in aiding users to identify, compare, and summarize data on labels in dynamic scenes.
false
false
[ "Zhutian Chen", "Daniele Chiappalupi", "Tica Lin", "Yalong Yang 0001", "Johanna Beyer", "Hanspeter Pfister" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.13540v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/BeWKBFH2xhA", "icon": "video" } ]
Vis
2023
Roses Have Thorns: Understanding the Downside of Oncological Care Delivery Through Visual Analytics and Sequential Rule Mining
10.1109/TVCG.2023.3326939
Personalized head and neck cancer therapeutics have greatly improved survival rates for patients, but often lead to understudied, long-lasting symptoms that affect quality of life. Sequential rule mining (SRM) is a promising unsupervised machine learning method for predicting longitudinal patterns in temporal data which, however, can output many repetitive patterns that are difficult to interpret without the assistance of visual analytics. We present a data-driven, human-machine analysis visual system developed in collaboration with SRM model builders in cancer symptom research, which facilitates mechanistic knowledge discovery in large-scale, multivariate cohort symptom data. Our system supports multivariate predictive modeling of post-treatment symptoms based on during-treatment symptoms. It supports this goal through an SRM, clustering, and aggregation back end, and a custom front end to help develop and tune the predictive models. The system also explains the resulting predictions in the context of therapeutic decisions typical in personalized care delivery. We evaluate the resulting models and system with an interdisciplinary group of modelers and head and neck oncology researchers. The results demonstrate that our system effectively supports clinical and symptom research.
false
false
[ "Carla Floricel", "Andrew Wentzel", "Abdallah Sherif Radwan Mohamed", "Clifton David Fuller", "Guadalupe Canahuate", "G. Elisabeta Marai" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.07895v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/uH5MNOblrbI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/0ctRQ3mF4AE", "icon": "video" } ]
Vis
2023
Scalable Hypergraph Visualization
10.1109/TVCG.2023.3326599
Hypergraph visualization has many applications in network data analysis. Recently, a polygon-based representation for hypergraphs has been proposed with demonstrated benefits. However, the polygon-based layout often suffers from excessive self-intersections when the input dataset is relatively large. In this paper, we propose a framework in which the hypergraph is iteratively simplified through a set of atomic operations. Then, the layout of the simplest hypergraph is optimized and used as the foundation for a reverse process that brings the simplest hypergraph back to the original one, but with an improved layout. At the core of our approach is the set of atomic simplification operations and an operation priority measure to guide the simplification process. In addition, we introduce necessary definitions and conditions for hypergraph planarity within the polygon representation. We extend our approach to handle simultaneous simplification and layout optimization for both the hypergraph and its dual. We demonstrate the utility of our approach with datasets from a number of real-world applications.
false
false
[ "Peter Oliver", "Eugene Zhang", "Yue Zhang 0009" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.05043v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/DYQn0OaNgN4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Q0aOWOBRCUk", "icon": "video" } ]
Vis
2023
SkiVis: Visual Exploration and Route Planning in Ski Resorts
10.1109/TVCG.2023.3326940
Optimal ski route selection is a challenge based on a multitude of factors, such as the steepness, compass direction, or crowdedness. The personal preferences of every skier towards these factors require individual adaptations, which aggravate this task. Current approaches within this domain do not combine automated routing capabilities with user preferences, missing out on the possibility of integrating domain knowledge in the analysis process. We introduce SkiVis, a visual analytics application to interactively explore ski slopes and provide routing recommendations based on user preferences. In collaboration with ski guides and enthusiasts, we elicited requirements and guidelines for such an application and propose different workflows depending on the skiers' familiarity with the resort. In a case study on the resort of Ski Arlberg, we illustrate how to leverage volunteered geographic information to enable a numerical comparison between slopes. We evaluated our approach through a pair-analytics study and demonstrate how it supports skiers in discovering relevant and preference-based ski routes. Besides the tasks investigated in the study, we derive additional use cases from the interviews that showcase the further potential of SkiVis, and contribute directions for further research opportunities.
false
false
[ "Julius Rauscher", "Raphael Buchmüller", "Daniel A. Keim", "Matthias Miller" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.08570v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/DCy63Q2QC-4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/kAR_SMveFoI", "icon": "video" } ]
Vis
2023
Socrates: Data Story Generation via Adaptive Machine-Guided Elicitation of User Feedback
10.1109/TVCG.2023.3327363
Visual data stories can effectively convey insights from data, yet their creation often necessitates intricate data exploration, insight discovery, narrative organization, and customization to meet the communication objectives of the storyteller. Existing automated data storytelling techniques, however, tend to overlook the importance of user customization during the data story authoring process, limiting the system's ability to create tailored narratives that reflect the user's intentions. We present a novel data story generation workflow that leverages adaptive machine-guided elicitation of user feedback to customize the story. Our approach employs an adaptive plug-in module for existing story generation systems, which incorporates user feedback through interactive questioning based on the conversation history and dataset. This adaptability refines the system's understanding of the user's intentions, ensuring the final narrative aligns with their goals. We demonstrate the feasibility of our approach through the implementation of an interactive prototype: Socrates. Through a quantitative user study with 18 participants that compares our method to a state-of-the-art data story generation algorithm, we show that Socrates produces more relevant stories with a larger overlap of insights compared to human-generated stories. We also demonstrate the usability of Socrates via interviews with three data analysts and highlight areas of future work.
false
false
[ "Guande Wu", "Shunan Guo", "Jane Hoffswell", "Gromit Yeuk-Yin Chan", "Ryan A. Rossi", "Eunyee Koh" ]
[]
[ "V" ]
[ { "name": "Prerecorded Talk", "url": "https://youtu.be/u5bDYCzkIEI", "icon": "video" } ]
Vis
2023
SpeechMirror: A Multimodal Visual Analytics System for Personalized Reflection of Online Public Speaking Effectiveness
10.1109/TVCG.2023.3326932
As communications are increasingly taking place virtually, the ability to present well online is becoming an indispensable skill. Online speakers are facing unique challenges in engaging with remote audiences. However, there has been a lack of evidence-based analytical systems for people to comprehensively evaluate online speeches and further discover possibilities for improvement. This paper introduces SpeechMirror, a visual analytics system facilitating reflection on a speech based on insights from a collection of online speeches. The system estimates the impact of different speech techniques on effectiveness and applies them to a speech to give users awareness of the performance of speech techniques. A similarity recommendation approach based on speech factors or script content supports guided exploration to expand knowledge of presentation evidence and accelerate the discovery of speech delivery possibilities. SpeechMirror provides intuitive visualizations and interactions for users to understand speech factors. Among them, SpeechTwin, a novel multimodal visual summary of speech, supports rapid understanding of critical speech factors and comparison of different speech samples, and SpeechPlayer augments the speech video by integrating visualization of the speaker's body language with interaction, for focused analysis. The system utilizes visualizations suited to the distinct nature of different speech factors for user comprehension. The proposed system and visualization techniques were evaluated with domain experts and amateurs, demonstrating usability for users with low visualization literacy and its efficacy in assisting users to develop insights for potential improvement.
false
false
[ "Ze-Yuan Huang", "Qiang He", "Kevin T. Maher", "Xiaoming Deng 0001", "Yu-Kun Lai", "Cuixia Ma", "Sheng Feng Qin", "Yong-Jin Liu", "Hongan Wang" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2309.05091v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/8HGPaautdlk", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/80PUu8Ll7Q8", "icon": "video" } ]
Vis
2023
Supporting Guided Exploratory Visual Analysis on Time Series Data with Reinforcement Learning
10.1109/TVCG.2023.3327200
The exploratory visual analysis (EVA) of time series data uses visualization as the main output medium and input interface for exploring new data. However, for users who lack visual analysis expertise, interpreting and manipulating EVA can be challenging. Thus, providing guidance on EVA is necessary and two relevant questions need to be answered. First, how to recommend interesting insights to provide a first glance at data and help develop an exploration goal. Second, how to provide step-by-step EVA suggestions to help identify which parts of the data to explore. In this work, we present a reinforcement learning (RL)-based system, Visail, which generates EVA sequences to guide the exploration of time series data. As a user uploads a time series dataset, Visail can generate step-by-step EVA suggestions, while each step is visualized as an annotated chart combined with textual descriptions. The RL-based algorithm uses exploratory data analysis knowledge to construct the state and action spaces for the agent to imitate human analysis behaviors in data exploration tasks. In this way, the agent learns the strategy of generating coherent EVA sequences through a well-designed network. To evaluate the effectiveness of our system, we conducted an ablation study, a user study, and two case studies. The results of our evaluation suggested that Visail can provide effective guidance on supporting EVA on time series data.
false
false
[ "Yang Shi 0007", "Bingchang Chen", "Ying Chen", "Zhuochen Jin", "Ke Xu", "Xiaohan Jiao", "Tian Gao", "Nan Cao 0001" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/WBOV9-xf2RQ", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/l26n79AiXTE", "icon": "video" } ]
Vis
2023
Swaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms
10.1109/TVCG.2023.3327356
We conducted a longitudinal study during the 2022 U.S. midterm elections, investigating the real-world impacts of uncertainty visualizations. Using our forecast model of the governor elections in 33 states, we created a website and deployed four uncertainty visualizations for the election forecasts: single quantile dotplot (1-Dotplot), dual quantile dotplots (2-Dotplot), dual histogram intervals (2-Interval), and Plinko quantile dotplot (Plinko), an animated design with a physical and probabilistic analogy. Our online experiment ran from Oct. 18, 2022, to Nov. 23, 2022, involving 1,327 participants from 15 states. We use Bayesian multilevel modeling and post-stratification to produce demographically-representative estimates of people's emotions, trust in forecasts, and political participation intention. We find that election forecast visualizations can heighten emotions, increase trust, and slightly affect people's intentions to participate in elections. 2-Interval shows the strongest effects across all measures; 1-Dotplot increases trust the most after elections. Both visualizations create emotional and trust gaps between different partisan identities, especially when a Republican candidate is predicted to win. Our qualitative analysis uncovers the complex political and social contexts of election forecast visualizations, showcasing that visualizations may provoke polarization. This intriguing interplay between visualization types, partisanship, and trust exemplifies the fundamental challenge of disentangling visualization from its context, underscoring a need for deeper investigation into the real-world impacts of visualizations. Our preprint and supplements are available at https://doi.org/osf.io/ajq8f.
false
false
[ "Fumeng Yang", "Mandi Cai", "Chloe Mortenson", "Hoda Fakhari", "Ayse D. Lokmanoglu", "Jessica Hullman", "Steven Franconeri", "Nicholas Diakopoulos", "Erik C. Nisbet", "Matthew Kay 0001" ]
[ "BP" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/qpyna", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/4-XECJkfMec", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/LK9bxqOrrmw", "icon": "video" } ]
Vis
2023
TactualPlot: Spatializing Data as Sound Using Sensory Substitution for Touchscreen Accessibility
10.1109/TVCG.2023.3326937
Tactile graphics are one of the best ways for a blind person to perceive a chart using touch, but their fabrication is often costly, time-consuming, and does not lend itself to dynamic exploration. Refreshable haptic displays tend to be expensive and thus unavailable to most blind individuals. We propose TactualPlot, an approach to sensory substitution where touch interaction yields auditory (sonified) feedback. The technique relies on embodied cognition for spatial awareness—i.e., individuals can perceive 2D touch locations of their fingers with reference to other 2D locations such as the relative locations of other fingers or chart characteristics that are visualized on touchscreens. Combining touch and sound in this way yields a scalable data exploration method for scatterplots where the data density under the user's fingertips is sampled. The sample regions can optionally be scaled based on how quickly the user moves their hand. Our development of TactualPlot was informed by formative design sessions with a blind collaborator, whose practice while using tactile scatterplots caused us to expand the technique for multiple fingers. We present results from an evaluation comparing our TactualPlot interaction technique to tactile graphics printed on swell touch paper.
false
false
[ "Pramod Chundury", "Yasmin Reyazuddin", "J. Bern Jordan", "Jonathan Lazar", "Niklas Elmqvist" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/ZuHqdff6lb0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/FGOP_V_r_8c", "icon": "video" } ]
Vis
2023
The Arrangement of Marks Impacts Afforded Messages: Ordering, Partitioning, Spacing, and Coloring in Bar Charts
10.1109/TVCG.2023.3326590
Data visualizations present a massive number of potential messages to an observer. One might notice that one group's average is larger than another's, or that a difference in values is smaller than a difference between two others, or any of a combinatorial explosion of other possibilities. The message that a viewer tends to notice, the message that a visualization 'affords', is strongly affected by how values are arranged in a chart, e.g., how the values are colored or positioned. Although understanding the mapping between a chart's arrangement and what viewers tend to notice is critical for creating guidelines and recommendation systems, current empirical work is insufficient to lay out clear rules. We present a set of empirical evaluations of how different messages, including ranking, grouping, and part-to-whole relationships, are afforded by variations in ordering, partitioning, spacing, and coloring of values, within the ubiquitous case study of bar graphs. In doing so, we introduce a quantitative method that is easily scalable, reviewable, and replicable, laying groundwork for further investigation of the effects of arrangement on message affordances across other visualizations and tasks. Pre-registration and all supplemental materials are available at https://osf.io/np3q7 and https://osf.io/bvy95, respectively.
false
false
[ "Racquel Fygenson", "Steven Franconeri", "Enrico Bertini" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.13321v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/4JA9qD06WH4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/xOuQs8lYO1s", "icon": "video" } ]
Vis
2023
The Rational Agent Benchmark for Data Visualization
10.1109/TVCG.2023.3326513
Understanding how helpful a visualization is from experimental results is difficult because the observed performance is confounded with aspects of the study design, such as how useful the information that is visualized is for the task. We develop a rational agent framework for designing and interpreting visualization experiments. Our framework conceives two experiments with the same setup: one with behavioral agents (human subjects), and the other one with a hypothetical rational agent. A visualization is evaluated by comparing the expected performance of behavioral agents to that of a rational agent under different assumptions. Using recent visualization decision studies from the literature, we demonstrate how the framework can be used to pre-experimentally evaluate the experiment design by bounding the expected improvement in performance from having access to visualizations, and post-experimentally to deconfound errors of information extraction from errors of optimization, among other analyses.
false
false
[ "Yifan Wu", "Ziyang Guo", "Michalis Mamakos", "Jason D. Hartline", "Jessica Hullman" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.03432v5", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/ZcqYd0O_7ps", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/1dOwlsp-0K0", "icon": "video" } ]
Vis
2023
The Urban Toolkit: A Grammar-Based Framework for Urban Visual Analytics
10.1109/TVCG.2023.3326598
While cities around the world are looking for smart ways to use new advances in data collection, management, and analysis to address their problems, the complex nature of urban issues and the overwhelming amount of available data have posed significant challenges in translating these efforts into actionable insights. In the past few years, urban visual analytics tools have significantly helped tackle these challenges. When analyzing a feature of interest, an urban expert must transform, integrate, and visualize different thematic (e.g., sunlight access, demographic) and physical (e.g., buildings, street networks) data layers, oftentimes across multiple spatial and temporal scales. However, integrating and analyzing these layers require expertise in different fields, increasing development time and effort. This makes the entire visual data exploration and system implementation difficult for programmers and also sets a high entry barrier for urban experts outside of computer science. With this in mind, in this paper, we present the Urban Toolkit (UTK), a flexible and extensible visualization framework that enables the easy authoring of web-based visualizations through a new high-level grammar specifically built with common urban use cases in mind. In order to facilitate the integration and visualization of different urban data, we also propose the concept of knots to merge thematic and physical urban layers. We evaluate our approach through use cases and a series of interviews with experts and practitioners from different domains, including urban accessibility, urban planning, architecture, and climate science. UTK is available at urbantk.org.
false
false
[ "Gustavo Moreira", "Maryam Hosseini", "Md Nafiul Alam Nipu", "Marcos Lage", "Nivan Ferreira", "Fabio Miranda 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.07769v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/1g0s4PpPM6s", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/LF27VgtUGQ4", "icon": "video" } ]
Vis
2023
TimeSplines: Sketch-Based Authoring of Flexible and Idiosyncratic Timelines
10.1109/TVCG.2023.3326520
Timelines are essential for visually communicating chronological narratives and reflecting on the personal and cultural significance of historical events. Existing visualization tools tend to support conventional linear representations, but fail to capture personal idiosyncratic conceptualizations of time. In response, we built TimeSplines, a visualization authoring tool that allows people to sketch multiple free-form temporal axes and populate them with heterogeneous, time-oriented data via incremental and lazy data binding. Authors can bend, compress, and expand temporal axes to emphasize or de-emphasize intervals based on their personal importance; they can also annotate the axes with text and figurative elements to convey contextual information. The results of two user studies show how people appropriate the concepts in TimeSplines to express their own conceptualization of time, while our curated gallery of images demonstrates the expressive potential of our approach.
false
false
[ "Anna Offenwanger", "Matthew Brehmer", "Fanny Chevalier", "Theophanis Tsandilas" ]
[ "BP" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/qP0q5ySbN80", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/vNWgwHK4ExY", "icon": "video" } ]
Vis
2023
TimeTuner: Diagnosing Time Representations for Time-Series Forecasting with Counterfactual Explanations
10.1109/TVCG.2023.3327389
Deep learning (DL) approaches are being increasingly used for time-series forecasting, with many efforts devoted to designing complex DL models. Recent studies have shown that the DL success is often attributed to effective data representations, fostering the fields of feature engineering and representation learning. However, automated approaches for feature learning are typically limited with respect to incorporating prior knowledge, identifying interactions among variables, and choosing evaluation metrics to ensure that the models are reliable. To improve on these limitations, this paper contributes a novel visual analytics framework, namely TimeTuner, designed to help analysts understand how model behaviors are associated with localized correlations, stationarity, and granularity of time-series representations. The system mainly consists of the following two-stage technique: We first leverage counterfactual explanations to connect the relationships among time-series representations, multivariate features and model predictions. Next, we design multiple coordinated views including a partition-based correlation matrix and juxtaposed bivariate stripes, and provide a set of interactions that allow users to step into the transformation selection process, navigate through the feature space, and reason the model performance. We instantiate TimeTuner with two transformation methods of smoothing and sampling, and demonstrate its applicability on real-world time-series forecasting of univariate sunspots and multivariate air pollutants. Feedback from domain experts indicates that our system can help characterize time-series representations and guide the feature engineering processes.
false
false
[ "Jianing Hao", "Qing Shi", "Yilin Ye", "Wei Zeng 0004" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.09916v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/eCNQTStE0l0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/ofNNirHeJeE", "icon": "video" } ]
Vis
2023
Too Many Cooks: Exploring How Graphical Perception Studies Influence Visualization Recommendations in Draco
10.1109/TVCG.2023.3326527
Findings from graphical perception can guide visualization recommendation algorithms in identifying effective visualization designs. However, existing algorithms use knowledge from, at best, a few studies, limiting our understanding of how complementary (or contradictory) graphical perception results influence generated recommendations. In this paper, we present a pipeline of applying a large body of graphical perception results to develop new visualization recommendation algorithms and conduct an exploratory study to investigate how results from graphical perception can alter the behavior of downstream algorithms. Specifically, we model graphical perception results from 30 papers in Draco—a framework to model visualization knowledge—to develop new recommendation algorithms. By analyzing Draco-generated algorithms, we showcase the feasibility of our method to (1) identify gaps in existing graphical perception literature informing recommendation algorithms, (2) cluster papers by their preferred design rules and constraints, and (3) investigate why certain studies can dominate Draco's recommendations, whereas others may have little influence. Given our findings, we discuss the potential for mutually reinforcing advancements in graphical perception and visualization recommendation research.
false
false
[ "Zehua Zeng", "Junran Yang", "Dominik Moritz", "Jeffrey Heer", "Leilani Battle" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.14241v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/VDA1aW1tTfY", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/9oZ6MiFDud8", "icon": "video" } ]
Vis
2023
TopoSZ: Preserving Topology in Error-Bounded Lossy Compression
10.1109/TVCG.2023.3326920
Existing error-bounded lossy compression techniques control the pointwise error during compression to guarantee the integrity of the decompressed data. However, they typically do not explicitly preserve the topological features in data. When performing post hoc analysis with decompressed data using topological methods, preserving topology in the compression process to obtain topologically consistent and correct scientific insights is desirable. In this paper, we introduce TopoSZ, an error-bounded lossy compression method that preserves the topological features in 2D and 3D scalar fields. Specifically, we aim to preserve the types and locations of local extrema as well as the level set relations among critical points captured by contour trees in the decompressed data. The main idea is to derive topological constraints from a contour-tree-induced segmentation of the data domain, and incorporate such constraints with a customized error-controlled quantization strategy from the SZ compressor (version 1.4). Our method allows users to control the pointwise error and the loss of topological features during the compression process with a global error bound and a persistence threshold.
false
false
[ "Lin Yan 0003", "Xin Liang 0001", "Hanqi Guo 0001", "Bei Wang 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.11768v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/FahwYhReces", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/LfqGC3UEkEY", "icon": "video" } ]
Vis
2023
TransforLearn: Interactive Visual Tutorial for the Transformer Model
10.1109/TVCG.2023.3327353
The widespread adoption of Transformers in deep learning, serving as the core framework for numerous large-scale language models, has sparked significant interest in understanding their underlying mechanisms. However, beginners face difficulties in comprehending and learning Transformers due to their complex structure and abstract data representation. We present TransforLearn, the first interactive visual tutorial designed for deep learning beginners and non-experts to comprehensively learn about Transformers. TransforLearn supports interactions for architecture-driven exploration and task-driven exploration, providing insight into different levels of model details and their working processes. It accommodates interactive views of each layer's operation and mathematical formula, helping users to understand the data flow of long text sequences. By altering the current decoder-based recursive prediction results and combining the downstream task abstractions, users can deeply explore model processes. Our user study revealed that the interactions of TransforLearn are positively received. We observe that TransforLearn effectively facilitates users' accomplishment of study tasks and their grasp of key concepts in Transformers.
false
false
[ "Lin Gao", "Zekai Shao", "Ziqin Luo", "Haibo Hu 0002", "Cagatay Turkay", "Siming Chen 0001" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/IN6SyphYpZ0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/cfbs70RaxvA", "icon": "video" } ]
Vis
2023
Transitioning to a Commercial Dashboarding System: Socio-Technical Observations and Opportunities
10.1109/TVCG.2023.3326525
Many long-established, traditional manufacturing businesses are becoming more digital and data-driven to improve their production. These companies are embracing visual analytics in these transitions through their adoption of commercial dashboarding systems. Although a number of studies have looked at the technical challenges of adopting these systems, very few have focused on the socio-technical issues that arise. In this paper, we report on the results of an interview study with 17 participants working in a range of roles at a long-established, traditional manufacturing company as they adopted Microsoft Power BI. The results highlight a number of socio-technical challenges the employees faced, including difficulties in training, using and creating dashboards, and transitioning to a modern digital company. Based on these results, we propose a number of opportunities for both companies and visualization researchers to improve these difficult transitions, as well as opportunities for rethinking how we design dashboarding systems for real-world use.
false
false
[ "Conny Walchshofer", "Vaishali Dhanoa", "Marc Streit", "Miriah Meyer" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/zt5TCRnhpko", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/-1ydaDEiVg0", "icon": "video" } ]
Vis
2023
TROPHY: A Topologically Robust Physics-Informed Tracking Framework for Tropical Cyclones
10.1109/TVCG.2023.3326905
Tropical cyclones (TCs) are among the most destructive weather systems. Realistically and efficiently detecting and tracking TCs are critical for assessing their impacts and risks. In particular, the eye is a signature feature of a mature TC. Therefore, knowing the eyes' locations and movements is crucial for both operational weather forecasts and climate risk assessments. Recently, a multilevel robustness framework has been introduced to study the critical points of time-varying vector fields. The framework quantifies the robustness (i.e., structural stability) of critical points across varying neighborhoods. By relating the multilevel robustness with critical point tracking, the framework has demonstrated its potential in cyclone tracking. An advantage is that it identifies cyclonic features using only 2D wind vector fields, which is encouraging as most tracking algorithms require multiple dynamic and thermodynamic variables at different altitudes. A disadvantage is that the framework does not scale well computationally for datasets containing a large number of cyclones. This paper introduces a topologically robust physics-informed tracking framework (TROPHY) for TC tracking. The main idea is to integrate physical knowledge of TCs to drastically improve the computational efficiency of the multilevel robustness framework for large-scale climate datasets. First, during preprocessing, we propose a physics-informed feature selection strategy to filter out 90% of critical points that are short-lived and have low stability, thus preserving good candidates for TC tracking. Second, during in-processing, we impose constraints during the multilevel robustness computation to focus only on physics-informed neighborhoods of TCs. We apply TROPHY to 30 years of 2D wind fields from reanalysis data in ERA5 and generate a number of TC tracks. In comparison with the observed tracks, we demonstrate that TROPHY can capture TC characteristics (e.g., frequency, intensity, duration, latitudes with maximum intensity, and genesis) that are comparable to and sometimes even better than a well-validated TC tracking algorithm that requires multiple dynamic and thermodynamic scalar fields.
false
false
[ "Lin Yan 0003", "Hanqi Guo 0001", "Thomas Peterka", "Bei Wang 0001", "Jiali Wang" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.15243v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Phan4CK2sjM", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/9WGLQVcTbGs", "icon": "video" } ]
Vis
2023
Unraveling the Design Space of Immersive Analytics: A Systematic Review
10.1109/TVCG.2023.3327368
Immersive analytics has emerged as a promising research area, leveraging advances in immersive display technologies and techniques, such as virtual and augmented reality, to facilitate data exploration and decision-making. This paper presents a systematic literature review of 73 studies published between 2013 and 2022 on immersive analytics systems and visualizations, aiming to identify and categorize the primary dimensions influencing their design. We identified five key dimensions: Academic Theory and Contribution, Immersive Technology, Data, Spatial Presentation, and Visual Presentation. Academic Theory and Contribution assesses the motivations behind the works and their theoretical frameworks. Immersive Technology examines the display and input modalities, while the Data dimension focuses on dataset types and generation. Spatial Presentation discusses the environment, space, embodiment, and collaboration aspects in immersive analytics, and Visual Presentation explores the visual elements, facet and position, and manipulation of views. By examining each dimension individually and cross-referencing them, this review uncovers trends and relationships that help inform the design of immersive systems and visualizations. This analysis provides valuable insights for researchers and practitioners, offering guidance in designing future immersive analytics systems and shaping the trajectory of this rapidly evolving field. A free copy of this paper and all supplemental materials are available at osf.io/5ewaj.
false
false
[ "David Saffo", "Sara Di Bartolomeo", "Tarik Crnovrsanin", "Laura South", "Justin Raynor", "Caglar Yildirim", "Cody Dunne" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/2e9x4", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/AQzEyNdhwoo", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/BSLMlmUF-tk", "icon": "video" } ]
Vis
2023
VideoPro: A Visual Analytics Approach for Interactive Video Programming
10.1109/TVCG.2023.3326586
Constructing supervised machine learning models for real-world video analysis requires substantial labeled data, which is costly to acquire due to scarce domain expertise and laborious manual inspection. While data programming shows promise in generating labeled data at scale with user-defined labeling functions, the high-dimensional and complex temporal information in videos poses additional challenges for effectively composing and evaluating labeling functions. In this paper, we propose VideoPro, a visual analytics approach to support flexible and scalable video data programming for model steering with reduced human effort. We first extract human-understandable events from videos using computer vision techniques and treat them as atomic components of labeling functions. We further propose a two-stage template mining algorithm that characterizes the sequential patterns of these events to serve as labeling function templates for efficient data labeling. The visual interface of VideoPro facilitates multifaceted exploration, examination, and application of the labeling templates, allowing for effective programming of video data at scale. Moreover, users can monitor the impact of programming on model performance and make informed adjustments during the iterative programming process. We demonstrate the efficiency and effectiveness of our approach with two case studies and expert interviews.
false
false
[ "Jianben He", "Xingbo Wang 0001", "Kamkwai Wong", "Xijie Huang", "Changjian Chen", "Zixin Chen", "Fengjie Wang", "Min Zhu", "Huamin Qu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.00401v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/FCb64peiqBA", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Hn2_KqSMDaY", "icon": "video" } ]
Vis
2023
Vimo: Visual Analysis of Neuronal Connectivity Motifs
10.1109/TVCG.2023.3327388
Recent advances in high-resolution connectomics provide researchers with access to accurate petascale reconstructions of neuronal circuits and brain networks for the first time. Neuroscientists are analyzing these networks to better understand information processing in the brain. In particular, scientists are interested in identifying specific small network motifs, i.e., repeating subgraphs of the larger brain network that are believed to be neuronal building blocks. Although such motifs are typically small (e.g., 2–6 neurons), the vast data sizes and intricate data complexity present significant challenges to the search and analysis process. To analyze these motifs, it is crucial to review instances of a motif in the brain network and then map the graph structure to detailed 3D reconstructions of the involved neurons and synapses. We present Vimo, an interactive visual approach to analyze neuronal motifs and motif chains in large brain networks. Experts can sketch network motifs intuitively in a visual interface and specify structural properties of the involved neurons and synapses to query large connectomics datasets. Motif instances (MIs) can be explored in high-resolution 3D renderings. To simplify the analysis of MIs, we designed a continuous focus&context metaphor inspired by visual abstractions. This allows users to transition from a highly-detailed rendering of the anatomical structure to views that emphasize the underlying motif structure and synaptic connectivity. Furthermore, Vimo supports the identification of motif chains where a motif is used repeatedly (e.g., 2–4 times) to form a larger network structure. We evaluate Vimo in a user study and an in-depth case study with seven domain experts on motifs in a large connectome of the fruit fly, including more than 21,000 neurons and 20 million synapses. We find that Vimo enables hypothesis generation and confirmation through fast analysis iterations and connectivity highlighting.
false
false
[ "Jakob Troidl", "Simon Warchol", "Jinhan Choi", "Jordan Matelsky", "Nagaraju Dhanyasi", "Xueying Wang", "Brock A. Wester", "Donglai Wei 0001", "Jeff W. Lichtman", "Hanspeter Pfister", "Johanna Beyer" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/9OEPANsTtjs", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/MtAWc_XoilE", "icon": "video" } ]
Vis
2023
VIRD: Immersive Match Video Analysis for High-Performance Badminton Coaching
10.1109/TVCG.2023.3327161
Badminton is a fast-paced sport that requires a strategic combination of spatial, temporal, and technical tactics. To gain a competitive edge at high-level competitions, badminton professionals frequently analyze match videos to gain insights and develop game strategies. However, the current process for analyzing matches is time-consuming and relies heavily on manual note-taking, due to the lack of automatic data collection and appropriate visualization tools. As a result, there is a gap in effectively analyzing matches and communicating insights among badminton coaches and players. This work proposes an end-to-end immersive match analysis pipeline designed in close collaboration with badminton professionals, including Olympic and national coaches and players. We present VIRD, a VR Bird (i.e., shuttle) immersive analysis tool that supports interactive badminton game analysis in an immersive environment based on 3D reconstructed game views of the match video. We propose a top-down analytic workflow that allows users to seamlessly move from a high-level match overview to a detailed game view of individual rallies and shots, using situated 3D visualizations and video. We collect 3D spatial and dynamic shot data and player poses with computer vision models and visualize them in VR. Through immersive visualizations, coaches can interactively analyze situated spatial data (player positions, poses, and shot trajectories) with flexible viewpoints while navigating between shots and rallies effectively with embodied interaction. We evaluated the usefulness of VIRD with Olympic and national-level coaches and players in real matches. Results show that immersive analytics supports effective badminton match analysis with reduced context-switching costs and enhances spatial understanding with a high sense of presence.
false
false
[ "Tica Lin", "Alexandre Aouididi", "Zhutian Chen", "Johanna Beyer", "Hanspeter Pfister", "Jui-Hsien Wang" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.12539v2", "icon": "paper" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/LxEuKLl9JJA", "icon": "video" } ]
Vis
2023
VisGrader: Automatic Grading of D3 Visualizations
10.1109/TVCG.2023.3327181
Manually grading D3 data visualizations is a challenging endeavor, and is especially difficult for large classes with hundreds of students. Grading an interactive visualization requires a combination of interactive, quantitative, and qualitative evaluations that are conventionally done manually and are difficult to scale up as the visualization complexity, data size, and number of students increase. We present VisGrader, a first-of-its-kind automatic grading method for D3 visualizations that scalably and precisely evaluates the data bindings, visual encodings, interactions, and design specifications used in a visualization. Our method enhances students' learning experience, enabling them to submit their code frequently and receive rapid feedback to better inform iteration and improvement to their code and visualization design. We have successfully deployed our method and auto-graded D3 submissions from more than 4000 students in a visualization course at Georgia Tech, and received positive feedback for expanding its adoption.
false
false
[ "Matthew Hull", "Vivian Pednekar", "Hannah Murray", "Nimisha Roy", "Emmanuel Tung", "Susanta Routray", "Connor Guerin", "Justin Chen", "Zijie J. Wang", "Seongmin Lee 0007", "M. Mahdi Roozbahani", "Duen Horng Chau" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2310.12347v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/ip7L9Wfrnvs", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/4E6cevh6ol0", "icon": "video" } ]
Vis
2023
VISPUR: Visual Aids for Identifying and Interpreting Spurious Associations in Data-Driven Decisions
10.1109/TVCG.2023.3326587
Big data and machine learning tools have jointly empowered humans in making data-driven decisions. However, many of them capture empirical associations that might be spurious due to confounding factors and subgroup heterogeneity. The famous Simpson's paradox is such a phenomenon where aggregated and subgroup-level associations contradict each other, causing cognitive confusion and difficulty in making adequate interpretations and decisions. Existing tools provide little insight for humans to locate, reason about, and prevent pitfalls of spurious association in practice. We propose Vispur, a visual analytic system that provides a causal analysis framework and a human-centric workflow for tackling spurious associations. These include a Confounder Dashboard, which can automatically identify possible confounding factors, and a Subgroup Viewer, which allows for the visualization and comparison of diverse subgroup patterns that likely or potentially result in a misinterpretation of causality. Additionally, we propose a Reasoning Storyboard, which uses a flow-based approach to illustrate paradoxical phenomena, as well as an interactive Decision Diagnosis panel that helps ensure accountable decision-making. Through an expert interview and a controlled user experiment, our qualitative and quantitative results demonstrate that the proposed “de-paradox” workflow and the designed visual analytic system are effective in helping human users to identify and understand spurious associations, as well as to make accountable causal decisions.
false
false
[ "Xian Teng", "Yongsu Ahn", "Yu-Ru Lin" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.14448v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/AB8x_Tn7KXw", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/eEg-_6MA4Zw", "icon": "video" } ]
Vis
2023
Vistrust: a Multidimensional Framework and Empirical Study of Trust in Data Visualizations
10.1109/TVCG.2023.3326579
Trust is an essential aspect of data visualization, as it plays a crucial role in the interpretation and decision-making processes of users. While research in social sciences outlines the multi-dimensional factors that can play a role in trust formation, most data visualization trust researchers employ a single-item scale to measure trust. We address this gap by proposing a comprehensive, multidimensional conceptualization and operationalization of trust in visualization. We do this by applying general theories of trust from social sciences, as well as synthesizing and extending earlier work and factors identified by studies in the visualization field. We apply a two-dimensional approach to trust in visualization, to distinguish between cognitive and affective elements, as well as between visualization and data-specific trust antecedents. We use our framework to design and run a large crowd-sourced study to quantify the role of visual complexity in establishing trust in science visualizations. Our study provides empirical evidence for several aspects of our proposed theoretical framework, most notably the impact of cognition, affective responses, and individual differences when establishing trust in visualizations.
false
false
[ "Hamza Elhamdadi", "Adam Stefkovics", "Johanna Beyer", "Eric Mörth", "Hanspeter Pfister", "Cindy Xiong Bearfield", "Carolina Nobre" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2309.16915v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Mb83yBTxJY4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/gFXeHdqm6vM", "icon": "video" } ]
Vis
2023
Visual Analysis of Displacement Processes in Porous Media using Spatio-Temporal Flow Graphs
10.1109/TVCG.2023.3326931
We developed a new approach comprised of different visualizations for the comparative spatio-temporal analysis of displacement processes in porous media. We aim to analyze and compare ensemble datasets from experiments to gain insight into the influence of different parameters on fluid flow. To capture the displacement of a defending fluid by an invading fluid, we first condense an input image series to a single time map. From this map, we generate a spatio-temporal flow graph covering the whole process. This graph is further simplified to only reflect topological changes in the movement of the invading fluid. Our interactive tools allow the visual analysis of these processes by visualizing the graph structure and the context of the experimental setup, as well as by providing charts for multiple metrics. We apply our approach to analyze and compare ensemble datasets jointly with domain experts, where we vary either fluid properties or the solid structure of the porous medium. We finally report the generated insights from the domain experts and discuss our contribution's advantages, generality, and limitations.
false
false
[ "Alexander Straub", "Nikolaos Karadimitriou", "Guido Reina", "Steffen Frey", "Holger Steeb", "Thomas Ertl" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.14949v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/2jFzP_AuBp0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/WngF_Xyr87k", "icon": "video" } ]
Vis
2023
Visual Analytics for Understanding Draco's Knowledge Base
10.1109/TVCG.2023.3326912
Draco has been developed as an automated visualization recommendation system formalizing design knowledge as logical constraints in ASP (Answer-Set Programming). With an increasing set of constraints and incorporated design knowledge, even visualization experts lose overview in Draco and struggle to retrace the automated recommendation decisions made by the system. Our paper proposes a Visual Analytics (VA) approach to visualize and analyze Draco's constraints. Our VA approach is designed to enable visualization experts to accomplish identified tasks regarding the knowledge base and support them in better understanding Draco. We extend the existing data extraction strategy of Draco with a data processing architecture capable of extracting features of interest from the knowledge base. A revised version of the ASP grammar provides the basis for this data processing strategy. The resulting incorporated and shared features of the constraints are then visualized using a hypergraph structure inside the radial-arranged constraints of the elaborated visualization. The hierarchical categories of the constraints are indicated by arcs surrounding the constraints. Our approach is designed to enable visualization experts to interactively explore the design rules' violations based on highlighting respective constraints or recommendations. A qualitative and quantitative evaluation of the prototype confirms the prototype's effectiveness and value in acquiring insights into Draco's recommendation process and design constraints.
false
false
[ "Johanna Schmidt", "Bernhard Pointner", "Silvia Miksch" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.12866v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Jkaz0DJ25bs", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/n3SVTfXvJc8", "icon": "video" } ]
Vis
2023
Visualization According to Statisticians: An Interview Study on the Role of Visualization for Inferential Statistics
10.1109/TVCG.2023.3326521
Statisticians are not only one of the earliest professional adopters of data visualization, but also some of its most prolific users. Understanding how these professionals utilize visual representations in their analytic process may shed light on best practices for visual sensemaking. We present results from an interview study involving 18 professional statisticians (19.7 years average in the profession) on three aspects: (1) their use of visualization in their daily analytic work; (2) their mental models of inferential statistical processes; and (3) their design recommendations for how to best represent statistical inferences. Interview sessions consisted of discussing inferential statistics, eliciting participant sketches of suitable visual designs, and finally, a design intervention with our proposed visual designs. We analyzed interview transcripts using thematic analysis and open coding, deriving thematic codes on statistical mindset, analytic process, and analytic toolkit. The key findings for each aspect are as follows: (1) statisticians make extensive use of visualization during all phases of their work (and not just when reporting results); (2) their mental models of inferential methods tend to be mostly visually based; and (3) many statisticians abhor dichotomous thinking. The latter suggests that a multi-faceted visual display of inferential statistics that includes a visual indicator of analytically important effect sizes may help to balance the attributed epistemic power of traditional statistical testing with an awareness of the uncertainty of sensemaking.
false
false
[ "Eric Newburger", "Niklas Elmqvist" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2309.12684v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/LstX25H2Uho", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/D4z_F2b5aEQ", "icon": "video" } ]
Vis
2023
Visualization of Discontinuous Vector Field Topology
10.1109/TVCG.2023.3326519
This paper extends the concept and the visualization of vector field topology to vector fields with discontinuities. We address the non-uniqueness of flow in such fields by introduction of a time-reversible concept of equivalence. This concept generalizes streamlines to streamsets and thus vector field topology to discontinuous vector fields in terms of invariant streamsets. We identify respective novel critical structures as well as their manifolds, investigate their interplay with traditional vector field topology, and detail the application and interpretation of our approach using specifically designed synthetic cases and a simulated case from physics.
false
false
[ "Egzon Miftari", "Daniel Durstewitz", "Filip Sadlo" ]
[ "BP" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/v-jSC58mF-E", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/OuJbldoK7yw", "icon": "video" } ]
Vis
2023
Visualizing Historical Book Trade Data: An Iterative Design Study with Close Collaboration with Domain Experts
10.1109/TVCG.2023.3326923
The circulation of historical books has always been an area of interest for historians. However, the data used to represent the journey of a book across different places and times can be difficult for domain experts to digest due to buried geographical and chronological features within text-based presentations. This situation provides an opportunity for collaboration between visualization researchers and historians. This paper describes a design study where a variant of the Nine-Stage Framework [46] was employed to develop a Visual Analytics (VA) tool called DanteExploreVis. This tool was designed to aid domain experts in exploring, explaining, and presenting book trade data from multiple perspectives. We discuss the design choices made and how each panel in the interface meets the domain requirements. We also present the results of a qualitative evaluation conducted with domain experts. The main contributions of this paper include: 1) the development of a VA tool to support domain experts in exploring, explaining, and presenting book trade data; 2) a comprehensive documentation of the iterative design, development, and evaluation process following the variant Nine-Stage Framework; 3) a summary of the insights gained and lessons learned from this design study in the context of the humanities field; and 4) reflections on how our approach could be applied in a more generalizable way.
false
false
[ "Yiwen Xing", "Cristina Dondi", "Rita Borgo", "Alfie Abdul-Rahman" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.10795v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/hVYdZL4MMqI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/5EswGZzXctA", "icon": "video" } ]
Vis
2023
Visualizing Large-Scale Spatial Time Series with GeoChron
10.1109/TVCG.2023.3327162
In geo-related fields such as urban informatics, atmospheric science, and geography, large-scale spatial time (ST) series (i.e., geo-referenced time series) are collected for monitoring and understanding important spatiotemporal phenomena. ST series visualization is an effective means of understanding the data and reviewing spatiotemporal phenomena, which is a prerequisite for in-depth data analysis. However, visualizing these series is challenging due to their large scales, inherent dynamics, and spatiotemporal nature. In this study, we introduce the notion of patterns of evolution in ST series. Each evolution pattern is characterized by 1) a set of ST series that are close in space and 2) a time period when the trends of these ST series are correlated. We then leverage Storyline techniques by considering an analogy between evolution patterns and sessions, and finally design a novel visualization called GeoChron, which is capable of visualizing large-scale ST series in an evolution pattern-aware and narrative-preserving manner. GeoChron includes a mining framework to extract evolution patterns and two-level visualizations to enhance its visual scalability. We evaluate GeoChron with two case studies, an informal user study, an ablation study, parameter analysis, and running time analysis.
false
false
[ "Zikun Deng", "Shifu Chen", "Tobias Schreck", "Dazhen Deng", "Tan Tang", "Mingliang Xu", "Di Weng", "Yingcai Wu" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/HJOANK17sTM", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/DLJtBFaW6HY", "icon": "video" } ]
Vis
2023
Vortex Lens: Interactive Vortex Core Line Extraction using Observed Line Integral Convolution
10.1109/TVCG.2023.3326915
This paper describes a novel method for detecting and visualizing vortex structures in unsteady 2D fluid flows. The method is based on an interactive local reference frame estimation that minimizes the observed time derivative of the input flow field $\mathrm{v}(x, t)$. A locally optimal reference frame $\mathrm{w}(x, t)$ assists the user in the identification of physically observable vortex structures in Observed Line Integral Convolution (LIC) visualizations. The observed LIC visualizations are interactively computed and displayed in a user-steered vortex lens region, embedded in the context of a conventional LIC visualization outside the lens. The locally optimal reference frame is then used to detect observed critical points, where $\mathrm{v}=\mathrm{w}$, which are used to seed vortex core lines. Each vortex core line is computed as a solution of the ordinary differential equation (ODE) $\dot{w}(t)=\mathrm{w}(w(t), t)$, with an observed critical point as initial condition $(w(t_{0}), t_{0})$. During integration, we enforce a strict error bound on the difference between the extracted core line and the integration of a path line of the input vector field, i.e., a solution to the ODE $\dot{v}(t)=\mathrm{v}(v(t), t)$. We experimentally verify that this error depends on the step size of the core line integration. This ensures that our method extracts Lagrangian vortex core lines that are the simultaneous solution of both ODEs with a numerical error that is controllable by the integration step size. We show the usability of our method in the context of an interactive system using a lens metaphor, and evaluate the results in comparison to state-of-the-art vortex core line extraction methods.
false
false
[ "Peter Rautek", "Xingdi Zhang", "Bernhard Woschizka", "Thomas Theußl", "Markus Hadwiger" ]
[ "BP" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/dwYel_PpX-0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/NkWG5RWrWsY", "icon": "video" } ]
Vis
2023
Why Change My Design: Explaining Poorly Constructed Visualization Designs with Explorable Explanations
10.1109/TVCG.2023.3327155
Although visualization tools are widely available and accessible, not everyone knows the best practices and guidelines for creating accurate and honest visual representations of data. Numerous books and articles have been written to expose the misleading potential of poorly constructed charts and teach people how to avoid being deceived by them or making their own mistakes. These readings use various rhetorical devices to explain the concepts to their readers. In our analysis of a collection of books, online materials, and a design workshop, we identified six common explanation methods. To assess the effectiveness of these methods, we conducted two crowdsourced studies (each with $N=125$) to evaluate their ability to teach and persuade people to make design changes. In addition to these existing methods, we brought in the idea of Explorable Explanations, which allows readers to experiment with different chart settings and observe how the changes are reflected in the visualization. While we did not find significant differences across explanation methods, the results of our experiments indicate that, following the exposure to the explanations, the participants showed improved proficiency in identifying deceptive charts and were more receptive to proposed alterations of the visualization design. We discovered that participants were willing to accept more than 60% of the proposed adjustments in the persuasiveness assessment. Nevertheless, we found no significant differences among different explanation methods in convincing participants to accept the modifications.
false
false
[ "Leo Yu-Ho Lo", "Yifan Cao", "Leni Yang", "Huamin Qu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2309.01445v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Bwdk5t-Qy0A", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/966i7IF1IEI", "icon": "video" } ]
Vis
2023
Wizualization: A “Hard Magic” Visualization System for Immersive and Ubiquitous Analytics
10.1109/TVCG.2023.3326580
What if magic could be used as an effective metaphor to perform data visualization and analysis using speech and gestures while mobile and on-the-go? In this paper, we introduce Wizualization, a visual analytics system for eXtended Reality (XR) that enables an analyst to author and interact with visualizations using such a magic system through gestures, speech commands, and touch interaction. Wizualization is a rendering system for current XR headsets that comprises several components: a cross-device (or Arcane Focuses) infrastructure for signalling and view control (Weave), a code notebook (Spellbook), and a grammar of graphics for XR (Optomancy). The system offers users three modes of input: gestures, spoken commands, and materials. We demonstrate Wizualization and its components using a motivating scenario on collaborative data analysis of pandemic data across time and space.
false
false
[ "Andrea Batch", "Peter W. S. Butcher", "Panagiotis D. Ritsos", "Niklas Elmqvist" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/trKRCtbFTUM", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Pv6glacsfgg", "icon": "video" } ]
EuroVis
2023
A Comparative Evaluation of Visual Summarization Techniques for Event Sequences
10.1111/cgf.14821
Real‐world event sequences are often complex and heterogeneous, making it difficult to create meaningful visualizations using simple data aggregation and visual encoding techniques. Consequently, visualization researchers have developed numerous visual summarization techniques to generate concise overviews of sequential data. These techniques vary widely in terms of summary structures and contents, and currently there is a knowledge gap in understanding the effectiveness of these techniques. In this work, we present the design and results of an insight‐based crowdsourcing experiment evaluating three existing visual summarization techniques: CoreFlow, SentenTree, and Sequence Synopsis. We compare the visual summaries generated by these techniques across three tasks, on six datasets, at six levels of granularity. We analyze the effects of these variables on summary quality as rated by participants and completion time of the experiment tasks. Our analysis shows that Sequence Synopsis produces the highest‐quality visual summaries for all three tasks, but understanding Sequence Synopsis results also takes the longest time. We also find that the participants evaluate visual summary quality based on two aspects: content and interpretability. We discuss the implications of our findings on developing and evaluating new visual summarization techniques.
false
false
[ "Kazi Tasnim Zinat", "Jinhua Yang", "Arjun Gandhi", "Nistha Mitra", "Zhicheng Liu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2306.02489v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/C9NWr0kaWDo?si=KFMLUE0BdSwFV5rN&t=99", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/pg6WmvOmLQk?si=0COAR7hH6-fGTfiY", "icon": "video" } ]
EuroVis
2023
A Fully Integrated Pipeline for Visual Carotid Morphology Analysis
10.1111/cgf.14808
Analyzing stenoses of the internal carotids – local constrictions of the artery – is a critical clinical task in cardiovascular disease treatment and prevention. For this purpose, we propose a self‐contained pipeline for the visual analysis of carotid artery geometries. The only inputs are computed tomography angiography (CTA) scans, which are already recorded in clinical routine. We show how integrated model extraction and visualization can help to efficiently detect stenoses and we provide means for automatic, highly accurate stenosis degree computation. We directly connect multiple sophisticated processing stages, including a neural prediction network for lumen and plaque segmentation and automatic global diameter computation. We enable interactive and retrospective user control over the processing stages. Our aims are to increase user trust by making the underlying data validatable on the fly, to decrease adoption costs by minimizing external dependencies, and to optimize scalability by streamlining the data processing. We use interactive visualizations for data inspection and adaption to guide the user through the processing stages. The framework was developed and evaluated in close collaboration with radiologists and neurologists. It has been used to extract and analyze over 100 carotid bifurcation geometries and is built with a modular architecture, available as an extendable open‐source platform.
false
false
[ "Pepe Eulzer", "Fabienne von Deylen", "Chan-Wei Hsu", "Ralph Wickenhöfer", "Carsten M. Klingner", "Kai Lawonn" ]
[ "HM" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/dV2ZgGRY8ls?si=zpDZ5F6bCQxjnLix&t=67", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/yh-eXernLgk?si=1JZp8qhajfSy82SD", "icon": "video" } ]
EuroVis
2023
Been There, Seen That: Visualization of Movement and 3D Eye Tracking Data from Real-World Environments
10.1111/cgf.14838
The distribution of visual attention can be evaluated using eye tracking, providing valuable insights into usability issues and interaction patterns. However, when used in real, augmented, and collaborative environments, new challenges arise that go beyond desktop scenarios and purely virtual environments. Toward addressing these challenges, we present a visualization technique that provides complementary views on the movement and eye tracking data recorded from multiple people in real‐world environments. Our method is based on a space‐time cube visualization and a linked 3D replay of recorded data. We showcase our approach with an experiment that examines how people investigate an artwork collection. The visualization provides insights into how people moved and inspected individual pictures in their spatial context over time. In contrast to existing methods, this analysis is possible for multiple participants without extensive annotation of areas of interest. Our technique was evaluated with a think‐aloud experiment to investigate analysis strategies and an interview with domain experts to examine the applicability in other research fields.
false
false
[ "Nelusa Pathmanathan", "Seyda Öney", "Michael Becher", "Michael Sedlmair", "Daniel Weiskopf", "Kuno Kurzhals" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/B_glqLEgBZw?si=sCVGipmLjn9IKWWW&t=96", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/K4gCf2DYHsQ?si=W3JtTgOAmtGvZE2A", "icon": "video" } ]
EuroVis
2023
Belief Decay or Persistence? A Mixed-method Study on Belief Movement Over Time
10.1111/cgf.14816
When individuals encounter new information (data), that information is incorporated with their existing beliefs (prior) to form a new belief (posterior) in a process referred to as belief updating. While most studies on rational belief updating in visual data analysis elicit beliefs immediately after data is shown, we posit that there may be critical movement in an individual's beliefs when elicited immediately after data is shown v. after a temporal delay (e.g., due to forgetfulness or weak incorporation of the data). Our paper investigates the hypothesis that posterior beliefs elicited after a time interval will “decay” back towards the prior beliefs compared to the posterior beliefs elicited immediately after new data is presented. In this study, we recruit 101 participants to complete three tasks where beliefs are elicited immediately after seeing new data and again after a brief distractor task. We conduct (1) a quantitative analysis of the results to understand if there are any systematic differences in beliefs elicited immediately after seeing new data or after a distractor task and (2) a qualitative analysis of participants' reflections on the reasons for their belief update. While we find no statistically significant global trends across the participants' beliefs elicited immediately v. after the delay, the qualitative analysis provides rich insight into the reasons for an individual's belief movement across 9 prototypical scenarios, which includes (i) decay of beliefs as a result of either forgetting the information shown or strongly held prior beliefs, (ii) strengthening of confidence in updated beliefs by positively integrating the new data and (iii) maintaining a consistently updated belief over time, among others. These results can guide subsequent experiments to disambiguate when and by what mechanism new data is truly incorporated into one's belief system.
false
false
[ "Shrey Gupta", "Alireza Karduni", "Emily Wall" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/7rAJy8BgwQ4?si=axhulYqFOU4vYt4G&t=36", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/X8-Yv7O6VBc?si=3zOBo_pKvkl5YTDb", "icon": "video" } ]
EuroVis
2023
Beyond Alternative Text and Tables: Comparative Analysis of Visualization Tools and Accessibility Methods
10.1111/cgf.14833
Modern visualization software and programming libraries have made data visualization construction easier for everyone. However, the extent of accessibility design they support for blind and low‐vision people is relatively unknown. It is also unclear how they can improve chart content accessibility beyond conventional alternative text and data tables. To address these issues, we examined the current accessibility features in popular visualization tools, revealing limited support for the standard accessibility methods and scarce support for chart content exploration. Next, we investigate two promising accessibility approaches that provide off‐the‐shelf solutions for chart content accessibility: structured navigation and conversational interaction. We present a comparative evaluation study and discuss what to consider when incorporating them into visualization tools.
false
false
[ "Nam Wook Kim", "Grace Ataguba", "Shakila Cherise Joyner", "Chuangdian Zhao", "Hyejin Im" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/1CRJV5gDVpY?si=vCNLK_HEgxqOpim3&t=65", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/-mGrHhrStw8?si=qJR1R7Ypb1aSS91f", "icon": "video" } ]
EuroVis
2023
ChemoGraph: Interactive Visual Exploration of the Chemical Space
10.1111/cgf.14807
Exploratory analysis of the chemical space is an important task in the field of cheminformatics. For example, in drug discovery research, chemists investigate sets of thousands of chemical compounds in order to identify novel yet structurally similar synthetic compounds to replace natural products. Manually exploring the chemical space inhabited by all possible molecules and chemical compounds is impractical, and therefore presents a challenge. To fill this gap, we present ChemoGraph, a novel visual analytics technique for interactively exploring related chemicals. In ChemoGraph, we formalize a chemical space as a hypergraph and apply novel machine learning models to compute related chemical compounds. It uses a database to find related compounds from a known space and a machine learning model to generate new ones, which helps enlarge the known space. Moreover, ChemoGraph highlights interactive features that support users in viewing, comparing, and organizing computationally identified related chemicals. With a drug discovery usage scenario and initial expert feedback from a case study, we demonstrate the usefulness of ChemoGraph.
false
false
[ "Bharat Kale", "Austin Clyde", "Maoyuan Sun", "Arvind Ramanathan", "Rick L. Stevens", "Michael E. Papka" ]
[ "HM" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/dV2ZgGRY8ls?si=aDx-eQlrNyCDPAZf&t=37", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/qtWXrS_02S4?si=avrDP_Th-KuTe8-H", "icon": "video" } ]
EuroVis
2023
DASS Good: Explainable Data Mining of Spatial Cohort Data
10.1111/cgf.14830
Developing applicable clinical machine learning models is a difficult task when the data includes spatial information, for example, radiation dose distributions across adjacent organs at risk. We describe the co‐design of a modeling system, DASS, to support the hybrid human‐machine development and validation of predictive models for estimating long‐term toxicities related to radiotherapy doses in head and neck cancer patients. Developed in collaboration with domain experts in oncology and data mining, DASS incorporates human‐in‐the‐loop visual steering, spatial data, and explainable AI to augment domain knowledge with automatic data mining. We demonstrate DASS with the development of two practical clinical stratification models and report feedback from domain experts. Finally, we describe the design lessons learned from this collaborative experience.
false
false
[ "Andrew Wentzel", "Carla Floricel", "Guadalupe Canahuate", "Mohamed A. Naser", "Abdallah S. Mohamed", "Clifton D. Fuller", "Lisanne van Dijk", "G. Elisabeta Marai" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.04870v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/96YIrSpWNHA?si=BVq2PH65e7QxQGL8&t=95", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/aUK7-jbuToI?si=1Xyxz8KGVxN_6jBP", "icon": "video" } ]
EuroVis
2023
Data Stories of Water: Studying the Communicative Role of Data Visualizations within Long-form Journalism
10.1111/cgf.14815
We present a methodology for making sense of the communicative role of data visualizations in journalistic storytelling and share findings from surveying water‐related data stories. Data stories are a genre of long‐form journalism that integrate text, data visualization, and other visual expressions (e.g., photographs, illustrations, videos) for the purpose of data‐driven storytelling. In the last decade, a considerable number of data stories about a wide range of topics have been published worldwide. Authors use a variety of techniques to make complex phenomena comprehensible and use visualizations as communicative devices that shape the understanding of a given topic. Despite the popularity of data stories, we, as scholars, still lack a methodological framework for assessing the communicative role of visualizations in data stories. To this extent, we draw from data journalism, visual culture, and multimodality studies to propose an interpretative framework in six stages. The process begins with the analysis of content blocks and framing elements and ends with the identification of dimensions, patterns, and relationships between textual and visual elements. The framework is put to the test by analyzing 17 data stories about water‐related issues. Our observations from the survey illustrate how data visualizations can shape the framing of complex topics.
false
false
[ "Manuela Garretón", "Francesca Morini", "Daniela Paz Moyano", "Gianna-Carina Grün", "Denis Parra" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/7rAJy8BgwQ4?si=kEiEAdAYypywNqLK&t=5", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/G2k-YK-VwQA?si=0EEqFkk-9V8pcYGM", "icon": "video" } ]
EuroVis
2023
Do Disease Stories need a Hero? Effects of Human Protagonists on a Narrative Visualization about Cerebral Small Vessel Disease
10.1111/cgf.14817
Authors use various media formats to convey disease information to a broad audience, from articles and videos to interviews or documentaries. These media often include human characters, such as patients or treating physicians, who are involved with the disease. While artistic media, such as hand‐crafted illustrations and animations are used for health communication in many cases, our goal is to focus on data‐driven visualizations. Over the last decade, narrative visualization has experienced increasing prominence, employing storytelling techniques to present data in an understandable way. Similar to classic storytelling formats, narrative medical visualizations may also take a human character‐centered design approach. However, the impact of this form of data communication on the user is largely unexplored. This study investigates the protagonist's influence on user experience in terms of engagement, identification, self‐referencing, emotional response, perceived credibility, and time spent in the story. Our experimental setup utilizes a character‐driven story structure for disease stories derived from Joseph Campbell's Hero's Journey. Using this structure, we generated three conditions for a cerebral small vessel disease story that vary by their protagonist: (1) a patient, (2) a physician, and (3) a base condition with no human protagonist. These story variants formed the basis for our hypotheses on the effect of a human protagonist in disease stories, which we evaluated in an online study with 30 participants. Our findings indicate that a human protagonist exerts various influences on the story perception and that these also vary depending on the type of protagonist.
false
false
[ "Sarah Mittenentzwei", "Veronika Weiß", "Stefanie Schreiber", "Laura A. Garrison", "Stefan Bruckner", "Malte Pfister", "Bernhard Preim", "Monique Meuschke" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/7rAJy8BgwQ4?si=T5tyAk_RCPtGLfsC&t=67", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/KoO2iGStI0Q?si=SL-_idJbZecXUGLR", "icon": "video" } ]
EuroVis
2023
Don't Peek at My Chart: Privacy-preserving Visualization for Mobile Devices
10.1111/cgf.14818
Data visualizations have been widely used on mobile devices like smartphones for various tasks (e.g., visualizing personal health and financial data), making it convenient for people to view such data anytime and anywhere. However, others nearby can also easily peek at the visualizations, resulting in personal data disclosure. In this paper, we propose a perception‐driven approach to transform mobile data visualizations into privacy‐preserving ones. Specifically, based on human visual perception, we develop a masking scheme to adjust the spatial frequency and luminance contrast of colored visualizations. The resulting visualization retains its original information in close proximity but reduces visibility when viewed from a certain distance or farther away. We conducted two user studies to inform the design of our approach (N=16) and systematically evaluate its performance (N=18), respectively. The results demonstrate the effectiveness of our approach in terms of privacy preservation for mobile data visualizations.
false
false
[ "Songheng Zhang", "Dong Ma 0001", "Yong Wang 0021" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.13307v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/7rAJy8BgwQ4?si=fKQ8xRR3oBLnRMIH&t=96", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/l-EK_DnLFws?si=xsD_vQnZUAIYLzX5", "icon": "video" } ]
EuroVis
2023
Doom or Deliciousness: Challenges and Opportunities for Visualization in the Age of Generative Models
10.1111/cgf.14841
Generative text‐to‐image models (as exemplified by DALL‐E, MidJourney, and Stable Diffusion) have recently made enormous technological leaps, demonstrating impressive results in many graphical domains—from logo design to digital painting to photographic composition. However, the quality of these results has led to existential crises in some fields of art, leading to questions about the role of human agency in the production of meaning in a graphical context. Such issues are central to visualization, and while these generative models have yet to be widely applied in visualization, it seems only a matter of time until their integration is manifest. Seeking to circumvent similar ponderous dilemmas, we attempt to understand the roles that generative models might play across visualization. We do so by constructing a framework that characterizes what these technologies offer at various stages of the visualization workflow, augmented and analyzed through semi‐structured interviews with 21 experts from related domains. Through this work, we map the space of opportunities and risks that might arise in this intersection, identifying doomsday prophecies and delicious low‐hanging fruits that are ripe for research.
false
false
[ "Victor Schetinger", "Sara Di Bartolomeo", "Mennatallah El-Assady", "Andrew M. McNutt", "Matthias Miller", "João Paulo Apolinário Passos", "Jane Lydia Adams" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/3jrcm", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/6G2da-GfEdA?si=XaBSq8v10BiR0oSG&t=69", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/i8CQBfVinUs?si=TrOmRd9zYDrWxsYE", "icon": "video" } ]
EuroVis
2023
Doppler Volume Rendering: A Dynamic, Piecewise Linear Spectral Representation for Visualizing Astrophysics Simulations
10.1111/cgf.14810
We present a novel approach for rendering volumetric data including the Doppler effect of light. Similar to the acoustic Doppler effect, which is caused by relative motion between a sound emitter and an observer, light waves also experience compression or expansion when emitter and observer exhibit relative motion. We account for this by employing spectral volume rendering in an emission–absorption model, with the volumetric matter moving according to an accompanying vector field, and emitting and attenuating light at wavelengths subject to the Doppler effect. By introducing a novel piecewise linear representation of the involved light spectra, we achieve accurate volume rendering at interactive frame rates. We compare our technique to rendering with traditional point‐based spectral representation, and demonstrate its utility using a simulation of galaxy formation.
false
false
[ "Reem Alghamdi", "Thomas Müller 0005", "Alberto Jaspe Villanueva", "Markus Hadwiger", "Filip Sadlo" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/8RAjxI4jF0g?si=pl_djDSI5EGKQTN5&t=5", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/znTUaYtN4wQ?si=Sa9p0k3uSXBFsamd", "icon": "video" } ]
EuroVis
2023
Evaluating View Management for Situated Visualization in Web-based Handheld AR
10.1111/cgf.14835
As visualization makes the leap to mobile and situated settings, where data is increasingly integrated with the physical world using mixed reality, there is a corresponding need for effectively managing the immersed user's view of situated visualizations. In this paper we present an analysis of view management techniques for situated 3D visualizations in handheld augmented reality: a shadowbox, a world‐in‐miniature metaphor, and an interactive tour. We validate these view management solutions through a concrete implementation of all techniques within a situated visualization framework built using a web‐based augmented reality visualization toolkit, and present results from a user study in augmented reality accessed using handheld mobile devices.
false
false
[ "Andrea Batch", "Sungbok Shin", "Julia Liu", "Peter W. S. Butcher", "Panagiotis D. Ritsos", "Niklas Elmqvist" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/B_glqLEgBZw?si=vyPAoRAjwFJluQqo&t=6", "icon": "video" } ]
EuroVis
2023
Exploring Interpersonal Relationships in Historical Voting Records
10.1111/cgf.14824
Historical records from democratic processes and negotiation of constitutional texts are a complex type of data to navigate due to the many different elements that are constantly interacting with one another: people, timelines, different proposed documents, changes to such documents, and voting to approve or reject those changes. In particular, voting records can offer various insights about relationships between people of note in that historical context, such as alliances that can form and dissolve over time and people with unusual behavior. In this paper, we present a toolset developed to aid users in exploring relationships in voting records from a particular domain of constitutional conventions. The toolset consists of two elements: a dataset visualizer, which shows the entire timeline of a convention and allows users to investigate relationships at different moments in time via dimensionality reduction, and a person visualizer, which shows details of a given person's activity in that convention to aid in understanding the behavior observed in the dataset visualizer. We discuss our design choices and how each tool in those elements works towards our goals, and how they were perceived in an evaluation conducted with domain experts.
false
false
[ "Gabriel Dias Cantareira", "Yiwen Xing", "Nicholas Cole", "Rita Borgo", "Alfie Abdul-Rahman" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/EnUr6cCmlTY?si=jrbVfr1PMWvso_Zq&t=5", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/q5SCDTn2vmw?si=Ufr3AochrCPvPTXA", "icon": "video" } ]
EuroVis
2023
Ferret: Reviewing Tabular Datasets for Manipulation
10.1111/cgf.14822
How do we ensure the veracity of science? The act of manipulating or fabricating scientific data has led to many high‐profile fraud cases and retractions. Detecting manipulated data, however, is a challenging and time‐consuming endeavor. Automated detection methods are limited due to the diversity of data types and manipulation techniques. Furthermore, patterns automatically flagged as suspicious can have reasonable explanations. Instead, we propose a nuanced approach where experts analyze tabular datasets, e.g., as part of the peer‐review process, using a guided, interactive visualization approach. In this paper, we present an analysis of how manipulated datasets are created and the artifacts these techniques generate. Based on these findings, we propose a suite of visualization methods to surface potential irregularities. We have implemented these methods in Ferret, a visualization tool for data forensics work. Ferret makes potential data issues salient and provides guidance on spotting signs of tampering and differentiating them from truthful data.
false
false
[ "Devin Lange", "Shaurya Sahai", "Jeff M. Phillips", "Alexander Lex" ]
[]
[ "PW", "P", "V", "C", "O" ]
[ { "name": "Project Website with Demo", "url": "https://ferret.sci.utah.edu/", "icon": "project_website" }, { "name": "Paper Preprint", "url": "https://osf.io/anj8v", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/NtYGV-XJoBE?si=2L5FcT0UzGdMwD6o", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/NcKqIBiipvA?si=9AHNzyy4Mgadj3KA", "icon": "video" }, { "name": "Blog Post", "url": "https://vdl.sci.utah.edu/blog/2023/09/15/ferret/", "icon": "other" }, { "name": "Source Code", "url": "https://github.com/visdesignlab/Ferret", "icon": "code" }, { "name": "Slides — Keynote", "url": "https://sci.utah.edu/~vdl/papers/2023_eurovis_ferret_slides.key", "icon": "other" }, { "name": "Slides — PDF", "url": "https://sci.utah.edu/~vdl/papers/2023_eurovis_ferret_slides.pdf", "icon": "other" }, { "name": "Supplemental Material", "url": "https://osf.io/sfxj2/", "icon": "other" }, { "name": "Case Study", "url": "https://www.youtube.com/watch?v=T3_Rmy3pBqU", "icon": "video" } ]
EuroVis
2023
FlexEvent: going beyond Case-Centric Exploration and Analysis of Multivariate Event Sequences
10.1111/cgf.14820
In many domains, multivariate event sequence data is collected focused around an entity (the case). Typically, each event has multiple attributes, for example, in healthcare a patient has events such as hospitalization, medication, and surgery. In addition to the multivariate events, also the case (a specific attribute, e.g., patient) has associated multivariate data (e.g., age, gender, weight). Current work typically only visualizes one attribute per event (label) in the event sequences. As a consequence, events can only be explored from a predefined case‐centric perspective. However, to find complex relations from multiple perspectives (e.g., from different case definitions, such as doctor), users also need an event‐ and attribute‐centric perspective. In addition, support is needed to effortlessly switch between and within perspectives. To support such a rich exploration, we present FlexEvent: an exploration and analysis method that enables investigation beyond a fixed case‐centric perspective. Based on an adaptation of existing visualization techniques, such as scatterplots and juxtaposed small multiples, we enable flexible switching between different perspectives to explore the multivariate event sequence data needed to answer multi‐perspective hypotheses. We evaluated FlexEvent with three domain experts in two use cases with sleep disorder and neonatal ICU data that show our method facilitates experts in exploring and analyzing real‐world multivariate sequence data from different perspectives.
false
false
[ "Sanne van der Linden", "Bernice M. Wulterkens", "Merel van Gilst", "Sebastiaan Overeem", "Carola van Pul", "Anna Vilanova", "Stef van den Elzen" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/C9NWr0kaWDo?si=i0-jHtH44CABHL6y&t=68", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/FHl_6vPr3uc?si=psFB5zANemsFcZUF", "icon": "video" } ]
EuroVis
2023
GO-Compass: Visual Navigation of Multiple Lists of GO terms
10.1111/cgf.14829
Analysis pipelines in genomics, transcriptomics, and proteomics commonly produce lists of genes, e.g., differentially expressed genes. Often these lists overlap only partly or not at all and contain too many genes for manual comparison. However, using background knowledge, such as the functional annotations of the genes, the lists can be abstracted to functional terms. One approach is to run Gene Ontology (GO) enrichment analyses to determine over‐ and/or underrepresented functions for every list of genes. Due to the hierarchical structure of the Gene Ontology, lists of enriched GO terms can contain many closely related terms, rendering the lists still long, redundant, and difficult to interpret for researchers. In this paper, we present GO‐Compass (Gene Ontology list comparison using Semantic Similarity), a visual analytics tool for the dispensability reduction and visual comparison of lists of GO terms. For dispensability reduction, we adapted the REVIGO algorithm, a summarization method based on the semantic similarity of GO terms, to perform hierarchical dispensability clustering on multiple lists. In an interactive dashboard, GO‐Compass offers several visualizations for the comparison and improved interpretability of GO term lists. The hierarchical dispensability clustering is visualized as a tree, where users can interactively filter out dispensable GO terms and create flat clusters by cutting the tree at a chosen dispensability. The flat clusters are visualized in animated treemaps and are compared using a correlation heatmap, UpSet plots, and bar charts. With two use cases on published datasets from different omics domains, we demonstrate the general applicability and effectiveness of our approach. In the first use case, we show how the tool can be used to compare lists of differentially expressed genes from a transcriptomics pipeline and incorporate gene information into the analysis. In the second use case using genomics data, we show how GO‐Compass facilitates the analysis of many hundreds of GO terms. For qualitative evaluation of the tool, we conducted feedback sessions with five domain experts and received positive comments. GO‐Compass is part of the Tue‐Vis Visualization Server as a web application available at https://go-compass-tuevis.cs.uni-tuebingen.de/
false
false
[ "Theresa Harbig", "Mathias Witte Paz", "Kay Nieselt" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/96YIrSpWNHA?si=_-_Is8e7F79-amqO&t=65", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/_DHPxCdCigk?si=ranx17zy4K5SMIVF", "icon": "video" } ]
EuroVis
2023
Human-Computer Collaboration for Visual Analytics: an Agent-based Framework
10.1111/cgf.14823
The visual analytics community has long aimed to understand users better and assist them in their analytic endeavors. As a result, numerous conceptual models of visual analytics aim to formalize common workflows, techniques, and goals leveraged by analysts. While many of the existing approaches are rich in detail, they each are specific to a particular aspect of the visual analytic process. Furthermore, with an ever‐expanding array of novel artificial intelligence techniques and advances in visual analytic settings, existing conceptual models may not provide enough expressivity to bridge the two fields. In this work, we propose an agent‐based conceptual model for the visual analytic process by drawing parallels from the artificial intelligence literature. We present three examples from the visual analytics literature as case studies and examine them in detail using our framework. Our simple yet robust framework unifies the visual analytic pipeline to enable researchers and practitioners to reason about scenarios that are becoming increasingly prominent in the field, namely mixed‐initiative, guided, and collaborative analysis. Furthermore, it will allow us to characterize analysts, visual analytic settings, and guidance from the lenses of human agents, environments, and artificial agents, respectively.
false
false
[ "Shayan Monadjemi", "Mengtian Guo", "David Gotz", "Roman Garnett", "Alvitta Ottley" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.09415v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Qpr17pRxMq4?si=XHXyS0AFhPMEcwo0&t=65", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/npBpeBxHE58?si=EX1dHM2zAEvQZXk6", "icon": "video" } ]
EuroVis
2023
Illustrative Motion Smoothing for Attention Guidance in Dynamic Visualizations
10.1111/cgf.14836
3D animations are an effective method to learn about complex dynamic phenomena, such as mesoscale biological processes. The animators' goals are to convey a sense of the scene's overall complexity while, at the same time, visually guiding the user through a story of subsequent events embedded in the chaotic environment. Animators use a variety of visual emphasis techniques to guide the observers' attention through the story, such as highlighting, halos – or by manipulating motion parameters of the scene. In this paper, we investigate the effect of smoothing the motion of contextual scene elements to attract attention to focus elements of the story exhibiting high‐frequency motion. We conducted a crowdsourced study with 108 participants observing short animations with two illustrative motion smoothing strategies: geometric smoothing through noise reduction of contextual motion trajectories and visual smoothing through motion blur of context items. We investigated the observers' ability to follow the story as well as the effect of the techniques on speed perception in a molecular scene. Our results show that moderate motion blur significantly improves users' ability to follow the story. Geometric motion smoothing is less effective but increases the visual appeal of the animation. However, both techniques also slow down the perceived speed of the animation. We discuss the implications of these results and derive design guidelines for animators of complex dynamic visualizations.
false
false
[ "Johannes Eschner", "Peter Mindek", "Manuela Waldner" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2305.16030v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/B_glqLEgBZw?si=xIhZ-cKnyzYME5k2&t=35", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Vx35B_QDvs8?si=XBALPGVPcDTjVRRm", "icon": "video" } ]
EuroVis
2023
LINGO: Visually Debiasing Natural Language Instructions to Support Task Diversity
10.1111/cgf.14840
Cross‐task generalization is a significant outcome that defines mastery in natural language understanding. Humans show a remarkable aptitude for this, and can solve many different types of tasks, given definitions in the form of textual instructions and a small set of examples. Recent work with pre‐trained language models mimics this learning style: users can define and exemplify a task for the model to attempt as a series of natural language prompts or instructions. While prompting approaches have led to higher cross‐task generalization compared to traditional supervised learning, analyzing ‘bias’ in the task instructions given to the model is a difficult problem, and has thus been relatively unexplored. For instance, are we truly modeling a task, or are we modeling a user's instructions? To help investigate this, we develop LINGO, a novel visual analytics interface that supports an effective, task‐driven workflow to (1) help identify bias in natural language task instructions, (2) alter (or create) task instructions to reduce bias, and (3) evaluate pre‐trained model performance on debiased task instructions. To robustly evaluate LINGO, we conduct a user study with both novice and expert instruction creators, over a dataset of 1,616 linguistic tasks and their natural language instructions, spanning 55 different languages. For both user groups, LINGO promotes the creation of more difficult tasks for pre‐trained models, that contain higher linguistic diversity and lower instruction bias. We additionally discuss how the insights learned in developing and evaluating LINGO can aid in the design of future dashboards that aim to minimize the effort involved in prompt creation across multiple domains.
false
false
[ "Anjana Arunkumar", "Shubham Sharma", "Rakhi Agrawal", "Sriram Chandrasekaran", "Chris Bryan" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.06184v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/6G2da-GfEdA?si=qlLLozcL2noh68zd&t=37", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/KBKALQlUsKg?si=HRbt4Pt0dq_CDlD6", "icon": "video" } ]
EuroVis
2023
Memory-Efficient GPU Volume Path Tracing of AMR Data Using the Dual Mesh
10.1111/cgf.14811
A common way to render cell‐centric adaptive mesh refinement (AMR) data is to compute the dual mesh and visualize that with a standard unstructured element renderer. While the dual mesh provides a high‐quality interpolator, the memory requirements of the dual mesh data structure are significantly higher than those of the original grid, which prevents rendering very large data sets. We introduce a GPU‐friendly data structure and a clustering algorithm that allow for efficient AMR dual mesh rendering with a competitive memory footprint. Fundamentally, any off‐the‐shelf unstructured element renderer running on GPUs could be extended to support our data structure just by adding a gridlet element type in addition to the standard tetrahedra, pyramids, wedges, and hexahedra supported by default. We integrated the data structure into a volumetric path tracer to compare it to various state‐of‐the‐art unstructured element sampling methods. We show that our data structure easily competes with these methods in terms of rendering performance, but is much more memory‐efficient.
false
false
[ "Stefan Zellmann", "Qi Wu 0015", "Kwan-Liu Ma", "Ingo Wald" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/8RAjxI4jF0g?si=DWVDPEhT610-slSX&t=35", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/zcrdT3ZjD4c?si=hWgoBSo-4gjutP_w", "icon": "video" } ]
EuroVis
2023
Mini-VLAT: A Short and Effective Measure of Visualization Literacy
10.1111/cgf.14809
The visualization community regards visualization literacy as a necessary skill. Yet, despite the recent increase in research into visualization literacy by the education and visualization communities, we lack practical and time‐effective instruments for the widespread measurements of people's comprehension and interpretation of visual designs. We present Mini‐VLAT, a brief but practical visualization literacy test. The Mini‐VLAT is a 12‐item short form of the 53‐item Visualization Literacy Assessment Test (VLAT). The Mini‐VLAT is reliable (coefficient omega = 0.72) and strongly correlates with the VLAT. Five visualization experts validated the Mini‐VLAT items, yielding an average content validity ratio (CVR) of 0.6. We further validate Mini‐VLAT by demonstrating a strong positive correlation between study participants' Mini‐VLAT scores and their aptitude for learning an unfamiliar visualization using a Parallel Coordinate Plot test. Overall, the Mini‐VLAT items showed a similar pattern of validity and reliability as the 53‐item VLAT. The results show that Mini‐VLAT is a psychometrically sound and practical short measure of visualization literacy.
false
false
[ "Saugat Pandey", "Alvitta Ottley" ]
[ "BP" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.07905v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/dV2ZgGRY8ls?si=5KFysVCIRnBdIiBB&t=6", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/GqX5HJYbqGc?si=XMcxP-vfIS4GvkZl", "icon": "video" } ]
EuroVis
2023
ParaDime: A Framework for Parametric Dimensionality Reduction
10.1111/cgf.14834
ParaDime is a framework for parametric dimensionality reduction (DR). In parametric DR, neural networks are trained to embed high‐dimensional data items in a low‐dimensional space while minimizing an objective function. ParaDime builds on the idea that the objective functions of several modern DR techniques result from transformed inter‐item relationships. It provides a common interface for specifying these relations and transformations and for defining how they are used within the losses that govern the training process. Through this interface, ParaDime unifies parametric versions of DR techniques such as metric MDS, t‐SNE, and UMAP. It allows users to fully customize all aspects of the DR process. We show how this ease of customization makes ParaDime suitable for experimenting with interesting techniques such as hybrid classification/embedding models and supervised DR. This way, ParaDime opens up new possibilities for visualizing high‐dimensional data.
false
false
[ "Andreas P. Hinterreiter", "Christina Humer", "Bernhard Kainz", "Marc Streit" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2210.04582v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/1CRJV5gDVpY?si=a9a9t2n2jXXD5JJ4&t=95", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/yuDZ57aJBP0?si=ILWC7nwu5Zg0t1Lf", "icon": "video" } ]
EuroVis
2023
Process and Pitfalls of Online Teaching and Learning with Design Study "Lite" Methodology: A Retrospective Analysis
10.1111/cgf.14813
Design studies are an integral method of visualization research with hundreds of instances in the literature. Although taught as a theory, the practical implementation of design studies is often excluded from visualization pedagogy due to the lengthy time commitments associated with such studies. Recent research has addressed this challenge and developed an expedited design study framework, the Design Study "Lite" Methodology (DSLM), which can implement design studies with novice students within just 14 weeks. The framework was developed and evaluated based on five semesters of in‐person data visualization courses with 30 students or less and was implemented in conjunction with Service‐Learning (S‐L). With the growth and popularity of the data visualization field—and the teaching environment created by the COVID‐19 pandemic—more academic institutions are offering visualization courses online. Therefore, in this paper, we strengthen and validate the epistemological foundations of the DSLM framework by testing its (1) adaptability to online learning environments and conditions and (2) scalability to larger classes with up to 57 students. We present two online implementations of the DSLM framework, with and without Service‐Learning (S‐L), to test the adaptability and scalability of the framework. We further demonstrate that the framework can be applied effectively without the S‐L component. We reflect on our experience with the online DSLM implementations and contribute a detailed retrospective analysis using thematic analysis and grounded theory methods to draw valuable recommendations and guidelines for future applications of the framework. This work verifies that DSLM can be used successfully in online classes to teach design study methodology. Finally, we contribute novel additions to the DSLM framework to further enhance it for teaching and learning design studies in the classroom. The preprint and supplementary materials for this paper can be found at https://osf.io/6bjx5/.
false
false
[ "Uzma Haque Syeda", "Cody Dunne", "Michelle A. Borkin" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/GonMgzpFxDI?si=mEz3o1ozDa7UwE8T&t=5", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/FheRUMju5xA?si=oSB0w8cW7cRoks61", "icon": "video" } ]
EuroVis
2023
RectEuler: Visualizing Intersecting Sets using Rectangles
10.1111/cgf.14814
Euler diagrams are a popular technique to visualize set‐typed data. However, creating diagrams using simple shapes remains a challenging problem for many complex, real‐life datasets. To solve this, we propose RectEuler: a flexible, fully‐automatic method using rectangles to create Euler‐like diagrams. We use an efficient mixed‐integer optimization scheme to place set labels and element representatives (e.g., text or images) in conjunction with rectangles describing the sets. By defining appropriate constraints, we adhere to well‐formedness properties and aesthetic considerations. If a dataset cannot be created within a reasonable time or at all, we iteratively split the diagram into multiple components until a drawable solution is found. Redundant encoding of the set membership using dots and set lines improves the readability of the diagram. Our web tool lets users see how the layout changes throughout the optimization process and provides interactive explanations. For evaluation, we perform quantitative and qualitative analysis across different datasets and compare our method to state‐of‐the‐art Euler diagram generation methods.
false
false
[ "Patrick Paetzold", "Rebecca Kehlbeck", "Hendrik Strobelt", "Yumeng Xue", "Sabine Storandt", "Oliver Deussen" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/k8Jqa-LnDE8?si=9yxdbRGGy3jkUF6N&t=98", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/A2aoqwLuIow?si=ROPpi-LBsW6wLiAw", "icon": "video" } ]
EuroVis
2023
State-of-the-art in Large-Scale Volume Visualization Beyond Structured Data
10.1111/cgf.14857
Volume data these days is usually massive in terms of its topology, multiple fields, or temporal component. With the gap between compute and memory performance widening, the memory subsystem becomes the primary bottleneck for scientific volume visualization. Simple, structured, regular representations are often infeasible because the buses and interconnects involved need to accommodate the data required for interactive rendering. In this state-of-the-art report, we review works focusing on large-scale volume rendering beyond those typical structured and regular grid representations. We focus primarily on hierarchical and adaptive mesh refinement representations, unstructured meshes, and compressed representations that gained recent popularity. We review works that approach this kind of data using strategies such as out-of-core rendering, massive parallelism, and other strategies to cope with the sheer size of the ever-increasing volume of data produced by today's supercomputers and acquisition devices. We emphasize the data management side of large-scale volume rendering systems and also include a review of tools that support the various volume data types discussed.
false
false
[ "Jonathan Sarton", "Stefan Zellmann", "Serkan Demirci", "Ugur Güdükbay", "Welcome Alexandre-Barff", "Laurent Lucas", "Jean-Michel Dischler", "Stefan Wesner", "Ingo Wald" ]
[]
[]
[]
EuroVis
2023
State-of-the-Art Report on Optimizing Particle Advection Performance
10.1111/cgf.14858
The computational work to perform particle advection-based flow visualization techniques varies based on many factors, including number of particles, duration, and mesh type. In many cases, the total work is significant, and total execution time ("performance") is a critical issue. This state-of-the-art report considers existing optimizations for particle advection, using two high-level categories: algorithmic optimizations and hardware efficiency. The sub-categories for algorithmic optimizations include solvers, cell locators, I/O efficiency, and precomputation, while the sub-categories for hardware efficiency all involve parallelism: shared-memory, distributed-memory, and hybrid. Finally, this STAR concludes by identifying current gaps in our understanding of particle advection performance and its optimizations.
false
false
[ "Abhishek Yenpure", "Sudhanshu Sane", "Roba Binyahib", "David Pugmire", "Christoph Garth", "Hank Childs" ]
[]
[]
[]
EuroVis
2023
Tac-Anticipator: Visual Analytics of Anticipation Behaviors in Table Tennis Matches
10.1111/cgf.14825
Anticipation skill is important for elite racquet sports players. Successful anticipation allows them to predict the actions of the opponent better and take early actions in matches. Existing studies of anticipation behaviors, largely based on the analysis of in‐lab behaviors, failed to capture the characteristics of in‐situ anticipation behaviors in real matches. This research proposes a data‐driven approach for research on anticipation behaviors to gain more accurate and reliable insight into anticipation skills. Collaborating with domain experts in table tennis, we develop a complete solution that includes data collection, the development of a model to evaluate anticipation behaviors, and the design of a visual analytics system called Tac‐Anticipator. Our case study reveals the strengths and weaknesses of top table tennis players' anticipation behaviors. In summary, our work enriches the research methods and guidelines for visual analytics of anticipation behaviors.
false
false
[ "Jiachen Wang", "Yihong Wu", "Xiaolong Zhang 0001", "Yixin Zeng", "Zheng Zhou", "Hui Zhang 0051", "Xiao Xie", "Yingcai Wu" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/EnUr6cCmlTY?si=isaxi49-bhDkZF8V&t=94", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/q5SCDTn2vmw?si=e6l3xpO1Ksu8KaIR", "icon": "video" } ]
EuroVis
2023
Teru Teru Bozu: Defensive Raincloud Plots
10.1111/cgf.14826
Univariate visualizations like histograms, rug plots, or box plots provide concise visual summaries of distributions. However, each individual visualization may fail to robustly distinguish important features of a distribution, or provide sufficient information for all of the relevant tasks involved in summarizing univariate data. One solution is to juxtapose or superimpose multiple univariate visualizations in the same chart, as in Allen et al.'s [APW*19] “raincloud plots.” In this paper I examine the design space of raincloud plots, and, through a series of simulation studies, explore designs where the component visualizations mutually “defend” against situations where important distribution features are missed or trivial features are given undue prominence. I suggest a class of “defensive” raincloud plot designs that provide good mutual coverage for surfacing distributional features of interest.
false
false
[ "Michael Correll" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.17709v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/SVPkKGydF3g?si=WVGjw5mNwomZfLMY&t=6", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/7UpbGGEpZ-I?si=v_o3WMJ8KKid8u6a", "icon": "video" } ]
EuroVis
2023
The State of the Art in Creating Visualization Corpora for Automated Chart Analysis
10.1111/cgf.14855
We present a state-of-the-art report on visualization corpora in automated chart analysis research. We survey 56 papers that created or used a visualization corpus as the input of their research techniques or systems. Based on a multi-level task taxonomy that identifies the goal, method, and outputs of automated chart analysis, we examine the property space of existing chart corpora along five dimensions: format, scope, collection method, annotations, and diversity. Through the survey, we summarize common patterns and practices of creating chart corpora, identify research gaps and opportunities, and discuss the desired properties of future benchmark corpora and the required tools to create them.
false
false
[ "Chen Chen 0080", "Zhicheng Liu 0001" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2305.14525v2", "icon": "paper" } ]
EuroVis
2023
The State of the Art in Visualizing Dynamic Multivariate Networks
10.1111/cgf.14856
Most real-world networks are both dynamic and multivariate in nature, meaning that the network is associated with various attributes and both the network structure and attributes evolve over time. Visualizing dynamic multivariate networks is of great significance to the visualization community because of their wide applications across multiple domains. However, it remains challenging because the techniques should focus on representing the network structure, attributes and their evolution concurrently. Many real-world network analysis tasks require the concurrent usage of the three aspects of the dynamic multivariate networks. In this paper, we analyze current techniques and present a taxonomy to classify the existing visualization techniques based on three aspects: temporal encoding, topology encoding, and attribute encoding. Finally, we survey application areas and evaluation methods; and discuss challenges for future research.
false
false
[ "Bharat Kale", "Maoyuan Sun", "Michael E. Papka" ]
[]
[]
[]
EuroVis
2023
Unfolding Edges: Adding Context to Edges in Multivariate Graph Visualization
10.1111/cgf.14831
Existing work on visualizing multivariate graphs is primarily concerned with representing the attributes of nodes. Even though edges are the constitutive elements of networks, there have been only a few attempts to visualize attributes of edges. In this work, we focus on the critical importance of edge attributes for interpreting network visualizations and building trust in the underlying data. We propose ‘unfolding of edges’ as an interactive approach to integrate multivariate edge attributes dynamically into existing node‐link diagrams. Unfolding edges is an in‐situ approach that gradually transforms basic links into detailed representations of the associated edge attributes. This approach extends focus+context, semantic zoom, and animated transitions for network visualizations to accommodate edge details on‐demand without cluttering the overall graph layout. We explore the design space for the unfolding of edges, which covers aspects of making space for the unfolding, of actually representing the edge context, and of navigating between edges. To demonstrate the utility of our approach, we present two case studies in the context of historical network analysis and computational social science. For these, web‐based prototypes were implemented based on which we conducted interviews with domain experts. The experts' feedback suggests that the proposed unfolding of edges is a useful tool for exploring rich edge information of multivariate graphs.
false
false
[ "Mark-Jan Bludau", "Marian Dörk", "Christian Tominski" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/1CRJV5gDVpY?si=Iq_sO940Ety0OTy0&t=5", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Dy6C_Zn7zGg?si=KZ4TDCrO44ifvoV_", "icon": "video" } ]
EuroVis
2023
VA + Embeddings STAR: A State-of-the-Art Report on the Use of Embeddings in Visual Analytics
10.1111/cgf.14859
Over the past years, an increasing number of publications in information visualization, especially within the field of visual analytics, have mentioned the term "embedding" when describing the computational approach. Within this context, embeddings are usually (relatively) low-dimensional, distributed representations of various data types (such as texts or graphs), and since they have proven to be extremely useful for a variety of data analysis tasks across various disciplines and fields, they have become widely used. Existing visualization approaches aim to either support exploration and interpretation of the embedding space through visual representation and interaction, or aim to use embeddings as part of the computational pipeline for addressing downstream analytical tasks. To the best of our knowledge, this is the first survey that takes a detailed look at embedding methods through the lens of visual analytics, and the purpose of our survey article is to provide a systematic overview of the state of the art within the emerging field of embedding visualization. We design a categorization scheme for our approach, analyze the current research frontier based on peer-reviewed publications, and discuss existing trends, challenges, and potential research directions for using embeddings in the context of visual analytics. Furthermore, we provide an interactive survey browser for the collected and categorized survey data, which currently includes 122 entries that appeared between 2007 and 2023.
false
false
[ "Zeyang Huang", "Daniel Witschard", "Kostiantyn Kucher", "Andreas Kerren" ]
[]
[]
[]
EuroVis
2023
VENUS: A Geometrical Representation for Quantum State Visualization
10.1111/cgf.14827
Visualizations have played a crucial role in helping quantum computing users explore quantum states in various quantum computing applications. Among them, Bloch Sphere is the widely‐used visualization for showing quantum states, which leverages angles to represent quantum amplitudes. However, it cannot support the visualization of quantum entanglement and superposition, the two essential properties of quantum computing. To address this issue, we propose VENUS, a novel visualization for quantum state representation. By explicitly correlating 2D geometric shapes based on the math foundation of quantum computing characteristics, VENUS effectively represents quantum amplitudes of both the single qubit and two qubits for quantum entanglement. Also, we use multiple coordinated semicircles to naturally encode probability distribution, making the quantum superposition intuitive to analyze. We conducted two well‐designed case studies and an in‐depth expert interview to evaluate the usefulness and effectiveness of VENUS. The result shows that VENUS can effectively facilitate the exploration of quantum states for the single qubit and two qubits.
false
false
[ "Shaolun Ruan", "Ribo Yuan", "Qiang Guan", "Yanna Lin", "Ying Mao", "Weiwen Jiang", "Zhepeng Wang", "Wei Xu 0020", "Yong Wang 0021" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.08366v4", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/SVPkKGydF3g?si=IbrJNjTt-dvm1u7G&t=35", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/jw0CwM3uYe0?si=WohTZwU5nWAxBM7F", "icon": "video" } ]
EuroVis
2023
VisCoMET: Visually Analyzing Team Collaboration in Medical Emergency Trainings
10.1111/cgf.14819
Handling emergencies requires efficient and effective collaboration of medical professionals. To analyze their performance, in an application study, we have developed VisCoMET, a visual analytics approach displaying interactions of healthcare personnel in a triage training of a mass casualty incident. The application scenario stems from social interaction research, where the collaboration of teams is studied from different perspectives. We integrate recorded annotations from multiple sources, such as recorded videos of the sessions, transcribed communication, and eye‐tracking information. For each session, an information‐rich timeline visualizes events across these different channels, specifically highlighting interactions between the team members. We provide algorithmic support to identify frequent event patterns and to search for user‐defined event sequences. Comparing different teams, an overview visualization aggregates each training session in a visual glyph as a node, connected to similar sessions through edges. An application example shows the usage of the approach in the comparative analysis of triage training sessions, where multiple teams encountered the same scene, and highlights discovered insights. The approach was evaluated through feedback from visualization and social interaction experts. The results show that the approach supports reflecting on teams' performance by exploratory analysis of collaboration behavior while particularly enabling the comparison of triage training sessions.
false
false
[ "Carina Liebers", "Shivam Agarwal", "Maximilian Krug", "Karola Pitsch", "Fabian Beck 0001" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/C9NWr0kaWDo?si=V489RWIpa6vMe0rB&t=36", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/O-eT3n903IM?si=a5p9azG2wS2-sd8O", "icon": "video" } ]
EuroVis
2023
VISITOR: Visual Interactive State Sequence Exploration for Reinforcement Learning
10.1111/cgf.14839
Understanding the behavior of deep reinforcement learning agents is a crucial requirement throughout their development. Existing work has addressed the identification of observable behavioral patterns in state sequences or analysis of isolated internal representations; however, the overall decision‐making of deep‐learning RL agents remains opaque. To tackle this, we present VISITOR, a visual analytics system enabling the analysis of entire state sequences, the diagnosis of singular predictions, and the comparison between agents. A sequence embedding view enables the multiscale analysis of state sequences, utilizing custom embedding techniques for a stable spatialization of the observations and internal states. We provide multiple layers: (1) a state space embedding, highlighting different groups of states inside the state‐action sequences, (2) a trajectory view, emphasizing decision points, (3) a network activation mapping, visualizing the relationship between observations and network activations, (4) a transition embedding, enabling the analysis of state‐to‐state transitions. The embedding view is accompanied by an interactive reward view that captures the temporal development of metrics, which can be linked directly to states in the embedding. Lastly, a model list allows for the quick comparison of models across multiple metrics. Annotations can be exported to communicate results to different audiences. Our two‐stage evaluation with eight experts confirms the effectiveness in identifying states of interest, comparing the quality of policies, and reasoning about the internal decision‐making processes.
false
false
[ "Yannick Metz", "Eugene Bykovets", "Lucas Joos", "Daniel A. Keim", "Mennatallah El-Assady" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/6G2da-GfEdA?si=zrVjstqOq4LvOxUh&t=5", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Kx0FYuV0FsU?si=mLhAyU4P72mXwvyW", "icon": "video" } ]
EuroVis
2023
visMOP - A Visual Analytics Approach for Multi-omics Pathways
10.1111/cgf.14828
We present an approach for the visual analysis of multi‐omics data obtained using high‐throughput methods. The term “omics” denotes measurements of different types of biologically relevant molecules like the products of gene transcription (transcriptomics) or the abundance of proteins (proteomics). Current popular visualization approaches often only support analyzing each of these omics separately. This, however, disregards the interconnectedness of different biologically relevant molecules and processes. Consequently, it describes the actual events in the organism suboptimally or only partially. Our visual analytics approach for multi‐omics data provides a comprehensive overview and details‐on‐demand by integrating the different omics types in multiple linked views. To give an overview, we map the measurements to known biological pathways and use a combination of a clustered network visualization, glyphs, and interactive filtering. To ensure the effectiveness and utility of our approach, we designed it in close collaboration with domain experts and assessed it using an exemplary workflow with real‐world transcriptomics, proteomics, and lipidomics measurements from mice.
false
false
[ "Nicolas Brich", "Nadine Schacherer", "Miriam Hoene", "Cora Weigert", "Rainer Lehmann", "Michael Krone" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/96YIrSpWNHA?si=_4FJ7oYigU1tG-pS&t=35", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/2dJle2njAJM?si=qKd4CJh4oi1Wp44e", "icon": "video" } ]
EuroVis
2023
Visual Analytics on Network Forgetting for Task-Incremental Learning
10.1111/cgf.14842
Task‐incremental learning (Task‐IL) aims to enable an intelligent agent to continuously accumulate knowledge from new learning tasks without catastrophically forgetting what it has learned in the past. It has drawn increasing attention in recent years, with many algorithms being proposed to mitigate neural network forgetting. However, none of the existing strategies is able to completely eliminate the issues. Moreover, explaining and fully understanding what knowledge and how it is being forgotten during the incremental learning process still remains under‐explored. In this paper, we propose KnowledgeDrift, a visual analytics framework, to interpret the network forgetting with three objectives: (1) to identify when the network fails to memorize the past knowledge, (2) to visualize what information has been forgotten, and (3) to diagnose how knowledge attained in the new model interferes with the one learned in the past. Our analytical framework first identifies the occurrence of forgetting by tracking the task performance under the incremental learning process and then provides in‐depth inspections of drifted information via various levels of data granularity. KnowledgeDrift allows analysts and model developers to enhance their understanding of network forgetting and compare the performance of different incremental learning algorithms. Three case studies are conducted in the paper to further provide insights and guidance for users to effectively diagnose catastrophic forgetting over time.
false
false
[ "Ziwei Li", "Jiayi Xu 0001", "Wei-Lun Chao", "Han-Wei Shen" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/6G2da-GfEdA?si=ARf8i2L1mjrIZbeZ&t=103", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/5CsX5Pqtnh0?si=fq21AtxhpyctwJ3B", "icon": "video" } ]
EuroVis
2023
Visual Gaze Labeling for Augmented Reality Studies
10.1111/cgf.14837
Augmented Reality (AR) provides new ways for situated visualization and human‐computer interaction in physical environments. Current evaluation procedures for AR applications rely primarily on questionnaires and interviews, providing qualitative means to assess usability and task solution strategies. Eye tracking extends these existing evaluation methodologies by providing indicators for visual attention to virtual and real elements in the environment. However, the analysis of viewing behavior, especially the comparison of multiple participants, is difficult to achieve in AR. Specifically, the definition of areas of interest (AOIs), which is often a prerequisite for such analysis, is cumbersome and tedious with existing approaches. To address this issue, we present a new visualization approach to define AOIs, label fixations, and investigate the resulting annotated scanpaths. Our approach utilizes automatic annotation of gaze on virtual objects and an image‐based approach that also considers spatial context for the manual annotation of objects in the real world. Our results show that, with our approach, eye tracking data from AR scenes can be annotated and analyzed flexibly with respect to data aspects and annotation strategies.
false
false
[ "Seyda Öney", "Nelusa Pathmanathan", "Michael Becher", "Michael Sedlmair", "Daniel Weiskopf", "Kuno Kurzhals" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/B_glqLEgBZw?si=6vnb-qapOoV_Fp0j&t=65", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/2NkGJz6WfhI?si=jSDPP_fFse5cWhuP", "icon": "video" } ]
EuroVis
2023
WYTIWYR: A User Intent-Aware Framework with Multi-modal Inputs for Visualization Retrieval
10.1111/cgf.14832
Retrieving charts from a large corpus is a fundamental task that can benefit numerous applications such as visualization recommendations. The retrieved results are expected to conform to both explicit visual attributes (e.g., chart type, colormap) and implicit user intents (e.g., design style, context information) that vary upon application scenarios. However, existing example‐based chart retrieval methods are built upon non‐decoupled and low‐level visual features that are hard to interpret, while definition‐based ones are constrained to pre‐defined attributes that are hard to extend. In this work, we propose a new framework, namely WYTIWYR (What‐You‐Think‐Is‐What‐You‐Retrieve), that integrates user intents into the chart retrieval process. The framework consists of two stages: first, the Annotation stage disentangles the visual attributes within the query chart; and second, the Retrieval stage embeds the user's intent with customized text prompt as well as bitmap query chart, to recall targeted retrieval result. We develop a prototype WYTIWYR system leveraging a contrastive language‐image pre‐training (CLIP) model to achieve zero‐shot classification as well as multi‐modal input encoding, and test the prototype on a large corpus with charts crawled from the Internet. Quantitative experiments, case studies, and qualitative interviews are conducted. The results demonstrate the usability and effectiveness of our proposed framework.
false
false
[ "Shishi Xiao", "Yihan Hou", "Cheng Jin", "Wei Zeng 0004" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.06991v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/1CRJV5gDVpY?si=9Xp_TjlYyfQFtUeG&t=35", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/xZt13I9F_XI?si=P30cp_uUq6-GEjH6", "icon": "video" } ]
EuroVis
2023
xOpat: eXplainable Open Pathology Analysis Tool
10.1111/cgf.14812
Histopathology research quickly evolves thanks to advances in whole slide imaging (WSI) and artificial intelligence (AI). However, existing WSI viewers are tailored either for clinical or research environments, but none suits both. This hinders the adoption of new methods and communication between the researchers and clinicians. The paper presents xOpat, an open‐source, browser‐based WSI viewer that addresses these problems. xOpat supports various data sources, such as tissue images, pathologists' annotations, or additional data produced by AI models. Furthermore, it provides efficient rendering of multiple data layers, their visual representations, and tools for annotating and presenting findings. Thanks to its modular, protocol‐agnostic, and extensible architecture, xOpat can be easily integrated into different environments and thus helps to bridge the gap between research and clinical practice. To demonstrate the utility of xOpat, we present three case studies, one conducted with a developer of AI algorithms for image segmentation and two with a research pathologist.
false
false
[ "Jirí Horák", "Katarína Furmanová", "Barbora Kozlíková", "Tomás Brázdil", "Petr Holub", "Martin Kacenga", "Matej Gallo", "Rudolf Nenutil", "Jan Byska", "Vít Rusnák" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/8RAjxI4jF0g?si=SqTJD9hHLeur1oAW&t=64", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/V3svw5L43z8?si=w5OcjtI2PSjngQu4", "icon": "video" } ]
CHI
2023
"Explain What a Treemap is": Exploratory Investigation of Strategies for Explaining Unfamiliar Chart to Blind and Low Vision Users
10.1145/3544548.3581139
Visualization designers increasingly use diverse types of visualizations, but assistive technologies and education for blind and low vision people often focus on elementary chart types. We explore textual explanation as a more generalizable solution. We define three dimensions of explanation strategies based on education theories: comparing to a familiar chart type, describing how to draw one, and using a concrete example. We develop a prototype system that automatically generates text explanations from a given chart specification. We conduct an exploratory study with 24 legally blind people to observe both the effectiveness and the perceived effectiveness of the strategies. The findings include: description of visual appearance is overall more effective than instructions for drawing, effective strategies differ by each chart type and by each participant, and the user’s perceived effectiveness does not always lead to better performance. We demonstrate the feasibility of an explanation generation system and compile design considerations.
false
false
[ "Gyeongri Kim", "Jiho Kim", "Yea-Seul Kim" ]
[]
[]
[]
CHI
2023
"What else can I do?" Examining the Impact of Community Data on Adaptation and Quality of Reflection in an Educational Game
10.1145/3544548.3580664
Adaptation, or ability and willingness to consider an alternative approach, is a critical component of learning through reflection, especially in educational games, where there are often multiple avenues to success. As a domain, educational games have shown increased interest in using retrospective visualizations to promote and support reflection. Such visualizations, which can facilitate comparison with peer data, may also have an impact on adaptation in educational games. This has, however, not been empirically examined within the domain. In this work, we examine how comparison with other players’ data influenced adaptation, a part of reflection, in the context of a game that teaches parallel programming. Our results indicate that comparison with peers does significantly impact willingness to try a different approach, but suggest that there may also be other ways. We discuss what these results mean for future use of retrospective visualizations in educational games and present opportunities for future work.
false
false
[ "Erica Kleinman", "Jennifer Villareale", "Murtuza N. Shergadwala", "Zhaoqing Teng", "Andy Bryant", "Jichen Zhu", "Magy Seif El-Nasr" ]
[]
[]
[]
CHI
2023
"You Can See the Connections": Facilitating Visualization of Care Priorities in People Living with Multiple Chronic Health Conditions
10.1145/3544548.3580908
Individuals with multiple chronic health conditions (MCC) often face an overwhelming set of self-management work, resulting in a need to set care priorities. Yet, much self-management work is invisible to healthcare providers. This study aimed to understand how to support the development and sharing of connections between personal values and self-management tasks through the facilitated use of an interactive visualization system: Conversation Canvas. We conducted a field study with 13 participants with MCC, 3 caregivers, and 7 primary care providers in Washington State. Analysis of interviews with MCC participants showed that developing visualizations of connections between personal values, self-management tasks, and health conditions helped individuals make sense of connections relevant to their health and wellbeing, recognize a road map of central issues and their impacts, feel respected and understood, share priorities with providers, and support value-aligned changes. These findings demonstrated potential for the guided process and visualization to support priorities-aligned care.
false
false
[ "Hyeyoung Ryu", "Andrew B. L. Berry", "Catherine Y. Lim", "Andrea L. Hartzler", "Tad Hirsch", "Juanita I Trejo", "Zoë Abigail Bermet", "Brandi Crawford-Gallagher", "Vi Tran", "Dawn M. Ferguson", "David J. Cronkite", "Brooks Tiffany", "John Weeks", "James D. Ralston" ]
[]
[]
[]
CHI
2023
A Need-Finding Study with Users of Geospatial Data
10.1145/3544548.3581370
Geospatial data is playing an increasingly critical role in the work of Earth and climate scientists, social scientists, and data journalists exploring spatiotemporal change in our environment and societies. However, existing software and programming tools for geospatial analysis and visualization are challenging to learn and difficult to use. The aim of this work is to identify the unmet computing needs of the diverse and expanding community of geospatial data users. We conducted a contextual inquiry study (n = 25) with domain experts using geospatial data in their current work. Through a thematic analysis, we found that participants struggled to (1) find and transform geospatial data to satisfy spatiotemporal constraints, (2) understand the behavior of geospatial operators, (3) track geospatial data provenance, and (4) explore the cartographic design space. These findings suggest design opportunities for developers and designers of geospatial analysis and visualization systems.
false
false
[ "Parker Ziegler", "Sarah E. Chasins" ]
[]
[]
[]
CHI
2023
A Review and Collation of Graphical Perception Knowledge for Visualization Recommendation
10.1145/3544548.3581349
Selecting appropriate visual encodings is critical to designing effective visualization recommendation systems, yet few findings from graphical perception are typically applied within these systems. We observe two significant limitations in translating graphical perception knowledge into actionable visualization recommendation rules/constraints: inconsistent reporting of findings and a lack of shared data across studies. How can we translate the graphical perception literature into a knowledge base for visualization recommendation? We present a review of 59 papers that study user perception and performance across ten visual analysis tasks. Through this study, we contribute a JSON dataset that collates existing theoretical and experimental knowledge and summarizes key study outcomes in graphical perception. We illustrate how this dataset can inform automated encoding decisions with three representative visualization recommendation systems. Based on our findings, we highlight open challenges and opportunities for the community in collating graphical perception knowledge for a range of visualization recommendation scenarios.
false
false
[ "Zehua Zeng", "Leilani Battle" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2109.01271v3", "icon": "paper" } ]
CHI
2023
Accessible Data Representation with Natural Sound
10.1145/3544548.3581087
Sonification translates data into non-speech audio. Such auditory representations can make data visualization accessible to people who are blind or have low vision (BLV). This paper presents a sonification method for translating common data visualization into a blend of natural sounds. We hypothesize that people’s familiarity with sounds drawn from nature, such as birds singing in a forest, and their ability to listen to these sounds in parallel, will enable BLV users to perceive multiple data points being sonified at the same time. Informed by an extensive literature review and a preliminary study with 5 BLV participants, we designed an accessible data representation tool, Susurrus, that combines our sonification method with other accessibility features, such as keyboard interaction and text-to-speech feedback. Finally, we conducted a user study with 12 BLV participants and report the potential and application of natural sounds for sonification compared to existing sonification tools.
false
false
[ "Md. Naimul Hoque", "Md Ehtesham-Ul-Haque", "Niklas Elmqvist", "Syed Masum Billah" ]
[]
[]
[]
CHI
2023
AeroRigUI: Actuated TUIs for Spatial Interaction using Rigging Swarm Robots on Ceilings in Everyday Space
10.1145/3544548.3581437
We present AeroRigUI, an actuated tangible UI for 3D spatial embodied interaction. Using strings controlled by self-propelled swarm robots with a reeling mechanism on ceiling surfaces, our approach enables rigging (control through strings) physical objects’ position and orientation in the air. This can be applied to novel interactions in 3D space, including dynamic physical affordances, 3D information displays, and haptics. Utilizing the ceiling, an often underused room area, AeroRigUI can be applied for a range of applications such as room organization, data physicalization, and animated expressions. We demonstrate the applications based on our proof-of-concept prototype, which includes the hardware design of the rigging robots, named RigBots, and the software design for mid-air object control via interactive string manipulation. We also introduce technical evaluation and analysis of our approach prototype to address the hardware feasibility and safety. Overall, AeroRigUI enables a novel spatial and tangible UI system with great controllability and deployability.
false
false
[ "Lilith Yu", "Chenfeng Gao", "David Wu", "Ken Nakagaki" ]
[]
[]
[]
CHI
2023
AI Shall Have No Dominion: on How to Measure Technology Dominance in AI-supported Human decision-making
10.1145/3544548.3581095
In this article, we propose a conceptual and methodological framework for measuring the impact of the introduction of AI systems in decision settings, based on the concept of technological dominance, i.e. the influence that an AI system can exert on human judgment and decisions. We distinguish between a negative component of dominance (automation bias) and a positive one (algorithm appreciation) by focusing on and systematizing the patterns of interaction between human judgment and AI support, or reliance patterns, and their associated cognitive effects. We then define statistical approaches for measuring these dimensions of dominance, as well as corresponding qualitative visualizations. By reporting about four medical case studies, we illustrate how the proposed methods can be used to inform assessments of dominance and of related cognitive biases in real-world settings. Our study lays the groundwork for future investigations into the effects of introducing AI support into naturalistic and collaborative decision-making.
false
false
[ "Federico Cabitza", "Andrea Campagner", "Riccardo Angius", "Chiara Natali", "Carlo Reverberi" ]
[]
[]
[]
CHI
2023
Angler: Helping Machine Translation Practitioners Prioritize Model Improvements
10.1145/3544548.3580790
Machine learning (ML) models can fail in unexpected ways in the real world, but not all model failures are equal. With finite time and resources, ML practitioners are forced to prioritize their model debugging and improvement efforts. Through interviews with 13 ML practitioners at Apple, we found that practitioners construct small targeted test sets to estimate an error’s nature, scope, and impact on users. We built on this insight in a case study with machine translation models, and developed Angler, an interactive visual analytics tool to help practitioners prioritize model improvements. In a user study with 7 machine translation experts, we used Angler to understand prioritization practices when the input space is infinite, and obtaining reliable signals of model quality is expensive. Our study revealed that participants could form more interesting and user-focused hypotheses for prioritization by analyzing quantitative summary statistics and qualitatively assessing data by reading sentences.
false
false
[ "Samantha Robertson", "Zijie J. Wang", "Dominik Moritz", "Mary Beth Kery", "Fred Hohman" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.05967v1", "icon": "paper" } ]
CHI
2023
AutoVis: Enabling Mixed-Immersive Analysis of Automotive User Interface Interaction Studies
10.1145/3544548.3580760
Automotive user interface (AUI) evaluation becomes increasingly complex due to novel interaction modalities, driving automation, heterogeneous data, and dynamic environmental contexts. Immersive analytics may enable efficient explorations of the resulting multilayered interplay between humans, vehicles, and the environment. However, no such tool exists for the automotive domain. With AutoVis, we address this gap by combining a non-immersive desktop with a virtual reality view enabling mixed-immersive analysis of AUIs. We identify design requirements based on an analysis of AUI research and domain expert interviews (N=5). AutoVis supports analyzing passenger behavior, physiology, spatial interaction, and events in a replicated study environment using avatars, trajectories, and heatmaps. We apply context portals and driving-path events as automotive-specific visualizations. To validate AutoVis against real-world analysis tasks, we implemented a prototype, conducted heuristic walkthroughs using authentic data from a case study and public datasets, and leveraged a real vehicle in the analysis process.
false
false
[ "Pascal Jansen", "Julian Britten", "Alexander Häusele", "Thilo Segschneider", "Mark Colley", "Enrico Rukzio" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2302.10531v1", "icon": "paper" } ]
CHI
2023
CALVI: Critical Thinking Assessment for Literacy in Visualizations
10.1145/3544548.3581406
Visualization misinformation is a prevalent problem, and combating it requires understanding people’s ability to read, interpret, and reason about erroneous or potentially misleading visualizations, which lacks a reliable measurement: existing visualization literacy tests focus on well-formed visualizations. We systematically develop an assessment for this ability by: (1) developing a precise definition of misleaders (decisions made in the construction of visualizations that can lead to conclusions not supported by the data), (2) constructing initial test items using a design space of misleaders and chart types, (3) trying out the provisional test on 497 participants, and (4) analyzing the test tryout results and refining the items using Item Response Theory, qualitative analysis, a wrong-due-to-misleader score, and the content validity index. Our final bank of 45 items shows high reliability, and we provide item bank usage recommendations for future tests and different use cases. Related materials are available at: https://osf.io/pv67z/.
false
false
[ "Lily W. Ge", "Yuan Cui", "Matthew Kay 0001" ]
[ "HM" ]
[]
[]
CHI
2023
Causalvis: Visualizations for Causal Inference
10.1145/3544548.3581236
Causal inference is a statistical paradigm for quantifying causal effects using observational data. It is a complex process, requiring multiple steps, iterations, and collaborations with domain experts. Analysts often rely on visualizations to evaluate the accuracy of each step. However, existing visualization toolkits are not designed to support the entire causal inference process within computational environments familiar to analysts. In this paper, we address this gap with Causalvis, a Python visualization package for causal inference. Working closely with causal inference experts, we adopted an iterative design process to develop four interactive visualization modules to support causal inference analysis tasks. The modules are then presented back to the experts for feedback and evaluation. We found that Causalvis effectively supported the iterative causal inference process. We discuss the implications of our findings for designing visualizations for causal inference, particularly for tasks of communication and collaboration.
false
false
[ "Grace Guo", "Ehud Karavani", "Alex Endert", "Bum Chul Kwon" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.00617v1", "icon": "paper" } ]
CHI
2023
Charagraph: Interactive Generation of Charts for Realtime Annotation of Data-Rich Paragraphs
10.1145/3544548.3581091
Documents often have paragraphs packed with numbers that are difficult to extract, compare, and interpret. To help readers make sense of data in text, we introduce the concept of Charagraphs: dynamically generated interactive charts and annotations for in-situ visualization, comparison, and manipulation of numeric data included within text. Three Charagraph characteristics are defined: leveraging related textual information about data; integrating textual and graphical representations; and interacting at different contexts. We contribute a document viewer to select in-text data; generate and customize Charagraphs; merge and refine a Charagraph using other in-text data; and identify, filter, compare, and sort data synchronized between text and visualization. Results of a study show participants can easily create Charagraphs for diverse examples of data-rich text, and when answering questions about data in text, participants were more correct compared to only reading text.
false
false
[ "Damien Masson", "Sylvain Malacria", "Géry Casiez", "Daniel Vogel 0001" ]
[]
[]
[]
CHI
2023
Chart Reader: Accessible Visualization Experiences Designed with Screen Reader Users
10.1145/3544548.3581186
Even though screen readers are a core accessibility tool for blind and low vision individuals (BLVIs), most visualizations are incompatible with screen readers. To improve accessible visualization experiences, we partnered with 10 BLV screen reader users (SRUs) in an iterative co-design study to design and develop accessible visualization experiences that afford SRUs the autonomy to interactively read and understand visualizations and their underlying data. During the five-month study, we explored accessible visualization prototypes with our design partners for three one-hour sessions. Our results provide feedback on the synthesized design concepts we explored, why (or why not) they aid comprehension and exploration for SRUs, and how differing design concepts can fit into cohesive accessible visualization experiences. We contribute both Chart Reader, a web-based accessibility engine resulting from our design iterations, and our distilled study findings—organized by design dimensions—in the creation of comprehensive accessible visualization experiences.
false
false
[ "John R. Thompson 0002", "Jesse J. Martinez", "Alper Sarikaya 0001", "Edward Cutrell", "Bongshin Lee" ]
[]
[]
[]
CHI
2023
ChartDetective: Easy and Accurate Interactive Data Extraction from Complex Vector Charts
10.1145/3544548.3581113
Extracting underlying data from rasterized charts is tedious and inaccurate; values might be partially occluded or hard to distinguish, and the quality of the image limits the precision of the data being recovered. To address these issues, we introduce a semi-automatic system leveraging vector charts to extract the underlying data easily and accurately. The system is designed to make the most of vector information by relying on a drag-and-drop interface combined with selection, filtering, and previsualization features. A user study showed that participants spent less than 4 minutes to accurately recover data from charts published at CHI with diverse styles, thousands of data points, a combination of different encodings, and elements partially or completely occluded. Compared to other approaches relying on raster images, our tool successfully recovered all data, even when hidden, with a 78% lower relative error.
false
false
[ "Damien Masson", "Sylvain Malacria", "Daniel Vogel 0001", "Edward Lank", "Géry Casiez" ]
[ "BP" ]
[]
[]
CHI
2023
Collaborating Across Realities: Analytical Lenses for Understanding Dyadic Collaboration in Transitional Interfaces
10.1145/3544548.3580879
Transitional Interfaces are a yet underexplored, emerging class of cross-reality user interfaces that enable users to freely move along the reality-virtuality continuum during collaboration. To analyze and understand how such collaboration unfolds, we propose four analytical lenses derived from an exploratory study of transitional collaboration with 15 dyads. While solving a complex spatial optimization task, participants could freely switch between three contexts, each with different displays (desktop screens, tablet-based augmented reality, head-mounted virtual reality), input techniques (mouse, touch, handheld controllers), and visual representations (monoscopic and allocentric 2D/3D maps, stereoscopic egocentric views). Using the rich qualitative and quantitative data from our study, we evaluated participants’ perceptions of transitional collaboration and identified commonalities and differences between dyads. We then derived four lenses including metrics and visualizations to analyze key aspects of transitional collaboration: (1) place and distance, (2) temporal patterns, (3) group use of contexts, (4) individual use of contexts.
false
false
[ "Jan-Henrik Schröder", "Daniel Schacht", "Niklas Peper", "Anita Marie Hamurculu", "Hans-Christian Jetter" ]
[ "BP" ]
[]
[]
CHI
2023
Collaboration with Conversational AI Assistants for UX Evaluation: Questions and How to Ask them (Voice vs. Text)
10.1145/3544548.3581247
AI is promising in assisting UX evaluators with analyzing usability tests, but its judgments are typically presented as non-interactive visualizations. Evaluators may have questions about test recordings, but have no way of asking them. Interactive conversational assistants provide a Q&A dynamic that may improve analysis efficiency and evaluator autonomy. To understand the full range of analysis-related questions, we conducted a Wizard-of-Oz design probe study with 20 participants who interacted with simulated AI assistants via text or voice. We found that participants asked for five categories of information: user actions, user mental model, help from the AI assistant, product and task information, and user demographics. Those who used the text assistant asked more questions, but the question lengths were similar. The text assistant was perceived as significantly more efficient, but both were rated equally in satisfaction and trust. We also provide design considerations for future conversational AI assistants for UX evaluation.
false
false
[ "Emily Kuang", "Ehsan Jahangirzadeh Soure", "Mingming Fan 0001", "Jian Zhao 0010", "Kristen Shinohara" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.03638v1", "icon": "paper" } ]
CHI
2023
Communicating Consequences: Visual Narratives, Abstraction, and Polysemy in Rural Bangladesh
10.1145/3544548.3581149
Information communication and visualization practices reflect two centuries of developments of conventions and best practices which may not be reflective of global audiences’ methods for conveying information. Contrasting between rural traditional visual culture and contemporary HCI and data-visualization, we argue that an understanding of traditional practices for information visualization is required for building rich data-narratives and making data-driven systems more accessible and culturally situated. Our ten-month ethnographic study investigates how rural Bangladeshi communities construct narratives through visual media. Our observation, interviews, and FGDs (n=54) expose how participants convey risk management, decision-making, and monetary management practices to their peers. We find that villagers used a rich network of polysemic symbols and abstractions to manifest subjectivity, factuality, consequence, situatedness, and uncertainty; varied visual attributes for constructing narratives; and emphasized material relations among components in visuals. These findings inform the design of future systems for decision support in a culturally situated manner.
false
false
[ "Sharifa Sultana", "Syed Ishtiaque Ahmed", "Jeffrey M. Rzeszotarski" ]
[]
[]
[]
CHI
2,023
Computational Notebooks as Co-Design Tools: Engaging Young Adults Living with Diabetes, Family Carers, and Clinicians with Machine Learning Models
10.1145/3544548.3581424
Engaging end user groups with machine learning (ML) models can help align the design of predictive systems with people's needs and expectations. We present a co-design study investigating the benefits and challenges of using computational notebooks to inform ML models with end user groups. We used a computational notebook to engage young adults, carers, and clinicians with an example ML model that predicted health risk in diabetes care. Through co-design workshops and retrospective interviews, we found that participants particularly valued using the interactive data visualisations of the computational notebook to scaffold multidisciplinary learning, anticipate benefits and harms of the example ML model, and create fictional feature importance plots to highlight care needs. Participants also reported challenges, from running code cells to managing information asymmetries and power imbalances. We discuss the potential of leveraging computational notebooks as interactive co-design tools to meet end user needs early in ML model lifecycles.
false
false
[ "Amid Ayobi", "Jacob Hughes", "Christopher J. Duckworth", "Jakub J. Dylag", "Sam James", "Paul Marshall", "Matthew Guy", "Anitha Kumaran", "Adriane Chapman", "Michael J. Boniface", "Aisling Ann O'Kane" ]
[]
[]
[]
CHI
2,023
ConceptEVA: Concept-Based Interactive Exploration and Customization of Document Summaries
10.1145/3544548.3581260
With the most advanced natural language processing and artificial intelligence approaches, effective summarization of long and multi-topic documents—such as academic papers—for readers from different domains still remains a challenge. To address this, we introduce ConceptEVA, a mixed-initiative approach to generate, evaluate, and customize summaries for long and multi-topic documents. ConceptEVA incorporates a custom multi-task longformer encoder decoder to summarize longer documents. Interactive visualizations of document concepts as a network reflecting both semantic relatedness and co-occurrence help users focus on concepts of interest. The user can select these concepts and automatically update the summary to emphasize them. We present two iterations of ConceptEVA evaluated through an expert review and a within-subjects study. We find that participants’ satisfaction with customized summaries through ConceptEVA is higher than their own manually-generated summary, while incorporating critique into the summaries proved challenging. Based on our findings, we make recommendations for designing summarization systems incorporating mixed-initiative interactions.
false
false
[ "Xiaoyu Zhang", "Jianping Kelvin Li", "Po-Wei Chi", "Senthil K. Chandrasegaran", "Kwan-Liu Ma" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.17826v1", "icon": "paper" } ]
CHI
2,023
CrossCode: Multi-level Visualization of Program Execution
10.1145/3544548.3581390
Program visualizations help to form useful mental models of how programs work, and to reason and debug code. But these visualizations exist at a fixed level of abstraction, e.g., line-by-line. In contrast, programmers switch between many levels of abstraction when inspecting program behavior. Based on results from a formative study of hand-designed program visualizations, we designed CrossCode, a web-based program visualization system for JavaScript that leverages structural cues in syntax, control flow, and data flow to aggregate and navigate program execution across multiple levels of abstraction. In an exploratory qualitative study with experts, we found that CrossCode enabled participants to maintain a strong sense of place in program execution, was conducive to explaining program behavior, and helped track changes and updates to the program state.
false
false
[ "Devamardeep Hayatpur", "Daniel Wigdor", "Haijun Xia" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.03445v2", "icon": "paper" } ]
CHI
2,023
Data Abstraction Elephants: The Initial Diversity of Data Representations and Mental Models
10.1145/3544548.3580669
Two people looking at the same dataset will create different mental models, prioritize different attributes, and connect with different visualizations. We seek to understand the space of data abstractions associated with mental models and how well people communicate their mental models when sketching. Data abstractions have a profound influence on the visualization design, yet it’s unclear how universal they may be when not initially influenced by a representation. We conducted a study about how people create their mental models from a dataset. Rather than presenting tabular data, we presented each participant with one of three datasets in paragraph form, to avoid biasing the data abstraction and mental model. We observed various mental models, data abstractions, and depictions from the same dataset, and how these concepts are influenced by communication and purpose-seeking. Our results have implications for visualization design, especially during the discovery and data collection phase.
false
false
[ "Katy Williams", "Alex Bigelow", "Katherine E. Isaacs" ]
[]
[]
[]
CHI
2,023
Data, Data, Everywhere: Uncovering Everyday Data Experiences for People with Intellectual and Developmental Disabilities
10.1145/3544548.3581204
Data is everywhere but may not be accessible to everyone. Conventional data visualization tools and guidelines often do not actively consider the specific needs and abilities of people with Intellectual and Developmental Disabilities (IDD), leaving them excluded from data-driven activities and vulnerable to ethical issues. To understand the needs and challenges people with IDD have with data, we conducted 15 semi-structured interviews with individuals with IDD and their caregivers. Our algorithmic interview approach situated data in the lived experiences of people with IDD to uncover otherwise hidden data encounters in their everyday life. Drawing on findings and observations, we characterize how they conceptualize data, when and where they use data, and what barriers exist when they interact with data. We use our results as a lens to reimagine the role of visualization in data accessibility and establish a critical near-term research agenda for cognitively accessible visualization.
false
false
[ "Keke Wu", "Michelle Ho Tran", "Emma Petersen", "Varsha Koushik", "Danielle Albers Szafir" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.05655v1", "icon": "paper" } ]
CHI
2,023
DataDancing: An Exploration of the Design Space For Visualisation View Management for 3D Surfaces and Spaces
10.1145/3544548.3580827
Recent studies have explored how users of immersive visualisation systems arrange data representations in the space around them. Generally, these have focused on placement centred at eye-level in absolute room coordinates. However, work in HCI exploring full-body interaction has identified zones relative to the user’s body with different roles. We encapsulate the possibilities for visualisation view management into a design space (called “DataDancing”). From this design space we extrapolate a variety of view management prototypes, each demonstrating a different combination of interaction techniques and space use. The prototypes are enabled by a full-body tracking system including novel devices for torso and foot interaction. We explore four of these prototypes, encompassing standard wall and table-style interaction as well as novel foot interaction, in depth through a qualitative user study. Learning from the results, we improve the interaction techniques and propose two hybrid interfaces that demonstrate interaction possibilities of the design space.
false
false
[ "Jiazhou Liu", "Barrett Ens", "Arnaud Prouzeau", "Jim Smiley", "Isobel Kara Nixon", "Sarah Goodwin", "Tim Dwyer" ]
[]
[]
[]