Dataset schema (each record below lists its fields in this order):

Conference: string, 6 distinct values
Year: int64, min 1.99k, max 2.03k (values rounded by the dataset viewer)
Title: string, 8–187 characters
DOI: string, 16–32 characters
Abstract: string, 128–7.15k characters
Accessible: bool, 2 classes
Early: bool, 2 classes
AuthorNames-Deduped: list, 1–24 entries
Award: list, 0–2 entries
Resources: list, 0–5 entries (short codes; "P" appears where a "Paper Preprint" link is listed)
ResourceLinks: list, 0–10 entries (objects with name, url, and icon)
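The rows below are flattened records following the schema above. As a minimal, hypothetical sketch (the dict layout and variable names are assumptions for illustration, not part of this dump), two of the records can be represented and filtered in plain Python:

```python
# Two records from this dump, keyed by the schema's field names.
# Illustrative only; this is not an official loader for the dataset.
records = [
    {
        "Conference": "CHI",
        "Year": 2023,
        "Title": "DataHalo: A Customizable Notification Visualization System "
                 "for Personalized and Longitudinal Interactions",
        "DOI": "10.1145/3544548.3580828",
        "Award": [],
        "Resources": [],
        "ResourceLinks": [],
    },
    {
        "Conference": "CHI",
        "Year": 2023,
        "Title": "DataPilot: Utilizing Quality and Usage Information for "
                 "Subset Selection during Visual Data Preparation",
        "DOI": "10.1145/3544548.3581509",
        "Award": [],
        "Resources": ["P"],
        "ResourceLinks": [
            {"name": "Paper Preprint",
             "url": "http://arxiv.org/pdf/2303.01575v1",
             "icon": "paper"},
        ],
    },
]

# Papers that ship at least one linked resource.
with_links = [r for r in records if r["ResourceLinks"]]
print([r["DOI"] for r in with_links])  # → ['10.1145/3544548.3581509']

# Papers carrying the "P" (paper-preprint) resource code.
preprints = [r["Title"] for r in records if "P" in r["Resources"]]
```

The same field-wise filtering extends to the full record set, e.g. selecting award-winning papers via the Award list.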
Conference: CHI
Year: 2023
Title: DataHalo: A Customizable Notification Visualization System for Personalized and Longitudinal Interactions
DOI: 10.1145/3544548.3580828
People struggle with the overflow of smartphone notifications and often face two challenges: (1) prioritizing the informative notifications as they wish and (2) retaining the delivered information as long as they want to utilize it. In this paper, we present DataHalo, a customizable notification visualization system that represents notifications as prolonged ambient visualizations on the home screen. DataHalo supports keyword-based filtering and categorization, and draws graphical marks based on a time-varying importance model to enable longitudinal interaction with the notifications. We evaluated DataHalo through a usability study (N = 17), from which we improved the interface. We then conducted a three-week deployment study (N = 12) to assess how people use DataHalo in their domestic contexts. Our study revealed that people generated various visualization settings for different kinds of apps. Drawing on both quantitative and qualitative findings, we discuss implications for supporting effective notification management through customizable ambient visualizations.
Accessible: false
Early: false
AuthorNames-Deduped: [ "GuHyun Han", "Jaehun Jung", "Young-Ho Kim", "Jinwook Seo" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: DataLev: Mid-air Data Physicalisation Using Acoustic Levitation
DOI: 10.1145/3544548.3581016
Data physicalisation is a technique that encodes data through the geometric and material properties of an artefact, allowing users to engage with data in a more immersive and multi-sensory way. However, current methods of data physicalisation are limited in terms of their reconfigurability and the types of materials that can be used. Acoustophoresis—a method of suspending and manipulating materials using sound waves—offers a promising solution to these challenges. In this paper, we present DataLev, a design space and platform for creating reconfigurable, multimodal data physicalisations with enriched materiality using acoustophoresis. We demonstrate the capabilities of DataLev through eight examples and evaluate its performance in terms of reconfigurability and materiality. Our work offers a new approach to data physicalisation, enabling designers to create more dynamic, engaging, and expressive artefacts.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Lei Gao", "Pourang Irani", "Sriram Subramanian", "Gowdham Prabhakar", "Diego Martínez Plasencia", "Ryuji Hirayama" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: DataParticles: Block-based and Language-oriented Authoring of Animated Unit Visualizations
DOI: 10.1145/3544548.3581472
Unit visualizations have been widely used in data storytelling within interactive articles and videos. However, authoring data stories that contain animated unit visualizations is challenging due to the tedious, time-consuming process of switching back and forth between writing a narrative and configuring the accompanying visualizations and animations. To streamline this process, we present DataParticles, a block-based story editor that leverages the latent connections between text, data, and visualizations to help creators flexibly prototype, explore, and iterate on a story narrative and its corresponding visualizations. To inform the design of DataParticles, we interviewed 6 domain experts and studied a dataset of 44 existing animated unit visualizations to identify the narrative patterns and congruence principles they employed. A user study with 9 experts showed that DataParticles can significantly simplify the process of authoring data stories with animated unit visualizations by encouraging exploration and supporting fast prototyping.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Yining Cao", "Jane L. E", "Zhutian Chen", "Haijun Xia" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: DataPilot: Utilizing Quality and Usage Information for Subset Selection during Visual Data Preparation
DOI: 10.1145/3544548.3581509
Selecting relevant data subsets from large, unfamiliar datasets can be difficult. We address this challenge by modeling and visualizing two kinds of auxiliary information: (1) quality – the validity and appropriateness of data required to perform certain analytical tasks; and (2) usage – the historical utilization characteristics of data across multiple users. Through a design study with 14 data workers, we integrate this information into a visual data preparation and analysis tool, DataPilot. DataPilot presents visual cues about “the good, the bad, and the ugly” aspects of data and provides graphical user interface controls as interaction affordances, guiding users to perform subset selection. Through a study with 36 participants, we investigate how DataPilot helps users navigate a large, unfamiliar tabular dataset, prepare a relevant subset, and build a visualization dashboard. We find that users selected smaller, effective subsets with higher quality and usage, and with greater success and confidence.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Arpit Narechania", "Fan Du", "Atanu R. Sinha", "Ryan A. Rossi", "Jane Hoffswell", "Shunan Guo", "Eunyee Koh", "Shamkant B. Navathe", "Alex Endert" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.01575v1", "icon": "paper" } ]

Conference: CHI
Year: 2023
Title: DeepLens: Interactive Out-of-distribution Data Detection in NLP Models
DOI: 10.1145/3544548.3580741
Machine Learning (ML) has been widely used in Natural Language Processing (NLP) applications. A fundamental assumption in ML is that training data and real-world data should follow a similar distribution. However, a deployed ML model may suffer from out-of-distribution (OOD) issues due to distribution shifts in the real-world data. Though many algorithms have been proposed to detect OOD data from text corpora, there is still a lack of interactive tool support for ML developers. In this work, we propose DeepLens, an interactive system that helps users detect and explore OOD issues in massive text corpora. Users can efficiently explore different OOD types in DeepLens with the help of a text clustering method. They can also dig into a specific text by inspecting salient words highlighted through neuron activation analysis. In a within-subjects user study with 24 participants, participants using DeepLens were able to accurately identify nearly twice as many types of OOD issues, with 22% more confidence, compared with a variant of DeepLens that has no interaction or visualization support.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Da Song", "Zhijie Wang", "Yuheng Huang", "Lei Ma 0003", "Tianyi Zhang 0001" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.01577v1", "icon": "paper" } ]

Conference: CHI
Year: 2023
Title: Deimos: A Grammar of Dynamic Embodied Immersive Visualisation Morphs and Transitions
DOI: 10.1145/3544548.3580754
We present Deimos, a grammar for specifying dynamic embodied immersive visualisation morphs and transitions. A morph is a collection of animated transitions that are dynamically applied to immersive visualisations at runtime and is conceptually modelled as a state machine. It is comprised of state, transition, and signal specifications. States in a morph are used to generate animation keyframes, with transitions connecting two states together. A transition is controlled by signals, which are composable data streams that can be used to enable embodied interaction techniques. Morphs allow immersive representations of data to transform and change shape through user interaction, facilitating the embodied cognition process. We demonstrate the expressivity of Deimos in an example gallery and evaluate its usability in an expert user study of six immersive analytics researchers. Participants found the grammar to be powerful and expressive, and showed interest in drawing upon Deimos’ concepts and ideas in their own research.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Benjamin Lee", "Arvind Satyanarayan", "Maxime Cordeil", "Arnaud Prouzeau", "Bernhard Jenny", "Tim Dwyer" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2302.13655v1", "icon": "paper" } ]

Conference: CHI
Year: 2023
Title: Designing Resource Allocation Tools to Promote Fair Allocation: Do Visualization and Information Framing Matter?
DOI: 10.1145/3544548.3580739
Studies on human decision-making focused on humanitarian aid have found that cognitive biases can hinder the fair allocation of resources. However, few HCI and Information Visualization studies have explored ways to overcome those cognitive biases. This work investigates whether the design of interactive resource allocation tools can help to promote allocation fairness. We specifically study the effect of presentation format (using text or visualization) and a specific framing strategy (showing resources allocated to groups or individuals). In our three crowdsourced experiments, we provided different tool designs to split money between two fictional programs that benefit two distinct communities. Our main finding indicates that individual-framed visualizations and text may be able to curb unfair allocations caused by group-framed designs. This work opens new perspectives that can motivate research on how interactive tools and visualizations can be engineered to combat cognitive biases that lead to inequitable decisions.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Arnav Verma", "Luiz Augusto de Macêdo Morais", "Pierre Dragicevic", "Fanny Chevalier" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: DRAVA: Aligning Human Concepts with Machine Learning Latent Dimensions for the Visual Exploration of Small Multiples
DOI: 10.1145/3544548.3581127
Latent vectors extracted by machine learning (ML) are widely used in data exploration (e.g., t-SNE) but suffer from a lack of interpretability. While previous studies employed disentangled representation learning (DRL) to enable more interpretable exploration, they often overlooked the potential mismatches between the concepts of humans and the semantic dimensions learned by DRL. To address this issue, we propose Drava, a visual analytics system that supports users in 1) relating the concepts of humans with the semantic dimensions of DRL and identifying mismatches, 2) providing feedback to minimize the mismatches, and 3) obtaining data insights from concept-driven exploration. Drava provides a set of visualizations and interactions based on visual piles to help users understand and refine concepts and conduct concept-driven exploration. Meanwhile, Drava employs a concept adaptor model to fine-tune the semantic dimensions of DRL based on user refinement. The usefulness of Drava is demonstrated through application scenarios and experimental validation.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Qianwen Wang", "Sehi L'Yi", "Nils Gehlenborg" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: ESCAPE: Countering Systematic Errors from Machine's Blind Spots via Interactive Visual Analysis
DOI: 10.1145/3544548.3581373
Classification models learn to generalize the associations between data samples and their target classes. However, researchers have increasingly observed that machine learning practice easily leads to systematic errors in AI applications, a phenomenon referred to as “AI blindspots.” Such blindspots arise when a model is trained with samples (e.g., cat/dog classification) where important patterns (e.g., black cats) are missing or peripheral/undesirable patterns (e.g., dogs with grass backgrounds) mislead the model towards a certain class. Even more sophisticated techniques cannot guarantee to capture, reason about, and prevent these spurious associations. In this work, we propose ESCAPE, a visual analytic system that promotes a human-in-the-loop workflow for countering systematic errors. By allowing human users to easily inspect spurious associations, the system helps them spontaneously recognize concepts associated with misclassifications and evaluate mitigation strategies that can reduce biased associations. We also propose two statistical approaches: relative concept association, which better quantifies the associations between a concept and instances, and a debiasing method that mitigates spurious associations. We demonstrate the utility of our proposed ESCAPE system and statistical measures through extensive evaluation, including quantitative experiments, usage scenarios, expert interviews, and controlled user experiments.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Yongsu Ahn", "Yu-Ru Lin", "Panpan Xu", "Zeng Dai" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.09657v1", "icon": "paper" } ]

Conference: CHI
Year: 2023
Title: Exploring Chart Question Answering for Blind and Low Vision Users
DOI: 10.1145/3544548.3581532
Data visualizations can be complex or involve numerous data points, making them impractical to navigate using screen readers alone. Question answering (QA) systems have the potential to support visualization interpretation and exploration without overwhelming blind and low vision (BLV) users. To investigate if and how QA systems can help BLV users in working with visualizations, we conducted a Wizard of Oz study with 24 BLV people where participants freely posed queries about four visualizations. We collected 979 queries and mapped them to popular analytic task taxonomies. We found that retrieving value and finding extremum were the most common tasks, participants often made complex queries and used visual references, and the data topic notably influenced the queries. We compile a list of design considerations for accessible chart QA systems and make our question corpus publicly available to guide future research and development.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Jiho Kim", "Arjun Srinivasan", "Nam Wook Kim", "Yea-Seul Kim" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: Exploring Co-located Interactions with a Shape-Changing Bar Chart
DOI: 10.1145/3544548.3581214
Data physicalisations encode data and meaning through geometry or material properties, providing a non-planar view of data and offering novel opportunities for interrogation, discovery, and presentation. This field has explored how single users interact with complex 3D data, but the challenges of applying this technology to collaborative situations have not been addressed. We describe a study exploring interactions and preferences among co-located individuals using a dynamic data physicalisation in the form of a shape-changing bar chart, and compare this to previous work with single participants. Results suggest that co-located interactions with physical data prompt non-interactive hand gestures, a mirroring of physicalisations, and novel hand gestures in comparison to single-participant studies. We also note that behavioural similarities observed between interactive tabletop studies and data physicalisations may be capitalised upon for further development of these dynamic representations. Finally, we consider the implications and challenges for the adoption of these types of platforms.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Miriam Sturdee", "Hayat Kara", "Jason Alexander" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: FlowAR: How Different Augmented Reality Visualizations of Online Fitness Videos Support Flow for At-Home Yoga Exercises
DOI: 10.1145/3544548.3580897
Online fitness video tutorials are an increasingly popular way to stay fit at home without a personal trainer. However, to keep the screen playing the video in view, users typically disrupt their balance and break the motion flow — two main pillars for the correct execution of yoga poses. While past research partially addressed this problem, these approaches supported only a limited view of the instructor and simple movements. To enable the fluid execution of complex full-body yoga exercises, we propose FlowAR, an augmented reality system for home workouts that shows training video tutorials as always-present virtual static and dynamic overlays around the user. We tested different overlay layouts in a study with 16 participants, using motion capture equipment for baseline performance. Then, we iterated the prototype and tested it in a furnished lab simulating home settings with 12 users. Our results highlight the advantages of different visualizations and the system’s general applicability.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Hye-Young Jo", "Laurenz Seidel", "Michel Pahud", "Mike Sinclair", "Andrea Bianchi" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: From Asymptomatics to Zombies: Visualization-Based Education of Disease Modeling for Children
DOI: 10.1145/3544548.3581573
Throughout the COVID-19 pandemic, visualizations became commonplace in public communications to help people make sense of the world and the reasons behind government-imposed restrictions. Though the adult population was the main target of these messages, children were also affected by restrictions, through virtual schooling and not being able to see friends. However, through these daily models and visualizations, the pandemic response gave children a way to understand what data scientists really do and opened new routes for engagement with STEM subjects. In this paper, we describe the development of an interactive and accessible visualization tool to be used in workshops for children to explain the computational modeling of diseases, in particular COVID-19. We detail our design decisions based on approaches evidenced to be effective and engaging, such as unplugged activities and interactivity. We share reflections and learnings from delivering these workshops to 140 children and assess their effectiveness.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Graham Mcneill", "Max Sondag", "Stewart Powell", "Phoebe Asplin", "Cagatay Turkay", "Faron Moller", "Daniel Archambault" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: GAM Coach: Towards Interactive and User-centered Algorithmic Recourse
DOI: 10.1145/3544548.3580816
Machine learning (ML) recourse techniques are increasingly used in high-stakes domains, providing end users with actions to alter ML predictions, but they assume ML developers understand what input variables can be changed. However, a recourse plan’s actionability is subjective and unlikely to match developers’ expectations completely. We present GAM Coach, a novel open-source system that adapts integer linear programming to generate customizable counterfactual explanations for Generalized Additive Models (GAMs), and leverages interactive visualizations to enable end users to iteratively generate recourse plans meeting their needs. A quantitative user study with 41 participants shows our tool is usable and useful, and users prefer personalized recourse plans over generic plans. Through a log analysis, we explore how users discover satisfactory recourse plans, and provide empirical evidence that transparency can lead to more opportunities for everyday users to discover counterintuitive patterns in ML models. GAM Coach is available at: https://poloclub.github.io/gam-coach/.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Zijie J. Wang", "Jennifer Wortman Vaughan", "Rich Caruana", "Duen Horng Chau" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2302.14165v2", "icon": "paper" } ]

Conference: CHI
Year: 2023
Title: GeoCamera: Telling Stories in Geographic Visualizations with Camera Movements
DOI: 10.1145/3544548.3581470
In geographic data videos, camera movements are frequently used and combined to present information from multiple perspectives. However, creating and editing camera movements requires significant time and professional skills. This work aims to lower the barrier of crafting diverse camera movements for geographic data videos. First, we analyze a corpus of 66 geographic data videos and derive a design space of camera movements with a dimension for geospatial targets and one for narrative purposes. Based on the design space, we propose a set of adaptive camera shots and further develop an interactive tool called GeoCamera. This interactive tool allows users to flexibly design camera movements for geographic visualizations. We verify the expressiveness of our tool through case studies and evaluate its usability with a user study. The participants find that the tool facilitates the design of camera movements.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Wenchao Li 0005", "Zhan Wang", "Yun Wang 0012", "Di Weng", "Liwenhan Xie", "Siming Chen 0001", "Haidong Zhang", "Huamin Qu" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.06460v3", "icon": "paper" } ]

Conference: CHI
Year: 2023
Title: GestureExplorer: Immersive Visualisation and Exploration of Gesture Data
DOI: 10.1145/3544548.3580678
This paper presents the design and evaluation of GestureExplorer, an Immersive Analytics tool that supports the interactive exploration, classification and sensemaking with large sets of 3D temporal gesture data. GestureExplorer features 3D skeletal and trajectory visualisations of gestures combined with abstract visualisations of clustered sets of gestures. By leveraging the large immersive space afforded by a Virtual Reality interface our tool allows free navigation and control of viewing perspective for users to gain a better understanding of gestures. We explored a selection of classification methods to provide an overview of the dataset that was linked to a detailed view of the data that showed different visualisation modalities. We evaluated GestureExplorer with two user studies and collected feedback from participants with diverse visualisation and analytics backgrounds. Our results demonstrated the promising capability of GestureExplorer for providing a useful and engaging experience in exploring and analysing gesture data.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Ang Li 0007", "Jiazhou Liu", "Maxime Cordeil", "Jack Topliss", "Thammathip Piumsomboon", "Barrett Ens" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: Going, Going, Gone: Exploring Intention Communication for Multi-User Locomotion in Virtual Reality
DOI: 10.1145/3544548.3581259
Exploring virtual worlds together with others adds a social component to the Virtual Reality (VR) experience that increases connectedness. In the physical world, joint locomotion comes naturally through implicit intention communication and subsequent adjustments of the movement patterns. In VR, however, discrete locomotion techniques such as point&teleport come without prior intention communication, hampering the collective experience. Related work proposes fixed groups, with a single person controlling the group movement, resulting in the loss of individual movement capabilities. To close the gap and mediate between these two extremes, we introduce three intention communication methods and explore them with two baseline methods. We contribute the results of a controlled experiment (n=20) investigating these methods from the perspective of a leader and a follower in a dyadic locomotion task. Our results suggest shared visualizations support the understanding of movement intentions, increasing the group feeling while maintaining individual freedom of movement.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Julian Rasch", "Vladislav Dmitrievic Rusakov", "Martin Schmitz", "Florian Müller 0003" ]
Award: [ "BP" ]
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: Graphical Perception of Saliency-based Model Explanations
DOI: 10.1145/3544548.3581320
In recent years, considerable work has been devoted to explaining predictive, deep learning-based models, and in turn to evaluating those explanations. An important class of evaluation methods is human-centered ones, which typically require communicating explanations through visualizations. And while visualization plays a critical role in perceiving and understanding model explanations, how visualization design impacts human perception of explanations remains poorly understood. In this work, we study the graphical perception of model explanations, specifically saliency-based explanations for visual recognition models. We propose an experimental design to investigate how human perception is influenced by visualization design, wherein we study the task of alignment assessment, or whether a saliency map aligns with an object in an image. Our findings show that factors related to visualization design decisions, the type of alignment, and qualities of the saliency map all play important roles in how humans perceive saliency-based visual explanations.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Yayan Zhao", "Mingwei Li", "Matthew Berger" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: Groupnamics: Designing an Interface for Overviewing and Managing Parallel Group Discussions in an Online Classroom
DOI: 10.1145/3544548.3581322
Instructors facilitating online classes have a limited ability to see and hear the interactions of student groups working in parallel, which prevents them from interacting with students effectively. In this work, we explore interface design for providing an overview of parallel group discussions in online classrooms. We derive design considerations through a participatory design process and instantiate them in our visualization interface, Groupnamics. Groupnamics visualizes the recent vocal activity and discussion status of each group in a one-page view, facilitating the identification of groups where intervention may be needed. Our user study with 16 instructors confirmed that Groupnamics can successfully provide cues for when instructors should join group discussions, and that it improves perceived usefulness and ease of use over a baseline interface representing existing videoconferencing tools. Our qualitative results suggest future research directions in interface design for online parallel group discussions.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Arissa J. Sato", "Zefan Sramek", "Koji Yatani" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: GVQA: Learning to Answer Questions about Graphs with Visualizations via Knowledge Base
DOI: 10.1145/3544548.3581067
Graphs are common charts used to represent the topological relationships between nodes; they are a powerful tool for data analysis, and information retrieval tasks often involve asking questions about graphs. In a formative study, we found that questions about graphs concern not only the relationships between nodes but also the properties of graph elements. We propose a pipeline to answer natural language questions about graph visualizations and generate visual answers. We first extract the data from graphs and convert them into GML format. We design data structures to encode graph information and convert them into a knowledge base. We then extract topic entities from questions. We feed questions, entities, and knowledge bases into our question-answering model to obtain SPARQL queries for textual answers. Finally, we design a module to present the answers visually. A user study demonstrates that these visual and textual answers are useful, credible, and transparent.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Sicheng Song", "Juntong Chen", "Chenhui Li", "Changbo Wang" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: Here and Now: Creating Improvisational Dance Movements with a Mixed Reality Mirror
DOI: 10.1145/3544548.3580666
This paper explores using mixed reality (MR) mirrors for supporting improvisational dance making. Motivated by the prevalence of mirrors in dance studios and inspired by Forsythe’s Improvisation Technologies, we conducted workshops with 13 dancers and choreographers to inform the design of future MR visualisation and annotation tools for dance. The workshops involved using a prototype MR mirror as a technology probe that reveals the spatial and temporal relationships between the reflected dancing body and its surroundings during improvisation; speed dating group interviews around future design ideas; follow-up surveys and extended interviews with a digital media dance artist and a dance educator. Our findings highlight how the MR mirror enriches dancers’ temporal and spatial perception, creates multi-layered presence, and affords appropriation by dancers. We also discuss the unique place of MR mirrors in the theoretical context of dance and in the history of movement visualisation, and distil lessons for broader HCI research.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Qiushi Zhou", "Louise Grebel", "Andrew Irlitti", "Julie Ann Minaai", "Jorge Gonçalves 0001", "Eduardo Velloso" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: How Can Deep Neural Networks Aid Visualization Perception Research? Three Studies on Correlation Judgments in Scatterplots
DOI: 10.1145/3544548.3581111
How deep neural networks can aid visualization perception research is a wide-open question. This paper provides insights from three perspectives—prediction, generalization, and interpretation—via training and analyzing deep convolutional neural networks on human correlation judgments in scatterplots across three studies. The first study assesses the accuracy of twenty-nine neural network architectures in predicting human judgments, finding that a subset of the architectures (e.g., VGG-19) has comparable accuracy to the best-performing regression analyses in prior research. The second study shows that the resulting models from the first study display better generalizability than prior models on two other judgment datasets for different scatterplot designs. The third study interprets visual features learned by a convolutional neural network model, providing insights about how the model makes predictions, and identifies potential features that could be investigated in human correlation perception studies. Together, this paper suggests that deep neural networks can serve as a tool for visualization perception researchers in devising potential empirical study designs and hypothesizing about perceptual judgments. The preprint, data, code, and training logs are available at https://doi.org/10.17605/osf.io/exa8m.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Fumeng Yang", "Yuxin Ma", "Lane Harrison", "James Tompkin 0001", "David H. Laidlaw" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: How Data Analysts Use a Visualization Grammar in Practice
DOI: 10.1145/3544548.3580837
Visualization grammars, often based on the Grammar of Graphics (GoG), have much potential for augmenting data analysis in a programming environment. However, we do not know how analysts conceptualize grammar abstractions, or how a visualization grammar works with data analysis in practice. Therefore, we qualitatively analyzed how experienced analysts (N = 6) from TidyTuesday, a social data project, wrangled and visualized data using GoG-based ggplot2 without given tasks in R Markdown. Though participants’ analysis and customization needs could mismatch with GoG component design, their analysis processes aligned with the goal of GoG to expedite visualization iteration. We also found a feedback loop and tight coupling between visualization and data transformation code, explaining both participants’ productivity and their errors. From these results, we discuss how future visualization grammars can become more practical for analysts and how visualization grammar and analysis tools can better integrate within a programming (i.e., computational notebook) environment.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Xiaoying Pu", "Matthew Kay 0001" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: How Instructional Data Physicalisation Fosters Reflection in Personal Informatics
DOI: 10.1145/3544548.3581198
The ever-increasing number of devices quantifying our lives offers a perspective of high awareness of one’s wellbeing, yet it remains a challenge for personal informatics (PI) to effectively support data-based reflection. Effective reflection is recognised as a key factor for PI technologies to foster wellbeing. Here, we investigate whether building tangible representations of health data can offer engaging and reflective experiences. We conducted a between-subjects study where n = 60 participants explored their immediate blood pressure data in relation to medical norms. They either used a standard mobile app, built a data representation from LEGO® bricks based on instructions, or completed a free-form brick build. We found that building with instructions fostered more comparison and using bricks fostered focused attention. The free-form condition required extra time to complete, and lacked usability. Our work shows that designing instructional physicalisation experiences for PI is a means of improving engagement and understanding of personal data.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Marit Bentvelzen", "Julia Dominiak", "Jasmin Niess", "Frederique Henraat", "Pawel W. Wozniak" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2023
Title: iBall: Augmenting Basketball Videos with Gaze-moderated Embedded Visualizations
DOI: 10.1145/3544548.3581266
We present iBall, a basketball video-watching system that leverages gaze-moderated embedded visualizations to facilitate game understanding and engagement of casual fans. Video broadcasting and online video platforms make watching basketball games increasingly accessible. Yet, for new or casual fans, watching basketball videos is often confusing due to their limited basketball knowledge and the lack of accessible, on-demand information to resolve their confusion. To assist casual fans in watching basketball videos, we compared the game-watching behaviors of casual and die-hard fans in a formative study and developed iBall based on the findings. iBall embeds visualizations into basketball videos using a computer vision pipeline, and automatically adapts the visualizations based on the game context and users’ gaze, helping casual fans appreciate basketball games without being overwhelmed. We confirmed the usefulness, usability, and engagement of iBall in a study with 16 casual fans, and further collected feedback from 8 die-hard fans.
false
false
[ "Zhutian Chen", "Qisen Yang", "Jiarui Shan", "Tica Lin", "Johanna Beyer", "Haijun Xia", "Hanspeter Pfister" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.03476v2", "icon": "paper" } ]
CHI
2,023
Interactive Context-Preserving Color Highlighting for Multiclass Scatterplots
10.1145/3544548.3580734
Color is one of the main visual channels used for highlighting elements of interest in visualization. However, in multi-class scatterplots, color highlighting often comes at the expense of degraded color discriminability. In this paper, we argue for context-preserving highlighting during the interactive exploration of multi-class scatterplots to achieve desired pop-out effects, while maintaining good perceptual separability among all classes and consistent color mapping schemes under varying points of interest. We do this by first generating two contrastive color mapping schemes with large and small contrasts to the background. Both schemes maintain good perceptual separability among all classes and ensure that when colors from the two palettes are assigned to the same class, they have a high color consistency in color names. We then interactively combine these two schemes to create a dynamic color mapping for highlighting different points of interest. We demonstrate the effectiveness through crowd-sourced experiments and case studies.
false
false
[ "Kecheng Lu", "Khairi Reda", "Oliver Deussen", "Yunhai Wang" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2302.05368v1", "icon": "paper" } ]
CHI
2,023
Misleading Beyond Visual Tricks: How People Actually Lie with Charts
10.1145/3544548.3580910
Data visualizations can empower an audience to make informed decisions. At the same time, deceptive representations of data can lead to inaccurate interpretations while still providing an illusion of data-driven insights. Existing research on misleading visualizations primarily focuses on examples of charts and techniques previously reported to be deceptive. These approaches do not necessarily describe how charts mislead the general population in practice. We instead present an analysis of data visualizations found in a real-world discourse of a significant global event—Twitter posts with visualizations related to the COVID-19 pandemic. Our work shows that, contrary to conventional wisdom, violations of visualization design guidelines are not the dominant way people mislead with charts. Specifically, they do not disproportionately lead to reasoning errors in posters’ arguments. Through a series of examples, we present common reasoning errors and discuss how even faithfully plotted data visualizations can be used to support misinformation.
false
false
[ "Maxim Lisnic", "Cole Polychronis", "Alexander Lex", "Marina Kogan" ]
[]
[]
[]
CHI
2,023
NetworkNarratives: Data Tours for Visual Network Exploration and Analysis
10.1145/3544548.3581452
This paper introduces semi-automatic data tours to aid the exploration of complex networks. Exploring networks requires significant effort and expertise and can be time-consuming and challenging. Distinct from guidance and recommender systems for visual analytics, we provide a set of goal-oriented tours for network overview, ego-network analysis, community exploration, and other tasks. Based on interviews with five network analysts, we developed a user interface (NetworkNarratives) and 10 example tours. The interface allows analysts to navigate an interactive slideshow featuring facts about the network using visualizations and textual annotations. On each slide, an analyst can freely explore the network and specify nodes, links, or subgraphs as seed elements for follow-up tours. Two studies, comprising eight expert and 14 novice analysts, show that data tours reduce exploration effort, support learning about network exploration, and can aid the dissemination of analysis results. NetworkNarratives is available online, together with detailed illustrations for each tour.
false
false
[ "Wenchao Li 0005", "Sarah Schöttler", "James Scott-Brown", "Yun Wang 0012", "Siming Chen 0001", "Huamin Qu", "Benjamin Bach" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.06456v1", "icon": "paper" } ]
CHI
2,023
NFTDisk: Visual Detection of Wash Trading in NFT Markets
10.1145/3544548.3581466
With the growing popularity of Non-Fungible Tokens (NFT), a new type of digital assets, various fraudulent activities have appeared in NFT markets. Among them, wash trading has become one of the most common frauds in NFT markets, which attempts to mislead investors by creating fake trading volumes. Due to the sophisticated patterns of wash trading, only a subset of them can be detected by automatic algorithms, and manual inspection is usually required. We propose NFTDisk, a novel visualization for investors to identify wash trading activities in NFT markets, where two linked visualization modules are presented: a radial visualization module with a disk metaphor to overview NFT transactions and a flow-based visualization module to reveal detailed NFT flows at multiple levels. We conduct two case studies and an in-depth user interview with 14 NFT investors to evaluate NFTDisk. The results demonstrate its effectiveness in exploring wash trading activities in NFT markets.
false
false
[ "Xiaolin Wen", "Yong Wang 0021", "Xuanwu Yue", "Feida Zhu 0001", "Min Zhu" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2302.05863v1", "icon": "paper" } ]
CHI
2,023
Notable: On-the-fly Assistant for Data Storytelling in Computational Notebooks
10.1145/3544548.3580965
Computational notebooks are widely used for data analysis. Their interleaved displays of code and execution results (e.g., visualizations) are welcomed since they enable iterative analysis and preserve the exploration process. However, the communication of data findings remains challenging in computational notebooks. Users have to carefully identify useful findings from useless ones, document them with texts and visual embellishments, and then organize them in different tools. Such workflow greatly increases their workload, according to our interviews with practitioners. To address the challenge, we designed Notable to offer on-the-fly assistance for data storytelling in computational notebooks. It provides intelligent support to minimize the work of documenting and organizing data findings and diminishes the cost of switching between data exploration and storytelling. To evaluate Notable, we conducted a user study with 12 data workers. The feedback from user study participants verifies its effectiveness and usability.
false
false
[ "Haotian Li 0001", "Lu Ying", "Haidong Zhang", "Yingcai Wu", "Huamin Qu", "Yun Wang 0012" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.04059v1", "icon": "paper" } ]
CHI
2,023
On the Design of AI-powered Code Assistants for Notebooks
10.1145/3544548.3580940
AI-powered code assistants, such as Copilot, are quickly becoming a ubiquitous component of contemporary coding contexts. Among these environments, computational notebooks, such as Jupyter, are of particular interest as they provide rich interface affordances that interleave code and output in a manner that allows for both exploratory and presentational work. Despite their popularity, little is known about the appropriate design of code assistants in notebooks. We investigate the potential of code assistants in computational notebooks by creating a design space (reified from a survey of extant tools) and through an interview-design study (with 15 practicing data scientists). Through this work, we identify challenges and opportunities for future systems in this space, such as the value of disambiguation for tasks like data visualization, the potential of tightly scoped domain-specific tools (like linters), and the importance of polite assistants.
false
false
[ "Andrew M. McNutt", "Chenglong Wang", "Robert A. DeLine", "Steven Mark Drucker" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2301.11178v1", "icon": "paper" } ]
CHI
2,023
Passenger Perceptions, Information Preferences, and Usability of Crowding Visualizations on Public Displays in Transit Stations and Vehicles
10.1145/3544548.3581241
Large crowds in public transit stations and vehicles introduce obstacles for wayfinding, hygiene, and physical distancing. Public displays that currently provide on-site transit information could also provide critical crowdedness information. Therefore, we examined people’s crowd perceptions and information preferences before and during the pandemic, and designs for visualizing crowdedness to passengers. We first report survey results with public transit users (n = 303), including the usability results of three crowdedness visualization concepts. Then, we present two animated crowd simulations on public displays that we evaluated in a field study (n = 44). We found that passengers react very positively to crowding information, especially before boarding a vehicle. Visualizing the exact physical spaces occupied on transit vehicles was most useful for avoiding crowded areas. However, visualizing the overall fullness of vehicles was the easiest to understand. We discuss design implications for communicating crowding information to support decision-making and promote a sense of safety.
false
false
[ "Leah Zhang-Kennedy", "Saira Aziz", "Oluwafunminitemi (Temi) Oluwadare", "Lyndon Pan", "Zeyu Wu", "Sydney E. C. Lamorea", "Soda Li", "Michael Sun", "Ville Mäkelä" ]
[]
[]
[]
CHI
2,023
Pearl: Physical Environment based Augmented Reality Lenses for In-Situ Human Movement Analysis
10.1145/3544548.3580715
This paper presents Pearl, a mixed-reality approach for the analysis of human movement data in situ. As the physical environment shapes human motion and behavior, the analysis of such motion can benefit from the direct inclusion of the environment in the analytical process. We present methods for exploring movement data in relation to surrounding regions of interest, such as objects, furniture, and architectural elements. We introduce concepts for selecting and filtering data through direct interaction with the environment, and a suite of visualizations for revealing aggregated and emergent spatial and temporal relations. More sophisticated analysis is supported through complex queries comprising multiple regions of interest. To illustrate the potential of Pearl, we developed an Augmented Reality-based prototype and conducted expert review sessions and scenario walkthroughs in a simulated exhibition. Our contribution lays the foundation for leveraging the physical environment in the in-situ analysis of movement data.
false
false
[ "Weizhou Luo", "Zhongyuan Yu", "Rufat Rzayev", "Marc Satkowski", "Stefan Gumhold", "Matthew McGinity", "Raimund Dachselt" ]
[]
[]
[]
CHI
2,023
Perceptual Pat: A Virtual Human Visual System for Iterative Visualization Design
10.1145/3544548.3580974
Designing a visualization is often a process of iterative refinement where the designer improves a chart over time by adding features, improving encodings, and fixing mistakes. However, effective design requires external critique and evaluation. Unfortunately, such critique is not always available on short notice and evaluation can be costly. To address this need, we present Perceptual Pat, an extensible suite of AI and computer vision techniques that forms a virtual human visual system for supporting iterative visualization design. The system analyzes snapshots of a visualization using an extensible set of filters—including gaze maps, text recognition, color analysis, etc—and generates a report summarizing the findings. The web-based Pat Design Lab provides a version tracking system that enables the designer to track improvements over time. We validate Perceptual Pat using a longitudinal qualitative study involving 4 professional visualization designers that used the tool over a few days to design a new visualization.
false
false
[ "Sungbok Shin", "Sanghyun Hong 0001", "Niklas Elmqvist" ]
[]
[]
[]
CHI
2,023
Polagons: Designing and Fabricating Polarized Light Mosaics with User-Defined Color-Changing Behaviors
10.1145/3544548.3580639
Polarized light mosaics (PLMs) are color-changing structures that alter their appearance based on the orientation of incident polarized light. While a few artists have developed techniques for crafting PLMs by hand, the underlying material properties are difficult to reason about; there exist no tools to bridge the high-level design objectives with the low-level physics knowledge needed to create PLMs. In this paper, we introduce the first system for creating Polagons: machine-made PLMs crafted from cellophane with user-defined color changing behaviors. Our system includes an interface for designing and visualizing Polagons as well as a fabrication process based on laser cutting and welding that requires minimal assembly by the user. We define the design space for Polagons and demonstrate how formalizing the process for creating PLMs can enable new applications in fields such as education, data visualization, and fashion.
false
false
[ "Ticha Sethapakdi", "Laura Huang", "Vivian Hsinyueh Chan", "Lung-Pan Cheng", "Fernando Fuzinatto Dall'Agnol", "Mackenzie Leake", "Stefanie Mueller 0001" ]
[]
[]
[]
CHI
2,023
ProxSituated Visualization: An Extended Model of Situated Visualization using Proxies for Physical Referents
10.1145/3544548.3580952
Existing situated visualization models assume the user is able to directly interact with the objects and spaces to which the data refers (known as physical referents). We review a growing body of work exploring scenarios where the user interacts with a proxy representation of the physical referent rather than immediately with the object itself. This introduces a complex mixture of immediate situatedness and proxies of situatedness that goes beyond the expressiveness of current models. We propose an extended model of situated visualization that encompasses Immediate Situated Visualization and ProxSituated (Proxy of Situated) Visualization. Our model describes a set of key entities involved in proxSituated scenarios and important relationships between them. From this model, we derive design dimensions and apply them to existing situated visualization work. The resulting design space allows us to describe and evaluate existing scenarios, as well as to creatively generate new conceptual scenarios.
false
false
[ "Kadek Ananta Satriadi", "Andrew Cunningham", "Ross T. Smith", "Tim Dwyer", "Adam Drogemuller", "Bruce H. Thomas" ]
[]
[]
[]
CHI
2,023
Rapsai: Accelerating Machine Learning Prototyping of Multimedia Applications through Visual Programming
10.1145/3544548.3581338
In recent years, there has been a proliferation of multimedia applications that leverage machine learning (ML) for interactive experiences. Prototyping ML-based applications is, however, still challenging, given complex workflows that are not ideal for design and experimentation. To better understand these challenges, we conducted a formative study with seven ML practitioners to gather insights about common ML evaluation workflows. The study helped us derive six design goals, which informed Rapsai, a visual programming platform for rapid and iterative development of end-to-end ML-based multimedia applications. Rapsai features a node-graph editor to facilitate interactive characterization and visualization of ML model performance. Rapsai streamlines end-to-end prototyping with interactive data augmentation and model comparison capabilities in its no-coding environment. Our evaluation of Rapsai in four real-world case studies (N=15) suggests that practitioners can accelerate their workflow, make more informed decisions, analyze strengths and weaknesses, and holistically evaluate model behavior with real-world input.
false
false
[ "Ruofei Du", "Na Li", "Jing Jin", "Michelle Carney", "Scott Miles", "Maria Kleiner", "Xiuxiu Yuan", "Yinda Zhang 0001", "Anuva Kulkarni", "Xingyu Liu", "Ahmed Sabie", "Sergio Orts-Escolano", "Abhishek Kar", "Ping Yu", "Ram Iyengar", "Adarsh Kowdle", "Alex Olwal" ]
[ "HM" ]
[]
[]
CHI
2,023
Showing Flow: Comparing Usability of Chord and Sankey Diagrams
10.1145/3544548.3581119
Chord and Sankey diagrams are two common techniques for visualizing flows. Chord diagrams use a radial layout with a single circular axis, and Sankey diagrams use a left-to-right layout with two vertical axes. Previous work suggests both strengths and weaknesses of the radial approach, but little is known about the usability and interpretability of these two layout styles for showing flow. We carried out a study where participants answered questions using equivalent Chord and Sankey diagrams. We measured completion time, errors, perceived effort, and preference. Our results show that participants took substantially longer to answer questions with Chord diagrams and made more errors; participants also rated Chord as requiring more effort, and strongly preferred Sankey diagrams. Our study identifies and explains limitations of the popular Chord layout, provides new understanding about radial vs. linear layouts that can help guide visualization designers, and identifies possible design improvements for both visualization types.
false
false
[ "Carl Gutwin", "Aristides Mairena", "Venkat Bandi" ]
[]
[]
[]
CHI
2,023
Soliloquy: Fostering Poetry Comprehension Using an Interactive Think-aloud Visualization
10.1145/3544548.3581374
Complex texts like poetry are distinct from informative texts, requiring additional subprocesses to decode and interpret. Approaching a poem without knowledge of these cognitive strategies can result in confusion and frustration—rather than comprehension. In this work, we explore how interfaces can surface and demonstrate these cognitive processes to novice readers. We introduce Soliloquy, an interface that visualizes the thoughts of an expert as they read and interpret a poem by using animations of text and pop-up tooltips. We evaluate the interface in a five-condition Mechanical Turk study (n=254) by varying the detail of thoughts, including audio, and substituting a static text control. Our study detected a significant difference in comprehension between the detail of thoughts, but not between the Soliloquy interface and static text control. We further investigate this finding in a think-aloud study (n=13), revealing the impact individual differences, experience, and cognitive load could have on Soliloquy’s effectiveness.
false
false
[ "Zak Risha", "Deniz Sonmez Unal", "Erin Walker" ]
[]
[]
[]
CHI
2,023
Speech-Augmented Cone-of-Vision for Exploratory Data Analysis
10.1145/3544548.3581283
Mutual awareness of visual attention is crucial for successful collaboration. Previous research has explored various ways to represent visual attention, such as field-of-view visualizations and cursor visualizations based on eye-tracking, but these methods have limitations. Verbal communication is often utilized as a complementary strategy to overcome such disadvantages. This paper proposes a novel method that combines verbal communication with the Cone of Vision to improve gaze inference and mutual awareness in VR. We conducted a within-group study with pairs of participants who performed a collaborative analysis of data visualizations in VR. We found that our proposed method provides a better approximation of eye gaze than the approximation provided by head direction. Furthermore, we release the first collaborative head, eyes, and verbal behaviour dataset. The results of this study provide a foundation for investigating the potential of verbal communication as a tool for enhancing visual cues for joint attention.
false
false
[ "Riccardo Bovo", "Daniele Giunchi", "Ludwig Sidenmark", "Joshua Newn", "Hans Gellersen", "Enrico Costanza", "Thomas Heinis" ]
[]
[]
[]
CHI
2,023
The tactile dimension: a method for physicalizing touch behaviors
10.1145/3544548.3581137
Traces of touch provide valuable insight into how we interact with the physical world. Measuring touch behavior, however, is expensive and imprecise. Utilizing a fluorescent UV tracer powder, we developed a low-cost analog method to capture persistent, high-contrast touch records on arbitrary objects. We describe our process for selecting a tracer, methods for capturing, enhancing, and aggregating traces, and approaches to examining qualitative aspects of the user experience. Three user studies demonstrate key features of this method. First, we show that it provides clear and durable traces on objects representative of scientific visualization, physicalization, and product design. Second, we demonstrate how this method could be used to study touch perception, by measuring how task and narrative framing elicit different touch behaviors on the same object. Third, we demonstrate how this method can be used to evaluate data physicalizations by observing how participants touch two different physicalizations of COVID-19 time-series data.
false
false
[ "Laura J. Perovich", "Bernice E. Rogowitz", "Victoria Crabb", "Jack Vogelsang", "Sara Hartleben", "Dietmar Offenhuber" ]
[]
[]
[]
CHI
2,023
This Watchface Fits with my Tattoos: Investigating Customisation Needs and Preferences in Personal Tracking
10.1145/3544548.3580955
People engage in self-tracking with diverse data collection and visualisation needs and preferences. Customisable self-tracking tools offer the potential to support individualized preferences by letting people make changes to the aesthetics and functionality of tracker displays. In this paper, we use the customisation options offered by the displays of commercial fitness smartwatches as a lens to investigate when, why and how 386 self-trackers engage in customisations in their daily lives. We find that people largely customise their trackers’ display frequently, multiple times a day, or not at all, with frequent customisations reflecting situational data, aesthetic and personal meaning needs. We discuss implications for the design of tracking tools aiming to support customisation and discuss the utility of customisations towards goal scaffolding and maintaining interest in tracking.
false
false
[ "Rúben Gouveia 0001", "Daniel A. Epstein" ]
[]
[]
[]
CHI
2,023
Through Their Eyes and In Their Shoes: Providing Group Awareness During Collaboration Across Virtual Reality and Desktop Platforms
10.1145/3544548.3581093
Many collaborative data analysis situations benefit from collaborators utilizing different platforms. However, maintaining group awareness between team members using diverging devices is difficult, not least because common ground diminishes. A person using head-mounted VR cannot physically see a user on a desktop computer even while co-located, and the desktop user cannot easily relate to the VR user’s 3D workspace. To address this, we propose the “eyes-and-shoes” principles for group awareness and abstract them into four levels of techniques. Furthermore, we evaluate these principles with a qualitative user study of 6 participant pairs synchronously collaborating across distributed desktop and VR head-mounted devices. In this study, we vary the group awareness techniques between participants and explore two visualization contexts within participants. The results of this study indicate that the more visual metaphors and views of participants diverge, the greater the level of group awareness is needed. A copy of this paper, the study preregistration, and all supplemental materials required to reproduce the study are available on OSF (link).
false
false
[ "David Saffo", "Andrea Batch", "Cody Dunne", "Niklas Elmqvist" ]
[]
[]
[]
CHI
2,023
Tracing and Visualizing Human-ML/AI Collaborative Processes through Artifacts of Data Work
10.1145/3544548.3580819
Automated Machine Learning (AutoML) technology can lower barriers in data work yet still requires human intervention to be functional. However, the complex and collaborative process resulting from humans and machines trading off work makes it difficult to trace what was done, by whom (or what), and when. In this research, we construct a taxonomy of data work artifacts that captures AutoML and human processes. We present a rigorous methodology for its creation and discuss its transferability to the visual design process. We operationalize the taxonomy through the development of AutoML Trace, a visual interactive sketch showing both the context and temporality of human-ML/AI collaboration in data work. Finally, we demonstrate the utility of our approach via a usage scenario with an enterprise software development team. Collectively, our research process and findings explore challenges and fruitful avenues for developing data visualization tools that interrogate the sociotechnical relationships in automated data work. Availability of Supplemental Materials: https://osf.io/3nmyj/?view_only=19962103d58b45d289b5c83421f48b36
false
false
[ "Jen Rogers", "Anamaria Crisan" ]
[ "HM" ]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.02699v1", "icon": "paper" } ]
CHI
2,023
Troubling Collaboration: Matters of Care for Visualization Design Study
10.1145/3544548.3581168
A common research process in visualization is for visualization researchers to collaborate with domain experts to solve particular applied data problems. While there is existing guidance and expertise around how to structure collaborations to strengthen research contributions, there is comparatively little guidance on how to navigate the implications of, and power produced through the socio-technical entanglements of collaborations. In this paper, we qualitatively analyze reflective interviews of past participants of collaborations from multiple perspectives: visualization graduate students, visualization professors, and domain collaborators. We juxtapose the perspectives of these individuals, revealing tensions about the tools that are built and the relationships that are formed — a complex web of competing motivations. Through the lens of matters of care, we interpret this web, concluding with considerations that both trouble and necessitate reformation of current patterns around collaborative work in visualization design studies to promote more equitable, useful, and care-ful outcomes.
false
false
[ "Derya Akbaba", "Devin Lange", "Michael Correll", "Alexander Lex", "Miriah Meyer" ]
[]
[]
[]
CHI
2,023
Tutor In-sight: Guiding and Visualizing Students' Attention with Mixed Reality Avatar Presentation Tools
10.1145/3544548.3581069
Remote conferencing systems are increasingly used to supplement or even replace in-person teaching. However, prevailing conferencing systems restrict the teacher’s representation to a webcam live-stream, hamper the teacher’s use of body-language, and result in students’ decreased sense of co-presence and participation. While Virtual Reality (VR) systems may increase student engagement, the teacher may not have the time or expertise to conduct the lecture in VR. To address this issue and bridge the requirements between students and teachers, we have developed Tutor In-sight, a Mixed Reality (MR) avatar augmented into the student’s workspace based on four design requirements derived from the existing literature, namely: integrated virtual with physical space, improved teacher’s co-presence through avatar, direct attention with auto-generated body language, and usable workflow for teachers. Two user studies were conducted from the perspectives of students and teachers to determine the advantages of Tutor In-sight in comparison to two existing conferencing systems, Zoom (video-based) and Mozilla Hubs (VR-based). The participants of both studies favoured Tutor In-sight. Among others, this main finding indicates that Tutor In-sight satisfied the needs of both teachers and students. In addition, the participants’ feedback was used to empirically determine the four main teacher requirements and the four main student requirements in order to improve the future design of MR educational tools.
false
false
[ "Santawat Thanyadit", "Matthias Heintz 0001", "Effie L.-C. Law" ]
[]
[]
[]
CHI
2,023
UndoPort: Exploring the Influence of Undo-Actions for Locomotion in Virtual Reality on the Efficiency, Spatial Understanding and User Experience
10.1145/3544548.3581557
When we get lost in Virtual Reality (VR) or want to return to a previous location, we use the same methods of locomotion for the way back as for the way forward. This is time-consuming and requires additional physical orientation changes, increasing the risk of getting tangled in the headsets’ cables. In this paper, we propose the use of undo actions to revert locomotion steps in VR. We explore eight different variations of undo actions as extensions of point&teleport, based on the possibility to undo position and orientation changes together with two different visualizations of the undo step (discrete and continuous). We contribute the results of a controlled experiment with 24 participants investigating the efficiency and orientation of the undo techniques in a radial maze task. We found that the combination of position and orientation undo together with a discrete visualization resulted in the highest efficiency without increasing orientation errors.
false
false
[ "Florian Müller 0003", "Arantxa Ye", "Dominik Schön", "Julian Rasch" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2303.15800v2", "icon": "paper" } ]
CHI
2,023
VisLab: Enabling Visualization Designers to Gather Empirically Informed Design Feedback
10.1145/3544548.3581132
When creating a visualization, designers face various conflicting design choices. They typically rely on their hunches to deal with intricate trade-offs or resort to feedback from their colleagues. On the other hand, researchers have long used empirical methods to derive useful quantitative insights into visualization designs. Taking inspiration from this research tradition, we developed VisLab, an open-source online system to complement the existing qualitative feedback practice and help visualization practitioners run experiments to gather empirically informed design feedback. We surveyed practitioners’ perceptions of quantitative feedback and analyzed the research literature to inform VisLab’s motivation and design. VisLab operationalizes the experiment process using templates and dashboards to make empirical methods amenable for practitioners while supporting sharing and remixing experiments to aid knowledge exchange and validation. We demonstrated the validity of experiments in VisLab and evaluated the usability and potential usefulness of VisLab in visualization design practice.
false
false
[ "Jinhan Choi", "Changhoon Oh", "Yea-Seul Kim", "Nam Wook Kim" ]
[]
[]
[]
CHI
2,023
Visual Belief Elicitation Reduces the Incidence of False Discovery
10.1145/3544548.3580808
Visualization supports exploratory data analysis (EDA), but EDA frequently presents spurious charts, which can mislead people into drawing unwarranted conclusions. We investigate interventions to prevent false discovery from visualized data. We evaluate whether eliciting analyst beliefs helps guard against the over-interpretation of noisy visualizations. In two experiments, we exposed participants to both spurious and ‘true’ scatterplots, and assessed their ability to infer data-generating models that underlie those samples. Participants who underwent prior belief elicitation made 21% more correct inferences along with 12% fewer false discoveries. This benefit was observed across a variety of sample characteristics, suggesting broad utility to the intervention. However, additional interventions to highlight counterevidence and sample uncertainty did not provide significant advantage. Our findings suggest that lightweight, belief-driven interactions can yield a reliable, if moderate, reduction in false discovery. This work also suggests future directions to improve visual inference and reduce bias.
false
false
[ "Ratanond Koonchanok", "Gauri Yatindra Tawde", "Gokul Ragunandhan Narayanasamy", "Shalmali Walimbe", "Khairi Reda" ]
[ "HM" ]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2301.12512v1", "icon": "paper" } ]
CHI
2,023
Visual Task Performance and Spatial Abilities: An Investigation of Artists and Mathematicians
10.1145/3544548.3580765
This study builds on past research to present a domain-specific empirical investigation of artists and math & computer scientists on their respective relationships to, perceptions of, and interactions with data visualization. We conducted a three-phase study utilizing mixed-methods to investigate performance on visual and text representations of data between domains. Our findings evidenced how math & computer scientists are proficient utilizing text representations of data while artists benefit more from visual chart representations. Finally, we present perspectives from artists to gain an understanding of their approach to visual and mathematical tasks. Our findings indicate that artists are especially adept at statistical visual tasks and that development of cognitive skills could be fostered by individuals to potentially benefit visualization task performance.
false
false
[ "Sara Tandon", "Alfie Abdul-Rahman", "Rita Borgo" ]
[]
[]
[]
CHI
2,023
Visualization of Speech Prosody and Emotion in Captions: Accessibility for Deaf and Hard-of-Hearing Users
10.1145/3544548.3581511
Speech is expressive in ways that caption text does not capture, with emotion or emphasis information not conveyed. We interviewed eight Deaf and Hard-of-Hearing (dhh) individuals to understand if and how captions’ inexpressiveness impacts them in online meetings with hearing peers. Automatically captioned speech, we found, lacks affective depth, lending it a hard-to-parse ambiguity and general dullness. Interviewees regularly feel excluded, which some understand is an inherent quality of these types of meetings rather than a consequence of current caption text design. Next, we developed three novel captioning models that depicted, beyond words, features from prosody, emotions, and a mix of both. In an empirical study, 16 dhh participants compared these models with conventional captions. The emotion-based model outperformed traditional captions in depicting emotions and emphasis, with only a moderate loss in legibility, suggesting its potential as a more inclusive design for captions.
false
false
[ "Caluã de Lacerda Pataca", "Matthew Watkins", "Roshan L. Peiris", "Sooyeon Lee", "Matt Huenerfauth" ]
[]
[]
[]
CHI
2,023
VizProg: Identifying Misunderstandings By Visualizing Students' Coding Progress
10.1145/3544548.3581516
Programming instructors often conduct in-class exercises to help them identify students that are falling behind and surface students’ misconceptions. However, as we found in interviews with programming instructors, monitoring students’ progress during exercises is difficult, particularly for large classes. We present VizProg, a system that allows instructors to monitor and inspect students’ coding progress in real-time during in-class exercises. VizProg represents students’ statuses as a 2D Euclidean spatial map that encodes the students’ problem-solving approaches and progress in real-time. VizProg allows instructors to navigate the temporal and structural evolution of students’ code, understand relationships between code, and determine when to provide feedback. A comparison experiment showed that VizProg helped to identify more students’ problems than a baseline system. VizProg also provides richer and more comprehensive information for identifying important student behavior. By managing students’ activities at scale, this work presents a new paradigm for improving the quality of live learning.
false
false
[ "Ashley Ge Zhang", "Yan Chen 0033", "Steve Oney" ]
[]
[]
[]
CHI
2,023
VRGit: A Version Control System for Collaborative Content Creation in Virtual Reality
10.1145/3544548.3581136
Immersive authoring tools allow users to intuitively create and manipulate 3D scenes while immersed in Virtual Reality (VR). Collaboratively designing these scenes is a creative process that involves numerous edits, explorations of design alternatives, and frequent communication with collaborators. Version Control Systems (VCSs) help users achieve this by keeping track of the version history and creating a shared hub for communication. However, most VCSs are unsuitable for managing the version history of VR content because their underlying line differencing mechanism is designed for text and lacks the semantic information of 3D content; and the widely adopted commit model is designed for asynchronous collaboration rather than real-time awareness and communication in VR. We introduce VRGit, a new collaborative VCS that visualizes version history as a directed graph composed of 3D miniatures, and enables users to easily navigate versions, create branches, as well as preview and reuse versions directly in VR. Beyond individual uses, VRGit also facilitates synchronous collaboration in VR by providing awareness of users’ activities and version history through portals and shared history visualizations. In a lab study with 14 participants (seven groups), we demonstrate that VRGit enables users to easily manage version history both individually and collaboratively in VR.
false
false
[ "Lei Zhang", "Ashutosh Agrawal", "Steve Oney", "Anhong Guo" ]
[]
[]
[]
CHI
2,023
We are the Data: Challenges and Opportunities for Creating Demographically Diverse Anthropographics
10.1145/3544548.3581086
Anthropographics are human-shaped visualizations that aim to emphasize the human importance of datasets and the people behind them. However, current anthropographics tend to employ homogeneous human shapes to encode data about diverse demographic groups. Such anthropographics can obscure important differences between groups and contemporary designs exemplify the lack of inclusive approaches for representing human diversity in visualizations. In response, we explore the creation of demographically diverse anthropographics that communicate the visible diversity of demographically distinct populations. Building on previous anthropographics research, we explore strategies for visualizing datasets about people in ways that explicitly encode diversity—illustrating these approaches with examples in a variety of visual styles. We also critically reflect on strategies for creating diverse anthropographics, identifying social and technical challenges that can result in harmful representations. Finally, we highlight a set of forward-looking research opportunities for advancing the design and understanding of diverse anthropographics.
false
false
[ "Priya Dhawka", "Helen Ai He", "Wesley Willett" ]
[]
[]
[]
CHI
2,023
When do data visualizations persuade? The impact of prior attitudes on learning about correlations from scatterplot visualizations
10.1145/3544548.3581330
Data visualizations are vital to scientific communication on critical issues such as public health, climate change, and socioeconomic policy. They are often designed not just to inform, but to persuade people to make consequential decisions (e.g., to get vaccinated). Are such visualizations persuasive, especially when audiences have beliefs and attitudes that the data contradict? In this paper we examine the impact of existing attitudes (e.g., positive or negative attitudes toward COVID-19 vaccination) on changes in beliefs about statistical correlations when viewing scatterplot visualizations with different representations of statistical uncertainty. We find that strong prior attitudes are associated with smaller belief changes when presented with data that contradicts existing views, and that visual uncertainty representations may amplify this effect. Finally, even when participants’ beliefs about correlations shifted, their attitudes remained unchanged, highlighting the need for further research on whether data visualizations can drive longer-term changes in views and behavior.
false
false
[ "Douglas Markant", "Milad Rogha", "Alireza Karduni", "Ryan Wesslen", "Wenwen Dou" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2302.03776v1", "icon": "paper" } ]
CHI
2,023
Who Do We Mean When We Talk About Visualization Novices?
10.1145/3544548.3581524
As more people rely on visualization to inform their personal and collective decisions, researchers have focused on a broader range of audiences, including “novices.” But successfully applying, interrogating, or advancing visualization research for novices demands a clear understanding of what “novice” means in theory and practice. Misinterpreting who a “novice” is could lead to misapplying guidelines and overgeneralizing results. In this paper, we investigated how visualization researchers define novices and how they evaluate visualizations intended for novices. We analyzed 79 visualization papers that used “novice,” “non-expert,” “laypeople,” or “general public” in their titles or abstracts. We found ambiguity within papers and disagreement between papers regarding what defines a novice. Furthermore, we found a mismatch between the broad language describing novices and the narrow population representing them in evaluations (i.e., young people, students, and US residents). We suggest directions for inclusively supporting novices in both theory and practice.
false
false
[ "Alyxander Burns", "Christiana Lee", "Ria Chawla", "Evan Peck", "Narges Mahyar" ]
[ "BP" ]
[]
[]
CHI
2,023
Why Combining Text and Visualization Could Improve Bayesian Reasoning: A Cognitive Load Perspective
10.1145/3544548.3581218
Investigations into using visualization to improve Bayesian reasoning and advance risk communication have produced mixed results, suggesting that cognitive ability might affect how users perform with different presentation formats. Our work examines the cognitive load elicited when solving Bayesian problems using icon arrays, text, and a juxtaposition of text and icon arrays. We used a three-pronged approach to capture a nuanced picture of cognitive demand and measure differences in working memory capacity, performance under divided attention using a dual-task paradigm, and subjective ratings of self-reported effort. We found that individuals with low working memory capacity made fewer errors and experienced less subjective workload when the problem contained an icon array compared to text alone, showing that visualization improves accuracy while exerting less cognitive demand. We believe these findings can considerably impact accessible risk communication, especially for individuals with low working memory capacity.
false
false
[ "Melanie Bancilhon", "Amanda Wright", "Sunwoo Ha", "R. Jordan Crouser", "Alvitta Ottley" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2302.00707v1", "icon": "paper" } ]
CHI
2,023
Working with Forensic Practitioners to Understand the Opportunities and Challenges for Mixed-Reality Digital Autopsy
10.1145/3544548.3580768
Forensic practitioners analyse intrinsic 3D data daily on 2D screens. We explore novel immersive visualisation techniques that enable digital autopsy through analysis of 3D imagery. We employ a user-centred design process involving four rounds of user feedback: (1) formative interviews eliciting opportunities and requirements for mixed-reality digital autopsies; (2) a larger workshop identifying our prototype’s limitations and further use-cases and interaction ideas; (3+4) two rounds of qualitative user validation of successive prototypes of novel interaction techniques for pathologist sensemaking. Overall, we find MR holds great potential to enable digital autopsy, initially to supplement physical autopsy, but ultimately to replace it. We found that experts were able to use our tool to perform basic virtual autopsy tasks, MR setup promotes exploration and sense making of cause of death, and subject to limitations of current MR technology, the proposed system is a valid option for digital autopsies, according to experts’ feedback. – Warning: This paper contains sensitive images which are 3D visualisation of deceased people.
false
false
[ "Vahid Pooryousef", "Maxime Cordeil", "Lonni Besançon", "Christophe Hurter", "Tim Dwyer", "Richard Bassed" ]
[]
[]
[]
CHI
2,023
Zeno: An Interactive Framework for Behavioral Evaluation of Machine Learning
10.1145/3544548.3581268
Machine learning models with high accuracy on test data can still produce systematic failures, such as harmful biases and safety issues, when deployed in the real world. To detect and mitigate such failures, practitioners run behavioral evaluation of their models, checking model outputs for specific types of inputs. Behavioral evaluation is important but challenging, requiring that practitioners discover real-world patterns and validate systematic failures. We conducted 18 semi-structured interviews with ML practitioners to better understand the challenges of behavioral evaluation and found that it is a collaborative, use-case-first process that is not adequately supported by existing task- and domain-specific tools. Using these findings, we designed zeno, a general-purpose framework for visualizing and testing AI systems across diverse use cases. In four case studies with participants using zeno on real-world models, we found that practitioners were able to reproduce previous manual analyses and discover new systematic failures.
false
false
[ "Ángel Alexander Cabrera", "Erica Fu", "Donald Bertucci", "Kenneth Holstein", "Ameet Talwalkar", "Jason I. Hong", "Adam Perer" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2302.04732v1", "icon": "paper" } ]
Vis
2,022
A Comparison of Spatiotemporal Visualizations for 3D Urban Analytics
10.1109/TVCG.2022.3209474
Recent technological innovations have led to an increase in the availability of 3D urban data, such as shadow, noise, solar potential, and earthquake simulations. These spatiotemporal datasets create opportunities for new visualizations to engage experts from different domains to study the dynamic behavior of urban spaces in this under explored dimension. However, designing 3D spatiotemporal urban visualizations is challenging, as it requires visual strategies to support analysis of time-varying data referent to the city geometry. Although different visual strategies have been used in 3D urban visual analytics, the question of how effective these visual designs are at supporting spatiotemporal analysis on building surfaces remains open. To investigate this, in this paper we first contribute a series of analytical tasks elicited after interviews with practitioners from three urban domains. We also contribute a quantitative user study comparing the effectiveness of four representative visual designs used to visualize 3D spatiotemporal urban data: spatial juxtaposition, temporal juxtaposition, linked view, and embedded view. Participants performed a series of tasks that required them to identify extreme values on building surfaces over time. Tasks varied in granularity for both space and time dimensions. Our results demonstrate that participants were more accurate using plot-based visualizations (linked view, embedded view) but faster using color-coded visualizations (spatial juxtaposition, temporal juxtaposition). Our results also show that, with increasing task complexity, plot-based visualizations perform better in preserving efficiency (time, accuracy) compared to color-coded visualizations. Based on our findings, we present a set of takeaways with design recommendations for 3D spatiotemporal urban visualizations for researchers and practitioners. Lastly, we report on a series of interviews with four practitioners, and their feedback and suggestions for further work on the visualizations to support 3D spatiotemporal urban data analysis.
false
false
[ "Roberta C. Ramos Mota", "Nivan Ferreira", "Julio Daniel Silva", "Marius Horga", "Marcos Lage", "Luis Ceferino", "Usman R. Alim", "Ehud Sharlin", "Fabio Miranda 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.05370v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/TyrUZWjKRw0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/qgoNzM9SeUw", "icon": "video" } ]
Vis
2,022
A Design Space for Surfacing Content Recommendations in Visual Analytic Platforms
10.1109/TVCG.2022.3209445
Recommendation algorithms have been leveraged in various ways within visualization systems to assist users as they perform of a range of information tasks. One common focus for these techniques has been the recommendation of content, rather than visual form, as a means to assist users in the identification of information that is relevant to their task context. A wide variety of techniques have been proposed to address this general problem, with a range of design choices in how these solutions surface relevant information to users. This paper reviews the state-of-the-art in how visualization systems surface recommended content to users during users' visual analysis; introduces a four-dimensional design space for visual content recommendation based on a characterization of prior work; and discusses key observations regarding common patterns and future research opportunities.
false
false
[ "Zhilan Zhou", "Wenyuan Wang", "Mengtian Guo", "Yue Wang 0035", "David Gotz" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.04219v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/xzaOTSubu4Y", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/CIdzyf9xHIs", "icon": "video" } ]
Vis
2,022
A Framework for Multiclass Contour Visualization
10.1109/TVCG.2022.3209482
Multiclass contour visualization is often used to interpret complex data attributes in such fields as weather forecasting, computational fluid dynamics, and artificial intelligence. However, effective and accurate representations of underlying data patterns and correlations can be challenging in multiclass contour visualization, primarily due to the inevitable visual cluttering and occlusions when the number of classes is significant. To address this issue, visualization design must carefully choose design parameters to make visualization more comprehensible. With this goal in mind, we proposed a framework for multiclass contour visualization. The framework has two components: a set of four visualization design parameters, which are developed based on an extensive review of literature on contour visualization, and a declarative domain-specific language (DSL) for creating multiclass contour rendering, which enables a fast exploration of those design parameters. A task-oriented user study was conducted to assess how those design parameters affect users' interpretations of real-world data. The study results offered some suggestions on the value choices of design parameters in multiclass contour visualization.
false
false
[ "Sihang Li", "Jiacheng Yu", "Mingxuan Li", "Le Liu", "Xiaolong Zhang 0001", "Xiaoru Yuan" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/ZUFIRLy3McE", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/ZTwky8yd_yY", "icon": "video" } ]
Vis
2,022
A Scanner Deeply: Predicting Gaze Heatmaps on Visualizations Using Crowdsourced Eye Movement Data
10.1109/TVCG.2022.3209472
Visual perception is a key component of data visualization. Much prior empirical work uses eye movement as a proxy to understand human visual perception. Diverse apparatus and techniques have been proposed to collect eye movements, but there is still no optimal approach. In this paper, we review 30 prior works for collecting eye movements based on three axes: (1) the tracker technology used to measure eye movements; (2) the image stimulus shown to participants; and (3) the collection methodology used to gather the data. Based on this taxonomy, we employ a webcam-based eyetracking approach using task-specific visualizations as the stimulus. The low technology requirement means that virtually anyone can participate, thus enabling us to collect data at large scale using crowdsourcing: approximately 12,000 samples in total. Choosing visualization images as stimulus means that the eye movements will be specific to perceptual tasks associated with visualization. We use these data to propose a Scanner Deeply, a virtual eyetracker model that, given an image of a visualization, generates a gaze heatmap for that image. We employ a computationally efficient, yet powerful convolutional neural network for our model. We compare the results of our work with results from the DVS model and a neural network trained on the Salicon dataset. The analysis of our gaze patterns enables us to understand how users grasp the structure of visualized data. We also make our stimulus dataset of visualization images available as part of this paper's contribution.
false
false
[ "Sungbok Shin", "Sunghyo Chung", "Sanghyun Hong 0001", "Niklas Elmqvist" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/kZyUfaertD8", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/W_5DWEkUPeA", "icon": "video" } ]
Vis
2,022
A Unified Comparison of User Modeling Techniques for Predicting Data Interaction and Detecting Exploration Bias
10.1109/TVCG.2022.3209476
The visual analytics community has proposed several user modeling algorithms to capture and analyze users' interaction behavior in order to assist users in data exploration and insight generation. For example, some can detect exploration biases while others can predict data points that the user will interact with before that interaction occurs. Researchers believe this collection of algorithms can help create more intelligent visual analytics tools. However, the community lacks a rigorous evaluation and comparison of these existing techniques. As a result, there is limited guidance on which method to use and when. Our paper seeks to fill in this missing gap by comparing and ranking eight user modeling algorithms based on their performance on a diverse set of four user study datasets. We analyze exploration bias detection, data interaction prediction, and algorithmic complexity, among other measures. Based on our findings, we highlight open challenges and new directions for analyzing user interactions and visualization provenance.
false
false
[ "Sunwoo Ha", "Shayan Monadjemi", "Roman Garnett", "Alvitta Ottley" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.05021v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/ZfXk_wFENmY", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/3c3wp4MtdFQ", "icon": "video" } ]
Vis
2,022
A Visual Analytics System for Improving Attention-based Traffic Forecasting Models
10.1109/TVCG.2022.3209462
With deep learning (DL) outperforming conventional methods for different tasks, much effort has been devoted to utilizing DL in various domains. Researchers and developers in the traffic domain have also designed and improved DL models for forecasting tasks such as estimation of traffic speed and time of arrival. However, there exist many challenges in analyzing DL models due to the black-box property of DL models and complexity of traffic data (i.e., spatio-temporal dependencies). Collaborating with domain experts, we design a visual analytics system, AttnAnalyzer, that enables users to explore how DL models make predictions by allowing effective spatio-temporal dependency analysis. The system incorporates dynamic time warping (DTW) and Granger causality tests for computational spatio-temporal dependency analysis while providing map, table, line chart, and pixel views to assist user to perform dependency and model behavior analysis. For the evaluation, we present three case studies showing how AttnAnalyzer can effectively explore model behaviors and improve model performance in two different road networks. We also provide domain expert feedback.
false
false
[ "Seungmin Jin", "Hyunwook Lee", "Cheonbok Park", "Hyeshin Chu", "Yunwon Tae", "Jaegul Choo", "Sungahn Ko" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.04350v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/vv-eZ1lHGoo", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/JiObcRiyv9Q", "icon": "video" } ]
Vis
2,022
Affective Learning Objectives for Communicative Visualizations
10.1109/TVCG.2022.3209500
When designing communicative visualizations, we often focus on goals that seek to convey patterns, relations, or comparisons (cognitive learning objectives). We pay less attention to affective intents–those that seek to influence or leverage the audience's opinions, attitudes, or values in some way. Affective objectives may range in outcomes from making the viewer care about the subject, strengthening a stance on an opinion, or leading them to take further action. Because such goals are often considered a violation of perceived ‘neutrality’ or are ‘political,’ designers may resist or be unable to describe these intents, let alone formalize them as learning objectives. While there are notable exceptions–such as advocacy visualizations or persuasive cartography–we find that visualization designers rarely acknowledge or formalize affective objectives. Through interviews with visualization designers, we expand on prior work on using learning objectives as a framework for describing and assessing communicative intent. Specifically, we extend and revise the framework to include a set of affective learning objectives. This structured taxonomy can help designers identify and declare their goals and compare and assess designs in a more principled way. Additionally, the taxonomy can enable external critique and analysis of visualizations. We illustrate the use of the taxonomy with a critical analysis of an affective visualization.
false
false
[ "Elsie Lee-Robbins", "Eytan Adar" ]
[ "BP" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.04078v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/UrW92ubvSdo", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/2MJlzAd9Ua0", "icon": "video" } ]
Vis
2,022
Animated Vega-Lite: Unifying Animation with a Grammar of Interactive Graphics
10.1109/TVCG.2022.3209369
We present Animated Vega-Lite, a set of extensions to Vega-Lite that model animated visualizations as time-varying data queries. In contrast to alternate approaches for specifying animated visualizations, which prize a highly expressive design space, Animated Vega-Lite prioritizes unifying animation with the language's existing abstractions for static and interactive visualizations to enable authors to smoothly move between or combine these modalities. Thus, to compose animation with static visualizations, we represent time as an encoding channel. Time encodings map a data field to animation keyframes, providing a lightweight specification for animations without interaction. To compose animation and interaction, we also represent time as an event stream; Vega-Lite selections, which provide dynamic data queries, are now driven not only by input events but by timer ticks as well. We evaluate the expressiveness of our approach through a gallery of diverse examples that demonstrate coverage over taxonomies of both interaction and animation. We also critically reflect on the conceptual affordances and limitations of our contribution by interviewing five expert developers of existing animation grammars. These reflections highlight the key motivating role of in-the-wild examples, and identify three central tradeoffs: the language design process, the types of animated transitions supported, and how the systems model keyframes.
false
false
[ "Jonathan Zong", "Josh Pollock", "Dylan Wootton", "Arvind Satyanarayan" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.03869v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Qe3Foy2h3ag", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/3awOHEVAjME", "icon": "video" } ]
Vis
2,022
ASTF: Visual Abstractions of Time-Varying Patterns in Radio Signals
10.1109/TVCG.2022.3209469
A time-frequency diagram is a commonly used visualization for observing the time-frequency distribution of radio signals and analyzing their time-varying patterns of communication states in radio monitoring and management. While it excels when performing short-term signal analyses, it becomes inadaptable for long-term signal analyses because it cannot adequately depict signal time-varying patterns in a large time span on a space-limited screen. This research thus presents an abstract signal time-frequency (ASTF) diagram to address this problem. In the diagram design, a visual abstraction method is proposed to visually encode signal communication state changes in time slices. A time segmentation algorithm is proposed to divide a large time span into time slices. Three new quantified metrics and a loss function are defined to ensure the preservation of important time-varying information in the time segmentation. An algorithm performance experiment and a user study are conducted to evaluate the effectiveness of the diagram for long-term signal analyses.
false
false
[ "Ying Zhao 0001", "Luhao Ge", "Huixuan Xie", "Genghuai Bai", "Zhao Zhang", "Qiang Wei", "Yun Lin 0005", "Yuchao Liu", "Fangfang Zhou" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.15223v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/z16ClvSo8lQ", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Id5k5AXg7is", "icon": "video" } ]
Vis
2,022
BeauVis: A Validated Scale for Measuring the Aesthetic Pleasure of Visual Representations
10.1109/TVCG.2022.3209390
We developed and validated a rating scale to assess the aesthetic pleasure (or beauty) of a visual data representation: the BeauVis scale. With our work we offer researchers and practitioners a simple instrument to compare the visual appearance of different visualizations, unrelated to data or context of use. Our rating scale can, for example, be used to accompany results from controlled experiments or be used as informative data points during in-depth qualitative studies. Given the lack of an aesthetic pleasure scale dedicated to visualizations, researchers have mostly chosen their own terms to study or compare the aesthetic pleasure of visualizations. Yet, many terms are possible and currently no clear guidance on their effectiveness regarding the judgment of aesthetic pleasure exists. To solve this problem, we engaged in a multi-step research process to develop the first validated rating scale specifically for judging the aesthetic pleasure of a visualization (osf.io/fxs76). Our final BeauVis scale consists of five items, “enjoyable,” “likable,” “pleasing,” “nice,” and “appealing.” Beyond this scale itself, we contribute (a) a systematic review of the terms used in past research to capture aesthetics, (b) an investigation with visualization experts who suggested terms to use for judging the aesthetic pleasure of a visualization, and (c) a confirmatory survey in which we used our terms to study the aesthetic pleasure of a set of 3 visualizations.
false
false
[ "Tingying He", "Petra Isenberg", "Raimund Dachselt", "Tobias Isenberg 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2207.14147v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/QjHI0eHLhRU", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/k9iC6typxYA", "icon": "video" } ]
Vis
2,022
Breaking the Fourth Wall of Data Stories through Interaction
10.1109/TVCG.2022.3209409
Interaction is increasingly integrating into data stories to support data exploration and explanation. Interaction can also be combined with the narrative device, breaking the fourth wall (BTFW), to build a deeper connection between readers and data stories. BTFW interaction directly addresses readers by requiring their input. Such user input is then integrated into the narrative or visuals of data stories to encourage readers to inspect the stories more closely. In this work, we explore the design patterns of BTFW interaction commonly used in data stories. Six design patterns were identified through the analysis of 58 high-quality data stories collected from a range of online sources. Specifically, the data stories were categorized using a coding framework, including the input of BTFW interaction provided by readers and the output of BTFW interaction generated by data stories to respond to the input. To explore the benefits as well as concerns of using BTFW interaction, we conducted a three-session user study including the reading, interview, and recall sessions. The results of our user study suggested that BTFW interaction has a positive impact on self-story connection, user engagement, and information recall. We also discussed design implications to address the possible negative effects on the interactivity-comprehensibility balance, information privacy, and the learning curve of interaction brought by BTFW interaction.
false
false
[ "Yang Shi 0007", "Tian Gao", "Xiaohan Jiao", "Nan Cao" ]
[ "HM" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/m1MwgbOWVxg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/gqdC0w04wxY", "icon": "video" } ]
Vis
2,022
Calibrate: Interactive Analysis of Probabilistic Model Output
10.1109/TVCG.2022.3209489
Analyzing classification model performance is a crucial task for machine learning practitioners. While practitioners often use count-based metrics derived from confusion matrices, like accuracy, many applications, such as weather prediction, sports betting, or patient risk prediction, rely on a classifier's predicted probabilities rather than predicted labels. In these instances, practitioners are concerned with producing a calibrated model, that is, one which outputs probabilities that reflect those of the true distribution. Model calibration is often analyzed visually, through static reliability diagrams, however, the traditional calibration visualization may suffer from a variety of drawbacks due to the strong aggregations it necessitates. Furthermore, count-based approaches are unable to sufficiently analyze model calibration. We present Calibrate, an interactive reliability diagram that addresses the aforementioned issues. Calibrate constructs a reliability diagram that is resistant to drawbacks in traditional approaches, and allows for interactive subgroup analysis and instance-level inspection. We demonstrate the utility of Calibrate through use cases on both real-world and synthetic data. We further validate Calibrate by presenting the results of a think-aloud experiment with data scientists who routinely analyze model calibration.
false
false
[ "Peter Xenopoulos", "João Rulff", "Luis Gustavo Nonato", "Brian Barr", "Cláudio T. Silva" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2207.13770v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/IXfUiI3Lybg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/B55HitnGlw4", "icon": "video" } ]
Vis
2,022
ChartWalk: Navigating large collections of text notes in electronic health records for clinical chart review
10.1109/TVCG.2022.3209444
Before seeing a patient for the first time, healthcare workers will typically conduct a comprehensive clinical chart review of the patient's electronic health record (EHR). Within the diverse documentation pieces included there, text notes are among the most important and thoroughly perused segments for this task; and yet they are among the least supported medium in terms of content navigation and overview. In this work, we delve deeper into the task of clinical chart review from a data visualization perspective and propose a hybrid graphics+text approach via ChartWalk, an interactive tool to support the review of text notes in EHRs. We report on our iterative design process grounded in input provided by a diverse range of healthcare professionals, with steps including: (a) initial requirements distilled from interviews and the literature, (b) an interim evaluation to validate design decisions, and (c) a task-based qualitative evaluation of our final design. We contribute lessons learned to better support the design of tools not only for clinical chart reviews but also other healthcare-related tasks around medical text analysis.
false
false
[ "Nicole Sultanum", "Farooq Naeem", "Michael Brudno", "Fanny Chevalier" ]
[ "HM" ]
[]
[]
Vis
2,022
CohortVA: A Visual Analytic System for Interactive Exploration of Cohorts based on Historical Data
10.1109/TVCG.2022.3209483
In history research, cohort analysis seeks to identify social structures and figure mobilities by studying the group-based behavior of historical figures. Prior works mainly employ automatic data mining approaches, lacking effective visual explanation. In this paper, we present CohortVA, an interactive visual analytic approach that enables historians to incorporate expertise and insight into the iterative exploration process. The kernel of CohortVA is a novel identification model that generates candidate cohorts and constructs cohort features by means of pre-built knowledge graphs constructed from large-scale history databases. We propose a set of coordinated views to illustrate identified cohorts and features coupled with historical events and figure profiles. Two case studies and interviews with historians demonstrate that CohortVA can greatly enhance the capabilities of cohort identifications, figure authentications, and hypothesis generation.
false
false
[ "Wei Zhang", "Jason K. Wong", "Xumeng Wang", "Youcheng Gong", "Rongchen Zhu", "Kai Liu", "Zihan Yan", "Siwei Tan", "Huamin Qu", "Siming Chen 0001", "Wei Chen 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.09237v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/MlxXPJ5FN1A", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Pyn6-kD13Ho", "icon": "video" } ]
Vis
2,022
Communicating Uncertainty in Digital Humanities Visualization Research
10.1109/TVCG.2022.3209436
Due to their historical nature, humanistic data encompass multiple sources of uncertainty. While humanists are accustomed to handling such uncertainty with their established methods, they are cautious of visualizations that appear overly objective and fail to communicate this uncertainty. To design more trustworthy visualizations for humanistic research, therefore, a deeper understanding of its relation to uncertainty is needed. We systematically reviewed 126 publications from digital humanities literature that use visualization as part of their research process, and examined how uncertainty was handled and represented in their visualizations. Crossing these dimensions with the visualization type and use, we identified that uncertainty originated from multiple steps in the research process from the source artifacts to their datafication. We also noted how besides known uncertainty coping strategies, such as excluding data and evaluating its effects, humanists also embraced uncertainty as a separate dimension important to retain. By mapping how the visualizations encoded uncertainty, we identified four approaches that varied in terms of explicitness and customization. This work contributes with two empirical taxonomies of uncertainty and its corresponding coping strategies, as well as with the foundation of a research agenda for uncertainty visualization in the digital humanities. Our findings further the synergy among humanists and visualization researchers, and ultimately contribute to the development of more trustworthy, uncertainty-aware visualizations.
false
false
[ "Georgia Panagiotidou", "Houda Lamqaddam", "Jeroen Poblome", "Koenraad Brosens", "Katrien Verbert", "Andrew Vande Moere" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/YlUu-_7EItI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/mqfGYPMD8gE", "icon": "video" } ]
Vis
2,022
Comparative Evaluation of Bipartite, Node-Link, and Matrix-Based Network Representations
10.1109/TVCG.2022.3209427
This work investigates and compares the performance of node-link diagrams, adjacency matrices, and bipartite layouts for visualizing networks. In a crowd-sourced user study (n = 150), we measure the task accuracy and completion time of the three representations for different network classes and properties. In contrast to the literature, which covers mostly topology-based tasks (e.g., path finding) in small datasets, we mainly focus on overview tasks for large and directed networks. We consider three overview tasks on networks with 500 nodes: (T1) network class identification, (T2) cluster detection, and (T3) network density estimation, and two detailed tasks: (T4) node in-degree vs. out-degree and (T5) representation mapping, on networks with 50 and 20 nodes, respectively. Our results show that bipartite layouts are beneficial for revealing the overall network structure, while adjacency matrices are most reliable across the different tasks.
false
false
[ "Moataz Abdelaal", "Nathan Daniel Schiele", "Katrin Angerbauer", "Kuno Kurzhals", "Michael Sedlmair", "Daniel Weiskopf" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.04458v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/QzNWtkJhnzI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/oNKd4xQCG64", "icon": "video" } ]
Vis
2,022
Comparison Conundrum and the Chamber of Visualizations: An Exploration of How Language Influences Visual Design
10.1109/TVCG.2022.3209456
The language for expressing comparisons is often complex and nuanced, making supporting natural language-based visual comparison a non-trivial task. To better understand how people reason about comparisons in natural language, we explore a design space of utterances for comparing data entities. We identified different parameters of comparison utterances that indicate what is being compared (i.e., data variables and attributes) as well as how these parameters are specified (i.e., explicitly or implicitly). We conducted a user study with sixteen data visualization experts and non-experts to investigate how they designed visualizations for comparisons in our design space. Based on the rich set of visualization techniques observed, we extracted key design features from the visualizations and synthesized them into a subset of sixteen representative visualization designs. We then conducted a follow-up study to validate user preferences for the sixteen representative visualizations corresponding to utterances in our design space. Findings from these studies suggest guidelines and future directions for designing natural language interfaces and recommendation tools to better support natural language comparisons in visual analytics.
false
false
[ "Aimen Gaba", "Vidya Setlur", "Arjun Srinivasan", "Jane Hoffswell", "Cindy Xiong" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.03785v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/XYBfY4IU-CI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/yVpPhEb2BNg", "icon": "video" } ]
Vis
2,022
Computing a Stable Distance on Merge Trees
10.1109/TVCG.2022.3209395
Distances on merge trees facilitate visual comparison of collections of scalar fields. Two desirable properties for these distances to exhibit are 1) the ability to discern between scalar fields which other, less complex topological summaries cannot and 2) to still be robust to perturbations in the dataset. The combination of these two properties, known respectively as stability and discriminativity, has led to theoretical distances which are either thought to be or shown to be computationally complex and thus their implementations have been scarce. In order to design similarity measures on merge trees which are computationally feasible for more complex merge trees, many researchers have elected to loosen the restrictions on at least one of these two properties. The question still remains, however, if there are practical situations where trading these desirable properties is necessary. Here we construct a distance between merge trees which is designed to retain both discriminativity and stability. While our approach can be expensive for large merge trees, we illustrate its use in a setting where the number of nodes is small. This setting can be made more practical since we also provide a proof that persistence simplification increases the outputted distance by at most half of the simplified value. We demonstrate our distance measure on applications in shape comparison and on detection of periodicity in the von Kármán vortex street.
false
false
[ "Brian Bollen", "Pasindu Tennakoon", "Joshua A. Levine" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2210.08644v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/TWudEI4tBlQ", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/pRQ1Q_yCHNc", "icon": "video" } ]
Vis
2,022
ConceptExplainer: Interactive Explanation for Deep Neural Networks from a Concept Perspective
10.1109/TVCG.2022.3209384
Traditional deep learning interpretability methods which are suitable for model users cannot explain network behaviors at the global level and are inflexible at providing fine-grained explanations. As a solution, concept-based explanations are gaining attention due to their human intuitiveness and their flexibility to describe both global and local model behaviors. Concepts are groups of similarly meaningful pixels that express a notion, embedded within the network's latent space and have commonly been hand-generated, but have recently been discovered by automated approaches. Unfortunately, the magnitude and diversity of discovered concepts makes it difficult to navigate and make sense of the concept space. Visual analytics can serve a valuable role in bridging these gaps by enabling structured navigation and exploration of the concept space to provide concept-based insights of model behavior to users. To this end, we design, develop, and validate ConceptExplainer, a visual analytics system that enables people to interactively probe and explore the concept space to explain model behavior at the instance/class/global level. The system was developed via iterative prototyping to address a number of design challenges that model users face in interpreting the behavior of deep learning models. Via a rigorous user study, we validate how ConceptExplainer supports these challenges. Likewise, we conduct a series of usage scenarios to demonstrate how the system supports the interactive analysis of model behavior across a variety of tasks and explanation granularities, such as identifying concepts that are important to classification, identifying bias in training data, and understanding how concepts can be shared across diverse and seemingly dissimilar classes.
false
false
[ "Jinbin Huang", "Aditi Mishra", "Bum Chul Kwon", "Chris Bryan" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2204.01888v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/gltneexyhYs", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/EvArXDWxCXI", "icon": "video" } ]
Vis
2,022
Constrained Dynamic Mode Decomposition
10.1109/TVCG.2022.3209437
Frequency-based decomposition of time series data is used in many visualization applications. Most of these decomposition methods (such as Fourier transform or singular spectrum analysis) only provide interaction via pre- and post-processing, but no means to influence the core algorithm. A method that also belongs to this class is Dynamic Mode Decomposition (DMD), a spectral decomposition method that extracts spatio-temporal patterns from data. In this paper, we incorporate frequency-based constraints into DMD for an adaptive decomposition that leads to user-controllable visualizations, allowing analysts to include their knowledge into the process. To accomplish this, we derive an equivalent reformulation of DMD that implicitly provides access to the eigenvalues (and therefore to the frequencies) identified by DMD. By utilizing a constrained minimization problem customized to DMD, we can guarantee the existence of desired frequencies by minimal changes to DMD. We complement this core approach by additional techniques for constrained DMD to facilitate explorative visualization and investigation of time series data. With several examples, we demonstrate the usefulness of constrained DMD and compare it to conventional frequency-based decomposition methods.
false
false
[ "Tim Krake", "Daniel Klötzl", "Bernd Eberhardt", "Daniel Weiskopf" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/vqahHdOPkzM", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/zc3xLI1wt14", "icon": "video" } ]
Vis
2,022
Cultivating Visualization Literacy for Children Through Curiosity and Play
10.1109/TVCG.2022.3209442
Fostering data visualization literacy (DVL) as part of childhood education could lead to a more data literate society. However, most work in DVL for children relies on a more formal educational context (i.e., a teacher-led approach) that limits children's engagement with data to classroom-based environments and, consequently, children's ability to ask questions about and explore data on topics they find personally meaningful. We explore how a curiosity-driven, child-led approach can provide more agency to children when they are authoring data visualizations. This paper explores how informal learning with crafting physicalizations through play and curiosity may foster increased literacy and engagement with data. Employing a constructionist approach, we designed a do-it-yourself toolkit made out of everyday materials (e.g., paper, cardboard, mirrors) that enables children to create, customize, and personalize three different interactive visualizations (bar, line, pie). We used the toolkit as a design probe in a series of in-person workshops with 5 children (6 to 11-year-olds) and interviews with 5 educators. Our observations reveal that the toolkit helped children creatively engage and interact with visualizations. Children with prior knowledge of data visualization reported the toolkit serving as more of an authoring tool that they envision using in their daily lives, while children with little to no experience found the toolkit as an engaging introduction to data visualization. Our study demonstrates the potential of using the constructionist approach to cultivate children's DVL through curiosity and play.
false
false
[ "Sandra Bae", "Rishi Vanukuru", "Ruhan Yang", "Peter Gyory", "Ran Zhou 0003", "Ellen Yi-Luen Do", "Danielle Albers Szafir" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.05015v1", "icon": "paper" } ]
Vis
2,022
D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias
10.1109/TVCG.2022.3209484
With the rise of AI, algorithms have become better at learning underlying patterns from the training data including ingrained social biases based on gender, race, etc. Deployment of such algorithms to domains such as hiring, healthcare, law enforcement, etc. has raised serious concerns about fairness, accountability, trust and interpretability in machine learning algorithms. To alleviate this problem, we propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases from tabular datasets. It uses a graphical causal model to represent causal relationships among different features in the dataset and as a medium to inject domain knowledge. A user can detect the presence of bias against a group, say females, or a subgroup, say black females, by identifying unfair causal relationships in the causal network and using an array of fairness metrics. Thereafter, the user can mitigate bias by refining the causal model and acting on the unfair causal edges. For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset based on the current causal model while ensuring a minimal change from the original dataset. Users can visually assess the impact of their interactions on different fairness metrics, utility metrics, data distortion, and the underlying data distribution. Once satisfied, they can download the debiased dataset and use it for any downstream application for fairer predictions. We evaluate D-BIAS by conducting experiments on 3 datasets and also a formal user study. We found that D-BIAS helps reduce bias significantly compared to the baseline debiasing approach across different fairness metrics while incurring little data distortion and a small loss in utility. Moreover, our human-in-the-loop based approach significantly outperforms an automated approach on trust, interpretability and accountability.
false
false
[ "Bhavya Ghai", "Klaus Mueller 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.05126v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/v0VY4fZfsNc", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/iN5kabYi_g4", "icon": "video" } ]
Vis
2,022
Dashboard Design Patterns
10.1109/TVCG.2022.3209448
This paper introduces design patterns for dashboards to inform dashboard design processes. Despite a growing number of public examples, case studies, and general guidelines there is surprisingly little design guidance for dashboards. Such guidance is necessary to inspire designs and discuss tradeoffs in, e.g., screenspace, interaction, or information shown. Based on a systematic review of 144 dashboards, we report on eight groups of design patterns that provide common solutions in dashboard design. We discuss combinations of these patterns in “dashboard genres” such as narrative, analytical, or embedded dashboard. We ran a 2-week dashboard design workshop with 23 participants of varying expertise working on their own data and dashboards. We discuss the application of patterns for the dashboard design processes, as well as general design tradeoffs and common challenges. Our work complements previous surveys and aims to support dashboard designers and researchers in co-creation, structured design decisions, as well as future user evaluations about dashboard design guidelines. Detailed pattern descriptions and workshop material can be found online: https://dashboarddesignpatterns.github.io
false
false
[ "Benjamin Bach", "Euan Freeman", "Alfie Abdul-Rahman", "Cagatay Turkay", "Saiful Khan", "Yulei Fan", "Min Chen 0001" ]
[ "HM" ]
[ "PW", "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2205.00757v2", "icon": "paper" }, { "name": "Project Website", "url": "https://dashboarddesignpatterns.github.io/", "icon": "project_website" }, { "name": "Fast Forward", "url": "https://youtu.be/igHTCf93aa8", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/P70kXAco3Qo", "icon": "video" } ]
Vis
2,022
DashBot: Insight-Driven Dashboard Generation Based on Deep Reinforcement Learning
10.1109/TVCG.2022.3209468
Analytical dashboards are popular in business intelligence to facilitate insight discovery with multiple charts. However, creating an effective dashboard is highly demanding, which requires users to have adequate data analysis background and be familiar with professional tools, such as Power BI. To create a dashboard, users have to configure charts by selecting data columns and exploring different chart combinations to optimize the communication of insights, which is trial-and-error. Recent research has started to use deep learning methods for dashboard generation to lower the burden of visualization creation. However, such efforts are greatly hindered by the lack of large-scale and high-quality datasets of dashboards. In this work, we propose using deep reinforcement learning to generate analytical dashboards that can use well-established visualization knowledge and the estimation capacity of reinforcement learning. Specifically, we use visualization knowledge to construct a training environment and rewards for agents to explore and imitate human exploration behavior with a well-designed agent network. The usefulness of the deep reinforcement learning model is demonstrated through ablation studies and user studies. In conclusion, our work opens up new opportunities to develop effective ML-based visualization recommenders without beforehand training datasets.
false
false
[ "Dazhen Deng", "Aoyu Wu", "Huamin Qu", "Yingcai Wu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.01232v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Nb22kJIVT7Q", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/nfJAyE7A9zA", "icon": "video" } ]
Vis
2,022
Data Hunches: Incorporating Personal Knowledge into Visualizations
10.1109/TVCG.2022.3209451
The trouble with data is that it frequently provides only an imperfect representation of a phenomenon of interest. Experts who are familiar with their datasets will often make implicit, mental corrections when analyzing a dataset, or will be cautious not to be overly confident about their findings if caveats are present. However, personal knowledge about the caveats of a dataset is typically not incorporated in a structured way, which is problematic if others who lack that knowledge interpret the data. In this work, we define such analysts' knowledge about datasets as data hunches. We differentiate data hunches from uncertainty and discuss types of hunches. We then explore ways of recording data hunches, and, based on a prototypical design, develop recommendations for designing visualizations that support data hunches. We conclude by discussing various challenges associated with data hunches, including the potential for harm and challenges for trust and privacy. We envision that data hunches will empower analysts to externalize their knowledge, facilitate collaboration and communication, and support the ability to learn from others' data hunches.
false
false
[ "Haihan Lin", "Derya Akbaba", "Miriah D. Meyer", "Alexander Lex" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2109.07035v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Akb9_1qg-EE", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/tZ4HaUAoSNw", "icon": "video" } ]
Vis
2,022
DendroMap: Visual Exploration of Large-Scale Image Datasets for Machine Learning with Treemaps
10.1109/TVCG.2022.3209425
In this paper, we present DendroMap, a novel approach to interactively exploring large-scale image datasets for machine learning (ML). ML practitioners often explore image datasets by generating a grid of images or projecting high-dimensional representations of images into 2-D using dimensionality reduction techniques (e.g., t-SNE). However, neither approach effectively scales to large datasets because images are ineffectively organized and interactions are insufficiently supported. To address these challenges, we develop DendroMap by adapting Treemaps, a well-known visualization technique. DendroMap effectively organizes images by extracting hierarchical cluster structures from high-dimensional representations of images. It enables users to make sense of the overall distributions of datasets and interactively zoom into specific areas of interests at multiple levels of abstraction. Our case studies with widely-used image datasets for deep learning demonstrate that users can discover insights about datasets and trained models by examining the diversity of images, identifying underperforming subgroups, and analyzing classification errors. We conducted a user study that evaluates the effectiveness of DendroMap in grouping and searching tasks by comparing it with a gridified version of t-SNE and found that participants preferred DendroMap. DendroMap is available at https://div-lab.github.io/dendromap/.
false
false
[ "Donald Bertucci", "Md Montaser Hamid", "Yashwanthi Anand", "Anita Ruangrotsakun", "Delyar Tabatabai", "Melissa Perez", "Minsuk Kahng" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2205.06935v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/2Fq7Z4Y-cbI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/cZAoAEcMW6I", "icon": "video" } ]
Vis
2,022
Development and Evaluation of Two Approaches of Visual Sensitivity Analysis to Support Epidemiological Modeling
10.1109/TVCG.2022.3209464
Computational modeling is a commonly used technology in many scientific disciplines and has played a noticeable role in combating the COVID-19 pandemic. Modeling scientists conduct sensitivity analysis frequently to observe and monitor the behavior of a model during its development and deployment. The traditional algorithmic ranking of sensitivity of different parameters usually does not provide modeling scientists with sufficient information to understand the interactions between different parameters and model outputs, while modeling scientists need to observe a large number of model runs in order to gain actionable information for parameter optimization. To address the above challenge, we developed and compared two visual analytics approaches, namely: algorithm-centric and visualization-assisted, and visualization-centric and algorithm-assisted. We evaluated the two approaches based on a structured analysis of different tasks in visual sensitivity analysis as well as the feedback of domain experts. While the work was carried out in the context of epidemiological modeling, the two approaches developed in this work are directly applicable to a variety of modeling processes featuring time series outputs, and can be extended to work with models with other types of outputs.
false
false
[ "Erik Rydow", "Rita Borgo", "Hui Fang 0003", "Thomas Torsney-Weir", "Ben Swallow", "Thibaud Porphyre", "Cagatay Turkay", "Min Chen 0001" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/g_GonL3WuTs", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/yA458DmorBQ", "icon": "video" } ]
Vis
2,022
Dispersion vs Disparity: Hiding Variability Can Encourage Stereotyping When Visualizing Social Outcomes
10.1109/TVCG.2022.3209377
Visualization research often focuses on perceptual accuracy or helping readers interpret key messages. However, we know very little about how chart designs might influence readers' perceptions of the people behind the data. Specifically, could designs interact with readers' social cognitive biases in ways that perpetuate harmful stereotypes? For example, when analyzing social inequality, bar charts are a popular choice to present outcome disparities between race, gender, or other groups. But bar charts may encourage deficit thinking, the perception that outcome disparities are caused by groups' personal strengths or deficiencies, rather than external factors. These faulty personal attributions can then reinforce stereotypes about the groups being visualized. We conducted four experiments examining design choices that influence attribution biases (and therefore deficit thinking). Crowdworkers viewed visualizations depicting social outcomes that either mask variability in data, such as bar charts or dot plots, or emphasize variability in data, such as jitter plots or prediction intervals. They reported their agreement with both personal and external explanations for the visualized disparities. Overall, when participants saw visualizations that hide within-group variability, they agreed more with personal explanations. When they saw visualizations that emphasize within-group variability, they agreed less with personal explanations. These results demonstrate that data visualizations about social inequity can be misinterpreted in harmful ways and lead to stereotyping. Design choices can influence these biases: Hiding variability tends to increase stereotyping while emphasizing variability reduces it.
false
false
[ "Eli Holder", "Cindy Xiong" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.04440v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/qak_QLRIiqQ", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/kI3NcukbVsA", "icon": "video" } ]
Vis
2022
Diverse Interaction Recommendation for Public Users Exploring Multi-view Visualization using Deep Learning
10.1109/TVCG.2022.3209461
Interaction is an important channel to offer users insights in interactive visualization systems. However, which interaction to operate and which part of data to explore are hard questions for public users facing a multi-view visualization for the first time. Making these decisions largely relies on professional experience and analytic abilities, which is a huge challenge for non-professionals. To solve the problem, we propose a method aiming to provide diverse, insightful, and real-time interaction recommendations for novice users. Building on the Long Short-Term Memory (LSTM) model structure, our model captures users' interactions and visual states and encodes them in numerical vectors to make further recommendations. Through an illustrative example of a visualization system about Chinese poets in the museum scenario, the model is proven to be workable in systems with multi-views and multiple interaction types. A further user study demonstrates the method's capability to help public users conduct more insightful and diverse interactive explorations and gain more accurate data insights.
false
false
[ "Yixuan Li", "Yusheng Qi", "Yang Shi 0007", "Qing Chen 0001", "Nan Cao", "Siming Chen 0001" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/Ut5l987xayw", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/D73x0sHuQ4Q", "icon": "video" } ]
Vis
2022
DPVisCreator: Incorporating Pattern Constraints to Privacy-preserving Visualizations via Differential Privacy
10.1109/TVCG.2022.3209391
Data privacy is an essential issue in publishing data visualizations. However, it is challenging to represent multiple data patterns in privacy-preserving visualizations. The prior approaches target specific chart types or perform an anonymization model uniformly without considering the importance of data patterns in visualizations. In this paper, we propose a visual analytics approach that facilitates data custodians to generate multiple private charts while maintaining user-preferred patterns. To this end, we introduce pattern constraints to model users' preferences over data patterns in the dataset and incorporate them into the proposed Bayesian network-based Differential Privacy (DP) model PriVis. A prototype system, DPVisCreator, is developed to assist data custodians in implementing our approach. The effectiveness of our approach is demonstrated with quantitative evaluation of pattern utility under the different levels of privacy protection, case studies, and semi-structured expert interviews.
false
false
[ "Jiehui Zhou", "Xumeng Wang", "Jason K. Wong", "Huanliang Wang", "Zhongwei Wang", "Xiaoyu Yang", "Xiaoran Yan", "Haozhe Feng", "Huamin Qu", "Haochao Ying", "Wei Chen 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.13418v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/-cmsbm8opvg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/LYdxLA3hD3c", "icon": "video" } ]
Vis
2022
Dual Space Coupling Model Guided Overlap-Free Scatterplot
10.1109/TVCG.2022.3209459
The overdraw problem of scatterplots seriously interferes with the visual tasks. Existing methods, such as data sampling, node dispersion, subspace mapping, and visual abstraction, cannot guarantee the correspondence and consistency between the data points that reflect the intrinsic original data distribution and the corresponding visual units that reveal the presented data distribution, thus failing to obtain an overlap-free scatterplot with unbiased and lossless data distribution. A dual space coupling model is proposed in this paper to represent the complex bilateral relationship between data space and visual space theoretically and analytically. Under the guidance of the model, an overlap-free scatterplot method is developed through integration of the following: a geometry-based data transformation algorithm, namely DistributionTranscriptor; an efficient spatial mutual exclusion guided view transformation algorithm, namely PolarPacking; an overlap-free oriented visual encoding configuration model and a radius adjustment tool, namely $f_{r_{draw}}$. Our method can ensure complete and accurate information transfer between the two spaces, maintaining consistency between the newly created scatterplot and the original data distribution on global and local features. Quantitative evaluation proves our remarkable progress on computational efficiency compared with the state-of-the-art methods. Three applications involving pattern enhancement, interaction improvement, and overdraw mitigation of trajectory visualization demonstrate the broad prospects of our method.
false
false
[ "Zeyu Li 0003", "Ruizhi Shi", "Yan Liu", "Shizhuo Long", "Ziheng Guo", "Shichao Jia", "Jiawan Zhang" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.09706v1", "icon": "paper" } ]
Vis
2022
ECoalVis: Visual Analysis of Control Strategies in Coal-fired Power Plants
10.1109/TVCG.2022.3209430
Improving the efficiency of coal-fired power plants has numerous benefits. The control strategy is one of the major factors affecting such efficiency. However, due to the complex and dynamic environment inside the power plants, it is hard to extract and evaluate control strategies and their cascading impact across massive sensors. Existing manual and data-driven approaches cannot well support the analysis of control strategies because these approaches are time-consuming and do not scale with the complexity of the power plant systems. Three challenges were identified: a) interactive extraction of control strategies from large-scale dynamic sensor data, b) intuitive visual representation of cascading impact among the sensors in a complex power plant system, and c) time-lag-aware analysis of the impact of control strategies on electricity generation efficiency. By collaborating with energy domain experts, we addressed these challenges with ECoalVis, a novel interactive system for experts to visually analyze the control strategies of coal-fired power plants extracted from historical sensor data. The effectiveness of the proposed system is evaluated with two usage scenarios on a real-world historical dataset and received positive feedback from experts.
false
false
[ "Shuhan Liu", "Di Weng", "Yuan Tian", "Zikun Deng", "Haoran Xu", "Xiangyu Zhu", "Honglei Yin", "Xianyuan Zhan", "Yingcai Wu" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/F0UEbxpkC0o", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/MJtqd5u-h54", "icon": "video" } ]
Vis
2022
Effects of View Layout on Situated Analytics for Multiple-View Representations in Immersive Visualization
10.1109/TVCG.2022.3209475
Multiple-view (MV) representations enabling multi-perspective exploration of large and complex data are often employed on 2D displays. The technique also shows great potential in addressing complex analytic tasks in immersive visualization. However, although useful, the design space of MV representations in immersive visualization lacks deep exploration. In this paper, we propose a new perspective to this line of research, by examining the effects of view layout for MV representations on situated analytics. Specifically, we disentangle situated analytics in perspectives of situatedness regarding spatial relationship between visual representations and physical referents, and analytics regarding cross-view data analysis including filtering, refocusing, and connecting tasks. Through an in-depth analysis of existing layout paradigms, we summarize design trade-offs for achieving high situatedness and effective analytics simultaneously. We then distill a list of design requirements for a desired layout that balances situatedness and analytics, and develop a prototype system with an automatic layout adaptation method to fulfill the requirements. The method mainly includes a cylindrical paradigm for egocentric reference frame, and a force-directed method for proper view-view, view-user, and view-referent proximities and high view visibility. We conducted a formal user study that compares layouts by our method with linked and embedded layouts. Quantitative results show that participants finished filtering- and connecting-centered tasks significantly faster with our layouts, and user feedback confirms high usability of the prototype system.
false
false
[ "Zhen Wen", "Wei Zeng 0004", "Luoxuan Weng", "Yihan Liu", "Mingliang Xu", "Wei Chen 0001" ]
[]
[]
[]
Vis
2022
Erato: Cooperative Data Story Editing via Fact Interpolation
10.1109/TVCG.2022.3209428
As an effective form of narrative visualization, visual data stories are widely used in data-driven storytelling to communicate complex insights and support data understanding. Although important, they are difficult to create, as a variety of interdisciplinary skills, such as data analysis and design, are required. In this work, we introduce Erato, a human-machine cooperative data story editing system, which allows users to generate insightful and fluent data stories together with the computer. Specifically, Erato only requires a number of keyframes provided by the user to briefly describe the topic and structure of a data story. Meanwhile, our system leverages a novel interpolation algorithm to help users insert intermediate frames between the keyframes to smooth the transition. We evaluated the effectiveness and usefulness of the Erato system via a series of evaluations including a Turing test, a controlled user study, a performance validation, and interviews with three expert users. The evaluation results showed that the proposed interpolation technique was able to generate coherent story content and help users create data stories more efficiently.
false
false
[ "Mengdi Sun", "Ligan Cai", "Weiwei Cui", "Yanqiu Wu", "Yang Shi 0007", "Nan Cao" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.02529v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Qv6AlPPfZhg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Luv5dRwLLnw", "icon": "video" } ]
Vis
2022
ErgoExplorer: Interactive Ergonomic Risk Assessment from Video Collections
10.1109/TVCG.2022.3209432
Ergonomic risk assessment is now, due to an increased awareness, carried out more often than in the past. The conventional risk assessment evaluation, based on expert-assisted observation of the workplaces and manually filling in score tables, is still predominant. Data analysis is usually done with a focus on critical moments, although without the support of contextual information and changes over time. In this paper we introduce ErgoExplorer, a system for the interactive visual analysis of risk assessment data. In contrast to the current practice, we focus on data that span across multiple actions and multiple workers while keeping all contextual information. Data is automatically extracted from video streams. Based on carefully investigated analysis tasks, we introduce new views and their corresponding interactions. These views also incorporate domain-specific score tables to guarantee an easy adoption by domain experts. All views are integrated into ErgoExplorer, which relies on coordinated multiple views to facilitate analysis through interaction. ErgoExplorer makes it possible for the first time to examine complex relationships between risk assessments of individual body parts over long sessions that span multiple operations. The newly introduced approach supports analysis and exploration at several levels of detail, ranging from a general overview, down to inspecting individual frames in the video stream, if necessary. We illustrate the usefulness of the newly proposed approach by applying it to several datasets.
false
false
[ "Manlio Massiris Fernández", "Sanjin Rados", "Kresimir Matkovic", "M. Eduard Gröller", "Claudio Delrieux" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.05252v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/INIZJUllKuI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/MJFVNNI5Pqs", "icon": "video" } ]
Vis
2022
Evaluating the Use of Uncertainty Visualisations for Imputations of Data Missing At Random in Scatterplots
10.1109/TVCG.2022.3209348
Most real-world datasets contain missing values yet most exploratory data analysis (EDA) systems only support visualising data points with complete cases. This omission may potentially lead the user to biased analyses and insights. Imputation techniques can help estimate the value of a missing data point, but introduces additional uncertainty. In this work, we investigate the effects of visualising imputed values in charts using different ways of representing data imputations and imputation uncertainty—no imputation, mean, 95% confidence intervals, probability density plots, gradient intervals, and hypothetical outcome plots. We focus on scatterplots, which is a commonly used chart type, and conduct a crowdsourced study with 202 participants. We measure users' bias and precision in performing two tasks—estimating average and detecting trend—and their self-reported confidence in performing these tasks. Our results suggest that, when estimating averages, uncertainty representations may reduce bias but at the cost of decreasing precision. When estimating trend, only hypothetical outcome plots may lead to a small probability of reducing bias while increasing precision. Participants in every uncertainty representation were less certain about their response when compared to the baseline. The findings point towards potential trade-offs in using uncertainty encodings for datasets with a large number of missing values. This paper and the associated analysis materials are available at: https://osf.io/q4y5r/
false
false
[ "Abhraneel Sarma", "Shunan Guo", "Jane Hoffswell", "Ryan A. Rossi", "Fan Du", "Eunyee Koh", "Matthew Kay 0001" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/5cy8k", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/xS7TURfPCdQ", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/ey3Jy2iW-bI", "icon": "video" } ]
Vis
2022
Exploring Interactions with Printed Data Visualizations in Augmented Reality
10.1109/TVCG.2022.3209386
This paper presents a design space of interaction techniques to engage with visualizations that are printed on paper and augmented through Augmented Reality. Paper sheets are widely used to deploy visualizations and provide a rich set of tangible affordances for interactions, such as touch, folding, tilting, or stacking. At the same time, augmented reality can dynamically update visualization content to provide commands such as pan, zoom, filter, or detail on demand. This paper is the first to provide a structured approach to mapping possible actions with the paper to interaction commands. This design space and the findings of a controlled user study have implications for future designs of augmented reality systems involving paper sheets and visualizations. Through workshops ($\mathrm{N}=20$) and ideation, we identified 81 interactions that we classify in three dimensions: 1) commands that can be supported by an interaction, 2) the specific parameters provided by an (inter)action with paper, and 3) the number of paper sheets involved in an interaction. We tested user preference and viability of 11 of these interactions with a prototype implementation in a controlled study ($\mathrm{N}=12$, HoloLens 2) and found that most of the interactions are intuitive and engaging to use. We summarized interactions (e.g., tilt to pan) that have strong affordance to complement “point” for data exploration, physical limitations and properties of paper as a medium, cases requiring redundancy and shortcuts, and other implications for design.
false
false
[ "Wai Tong", "Zhutian Chen", "Meng Xia", "Leo Yu-Ho Lo", "Linping Yuan", "Benjamin Bach", "Huamin Qu" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.10603v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/WYP_7ASDHEo", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/edZMvbOatfA", "icon": "video" } ]
Vis
2022
Extending the Nested Model for User-Centric XAI: A Design Study on GNN-based Drug Repurposing
10.1109/TVCG.2022.3209435
Whether AI explanations can help users achieve specific tasks efficiently (i.e., usable explanations) is significantly influenced by their visual presentation. While many techniques exist to generate explanations, it remains unclear how to select and visually present AI explanations based on the characteristics of domain users. This paper aims to understand this question through a multidisciplinary design study for a specific problem: explaining graph neural network (GNN) predictions to domain experts in drug repurposing, i.e., reuse of existing drugs for new diseases. Building on the nested design model of visualization, we incorporate XAI design considerations from a literature review and from our collaborators' feedback into the design process. Specifically, we discuss XAI-related design considerations for usable visual explanations at each design layer: target user, usage context, domain explanation, and XAI goal at the domain layer; format, granularity, and operation of explanations at the abstraction layer; encodings and interactions at the visualization layer; and XAI and rendering algorithm at the algorithm layer. We present how the extended nested model motivates and informs the design of DrugExplorer, an XAI tool for drug repurposing. Based on our domain characterization, DrugExplorer provides path-based explanations and presents them both as individual paths and meta-paths for two key XAI operations, why and what else. DrugExplorer offers a novel visualization design called MetaMatrix with a set of interactions to help domain users organize and compare explanation paths at different levels of granularity to generate domain-meaningful insights. We demonstrate the effectiveness of the selected visual presentation and DrugExplorer as a whole via a usage scenario, a user study, and expert interviews. From these evaluations, we derive insightful observations and reflections that can inform the design of XAI visualizations for other scientific applications.
false
false
[ "Qianwen Wang", "Kexin Huang", "Payal Chandak", "Marinka Zitnik", "Nils Gehlenborg" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/yhdpv", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/r1QpPuw3zbw", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/0VgjOUTmtjY", "icon": "video" } ]
Vis
2022
Fiber Uncertainty Visualization for Bivariate Data With Parametric and Nonparametric Noise Models
10.1109/TVCG.2022.3209424
Visualization and analysis of multivariate data and their uncertainty are top research challenges in data visualization. Constructing fiber surfaces is a popular technique for multivariate data visualization that generalizes the idea of level-set visualization for univariate data to multivariate data. In this paper, we present a statistical framework to quantify positional probabilities of fibers extracted from uncertain bivariate fields. Specifically, we extend the state-of-the-art Gaussian models of uncertainty for bivariate data to other parametric distributions (e.g., uniform and Epanechnikov) and more general nonparametric probability distributions (e.g., histograms and kernel density estimation) and derive corresponding spatial probabilities of fibers. In our proposed framework, we leverage Green's theorem for closed-form computation of fiber probabilities when bivariate data are assumed to have independent parametric and nonparametric noise. Additionally, we present a nonparametric approach combined with numerical integration to study the positional probability of fibers when bivariate data are assumed to have correlated noise. For uncertainty analysis, we visualize the derived probability volumes for fibers via volume rendering and extracting level sets based on probability thresholds. We present the utility of our proposed techniques via experiments on synthetic and simulation datasets.
false
false
[ "Tushar M. Athawale", "Christopher R. Johnson 0001", "Sudhanshu Sane", "David Pugmire" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2207.11318v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/X2qxav8zXsk", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/33pVyJ9bUqc", "icon": "video" } ]
Vis
2022
FlowNL: Asking the Flow Data in Natural Languages
10.1109/TVCG.2022.3209453
Flow visualization is essentially a tool to answer domain experts' questions about flow fields using rendered images. Static flow visualization approaches require domain experts to raise their questions to visualization experts, who develop specific techniques to extract and visualize the flow structures of interest. Interactive visualization approaches allow domain experts to ask the system directly through the visual analytic interface, which provides flexibility to support various tasks. However, in practice, the visual analytic interface may require extra learning effort, which often discourages domain experts and limits its usage in real-world scenarios. In this paper, we propose FlowNL, a novel interactive system with a natural language interface. FlowNL allows users to manipulate the flow visualization system using plain English, which greatly reduces the learning effort. We develop a natural language parser to interpret user intention and translate textual input into a declarative language. We design the declarative language as an intermediate layer between the natural language and the programming language specifically for flow visualization. The declarative language provides selection and composition rules to derive relatively complicated flow structures from primitive objects that encode various kinds of information about scalar fields, flow patterns, regions of interest, connectivities, etc. We demonstrate the effectiveness of FlowNL using multiple usage scenarios and an empirical evaluation.
false
false
[ "Jieying Huang", "Yang Xi", "Junnan Hu", "Jun Tao 0002" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/54yeCnVaTdE", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/sY9o7WDRbGU", "icon": "video" } ]
Vis
2022
FoVolNet: Fast Volume Rendering using Foveated Deep Neural Networks
10.1109/TVCG.2022.3209498
Volume data is found in many important scientific and engineering applications. Rendering this data for visualization at high quality and interactive rates for demanding applications such as virtual reality is still not easily achievable even using professional-grade hardware. We introduce FoVolNet—a method to significantly increase the performance of volume data visualization. We develop a cost-effective foveated rendering pipeline that sparsely samples a volume around a focal point and reconstructs the full-frame using a deep neural network. Foveated rendering is a technique that prioritizes rendering computations around the user's focal point. This approach leverages properties of the human visual system, thereby saving computational resources when rendering data in the periphery of the user's field of vision. Our reconstruction network combines direct and kernel prediction methods to produce fast, stable, and perceptually convincing output. With a slim design and the use of quantization, our method outperforms state-of-the-art neural reconstruction techniques in both end-to-end frame times and visual quality. We conduct extensive evaluations of the system's rendering performance, inference speed, and perceptual properties, and we provide comparisons to competing neural image reconstruction techniques. Our test results show that FoVolNet consistently achieves significant time saving over conventional rendering while preserving perceptual quality.
false
false
[ "David Bauer", "Qi Wu 0015", "Kwan-Liu Ma" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.09965v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/axiljcwYYMI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/TCQiw2DTc_0", "icon": "video" } ]