Dataset schema:
- Conference: string (6 classes)
- Year: int64 (min 1.99k, max 2.03k)
- Title: string (length 8–187)
- DOI: string (length 16–32)
- Abstract: string (length 128–7.15k)
- Accessible: bool (2 classes)
- Early: bool (2 classes)
- AuthorNames-Deduped: list (length 1–24)
- Award: list (length 0–2)
- Resources: list (length 0–5)
- ResourceLinks: list (length 0–10)

Conference: CHI
Year: 2025
Title: Crowdsourced Think-Aloud Studies
DOI: 10.1145/3706598.3714305
Abstract: The think-aloud (TA) protocol is a useful method for evaluating user interfaces, including data visualizations. However, TA studies are time-consuming to conduct and hence often have a small number of participants. Crowdsourcing TA studies would help alleviate these problems, but the technical overhead and the unknown quality of results have restricted TA to synchronous studies. To address this gap we introduce CrowdAloud, a system for creating and analyzing asynchronous, crowdsourced TA studies. CrowdAloud captures audio and provenance (log) data as participants interact with a stimulus. Participant audio is automatically transcribed and visualized together with events data and a full recreation of the state of the stimulus as seen by participants. To gauge the value of crowdsourced TA studies, we conducted two experiments: one to compare lab-based and crowdsourced TA studies, and one to compare crowdsourced TA studies with crowdsourced text prompts. Our results suggest that crowdsourcing is a viable approach for conducting TA studies at scale.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Zach Cutler", "Lane Harrison", "Carolina Nobre", "Alexander Lex" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: Data at Hand: Exploring the Tactile Perception of Data Physicalizations
DOI: 10.1145/3706598.3713212
Abstract: Data physicalizations are tangible objects, and touching them may improve their interpretation. However, little is known about how people actually touch physicalizations. We recorded verbal and tactile responses to data physicalizations in three consecutive conditions: as an unspecified object, as a representation of unknown data, and with full information about data and encoding. Our two stimulus objects present data for nine countries in a 3x3 grid. We varied vertical axis polarity, with positive data values either above (convex) or below (concave) baseline. Using an analog tracer method, we examine whether some components of the physicalization are touched more than others, whether touch varies by task, and the impact of axis polarity. We found large differences in the degree to which different components were touched and that the effect of vertical axis polarity depended on task. We describe additional tactile and verbal behaviors that can inform the design of data physicalizations.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Dietmar Offenhuber", "Laura J. Perovich", "Bernice E. Rogowitz" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: Data Formulator 2: Iterative Creation of Data Visualizations, with AI Transforming Data Along the Way
DOI: 10.1145/3706598.3713296
Abstract: Data analysts often need to iterate between data transformations and chart designs to create rich visualizations for exploratory data analysis. Although many AI-powered systems have been introduced to reduce the effort of visualization authoring, existing systems are not well suited for iterative authoring. They typically require analysts to provide, in a single turn, a text-only prompt that fully describes a complex visualization. We introduce Data Formulator 2 (Df2 for short), an AI-powered visualization system designed to overcome this limitation. Df2 blends graphical user interfaces and natural language inputs to enable users to convey their intent more effectively, while delegating data transformation to AI. Furthermore, to support efficient iteration, Df2 lets users navigate their iteration history and reuse previous designs, eliminating the need to start from scratch each time. A user study with eight participants demonstrated that Df2 allowed participants to develop their own iteration styles to complete challenging data exploration sessions.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Chenglong Wang", "Bongshin Lee", "Steven Mark Drucker", "Dan Marshall", "Jianfeng Gao 0001" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2408.16119v2", "icon": "paper" } ]

Conference: CHI
Year: 2025
Title: Datamancer: Bimanual Gesture Interaction in Multi-Display Ubiquitous Analytics Environments
DOI: 10.1145/3706598.3713123
Abstract: We introduce Datamancer, a wearable device enabling bimanual gesture interaction across multi-display ubiquitous analytics environments. Datamancer addresses the gap in gesture-based interaction within data visualization settings, where current methods are often constrained by limited interaction spaces or the need for installing bulky tracking setups. Datamancer integrates a finger-mounted pinhole camera and a chest-mounted gesture sensor, allowing seamless selection and manipulation of visualizations on distributed displays. By pointing to a display, users can acquire the display and engage in various interactions, such as panning, zooming, and selection, using both hands. Our contributions include (1) an investigation of the design space of gestural interaction for physical ubiquitous analytics environments; (2) a prototype implementation of the Datamancer system that realizes this model; and (3) an evaluation of the prototype through demonstration of application scenarios, an expert review, and a user study.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Biswaksen Patnaik", "Marcel Borowski", "Huaishu Peng", "Clemens Nylandsted Klokmose", "Niklas Elmqvist" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: Decision Theoretic Foundations for Experiments Evaluating Human Decisions
DOI: 10.1145/3706598.3714063
Abstract: Decision-making with information displays is a key focus of research in areas like human-AI collaboration and data visualization. However, what constitutes a decision problem, and what is required for an experiment to conclude that decisions are flawed, remain imprecise. We present a widely applicable definition of a decision problem synthesized from statistical decision theory and information economics. We claim that to attribute loss in human performance to bias, an experiment must provide the information that a rational agent would need to identify the normative decision. We evaluate whether recent empirical research on AI-assisted decisions achieves this standard. We find that only 10 (26%) of 39 studies that claim to identify biased behavior presented participants with sufficient information to make this claim in at least one treatment condition. We motivate the value of studying well-defined decision problems by describing a characterization of performance losses they allow to be conceived.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Jessica Hullman", "Alex Kale", "Jason D. Hartline" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: Diagrammatization and Abduction to Improve AI Interpretability With Domain-Aligned Explanations for Medical Diagnosis
DOI: 10.1145/3706598.3714058
Abstract: Many visualizations have been developed for explainable AI (XAI), but they often require further reasoning by users to interpret. Investigating XAI for high-stakes medical diagnosis, we propose improving domain alignment with diagrammatic and abductive reasoning to reduce the interpretability gap. We developed DiagramNet to predict cardiac diagnoses from heart auscultation, select the best-fitting hypothesis based on criteria evaluation, and explain with clinically-relevant murmur diagrams. The ante-hoc interpretable model leverages domain-relevant ontology, representation, and reasoning process to increase trust in expert users. In modeling studies, we found that DiagramNet not only provides faithful murmur shape explanations, but also has better performance than baseline models. We demonstrate the interpretability and trustworthiness of diagrammatic, abductive explanations in a qualitative user study with medical students, showing that clinically-relevant, diagrammatic explanations are preferred over technical saliency map explanations. This work contributes insights into providing domain-aligned explanations for user-centric XAI in complex domains.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Brian Y. Lim", "Joseph P. Cahaly", "Chester Y. F. Sng", "Adam Chew" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2302.01241v3", "icon": "paper" } ]

Conference: CHI
Year: 2025
Title: Disentangling the Power Dynamics in Participatory Data Physicalisation
DOI: 10.1145/3706598.3713703
Abstract: Participatory data physicalisation (PDP) is recognised for its potential to support data-driven decisions among stakeholders who collaboratively construct physical elements into commonly insightful visualisations. Like all participatory processes, PDP is however influenced by underlying power dynamics that might lead to issues regarding extractive participation, marginalisation, or exclusion, among others. We first identified the decisions behind these power dynamics by developing an ontology that synthesises critical theoretical insights from both visualisation and participatory design research, which were then systematically applied to a representative corpus of 23 PDP artefacts. By revealing how shared decisions are guided by different agendas, this paper presents three contributions: 1) a cross-disciplinary ontology that facilitates the systematic analysis of existing and novel PDP artefacts and processes; which leads to 2) six PDP agendas that reflect the key power dynamics in current PDP practice, revealing the diversity of orientations towards stakeholder participation in PDP practice; and 3) a set of critical considerations that should guide how power dynamics can be balanced, such as by reflecting on how issues are represented, data is contextualised, participants express their meanings, and how participants can dissent with flexible artefact construction. Consequently, this study advances a feminist research agenda by guiding researchers and practitioners in openly reflecting on and sharing responsibilities in data physicalisation and participatory data visualisation.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Silvia Cazacu", "Georgia Panagiotidou", "Thérèse Steenberghen", "Andrew Vande Moere" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2503.13018v1", "icon": "paper" } ]

Conference: CHI
Year: 2025
Title: Divisi: Interactive Search and Visualization for Scalable Exploratory Subgroup Analysis
DOI: 10.1145/3706598.3713103
Abstract: Analyzing data subgroups is a common data science task to build intuition about a dataset and identify areas to improve model performance. However, subgroup analysis is prohibitively difficult in datasets with many features, and existing tools limit unexpected discoveries by relying on user-defined or static subgroups. We propose exploratory subgroup analysis as a set of tasks in which practitioners discover, evaluate, and curate interesting subgroups to build understanding about datasets and models. To support these tasks we introduce Divisi, an interactive notebook-based tool underpinned by a fast approximate subgroup discovery algorithm. Divisi's interface allows data scientists to interactively re-rank and refine subgroups and to visualize their overlap and coverage in the novel Subgroup Map. Through a think-aloud study with 13 practitioners, we find that Divisi can help uncover surprising patterns in data features and their interactions, and that it encourages more thorough exploration of subtypes in complex data.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Venkatesh Sivaraman", "Zexuan Li", "Adam Perer" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2502.10537v1", "icon": "paper" } ]

Conference: CHI
Year: 2025
Title: DysVis: A User-Centred Data Visualization System for Dyslexia Pre-screening
DOI: 10.1145/3706598.3713194
Abstract: Dyslexia is a common neurobiological learning disorder significantly impacting reading, writing, and spelling worldwide. Early identification and intervention are essential, but most pre-screening tools focus on Latin languages, leaving Chinese-speaking students underserved. To address this gap, we conduct semi-structured interviews with special education (special-ed) teachers to gather their needs for dyslexia pre-screening tailored to Chinese contexts. Using their insights, we have developed DysVis, a user-centered data visualization system that combines handwriting analysis, body movement keypoint conversion, and a comprehensive visualization interface. DysVis provides teachers with multi-level visualizations, such as performance overviews, task analyses, handwriting observations, and behavioural insights, enabling them to identify the root causes of learning difficulties. Our evaluations, including case studies, a user study, and expert interviews, demonstrate that DysVis is user-friendly and effective in quickly identifying at-risk students, ultimately enhancing learning outcomes for Chinese-speaking students with dyslexia.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Ka Yan Fung", "Lik Hang Lee", "Linping Yuan", "Kwong Chiu Fung", "Kuen Fung Sin", "Tze-Leung Rick Lui", "Huamin Qu", "Shenghui Song 0001" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: Effects of Alternative Scatterplot Designs on Belief
DOI: 10.1145/3706598.3713809
Abstract: Viewers tend to underestimate correlation in positively correlated scatterplots. However, systematically changing the size and opacity of scatterplot points can bias estimates upwards, correcting for this underestimation. Here, we examine whether the application of these visualisation techniques goes beyond a simple perceptual effect and could actually influence beliefs about information from trusted news sources. We present a fully-reproducible study in which we demonstrate that scatterplot manipulations that are able to correct for the correlation underestimation bias can also induce stronger levels of belief change compared to conventional scatterplots presenting identical data. Consequently, we show that novel visualisation techniques can be used to drive belief change, and suggest future directions for extending this work with regards to altering attitudes and behaviours.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Gabriel Strain", "Andrew J. Stewart", "Caroline Jay", "Charlotte Rutherford", "Paul A. Warren" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: Exploratory Visual Analysis of Transcripts for Interaction Analysis in Human-Computer Interaction
DOI: 10.1145/3706598.3713490
Abstract: Transcripts are central to qualitative research in HCI, particularly for researchers using methods of Conversation Analysis (CA) and Interaction Analysis (IA) who study the socially situated nature of human-computer interaction. However, CA and IA researchers continue to highlight the significant need for more dynamic ways to visualize transcripts to support interaction analysis. This need is particularly evident in HCI, where transcripts as a form of data have received little attention. In this article, we make three contributions to HCI research. First, we present Transcript Explorer, an open-source visualization system that integrates three visualization techniques we have developed to interactively visualize transcripts linked to videos: Distribution Diagrams, Turn Charts and Contribution Clouds. Second, we present findings from a qualitative analysis of focus group interviews with three different qualitative research groups who engaged with this system to analyze common transcript data. Finally, we expand upon transcripts as a unique form of data for HCI research and propose directions for future research.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Ben Rydal Shapiro", "Rogers Hall", "Arpit Mathur", "Edwin Zhao" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: Exploring Design Spaces to Facilitate Household Collaboration for Cohabiting Couples
DOI: 10.1145/3706598.3713383
Abstract: Household collaboration among cohabiting couples presents unique challenges due to the intimate nature of the relationships and the lack of external rewards. Current efficiency-oriented technologies neglect these distinct dynamics. Our study aims to examine the real-world context and underlying needs of couples in their collaborative homemaking. We conducted a 10-day empirical investigation involving six Korean couples, supplemented by a probe approach to facilitate reflection on their current homemaking practices. We identified the requirement for ideal household collaboration as a 'shared ritual for celebratory interaction' and pinpointed the challenges in achieving this goal. We propose three design opportunities for domestic technology to address this gap: strengthening the meaning of housework around family values, supporting recognition of the partner's efforts through visualization, and initiating negotiation through defamiliarization. These insights extend the design considerations for domestic technologies, advocating for a broader understanding of the values contributing to satisfactory homemaking activities within the household.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Gahyeon Bae", "Seo Kyoung Park", "Taewan Kim", "Hwajung Hong" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: Exploring Mobile Touch Interaction with Large Language Models
DOI: 10.1145/3706598.3713554
Abstract: Interacting with Large Language Models (LLMs) for text editing on mobile devices currently requires users to break out of their writing environment and switch to a conversational AI interface. In this paper, we propose to control the LLM via touch gestures performed directly on the text. We first chart a design space that covers fundamental touch input and text transformations. In this space, we then concretely explore two control mappings: spread-to-generate and pinch-to-shorten, with visual feedback loops. We evaluate this concept in a user study (N=14) that compares three feedback designs: no visualisation, text length indicator, and length + word indicator. The results demonstrate that touch-based control of LLMs is both feasible and user-friendly, with the length + word indicator proving most effective for managing text generation. This work lays the foundation for further research into gesture-based interaction with LLMs on touch devices.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Tim Zindulka", "Jannek Maximilian Sekowski", "Florian Lehmann", "Daniel Buschek" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2502.07629v1", "icon": "paper" } ]

Conference: CHI
Year: 2025
Title: Exploring the Design Space of Privacy-Driven Adaptation Techniques for Future Augmented Reality Interfaces
DOI: 10.1145/3706598.3713320
Abstract: Modern augmented reality (AR) devices with advanced display and sensing capabilities pose significant privacy risks to users and bystanders. While previous context-aware adaptations focused on usability and ergonomics, we explore the design space of privacy-driven adaptations that allow users to meet their dynamic needs. These techniques offer granular control over AR sensing capabilities across various AR input, output, and interaction modalities, aiming to minimize degradations to the user experience. Through an elicitation study with 10 AR researchers, we derive 62 privacy-focused adaptation techniques that preserve key AR functionalities and classify them into system-driven, user-driven, and mixed-initiative approaches to create an adaptation catalog. We also contribute a visualization tool that helps AR developers navigate the design space, validating its effectiveness in design workshops with six AR developers. Our findings indicate that the tool allowed developers to discover new techniques, evaluate tradeoffs, and make informed decisions that balance usability and privacy concerns in AR design.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Shwetha Rajaram", "Macarena Peralta", "Janet G. Johnson", "Michael Nebeling" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: From Concept to Clinic: Multidisciplinary Design, Development, and Clinical Validation of Augmented Reality-Assisted Open Pancreatic Surgery
DOI: 10.1145/3706598.3713458
Abstract: Wearable augmented reality (AR) systems have significant potential to enhance surgical outcomes through in-situ visualization of patient-specific data. Yet, efforts to develop AR-based systems for open surgery have been limited, lacking comprehensive interdisciplinary research and actual clinical evaluations in real surgical environments. Our research addresses this gap by presenting a user-centered design and development process of ARAS, an AR assistance for open pancreatic surgery. ARAS provides in-situ visualization of critical structures, such as the vascular system and the tumor, while offering a robust dual-layer registration method ensuring accurate registration during relevant phases of the surgery. We evaluated ARAS in clinical trials of 20 patients with pancreatic tumors. Accuracy validation and postoperative surgeon interviews confirmed its successful deployment, supporting surgeons in vascular localization and critical decision-making. Our work showcases AR's potential to fundamentally transform procedures for complex surgical operations, advocating a research shift toward ecological validation in open surgery.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Hamraz Javaheri", "Omid Ghamarnejad", "Paul Lukowicz", "Gregor Alexander Stavrou", "Jakob Karolus" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: GenPara: Enhancing the 3D Design Editing Process by Inferring Users' Regions of Interest with Text-Conditional Shape Parameters
DOI: 10.1145/3706598.3713502
Abstract: In 3D design, specifying design objectives and visualizing complex shapes through text alone proves to be a significant challenge. Although advancements in 3D GenAI have significantly enhanced part assembly and the creation of high-quality 3D designs, many systems still struggle to dynamically generate and edit design elements based on shape parameters. To bridge this gap, we propose GenPara, an interactive 3D design editing system that leverages text-conditional shape parameters of part-aware 3D designs and visualizes design space within the Exploration Map and Design Versioning Tree. Additionally, among the various shape parameters generated by LLM, the system extracts and provides design outcomes within the user's regions of interest based on Bayesian inference. A user study (N = 16) revealed that GenPara enhanced designers' comprehension and management of text-conditional shape parameters, streamlining design exploration and concretization. This improvement boosted the efficiency and creativity of the 3D design process.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Jiin Choi 0001", "Seung Won Lee", "Kyung Hoon Hyun" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2503.14096v1", "icon": "paper" } ]

Conference: CHI
Year: 2025
Title: GistVis: Automatic Generation of Word-scale Visualizations from Data-rich Documents
DOI: 10.1145/3706598.3713881
Abstract: Data-rich documents are ubiquitous in various applications, yet they often rely solely on textual descriptions to convey data insights. Prior research primarily focused on providing visualization-centric augmentation to data-rich documents. However, few have explored using automatically generated word-scale visualizations to enhance the document-centric reading process. As an exploratory step, we propose GistVis, an automatic pipeline that extracts and visualizes data insight from text descriptions. GistVis decomposes the generation process into four modules: Discoverer, Annotator, Extractor, and Visualizer, with the first three modules utilizing the capabilities of large language models and the fourth using visualization design knowledge. Technical evaluation including a comparative study on Discoverer and an ablation study on Annotator reveals decent performance of GistVis. Meanwhile, the user study (N=12) showed that GistVis could generate satisfactory word-scale visualizations, indicating its effectiveness in facilitating users' understanding of data-rich documents (+5.6% accuracy) while significantly reducing their mental demand (p=0.016) and perceived effort (p=0.033).
Accessible: false
Early: false
AuthorNames-Deduped: [ "Ruishi Zou", "Yinqi Tang", "Jingzhu Chen", "Siyu Lu", "Yan Lu", "Yingfan Yang", "Chen Ye 0002" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2502.03784v1", "icon": "paper" } ]

Conference: CHI
Year: 2025
Title: HaptiCoil: Soft Programmable Buttons with Hydraulically Coupled Haptic Feedback and Sensing
DOI: 10.1145/3706598.3713175
Abstract: We present HaptiCoil, an embedded system and interaction method for prototyping low-cost, compact, and customizable wide bandwidth (1-500 Hz) soft haptic buttons. HaptiCoil devices are built using mass-produced, waterproof planar micro-speakers which are adapted to direct energy to the skin using a novel hydraulic coupling mechanism. They can sense force input, using a measurement of self-inductance, and provide output in a single package, yielding a flexible all-in-one button solution. Our devices offer a wider perceptual range of tactile stimuli than industry standard approaches, while maintaining comparable power threshold levels (typical threshold under 40 mW). We detail the construction and underlying principles of our approach, as well as an extensive physical quantification of both input and output. We share psychophysical data on device bandwidth, and show three illustrative examples of how HaptiCoil buttons can be implemented in use cases such as spatial computing, digital inking, and remote control.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Jung-Hwan Youn", "Seung Heon Lee", "Craig Shultz" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: How Do People Perceive Bundling? An Experiment
DOI: 10.1145/3706598.3713444
Abstract: We present an exploratory study on how people perceive visualizations of spatial social networks generated by edge bundling algorithms. Although these algorithms successfully minimize clutter in node-link diagrams, they do so through various methods that can sometimes create false connections between nodes. We conducted a qualitative experiment involving participants with technical expertise but no prior knowledge of edge bundling algorithms. Participants described their perceptions of both bundled and straight-line visualizations in open-ended tasks. Analysis of their annotations and transcripts revealed a general preference for bundled visualizations. However, when it came to false connections, participants tended to follow them in tightly bundled diagrams while also vocalizing that these drawings were more ambiguous. The routing of bundles influenced the perception of clusters and participants assigned more or fewer nodes to the clusters, depending on the routing of bundles. Participants' unfamiliarity with the dataset led them to use analogies to describe the bundled drawings, potentially adding perceived semantic meaning to the data.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Markus Wallinger", "Osman Akbulut", "Kabir Ahmed Rufai", "Helen C. Purchase", "Daniel Archambault" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: How Scientists Use Large Language Models to Program
DOI: 10.1145/3706598.3713668
Abstract: Scientists across disciplines write code for critical activities like data collection and generation, statistical modeling, and visualization. As large language models that can generate code have become widely available, scientists may increasingly use these models during research software development. We investigate the characteristics of scientists who are early-adopters of code generating models and conduct interviews with scientists at a public, research-focused university. Through interviews and reviews of user interaction logs, we see that scientists often use code generating models as an information retrieval tool for navigating unfamiliar programming languages and libraries. We present findings about their verification strategies and discuss potential vulnerabilities that may emerge from code generation practices unknowingly influencing the parameters of scientific analyses.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Gabrielle O'Brien" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2502.17348v1", "icon": "paper" } ]

Conference: CHI
Year: 2025
Title: How Visualization Designers Perceive and Use Inspiration
DOI: 10.1145/3706598.3714191
Abstract: Inspiration plays an important role in design, yet its specific impact on data visualization design practice remains underexplored. This study investigates how professional visualization designers perceive and use inspiration in their practice. Through semi-structured interviews, we examine their sources of inspiration, the value they place on them, and how they navigate the balance between inspiration and imitation. Our findings reveal that designers draw from a diverse array of sources, including existing visualizations, real-world phenomena, and personal experiences. Participants describe a mix of active and passive inspiration practices, often iterating on sources to create original designs. This research offers insights into the role of inspiration in visualization practice, the need to expand visualization design theory, and the implications for the development of visualization tools that support inspiration and for training future visualization designers.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Ali Baigelenov", "Prakash Shukla", "Paul Parsons" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2502.15205v1", "icon": "paper" } ]

Conference: CHI
Year: 2025
Title: IdeationWeb: Tracking the Evolution of Design Ideas in Human-AI Co-Creation
DOI: 10.1145/3706598.3713375
Abstract: Due to their remarkable content generation capabilities, large language models (LLMs) have demonstrated potential in supporting early-stage conceptual design. However, current interaction paradigms often struggle to effectively facilitate multi-round idea exploration and selection, leading to random outputs, unclear iterations, and cognitive overload. To address these challenges, we propose a human-AI co-ideation framework aimed at tracking the evolution of design ideas. This framework leverages a structured idea representation, an analogy-based reasoning mechanism, and interactive visualization techniques. It guides both designers and AI to systematically explore design spaces. We also develop a prototype system, IdeationWeb, which integrates an intuitive, mind map-like visual interface and interactive methods to support co-ideation. Our user study validates the framework's feasibility, demonstrating enhanced collaboration and creativity between humans and AI. Furthermore, we identified collaborative design patterns from user behaviors, providing valuable insights for future human-AI interaction design.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Hanshu Shen", "Lyukesheng Shen", "Wenqi Wu", "Kejun Zhang" ]
Award: []
Resources: []
ResourceLinks: []

Conference: CHI
Year: 2025
Title: Interactive Debugging and Steering of Multi-Agent AI Systems
DOI: 10.1145/3706598.3713581
Abstract: Fully autonomous teams of LLM-powered AI agents are emerging that collaborate to perform complex tasks for users. What challenges do developers face when trying to build and debug these AI agent teams? In formative interviews with five AI agent developers, we identify core challenges: difficulty reviewing long agent conversations to localize errors, lack of support in current tools for interactive debugging, and the need for tool support to iterate on agent configuration. Based on these needs, we developed an interactive multi-agent debugging tool, AGDebugger, with a UI for browsing and sending messages, the ability to edit and reset prior agent messages, and an overview visualization for navigating complex message histories. In a two-part user study with 14 participants, we identify common user strategies for steering agents and highlight the importance of interactive message resets for debugging. Our studies deepen understanding of interfaces for debugging increasingly important agentic workflows.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Will Epperson", "Gagan Bansal", "Victor C. Dibia", "Adam Fourney", "Jack Gerrits", "Erkang (Eric) Zhu", "Saleema Amershi" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2503.02068v1", "icon": "paper" } ]

Conference: CHI
Year: 2025
Title: Intra, Extra, Read all about it! How Readers Interpret Visualizations with Intra- and Extratextual Information
DOI: 10.1145/3706598.3713612
Abstract: A reader's interpretation of a visualization is informed by both intratextual information (the information directly represented in the visualization) and extratextual information (information not represented in the visualization but known by the reader). Yet, we do not know what kinds of intra- and extratextual information readers use or how they integrate it to form meaning. To explore this area, we conducted semi-structured interviews about four real-world visualizations. We used thematic analysis to understand the types of information that participants used and diffractive reading to reveal how participants blended intra- and extratextual information. Our thematic analysis showed that participants utilized a broad assortment of information from both expected and unexpected sources. Additionally, our diffractive reading exposed three ways that participants incorporated intra- and extratextual information: to decide what to look at, to make (in)accurate assumptions about what the visualization showed, and to discover insights beyond what was directly encoded.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Alyxander Burns", "Claudia Gonzalez-Vazquez" ]
Award: []
Resources: []
ResourceLinks: []

CHI
2025
Leveraging AI-Generated Emotional Self-Voice to Nudge People towards their Ideal Selves
10.1145/3706598.3713359
Emotions, shaped by past experiences, significantly influence decision-making and goal pursuit. Traditional cognitive-behavioral techniques for personal development rely on mental imagery to envision ideal selves, but may be less effective for individuals who struggle with visualization. This paper introduces Emotional Self-Voice (ESV), a novel system combining emotionally expressive language models and voice cloning technologies to render customized responses in the user's own voice. We investigate the potential of ESV to nudge individuals towards their ideal selves in a study with 60 participants. Across all three conditions (ESV, text-only, and mental imagination), we observed an increase in resilience, confidence, motivation, and goal commitment, and the ESV condition was perceived as uniquely engaging and personalized. We discuss the implications of designing generated self-voice systems as a personalized behavioral intervention for different scenarios.
false
false
[ "Cathy Mengying Fang", "Phoebe Chua", "Samantha W. T. Chan", "Joanne Leong", "Andria Bao", "Pattie Maes" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2409.11531v3", "icon": "paper" } ]
CHI
2025
Libra: An Interaction Model for Data Visualization
10.1145/3706598.3713769
While existing visualization libraries enable the reuse, extension, and combination of static visualizations, achieving the same for interactions remains nearly impossible. Therefore, we contribute an interaction model and its implementation to achieve this goal. Our model enables the creation of interactions that support direct manipulation, enforce software modularity by clearly separating visualizations from interactions, and ensure compatibility with existing visualization systems. Interaction management is achieved through an instrument that receives events from the view, dispatches these events to graphical layers containing objects, and then triggers actions. We present a JavaScript prototype implementation of our model called Libra.js, enabling the specification of interactions for visualizations created by different libraries. We demonstrate the effectiveness of Libra by describing and generating a wide range of existing interaction techniques. We evaluate Libra.js through diverse examples, a metric-based notation comparison, and a performance benchmark analysis.
false
false
[ "Yue Zhao 0033", "Yunhai Wang", "Xu Luo", "Yanyan Wang", "Jean-Daniel Fekete" ]
[]
[]
[]
CHI
2025
LogoMotion: Visually-Grounded Code Synthesis for Creating and Editing Animation
10.1145/3706598.3714155
Creating animation takes time, effort, and technical expertise. To help novices with animation, we present LogoMotion, an AI code generation approach that helps users create semantically meaningful animation for logos. LogoMotion automatically generates animation code with a method called visually-grounded code synthesis and program repair. This method performs visual analysis, instantiates a design concept, and conducts visual checking to generate animation code. LogoMotion provides novices with code-connected AI editing widgets that help them edit the motion, grouping, and timing of their animation. In a comparison study on 276 animations, LogoMotion was found to produce more content-aware animation than an industry-leading tool. In a user evaluation (n=16) comparing against a prompt-only baseline, these code-connected widgets helped users edit animations with control, iteration, and creative expression.
false
false
[ "Vivian Liu", "Rubaiat Habib Kazi", "Li-Yi Wei", "Matthew Fisher", "Timothy Langlois", "Seth Walker", "Lydia B. Chilton" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2405.07065v2", "icon": "paper" } ]
CHI
2025
Lost in Magnitudes: Exploring Visualization Designs for Large Value Ranges
10.1145/3706598.3713487
We explore the design of visualizations for values spanning multiple orders of magnitude; we call them Orders of Magnitude Values (OMVs). Visualization researchers have shown that separating OMVs into two components, the mantissa and the exponent, and encoding them separately overcomes limitations of linear and logarithmic scales. However, only a small number of such visualizations have been tested, and the design guidelines for visualizing the mantissa and exponent separately remain under-explored. To initiate this exploration, better understand the factors influencing the effectiveness of these visualizations, and create guidelines, we adopt a multi-stage workflow. We introduce a design space for visualizing mantissa and exponent, systematically generating and qualitatively evaluating all possible visualizations within it. From this evaluation, we derive guidelines. We select two visualizations that align with our guidelines and test them using a crowdsourcing experiment, showing they facilitate quantitative comparisons and increase confidence in interpretation compared to the state-of-the-art.
false
false
[ "Katerina Batziakoudi", "Florent Cabric", "Stéphanie Rey", "Jean-Daniel Fekete" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2404.15150v2", "icon": "paper" } ]
CHI
2025
Making the Write Connections: Linking Writing Support Tools with Writer Needs
10.1145/3706598.3713161
This work sheds light on whether and how creative writers' needs are met by existing research and commercial writing support tools (WST). We conducted a need finding study to gain insight into the writers' process during creative writing through a qualitative analysis of the response from an online questionnaire and Reddit discussions on r/Writing. Using a systematic analysis of 115 tools and 67 research papers, we map out the landscape of how digital tools facilitate the writing process. Our triangulation of data reveals that research predominantly focuses on the writing activity and overlooks pre-writing activities and the importance of visualization. We distill 10 key takeaways to inform future research on WST and point to opportunities surrounding underexplored areas. Our work offers a holistic and up-to-date account of how tools have transformed the writing process, guiding the design of future tools that address writers' evolving and unmet needs.
false
false
[ "Zixin Zhao", "Damien Masson", "Young-Ho Kim", "Gerald Penn", "Fanny Chevalier" ]
[]
[]
[]
CHI
2025
OptiCarVis: Improving Automated Vehicle Functionality Visualizations Using Bayesian Optimization to Enhance User Experience
10.1145/3706598.3713514
Automated vehicle (AV) acceptance relies on their understanding via feedback. While visualizations aim to enhance user understanding of AV's detection, prediction, and planning functionalities, establishing an optimal design is challenging. Traditional "one-size-fits-all" designs might be unsuitable, stemming from resource-intensive empirical evaluations. This paper introduces OptiCarVis, a set of Human-in-the-Loop (HITL) approaches using Multi-Objective Bayesian Optimization (MOBO) to optimize AV feedback visualizations. We compare conditions using eight expert and user-customized designs for a Warm-Start HITL MOBO. An online study (N=117) demonstrates OptiCarVis's efficacy in significantly improving trust, acceptance, perceived safety, and predictability without increasing cognitive load. OptiCarVis facilitates a comprehensive design space exploration, enhancing in-vehicle interfaces for optimal passenger experiences and broader applicability.
false
false
[ "Pascal Jansen", "Mark Colley", "Svenja Krauß", "Daniel Hirschle", "Enrico Rukzio" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2501.06757v3", "icon": "paper" } ]
CHI
2025
PAIRcolator: Pair Collaboration for Sensemaking and Reflection on Personal Data
10.1145/3706598.3713332
This paper explores pair collaboration as a novel approach for making sense of personal data. Pair collaboration—characterized by dyadic comparison and structured roles for questioning and reasoning—has proven effective for co-constructing knowledge. However, current collaborative visualization tools primarily focus on group comparisons, overlooking the challenges of accommodating pair collaboration in the context of personal data. To address this gap, we propose a set of design rationales supporting subjective data analysis through dyadic comparison and mixed-focus collaboration styles for co-constructing personal narratives. We operationalize these principles in a tangible visualization toolkit, PAIRcolator. Our user study demonstrates that pairwise collaboration facilitated by the toolkit: 1) reveals detailed data insights that are effective for recalling personal experiences, and 2) fosters a structured, reciprocal sensemaking process for interpreting and reconstructing personal experiences beyond data insights. Our results shed light on the design rationales for, and the processes of pair sensemaking of personal data, and their effects to foster deep levels of reflection.
false
false
[ "Di Yan", "Jacky Bourgeois", "Yen-Chia Hsu", "Gerd Kortuem" ]
[]
[]
[]
CHI
2025
Paratrouper: Exploratory Creation of Character Cast Visuals Using Generative AI
10.1145/3706598.3714242
Great characters are critical to the success of many forms of media, such as comics, games, and films. Designing visually compelling casts of characters requires significant skill and consideration, and there is a lack of specialized tools to support this endeavor. We investigate how AI-driven image-generation techniques can empower creatives to explore a variety of visual design possibilities for individual and groups of characters. Informed by interviews with character designers, Paratrouper is a multi-modal system that enables creating and experimenting with multiple permutations for character casts and visualizing them in various contexts as part of a holistic approach to design. We demonstrate how Paratrouper supports different aspects of the character design process, and share insights from its use by eight creators. Our work highlights the interplay between creative agency and serendipity, as well as the visual interrelationships among character aesthetics.
false
false
[ "Joanne Leong", "David Ledo", "Thomas Driscoll", "Tovi Grossman", "George W. Fitzmaurice", "Fraser Anderson" ]
[]
[]
[]
CHI
2025
PiaMuscle: Improving Piano Skill Acquisition by Cost-effectively Estimating and Visualizing Activities of Miniature Hand Muscles
10.1145/3706598.3713465
Understanding neuromusculoskeletal mechanisms significantly impacts skill specialization and proficiency. While existing methods can infer large muscle activities during gross motor movements, the estimation of dexterous motor control involving miniature muscles remains underexplored. Targeting the coordinated hand muscles in advanced piano performance, we learn spatiotemporal discrete representations of electromyography (EMG) data and hand postures utilizing a multimodal dataset. Subsequently, we train a precise and cost-effective neural network model. Based on this model, PiaMuscle is introduced to investigate if visualizing muscle activities during piano training enhances piano performance. Quantitative and qualitative results of a user study with highly skilled professional pianists demonstrate that PiaMuscle provides reliable muscle activation data to support and optimize force control. Our research underscores the potential of a naturalistic workflow to estimate small muscles' activities from readily accessible human-centric information and more accurately when combined with tool-centric data, thereby enhancing skill acquisition.
false
false
[ "Ruofan Liu 0001", "Yichen Peng", "Takanori Oku", "Chen-Chieh Liao", "Erwin Wu", "Shinichi Furuya", "Hideki Koike" ]
[]
[]
[]
CHI
2025
Plume: Scaffolding Text Composition in Dashboards
10.1145/3706598.3713580
Text in dashboards plays multiple critical roles, including providing context, offering insights, guiding interactions, and summarizing key information. Despite its importance, most dashboarding tools focus on visualizations and offer limited support for text authoring. To address this gap, we developed Plume, a system to help authors craft effective dashboard text. Through a formative review of exemplar dashboards, we created a typology of text parameters and articulated the relationship between visual placement and semantic connections, which informed Plume's design. Plume employs large language models (LLMs) to generate contextually appropriate content and provides guidelines for writing clear, readable text. A preliminary evaluation with 12 dashboard authors explored how assisted text authoring integrates into workflows, revealing strengths and limitations of LLM-generated text and the value of our human-in-the-loop approach. Our findings suggest opportunities to improve dashboard authoring tools by better supporting the diverse roles that text plays in conveying insights.
false
false
[ "Maxim Lisnic", "Vidya Setlur", "Nicole Sultanum" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2503.07512v1", "icon": "paper" } ]
CHI
2025
PrivacyHub: A Functional Tangible and Digital Ecosystem for Interoperable Smart Home Privacy Awareness and Control
10.1145/3706598.3713517
Hubs are at the core of most smart homes. Modern cross-ecosystem protocols and standards enable smart home hubs to achieve interoperability across devices, offering the unique opportunity to integrate universally available smart home privacy awareness and control features. To date, such privacy features mainly focus on individual products or prototypical research artifacts. We developed a cross-ecosystem hub featuring a tangible dashboard and a digital web application to deepen our understanding of how smart home users interact with functional privacy features. The ecosystem allows users to control the connectivity states of their devices and raises awareness by visualizing device positions, states, and data flows. We deployed the ecosystem in six households for one week and found that it increased participants' perceived control, awareness, and understanding of smart home privacy. We further found distinct differences between tangible and digital mechanisms. Our findings highlight the value of cross-ecosystem hubs for effective privacy management.
false
false
[ "Maximiliane Windl", "Philipp Thalhammer", "David Müller 0009", "Albrecht Schmidt 0001", "Sebastian S. Feger" ]
[]
[]
[]
CHI
2025
PrivCAPTCHA: Interactive CAPTCHA to Facilitate Effective Comprehension of APP Privacy Policy
10.1145/3706598.3713928
Traditional app privacy policies are often lengthy and non-interactive, leading users to skip them and remain uninformed. To address this, we proposed PrivCAP, a technique to enhance user comprehension by presenting policies in a concise, interactive format. PrivCAP adopted a CAPTCHA-based design, requiring users to interact with clickable chunks of concise policy content, thus reducing physical and cognitive load. A formative study (N=38) demonstrated that participants valued informed consent alongside concerns over data collection and sharing, marking the first such evaluation among Chinese users. This study further found a preference for concise visualizations and interactable formats. PrivCAP, leveraging few-shot prompting on Large Language Models (LLMs), accurately translates privacy policies into clickable, chunked formats optimized for smartphone screens. In an evaluation (N=28), PrivCAP outperformed traditional policy presentations in improving user understanding, reducing cognitive load, and maintaining efficiency, with participants favoring its engaging design and reporting more informed decision-making.
false
false
[ "Shuning Zhang", "Xin Yi 0001", "Shixuan Li", "Haobin Xing", "Hewu Li" ]
[]
[]
[]
CHI
2025
Progression Balancing × Baldur's Gate 3: Insights, Terms and Tools for Multi-Dimensional Video Game Balance
10.1145/3706598.3713162
Internal game balancing is one of the major components that affect player experience, as it is responsible for a large share of development time, the majority of game update patches and long-term player satisfaction. This makes tools and methodologies of assessing and advancing game balance a valuable endeavor for industry and academia. During the past decades, scientific research produced numerous outputs to inform and enhance game balancing, yet most of them only adhere to a single dimension of balance: fixed (end-game) scenarios. However, games are usually experienced throughout a continuous spectrum of ever-changing constellations, which should be reflected. Using simulation, game-playing AI, visual analytics and informative metrics, we introduce a methodology and implementation of Progression Balancing, incorporating multi-dimensional game aspects. For the sake of exposition and ecological validity, we applied it in one of the most successful recent games (Baldur's Gate 3), and evaluated its efficacy with help of its player community.
false
false
[ "Johannes Pfau" ]
[]
[]
[]
CHI
2025
Reciportrait: a Data Humanism Approach for Collaborative Sensemaking of Personal Data
10.1145/3706598.3713300
Data Humanism has gained prominence in personal visualization and Personal Informatics, advocating for a subjective and slow approach to engage with personal data. Collaborative sensemaking has great potential for aiding the understanding of personal data, yet little is known about addressing requirements of structure and coordination when integrating Data Humanism into collaborative visualization. In this paper, we propose design principles for creating both subjective and effective collaborative visualizations, while coordinating the slow sensemaking process and promoting data awareness and communication. We operationalize these principles into a personal visualization toolkit, which we evaluate with an observational study involving 16 university students (8 pairs) analyzing each other's screen-time data. Our findings reveal that implementing the proposed design principles: (1) facilitated data comparison from shared subjective perspectives, (2) helped coordinate sensemaking while allowing time for understanding personal data, and (3) helped the contextualization of data patterns, in turn aiding self-reflection.
false
false
[ "Di Yan", "Chenge Tang", "Senthil Chandrasegaran", "Gerd Kortuem" ]
[]
[]
[]
CHI
2025
Reflecting on Design Paradigms of Animated Data Video Tools
10.1145/3706598.3713449
Animated data videos have gained significant popularity in recent years. However, authoring data videos remains challenging due to the complexity of creating and coordinating diverse components (e.g., visualization, animation, audio, etc.). Although numerous tools have been developed to streamline the process, there is a lack of comprehensive understanding and reflection of their design paradigms to inform future development. To address this gap, we propose a framework for understanding data video creation tools along two dimensions: what data video components to create and coordinate, including visual, motion, narrative, and audio components, and how to support the creation and coordination. By applying the framework to analyze 46 existing tools, we summarized key design paradigms of creating and coordinating each component based on the varying work distribution for humans and AI in these tools. Finally, we share our detailed reflections, highlight gaps from a holistic view, and discuss future directions to address them.
false
false
[ "Leixian Shen", "Haotian Li 0001", "Yun Wang 0012", "Huamin Qu" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2502.04801v1", "icon": "paper" } ]
CHI
2025
RemapVR: An Immersive Authoring Tool for Rapid Prototyping of Remapped Interaction in VR
10.1145/3706598.3714201
Remapping techniques in VR such as repositioning, redirection, and resizing have been extensively studied. Still, interaction designers rarely have the opportunity to use them due to high technical and knowledge barriers. In the paper, we extract common features of 24 existing remapping techniques and develop a high-fidelity immersive authoring tool, namely RemapVR, for rapidly building and experiencing prototypes of remapped space properties in VR that are unperceivable or acceptable to users. RemapVR provides designers with a series of functions for editing remappings and visualizing spatial property changes, mapping relationships between real and virtual worlds, sensory conflicts, etc. Designers can quickly build existing remappings via templates, and author new remappings by interactively recording spatial relations between input trajectory in real world and output trajectory in virtual world. User studies showed that the designs of RemapVR can effectively improve designers' authoring experience and efficiency, and support designers to author remapping prototypes that meet scene requirements and provide good user experience.
false
false
[ "Tianren Luo", "Tong Wu", "Chaoyong Jiang", "Xinran Duan", "Jiafu Lv", "Nianlong Li", "Yachun Fan", "Teng Han", "Feng Tian 0001" ]
[]
[]
[]
CHI
2025
RidgeBuilder: Interactive Authoring of Expressive Ridgeline Plots
10.1145/3706598.3714209
Ridgeline plots are frequently employed to visualize the evolution or distributions of multiple series with a pile of overlapping line, area, or bar charts, highlighting the peak patterns. While traditionally viewed as small multiple visualizations, their ridge-like patterns have increasingly attracted graphic designers to create appealing customized ridgeline plots. However, many tools only support creating basic ridgeline plots and overlook their diverse layouts and styles. This paper introduces a comprehensive design space for ridgeline plots, focusing on their varied layouts and expressive styles. We present RidgeBuilder, an intuitive tool for creating expressive ridgeline plots with customizable layouts and styles. In particular, we summarize three goals for refining the layout of ridgeline plots and propose an optimization method. We assess RidgeBuilder's usability and usefulness through a reproduction study and evaluate the layout optimization algorithm through anonymized questionnaires. The effectiveness is demonstrated with a gallery of ridgeline plots created by RidgeBuilder.
false
false
[ "Shuhan Liu", "Yangtian Liu", "Junxin Li", "Yanwei Huang", "Yue Shangguan", "Zikun Deng", "Di Weng", "Yingcai Wu" ]
[]
[]
[]
CHI
2025
Scaffolding Empathy: Training Counselors with Simulated Patients and Utterance-level Performance Visualizations
10.1145/3706598.3714014
Learning therapeutic counseling involves significant role-play experience with mock patients, with current manual training methods providing only intermittent granular feedback. We seek to accelerate and optimize counselor training by providing frequent, detailed feedback to trainees as they interact with a simulated patient. Our first application domain involves training motivational interviewing skills for counselors. Motivational interviewing is a collaborative counseling style in which patients are guided to talk about changing their behavior, with empathetic counseling an essential ingredient. We developed and evaluated an LLM-powered training system that features a simulated patient and visualizations of turn-by-turn performance feedback tailored to the needs of counselors learning motivational interviewing. We conducted an evaluation study with professional and student counselors, demonstrating high usability and satisfaction with the system. We present design implications for the development of automated systems that train users in counseling skills and their generalizability to other types of social skills training.
false
false
[ "Ian Steenstra", "Farnaz Nouraei", "Timothy W. Bickmore" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2502.18673v1", "icon": "paper" } ]
CHI
2025
Seeing Eye to AI? Applying Deep-Feature-Based Similarity Metrics to Information Visualization
10.1145/3706598.3713955
Judging the similarity of visualizations is crucial to various applications, such as visualization-based search and visualization recommendation systems. Recent studies show deep-feature-based similarity metrics correlate well with perceptual judgments of image similarity and serve as effective loss functions for tasks like image super-resolution and style transfer. We explore the application of such metrics to judgments of visualization similarity. We extend a similarity metric using five ML architectures and three pre-trained weight sets. We replicate results from previous crowdsourced studies on scatterplot and visual channel similarity perception. Notably, our metric using pre-trained ImageNet weights outperformed gradient-descent tuned MS-SSIM, a multi-scale similarity metric based on luminance, contrast, and structure. Our work contributes to understanding how deep-feature-based metrics can enhance similarity assessments in visualization, potentially improving visual analysis tools and techniques. Supplementary materials are available at https://osf.io/dj2ms/.
false
false
[ "Sheng Long", "Angelos Chatzimparmpas", "Emma Alexander", "Matthew Kay 0001", "Jessica Hullman" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2503.00228v1", "icon": "paper" } ]
CHI
2025
Seeing Through the Overlap: The Impact of Color and Opacity on Depth Order Perception in Visualization
10.1145/3706598.3714070
Semi-transparent visualizations are commonly used to reveal information in overlapped regions by applying colors and opacity. While a few studies made recommendations on how to choose colors and opacity levels to maintain depth perception, they often conflict and overlook the interaction effect between these factors. In this paper, we systematically explore the impact of color and opacity on depth order perception across eight colors, three opacity levels, and various layer orders and arrangements. Our inferential analysis shows that both color hue and opacity significantly influence depth order perception, with the effectiveness depending on their interaction. We also derived 12 features for predictive analysis, achieving the best mean accuracy of 80.72% and mean F1 score of 87.75%, with opacity assigned to the front layer as the top feature for most models. Finally, we provide a small design tool and four guidelines to better align the design rules of colors and opacity in semi-transparent visualizations.
false
false
[ "Zhiyuan Meng", "Yunpeng Yang", "Qiong Zeng", "Kecheng Lu", "Lin Lu 0001", "Changhe Tu", "Fumeng Yang", "Yunhai Wang" ]
[]
[]
[]
CHI
2025
SeeThroughBody: Mitigating Occlusion through Body Transparency to Enhance Foot-Floor Touch Interaction
10.1145/3706598.3713659
Occlusion, often caused by the user's body or fingers, can significantly reduce the efficiency and usability of touch interfaces. As foot-based interactions in HMDs become more prevalent, self-occlusion becomes a more pronounced issue due to the involvement of the body and legs. This work presents SeeThroughBody, a body-rendering approach designed to mitigate occlusion and enhance touch interactions between the foot and interactive floor in virtual environments. Our user study unveiled twofold results. First, changing VisualizationStyles and BodyPartsVisibility can improve objective performance (e.g., time, movement) by reducing occlusion. Second, these modifications also affect the subjective user experience (e.g., embodiment, usability). Different VisualizationStyles and BodyPartsVisibility have varying impacts, presenting trade-offs between performance and experience. Based on these insights, we recommend Transparent-Foot and Outline-Foot for interactions focused on efficiency, and Transparent-All and Transparent-Thigh for enhancing overall user experience. Finally, we demonstrate the application of these recommendations in a map browsing scenario using foot touch.
false
false
[ "Meng Ting Shih", "Chun-Jui Chou", "Tzu-Wei Mi", "Liwei Chan 0001" ]
[]
[]
[]
CHI
2025
Sequential Visual Cues from Gaze Patterns: Reasoning Assistance for Bar Charts
10.1145/3706598.3713352
Even for well-studied visual reasoning tasks such as those performed on bar charts, little is known about the cognitive strategies users adopt to solve them. Guidance systems that support users in learning visual reasoning require information on successful strategies to help unsuccessful users improve or change their strategies. We introduce the guidance paradigm of sequential visual cues (SVCs), accompanied by a differential pattern mining approach that determines relevant visual attention patterns from gaze data, and exemplified for bar charts. The novel feature of SVCs is to give hints on critical fragments of successful strategies, guiding users where to look in a visualization and in which order, but not what to do with this information. Results from an empirical study (N=30) show how critical patterns of successful and unsuccessful strategies differ for various bar chart tasks. In a qualitative survey (N=5), we explore how to surface relevant gaze patterns as SVCs.
false
false
[ "Antonia Schlieder", "Jan Rummel", "Peter Albers", "Filip Sadlo" ]
[]
[]
[]
CHI
2025
SocialEyes: Scaling Mobile Eye-tracking to Multi-person Social Settings
10.1145/3706598.3713910
Eye movements provide a window into human behaviour, attention, and interaction dynamics. Challenges in real-world, multi-person environments have, however, restrained eye-tracking research predominantly to single-person, in-lab settings. We developed a system to stream, record, and analyse synchronised data from multiple mobile eye-tracking devices during collective viewing experiences (e.g., concerts, films, lectures). We implemented lightweight operator interfaces for real-time-monitoring, remote-troubleshooting, and gaze-projection from individual egocentric perspectives to a common coordinate space for shared gaze analysis. We tested the system in a live concert and a film screening with 30 simultaneous viewers during each of two public events (N=60). We observe precise time-synchronisation between devices measured through recorded clock-offsets, and accurate gaze-projection in challenging dynamic scenes. Our novel analysis metrics and visualizations illustrate the potential of collective eye-tracking data for understanding collaborative behaviour and social interaction. This advancement promotes ecological validity in eye-tracking research and paves the way for innovative interactive tools.
false
false
[ "Shreshth Saxena", "Areez Visram", "Neil Lobo", "Zahid Mirza", "Mehak Rafi Khan", "Biranugan Pirabaharan", "Alexander Nguyen", "Lauren K. Fink" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2407.06345v4", "icon": "paper" } ]
CHI
2025
SpatIO: Spatial Physical Computing Toolkit Based on Extended Reality
10.1145/3706598.3713747
Proper placement of sensors and actuators is one of the key factors when designing spatial and proxemic interactions. However, current physical computing tools do not effectively support placing components in three-dimensional space, often forcing designers to build and test prototypes without precise spatial configuration. To address this, we propose the concept of spatial physical computing and present SpatIO, an XR-based physical computing toolkit that supports a continuous end-to-end workflow. SpatIO consists of three interconnected subsystems: SpatIO Environment for composing and testing prototypes with virtual sensors and actuators, SpatIO Module for converting virtually placed components into physical ones, and SpatIO Code for authoring interactions with spatial visualization of data flow. Through a comparative user study with 20 designers, we found that SpatIO significantly altered workflow order, encouraged broader exploration of component placement, enhanced spatial correlation between code and components, and promoted in-situ bodily testing.
false
false
[ "Seung Hyeon Han", "Yeeun Han", "Kyeongho Park", "Sangjun Lee", "Woohun Lee" ]
[]
[]
[]
CHI
2,025
SPECTRA: Personalizable Sound Recognition for Deaf and Hard of Hearing Users through Interactive Machine Learning
10.1145/3706598.3713294
We introduce SPECTRA, a novel pipeline for personalizable sound recognition designed to understand DHH users' needs when collecting audio data, creating a training dataset, and reasoning about the quality of a model. To evaluate the prototype, we recruited 12 DHH participants who trained personalized models for their homes. We investigated waveforms, spectrograms, interactive clustering, and data annotating to support DHH users throughout this workflow, and we explored the impact of a hands-on training session on their experience and attitudes toward sound recognition tools. Our findings reveal the potential for clustering visualizations and waveforms to enrich users' understanding of audio data and refinement of training datasets, along with data annotations to promote varied data collection. We provide insights into DHH users' experiences and perspectives on personalizing a sound recognition pipeline. Finally, we share design considerations for future interactive systems to support this population.
false
false
[ "Steven M. Goodman", "Emma J. McDonnell", "Jon E. Froehlich", "Leah Findlater" ]
[]
[]
[]
CHI
2,025
SpeechCompass: Enhancing Mobile Captioning with Diarization and Directional Guidance via Multi-Microphone Localization
10.1145/3706598.3713631
Speech-to-text capabilities on mobile devices have proven helpful for hearing and speech accessibility, language translation, note-taking, and meeting transcripts. However, our foundational large-scale survey (n=263) shows that the inability to distinguish and indicate speaker direction makes them challenging in group conversations. SpeechCompass addresses this limitation through real-time, multi-microphone speech localization, where the direction of speech allows visual separation and guidance (e.g., arrows) in the user interface. We introduce efficient real-time audio localization algorithms and custom sound perception hardware, running on a low-power microcontroller with four integrated microphones, which we characterize in technical evaluations. Informed by a large-scale survey (n=494), we conducted an in-person study of group conversations with eight frequent users of mobile speech-to-text, who provided feedback on five visualization styles. The value of diarization and visualizing localization was consistent across participants, with everyone agreeing on the value and potential of directional guidance for group conversations.
false
false
[ "Artem Dementyev", "Dimitri Kanevsky", "Samuel J. Yang", "Mathieu Parvaix", "Chiong Lai", "Alex Olwal" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2502.08848v2", "icon": "paper" } ]
CHI
2,025
StructVizor: Interactive Profiling of Semi-Structured Textual Data
10.1145/3706598.3713484
Data profiling plays a critical role in understanding the structure of complex datasets and supporting numerous downstream tasks, such as social media analytics and financial fraud detection. While existing research predominantly focuses on structured data formats, a substantial portion of semi-structured textual data still requires ad-hoc and arduous manual profiling to extract and comprehend its internal structures. In this work, we propose StructVizor, an interactive profiling system that facilitates sensemaking and transformation of semi-structured textual data. Our tool mainly addresses two challenges: a) extracting and visualizing the diverse structural patterns within data, such as how information is organized or related, and b) enabling users to efficiently perform various wrangling operations on textual data. Through automatic data parsing and structure mining, StructVizor enables visual analytics of structural patterns, while incorporating novel interactions to enable profile-based data wrangling. A comparative user study involving 12 participants demonstrates the system's usability and its effectiveness in supporting exploratory data analysis and transformation tasks.
false
false
[ "Yanwei Huang", "Yan Miao", "Di Weng", "Adam Perer", "Yingcai Wu" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2503.06500v1", "icon": "paper" } ]
CHI
2,025
TableCanoniser: Interactive Grammar-Powered Transformation of Messy, Non-Relational Tables to Canonical Tables
10.1145/3706598.3714321
TableCanoniser is a declarative grammar and interactive system for constructing relational tables from messy tabular inputs such as spreadsheets. We propose the concept of axis alignment to categorise input types and characterise the expanded scope of our system relative to existing tools. The declarative grammar consists of match conditions, which specify repeating patterns of input cells, and extract operations, which specify how matched values map to the output table. In the interactive interface, users can specify match and extract patterns by interacting with an input table, or author more advanced specifications in the coding panel. To refine and verify specifications, users interact with grammar-based provenance visualisations such as linked highlighting of input and output values, tree-based visualisation of matching patterns, and a mini-map overview of matched instances of patterns with annotations showing where cells are extracted to. We motivate and illustrate our work with real-world usage scenarios and workflows.
false
false
[ "Kai Xiong", "Cynthia A. Huang", "Michael Wybrow", "Yingcai Wu" ]
[]
[]
[]
CHI
2,025
TangibleNet: Synchronous Network Data Storytelling through Tangible Interactions in Augmented Reality
10.1145/3706598.3714265
Synchronous data-driven storytelling with network visualizations presents significant challenges due to the complexity of real-time manipulation of network components. While existing research addresses asynchronous scenarios, there is a lack of effective tools for live presentations. To address this gap, we developed TangibleNet, a projector-based AR prototype that allows presenters to interact with node-link diagrams using double-sided magnets during live presentations. The design process was informed by interviews with professionals experienced in synchronous data storytelling and workshops with 14 HCI/VIS researchers. Insights from the interviews helped identify key design considerations for integrating physical objects as interactive tools in presentation contexts. The workshops contributed to the development of a design space mapping user actions to interaction commands for node-link diagrams. Evaluation with 12 participants confirmed that TangibleNet supports intuitive interactions and enhances presenter autonomy, demonstrating its effectiveness for synchronous network-based data storytelling.
false
false
[ "Kentaro Takahira", "Wong Kam-Kwai", "Leni Yang", "Xian Xu", "Takanori Fujiwara", "Huamin Qu" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2504.04710v1", "icon": "paper" } ]
CHI
2,025
The Anatomy of a Plea: How Uncertainty, Visualizations & Individual Differences Shape Plea Bargain Decisions
10.1145/3706598.3713096
Plea bargains are commonly used in the criminal justice system, where they can offer potential benefits to both the prosecution and the defendant. However, research has shown that defendants often engage in poor decision-making, such as accepting the plea even when the trial sentence is likely to be less severe. While previous studies have shown some evidence that uncertainty visualizations can improve decision-making, there is a lack of research on their effectiveness in domain-specific tasks like plea bargain decision-making. In this work, we conduct a series of experiments to explore whether the presence and format of uncertainty impact plea bargain decisions, taking into account time pressure and individual differences. Our findings reveal that these factors can have a significant impact on plea bargain decisions. We also show evidence that communicating uncertainty in the form of text can elicit more optimal decisions under time-pressure conditions.
false
false
[ "Melanie Bancilhon", "Alvitta Ottley", "Andrew Jordan" ]
[]
[]
[]
CHI
2,025
The Gulf of Interpretation: From Chart to Message and Back Again
10.1145/3706598.3713413
Charts are used to communicate data visually, but often, we do not know whether a chart's intended message aligns with the message readers perceive. In this mixed-methods study, we investigate how data journalists encode data and how members of a broad audience engage with, experience, and understand these visualizations. We conducted workshops and interviews with school and university students, job seekers, designers, and senior citizens to collect perceived messages and feedback on eight real-world charts. We analyzed these messages and compared them to the intended message. Our results help to understand the gulf that can exist between messages (that producers encode) and viewer interpretations. In particular, we find that consumers are often overwhelmed with the amount of data provided and are easily confused with terms that are not well known. Chart producers tend to follow strong conventions on how to visually encode particular information that might not always benefit consumers.
false
false
[ "Christian Knoll 0004", "Torsten Möller", "Kathleen Gregory", "Laura Koesten" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2310.05752v2", "icon": "paper" } ]
CHI
2,025
The Interaction Geography Slicer: Designing Exploratory Spatial Data Visualization Tools for Teachers' Reflective Practice
10.1145/3706598.3713499
Researchers in HCI and teacher education have long recognized the potential of visualization to support teachers' reflective practice. Despite much progress, however, teacher educators continue to highlight the need for more dynamic classroom data visualizations to better support teachers' reflective practice, particularly about spatial dimensions of their pedagogy. In response, this article makes three contributions. First, we build on prior work to present the Interaction Geography Slicer (IGS), an open-source tool to dynamically visualize movement, conversation, and video data over space and time in settings such as classrooms. Second, we share findings from a participatory design-based research project involving 11 experienced high school mathematics teachers who used the IGS over one year to support their reflective practice. Finally, we propose new directions for exploratory spatial classroom data visualization.
false
false
[ "Ben Rydal Shapiro", "Elizabeth C. Metts", "Edwin Zhao" ]
[]
[]
[]
CHI
2,025
The Many Tendrils of the Octopus Map
10.1145/3706598.3713583
Conspiratorial thinking can connect many distinct or distant ills to a central cause. This belief has visual form in the octopus map: a map where a central force (for instance a nation, an ideology, or an ethnicity) is depicted as a literal or figurative octopus, with extending tendrils. In this paper, we explore how octopus maps function as visual arguments through an analysis of historical examples as well as a through a crowd-sourced study on how the underlying data and the use of visual metaphors contribute to specific negative or conspiratorial interpretations. We find that many features of the data or visual style can lead to "octopus-like" thinking in visualizations, even without the use of an explicit octopus motif. We conclude with a call for a deeper analysis of visual rhetoric, and an acknowledgment of the potential for the design of data visualizations to contribute to harmful or conspiratorial thinking.
false
false
[ "Eduardo Puerta", "Shani Claire Spivak", "Michael Correll" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2501.14903v1", "icon": "paper" } ]
CHI
2,025
The Social Construction of Visualizations: Practitioner Challenges and Experiences of Visualizing Race and Gender
10.1145/3706598.3713243
Data visualizations are increasingly seen as socially constructed, with several recent studies positing that perceptions and interpretations of visualization artifacts are shaped through complex sets of interactions between members of a community. However, most of these works have focused on audiences and researchers, and little is known about if and how practitioners account for the socially constructed framing of data visualization. In this paper, we study and analyze how visualization practitioners understand the influence of their beliefs, values, and biases in their design processes and the challenges they experience. In 17 semi-structured interviews with designers working with race and gender demographic data, we find that a complex mix of factors interact to inform how practitioners approach their design process—including their personal experiences, values, and their understandings of power, neutrality, and politics. Based on our findings, we suggest a series of implications for research and practice in this space.
false
false
[ "Priya Dhawka", "Sayamindu Dasgupta" ]
[]
[]
[]
CHI
2,025
Toward Filling a Critical Knowledge Gap: Charting the Interactions of Age with Task and Visualization
10.1145/3706598.3714229
We present the results of a study comparing the performance of younger adults (YA) and people in late adulthood (PLA) across ten low-level analysis tasks and five basic visualizations, employing Bayesian regression to aggregate and model participant performance. We analyzed performance at the task level and across combinations of tasks and visualizations, reporting measures of performance at aggregate and individual levels. These analyses showed that PLA on average required more time to complete tasks while demonstrating comparable accuracy. Furthermore, at the individual level, PLA exhibited greater heterogeneity in task performance as well as differences in best-performing visualization types for some tasks. We contribute empirical knowledge on how age interacts with analysis task and visualization type and use these results to offer actionable insights and design recommendations for aging-inclusive visualization design. We invite the visualization research community to further investigate aging-aware data visualization. Supplementary materials can be found at https://osf.io/a7xtz/.
false
false
[ "Zack While", "Ali Sarvghad" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2503.02699v1", "icon": "paper" } ]
CHI
2,025
Toward Personalizable AI Node Graph Creative Writing Support: Insights on Preferences for Generative AI Features and Information Presentation Across Story Writing Processes
10.1145/3706598.3713569
As story writing requires diverse resources, a single system combining these resources could improve personalization. We leverage the broad capabilities of generative AI to support both more general story writing needs and an understudied but essential aspect: reflection on the moral (lesson) conveyed. Through a formative study (N=12), a user study (N=14), and external evaluation (N=19), we designed, implemented, then studied a prototype plugin for FigJam supporting visualization of the story structure through customizable node graph editing, LLM audience impersonation (chatbot and non-chatbot interfaces), and image and audio generative AI features. Our findings support writers' preference for leveraging unique interplays of our breadth of features to satisfy shifting needs across writing processes, from conveying a moral across audience groups to story writing in general. We discuss how our tool design and findings can inform model bias, personalized writing support, and visualization research.
false
false
[ "Hua Xuan Qin", "Guangzhi Zhu", "Mingming Fan 0001", "Pan Hui 0001" ]
[]
[]
[]
CHI
2,025
Traversing Dual Realities: Investigating Techniques for Transitioning 3D Objects between Desktop and Augmented Reality Environments
10.1145/3706598.3713949
Desktop environments can integrate augmented reality (AR) head-worn devices to support 3D representations, visualizations, and interactions in a novel yet familiar setting. As users navigate across the dual realities—desktop and AR—a way to move 3D objects between them is needed. We devise three baseline transition techniques based on common approaches in the literature and evaluate their usability and practicality in an initial user study (N=18). After refining both our transition techniques and the surrounding technical setup, we validate the applicability of the overall concept for real-world activities in an expert user study (N=6). In it, computational chemists followed their usual desktop workflows to build, manipulate, and analyze 3D molecular structures, but now aided with the addition of AR and our transition techniques. Based on our findings from both user studies, we provide lessons learned and takeaways for the design of 3D object transition techniques in desktop + AR environments.
false
false
[ "Tobias Rau", "Tobias Isenberg 0001", "Andreas Köhn", "Michael Sedlmair", "Benjamin Lee 0001" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2504.00371v1", "icon": "paper" } ]
CHI
2,025
Trustworthy by Design: The Viewer's Perspective on Trust in Data Visualization
10.1145/3706598.3713824
Despite the importance of viewers' trust in data visualization, there is a lack of research on the viewers' own perspective on their trust. In addition, much of the research on trust remains relatively theoretical and inaccessible for designers. This work aims to address this gap by conducting a qualitative study to explore how viewers perceive different data visualizations and how their perceptions impact their trust. Three dominant themes emerged from the data. First, users appeared to be consistent, listing similar rationale for their trust across different stimuli. Second, there were diverse opinions about what factors were most important to trust perception and about why the factors matter. Third, despite this disagreement, there were important trends to the factors that users reported as impactful. Finally, we leverage these themes to give specific and actionable guidelines for visualization designers to make more trustworthy visualizations.
false
false
[ "Oen G. McKinley", "Saugat Pandey", "Alvitta Ottley" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2503.10892v1", "icon": "paper" } ]
CHI
2,025
Uncertainty in Science is Malleable. Advocating for User-Agency in Defining Uncertainty in Visualizations: a Case Study in Geology
10.1145/3706598.3713972
Uncertainty is inherent in science built on previous results. In geoscience, for instance, researchers analyzing volcanic deposits assess the uncertainty around past deposit classifications. To aid this assessment, we followed a design by immersion approach to co-design uncertainty visualizations. We observed that besides visualizing it, it is challenging even to define what constitutes uncertainty, as how researchers understand and process uncertainty evolves. This motivated us to reach other members of the community to better understand how they integrate uncertainty in their work. Informed by a series of interviews, we first redesigned our visualization system and then introduced it as a technology probe to a broader community of geoscientists. Our results highlight that uncertainty in science is malleable and that visualization systems should be designed with this malleability in mind. Through a set of design implications, we advocate for visualizations that promote user agency and flexibility in defining and processing uncertainty.
false
false
[ "Vanessa Peña Araya", "Consuelo Martínez Fontaine", "Xiang Wei", "Guillaume Delpech", "Anastasia Bezerianos" ]
[]
[]
[]
CHI
2,025
Understanding Marine Scientist Software Tool Use
10.1145/3706598.3713621
Marine science researchers are heavy users of software tools and systems such as statistics packages, visualization tools, and online data catalogues. Following a constructivist grounded theory approach, we conduct a semi-structured interview study of 23 marine science researchers and research supports within a North American university, to understand their perceptions of and approaches towards using both graphical and code-based software tools and systems. We propose the concept of fragmentation to represent how various factors lead to isolated pockets of views and practices concerning software tool use during the research process. These factors include informal learning of tools, preferences towards doing things from scratch, and a push towards more code-based tools. Based on our findings, we suggest design priorities for user interfaces that could more effectively help marine scientists make and use software tools and systems.
false
false
[ "Matthew Lakier", "Andrew Irwin", "Daniel Vogel 0001" ]
[]
[]
[]
CHI
2,025
Unveiling High-dimensional Backstage: A Survey for Reliable Visual Analytics with Dimensionality Reduction
10.1145/3706598.3713551
Dimensionality reduction (DR) techniques are essential for visually analyzing high-dimensional data. However, visual analytics using DR often face unreliability, stemming from factors such as inherent distortions in DR projections. This unreliability can lead to analytic insights that misrepresent the underlying data, potentially resulting in misguided decisions. To tackle these reliability challenges, we review 133 papers that address the unreliability of visual analytics using DR. Through this review, we contribute (1) a workflow model that describes the interaction between analysts and machines in visual analytics using DR, and (2) a taxonomy that identifies where and why reliability issues arise within the workflow, along with existing solutions for addressing them. Our review reveals ongoing challenges in the field, whose significance and urgency are validated by five expert researchers. This review also finds that the current research landscape is skewed toward developing new DR techniques rather than their interpretation or evaluation, where we discuss how the HCI community can contribute to broadening this focus.
false
false
[ "Hyeon Jeon", "Hyunwook Lee", "Yun-Hsin Kuo", "Taehyun Yang", "Daniel Archambault", "Sungahn Ko", "Takanori Fujiwara", "Kwan-Liu Ma", "Jinwook Seo" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2501.10168v2", "icon": "paper" } ]
CHI
2,025
Visualization Guardrails: Designing Interventions Against Cherry-Picking in Interactive Data Explorers
10.1145/3706598.3713385
The growing popularity of interactive time series exploration platforms has made data visualization more accessible to the public. However, the ease of creating polished charts with preloaded data also enables selective information presentation, often resulting in biased or misleading visualizations. Research shows that these tools have been used to spread misinformation, particularly in areas such as public health and economic policies during the COVID-19 pandemic. Post hoc fact-checking may be ineffective because it typically addresses only a portion of misleading posts and comes too late to curb the spread. In this work, we explore using visualization design to counteract cherry-picking, a common tactic in deceptive visualizations. We propose a design space of guardrails—interventions to expose cherry-picking in time-series explorers. Through three crowd-sourced experiments, we demonstrate that guardrails, particularly those superimposing data, can encourage skepticism, though with some limitations. We provide recommendations for developing more effective visualization guardrails.
false
false
[ "Maxim Lisnic", "Zach Cutler", "Marina Kogan", "Alexander Lex" ]
[]
[]
[]
CHI
2,025
VisUnit: Literate Visualisation Studies Assembled from Reusable Test-Suites
10.1145/3706598.3713104
We make four contributions to lower the overhead of conducting visualisation user studies and promote the reuse and extension of their materials. (i) A declarative Javascript specification lets experimenters describe how studies are assembled from tested visualisations, datasets, tasks and chosen evaluation strategies. (ii) A VisUnit library translates these into sequences of visual stimuli and delivers them to participants. We move away from monolithic evaluation stimuli typical of previous work and construct studies around three ingredients – visual encodings, datasets, and tasks – that can be developed independently and recombined flexibly. (iii) This paves the way for developing benchmark data+tasks test-suites as independent, reusable resources to support multiple studies. (iv) Structuring user studies as "literate" visualisation notebooks brings together in the open all ingredients necessary for replication and scrutiny: formal design specification; underlying materials; participant-facing views; and narratives justifying design and supporting reuse.
false
false
[ "Radu Jianu", "Aidan Slingsby", "Dany Laksono", "Mershack Okoe" ]
[]
[]
[]
CHI
2,025
What-if Analysis for Business Professionals: Current Practices and Future Opportunities
10.1145/3706598.3713672
What-if analysis (WIA) is essential for data-driven decision-making, allowing users to assess how changes in variables impact outcomes and explore alternative scenarios. Existing WIA research primarily supports the workflows of data scientists and analysts, and largely overlooks business professionals who engage in WIA through non-technical means. To bridge this gap, we conduct a two-part user study with 22 business professionals across marketing, sales, product, and operations roles. The first study examines their existing WIA practices, tools, and challenges. Findings reveal that business professionals perform many WIA techniques independently using rudimentary tools due to various constraints. We then implement representative WIA techniques in a visual analytics prototype and use it as a probe to conduct a follow-up study evaluating business professionals' practical use of the techniques. Results show that these techniques improve decision-making efficiency and confidence while underscoring the need for better support in data preparation, risk assessment, and domain knowledge integration. Finally, we offer design recommendations to enhance future business analytics systems.
false
false
[ "Sneha Gathani", "Zhicheng Liu", "Peter J. Haas", "Çagatay Demiralp" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2212.13643v4", "icon": "paper" } ]
CHI
2,025
ZuantuSet: A Collection of Historical Chinese Visualizations and Illustrations
10.1145/3706598.3713276
Historical visualizations are a valuable resource for studying the history of visualization and inspecting the cultural context where they were created. When investigating historical visualizations, it is essential to consider contributions from different cultural frameworks to gain a comprehensive understanding. While there is extensive research on historical visualizations within the European cultural framework, this work shifts the focus to ancient China, a cultural context that remains underexplored by visualization researchers. To this aim, we propose a semi-automatic pipeline to collect, extract, and label historical Chinese visualizations. Through the pipeline, we curate ZuantuSet, a dataset with over 71K visualizations and 108K illustrations. We analyze distinctive design patterns of historical Chinese visualizations and their potential causes within the context of Chinese history and culture. We illustrate potential usage scenarios for this dataset, summarize the unique challenges and solutions associated with collecting historical Chinese visualizations, and outline future research directions.
false
false
[ "Xiyao Mei", "Yu Zhang 0043", "Chaofan Yang", "Rui Shi", "Xiaoru Yuan" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2502.19093v1", "icon": "paper" } ]
Vis
2,024
"I Came Across a Junk": Understanding Design Flaws of Data Visualization from the Public's Perspective
10.1109/TVCG.2024.3456341
The visualization community has a rich history of reflecting upon visualization design flaws. Although research in this area has remained lively, we believe it is essential to continuously revisit this classic and critical topic in visualization research by incorporating more empirical evidence from diverse sources, characterizing new design flaws, building more systematic theoretical frameworks, and understanding the underlying reasons for these flaws. To address the above gaps, this work investigated visualization design flaws through the lens of the public, constructed a framework to summarize and categorize the identified flaws, and explored why these flaws occur. Specifically, we analyzed 2227 flawed data visualizations collected from an online gallery and derived a design task-associated taxonomy containing 76 specific design flaws. These flaws were further classified into three high-level categories (i.e., misinformation, uninformativeness, unsociability) and ten subcategories (e.g., inaccuracy, unfairness, ambiguity). Next, we organized five focus groups to explore why these design flaws occur and identified seven causes of the flaws. Finally, we proposed a research agenda for combating visualization design flaws and summarize nine research opportunities.
false
true
[ "Xingyu Lan", "Yu Liu" ]
[ "HM" ]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2407.11497v3", "icon": "paper" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1594.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/4OD9F2xvtPk", "icon": "video" } ]
Vis
2,024
"It's a Good Idea to Put It Into Words": Writing 'Rudders' in the Initial Stages of Visualization Design
10.1109/TVCG.2024.3456324
Written language is a useful tool for non-visual creative activities like composing essays and planning searches. This paper investigates the integration of written language into the visualization design process. We create the idea of a ‘writing rudder,’ which acts as a guiding force or strategy for the designer. Via an interview study of 24 working visualization designers, we first established that only a minority of participants systematically use writing to aid in design. A second study with 15 visualization designers examined four different variants of written rudders: asking questions, stating conclusions, composing a narrative, and writing titles. Overall, participants had a positive reaction; designers recognized the benefits of explicitly writing down components of the design and indicated that they would use this approach in future design work. More specifically, two approaches — writing questions and writing conclusions/takeaways — were seen as beneficial across the design process, while writing narratives showed promise mainly for the creation stage. Although concerns around potential bias during data exploration were raised, participants also discussed strategies to mitigate such concerns. This paper contributes to a deeper understanding of the interplay between language and visualization, and proposes a straightforward, lightweight addition to the visualization design process.
false
true
[ "Chase Stokes", "Clara Hu", "Marti Hearst" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://arxiv.org/abs/2407.15959", "icon": "paper" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1140.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/ciCUI2ju3tM", "icon": "video" } ]
Vis
2,024
2D Embeddings of Multi-dimensional Partitionings
10.1109/TVCG.2024.3456394
Partitionings (or segmentations) divide a given domain into disjoint connected regions whose union forms again the entire domain. Multi-dimensional partitionings occur, for example, when analyzing parameter spaces of simulation models, where each segment of the partitioning represents a region of similar model behavior. Having computed a partitioning, one is commonly interested in understanding how large the segments are and which segments lie next to each other. While visual representations of 2D domain partitionings that reveal sizes and neighborhoods are straightforward, this is no longer the case when considering multi-dimensional domains of three or more dimensions. We propose an algorithm for computing 2D embeddings of multi-dimensional partitionings. The embedding shall have the following properties: It shall maintain the topology of the partitioning and optimize the area sizes and joint boundary lengths of the embedded segments to match the respective sizes and lengths in the multi-dimensional domain. We demonstrate the effectiveness of our approach by applying it to different use cases, including the visual exploration of 3D spatial domain segmentations and multi-dimensional parameter space partitionings of simulation ensembles. We numerically evaluate our algorithm with respect to how well sizes and lengths are preserved depending on the dimensionality of the domain and the number of segments.
true
true
[ "Marina Evers", "Lars Linsen" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/abs/2408.03641", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://github.com/marinaevers/segmentation-projection", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1612.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/91i3yDeIi38", "icon": "video" } ]
Vis
2,024
A Deixis-Centered Approach for Documenting Remote Synchronous Communication around Data Visualizations
10.1109/TVCG.2024.3456351
Referential gestures, or as termed in linguistics, deixis, are an essential part of communication around data visualizations. Despite their importance, such gestures are often overlooked when documenting data analysis meetings. Transcripts, for instance, fail to capture gestures, and video recordings may not adequately capture or emphasize them. We introduce a novel method for documenting collaborative data meetings that treats deixis as a first-class citizen. Our proposed framework captures cursor-based gestural data along with audio and converts them into interactive documents. The framework leverages a large language model to identify word correspondences with gestures. These identified references are used to create context-based annotations in the resulting interactive document. We assess the effectiveness of our proposed method through a user study, finding that participants preferred our automated interactive documentation over recordings, transcripts, and manual note-taking. Furthermore, we derive a preliminary taxonomy of cursor-based deictic gestures from participant actions during the study. This taxonomy offers further opportunities for better utilizing cursor-based deixis in collaborative data analysis scenarios.
true
true
[ "Chang Han", "Katherine E. Isaacs" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://arxiv.org/abs/2408.04041", "icon": "paper" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1487.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/c8cC1W9ucj8", "icon": "video" } ]
Vis
2,024
A General Framework for Comparing Embedding Visualizations Across Class-Label Hierarchies
10.1109/TVCG.2024.3456370
Projecting high-dimensional vectors into two dimensions for visualization, known as embedding visualization, facilitates perceptual reasoning and interpretation. Comparing multiple embedding visualizations drives decision-making in many domains, but traditional comparison methods are limited by a reliance on direct point correspondences. This requirement precludes comparisons without point correspondences, such as two different datasets of annotated images, and fails to capture meaningful higher-level relationships among point groups. To address these shortcomings, we propose a general framework for comparing embedding visualizations based on shared class labels rather than individual points. Our approach partitions points into regions corresponding to three key class concepts—confusion, neighborhood, and relative size—to characterize intra- and inter-class relationships. Informed by a preliminary user study, we implemented our framework using perceptual neighborhood graphs to define these regions and introduced metrics to quantify each concept. We demonstrate the generality of our framework with usage scenarios from machine learning and single-cell biology, highlighting our metrics' ability to draw insightful comparisons across label hierarchies. To assess the effectiveness of our approach, we conducted an evaluation study with five machine learning researchers and six single-cell biologists using an interactive and scalable prototype built with Python, JavaScript, and Rust. Our metrics enable more structured comparisons through visual guidance and increased participants' confidence in their findings.
true
true
[ "Trevor Manz", "Fritz Lekschas", "Evan Greene", "Greg Finak", "Nils Gehlenborg" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/puxnf", "icon": "paper" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1489.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/NOQMkUdisUc", "icon": "video" } ]
Vis
2,024
A Large-Scale Sensitivity Analysis on Latent Embeddings and Dimensionality Reductions for Text Spatializations
10.1109/TVCG.2024.3456308
The semantic similarity between documents of a text corpus can be visualized using map-like metaphors based on two-dimensional scatterplot layouts. These layouts result from a dimensionality reduction on the document-term matrix or a representation within a latent embedding, including topic models. Thereby, the resulting layout depends on the input data and hyperparameters of the dimensionality reduction and is therefore affected by changes in them. However, such changes to the layout require additional cognitive efforts from the user. In this work, we present a sensitivity study that analyzes the stability of these layouts concerning (1) changes in the text corpora, (2) changes in the hyperparameters, and (3) randomness in the initialization. Our approach has two stages: data measurement and data analysis. First, we derived layouts for the combination of three text corpora and six text embeddings and a grid-search-inspired hyperparameter selection of the dimensionality reductions. Afterward, we quantified the similarity of the layouts through ten metrics, concerning local and global structures and class separation. Second, we analyzed the resulting 42 817 tabular data points in a descriptive statistical analysis. From this, we derived guidelines for informed decisions on the layout algorithm and highlight specific hyperparameter settings. We provide our implementation as a Git repository at hpicgs/Topic-Models-and-DimensionalityReduction-Sensitivity-Study and results as a Zenodo archive at DOI:10.5281/zenodo.12772898.
false
true
[ "Daniel Atzberger", "Tim Cech", "Willy Scheibel", "Jürgen Döllner", "Michael Behrisch", "Tobias Schreck" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://arxiv.org/abs/2407.17876", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://zenodo.org/records/12772899", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1770.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/T3hvGmZlBgw", "icon": "video" } ]
Vis
2,024
A Multi-Level Task Framework for Event Sequence Analysis
10.1109/TVCG.2024.3456510
Despite the development of numerous visual analytics tools for event sequence data across various domains, including but not limited to healthcare, digital marketing, and user behavior analysis, comparing these domain-specific investigations and transferring the results to new datasets and problem areas remain challenging. Task abstractions can help us go beyond domain-specific details, but existing visualization task abstractions are insufficient for event sequence visual analytics because they primarily focus on multivariate datasets and often overlook automated analytical techniques. To address this gap, we propose a domain-agnostic multi-level task framework for event sequence analytics, derived from an analysis of 58 papers that present event sequence visualization systems. Our framework consists of four levels: objective, intent, strategy, and technique. Overall objectives identify the main goals of analysis. Intents comprise five high-level approaches adopted at each analysis step: augment data, simplify data, configure data, configure visualization, and manage provenance. Each intent is accomplished through a number of strategies; for instance, data simplification can be achieved through aggregation, summarization, or segmentation. Finally, each strategy can be implemented by a set of techniques depending on the input and output components. We further show that each technique can be expressed through a quartet of action-input-output-criteria. We demonstrate the framework's descriptive power through case studies and discuss its similarities and differences with previous event sequence task taxonomies.
false
true
[ "Kazi Tasnim Zinat", "Saimadhav Naga Sakhamuri", "Aaron Sun Chen", "Zhicheng Liu" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2408.04752v1", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://osf.io/bkjsc/?view_only=b95871b8c4ae497ab9b6cb565e28edf5", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1642.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/4WP9eGQ_hwI", "icon": "video" } ]
Vis
2,024
A Practical Solver for Scalar Data Topological Simplification
10.1109/TVCG.2024.3456345
This paper presents a practical approach for the optimization of topological simplification, a central pre-processing step for the analysis and visualization of scalar data. Given an input scalar field f and a set of "signal" persistence pairs to maintain, our approach produces an output field g that is close to f and which optimizes (i) the cancellation of "non-signal" pairs, while (ii) preserving the "signal" pairs. In contrast to pre-existing simplification algorithms, our approach is not restricted to persistence pairs involving extrema and can thus address a larger class of topological features, in particular saddle pairs in three-dimensional scalar data. Our approach leverages recent generic persistence optimization frameworks and extends them with tailored accelerations specific to the problem of topological simplification. Extensive experiments report substantial accelerations over these frameworks, thereby making topological simplification optimization practical for real-life datasets. Our approach enables a direct visualization and analysis of the topologically simplified data, e.g., via isosurfaces of simplified topology (fewer components and handles). We apply our approach to the extraction of prominent filament structures in three-dimensional data. Specifically, we show that our pre-simplification of the data leads to practical improvements over standard topological techniques for removing filament loops. We also show how our approach can be used to repair genus defects in surface processing. Finally, we provide a C++ implementation for reproducibility purposes.
false
true
[ "Mohamed KISSI", "Mathieu Pont", "Joshua A Levine", "Julien Tierny" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/abs/2407.12399", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://github.com/MohamedKISSI/Code-Paper-A-Pratical-Solver-for-Scalar-Data-Topological-Simplification", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1461.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/PJ_tDek0d88", "icon": "video" } ]
Vis
2,024
A Qualitative Analysis of Common Practices in Annotations: A Taxonomy and Design Space
10.1109/TVCG.2024.3456359
Annotations play a vital role in highlighting critical aspects of visualizations, aiding in data externalization and exploration, collaborative sensemaking, and visual storytelling. However, despite their widespread use, we identified the lack of a design space describing common annotation practices. In this paper, we evaluated over 1,800 static annotated charts to understand how people annotate visualizations in practice. Through qualitative coding of these diverse real-world annotated charts, we explored three primary aspects of annotation usage patterns: analytic purposes for chart annotations (e.g., present, identify, summarize, or compare data features), mechanisms for chart annotations (e.g., types and combinations of annotations used, frequency of different annotation types across chart types, etc.), and the data source used to generate the annotations. We then synthesized our findings into a design space of annotations, highlighting key design choices for chart annotations. We present three case studies illustrating our design space as a practical framework for chart annotations to enhance the communication of visualization insights. All supplemental materials are available at https://shorturl.at/bAGM1.
false
true
[ "Md Dilshadur Rahman", "Ghulam Jilani Quadri", "Bhavana Doppalapudi", "Danielle Albers Szafir", "Paul Rosen" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://arxiv.org/abs/2306.06043", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://shorturl.at/bAGM1", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1295.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/UiheOlbONP0", "icon": "video" } ]
Vis
2,024
Aardvark: Composite Visualizations of Trees, Time-Series, and Images
10.1109/TVCG.2024.3456193
How do cancer cells grow, divide, proliferate, and die? How do drugs influence these processes? These are difficult questions that we can attempt to answer with a combination of time-series microscopy experiments, classification algorithms, and data visualization. However, collecting this type of data and applying algorithms to segment and track cells and construct lineages of proliferation is error-prone, and identifying the errors can be challenging since it often requires cross-checking multiple data types. Similarly, analyzing and communicating the results necessitates synthesizing different data types into a single narrative. State-of-the-art visualization methods for such data use independent line charts, tree diagrams, and images in separate views. However, this spatial separation requires the viewer of these charts to combine the relevant pieces of data in memory. To simplify this challenging task, we describe design principles for weaving cell images, time-series data, and tree data into a cohesive visualization. Our design principles are based on choosing a primary data type that drives the layout and integrates the other data types into that layout. We then introduce Aardvark, a system that uses these principles to implement novel visualization techniques. Based on Aardvark, we demonstrate the utility of each of these approaches for discovery, communication, and data debugging in a series of case studies.
true
true
[ "Devin Lange", "Robert L Judson-Torres", "Thomas A Zangle", "Alexander Lex" ]
[ "BP" ]
[ "PW", "P", "V", "C", "O" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/preprints/osf/cdbm6", "icon": "paper" }, { "name": "Live Demo", "url": "https://aardvark.sci.utah.edu/", "icon": "project_website" }, { "name": "Overview Video", "url": "https://youtu.be/mA6H4-i04g4", "icon": "video" }, { "name": "Source Code", "url": "https://github.com/visdesignlab/Aardvark", "icon": "code" }, { "name": "Supplemental Material", "url": "https://osf.io/3f6kr/", "icon": "other" }, { "name": "Blog Post", "url": "https://vdl.sci.utah.edu/blog/2024/09/30/aardvark/", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1232.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/5kVue1ySnOk", "icon": "video" } ]
Vis
2,024
AdaMotif: Graph Simplification via Adaptive Motif Design
10.1109/TVCG.2024.3456321
With the increase of graph size, it becomes difficult or even impossible to visualize graph structures clearly within the limited screen space. Consequently, it is crucial to design effective visual representations for large graphs. In this paper, we propose AdaMotif, a novel approach that can capture the essential structure patterns of large graphs and effectively reveal the overall structures via adaptive motif designs. Specifically, our approach involves partitioning a given large graph into multiple subgraphs, then clustering similar subgraphs and extracting similar structural information within each cluster. Subsequently, adaptive motifs representing each cluster are generated and utilized to replace the corresponding subgraphs, leading to a simplified visualization. Our approach aims to preserve as much information as possible from the subgraphs while simplifying the graph efficiently. Notably, our approach successfully visualizes crucial community information within a large graph. We conduct case studies and a user study using real-world graphs to validate the effectiveness of our proposed approach. The results demonstrate the capability of our approach in simplifying graphs while retaining important structural and community information.
false
true
[ "Hong Zhou", "Peifeng Lai", "Zhida Sun", "Xiangyuan Chen", "Yang Chen", "Huisi Wu", "Yong WANG" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2408.16308v1", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://osf.io/pb8t3/", "icon": "other" }, { "name": "Supplemental Material", "url": "https://github.com/lpfeng11/AdaMotif", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1606.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/gWWaEplNEMQ", "icon": "video" } ]
Vis
2,024
Advancing Multimodal Large Language Models in Chart Question Answering with Visualization-Referenced Instruction Tuning
10.1109/TVCG.2024.3456159
Emerging multimodal large language models (MLLMs) exhibit great potential for chart question answering (CQA). Recent efforts primarily focus on scaling up training datasets (i.e., charts, data tables, and question-answer (QA) pairs) through data collection and synthesis. However, our empirical study on existing MLLMs and CQA datasets reveals notable gaps. First, current data collection and synthesis focus on data volume and lack consideration of fine-grained visual encodings and QA tasks, resulting in unbalanced data distribution divergent from practical CQA scenarios. Second, existing work follows the training recipe of the base MLLMs initially designed for natural images, under-exploring the adaptation to unique chart characteristics, such as rich text elements. To fill the gap, we propose a visualization-referenced instruction tuning approach to guide the training dataset enhancement and model development. Specifically, we propose a novel data engine to effectively filter diverse and high-quality data from existing datasets and subsequently refine and augment the data using LLM-based generation techniques to better align with practical QA tasks and visual encodings. Then, to facilitate the adaptation to chart characteristics, we utilize the enriched data to train an MLLM by unfreezing the vision encoder and incorporating a mixture-of-resolution adaptation strategy for enhanced fine-grained recognition. Experimental results validate the effectiveness of our approach. Even with fewer training examples, our model consistently outperforms state-of-the-art CQA models on established benchmarks. We also contribute a dataset split as a benchmark for future research. Source codes and datasets of this paper are available at https://github.com/zengxingchen/ChartQA-MLLM.
false
true
[ "Xingchen Zeng", "Haichuan Lin", "Yilin Ye", "Wei Zeng" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://arxiv.org/abs/2407.20174", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://github.com/zengxingchen/ChartQA-MLLM", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1193.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/fiE38Zyk9VY", "icon": "video" } ]
Vis
2,024
AdversaFlow: Visual Red Teaming for Large Language Models with Multi-Level Adversarial Flow
10.1109/TVCG.2024.3456150
Large Language Models (LLMs) are powerful but also raise significant security concerns, particularly regarding the harm they can cause, such as generating fake news that manipulates public opinion on social media and providing responses to unethical activities. Traditional red teaming approaches for identifying AI vulnerabilities rely on manual prompt construction and expertise. This paper introduces AdversaFlow, a novel visual analytics system designed to enhance LLM security against adversarial attacks through human-AI collaboration. AdversaFlow involves adversarial training between a target model and a red model, featuring unique multi-level adversarial flow and fluctuation path visualizations. These features provide insights into adversarial dynamics and LLM robustness, enabling experts to identify and mitigate vulnerabilities effectively. We present quantitative evaluations and case studies validating our system's utility and offering insights for future AI security solutions. Our method can enhance LLM security, supporting downstream scenarios like social media regulation by enabling more effective detection, monitoring, and mitigation of harmful content and behaviors.
true
true
[ "Dazhen Deng", "Chuhan Zhang", "Huawei Zheng", "Yuwen Pu", "Shouling Ji", "Yingcai Wu" ]
[ "HM" ]
[ "V", "O" ]
[ { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1067.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/NWnvzefxILM", "icon": "video" } ]
Vis
2,024
An Empirical Evaluation of the GPT-4 Multimodal Language Model on Visualization Literacy Tasks
10.1109/TVCG.2024.3456155
Large Language Models (LLMs) like GPT-4 which support multimodal input (i.e., prompts containing images in addition to text) have immense potential to advance visualization research. However, many questions exist about the visual capabilities of such models, including how well they can read and interpret visually represented data. In our work, we address this question by evaluating the GPT-4 multimodal LLM using a suite of task sets meant to assess the model's visualization literacy. The task sets are based on existing work in the visualization community addressing both automated chart question answering and human visualization literacy across multiple settings. Our assessment finds that GPT-4 can perform tasks such as recognizing trends and extreme values, and also demonstrates some understanding of visualization design best-practices. By contrast, GPT-4 struggles with simple value retrieval when not provided with the original dataset, lacks the ability to reliably distinguish between colors in charts, and occasionally suffers from hallucination and inconsistency. We conclude by reflecting on the model's strengths and weaknesses as well as the potential utility of models like GPT-4 for future visualization research. We also release all code, stimuli, and results for the task sets at the following link: https://doi.org/10.17605/OSF.IO/F39J6
true
true
[ "Alexander Bendeck", "John Stasko" ]
[]
[ "V", "O" ]
[ { "name": "Supplemental Material", "url": "https://doi.org/10.17605/OSF.IO/F39J6", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1147.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/Nr30W716yjI", "icon": "video" } ]
Vis
2,024
Attention-Aware Visualization: Tracking and Responding to User Perception Over Time
10.1109/TVCG.2024.3456300
We propose the notion of attention-aware visualizations (AAVs) that track the user's perception of a visual representation over time and feed this information back to the visualization. Such context awareness is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user's attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user's gaze on a visualization and its parts; (2) tracking the user's attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D data-agnostic method for web-based visualizations that can use an embodied eyetracker to capture the user's gaze, and a 3D data-aware one that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a qualitative evaluation studying visual feedback and triggering mechanisms for capturing and revisualizing attention.
false
true
[ "Arvind Srinivasan", "Johannes Ellemose", "Peter W. S. Butcher", "Panagiotis D. Ritsos", "Niklas Elmqvist" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://arxiv.org/abs/2404.10732", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://osf.io/8mfhp/", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1480.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/cDGkQpk85yw", "icon": "video" } ]
Vis
2,024
BEMTrace: Visualization-driven approach for deriving Building Energy Models from BIM
10.1109/TVCG.2024.3456315
Building Information Modeling (BIM) describes a central data pool covering the entire life cycle of a construction project. Similarly, Building Energy Modeling (BEM) describes the process of using a 3D representation of a building as a basis for thermal simulations to assess the building's energy performance. This paper explores the intersection of BIM and BEM, focusing on the challenges and methodologies in converting BIM data into BEM representations for energy performance analysis. BEMTrace integrates 3D data wrangling techniques with visualization methodologies to enhance the accuracy and traceability of the BIM-to-BEM conversion process. Through parsing, error detection, and algorithmic correction of BIM data, our methods generate valid BEM models suitable for energy simulation. Visualization techniques provide transparent insights into the conversion process, aiding error identification, validation, and user comprehension. We introduce context-adaptive selections to facilitate user interaction and to show that the BEMTrace workflow helps users understand complex 3D data wrangling processes.
true
true
[ "Andreas Walch", "Attila Szabo", "Harald Steinlechner", "Thomas Ortner", "Eduard Gröller", "Johanna Schmidt" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://arxiv.org/abs/2407.19464", "icon": "paper" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1307.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/AwIuPtpFz-k", "icon": "video" } ]
Vis
2,024
Beware of Validation by Eye: Visual Validation of Linear Trends in Scatterplots
10.1109/TVCG.2024.3456305
Visual validation of regression models in scatterplots is a common practice for assessing model quality, yet its efficacy remains unquantified. We conducted two empirical experiments to investigate individuals' ability to visually validate linear regression models (linear trends) and to examine the impact of common visualization designs on validation quality. The first experiment showed that the level of accuracy for visual estimation of slope (i.e., fitting a line to data) is higher than for visual validation of slope (i.e., accepting a shown line). Notably, we found bias toward slopes that are "too steep" in both cases. This led to the novel insight that participants naturally assessed regression with orthogonal distances between the points and the line (i.e., ODR regression) rather than the common vertical distances (OLS regression). In the second experiment, we investigated whether incorporating common designs for regression visualization (error lines, bounding boxes, and confidence intervals) would improve visual validation. Even though error lines reduced validation bias, results failed to show the desired improvements in accuracy for any design. Overall, our findings suggest caution in using visual model validation for linear trends in scatterplots.
false
true
[ "Daniel Braun", "Remco Chang", "Michael Gleicher", "Tatiana von Landesberger" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://arxiv.org/abs/2407.11625", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://visva.cs.uni-koeln.de/en/publications/beware-of-validation-by-eye-visual-validation-of-linear-trends-in-scatterplots", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1547.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/-Ohr2rTpvXI", "icon": "video" } ]
Vis
2,024
Beyond Correlation: Incorporating Counterfactual Guidance to Better Support Exploratory Visual Analysis
10.1109/TVCG.2024.3456369
Providing effective guidance for users has long been an important and challenging task for efficient exploratory visual analytics, especially when selecting variables for visualization in high-dimensional datasets. Correlation is the most widely applied metric for guidance in statistical and analytical tools; however, a reliance on correlation may lead users towards false positives when interpreting causal relations in the data. In this work, inspired by prior insights on the benefits of counterfactual visualization in supporting visual causal inference, we propose a novel, simple, and efficient counterfactual guidance method to enhance causal inference performance in guided exploratory analytics based on insights and concerns gathered from expert interviews. Our technique aims to capitalize on the benefits of counterfactual approaches while reducing their complexity for users. We integrated counterfactual guidance into an exploratory visual analytics system, and using a synthetically generated ground-truth causal dataset, conducted a comparative user study and evaluated to what extent counterfactual guidance can help lead users to more precise visual causal inferences. The results suggest that counterfactual guidance improved visual causal inference performance, and also led to different exploratory behaviors compared to correlation-based guidance. Based on these findings, we offer future directions and challenges for incorporating counterfactual guidance to better support exploratory visual analytics.
false
true
[ "Arran Zeyu Wang", "David Borland", "David Gotz" ]
[ "HM" ]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2408.16078v1", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://github.com/VACLab/Co-op", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1258.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/xFxX4tX8KKM", "icon": "video" } ]
Vis
2,024
Blowing Seeds across Gardens: Visualizing Implicit Propagation of Cross-Platform Social Media Posts
10.1109/TVCG.2024.3456181
Propagation analysis refers to studying how information spreads on social media, a pivotal endeavor for understanding social sentiment and public opinions. Numerous studies contribute to visualizing information spread, but few have considered the implicit and complex diffusion patterns among multiple platforms. To bridge the gap, we summarize cross-platform diffusion patterns with experts and identify significant factors that dissect the mechanisms of cross-platform information spread. Based on that, we propose an information diffusion model that estimates the likelihood of a topic/post spreading among different social media platforms. Moreover, we propose a novel visual metaphor that encapsulates cross-platform propagation in a manner analogous to the spread of seeds across gardens. Specifically, we visualize platforms, posts, implicit cross-platform routes, and salient instances as elements of a virtual ecosystem — gardens, flowers, winds, and seeds, respectively. We further develop a visual analytic system, namely BloomWind, that enables users to quickly identify the cross-platform diffusion patterns and investigate the relevant social media posts. Ultimately, we demonstrate the usage of BloomWind through two case studies and validate its effectiveness using expert interviews.
false
true
[ "Jianing Yin", "Hanze Jia", "Buwei Zhou", "Tan Tang", "Lu Ying", "Shuainan Ye", "Tai-Quan Peng", "Yingcai Wu" ]
[]
[ "V", "O" ]
[ { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1039.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/orsYGZt1cWI", "icon": "video" } ]
Vis
2,024
CataAnno: An Ancient Catalog Annotator for Annotation Cleaning by Recommendation
10.1109/TVCG.2024.3456379
Classical bibliography, by researching preserved catalogs from both official archives and personal collections of accumulated books, examines the books throughout history, thereby revealing cultural development across historical periods. In this work, we collaborate with domain experts to accomplish the task of data annotation concerning Chinese ancient catalogs. We introduce the CataAnno system that facilitates users in completing annotations more efficiently through cross-linked views, recommendation methods and convenient annotation interactions. The recommendation method can learn the background knowledge and annotation patterns that experts subconsciously integrate into the data during prior annotation processes. CataAnno searches for the most relevant examples previously annotated and recommends them to the user. Meanwhile, the cross-linked views assist users in comprehending the correlations between entries and offer explanations for these recommendations. Evaluation and expert feedback confirm that the CataAnno system, by offering high-quality recommendations and visualizing the relationships between entries, can mitigate the necessity for specialized knowledge during the annotation process. This results in enhanced accuracy and consistency in annotations, thereby enhancing overall efficiency.
false
true
[ "Hanning Shao", "Xiaoru Yuan" ]
[ "HM" ]
[ "V", "O" ]
[ { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1810.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/JP2jrdeR04g", "icon": "video" } ]
Vis
2,024
Causal Priors and Their Influence on Judgements of Causality in Visualized Data
10.1109/TVCG.2024.3456381
"Correlation does not imply causation" is a famous mantra in statistical and visual analysis. However, consumers of visualizations often draw causal conclusions when only correlations between variables are shown. In this paper, we investigate factors that contribute to causal relationships users perceive in visualizations. We collected a corpus of concept pairs from variables in widely used datasets and created visualizations that depict varying correlative associations using three typical statistical chart types. We conducted two MTurk studies on (1) preconceived notions on causal relations without charts, and (2) perceived causal relations with charts, for each concept pair. Our results indicate that people make assumptions about causal relationships between pairs of concepts even without seeing any visualized data. Moreover, our results suggest that these assumptions constitute causal priors that, in combination with visualized association, impact how data visualizations are interpreted. The results also suggest that causal priors may lead to over- or under-estimation in perceived causal relations in different circumstances, and that those priors can also impact users' confidence in their causal assessments. In addition, our results align with prior work, indicating that chart type may also affect causal inference. Using data from the studies, we develop a model to capture the interaction between causal priors and visualized associations as they combine to impact a user's perceived causal relations. In addition to reporting the study results and analyses, we provide an open dataset of causal priors for 56 specific concept pairs that can serve as a potential benchmark for future studies. We also suggest remaining challenges and heuristic-based guidelines to help designers improve visualization design choices to better support visual causal inference.
false
true
[ "Arran Zeyu Wang", "David Borland", "Tabitha C. Peck", "Wenyuan Wang", "David Gotz" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2408.16077v1", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://osf.io/dfkv4/?view_only=f84ffbc28cdf45e5a3d68f2f1e9c8427", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1100.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/-9MypSwTv8w", "icon": "video" } ]
Vis
2,024
Cell2Cell: Explorative Cell Interaction Analysis in Multi-Volumetric Tissue Data
10.1109/TVCG.2024.3456406
We present Cell2Cell, a novel visual analytics approach for quantifying and visualizing networks of cell-cell interactions in three-dimensional (3D) multi-channel cancerous tissue data. By analyzing cellular interactions, biomedical experts can gain a more accurate understanding of the intricate relationships between cancer and immune cells. Recent methods have focused on inferring interaction based on the proximity of cells in low-resolution 2D multi-channel imaging data. By contrast, we analyze cell interactions by quantifying the presence and levels of specific proteins within a tissue sample (protein expressions) extracted from high-resolution 3D multi-channel volume data. Such analyses have a strong exploratory nature and require a tight integration of domain experts in the analysis loop to leverage their deep knowledge. We propose two complementary semi-automated approaches to cope with the increasing size and complexity of the data interactively: On the one hand, we interpret cell-to-cell interactions as edges in a cell graph and analyze the image signal (protein expressions) along those edges, using spatial as well as abstract visualizations. Complementarily, we propose a cell-centered approach, enabling scientists to visually analyze polarized distributions of proteins in three dimensions, which also captures neighboring cells with biochemical and cell biological consequences. We evaluate our application in three case studies, where biologists and medical experts use Cell2Cell to investigate tumor micro-environments to identify and quantify T-cell activation in human tissue data. We confirmed that our tool can fully solve the use cases and enables a streamlined and detailed analysis of cell-cell interactions.
false
true
[ "Eric Mörth", "Kevin Sidak", "Zoltan Maliga", "Torsten Möller", "Nils Gehlenborg", "Peter Sorger", "Hanspeter Pfister", "Johanna Beyer", "Robert Krüger" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/preprints/osf/axy82", "icon": "paper" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1615.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/wVBlWgy1Gd8", "icon": "video" } ]
Vis
2,024
Charting EDA: Characterizing Interactive Visualization Use in Computational Notebooks with a Mixed-Methods Formalism
10.1109/TVCG.2024.3456217
Interactive visualizations are powerful tools for Exploratory Data Analysis (EDA), but how do they affect the observations analysts make about their data? We conducted a qualitative experiment with 13 professional data scientists analyzing two datasets with Jupyter notebooks, collecting a rich dataset of interaction traces and think-aloud utterances. By qualitatively coding participant utterances, we introduce a formalism that describes EDA as a sequence of analysis states, where each state is comprised of either a representation an analyst constructs (e.g., the output of a data frame, an interactive visualization, etc.) or an observation the analyst makes (e.g., about missing data, the relationship between variables, etc.). By applying our formalism to our dataset, we identify that interactive visualizations, on average, lead to earlier and more complex insights about relationships between dataset attributes compared to static visualizations. Moreover, by calculating metrics such as revisit count and representational diversity, we uncover that some representations serve more as "planning aids" during EDA rather than tools strictly for hypothesis-answering. We show how these measures help identify other patterns of analysis behavior, such as the "80-20 rule", where a small subset of representations drove the majority of observations. Based on these findings, we offer design guidelines for interactive exploratory analysis tooling and reflect on future directions for studying the role that visualizations play in EDA.
true
true
[ "Dylan Wootton", "Amy Rae Fox", "Evan Peck", "Arvind Satyanarayan" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2409.10450v1", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://osf.io/bu7je/", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1155.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/CNQni-VZ4FI", "icon": "video" } ]
Vis
2,024
CompositingVis: Exploring Interactions for Creating Composite Visualizations in Immersive Environments
10.1109/TVCG.2024.3456210
Composite visualization represents a widely embraced design that combines multiple visual representations to create an integrated view. However, the traditional approach of creating composite visualizations in immersive environments typically occurs asynchronously outside of the immersive space and is carried out by experienced experts. In this work, we aim to empower users to participate in the creation of composite visualization within immersive environments through embodied interactions. This could provide a flexible and fluid experience with immersive visualization and has the potential to facilitate understanding of the relationship between visualization views. We begin with developing a design space of embodied interactions to create various types of composite visualizations with the consideration of data relationships. Drawing inspiration from people's natural experience of manipulating physical objects, we design interactions based on the combination of 3D manipulations in immersive environments. Building upon the design space, we present a series of case studies showcasing the interaction to create different kinds of composite visualizations in virtual reality. Subsequently, we conduct a user study to evaluate the usability of the derived interaction techniques and user experience of creating composite visualizations through embodied interactions. We find that empowering users to participate in composite visualizations through embodied interactions enables them to flexibly leverage different visualization views for understanding and communicating the relationships between different views, which underscores the potential of several future application scenarios.
true
true
[ "Qian Zhu", "Tao Lu", "Shunan Guo", "Xiaojuan Ma", "Yalong Yang" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://arxiv.org/abs/2408.02240", "icon": "paper" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1150.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/vngAibFJrlE", "icon": "video" } ]
Vis
2,024
Compress and Compare: Interactively Evaluating Efficiency and Behavior Across ML Model Compression Experiments
10.1109/TVCG.2024.3456371
To deploy machine learning models on-device, practitioners use compression algorithms to shrink and speed up models while maintaining their high-quality output. A critical aspect of compression in practice is model comparison, including tracking many compression experiments, identifying subtle changes in model behavior, and negotiating complex accuracy-efficiency trade-offs. However, existing compression tools poorly support comparison, leading to tedious and, sometimes, incomplete analyses spread across disjoint tools. To support real-world comparative workflows, we develop an interactive visual system called COMPRESS AND COMPARE. Within a single interface, COMPRESS AND COMPARE surfaces promising compression strategies by visualizing provenance relationships between compressed models and reveals compression-induced behavior changes by comparing models' predictions, weights, and activations. We demonstrate how COMPRESS AND COMPARE supports common compression analysis tasks through two case studies, debugging failed compression on generative language models and identifying compression artifacts in image classification models. We further evaluate COMPRESS AND COMPARE in a user study with eight compression experts, illustrating its potential to provide structure to compression workflows, help practitioners build intuition about compression, and encourage thorough analysis of compression's effect on model behavior. Through these evaluations, we identify compression-specific challenges that future visual analytics tools should consider and COMPRESS AND COMPARE visualizations that may generalize to broader model comparison tasks.
true
true
[ "Angie Boggust", "Venkatesh Sivaraman", "Yannick Assogba", "Donghao Ren", "Dominik Moritz", "Fred Hohman" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://arxiv.org/pdf/2408.03274", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://github.com/apple/ml-compress-and-compare", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1142.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/5tS7HFn5W6Y", "icon": "video" } ]
Vis
2,024
CSLens: Towards Better Deploying Charging Stations via Visual Analytics —— A Coupled Networks Perspective
10.1109/TVCG.2024.3456392
In recent years, the global adoption of electric vehicles (EVs) has surged, prompting a corresponding rise in the installation of charging stations. This proliferation has underscored the importance of expediting the deployment of charging infrastructure. Both academia and industry have thus been devoted to addressing the charging station location problem (CSLP) to streamline this process. However, prevailing algorithms addressing CSLP are hampered by restrictive assumptions and computational overhead, leading to a dearth of comprehensive evaluations in the spatiotemporal dimensions. Consequently, their practical viability is restricted. Moreover, the placement of charging stations exerts a significant impact on both the road network and the power grid, which necessitates the evaluation of the potential post-deployment impacts on these interconnected networks holistically. In this study, we propose CSLens, a visual analytics system designed to inform charging station deployment decisions through the lens of coupled transportation and power networks. CSLens offers multiple visualizations and interactive features, empowering users to delve into the existing charging station layout, explore alternative deployment solutions, and assess the ensuing impact. To validate the efficacy of CSLens, we conducted two case studies and engaged in interviews with domain experts. Through these efforts, we substantiated the usability and practical utility of CSLens in enhancing the decision-making process surrounding charging station deployment. Our findings underscore CSLens's potential to serve as a valuable asset in navigating the complexities of charging infrastructure planning.
false
true
[ "Yutian Zhang", "Liwen Xu", "Shaocong Tao", "Quanxue Guan", "Quan Li", "Haipeng Zeng" ]
[]
[ "V", "O" ]
[ { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1681.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/qZcYIS995YE", "icon": "video" } ]
Vis
2,024
Curio: A Dataflow-Based Framework for Collaborative Urban Visual Analytics
10.1109/TVCG.2024.3456353
Over the past decade, several urban visual analytics systems and tools have been proposed to tackle a host of challenges faced by cities, in areas as diverse as transportation, weather, and real estate. Many of these tools have been designed through collaborations with urban experts, aiming to distill intricate urban analysis workflows into interactive visualizations and interfaces. However, the design, implementation, and practical use of these tools still rely on siloed approaches, resulting in bespoke systems that are difficult to reproduce and extend. At the design level, these tools undervalue rich data workflows from urban experts, typically treating them only as data providers and evaluators. At the implementation level, they lack interoperability with other technical frameworks. At the practical use level, they tend to be narrowly focused on specific fields, inadvertently creating barriers to cross-domain collaboration. To address these gaps, we present Curio, a framework for collaborative urban visual analytics. Curio uses a dataflow model with multiple abstraction levels (code, grammar, GUI elements) to facilitate collaboration across the design and implementation of visual analytics components. The framework allows experts to intertwine data preprocessing, management, and visualization stages while tracking the provenance of code and visualizations. In collaboration with urban experts, we evaluate Curio through a diverse set of usage scenarios targeting urban accessibility, urban microclimate, and sunlight access. These scenarios use different types of data and domain methodologies to illustrate Curio's flexibility in tackling pressing societal challenges. Curio is available at urbantk.org/curio
false
true
[ "Gustavo Moreira", "Maryam Hosseini", "Carolina Veiga", "Lucas Alexandre", "Nicola Colaninno", "Daniel de Oliveira", "Nivan Ferreira", "Marcos Lage", "Fabio Miranda" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://arxiv.org/abs/2408.06139", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://urbantk.org/curio", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1830.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/phFXjrH7_ns", "icon": "video" } ]
Vis
2,024
D-Tour: Semi-Automatic Generation of Interactive Guided Tours for Visualization Dashboard Onboarding
10.1109/TVCG.2024.3456347
Onboarding a user to a visualization dashboard entails explaining its various components, including the chart types used, the data loaded, and the interactions available. Authoring such an onboarding experience is time-consuming and requires significant knowledge and little guidance on how best to complete this task. Depending on their levels of expertise, end users being onboarded to a new dashboard can be either confused and overwhelmed or disinterested and disengaged. We propose interactive dashboard tours (D-Tours) as semi-automated onboarding experiences that preserve the agency of users with various levels of expertise to keep them interested and engaged. Our interactive tours concept draws from open-world game design to give the user freedom in choosing their path through onboarding. We have implemented the concept in a tool called D-TOUR PROTOTYPE, which allows authors to craft custom interactive dashboard tours from scratch or using automatic templates. Automatically generated tours can still be customized to use different media (e.g., video, audio, and highlighting) or new narratives to produce an onboarding experience tailored to an individual user. We demonstrate the usefulness of interactive dashboard tours through use cases and expert interviews. Our evaluation shows that authors found the automation in the D-Tour Prototype helpful and time-saving, and users found the created tours engaging and intuitive. This paper and all supplemental materials are available at https://osf.io/6fbjp/.
false
true
[ "Vaishali Dhanoa", "Andreas Hinterreiter", "Vanessa Fediuk", "Niklas Elmqvist", "Eduard Gröller", "Marc Streit" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/preprints/osf/t5m3u", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://osf.io/6fbjp/", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1395.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/S6366DrJQTs", "icon": "video" } ]
Vis
2,024
DaedalusData: Exploration, Knowledge Externalization and Labeling of Particles in Medical Manufacturing — A Design Study
10.1109/TVCG.2024.3456329
In medical diagnostics of both early disease detection and routine patient care, particle-based contamination of in-vitro diagnostics consumables poses a significant threat to patients. Objective data-driven decision-making on the severity of contamination is key for reducing patient risk, while saving time and cost in quality assessment. Our collaborators introduced us to their quality control process, including particle data acquisition through image recognition, feature extraction, and attributes reflecting the production context of particles. Shortcomings in the current process are limitations in exploring thousands of images, data-driven decision making, and ineffective knowledge externalization. Following the design study methodology, our contributions are a characterization of the problem space and requirements, the development and validation of DaedalusData, a comprehensive discussion of our study's learnings, and a generalizable framework for knowledge externalization. DaedalusData is a visual analytics system that enables domain experts to explore particle contamination patterns, label particles in label alphabets, and externalize knowledge through semi-supervised label-informed data projections. The results of our case study and user study show high usability of DaedalusData and its efficient support of experts in generating comprehensive overviews of thousands of particles, labeling of large quantities of particles, and externalizing knowledge to augment the dataset further. Reflecting on our approach, we discuss insights on dataset augmentation via human knowledge externalization, and on the scalability and trade-offs that come with the adoption of this approach in practice.
false
true
[ "Alexander Wyss", "Gabriela Morgenshtern", "Amanda Hirsch-Hüsler", "Jürgen Bernard" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://arxiv.org/abs/2408.04749", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://github.com/alexv710/DaedalusData---IEEE-VIS-Supplemental", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1865.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/TUuS_IaBoRg", "icon": "video" } ]
Vis
2,024
DataGarden: Formalizing Personal Sketches into Structured Visualization Templates
10.1109/TVCG.2024.3456336
Sketching is a common practice among visualization designers and serves as an approachable entry to data visualization for non-experts. However, moving from a sketch to a full-fledged data visualization often requires throwing away the original sketch and recreating it from scratch. Our goal is to formalize these sketches, enabling them to support iteration and systematic data mapping through a visual-first templating workflow. In this workflow, authors sketch a representative visualization and structure it into an expressive template for an envisioned or partial dataset, capturing implicit style as well as explicit data mappings. To demonstrate our proposed workflow, we implement DataGarden and evaluate it through a reproduction and a freeform study. We investigate how DataGarden supports personal expression and delve into the variety of visualizations that authors can produce with it, identifying cases that demonstrate the limitations of our approach and discussing avenues for future work.
false
true
[ "Anna Offenwanger", "Theophanis Tsandilas", "Fanny Chevalier" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "https://hal.science/hal-04664470/", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://datagarden-git.github.io/datagarden/", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1502.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/IFG97n_gi0g", "icon": "video" } ]
Vis
2,024
Defogger: A Visual Analysis Approach for Data Exploration of Sensitive Data Protected by Differential Privacy
10.1109/TVCG.2024.3456304
Differential privacy ensures the security of individual privacy but poses challenges to data exploration processes because the limited privacy budget incapacitates the flexibility of exploration and the noisy feedback of data requests leads to confusing uncertainty. In this study, we take the lead in describing corresponding exploration scenarios, including underlying requirements and available exploration strategies. To facilitate practical applications, we propose a visual analysis approach to the formulation of exploration strategies. Our approach applies a reinforcement learning model to provide diverse suggestions for exploration strategies according to the exploration intent of users. A novel visual design for representing uncertainty in correlation patterns is integrated into our prototype system to support the proposed approach. Finally, we implemented a user study and two case studies. The results of these studies verified that our approach can help develop strategies that satisfy the exploration intent of users.
false
true
[ "Xumeng Wang", "Shuangcheng Jiao", "Chris Bryan" ]
[]
[ "P", "V", "O" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/abs/2407.19364", "icon": "paper" }, { "name": "Supplemental Material", "url": "https://github.com/Vanellope7/Defogger", "icon": "other" }, { "name": "IEEE VIS Conference Page", "url": "https://ieeevis.org/year/2024/program/paper_v-full-1438.html", "icon": "other" }, { "name": "Fast Forward Video", "url": "https://youtu.be/BDNvBU24Hls", "icon": "video" } ]