Conference | Year | Title | DOI | Abstract | Accessible | Early | AuthorNames-Deduped | Award | Resources | ResourceLinks |
|---|---|---|---|---|---|---|---|---|---|---|
EuroVis | 2024 | CAN: Concept-Aligned Neurons for Visual Comparison of Deep Neural Network Models | 10.1111/cgf.15085 | We present concept-aligned neurons, or CAN, a visualization design for comparing deep neural networks. The goal of CAN is to support users in understanding the similarities and differences between neural networks, with an emphasis on comparing neuron functionality across different models. To make this comparison intuitive, CAN uses concept-based representations of neurons to visually align models in an interpretable manner. A key feature of CAN is the hierarchical organization of concepts, which permits users to relate sets of neurons at different levels of detail. CAN's visualization is designed to help compare the semantic coverage of neurons, as well as assess the distinctiveness, redundancy, and multi-semantic alignment of neurons or groups of neurons, all at different concept granularities. We demonstrate the generality and effectiveness of CAN by comparing models trained on different datasets, neural networks with different architectures, and models trained for different objectives, e.g. adversarial robustness, and robustness to out-of-distribution data. | false | false | [
"Mingwei Li",
"Sangwon Jeong",
"Shusen Liu 0001",
"Matthew Berger"
] | [] | [] | [] |
EuroVis | 2024 | ChoreoVis: Planning and Assessing Formations in Dance Choreographies | 10.1111/cgf.15104 | Sports visualization has developed into an active research field over the last decades. Many approaches focus on analyzing movement data recorded from unstructured situations, such as soccer. For the analysis of choreographed activities like formation dancing, however, the goal differs, as dancers follow specific formations in coordinated movement trajectories. To date, little work exists on how visual analytics methods can support such choreographed performances. To fill this gap, we introduce a new visual approach for planning and assessing dance choreographies. In terms of planning choreographies, we contribute a web application with interactive authoring tools and views for the dancers' positions and orientations, movement trajectories, poses, dance floor utilization, and movement distances. For assessing dancers' real-world movement trajectories, extracted by manual bounding box annotations, we developed a timeline showing aggregated trajectory deviations and a dance floor view for detailed trajectory comparison. Our approach was developed and evaluated in collaboration with dance instructors, showing that introducing visual analytics into this domain promises improvements in training efficiency for the future. | false | false | [
"Samuel Beck",
"Nina Doerr",
"Kuno Kurzhals",
"Alexander Riedlinger",
"Fabian Schmierer",
"Michael Sedlmair",
"Steffen Koch 0001"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2404.04100v1",
"icon": "paper"
}
] |
EuroVis | 2024 | CUPID: Contextual Understanding of Prompt-conditioned Image Distributions | 10.1111/cgf.15086 | We present CUPID: a visualization method for the contextual understanding of prompt-conditioned image distributions. CUPID targets the visual analysis of distributions produced by modern text-to-image generative models, wherein a user can specify a scene via natural language, and the model generates a set of images, each intended to satisfy the user's description. CUPID is designed to help understand the resulting distribution, using contextual cues to facilitate analysis: objects mentioned in the prompt, novel, synthesized objects not explicitly mentioned, and their potential relationships. Central to CUPID is a novel method for visualizing high-dimensional distributions, wherein contextualized embeddings of objects, those found within images, are mapped to a low-dimensional space via density-based embeddings. We show how such embeddings allow one to discover salient styles of objects within a distribution, as well as identify anomalous, or rare, object styles. Moreover, we introduce conditional density embeddings, whereby conditioning on a given object allows one to compare object dependencies within the distribution. We employ CUPID for analyzing image distributions produced by large-scale diffusion models, where our experimental results offer insights on language misunderstanding from such models and biases in object composition, while also providing an interface for discovery of typical, or rare, synthesized scenes. | false | false | [
"Yayan Zhao",
"Mingwei Li",
"Matthew Berger"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2406.07699v1",
"icon": "paper"
}
] |
EuroVis | 2024 | Deconstructing Human-AI Collaboration: Agency, Interaction, and Adaptation | 10.1111/cgf.15107 | As full AI-based automation remains out of reach in most real-world applications, the focus has instead shifted to leveraging the strengths of both human and AI agents, creating effective collaborative systems. The rapid advances in this area have yielded increasingly complex systems and frameworks, while the nuance of their characterization has grown vague. Similarly, the existing conceptual models no longer capture the elaborate processes of these systems nor describe the entire scope of their collaboration paradigms. In this paper, we propose a new unified set of dimensions through which to analyze and describe human-AI systems. Our conceptual model is centered around three high-level aspects - agency, interaction, and adaptation - and is developed through a multi-step process. Firstly, an initial design space is proposed by surveying the literature and consolidating existing definitions and conceptual frameworks. Secondly, this model is iteratively refined and validated by conducting semi-structured interviews with nine researchers in this field. Lastly, to illustrate the applicability of our design space, we utilize it to provide a structured description of selected human-AI systems. | false | false | [
"Steffen Holter",
"Mennatallah El-Assady"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2404.12056v1",
"icon": "paper"
}
] |
EuroVis | 2024 | Depth for Multi-Modal Contour Ensembles | 10.1111/cgf.15083 | The contour depth methodology enables non-parametric summarization of contour ensembles by extracting their representatives, confidence bands, and outliers for visualization (via contour boxplots) and robust downstream procedures. We address two shortcomings of these methods. Firstly, we significantly expedite the computation and recomputation of Inclusion Depth (ID), introducing a linear-time algorithm for epsilon ID, a variant used for handling ensembles with contours with multiple intersections. We also present the inclusion matrix, which contains the pairwise inclusion relationships between contours, and leverage it to accelerate the recomputation of ID. Secondly, extending beyond the single distribution assumption, we present the Relative Depth (ReD), a generalization of contour depth for ensembles with multiple modes. Building upon the linear-time eID, we introduce CDclust, a clustering algorithm that untangles ensemble modes of variation by optimizing ReD. Synthetic and real datasets from medical image segmentation and meteorological forecasting showcase the speed advantages, illustrate the use case of progressive depth computation and enable non-parametric multimodal analysis. To promote research and adoption, we offer the contour-depth Python package. | false | false | [
"Nicolas F. Chaves-de-Plaza",
"Mathijs Molenaar",
"Prerak Mody",
"Marius Staring",
"René van Egmond",
"Elmar Eisemann",
"Anna Vilanova",
"Klaus Hildebrandt"
] | [] | [] | [] |
EuroVis | 2024 | DynTrix: A Hybrid Representation for Dynamic Graphs | 10.1111/cgf.15076 | Hybrid graph representations combine two or more network visualization techniques in a unique drawing, simultaneously leveraging their strong traits. Since their introduction in the early 2000s, hybrid representations have gained significant research interest, with the introduction of new techniques and comparative user studies. However, all this research has not considered dynamic graphs. In this paper, we investigate hybrid graph representations in a dynamic network context and present DynTrix. Our system builds on the NodeTrix representation, extending it to the dynamic network domain. DynTrix supports automatic or manually created clusters/matrices across time. Drawing stability is implemented through aggregation and users can rearrange the nodes/matrix positions and pin them. DynTrix visualizes the temporal dynamics of the network through a combination of movement and element highlighting. We also introduce the concept of volatility, which allows the identification of actors in the network that are the most volatile. Matrices can be ordered such that stable cores gravitate towards the centre of the matrix. We integrate this technique in a visual analytics application for the exploration of offline dynamic networks and evaluate our system through case studies and qualitative expert interviews. Experts agree on the capabilities of the system, noting its potential for the analysis of dynamic networks through hybrid representations. | false | false | [
"B. Vago",
"Daniel Archambault",
"Alessio Arleo"
] | [] | [] | [] |
EuroVis | 2024 | Exploring Classifiers with Differentiable Decision Boundary Maps | 10.1111/cgf.15109 | Explaining Machine Learning (ML) — and especially Deep Learning (DL) — classifiers' decisions is a subject of interest across fields due to the increasing ubiquity of such models in computing systems. As models get increasingly complex, relying on sophisticated machinery to recognize data patterns, explaining their behavior becomes more difficult. Directly visualizing classifier behavior is in general infeasible, as they create partitions of the data space, which is typically high dimensional. In recent years, Decision Boundary Maps (DBMs) have been developed, taking advantage of projection and inverse projection techniques. By being able to map 2D points back to the data space and subsequently run a classifier, DBMs represent a slice of classifier outputs. However, we recognize that DBMs without additional explanatory views are limited in their applicability. In this work, we propose augmenting the naive DBM generating process with views that provide more in-depth information about classifier behavior, such as whether the training procedure is locally stable. We describe our proposed views — which we term Differentiable Decision Boundary Maps — over a running example, explaining how our work enables drawing new and useful conclusions from these dense maps. We further demonstrate the value of these conclusions by showing how useful they would be in carrying out or preventing a dataset poisoning attack. We thus provide evidence of the ability of our proposed views to make DBMs significantly more trustworthy and interpretable, increasing their utility as a model understanding tool. | false | false | [
"Alister Machado",
"Michael Behrisch 0001",
"Alexandru C. Telea"
] | [
"HM"
] | [] | [] |
EuroVis | 2024 | Exploring the Design Space of BioFabric Visualization for Multivariate Network Analysis | 10.1111/cgf.15079 | The visual analysis of multivariate network data is a common yet difficult task in many domains. The major challenge is to visualize the network's topology and additional attributes for entities and their connections. Although node-link diagrams and adjacency matrices are widespread, they have inherent limitations. Node-link diagrams struggle to scale effectively, while adjacency matrices can fail to represent network topologies clearly. In this paper, we delve into the design space of BioFabric, which aligns entities along rows and relationships along columns, providing a way to encapsulate multiple attributes for both. We explore how we can leverage the unique opportunities offered by BioFabric's design space to visualize multivariate network data — focusing on three main categories: juxtaposed visualizations, embedded on-node and on-edge encoding, and transformed node and edge encoding. We complement our exploration with a quantitative assessment comparing BioFabric to adjacency matrices. We postulate that the expansive design possibilities introduced in BioFabric network visualization have the potential for the visualization of multivariate data, and we advocate for further evaluation of the associated design space. Our supplemental material is available on osf.io. | false | false | [
"Johannes Fuchs 0001",
"Frederik L. Dennig",
"Maria-Viktoria Heinle",
"Daniel A. Keim",
"Sara Di Bartolomeo"
] | [] | [] | [] |
EuroVis | 2024 | From Delays to Densities: Exploring Data Uncertainty through Speech, Text, and Visualization | 10.1111/cgf.15100 | Understanding and communicating data uncertainty is crucial for making informed decisions in sectors like finance and healthcare. Previous work has explored how to express uncertainty in various modes. For example, uncertainty can be expressed visually with quantile dot plots or linguistically with hedge words and prosody. Our research aims to systematically explore how variations within each mode contribute to communicating uncertainty to the user; this allows us to better understand each mode's affordances and limitations. We completed an exploration of the uncertainty design space based on pilot studies and ran two crowdsourced experiments examining how speech, text, and visualization modes and variants within them impact decision-making with uncertain data. Visualization and text were most effective for rational decision-making, though text resulted in lower confidence. Speech garnered the highest trust despite sometimes leading to risky decisions. Results from these studies indicate meaningful trade-offs among modes of information and encourage exploration of multimodal data representations. | false | false | [
"Chase Stokes",
"Chelsea Sanker",
"Bridget Cogley",
"Vidya Setlur"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2404.02317v2",
"icon": "paper"
}
] |
EuroVis | 2024 | Generating Euler Diagrams Through Combinatorial Optimization | 10.1111/cgf.15089 | Can a given set system be drawn as an Euler diagram? We present the first method that correctly decides this question for arbitrary set systems if the Euler diagram is required to represent each set with a single connected region. If the answer is yes, our method constructs an Euler diagram. If the answer is no, our method yields an Euler diagram for a simplified version of the set system, where a minimum number of set elements have been removed. Further, we integrate known wellformedness criteria for Euler diagrams as additional optimization objectives into our method. Our focus lies on the computation of a planar graph that is embedded in the plane to serve as the dual graph of the Euler diagram. Since even a basic version of this problem is known to be NP-hard, we choose an approach based on integer linear programming (ILP), which allows us to compute optimal solutions with existing mathematical solvers. For this, we draw upon previous research on computing planar supports of hypergraphs and adapt existing ILP building blocks for contiguity-constrained spatial unit allocation and the maximum planar subgraph problem. To generate Euler diagrams for large set systems, for which the proposed simplification through element removal becomes indispensable, we also present an efficient heuristic. We report on experiments with data from MovieDB and Twitter. Over all examples, including 850 non-trivial instances, our exact optimization method failed to find a solution without removing a set element for only one set system. However, with the removal of only a few set elements, the Euler diagrams can be substantially improved with respect to our wellformedness criteria. | false | false | [
"Peter Rottmann",
"Peter J. Rodgers",
"Xinyuan Yan",
"Daniel Archambault",
"Bei Wang 0001",
"Jan-Henrik Haunert"
] | [] | [] | [] |
EuroVis | 2024 | GerontoVis: Data Visualization at the Confluence of Aging | 10.1111/cgf.15101 | Despite the explosive growth of the aging population worldwide, older adults have been largely overlooked by visualization research. This paper is a critical reflection on the underrepresentation of older adults in visualization research. We discuss why investigating visualization at the intersection of aging matters, why older adults may have been omitted from sample populations in visualization research, how aging may affect visualization use, and how this differs from traditional accessibility research. To encourage further discussion and novel scholarship in this area, we introduce GerontoVis, a term which encapsulates research and practice of data visualization design that primarily focuses on older adults. By introducing this new subfield of visualization research, we hope to shine a spotlight on this growing user population and stimulate innovation toward the development of aging-aware visualization tools. We offer a bird's-eye view of the GerontoVis landscape, explore some of its unique challenges, and identify promising areas for future research. | false | false | [
"Zack While",
"R. Jordan Crouser",
"Ali Sarvghad"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.13173v1",
"icon": "paper"
}
] |
EuroVis | 2024 | Guided By AI: Navigating Trust, Bias, and Data Exploration in AI-Guided Visual Analytics | 10.1111/cgf.15108 | The increasing integration of artificial intelligence (AI) in visual analytics (VA) tools raises vital questions about the behavior of users, their trust, and the potential of induced biases when provided with guidance during data exploration. We present an experiment where participants engaged in a visual data exploration task while receiving intelligent suggestions supplemented with four different transparency levels. We also modulated the difficulty of the task (easy or hard) to simulate a more tedious scenario for the analyst. Our results indicate that participants were more inclined to accept suggestions when completing a more difficult task despite the AI's lower suggestion accuracy. Moreover, the levels of transparency tested in this study did not significantly affect suggestion usage or subjective trust ratings of the participants. Additionally, we observed that participants who utilized suggestions throughout the task explored a greater quantity and diversity of data points. We discuss these findings and the implications of this research for improving the design and effectiveness of AI-guided VA tools. | false | false | [
"Sunwoo Ha",
"Shayan Monadjemi",
"Alvitta Ottley"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2404.14521v1",
"icon": "paper"
}
] |
EuroVis | 2024 | HORA 3D: Personalized Flood Risk Visualization as an Interactive Web Service | 10.1111/cgf.15110 | We propose an interactive web-based application to inform the general public about personal flood risks. Flooding is the natural hazard affecting the most people worldwide. Protection against flooding is not limited to mitigation measures, but also includes communicating its risks to affected individuals to raise awareness and preparedness for its adverse effects. Until now, this has mostly been done with static and indiscriminate 2D maps of the water depth. These flood hazard maps can be difficult to interpret and the user has to derive a personal flood risk based on prior knowledge. In addition to the hazard, the flood risk has to consider the exposure of one's own house and premises to high water depths and flow velocities as well as the vulnerability of particular parts. Our application is centered around an interactive personalized visualization to raise awareness of these risk factors for an object of interest. We carefully extract and show only the relevant information from large precomputed flood simulation and geospatial data to keep the visualization simple and comprehensible. To achieve this goal, we extend various existing approaches and combine them with new real-time visualization and interaction techniques in 3D. A new view-dependent focus+context design guides user attention and supports an intuitive interpretation of the visualization to perform predefined exploration tasks. HORA 3D enables users to individually inform themselves about their flood risks. We evaluated the user experience through a broad online survey with 87 participants of different levels of expertise, who rated the helpfulness of the application with 4.7 out of 5 on average. | false | false | [
"Silvana Rauer-Zechmeister",
"Daniel Cornel",
"Bernhard Sadransky",
"Zsolt Horváth",
"Artem Konev",
"Andreas Buttinger-Kreuzhuber",
"Raimund Heidrich",
"Günter Blöschl",
"Eduard Gröller",
"Jürgen Waser"
] | [
"BP"
] | [] | [] |
EuroVis | 2024 | Improving Temporal Treemaps by Minimizing Crossings | 10.1111/cgf.15087 | Temporal trees are trees that evolve over a discrete set of time steps. Each time step is associated with a node-weighted rooted tree and consecutive trees change by adding new nodes, removing nodes, splitting nodes, merging nodes, and changing node weights. Recently, two-dimensional visualizations of temporal trees called temporal treemaps have been proposed, representing the temporal dimension on the x-axis, and visualizing the tree modifications over time as temporal edges of varying thickness. The tree hierarchy at each time step is depicted as a vertical, one-dimensional nesting relationship, similar to standard, non-temporal treemaps. Naturally, temporal edges can cross in the visualization, decreasing readability. Heuristics were proposed to minimize such crossings in the literature, but a formal characterization and minimization of crossings in temporal treemaps was left open. In this paper, we propose two variants of defining crossings in temporal treemaps that can be combinatorially characterized. For each variant, we propose an exact optimization algorithm based on integer linear programming and heuristics based on graph drawing techniques. In an extensive experimental evaluation, we show that on the one hand the exact algorithms reduce the number of crossings by a factor of 20 on average compared to the previous algorithms. On the other hand, our new heuristics are faster by a factor of more than 100 and still reduce the number of crossings by a factor of almost three. | false | false | [
"Alexander Dobler",
"Martin Nöllenburg"
] | [] | [] | [] |
EuroVis | 2024 | Instantaneous Visual Analysis of Blood Flow in Stenoses Using Morphological Similarity | 10.1111/cgf.15081 | The emergence of computational fluid dynamics (CFD) enabled the simulation of intricate transport processes, including flow in physiological structures, such as blood vessels. While these so-called hemodynamic simulations offer groundbreaking opportunities to solve problems at the clinical forefront, a successful translation of CFD to clinical decision-making is challenging. Hemodynamic simulations are intrinsically complex, time-consuming, and resource-intensive, which conflicts with the time-sensitive nature of clinical workflows and the fact that hospitals usually do not have the necessary resources or infrastructure to support CFD simulations. To address these transfer challenges, we propose a novel visualization system which enables instant flow exploration without performing on-site simulation. To gain insights into the viability of the approach, we focus on hemodynamic simulations of the carotid bifurcation, which is a highly relevant arterial subtree in stroke diagnostics and prevention. We created an initial database of 120 high-resolution carotid bifurcation flow models and developed a set of similarity metrics used to place a new carotid surface model into a neighborhood of simulated cases with the highest geometric similarity. The neighborhood can be immediately explored and the flow fields analyzed. We found that if the artery models are similar enough in the regions of interest, a new simulation leads to coinciding results, allowing the user to circumvent individual flow simulations. We conclude that similarity-based visual analysis is a promising approach toward the usability of CFD in medical practice. | false | false | [
"Pepe Eulzer",
"Kevin Richter",
"Anna Hundertmark",
"Ralph Wickenhöfer",
"Carsten Klingner",
"Kai Lawonn"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.16653v1",
"icon": "paper"
}
] |
EuroVis | 2024 | Interactive Optimization for Cartographic Aggregation of Building Features | 10.1111/cgf.15090 | Aggregation, as an operation of cartographic generalization, provides an effective means of abstracting the configuration of building features by combining them according to the scale reduction of the 2D map. Automating this design process effectively helps professional cartographers design both paper and digital maps, but finding the best aggregation result from the numerous combinations of building features has been a challenge. This paper presents a novel approach to assist cartographers in interactively designing the aggregation of building features in scale-aware map visualization. Our contribution is to provide an appropriate set of candidates for the cartographer to choose from among a limited number of possible combinations of building features. This is achieved by collecting locally optimal solutions that emerge in the course of aggregation operations, formulated as a label cost optimization problem. Users can also explore better aggregation results by interactively adjusting the design parameters to update the set of possible combinations, along with an operator to force the combination of manually selected building features. Each cluster of aggregated building features is tightly enclosed by a concave hull, which is later adaptively simplified to abstract its boundary shapes. Experimental design examples and evaluations by expert cartographers demonstrate the feasibility of the proposed approach to interactive aggregation. | false | false | [
"Shigeo Takahashi",
"Ryo Kokubun",
"Satoshi Nishimura",
"Kazuo Misue",
"Masatoshi Arikawa"
] | [] | [] | [] |
EuroVis | 2024 | InverseVis: Revealing the Hidden with Curved Sphere Tracing | 10.1111/cgf.15080 | Exploratory analysis of scalar fields on surface meshes presents significant challenges in identifying and visualizing important regions, particularly on the surface's backside. Previous visualization methods achieved only limited visibility of significant features, i.e., regions with high or low scalar values, during interactive exploration. In response to this, we propose a novel technique, InverseVis, which leverages curved sphere tracing and uses the otherwise unused space to enhance visibility. Our approach combines direct and indirect rendering, allowing camera rays to wrap around the surface and reveal information from the backside. To achieve this, we formulate an energy term that guides the image synthesis in previously unused space, highlighting the most important regions of the backside. By quantifying the amount of visible important features, we optimize the camera position to maximize the visibility of the scalar field on both the front and backsides. InverseVis is benchmarked against state-of-the-art methods and a derived technique, showcasing its effectiveness in revealing essential features and outperforming existing approaches. | false | false | [
"Kai Lawonn",
"Monique Meuschke",
"Tobias Günther"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2404.09092v2",
"icon": "paper"
}
] |
EuroVis | 2024 | Investigating the Effect of Operation Mode and Manifestation on Physicalizations of Dynamic Processes | 10.1111/cgf.15106 | We conducted a study to systematically investigate the communication of complex dynamic processes along a two-dimensional design space, where the axes represent a representation's manifestation (physical or virtual) and operation (manual or automatic). We exemplify the design space on a model embodying cardiovascular pathologies, represented by a mechanism where a liquid is pumped into a draining vessel, with complications illustrated through modifications to the model. The results of a mixed-methods lab study with 28 participants show that both physical manifestation and manual operation have a strong positive impact on the audience's engagement. The study does not show a measurable knowledge increase with respect to cardiovascular pathologies using manually operated physical representations. However, subjectively, participants report a better understanding of the process—mainly through non-visual cues like haptics, but also auditory cues. The study also indicates an increased task load when interacting with the process, which, however, seems to play a minor role for the participants. Overall, the study shows a clear potential of physicalization for the communication of complex dynamic processes, which only fully unfolds if observers have the chance to interact with the process. | false | false | [
"Daniel Pahr",
"Henry Ehlers",
"Hsiang-Yun Wu",
"Manuela Waldner",
"Renata G. Raidou"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2405.09372v1",
"icon": "paper"
}
] |
EuroVis | 2024 | Open Your Ears and Take a Look: A State-of-the-Art Report on the Integration of Sonification and Visualization | 10.1111/cgf.15114 | The research communities studying visualization and sonification for data display and analysis share exceptionally similar goals, essentially making data of any kind interpretable to humans. One community does so by using visual representations of data, and the other community employs auditory (non-speech) representations of data. While the two communities have a lot in common, they developed mostly in parallel over the course of the last few decades. With this STAR, we discuss a collection of work that bridges the borders of the two communities, hence a collection of work that aims to integrate the two techniques into one form of audiovisual display, which we argue to be "more than the sum of the two." We introduce and motivate a classification system applicable to such audiovisual displays and categorize a corpus of 57 academic publications that appeared between 2011 and 2023 in categories such as reading level, dataset type, or evaluation system, to mention a few. The corpus also enables a meta-analysis of the field, including regularly occurring design patterns such as type of visualization and sonification techniques, or the use of visual and auditory channels, showing an overall diverse field with different designs. An analysis of a co-author network of the field shows individual teams without many interconnections. The body of work covered in this STAR also relates to three adjacent topics: audiovisual monitoring, accessibility, and audiovisual data art. These three topics are discussed individually in addition to the systematically conducted part of this research. The findings of this report may be used by researchers from both fields to understand the potentials and challenges of such integrated designs while hopefully inspiring them to collaborate with experts from the respective other field. | false | false | [
"Kajetan Enge",
"Elias Elmquist",
"Valentina Caiola",
"Niklas Rönnberg",
"Alexander Rind",
"Michael Iber",
"Sara Lenzi",
"Fangfei Lan",
"Robert Höldrich",
"Wolfgang Aigner"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2402.16558v2",
"icon": "paper"
}
] |
EuroVis | 2,024 | Persist: Persistent and Reusable Interactions in Computational Notebooks | 10.1111/cgf.15092 | Computational notebooks, such as Jupyter, support rich data visualization. However, even when visualizations in notebooks are interactive, they are a dead end: Interactive data manipulations, such as selections, applying labels, filters, categorizations, or fixes to column or cell values, could be efficiently applied in interactive visual components, but interactive components typically cannot manipulate Python data structures. Furthermore, actions performed in interactive plots are lost as soon as the cell is re-run, prohibiting reusability and reproducibility. To remedy this problem, we introduce Persist, a family of techniques to (a) capture interaction provenance, enabling the persistence of interactions, and (b) map interactions to data manipulations that can be applied to dataframes. We implement our approach as a JupyterLab extension that supports tracking interactions in Vega-Altair plots and in a data table view. Persist can re-execute interaction provenance when a notebook or a cell is re-executed, enabling reproducibility and re-use. We evaluate Persist in a user study targeting data manipulations with 11 participants skilled in Python and Pandas, comparing it to traditional code-based approaches. Participants were consistently faster and were able to correctly complete more tasks with Persist. | false | false | [
"Kiran Gadhave",
"Zach Cutler",
"Alexander Lex"
] | [] | [] | [] |
EuroVis | 2,024 | ProtEGOnist: Visual Analysis of Interactions in Small World Networks Using Ego-graphs | 10.1111/cgf.15078 | Visualizing small-world networks such as protein-protein interaction networks or social networks often leads to visual clutter and limited interpretability. To overcome these problems, we present ProtEGOnist, a visualization approach designed to explore small-world networks. ProtEGOnist visualizes networks using ego-graphs that represent local neighborhoods. Ego-graphs are visualized in an aggregated state as a glyph where the size encodes the size of the neighborhood and in a detailed version where the original network nodes can be explored. The ego-graphs are arranged in an ego-graph network, where edges encode similarity using the Jaccard index. Our design aims to reduce visual complexity and clutter while enabling detailed exploration and facilitating the discovery of meaningful patterns. To achieve this, our approach offers a network overview using ego-graphs, a radar chart for a one-to-many ego-graph comparison and meta-data integration, and detailed ego-graph subnetworks for interactive exploration. We demonstrate the applicability of our approach on a co-author network and two different protein-protein interaction networks. A web-based prototype of ProtEGOnist can be accessed online at https://protegonist-tuevis.cs.uni-tuebingen.de/. | false | false | [
"Nicolas Brich",
"Theresa Anisja Harbig",
"Mathias Witte Paz",
"Kay Nieselt",
"Michael Krone"
] | [] | [] | [] |
EuroVis | 2,024 | psudo: Exploring Multi-Channel Biomedical Image Data with Spatially and Perceptually Optimized Pseudocoloring | 10.1111/cgf.15103 | Over the past century, multichannel fluorescence imaging has been pivotal in myriad scientific breakthroughs by enabling the spatial visualization of proteins within a biological sample. With the shift to digital methods and visualization software, experts can now flexibly pseudocolor and combine image channels, each corresponding to a different protein, to explore their spatial relationships. We thus propose psudo, an interactive system that allows users to create optimal color palettes for multichannel spatial data. In psudo, a novel optimization method generates palettes that maximize the perceptual differences between channels while mitigating confusing color blending in overlapping channels. We integrate this method into a system that allows users to explore multi-channel image data and compare and evaluate color palettes for their data. An interactive lensing approach provides on-demand feedback on channel overlap and a color confusion metric while giving context to the underlying channel values. Color palettes can be applied globally or, using the lens, to local regions of interest. We evaluate our palette optimization approach using three graphical perception tasks in a crowdsourced user study with 150 participants, showing that users are more accurate at discerning and comparing the underlying data using our approach. Additionally, we showcase psudo in a case study exploring the complex immune responses in cancer tissue data with a biologist. | false | false | [
"Simon Warchol",
"Jakob Troidl",
"Jeremy Muhlich",
"Robert Krüger",
"John Hoffer",
"Tica Lin",
"Johanna Beyer",
"Elena L. Glassman",
"Peter K. Sorger",
"Hanspeter Pfister"
] | [] | [] | [] |
EuroVis | 2,024 | RouteVis: Quantitative Visual Analytics of Various Factors to Understand Route Choice Preferences | 10.1111/cgf.15091 | Analyzing route choice preferences not only facilitates the understanding of individuals' decision-making behavior, but also provides valuable information for improving traffic management strategies. As the layout of the road network, the variability of individual preferences, and the spatial distribution of origins and destinations all play a role in route choice, it is a great challenge to reveal the interplay of such numerous complex factors. In this paper, we propose RouteVis, an interactive visual analytics system that enables traffic analysts to gain insight into what factors drive individuals to choose a specific route. To uncover the relationship between route choice and influencing factors, we design a quantitative analytical framework that supports analysts in conducting closed-loop analysis of various factors, i.e., data preprocessing, route identification, and the quantification of influence and contribution. Furthermore, given the multidimensional and spatio-temporal characteristics of the analysis results, we customize a set of coordinated views and visual designs to provide an intuitive presentation of the factors affecting people's travels, thus freeing analysts from tedious repetitive tasks and significantly enhancing work efficiency. Two typical usage scenarios and expert feedback on the system's functionality demonstrate that RouteVis can greatly enhance analysts' understanding of travel status. | false | false | [
"C. Lv",
"Huijie Zhang",
"Y. Lin",
"J. Dong",
"L. Tian"
] | [] | [] | [] |
EuroVis | 2,024 | Should I make it round? Suitability of circular and linear layouts for comparative tasks with matrix and connective data | 10.1111/cgf.15102 | Visual representations based on circular shapes are frequently used in visualization applications. One example is circos plots within bioinformatics, which bend graphs into a wheel of information with connective lines running through the center like spokes. The results are aesthetically appealing and impressive visualizations that fit long data sequences into a small quadratic space. However, the authors' experience is that, when asked, a visualization researcher would generally advise against making visualizations with radial layouts. Upon reviewing the literature we found that there is evidence that circular layouts are preferable in some cases, but we found no clear evidence for what layout is preferable for matrices and connective data in particular, both of which are common data types in circos plots. In this work, we thus performed a user study to compare circular and linear layouts. The tasks are inspired by genomics data, but our results generalize to many other application areas, involving comparison and connective data. To build the prototype we utilized Gosling, a grammar for visualizing genomics data. We contribute empirical evidence on the suitability of linear versus circular layouts, adding to the specific and general knowledge concerning perception of circular graphs. In addition, we contribute a case study evaluation of the grammar Gosling as a rapid prototyping language, confirming its utility and providing guidance on suitable areas for future development. | false | false | [
"Emilia Ståhlbom",
"Jesper Molin",
"Anders Ynnerman",
"Claes Lundström"
] | [] | [] | [] |
EuroVis | 2,024 | Sparse q-ball imaging towards efficient visual exploration of HARDI data | 10.1111/cgf.15082 | Diffusion-weighted magnetic resonance imaging (D-MRI) is a technique to measure the diffusion of water in biological tissues. It is used to detect microscopic patterns, such as neural fibers in the living human brain, with many medical and neuroscience applications, e.g. for fiber tracking. In this paper, we consider High-Angular Resolution Diffusion Imaging (HARDI), which provides one of the richest representations of water diffusion. It records the movement of water molecules by measuring diffusion under 64 or more directions. A key challenge is that it generates high-dimensional, large, and complex datasets. In our work, we develop a novel representation that exploits the inherent sparsity of the HARDI signal by approximating it as a linear sum of basic atoms in an overcomplete data-driven dictionary using only a sparse set of coefficients. We show that this approach can be efficiently integrated into the standard q-ball imaging pipeline to compute the diffusion orientation distribution function (ODF). Sparse representations have the potential to reduce the size of the data while also giving some insight into the data. To explore the results, we provide a visualization of the atoms of the dictionary and their frequency in the data to highlight the basic characteristics of the data. We present our proposed pipeline and demonstrate its performance on 5 HARDI datasets. | false | false | [
"Danhua Lei",
"Ehsan Miandji",
"Jonas Unger",
"Ingrid Hotz"
] | [] | [] | [] |
EuroVis | 2,024 | State of the Art of Graph Visualization in non-Euclidean Spaces | 10.1111/cgf.15113 | Visualizing graphs and networks in non-Euclidean space can have benefits such as natural focus+context in hyperbolic space and the familiarity of interactions in spherical space. Despite work on these topics going back to the mid 1990s, there is no survey, or a part of a survey for this area of research. In this paper we review and categorize over 60 relevant papers and analyze them by geometry, (e.g., spherical, hyperbolic, torus), by contribution (e.g., technique, evaluation, proof, application), and by graph class (e.g., tree, planar, complex). | false | false | [
"Jacob Miller 0001",
"Dhruv Bhatia",
"Stephen G. Kobourov"
] | [] | [] | [] |
EuroVis | 2,024 | The State of the Art in Visual Analytics for 3D Urban Data | 10.1111/cgf.15112 | Urbanization has amplified the importance of three-dimensional structures in urban environments for a wide range of phenomena that are of significant interest to diverse stakeholders. With the growing availability of 3D urban data, numerous studies have focused on developing visual analysis techniques tailored to the unique characteristics of urban environments. However, incorporating the third dimension into visual analytics introduces additional challenges in designing effective visual tools to tackle urban data's diverse complexities. In this paper, we present a survey on visual analytics of 3D urban data. Our work characterizes published works along three main dimensions (why, what, and how), considering use cases, analysis tasks, data, visualizations, and interactions. We provide a fine-grained categorization of published works from visualization journals and conferences, as well as from a myriad of urban domains, including urban planning, architecture, and engineering. By incorporating perspectives from both urban and visualization experts, we identify literature gaps, motivate visualization researchers to understand challenges and opportunities, and indicate future research directions. | false | false | [
"Fabio Miranda 0001",
"Thomas Ortner",
"Gustavo Moreira",
"Maryam Hosseini",
"Milena Vuckovic",
"Filip Biljecki",
"Cláudio T. Silva",
"Marcos Lage",
"Nivan Ferreira"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2404.15976v1",
"icon": "paper"
}
] |
EuroVis | 2,024 | Topological Characterization and Uncertainty Visualization of Atmospheric Rivers | 10.1111/cgf.15084 | Atmospheric rivers (ARs) are long, narrow regions of water vapor in the Earth's atmosphere that transport heat and moisture from the tropics to the mid-latitudes. ARs are often associated with extreme weather events in North America and contribute significantly to water supply and flood risk. However, characterizing ARs has been a major challenge due to the lack of a universal definition and their structural variations. Existing AR detection tools (ARDTs) produce distinct AR boundaries for the same event, making the risk assessment of ARs a difficult task. Understanding these uncertainties is crucial to improving the predictability of AR impacts, including their landfall areas and associated precipitation, which could cause catastrophic flooding and landslides over the coastal regions. In this work, we develop an uncertainty visualization framework that captures boundary and interior uncertainties, i.e., structural variations, of an ensemble of ARs that arise from a set of ARDTs. We first provide a statistical overview of the AR boundaries using the contour boxplots of Whitaker et al. that highlight the structural variations of AR boundaries based on their nesting relationships. We then introduce the topological skeletons of ARs based on Morse complexes that characterize the interior variation of an ensemble of ARs. We propose an uncertainty visualization of these topological skeletons, inspired by MetroSets of Jacobson et al. that emphasizes the agreements and disagreements across the ensemble members. Through case studies and expert feedback, we demonstrate that the two approaches complement each other, and together they could facilitate an effective comparative analysis process and provide a more confident outlook on an AR's shape, area, and onshore impact. | false | false | [
"Fangfei Lan",
"Brandi Gamelin",
"Lin Yan 0003",
"Jiali Wang",
"Bei Wang 0001",
"Hanqi Guo 0001"
] | [] | [] | [] |
EuroVis | 2,024 | Transmittance-based Extinction and Viewpoint Optimization | 10.1111/cgf.15096 | A long-standing challenge in volume visualization is the effective communication of relevant spatial structures that might be hidden due to occlusions. Given a scalar field that indicates the importance of every point in the domain, previous work synthesized volume visualizations by weighted averaging of samples along view rays or by optimizing a spatially-varying extinction field through an energy minimization. This energy minimization, however, did not directly measure the contribution of an individual sample to the final pixel color. In this paper, we measure the visibility of relevant structures directly by incorporating the transmittance into a non-linear energy minimization. For the first time, we not only perform a transmittance-based extinction optimization, we concurrently optimize the camera position to find ideal viewpoints. We derive the partial derivatives for the gradient-based optimization symbolically, which makes the application of automatic differentiation methods unnecessary. The transmittance-based formulation gives a direct visibility measure that is communicated to the user in order to make aware of potentially overlooked relevant structures. Our approach is compatible with any measure of importance and its versatility is demonstrated in multiple data sets. | false | false | [
"Paul Himmler",
"Tobias Günther"
] | [] | [] | [] |
EuroVis | 2,024 | Transparent Risks: The Impact of the Specificity and Visual Encoding of Uncertainty on Decision Making | 10.1111/cgf.15094 | People frequently make decisions based on uncertain information. Prior research has shown that visualizations of uncertainty can help to support better decision making. However, research has also shown that different representations of the same information can lead to different patterns of decision making. It is crucial for researchers to develop a better scientific understanding of when, why and how different representations of uncertainty lead viewers to make different decisions. This paper seeks to address this need by comparing geospatial visualizations of wildfire risk to verbal descriptions of the same risk. In three experiments, we manipulated the specificity of the uncertain information as well as the visual cues used to encode risk in the visualizations. All three experiments found that participants were more likely to evacuate in response to a hypothetical wildfire if the risk information was presented verbally. When the risk was presented visually, participants were less likely to evacuate, particularly when transparency was used to encode the risk information. Experiment 1 showed that evacuation rates were lower for transparency maps than for other types of visualizations. Experiments 2 and 3 sought to replicate this effect and to test how it related to other factors. Experiment 2 varied the hue used for the transparency maps and Experiment 3 manipulated the salience of the borders between the different risk levels. These experiments showed lower evacuation rates in response to transparency maps regardless of hue. The effect was partially, but not entirely, mitigated by adding salient borders to the transparency maps. Taken together, these experiments show that using transparency to encode information about risk can lead to very different patterns of decision making than other encodings of the same information. | false | false | [
"Laura E. Matzen",
"Breannan C. Howell",
"Marie Tuft",
"Kristin Divis"
] | [] | [] | [] |
EuroVis | 2,024 | Visual Analytics for Fine-grained Text Classification Models and Datasets | 10.1111/cgf.15098 | In natural language processing (NLP), text classification tasks are increasingly fine-grained, as datasets are fragmented into a larger number of classes that are more difficult to differentiate from one another. As a consequence, the semantic structures of datasets have become more complex, and model decisions more difficult to explain. Existing tools, suited for coarse-grained classification, falter under these additional challenges. In response to this gap, we worked closely with NLP domain experts in an iterative design-and-evaluation process to characterize and tackle the growing requirements in their workflow of developing fine-grained text classification models. The result of this collaboration is the development of SemLa, a novel Visual Analytics system tailored for 1) dissecting complex semantic structures in a dataset when it is spatialized in model embedding space, and 2) visualizing fine-grained nuances in the meaning of text samples to faithfully explain model reasoning. This paper details the iterative design study and the resulting innovations featured in SemLa. The final design allows contrastive analysis at different levels by unearthing lexical and conceptual patterns including biases and artifacts in data. Expert feedback on our final design and case studies confirm that SemLa is a useful tool for supporting model validation and debugging as well as data annotation. | false | false | [
"Munkhtulga Battogtokh",
"Yiwen Xing",
"Cosmin Davidescu",
"Alfie Abdul-Rahman",
"Michael Luck",
"Rita Borgo"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.15492v1",
"icon": "paper"
}
] |
EuroVis | 2,024 | Visual Highlighting for Situated Brushing and Linking | 10.1111/cgf.15105 | Brushing and linking is widely used for visual analytics in desktop environments. However, using this approach to link many data items between situated (e.g., a virtual screen with data) and embedded views (e.g., highlighted objects in the physical environment) is largely unexplored. To this end, we study the effectiveness of visual highlighting techniques in helping users identify and link physical referents to brushed data marks in a situated scatterplot. In an exploratory virtual reality user study (N=20), we evaluated four highlighting techniques under different physical layouts and tasks. We discuss the effectiveness of these techniques, as well as implications for the design of brushing and linking operations in situated analytics. | false | false | [
"Nina Doerr",
"Benjamin Lee",
"Katarina Baricova",
"Dieter Schmalstieg",
"Michael Sedlmair"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.15321v3",
"icon": "paper"
}
] |
CHI | 2,024 | "Ah! I see" - Facilitating Process Reflection in Gameplay through a Novel Spatio-Temporal Visualization System | 10.1145/3613904.3642484 | Educational games have emerged as potent tools for helping students understand complex concepts and are now ubiquitous in global classrooms, amassing vast data. However, there is a notable gap in research concerning the effective visualization of this data to serve two key functions: (a) guiding students in reflecting upon their game-based learning and (b) aiding them in analyzing peer strategies. In this paper, we engage educators, students, and researchers as essential stakeholders. Taking a Design-Based Research (DBR) approach, we incorporate UX design methods to develop an innovative visualization system that helps players learn through gaining insights from their own and peers' gameplay and strategies. | false | false | [
"Sai Siddartha Maram",
"Erica Kleinman",
"Jennifer Villareale",
"Jichen Zhu",
"Magy Seif El-Nasr"
] | [] | [] | [] |
CHI | 2,024 | "Customization is Key": Reconfigurable Textual Tokens for Accessible Data Visualizations | 10.1145/3613904.3641970 | Customization is crucial for making visualizations accessible to blind and low-vision (BLV) people with widely-varying needs. But what makes for usable or useful customization? We identify four design goals for how BLV people should be able to customize screen-reader-accessible visualizations: presence, or what content is included; verbosity, or how concisely content is presented; ordering, or how content is sequenced; and, duration, or how long customizations are active. To meet these goals, we model a customization as a sequence of content tokens, each with a set of adjustable properties. We instantiate our model by extending Olli, an open-source accessible visualization toolkit, with a settings menu and command box for persistent and ephemeral customization respectively. Through a study with 13 BLV participants, we find that customization increases the ease of identifying and remembering information. However, customization also introduces additional complexity, making it more helpful for users familiar with similar tools. | false | false | [
"Shuli Jones",
"Isabella Pedraza Pineros",
"Daniel Hajas",
"Jonathan Zong",
"Arvind Satyanarayan"
] | [] | [] | [] |
CHI | 2,024 | "Is Text-Based Music Search Enough to Satisfy Your Needs?" A New Way to Discover Music with Images | 10.1145/3613904.3642126 | Music is intrinsically connected to human experience, yet the plethora of choices often renders the search for the ideal piece perplexing, especially when the search terms are ambiguous. This study questions the viability of employing visual data, specifically images, in innovative queries for music search, and it aims to better align search results with users' moods and situational context. We designed and evaluated three prototype systems for music search—TTTune (text-based), VisTune (image-based), and VTTune (hybrid)—to comparatively assess user experience and system usability. In a comprehensive user study involving 236 participants, each participant interacted with one of the systems and subsequently completed post-experimental surveys. A subset of participants also participated in in-depth interviews to further elucidate the potential and the advantages of image-based music retrieval (IMR) systems. Our findings reveal a marked preference for the user experience and usability offered by the IMR approach, as compared with the traditional text-based method. This underscores the potential of the image in an effective search query. Based on these findings, we discuss interface design guidelines tailored for IMR systems and factors affecting system performance, contributing to the evolving landscape of music search methods. | false | false | [
"Jeongeun Park 0003",
"Hyorim Shin",
"Changhoon Oh",
"Ha Young Kim"
] | [] | [] | [] |
CHI | 2,024 | "It is hard to remove from my eye": Design Makeup Residue Visualization System for Chinese Traditional Opera (Xiqu) Performers | 10.1145/3613904.3642261 | Chinese traditional opera (Xiqu) performers often experience skin problems due to the long-term use of heavy-metal-laden face paints. To explore the current skincare challenges encountered by Xiqu performers, we conducted an online survey (N=136) and semi-structured interviews (N=15) as a formative study. We found that incomplete makeup removal is the leading cause of human-induced skin problems, especially the difficulty in removing eye makeup. Therefore, we proposed EyeVis, a prototype that can visualize the residual eye makeup and record the time make-up was worn by Xiqu performers. We conducted a 7-day deployment study (N=12) to evaluate EyeVis. Results indicate that EyeVis helps to increase Xiqu performers’ awareness about removing makeup, as well as boosting their confidence and security in skincare. Overall, this work also provides implications for studying the work of people who wear makeup on a daily basis, and helps to promote and preserve the intangible cultural heritage of practitioners. | false | false | [
"Zeyu Xiong",
"Shihan Fu",
"Yanying Zhu",
"Chenqing Zhu",
"Xiaojuan Ma",
"Mingming Fan 0001"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2402.15719v1",
"icon": "paper"
}
] |
CHI | 2,024 | "Yeah, this graph doesn't show that": Analysis of Online Engagement with Misleading Data Visualizations | 10.1145/3613904.3642448 | Attempting to make sense of a phenomenon or crisis, social media users often share data visualizations and interpretations that can be erroneous or misleading. Prior work has studied how data visualizations can mislead, but do misleading visualizations reach a broad social media audience? And if so, do users amplify or challenge misleading interpretations? To answer these questions, we conducted a mixed-methods analysis of the public's engagement with data visualization posts about COVID-19 on Twitter. Compared to posts with accurate visual insights, our results show that posts with misleading visualizations garner more replies in which the audiences point out nuanced fallacies and caveats in data interpretations. Based on the results of our thematic analysis of engagement, we identify and discuss important opportunities and limitations to effectively leveraging crowdsourced assessments to address data-driven misinformation. | false | false | [
"Maxim Lisnic",
"Alexander Lex",
"Marina Kogan"
] | [] | [] | [] |
CHI | 2,024 | 'We Do Not Have the Capacity to Monitor All Media': A Design Case Study on Cyber Situational Awareness in Computer Emergency Response Teams | 10.1145/3613904.3642368 | Computer Emergency Response Teams (CERTs) provide advisory, preventive and reactive cybersecurity services for authorities, citizens, and businesses. However, their responsibility of monitoring, analyzing, and communicating cyber threats have become challenging due to the growing volume and varying quality of information disseminated through public channels. Based on a design case study conducted from 2021 to 2023, this paper combines three iterations of expert interviews, design workshops and cognitive walkthroughs to design an automated, cross-platform and real-time cybersecurity dashboard. By adopting the notion of cyber situational awareness, the study extracts user requirements and design heuristics for enhanced threat awareness and mission awareness in CERTs, discussing the aspects of source integration, data management, customizable visualization, relationship awareness, information assessment, software integration, (inter-)organizational collaboration, and communication of stakeholder warnings. | false | false | [
"Marc-André Kaufhold",
"Thea Riebe",
"Markus Bayer",
"Christian Reuter 0001"
] | [
"BP"
] | [] | [] |
CHI | 2,024 | (Re)activate, (Re)direct, (Re)arrange: Exploring the Design Space of Direct Interactions with Flavobacteria | 10.1145/3613904.3642262 | HCI designers increasingly engage in the integration of microbes into artefacts, leveraging their distinct biological affordances for novel interactions. While in many explorations the interaction between humans and microbes is mediated, scholars also highlight the potential of direct interactions, such as visualising mechanical distortions or fostering a sense of relationality with nonhumans through eliciting intimate encounters. Seizing upon this potential, our study delves into the realm of direct interactions involving Flavobacteria, recently introduced as a colour-changing interactive medium in HCI. We present a design space for direct interactions where humans can (re)activate, (re)direct, and (re)arrange Flavobacteria's colourations, thereby fostering a personal and dynamic interplay between humans and microbes. With our work, we aspire to provide pathways and ignite inspiration among HCI designers to create living artefacts that cultivate active engagement and heightened attentiveness towards microbial worlds and beyond. | false | false | [
"Clarice Risseeuw",
"Holly McQuillan",
"Joana Martins",
"Elvin Karana"
] | [
"HM"
] | [] | [] |
CHI | 2,024 | A Human Information Processing Theory of the Interpretation of Visualizations: Demonstrating Its Utility | 10.1145/3613904.3642276 | Providing an approach to model the memory structures that humans build as they use visualizations could be useful for researchers, designers and educators in the field of information visualization. Cheng and colleagues formulated Representation Interpretive Structure Theory (RIST) for that purpose. RIST adopts a human information processing perspective in order to address the immediate, short timescale, cognitive load likely to be experienced by visualization users. RIST is operationalized in a graphical modeling notation and browser-based editor. This paper demonstrates the utility of RIST by showing that (a): RIST models are compatible with established empirical and computational cognitive findings about differences in human performance on alternative representations; (b) they can encompass existing explanations from the literature; and, (c) they provide new explanations about causes of those performance differences. | false | false | [
"Peter C.-H. Cheng",
"Grecia Garcia Garcia",
"Daniel Raggi",
"Mateja Jamnik"
] | [] | [] | [] |
CHI | 2,024 | An Eye Gaze Heatmap Analysis of Uncertainty Head-Up Display Designs for Conditional Automated Driving | 10.1145/3613904.3642219 | This paper reports results from a high-fidelity driving simulator study (N=215) about a head-up display (HUD) that conveys a conditional automated vehicle's dynamic "uncertainty" about the current situation while fallback drivers watch entertaining videos. We compared (between-group) three design interventions: display (a bar visualisation of uncertainty close to the video), interruption (interrupting the video during uncertain situations), and combination (a combination of both), against a baseline (video-only). We visualised eye-tracking data to conduct a heatmap analysis of the four groups' gaze behaviour over time. We found interruptions initiated a phase during which participants interleaved their attention between monitoring and entertainment. This improved monitoring behaviour was more pronounced in combination compared to interruption, suggesting pre-warning interruptions have positive effects. The same addition had negative effects without interruptions (comparing baseline & display). Intermittent interruptions may have safety benefits over placing additional peripheral displays without compromising usability. | false | false | [
"Michael A. Gerber",
"Ronald Schroeter",
"Daniel Johnson 0001",
"Christian P. Janssen",
"Andry Rakotonirainy",
"Jonny Kuo",
"Mike Lenné"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2402.17751v1",
"icon": "paper"
}
] |
CHI | 2,024 | Better Little People Pictures: Generative Creation of Demographically Diverse Anthropographics | 10.1145/3613904.3641957 | We explore the potential of generative AI text-to-image models to help designers efficiently craft unique, representative, and demographically diverse anthropographics that visualize data about people. Currently, creating data-driven iconic images to represent individuals in a dataset often requires considerable design effort. Generative text-to-image models can streamline the process of creating these images, but risk perpetuating designer biases in addition to stereotypes latent in the models. In response, we outline a conceptual workflow for crafting anthropographic assets for visualizations, highlighting possible sources of risk and bias as well as opportunities for reflection and refinement by a human designer. Using an implementation of this workflow with Stable Diffusion and Google Colab, we illustrate a variety of new anthropographic designs that showcase the visual expressiveness and scalability of these generative approaches. Based on our experiments, we also identify challenges and research opportunities for new AI-enabled anthropographic visualization tools. | false | false | [
"Priya Dhawka",
"Lauren Perera",
"Wesley Willett"
] | [] | [] | [] |
CHI | 2,024 | CharacterMeet: Supporting Creative Writers' Entire Story Character Construction Processes Through Conversation with LLM-Powered Chatbot Avatars | 10.1145/3613904.3642105 | Support for story character construction is as essential as characters are for stories. Building upon past research on early character construction stages, we explore how conversation with chatbot avatars embodying characters powered by more recent technologies could support the entire character construction process for creative writing. Through a user study (N=14) with creative writers, we examine thinking and usage patterns of CharacterMeet, a prototype system allowing writers to progressively manifest characters through conversation while customizing context, character appearance, voice, and background image. We discover that CharacterMeet facilitates iterative character construction. Specifically, participants, including those with more linear usual approaches, alternated between writing and personalized exploration through visualization of ideas on CharacterMeet while visuals and audio enhanced immersion. Our findings support research on iterative creative processes and the growing potential of personalizable generative AI creativity support tools. We present design implications for leveraging chatbot avatars in the creative writing process. | false | false | [
"Hua Xuan Qin",
"Shan Jin",
"Ze Gao",
"Mingming Fan 0001",
"Pan Hui 0001"
] | [] | [] | [] |
CHI | 2,024 | Cieran: Designing Sequential Colormaps via In-Situ Active Preference Learning | 10.1145/3613904.3642903 | Quality colormaps can help communicate important data patterns. However, finding an aesthetically pleasing colormap that looks "just right" for a given scenario requires significant design and technical expertise. We introduce Cieran, a tool that allows any data analyst to rapidly find quality colormaps while designing charts within Jupyter Notebooks. Our system employs an active preference learning paradigm to rank expert-designed colormaps and create new ones from pairwise comparisons, allowing analysts who are novices in color design to tailor colormaps to their data context. We accomplish this by treating colormap design as a path planning problem through the CIELAB colorspace with a context-specific reward model. In an evaluation with twelve scientists, we found that Cieran effectively modeled user preferences to rank colormaps and leveraged this model to create new quality designs. Our work shows the potential of active preference learning for supporting efficient visualization design optimization. | false | false | [
"Matt-Heun Hong",
"Zachary Nolan Sunberg",
"Danielle Albers Szafir"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2402.15997v2",
"icon": "paper"
}
] |
CHI | 2,024 | Co-designing Customizable Clinical Dashboards with Multidisciplinary Teams: Bridging the Gap in Chronic Disease Care | 10.1145/3613904.3642618 | Providing care to individuals with chronic diseases benefits from a multidisciplinary approach and longitudinal symptom, event, and disease monitoring, in and out of clinical facilities. Technological advancements, including the ubiquitous presence of sensors and devices, present opportunities to collect large amounts of data and extract evidence-based insights about the patient and disease. Nevertheless, practical examples of clinical utility of those technologies remain sparse, and in specific focus areas (e.g., insights from a single device). This paper explores the challenges and opportunities of multidisciplinary clinical dashboards to support clinicians caring for people with chronic diseases. We report on a focus group and co-design workshops with a multidisciplinary team of clinicians and HCI researchers. We offer insights into how technological outcomes and visualizations can enhance clinical practice and the intricacies of information-sharing dynamics. We discuss the potential of dashboards to trigger actions in clinical settings and emphasize the benefits of customizable dashboards. | false | false | [
"Diogo Branco",
"Margarida Móteiro",
"Raquel Bouça-Machado",
"Rita Miranda",
"Tiago Reis",
"Élia Decoroso",
"Rita Cardoso",
"Joana Ramalho",
"Filipa Rato",
"Joana Malheiro",
"Diana Miranda",
"Verónica Caniça",
"Filipa Pona-Ferreira",
"Daniela Guerreiro",
"Mariana Leitão",
"Alexandra Saúde Braz",
"Joaquim J. Ferreira",
"Tiago João Guerreiro"
] | [] | [] | [] |
CHI | 2,024 | CollageVis: Rapid Previsualization Tool for Indie Filmmaking using Video Collages | 10.1145/3613904.3642575 | Previsualization, previs, is essential for film production, allowing cinematographic experiments and effective collaboration. However, traditional previs methods like 2D storyboarding and 3D animation require substantial time, cost, and technical expertise, posing challenges for indie filmmakers. We introduce CollageVis, a rapid previsualization tool using video collages. CollageVis enables filmmakers to create previs through two main user interfaces. First, it automatically segments actors from videos and assigns roles using name tags, color filters, and face swaps. Second, it positions video layers on a virtual stage and allows users to record shots using mobile as a proxy for a virtual camera. These features were developed based on formative interviews by reflecting indie filmmakers' needs and working methods. We demonstrate the system's capability by replicating seven film scenes and evaluate the system's usability with six indie filmmakers. The findings indicate that CollageVis allows more flexible yet expressive previs creation for idea development and collaboration. | false | false | [
"Hye-Young Jo",
"Ryo Suzuki 0001",
"Yoonji Kim"
] | [] | [] | [] |
CHI | 2,024 | Comparison of Spatial Visualization Techniques for Radiation in Augmented Reality | 10.1145/3613904.3642646 | Augmented Reality (AR) provides a safe and low-cost option for hazardous safety training that allows for the visualization of aspects that may be invisible, such as radiation. Effectively visually communicating such threats in the environment around the user is not straightforward. This work describes visually encoding radiation using the spatial awareness mesh of an AR Head Mounted Display. We leverage the AR device's GPUs to develop a real time solution that accumulates multiple dynamic sources and uses stencils to prevent an environment being over saturated with a visualization, as well as supporting the encoding of direction explicitly in the visualization. We perform a user study (25 participants) of different visualizations and obtain user feedback. Results show that there are complex interactions and while no visual representation was statistically superior or inferior, user opinions vary widely. We also discuss the evaluation approaches and provide recommendations. | false | false | [
"Fintan McGee",
"Roderick McCall",
"Joan Baixauli"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.05403v1",
"icon": "paper"
}
] |
CHI | 2,024 | Damage Optimization in Video Games: A Player-Driven Co-Creative Approach | 10.1145/3613904.3642643 | The concept of dealing damage is established and widespread in video games. With growing complexity and countless interactions in modern games, capturing how damage unfolds becomes an intricate problem - for developers just as for players. Misunderstanding how to optimize damage potentials includes risks of game imbalances, game-breaking exploits, mismatches between player skill and challenge (harming flow), and impaired perceived competence. All of these considerably impact player experience, game reception, success, and retention, yet polishing optimal strategies remains often a player community effort. To accelerate, inform and ease this process, we implemented an interactive tool capable of simulating, visualizing, planning and comparing damage strategies in video games. Following a case study within the Guild Wars 2 community, we contribute a player-driven perspective on the problem of damage optimization, as well as an artifact that resulted in empirical improvements – advancing the fields of game analytics, game evaluation methods and self-regulated learning. | false | false | [
"Johannes Pfau",
"Manik Charan",
"Erica Kleinman",
"Magy Seif El-Nasr"
] | [] | [] | [] |
CHI | 2,024 | Data Cubes in Hand: A Design Space of Tangible Cubes for Visualizing 3D Spatio-Temporal Data in Mixed Reality | 10.1145/3613904.3642740 | Tangible interfaces in mixed reality (MR) environments allow for intuitive data interactions. Tangible cubes, with their rich interaction affordances, high maneuverability, and stable structure, are particularly well-suited for exploring multi-dimensional data types. However, the design potential of these cubes is underexplored. This study introduces a design space for tangible cubes in MR, focusing on interaction space, visualization space, sizes, and multiplicity. Using spatio-temporal data, we explored the interaction affordances of these cubes in a workshop (N=24). We identified unique interactions like rotating, tapping, and stacking, which are linked to augmented reality (AR) visualization commands. Integrating user-identified interactions, we created a design space for tangible-cube interactions and visualization. A prototype visualizing global health spending with small cubes was developed and evaluated, supporting both individual and combined cube manipulation. This research enhances our grasp of tangible interaction in MR, offering insights for future design and application in diverse data contexts. | false | false | [
"Shuqi He",
"Haonan Yao",
"Luyan Jiang",
"Kaiwen Li",
"Nan Xiang",
"Yue Li 0023",
"Hai-Ning Liang",
"Lingyun Yu 0001"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.06891v1",
"icon": "paper"
}
] |
CHI | 2,024 | Data Probes as Boundary Objects for Technology Policy Design: Demystifying Technology for Policymakers and Aligning Stakeholder Objectives in Rideshare Gig Work | 10.1145/3613904.3642000 | Despite the evidence of harm that technology can inflict, commensurate policymaking to hold tech platforms accountable still lags. This is pertinent to app-based gig workers, where unregulated algorithms continue to dictate their work, often with little human recourse. While past HCI literature has investigated workers' experiences under algorithmic management and how to design interventions, rarely are the perspectives of stakeholders who inform or craft policy sought. To bridge this, we propose using data probes—interactive visualizations of workers' data that show the impact of technology practices on people—exploring them in 12 semi-structured interviews with policy informers, (driver-)organizers, litigators, and a lawmaker in the rideshare space. We show how data probes act as boundary objects to assist stakeholder interactions, demystify technology for policymakers, and support worker collective action. We discuss the potential for data probes as training tools for policymakers, and considerations around data access and worker risks when using data probes. | false | false | [
"Angie Zhang",
"Rocita Rana",
"Alexander Boltz",
"Veena Dubal",
"Min Kyung Lee"
] | [] | [] | [] |
CHI | 2,024 | Data Storytelling in Data Visualisation: Does it Enhance the Efficiency and Effectiveness of Information Retrieval and Insights Comprehension? | 10.1145/3613904.3643022 | Data storytelling (DS) is rapidly gaining attention as an approach that integrates data, visuals, and narratives to create data stories that can help a particular audience to comprehend the key messages underscored by the data with enhanced efficiency and effectiveness. It has been posited that DS can be especially advantageous for audiences with limited visualisation literacy, by presenting the data clearly and concisely. However, empirical studies confirming whether data stories indeed provide these benefits over conventional data visualisations are scarce. To bridge this gap, we conducted a study with 103 participants to determine whether DS indeed improves both efficiency and effectiveness in tasks related to information retrieval and insights comprehension. Our findings suggest that data stories do improve the efficiency of comprehension tasks, as well as the effectiveness of comprehension tasks that involve a single insight, compared with conventional visualisations. Interestingly, these benefits were not associated with participants' visualisation literacy. | false | false | [
"Hongbo Shao",
"Roberto Martínez-Maldonado",
"Vanessa Echeverría",
"Lixiang Yan",
"Dragan Gasevic"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2402.12634v2",
"icon": "paper"
}
] |
CHI | 2,024 | DeepSee: Multidimensional Visualizations of Seabed Ecosystems | 10.1145/3613904.3642001 | Scientists studying deep ocean microbial ecosystems use limited numbers of sediment samples collected from the seafloor to characterize important life-sustaining biogeochemical cycles in the environment. Yet conducting fieldwork to sample these extreme remote environments is both expensive and time consuming, requiring tools that enable scientists to explore the sampling history of field sites and predict where taking new samples is likely to maximize scientific return. We conducted a collaborative, user-centered design study with a team of scientific researchers to develop DeepSee, an interactive data workspace that visualizes 2D and 3D interpolations of biogeochemical and microbial processes in context together with sediment sampling history overlaid on 2D seafloor maps. Based on a field deployment and qualitative interviews, we found that DeepSee increased the scientific return from limited sample sizes, catalyzed new research workflows, reduced long-term costs of sharing data, and supported teamwork and communication between team members with diverse research goals. | false | false | [
"Adam Coscia",
"Haley M. Sapers",
"Noah Deutsch",
"Malika Khurana",
"John S. Magyar",
"Sergio A. Parra",
"Daniel R. Utter",
"Rebecca L. Wipfler",
"David W. Caress",
"Eric J. Martin",
"Jennifer B. Paduan",
"Maggie Hendrie",
"Santiago V. Lombeyda",
"Hillary Mushkin",
"Alex Endert",
"Scott Davidoff",
"Victoria J. Orphan"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.04761v1",
"icon": "paper"
}
] |
CHI | 2,024 | Discovering Accessible Data Visualizations for People with ADHD | 10.1145/3613904.3642112 | There have been many studies on understanding data visualizations regarding general users. However, we have a limited understanding of how people with ADHD comprehend data visualizations and how it might be different from the general users. To understand accessible data visualization for people with ADHD, we conducted a crowd-sourced survey involving 70 participants with ADHD and 77 participants without ADHD. Specifically, we tested the chart components of color, text amount, and use of visual embellishments/pictographs, finding that some of these components and ADHD affected participants' response times and accuracy. We outlined the neurological traits of ADHD and discussed specific findings on accessible data visualizations for people with ADHD. We found that various chart embellishment types affected accuracy and response times for those with ADHD differently depending on the types of questions. Based on these results, we suggest visual design recommendations to make accessible data visualizations for people with ADHD. | false | false | [
"Tien Tran",
"Hae Na Lee",
"Ji Hwan Park"
] | [
"HM"
] | [] | [] |
CHI | 2,024 | Do You See What I See? A Qualitative Study Eliciting High-Level Visualization Comprehension | 10.1145/3613904.3642813 | Designers often create visualizations to achieve specific high-level analytical or communication goals. These goals require people to naturally extract complex, contextualized, and interconnected patterns in data. While limited prior work has studied general high-level interpretation, prevailing perceptual studies of visualization effectiveness primarily focus on isolated, predefined, low-level tasks, such as estimating statistical quantities. This study more holistically explores visualization interpretation to examine the alignment between designers' communicative goals and what their audience sees in a visualization, which we refer to as their comprehension. We found that statistics people effectively estimate from visualizations in classical graphical perception studies may differ from the patterns people intuitively comprehend in a visualization. We conducted a qualitative study on three types of visualizations—line graphs, bar graphs, and scatterplots—to investigate the high-level patterns people naturally draw from a visualization. Participants described a series of graphs using natural language and think-aloud protocols. We found that comprehension varies with a range of factors, including graph complexity and data distribution. Specifically, 1) a visualization's stated objective often does not align with people's comprehension, 2) results from traditional experiments may not predict the knowledge people build with a graph, and 3) chart type alone is insufficient to predict the information people extract from a graph. Our study confirms the importance of defining visualization effectiveness from multiple perspectives to assess and inform visualization practices. | false | false | [
"Ghulam Jilani Quadri",
"Arran Zeyu Wang",
"Zhehao Wang",
"Jennifer Adorno Nieves",
"Paul Rosen 0001",
"Danielle Albers Szafir"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2402.15605v1",
"icon": "paper"
}
] |
CHI | 2,024 | DoodleTunes: Interactive Visual Analysis of Music-Inspired Children Doodles with Automated Feature Annotation | 10.1145/3613904.3642346 | Music and visual arts are essential in children's arts education, and their integration has garnered significant attention. Existing data analysis methods for exploring audio-visual correlations are limited. Yet, relevant research is necessary for innovating and promoting arts integration courses. In our work, we collected substantial volumes of music-inspired doodles created by children and interviewed education experts to comprehend the challenges they encountered in the relevant analysis. Based on the insights we obtained, we designed and constructed an interactive visualization system DoodleTunes. DoodleTunes integrates deep learning-driven methods for automatically annotating several types of data features. The visual designs of the system are based on a four-level analysis structure to construct a progressive workflow, facilitating data exploration and insight discovery between doodle images and corresponding music pieces. We evaluated the accuracy of our feature prediction results and collected usage feedback on DoodleTunes from five domain experts. | false | false | [
"Shuqi Liu",
"Jia Bu",
"Huayuan Ye",
"Juntong Chen",
"Shiqi Jiang",
"Mingtian Tao",
"Liping Guo",
"Changbo Wang",
"Chenhui Li"
] | [] | [] | [] |
CHI | 2,024 | Doorways Do Not Always Cause Forgetting: Studying the Effect of Locomotion Technique and Doorway Visualization in Virtual Reality | 10.1145/3613904.3642879 | The "doorway effect" predicts that crossing an environmental boundary affects memory negatively. In virtual reality (VR), we can design the crossing and the appearance of such boundaries in non-realistic ways. However, it is unclear whether locomotion techniques like teleportation, which avoid crossing the boundary altogether, still induce the effect. Furthermore, it is unclear how different appearances of a doorway act as a boundary and thus induce the effect. To address these questions, we conducted two lab studies. First, we conceptually replicated prior doorway effect studies in VR using natural walking and teleportation. Second, we investigated the effect of five doorway visualizations, ranging from doors to portals. The results show no difference in object recognition performance due to the presence of a doorway, locomotion technique, or doorway visualization. We discuss the implications of these findings on the role of boundaries in event-based memory and the design of boundary interactions in VR. | false | false | [
"Thomas Van Gemert",
"Sean Chew",
"Yiannis Kalaitzoglou",
"Joanna Bergström"
] | [] | [] | [] |
CHI | 2,024 | DynaVis: Dynamically Synthesized UI Widgets for Visualization Editing | 10.1145/3613904.3642639 | Users often rely on GUIs to edit and interact with visualizations — a daunting task due to the large space of editing options. As a result, users are either overwhelmed by a complex UI or constrained by a custom UI with a tailored, fixed subset of options with limited editing flexibility. Natural Language Interfaces (NLIs) are emerging as a feasible alternative for users to specify edits. However, NLIs forgo the advantages of traditional GUI: the ability to explore and repeat edits and see instant visual feedback. We introduce DynaVis, which blends natural language and dynamically synthesized UI widgets. As the user describes an editing task in natural language, DynaVis performs the edit and synthesizes a persistent widget that the user can interact with to make further modifications. Study participants (n=24) preferred DynaVis over the NLI-only interface citing ease of further edits and editing confidence due to immediate visual feedback. | false | false | [
"Priyan Vaithilingam",
"Elena L. Glassman",
"Jeevana Priya Inala",
"Chenglong Wang"
] | [
"BP"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2401.10880v1",
"icon": "paper"
}
] |
CHI | 2,024 | Effects of Point Size and Opacity Adjustments in Scatterplots | 10.1145/3613904.3642127 | Systematically changing the size and opacity of points on scatterplots can be used to induce more accurate perceptions of correlation by viewers. Evidence points to the mechanisms behind these effects being similar, so one may expect their combination to be additive regarding their effects on correlation estimation. We present a fully-reproducible study in which we combine techniques for influencing correlation perception to show that in reality, effects of changing point size and opacity interact in a non-additive fashion. We show that there is a great deal of scope for using visual features to change viewers' perceptions of data visualizations. Additionally, we use our results to further interrogate the perceptual mechanisms at play when changing point size and opacity in scatterplots. | false | false | [
"Gabriel Strain",
"Andrew J. Stewart",
"Paul A. Warren",
"Caroline Jay"
] | [] | [] | [] |
CHI | 2,024 | Epigraphics: Message-Driven Infographics Authoring | 10.1145/3613904.3642172 | The message a designer wants to convey plays a pivotal role in directing the design of an infographic, yet most authoring workflows start with creating the visualizations or graphics first without gauging whether they fit the message. To address this gap, we propose Epigraphics, a web-based authoring system that treats an "epigraph" as the first-class object, and uses it to guide infographic asset creation, editing, and syncing. The system uses the text-based message to recommend visualizations, graphics, data filters, color palettes, and animations. It further supports between-asset interactions and fine-tuning such as recoloring, highlighting, and animation syncing that enhance the aesthetic cohesiveness of the assets. A gallery and case studies show that our system can produce infographics inspired by existing popular ones, and a task-based usability study with 10 designers shows that a text-sourced workflow can standardize content, empower users to think more about the big picture, and facilitate rapid prototyping. | false | false | [
"Tongyu Zhou",
"Jeff Huang 0002",
"Gromit Yeuk-Yin Chan"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2404.10152v1",
"icon": "paper"
}
] |
CHI | 2,024 | Evaluating Navigation and Comparison Performance of Computational Notebooks on Desktop and in Virtual Reality | 10.1145/3613904.3642932 | The computational notebook serves as a versatile tool for data analysis. However, its conventional user interface falls short of keeping pace with the ever-growing data-related tasks, signaling the need for novel approaches. With the rapid development of interaction techniques and computing environments, there is a growing interest in integrating emerging technologies in data-driven workflows. Virtual reality, in particular, has demonstrated its potential in interactive data visualizations. In this work, we aimed to experiment with adapting computational notebooks into VR and verify the potential benefits VR can bring. We focus on the navigation and comparison aspects as they are primitive components in analysts' workflow. To further improve comparison, we have designed and implemented a Branching&Merging functionality. We tested computational notebooks on the desktop and in VR, both with and without the added Branching&Merging capability. We found VR significantly facilitated navigation compared to desktop, and the ability to create branches enhanced comparison. | false | false | [
"Sungwon In",
"Eric Krokos",
"Kirsten Whitley",
"Chris North 0001",
"Yalong Yang 0001"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2404.07161v1",
"icon": "paper"
}
] |
CHI | 2,024 | Exploring Visualizations for Precisely Guiding Bare Hand Gestures in Virtual Reality | 10.1145/3613904.3642935 | Bare hand interaction in augmented or virtual reality (AR/VR) systems, while intuitive, often results in errors and frustration. However, existing methods, such as a static icon or a dynamic tutorial, can only inform simple and coarse hand gestures and lack corrective feedback. This paper explores various visualizations for enhancing precise hand interaction in VR. Through a comprehensive two-part formative study with 11 participants, we identified four types of essential information for visual guidance and designed different visualizations that manifest these information types. We further distilled four visual designs and conducted a controlled lab study with 15 participants to assess their effectiveness for various single- and double-handed gestures. Our results demonstrate that visual guidance significantly improved users' gesture performance, reducing time and workload while increasing confidence. Moreover, we found that the visualization did not disrupt most users' immersive VR experience or their perceptions of hand tracking and gesture recognition reliability. | false | false | [
"Xizi Wang",
"Ben Lafreniere",
"Jian Zhao 0010"
] | [] | [] | [] |
CHI | 2,024 | Fast-Forward Reality: Authoring Error-Free Context-Aware Policies with Real-Time Unit Tests in Extended Reality | 10.1145/3613904.3642158 | Advances in ubiquitous computing have enabled end-user authoring of context-aware policies (CAPs) that control smart devices based on specific contexts of the user and environment. However, authoring CAPs accurately and avoiding run-time errors is challenging for end-users as it is difficult to foresee CAP behaviors under complex real-world conditions. We propose Fast-Forward Reality, an Extended Reality (XR) based authoring workflow that enables end-users to iteratively author and refine CAPs by validating their behaviors via simulated unit test cases. We develop a computational approach to automatically generate test cases based on the authored CAP and the user's context history. Our system delivers each test case with immersive visualizations in XR, facilitating users to verify the CAP behavior and identify necessary refinements. We evaluated Fast-Forward Reality in a user study (N=12). Our authoring and validation process improved the accuracy of CAPs and the users provided positive feedback on the system usability. | false | false | [
"Xun Qian",
"Tianyi Wang 0004",
"Xuhai Xu",
"Tanya R. Jonker",
"Kashyap Todi"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.07997v1",
"icon": "paper"
}
] |
CHI | 2,024 | From Exploration to End of Life: Unpacking Sustainability in Physicalization Practices | 10.1145/3613904.3642248 | Data physicalizations have gained prominence across domains, but their environmental impact has been largely overlooked. This work addresses this gap by investigating the interplay between sustainability and physicalization practices. We conducted interviews with experts from diverse backgrounds, followed by a survey to gather insights into how they approach physicalization projects and reflect on sustainability. Our thematic analysis revealed sustainability considerations throughout the entire physicalization life cycle—a framework that encompasses various stages in a physicalization's existence. Notably, we found no single agreed-upon definition for sustainable physicalizations, highlighting the complexity of integrating sustainability into physicalization practices. We outline sustainability challenges and strategies based on participants' experiences and propose the Sustainable Physicalization Practices (SuPPra) Matrix, providing a structured approach for designers to reflect on and enhance the environmental impact of their future physicalizations. | false | false | [
"Luiz Morais",
"Georgia Panagiotidou",
"Sarah Hayes",
"Tatiana Losev",
"Rebecca Noonan",
"Uta Hinrichs"
] | [
"BP"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2402.09860v1",
"icon": "paper"
}
] |
CHI | 2,024 | Functional Design Requirements to Facilitate Menstrual Health Data Exploration | 10.1145/3613904.3642282 | Menstrual trackers currently lack the affordances required to help individuals achieve their goals beyond menstrual event predictions and symptom logging. Taking an initial step towards this aspiration, we propose, validate, and refine five functional design requirements for future interface designs that facilitate menstrual data exploration. We interviewed 30 individuals who menstruate and collected their feedback on the practical application of these requirements. To elicit ideas and impressions, we designed two proof-of-concept interfaces to use as design probes with similar core functionalities but different presentations of phase timing predictions and signal arrangement. Our analysis revealed participants' feedback regarding the presentation of predictions for menstrual-related events, the visualization of future signal patterns, personalization abilities for viewing signals relevant to their menstrual experience, the availability of resources to understand the underlying biological connections between signals, and the ability to compare multiple cycles side-by-side with context. | false | false | [
"Georgianna Lin",
"Pierre-William Lessard",
"Minh Ngoc Le",
"Brenna Li",
"Fanny Chevalier",
"Khai N. Truong",
"Alex Mariakakis"
] | [] | [] | [] |
CHI | 2,024 | Glanceable Data Visualizations for Older Adults: Establishing Thresholds and Examining Disparities Between Age Groups | 10.1145/3613904.3642776 | We present results of a replication study on smartwatch visualizations with adults aged 65 and older. The older adult population is rising globally, coinciding with their increasing interest in using small wearable devices, such as smartwatches, to track and view data. Smartwatches, however, pose challenges to this population: fonts and visualizations are often small and meant to be seen at a glance. How concise design on smartwatches interacts with aging-related changes in perception and cognition, however, is not well understood. We replicate a study that investigated how visualization type and number of data points affect glanceable perception. We observe strong evidence of differences for participants aged 75 and older, sparking interesting questions regarding the study of visualization and older adults. We discuss first steps toward better understanding and supporting an older population of smartwatch wearers and reflect on our experiences working with this population. Supplementary materials are available at https://osf.io/7x4hq/. | false | false | [
"Zack While",
"Tanja Blascheck",
"Yujie Gong",
"Petra Isenberg",
"Ali Sarvghad"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.12343v1",
"icon": "paper"
}
] |
CHI | 2,024 | Go-Go Biome: Evaluation of a Casual Game for Gut Health Engagement and Reflection | 10.1145/3613904.3642742 | Experts emphasise that maintaining a healthy gut microbial balance requires the public to understand factors beyond diet, such as physical activity, lifestyle, and other real-world influences. Games as experiential systems are known to foster playful engagement and reflection. We propose a novel approach to promoting activity engagement for gut health and its reflection through the design of the Go-Go Biome game. The game simulates the interplay between friendly and unfriendly gut microbes, encouraging real-world activity engagement for gut-microbial balance through interactive visuals, unstructured play mechanics, and reflective design principles. A field study with 14 participants revealed that important facets of our game design led to awareness, playful visualisation, and reflection on factors influencing gut health. Our findings suggest four design lenses– bio-temporality, visceral conversations, wellness comparison, and inner discovery, to aid future playful design explorations to foster gut health engagement and reflection. | false | false | [
"Nandini Pasumarthy",
"Shreyas Nisal",
"Jessica Danaher",
"Elise van den Hoven",
"Rohit Ashok Khot"
] | [] | [] | [] |
CHI | 2,024 | How Do Analysts Understand and Verify AI-Assisted Data Analyses? | 10.1145/3613904.3642497 | Data analysis is challenging as it requires synthesizing domain knowledge, statistical expertise, and programming skills. Assistants powered by large language models (LLMs), such as ChatGPT, can assist analysts by translating natural language instructions into code. However, AI-assistant responses and analysis code can be misaligned with the analyst's intent or be seemingly correct but lead to incorrect conclusions. Therefore, validating AI assistance is crucial and challenging. Here, we explore how analysts understand and verify the correctness of AI-generated analyses. To observe analysts in diverse verification approaches, we develop a design probe equipped with natural language explanations, code, visualizations, and interactive data tables with common data operations. Through a qualitative user study (n=22) using this probe, we uncover common behaviors within verification workflows and how analysts' programming, analysis, and tool backgrounds reflect these behaviors. Additionally, we provide recommendations for analysts and highlight opportunities for designers to improve future AI-assistant experiences. | false | false | [
"Ken Gu",
"Ruoxi Shang",
"Tim Althoff",
"Chenglong Wang",
"Steven Mark Drucker"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2309.10947v2",
"icon": "paper"
}
] |
CHI | 2,024 | How Do Low-Vision Individuals Experience Information Visualization? | 10.1145/3613904.3642188 | In recent years, there has been a growing interest in enhancing the accessibility of visualizations for people with visual impairments. While much of the research has focused on improving accessibility for screen reader users, the specific needs of people with remaining vision (i.e., low-vision individuals) have been largely unaddressed. To bridge this gap, we conducted a qualitative study that provides insights into how low-vision individuals experience visualizations. We found that participants utilized various strategies to examine visualizations using the screen magnifiers and also observed that the default zoom level participants use for general purposes may not be optimal for reading visualizations. We identified that participants relied on their prior knowledge and memory to minimize the traversing cost when examining visualization. Based on the findings, we motivate a personalized tool to accommodate varying visual conditions of low-vision individuals and derive the design goals and features of the tool. | false | false | [
"Yanan Wang",
"Yuhang Zhao 0001",
"Yea-Seul Kim"
] | [] | [] | [] |
CHI | 2,024 | Input Visualization: Collecting and Modifying Data with Visual Representations | 10.1145/3613904.3642808 | We examine input visualizations, visual representations that are designed to collect (and represent) new data rather than encode preexisting datasets. Information visualization is commonly used to reveal insights and stories within existing data. As a result, most contemporary visualization approaches assume existing datasets as the starting point for design, through which that data is mapped to visual encodings. Meanwhile, the implications of visualizations as inputs and as data sources have received little attention—despite the existence of visual and physical examples stretching back centuries. In this paper, we present a design space of 50 input visualizations analyzing their visual representation, data, artifact, context, and input. Based on this, we identify input modalities, purposes of input visualizations, and a set of design considerations. Finally, we discuss the relationship between input visualization and traditional visualization design and suggest opportunities for future research to better understand these visual representations and their potential. | false | false | [
"Nathalie Bressa",
"Jordan Louis",
"Wesley Willett",
"Samuel Huron"
] | [] | [
"PW",
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/bw3gp/files/c9j8s",
"icon": "paper"
},
{
"name": "Video",
"url": "https://www.youtube.com/watch?v=RAfv2quE6nA",
"icon": "video"
},
{
"name": "Project Website",
"url": "https://inputvisualization.github.io/",
"icon": "project_website"
}
] |
CHI | 2,024 | KnitScape: Computational Design and Yarn-Level Simulation of Slip and Tuck Colorwork Knitting Patterns | 10.1145/3613904.3642799 | Slipped and tucked stitches introduce small areas of deformation that compound and result in emergent textures on knitted fabrics. When used together with color changes and ladders, these can also produce dramatic colorwork and openwork effects. However, designing slip and tuck colorwork patterns is challenging due to the complex interactions between operations, yarns, and deformations. We present KnitScape, a browser-based tool for design and simulation of stitch patterns for knitting. KnitScape provides a design interface to specify 1) operation repeats, 2) color changes, and 3) needle positions. These inputs are used to build a graph of yarn topology and run a yarn-level spring simulation. This enables visualization of the deformation that arises from slip and tuck operations. Through its design tool and simulation, KnitScape enables rapid exploration of a complex colorwork design space. We demonstrate KnitScape with a series of example swatches. | false | false | [
"Hannah Twigg-Smith",
"Emily Whiting",
"Nadya Peek"
] | [] | [] | [] |
CHI | 2,024 | Looking Together ≠ Seeing the Same Thing: Understanding Surgeons' Visual Needs During Intra-operative Coordination and Instruction | 10.1145/3613904.3641929 | Shared gaze visualizations have been found to enhance collaboration and communication outcomes in diverse HCI scenarios including computer supported collaborative work and learning contexts. Given the importance of gaze in surgery operations, especially when a surgeon trainer and trainee need to coordinate their actions, research on the use of gaze to facilitate intra-operative coordination and instruction has been limited and shows mixed implications. We performed a field observation of 8 surgeries and an interview study with 14 surgeons to understand their visual needs during operations, informing ways to leverage and augment gaze to enhance intra-operative coordination and instruction. We found that trainees have varying needs in receiving visual guidance which are often unfulfilled by the trainers' instructions. It is critical for surgeons to control the timing of the gaze-based visualizations and effectively interpret gaze data. We suggest overlay technologies, e.g., gaze-based summaries and depth sensing, to augment raw gaze in support of surgical coordination and instruction. | false | false | [
"Vitaliy Popov",
"Xinyue Chen",
"Jingying Wang",
"Michael Kemp",
"Gurjit Sandhu",
"Taylor Kantor",
"Natalie Mateju",
"Xu Wang 0016"
] | [
"HM"
] | [] | [] |
CHI | 2,024 | MAIDR: Making Statistical Visualizations Accessible with Multimodal Data Representation | 10.1145/3613904.3642730 | This paper investigates new data exploration experiences that enable blind users to interact with statistical data visualizations—bar plots, heat maps, box plots, and scatter plots—leveraging multimodal data representations. In addition to sonification and textual descriptions that are commonly employed by existing accessible visualizations, our MAIDR (multimodal access and interactive data representation) system incorporates two additional modalities (braille and review) that offer complementary benefits. It also provides blind users with the autonomy and control to interactively access and understand data visualizations. In a user study involving 11 blind participants, we found the MAIDR system facilitated the accurate interpretation of statistical visualizations. Participants exhibited a range of strategies in combining multiple modalities, influenced by their past interactions and experiences with data visualizations. This work accentuates the overlooked potential of combining refreshable tactile representation with other modalities and elevates the discussion on the importance of user autonomy when designing accessible data visualizations. | false | false | [
"Jooyoung Seo",
"Yilin Xia",
"Bongshin Lee",
"Sean McCurry",
"Yu Jun Yam"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.00717v1",
"icon": "paper"
}
] |
CHI | 2,024 | Make Interaction Situated: Designing User Acceptable Interaction for Situated Visualization in Public Environments | 10.1145/3613904.3642049 | Situated visualization blends data into the real world to fulfill individuals' contextual information needs. However, interacting with situated visualization in public environments faces challenges posed by users' acceptance and contextual constraints. To explore appropriate interaction design, we first conduct a formative study to identify users' needs for data and interaction. Informed by the findings, we summarize appropriate interaction modalities with eye-based, hand-based and spatially-aware object interaction for situated visualization in public environments. Then, through an iterative design process with six users, we explore and implement interactive techniques for activating and analyzing with situated visualization. To assess the effectiveness and acceptance of these interactions, we integrate them into an AR prototype and conduct a within-subjects study in public scenarios using conventional hand-only interactions as the baseline. The results show that participants preferred our prototype over the baseline, attributing their preference to the interactions being more acceptable, flexible, and practical in public. | false | false | [
"Qian Zhu 0010",
"Zhuo Wang",
"Wei Zeng 0004",
"Wai Tong",
"Weiyue Lin",
"Xiaojuan Ma"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2402.14251v2",
"icon": "paper"
}
] |
CHI | 2,024 | Milliways: Taming Multiverses through Principled Evaluation of Data Analysis Paths | 10.1145/3613904.3642375 | Multiverse analyses involve conducting all combinations of reasonable choices in a data analysis process. A reader of a study containing a multiverse analysis might question—are all the choices included in the multiverse reasonable and equally justifiable? How much do results vary if we make different choices in the analysis process? In this work, we identify principles for validating the composition of, and interpreting the uncertainty in, the results of a multiverse analysis. We present Milliways, a novel interactive visualisation system to support principled evaluation of multiverse analyses. Milliways provides interlinked panels presenting result distributions, individual analysis composition, multiverse code specification, and data summaries. Milliways supports interactions to sort, filter and aggregate results based on the analysis specification to identify decisions in the analysis process to which the results are sensitive. To represent the two qualitatively different types of uncertainty that arise in multiverse analyses—probabilistic uncertainty from estimating unknown quantities of interest such as regression coefficients, and possibilistic uncertainty from choices in the data analysis—Milliways uses consonance curves and probability boxes. Through an evaluative study with five users familiar with multiverse analysis, we demonstrate how Milliways can support multiverse analysis tasks, including a principled assessment of the results of a multiverse analysis. | false | false | [
"Abhraneel Sarma",
"Kyle Hwang",
"Jessica Hullman",
"Matthew Kay 0001"
] | [] | [] | [] |
CHI | 2,024 | Momentary Stressor Logging and Reflective Visualizations: Implications for Stress Management with Wearables | 10.1145/3613904.3642662 | Commercial wearables from Fitbit, Garmin, and Whoop have recently introduced real-time notifications based on detecting changes in physiological responses indicating potential stress. In this paper, we investigate how these new capabilities can be leveraged to improve stress management. We developed a smartwatch app, a smartphone app, and a cloud service, and conducted a 100-day field study with 122 participants who received prompts triggered by physiological responses several times a day. They were asked whether they were stressed, and if so, to log the most likely stressor. Each week, participants received new visualizations of their data to self-reflect on patterns and trends. Participants reported better awareness of their stressors, and self-initiating fourteen kinds of behavioral changes to reduce stress in their daily lives. Repeated self-reports over 14 weeks showed reductions in both stress intensity (in 26,521 momentary ratings) and stress frequency (in 1,057 weekly surveys). | false | false | [
"Sameer Neupane",
"Mithun Saha",
"Nasir Ali",
"Timothy Hnat",
"Shahin Alan Samiei",
"Anandatirtha Nandugudi",
"David M. Almeida",
"Santosh Kumar 0001"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2401.16307v1",
"icon": "paper"
}
] |
CHI | 2,024 | Natural Language Dataset Generation Framework for Visualizations Powered by Large Language Models | 10.1145/3613904.3642943 | We introduce VL2NL, a Large Language Model (LLM) framework that generates rich and diverse NL datasets using Vega-Lite specifications as input, thereby streamlining the development of Natural Language Interfaces (NLIs) for data visualization. To synthesize relevant chart semantics accurately and enhance syntactic diversity in each NL dataset, we leverage 1) a guided discovery incorporated into prompting so that LLMs can steer themselves to create faithful NL datasets in a self-directed manner; 2) a score-based paraphrasing to augment NL syntax along with four language axes. We also present a new collection of 1,981 real-world Vega-Lite specifications that have greater diversity and complexity than existing chart collections. When tested on our chart collection, VL2NL extracted chart semantics and generated L1/L2 captions with 89.4% and 76.0% accuracy, respectively. It also demonstrated generating and paraphrasing utterances and questions with greater diversity compared to the benchmarks. Last, we discuss how our NL datasets and framework can be utilized in real-world scenarios. The codes and chart collection are available at https://github.com/hyungkwonko/chart-llm. | false | false | [
"Hyung-Kwon Ko",
"Hyeon Jeon",
"Gwanmo Park",
"Dae Hyun Kim 0005",
"Nam Wook Kim",
"Juho Kim",
"Jinwook Seo"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2309.10245v4",
"icon": "paper"
}
] |
CHI | 2,024 | Odds and Insights: Decision Quality in Exploratory Data Analysis Under Uncertainty | 10.1145/3613904.3641995 | Recent studies have shown that users of visual analytics tools can have difficulty distinguishing robust findings in the data from statistical noise, but the true extent of this problem is likely dependent on both the incentive structure motivating their decisions, and the ways that uncertainty and variability are (or are not) represented in visualisations. In this work, we perform a crowd-sourced study measuring decision-making quality in visual analytics, testing both an explicit structure of incentives designed to reward cautious decision-making as well as a variety of designs for communicating uncertainty. We find that, while participants are unable to perfectly control for false discoveries as well as idealised statistical models such as the Benjamini-Hochberg, certain forms of uncertainty visualisations can improve the quality of participants' decisions and lead to fewer false discoveries than not correcting for multiple comparisons. We conclude with a call for researchers to further explore visual analytics decision quality under different decision-making contexts, and for designers to directly present uncertainty and reliability information to users of visual analytics tools. The supplementary materials are available at: https://osf.io/xtsfz/. | false | false | [
"Abhraneel Sarma",
"Xiaoying Pu",
"Yuan Cui",
"Michael Correll",
"Eli T. Brown",
"Matthew Kay 0001"
] | [
"HM"
] | [] | [] |
CHI | 2,024 | On the Benefits of Image-Schematic Metaphors when Designing Mixed Reality Systems | 10.1145/3613904.3642925 | A Mixed Reality (MR) system encompasses various aspects, such as visualization and spatial registration of user interface elements, user interactions and interaction feedback. Image-schematic metaphors (ISMs) are universal knowledge structures shared by a wide range of users. They hold a theoretical promise of facilitating greater ease of learning and use for interactive systems without costly adaptations. This paper investigates whether image-schematic metaphors (ISMs) can improve user learning, by comparing an existing MR instruction authoring system with or without ISM enhancements. In a user study with 32 participants, we found that the ISM-enhanced system significantly improved task performance, learnability and mental efficiency compared to the baseline. Participants also rated the ISM-enhanced system significantly higher in terms of perspicuity, efficiency, and novelty. These results empirically demonstrate multiple benefits of ISMs when integrated into the design of this MR system and encourage further studies to explore the wider applicability of ISMs in user interface design. | false | false | [
"Jingyi Li",
"Per Ola Kristensson"
] | [] | [] | [] |
CHI | 2,024 | PD-Insighter: A Visual Analytics System to Monitor Daily Actions for Parkinson's Disease Treatment | 10.1145/3613904.3642215 | People with Parkinson's Disease (PD) can slow the progression of their symptoms with physical therapy. However, clinicians lack insight into patients' motor function during daily life, preventing them from tailoring treatment protocols to patient needs. This paper introduces PD-Insighter, a system for comprehensive analysis of a person's daily movements for clinical review and decision-making. PD-Insighter provides an overview dashboard for discovering motor patterns and identifying critical deficits during activities of daily living and an immersive replay for closely studying the patient's body movements with environmental context. Developed using an iterative design study methodology in consultation with clinicians, we found that PD-Insighter's ability to aggregate and display data with respect to time, actions, and local environment enabled clinicians to assess a person's overall functioning during daily life outside the clinic. PD-Insighter's design offers future guidance for generalized multiperspective body motion analytics, which may significantly improve clinical decision-making and slow the functional decline of PD and other medical conditions. | false | false | [
"Jade Kandel",
"Chelsea Duppen",
"Qian Zhang 0066",
"Howard Jiang",
"Angelos Angelopoulos",
"Ashley Paula-Ann Neall",
"Pranav Wagh",
"Daniel Szafir",
"Henry Fuchs",
"Michael Lewek",
"Danielle Albers Szafir"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2404.10661v1",
"icon": "paper"
}
] |
CHI | 2,024 | PhotoScout: Synthesis-Powered Multi-Modal Image Search | 10.1145/3613904.3642319 | Due to the availability of increasingly large amounts of visual data, there is a growing need for tools that can help users find relevant images. While existing tools can perform image retrieval based on similarity or metadata, they fall short in scenarios that necessitate semantic reasoning about the content of the image. This paper explores a new multi-modal image search approach that allows users to conveniently specify and perform semantic image search tasks. With our tool, PhotoScout, the user interactively provides natural language descriptions, positive and negative examples, and object tags to specify their search tasks. Under the hood, PhotoScout is powered by a program synthesis engine that generates visual queries in a domain-specific language and executes the synthesized program to retrieve the desired images. In a study with 25 participants, we observed that PhotoScout allows users to perform image retrieval tasks more accurately and with less manual effort. | false | false | [
"Celeste Barnaby",
"Qiaochu Chen",
"Chenglong Wang",
"Isil Dillig"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2401.10464v1",
"icon": "paper"
}
] |
CHI | 2,024 | PriviAware: Exploring Data Visualization and Dynamic Privacy Control Support for Data Collection in Mobile Sensing Research | 10.1145/3613904.3642815 | With increased interest in leveraging personal data collected from 24/7 mobile sensing for digital healthcare research, supporting user-friendly consent to data collection for user privacy has also become important. This work proposes PriviAware, a mobile app that promotes flexible user consent to data collection with data exploration and contextual filters that enable users to turn off data collection based on time and places that are considered privacy-sensitive. We conducted a user study (N = 58) to explore how users leverage data exploration and contextual filter functions to explore and manage their data and whether our system design helped users mitigate their privacy concerns. Our findings indicate that offering fine-grained control is a promising approach to raising users' privacy awareness under the dynamic nature of the pervasive sensing context. We provide practical privacy-by-design guidelines for mobile sensing research. | false | false | [
"Hyunsoo Lee",
"Yugyeong Jung",
"Hei Yiu Law",
"Seolyeong Bae",
"Uichin Lee"
] | [] | [] | [] |
CHI | 2,024 | ProInterAR: A Visual Programming Platform for Creating Immersive AR Interactions | 10.1145/3613904.3642527 | AR applications commonly contain diverse interactions among different AR contents. Creating such applications requires creators to have advanced programming skills for scripting interactive behaviors of AR contents, repeated transferring and adjustment of virtual contents from virtual to physical scenes, testing by traversing between desktop interfaces and target AR scenes, and digitalizing AR contents. Existing immersive tools for prototyping/authoring such interactions are tailored for domain-specific applications. To support programming general interactive behaviors of real object(s)/environment(s) and virtual object(s)/environment(s) for novice AR creators, we propose ProInterAR, an integrated visual programming platform to create immersive AR applications with a tablet and an AR-HMD. Users can construct interaction scenes by creating virtual contents and augmenting real contents from the view of an AR-HMD, script interactive behaviors by stacking blocks from a tablet UI, and then execute and control the interactions in the AR scene. We showcase a wide range of AR application scenarios enabled by ProInterAR, including AR game, AR teaching, sequential animation, AR information visualization, etc. Two usability studies validate that novice AR creators can easily program various desired AR applications using ProInterAR. | false | false | [
"Hui Ye",
"Jiaye Leng",
"Pengfei Xu 0002",
"Karan Singh",
"Hongbo Fu 0001"
] | [] | [] | [] |
CHI | 2,024 | Promoting Eco-Friendly Behaviour through Virtual Reality - Implementation and Evaluation of Immersive Feedback Conditions of a Virtual CO2 Calculator | 10.1145/3613904.3642957 | Climate change is one of the most pressing global challenges in the 21st century. Urgent actions favoring the environment's well-being are essential to mitigate its potentially irreversible consequences. However, the delayed and often distant nature of the effects of sustainable behavior makes it challenging for individuals to connect with the issue personally. Immersive media are an opportunity to introduce innovative feedback mechanisms to highlight the urgency of behavior effects. We introduce a VR carbon calculator that visualizes users' annual carbon footprint as CO2-filled balloons over multiple periods. In a 2 × 2 design, participants calculated and visualized their carbon footprint numerically or as balloons over one or three years. We found no effect of our visualization but a significant impact of the visualized period on participants' environmental self-efficacy. These findings emphasize the importance of target-oriented design in VR behavior interventions. | false | false | [
"Carolin Wienrich",
"Stephanie Vogt",
"Nina Döllinger",
"David Obremski"
] | [] | [] | [] |
CHI | 2,024 | PromptCharm: Text-to-Image Generation through Multi-modal Prompting and Refinement | 10.1145/3613904.3642803 | The recent advancements in Generative AI have significantly advanced the field of text-to-image generation. The state-of-the-art text-to-image model, Stable Diffusion, is now capable of synthesizing high-quality images with a strong sense of aesthetics. Crafting text prompts that align with the model's interpretation and the user's intent thus becomes crucial. However, prompting remains challenging for novice users due to the complexity of the stable diffusion model and the non-trivial efforts required for iteratively editing and refining the text prompts. To address these challenges, we propose PromptCharm, a mixed-initiative system that facilitates text-to-image creation through multi-modal prompt engineering and refinement. To assist novice users in prompting, PromptCharm first automatically refines and optimizes the user's initial prompt. Furthermore, PromptCharm supports the user in exploring and selecting different image styles within a large database. To assist users in effectively refining their prompts and images, PromptCharm renders model explanations by visualizing the model's attention values. If the user notices any unsatisfactory areas in the generated images, they can further refine the images through model attention adjustment or image inpainting within the rich feedback loop of PromptCharm. To evaluate the effectiveness and usability of PromptCharm, we conducted a controlled user study with 12 participants and an exploratory user study with another 12 participants. These two studies show that participants using PromptCharm were able to create images with higher quality and better aligned with the user's expectations compared with using two variants of PromptCharm that lacked interaction or visualization support. | false | false | [
"Zhijie Wang",
"Yuheng Huang",
"Da Song",
"Lei Ma 0003",
"Tianyi Zhang 0001"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.04014v1",
"icon": "paper"
}
] |
CHI | 2,024 | RASSAR: Room Accessibility and Safety Scanning in Augmented Reality | 10.1145/3613904.3642140 | The safety and accessibility of our homes is critical to quality of life and evolves as we age, become ill, host guests, or experience life events such as having children. Researchers and health professionals have created assessment instruments such as checklists that enable homeowners and trained experts to identify and mitigate safety and access issues. With advances in computer vision, augmented reality (AR), and mobile sensors, new approaches are now possible. We introduce RASSAR, a mobile AR application for semi-automatically identifying, localizing, and visualizing indoor accessibility and safety issues such as an inaccessible table height or unsafe loose rugs using LiDAR and real-time computer vision. We present findings from three studies: a formative study with 18 participants across five stakeholder groups to inform the design of RASSAR, a technical performance evaluation across ten homes demonstrating state-of-the-art performance, and a user study with six stakeholders. We close with a discussion of future AI-based indoor accessibility assessment tools, RASSAR's extensibility, and key application scenarios. | false | false | [
"Xia Su",
"Han Zhang",
"Kaiming Cheng",
"Jaewook Lee 0005",
"Qiaochu Liu",
"Wyatt Olson",
"Jon E. Froehlich"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2404.07479v1",
"icon": "paper"
}
] |
CHI | 2,024 | Reading Between the Pixels: Investigating the Barriers to Visualization Literacy | 10.1145/3613904.3642760 | In our current visual-centric digital age, the capability to interpret, understand, and produce visual representations of data —termed visualization literacy— is paramount. However, not everyone is adept at navigating this visual terrain. This paper explores the barriers that individuals who misread a visualization encounter, aiming to understand their specific mental gaps. Utilizing a mixed-method approach, we administered the Visualization Literacy Assessment Test (VLAT) to a group of 120 participants drawn from diverse demographic backgrounds, which provided us with 1774 task completions. We augmented the standard VLAT test to capture quantitative and qualitative data on participants' errors. We collected participant sketches and open-ended text about their analysis approach, providing insight into users' mental models and rationale. Our findings reveal that individuals who incorrectly answer visualization literacy questions often misread visual channels, confound chart labels with data values, or struggle to translate data-driven questions into visual queries. Recognizing and bridging visualization literacy gaps not only ensures inclusivity but also enhances the overall effectiveness of visual communication in our society. | false | false | [
"Carolina Nobre",
"Kehang Zhu",
"Eric Mörth",
"Hanspeter Pfister",
"Johanna Beyer"
] | [] | [] | [] |
CHI | 2,024 | Robot-Assisted Decision-Making: Unveiling the Role of Uncertainty Visualisation and Embodiment | 10.1145/3613904.3642911 | Robots are embodied agents that act under several sources of uncertainty. When assisting humans in a collaborative task, robots need to communicate their uncertainty to help inform decisions. In this study, we examine the use of visualising a robot's uncertainty in a high-stakes assisted decision-making task. In particular, we explore how different modalities of uncertainty visualisations (graphical display vs. the robot's embodied behaviour) and confidence levels (low, high, 100%) conveyed by a robot affect the human decision-making and perception during a collaborative task. Our results show that these visualisations significantly impact how participants arrive to their decision as well as how they perceive the robot's transparency across the different confidence levels. We highlight potential trade-offs and offer implications for robot-assisted decision-making. Our work contributes empirical insights on how humans make use of uncertainty visualisations conveyed by a robot in a critical robot-assisted decision-making scenario. | false | false | [
"Sarah Schömbs",
"Saumya Pareek",
"Jorge Goncalves",
"Wafa Johal"
] | [] | [] | [] |
CHI | 2,024 | SalChartQA: Question-driven Saliency on Information Visualisations | 10.1145/3613904.3642942 | Understanding the link between visual attention and users' information needs when visually exploring information visualisations is under-explored due to a lack of large and diverse datasets to facilitate these analyses. To fill this gap we introduce SalChartQA – a novel crowd-sourced dataset that uses the BubbleView interface to track user attention and a question-answering (QA) paradigm to induce different information needs in users. SalChartQA contains 74,340 answers to 6,000 questions on 3,000 visualisations. Informed by our analyses demonstrating the close correlation between information needs and visual saliency, we propose the first computational method to predict question-driven saliency on visualisations. Our method outperforms state-of-the-art saliency models for several metrics, such as the correlation coefficient and the Kullback-Leibler divergence. These results show the importance of information needs for shaping attentive behaviour and pave the way for new applications, such as task-driven optimisation of visualisations or explainable AI in chart question-answering. | false | false | [
"Yao Wang",
"Weitian Wang",
"Abdullah Abdelhafez",
"Mayar Elfares",
"Zhiming Hu",
"Mihai Bâce",
"Andreas Bulling"
] | [] | [] | [] |
CHI | 2,024 | SalienTime: User-driven Selection of Salient Time Steps for Large-Scale Geospatial Data Visualization | 10.1145/3613904.3642944 | The voluminous nature of geospatial temporal data from physical monitors and simulation models poses challenges to efficient data access, often resulting in cumbersome temporal selection experiences in web-based data portals. Thus, selecting a subset of time steps for prioritized visualization and pre-loading is highly desirable. Addressing this issue, this paper establishes a multifaceted definition of salient time steps via extensive need-finding studies with domain experts to understand their workflows. Building on this, we propose a novel approach that leverages autoencoders and dynamic programming to facilitate user-driven temporal selections. Structural features, statistical variations, and distance penalties are incorporated to make more flexible selections. User-specified priorities, spatial regions, and aggregations are used to combine different perspectives. We design and implement a web-based interface to enable efficient and context-aware selection of time steps and evaluate its efficacy and usability through case studies, quantitative evaluations, and expert interviews. | false | false | [
"Juntong Chen",
"Haiwen Huang",
"Huayuan Ye",
"Zhong Peng",
"Chenhui Li",
"Changbo Wang"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.03449v1",
"icon": "paper"
}
] |
CHI | 2,024 | SolarClub: Supporting Renewable Energy Communities through an Interactive Coordination System | 10.1145/3613904.3642449 | Energy communities are a key focus for governments around the world in support of more sustainable energy practices. However, interactive systems for supporting energy communities to coordinate around renewable energy resources are still lacking. We present SolarClub, a demand-shifting visualization system that supported households in coordinating their energy usage by booking energy-hungry activities when solar energy was available. We deployed SolarClub with four groups of neighbors (N=15) for a month. SolarClub successfully enabled neighbors to coordinate, even when some of those participating households were less flexible. While participants reported that SolarClub did not foster a feeling of community, it helped them empathize with their neighbors. Our findings demonstrate the potential of sensor- and visualization-based technology to help understand the relation between everyday practices and resource consumption, beyond individual eco-feedback. This work thus contributes to the development of a next generation of practices and technologies that support collective action for environmental sustainability. | false | false | [
"Georgia Panagiotidou",
"Enrico Costanza",
"Kyrill Potapov",
"Sonia Nkatha",
"Michael J. Fell",
"Farhan Samanani",
"Hannah Knox"
] | [] | [] | [] |
CHI | 2,024 | Taking ASCII Drawings Seriously: How Programmers Diagram Code | 10.1145/3613904.3642683 | Documentation in codebases facilitates knowledge transfer. But tools for programming are largely text-based, and so developers resort to creating ASCII diagrams—graphical artifacts approximated with text—to show visual ideas within their code. Despite real-world use, little is known about these diagrams. We interviewed nine authors of ASCII diagrams, learning why they use ASCII and what roles the diagrams play. We also compile and analyze a corpus of 507 ASCII diagrams from four open source projects, deriving a design space with seven dimensions that classify what these diagrams show, how they show it, and ways they connect to code. These investigations reveal that ASCII diagrams are professional artifacts used across many steps in the development lifecycle, diverse in role and content, and used because they visualize ideas within the variety of programming tools in use. Our findings highlight the importance of visualization within code and lay a foundation for future programming tools that tightly couple text and graphics. | false | false | [
"Devamardeep Hayatpur",
"Brian Hempel",
"Kathy Chen",
"William Duan",
"Philip J. Guo",
"Haijun Xia"
] | [
"HM"
] | [] | [] |
CHI | 2,024 | Talaria: Interactively Optimizing Machine Learning Models for Efficient Inference | 10.1145/3613904.3642628 | On-device machine learning (ML) moves computation from the cloud to personal devices, protecting user privacy and enabling intelligent user experiences. However, fitting models on devices with limited resources presents a major technical challenge: practitioners need to optimize models and balance hardware metrics such as model size, latency, and power. To help practitioners create efficient ML models, we designed and developed Talaria : a model visualization and optimization system. Talaria enables practitioners to compile models to hardware, interactively visualize model statistics, and simulate optimizations to test the impact on inference metrics. Since its internal deployment two years ago, we have evaluated Talaria using three methodologies: (1) a log analysis highlighting its growth of 800+ practitioners submitting 3,600+ models; (2) a usability survey with 26 users assessing the utility of 20 Talaria features; and (3) a qualitative interview with the 7 most active users about their experience using Talaria. | false | false | [
"Fred Hohman",
"Chaoqun Wang",
"Jinmook Lee",
"Jochen Görtler",
"Dominik Moritz",
"Jeffrey P. Bigham",
"Zhile Ren",
"Cecile Foret",
"Qi Shan",
"Xiaoyi Zhang 0006"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2404.03085v1",
"icon": "paper"
}
] |
CHI | 2,024 | That's Rough! Encoding Data into Roughness for Physicalization | 10.1145/3613904.3641900 | While visual channels (e.g., color, shape, size) have been explored for visualizing data in data physicalizations, there is a lack of understanding regarding how to encode data into physical material properties (e.g., roughness, hardness). This understanding is critical for ensuring data is correctly communicated and for potentially extending the channels and bandwidth available for encoding that data. We present a method to encode ordinal data into roughness, validated through user studies. In the first study, we identified just noticeable differences in perceived roughness from this method. In the second study, we 3D-printed proof of concepts for five different multivariate physicalizations using the model. These physicalizations were qualitatively explored (N=10) to understand people's comprehension and impressions of the roughness channel. Our findings suggest roughness may be used for certain types of data encoding, and the context of the data can impact how people interpret roughness mapping direction. | false | false | [
"Xiaojiao Du",
"Kadek Ananta Satriadi",
"Adam Drogemuller",
"Brandon J. Matthews",
"Ross Smith 0001",
"James A. Walsh",
"Andrew Cunningham"
] | [
"HM"
] | [] | [] |
CHI | 2,024 | The HaLLMark Effect: Supporting Provenance and Transparent Use of Large Language Models in Writing with Interactive Visualization | 10.1145/3613904.3641895 | The use of Large Language Models (LLMs) for writing has sparked controversy both among readers and writers. On one hand, writers are concerned that LLMs will deprive them of agency and ownership, and readers are concerned about spending their time on text generated by soulless machines. On the other hand, AI-assistance can improve writing as long as writers can conform to publisher policies, and as long as readers can be assured that a text has been verified by a human. We argue that a system that captures the provenance of interaction with an LLM can help writers retain their agency, conform to policies, and communicate their use of AI to publishers and readers transparently. Thus we propose HaLLMark, a tool for visualizing the writer's interaction with the LLM. We evaluated HaLLMark with 13 creative writers, and found that it helped them retain a sense of control and ownership of the text. | false | false | [
"Md. Naimul Hoque",
"Tasfia Mashiat",
"Bhavya Ghai",
"Cecilia D. Shelton",
"Fanny Chevalier",
"Kari Kraus",
"Niklas Elmqvist"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2311.13057v4",
"icon": "paper"
}
] |
CHI | 2,024 | To Cut or Not To Cut? A Systematic Exploration of Y-Axis Truncation | 10.1145/3613904.3642102 | Y-axis truncation is a well-known, much-debated visualization practice. Our work complements existing empirical work by providing a systematic analysis of y-axis truncation on grouped bar charts. Drawing upon theoretical frameworks such as Algebraic Visualization Design, we examine how structure-preserving modifications to visualization affect user performance by systematically dividing the space of possible truncations according to their monotonicity and the type of relations in the underlying data. Our results demonstrate that for comparing and estimating the difference between the lengths of two bars, truncating the y-axis does not affect task performance. For comparing or estimating the relative growth between two bars, truncating monotonically has similar performance to no truncation, while truncating non-monotonically is very likely to impair performance. We discuss possible extensions of our work and recommendations for y-axis truncation. All supplementary materials are available at https://osf.io/k4hjd/?view_only=008b087fc3d94be7ba0ce7aea95012a7. | false | false | [
"Sheng Long",
"Matthew Kay 0003"
] | [] | [] | [] |
CHI | 2,024 | Understanding Reader Takeaways in Thematic Maps Under Varying Text, Detail, and Spatial Autocorrelation | 10.1145/3613904.3642132 | Maps are crucial in conveying geospatial data in diverse contexts such as news and scientific reports. This research, utilizing thematic maps, probes deeper into the underexplored intersection of text framing and map types in influencing map interpretation. In this work, we conducted experiments to evaluate how textual detail and semantic content variations affect the quality of insights derived from map examination. We also explored the influence of explanatory annotations across different map types (e.g., choropleth, hexbin, isarithmic), base map details, and changing levels of spatial autocorrelation in the data. From two online experiments with N = 103 participants, we found that annotations, their specific attributes, and map type used to present the data significantly shape the quality of takeaways. Notably, we found that the effectiveness of annotations hinges on their contextual integration. These findings offer valuable guidance to the visualization community for crafting impactful thematic geospatial representations. | false | false | [
"Arlen Fan",
"Fan Lei",
"Michelle Mancenido",
"Alan M. MacEachren",
"Ross Maciejewski"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2403.08260v1",
"icon": "paper"
}
] |
CHI | 2,024 | V-FRAMER: Visualization Framework for Mitigating Reasoning Errors in Public Policy | 10.1145/3613904.3642750 | Existing data visualization design guidelines focus primarily on constructing grammatically-correct visualizations that faithfully convey the values and relationships in the underlying data. However, a designer may create a grammatically-correct visualization that still leaves audiences susceptible to reasoning misleaders, e.g. by failing to normalize data or using unrepresentative samples. Reasoning misleaders are especially pernicious when presenting public policy data, where data-driven decisions can affect public health, safety, and economic development. Through textual analysis, a formative evaluation, and iterative design with 19 policy communicators, we construct an actionable visualization design framework, V-FRAMER, that effectively synthesizes ways of mitigating reasoning misleaders. We discuss important design considerations for frameworks like V-FRAMER, including using concrete examples to help designers understand reasoning misleaders, and using a hierarchical structure to support example-based accessing. We further describe V-FRAMER's congruence with current practice and how practitioners might integrate the framework into their existing workflows. Related materials available at: https://osf.io/q3uta/. | false | false | [
"Lily W. Ge",
"Matthew W. Easterday",
"Matthew Kay 0001",
"Evanthia Dimara",
"Peter Cheng",
"Steven L. Franconeri"
] | [] | [] | [] |
CHI | 2,024 | VAID: Indexing View Designs in Visual Analytics System | 10.1145/3613904.3642237 | Visual analytics (VA) systems have been widely used in various application domains. However, VA systems are complex in design, which imposes a serious problem: although the academic community constantly designs and implements new designs, the designs are difficult to query, understand, and refer to by subsequent designers. To mark a major step forward in tackling this problem, we index VA designs in an expressive and accessible way, transforming the designs into a structured format. We first conducted a workshop study with VA designers to learn user requirements for understanding and retrieving professional designs in VA systems. Thereafter, we came up with an index structure VAID to describe advanced and composited visualization designs with comprehensive labels about their analytical tasks and visual designs. The usefulness of VAID was validated through user studies. Our work opens new perspectives for enhancing the accessibility and reusability of professional visualization designs. | false | false | [
"Lu Ying",
"Aoyu Wu",
"Haotian Li 0001",
"Zikun Deng",
"Ji Lan",
"Jiang Wu",
"Yong Wang 0021",
"Huamin Qu",
"Dazhen Deng",
"Yingcai Wu"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2211.02567v2",
"icon": "paper"
}
] |
CHI | 2,024 | VisTorch: Interacting with Situated Visualizations using Handheld Projectors | 10.1145/3613904.3642857 | Spatial data is best analyzed in situ, but existing mixed reality technologies can be bulky, expensive, or unsuitable for collaboration. We present VisTorch: a handheld device for projected situated analytics consisting of a pico-projector, a multi-spectrum camera, and a touch surface. VisTorch enables viewing charts situated in physical space by simply pointing the device at a surface to reveal visualizations in that location. We evaluated the approach using both a user study and an expert review. In the former, we asked 20 participants to first organize charts in space and then refer to these charts to answer questions. We observed three spatial and one temporal pattern in participant analyses. In the latter, four experts—a museum designer, a statistical software developer, a theater stage designer, and an environmental educator—utilized VisTorch to derive practical usage scenarios. Results from our study showcase the utility of situated visualizations for memory and recall. | false | false | [
"Biswaksen Patnaik",
"Huaishu Peng",
"Niklas Elmqvist"
] | [] | [] | [] |
CHI | 2,024 | Visual Cues for Data Analysis Features Amplify Challenges for Blind Spreadsheet Users | 10.1145/3613904.3642753 | Spreadsheets are widely used for storing, manipulating, analyzing, and visualizing data. Features such as conditional formatting, formulas, sorting, and filtering play an important role when understanding and analyzing data in spreadsheets. They employ visual cues, but we have little understanding of the experiences of blind screen reader (SR) users with such features. We conducted a study with 12 blind SR users to gain insights into their challenges, workarounds, and strategies in understanding and extracting information from a spreadsheet consisting of multiple tables that incorporated data analysis features. We identified five factors that impact blind SR users' experiences: cognitive overload, time-information trade-off, lack of awareness and expertise, inadequate system feedback, and delayed and absent SR responses. Drawn from these findings, we discuss design suggestions and future research agenda to improve SR users' spreadsheet experiences. | false | false | [
"Minoli Perera",
"Bongshin Lee",
"Eun Kyoung Choe",
"Kim Marriott"
] | [] | [] | [] |