Conference | Year | Title | DOI | Abstract | Accessible | Early | AuthorNames-Deduped | Award | Resources | ResourceLinks |
|---|---|---|---|---|---|---|---|---|---|---|
EuroVis | 2022 | Level of Detail Exploration of Electronic Transition Ensembles using Hierarchical Clustering | 10.1111/cgf.14544 | We present a pipeline for the interactive visual analysis and exploration of molecular electronic transition ensembles. Each ensemble member is specified by a molecular configuration, the charge transfer between two molecular states, and a set of physical properties. The pipeline is targeted towards theoretical chemists, supporting them in comparing and characterizing electronic transitions by combining automatic and interactive visual analysis. A quantitative feature vector characterizing the electron charge transfer serves as the basis for hierarchical clustering as well as for the visual representations. The interface for the visual exploration consists of four components. A dendrogram provides an overview of the ensemble. It is augmented with a level of detail glyph for each cluster. A scatterplot using dimensionality reduction provides a second visualization, highlighting ensemble outliers. Parallel coordinates show the correlation with physical parameters. A spatial representation of selected ensemble members supports an in-depth inspection of transitions in a form that is familiar to chemists. All views are linked and can be used to filter and select ensemble members. The usefulness of the pipeline is shown in three different case studies. | false | false | [
"Signe Sidwall Thygesen",
"Talha Bin Masood",
"Mathieu Linares",
"Vijay Natarajan",
"Ingrid Hotz"
] | [] | [] | [] |
EuroVis | 2022 | Leveraging Analysis History for Improved In Situ Visualization Recommendation | 10.1111/cgf.14529 | Existing visualization recommendation systems commonly rely on a single snapshot of a dataset to suggest visualizations to users. However, exploratory data analysis involves a series of related interactions with a dataset over time rather than one-off analytical steps. We present Solas, a tool that tracks the history of a user's data analysis, models their interest in each column, and uses this information to provide visualization recommendations, all within the user's native analytical environment. Recommending with analysis history improves visualizations in three primary ways: task-specific visualizations use the provenance of data to provide sensible encodings for common analysis functions, aggregated history is used to rank visualizations by our model of a user's interest in each column, and column data types are inferred based on applied operations. We present a usage scenario and a user evaluation demonstrating how leveraging analysis history improves in situ visualization recommendations on real-world analysis tasks. | false | false | [
"Will Epperson",
"Doris Jung Lin Lee",
"Leijie Wang",
"Kunal Agarwal",
"Aditya G. Parameswaran",
"Dominik Moritz",
"Adam Perer"
] | [] | [] | [] |
EuroVis | 2022 | LineageD: An Interactive Visual System for Plant Cell Lineage Assignments based on Correctable Machine Learning | 10.1111/cgf.14533 | We describe LineageD—a hybrid web-based system to predict, visualize, and interactively adjust plant embryo cell lineages. Currently, plant biologists explore the development of an embryo and its hierarchical cell lineage manually, based on a 3D dataset that represents the embryo status at one point in time. This human decision-making process, however, is time-consuming, tedious, and error-prone due to the lack of integrated graphical support for specifying the cell lineage. To fill this gap, we developed a new system to support the biologists in their tasks using an interactive combination of 3D visualization, abstract data visualization, and correctable machine learning to modify the proposed cell lineage. We use existing manually established cell lineages to obtain a neural network model. We then allow biologists to use this model to repeatedly predict assignments of a single cell division stage. After each hierarchy level prediction, we allow them to interactively adjust the machine learning based assignment, which we then integrate into the pool of verified assignments for further predictions. In addition to building the hierarchy this way in a bottom-up fashion, we also allow users to divide the whole embryo and create the hierarchy tree in a top-down fashion for a few steps, improving the ML-based assignments by reducing the potential for wrong predictions. We visualize the continuously updated embryo and its hierarchical development using both 3D spatial and abstract tree representations, together with information about the model's confidence and spatial properties. We conducted case study validations with five expert biologists to explore the utility of our approach and to assess the potential for LineageD to be used in their daily workflow. We found that the visualizations of both 3D representations and abstract representations help with decision making and that the hierarchy tree top-down building approach can reduce assignment errors in real practice. | false | false | [
"Jiayi Hong",
"Alain Trubuil",
"Tobias Isenberg 0001"
] | [] | [] | [] |
EuroVis | 2022 | LMFingerprints: Visual Explanations of Language Model Embedding Spaces through Layerwise Contextualization Scores | 10.1111/cgf.14541 | Language models, such as BERT, construct multiple, contextualized embeddings for each word occurrence in a corpus. Understanding how the contextualization propagates through the model's layers is crucial for deciding which layers to use for a specific analysis task. Currently, most embedding spaces are explained by probing classifiers; however, some findings remain inconclusive. In this paper, we present LMFingerprints, a novel scoring-based technique for the explanation of contextualized word embeddings. We introduce two categories of scoring functions, which measure (1) the degree of contextualization, i.e., the layerwise changes in the embedding vectors, and (2) the type of contextualization, i.e., the captured context information. We integrate these scores into an interactive explanation workspace. By combining visual and verbal elements, we provide an overview of contextualization in six popular transformer-based language models. We evaluate hypotheses from the domain of computational linguistics, and our results not only confirm findings from related work but also reveal new aspects about the information captured in the embedding spaces. For instance, we show that while numbers are poorly contextualized, stopwords have an unexpectedly high contextualization in the models' upper layers, where their neighborhoods shift from similar functionality tokens to tokens that contribute to the meaning of the surrounding sentences. | false | false | [
"Rita Sevastjanova",
"Aikaterini-Lida Kalouli",
"Christin Beck",
"Hanna Hauptmann",
"Mennatallah El-Assady"
] | [] | [] | [] |
EuroVis | 2022 | LOOPS: LOcally Optimized Polygon Simplification | 10.1111/cgf.14546 | Displaying polygonal vector data is essential in various application scenarios such as geometry visualization, vector graphics rendering, CAD drawing and in particular geographic, or cartographic visualization. Dealing with static polygonal datasets that have a large scale and are highly detailed poses several challenges to the efficient and adaptive display of polygons in interactive geographic visualization applications. For linear vector data, only recently a GPU-based level-of-detail (LOD) polyline simplification and rendering approach has been presented which can perform locally-adaptive LOD visualization of large-scale line datasets interactively. However, locally optimized LOD simplification and interactive display of large-scale polygon data, consisting of filled vector line loops, remains still a challenge, specifically in 3D geographic visualizations where varying LOD over a scene is necessary. Our solution to this challenge is a novel technique for locally-optimized simplification and visualization of 2D polygons over a 3D terrain which features a parallelized point-inside-polygon testing mechanism. Our approach is capable of employing any simplification algorithm that sequentially removes vertices such as Douglas-Peucker and Wang-Müller. Moreover, we generalized our technique to also visualize polylines in order to have a unified method for displaying both data types. The results and performance analysis show that our new algorithm can handle large datasets containing polygons composed of millions of segments in real time, and has a lower memory demand and higher performance in comparison to prior methods of line simplification and visualization. | false | false | [
"Alireza Amiraghdam",
"Alexandra Diehl",
"Renato Pajarola"
] | [] | [] | [] |
EuroVis | 2022 | Misinformed by Visualization: What Do We Learn From Misinformative Visualizations? | 10.1111/cgf.14559 | Data visualization is powerful in persuading an audience. However, when it is done poorly or maliciously, a visualization may become misleading or even deceiving. Visualizations give further strength to the dissemination of misinformation on the Internet. The visualization research community has long been aware of visualizations that misinform the audience, mostly associated with the terms “lie” and “deceptive.” Still, these discussions have focused only on a handful of cases. To better understand the landscape of misleading visualizations, we open-coded over one thousand real-world visualizations that have been reported as misleading. From these examples, we discovered 74 types of issues and formed a taxonomy of misleading elements in visualizations. We found four directions that the research community can follow to widen the discussion on misleading visualizations: (1) informal fallacies in visualizations, (2) exploiting conventions and data literacy, (3) deceptive tricks in uncommon charts, and (4) understanding the designers' dilemma. This work lays the groundwork for these research directions, especially in understanding, detecting, and preventing them. | false | false | [
"Leo Yu-Ho Lo",
"Ayush Gupta",
"Kento Shigyo",
"Aoyu Wu",
"Enrico Bertini",
"Huamin Qu"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2204.09548v1",
"icon": "paper"
}
] |
EuroVis | 2022 | Mobile and Multimodal? A Comparative Evaluation of Interactive Workplaces for Visual Data Exploration | 10.1111/cgf.14551 | Mobile devices are increasingly being used in the workplace. The combination of touch, pen, and speech interaction with mobile devices is considered particularly promising for a more natural experience. However, we do not yet know how everyday work with multimodal data visualizations on a mobile device differs from working in the standard WIMP workplace setup. To address this gap, we created a visualization system for social scientists, with a WIMP interface for desktop PCs, and a multimodal interface for tablets. The system provides visualizations to explore spatio-temporal data with consistent WIMP and multimodal interaction techniques. To investigate how the different combinations of devices and interaction modalities affect the performance and experience of domain experts in a work setting, we conducted an experiment with 16 social scientists where they carried out a series of tasks with both interfaces. Participants were significantly faster and slightly more accurate on the WIMP interface. They solved the tasks with different strategies according to the interaction modalities available. The pen was the most used and appreciated input modality. Most participants preferred the multimodal setup and could imagine using it at work. We present our findings, together with their implications for the interaction design of data visualizations. | false | false | [
"Gabriela Molina León",
"Michael Lischka",
"W. Luo",
"Andreas Breiter"
] | [] | [] | [] |
EuroVis | 2022 | ModelWise: Interactive Model Comparison for Model Diagnosis, Improvement and Selection | 10.1111/cgf.14525 | Model comparison is an important process to facilitate model diagnosis, improvement, and selection when multiple models are developed for a classification task. It involves careful comparison concerning model performance and interpretation. Current visual analytics solutions often ignore the feature selection process. They either do not support detailed analysis of multiple multi-class classifiers or rely on feature analysis alone to interpret model results. Understanding how different models make classification decisions, especially classification disagreements of the same instances, requires a deeper model understanding. We present ModelWise, a visual analytics method to compare multiple multi-class classifiers in terms of model performance, feature space, and model explanation. ModelWise adapts visualizations with rich interactions to support multiple workflows to achieve model diagnosis, improvement, and selection. It considers feature subspaces generated for use in different models and improves model understanding by model explanation. We demonstrate the usability of ModelWise with two case studies, one with a small exemplar dataset and another developed with a machine learning expert with real-world perioperative data. | false | false | [
"Linhao Meng",
"Stef van den Elzen",
"Anna Vilanova"
] | [] | [] | [] |
EuroVis | 2022 | Nested Papercrafts for Anatomical and Biological Edutainment | 10.1111/cgf.14561 | In this paper, we present a new workflow for the computer-aided generation of physicalizations, addressing nested configurations in anatomical and biological structures. Physicalizations are an important component of anatomical and biological education and edutainment. However, existing approaches have mainly revolved around creating data sculptures through digital fabrication. Only a few recent works proposed computer-aided pipelines for generating sculptures, such as papercrafts, with affordable and readily available materials. Papercraft generation remains a challenging topic by itself. Yet, anatomical and biological applications pose additional challenges, such as reconstruction complexity and insufficiency to account for multiple, nested structures—often present in anatomical and biological structures. Our workflow comprises the following steps: (i) define the nested configuration of the model and detect its levels, (ii) calculate the viewpoint that provides optimal, unobstructed views on inner levels, (iii) perform cuts on the outer levels to reveal the inner ones based on the viewpoint selection, (iv) estimate the stability of the cut papercraft to ensure a reliable outcome, (v) generate textures at each level, as a smart visibility mechanism that provides additional information on the inner structures, and (vi) unfold each textured mesh guaranteeing reconstruction. Our novel approach exploits the interactivity of nested papercraft models for edutainment purposes. | false | false | [
"Marwin Schindler",
"Thorsten Korpitsch",
"Renata G. Raidou",
"Hsiang-Yun Wu"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2204.10901v1",
"icon": "paper"
}
] |
EuroVis | 2022 | Neural Flow Map Reconstruction | 10.1111/cgf.14549 | In this paper we present a reconstruction technique for the reduction of unsteady flow data based on neural representations of time-varying vector fields. Our approach is motivated by the large amount of data typically generated in numerical simulations, and in turn the types of data that domain scientists can generate in situ that are compact, yet useful, for post hoc analysis. One type of data commonly acquired during simulation are samples of the flow map, where a single sample is the result of integrating the underlying vector field for a specified time duration. In our work, we treat a collection of flow map samples for a single dataset as a meaningful, compact, and yet incomplete, representation of unsteady flow, and our central objective is to find a representation that enables us to best recover arbitrary flow map samples. To this end, we introduce a technique for learning implicit neural representations of time-varying vector fields that are specifically optimized to reproduce flow map samples sparsely covering the spatiotemporal domain of the data. We show that, despite aggressive data reduction, our optimization problem — learning a function-space neural network to reproduce flow map samples under a fixed integration scheme — leads to representations that demonstrate strong generalization, both in the field itself, and using the field to approximate the flow map. Through quantitative and qualitative analysis across different datasets we show that our approach is an improvement across a variety of data reduction methods, and across a variety of measures ranging from improved vector fields, flow maps, and features derived from the flow map. | false | false | [
"Saroj Sahoo",
"Y. Lu",
"Matthew Berger"
] | [] | [] | [] |
EuroVis | 2022 | Of Course it's Political! A Critical Inquiry into Underemphasized Dimensions in Civic Text Visualization | 10.1111/cgf.14518 | Recent developments in critical information visualization have brought the field's attention to political, feminist, ethical, and rhetorical aspects of data visualization. However, less work has explored the interplay between design decisions and political ramifications—structures of authority, means of representation, etc. In this paper, we build upon these critical perspectives and highlight the political aspect of civic text visualization especially in the context of democratic decision-making. Based on a critical analysis of survey papers about text visualization in general, followed by a review on the status quo of text visualization in civics, we argue that civic text visualization inherits an exclusively analytic framing. This framing leads to a series of issues and challenges in the fundamentally political context of civics, such as misinterpretation of data, missing minority voices, and excluding the public from decision making processes. To span this gap between political context and analytic framing, we provide a series of two-pole conceptual dimensions, such as from singular user to multiple relationships, and from complexity to inclusivity of visualization design. For each dimension, we discuss how the tensions between these poles can help surface the political ramifications of design decisions in civic text visualization. These dimensions can thus help visualization researchers, designers, and practitioners attend more intentionally to these political aspects and inspire their design choices. We conclude by suggesting that these dimensions may be useful for visualization design across a variety of application domains, beyond civic text visualization. | false | false | [
"Eric P. S. Baumer",
"Mahmood Jasim",
"Ali Sarvghad",
"Narges Mahyar"
] | [
"BP"
] | [] | [] |
EuroVis | 2022 | Optimizing Grid Layouts for Level-of-Detail Exploration of Large Data Collections | 10.1111/cgf.14537 | This paper introduces an optimization approach for generating grid layouts from large data collections such that they are amenable to level-of-detail presentation and exploration. Classic (flat) grid layouts visually do not scale to large collections, yielding overwhelming numbers of tiny member representations. The proposed local search-based progressive optimization scheme generates hierarchical grids: leaves correspond to one grid cell and represent one member, while inner nodes cover a quadratic range of cells and convey an aggregate of contained members. The scheme is solely based on pairwise distances and jointly optimizes for homogeneity within inner nodes and across grid neighbors. The generated grids allow presenting and flexibly exploring the whole data collection with arbitrary local granularity. Diverse use cases featuring large data collections exemplify the application: stock market predictions from a Black-Scholes model, channel structures in soil from Markov chain Monte Carlo, and image collections with feature vectors from neural network classification models. The paper presents feedback by a domain scientist, compares against previous approaches, and demonstrates visual and computational scalability to a million members, surpassing classic grid layout techniques by orders of magnitude. | false | false | [
"Steffen Frey"
] | [] | [] | [] |
EuroVis | 2022 | Reusing Interactive Analysis Workflows | 10.1111/cgf.14528 | Interactive visual analysis has many advantages, but an important disadvantage is that analysis processes and workflows cannot be easily stored and reused. This is in contrast to code-based analysis workflows, which can simply be run on updated datasets, and adapted when necessary. In this paper, we introduce methods to capture workflows in interactive visualization systems for different interactions such as selections, filters, categorizing/grouping, labeling, and aggregation. These workflows can then be applied to updated datasets, making interactive visualization sessions reusable. We demonstrate this specification using an interactive visualization system that tracks interaction provenance, and allows generating workflows from the recorded actions. The system can then be used to compare different versions of datasets and apply workflows to them. Finally, we introduce a Python library that can load workflows and apply them to updated datasets directly in a computational notebook, providing a seamless bridge between computational workflows and interactive visualization tools. | false | false | [
"Kiran Gadhave",
"Zach Cutler",
"Alexander Lex"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/udqjr",
"icon": "paper"
}
] |
EuroVis | 2022 | Rich Screen Reader Experiences for Accessible Data Visualization | 10.1111/cgf.14519 | Current web accessibility guidelines ask visualization designers to support screen readers via basic non-visual alternatives like textual descriptions and access to raw data tables. But charts do more than summarize data or reproduce tables; they afford interactive data exploration at varying levels of granularity—from fine-grained datum-by-datum reading to skimming and surfacing high-level trends. In response to the lack of comparable non-visual affordances, we present a set of rich screen reader experiences for accessible data visualization and exploration. Through an iterative co-design process, we identify three key design dimensions for expressive screen reader accessibility: structure, or how chart entities should be organized for a screen reader to traverse; navigation, or the structural, spatial, and targeted operations a user might perform to step through the structure; and, description, or the semantic content, composition, and verbosity of the screen reader's narration. We operationalize these dimensions to prototype screen-reader-accessible visualizations that cover a diverse range of chart types and combinations of our design dimensions. We evaluate a subset of these prototypes in a mixed-methods study with 13 blind and visually impaired readers. Our findings demonstrate that these designs help users conceptualize data spatially, selectively attend to data of interest at different levels of granularity, and experience control and agency over their data analysis process. An accessible HTML version of this paper is available at: http://vis.csail.mit.edu/pubs/rich-screen-reader-vis-experiences. | false | false | [
"Jonathan Zong",
"Crystal Lee",
"Alan Lundgard",
"JiWoong Jang",
"Daniel Hajas",
"Arvind Satyanarayan"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2205.04917v1",
"icon": "paper"
}
] |
EuroVis | 2022 | Seeing Through Sounds: Mapping Auditory Dimensions to Data and Charts for People with Visual Impairments | 10.1111/cgf.14523 | Sonification can be an effective medium for people with visual impairments to understand data in visualizations. However, there are no universal design principles that apply to various charts that encode different data types. Towards generalizable principles, we conducted an exploratory experiment to assess how different auditory channels (e.g., pitch, volume) impact the data and visualization perception among people with visual impairments. In our experiment, participants evaluated the intuitiveness and accuracy of the mapping of auditory channels on different data and chart types. We found that participants rated pitch to be the most intuitive, while the number of tappings and the length of sounds yielded the most accurate perception in decoding data. We study how audio channels can intuitively represent different charts and demonstrate that data-level perception might not directly transfer to chart-level perception as participants reflect on visual aspects of the charts while listening to audio. We conclude by discussing how future experiments can be designed to establish a robust ranking for creating audio charts. | false | false | [
"Ruobin Wang",
"Crescentia Jung",
"Yea-Seul Kim"
] | [] | [] | [] |
EuroVis | 2022 | SimilarityNet: A Deep Neural Network for Similarity Analysis Within Spatio-temporal Ensembles | 10.1111/cgf.14548 | Latent feature spaces of deep neural networks are frequently used to effectively capture semantic characteristics of a given dataset. In the context of spatio-temporal ensemble data, the latent space represents a similarity space without the need of an explicit definition of a field similarity measure. Commonly, these networks are trained for specific data within a targeted application. We instead propose a general training strategy in conjunction with a deep neural network architecture, which is readily applicable to any spatio-temporal ensemble data without re-training. The latent-space visualization allows for a comprehensive visual analysis of patterns and temporal evolution within the ensemble. With the use of SimilarityNet, we are able to perform similarity analyses on large-scale spatio-temporal ensembles in less than a second on commodity consumer hardware. We qualitatively compare our results to visualizations with established field similarity measures to document the interpretability of our latent space visualizations and show that they are feasible for an in-depth basic understanding of the underlying temporal evolution of a given ensemble. | false | false | [
"Karim Huesmann",
"Lars Linsen"
] | [] | [] | [] |
EuroVis | 2022 | Six methods for transforming layered hypergraphs to apply layered graph layout algorithms | 10.1111/cgf.14538 | Hypergraphs are a generalization of graphs in which edges (hyperedges) can connect more than two vertices—as opposed to ordinary graphs where edges involve only two vertices. Hypergraphs are a fairly common data structure but there is little consensus on how to visualize them. To optimize a hypergraph drawing for readability, we need a layout algorithm. Common graph layout algorithms only consider ordinary graphs and do not take hyperedges into account. We focus on layered hypergraphs, a particular class of hypergraphs that, like layered graphs, assigns every vertex to a layer, and the vertices in a layer are drawn aligned on a linear axis with the axes arranged in parallel. In this paper, we propose a general method to apply layered graph layout algorithms to layered hypergraphs. We introduce six different transformations for layered hypergraphs. The choice of transformation affects the subsequent graph layout algorithm in terms of computational performance and readability of the results. Thus, we perform a comparative evaluation of these transformations in terms of number of crossings, edge length, and impact on performance. We also provide two case studies showing how our transformations can be applied to real-life use cases. A copy of this paper with all appendices and supplemental material is available at osf.io/grvwu. | false | false | [
"Sara Di Bartolomeo",
"Alexis Pister",
"Paolo Buono",
"Catherine Plaisant",
"Cody Dunne",
"Jean-Daniel Fekete"
] | [] | [] | [] |
EuroVis | 2022 | Streaming Approach to In Situ Selection of Key Time Steps for Time-Varying Volume Data | 10.1111/cgf.14542 | Key time steps selection, i.e., selecting a subset of most representative time steps, is essential for effective and efficient scientific visualization of large time-varying volume data. In particular, as computer simulations continue to grow in size and complexity, they often generate output that exceeds both the available storage capacity and bandwidth for transferring results to storage, making it indispensable to save only a subset of time steps. At the same time, this subset must be chosen so that it is highly representative, to facilitate post-processing and reconstruction with high fidelity. The key time steps selection problem is especially challenging in the in situ setting, where we can only process data in one pass in an online streaming fashion, using a small amount of main memory and fast computation. In this paper, we formulate the problem as that of optimal piece-wise linear interpolation. We first apply a method from numerical linear algebra to compute linear interpolation solutions and their errors in an online streaming fashion. Using that method as a building block, we can obtain a global optimal solution for the piece-wise linear interpolation problem via a standard dynamic programming (DP) algorithm. However, this approach needs to process the time steps in multiple passes and is too slow for the in situ setting. To address this issue, we introduce a novel approximation algorithm, which processes time steps in one pass in an online streaming fashion, with very efficient computing time and main memory space both in theory and in practice. The algorithm is suitable for the in situ setting. Moreover, we prove that our algorithm, which is based on a greedy update rule, has strong theoretical guarantees on the approximation quality and the number of time steps stored. To the best of our knowledge, this is the first algorithm suitable for in situ key time steps selection with such theoretical guarantees, and is the main contribution of this paper. Experiments demonstrate the efficacy of our new techniques. | false | false | [
"Mengxi Wu",
"Yi-Jen Chiang",
"Christopher Musco"
] | [] | [] | [] |
EuroVis | 2022 | SurfNet: Learning Surface Representations via Graph Convolutional Network | 10.1111/cgf.14526 | For scientific visualization applications, understanding the structure of a single surface (e.g., stream surface, isosurface) and selecting representative surfaces play a crucial role. In response, we propose SurfNet, a graph-based deep learning approach for representing a surface locally at the node level and globally at the surface level. By treating surfaces as graphs, we leverage a graph convolutional network to learn node embedding on a surface. To make the learned embedding effective, we consider various pieces of information (e.g., position, normal, velocity) for network input and investigate multiple losses. Furthermore, we apply dimensionality reduction to transform the learned embeddings into 2D space for understanding and exploration. To demonstrate the effectiveness of SurfNet, we evaluate the embeddings in node clustering (node-level) and surface selection (surface-level) tasks. We compare SurfNet against state-of-the-art node embedding approaches and surface selection methods. We also demonstrate the superiority of SurfNet by comparing it against a spectral-based mesh segmentation approach. The results show that SurfNet can learn better representations at the node and surface levels with less training time and fewer training samples while generating comparable or better clustering and selection results. | false | false | [
"Jun Han 0010",
"Chaoli Wang 0001"
] | [] | [] | [] |
EuroVis | 2,022 | Trends & Opportunities in Visualization for Physiology: A Multiscale Overview | 10.1111/cgf.14575 | Combining elements of biology, chemistry, physics, and medicine, the science of human physiology is complex and multifaceted. In this report, we offer a broad and multiscale perspective on key developments and challenges in visualization for physiology. Our literature search process combined standard methods with a state‐of‐the‐art visual analysis search tool to identify surveys and representative individual approaches for physiology. Our resulting taxonomy sorts literature on two levels. The first level categorizes literature according to organizational complexity and ranges from molecule to organ. A second level identifies any of three high‐level visualization tasks within a given work: exploration, analysis, and communication. The findings of this report may be used by visualization researchers to understand the overarching trends, challenges, and opportunities in visualization for physiology and to provide a foundation for discussion and future research directions in this area. | false | false | [
"Laura A. Garrison",
"Ivan Kolesár",
"Ivan Viola",
"Helwig Hauser",
"Stefan Bruckner"
] | [] | [] | [] |
EuroVis | 2,022 | Urban Rhapsody: Large-scale exploration of urban soundscapes | 10.1111/cgf.14534 | Noise is one of the primary quality‐of‐life issues in urban environments. In addition to annoyance, noise negatively impacts public health and educational performance. While low‐cost sensors can be deployed to monitor ambient noise levels at high temporal resolutions, the amount of data they produce and the complexity of these data pose significant analytical challenges. One way to address these challenges is through machine listening techniques, which are used to extract features in attempts to classify the source of noise and understand temporal patterns of a city's noise situation. However, the overwhelming number of noise sources in the urban environment and the scarcity of labeled data makes it nearly impossible to create classification models with large enough vocabularies that capture the true dynamism of urban soundscapes. In this paper, we first identify a set of requirements in the yet unexplored domain of urban soundscape exploration. To satisfy the requirements and tackle the identified challenges, we propose Urban Rhapsody, a framework that combines state‐of‐the‐art audio representation, machine learning and visual analytics to allow users to interactively create classification models, understand noise patterns of a city, and quickly retrieve and label audio excerpts in order to create a large high‐precision annotated database of urban sound recordings. We demonstrate the tool's utility through case studies performed by domain experts using data generated over the five‐year deployment of a one‐of‐a‐kind sensor network in New York City. | false | false | [
"João Rulff",
"Fabio Miranda 0001",
"Maryam Hosseini",
"Marcos Lage",
"Mark Cartwright",
"Graham Dove",
"Juan Pablo Bello",
"Cláudio T. Silva"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2205.13064v1",
"icon": "paper"
}
] |
EuroVis | 2,022 | Vessel Maps: A Survey of Map-Like Visualizations of the Cardiovascular System | 10.1111/cgf.14576 | Map‐like visualizations of patient‐specific cardiovascular structures have been applied in numerous medical application contexts. The term map‐like alludes to the characteristics these depictions share with cartographic maps: they show the spatial relations of data attributes from a single perspective, they abstract the underlying data to increase legibility, and they facilitate tasks centered around overview, navigation, and comparison. A vast landscape of techniques exists to derive such maps from heterogeneous data spaces. Yet, they all target similar purposes within disease diagnostics, treatment, or research and they face coinciding challenges in mapping the spatial component of a treelike structure to a legible layout. In this report, we present a framing to unify these approaches. On the one hand, we provide a classification of the existing literature according to the data spaces such maps can be derived from. On the other hand, we view the approaches in light of the manifold requirements medical practitioners and researchers have in their efforts to combat the ever‐growing burden of cardiovascular disease. Based on these two perspectives, we offer recommendations for the design of map‐like visualizations of the cardiovascular system. | false | false | [
"Pepe Eulzer",
"Monique Meuschke",
"Gabriel Mistelbauer",
"Kai Lawonn"
] | [] | [] | [] |
EuroVis | 2,022 | VIBE: A Design Space for VIsual Belief Elicitation in Data Journalism | 10.1111/cgf.14556 | The process of forming, expressing, and updating beliefs from data plays a critical role in data‐driven decision making. Effectively eliciting those beliefs has potential for high impact across a broad set of applications, including increased engagement with data and visualizations, personalizing visualizations, and understanding users' visual reasoning processes, which can inform improved data analysis and decision making strategies (e.g., via bias mitigation). Recently, belief‐driven visualizations have been used to elicit and visualize readers' beliefs in a visualization alongside data in narrative media and data journalism platforms such as the New York Times and FiveThirtyEight. However, there is little research on different aspects that constitute designing an effective belief‐driven visualization. In this paper, we synthesize a design space for belief‐driven visualizations based on formative and summative interviews with designers and visualization experts. The design space includes 7 main design considerations, beginning with an assumed data set, then structured according to: from who, why, when, what, and how the belief is elicited, and the possible feedback about the belief that may be provided to the visualization viewer. The design space covers considerations such as the type of data parameter with optional uncertainty being elicited, interaction techniques, and visual feedback, among others. Finally, we describe how more than 24 existing belief‐driven visualizations from popular news media outlets span the design space and discuss trends and opportunities within this space. | false | false | [
"Shambhavi Mahajan",
"Bonnie Chen",
"Alireza Karduni",
"Yea-Seul Kim",
"Emily Wall"
] | [] | [] | [] |
EuroVis | 2,022 | Visual Analytics of Contact Tracing Policy Simulations During an Emergency Response | 10.1111/cgf.14520 | Epidemiologists use individual‐based models to (a) simulate disease spread over dynamic contact networks and (b) to investigate strategies to control the outbreak. These model simulations generate complex ‘infection maps’ of time‐varying transmission trees and patterns of spread. Conventional statistical analysis of outputs offers only limited interpretation. This paper presents a novel visual analytics approach for the inspection of infection maps along with their associated metadata, developed collaboratively over 16 months in an evolving emergency response situation. We introduce the concept of representative trees that summarize the many components of a time‐varying infection map while preserving the epidemiological characteristics of each individual transmission tree. We also present interactive visualization techniques for the quick assessment of different control policies. Through a series of case studies and a qualitative evaluation by epidemiologists, we demonstrate how our visualizations can help improve the development of epidemiological models and help interpret complex transmission patterns. | false | false | [
"Max Sondag",
"Cagatay Turkay",
"Kai Xu 0003",
"Louise Matthews",
"Sibylle Mohr",
"Daniel Archambault"
] | [
"HM"
] | [] | [] |
EuroVis | 2,022 | Visual Parameter Selection for Spatial Blind Source Separation | 10.1111/cgf.14530 | Analysis of spatial multivariate data, i.e., measurements at irregularly‐spaced locations, is a challenging topic in visualization and statistics alike. Such data are integral to many domains, e.g., indicators of valuable minerals are measured for mine prospecting. Popular analysis methods, like PCA, often by design do not account for the spatial nature of the data. Thus they, together with their spatial variants, must be employed very carefully. Clearly, it is preferable to use methods that were specifically designed for such data, like spatial blind source separation (SBSS). However, SBSS requires two tuning parameters, which are themselves complex spatial objects. Setting these parameters involves navigating two large and interdependent parameter spaces, while also taking into account prior knowledge of the physical reality represented by the data. To support analysts in this process, we developed a visual analytics prototype. We evaluated it with experts in visualization, SBSS, and geochemistry. Our evaluations show that our interactive prototype allows to define complex and realistic parameter settings efficiently, which was so far impractical. Settings identified by a non‐expert led to remarkable and surprising insights for a domain expert. Therefore, this paper presents important first steps to enable the use of a promising analysis method for spatial multivariate data. | false | false | [
"Nikolaus Piccolotto",
"Markus Bögl",
"Christoph Muehlmann",
"Klaus Nordhausen",
"Peter Filzmoser",
"Silvia Miksch"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2112.08888v2",
"icon": "paper"
}
] |
EuroVis | 2,022 | Where did my Lines go? Visualizing Missing Data in Parallel Coordinates | 10.1111/cgf.14536 | We evaluate visualization concepts to represent missing values in parallel coordinates. We focus on the trade‐off between the ability to perceive missing values and the concept's impact on common tasks. For this purpose, we identified three missing value representation concepts: removing line segments where values are missing, adding a separate, horizontal axis onto which missing values are projected, and using imputed values as a replacement for missing values. For the missing values axis and imputed values concepts, we additionally add downplay and highlight variations. We performed a crowd‐sourced, quantitative user study with 732 participants comparing the concepts and their variations using five real‐world datasets. Based on our findings, we provide suggestions regarding which visual encoding to employ depending on the task at focus. | false | false | [
"Alex Bäuerle",
"Christian van Onzenoodt",
"Simon der Kinderen",
"Jimmy Johansson Westberg",
"Daniel Jönsson",
"Timo Ropinski"
] | [] | [] | [] |
CHI | 2,022 | "I See You!": A Design Framework for Interface Cues about Agent Visual Perception from a Thematic Analysis of Videogames | 10.1145/3491102.3517699 | As artificial agents proliferate, there will be more and more situations in which they must communicate their capabilities to humans, including what they can “see.” Artificial agents have existed for decades in the form of computer-controlled agents in videogames. We analyze videogames in order to not only inspire the design of better agents, but to stop agent designers from replicating research that has already been theorized, designed, and tested in-depth. We present a qualitative thematic analysis of sight cues in videogames and develop a framework to support human-agent interaction design. The framework identifies the different locations and stimulus types – both visualizations and sonifications – available to designers and the types of information they can convey as sight cues. Insights from several other cue properties are also presented. We close with suggestions for implementing such cues with existing technologies to improve the safety, privacy, and efficiency of human-agent interactions. | false | false | [
"Matthew Rueben",
"Matthew Rodney Horrocks",
"Jennifer Eleanor Martinez",
"Michelle V. Cormier",
"Nicolas J. LaLone",
"Marlena R. Fraune",
"Z. Toups Dugas"
] | [] | [] | [] |
CHI | 2,022 | 'Are They Doing Better In The Clinic Or At Home?': Understanding Clinicians' Needs When Visualizing Wearable Sensor Data Used In Remote Gait Assessments For People With Multiple Sclerosis | 10.1145/3491102.3501989 | Walking impairment is a debilitating symptom of Multiple Sclerosis (MS), a disease affecting 2.8 million people worldwide. While clinicians’ in-person observational gait assessments are important, research suggests that data from wearable sensors can indicate early onset of gait impairment, track patients’ responses to treatment, and support remote and longitudinal assessment. We present an inquiry into supporting the transition from research to clinical practice. Co-design by HCI, biomedical, neurology and rehabilitation researchers resulted in a data-rich interface prototype for augmented gait analysis based on visualized sensor data. We used this as a prompt in interviews with ten experienced clinicians from a range of MS rehabilitation roles. We find that clinicians value quantitative sensor data within a whole patient narrative, to help track specific rehabilitation goals, but identify a tension between grasping critical information quickly and more detailed understanding. Based on the findings we make design recommendations for data-rich remote rehabilitation interfaces. | false | false | [
"Ayanna Seals",
"Giuseppina Pilloni",
"Jin Kim",
"Raul Sanchez",
"John-Ross Rizzo",
"Leigh Charvet",
"Oded Nov",
"Graham Dove"
] | [] | [] | [] |
CHI | 2,022 | 'ShishuShurokkha': A Transformative Justice Approach for Combating Child Sexual Abuse in Bangladesh | 10.1145/3491102.3517543 | The challenge of designing against child sexual abuse becomes more complicated in conservative societies where talking about sex is tabooed. Our mix-method study, comprised of an online survey, five FGDs, and 20 semi-structured interviews in Bangladesh, investigates the common nature, location, and time of the abuse, post-incident support, and possible combating strategies. Besides revealing important facts, our findings highlight the need of decentering the design from the victims (children and/or guardians) to the community. Hence, building on the theory of transformative justice, we prototyped and evaluated ‘ShishuShurokkha’ – an online tool that involves the whole community by allowing anonymous bystander reporting, visualizing case-maps, connecting with legal, medical, and social support, and raising awareness. The evaluation of ShishuShurokkha shows the promise for such a communal approach toward combating child sexual abuse, and highlights the needs for sincere involvement of the government, NGOs, the legal, educational, and religious services in this. | false | false | [
"Sharifa Sultana",
"Sadia Tasnuva Pritha",
"Rahnuma Tasnim",
"Anik Das",
"Rokeya Akter",
"Shaid Hasan",
"S. M. Raihanul Alam",
"Muhammad Ashad Kabir",
"Syed Ishtiaque Ahmed"
] | [] | [] | [] |
CHI | 2,022 | A Design Space For Data Visualisation Transformations Between 2D And 3D In Mixed-Reality Environments | 10.1145/3491102.3501859 | As mixed-reality (MR) technologies become more mainstream, the delineation between data visualisations displayed on screens or other surfaces and those floating in space becomes increasingly blurred. Rather than the choice of using either a 2D surface or the 3D space for visualising data being a dichotomy, we argue that users should have the freedom to transform visualisations seamlessly between the two as needed. However, the design space for such transformations is large, and practically uncharted. To explore this, we first establish an overview of the different states that a data visualisation can take in MR, followed by how transformations between these states can facilitate common visualisation tasks. We then describe a design space of how these transformations function, in terms of the different stages throughout the transformation, and the user interactions and input parameters that affect it. This design space is then demonstrated with multiple exemplary techniques based in MR. | false | false | [
"Benjamin Lee",
"Maxime Cordeil",
"Arnaud Prouzeau",
"Bernhard Jenny",
"Tim Dwyer"
] | [
"HM"
] | [] | [] |
CHI | 2,022 | Accessibility for Color Vision Deficiencies: Challenges and Findings of a Large Scale Study on Paper Figures | 10.1145/3491102.3502133 | We present an exploratory study on the accessibility of images in publications when viewed with color vision deficiencies (CVDs). The study is based on 1,710 images sampled from a visualization dataset (VIS30K) over five years. We simulated four CVDs on each image. First, four researchers (one with a CVD) identified existing issues and helpful aspects in a subset of the images. Based on the resulting labels, 200 crowdworkers provided 30,000 ratings on present CVD issues in the simulated images. We analyzed this data for correlations, clusters, trends, and free text comments to gain a first overview of paper figure accessibility. Overall, about 60 % of the images were rated accessible. Furthermore, our study indicates that accessibility issues are subjective and hard to detect. On a meta-level, we reflect on our study experience to point out challenges and opportunities of large-scale accessibility studies for future research directions. | false | false | [
"Katrin Angerbauer",
"Nils Rodrigues",
"René Cutura",
"Seyda Öney",
"Nelusa Pathmanathan",
"Cristina Morariu",
"Daniel Weiskopf",
"Michael Sedlmair"
] | [] | [] | [] |
CHI | 2,022 | Annotating Line Charts for Addressing Deception | 10.1145/3491102.3502138 | Deceptive visualizations are visualizations that, whether intentionally or not, lead the reader to an understanding of the data which varies from the actual data. Examples of deceptive visualizations can be found in every digital platform, and, despite their widespread use in the wild, there have been limited efforts to alert laypersons to common deceptive visualization practices. In this paper, we present a tool for annotating line charts in the wild that reads line chart images and outputs text and visual annotations to assess the line charts for distortions and help guide the reader towards an honest understanding of the chart data. We demonstrate the usefulness of our tool through a series of case studies on real-world charts. Finally, we perform a crowdsourced experiment to evaluate the ability of the proposed tool to educate readers about potentially deceptive visualization practices. | false | false | [
"Arlen Fan",
"Yuxin Ma",
"Michelle Mancenido",
"Ross Maciejewski"
] | [
"HM"
] | [] | [] |
CHI | 2,022 | AvatAR: An Immersive Analysis Environment for Human Motion Data Combining Interactive 3D Avatars and Trajectories | 10.1145/3491102.3517676 | Analysis of human motion data can reveal valuable insights about the utilization of space and interaction of humans with their environment. To support this, we present AvatAR, an immersive analysis environment for the in-situ visualization of human motion data, that combines 3D trajectories with virtual avatars showing people’s detailed movement and posture. Additionally, we describe how visualizations can be embedded directly into the environment, showing what a person looked at or what surfaces they touched, and how the avatar’s body parts can be used to access and manipulate those visualizations. AvatAR combines an AR HMD with a tablet to provide both mid-air and touch interaction for system control, as well as an additional overview device to help users navigate the environment. We implemented a prototype and present several scenarios to show that AvatAR can enhance the analysis of human motion data by making data not only explorable, but experienceable. | false | false | [
"Patrick Reipschläger",
"Frederik Brudy",
"Raimund Dachselt",
"Justin Matejka",
"George W. Fitzmaurice",
"Fraser Anderson"
] | [] | [] | [] |
CHI | 2,022 | BikeAR: Understanding Cyclists' Crossing Decision-Making at Uncontrolled Intersections using Augmented Reality | 10.1145/3491102.3517560 | Cycling has become increasingly popular as a means of transportation. However, cyclists remain a highly vulnerable group of road users. According to accident reports, one of the most dangerous situations for cyclists are uncontrolled intersections, where cars approach from both directions. To address this issue and assist cyclists in crossing decision-making at uncontrolled intersections, we designed two visualizations that: (1) highlight occluded cars through an X-ray vision and (2) depict the remaining time the intersection is safe to cross via a Countdown. To investigate the efficiency of these visualizations, we proposed an Augmented Reality simulation as a novel evaluation method, in which the above visualizations are represented as AR, and conducted a controlled experiment with 24 participants indoors. We found that the X-ray ensures a fast selection of shorter gaps between cars, while the Countdown facilitates a feeling of safety and provides a better intersection overview. | false | false | [
"Andrii Matviienko",
"Florian Müller 0003",
"Dominik Schön",
"Paul Seesemann",
"Sebastian Günther 0001",
"Max Mühlhäuser"
] | [] | [] | [] |
CHI | 2,022 | Cicero: A Declarative Grammar for Responsive Visualization | 10.1145/3491102.3517455 | Designing responsive visualizations can be cast as applying transformations to a source view to render it suitable for a different screen size. However, designing responsive visualizations is often tedious as authors must manually apply and reason about candidate transformations. We present Cicero, a declarative grammar for concisely specifying responsive visualization transformations which paves the way for more intelligent responsive visualization authoring tools. Cicero’s flexible specifier syntax allows authors to select visualization elements to transform, independent of the source view’s structure. Cicero encodes a concise set of actions to encode a diverse set of transformations in both desktop-first and mobile-first design processes. Authors can ultimately reuse design-agnostic transformations across different visualizations. To demonstrate the utility of Cicero, we develop a compiler to an extended version of Vega-Lite, and provide principles for our compiler. We further discuss the incorporation of Cicero into responsive visualization authoring tools, such as a design recommender. | false | false | [
"Hyeok Kim",
"Ryan A. Rossi",
"Fan Du",
"Eunyee Koh",
"Shunan Guo",
"Jessica Hullman",
"Jane Hoffswell"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2203.08314v1",
"icon": "paper"
}
] |
CHI | 2,022 | Classroom Dandelions: Visualising Participant Position, Trajectory and Body Orientation Augments Teachers' Sensemaking | 10.1145/3491102.3517736 | Despite the digital revolution, physical space remains the site for teaching and learning embodied knowledge and skills. Both teachers and students must develop spatial competencies to effectively use classroom spaces, enabling fluid verbal and non-verbal interaction. While video permits rich activity capture, it provides no support for quickly seeing activity patterns that can assist learning. In contrast, position tracking systems permit the automated modelling of spatial behaviour, opening new possibilities for feedback. This paper introduces the design rationale for ”Dandelion Diagrams” that integrate participant location, trajectory and body orientation over a variable period. Applied in two authentic teaching contexts (a science laboratory, and a nursing simulation) we show how heatmaps showing only teacher/student location led to misinterpretations that were resolved by overlaying Dandelion Diagrams. Teachers also identified a variety of ways they could aid professional development. We conclude Dandelion Diagrams assisted sensemaking, but discuss the ethical risks of over-interpretation. | false | false | [
"Gloria Fernández-Nieto",
"Pengcheng An",
"Jian Zhao 0010",
"Simon Buckingham Shum",
"Roberto Martínez Maldonado"
] | [] | [] | [] |
CHI | 2,022 | ComputableViz: Mathematical Operators as a Formalism for Visualization Processing and Analysis | 10.1145/3491102.3517618 | Data visualizations are created and shared on the web at an unprecedented speed, raising new needs and questions for processing and analyzing visualizations after they have been generated and digitized. However, existing formalisms focus on operating on a single visualization instead of multiple visualizations, making it challenging to perform analysis tasks such as sorting and clustering visualizations. Through a systematic analysis of previous work, we abstract visualization-related tasks into mathematical operators such as union and propose a design space of visualization operations. We realize the design by developing ComputableViz, a library that supports operations on multiple visualization specifications. To demonstrate its usefulness and extensibility, we present multiple usage scenarios concerning processing and analyzing visualization, such as generating visualization embeddings and automatically making visualizations accessible. We conclude by discussing research opportunities and challenges for managing and exploiting the massive visualizations on the web. | false | false | [
"Aoyu Wu",
"Wai Tong",
"Haotian Li 0001",
"Dominik Moritz",
"Yong Wang 0021",
"Huamin Qu"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2204.00856v1",
"icon": "paper"
}
] |
CHI | 2,022 | CrossData: Leveraging Text-Data Connections for Authoring Data Documents | 10.1145/3491102.3517485 | Data documents play a central role in recording, presenting, and disseminating data. Despite the proliferation of applications and systems designed to support the analysis, visualization, and communication of data, writing data documents remains a laborious process, requiring a constant back-and-forth between data processing and writing tools. Interviews with eight professionals revealed that their workflows contained numerous tedious, repetitive, and error-prone operations. The key issue that we identified is the lack of persistent connection between text and data. Thus, we developed CrossData, a prototype that treats text-data connections as persistent, interactive, first-class objects. By automatically identifying, establishing, and leveraging text-data connections, CrossData enables rich interactions to assist in the authoring of data documents. An expert evaluation with eight users demonstrated the usefulness of CrossData, showing that it not only reduced the manual effort in writing data documents but also opened new possibilities to bridge the gap between data exploration and writing. | false | false | [
"Zhutian Chen",
"Haijun Xia"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2310.11639v1",
"icon": "paper"
}
] |
CHI | 2,022 | Data Every Day: Designing and Living with Personal Situated Visualizations | 10.1145/3491102.3517737 | We explore the design and utility of situated manual self-tracking visualizations on dedicated displays that integrate data tracking into existing practices and physical environments. Situating self-tracking tools in relevant locations is a promising approach to enable reflection on and awareness of data without needing to rely on sensorized tracking or personal devices. In both a long-term autobiographical design process and a co-design study with six participants, we rapidly prototyped and deployed 30 situated self-tracking applications over a ten month period. Grounded in the experience of designing and living with these trackers, we contribute findings on logging and data entry, the use of situated displays, and the visual design and customization of trackers. Our results demonstrate the potential of customizable dedicated self-tracking visualizations that are situated in relevant physical spaces, and suggest future research opportunities and new potential applications for situated visualizations. | false | false | [
"Nathalie Bressa",
"Jo Vermeulen",
"Wesley Willett"
] | [] | [] | [] |
CHI | 2,022 | Designing for Knowledge Construction to Facilitate the Uptake of Open Science: Laying out the Design Space | 10.1145/3491102.3517450 | The uptake of open science resources needs knowledge construction on the side of the readers/receivers of scientific content. The design of technologies surrounding open science resources can facilitate such knowledge construction, but this has not been investigated yet. To do so, we first conducted a scoping review of literature, from which we draw design heuristics for knowledge construction in digital environments. Subsequently, we grouped the underlying technological functionalities into three design categories: i) structuring and supporting collaboration, ii) supporting the learning process, and iii) structuring, visualising and navigating (learning) content. Finally, we mapped the design categories and associated design heuristics to core components of popular open science platforms. This mapping constitutes a design space (design implications), which informs researchers and designers in the HCI community about suitable functionalities for supporting knowledge construction in existing or new digital open science platforms. | false | false | [
"Leonie Disch",
"Angela Fessl",
"Viktoria Pammer-Schindler"
] | [] | [] | [] |
CHI | 2,022 | Designing Word Filter Tools for Creator-led Comment Moderation | 10.1145/3491102.3517505 | Online social platforms centered around content creators often allow comments on content, where creators can then moderate the comments they receive. As creators can face overwhelming numbers of comments, with some of them harassing or hateful, platforms typically provide tools such as word filters for creators to automate aspects of moderation. From needfinding interviews with 19 creators about how they use existing tools, we found that they struggled with writing good filters as well as organizing and revising their filters, due to the difficulty of determining what the filters actually catch. To address these issues, we present FilterBuddy, a system that supports creators in authoring new filters or building from pre-made ones, as well as organizing their filters and visualizing what comments are captured by them over time. We conducted an early-stage evaluation of FilterBuddy with YouTube creators, finding that participants see FilterBuddy not just as a moderation tool, but also a means to organize their comments to better understand their audiences. | false | false | [
"Shagun Jhaver",
"Quan Ze Chen",
"Detlef Knauss",
"Amy X. Zhang"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2202.08818v1",
"icon": "paper"
}
] |
CHI | 2,022 | Diff in the Loop: Supporting Data Comparison in Exploratory Data Analysis | 10.1145/3491102.3502123 | Data science is characterized by evolution: since data science is exploratory, results evolve from moment to moment; since it can be collaborative, results evolve as the work changes hands. While existing tools help data scientists track changes in code, they provide less support for understanding the iterative changes that the code produces in the data. We explore the idea of visualizing differences in datasets as a core feature of exploratory data analysis, a concept we call Diff in the Loop (DITL). We evaluated DITL in a user study with 16 professional data scientists and found it helped them understand the implications of their actions when manipulating data. We summarize these findings and discuss how the approach can be generalized to different data science workflows. | false | false | [
"April Yi Wang",
"Will Epperson",
"Robert A. DeLine",
"Steven Mark Drucker"
] | [] | [] | [] |
CHI | 2,022 | Do You See What I Hear? - Peripheral Absolute and Relational Visualisation Techniques for Sound Zones | 10.1145/3491102.3501938 | Sound zone technology allows multiple simultaneous sound experiences for multiple people in the same room without interference. However, given the inherent invisible and intangible nature of sound zones, it is unclear how to communicate the position and size of sound zones to users. This paper compares two visualisation techniques, absolute visualisation and relational visualisation, as well as a baseline condition without visualisations. In a within-subject experiment (N = 33), we evaluated these techniques for effectiveness and efficiency across four representative tasks. Our findings show that the absolute and relational visualisation techniques increase effectiveness in multi-user tasks but not in single-user tasks. The efficiency for all tasks was improved using visualisations. We discuss the potential of visualisations for sound zones and highlight future research opportunities for sound zone interaction. | false | false | [
"Rune Møberg Jacobsen",
"Niels van Berkel",
"Mikael B. Skov",
"Stine S. Johansen",
"Jesper Kjeldskov"
] | [] | [] | [] |
CHI | 2,022 | Do You See What You Mean? Using Predictive Visualizations to Reduce Optimism in Duration Estimates | 10.1145/3491102.3502010 | Making time estimates, such as how long a given task might take, frequently leads to inaccurate predictions because of an optimistic bias. Previous attempts to alleviate this bias, including decomposing the task into smaller components and listing potential surprises, have not shown any major improvement. This article builds on the premise that these procedures may have failed because they involve compound probabilities and mixture distributions which are difficult to compute in one’s head. We hypothesize that predictive visualizations of such distributions would facilitate the estimation of task durations. We conducted a crowdsourced study in which 145 participants provided different estimates of overall and sub-task durations and we used these to generate predictive visualizations of the resulting mixture distributions. We compared participants’ initial estimates with their updated ones and found compelling evidence that predictive visualizations encourage less optimistic estimates. | false | false | [
"Morgane Koval",
"Yvonne Jansen"
] | [
"HM"
] | [] | [] |
CHI | 2,022 | FluidMeet: Enabling Frictionless Transitions Between In-Group, Between-Group, and Private Conversations During Virtual Breakout Meetings | 10.1145/3491102.3517558 | People often form small conversation groups during physical gatherings to have ad-hoc and informal conversations. As these groups are loosely defined, others can often overhear and join the conversation. However, current video-conferencing tools only allow for strict boundaries between small conversation groups, inhibiting fluid group formations and between-group conversations. This isolates small-group conversations from others and leads to inefficient transitions between conversations. We present FluidMeet, a virtual breakout meeting system that employs flexible conversation boundaries and cross-group conversation visualizations to enable fluid conversation group formations and ad-hoc, informal conversations. FluidMeet enables out-group members to overhear group conversations while allowing conversation groups to control their shared level of context. Users within conversation groups can also quickly switch between in-group and private conversations. A study of FluidMeet showed that it encouraged users to break group boundaries, made them feel less isolated in group conversations, and facilitated communication across different groups. | false | false | [
"Erzhen Hu",
"Md. Aashikur Rahman Azim",
"Seongkook Heo"
] | [] | [] | [] |
CHI | 2,022 | GANSlider: How Users Control Generative Models for Images using Multiple Sliders with and without Feedforward Information | 10.1145/3491102.3502141 | We investigate how multiple sliders with and without feedforward visualizations influence users’ control of generative models. In an online study (N=138), we collected a dataset of people interacting with a generative adversarial network (StyleGAN2) in an image reconstruction task. We found that more control dimensions (sliders) significantly increase task difficulty and user actions. Visual feedforward partly mitigates this by enabling more goal-directed interaction. However, we found no evidence of faster or more accurate task performance. This indicates a tradeoff between feedforward detail and implied cognitive costs, such as attention. Moreover, we found that visualizations alone are not always sufficient for users to understand individual control dimensions. Our study quantifies fundamental UI design factors and resulting interaction behavior in this context, revealing opportunities for improvement in the UI design for interactive applications of generative models. We close by discussing design directions and further aspects. | false | false | [
"Hai Dang",
"Lukas Mecke",
"Daniel Buschek"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2202.00965v1",
"icon": "paper"
}
] |
CHI | 2,022 | graphiti: Sketch-based Graph Analytics for Images and Videos | 10.1145/3491102.3501923 | Graph analytics is currently performed using a combination of code, symbolic algebra, and network visualizations. The analyst has to work with symbolic and abstract forms of data to construct and analyze graphs. We locate unique design opportunities at the intersection of computer vision and graph analytics, by utilizing visual variables extracted from images/videos and some direct manipulation and pen interaction techniques. We also summarize commonly used graph operations and graphical representations (graphs, simplicial complexes, hypergraphs), and map them to a few brushes and direct manipulation actions. The mapping enables us to visually construct and analyze a wide range of graphs on top of images, videos, and sketches. The design framework is implemented as a sketch-based notebook interface to demonstrate the design possibilities. User studies with scientists from various fields reveal innovative use cases for such an embodied interaction paradigm for graph analytics. | false | false | [
"Nazmus Saquib",
"Faria Huq",
"Syed Arefinul Haque"
] | [] | [] | [] |
CHI | 2,022 | HAExplorer: Understanding Interdependent Biomechanical Motions with Interactive Helical Axes | 10.1145/3491102.3501841 | The helical axis is a common tool used in biomechanical modeling to parameterize the motion of rigid objects. It encodes an object’s rotation around and translation along a unique axis. Visualizations of helical axes have helped to make kinematic data tangible. However, the analysis process often remains tedious, especially if complex motions are examined. We identify multiple key challenges: the absence of interactive tools for the computation and handling of helical axes, visual clutter in axis representations, and a lack of contextualization. We solve these issues by providing the first generalized framework for kinematic analysis with helical axes. Axis sets can be computed on-demand, interactively filtered, and explored in multiple coordinated views. We iteratively developed and evaluated the HAExplorer with active biomechanics researchers. Our results show that the techniques we introduce open up the possibility to analyze non-planar, compound, and interdependent motion data. | false | false | [
"Pepe Eulzer",
"Robert Rockenfeller",
"Kai Lawonn"
] | [] | [] | [] |
CHI | 2,022 | i-LaTeX : Manipulating Transitional Representations between LaTeX Code and Generated Documents | 10.1145/3491102.3517494 | Document description languages such as LaTeX are used extensively to author scientific and technical documents, but editing them is cumbersome: code-based editors only provide generic features, while WYSIWYG interfaces only support a subset of the language. Our interviews with 11 LaTeX users highlighted their difficulties dealing with textually-encoded abstractions and with the mappings between source code and document output. To address some of these issues, we introduce Transitional Representations for document description languages, which enable the visualisation and manipulation of fragments of code in relation to their generated output. We present i-LaTeX, a LaTeX editor equipped with Transitional Representations of formulae, tables, images, and grid layouts. A 16-participant experiment shows that Transitional Representations let them complete common editing tasks significantly faster, with fewer compilations, and with a lower workload. We discuss how Transitional Representations affect editing strategies and conclude with directions for future work. | false | false | [
"Camille Gobert",
"Michel Beaudouin-Lafon"
] | [] | [] | [] |
CHI | 2,022 | Infosonics: Accessible Infographics for People who are Blind using Sonification and Voice | 10.1145/3491102.3517465 | Data visualisations are increasingly used online to engage readers and enable independent analysis of the data underlying news stories. However, access to such infographics is problematic for readers who are blind or have low vision (BLV). Equitable access to information is a basic human right and essential for independence and inclusion. We introduce infosonics, the audio equivalent of infographics, as a new style of interactive sonification that uses a spoken introduction and annotation, non-speech audio and sound design elements to present data in an understandable and engaging way. A controlled user evaluation with 18 BLV adults found a COVID-19 infosonic enabled a clearer mental image than a traditional sonification. Further, infosonics prove complementary to text descriptions and facilitate independent understanding of the data. Based on our findings, we provide preliminary suggestions for infosonics design, which we hope will enable BLV people to gain equitable access to online news and information. | false | false | [
"Leona M. Holloway",
"Cagatay Goncu",
"Alon Ilsar",
"Matthew Butler 0002",
"Kim Marriott"
] | [] | [] | [] |
CHI | 2,022 | Interpolating Happiness: Understanding the Intensity Gradations of Face Emojis Across Cultures | 10.1145/3491102.3517661 | We frequently utilize face emojis to express emotions in digital communication. But how wholly and precisely do such pictographs sample the emotional spectrum, and are there gaps to be closed? Our research establishes emoji intensity scales for seven basic emotions: happiness, anger, disgust, sadness, shock, annoyance, and love. In our survey (N = 1195), participants worldwide assigned emotions and intensities to 68 face emojis. According to our results, certain feelings, such as happiness or shock, are visualized by manifold emojis covering a broad spectrum of intensities. Other feelings, such as anger, have limited and only very intense representative visualizations. We further emphasize that the cultural background influences emojis’ perception: for instance, linear-active cultures (e.g., UK, Germany) rate the intensity of such visualizations higher than multi-active (e.g., Brazil, Russia) or reactive cultures (e.g., Indonesia, Singapore). To summarize, our manuscript promotes future research on more expressive, culture-aware emoji design. | false | false | [
"Andrey Krekhov",
"Katharina Emmerich",
"Johannes Fuchs 0001",
"Jens Harald Krüger"
] | [] | [] | [] |
CHI | 2,022 | Investigating Potentials of Shape-Changing Displays for Sound Zones | 10.1145/3491102.3517632 | In this paper, we investigate the use of shape-change for interaction with sound zones. A core challenge to designing interaction with sound zone systems is to support users’ understanding of the unique spatial properties of sound zones. Shape-changing interfaces present new opportunities for addressing this. We present a structured investigation into this. We leveraged the knowledge of 12 sound experts to define a set of basic shapes and movements. Then, we constructed a prototype and conducted an elicitation study with 17 novice users, investigating the experience of these shapes and movements. Our findings show that physical visualizations of sound zones can be useful in supporting users’ experience of sound zones. We present a framework of 4 basic pattern categories that prompt different sound zone experiences and outline further research directions for shape-change in supporting sound zone interaction. | false | false | [
"Stine S. Johansen",
"Timothy Merritt 0001",
"Rune Møberg Jacobsen",
"Peter Axel Nielsen",
"Jesper Kjeldskov"
] | [] | [] | [] |
CHI | 2,022 | Juvenile Graphical Perception: A Comparison between Children and Adults | 10.1145/3491102.3501893 | Data visualization is pervasive in the lives of children as they encounter graphs and charts in early education and online media. In spite of this prevalence, our guidelines and understanding of how children perceive graphs stem primarily from studies conducted with adults. Previous psychology and education research indicates that children’s cognitive abilities are different from adults. Therefore, we conducted a classic graphical perception study on a population of children aged 8–12 enrolled in the Ivy After School Program in Boston, MA and adult computer science students enrolled at Northeastern University to determine how accurately participants judge differences in particular graphical encodings. We record the accuracy of participants’ answers for five encodings most commonly used with quantitative data. The results of our controlled experiment show that children have remarkably similar graphical perception to adults, but are consistently less accurate at interpreting the visual encodings. We found similar effectiveness rankings, relative differences in error between the different encodings, and patterns of bias across encoding types. Based on our findings, we provide design guidelines and recommendations for creating visualizations for children. This paper and all supplemental materials are available at https://osf.io/ygrdv. | false | false | [
"Liudas Panavas",
"Amy E. Worth",
"Tarik Crnovrsanin",
"Tejas Sathyamurthi",
"Sara Cordes",
"Michelle A. Borkin",
"Cody Dunne"
] | [] | [] | [] |
CHI | 2,022 | Making Data Tangible: A Cross-disciplinary Design Space for Data Physicalization | 10.1145/3491102.3501939 | Designing a data physicalization requires a myriad of different considerations. Despite the cross-disciplinary nature of these considerations, research currently lacks a synthesis across the different communities data physicalization sits upon, including their approaches, theories, and even terminologies. To bridge these communities synergistically, we present a design space that describes and analyzes physicalizations according to three facets: context (end-user considerations), structure (the physical structure of the artifact), and interactions (interactions with both the artifact and data). We construct this design space through a systematic review of 47 physicalizations and analyze the interrelationships of key factors when designing a physicalization. This design space cross-pollinates knowledge from relevant HCI communities, providing a cohesive overview of what designers should consider when creating a data physicalization while suggesting new design possibilities. We analyze the design decisions present in current physicalizations, discuss emerging trends, and identify underlying open challenges. | false | false | [
"S. Sandra Bae",
"Clement Zheng",
"Mary Etta West",
"Ellen Yi-Luen Do",
"Samuel Huron",
"Danielle Albers Szafir"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2202.10520v1",
"icon": "paper"
}
] |
CHI | 2,022 | Neo: Generalizing Confusion Matrix Visualization to Hierarchical and Multi-Output Labels | 10.1145/3491102.3501823 | The confusion matrix, a ubiquitous visualization for helping people evaluate machine learning models, is a tabular layout that compares predicted class labels against actual class labels over all data instances. We conduct formative research with machine learning practitioners at Apple and find that conventional confusion matrices do not support more complex data-structures found in modern-day applications, such as hierarchical and multi-output labels. To express such variations of confusion matrices, we design an algebra that models confusion matrices as probability distributions. Based on this algebra, we develop Neo, a visual analytics system that enables practitioners to flexibly author and interact with hierarchical and multi-output confusion matrices, visualize derived metrics, renormalize confusions, and share matrix specifications. Finally, we demonstrate Neo’s utility with three model evaluation scenarios that help people better understand model performance and reveal hidden confusions. | false | false | [
"Jochen Görtler",
"Fred Hohman",
"Dominik Moritz",
"Kanit Wongsuphasawat",
"Donghao Ren",
"Rahul Nair",
"Marc Kirchner",
"Kayur Patel"
] | [
"BP"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2110.12536v2",
"icon": "paper"
}
] |
CHI | 2,022 | One Week in the Future: Previs Design Futuring for HCI Research | 10.1145/3491102.3517584 | We explore the use of cinematic “pre-visualization” (previs) techniques as a rapid ideation and design futuring method for human computer interaction (HCI) research. Previs approaches, which are widely used in animation and film production, use digital design tools to create medium-fidelity videos that capture richer interaction, motion, and context than sketches or static illustrations. When used as a design futuring method, previs can facilitate rapid, iterative discussions that reveal tensions, challenges, and opportunities for new research. We performed eight one-week design futuring sprints, in which individual HCI researchers collaborated with a lead designer to produce concept sketches, storyboards, and videos that examined future applications of their research. From these experiences, we identify recurring themes and challenges and present a One Week Futuring Workbook that other researchers can use to guide their own futuring sprints. We also highlight how variations of our approach could support other speculative design practices. | false | false | [
"Alexander Ivanov 0004",
"Tim Au Yeung",
"Kathryn Blair",
"Kurtis Thorvald Danyluk",
"Georgina Freeman",
"Marcus Friedel",
"Carmen Hull",
"Michael Yuk-Shing Hung",
"Sydney Pratte",
"Wesley Willett"
] | [] | [] | [] |
CHI | 2,022 | Pandemic Displays: Considering Hygiene on Public Touchscreens in the Post-Pandemic Era | 10.1145/3491102.3501937 | The COVID-19 pandemic created unprecedented questions for touch-based public displays regarding hygiene, risks, and general awareness. We study how people perceive and consider hygiene on shared touchscreens, and how touchscreens could be improved through hygiene-related functions. First, we report the results from an online survey (n = 286). Second, we present a hygiene concept for touchscreens that visualizes prior touches and provides information about the cleaning of the display and number of prior users. Third, we report the feedback for our hygiene concept from 77 participants. We find that there is demand for improved awareness of public displays’ hygiene status, especially among those with stronger concerns about COVID-19. A particularly desired detail is when the display has been cleaned. For visualizing prior touches, fingerprints worked best. We present further considerations for designing for hygiene on public displays. | false | false | [
"Ville Mäkelä",
"Jonas Winter",
"Jasmin Schwab",
"Michael Koch 0001",
"Florian Alt"
] | [] | [] | [] |
CHI | 2,022 | Paracentral and near-peripheral visualizations: Towards attention-maintaining secondary information presentation on OHMDs during in-person social interactions | 10.1145/3491102.3502127 | Optical see-through Head-Mounted Displays (OST HMDs, OHMDs) are known to facilitate situational awareness while accessing secondary information. However, information displayed on OHMDs can cause attention shifts, which distract users from natural social interactions. We hypothesize that information displayed in paracentral and near-peripheral vision can be better perceived while the user is maintaining eye contact during face-to-face conversations. Leveraging this idea, we designed a circular progress bar to provide progress updates in paracentral and near-peripheral vision. We compared it with textual and linear progress bars under two conversation settings: a simulated one with a digital conversation partner and a realistic one with a real partner. Results show that a circular progress bar can effectively reduce notification distractions without losing eye contact and is more preferred by users. Our findings highlight the potential of utilizing the paracentral and near-peripheral vision for secondary information presentation on OHMDs. | false | false | [
"Nuwan Janaka",
"Chloe Haigh",
"Hyeong Cheol Kim",
"Shan Zhang",
"Shengdong Zhao"
] | [
"HM"
] | [] | [] |
CHI | 2,022 | Preferences and Effectiveness of Sleep Data Visualizations for Smartwatches and Fitness Bands | 10.1145/3491102.3501921 | We present the findings of four studies related to the visualization of sleep data on wearables with two form factors: smartwatches and fitness bands. Our goal was to understand the interests, preferences, and effectiveness of different sleep visualizations by form factor. In a survey, we showed that wearers were mostly interested in weekly sleep duration, and nightly sleep phase data. Visualizations of this data were generally preferred over purely text-based representations, and the preferred chart type for fitness bands, and smartwatches was often the same. In one in-person pilot study, and two crowdsourced studies, we then tested the effectiveness of the most preferred representations for different tasks, and found that participants performed simple tasks effectively on both form factors but more complex tasks benefited from the larger smartwatch size. Lastly, we reflect on our crowdsourced study methodology for testing the effectiveness of visualizations for wearables. Supplementary material is available at https://osf.io/yz8ar/. | false | false | [
"Alaul Islam",
"Ranjini Aravind",
"Tanja Blascheck",
"Anastasia Bezerianos",
"Petra Isenberg"
] | [] | [] | [] |
CHI | 2,022 | Pretty Princess vs. Successful Leader: Gender Roles in Greeting Card Messages | 10.1145/3491102.3502114 | People write personalized greeting cards on various occasions. While prior work has studied gender roles in greeting card messages, systematic analysis at scale and tools for raising the awareness of gender stereotyping remain under-investigated. To this end, we collect a large greeting card message corpus covering three different occasions (birthday, Valentine’s Day and wedding) from three sources (exemplars from greeting message websites, real-life greetings from social media and language model generated ones). We uncover a wide range of gender stereotypes in this corpus via topic modeling, odds ratio and Word Embedding Association Test (WEAT). We further conduct a survey to understand people’s perception of gender roles in messages from this corpus and if gender stereotyping is a concern. The results show that people want to be aware of gender roles in the messages, but remain unconcerned unless the perceived gender roles conflict with the recipient’s true personality. In response, we developed GreetA, an interactive visualization and writing assistant tool to visualize fine-grained topics in greeting card messages drafted by the users and the associated gender perception scores, but without suggesting text changes as an intervention. | false | false | [
"Jiao Sun",
"Tongshuang Wu",
"Yue Jiang",
"Ronil Awalegaonkar",
"Xi Victoria Lin",
"Diyi Yang"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2112.13980v1",
"icon": "paper"
}
] |
CHI | 2,022 | Put a Label On It! Approaches for Constructing and Contextualizing Bar Chart Physicalizations | 10.1145/3491102.3501952 | Physicalizations represent data through their tangible and material properties. In contrast to screen-based visualizations, there is currently very limited understanding of how to label or annotate physicalizations to support people in interpreting the data encoded by the physicalization. Because of its spatiality, contextualization through labeling or annotation is crucial to communicate data across different orientations. In this paper, we study labeling approaches as part of the overall construction process of bar chart physicalizations. We designed a toolkit of physical tokens and paper data labels and asked 16 participants to construct and contextualize their own data physicalizations. We found that (i) the construction and contextualization of physicalizations is a highly intertwined process, (ii) data labels are integrated with physical constructs in the final design, and (iii) these are both influenced by orientation changes. We contribute with an understanding of the role of data labeling in the creation and contextualization of physicalizations. | false | false | [
"Kim Sauvé",
"Argenis Ramirez Gomez",
"Steven Houben"
] | [] | [] | [] |
CHI | 2,022 | Recommendations for Visualization Recommendations: Exploring Preferences and Priorities in Public Health | 10.1145/3491102.3501891 | The promise of visualization recommendation systems is that analysts will be automatically provided with relevant and high-quality visualizations that will reduce the work of manual exploration or chart creation. However, little research to date has focused on what analysts value in the design of visualization recommendations. We interviewed 18 analysts in the public health sector and explored how they made sense of a popular in-domain dataset1 in service of generating visualizations to recommend to others. We also explored how they interacted with a corpus of both automatically- and manually-generated visualization recommendations, with the goal of uncovering how the design values of these analysts are reflected in current visualization recommendation systems. We find that analysts champion simple charts with clear takeaways that are nonetheless connected with existing semantic information or domain hypotheses. We conclude by recommending that visualization recommendation designers explore ways of integrating context and expectation into their systems. | false | false | [
"Calvin S. Bao",
"Siyao Li",
"Sarah G. Flores",
"Michael Correll",
"Leilani Battle"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2202.01335v1",
"icon": "paper"
}
] |
CHI | 2,022 | Reflective Spring Cleaning: Using Personal Informatics to Support Infrequent Notification Personalization | 10.1145/3491102.3517493 | Distracting mobile notifications are a high-profile problem but previous research suggests notification management tools are underused because of the barriers users face in relation to the perceived benefits. We posit that users might be more motivated to personalize if they could view contextual data for how personalizations would have impacted their recent notifications. We propose the ‘Reflective Spring Cleaning’ approach to support notification management through infrequent personalization with visualization of collected notification data. To simplify and contextualize key trends in a user’s notifications, we framed these visualizations within a novel who-what-when data abstraction. We evaluated it through a four-week longitudinal study: 21 participants logged their notifications before and after a personalization session that included suggestions for notification management contextualized against visualizations of their recent notifications. A debriefing interview described their new experience after two more weeks of logging. Our approach encouraged users to critically reflect on their notifications, which frequently inspired them to personalize and improved the experience of the majority. | false | false | [
"Izabelle F. Janzen",
"Joanna McGrenere"
] | [] | [] | [] |
CHI | 2,022 | ReLive: Bridging In-Situ and Ex-Situ Visual Analytics for Analyzing Mixed Reality User Studies | 10.1145/3491102.3517550 | The nascent field of mixed reality is seeing an ever-increasing need for user studies and field evaluation, which are particularly challenging given device heterogeneity, diversity of use, and mobile deployment. Immersive analytics tools have recently emerged to support such analysis in situ, yet the complexity of the data also warrants an ex-situ analysis using more traditional non-immersive visual analytics setups. To bridge the gap between both approaches, we introduce ReLive: a mixed-immersion visual analytics framework for exploring and analyzing mixed reality user studies. ReLive combines an in-situ virtual reality view with a complementary ex-situ desktop view. While the virtual reality view allows users to relive interactive spatial recordings replicating the original study, the synchronized desktop view provides a familiar interface for analyzing aggregated data. We validated our concepts in a two-step evaluation consisting of a design walkthrough and an empirical expert user study. | false | false | [
"Sebastian Hubenschmid",
"Jonathan Wieland",
"Daniel Immanuel Fink 0001",
"Andrea Batch",
"Johannes Zagermann",
"Niklas Elmqvist",
"Harald Reiterer"
] | [] | [] | [] |
CHI | 2,022 | RoleSeer: Understanding Informal Social Role Changes in MMORPGs via Visual Analytics | 10.1145/3491102.3517712 | Massively multiplayer online role-playing games create virtual communities that support heterogeneous “social roles” determined by gameplay interaction behaviors under a specific social context. For all social roles, formal roles are pre-defined, obvious, and explicitly ascribed to the people holding the roles, whereas informal roles are not well-defined and unspoken. Identifying the informal roles and understanding their subtle changes are critical to designing sociability mechanisms. However, it is nontrivial to understand the existence and evolution of such roles due to their loosely defined, interconvertible, and dynamic characteristics. We propose a visual analytics system, RoleSeer, to investigate informal roles from the perspectives of behavioral interactions and depict their dynamic interconversions and transitions. Two cases, experts’ feedback, and a user study suggest that RoleSeer helps interpret the identified informal roles and explore the patterns behind role changes. We see our approach’s potential in investigating informal roles in a broader range of social games. | false | false | [
"Laixin Xie",
"Ziming Wu",
"Peng Xu",
"Wei Li 0094",
"Xiaojuan Ma",
"Quan Li"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2210.10698v1",
"icon": "paper"
}
] |
CHI | 2,022 | Sensitive Pictures: Emotional Interpretation in the Museum | 10.1145/3491102.3502080 | Museums are interested in designing emotional visitor experiences to complement traditional interpretations. HCI is interested in the relationship between Affective Computing and Affective Interaction. We describe Sensitive Pictures, an emotional visitor experience co-created with the Munch art museum. Visitors choose emotions, locate associated paintings in the museum, experience an emotional story while viewing them, and self-report their response. A subsequent interview with a portrayal of the artist employs computer vision to estimate emotional responses from facial expressions. Visitors are given a souvenir postcard visualizing their emotional data. A study of 132 members of the public (39 interviewed) illuminates key themes: designing emotional provocations; capturing emotional responses; engaging visitors with their data; a tendency for them to align their views with the system's interpretation; and integrating these elements into emotional trajectories. We consider how Affective Computing can hold up a mirror to our emotions during Affective Interaction. | false | false | [
"Steve Benford",
"Anders Sundnes Løvlie",
"Karin Ryding",
"Paulina Rajkowska",
"Edgar Bodiaj",
"Dimitrios Paris Darzentas",
"Harriet R. Cameron",
"Jocelyn Spence",
"Joy Egede",
"Bogdan Spanjevic"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2203.01041v1",
"icon": "paper"
}
] |
CHI | 2,022 | Slide-Tone and Tilt-Tone: 1-DOF Haptic Techniques for Conveying Shape Characteristics of Graphs to Blind Users | 10.1145/3491102.3517790 | We increasingly rely on up-to-date, data-driven graphs to understand our environments and make informed decisions. However, many of the methods blind and visually impaired users (BVI) rely on to access data-driven information do not convey important shape-characteristics of graphs, are not refreshable, or are prohibitively expensive. To address these limitations, we introduce two refreshable, 1-DOF audio-haptic interfaces based on haptic cues fundamental to object shape perception. Slide-tone uses finger position with sonification, and Tilt-tone uses fingerpad contact inclination with sonification to provide shape feedback to users. Through formative design workshops (n = 3) and controlled evaluations (n = 8), we found that BVI participants appreciated the additional shape information, versatility, and reinforced understanding these interfaces provide; and that task accuracy was comparable to using interactive tactile graphics or sonification alone. Our research offers insight into the benefits, limitations, and considerations for adopting these haptic cues into a data visualization context. | false | false | [
"Danyang Fan",
"Alexa Fay Siu",
"Wing-Sum Adrienne Law",
"Raymond Ruihong Zhen",
"Sile O'Modhrain",
"Sean Follmer"
] | [] | [] | [] |
CHI | 2,022 | Smooth as Steel Wool: Effects of Visual Stimuli on the Haptic Perception of Roughness in Virtual Reality | 10.1145/3491102.3517454 | Haptic Feedback is essential for lifelike Virtual Reality (VR) experiences. To provide a wide range of matching sensations of being touched or stroked, current approaches typically need large numbers of different physical textures. However, even advanced devices can only accommodate a limited number of textures to remain wearable. Therefore, a better understanding is necessary of how expectations elicited by different visualizations affect haptic perception, to achieve a balance between physical constraints and great variety of matching physical textures. In this work, we conducted an experiment (N=31) assessing how the perception of roughness is affected within VR. We designed a prototype for arm stroking and compared the effects of different visualizations on the perception of physical textures with distinct roughnesses. Additionally, we used the visualizations’ real-world materials, no-haptics and vibrotactile feedback as baselines. As one result, we found that two levels of roughness can be sufficient to convey a realistic illusion. | false | false | [
"Sebastian Günther 0001",
"Julian Rasch",
"Dominik Schön",
"Florian Müller 0003",
"Martin Schmitz",
"Jan Riemann",
"Andrii Matviienko",
"Max Mühlhäuser"
] | [] | [] | [] |
CHI | 2,022 | STRAIDE: A Research Platform for Shape-Changing Spatial Displays based on Actuated Strings | 10.1145/3491102.3517462 | We present STRAIDE, a string-actuated interactive display environment that allows one to explore the promising potential of shape-changing interfaces for casual visualizations. At the core, we envision a platform that spatially levitates elements to create dynamic visual shapes in space. We conceptualize this type of tangible mid-air display and discuss its multifaceted design dimensions. Through a design exploration, we realize a physical research platform with adjustable parameters and modular components. For conveniently designing and implementing novel applications, we provide developer tools ranging from graphical emulators to in-situ augmented reality representations. To demonstrate STRAIDE’s reconfigurability, we further introduce three representative physical setups as a basis for situated applications including ambient notifications, personal smart home controls, and entertainment. They serve as a technical validation, lay the foundations for a discussion with developers that provided valuable insights, and encourage ideas for future usage of this type of appealing interactive installation. | false | false | [
"Severin Engert",
"Konstantin Klamka",
"Andreas Peetz",
"Raimund Dachselt"
] | [] | [] | [] |
CHI | 2,022 | Structure-aware Visualization Retrieval | 10.1145/3491102.3502048 | With the wide usage of data visualizations, a huge number of Scalable Vector Graphic (SVG)-based visualizations have been created and shared online. Accordingly, there has been an increasing interest in exploring how to retrieve perceptually similar visualizations from a large corpus, since it can benefit various downstream applications such as visualization recommendation. Existing methods mainly focus on the visual appearance of visualizations by regarding them as bitmap images. However, the structural information intrinsically existing in SVG-based visualizations is ignored. Such structural information can delineate the spatial and hierarchical relationship among visual elements, and characterize visualizations thoroughly from a new perspective. This paper presents a structure-aware method to advance the performance of visualization retrieval by collectively considering both the visual and structural information. We extensively evaluated our approach through quantitative comparisons, a user study and case studies. The results demonstrate the effectiveness of our approach and its advantages over existing methods. | false | false | [
"Haotian Li 0001",
"Yong Wang 0021",
"Aoyu Wu",
"Huan Wei",
"Huamin Qu"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2202.05960v1",
"icon": "paper"
}
] |
CHI | 2,022 | Supporting Accessible Data Visualization Through Audio Data Narratives | 10.1145/3491102.3517678 | Online data visualizations play an important role in informing public opinion but are often inaccessible to screen reader users. To address the need for accessible data representations on the web that provide direct, multimodal, and up-to-date access to the data, we investigate audio data narratives, which combine textual descriptions and sonification (the mapping of data to non-speech sounds). We conduct two co-design workshops with screen reader users to define design principles that guide the structure, content, and duration of a data narrative. Based on these principles and relevant auditory processing characteristics, we propose a dynamic programming approach to automatically generate an audio data narrative from a given dataset. We evaluate our approach with 16 screen reader users. Findings show that, with audio narratives, users gain significantly more insights from the data. Users describe that data narratives help them better extract and comprehend the information in both the sonification and description. | false | false | [
"Alexa F. Siu",
"Gene S.-H. Kim",
"Sile O'Modhrain",
"Sean Follmer"
] | [] | [] | [] |
CHI | 2,022 | Supporting Data-Driven Basketball Journalism through Interactive Visualization | 10.1145/3491102.3502078 | Basketball writers and journalists report on the sport that millions of fans follow and love. However, the recent emergence of pervasive data about the sport and the growth of new forms of sports analytics is changing writers’ jobs. While these writers seek to leverage the data and analytics to create engaging, data-driven stories, they typically lack the technical background to perform analytics or efficiently explore data. We investigated and analyzed the work and context of basketball writers, interviewed nine stakeholders to understand the challenges from a holistic view. Based on what we learned, we designed and constructed two interactive visualization systems that support rapid and in-depth sports data exploration and sense-making to enhance their articles and reporting. We deployed the systems during the recent NBA playoffs to gather initial feedback. This article describes the visualization design study we conducted, the resulting visualization systems, and what we learned to potentially help basketball writers in the future. | false | false | [
"Yu Fu",
"John T. Stasko"
] | [] | [] | [] |
CHI | 2,022 | Supporting the Contact Tracing Process with WiFi Location Data: Opportunities and Challenges | 10.1145/3491102.3517703 | Contact tracers assist in containing the spread of highly infectious diseases such as COVID-19 by engaging community members who receive a positive test result in order to identify close contacts. Many contact tracers rely on community member’s recall for those identifications, and face limitations such as unreliable memory. To investigate how technology can alleviate this challenge, we developed a visualization tool using de-identified location data sensed from campus WiFi and provided it to contact tracers during mock contact tracing calls. While the visualization allowed contact tracers to find and address inconsistencies due to gaps in community member’s memory, it also introduced inconsistencies such as false-positive and false-negative reports due to imperfect data, and information sharing hesitancy. We suggest design implications for technologies that can better highlight and inform contact tracers of potential areas of inconsistencies, and further present discussion on using imperfect data in decision making. | false | false | [
"Kaely Hall",
"Dong Whi Yoo",
"Wenrui Zhang",
"Mehrab Bin Morshed",
"Vedant Das Swain",
"Gregory D. Abowd",
"Munmun De Choudhury",
"Alex Endert",
"John T. Stasko",
"Jennifer G. Kim"
] | [] | [] | [] |
CHI | 2,022 | Symphony: Composing Interactive Interfaces for Machine Learning | 10.1145/3491102.3502102 | Interfaces for machine learning (ML), information and visualizations about models or data, can help practitioners build robust and responsible ML systems. Despite their benefits, recent studies of ML teams and our interviews with practitioners (n=9) showed that ML interfaces have limited adoption in practice. While existing ML interfaces are effective for specific tasks, they are not designed to be reused, explored, and shared by multiple stakeholders in cross-functional teams. To enable analysis and communication between different ML practitioners, we designed and implemented Symphony, a framework for composing interactive ML interfaces with task-specific, data-driven components that can be used across platforms such as computational notebooks and web dashboards. We developed Symphony through participatory design sessions with 10 teams (n=31), and discuss our findings from deploying Symphony to 3 production ML projects at Apple. Symphony helped ML practitioners discover previously unknown issues like data duplicates and blind spots in models while enabling them to share insights with other stakeholders. | false | false | [
"Alex Bäuerle",
"Ángel Alexander Cabrera",
"Fred Hohman",
"Megan Maher",
"David Koski",
"Xavier Suau",
"Titus Barik",
"Dominik Moritz"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2202.08946v1",
"icon": "paper"
}
] |
CHI | 2,022 | Tangible Globes for Data Visualisation in Augmented Reality | 10.1145/3491102.3517715 | Head-mounted augmented reality (AR) displays allow for the seamless integration of virtual visualisation with contextual tangible references, such as physical (tangible) globes. We explore the design of immersive geospatial data visualisation with AR and tangible globes. We investigate the “tangible-virtual interplay” of tangible globes with virtual data visualisation, and propose a conceptual approach for designing immersive geospatial globes. We demonstrate a set of use cases, such as augmenting a tangible globe with virtual overlays, using a physical globe as a tangible input device for interacting with virtual globes and maps, and linking an augmented globe to an abstract data visualisation. We gathered qualitative feedback from experts about our use case visualisations, and compiled a summary of key takeaways as well as ideas for envisioned future improvements. The proposed design space, example visualisations and lessons learned aim to guide the design of tangible globes for data visualisation in AR. | false | false | [
"Kadek Ananta Satriadi",
"Jim Smiley",
"Barrett Ens",
"Maxime Cordeil",
"Tobias Czauderna",
"Benjamin Lee",
"Ying Yang",
"Tim Dwyer",
"Bernhard Jenny"
] | [] | [] | [] |
CHI | 2,022 | Telling Stories from Computational Notebooks: AI-Assisted Presentation Slides Creation for Presenting Data Science Work | 10.1145/3491102.3517615 | Creating presentation slides is a critical but time-consuming task for data scientists. While researchers have proposed many AI techniques to lift data scientists’ burden on data preparation and model selection, few have targeted the presentation creation task. Based on the needs identified from a formative study, this paper presents NB2Slides, an AI system that facilitates users to compose presentations of their data science work. NB2Slides uses deep learning methods as well as example-based prompts to generate slides from computational notebooks, and takes users’ input (e.g., audience background) to structure the slides. NB2Slides also provides an interactive visualization that links the slides with the notebook to help users further edit the slides. A follow-up user evaluation with 12 data scientists shows that participants believed NB2Slides can improve efficiency and reduce the complexity of creating slides. Yet, participants questioned the future of full automation and suggested a human-AI collaboration paradigm. | false | false | [
"Chengbo Zheng",
"Dakuo Wang",
"April Yi Wang",
"Xiaojuan Ma"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2203.11085v3",
"icon": "paper"
}
] |
CHI | 2,022 | The Pattern is in the Details: An Evaluation of Interaction Techniques for Locating, Searching, and Contextualizing Details in Multivariate Matrix Visualizations | 10.1145/3491102.3517673 | Matrix visualizations are widely used to display large-scale network, tabular, set, or sequential data. They typically only encode a single value per cell, e.g., through color. However, this can greatly limit the visualizations’ utility when exploring multivariate data, where each cell represents a data point with multiple values (referred to as details). Three well-established interaction approaches can be applicable in multivariate matrix visualizations (or MMV): focus+context, pan&zoom, and overview+detail. However, there is little empirical knowledge of how these approaches compare in exploring MMV. We report on two studies comparing them for locating, searching, and contextualizing details in MMV. We first compared four focus+context techniques and found that the fisheye lens overall outperformed the others. We then compared the fisheye lens, to pan&zoom and overview+detail. We found that pan&zoom was faster in locating and searching details, and as good as overview+detail in contextualizing details. | false | false | [
"Yalong Yang 0001",
"Wenyu Xia",
"Fritz Lekschas",
"Carolina Nobre",
"Robert Krüger",
"Hanspeter Pfister"
] | [] | [] | [] |
CHI | 2,022 | Understanding and Designing Avatar Biosignal Visualizations for Social Virtual Reality Entertainment | 10.1145/3491102.3517451 | Visualizing biosignals can be important for social Virtual Reality (VR), where avatar non-verbal cues are missing. While several biosignal representations exist, designing effective visualizations and understanding user perceptions within social VR entertainment remains unclear. We adopt a mixed-methods approach to design biosignals for social VR entertainment. Using survey (N=54), context-mapping (N=6), and co-design (N=6) methods, we derive four visualizations. We then ran a within-subjects study (N=32) in a virtual jazz-bar to investigate how heart rate (HR) and breathing rate (BR) visualizations, and signal rate, influence perceived avatar arousal, user distraction, and preferences. Findings show that skeuomorphic visualizations for both biosignals allow differentiable arousal inference; skeuomorphic and particles were least distracting for HR, whereas all were similarly distracting for BR; biosignal perceptions often depend on avatar relations, entertainment type, and emotion inference of avatars versus spaces. We contribute HR and BR visualizations, and considerations for designing social VR entertainment biosignal visualizations. | false | false | [
"Sueyoon Lee",
"Abdallah El Ali",
"Maarten W. A. Wijntjes",
"Pablo César"
] | [] | [] | [] |
CHI | 2,022 | Understanding Visual Investigation Patterns Through Digital "Field" Observations | 10.1145/3491102.3517445 | An extensive body of work in visual analytics has examined how users conduct analyses in scientific and academic settings, identifying and categorizing user goals and the actions they undertake to achieve them. However, most of this work has studied the analysis process in simulated or isolated environments, leading to a gap in connecting these findings to large-scale business (enterprise) contexts, where visual analysis is most needed to make sense of the large amounts of data being generated. In this work, we conducted digital ”field” observations to understand how users conduct visual analyses in an enterprise setting, where they operate within a large ecosystem of systems and people. From these observations, we identified four common objectives, six recurring visual investigation patterns, and five emergent themes. We also performed a quantitative analysis of logs over 2530 user sessions from a second visual analysis product to validate that our patterns were not product-specific. | false | false | [
"Irene Rae",
"Feng Zhou",
"Martin Bilsing",
"Philipp Bunge"
] | [] | [] | [] |
CHI | 2,022 | VisGuide: User-Oriented Recommendations for Data Event Extraction | 10.1145/3491102.3517648 | Data exploration systems have become popular tools with which data analysts and others can explore raw data and organize their observations. However, users of such systems who are unfamiliar with their datasets face several challenges when trying to extract data events of interest to them. Those challenges include progressively discovering informative charts, organizing them into a logical order to depict a meaningful fact, and arranging one or more facts to illustrate a data event. To alleviate them, we propose VisGuide—a data exploration system that generates personalized recommendations to aid users’ discovery of data events in breadth and depth by incrementally learning their data exploration preferences and recommending meaningful charts tailored to them. As well as user preferences, VisGuide’s recommendations simultaneously consider sequence organization and chart presentation. We conducted two user studies to evaluate 1) the usability of VisGuide and 2) user satisfaction with its recommendation system. The results of those studies indicate that VisGuide can effectively help users create coherent and user-oriented visualization trees that represent meaningful data events. | false | false | [
"Yu-Rong Cao",
"Xiao-Han Li",
"Jia-Yu Pan",
"Wen-Chieh Lin"
] | [] | [] | [] |
CHI | 2,022 | Visualization Accessibility in the Wild: Challenges Faced by Visualization Designers | 10.1145/3491102.3517630 | Data visualizations are now widely used across many disciplines. However, many of them are not easily accessible for visually impaired people. In this work, we use three-staged mixed methods to understand the current practice of accessible visualization design for visually impaired people. We analyzed 95 visualizations from various venues to inspect how they are made inaccessible. To understand the rationale and context behind the design choices, we also conducted surveys with 144 practitioners in the U.S. and follow-up interviews with ten selected survey participants. Our findings include the difficulties of handling modern complex and interactive visualizations and the lack of accessibility support from visualization tools in addition to personal and organizational factors making it challenging to perform accessible design practices. | false | false | [
"Shakila Cherise S. Joyner",
"Amalia Riegelhuth",
"Kathleen Garrity",
"Yea-Seul Kim",
"Nam Wook Kim"
] | [] | [] | [] |
CHI | 2,022 | Visualizing Instructions for Physical Training: Exploring Visual Cues to Support Movement Learning from Instructional Videos | 10.1145/3491102.3517735 | Instructional videos for physical training have gained popularity in recent years among sport and fitness practitioners, due to the proliferation of affordable and ubiquitous forms of online training. Yet, learning movement this way poses challenges: lack of feedback and personalised instructions, and having to rely on personal imitation capacity to learn movements. We address some of these challenges by exploring visual cues’ potential to help people imitate movements from instructional videos. With a Research through Design approach, focused on strength training, we augmented an instructional video with different sets of visual cues: directional cues, body highlights, and metaphorical visualizations. We tested each set with ten practitioners over three recorded sessions, with follow-up interviews. Through thematic analysis, we derived insights on the effect of each set of cues for supporting movement learning. Finally, we generated design takeaways to inform future HCI work on visual cues for instructional training videos. | false | false | [
"Alessandra Semeraro",
"Laia Turmo Vidal"
] | [] | [] | [] |
CHI | 2,022 | Visualizing Urban Accessibility: Investigating Multi-Stakeholder Perspectives through a Map-based Design Probe Study | 10.1145/3491102.3517460 | Urban accessibility assessments are challenging: they involve varied stakeholders across decision-making contexts while serving a diverse population of people with disabilities. To better support urban accessibility assessment using data visualizations, we conducted a three-part interview study with 25 participants across five stakeholder groups using map visualization probes. We present a multi-stakeholder analysis of visualization needs and sensemaking processes to explore how interactive visualizations can support stakeholder decision making. In particular, we elaborate how stakeholders’ varying levels of familiarity with accessibility, geospatial analysis, and specific geographic locations influences their sensemaking needs. We then contribute 10 design considerations for geovisual analytic tools for urban accessibility communication, planning, policymaking, and advocacy. | false | false | [
"Manaswi Saha",
"Siddhant Patil",
"Emily Cho",
"Evie Yu-Yen Cheng",
"Chris Horng",
"Devanshi Chauhan",
"Rachel Kangas",
"Richard McGovern",
"Anthony Li",
"Jeffrey Heer",
"Jon E. Froehlich"
] | [] | [] | [] |
CHI | 2,022 | VoxLens: Making Online Data Visualizations Accessible with an Interactive JavaScript Plug-In | 10.1145/3491102.3517431 | JavaScript visualization libraries are widely used to create online data visualizations but provide limited access to their information for screen-reader users. Building on prior findings about the experiences of screen-reader users with online data visualizations, we present VoxLens, an open-source JavaScript plug-in that—with a single line of code—improves the accessibility of online data visualizations for screen-reader users using a multi-modal approach. Specifically, VoxLens enables screen-reader users to obtain a holistic summary of presented information, play sonified versions of the data, and interact with visualizations in a “drill-down” manner using voice-activated commands. Through task-based experiments with 21 screen-reader users, we show that VoxLens improves the accuracy of information extraction and interaction time by 122% and 36%, respectively, over existing conventional interaction with online data visualizations. Our interviews with screen-reader users suggest that VoxLens is a “game-changer” in making online data visualizations accessible to screen-reader users, saving them time and effort. | false | false | [
"Ather Sharif",
"Olivia H. Wang",
"Alida T. Muongchan",
"Katharina Reinecke",
"Jacob O. Wobbrock"
] | [] | [] | [] |
Vis | 2,021 | A Critical Reflection on Visualization Research: Where Do Decision Making Tasks Hide? | 10.1109/TVCG.2021.3114813 | It has been widely suggested that a key goal of visualization systems is to assist decision making, but is this true? We conduct a critical investigation on whether the activity of decision making is indeed central to the visualization domain. By approaching decision making as a user task, we explore the degree to which decision tasks are evident in visualization research and user studies. Our analysis suggests that decision tasks are not commonly found in current visualization task taxonomies and that the visualization field has yet to leverage guidance from decision theory domains on how to study such tasks. We further found that the majority of visualizations addressing decision making were not evaluated based on their ability to assist decision tasks. Finally, to help expand the impact of visual analytics in organizational as well as casual decision making activities, we initiate a research agenda on how decision making assistance could be elevated throughout visualization research. | false | false | [
"Evanthia Dimara",
"John T. Stasko"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/UDfr73S95YU",
"icon": "video"
}
] |
Vis | 2,021 | A Design Space for Applying the Freytag's Pyramid Structure to Data Stories | 10.1109/TVCG.2021.3114774 | Data stories integrate compelling visual content to communicate data insights in the form of narratives. The narrative structure of a data story serves as the backbone that determines its expressiveness, and it can largely influence how audiences perceive the insights. Freytag's Pyramid is a classic narrative structure that has been widely used in film and literature. While there are continuous recommendations and discussions about applying Freytag's Pyramid to data stories, little systematic and practical guidance is available on how to use Freytag's Pyramid for creating structured data stories. To bridge this gap, we examined how existing practices apply Freytag's Pyramid by analyzing stories extracted from 103 data videos. Based on our findings, we proposed a design space of narrative patterns, data flows, and visual communications to provide practical guidance on achieving narrative intents, organizing data facts, and selecting visual design techniques through story creation. We evaluated the proposed design space through a workshop with 25 participants. Results show that our design space provides a clear framework for rapid storyboarding of data stories with Freytag's Pyramid. | false | false | [
"Leni Yang",
"Xian Xu",
"Xingyu Lan",
"Ziyan Liu",
"Shunan Guo",
"Yang Shi 0007",
"Huamin Qu",
"Nan Cao"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/l7mmyofcD8A",
"icon": "video"
}
] |
Vis | 2,021 | A Domain-Oblivious Approach for Learning Concise Representations of Filtered Topological Spaces for Clustering | 10.1109/TVCG.2021.3114872 | Persistence diagrams have been widely used to quantify the underlying features of filtered topological spaces in data visualization. In many applications, computing distances between diagrams is essential; however, computing these distances has been challenging due to the computational cost. In this paper, we propose a persistence diagram hashing framework that learns a binary code representation of persistence diagrams, which allows for fast computation of distances. This framework is built upon a generative adversarial network (GAN) with a diagram distance loss function to steer the learning process. Instead of using standard representations, we hash diagrams into binary codes, which have natural advantages in large-scale tasks. The training of this model is domain-oblivious in that it can be computed purely from synthetic, randomly created diagrams. As a consequence, our proposed method is directly applicable to various datasets without the need for retraining the model. These binary codes, when compared using fast Hamming distance, better maintain topological similarity properties between datasets than other vectorized representations. To evaluate this method, we apply our framework to the problem of diagram clustering and we compare the quality and performance of our approach to the state-of-the-art. In addition, we show the scalability of our approach on a dataset with 10k persistence diagrams, which is not possible with current techniques. Moreover, our experimental results demonstrate that our method is significantly faster with the potential of less memory usage, while retaining comparable or better quality comparisons. | false | false | [
"Yu Qin",
"Brittany Terese Fasy",
"Carola Wenk",
"Brian Summa"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2105.12208v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/7uGtGV1G7qc",
"icon": "video"
}
] |
Vis | 2,021 | A Memory Efficient Encoding for Ray Tracing Large Unstructured Data | 10.1109/TVCG.2021.3114869 | In theory, efficient and high-quality rendering of unstructured data should greatly benefit from modern GPUs, but in practice, GPUs are often limited by the large amount of memory that large meshes require for element representation and for sample reconstruction acceleration structures. We describe a memory-optimized encoding for large unstructured meshes that efficiently encodes both the unstructured mesh and corresponding sample reconstruction acceleration structure, while still allowing for fast random-access sampling as required for rendering. We demonstrate that for large data our encoding allows for rendering even the 2.9 billion element Mars Lander on a single off-the-shelf GPU, and the largest 6.3 billion version on a pair of such GPUs. | false | false | [
"Ingo Wald",
"Nathan Morrical",
"Stefan Zellmann"
] | [] | [] | [] |
Vis | 2,021 | A Mixed-Initiative Approach to Reusing Infographic Charts | 10.1109/TVCG.2021.3114856 | Infographic bar charts have been widely adopted for communicating numerical information because of their attractiveness and memorability. However, these infographics are often created manually with general tools, such as PowerPoint and Adobe Illustrator, and merely composed of primitive visual elements, such as text blocks and shapes. With the absence of chart models, updating or reusing these infographics requires tedious and error-prone manual edits. In this paper, we propose a mixed-initiative approach to mitigate this pain point. On one hand, machines are adopted to perform precise and trivial operations, such as mapping numerical values to shape attributes and aligning shapes. On the other hand, we rely on humans to perform subjective and creative tasks, such as changing embellishments or approving the edits made by machines. We encapsulate our technique in a PowerPoint add-in prototype and demonstrate the effectiveness by applying our technique on a diverse set of infographic bar chart examples. | false | false | [
"Weiwei Cui",
"Jinpeng Wang",
"He Huang",
"Yun Wang 0012",
"Chin-Yew Lin",
"Haidong Zhang",
"Dongmei Zhang 0001"
] | [] | [] | [] |
Vis | 2,021 | A Visualization Approach for Monitoring Order Processing in E-Commerce Warehouse | 10.1109/TVCG.2021.3114878 | The efficiency of warehouses is vital to e-commerce. Fast order processing at the warehouses ensures timely deliveries and improves customer satisfaction. However, monitoring, analyzing, and manipulating order processing in the warehouses in real time are challenging for traditional methods due to the sheer volume of incoming orders, the fuzzy definition of delayed order patterns, and the complex decision-making of order handling priorities. In this paper, we adopt a data-driven approach and propose OrderMonitor, a visual analytics system that assists warehouse managers in analyzing and improving order processing efficiency in real time based on streaming warehouse event data. Specifically, the order processing pipeline is visualized with a novel pipeline design based on the sedimentation metaphor to facilitate real-time order monitoring and suggest potentially abnormal orders. We also design a novel visualization that depicts order timelines based on the Gantt charts and Marey's graphs. Such a visualization helps the managers gain insights into the performance of order processing and find major blockers for delayed orders. Furthermore, an evaluating view is provided to assist users in inspecting order details and assigning priorities to improve the processing performance. The effectiveness of OrderMonitor is evaluated with two case studies on a real-world warehouse dataset. | false | false | [
"Junxiu Tang",
"Yuhua Zhou",
"Tan Tang",
"Di Weng",
"Boyang Xie",
"Lingyun Yu 0001",
"Huaqiang Zhang",
"Yingcai Wu"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/GZCKFCVMYFs",
"icon": "video"
}
] |
Vis | 2,021 | Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content | 10.1109/TVCG.2021.3114770 | Natural language descriptions sometimes accompany visualizations to better communicate and contextualize their insights, and to improve their accessibility for readers with disabilities. However, it is difficult to evaluate the usefulness of these descriptions, and how effectively they improve access to meaningful information, because we have little understanding of the semantic content they convey, and how different readers receive this content. In response, we introduce a conceptual model for the semantic content conveyed by natural language descriptions of visualizations. Developed through a grounded theory analysis of 2,147 sentences, our model spans four levels of semantic content: enumerating visualization construction properties (e.g., marks and encodings); reporting statistical concepts and relations (e.g., extrema and correlations); identifying perceptual and cognitive phenomena (e.g., complex trends and patterns); and elucidating domain-specific insights (e.g., social and political context). To demonstrate how our model can be applied to evaluate the effectiveness of visualization descriptions, we conduct a mixed-methods evaluation with 30 blind and 90 sighted readers, and find that these reader groups differ significantly on which semantic content they rank as most useful. Together, our model and findings suggest that access to meaningful information is strongly reader-specific, and that research in automatic visualization captioning should orient toward descriptions that more richly communicate overall trends and statistics, sensitive to reader preferences. Our work further opens a space of research on natural language as a data interface coequal with visualization. | false | false | [
"Alan Lundgard",
"Arvind Satyanarayan"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2110.04406v1",
"icon": "paper"
}
] |
Vis | 2,021 | AffectiveTDA: Using Topological Data Analysis to Improve Analysis and Explainability in Affective Computing | 10.1109/TVCG.2021.3114784 | We present an approach utilizing Topological Data Analysis to study the structure of face poses used in affective computing, i.e., the process of recognizing human emotion. The approach uses a conditional comparison of different emotions, both respective and irrespective of time, with multiple topological distance metrics, dimension reduction techniques, and face subsections (e.g., eyes, nose, mouth, etc.). The results confirm that our topology-based approach captures known patterns, distinctions between emotions, and distinctions between individuals, which is an important step towards more robust and explainable emotion recognition by machines. | false | false | [
"Hamza Elhamdadi",
"Shaun J. Canavan",
"Paul Rosen 0001"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2107.08573v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/KgQhn8BgztQ",
"icon": "video"
}
] |
Vis | 2,021 | An Automated Approach to Reasoning About Task-Oriented Insights in Responsive Visualization | 10.1109/TVCG.2021.3114782 | Authors often transform a large screen visualization for smaller displays through rescaling, aggregation and other techniques when creating visualizations for both desktop and mobile devices (i.e., responsive visualization). However, transformations can alter relationships or patterns implied by the large screen view, requiring authors to reason carefully about what information to preserve while adjusting their design for the smaller display. We propose an automated approach to approximating the loss of support for task-oriented visualization insights (identification, comparison, and trend) in responsive transformation of a source visualization. We operationalize identification, comparison, and trend loss as objective functions calculated by comparing properties of the rendered source visualization to each realized target (small screen) visualization. To evaluate the utility of our approach, we train machine learning models on human ranked small screen alternative visualizations across a set of source visualizations. We find that our approach achieves an accuracy of 84% (random forest model) in ranking visualizations. We demonstrate this approach in a prototype responsive visualization recommender that enumerates responsive transformations using Answer Set Programming and evaluates the preservation of task-oriented insights using our loss measures. We discuss implications of our approach for the development of automated and semi-automated responsive visualization recommendation. | false | false | [
"Hyeok Kim",
"Ryan A. Rossi",
"Abhraneel Sarma",
"Dominik Moritz",
"Jessica Hullman"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2107.08141v1",
"icon": "paper"
}
] |
Vis | 2,021 | An Efficient Dual-Hierarchy t-SNE Minimization | 10.1109/TVCG.2021.3114817 | t-distributed Stochastic Neighbour Embedding (t-SNE) has become a standard for exploratory data analysis, as it is capable of revealing clusters even in complex data while requiring minimal user input. While its run-time complexity limited it to small datasets in the past, recent efforts improved upon the expensive similarity computations and the previously quadratic minimization. Nevertheless, t-SNE still has high runtime and memory costs when operating on millions of points. We present a novel method for executing the t-SNE minimization. While our method overall retains a linear runtime complexity, we obtain a significant performance increase in the most expensive part of the minimization. We achieve a significant improvement without a noticeable decrease in accuracy even when targeting a 3D embedding. Our method constructs a pair of spatial hierarchies over the embedding, which are simultaneously traversed to approximate many N-body interactions at once. We demonstrate an efficient GPGPU implementation and evaluate its performance against state-of-the-art methods on a variety of datasets. | false | false | [
"Mark van de Ruit",
"Markus Billeter",
"Elmar Eisemann"
] | [] | [] | [] |
Vis | 2,021 | An Evaluation-Focused Framework for Visualization Recommendation Algorithms | 10.1109/TVCG.2021.3114814 | Although we have seen a proliferation of algorithms for recommending visualizations, these algorithms are rarely compared with one another, making it difficult to ascertain which algorithm is best for a given visual analysis scenario. Though several formal frameworks have been proposed in response, we believe this issue persists because visualization recommendation algorithms are inadequately specified from an evaluation perspective. In this paper, we propose an evaluation-focused framework to contextualize and compare a broad range of visualization recommendation algorithms. We present the structure of our framework, where algorithms are specified using three components: (1) a graph representing the full space of possible visualization designs, (2) the method used to traverse the graph for potential candidates for recommendation, and (3) an oracle used to rank candidate designs. To demonstrate how our framework guides the formal comparison of algorithmic performance, we not only theoretically compare five existing representative recommendation algorithms, but also empirically compare four new algorithms generated based on our findings from the theoretical comparison. Our results show that these algorithms behave similarly in terms of user performance, highlighting the need for more rigorous formal comparisons of recommendation algorithms to further clarify their benefits in various analysis scenarios. | false | false | [
"Zehua Zeng",
"Phoebe Moh",
"Fan Du",
"Jane Hoffswell",
"Tak Yeon Lee",
"Sana Malik",
"Eunyee Koh",
"Leilani Battle"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2109.02706v1",
"icon": "paper"
}
] |
Vis | 2,021 | Attribute-based Explanation of Non-Linear Embeddings of High-Dimensional Data | 10.1109/TVCG.2021.3114870 | Embeddings of high-dimensional data are widely used to explore data, to verify analysis results, and to communicate information. Their explanation, in particular with respect to the input attributes, is often difficult. With linear projections like PCA the axes can still be annotated meaningfully. With non-linear projections this is no longer possible and alternative strategies such as attribute-based color coding are required. In this paper, we review existing augmentation techniques and discuss their limitations. We present the Non-Linear Embeddings Surveyor (NoLiES) that combines a novel augmentation strategy for projected data (rangesets) with interactive analysis in a small multiples setting. Rangesets use a set-based visualization approach for binned attribute values that enables the user to quickly observe structure and detect outliers. We detail the link between algebraic topology and rangesets and demonstrate the utility of NoLiES in case studies with various challenges (complex attribute value distribution, many attributes, many data points) and a real-world application to understand latent features of matrix completion in thermodynamics. | false | false | [
"Jan-Tobias Sohns",
"Michaela Schmitt",
"Fabian Jirasek",
"Hans Hasse",
"Heike Leitte"
] | [] | [] | [] |
Vis | 2,021 | Augmenting Sports Videos with VisCommentator | 10.1109/TVCG.2021.3114806 | Visualizing data in sports videos is gaining traction in sports analytics, given its ability to communicate insights and explicate player strategies engagingly. However, augmenting sports videos with such data visualizations is challenging, especially for sports analysts, as it requires considerable expertise in video editing. To ease the creation process, we present a design space that characterizes augmented sports videos at an element-level (what the constituents are) and clip-level (how those constituents are organized). We do so by systematically reviewing 233 examples of augmented sports videos collected from TV channels, teams, and leagues. The design space guides selection of data insights and visualizations for various purposes. Informed by the design space and close collaboration with domain experts, we design VisCommentator, a fast prototyping tool, to ease the creation of augmented table tennis videos by leveraging machine learning-based data extractors and design space-based visualization recommendations. With VisCommentator, sports analysts can create an augmented video by selecting the data to visualize instead of manually drawing the graphical marks. Our system can be generalized to other racket sports (e.g., tennis, badminton) once the underlying datasets and models are available. A user study with seven domain experts shows high satisfaction with our system, confirms that the participants can reproduce augmented sports videos in a short period, and provides insightful implications into future improvements and opportunities. | false | false | [
"Zhutian Chen",
"Shuainan Ye",
"Xiangtong Chu",
"Haijun Xia",
"Hui Zhang 0051",
"Huamin Qu",
"Yingcai Wu"
] | [
"HM"
] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2306.13491v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/XyvyPYvd54k",
"icon": "video"
}
] |
Vis | 2,021 | Automatic Polygon Layout for Primal-Dual Visualization of Hypergraphs | 10.1109/TVCG.2021.3114759 | N-ary relationships, which relate $N$ entities where $N$ is not necessarily two, can be visually represented as polygons whose vertices are the entities of the relationships. Manually generating a high-quality layout using this representation is labor-intensive. In this paper, we provide an automatic polygon layout generation algorithm for the visualization of N-ary relationships. At the core of our algorithm is a set of objective functions motivated by a number of design principles that we have identified. These objective functions are then used in an optimization framework that we develop to achieve high-quality layouts. Recognizing the duality between entities and relationships in the data, we provide a second visualization in which the roles of entities and relationships in the original data are reversed. This can lead to additional insight about the data. Furthermore, we enhance our framework for a joint optimization on the primal layout (original data) and the dual layout (where the roles of entities and relationships are reversed). This allows users to inspect their data using two complementary views. We apply our visualization approach to a number of datasets that include co-authorship data and social contact pattern data. | false | false | [
"Botong Qu",
"Eugene Zhang",
"Yue Zhang 0009"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2108.00671v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/M9siftg-vIc",
"icon": "video"
}
] |
Vis | 2,021 | Causal Support: Modeling Causal Inferences with Visualizations | 10.1109/TVCG.2021.3114824 | Analysts often make visual causal inferences about possible data-generating models. However, visual analytics (VA) software tends to leave these models implicit in the mind of the analyst, which casts doubt on the statistical validity of informal visual “insights”. We formally evaluate the quality of causal inferences from visualizations by adopting causal support—a Bayesian cognition model that learns the probability of alternative causal explanations given some data—as a normative benchmark for causal inferences. We contribute two experiments assessing how well crowdworkers can detect (1) a treatment effect and (2) a confounding relationship. We find that chart users' causal inferences tend to be insensitive to sample size such that they deviate from our normative benchmark. While interactively cross-filtering data in visualizations can improve sensitivity, on average users do not perform reliably better with common visualizations than they do with textual contingency tables. These experiments demonstrate the utility of causal support as an evaluation framework for inferences in VA and point to opportunities to make analysts' mental models more explicit in VA software. | false | false | [
"Alex Kale",
"Yifan Wu",
"Jessica Hullman"
] | [
"HM"
] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2107.13485v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/Tl6gXHw-EvU",
"icon": "video"
}
] |
Vis | 2,021 | Communicating Visualizations without Visuals: Investigation of Visualization Alternative Text for People with Visual Impairments | 10.1109/TVCG.2021.3114846 | Alternative text is critical in communicating graphics to people who are blind or have low vision. Especially for graphics that contain rich information, such as visualizations, poorly written or an absence of alternative texts can worsen the information access inequality for people with visual impairments. In this work, we consolidate existing guidelines and survey current practices to inspect to what extent current practices and recommendations are aligned. Then, to gain more insight into what people want in visualization alternative texts, we interviewed 22 people with visual impairments regarding their experience with visualizations and their information needs in alternative texts. The study findings suggest that participants actively try to construct an image of visualizations in their head while listening to alternative texts and wish to carry out visualization tasks (e.g., retrieve specific values) as sighted viewers would. The study also provides ample support for the need to reference the underlying data instead of visual elements to reduce users' cognitive burden. Informed by the study, we provide a set of recommendations to compose an informative alternative text. | false | false | [
"Crescentia Jung",
"Shubham Mehta",
"Atharva Kulkarni",
"Yuhang Zhao 0001",
"Yea-Seul Kim"
] | [
"HM"
] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2108.03657v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/JsS1FfH7J8s",
"icon": "video"
}
] |