| Conference | Year | Title | DOI | Abstract | Accessible | Early | AuthorNames-Deduped | Award | Resources | ResourceLinks |
|---|---|---|---|---|---|---|---|---|---|---|
| EuroVis | 2020 | Survey on Individual Differences in Visualization | 10.1111/cgf.14033 | Developments in data visualization research have enabled visualization systems to achieve great general usability and application across a variety of domains. These advancements have improved not only people's understanding of data, but also the general understanding of people themselves, and how they interact with visualization systems. In particular, researchers have gradually come to recognize the deficiency of having one‐size‐fits‐all visualization interfaces, as well as the significance of individual differences in the use of data visualization systems. Unfortunately, the absence of comprehensive surveys of the existing literature impedes the development of this research. In this paper, we review the research perspectives, as well as the personality traits and cognitive abilities, visualizations, tasks, and measures investigated in the existing literature. We aim to provide a detailed summary of existing scholarship, produce evidence‐based reviews, and spur future inquiry. | false | false | ["Zhengliang Liu", "R. Jordan Crouser", "Alvitta Ottley"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2002.07950v2", "icon": "paper"}] |
| EuroVis | 2020 | Survey on the Analysis of User Interactions and Visualization Provenance | 10.1111/cgf.14035 | There is fast‐growing literature on provenance‐related research, covering aspects such as its theoretical framework, use cases, and techniques for capturing, visualizing, and analyzing provenance data. As a result, there is an increasing need to identify and taxonomize the existing scholarship. Such an organization of the research landscape will provide a complete picture of the current state of inquiry and identify knowledge gaps or possible avenues for further investigation. In this STAR, we aim to produce a comprehensive survey of work in the data visualization and visual analytics field that focuses on the analysis of user interaction and provenance data. We structure our survey around three primary questions: (1) WHY analyze provenance data, (2) WHAT provenance data to encode and how to encode it, and (3) HOW to analyze provenance data. A concluding discussion provides evidence‐based guidelines and highlights concrete opportunities for future development in this emerging area. The survey and papers discussed can be explored online interactively at https://provenance-survey.caleydo.org. | false | false | ["Kai Xu 0003", "Alvitta Ottley", "Conny Walchshofer", "Marc Streit", "Remco Chang", "John E. Wenskovitch"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "https://osf.io/jux76", "icon": "paper"}] |
| EuroVis | 2020 | The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations | 10.1111/cgf.14034 | Machine learning (ML) models are nowadays used in complex applications in various domains, such as medicine, bioinformatics, and other sciences. Due to their black box nature, however, it may sometimes be hard to understand and trust the results they provide. This has increased the demand for reliable visualization tools related to enhancing trust in ML models, which has become a prominent topic of research in the visualization community over the past decades. To provide an overview and present the frontiers of current research on the topic, we present a State‐of‐the‐Art Report (STAR) on enhancing trust in ML models with the use of interactive visualization. We define and describe the background of the topic, introduce a categorization for visualization techniques that aim to accomplish this goal, and discuss insights and opportunities for future research directions. Among our contributions is a categorization of trust against different facets of interactive ML, expanded and improved from previous research. Our results are investigated from different analytical perspectives: (a) providing a statistical overview, (b) summarizing key findings, (c) performing topic analyses, and (d) exploring the data sets used in the individual papers, all with the support of an interactive web‐based survey browser. We intend this survey to be beneficial for visualization researchers whose interests involve making ML models more trustworthy, as well as researchers and practitioners from other disciplines in their search for effective visualization techniques suitable for solving their tasks with confidence and conveying meaning to their data. | false | false | ["Angelos Chatzimparmpas", "Rafael Messias Martins", "Ilir Jusufi", "Kostiantyn Kucher", "Fabrice Rossi", "Andreas Kerren"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2212.11737v1", "icon": "paper"}] |
| EuroVis | 2020 | The State of the Art in Map-Like Visualization | 10.1111/cgf.14031 | Cartographic maps have been shown to provide cognitive benefits when interpreting data in relation to a geographic location. In visualization, the term map‐like describes techniques that incorporate characteristics of cartographic maps in their representation of abstract data. However, the field of map‐like visualization is vast and currently lacks a clear classification of the existing techniques. Moreover, choosing the right technique to support a particular visualization task is further complicated, as techniques are scattered across different domains, with each considering different characteristics as map‐like. In this paper, we give an overview of the literature on map‐like visualization and provide a hierarchical classification of existing techniques along two general perspectives: imitation and schematization of cartographic maps. Each perspective is further divided into four principal categories that group common map‐like techniques along the visual primitives they affect. We further discuss this classification from a task‐centered view and highlight open research questions. | false | false | ["Marius Hogräfer", "Magnus Heitzler", "Hans-Jörg Schulz"] | [] | [] | [] |
| EuroVis | 2020 | Understanding the Design Space and Authoring Paradigms for Animated Data Graphics | 10.1111/cgf.13974 | Creating expressive animated data graphics often requires designers to possess highly specialized programming skills. Alternatively, the use of direct manipulation tools is popular among animation designers, but these tools have limited support for generating graphics driven by data. Our goal is to inform the design of next‐generation animated data graphic authoring tools. To understand the composition of animated data graphics, we survey real‐world examples and contribute a description of the design space. We characterize animated transitions based on object, graphic, data, and timing dimensions. We synthesize the primitives from the object, graphic, and data dimensions as a set of 10 transition types, and describe how timing primitives compose broader pacing techniques. We then conduct an ideation study that uncovers how people approach animation creation with three authoring paradigms: keyframe animation, procedural animation, and presets & templates. Our analysis shows that designers have an overall preference for keyframe animation. However, we find evidence that an authoring tool should combine these three paradigms as designers’ preferences depend on the characteristics of the animated transition design and the authoring task. Based on these findings, we contribute guidelines and design considerations for developing future animated data graphic authoring tools. | false | false | ["John Thompson 0002", "Zhicheng Liu", "Wilmot Li", "John T. Stasko"] | [] | [] | [] |
| EuroVis | 2020 | v-plots: Designing Hybrid Charts for the Comparative Analysis of Data Distributions | 10.1111/cgf.14002 | Comparing data distributions is a core focus in descriptive statistics, and part of most data analysis processes across disciplines. In particular, comparing distributions entails numerous tasks, ranging from identifying global distribution properties, comparing aggregated statistics (e.g., mean values), to the local inspection of single cases. While various specialized visualizations have been proposed (e.g., box plots, histograms, or violin plots), they are not usually designed to support more than a few tasks, unless they are combined. In this paper, we present the v‐plot designer, a technique for authoring custom hybrid charts, combining mirrored bar charts, difference encodings, and violin‐style plots. v‐plots are customizable and enable the simultaneous comparison of data distributions on global, local, and aggregation levels. Our system design is grounded in an expert survey that compares and evaluates 20 common visualization techniques to derive guidelines for the task‐driven selection of appropriate visualizations. This knowledge externalization step allowed us to develop a guiding wizard that can tailor v‐plots to individual tasks and particular distribution properties. Finally, we confirm the usefulness of our system design and the user‐guiding process by measuring the fitness for purpose and applicability in a second study with four domain and statistic experts. | false | false | ["Michael Blumenschein", "Luka J. Debbeler", "Nadine C. Lages", "Britta Renner", "Daniel A. Keim", "Mennatallah El-Assady"] | [] | [] | [] |
| EuroVis | 2020 | VA-TRAC: Geospatial Trajectory Analysis for Monitoring, Identification, and Verification in Fishing Vessel Operations | 10.1111/cgf.13966 | In order to ensure sustainability, fishing operations are governed by many rules and regulations that restrict the use of certain techniques and equipment, specify the species and size of fish that can be harvested, and regulate commercial activities based on licensing schemes. As the world's second largest exporter of fish and seafood products, Norway invests a significant amount of effort into maintaining natural ecosystem dynamics by ensuring compliance with its constantly evolving science‐based regulatory body. This paper introduces VA‐TRAC, a geovisual analytics application developed in collaboration with the Norwegian Directorate of Fisheries in order to address this complex task. Our approach uses automatic methods to identify possible catch operations based on fishing vessel trajectories, embedded in an interactive web‐based visual interface used to explore the results, compare them with licensing information, and incorporate the analysts’ domain knowledge into the decision making process. We present a data and task analysis based on a close collaboration with domain experts, and the design and implementation of VA‐TRAC to address the identified requirements. | false | false | ["S. Storm-Furru", "Stefan Bruckner"] | [] | [] | [] |
| EuroVis | 2020 | Visual Analysis of the Finite-Time Lyapunov Exponent | 10.1111/cgf.13984 | In this paper, we present an integrated visual analytics approach to support the parametrization and exploration of flow visualization based on the finite‐time Lyapunov exponent. Such visualization of time‐dependent flow faces various challenges, including the choice of appropriate advection times, temporal regions of interest, and spatial resolution. Our approach eases these challenges by providing the user with context by means of parametric aggregations, with support and guidance for a more directed exploration, and with a set of derived measures for better qualitative assessment. We demonstrate the utility of our approach with examples from computational fluid dynamics and time‐dependent dynamical systems. | false | false | ["Antoni Sagristà", "Stefan Jordan", "Filip Sadlo"] | [] | [] | [] |
| EuroVis | 2020 | VisuaLint: Sketchy In Situ Annotations of Chart Construction Errors | 10.1111/cgf.13975 | Chart construction errors, such as truncated axes or inexpressive visual encodings, can hinder reading a visualization, or worse, imply misleading facts about the underlying data. These errors can be caught by critical readings of visualizations, but readers must have a high level of data and design literacy and must be paying close attention. To address this issue, we introduce VisuaLint: a technique for surfacing chart construction errors in situ. Inspired by the ubiquitous red wavy underline that indicates spelling mistakes, visualization elements that contain errors (e.g., axes and legends) are sketchily rendered and accompanied by a concise annotation. VisuaLint is unobtrusive — it does not interfere with reading a visualization — and its direct display establishes a close mapping between erroneous elements and the expression of error. We demonstrate five examples of VisuaLint and present the results of a crowdsourced evaluation (N = 62) of its efficacy. These results contribute an empirical baseline proficiency for recognizing chart construction errors, and indicate near‐universal difficulty in error identification. We find that people more reliably identify chart construction errors after being shown examples of VisuaLint, and prefer more verbose explanations for unfamiliar or less obvious flaws. | false | false | ["Aspen K. Hopkins", "Michael Correll", "Arvind Satyanarayan"] | [] | [] | [] |
| EuroVis | 2020 | Warehouse Vis: A Visual Analytics Approach to Facilitating Warehouse Location Selection for Business Districts | 10.1111/cgf.13996 | Selecting a proper warehouse location serving to satisfy the demands of the goods from a certain business area is important to a successful retail business. However, the large solution space, uncertain traffic conditions, and varying business preferences impose great challenges on warehouse location selection. Conventional approaches mainly summarize relevant evaluation criteria and compile them into an analysis report to facilitate rapid data absorption but fail to support a comprehensive and joint decision‐making process in warehouse location selection. In this paper, we propose a visual analytics approach to facilitating warehouse location selection. We first visually centralize relevant information of warehouses and adapt a widely‐used methodology to efficiently rank warehouse candidates. We then design a delivery estimation model based on massive logistics trajectories to resolve the uncertainty issue of traffic conditions of warehouses. Based on these techniques, an interactive framework is proposed to generate and explore the candidate warehouses. We conduct a case study and a within‐subject study with baseline systems to assess the efficacy of our system. Experts' feedback also suggests that our approach indeed helps them better tackle the problem of finding an ideal warehouse in the field of retail logistics management. | false | false | ["Quan Li", "Qiangqiang Liu", "Chunfeng Tang", "Z. W. Li", "S. C. Wei", "X. R. Peng", "M. H. Zheng", "Tianjian Chen", "Q. Yang"] | [] | [] | [] |
| CHI | 2020 | A Comparison of Geographical Propagation Visualizations | 10.1145/3313831.3376350 | Geographical propagation phenomena occur in multiple domains, such as in epidemiology and social media. Propagation dynamics are often complex, and visualizations play a key role in helping subject-matter experts understand and analyze them. However, there is little empirical data about the effectiveness of the various strategies used to visualize geographical propagation. To fill this gap, we conduct an experiment to evaluate the effectiveness of three strategies: an animated map, small-multiple maps, and a single map with glyphs. We compare them under five tasks that vary in one of the following dimensions: propagation scope, direction, speed, peaks, and spatial jumps. Our results show that small-multiple maps perform best overall, but that the effectiveness of each visualization varies depending on the task considered. | false | false | ["Vanessa Peña Araya", "Anastasia Bezerianos", "Emmanuel Pietriga"] | [] | [] | [] |
| CHI | 2020 | A Participatory Simulation of the Accountable Capitalism Act | 10.1145/3313831.3376326 | Interactive computing systems increasingly allow for experimental evaluations of fundamental issues in law, government, and society. In this paper, we describe a participatory simulation of the Accountable Capitalism Act, a bill proposed in 2018 by US Senator Elizabeth Warren. We present findings from an empirical study conducted using this system, relating to the impact of 1) interactive visualization and 2) the Accountable Capitalism Act legal framework on the behavior of participants acting as corporate directors. From this study, we draw lessons about research possibilities at the juncture of HCI and legal and policy studies. This study contributes an analysis and evaluation of a design probe used to investigate potential impacts of the Accountable Capitalism Act, experimental evidence from a study conducted using the design probe, and guidance for future participatory simulations that seek to inform the design of social institutions. | false | false | ["Bill Tomlinson", "M. Six Silberman", "Andrew W. Torrance", "Kurt Squire", "Paramdeep S. Atwal", "Ameya N. Mandalik", "Sahil Railkar", "Rebecca W. Black"] | [] | [] | [] |
| CHI | 2020 | A Probabilistic Grammar of Graphics | 10.1145/3313831.3376466 | Visualizations depicting probabilities and uncertainty are used everywhere from medical risk communication to machine learning, yet these probabilistic visualizations are difficult to specify, prone to error, and their designs are cumbersome to explore. We propose a Probabilistic Grammar of Graphics (PGoG), an extension to Wilkinson's original framework. Inspired by the success of probabilistic programming languages, PGoG makes probability expressions, such as P(A|B), a first-class citizen in the language. PGoG abstractions also reflect the distinction between probability and frequency framing, a concept from the uncertainty communication literature. It is expressive, encompassing product plots, density plots, icon arrays, and dotplots, among other visualizations. Its coherent syntax ensures correctness (that the proportions of visual elements and their spatial placement reflect the underlying probability distribution) and reduces edit distance between probabilistic visualization specifications, potentially supporting more design exploration. We provide a proof-of-concept implementation of PGoG in R. | false | false | ["Xiaoying Pu", "Matthew Kay 0001"] | ["HM"] | [] | [] |
| CHI | 2020 | Addressing Cognitive and Emotional Barriers in Parent-Clinician Communication through Behavioral Visualization Webtools | 10.1145/3313831.3376181 | Effective communication between clinicians and parents of young children with developmental delays can decrease parents' anxiety, help them handle bad news, and improve their adherence to proposed interventions. However, parents have reported dissatisfaction regarding their current communication with clinicians, and they face cognitive and emotional challenges when discussing their child's developmental delays. In this paper, we present visualization as a facilitator of parent-clinician communication and how it could address existing communication challenges. Parents and clinicians anticipated visualization webtools would aid their communication by helping parents gain a better understanding of their child, acting as objective evidence, and highlighting the strength of the child as well as important medical concepts. In addition, visualization can act as a longitudinal record, helping parents track, explore, and share their child's developmental progress. Finally, we propose visualization as a tool to guide parents in their transition from feeling emotional and disempowered to advocating with confidence. | false | false | ["Ha Kyung Kong", "Karrie Karahalios"] | [] | [] | [] |
| CHI | 2020 | Assessing 2D and 3D Heatmaps for Comparative Analysis: An Empirical Study | 10.1145/3313831.3376675 | Heatmaps are a popular visualization technique that encode 2D density distributions using color or brightness. Experimental studies have shown though that both of these visual variables are inaccurate when reading and comparing numeric data values. A potential remedy might be to use 3D heatmaps by introducing height as a third dimension to encode the data. Encoding abstract data in 3D, however, poses many problems, too. To better understand this tradeoff, we conducted an empirical study (N=48) to evaluate the user performance of 2D and 3D heatmaps for comparative analysis tasks. We test our conditions on a conventional 2D screen, but also in a virtual reality environment to allow for real stereoscopic vision. Our main results show that 3D heatmaps are superior in terms of error rate when reading and comparing single data items. However, for overview tasks, the well-established 2D heatmap performs better. | false | false | ["Matthias Kraus 0002", "Katrin Angerbauer", "Juri Buchmüller", "Daniel Schweitzer", "Daniel A. Keim", "Michael Sedlmair", "Johannes Fuchs 0001"] | [] | [] | [] |
| CHI | 2020 | Augmenting Static Visualizations with PapARVis Designer | 10.1145/3313831.3376436 | This paper presents an authoring environment for augmenting static visualizations with virtual content in augmented reality. Augmenting static visualizations can leverage the best of both physical and digital worlds, but its creation currently involves different tools and devices, without any means to explicitly design and debug both static and virtual content simultaneously. To address these issues, we design an environment that seamlessly integrates all steps of a design and deployment workflow through its main features: i) an extension to Vega, ii) a preview, and iii) debug hints that facilitate valid combinations of static and augmented content. We inform our design through a design space with four ways to augment static visualizations. We demonstrate the expressiveness of our tool through examples, including books, posters, projections, wall-sized visualizations. A user study shows high user satisfaction of our environment and confirms that participants can create augmented visualizations in an average of 4.63 minutes. | false | false | ["Zhutian Chen", "Wai Tong", "Qianwen Wang", "Benjamin Bach", "Huamin Qu"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2310.04826v1", "icon": "paper"}] |
| CHI | 2020 | Automatic Annotation Synchronizing with Textual Description for Visualization | 10.1145/3313831.3376443 | In this paper, we propose a technique for automatically annotating visualizations according to the textual description. In our approach, visual elements in the target visualization, along with their visual properties, are identified and extracted with a Mask R-CNN model. Meanwhile, the description is parsed to generate visual search requests. Based on the identification results and search requests, each descriptive sentence is displayed beside the described focal areas as annotations. Different sentences are presented in various scenes of the generated animation to promote a vivid step-by-step presentation. With a user-customized style, the animation can guide the audience's attention via proper highlighting such as emphasizing specific features or isolating part of the data. We demonstrate the utility and usability of our method through a user study with use cases. | false | false | ["Chufan Lai", "Zhixian Lin", "Ruike Jiang", "Yun Han", "Can Liu 0004", "Xiaoru Yuan"] | [] | [] | [] |
| CHI | 2020 | Awareness, Understanding, and Action: A Conceptual Framework of User Experiences and Expectations about Indoor Air Quality Visualizations | 10.1145/3313831.3376521 | With the advent of new sensors and technologies, smart devices that monitor the level of indoor air quality (IAQ) are increasingly available to create a healthy home environment. However, little has been studied regarding design principles for effective IAQ visualizations to help better understand and improve IAQ. We analyzed Amazon reviews of IAQ monitors and their design components for IAQ visualizations. Based on our findings, we created a conceptual framework to explain the process of facilitating an effective IAQ visualization with a proposed set of design considerations in each stage. The process includes helping users easily understand what is happening to IAQ (awareness), what it means to them (understanding), and what to do with the information (action), which results in two outcomes, knowledge gain and emotional relief. We hope our framework can help practitioners and researchers in designing eco-feedback systems and beyond to advance both research and practice. | false | false | ["Sunyoung Kim", "Muyang Li"] | [] | [] | [] |
| CHI | 2020 | Cheat Sheets for Data Visualization Techniques | 10.1145/3313831.3376271 | This paper introduces the concept of 'cheat sheets' for data visualization techniques, a set of concise graphical explanations and textual annotations inspired by infographics, data comics, and cheat sheets in other domains. Cheat sheets aim to address the increasing need for accessible material that supports a wide audience in understanding data visualization techniques, their use, their fallacies and so forth. We have carried out an iterative design process with practitioners, teachers and students of data science and visualization, resulting in six types of cheat sheet (anatomy, construction, visual patterns, pitfalls, false-friends and well-known relatives) for six types of visualization, and formats for presentation. We assess these with a qualitative user study using 11 participants that demonstrates the readability and usefulness of our cheat sheets. | false | false | ["Zezhong Wang 0001", "Lovisa Sundin", "Dave Murray-Rust", "Benjamin Bach"] | [] | [] | [] |
| CHI | 2020 | COGAM: Measuring and Moderating Cognitive Load in Machine Learning Model Explanations | 10.1145/3313831.3376615 | Interpretable machine learning models trade off accuracy for simplicity to make explanations more readable and easier to comprehend. Drawing from cognitive psychology theories in graph comprehension, we formalize readability as visual cognitive chunks to measure and moderate the cognitive load in explanation visualizations. We present Cognitive-GAM (COGAM) to generate explanations with desired cognitive load and accuracy by combining the expressive nonlinear generalized additive models (GAM) with simpler sparse linear models. We calibrated visual cognitive chunks with reading time in a user study, characterized the trade-off between cognitive load and accuracy for four datasets in simulation studies, and evaluated COGAM against baselines with users. We found that COGAM can decrease cognitive load without decreasing accuracy and/or increase accuracy without increasing cognitive load. Our framework and empirical measurement instruments for cognitive load will enable more rigorous assessment of the human interpretability of explainable AI. | false | false | ["Ashraf M. Abdul", "Christian von der Weth", "Mohan S. Kankanhalli", "Brian Y. Lim"] | [] | [] | [] |
| CHI | 2020 | DataQuilt: Extracting Visual Elements from Images to Craft Pictorial Visualizations | 10.1145/3313831.3376172 | Recent years have seen an increasing interest in the authoring and crafting of personal visualizations. Mainstream data analysis and authoring tools lack the flexibility for customization and personalization, whereas tools from the research community either require creativity and drawing skills, or are limited to simple vector graphics. We present DataQuilt, a novel system that enables visualization authors to iteratively design pictorial visualizations as collages. Real images (e.g., paintings, photographs, sketches) act as both inspiration and as a resource of visual elements that can be mapped to data. The creative pipeline involves the semi-guided extraction of relevant elements of an image (arbitrary regions, regular shapes, color palettes, textures) aided by computer vision techniques; the binding of these graphical elements and their features to data in order to create meaningful visualizations; and the iterative refinement of both features and visualizations through direct manipulation. We demonstrate the usability of DataQuilt in a controlled study and its expressiveness through a collection of authored visualizations from a second open-ended study. | false | false | ["Jiayi Eris Zhang", "Nicole Sultanum", "Anastasia Bezerianos", "Fanny Chevalier"] | [] | [] | [] |
| CHI | 2020 | Dear Pictograph: Investigating the Role of Personalization and Immersion for Consuming and Enjoying Visualizations | 10.1145/3313831.3376348 | Much of the visualization literature focuses on assessment of visual representations with regard to their effectiveness for understanding data. In the present work, we instead focus on making data visualization experiences more enjoyable, to foster deeper engagement with data. We investigate two strategies to make visualization experiences more enjoyable and engaging: personalization, and immersion. We selected pictographs (composed of multiple data glyphs) as this representation affords creative freedom, allowing people to craft symbolic or whimsical shapes of personal significance to represent data. We present the results of a qualitative study with 12 participants crafting pictographs using a large pen-enabled device while immersed within a VR environment. Our results indicate that personalization and immersion both have positive impact on making visualizations more enjoyable experiences. | false | false | ["Hugo Romat", "Nathalie Henry Riche", "Christophe Hurter", "Steven Mark Drucker", "Fereshteh Amini", "Ken Hinckley"] | [] | [] | [] |
| CHI | 2020 | Debugging Database Queries: A Survey of Tools, Techniques, and Users | 10.1145/3313831.3376485 | Database management systems (or DBMSs) have been around for decades, and yet are still difficult to use, particularly when trying to identify and fix errors in user programs (or queries). We seek to understand what methods have been proposed to help people debug database queries, and whether these techniques have ultimately been adopted by DBMSs (and users). We conducted an interdisciplinary review of 112 papers and tools from the database, visualisation and HCI communities. To better understand whether academic and industry approaches are meeting the needs of users, we interviewed 20 database users (and some designers), and found surprising results. In particular, there seems to be a wide gulf between users' debugging strategies and the functionality implemented in existing DBMSs, as well as proposed in the literature. In response, we propose new design guidelines to help system designers to build features that more closely match users' debugging strategies. | false | false | ["Sneha Gathani", "Peter Lim", "Leilani Battle"] | [] | [] | [] |
| CHI | 2020 | Decipher: An Interactive Visualization Tool for Interpreting Unstructured Design Feedback from Multiple Providers | 10.1145/3313831.3376380 | Feedback from diverse audiences can vary in focus, differ in structure, and contradict each other, making it hard to interpret and act on. While prior work has explored generating quality feedback, our work helps a designer interpret that feedback. Through a formative study with professional designers (N=10), we discovered that the interpretation process includes categorizing feedback, identifying valuable feedback, and prioritizing which feedback to incorporate in a revision. We also found that designers leverage feedback topic and sentiment, and the status of the provider to aid interpretation. Based on the findings, we created a new tool (Decipher) that enables designers to visualize and navigate a collection of feedback using its topic and sentiment structure. In a preliminary evaluation (N=20), we found that Decipher helped users feel less overwhelmed during feedback interpretation tasks and better attend to critical issues and conflicting opinions compared to using a typical document-editing tool. | false | false | ["Yu-Chun (Grace) Yen", "Joy O. Kim", "Brian P. Bailey"] | [] | [] | [] |
CHI | 2,020 | Design Study "Lite" Methodology: Expediting Design Studies and Enabling the Synergy of Visualization Pedagogy and Social Good | 10.1145/3313831.3376829 | Design studies are frequently used to conduct problem-driven visualization research by working with real-world domain experts. In visualization pedagogy, design studies are often introduced but rarely practiced due to their large time requirements. This limits students to a classroom curriculum, often involving projects that may not have implications beyond the classroom. Thus we present the Design Study "Lite" Methodology, a novel framework for implementing design studies with novice students in 14 weeks. We utilized the Design Study "Lite" Methodology in conjunction with Service-Learning to teach five Data Visualization courses and demonstrate that it benefits not only the students but also the community through service to non-profit partners. In this paper, we provide a detailed breakdown of the methodology and how Service-Learning can be incorporated with it. We also include an extensive reflection on the methodology and provide recommendations for future applications of the framework for teaching visualization courses and research. | false | false | [
"Uzma Haque Syeda",
"Prasanth Murali",
"Lisa Roe",
"Becca Berkey",
"Michelle A. Borkin"
] | [
"BP"
] | [] | [] |
CHI | 2,020 | DFSeer: A Visual Analytics Approach to Facilitate Model Selection for Demand Forecasting | 10.1145/3313831.3376866 | Selecting an appropriate model to forecast product demand is critical to the manufacturing industry. However, due to the data complexity, market uncertainty and users' demanding requirements for the model, it is challenging for demand analysts to select a proper model. Although existing model selection methods can reduce the manual burden to some extent, they often fail to present model performance details on individual products and reveal the potential risk of the selected model. This paper presents DFSeer, an interactive visualization system to conduct reliable model selection for demand forecasting based on the products with similar historical demand. It supports model comparison and selection with different levels of details. Besides, it shows the difference in model performance on similar products to reveal the risk of model selection and increase users' confidence in choosing a forecasting model. Two case studies and interviews with domain experts demonstrate the effectiveness and usability of DFSeer. | false | false | [
"Dong Sun 0001",
"Zezheng Feng",
"Yuanzhe Chen",
"Yong Wang 0021",
"Jia Zeng",
"Mingxuan Yuan",
"Ting-Chuen Pong",
"Huamin Qu"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2005.03244v1",
"icon": "paper"
}
] |
CHI | 2,020 | DoughNets: Visualising Networks Using Torus Wrapping | 10.1145/3313831.3376180 | We investigate visualisations of networks on a 2-dimensional torus topology, like an opened-up and flattened doughnut. That is, the network is drawn on a rectangular area while "wrapping" specific links around the border. Previous work on torus drawings of networks has been mostly theoretical, limited to certain classes of networks, and not evaluated by human readability studies. We offer a simple interactive layout approach applicable to general graphs. We use this to find layouts affording better aesthetics in terms of conventional measures like more equal edge length and fewer crossings. In two controlled user studies we find that torus layout with either additional context or interactive panning provided significant performance improvement (in terms of error and time) over torus layout without either of these improvements, to the point that it is comparable to standard non-torus layout. | false | false | [
"Kun-Ting Chen",
"Tim Dwyer",
"Kim Marriott",
"Benjamin Bach"
] | [] | [] | [] |
CHI | 2,020 | Du Bois Wrapped Bar Chart: Visualizing Categorical Data with Disproportionate Values | 10.1145/3313831.3376365 | We propose a visualization technique, Du Bois wrapped bar chart, inspired by the work of W.E.B. Du Bois. Du Bois wrapped bar charts enable better large-to-small bar comparison by wrapping large bars over a certain threshold. We first present two crowdsourcing experiments comparing wrapped and standard bar charts to evaluate (1) the benefit of wrapped bars in helping participants identify and compare values; (2) the characteristics of data most suitable for wrapped bars. In the first study (n=98) using real-world datasets, we find that wrapped bar charts lead to higher accuracy in identifying and estimating ratios between bars. In a follow-up study (n=190) with 13 simulated datasets, we find participants were consistently more accurate with wrapped bar charts when certain category values are disproportionate as measured by entropy and H-spread. Finally, in an in-lab study, we investigate participants' experience and strategies, leading to guidelines for when and how to use wrapped bar charts. | false | false | [
"Alireza Karduni",
"Ryan Wesslen",
"Isaac Cho",
"Wenwen Dou"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2001.03271v2",
"icon": "paper"
}
] |
CHI | 2,020 | Dziban: Balancing Agency & Automation in Visualization Design via Anchored Recommendations | 10.1145/3313831.3376880 | Visualization recommender systems attempt to automate design decisions spanning choices of selected data, transformations, and visual encodings. However, across invocations such recommenders may lack the context of prior results, producing unstable outputs that override earlier design choices. To better balance automated suggestions with user intent, we contribute Dziban, a visualization API that supports both ambiguous specification and a novel anchoring mechanism for conveying desired context. Dziban uses the Draco knowledge base to automatically complete partial specifications and suggest appropriate visualizations. In addition, it extends Draco with chart similarity logic, enabling recommendations that also remain perceptually similar to a provided "anchor" chart. Existing APIs for exploratory visualization, such as ggplot2 and Vega-Lite, require fully specified chart definitions. In contrast, Dziban provides a more concise and flexible authoring experience through automated design, while preserving predictability and control through anchored recommendations. | false | false | [
"Halden Lin",
"Dominik Moritz",
"Jeffrey Heer"
] | [] | [] | [] |
CHI | 2,020 | Ecology Meets Computer Science: Designing Tools to Reconcile People, Data, and Practices | 10.1145/3313831.3376663 | Ecoacoustics draws together computer scientists and ecologists to achieve an understanding of ecosystems and wildlife using acoustic recordings of the environment. Computer scientists are challenged to manage increasingly large datasets while developing analytic and visualisation tools. Ecologists struggle to find and use tools that answer highly heterogeneous research questions. These two fields are naturally drawn together at the tool interface, however, less attention has been paid to how their practices influence tool design and use. We interviewed and collected email correspondence from four computer scientists and eight ecologists to learn how their practices indicate opportunities for reconciling difference through design. We found that different temporal rhythms, relationships to data, and data-driven questions demand tool configuration, data integration, and standardisation. This research outlines interfacing opportunities for new ecological research utilising large acoustic datasets, and also contributes to evolving HCI approaches in areas making use of big data and human-in-the-loop processes. | false | false | [
"Kellie Vella",
"Jessica L. Oliver",
"Tshering Dema",
"Margot Brereton",
"Paul Roe"
] | [] | [] | [] |
CHI | 2,020 | Embodied Axes: Tangible, Actuated Interaction for 3D Augmented Reality Data Spaces | 10.1145/3313831.3376613 | We present Embodied Axes, a controller which supports selection operations for 3D imagery and data visualisations in Augmented Reality. The device is an embodied representation of a 3D data space -- each of its three orthogonal arms corresponds to a data axis or domain specific frame of reference. Each axis is composed of a pair of tangible, actuated range sliders for precise data selection, and rotary encoding knobs for additional parameter tuning or menu navigation. The motor actuated sliders support alignment to positions of significant values within the data, or coordination with other input: e.g., mid-air gestures in the data space, touch gestures on the surface below the data, or another Embodied Axes device supporting multi-user scenarios. We conducted expert enquiries in medical imaging which provided formative feedback on domain tasks and refinements to the design. Additionally, a controlled user study was performed and found that the Embodied Axes was overall more accurate than conventional tracked controllers for selection tasks. | false | false | [
"Maxime Cordeil",
"Benjamin Bach",
"Andrew Cunningham",
"Bastian Montoya",
"Ross T. Smith",
"Bruce H. Thomas",
"Tim Dwyer"
] | [] | [] | [] |
CHI | 2,020 | Evaluating Multivariate Network Visualization Techniques Using a Validated Design and Crowdsourcing Approach | 10.1145/3313831.3376381 | Visualizing multivariate networks is challenging because of the trade-offs necessary for effectively encoding network topology and encoding the attributes associated with nodes and edges. A large number of multivariate network visualization techniques exist, yet there is little empirical guidance on their respective strengths and weaknesses. In this paper, we describe a crowdsourced experiment, comparing node-link diagrams with on-node encoding and adjacency matrices with juxtaposed tables. We find that node-link diagrams are best suited for tasks that require close integration between the network topology and a few attributes. Adjacency matrices perform well for tasks related to clusters and when many attributes need to be considered. We also reflect on our method of using validated designs for empirically evaluating complex, interactive visualizations in a crowdsourced setting. We highlight the importance of training, compensation, and provenance tracking. | false | false | [
"Carolina Nobre",
"Dylan Wootton",
"Lane Harrison",
"Alexander Lex"
] | [] | [] | [] |
CHI | 2,020 | Evaluating the Effect of Timeline Shape on Visualization Task Performance | 10.1145/3313831.3376237 | Timelines are commonly represented on a horizontal line, which is not necessarily the most effective way to visualize temporal event sequences. However, few experiments have evaluated how timeline shape influences task performance. We present the design and results of a controlled experiment run on Amazon Mechanical Turk (n=192) in which we evaluate how timeline shape affects task completion time, correctness, and user preference. We tested 12 combinations of 4 shapes (horizontal line, vertical line, circle, and spiral) and 3 data types (recurrent, non-recurrent, and mixed event sequences). We found good evidence that timeline shape meaningfully affects user task completion time but not correctness and that users have a strong shape preference. Building on our results, we present design guidelines for creating effective timeline visualizations based on user task and data types. A free copy of this paper, the evaluation stimuli and data, and code are available at https://osf.io/qr5yu/ | false | false | [
"Sara Di Bartolomeo",
"Aditeya Pandey",
"Aristotelis Leventidis",
"David Saffo",
"Uzma Haque Syeda",
"Elín Carstensdóttir",
"Magy Seif El-Nasr",
"Michelle A. Borkin",
"Cody Dunne"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2005.06039v1",
"icon": "paper"
}
] |
CHI | 2,020 | Evaluation of a Financial Portfolio Visualization using Computer Displays and Mixed Reality Devices with Domain Experts | 10.1145/3313831.3376556 | With the advent of mixed reality devices such as the Microsoft HoloLens, developers have been faced with the challenge to utilize the third dimension in information visualization effectively. Research on stereoscopic devices has shown that three-dimensional representation can improve accuracy in specific tasks (e.g., network visualization). Yet, so far the field has remained mute on the underlying mechanism. Our study systematically investigates the differences in user perception between a regular monitor and a mixed reality device. In a real-life within-subject experiment in the field with twenty-eight investment bankers, we assessed subjective and objective task performance with two- and three-dimensional systems, respectively. We tested accuracy with regard to position, size, and color using single and combined tasks. Our results do not show a significant difference in accuracy between mixed-reality and standard 2D monitor visualizations. | false | false | [
"Kay Schröder",
"Batoul Ajdadilish",
"Alexander P. Henkel",
"André Calero Valdez"
] | [] | [] | [] |
CHI | 2,020 | FDHelper: Assist Unsupervised Fraud Detection Experts with Interactive Feature Selection and Evaluation | 10.1145/3313831.3376140 | Online fraud is the well-known dark side of the modern Internet. Unsupervised fraud detection algorithms are widely used to address this problem. However, selecting features, adjusting hyperparameters, evaluating the algorithms, and eliminating false positives all require human expert involvement. In this work, we design and implement an end-to-end interactive visualization system, FDHelper, based on the deep understanding of the mechanism of the black market and fraud detection algorithms. We identify a workflow based on experience from both fraud detection algorithm experts and domain experts. Using a multi-granularity three-layer visualization map embedding an entropy-based distance metric ColDis, analysts can interactively select different feature sets, refine fraud detection algorithms, tune parameters and evaluate the detection result in near real-time. We demonstrate the effectiveness and significance of FDHelper through two case studies with state-of-the-art fraud detection algorithms, interviews with domain experts and algorithm experts, and a user study with eight first-time end users. | false | false | [
"Jiao Sun",
"Yin Li",
"Charley Chen",
"Jihae Lee",
"Xin Liu",
"Zhongping Zhang",
"Ling Huang",
"Lei Shi 0002",
"Wei Xu 0005"
] | [] | [] | [] |
CHI | 2,020 | From Data to Insights: A Layered Storytelling Approach for Multimodal Learning Analytics | 10.1145/3313831.3376148 | Significant progress to integrate and analyse multimodal data has been carried out in the last years. Yet, little research has tackled the challenge of visualising and supporting the sensemaking of multimodal data to inform teaching and learning. It is naïve to expect that simply by rendering multiple data streams visually, a teacher or learner will be able to make sense of them. This paper introduces an approach to unravel the complexity of multimodal data by organising it into meaningful layers that explain critical insights to teachers and students. The approach is illustrated through the design of two data storytelling prototypes in the context of nursing simulation. Two authentic studies with educators and students identified the potential of the approach to create learning analytics interfaces that communicate insights on team performance, as well as concerns in terms of accountability and automated insights discovery. | false | false | [
"Roberto Martínez Maldonado",
"Vanessa Echeverría",
"Gloria Fernández-Nieto",
"Simon Buckingham Shum"
] | [] | [] | [] |
CHI | 2,020 | Genie in the Bottle: Anthropomorphized Perceptions of Conversational Agents | 10.1145/3313831.3376665 | This paper presents a qualitative multi-phase study seeking to identify patterns in users' anthropomorphized perceptions of conversational agents. Through a comparative analysis of behavioral perceptions and visual conceptions of three agents - Alexa, Google Assistant, and Siri - we first show that the perceptions of an agent's character are structured according to five categories: approachability, sentiment toward a user, professionalism, intelligence, and individuality. We then explore visualizations of the agents' appearance and discuss the specifics assigned to each agent. Finally, we analyze associative explanations for these perceptions. We demonstrate that the anthropomorphized behavioral and visual perceptions of agents yield structural consistency and discuss how these perceptions are linked with each other and system features. | false | false | [
"Anastasia Kuzminykh",
"Jenny Sun",
"Nivetha Govindaraju",
"Jeff Avery",
"Edward Lank"
] | [] | [] | [] |
CHI | 2,020 | GoTree: A Grammar of Tree Visualizations | 10.1145/3313831.3376297 | We present GoTree, a declarative grammar allowing users to instantiate tree visualizations by specifying three aspects: visual elements, layout, and coordinate system. Within the set of all possible tree visualization techniques, we identify a subset of techniques that are both "unit-decomposable" and "axis-decomposable" (terms we define). For tree visualizations within this subset, GoTree gives the user flexible and fine-grained control over the parameters of the techniques, supporting both explicit and implicit tree visualizations. We developed Tree Illustrator, an interactive authoring tool based on GoTree grammar. Tree Illustrator allows users to create a considerable number of tree visualizations, including not only existing techniques but also undiscovered and hybrid visualizations. We demonstrate the expressiveness and generative power of GoTree with a gallery of examples and conduct a qualitative study to validate the usability of Tree Illustrator. | false | false | [
"Guozheng Li 0002",
"Min Tian",
"Qinmei Xu 0001",
"Michael J. McGuffin",
"Xiaoru Yuan"
] | [] | [] | [] |
CHI | 2,020 | Heatmaps, Shadows, Bubbles, Rays: Comparing Mid-Air Pen Position Visualizations in Handheld AR | 10.1145/3313831.3376848 | In Handheld Augmented Reality, users look at AR scenes through the smartphone held in their hand. In this setting, having a mid-air pointing device like a pen in the other hand greatly expands the interaction possibilities. For example, it lets users create 3D sketches and models while on the go. However, perceptual issues in Handheld AR make it difficult to judge the distance of a virtual object, making it hard to align a pen to it. To address this, we designed and compared different visualizations of the pen's position in its virtual environment, measuring pointing precision, task time, activation patterns, and subjective ratings of helpfulness, confidence, and comprehensibility of each visualization. While all visualizations resulted in only minor differences in precision and task time, subjective ratings of perceived helpfulness and confidence favor a 'heatmap' technique that colors the objects in the scene based on their distance to the pen. | false | false | [
"Philipp Wacker",
"Adrian Wagner",
"Simon Voelker",
"Jan O. Borchers"
] | [] | [] | [] |
CHI | 2,020 | How Visualizing Inferential Uncertainty Can Mislead Readers About Treatment Effects in Scientific Results | 10.1145/3313831.3376454 | When presenting visualizations of experimental results, scientists often choose to display either inferential uncertainty (e.g., uncertainty in the estimate of a population mean) or outcome uncertainty (e.g., variation of outcomes around that mean) about their estimates. How does this choice impact readers' beliefs about the size of treatment effects? We investigate this question in two experiments comparing 95% confidence intervals (means and standard errors) to 95% prediction intervals (means and standard deviations). The first experiment finds that participants are willing to pay more for and overestimate the effect of a treatment when shown confidence intervals relative to prediction intervals. The second experiment evaluates how alternative visualizations compare to standard visualizations for different effect sizes. We find that axis rescaling reduces error, but not as well as prediction intervals or animated hypothetical outcome plots (HOPs), and that depicting inferential uncertainty causes participants to underestimate variability in individual outcomes. | false | false | [
"Jake M. Hofman",
"Daniel G. Goldstein",
"Jessica Hullman"
] | [
"HM"
] | [] | [] |
CHI | 2,020 | InChorus: Designing Consistent Multimodal Interactions for Data Visualization on Tablet Devices | 10.1145/3313831.3376782 | While tablet devices are a promising platform for data visualization, supporting consistent interactions across different types of visualizations on tablets remains an open challenge. In this paper, we present multimodal interactions that function consistently across different visualizations, supporting common operations during visual data analysis. By considering standard interface elements (e.g., axes, marks) and grounding our design in a set of core concepts including operations, parameters, targets, and instruments, we systematically develop interactions applicable to different visualization types. To exemplify how the proposed interactions collectively facilitate data exploration, we employ them in a tablet-based system, InChorus, that supports pen, touch, and speech input. Based on a study with 12 participants performing replication and fact-checking tasks with InChorus, we discuss how participants adapted to using multimodal input and highlight considerations for future multimodal visualization systems. | false | false | [
"Arjun Srinivasan",
"Bongshin Lee",
"Nathalie Henry Riche",
"Steven Mark Drucker",
"Ken Hinckley"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2001.06423v1",
"icon": "paper"
}
] |
CHI | 2,020 | Interacting with Literary Style through Computational Tools | 10.1145/3313831.3376730 | Style is an important aspect of writing, shaping how audiences interpret and engage with literary works. However, for most people style is difficult to articulate precisely. While users frequently interact with computational word processing tools with well-defined metrics, such as spelling and grammar checkers, style is a significantly more nuanced concept. In this paper, we present a computational technique to help surface style in written text. We collect a dataset of crowdsourced human judgments of style, derive a model of style by training a neural net on this data, and present novel applications for visualizing and browsing style across broad bodies of literature, as well as an interactive text editor with real-time style feedback. We study these interactive style applications with users and discuss implications for enabling this novel approach to style. | false | false | [
"Sarah Sterman",
"Evey Huang",
"Vivian Liu",
"Eric Paulos"
] | [] | [] | [] |
CHI | 2,020 | Interaction Techniques for Visual Exploration Using Embedded Word-Scale Visualizations | 10.1145/3313831.3376842 | We describe a design space of view manipulation interactions for small data-driven contextual visualizations (word-scale visualizations). These interaction techniques support an active reading experience and engage readers through exploration of embedded visualizations whose placement and content connect them to specific terms in a document. A reader could, for example, use our proposed interaction techniques to explore word-scale visualizations of stock market trends for companies listed in a market overview article. When readers wish to engage more deeply with the data, they can collect, arrange, compare, and navigate the document using the embedded word-scale visualizations, permitting more visualization-centric analyses. We support our design space with a concrete implementation, illustrate it with examples from three application domains, and report results from two experiments. The experiments show how view manipulation interactions helped readers examine embedded visualizations more quickly and with less scrolling and yielded qualitative feedback on usability and future opportunities. | false | false | [
"Pascal Goffin",
"Tanja Blascheck",
"Petra Isenberg",
"Wesley Willett"
] | [] | [] | [] |
CHI | 2,020 | Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning | 10.1145/3313831.3376219 | Machine learning (ML) models are now routinely deployed in domains ranging from criminal justice to healthcare. With this newfound ubiquity, ML has moved beyond academia and grown into an engineering discipline. To that end, interpretability tools have been designed to help data scientists and machine learning practitioners better understand how ML models work. However, there has been little evaluation of the extent to which these tools achieve this goal. We study data scientists' use of two existing interpretability tools, the InterpretML implementation of GAMs and the SHAP Python package. We conduct a contextual inquiry (N=11) and a survey (N=197) of data scientists to observe how they use interpretability tools to uncover common issues that arise when building and evaluating ML models. Our results indicate that data scientists over-trust and misuse interpretability tools. Furthermore, few of our participants were able to accurately describe the visualizations output by these tools. We highlight qualitative themes for data scientists' mental models of interpretability tools. We conclude with implications for researchers and tool designers, and contextualize our findings in the social science literature. | false | false | [
"Harmanpreet Kaur",
"Harsha Nori",
"Samuel Jenkins",
"Rich Caruana",
"Hanna M. Wallach",
"Jennifer Wortman Vaughan"
] | [
"HM"
] | [] | [] |
CHI | 2,020 | Investigating Collaborative Exploration of Design Alternatives on a Wall-Sized Display | 10.1145/3313831.3376736 | Industrial design review is an iterative process which mainly relies on two steps involving many stakeholders: design discussion and CAD data adjustment. We investigate how a wall-sized display could be used to merge these two steps by allowing multidisciplinary collaborators to simultaneously generate and explore design alternatives. We designed ShapeCompare based on the feedback from a usability study. It enables multiple users to compute and distribute CAD data with touch interaction. To assess the benefit of the wall-sized display in such context, we ran a controlled experiment which aims to compare ShapeCompare with a visualization technique suitable for standard screens. The results show that pairs of participants performed a constraint solving task faster and used more deictic instructions with ShapeCompare. From these findings, we draw generic recommendations for collaborative exploration of alternatives. | false | false | [
"Yujiro Okuya",
"Olivier Gladin",
"Nicolas Ladevèze",
"Cédric Fleury",
"Patrick Bourdot"
] | [] | [] | [] |
CHI | 2,020 | MaraVis: Representation and Coordinated Intervention of Medical Encounters in Urban Marathon | 10.1145/3313831.3376281 | There is an increased use of Internet-of-Things and wearable sensing devices in the urban marathon to ensure effective response to unforeseen medical needs. However, the massive amount of real-time, heterogeneous movement and psychological data of runners impose great challenges on prompt medical incident analysis and intervention. Conventional approaches compile such data into one dashboard visualization to facilitate rapid data absorption but fail to support joint decision-making and operations in medical encounters. In this paper, we present MaraVis, a real-time urban marathon visualization and coordinated intervention system. It first visually summarizes real-time marathon data to facilitate the detection and exploration of possible anomalous events. Then, it calculates an optimal camera route with an arrangement of shots to guide offline effort to catch these events in time with a smooth view transition. We conduct a within-subjects study with two baseline systems to assess the efficacy of MaraVis. | false | false | [
"Quan Li",
"Huanbin Lin",
"Xiguang Wei",
"Yangkun Huang",
"Lixin Fan",
"Jian Du",
"Xiaojuan Ma",
"Tianjian Chen"
] | [] | [] | [] |
CHI | 2,020 | Move Your Body: Engaging Museum Visitors with Human-Data Interaction | 10.1145/3313831.3376186 | Museums have embraced embodied interaction: its novelty generates buzz and excitement among their patrons, and it has enormous educational potential. Human-Data Interaction (HDI) is a class of embodied interactions that enables people to explore large sets of data using interactive visualizations that users control with gestures and body movements. In museums, however, HDI installations have no utility if visitors do not engage with them. In this paper, we present a quasi-experimental study that investigates how different ways of representing the user ("mode type") next to a data visualization alter the way in which people engage with an HDI system. We consider four mode types: avatar, skeleton, camera overlay, and control. Our findings indicate that the mode type impacts the number of visitors that interact with the installation, the gestures that people do, and the amount of time that visitors spend observing the data on display and interacting with the system. | false | false | [
"Milka Trajkova",
"A'aeshah Alhakamy",
"Francesco Cafaro",
"Rashmi Mallappa",
"Sreekanth R. Kankara"
] | [] | [] | [] |
CHI | 2,020 | MRAT: The Mixed Reality Analytics Toolkit | 10.1145/3313831.3376330 | Significant tool support exists for the development of mixed reality (MR) applications; however, there is a lack of tools for analyzing MR experiences. We elicit requirements for future tools through interviews with 8 university research, instructional, and media teams using AR/VR in a variety of domains. While we find a common need for capturing how users perform tasks in MR, the primary differences were in terms of heuristics and metrics relevant to each project. Particularly in the early project stages, teams were uncertain about what data should, and even could, be collected with MR technologies. We designed the Mixed Reality Analytics Toolkit (MRAT) to instrument MR apps via visual editors without programming and enable rapid data collection and filtering for visualizations of MR user sessions. With MRAT, we contribute flexible interaction tracking and task definition concepts, an extensible set of heuristic techniques and metrics to measure task success, and visual inspection tools with in-situ visualizations in MR. Focusing on a multi-user, cross-device MR crisis simulation and triage training app as a case study, we then show the benefits of using MRAT, not only for user testing of MR apps, but also performance tuning throughout the design process. | false | false | [
"Michael Nebeling",
"Maximilian Speicher",
"Xizi Wang",
"Shwetha Rajaram",
"Brian D. Hall",
"Zijian Xie",
"Alexander R. E. Raistrick",
"Michelle Aebersold",
"Edward G. Happ",
"Jiayin Wang",
"Yanan Sun",
"Lotus Zhang",
"Leah E. Ramsier",
"Rhea Kulkarni"
] | [
"BP"
] | [] | [] |
CHI | 2,020 | Paths Explored, Paths Omitted, Paths Obscured: Decision Points & Selective Reporting in End-to-End Data Analysis | 10.1145/3313831.3376533 | Drawing reliable inferences from data involves many, sometimes arbitrary, decisions across phases of data collection, wrangling, and modeling. As different choices can lead to diverging conclusions, understanding how researchers make analytic decisions is important for supporting robust and replicable analysis. In this study, we pore over nine published research studies and conduct semi-structured interviews with their authors. We observe that researchers often base their decisions on methodological or theoretical concerns, but subject to constraints arising from the data, expertise, or perceived interpretability. We confirm that researchers may experiment with choices in search of desirable results, but also identify other reasons why researchers explore alternatives yet omit findings. In concert with our interviews, we also contribute visualizations for communicating decision processes throughout an analysis. Based on our results, we identify design opportunities for strengthening end-to-end analysis, for instance via tracking and meta-analysis of multiple decision paths. | false | false | [
"Yang Liu 0136",
"Tim Althoff",
"Jeffrey Heer"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1910.13602v3",
"icon": "paper"
}
] |
CHI | 2,020 | Prior Setting in Practice: Strategies and Rationales Used in Choosing Prior Distributions for Bayesian Analysis | 10.1145/3313831.3376377 | Bayesian statistical analysis is steadily growing in popularity and use. Choosing priors is an integral part of Bayesian inference. While there exist extensive normative recommendations for prior setting, little is known about how priors are chosen in practice. We conducted a survey (N = 50) and interviews (N = 9) where we used interactive visualizations to elicit prior distributions from researchers experienced with Bayesian statistics and asked them for rationales for those priors. We found that participants' experience and philosophy influence how much and what information they are willing to incorporate into their priors, manifesting as different levels of informativeness and skepticism. We also identified three broad strategies participants use to set their priors: centrality matching, interval matching, and visual mass allocation. We discovered that participants' understanding of the notion of "weakly informative priors"-a commonly-recommended normative approach to prior setting-manifests very differently across participants. Our results have implications both for how to develop prior setting recommendations and how to design tools to elicit priors in Bayesian analysis. | false | false | [
"Abhraneel Sarma",
"Matthew Kay 0001"
] | [] | [] | [] |
CHI | 2,020 | Progression Maps: Conceptualizing Narrative Structure for Interaction Design Support | 10.1145/3313831.3376527 | Interactive narratives are frequently designed for learning and training applications, such as social training. In these contexts, designers may be inexperienced in storytelling and interaction design, and it may be difficult to quickly build an effective experience, even for experienced designers. Designers often approach this problem through iterative design. To augment and reduce iteration, we argue for the utility of employing models to reason about, evaluate, and improve designs. While there has been much previous work on interactive narrative models, none of them capture aspects of the interaction design necessary for testing and evaluation. In this paper we propose a new computational model called Progression Maps, which abstracts interaction design elements of the narrative's structure and visualizes its interaction properties. We report on the model, its implementation, and two studies evaluating its use. Our results demonstrate Progression Maps' effectiveness in communicating the underlying design through an easily understandable visualization. | false | false | [
"Elín Carstensdóttir",
"Nathan Partlan",
"Steven C. Sutherland",
"Tyler Duke",
"Erika Ferris",
"Robin M. Richter",
"Maria Jose Valladares",
"Magy Seif El-Nasr"
] | [] | [] | [] |
CHI | 2,020 | Projection Boxes: On-the-fly Reconfigurable Visualization for Live Programming | 10.1145/3313831.3376494 | Live programming is a regime in which the programming environment provides continual feedback, most often in the form of runtime values. In this paper, we present Projection Boxes, a novel visualization technique for displaying runtime values of programs. The key idea behind projection boxes is to start with a full semantics of the program, and then use projections to pick a subset of the semantics to display. By varying the projection used, projection boxes can encode both previously known visualization techniques, and also new ones. As such, projection boxes provide an expressive and configurable framework for displaying runtime information. Through a user study we demonstrate that (1) users find projection boxes and their configurability useful (2) users are not distracted by the always-on visualization (3) a key driving force behind the need for a configurable visualization for live programming lies with the wide variation in programmer preferences. | false | false | [
"Sorin Lerner"
] | [] | [] | [] |
CHI | 2,020 | Pushing the (Visual) Narrative: The Effects of Prior Knowledge Elicitation in Provocative Topics | 10.1145/3313831.3376887 | Narrative visualization is a popular style of data-driven storytelling. Authors use this medium to engage viewers with complex and sometimes controversial issues. A challenge for authors is to not only deliver new information, but to also overcome people's biases and misconceptions. We study how people adjust their attitudes toward (or away from) a message experienced through a narrative visualization. In a mixed-methods analysis, we investigate whether eliciting participants' prior beliefs, and visualizing those beliefs alongside actual data, can increase narrative persuasiveness. We find that incorporating priors does not significantly affect attitudinal change. However, participants who externalized their beliefs expressed greater surprise at the data. Their comments also indicated a greater likelihood of acquiring new information, despite the minimal change in attitude. Our results also extend prior findings, showing that visualizations are more persuasive than equivalent textual data representations for exposing contentious issues. We discuss the implications and outline future research directions. | false | false | [
"Jeremy Heyer",
"Nirmal Kumar Raveendranath",
"Khairi Reda"
] | [] | [] | [] |
CHI | 2,020 | QMaps: Engaging Students in Voluntary Question Generation and Linking | 10.1145/3313831.3376882 | Generating multiple-choice questions is known to improve students' critical thinking and deep learning. Visualizing relationships between concepts enhances meaningful learning, students' ability to relate new concepts to previously learned concepts. We designed and deployed a collaborative learning process through which students generate multiple-choice questions and represent the prerequisite knowledge structure between questions as visual links in a shared map, using a variation of Concept Maps that we call "QMap." We conducted a four-month study with 19 undergraduate students. Students sustained voluntary contributions, creating 992 good questions, and drawing 1,255 meaningful links between the questions. Through analyzing self-reports, observations, and usage data, we report on the technical and social design features that led students to sustain their motivation. | false | false | [
"Iman YeckehZaare",
"Tirdad Barghi",
"Paul Resnick"
] | [] | [] | [] |
CHI | 2,020 | RunAhead: Exploring Head Scanning based Navigation for Runners | 10.1145/3313831.3376828 | Navigation systems for runners commonly provide turn-by-turn directions via voice and/or map-based visualizations. While voice directions require permanent attention, map-based guidance requires regular consultation. Both disrupt the running activity. To address this, we designed RunAhead, a navigation system using head scanning to query for navigation feedback, and we explored its suitability for runners in an outdoor experiment. In our design, we provide the runner with simple and intuitive navigation feedback on the path s/he is looking at through three different feedback modes: haptic, music and audio cues. In our experiment, we compare the resulting three versions of RunAhead with a baseline voice-based navigation system. We find that demand and error are equivalent across all four conditions. However, the head scanning based haptic and music conditions are preferred over the baseline and these preferences are impacted by runners' habits. With this study we contribute insights for designing navigation support for runners. | false | false | [
"Danilo Gallo",
"Shreepriya Shreepriya",
"Jutta Willamowski"
] | [] | [] | [] |
CHI | 2,020 | See, Feel, Move: Player Behaviour Analysis through Combined Visualization of Gaze, Emotions, and Movement | 10.1145/3313831.3376401 | Playtesting of games often relies on a mixed-methods approach to obtain more holistic insights about and, in turn, improve the player experience. However, triangulating the different data sources and visualizing them in an integrated manner such that they contextualize each other still proves challenging. Despite its potential value for gauging player behaviour, this area of research continues to be underexplored. In this paper, we propose a visualization approach that combines commonly tracked movement data with - from a visualization perspective rarely considered - gaze behaviour and emotional responses. We evaluated our approach through a qualitative expert study with five professional game developers. Our results show that both the individual visualization of gaze, emotions, and movement but especially their combination are valuable to understand and form hypotheses about player behaviour. At the same time, our results stress that careful attention needs to be paid to ensure that the visualization remains legible and does not obfuscate information. | false | false | [
"Daniel Kepplinger",
"Günter Wallner",
"Simone Kriglstein",
"Michael Lankes"
] | [
"HM"
] | [] | [] |
CHI | 2,020 | Surfacing Visualization Mirages | 10.1145/3313831.3376420 | Dirty data and deceptive design practices can undermine, invert, or invalidate the purported messages of charts and graphs. These failures can arise silently: a conclusion derived from a particular visualization may look plausible unless the analyst looks closer and discovers an issue with the backing data, visual specification, or their own assumptions. We term such silent but significant failures visualization mirages. We describe a conceptual model of mirages and show how they can be generated at every stage of the visual analytics process. We adapt a methodology from software testing, metamorphic testing, as a way of automatically surfacing potential mirages at the visual encoding stage of analysis through modifications to the underlying data and chart specification. We show that metamorphic testing can reliably identify mirages across a variety of chart types with relatively little prior knowledge of the data or the domain. | false | false | [
"Andrew M. McNutt",
"Gordon L. Kindlmann",
"Michael Correll"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2001.02316v1",
"icon": "paper"
}
] |
CHI | 2,020 | Tactile Presentation of Network Data: Text, Matrix or Diagram? | 10.1145/3313831.3376367 | Visualisations are commonly used to understand social, biological and other kinds of networks. Currently we do not know how to effectively present network data to people who are blind or have low-vision (BLV). We ran a controlled study with 8 BLV participants comparing four tactile representations: organic node-link diagram, grid node-link diagram, adjacency matrix and braille list. We found that the node-link representations were preferred and more effective for path following and cluster identification while the matrix and list were better for adjacency tasks. This is broadly in line with findings for the corresponding visual representations. | false | false | [
"Yalong Yang 0001",
"Kim Marriott",
"Matthew Butler 0002",
"Cagatay Goncu",
"Leona Holloway"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2003.14274v1",
"icon": "paper"
}
] |
CHI | 2,020 | Techniques for Flexible Responsive Visualization Design | 10.1145/3313831.3376777 | Responsive visualizations adapt to effectively present information based on the device context. Such adaptations are essential for news content that is increasingly consumed on mobile devices. However, existing tools provide little support for responsive visualization design. We analyze a corpus of 231 responsive news visualizations and discuss formative interviews with five journalists about responsive visualization design. These interviews motivate four central design guidelines: enable simultaneous cross-device edits, facilitate device-specific customization, show cross-device previews, and support propagation of edits. Based on these guidelines, we present a prototype system that allows users to preview and edit multiple visualization versions simultaneously. We demonstrate the utility of the system features by recreating four real-world responsive visualizations from our corpus. | false | false | [
"Jane Hoffswell",
"Wilmot Li",
"Zhicheng Liu 0001"
] | [
"BP"
] | [] | [] |
CHI | 2,020 | Toward Automated Feedback on Teacher Discourse to Enhance Teacher Learning | 10.1145/3313831.3376418 | Like anyone, teachers need feedback to improve. Due to the high cost of human classroom observation, teachers receive infrequent feedback which is often more focused on evaluating performance than on improving practice. To address this critical barrier to teacher learning, we aim to provide teachers with detailed and actionable automated feedback. Towards this end, we developed an approach that enables teachers to easily record high-quality audio from their classes. Using this approach, teachers recorded 142 classroom sessions, of which 127 (89%) were usable. Next, we used speech recognition and machine learning to develop teacher-generalizable computer-scored estimates of key dimensions of teacher discourse. We found that automated models were moderately accurate when compared to human coders and that speech recognition errors did not influence performance. We conclude that authentic teacher discourse can be recorded and analyzed for automatic feedback. Our next step is to incorporate the automatic models into an interactive visualization tool that will provide teachers with objective feedback on the quality of their discourse. | false | false | [
"Emily Jensen",
"Meghan Dale",
"Patrick J. Donnelly",
"Cathlyn Stone",
"Sean Kelly",
"Amanda Godley",
"Sidney K. D'Mello"
] | [] | [] | [] |
CHI | 2,020 | Towards an Understanding of Augmented Reality Extensions for Existing 3D Data Analysis Tools | 10.1145/3313831.3376657 | We present an observational study with domain experts to understand how augmented reality (AR) extensions to traditional PC-based data analysis tools can help particle physicists to explore and understand 3D data. Our goal is to allow researchers to integrate stereoscopic AR-based visual representations and interaction techniques into their tools, and thus ultimately to increase the adoption of modern immersive analytics techniques in existing data analysis workflows. We use Microsoft's HoloLens as a lightweight and easily maintainable AR headset and replicate existing visualization and interaction capabilities on both the PC and the AR view. We treat the AR headset as a second yet stereoscopic screen, allowing researchers to study their data in a connected multi-view manner. Our results indicate that our collaborating physicists appreciate a hybrid data exploration setup with an interactive AR extension to improve their understanding of particle collision events. | false | false | [
"Xiyao Wang",
"Lonni Besançon",
"David Rousseau",
"Mickaël Sereno",
"Mehdi Ammi",
"Tobias Isenberg 0001"
] | [] | [] | [] |
CHI | 2,020 | Truncating the Y-Axis: Threat or Menace? | 10.1145/3313831.3376222 | Bar charts with y-axes that don't begin at zero can visually exaggerate effect sizes. However, advice for whether or not to truncate the y-axis can be equivocal for other visualization types. In this paper we present examples of visualizations where this y-axis truncation can be beneficial as well as harmful, depending on the communicative and analytic intent. We also present the results of a series of crowd-sourced experiments in which we examine how y-axis truncation impacts subjective effect size across visualization types, and we explore alternative designs that more directly alert viewers to this truncation. We find that the subjective impact of axis truncation is persistent across visualizations designs, even for designs with explicit visual cues that indicate truncation has taken place. We suggest that designers consider the scale of the meaningful effect sizes and variation they intend to communicate, regardless of the visual encoding. | false | false | [
"Michael Correll",
"Enrico Bertini",
"Steven Franconeri"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.02035v2",
"icon": "paper"
}
] |
CHI | 2,020 | Understanding and Visualizing Data Iteration in Machine Learning | 10.1145/3313831.3376177 | Successful machine learning (ML) applications require iterations on both modeling and the underlying data. While prior visualization tools for ML primarily focus on modeling, our interviews with 23 ML practitioners reveal that they improve model performance frequently by iterating on their data (e.g., collecting new data, adding labels) rather than their models. We also identify common types of data iterations and associated analysis tasks and challenges. To help attribute data iterations to model performance, we design a collection of interactive visualizations and integrate them into a prototype, Chameleon, that lets users compare data features, training/testing splits, and performance across data versions. We present two case studies where developers apply Chameleon to their own evolving datasets on production ML projects. Our interface helps them verify data collection efforts, find failure cases stretching across data versions, capture data processing changes that impacted performance, and identify opportunities for future data iterations. | false | false | [
"Fred Hohman",
"Kanit Wongsuphasawat",
"Mary Beth Kery",
"Kayur Patel"
] | [] | [] | [] |
CHI | 2,020 | Unwind: Interactive Fish Straightening | 10.1145/3313831.3376846 | The ScanAllFish project is a large-scale effort to scan all the world's 33,100 known species of fishes. It has already generated thousands of volumetric CT scans of fish species which are available on open access platforms such as the Open Science Framework. To achieve a scanning rate required for a project of this magnitude, many specimens are grouped together into a single tube and scanned all at once. The resulting data contain many fish which are often bent and twisted to fit into the scanner. Our system, Unwind, is a novel interactive visualization and processing tool which extracts, unbends, and untwists volumetric images of fish with minimal user interaction. Our approach enables scientists to interactively unwarp these volumes to remove the undesired torque and bending using a piecewise-linear skeleton extracted by averaging isosurfaces of a harmonic function connecting the head and tail of each fish. The result is a volumetric dataset of an individual, straight fish in a canonical pose defined by the marine biologist expert user. We have developed Unwind in collaboration with a team of marine biologists: Our system has been deployed in their labs, and is presently being used for dataset construction, biomechanical analysis, and the generation of figures for scientific publication. | false | false | [
"Francis Williams",
"Alexander Bock 0002",
"Harish Doraiswamy",
"Cassandra M. Donatelli",
"Kayla Hall",
"Adam Summers",
"Daniele Panozzo",
"Cláudio T. Silva"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1904.04890v2",
"icon": "paper"
}
] |
CHI | 2,020 | Watch+Strap: Extending Smartwatches with Interactive StrapDisplays | 10.1145/3313831.3376199 | While smartwatches are widely adopted these days, their input and output space remains fairly limited by their screen size. We present StrapDisplays-interactive watchbands with embedded display and touch technologies-that enhance commodity watches and extend their input and output capabilities. After introducing the physical design space of these StrapDisplays, we explore how to combine a smartwatch and straps in a synergistic Watch+Strap system. Specifically, we propose multiple interface concepts that consider promising content distributions, interaction techniques, usage types, and display roles. For example, the straps can enrich watch apps, display visualizations, provide glanceable feedback, or help avoiding occlusion issues. Further, we provide a modular research platform incorporating three StrapDisplay prototypes and a flexible web-based software architecture, demonstrating the feasibility of our approach. Early brainstorming sessions with 15 participants informed our design process, while later interviews with six experts supported our concepts and provided valuable feedback for future developments. | false | false | [
"Konstantin Klamka",
"Tom Horak",
"Raimund Dachselt"
] | [] | [] | [] |
CHI | 2,020 | What's Wrong with Computational Notebooks? Pain Points, Needs, and Design Opportunities | 10.1145/3313831.3376729 | Computational notebooks - such as Azure, Databricks, and Jupyter - are a popular, interactive paradigm for data scientists to author code, analyze data, and interleave visualizations, all within a single document. Nevertheless, as data scientists incorporate more of their activities into notebooks, they encounter unexpected difficulties, or pain points, that impact their productivity and disrupt their workflow. Through a systematic, mixed-methods study using semi-structured interviews (n=20) and survey (n=156) with data scientists, we catalog nine pain points when working with notebooks. Our findings suggest that data scientists face numerous pain points throughout the entire workflow - from setting up notebooks to deploying to production - across many notebook environments. Our data scientists report essential notebook requirements, such as supporting data exploration and visualization. The results of our study inform and inspire the design of computational notebooks. | false | false | [
"Souti Chattopadhyay",
"Ishita Prasad",
"Austin Z. Henley",
"Anita Sarma",
"Titus Barik"
] | [
"HM"
] | [] | [] |
CHI | 2,020 | Would you do it?: Enacting Moral Dilemmas in Virtual Reality for Understanding Ethical Decision-Making | 10.1145/3313831.3376788 | A moral dilemma is a decision-making paradox without unambiguously acceptable or preferable options. This paper investigates if and how the virtual enactment of two renowned moral dilemmas---the Trolley and the Mad Bomber---influence decision-making when compared with mentally visualizing such situations. We conducted two user studies with two gender-balanced samples of 60 participants in total that compared between paper-based and virtual-reality (VR) conditions, while simulating 5 distinct scenarios for the Trolley dilemma, and 4 storyline scenarios for the Mad Bomber's dilemma. Our findings suggest that the VR enactment of moral dilemmas further fosters utilitarian decision-making, while it amplifies biases such as sparing juveniles and seeking retribution. Ultimately, we theorize that the VR enactment of renowned moral dilemmas can yield ecologically-valid data for training future Artificial Intelligence (AI) systems on ethical decision-making, and we elicit early design principles for the training of such systems. | false | false | [
"Evangelos Niforatos",
"Adam Palma",
"Roman Gluszny",
"Athanasios Vourvopoulos",
"Fotis Liarokapis"
] | [] | [] | [] |
VAST | 2,019 | A Natural-language-based Visual Query Approach of Uncertain Human Trajectories | 10.1109/TVCG.2019.2934671 | Visual querying is essential for interactively exploring massive trajectory data. However, the data uncertainty imposes profound challenges to fulfill advanced analytics requirements. On the one hand, much of the underlying data does not contain accurate geographic coordinates, e.g., positions of a mobile phone only refer to the regions (i.e., mobile cell stations) in which it resides, instead of accurate GPS coordinates. On the other hand, domain experts and general users prefer a natural way, such as using a natural language sentence, to access and analyze massive movement data. In this paper, we propose a visual analytics approach that can extract spatial-temporal constraints from a textual sentence and support an effective query method over uncertain mobile trajectory data. It is built upon encoding massive, spatially uncertain trajectories by the semantic information of the POIs and regions covered by them, and then storing the trajectory documents in a text database with an effective indexing scheme. The visual interface facilitates query condition specification, situation-aware visualization, and semantic exploration of large trajectory data. Usage scenarios on real-world human mobility datasets demonstrate the effectiveness of our approach. | false | false | [
"Zhaosong Huang",
"Ye Zhao 0003",
"Wei Chen 0001",
"Shengjie Gao",
"Kejie Yu",
"Weixia Xu",
"MingJie Tang",
"Min-Feng Zhu",
"Mingliang Xu"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1908.00277v2",
"icon": "paper"
}
] |
VAST | 2,019 | Ablate, Variate, and Contemplate: Visual Analytics for Discovering Neural Architectures | 10.1109/TVCG.2019.2934261 | The performance of deep learning models is dependent on the precise configuration of many layers and parameters. However, there are currently few systematic guidelines for how to configure a successful model. This means model builders often have to experiment with different configurations by manually programming different architectures (which is tedious and time consuming) or rely on purely automated approaches to generate and train the architectures (which is expensive). In this paper, we present Rapid Exploration of Model Architectures and Parameters, or REMAP, a visual analytics tool that allows a model builder to discover a deep learning model quickly via exploration and rapid experimentation of neural network architectures. In REMAP, the user explores the large and complex parameter space for neural network architectures using a combination of global inspection and local experimentation. Through a visual overview of a set of models, the user identifies interesting clusters of architectures. Based on their findings, the user can run ablation and variation experiments to identify the effects of adding, removing, or replacing layers in a given architecture and generate new models accordingly. They can also handcraft new models using a simple graphical interface. As a result, a model builder can build deep learning models quickly, efficiently, and without manual programming. We inform the design of REMAP through a design study with four deep learning model builders. Through a use case, we demonstrate that REMAP allows users to discover performant neural network architectures efficiently using visual exploration and user-defined semi-automated searches through the model space. | false | false | [
"Dylan Cashman",
"Adam Perer",
"Remco Chang",
"Hendrik Strobelt"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1908.00387v1",
"icon": "paper"
}
] |
VAST | 2,019 | AirVis: Visual Analytics of Air Pollution Propagation | 10.1109/TVCG.2019.2934670 | Air pollution has become a serious public health problem for many cities around the world. To find the causes of air pollution, the propagation processes of air pollutants must be studied at a large spatial scale. However, the complex and dynamic wind fields lead to highly uncertain pollutant transportation. The state-of-the-art data mining approaches cannot fully support the extensive analysis of such uncertain spatiotemporal propagation processes across multiple districts without the integration of domain knowledge. The limitation of these automated approaches motivates us to design and develop AirVis, a novel visual analytics system that assists domain experts in efficiently capturing and interpreting the uncertain propagation patterns of air pollution based on graph visualizations. Designing such a system poses three challenges: a) the extraction of propagation patterns; b) the scalability of pattern presentations; and c) the analysis of propagation processes. To address these challenges, we develop a novel pattern mining framework to model pollutant transportation and extract frequent propagation patterns efficiently from large-scale atmospheric data. Furthermore, we organize the extracted patterns hierarchically based on the minimum description length (MDL) principle and empower expert users to explore and analyze these patterns effectively on the basis of pattern topologies. We demonstrated the effectiveness of our approach through two case studies conducted with a real-world dataset and positive feedback from domain experts. | false | false | [
"Zikun Deng",
"Di Weng",
"Jiahui Chen",
"Ren Liu",
"Zhibin Wang",
"Jie Bao 0003",
"Yu Zheng 0004",
"Yingcai Wu"
] | [] | [] | [] |
VAST | 2,019 | CloudDet: Interactive Visual Analysis of Anomalous Performances in Cloud Computing Systems | 10.1109/TVCG.2019.2934613 | Detecting and analyzing potential anomalous performances in cloud computing systems is essential for avoiding losses to customers and ensuring the efficient operation of the systems. To this end, a variety of automated techniques have been developed to identify anomalies in cloud computing. These techniques are usually adopted to track the performance metrics of the system (e.g., CPU, memory, and disk I/O), represented by a multivariate time series. However, given the complex characteristics of cloud computing data, the effectiveness of these automated methods is affected. Thus, substantial human judgment on the automated analysis results is required for anomaly interpretation. In this paper, we present a unified visual analytics system named CloudDet to interactively detect, inspect, and diagnose anomalies in cloud computing systems. A novel unsupervised anomaly detection algorithm is developed to identify anomalies based on the specific temporal patterns of the given metrics data (e.g., the periodic pattern). Rich visualization and interaction designs are used to help understand the anomalies in the spatial and temporal context. We demonstrate the effectiveness of CloudDet through a quantitative evaluation, two case studies with real-world data, and interviews with domain experts. | false | false | [
"Ke Xu",
"Yun Wang 0012",
"Leni Yang",
"Yifang Wang 0001",
"Bo Qiao 0001",
"Si Qin",
"Yong Xu",
"Haidong Zhang",
"Huamin Qu"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.13187v1",
"icon": "paper"
}
] |
VAST | 2,019 | CourtTime: Generating Actionable Insights into Tennis Matches Using Visual Analytics | 10.1109/TVCG.2019.2934243 | Tennis players and coaches of all proficiency levels seek to understand and improve their play. Summary statistics alone are inadequate to provide the insights players need to improve their games. Spatio-temporal data capturing player and ball movements is likely to provide the actionable insights needed to identify player strengths, weaknesses, and strategies. To fully utilize this spatio-temporal data, we need to integrate it with domain-relevant context meta-data. In this paper, we propose CourtTime, a novel approach to perform data-driven visual analysis of individual tennis matches. Our visual approach introduces a novel visual metaphor, namely 1–D Space-Time Charts that enable the analysis of single points at a glance based on small multiples. We also employ user-driven sorting and clustering techniques and a layout technique that aligns the last few shots in a point to facilitate shot pattern discovery. We discuss the usefulness of CourtTime via an extensive case study and report on feedback from an amateur tennis player and three tennis coaches. | false | false | [
"Tom Polk",
"Dominik Jäckle",
"Johannes Häußler",
"Jing Yang"
] | [] | [] | [] |
VAST | 2,019 | Do What I Mean, Not What I Say! Design Considerations for Supporting Intent and Context in Analytical Conversation | 10.1109/VAST47406.2019.8986918 | Natural language can be a useful modality for creating and interacting with visualizations but users often have unrealistic expectations about the intelligence of natural language systems. The gulf between user expectations and system capabilities may lead to a disappointing user experience. So - if we want to engineer a natural language system, what are the requirements around system intelligence? This work takes a retrospective look at how we answered this question in the design of Ask Data, a natural language interaction feature for Tableau. We examine two factors contributing to perceived system intelligence: the system's ability to understand the analytic intent behind an input utterance and the ability to interpret an utterance contextually (i.e. taking into account the current visualization state and recent actions). Our aim was to understand the ways in which a system would need to support these two aspects of intelligence to enable a positive user experience. We first describe a pre-design Wizard of Oz study that offered insight into this question and narrowed the space of designs under consideration. We then reflect on the impact of this study on system development, examining how design implications from the study played out in practice. Our work contributes insights for the design of natural language interaction in visual analytics as well as a reflection on the value of pre-design empirical studies in the development of visual analytic systems. | false | false | [
"Melanie Tory",
"Vidya Setlur"
] | [] | [] | [] |
VAST | 2,019 | EmoCo: Visual Analysis of Emotion Coherence in Presentation Videos | 10.1109/TVCG.2019.2934656 | Emotions play a key role in human communication and public presentations. Human emotions are usually expressed through multiple modalities. Therefore, exploring multimodal emotions and their coherence is of great value for understanding emotional expressions in presentations and improving presentation skills. However, manually watching and studying presentation videos is often tedious and time-consuming. There is a lack of tool support to help conduct an efficient and in-depth multi-level analysis. Thus, in this paper, we introduce EmoCo, an interactive visual analytics system to facilitate efficient analysis of emotion coherence across facial, text, and audio modalities in presentation videos. Our visualization system features a channel coherence view and a sentence clustering view that together enable users to obtain a quick overview of emotion coherence and its temporal evolution. In addition, a detail view and word view enable detailed exploration and comparison from the sentence level and word level, respectively. We thoroughly evaluate the proposed system and visualization techniques through two usage scenarios based on TED Talk videos and interviews with two domain experts. The results demonstrate the effectiveness of our system in gaining insights into emotion coherence in presentations. | false | false | [
"Haipeng Zeng",
"Xingbo Wang 0001",
"Aoyu Wu",
"Yong Wang 0021",
"Quan Li",
"Alex Endert",
"Huamin Qu"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.12918v2",
"icon": "paper"
}
] |
VAST | 2,019 | Evaluating Perceptual Bias During Geometric Scaling of Scatterplots | 10.1109/TVCG.2019.2934208 | Scatterplots are frequently scaled to fit display areas in multi-view and multi-device data analysis environments. A common method used for scaling is to enlarge or shrink the entire scatterplot together with the inside points synchronously and proportionally. This process is called geometric scaling. However, geometric scaling of scatterplots may cause a perceptual bias, that is, the perceived and physical values of visual features may be dissociated with respect to geometric scaling. For example, if a scatterplot is projected from a laptop to a large projector screen, then observers may feel that the scatterplot shown on the projector has fewer points than that viewed on the laptop. This paper presents an evaluation study on the perceptual bias of visual features in scatterplots caused by geometric scaling. The study focuses on three fundamental visual features (i.e., numerosity, correlation, and cluster separation) and three hypotheses that are formulated on the basis of our experience. We carefully design three controlled experiments by using well-prepared synthetic data and recruit participants to complete the experiments on the basis of their subjective experience. With a detailed analysis of the experimental results, we obtain a set of instructive findings. First, geometric scaling causes a bias that has a linear relationship with the scale ratio. Second, no significant difference exists between the biases measured from normally and uniformly distributed scatterplots. Third, changing the point radius can correct the bias to a certain extent. These findings can be used to inspire the design decisions of scatterplots in various scenarios. | false | false | [
"Yating Wei",
"Honghui Mei",
"Ying Zhao 0001",
"Shuyue Zhou",
"Bingru Lin",
"Haojing Jiang",
"Wei Chen 0001"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1908.00403v2",
"icon": "paper"
}
] |
VAST | 2,019 | explAIner: A Visual Analytics Framework for Interactive and Explainable Machine Learning | 10.1109/TVCG.2019.2934629 | We propose a framework for interactive and explainable machine learning that enables users to (1) understand machine learning models; (2) diagnose model limitations using different explainable AI methods; as well as (3) refine and optimize the models. Our framework combines an iterative XAI pipeline with eight global monitoring and steering mechanisms, including quality monitoring, provenance tracking, model comparison, and trust building. To operationalize the framework, we present explAIner, a visual analytics system for interactive and explainable machine learning that instantiates all phases of the suggested pipeline within the commonly used TensorBoard environment. We performed a user-study with nine participants across different expertise levels to examine their perception of our workflow and to collect suggestions to fill the gap between our system and framework. The evaluation confirms that our tightly integrated system leads to an informed machine learning process while disclosing opportunities for further extensions. | false | false | [
"Thilo Spinner",
"Udo Schlegel",
"Hanna Hauptmann",
"Mennatallah El-Assady"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1908.00087v2",
"icon": "paper"
}
] |
VAST | 2,019 | Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics | 10.1109/TVCG.2019.2934631 | Machine learning models are currently being deployed in a variety of real-world applications where model predictions are used to make decisions about healthcare, bank loans, and numerous other critical tasks. As the deployment of artificial intelligence technologies becomes ubiquitous, it is unsurprising that adversaries have begun developing methods to manipulate machine learning models to their advantage. While the visual analytics community has developed methods for opening the black box of machine learning models, little work has focused on helping the user understand their model vulnerabilities in the context of adversarial attacks. In this paper, we present a visual analytics framework for explaining and exploring model vulnerabilities to adversarial attacks. Our framework employs a multi-faceted visualization scheme designed to support the analysis of data poisoning attacks from the perspective of models, data instances, features, and local structures. We demonstrate our framework through two case studies on binary classifiers and illustrate model vulnerabilities with respect to varying attack strategies. | false | false | [
"Yuxin Ma",
"Tiankai Xie",
"Jundong Li",
"Ross Maciejewski"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.07296v4",
"icon": "paper"
}
] |
VAST | 2,019 | Exploranative Code Quality Documents | 10.1109/TVCG.2019.2934669 | Good code quality is a prerequisite for efficiently developing maintainable software. In this paper, we present a novel approach to generate exploranative (explanatory and exploratory) data-driven documents that report code quality in an interactive, exploratory environment. We employ a template-based natural language generation method to create textual explanations about the code quality, dependent on data from software metrics. The interactive document is enriched by different kinds of visualization, including parallel coordinates plots and scatterplots for data exploration and graphics embedded into text. We devise an interaction model that allows users to explore code quality with consistent linking between text and visualizations; through integrated explanatory text, users are taught background knowledge about code quality aspects. Our approach to interactive documents was developed in a design study process that included software engineering and visual analytics experts. Although the solution is specific to the software engineering scenario, we discuss how the concept could generalize to multivariate data and report lessons learned in a broader scope. | false | false | [
"Haris Mumtaz",
"Shahid Latif",
"Fabian Beck 0001",
"Daniel Weiskopf"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.11481v2",
"icon": "paper"
}
] |
VAST | 2,019 | Facetto: Combining Unsupervised and Supervised Learning for Hierarchical Phenotype Analysis in Multi-Channel Image Data | 10.1109/TVCG.2019.2934547 | Facetto is a scalable visual analytics application that is used to discover single-cell phenotypes in high-dimensional multi-channel microscopy images of human tumors and tissues. Such images represent the cutting edge of digital histology and promise to revolutionize how diseases such as cancer are studied, diagnosed, and treated. Highly multiplexed tissue images are complex, comprising 10^9 or more pixels, 60-plus channels, and millions of individual cells. This makes manual analysis challenging and error-prone. Existing automated approaches are also inadequate, in large part, because they are unable to effectively exploit the deep knowledge of human tissue biology available to anatomic pathologists. To overcome these challenges, Facetto enables a semi-automated analysis of cell types and states. It integrates unsupervised and supervised learning into the image and feature exploration process and offers tools for analytical provenance. Experts can cluster the data to discover new types of cancer and immune cells and use clustering results to train a convolutional neural network that classifies new cells accordingly. Likewise, the output of classifiers can be clustered to discover aggregate patterns and phenotype subsets. We also introduce a new hierarchical approach to keep track of analysis steps and data subsets created by users; this assists in the identification of cell types. Users can build phenotype trees and interact with the resulting hierarchical structures of both high-dimensional feature and image spaces. We report on use-cases in which domain scientists explore various large-scale fluorescence imaging datasets. We demonstrate how Facetto assists users in steering the clustering and classification process, inspecting analysis results, and gaining new scientific insights into cancer biology. | false | false | [
"Robert Krüger",
"Johanna Beyer",
"Won-Dong Jang",
"Nam Wook Kim",
"Artem Sokolov",
"Peter K. Sorger",
"Hanspeter Pfister"
] | [] | [] | [] |
VAST | 2,019 | FairSight: Visual Analytics for Fairness in Decision Making | 10.1109/TVCG.2019.2934262 | Data-driven decision making related to individuals has become increasingly pervasive, but the issue concerning the potential discrimination has been raised by recent studies. In response, researchers have made efforts to propose and implement fairness measures and algorithms, but those efforts have not been translated to the real-world practice of data-driven decision making. As such, there is still an urgent need to create a viable tool to facilitate fair decision making. We propose FairSight, a visual analytic system to address this need; it is designed to achieve different notions of fairness in ranking decisions through identifying the required actions – understanding, measuring, diagnosing and mitigating biases – that together lead to fairer decision making. Through a case study and user study, we demonstrate that the proposed visual analytic and diagnostic modules in the system are effective in understanding the fairness-aware decision pipeline and obtaining more fair outcomes. | false | false | [
"Yongsu Ahn",
"Yu-Ru Lin"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1908.00176v2",
"icon": "paper"
}
] |
VAST | 2,019 | FAIRVIS: Visual Analytics for Discovering Intersectional Bias in Machine Learning | 10.1109/VAST47406.2019.8986948 | The growing capability and accessibility of machine learning has led to its application to many real-world domains and data about people. Despite the benefits algorithmic systems may bring, models can reflect, inject, or exacerbate implicit and explicit societal biases into their outputs, disadvantaging certain demographic subgroups. Discovering which biases a machine learning model has introduced is a great challenge, due to the numerous definitions of fairness and the large number of potentially impacted subgroups. We present FAIRVIS, a mixed-initiative visual analytics system that integrates a novel subgroup discovery technique for users to audit the fairness of machine learning models. Through FAIRVIS, users can apply domain knowledge to generate and investigate known subgroups, and explore suggested and similar subgroups. FAIRVIS's coordinated views enable users to explore a high-level overview of subgroup performance and subsequently drill down into detailed investigation of specific subgroups. We show how FAIRVIS helps to discover biases in two real datasets used in predicting income and recidivism. As a visual analytics system devoted to discovering bias in machine learning, FAIRVIS demonstrates how interactive visualization may help data scientists and the general public understand and create more equitable algorithmic systems. | false | false | [
"Ángel Alexander Cabrera",
"Will Epperson",
"Fred Hohman",
"Minsuk Kahng",
"Jamie Morgenstern",
"Polo Chau"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1904.05419v4",
"icon": "paper"
}
] |
VAST | 2,019 | FDive: Learning Relevance Models Using Pattern-based Similarity Measures | 10.1109/VAST47406.2019.8986940 | The detection of interesting patterns in large high-dimensional datasets is difficult because of their dimensionality and pattern complexity. Therefore, analysts require automated support for the extraction of relevant patterns. In this paper, we present FDive, a visual active learning system that helps to create visually explorable relevance models, assisted by learning a pattern-based similarity. We use a small set of user-provided labels to rank similarity measures, consisting of feature descriptor and distance function combinations, by their ability to distinguish relevant from irrelevant data. Based on the best-ranked similarity measure, the system calculates an interactive Self-Organizing Map-based relevance model, which classifies data according to the cluster affiliation. It also automatically prompts further relevance feedback to improve its accuracy. Uncertain areas, especially near the decision boundaries, are highlighted and can be refined by the user. We evaluate our approach by comparison to state-of-the-art feature selection techniques and demonstrate the usefulness of our approach by a case study classifying electron microscopy images of brain cells. The results show that FDive enhances both the quality and understanding of relevance models and can thus lead to new insights for brain research. | false | false | [
"Frederik L. Dennig",
"Tom Polk",
"Zudi Lin",
"Tobias Schreck",
"Hanspeter Pfister",
"Michael Behrisch 0001"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.12489v3",
"icon": "paper"
}
] |
VAST | 2,019 | FlowSense: A Natural Language Interface for Visual Data Exploration within a Dataflow System | 10.1109/TVCG.2019.2934668 | Dataflow visualization systems enable flexible visual data exploration by allowing the user to construct a dataflow diagram that composes query and visualization modules to specify system functionality. However, learning dataflow diagram usage presents overhead that often discourages the user. In this work we design FlowSense, a natural language interface for dataflow visualization systems that utilizes state-of-the-art natural language processing techniques to assist dataflow diagram construction. FlowSense employs a semantic parser with special utterance tagging and special utterance placeholders to generalize to different datasets and dataflow diagrams. It explicitly presents recognized dataset and diagram special utterances to the user for dataflow context awareness. With FlowSense the user can expand and adjust dataflow diagrams more conveniently via plain English. We apply FlowSense to the VisFlow subset-flow visualization system to enhance its usability. We evaluate FlowSense by one case study with domain experts on a real-world data analysis problem and a formal user study. | false | false | [
"Bowen Yu 0004",
"Cláudio T. Silva"
] | [
"BP"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1908.00681v2",
"icon": "paper"
}
] |
VAST | 2,019 | Galex: Exploring the Evolution and Intersection of Disciplines | 10.1109/TVCG.2019.2934667 | Revealing the evolution of science and the intersections among its sub-fields is extremely important to understand the characteristics of disciplines, discover new topics, and predict the future. The current work focuses on either building the skeleton of science, lacking interaction, detailed exploration and interpretation or on the lower topic level, missing high-level macro-perspective. To fill this gap, we design and implement Galaxy Evolution Explorer (Galex), a hierarchical visual analysis system, in combination with advanced text mining technologies, that could help analysts to comprehend the evolution and intersection of one discipline rapidly. We divide Galex into three progressively fine-grained levels: discipline, area, and institution levels. The combination of interactions enables analysts to explore an arbitrary piece of history and an arbitrary part of the knowledge space of one discipline. Using a flexible spotlight component, analysts could freely select and quickly understand an exploration region. A tree metaphor allows analysts to perceive the expansion, decline, and intersection of topics intuitively. A synchronous spotlight interaction aids in comparing research contents among institutions easily. Three cases demonstrate the effectiveness of our system. | false | false | [
"Zeyu Li 0003",
"Changhong Zhang",
"Shichao Jia",
"Jiawan Zhang"
] | [] | [] | [] |
VAST | 2,019 | GPGPU Linear Complexity t-SNE Optimization | 10.1109/TVCG.2019.2934307 | In recent years the t-distributed Stochastic Neighbor Embedding (t-SNE) algorithm has become one of the most used and insightful techniques for exploratory data analysis of high-dimensional data. It reveals clusters of high-dimensional data points at different scales while only requiring minimal tuning of its parameters. However, the computational complexity of the algorithm limits its application to relatively small datasets. To address this problem, several evolutions of t-SNE have been developed in recent years, mainly focusing on the scalability of the similarity computations between data points. However, these contributions are insufficient to achieve interactive rates when visualizing the evolution of the t-SNE embedding for large datasets. In this work, we present a novel approach to the minimization of the t-SNE objective function that heavily relies on graphics hardware and has linear computational complexity. Our technique decreases the computational cost of running t-SNE on datasets by orders of magnitude and retains or improves on the accuracy of past approximated techniques. We propose to approximate the repulsive forces between data points by splatting kernel textures for each data point. This approximation allows us to reformulate the t-SNE minimization problem as a series of tensor operations that can be efficiently executed on the graphics card. An efficient implementation of our technique is integrated and available for use in the widely used Google TensorFlow.js, and an open-source C++ library. | false | false | [
"Nicola Pezzotti",
"Julian Thijssen",
"Alexander Mordvintsev",
"Thomas Höllt",
"Baldur van Lew",
"Boudewijn P. F. Lelieveldt",
"Elmar Eisemann",
"Anna Vilanova"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1805.10817v2",
"icon": "paper"
}
] |
VAST | 2,019 | GUIRO: User-Guided Matrix Reordering | 10.1109/TVCG.2019.2934300 | Matrix representations are one of the main established and empirically proven to be effective visualization techniques for relational (or network) data. However, matrices—similar to node-link diagrams—are most effective if their layout reveals the underlying data topology. Given the many developed algorithms, a practical problem arises: “Which matrix reordering algorithm should I choose for my dataset at hand?” To make matters worse, different reordering algorithms applied to the same dataset may let significantly different visual matrix patterns emerge. This leads to the question of trustworthiness and explainability of these fully automated, often heuristic, black-box processes. We present GUIRO, a Visual Analytics system that helps novices, network analysts, and algorithm designers to open the black-box. Users can investigate the usefulness and expressiveness of 70 accessible matrix reordering algorithms. For network analysts, we introduce a novel model space representation and two interaction techniques for a user-guided reordering of rows or columns, and especially groups thereof (submatrix reordering). These novel techniques contribute to the understanding of the global and local dataset topology. We support algorithm designers by giving them access to 16 reordering quality metrics and visual exploration means for comparing reordering implementations on a row/column permutation level. We evaluated GUIRO in a guided explorative user study with 12 subjects, a case study demonstrating its usefulness in a real-world scenario, and through an expert study gathering feedback on our design decisions. We found that our proposed methods help even inexperienced users to understand matrix patterns and allow a user-guided steering of reordering algorithms. GUIRO helps to increase the transparency of matrix reordering algorithms, thus helping a broad range of users to get a better insight into the complex reordering process, in turn supporting data and reordering algorithm insights. | false | false | [
"Michael Behrisch 0001",
"Tobias Schreck",
"Hanspeter Pfister"
] | [] | [] | [] |
VAST | 2,019 | ICE: An Interactive Configuration Explorer for High Dimensional Categorical Parameter Spaces | 10.1109/VAST47406.2019.8986923 | There are many applications where users seek to explore the impact of the settings of several categorical variables with respect to one dependent numerical variable. For example, a computer systems analyst might want to study how the type of file system or storage device affects system performance. A usual choice is the method of Parallel Sets, designed to visualize multivariate categorical variables. However, we found that the magnitude of the parameter impacts on the numerical variable cannot be easily observed here. We also attempted a dimension reduction approach based on Multiple Correspondence Analysis but found that the SVD-generated 2D layout resulted in a loss of information. We hence propose a novel approach, the Interactive Configuration Explorer (ICE), which directly addresses the need of analysts to learn how the dependent numerical variable is affected by the parameter settings given multiple optimization objectives. No information is lost as ICE shows the complete distribution and statistics of the dependent variable in context with each categorical variable. Analysts can interactively filter the variables to optimize for certain goals such as achieving a system with maximum performance, low variance, etc. Our system was developed in tight collaboration with a group of systems performance researchers and its final effectiveness was evaluated with expert interviews, a comparative user study, and two case studies. | false | false | [
"Anjul Tyagi",
"Zhen Cao",
"Tyler Estro",
"Erez Zadok",
"Klaus Mueller 0001"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.12627v2",
"icon": "paper"
}
] |
VAST | 2,019 | Influence Flowers of Academic Entities | 10.1109/VAST47406.2019.8986934 | We present the Influence Flower, a new visual metaphor for the influence profile of academic entities, including people, projects, institutions, conferences, and journals. While many tools quantify influence, we aim to expose the flow of influence between entities. The Influence Flower is an ego-centric graph, with a query entity placed in the centre. The petals are styled to reflect the strength of influence to and from other entities of the same or different type. For example, one can break down the incoming and outgoing influences of a research lab by research topics. The Influence Flower uses a recent snapshot of Microsoft Academic Graph, consisting of 212 million authors, their 176 million publications, and 1.2 billion citations. An interactive web app, Influence Map, is constructed around this central metaphor for searching and curating visualisations. We also propose a visual comparison method that highlights change in influence patterns over time. We demonstrate through several case studies that the Influence Flower supports data-driven inquiries about the following: researchers' careers over time; paper(s) and projects, including those with delayed recognition; the interdisciplinary profile of a research institution; and the shifting topical trends in conferences. We also use this tool on influence data beyond academic citations, by contrasting the academic and Twitter activities of a researcher. | false | false | [
"Minjeong Shin",
"Alexander Soen",
"Benjamin T. Readshaw",
"Steve Blackburn",
"Mitchell Whitelaw",
"Lexing Xie"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.12748v1",
"icon": "paper"
}
] |
VAST | 2,019 | Interactive Correction of Mislabeled Training Data | 10.1109/VAST47406.2019.8986943 | In this paper, we develop a visual analysis method for interactively improving the quality of labeled data, which is essential to the success of supervised and semi-supervised learning. The quality improvement is achieved through the use of user-selected trusted items. We employ a bi-level optimization model to accurately match the labels of the trusted items and to minimize the training loss. Based on this model, a scalable data correction algorithm is developed to handle tens of thousands of labeled data efficiently. The selection of the trusted items is facilitated by an incremental tSNE with improved computational efficiency and layout stability to ensure a smooth transition between different levels. We evaluated our method on real-world datasets through quantitative evaluation and case studies, and the results were generally favorable. | false | false | [
"Shouxing Xiang",
"Xi Ye",
"Jiazhi Xia",
"Jing Wu 0004",
"Yang Chen",
"Shixia Liu"
] | [] | [] | [] |
VAST | 2,019 | Interactive Learning for Identifying Relevant Tweets to Support Real-time Situational Awareness | 10.1109/TVCG.2019.2934614 | Various domain users are increasingly leveraging real-time social media data to gain rapid situational awareness. However, due to the high noise in the deluge of data, effectively determining semantically relevant information can be difficult, further complicated by the changing definition of relevancy by each end user for different events. The majority of existing methods for short text relevance classification fail to incorporate users' knowledge into the classification process. Existing methods that incorporate interactive user feedback focus on historical datasets. Therefore, classifiers cannot be interactively retrained for specific events or user-dependent needs in real-time. This limits real-time situational awareness, as streaming data that is incorrectly classified cannot be corrected immediately, permitting the possibility for important incoming data to be incorrectly classified as well. We present a novel interactive learning framework to improve the classification process in which the user iteratively corrects the relevancy of tweets in real-time to train the classification model on-the-fly for immediate predictive improvements. We computationally evaluate our classification model adapted to learn at interactive rates. Our results show that our approach outperforms state-of-the-art machine learning models. In addition, we integrate our framework with the extended Social Media Analytics and Reporting Toolkit (SMART) 2.0 system, allowing the use of our interactive learning framework within a visual analytics system tailored for real-time situational awareness. To demonstrate our framework's effectiveness, we provide domain expert feedback from first responders who used the extended SMART 2.0 system. | false | false | [
"Luke S. Snyder",
"Yi-Shan Lin",
"Morteza Karimzadeh",
"Dan Goldwasser",
"David S. Ebert"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1908.02588v2",
"icon": "paper"
}
] |
VAST | 2,019 | LightGuider: Guiding Interactive Lighting Design using Suggestions, Provenance, and Quality Visualization | 10.1109/TVCG.2019.2934658 | LightGuider is a novel guidance-based approach to interactive lighting design, which typically consists of interleaved 3D modeling operations and light transport simulations. Rather than having designers use a trial-and-error approach to match their illumination constraints and aesthetic goals, LightGuider supports the process by simulating potential next modeling steps that can deliver the most significant improvements. LightGuider takes predefined quality criteria and the current focus of the designer into account to visualize suggestions for lighting-design improvements via a specialized provenance tree. This provenance tree integrates snapshot visualizations of how well a design meets the given quality criteria weighted by the designer's preferences. This integration facilitates the analysis of quality improvements over the course of a modeling workflow as well as the comparison of alternative design solutions. We evaluate our approach with three lighting designers to illustrate its usefulness. | false | false | [
"Andreas Walch",
"Michael Schwärzler",
"Christian Luksch",
"Elmar Eisemann",
"Theresia Gschwandtner"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.08553v2",
"icon": "paper"
}
] |
VAST | 2,019 | MetricsVis: A Visual Analytics System for Evaluating Employee Performance in Public Safety Agencies | 10.1109/TVCG.2019.2934603 | Evaluating employee performance in organizations with varying workloads and tasks is challenging. Specifically, it is important to understand how quantitative measurements of employee achievements relate to supervisor expectations, what the main drivers of good performance are, and how to combine these complex and flexible performance evaluation metrics into an accurate portrayal of organizational performance in order to identify shortcomings and improve overall productivity. To facilitate this process, we summarize common organizational performance analyses into four visual exploration task categories. Additionally, we develop MetricsVis, a visual analytics system composed of multiple coordinated views to support the dynamic evaluation and comparison of individual, team, and organizational performance in public safety organizations. MetricsVis provides four primary visual components to expedite performance evaluation: (1) a priority adjustment view to support direct manipulation on evaluation metrics; (2) a reorderable performance matrix to demonstrate the details of individual employees; (3) a group performance view that highlights aggregate performance and individual contributions for each group; and (4) a projection view illustrating employees with similar specialties to facilitate shift assignments and training. We demonstrate the usability of our framework with two case studies from medium-sized law enforcement agencies and highlight its broader applicability to other domains. | false | false | [
"Jieqiong Zhao",
"Morteza Karimzadeh",
"Luke S. Snyder",
"Chittayong Surakitbanharn",
"Cheryl Z. Qian",
"David S. Ebert"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.13601v3",
"icon": "paper"
}
] |
VAST | 2,019 | Motion Browser: Visualizing and Understanding Complex Upper Limb Movement Under Obstetrical Brachial Plexus Injuries | 10.1109/TVCG.2019.2934280 | The brachial plexus is a complex network of peripheral nerves that enables sensing from and control of the movements of the arms and hand. Nowadays, the coordination between the muscles to generate simple movements is still not well understood, hindering the knowledge of how to best treat patients with this type of peripheral nerve injury. To acquire enough information for medical data analysis, physicians conduct motion analysis assessments with patients to produce a rich dataset of electromyographic signals from multiple muscles recorded with joint movements during real-world tasks. However, tools for the analysis and visualization of the data in a succinct and interpretable manner are currently not available. Without the ability to integrate, compare, and compute multiple data sources in one platform, physicians can only compute simple statistical values to describe patient's behavior vaguely, which limits the possibility to answer clinical questions and generate hypotheses for research. To address this challenge, we have developed Motion Browser, an interactive visual analytics system which provides an efficient framework to extract and compare muscle activity patterns from the patient's limbs and coordinated views to help users analyze muscle signals, motion data, and video information to address different tasks. The system was developed as a result of a collaborative endeavor between computer scientists and orthopedic surgery and rehabilitation physicians. We present case studies showing physicians can utilize the information displayed to understand how individuals coordinate their muscles to initiate appropriate treatment and generate new hypotheses for future research. | false | false | [
"Gromit Yeuk-Yin Chan",
"Luis Gustavo Nonato",
"Alice Chu",
"Preeti Raghavan",
"Viswanath Aluru",
"Cláudio T. Silva"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.09146v1",
"icon": "paper"
}
] |
VAST | 2,019 | NNVA: Neural Network Assisted Visual Analysis of Yeast Cell Polarization Simulation | 10.1109/TVCG.2019.2934591 | Complex computational models are often designed to simulate real-world physical phenomena in many scientific disciplines. However, these simulation models tend to be computationally very expensive and involve a large number of simulation input parameters, which need to be analyzed and properly calibrated before the models can be applied for real scientific studies. We propose a visual analysis system to facilitate interactive exploratory analysis of high-dimensional input parameter space for a complex yeast cell polarization simulation. The proposed system can assist the computational biologists, who designed the simulation model, to visually calibrate the input parameters by modifying the parameter values and immediately visualizing the predicted simulation outcome without having the need to run the original expensive simulation for every instance. Our proposed visual analysis system is driven by a trained neural network-based surrogate model as the backend analysis framework. In this work, we demonstrate the advantage of using neural networks as surrogate models for visual analysis by incorporating some of the recent advances in the field of uncertainty quantification, interpretability and explainability of neural network-based models. We utilize the trained network to perform interactive parameter sensitivity analysis of the original simulation as well as recommend optimal parameter configurations using the activation maximization framework of neural networks. We also facilitate detail analysis of the trained network to extract useful insights about the simulation model, learned by the network, during the training process. We performed two case studies, and discovered multiple new parameter configurations, which can trigger high cell polarization results in the original simulation model. We evaluated our results by comparing with the original simulation model outcomes as well as the findings from previous parameter analysis performed by our experts. | false | false | [
"Subhashis Hazarika",
"Haoyu Li",
"Ko-Chih Wang",
"Han-Wei Shen",
"Ching-Shan Chou"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1904.09044v3",
"icon": "paper"
}
] |
VAST | 2,019 | OD Morphing: Balancing Simplicity with Faithfulness for OD Bundling | 10.1109/TVCG.2019.2934657 | OD bundling is a promising method to identify key origin-destination (OD) patterns, but the bundling can mislead the interpretation of actual trajectories traveled. We present OD Morphing, an interactive OD bundling technique that improves geographical faithfulness to actual trajectories while preserving visual simplicity for OD patterns. OD Morphing iteratively identifies critical waypoints from the actual trajectory network with a min-cut algorithm and transitions OD bundles to pass through the identified waypoints with a smooth morphing method. Furthermore, we extend OD Morphing to support bundling at interaction speeds to enable users to interactively transition between degrees of faithfulness to aid sensemaking. We introduce metrics for faithfulness and simplicity to evaluate their trade-off achieved by OD morphed bundling. We demonstrate OD Morphing on real-world city-scale taxi trajectory and USA domestic planned flight datasets. | false | false | [
"Yan Lyu",
"Xu Liu 0014",
"Hanyi Chen",
"Arpan Mangal",
"Kai Liu 0001",
"Chao Chen 0004",
"Brian Y. Lim"
] | [] | [] | [] |
VAST | 2,019 | Origraph: Interactive Network Wrangling | 10.1109/VAST47406.2019.8986909 | Networks are a natural way of thinking about many datasets. The data on which a network is based, however, is rarely collected in a form that suits the analysis process, making it necessary to create and reshape networks. Data wrangling is widely acknowledged to be a critical part of the data analysis pipeline, yet interactive network wrangling has received little attention in the visualization research community. In this paper, we discuss a set of operations that are important for wrangling network datasets and introduce a visual data wrangling tool, Origraph, that enables analysts to apply these operations to their datasets. Key operations include creating a network from source data such as tables, reshaping a network by introducing new node or edge classes, filtering nodes or edges, and deriving new node or edge attributes. Our tool, Origraph, enables analysts to execute these operations with little to no programming, and to immediately visualize the results. Origraph provides views to investigate the network model, a sample of the network, and node and edge attributes. In addition, we introduce interfaces designed to aid analysts in specifying arguments for sensible network wrangling operations. We demonstrate the usefulness of Origraph in two Use Cases: first, we investigate gender bias in the film industry, and then the influence of money on the political support for the war in Yemen. | false | false | [
"Alex Bigelow",
"Carolina Nobre",
"Miriah D. Meyer",
"Alexander Lex"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1812.06337v3",
"icon": "paper"
}
] |
VAST | 2,019 | PlanningVis: A Visual Analytics Approach to Production Planning in Smart Factories | 10.1109/TVCG.2019.2934275 | Production planning in the manufacturing industry is crucial for fully utilizing factory resources (e.g., machines, raw materials and workers) and reducing costs. With the advent of industry 4.0, plenty of data recording the status of factory resources have been collected and further involved in production planning, which brings an unprecedented opportunity to understand, evaluate and adjust complex production plans through a data-driven approach. However, developing a systematic analytics approach for production planning is challenging due to the large volume of production data, the complex dependency between products, and unexpected changes in the market and the plant. Previous studies only provide summarized results and fail to show details for comparative analysis of production plans. Besides, the rapid adjustment to the plan in the case of an unanticipated incident is also not supported. In this paper, we propose PlanningVis, a visual analytics system to support the exploration and comparison of production plans with three levels of details: a plan overview presenting the overall difference between plans, a product view visualizing various properties of individual products, and a production detail view displaying the product dependency and the daily production details in related factories. By integrating an automatic planning algorithm with interactive visual explorations, PlanningVis can facilitate the efficient optimization of daily production planning as well as support a quick response to unanticipated incidents in manufacturing. Two case studies with real-world data and carefully designed interviews with domain experts demonstrate the effectiveness and usability of PlanningVis. | false | false | [
"Dong Sun 0001",
"Renfei Huang",
"Yuanzhe Chen",
"Yong Wang 0021",
"Jia Zeng",
"Mingxuan Yuan",
"Ting-Chuen Pong",
"Huamin Qu"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.12201v3",
"icon": "paper"
}
] |
VAST | 2,019 | ProtoSteer: Steering Deep Sequence Model with Prototypes | 10.1109/TVCG.2019.2934267 | Recently we have witnessed growing adoption of deep sequence models (e.g. LSTMs) in many application domains, including predictive health care, natural language processing, and log analysis. However, the intricate working mechanism of these models confines their accessibility to the domain experts. Their black-box nature also makes it a challenging task to incorporate domain-specific knowledge of the experts into the model. In ProtoSteer (Prototype Steering), we tackle the challenge of directly involving the domain experts to steer a deep sequence model without relying on model developers as intermediaries. Our approach originates in case-based reasoning, which imitates the common human problem-solving process of consulting past experiences to solve new problems. We utilize ProSeNet (Prototype Sequence Network), which learns a small set of exemplar cases (i.e., prototypes) from historical data. In ProtoSteer they serve both as an efficient visual summary of the original data and explanations of model decisions. With ProtoSteer the domain experts can inspect, critique, and revise the prototypes interactively. The system then incorporates user-specified prototypes and incrementally updates the model. We conduct extensive case studies and expert interviews in application domains including sentiment analysis on texts and predictive diagnostics based on vehicle fault logs. The results demonstrate that involvements of domain users can help obtain more interpretable models with concise prototypes while retaining similar accuracy. | false | false | [
"Yao Ming",
"Panpan Xu",
"Furui Cheng",
"Huamin Qu",
"Ren Liu"
] | [] | [] | [] |
VAST | 2,019 | R-Map: A Map Metaphor for Visualizing Information Reposting Process in Social Media | 10.1109/TVCG.2019.2934263 | We propose R-Map (Reposting Map), a visual analytical approach with a map metaphor to support interactive exploration and analysis of the information reposting process in social media. A single original social media post can cause large cascades of repostings (i.e., retweets) on online networks, involving thousands, even millions of people with different opinions. Such reposting behaviors form the reposting tree, in which a node represents a message and a link represents the reposting relation. In R-Map, the reposting tree structure can be spatialized with highlighted key players and tiled nodes. The important reposting behaviors, the following relations and the semantics relations are represented as rivers, routes and bridges, respectively, in a virtual geographical space. R-Map supports a scalable overview of a large number of information repostings with semantics. Additional interactions on the map are provided to support the investigation of temporal patterns and user behaviors in the information diffusion process. We evaluate the usability and effectiveness of our system with two use cases and a formal user study. | false | false | [
"Shuai Chen 0001",
"Sihang Li",
"Siming Chen 0001",
"Xiaoru Yuan"
] | [] | [] | [] |
VAST | 2,019 | Scalable Topological Data Analysis and Visualization for Evaluating Data-Driven Models in Scientific Applications | 10.1109/TVCG.2019.2934594 | With the rapid adoption of machine learning techniques for large-scale applications in science and engineering comes the convergence of two grand challenges in visualization. First, the utilization of black box models (e.g., deep neural networks) calls for advanced techniques in exploring and interpreting model behaviors. Second, the rapid growth in computing has produced enormous datasets that require techniques that can handle millions or more samples. Although some solutions to these interpretability challenges have been proposed, they typically do not scale beyond thousands of samples, nor do they provide the high-level intuition scientists are looking for. Here, we present the first scalable solution to explore and analyze high-dimensional functions often encountered in the scientific data analysis pipeline. By combining a new streaming neighborhood graph construction, the corresponding topology computation, and a novel data aggregation scheme, namely topology aware datacubes, we enable interactive exploration of both the topological and the geometric aspect of high-dimensional data. Following two use cases from high-energy-density (HED) physics and computational biology, we demonstrate how these capabilities have led to crucial new insights in both applications. | false | false | [
"Shusen Liu",
"Jim Gaffney",
"Jayson Luc Peterson",
"Peter B. Robinson",
"Harsh Bhatia",
"Valerio Pascucci",
"Brian K. Spears",
"Peer-Timo Bremer",
"Di Wang",
"Dan Maljovec",
"Rushil Anirudh",
"Jayaraman J. Thiagarajan",
"Sam Ade Jacobs",
"Brian Van Essen",
"David Hysom",
"Jae-Seung Yeom"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1907.08325v1",
"icon": "paper"
}
] |