| Conference | Year | Title | DOI | Abstract | Accessible | Early | AuthorNames-Deduped | Award | Resources | ResourceLinks |
|---|---|---|---|---|---|---|---|---|---|---|
CHI | 2017 | Affective Color in Visualization | 10.1145/3025453.3026041 | Communicating the right affect, a feeling, experience or emotion, is critical in creating engaging visual communication. We carried out three studies examining how different color properties (lightness, chroma and hue) and different palette properties (combinations and distribution of colors) contribute to different affective interpretations in information visualization, where the number of colors is typically smaller than the rich palettes used in design. Our results show how color and palette properties can be manipulated to achieve affective expressiveness even in the small sets of colors used for data encoding in information visualization. | false | false | [
"Lyn Bartram",
"Abhisekh Patra",
"Maureen C. Stone"
] | [] | [] | [] |
CHI | 2017 | AVUI: Designing a Toolkit for Audiovisual Interfaces | 10.1145/3025453.3026042 | The combined use of sound and image has a rich history, from audiovisual artworks to research exploring the potential of data visualization and sonification. However, we lack standard tools or guidelines for audiovisual (AV) interaction design, particularly for live performance. We propose the AVUI (AudioVisual User Interface), where sound and image are used together in a cohesive way in the interface; and an enabling technology, the ofxAVUI toolkit. AVUI guidelines and ofxAVUI were developed in a three-stage process, together with AV producers: 1) participatory design activities; 2) prototype development; 3) encapsulation of prototype as a plug-in, evaluation, and roll out. Best practices identified include: reconfigurable interfaces and mappings; object-oriented packaging of AV and UI; diverse sound visualization; flexible media manipulation and management. The toolkit and a mobile app developed using it have been released as open-source. Guidelines and toolkit demonstrate the potential of AVUI and offer designers a convenient framework for AV interaction design. | false | false | [
"Nuno N. Correia",
"Atau Tanaka"
] | [] | [] | [] |
CHI | 2017 | Bottom-up vs. Top-down: Trade-offs in Efficiency, Understanding, Freedom and Creativity with InfoVis Tools | 10.1145/3025453.3025942 | The emergence of tools that support fast-and-easy visualization creation by non-experts has made the benefits of InfoVis widely accessible. Key features of these tools include attribute-level operations, automated mappings, and visualization templates. However, these features shield people from lower-level visualization design steps, such as the specific mapping of data points to visuals. In contrast, recent research promotes constructive visualization where individual data units and visuals are directly manipulated. We present a qualitative study comparing people's visualization processes using two visualization tools: one promoting a top-down approach to visualization construction (Tableau Desktop) and one implementing a bottom-up constructive visualization approach (iVoLVER). Our results show how the two approaches influence: 1) the visualization process, 2) decisions on the visualization design, 3) the feeling of control and authorship, and 4) the willingness to explore alternative designs. We discuss the complex trade-offs between the two approaches and outline considerations for designing better visualization tools. | false | false | [
"Gonzalo Gabriel Méndez",
"Uta Hinrichs",
"Miguel A. Nacenta"
] | [] | [] | [] |
CHI | 2017 | Building with Data: Architectural Models as Inspiration for Data Physicalization | 10.1145/3025453.3025850 | In this paper we analyze the role of physical scale models in the architectural design process and apply insights from architecture for the creation and use of data physicalizations. Based on a survey of the architecture literature on model making and ten interviews with practicing architects, we describe the role of physical models as a tool for exploration and communication. From these observations, we identify trends in the use of physical models in architecture, which have the potential to inform the design of data physicalizations. We identify four functions of architectural modeling that can be directly adapted for use in the process of building rich data models. Finally, we discuss how the visualization community can apply observations from architecture to the design of new data physicalizations. | false | false | [
"Carmen Hull",
"Wesley Willett"
] | [] | [] | [] |
CHI | 2017 | Designing Game-Based Myoelectric Prosthesis Training | 10.1145/3025453.3025676 | A myoelectric prosthesis (myo) is a dexterous artificial limb controlled by muscle contractions. Learning to use a myo can be challenging, so extensive training is often required to use a myo prosthesis effectively. Signal visualizations and simple muscle-controlled games are currently used to help patients train their muscles, but are boring and frustrating. Furthermore, current training systems require expensive medical equipment and clinician oversight, restricting training to infrequent clinical visits. To address these limitations, we developed a new game that promotes fun and success, and shows the viability of a low-cost myoelectric input device. We adapted a user-centered design (UCD) process to receive feedback from patients, clinicians, and family members as we iteratively addressed challenges to improve our game. Through this work, we introduce a free and open myo training game, provide new information about the design of myo training games, and reflect on an adapted UCD process for the practical iterative development of therapeutic games. | false | false | [
"Aaron Tabor",
"Scott Bateman",
"Erik J. Scheme",
"David R. Flatla",
"Kathrin Gerling"
] | [] | [] | [] |
CHI | 2017 | Effects of Sharing Physiological States of Players in a Collaborative Virtual Reality Gameplay | 10.1145/3025453.3026028 | Interfaces for collaborative tasks, such as multiplayer games, can enable more effective and enjoyable collaboration. However, in these systems, the emotional states of the users are often not communicated properly due to their remoteness from one another. In this paper, we investigate the effects of showing emotional states of one collaborator to the other during an immersive Virtual Reality (VR) gameplay experience. We created two collaborative immersive VR games that display the real-time heart-rate of one player to the other. The two different games elicited different emotions, one joyous and the other scary. We tested the effects of visualizing heart-rate feedback in comparison with conditions where such a feedback was absent. The games had significant main effects on the overall emotional experience. | false | false | [
"Arindam Dey 0001",
"Thammathip Piumsomboon",
"Youngho Lee",
"Mark Billinghurst"
] | [] | [] | [] |
CHI | 2017 | Empirical Analysis of the Subjective Impressions and Objective Measures of Domain Scientists' Visual Analytic Judgments | 10.1145/3025453.3025882 | Scientists often use specific data analysis and presentation methods familiar within their domain. But does high familiarity drive better analytical judgment? This question is especially relevant when familiar methods themselves can have shortcomings: many visualizations used conventionally for scientific data analysis and presentation do not follow established best practices. This necessitates new methods that might be unfamiliar yet prove to be more effective. But there is little empirical understanding of the relationships between scientists' subjective impressions about familiar and unfamiliar visualizations and objective measures of their visual analytic judgments. To address this gap and to study these factors, we focus on visualizations used for comparison of climate model performance. We report on a comprehensive survey-based user study with 47 climate scientists and present an analysis of: i) relationships among scientists' familiarity, their perceived levels of comfort, confidence, accuracy, and objective measures of accuracy, and ii) relationships among domain experience, visualization familiarity, and post-study preference. | false | false | [
"Aritra Dasgupta",
"Susannah Burrows",
"Kyungsik Han",
"Philip J. Rasch"
] | [] | [] | [] |
CHI | 2017 | Evaluating Perceptually Complementary Views for Network Exploration Tasks | 10.1145/3025453.3026024 | We explore the relative merits of matrix, node-link and combined side-by-side views for the visualisation of weighted networks with three controlled studies: (1) finding the most effective visual encoding for weighted edges in matrix representations; (2) comparing matrix, node-link and combined views for static weighted networks; and (3) comparing MatrixWave, Sankey and combined views of both for event-sequence data. Our studies underline that node-link and matrix views are suited to different analysis tasks. For the combined view, our studies show that there is a perceptually complementary effect in terms of improved accuracy for some tasks, but that there is a cost in terms of longer completion time than the faster of the two techniques alone. Eye-movement data shows that for many tasks participants strongly favour one of the two views, after trying both in the training phase. | false | false | [
"Chunlei Chang",
"Benjamin Bach",
"Tim Dwyer",
"Kim Marriott"
] | [] | [] | [] |
CHI | 2017 | Explaining the Gap: Visualizing One's Predictions Improves Recall and Comprehension of Data | 10.1145/3025453.3025592 | Information visualizations use interactivity to enable user-driven querying of visualized data. However, users' interactions with their internal representations, including their expectations about data, are also critical for a visualization to support learning. We present multiple graphically-based techniques for eliciting and incorporating a user's prior knowledge about data into visualization interaction. We use controlled experiments to evaluate how graphically eliciting forms of prior knowledge and presenting feedback on the gap between prior knowledge and the observed data impacts a user's ability to recall and understand the data. We find that participants who are prompted to reflect on their prior knowledge by predicting and self-explaining data outperform a control group in recall and comprehension. These effects persist when participants have moderate or little prior knowledge on the datasets. We discuss how the effects differ based on text versus visual presentations of data. We characterize the design space of graphical prediction and feedback techniques and describe design recommendations. | false | false | [
"Yea-Seul Kim",
"Katharina Reinecke",
"Jessica Hullman"
] | [] | [] | [] |
CHI | 2017 | GIAnT: Visualizing Group Interaction at Large Wall Displays | 10.1145/3025453.3026006 | Large interactive displays are increasingly important and a relevant research topic, and several studies have focused on wall interaction. However, in many cases, thorough user studies currently require time-consuming video analysis and coding. We present the Group Interaction Analysis Toolkit GIAnT, which provides a rich set of visualizations supporting investigation of multi-user interaction at large display walls. GIAnT focuses on visualizing time periods, making it possible to gain overview-level insights quickly. The toolkit is designed to be extensible and features several carefully crafted visualizations: A novel timeline visualization shows movement in front of the wall over time, a wall visualization shows interactions on the wall and gaze data, and a floor visualization displays user positions. In addition, GIAnT shows the captured video stream along with basic statistics. We validate our tool by analyzing how it supports investigating major research topics and by practical use in evaluating a cooperative game. | false | false | [
"Ulrich von Zadow",
"Raimund Dachselt"
] | [] | [] | [] |
CHI | 2017 | GraphScape: A Model for Automated Reasoning about Visualization Similarity and Sequencing | 10.1145/3025453.3025866 | We present GraphScape, a directed graph model of the visualization design space that supports automated reasoning about visualization similarity and sequencing. Graph nodes represent grammar-based chart specifications and edges represent edits that transform one chart to another. We weight edges with an estimated cost of the difficulty of interpreting a target visualization given a source visualization. We contribute (1) a method for deriving transition costs via a partial ordering of edit operations and the solution of a resulting linear program, and (2) a global weighting term that rewards consistency across transition subsequences. In a controlled experiment, subjects rated visualization sequences covering a taxonomy of common transition types. In all but one case, GraphScape's highest-ranked suggestion aligns with subjects' top-rated sequences. Finally, we demonstrate applications of GraphScape to automatically sequence visualization presentations, elaborate transition paths between visualizations, and recommend design alternatives (e.g., to improve scalability while minimizing design changes). | false | false | [
"Younghoon Kim",
"Kanit Wongsuphasawat",
"Jessica Hullman",
"Jeffrey Heer"
] | [] | [] | [] |
CHI | 2017 | Group Spinner: Recognizing and Visualizing Learning in the Classroom for Reflection, Communication, and Planning | 10.1145/3025453.3025679 | Group Spinner is a digital visual tool intended to help teachers observe and reflect on children's collaborative technology-enhanced learning activities in the classroom. We describe the design of Group Spinner, which was informed by activity theory, previous work and teachers' focus group feedback. Based on a radar chart and a set of indicators, Group Spinner allows teachers to record in-class observations as to different aspects of group learning and learning behaviors, beyond the limited knowledge acquisition measures. Our exploratory study involved 6 teachers who used the tool for a total of 23 classes in subjects ranging from Maths and Geography to Sociology and Art. Semi-structured interviews with these teachers revealed a number of different uses of the tool. Depending on their experience and pedagogy, teachers considered Group Spinner to be a valuable tool to support awareness, reflection, communication, and/or planning. | false | false | [
"Ahmed Kharrufa",
"Sally Rix",
"Timur Osadchiy",
"Anne Preston",
"Patrick Olivier"
] | [] | [] | [] |
CHI | 2017 | Has Instagram Fundamentally Altered the 'Family Snapshot'? | 10.1145/3025453.3025928 | This paper considers how parents use the social media platform Instagram to facilitate the capture, curation and sharing of 'family snapshots'. Our work draws upon established cross-disciplinary literature relating to film photography and the composition of family albums in order to establish whether social media has changed the way parents visually present their families. We conducted a qualitative visual analysis of a sample of 4,000 photographs collected from Instagram using hashtags relating to children and parenting. We show that the style and composition of snapshots featuring children remains fundamentally unchanged and continues to be dominated by rather bland and idealised images of the happy family and the cute child. In addition, we find that the frequent taking and sharing of photographs via Instagram has inevitably resulted in a more mundane visual catalogue of daily life. We note a tension in the desire to use social media as a means to evidence good parenting, while trying to effectively manage the social identity of the child and finally, we note the reluctance of parents to use their own snapshots to portray family tension or disharmony, but their willingness to use externally generated content for this purpose. | false | false | [
"Effie Le Moignan",
"Shaun W. Lawson",
"Duncan A. Rowland",
"Jamie Mahoney",
"Pam Briggs"
] | [] | [] | [] |
CHI | 2017 | Improving Communication Between Pair Programmers Using Shared Gaze Awareness | 10.1145/3025453.3025573 | Remote collaboration can be more difficult than collocated collaboration for a number of reasons, including the inability to easily determine what your collaborator is looking at. This impedes a pair's ability to efficiently communicate about on-screen locations and makes synchronous coordination difficult. We designed a novel gaze visualization for remote pair programmers which shows where in the code their partner is currently looking, and changes color when they are looking at the same thing. Our design is unobtrusive, and transparently depicts the imprecision inherent in eye tracking technology. We evaluated our design with an experiment in which pair programmers worked remotely on code refactoring tasks. Our results show that with the visualization, pairs spent a greater proportion of their time concurrently looking at the same code locations. Pairs communicated using a larger ratio of implicit to explicit references, and were faster and more successful at responding to those references. | false | false | [
"Sarah D'Angelo",
"Andrew Begel"
] | [] | [] | [] |
CHI | 2017 | Is Two Enough?!: Studying Benefits, Barriers, and Biases of Multi-Tablet Use for Collaborative Visualization | 10.1145/3025453.3025537 | A sizable part of HCI research on cross-device interaction is driven by the vision of users conducting complex knowledge work seamlessly across multiple mobile devices. This is based on the Weiserian assumption that people will be inclined to distribute their work across multiple "pads" if such are available. We observed that this is not the reality today, even when devices were in abundance. We present a study with 24 participants in 12 dyads completing a collaborative visualization task with up to six tablets. They could choose between three different visualization types to answer questions about economic data. Tasks were designed to afford simultaneous use of tablets, either with linked or independent views. We found that users typically utilized only one tablet per user. A quantitative and qualitative analysis revealed a "legacy bias" that introduced barriers for using more tablets and reduced the overall benefit of multi-device visualization. | false | false | [
"Thomas Plank",
"Hans-Christian Jetter",
"Roman Rädle",
"Clemens Nylandsted Klokmose",
"Thomas Luger",
"Harald Reiterer"
] | [] | [] | [] |
CHI | 2017 | iSphere: Focus+Context Sphere Visualization for Interactive Large Graph Exploration | 10.1145/3025453.3025628 | Interactive exploration plays a critical role in large graph visualization. Existing techniques, such as zoom-and-pan on a 2D plane and hyperbolic browser facilitate large graph exploration by showing both the details of a focal area and its surrounding context that guides the exploration process. However, existing techniques for large graph exploration are limited in either providing too little context or presenting graphs with too much distortion. In this paper, we propose a novel focus+context technique, iSphere, to address the limitation. iSphere maps a large graph onto a Riemann Sphere that better preserves graph structures and shows greater context information. We conduct extensive experiment studies on different graph exploration tasks under various conditions. The results show that iSphere performs the best in task completion time compared to the baseline techniques in link and path exploration tasks. This research also contributes to understanding large graph exploration on small screens. | false | false | [
"Fan Du",
"Nan Cao 0001",
"Yu-Ru Lin",
"Panpan Xu",
"Hanghang Tong"
] | [] | [] | [] |
CHI | 2017 | Live Physiological Sensing and Visualization Ecosystems: An Activity Theory Analysis | 10.1145/3025453.3025987 | Wearable sensing poses new opportunities to enhance personal connections to learning and authentic scientific inquiry experiences. In our work, we leverage the body and physical action as an engaging platform for learning through live physiological sensing and visualization (LPSV). Prior research suggests the potential of this approach, but was limited to single-session evaluations in informal environments. In this paper, we examine LPSV tools in a classroom environment during a four-day deployment. To highlight the complex interconnections between space, teachers, curriculum, and tool use, we analyze our data through the lens of Activity Theory. Our findings show the importance of integrating model-based representations for supporting exploration and analytic representations for scaffolding scientific inquiry. Activity Theory highlights leveraging life-relevant connections available within a physical space and considering policies and norms related to learners' physical bodies. | false | false | [
"Tamara L. Clegg",
"Leyla Norooz",
"Seokbin Kang",
"Virginia Byrne",
"Monica Katzen",
"Rafael Valez",
"Angelisa C. Plane",
"Vanessa Oguamanam",
"Thomas Outing",
"Jason C. Yip 0001",
"Elizabeth M. Bonsignore",
"Jon Froehlich"
] | [] | [] | [] |
CHI | 2017 | Narratives in Crowdsourced Evaluation of Visualizations: A Double-Edged Sword? | 10.1145/3025453.3025870 | We explore the effects of providing task context when evaluating visualization tools using crowdsourcing. We gave crowdsource workers i) abstract information visualization tasks without any context, ii) tasks where we added semantics to the dataset, and iii) tasks with two types of backstory narratives: an analytic narrative and a decision-making narrative. Contrary to our expectations, we did not find evidence that adding data semantics increases accuracy, and further found that our backstory narratives can even decrease accuracy. Adding dataset semantics can however increase attention and provide subjective benefits in terms of confidence, perceived easiness, task enjoyability and perceived usefulness of the visualization. Nevertheless, our backstory narratives did not appear to provide additional subjective benefits. These preliminary findings suggest that narratives may have complex and unanticipated effects, calling for more studies in this area. | false | false | [
"Evanthia Dimara",
"Anastasia Bezerianos",
"Pierre Dragicevic"
] | [] | [] | [] |
CHI | 2017 | PathViewer: Visualizing Pathways through Student Data | 10.1145/3025453.3025819 | Analysis of student data is critical for improving education. In particular, educators need to understand what approaches their students are taking to solve a problem. However, identifying student strategies and discovering areas of confusion is difficult because an educator may not know what queries to ask or what patterns to look for in the data. In this paper, we present a visualization tool, PathViewer, to model the paths that students follow when solving a problem. PathViewer leverages ideas from flow diagrams and natural language processing to visualize the sequences of intermediate steps that students take. Using PathViewer, we analyzed how several students solved a Python assignment, discovering interesting and unexpected patterns. Our results suggest that PathViewer can allow educators to quickly identify areas of interest, drill down into specific areas, and identify student approaches to the problem as well as misconceptions they may have. | false | false | [
"Yiting Wang",
"Walker M. White",
"Erik Andersen 0001"
] | [] | [] | [] |
CHI | 2017 | PersaLog: Personalization of News Article Content | 10.1145/3025453.3025631 | Content personalization (automatically modifying text and multimedia features within articles based on the reader's individual features) is evolving as a new form of journalism. Informed by constraints articulated through a survey of journalists, we have implemented PersaLog, a novel system for creating personalized content (e.g., text and interactive visualizations). Because crafting, and validating, personalized content can be challenging to scale across articles (unlike feed personalization), we offer a simple Domain Specific Language (DSL), and editing environment, to support this task. PersaLog is particularly designed to support the personalization of existing text and visualizations. Our work provides guidelines for personalization as well as a system that allows for both subtle and dramatic personalization-driven content changes. We validate PersaLog using case and lab studies. | false | false | [
"Eytan Adar",
"Carolyn Gearig",
"Ayshwarya Balasubramanian",
"Jessica Hullman"
] | [] | [] | [] |
CHI | 2017 | Pressure-Based Gain Factor Control for Mobile 3D Interaction using Locally-Coupled Devices | 10.1145/3025453.3025890 | We present the design and evaluation of pressure-based interactive control of 3D navigation precision. Specifically, we examine the control of gain factors in tangible 3D interactions using locally-coupled mobile devices. By focusing on pressure as a separate input channel we can adjust gain factors independently from other input modalities used in 3D navigation, in particular for the exploration of 3D visualisations. We present two experiments. First, we determined that people strongly preferred higher pressures to be mapped to higher gain factors. Using this mapping, we compared pressure with rate control, velocity control, and slider-based control in a second study. Our results show that pressure-based gain control allows people to be more precise in the same amount of time compared to established input modalities. Pressure-based control was also clearly preferred by our participants. In summary, we demonstrate that pressure facilitates effective and efficient precision control for mobile 3D navigation. | false | false | [
"Lonni Besançon",
"Mehdi Ammi",
"Tobias Isenberg 0001"
] | [] | [] | [] |
CHI | 2017 | Regression by Eye: Estimating Trends in Bivariate Visualizations | 10.1145/3025453.3025922 | Observing trends and predicting future values are common tasks for viewers of bivariate data visualizations. As many charts do not explicitly include trend lines or related statistical summaries, viewers often visually estimate trends directly from a plot. How reliable are the inferences viewers draw when performing such regression by eye? Do particular visualization designs or data features bias trend perception? We present a series of crowdsourced experiments that assess the accuracy of trends estimated using regression by eye across a variety of bivariate visualizations, and examine potential sources of bias in these estimations. We find that viewers accurately estimate trends in many standard visualizations of bivariate data, but that both visual features (e.g., "within-the-bar" bias) and data features (e.g., the presence of outliers) can result in visual estimates that systematically diverge from standard least-squares regression models. | false | false | [
"Michael Correll",
"Jeffrey Heer"
] | [] | [] | [] |
CHI | 2017 | Scalable Annotation of Fine-Grained Categories Without Experts | 10.1145/3025453.3025930 | We present a crowdsourcing workflow to collect image annotations for visually similar synthetic categories without requiring experts. In animals, there is a direct link between taxonomy and visual similarity: e.g. a collie (type of dog) looks more similar to other collies (e.g. smooth collie) than a greyhound (another type of dog). However, in synthetic categories such as cars, objects with similar taxonomy can have very different appearance: e.g. a 2011 Ford F-150 Supercrew-HD looks the same as a 2011 Ford F-150 Supercrew-LL but very different from a 2011 Ford F-150 Supercrew-SVT. We introduce a graph based crowdsourcing algorithm to automatically group visually indistinguishable objects together. Using our workflow, we label 712,430 images by ~1,000 Amazon Mechanical Turk workers; resulting in the largest fine-grained visual dataset reported to date with 2,657 categories of cars annotated at 1/20th the cost of hiring experts. | false | false | [
"Timnit Gebru",
"Jonathan Krause",
"Jia Deng 0001",
"Li Fei-Fei 0001"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1709.02482v1",
"icon": "paper"
}
] |
CHI | 2017 | Showing People Behind Data: Does Anthropomorphizing Visualizations Elicit More Empathy for Human Rights Data? | 10.1145/3025453.3025512 | We investigate the impact of using anthropomorphized data graphics over standard charts on viewers' empathy for, and prosocial behavior toward suffering populations, in the context of human rights narratives. We present a series of experiments conducted on Amazon Mechanical Turk, in which we compare various forms of anthropomorphized data graphics, ranging from a single human figure that "fills up" to show proportional data, to separated groups of individual human beings, with a standard chart baseline. Each experiment uses two carefully crafted human rights data-driven stories to present the graphics. Contrary to our expectations, we consistently find that anthropomorphized data graphics and standard charts have very similar effects on empathy and prosocial behavior. | false | false | [
"Jeremy Boy",
"Anshul Vikram Pandey",
"John Emerson",
"Margaret Satterthwaite",
"Oded Nov",
"Enrico Bertini"
] | [] | [] | [] |
CHI | 2017 | Supporting Community Health Workers in India through Voice- and Web-Based Feedback | 10.1145/3025453.3025514 | Our research aims to support community health workers (CHWs) in low-resource settings by providing them with personalized information regarding their work. This information is delivered through a combination of voice- and web-based feedback that is derived from data already collected by CHWs. We describe the in situ participatory design approach used to create usable and appropriate feedback for low-literate CHWs and present usage data from a 12-month study with 71 CHWs in India. We show how the system supported and motivated CHWs, and how they used both the web- and voice-based systems, and each of the visualizations, for different reasons. We also show that the comparative feedback provided by the system introduced elements of competition that discouraged some CHWs while motivating others. Taken together, our findings suggest that providing personalized voice- and web-based feedback could be an effective way to support and motivate CHWs in low-resource settings. | false | false | [
"Brian DeRenzi",
"Nicola Dell",
"Jeremy Wacksman",
"Scott Lee",
"Neal Lesh"
] | [] | [] | [] |
CHI | 2017 | Supporting Making Fixations and the Effect on Gaze Gesture Performance | 10.1145/3025453.3025920 | Gaze gestures are deliberate patterns of eye movements that can be used to invoke commands. These are less reliant on accurate measurement and calibration than other gaze-based interaction techniques. These may be used with wearable displays fitted with eye tracking capability, or as part of an assistive technology. The visual stimuli in the information on the display that can act as fixation targets may or may not be sparse and will vary over time. The paper describes an experiment to investigate how the amount of information provided on a display to assist making fixations affects gaze gesture performance. The impact of providing visualization guides and small fixation targets on the time to complete gestures and error rates is presented. The number and durations of fixations made during gesture completion is used to explain differences in performance as a result of practice and direction of eye movement. | false | false | [
"Howell O. Istance",
"Aulikki I. Hyrskykari"
] | [] | [] | [] |
CHI | 2017 | The Catch(es) with Smart Home: Experiences of a Living Lab Field Study | 10.1145/3025453.3025799 | Smart home systems are becoming an integral feature of the emerging home IT market. Under this general term, products mainly address issues of security, energy savings and comfort. Comprehensive systems that cover several use cases are typically operated and managed via a unified dashboard. Unfortunately, research targeting user experience (UX) design for smart home interaction that spans several use cases or covers the entire system is scarce. Furthermore, existing comprehensive and user-centered long-term studies on challenges and needs throughout phases of information collection, installation and operation of smart home systems are technologically outdated. Our 18-month Living Lab study covering 14 households equipped with smart home technology provides insights on how to design for improving smart home appropriation. This includes a stronger sensibility for household practices during setup and configuration, flexible visualizations for evolving demands and an extension of smart home beyond the location. | false | false | [
"Timo Jakobi",
"Corinna Ogonowski",
"Nico Castelli",
"Gunnar Stevens",
"Volker Wulf"
] | [] | [] | [] |
CHI | 2,017 | TouchPivot: Blending WIMP & Post-WIMP Interfaces for Data Exploration on Tablet Devices | 10.1145/3025453.3025752 | Recent advancements in tablet technology pose a great opportunity for information visualization to expand its horizons beyond desktops. In this paper, we present TouchPivot, a novel interface that assists visual data exploration on tablet devices. With novices in mind, TouchPivot supports data transformations, such as pivoting and filtering, with simple pen and touch interactions, and facilitates understanding of the transformations through tight coupling between a data table and visualization. We bring in WIMP interfaces to TouchPivot, leveraging their familiarity and accessibility to novices. We report on a user study conducted to compare TouchPivot with two commercial interfaces, Tableau and Microsoft Excel's PivotTable. Our results show that novices not only answered data-driven questions faster, but also created a larger number of meaningful charts during freeform exploration with TouchPivot than others. Finally, we discuss the main hurdles novices encountered during our study and possible remedies for them. | false | false | [
"Jaemin Jo",
"Sehi L'Yi",
"Bongshin Lee",
"Jinwook Seo"
] | [] | [] | [] |
CHI | 2,017 | Trust, but Verify: Optimistic Visualizations of Approximate Queries for Exploring Big Data | 10.1145/3025453.3025456 | Analysts need interactive speed for exploratory analysis, but big data systems are often slow. With sampling, data systems can produce approximate answers fast enough for exploratory visualization, at the cost of accuracy and trust. We propose optimistic visualization, which approaches these issues from a user experience perspective. This method lets analysts explore approximate results interactively, and provides a way to detect and recover from errors later. Pangloss implements these ideas. We discuss design issues raised by optimistic visualization systems. We test this concept with five expert visualizers in a laboratory study and three case studies at Microsoft. Analysts reported that they felt more confident in their results, and used optimistic visualization to check that their preliminary results were correct. | false | false | [
"Dominik Moritz",
"Danyel Fisher",
"Bolin Ding",
"Chi Wang 0001"
] | [] | [] | [] |
CHI | 2,017 | Understanding Concept Maps: A Closer Look at How People Organise Ideas | 10.1145/3025453.3025977 | Research into creating visualisations that organise ideas into concise concept maps often focuses on implicit mathematical and statistical theories which are built around algorithmic efficacy or visual complexity. Although there are multiple techniques which attempt to mathematically optimise this multi-dimensional problem, it is still unknown how to create concept maps that are immediately understandable to people. In this paper, we present an in-depth qualitative study observing the behaviour and discussing the strategy used by non-expert participants to create, interact, update and communicate a concept map that represents a collection of research ideas. Our results show non-expert individuals create concept maps differently to visualisation algorithms. We found that our participants prioritised narrative, landmarks, abstraction, clarity, and simplicity. Finally, we derive design recommendations from our results which we hope will inspire future algorithms that automatically create more usable and compelling concept maps better suited to the natural behaviours and needs of users. | false | false | [
"Stefano Padilla",
"Thomas S. Methven",
"David A. Robb 0001",
"Mike J. Chantler"
] | [] | [] | [] |
CHI | 2,017 | Visualization Literacy at Elementary School | 10.1145/3025453.3025877 | This work advances our understanding of children's visualization literacy, and aims to improve it through a novel approach for teaching visualization at elementary school. We first contribute an analysis of data graphics and activities employed in grade K to 4 educational materials, and the results of a survey conducted with 16 elementary school teachers. We find that visualization education could benefit from integrating pedagogical strategies for teaching abstract concepts with established interactive visualization techniques. Building on these insights, we develop and study design principles for novel interactive teaching material aimed at increasing children's visualization literacy. We specifically contribute C'est La Vis, an online platform for teachers and students to respectively teach and learn about pictographs and bar charts, and report on our initial observations of its use in grades K and 2. | false | false | [
"Basak Alper",
"Nathalie Henry Riche",
"Fanny Chevalier",
"Jeremy Boy",
"T. Metin Sezgin"
] | [] | [] | [] |
CHI | 2,017 | Voyager 2: Augmenting Visual Analysis with Partial View Specifications | 10.1145/3025453.3025768 | Visual data analysis involves both open-ended and focused exploration. Manual chart specification tools support question answering, but are often tedious for early-stage exploration where systematic data coverage is needed. Visualization recommenders can encourage broad coverage, but irrelevant suggestions may distract users once they commit to specific questions. We present Voyager 2, a mixed-initiative system that blends manual and automated chart specification to help analysts engage in both open-ended exploration and targeted question answering. We contribute two partial specification interfaces: wildcards let users specify multiple charts in parallel, while related views suggest visualizations relevant to the currently specified chart. We present our interface design and applications of the CompassQL visualization query language to enable these interfaces. In a controlled study we find that Voyager 2 leads to increased data field coverage compared to a traditional specification tool, while still allowing analysts to flexibly drill-down and answer specific questions. | false | false | [
"Kanit Wongsuphasawat",
"Zening Qu",
"Dominik Moritz",
"Riley Chang",
"Felix Ouk",
"Anushka Anand",
"Jock D. Mackinlay",
"Bill Howe",
"Jeffrey Heer"
] | [] | [] | [] |
CHI | 2,017 | What Happened in my Home?: An End-User Development Approach for Smart Home Data Visualization | 10.1145/3025453.3025485 | Smart home systems change the way we experience the home. While there are established research fields within HCI for visualizing specific use cases of a smart home, studies targeting user demands on visualizations spanning across multiple use cases are rare. Especially, individual data-related demands pose a challenge for usable visualizations. To investigate potentials of an end-user development (EUD) approach for flexibly supporting such demands, we developed a smart home system featuring both pre-defined visualizations and a visualization creation tool. To evaluate our concept, we installed our prototype in 12 households as part of a Living Lab study. Results are based on three interview studies, a design workshop and system log data. We identified eight overarching interests in home data and show how participants used pre-defined visualizations to get an overview and the creation tool to not only address specific use cases but also to answer questions by creating temporary visualizations. | false | false | [
"Nico Castelli",
"Corinna Ogonowski",
"Timo Jakobi",
"Martin Stein",
"Gunnar Stevens",
"Volker Wulf"
] | [] | [] | [] |
CHI | 2,017 | Where No One Has Gone Before: A Meta-Dataset of the World's Largest Fanfiction Repository | 10.1145/3025453.3025720 | With its roots dating to popular television shows of the 1960s such as Star Trek, fanfiction has blossomed into an extremely widespread form of creative expression. The transition from printed zines to online fanfiction repositories has facilitated this growth in popularity, with millions of fans writing stories and adding daily to sites such as Archive Of Our Own, Fanfiction.net, FIMfiction.net, and many others. Enthusiasts are sharing their writing, reading stories written by others, and helping each other to grow as writers. Yet, this domain is often undervalued by society and understudied by researchers. To facilitate the study of this large but often marginalized community, we present a fully anonymized data release (via differential privacy) of the metadata from a large fanfiction site (to protect author privacy, story, profile, and review text is excluded, and only metadata is provided). We use visual analytics techniques to draw several intriguing insights from the data and show the potential for future research. We hope other researchers can use this data to explore further questions related to online fanfiction communities. | false | false | [
"Kodlee Yin",
"Cecilia R. Aragon",
"Sarah Evans",
"Katie Davis 0001"
] | [] | [] | [] |
VAST | 2,016 | A Grammar-based Approach for Modeling User Interactions and Generating Suggestions During the Data Exploration Process | 10.1109/TVCG.2016.2598471 | Despite the recent popularity of visual analytics focusing on big data, little is known about how to support users that use visualization techniques to explore multi-dimensional datasets and accomplish specific tasks. Our lack of models that can assist end-users during the data exploration process has made it challenging to learn from the user's interactive and analytical process. The ability to model how a user interacts with a specific visualization technique and what difficulties they face are paramount in supporting individuals with discovering new patterns within their complex datasets. This paper introduces the notion of visualization systems understanding and modeling user interactions with the intent of guiding a user through a task thereby enhancing visual data exploration. The challenges faced and the necessary future steps to take are discussed; and to provide a working example, a grammar-based model is presented that can learn from user interactions, determine the common patterns among a number of subjects using a K-Reversible algorithm, build a set of rules, and apply those rules in the form of suggestions to new users with the goal of guiding them along their visual analytic process. A formal evaluation study with 300 subjects was performed showing that our grammar-based model is effective at capturing the interactive process followed by users and that further research in this area has the potential to positively impact how users interact with a visualization system. | false | false | [
"Filip Dabek",
"Jesus J. Caban"
] | [] | [] | [] |
VAST | 2,016 | A Visual Analytics Approach for Categorical Joint Distribution Reconstruction from Marginal Projections | 10.1109/TVCG.2016.2598479 | Oftentimes multivariate data are not available as sets of equally multivariate tuples, but only as sets of projections into subspaces spanned by subsets of these attributes. For example, one may find data with five attributes stored in six tables of two attributes each, instead of a single table of five attributes. This prohibits the visualization of these data with standard high-dimensional methods, such as parallel coordinates or MDS, and there is hence the need to reconstruct the full multivariate (joint) distribution from these marginal ones. Most of the existing methods designed for this purpose use an iterative procedure to estimate the joint distribution. With insufficient marginal distributions and domain knowledge, they lead to results whose joint errors can be large. Moreover, enforcing smoothness for regularizations in the joint space is not applicable if the attributes are not numerical but categorical. We propose a visual analytics approach that integrates both anecdotal data and human experts to iteratively narrow down a large set of plausible solutions. The solution space is populated using a Monte Carlo procedure which uniformly samples the solution space. A level-of-detail high dimensional visualization system helps the user understand the patterns and the uncertainties. Constraints that narrow the solution space can then be added by the user interactively during the iterative exploration, and eventually a subset of solutions with narrow uncertainty intervals emerges. | false | false | [
"Cong Xie",
"Wen Zhong",
"Klaus Mueller 0001"
] | [
"HM"
] | [] | [] |
VAST | 2,016 | A Visual Analytics Approach for Understanding Reasons behind Snowballing and Comeback in MOBA Games | 10.1109/TVCG.2016.2598415 | To design a successful Multiplayer Online Battle Arena (MOBA) game, the ratio of snowballing and comeback occurrences to all matches played must be maintained at a certain level to ensure its fairness and engagement. Although it is easy to identify these two types of occurrences, game developers often find it difficult to determine their causes and triggers with so many game design choices and game parameters involved. In addition, the huge amounts of MOBA game data are often heterogeneous, multi-dimensional and highly dynamic in terms of space and time, which poses special challenges for analysts. In this paper, we present a visual analytics system to help game designers find key events and game parameters resulting in snowballing or comeback occurrences in MOBA game data. We follow a user-centered design process developing the system with game analysts and testing with real data of a trial version MOBA game from NetEase Inc. We apply novel visualization techniques in conjunction with well-established ones to depict the evolution of players' positions, status and the occurrences of events. Our system can reveal players' strategies and performance throughout a single match and suggest patterns, e.g., specific players' actions and game events, that have led to the final occurrences. We further demonstrate a workflow of leveraging human analyzed patterns to improve the scalability and generality of match data analysis. Finally, we validate the usability of our system by proving the identified patterns are representative in snowballing or comeback matches in a one-month-long MOBA tournament dataset. | false | false | [
"Quan Li",
"Peng Xu",
"Yeukyin Chan",
"Yun Wang 0012",
"Zhipeng Wang",
"Huamin Qu",
"Xiaojuan Ma"
] | [] | [] | [] |
VAST | 2,016 | An Analysis of Machine- and Human-Analytics in Classification | 10.1109/TVCG.2016.2598829 | In this work, we present a study that traces the technical and cognitive processes in two visual analytics applications to a common theoretic model of soft knowledge that may be added into a visual analytics process for constructing a decision-tree model. Both case studies involved the development of classification models based on the “bag of features” approach. Both compared a visual analytics approach using parallel coordinates with a machine-learning approach using information theory. Both found that the visual analytics approach had some advantages over the machine learning approach, especially when sparse datasets were used as the ground truth. We examine various possible factors that may have contributed to such advantages, and collect empirical evidence for supporting the observation and reasoning of these factors. We propose an information-theoretic model as a common theoretic basis to explain the phenomena exhibited in these two case studies. Together we provide interconnected empirical and theoretical evidence to support the usefulness of visual analytics. | false | false | [
"Gary K. L. Tam",
"Vivek Kothari",
"Min Chen 0001"
] | [
"BP"
] | [] | [] |
VAST | 2,016 | AnaFe: Visual Analytics of Image-derived Temporal Features Focusing on the Spleen | 10.1109/TVCG.2016.2598463 | We present a novel visualization framework, AnaFe, targeted at observing changes in the spleen over time through multiple image-derived features. Accurate monitoring of progressive changes is crucial for diseases that result in enlargement of the organ. Our system is comprised of multiple linked views combining visualization of temporal 3D organ data, related measurements, and features. Thus it enables the observation of progression and allows for simultaneous comparison within and between the subjects. AnaFe offers insights into the overall distribution of robustly extracted and reproducible quantitative imaging features and their changes within the population, and also enables detailed analysis of individual cases. It performs similarity comparison of temporal series of one subject to all other series in both sick and healthy groups. We demonstrate our system through two use case scenarios on a population of 189 spleen datasets from 68 subjects with various conditions observed over time. | false | false | [
"Ievgeniia Gutenko",
"Konstantin Dmitriev",
"Arie E. Kaufman",
"Matthew A. Barish"
] | [] | [] | [] |
VAST | 2,016 | Annotation Graphs: A Graph-Based Visualization for Meta-Analysis of Data Based on User-Authored Annotations | 10.1109/TVCG.2016.2598543 | User-authored annotations of data can support analysts in the activity of hypothesis generation and sensemaking, where it is not only critical to document key observations, but also to communicate insights between analysts. We present annotation graphs, a dynamic graph visualization that enables meta-analysis of data based on user-authored annotations. The annotation graph topology encodes annotation semantics, which describe the content of and relations between data selections, comments, and tags. We present a mixed-initiative approach to graph layout that integrates an analyst's manual manipulations with an automatic method based on similarity inferred from the annotation semantics. Various visual graph layout styles reveal different perspectives on the annotation semantics. Annotation graphs are implemented within C8, a system that supports authoring annotations during exploratory analysis of a dataset. We apply principles of Exploratory Sequential Data Analysis (ESDA) in designing C8, and further link these to an existing task typology in the visualization literature. We develop and evaluate the system through an iterative user-centered design process with three experts, situated in the domain of analyzing HCI experiment data. The results suggest that annotation graphs are effective as a method of visually extending user-authored annotations to data meta-analysis for discovery and organization of ideas. | false | false | [
"Jian Zhao 0010",
"Michael Glueck",
"Simon Breslav",
"Fanny Chevalier",
"Azam Khan"
] | [] | [] | [] |
VAST | 2,016 | AxiSketcher: Interactive Nonlinear Axis Mapping of Visualizations through User Drawings | 10.1109/TVCG.2016.2598446 | Visual analytics techniques help users explore high-dimensional data. However, it is often challenging for users to express their domain knowledge in order to steer the underlying data model, especially when they have little attribute-level knowledge. Furthermore, users' complex, high-level domain knowledge, compared to low-level attributes, posits even greater challenges. To overcome these challenges, we introduce a technique to interpret a user's drawings with an interactive, nonlinear axis mapping approach called AxiSketcher. This technique enables users to impose their domain knowledge on a visualization by allowing interaction with data entries rather than with data attributes. The proposed interaction is performed through directly sketching lines over the visualization. Using this technique, users can draw lines over selected data points, and the system forms the axes that represent a nonlinear, weighted combination of multidimensional attributes. In this paper, we describe our techniques in three areas: 1) the design space of sketching methods for eliciting users' nonlinear domain knowledge; 2) the underlying model that translates users' input, extracts patterns behind the selected data points, and results in nonlinear axes reflecting users' complex intent; and 3) the interactive visualization for viewing, assessing, and reconstructing the newly formed, nonlinear axes. | false | false | [
"Bum Chul Kwon",
"Hannah Kim",
"Emily Wall",
"Jaegul Choo",
"Haesun Park",
"Alex Endert"
] | [] | [] | [] |
VAST | 2,016 | Blockwise Human Brain Network Visual Comparison Using NodeTrix Representation | 10.1109/TVCG.2016.2598472 | Visually comparing human brain networks from multiple population groups serves as an important task in the field of brain connectomics. The commonly used brain network representation, consisting of nodes and edges, may not be able to reveal the most compelling network differences when the reconstructed networks are dense and homogeneous. In this paper, we leveraged the block information on the Region Of Interest (ROI) based brain networks and studied the problem of blockwise brain network visual comparison. An integrated visual analytics framework was proposed. In the first stage, a two-level ROI block hierarchy was detected by optimizing the anatomical structure and the predictive comparison performance simultaneously. In the second stage, the NodeTrix representation was adopted and customized to visualize the brain network with block information. We conducted controlled user experiments and case studies to evaluate our proposed solution. Results indicated that our visual analytics method outperformed the commonly used node-link graph and adjacency matrix design in the blockwise network comparison tasks. We have shown compelling findings from two real-world brain network data sets, which are consistent with the prior connectomics studies. | false | false | [
"Xinsong Yang",
"Lei Shi 0002",
"Madelaine Daianu",
"Hanghang Tong",
"Qingsong Liu",
"Paul M. Thompson"
] | [] | [] | [] |
VAST | 2,016 | C2A: Crowd consensus analytics for virtual colonoscopy | 10.1109/VAST.2016.7883508 | We present a medical crowdsourcing visual analytics platform called C2A to visualize, classify and filter crowdsourced clinical data. More specifically, C2A is used to build consensus on a clinical diagnosis by visualizing crowd responses and filtering out anomalous activity. Crowdsourcing medical applications have recently shown promise where the non-expert users (the crowd) were able to achieve accuracy similar to the medical experts. This has the potential to reduce interpretation/reading time and possibly improve accuracy by building a consensus on the findings beforehand and letting the medical experts make the final diagnosis. In this paper, we focus on a virtual colonoscopy (VC) application with the clinical technicians as our target users, and the radiologists acting as consultants and classifying segments as benign or malignant. In particular, C2A is used to analyze and explore crowd responses on video segments, created from fly-throughs in the virtual colon. C2A provides several interactive visualization components to build crowd consensus on video segments, to detect anomalies in the crowd data and in the VC video segments, and finally, to improve the non-expert user's work quality and performance by A/B testing for the optimal crowdsourcing platform and application-specific parameters. Case studies and domain experts feedback demonstrate the effectiveness of our framework in improving workers' output quality, the potential to reduce the radiologists' interpretation time, and hence, the potential to improve the traditional clinical workflow by marking the majority of the video segments as benign based on the crowd consensus. | false | false | [
"Ji Hwan Park",
"Saad Nadeem",
"Seyedkoosha Mirhosseini",
"Arie E. Kaufman"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1810.09012v1",
"icon": "paper"
}
] |
VAST | 2,016 | Characterizing Guidance in Visual Analytics | 10.1109/TVCG.2016.2598468 | Visual analytics (VA) is typically applied in scenarios where complex data has to be analyzed. Unfortunately, there is a natural correlation between the complexity of the data and the complexity of the tools to study them. An adverse effect of complicated tools is that analytical goals are more difficult to reach. Therefore, it makes sense to consider methods that guide or assist users in the visual analysis process. Several such methods already exist in the literature, yet we are lacking a general model that facilitates in-depth reasoning about guidance. We establish such a model by extending van Wijk's model of visualization with the fundamental components of guidance. Guidance is defined as a process that gradually narrows the gap that hinders effective continuation of the data analysis. We describe diverse inputs based on which guidance can be generated and discuss different degrees of guidance and means to incorporate guidance into VA tools. We use existing guidance approaches from the literature to illustrate the various aspects of our model. As a conclusion, we identify research challenges and suggest directions for future studies. With our work we take a necessary step to pave the way to a systematic development of guidance techniques that effectively support users in the context of VA. | false | false | [
"Davide Ceneda",
"Theresia Gschwandtner",
"Thorsten May",
"Silvia Miksch",
"Hans-Jörg Schulz",
"Marc Streit",
"Christian Tominski"
] | [] | [] | [] |
VAST | 2,016 | D-Map: Visual Analysis of Ego-centric Information Diffusion Patterns in Social Media | 10.1109/VAST.2016.7883510 | Popular social media platforms could rapidly propagate vital information over social networks among a significant number of people. In this work we present D-Map (Diffusion Map), a novel visualization method to support exploration and analysis of social behaviors during such information diffusion and propagation on typical social media through a map metaphor. In D-Map, users who participated in reposting (i.e., resending a message initially posted by others) one central user's posts (i.e., a series of original tweets) are collected and mapped to a hexagonal grid based on their behavior similarities and in chronological order of the repostings. With additional interaction and linking, D-Map is capable of providing visual portraits of the influential users and describing their social behaviors. A comprehensive visual analysis system is developed to support interactive exploration with D-Map. We evaluate our work with real world social media data and find interesting patterns among users. Key players, important information diffusion paths, and interactions among social communities can be identified. | false | false | [
"Siming Chen 0001",
"Shuai Chen 0001",
"Zhenhuang Wang",
"Christy Jie Liang",
"Xiaoru Yuan",
"Nan Cao",
"Yadong Wu"
] | [] | [] | [] |
VAST | 2,016 | Designing Progressive and Interactive Analytics Processes for High-Dimensional Data Analysis | 10.1109/TVCG.2016.2598470 | In interactive data analysis processes, the dialogue between the human and the computer is the enabling mechanism that can lead to actionable observations about the phenomena being investigated. It is of paramount importance that this dialogue is not interrupted by slow computational mechanisms that do not consider any known temporal human-computer interaction characteristics that prioritize the perceptual and cognitive capabilities of the users. In cases where the analysis involves an integrated computational method, for instance to reduce the dimensionality of the data or to perform clustering, such non-optimal processes are often likely. To remedy this, progressive computations, where results are iteratively improved, are getting increasing interest in visual analytics. In this paper, we present techniques and design considerations to incorporate progressive methods within interactive analysis processes that involve high-dimensional data. We define methodologies to facilitate processes that adhere to the perceptual characteristics of users and describe how online algorithms can be incorporated within these. A set of design recommendations and according methods to support analysts in accomplishing high-dimensional data analysis tasks are then presented. Our arguments and decisions here are informed by observations gathered over a series of analysis sessions with analysts from finance. We document observations and recommendations from this study and present evidence on how our approach contributes to the efficiency and productivity of interactive visual analysis sessions involving high-dimensional data. | false | false | [
"Cagatay Turkay",
"Erdem Kaya",
"Selim Balcisoy",
"Helwig Hauser"
] | [] | [] | [] |
VAST | 2,016 | DimScanner: A Relation-based Visual Exploration Approach Towards Data Dimension Inspection | 10.1109/VAST.2016.7883514 | Exploring multi-dimensional datasets can be cumbersome if data analysts have little knowledge about the data. Various dimension relation inspection tools and dimension exploration tools have been proposed for efficient data examining and understanding. However, the needed workload varies largely with respect to data complexity and user expertise, which can only be reduced with rich background knowledge over the data. In this paper we address the workload challenge with a data structuring and exploration scheme that affords dimension relation detection and that serves as the background knowledge for further investigation. We contribute a novel data structuring scheme that leverages an information-theoretic view structuring algorithm to uncover information-aware relations among different data views, and thereby discloses redundancy and other relation patterns among dimensions. The integrated system, DimScanner, empowers analysts with rich user controls and assistance widgets to interactively detect the relations of multi-dimensional data. | false | false | [
"Jing Xia",
"Wei Chen 0001",
"Yumeng Hou",
"Wanqi Hu",
"Xinxin Huang",
"David S. Ebert"
] | [] | [] | [] |
VAST | 2,016 | DocuCompass: Effective exploration of document landscapes | 10.1109/VAST.2016.7883507 | The creation of interactive visualization to analyze text documents has gained an impressive momentum in recent years. This is not surprising in the light of massive and still increasing amounts of available digitized texts. Websites, social media, news wire, and digital libraries are just few examples of the diverse text sources whose visual analysis and exploration offers new opportunities to effectively mine and manage the information and knowledge hidden within them. A popular visualization method for large text collections is to represent each document by a glyph in 2D space. These landscapes can be the result of optimizing pairwise distances in 2D to represent document similarities, or they are provided directly as meta data, such as geo-locations. For well-defined information needs, suitable interaction methods are available for these spatializations. However, free exploration and navigation on a level of abstraction between a labeled document spatialization and reading single documents is largely unsupported. As a result, vital foraging steps for task-tailored actions, such as selecting subgroups of documents for detailed inspection, or subsequent sense-making steps are hampered. To fill in this gap, we propose DocuCompass, a focus+context approach based on the lens metaphor. It comprises multiple methods to characterize local groups of documents, and to efficiently guide exploration based on users' requirements. DocuCompass thus allows for effective interactive exploration of document landscapes without disrupting the mental map of users by changing the layout itself. We discuss the suitability of multiple navigation and characterization methods for different spatializations and texts. Finally, we provide insights generated through user feedback and discuss the effectiveness of our approach. | false | false | [
"Florian Heimerl",
"Markus John",
"Qi Han 0006",
"Steffen Koch 0001",
"Thomas Ertl"
] | [] | [] | [] |
VAST | 2,016 | DropoutSeer: Visualizing learning patterns in Massive Open Online Courses for dropout reasoning and prediction | 10.1109/VAST.2016.7883517 | Aiming at massive participation and open access education, Massive Open Online Courses (MOOCs) have attracted millions of learners over the past few years. However, the high dropout rate of learners is considered to be one of the most crucial factors that may hinder the development of MOOCs. To tackle this problem, statistical models have been developed to predict dropout behavior based on learner activity logs. Although predictive models can foresee the dropout behavior, it is still difficult for users to understand the reasons behind the predicted results and further design interventions to prevent dropout. In addition, with a better understanding of dropout, researchers in the area of predictive modeling in turn can improve the models. In this paper, we introduce DropoutSeer, a visual analytics system which not only helps instructors and education experts understand the reasons for dropout, but also allows researchers to identify crucial features which can further improve the performance of the models. Both the heterogeneous data extracted from three different kinds of learner activity logs (i.e., clickstream, forum posts and assignment records) and the predicted results are visualized in the proposed system. Case studies and expert interviews have been conducted to demonstrate the usefulness and effectiveness of DropoutSeer. | false | false | [
"Yuanzhe Chen",
"Qing Chen 0001",
"Mingqian Zhao",
"Sebastien Boyer",
"Kalyan Veeramachaneni",
"Huamin Qu"
] | [] | [] | [] |
VAST | 2,016 | EventAction: Visual analytics for temporal event sequence recommendation | 10.1109/VAST.2016.7883512 | Recommender systems are being widely used to assist people in making decisions, for example, recommending films to watch or books to buy. Despite their ubiquity, the problem of presenting the recommendations of temporal event sequences has not been studied. We propose EventAction, which, to our knowledge, is the first attempt at a prescriptive analytics interface designed to present and explain recommendations of temporal event sequences. EventAction provides a visual analytics approach to (1) identify similar records, (2) explore potential outcomes, (3) review recommended temporal event sequences that might help achieve the users' goals, and (4) interactively assist users as they define a personalized action plan associated with a probability of success. Following the design study framework, we designed and deployed EventAction in the context of student advising and reported on the evaluation with a student review manager and three graduate students. | false | false | [
"Fan Du",
"Catherine Plaisant",
"Neil Spring",
"Ben Shneiderman"
] | [] | [] | [] |
VAST | 2,016 | Familiarity Vs Trust: A Comparative Study of Domain Scientists' Trust in Visual Analytics and Conventional Analysis Methods | 10.1109/TVCG.2016.2598544 | Combining interactive visualization with automated analytical methods like statistics and data mining facilitates data-driven discovery. These visual analytic methods are beginning to be instantiated within mixed-initiative systems, where humans and machines collaboratively influence evidence-gathering and decision-making. But an open research question is that, when domain experts analyze their data, can they completely trust the outputs and operations on the machine-side? Visualization potentially leads to a transparent analysis process, but do domain experts always trust what they see? To address these questions, we present results from the design and evaluation of a mixed-initiative, visual analytics system for biologists, focusing on analyzing the relationships between familiarity of an analysis medium and domain experts' trust. We propose a trust-augmented design of the visual analytics system, that explicitly takes into account domain-specific tasks, conventions, and preferences. For evaluating the system, we present the results of a controlled user study with 34 biologists where we compare the variation of the level of trust across conventional and visual analytic mediums and explore the influence of familiarity and task complexity on trust. We find that despite being unfamiliar with a visual analytic medium, scientists seem to have an average level of trust that is comparable with the same in conventional analysis medium. In fact, for complex sense-making tasks, we find that the visual analytic system is able to inspire greater trust than other mediums. We summarize the implications of our findings with directions for future research on trustworthiness of visual analytic systems. | false | false | [
"Aritra Dasgupta",
"Joon-Yong Lee",
"Ryan Wilson",
"Robert A. Lafrance",
"Nick Cramer",
"Kristin A. Cook",
"Samuel H. Payne"
] | [] | [] | [] |
VAST | 2,016 | GazeDx: Interactive Visual Analytics Framework for Comparative Gaze Analysis with Volumetric Medical Images | 10.1109/TVCG.2016.2598796 | We present an interactive visual analytics framework, GazeDx (abbr. of GazeDiagnosis), for the comparative analysis of gaze data from multiple readers examining volumetric images while integrating important contextual information with the gaze data. Gaze pattern comparison is essential to understanding how radiologists examine medical images, and to identifying factors influencing the examination. Most prior work depended upon comparisons with manually juxtaposed static images of gaze tracking results. Comparative gaze analysis with volumetric images is more challenging due to the additional cognitive load on 3D perception. A recent study proposed a visualization design based on direct volume rendering (DVR) for visualizing gaze patterns in volumetric images; however, effective and comprehensive gaze pattern comparison is still challenging due to a lack of interactive visualization tools for comparative gaze analysis. We take the challenge with GazeDx while integrating crucial contextual information such as pupil size and windowing into the analysis process for more in-depth and ecologically valid findings. Among the interactive visualization components in GazeDx, a context-embedded interactive scatterplot is especially designed to help users examine abstract gaze data in diverse contexts by embedding medical imaging representations well known to radiologists in it. We present the results from two case studies with two experienced radiologists, where they compared the gaze patterns of 14 radiologists reading two patients' volumetric CT images. | false | false | [
"Hyunjoo Song",
"Jeongjin Lee",
"Tae Jung Kim",
"Kyoung Ho Lee",
"Bo Hyoung Kim",
"Jinwook Seo"
] | [] | [] | [] |
VAST | 2,016 | How ideas flow across multiple social groups | 10.1109/VAST.2016.7883511 | Tracking how correlated ideas flow within and across multiple social groups facilitates the understanding of the transfer of information, opinions, and thoughts on social media. In this paper, we present IdeaFlow, a visual analytics system for analyzing the lead-lag changes within and across pre-defined social groups regarding a specific set of correlated ideas, each of which is described by a set of words. To model idea flows accurately, we develop a random-walk-based correlation model and integrate it with Bayesian conditional cointegration and a tensor-based technique. To convey complex lead-lag relationships over time, IdeaFlow combines the strengths of a bubble tree, a flow map, and a timeline. In particular, we develop a Voronoi-treemap-based bubble tree to help users get an overview of a set of ideas quickly. A correlated-clustering-based layout algorithm is used to simultaneously generate multiple flow maps with less ambiguity. We also introduce a focus+context timeline to explore huge amounts of temporal data at different levels of time granularity. Quantitative evaluation and case studies demonstrate the accuracy and effectiveness of IdeaFlow. | false | false | [
"Xiting Wang",
"Shixia Liu",
"Yang Chen",
"Tai-Quan Peng",
"Jing Su",
"Jing Yang",
"Baining Guo"
] | [] | [] | [] |
VAST | 2,016 | Magnostics: Image-Based Search of Interesting Matrix Views for Guided Network Exploration | 10.1109/TVCG.2016.2598467 | In this work we address the problem of retrieving potentially interesting matrix views to support the exploration of networks. We introduce Matrix Diagnostics (or Magnostics), following in spirit related approaches for rating and ranking other visualization techniques, such as Scagnostics for scatter plots. Our approach ranks matrix views according to the appearance of specific visual patterns, such as blocks and lines, indicating the existence of topological motifs in the data, such as clusters, bi-graphs, or central nodes. Magnostics can be used to analyze, query, or search for visually similar matrices in large collections, or to assess the quality of matrix reordering algorithms. While many feature descriptors for image analysis exist, there is no evidence of how they perform for detecting patterns in matrices. In order to make an informed choice of feature descriptors for matrix diagnostics, we evaluate 30 feature descriptors (27 existing ones and three new descriptors that we designed specifically for Magnostics) with respect to four criteria: pattern response, pattern variability, pattern sensibility, and pattern discrimination. We conclude with an informed set of six descriptors as most appropriate for Magnostics and demonstrate their application in two scenarios: exploring a large collection of matrices and analyzing temporal networks. | false | false | [
"Michael Behrisch 0001",
"Benjamin Bach",
"Michael Blumenschein",
"Michael Delz",
"Laura von Rüden",
"Jean-Daniel Fekete",
"Tobias Schreck"
] | [] | [] | [] |
VAST | 2,016 | Multi-Resolution Climate Ensemble Parameter Analysis with Nested Parallel Coordinates Plots | 10.1109/TVCG.2016.2598830 | Due to the uncertain nature of weather prediction, climate simulations are usually performed multiple times with different spatial resolutions. The outputs of simulations are multi-resolution spatial temporal ensembles. Each simulation run uses a unique set of values for multiple convective parameters. Distinct parameter settings from different simulation runs in different resolutions constitute a multi-resolution high-dimensional parameter space. Understanding the correlation between the different convective parameters, and establishing a connection between the parameter settings and the ensemble outputs are crucial to domain scientists. The multi-resolution high-dimensional parameter space, however, presents a unique challenge to the existing correlation visualization techniques. We present Nested Parallel Coordinates Plot (NPCP), a new type of parallel coordinates plots that enables visualization of intra-resolution and inter-resolution parameter correlations. With flexible user control, NPCP integrates superimposition, juxtaposition and explicit encodings in a single view for comparative data visualization and analysis. We develop an integrated visual analytics system to help domain scientists understand the connection between multi-resolution convective parameters and the large spatial temporal ensembles. Our system presents intricate climate ensembles with a comprehensive overview and on-demand geographic details. We demonstrate NPCP, along with the climate ensemble visualization system, based on real-world use-cases from our collaborators in computational and predictive science. | false | false | [
"Junpeng Wang",
"Xiaotong Liu",
"Han-Wei Shen",
"Guang Lin"
] | [] | [] | [] |
VAST | 2,016 | NameClarifier: A Visual Analytics System for Author Name Disambiguation | 10.1109/TVCG.2016.2598465 | In this paper, we present a novel visual analytics system called NameClarifier to interactively disambiguate author names in publications by keeping humans in the loop. Specifically, NameClarifier quantifies and visualizes the similarities between ambiguous names and those that have been confirmed in digital libraries. The similarities are calculated using three key factors, namely, co-authorships, publication venues, and temporal information. Our system estimates all possible allocations, and then provides visual cues to users to help them validate every ambiguous case. By looping users in the disambiguation process, our system can achieve more reliable results than general data mining models for highly ambiguous cases. In addition, once an ambiguous case is resolved, the result is instantly added back to our system and serves as additional cues for all the remaining unidentified names. In this way, we open up the black box in traditional disambiguation processes, and help intuitively and comprehensively explain why the corresponding classifications should hold. We conducted two use cases and an expert review to demonstrate the effectiveness of NameClarifier. | false | false | [
"Qiaomu Shen",
"Tongshuang Wu",
"Haiyan Yang",
"Yanhong Wu",
"Huamin Qu",
"Weiwei Cui"
] | [] | [] | [] |
VAST | 2,016 | Patterns and Sequences: Interactive Exploration of Clickstreams to Understand Common Visitor Paths | 10.1109/TVCG.2016.2598797 | Modern web clickstream data consists of long, high-dimensional sequences of multivariate events, making it difficult to analyze. Following the overarching principle that the visual interface should provide information about the dataset at multiple levels of granularity and allow users to easily navigate across these levels, we identify four levels of granularity in clickstream analysis: patterns, segments, sequences and events. We present an analytic pipeline consisting of three stages: pattern mining, pattern pruning and coordinated exploration between patterns and sequences. Based on this approach, we discuss properties of maximal sequential patterns, propose methods to reduce the number of patterns and describe design considerations for visualizing the extracted sequential patterns and the corresponding raw sequences. We demonstrate the viability of our approach through an analysis scenario and discuss the strengths and limitations of the methods based on user feedback. | false | false | [
"Zhicheng Liu",
"Yang Wang",
"Mira Dontcheva",
"Matthew Hoffman 0001",
"Seth Walker",
"Alan Wilson 0004"
] | [] | [] | [] |
VAST | 2,016 | PhenoStacks: Cross-Sectional Cohort Phenotype Comparison Visualizations | 10.1109/TVCG.2016.2598469 | Cross-sectional phenotype studies are used by genetics researchers to better understand how phenotypes vary across patients with genetic diseases, both within and between cohorts. Analyses within cohorts identify patterns between phenotypes and patients (e.g., co-occurrence) and isolate special cases (e.g., potential outliers). Comparing the variation of phenotypes between two cohorts can help distinguish how different factors affect disease manifestation (e.g., causal genes, age of onset, etc.). PhenoStacks is a novel visual analytics tool that supports the exploration of phenotype variation within and between cross-sectional patient cohorts. By leveraging the semantic hierarchy of the Human Phenotype Ontology, phenotypes are presented in context, can be grouped and clustered, and are summarized via overviews to support the exploration of phenotype distributions. The design of PhenoStacks was motivated by formative interviews with genetics researchers: we distil high-level tasks, present an algorithm for simplifying ontology topologies for visualization, and report the results of a deployment evaluation with four expert genetics researchers. The results suggest that PhenoStacks can help identify phenotype patterns, investigate data quality issues, and inform data collection design. | false | false | [
"Michael Glueck",
"Alina Gvozdik",
"Fanny Chevalier",
"Azam Khan",
"Michael Brudno",
"Daniel J. Wigdor"
] | [] | [] | [] |
VAST | 2,016 | PorosityAnalyzer: Visual Analysis and Evaluation of Segmentation Pipelines to Determine the Porosity in Fiber-Reinforced Polymers | 10.1109/VAST.2016.7883516 | In this paper we present PorosityAnalyzer, a novel tool for detailed evaluation and visual analysis of pore segmentation pipelines to determine the porosity in fiber-reinforced polymers (FRPs). The presented tool consists of two modules: the computation module and the analysis module. The computation module enables a convenient setup and execution of distributed off-line-computations on industrial 3D X-ray computed tomography datasets. It allows the user to assemble individual segmentation pipelines in the form of single pipeline steps, and to specify the parameter ranges as well as the sampling of the parameter-space of each pipeline segment. The result of a single segmentation run consists of the input parameters, the calculated 3D binary-segmentation mask, the resulting porosity value, and other derived results (e.g., segmentation pipeline run-time). The analysis module presents the data at different levels of detail by drill-down filtering in order to determine accurate and robust segmentation pipelines. Overview visualizations allow to initially compare and evaluate the segmentation pipelines. With a scatter plot matrix (SPLOM), the segmentation pipelines are examined in more detail based on their input and output parameters. Individual segmentation-pipeline runs are selected in the SPLOM and visually examined and compared in 2D slice views and 3D renderings by using aggregated segmentation masks and statistical contour renderings. PorosityAnalyzer has been thoroughly evaluated with the help of twelve domain experts. Two case studies demonstrate the applicability of our proposed concepts and visualization techniques, and show that our tool helps domain experts to gain new insights and improve their workflow efficiency. | false | false | [
"Johannes Weissenböck",
"Artem Amirkhanov",
"M. Eduard Gröller",
"Johann Kastner",
"Christoph Heinzl"
] | [] | [] | [] |
VAST | 2,016 | SemanticTraj: A New Approach to Interacting with Massive Taxi Trajectories | 10.1109/TVCG.2016.2598416 | Massive taxi trajectory data is exploited for knowledge discovery in transportation and urban planning. Existing tools typically require users to select and brush geospatial regions on a map when retrieving and exploring taxi trajectories and passenger trips. To answer seemingly simple questions such as “What were the taxi trips starting from Main Street and ending at Wall Street in the morning?” or “Where are the taxis arriving at the Art Museum at noon typically coming from?”, tedious and time-consuming interactions are usually needed since the numeric GPS points of trajectories are not directly linked to keywords such as “Main Street”, “Wall Street”, and “Art Museum”. In this paper, we present SemanticTraj, a new method for managing and visualizing taxi trajectory data in an intuitive, semantically rich, and efficient manner. With SemanticTraj, domain and public users can find answers to the aforementioned questions easily through direct queries based on the terms. They can also interactively explore the retrieved data in visualizations enhanced by semantic information of the trajectories and trips. In particular, taxi trajectories are converted into taxi documents through a textualization transformation process. This process maps GPS points into a series of street/POI names and pick-up/drop-off locations. It also converts vehicle speeds into user-defined descriptive terms. Then, a corpus of taxi documents is formed and indexed to enable flexible semantic queries over a text search engine. Semantic labels and meta-summaries of the results are integrated with a set of visualizations in a SemanticTraj prototype, which helps users study taxi trajectories quickly and easily. A set of usage scenarios are presented to show the usability of the system. We also collected feedback from domain experts and conducted a preliminary user study to evaluate the visual system. | false | false | [
"Shamal Al-Dohuki",
"Yingyu Wu",
"Farah Kamw",
"Jing Yang 0001",
"Xin Li",
"Ye Zhao 0003",
"Xinyue Ye",
"Wei Chen 0001",
"Chao Ma 0023",
"Fei Wang 0016"
] | [] | [] | [] |
VAST | 2,016 | SenseMap: Supporting browser-based online sensemaking through analytic provenance | 10.1109/VAST.2016.7883515 | Sensemaking is described as the process in which people collect, organize and create representations of information, all centered around some problem they need to understand. People often get lost when solving complicated tasks using big datasets over long periods of exploration and analysis. They may forget what they have done, are unaware of where they are in the context of the overall task, and are unsure where to continue. In this paper, we introduce a tool, SenseMap, to address these issues in the context of browser-based online sensemaking. We conducted a semi-structured interview with nine participants to explore their behaviors in online sensemaking with existing browser functionality. A simplified sensemaking model based on Pirolli and Card's model is derived to better represent the behaviors we found: users iteratively collect information sources relevant to the task, curate them in a way that makes sense, and finally communicate their findings to others. SenseMap automatically captures provenance of user sensemaking actions and provides multi-linked views to visualize the collected information and enable users to curate and communicate their findings. To explore how SenseMap is used, we conducted a user study in a naturalistic work setting with five participants completing the same sensemaking task related to their daily work activities. All participants found the visual representation and interaction of the tool intuitive to use. Three of them engaged with the tool and produced successful outcomes. It helped them to organize information sources, to quickly find and navigate to the sources they wanted, and to effectively communicate their findings. | false | false | [
"Phong Hai Nguyen",
"Kai Xu 0003",
"Andy Bardill",
"Betul Salman",
"Kate Herd",
"B. L. William Wong"
] | [] | [] | [] |
VAST | 2,016 | Shape Grammar Extraction for Efficient Query-by-Sketch Pattern Matching in Long Time Series | 10.1109/VAST.2016.7883518 | Long time-series, involving thousands or even millions of time steps, are common in many application domains but remain very difficult to explore interactively. Often the analytical task in such data is to identify specific patterns, but this is a very complex and computationally difficult problem and so focusing the search in order to only identify interesting patterns is a common solution. We propose an efficient method for exploring user-sketched patterns, incorporating the domain expert's knowledge, in time series data through a shape grammar based approach. The shape grammar is extracted from the time series by considering the data as a combination of basic elementary shapes positioned across different amplitudes. We represent these basic shapes using a ratio value, perform binning on ratio values and apply a symbolic approximation. Our proposed method for pattern matching is amplitude-, scale- and translation-invariant and, since the pattern search and pattern constraint relaxation happen at the symbolic level, is very efficient permitting its use in a real-time/online system. We demonstrate the effectiveness of our method in a case study on stock market data although it is applicable to any numeric time series data. | false | false | [
"Prithiviraj K. Muthumanickam",
"Katerina Vrotsou",
"Matthew Cooper 0001",
"Jimmy Johansson 0001"
] | [] | [] | [] |
VAST | 2,016 | SmartAdP: Visual Analytics of Large-scale Taxi Trajectories for Selecting Billboard Locations | 10.1109/TVCG.2016.2598432 | The problem of formulating solutions immediately and comparing them rapidly for billboard placements has plagued advertising planners for a long time, owing to the lack of efficient tools for in-depth analyses to make informed decisions. In this study, we attempt to employ visual analytics that combines the state-of-the-art mining and visualization techniques to tackle this problem using large-scale GPS trajectory data. In particular, we present SmartAdP, an interactive visual analytics system that deals with the two major challenges including finding good solutions in a huge solution space and comparing the solutions in a visual and intuitive manner. An interactive framework that integrates a novel visualization-driven data mining model enables advertising planners to effectively and efficiently formulate good candidate solutions. In addition, we propose a set of coupled visualizations: a solution view with metaphor-based glyphs to visualize the correlation between different solutions; a location view to display billboard locations in a compact manner; and a ranking view to present multi-typed rankings of the solutions. This system has been demonstrated using case studies with a real-world dataset and domain-expert interviews. Our approach can be adapted for other location selection problems such as selecting locations of retail stores or restaurants using trajectory data. | false | false | [
"Dongyu Liu",
"Di Weng",
"Yuhong Li",
"Jie Bao 0003",
"Yu Zheng 0004",
"Huamin Qu",
"Yingcai Wu"
] | [] | [] | [] |
VAST | 2,016 | SocialBrands: Visual analysis of public perceptions of brands on social media | 10.1109/VAST.2016.7883513 | Public perceptions of a brand are critical to its performance. While social media has demonstrated a huge potential to shape public perceptions of brands, existing tools are not intuitive and explanatory enough for domain users, as they fail to provide a comprehensive analysis framework for perceptions of brands. In this paper, we present SocialBrands, a novel visual analysis tool for brand managers to understand public perceptions of brands on social media. SocialBrands leverages the brand personality framework from the marketing literature and social computing approaches to compute the personality of brands from three driving factors (user imagery, employee imagery, and official announcements) on social media, and constructs an evidence network explaining the association between brand personality and driving factors. These computational results are then integrated with new interactive visualizations to help brand managers understand personality traits and their driving factors. We demonstrate the usefulness and effectiveness of SocialBrands through a series of user studies with brand managers in an enterprise context. Design lessons are also derived from our studies. | false | false | [
"Xiaotong Liu",
"Anbang Xu",
"Liang Gou",
"Haibin Liu",
"Rama Akkiraju",
"Han-Wei Shen"
] | [] | [] | [] |
VAST | 2,016 | Squares: Supporting Interactive Performance Analysis for Multiclass Classifiers | 10.1109/TVCG.2016.2598828 | Performance analysis is critical in applied machine learning because it influences the models practitioners produce. Current performance analysis tools suffer from issues including obscuring important characteristics of model behavior and dissociating performance from data. In this work, we present Squares, a performance visualization for multiclass classification problems. Squares supports estimating common performance metrics while displaying instance-level distribution information necessary for helping practitioners prioritize efforts and access data. Our controlled study shows that practitioners can assess performance significantly faster and more accurately with Squares than a confusion matrix, a common performance analysis tool in machine learning. | false | false | [
"Donghao Ren",
"Saleema Amershi",
"Bongshin Lee",
"Jina Suh",
"Jason D. Williams"
] | [] | [] | [] |
VAST | 2,016 | Supporting visual exploration for multiple users in large display environments | 10.1109/VAST.2016.7883506 | We present a design space exploration of interaction techniques for supporting multiple collaborators exploring data on a shared large display. Our proposed solution is based on users controlling individual lenses using both explicit gestures as well as proxemics: the spatial relations between people and physical artifacts such as their distance, orientation, and movement. We discuss different design considerations for implicit and explicit interactions through the lens, and evaluate the user experience to find a balance between the implicit and explicit interaction styles. Our findings indicate that users favor implicit interaction through proxemics for navigation and collaboration, but prefer using explicit mid-air gestures to perform actions that are perceived to be direct, such as terminating a lens composition. Based on these results, we propose a hybrid technique utilizing both proxemics and mid-air gestures, along with examples applying this technique to other datasets. Finally, we performed a usability evaluation of the hybrid technique and observed user performance improvements in the presence of both implicit and explicit interaction styles. | false | false | [
"Sriram Karthik Badam",
"Fereshteh Amini",
"Niklas Elmqvist",
"Pourang Irani"
] | [] | [] | [] |
VAST | 2,016 | TextTile: An Interactive Visualization Tool for Seamless Exploratory Analysis of Structured Data and Unstructured Text | 10.1109/TVCG.2016.2598447 | We describe TextTile, a data visualization tool for investigation of datasets and questions that require seamless and flexible analysis of structured data and unstructured text. TextTile is based on real-world data analysis problems gathered through our interaction with a number of domain experts and provides a general purpose solution to such problems. The system integrates a set of operations that can interchangeably be applied to the structured as well as to unstructured text part of the data to generate useful data summaries. Such summaries are then organized in visual tiles in a grid layout to allow their analysis and comparison. We validate TextTile with task analysis, use cases and a user study showing the system can be easily learned and proficiently used to carry out nontrivial tasks. | false | false | [
"Cristian Felix",
"Anshul Vikram Pandey",
"Enrico Bertini"
] | [] | [] | [] |
VAST | 2,016 | The DataSpace for HIV vaccine studies | 10.1109/VAST.2016.7883509 | The DataSpace for HIV vaccine studies is a discovery tool available on the web to hundreds of investigators. We designed it to help them better understand activity in the field and explore new ideas latent in completed research. The DataSpace harmonizes immunoassay results and study metadata so that a broader research community can pursue more flexible discovery than the typical centrally planned analyses. Insights from human-centered design and beta evaluation suggest strong potential for visual analytics that may also apply to other efforts in open science. The contribution of this paper is to elucidate key domain challenges and demonstrate an application that addresses them. We made several changes to familiar visualizations to support key tasks such as identifying and filtering to a cohort of interest, making meaningful comparisons of time series data from multiple studies that have different plans, and preserving analytic context when making data transformations and comparisons that would normally exclude some data. | false | false | [
"David McColgin",
"Paul Hoover",
"Mark Igra"
] | [] | [] | [] |
VAST | 2,016 | The semantics of sketch: Flexibility in visual query systems for time series data | 10.1109/VAST.2016.7883519 | Sketching allows analysts to specify complex and free-form patterns of interest. Visual query systems can make use of sketches to locate these patterns of interest in large datasets. However, sketching is ambiguous: the same drawing could represent a multitude of potential queries. In this work, we investigate these ambiguities as they apply to visual query systems for time series data. We define a class of “invariants” - the properties of a time series that the analyst wishes to ignore when performing a sketch-based query. We present the results of a crowd-sourced study, showing that these invariants are key components of how people rate the strength of match between sketch and target. We adapt a number of algorithms for time series matching to support invariants in sketches. Lastly, we present a web-deployed prototype sketch-based visual query system that relies on these invariants. We apply the prototype to data from finance, the digital humanities, and political science. | false | false | [
"Michael Correll",
"Michael Gleicher"
] | [] | [] | [] |
VAST | 2,016 | TopicLens: Efficient Multi-Level Visual Topic Exploration of Large-Scale Document Collections | 10.1109/TVCG.2016.2598445 | Topic modeling, which reveals underlying topics of a document corpus, has been actively adopted in visual analytics for large-scale document collections. However, due to its significant processing time and non-interactive nature, topic modeling has so far not been tightly integrated into a visual analytics workflow. Instead, most such systems are limited to utilizing a fixed, initial set of topics. Motivated by this gap in the literature, we propose a novel interaction technique called TopicLens that allows a user to dynamically explore data through a lens interface where topic modeling and the corresponding 2D embedding are efficiently computed on the fly. To support this interaction in real time while maintaining view consistency, we propose a novel efficient topic modeling method and a semi-supervised 2D embedding algorithm. Our work is based on improving state-of-the-art methods such as nonnegative matrix factorization and t-distributed stochastic neighbor embedding. Furthermore, we have built a web-based visual analytics system integrated with TopicLens. We use this system to measure the performance and the visualization quality of our proposed methods. We provide several scenarios showcasing the capability of TopicLens using real-world datasets. | false | false | [
"Minjeong Kim",
"Kyeongpil Kang",
"Deok Gun Park 0001",
"Jaegul Choo",
"Niklas Elmqvist"
] | [] | [] | [] |
VAST | 2,016 | Toward Theoretical Techniques for Measuring the Use of Human Effort in Visual Analytic Systems | 10.1109/TVCG.2016.2598460 | Visual analytic systems have long relied on user studies and standard datasets to demonstrate advances to the state of the art, as well as to illustrate the efficiency of solutions to domain-specific challenges. This approach has enabled some important comparisons between systems, but unfortunately the narrow scope required to facilitate these comparisons has prevented many of these lessons from being generalized to new areas. At the same time, advanced visual analytic systems have made increasing use of human-machine collaboration to solve problems not tractable by machine computation alone. To continue to make progress in modeling user tasks in these hybrid visual analytic systems, we must strive to gain insight into what makes certain tasks more complex than others. This will require the development of mechanisms for describing the balance to be struck between machine and human strengths with respect to analytical tasks and workload. In this paper, we argue for the necessity of theoretical tools for reasoning about such balance in visual analytic systems and demonstrate the utility of the Human Oracle Model for this purpose in the context of sensemaking in visual analytics. Additionally, we make use of the Human Oracle Model to guide the development of a new system through a case study in the domain of cybersecurity. | false | false | [
"R. Jordan Crouser",
"Lyndsey Franklin",
"Alex Endert",
"Kristin A. Cook"
] | [] | [] | [] |
VAST | 2,016 | Towards Better Analysis of Deep Convolutional Neural Networks | 10.1109/TVCG.2016.2598831 | Deep convolutional neural networks (CNNs) have achieved breakthrough performance in many pattern recognition tasks such as image classification. However, the development of high-quality deep models typically relies on a substantial amount of trial-and-error, as there is still no clear understanding of when and why a deep model works. In this paper, we present a visual analytics approach for better understanding, diagnosing, and refining deep CNNs. We formulate a deep CNN as a directed acyclic graph. Based on this formulation, a hybrid visualization is developed to disclose the multiple facets of each neuron and the interactions between them. In particular, we introduce a hierarchical rectangle packing algorithm and a matrix reordering algorithm to show the derived features of a neuron cluster. We also propose a biclustering-based edge bundling method to reduce visual clutter caused by a large number of connections between neurons. We evaluated our method on a set of CNNs and the results are generally favorable. | false | false | [
"Mengchen Liu",
"Jiaxin Shi",
"Zhen Li 0044",
"Chongxuan Li",
"Jun Zhu 0001",
"Shixia Liu"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1604.07043v3",
"icon": "paper"
}
] |
VAST | 2,016 | ViDX: Visual Diagnostics of Assembly Line Performance in Smart Factories | 10.1109/TVCG.2016.2598664 | Visual analytics plays a key role in the era of connected industry (or industry 4.0, industrial internet) as modern machines and assembly lines generate large amounts of data and effective visual exploration techniques are needed for troubleshooting, process optimization, and decision making. However, developing effective visual analytics solutions for this application domain is a challenging task due to the sheer volume and the complexity of the data collected in the manufacturing processes. We report the design and implementation of a comprehensive visual analytics system, ViDX. It supports both real-time tracking of assembly line performance and historical data exploration to identify inefficiencies, locate anomalies, and form hypotheses about their causes and effects. The system is designed based on a set of requirements gathered through discussions with the managers and operators from manufacturing sites. It features interlinked views displaying data at different levels of detail. In particular, we apply and extend the Marey's graph by introducing a time-aware outlier-preserving visual aggregation technique to support effective troubleshooting in manufacturing processes. We also introduce two novel interaction techniques, namely the quantiles brush and samples brush, for the users to interactively steer the outlier detection algorithms. We evaluate the system with example use cases and an in-depth user interview, both conducted together with the managers and operators from manufacturing plants. The result demonstrates its effectiveness and reports a successful pilot application of visual analytics for manufacturing in smart factories. | false | false | [
"Panpan Xu",
"Honghui Mei",
"Ren Liu",
"Wei Chen 0001"
] | [
"HM"
] | [] | [] |
VAST | 2,016 | VisFlow - Web-based Visualization Framework for Tabular Data with a Subset Flow Model | 10.1109/TVCG.2016.2598497 | Data flow systems allow the user to design a flow diagram that specifies the relations between system components which process, filter or visually present the data. Visualization systems may benefit from user-defined data flows as an analysis typically consists of rendering multiple plots on demand and performing different types of interactive queries across coordinated views. In this paper, we propose VisFlow, a web-based visualization framework for tabular data that employs a specific type of data flow model called the subset flow model. VisFlow focuses on interactive queries within the data flow, overcoming the limitation of interactivity from past computational data flow systems. In particular, VisFlow applies embedded visualizations and supports interactive selections, brushing and linking within a visualization-oriented data flow. The model requires all data transmitted by the flow to be a data item subset (i.e. groups of table rows) of some original input table, so that rendering properties can be assigned to the subset unambiguously for tracking and comparison. VisFlow features the analysis flexibility of a flow diagram, and at the same time reduces the diagram complexity and improves usability. We demonstrate the capability of VisFlow on two case studies with domain experts on real-world datasets showing that VisFlow is capable of accomplishing a considerable set of visualization and analysis tasks. The VisFlow system is available as open source on GitHub. | false | false | [
"Bowen Yu 0004",
"Cláudio T. Silva"
] | [] | [] | [] |
VAST | 2,016 | VisMatchmaker: Cooperation of the User and the Computer in Centralized Matching Adjustment | 10.1109/TVCG.2016.2599378 | Centralized matching is a ubiquitous resource allocation problem. In a centralized matching problem, each agent has a preference list ranking the other agents and a central planner is responsible for matching the agents manually or with an algorithm. While algorithms can find a matching which optimizes some performance metrics, they are used as a black box and preclude the central planner from applying his domain knowledge to find a matching which aligns better with the user tasks. Furthermore, the existing matching visualization techniques (i.e. bipartite graph and adjacency matrix) fail in helping the central planner understand the differences between matchings. In this paper, we present VisMatchmaker, a visualization system which allows the central planner to explore alternatives to an algorithm-generated matching. We identified three common tasks in the process of matching adjustment: problem detection, matching recommendation and matching evaluation. We classified matching comparison into three levels and designed visualization techniques for them, including the number line view and the stacked graph view. Two types of algorithmic support, namely direct assignment and range search, and their interactive operations are also provided to enable the user to apply his domain knowledge in matching adjustment. | false | false | [
"Po-Ming Law",
"Wenchao Wu",
"Yixian Zheng",
"Huamin Qu"
] | [] | [] | [] |
VAST | 2,016 | Visual analysis and coding of data-rich user behavior | 10.1109/VAST.2016.7883520 | Investigating user behavior involves abstracting low-level events to higher-level concepts. This requires an analyst to study individual user activities, assign codes which categorize behavior, and develop a consistent classification scheme. To better support this reasoning process of an analyst, we suggest a novel visual analytics approach which integrates rich user data including transcripts, videos, eye movement data, and interaction logs. Word-sized visualizations embedded into a tabular representation provide a space-efficient and detailed overview of user activities. An analyst assigns codes, grouped into code categories, as part of an interactive process. Filtering and searching helps to select specific activities and focus an analysis. A comparison visualization summarizes results of coding and reveals relationships between codes. Editing features support efficient assignment, refinement, and aggregation of codes. We demonstrate the practical applicability and usefulness of our approach in a case study and describe expert feedback. | false | false | [
"Tanja Blascheck",
"Fabian Beck 0001",
"Sebastian Baltes",
"Thomas Ertl",
"Daniel Weiskopf"
] | [] | [] | [] |
VAST | 2,016 | Visual Analysis of MOOC Forums with iForum | 10.1109/TVCG.2016.2598444 | Discussion forums of Massive Open Online Courses (MOOC) provide great opportunities for students to interact with instructional staff as well as other students. Exploration of MOOC forum data can offer valuable insights for these staff to enhance the course and prepare the next release. However, it is challenging due to the large, complicated, and heterogeneous nature of relevant datasets, which contain multiple dynamically interacting objects such as users, posts, and threads, each one including multiple attributes. In this paper, we present a design study for developing an interactive visual analytics system, called iForum, that allows for effectively discovering and understanding temporal patterns in MOOC forums. The design study was conducted with three domain experts in an iterative manner over one year, including a MOOC instructor and two official teaching assistants. iForum offers a set of novel visualization designs for presenting the three interleaving aspects of MOOC forums (i.e., posts, users, and threads) at three different scales. To demonstrate the effectiveness and usefulness of iForum, we describe a case study involving field experts, in which they use iForum to investigate real MOOC forum data for a course on JAVA programming. | false | false | [
"Siwei Fu",
"Jian Zhao 0010",
"Weiwei Cui",
"Huamin Qu"
] | [] | [] | [] |
VAST | 2,016 | Visual Analytics for Mobile Eye Tracking | 10.1109/TVCG.2016.2598695 | The analysis of eye tracking data often requires the annotation of areas of interest (AOIs) to derive semantic interpretations of human viewing behavior during experiments. This annotation is typically the most time-consuming step of the analysis process. Especially for data from wearable eye tracking glasses, every independently recorded video has to be annotated individually and corresponding AOIs between videos have to be identified. We provide a novel visual analytics approach to ease this annotation process by image-based, automatic clustering of eye tracking data integrated in an interactive labeling and analysis system. The annotation and analysis are tightly coupled by multiple linked views that allow for a direct interpretation of the labeled data in the context of the recorded video stimuli. The components of our analytics environment were developed with a user-centered design approach in close cooperation with an eye tracking expert. We demonstrate our approach with eye tracking data from a real experiment and compare it to an analysis of the data by manual annotation of dynamic AOIs. Furthermore, we conducted an expert user study with 6 external eye tracking researchers to collect feedback and identify analysis strategies they used while working with our application. | false | false | [
"Kuno Kurzhals",
"Marcel Hlawatsch",
"Christof Seeger",
"Daniel Weiskopf"
] | [] | [] | [] |
VAST | 2,016 | Visual Interaction with Dimensionality Reduction: A Structured Literature Analysis | 10.1109/TVCG.2016.2598495 | Dimensionality Reduction (DR) is a core building block in visualizing multidimensional data. For DR techniques to be useful in exploratory data analysis, they need to be adapted to human needs and domain-specific problems, ideally, interactively, and on-the-fly. Many visual analytics systems have already demonstrated the benefits of tightly integrating DR with interactive visualizations. Nevertheless, a general, structured understanding of this integration is missing. To address this, we systematically studied the visual analytics and visualization literature to investigate how analysts interact with automatic DR techniques. The results reveal seven common interaction scenarios that are amenable to interactive control such as specifying algorithmic constraints, selecting relevant features, or choosing among several DR algorithms. We investigate specific implementations of visual analysis systems integrating DR, and analyze ways that other machine learning methods have been combined with DR. Summarizing the results in a “human in the loop” process model provides a general lens for the evaluation of visual interactive DR systems. We apply the proposed model to study and classify several systems previously described in the literature, and to derive future research opportunities. | false | false | [
"Dominik Sacha",
"Leishi Zhang",
"Michael Sedlmair",
"John A. Lee",
"Jaakko Peltonen",
"Daniel Weiskopf",
"Stephen C. North",
"Daniel A. Keim"
] | [] | [] | [] |
VAST | 2,016 | Visualizing Dimension Coverage to Support Exploratory Analysis | 10.1109/TVCG.2016.2598466 | Data analysis involves constantly formulating and testing new hypotheses and questions about data. When dealing with a new dataset, especially one with many dimensions, it can be cumbersome for the analyst to clearly remember which aspects of the data have been investigated (i.e., visually examined for patterns, trends, outliers etc.) and which combinations have not. Yet this information is critical to help the analyst formulate new questions that they have not already answered. We observe that for tabular data, questions are typically comprised of varying combinations of data dimensions (e.g., what are the trends of Sales and Profit for different Regions?). We propose representing analysis history from the angle of dimension coverage (i.e., which data dimensions have been investigated and in which combinations). We use scented widgets to incorporate dimension coverage of the analysts' past work into interaction widgets of a visualization tool. We demonstrate how this approach can assist analysts with the question formation process. Our approach extends the concept of scented widgets to reveal aspects of one's own analysis history, and offers a different perspective on one's past work than typical visualization history tools. Results of our empirical study showed that participants with access to embedded dimension coverage information relied on this information when formulating questions, asked more questions about the data, generated more top-level findings, and showed greater breadth of their analysis without sacrificing depth. | false | false | [
"Ali Sarvghad",
"Melanie Tory",
"Narges Mahyar"
] | [] | [] | [] |
VAST | 2,016 | Visualizing the Hidden Activity of Artificial Neural Networks | 10.1109/TVCG.2016.2598838 | In machine learning, pattern classification assigns high-dimensional vectors (observations) to classes based on generalization from examples. Artificial neural networks currently achieve state-of-the-art results in this task. Although such networks are typically used as black-boxes, they are also widely believed to learn (high-dimensional) higher-level representations of the original observations. In this paper, we propose using dimensionality reduction for two tasks: visualizing the relationships between learned representations of observations, and visualizing the relationships between artificial neurons. Through experiments conducted in three traditional image classification benchmark datasets, we show how visualization can provide highly valuable feedback for network designers. For instance, our discoveries in one of these datasets (SVHN) include the presence of interpretable clusters of learned representations, and the partitioning of artificial neurons into groups with apparently related discriminative roles. | false | false | [
"Paulo E. Rauber",
"Samuel G. Fadel",
"Alexandre X. Falcão",
"Alexandru C. Telea"
] | [] | [] | [] |
VAST | 2,016 | What do Constraint Programming Users Want to See? Exploring the Role of Visualisation in Profiling of Models and Search | 10.1109/TVCG.2016.2598545 | Constraint programming allows difficult combinatorial problems to be modelled declaratively and solved automatically. Advances in solver technologies over recent years have allowed the successful use of constraint programming in many application areas. However, when a particular solver's search for a solution takes too long, the complexity of the constraint program execution hinders the programmer's ability to profile that search and understand how it relates to their model. Therefore, effective tools to support such profiling and allow users of constraint programming technologies to refine their model or experiment with different search parameters are essential. This paper details the first user-centred design process for visual profiling tools in this domain. We report on: our insights and opportunities identified through an on-line questionnaire and a creativity workshop with domain experts carried out to elicit requirements for analytical and visual profiling techniques; our designs and functional prototypes realising such techniques; and case studies demonstrating how these techniques shed light on the behaviour of the solvers in practice. | false | false | [
"Sarah Goodwin",
"Christopher Mears",
"Tim Dwyer",
"Maria Garcia de la Banda",
"Guido Tack",
"Mark Wallace 0001"
] | [] | [] | [] |
SciVis | 2,016 | A Fractional Cartesian Composition Model for Semi-Spatial Comparative Visualization Design | 10.1109/TVCG.2016.2598870 | The study of spatial data ensembles leads to substantial visualization challenges in a variety of applications. In this paper, we present a model for comparative visualization that supports the design of according ensemble visualization solutions by partial automation. We focus on applications, where the user is interested in preserving selected spatial data characteristics of the data as much as possible, even when many ensemble members should be jointly studied using comparative visualization. In our model, we separate the design challenge into a minimal set of user-specified parameters and an optimization component for the automatic configuration of the remaining design variables. We provide an illustrated formal description of our model and exemplify our approach in the context of several application examples from different domains in order to demonstrate its generality within the class of comparative visualization problems for spatial data ensembles. | false | false | [
"Ivan Kolesár",
"Stefan Bruckner",
"Ivan Viola",
"Helwig Hauser"
] | [] | [] | [] |
SciVis | 2,016 | A Versatile and Efficient GPU Data Structure for Spatial Indexing | 10.1109/TVCG.2016.2599043 | In this paper we present a novel GPU-based data structure for spatial indexing. Based on Fenwick trees, a special type of binary indexed trees, our data structure allows construction in linear time. Updates and prefixes can be computed in logarithmic time, whereas point queries require only constant time on average. Unlike competing data structures such as summed-area tables and spatial hashing, our data structure requires a constant amount of bits for each data element, and it offers unconstrained point queries. This property makes our data structure ideally suited for applications requiring unconstrained indexing of large data, such as block-storage of large and block-sparse volumes. Finally, we provide asymptotic bounds on both run-time and memory requirements, and we show applications for which our new data structure is useful. | false | false | [
"Jens Schneider 0002",
"Peter Rautek"
] | [] | [] | [] |
SciVis | 2,016 | Backward Finite-Time Lyapunov Exponents in Inertial Flows | 10.1109/TVCG.2016.2599016 | Inertial particles are finite-sized objects that are carried by fluid flows and in contrast to massless tracer particles they are subject to inertia effects. In unsteady flows, the dynamics of tracer particles have been extensively studied by the extraction of Lagrangian coherent structures (LCS), such as hyperbolic LCS as ridges of the Finite-Time Lyapunov Exponent (FTLE). The extension of the rich LCS framework to inertial particles is currently a hot topic in the CFD literature and is actively under research. Recently, backward FTLE on tracer particles has been shown to correlate with the preferential particle settling of small inertial particles. For larger particles, inertial trajectories may deviate strongly from (massless) tracer trajectories, and thus for a better agreement, backward FTLE should be computed on inertial trajectories directly. Inertial backward integration, however, has not been possible until the recent introduction of the influence curve concept, which - given an observation and an initial velocity - allows to recover all sources of inertial particles as tangent curves of a derived vector field. In this paper, we show that FTLE on the influence curve vector field is in agreement with preferential particle settling and more importantly it is not only valid for small (near-tracer) particles. We further generalize the influence curve concept to general equations of motion in unsteady spatio-velocity phase spaces, which enables backward integration with more general equations of motion. Applying the influence curve concept to tracer particles in the spatio-velocity domain emits streaklines in massless flows as tangent curves of the influence curve vector field. We demonstrate the correlation between inertial backward FTLE and the preferential particle settling in a number of unsteady vector fields | false | false | [
"Tobias Günther",
"Holger Theisel"
] | [] | [] | [] |
SciVis | 2,016 | Categorical Colormap Optimization with Visualization Case Studies | 10.1109/TVCG.2016.2599214 | Mapping a set of categorical values to different colors is an elementary technique in data visualization. Users of visualization software routinely rely on the default colormaps provided by a system, or colormaps suggested by software such as ColorBrewer. In practice, users often have to select a set of colors in a semantically meaningful way (e.g., based on conventions, color metaphors, and logological associations), and consequently would like to ensure their perceptual differentiation is optimized. In this paper, we present an algorithmic approach for maximizing the perceptual distances among a set of given colors. We address two technical problems in optimization, i.e., (i) the phenomena of local maxima that halt the optimization too soon, and (ii) the arbitrary reassignment of colors that leads to the loss of the original semantic association. We paid particular attention to different types of constraints that users may wish to impose during the optimization process. To demonstrate the effectiveness of this work, we tested this technique in two case studies. To reach out to a wider range of users, we also developed a web application called Colourmap Hospital. | false | false | [
"Hui Fang 0003",
"Simon J. Walton",
"E. Delahaye",
"J. Harris",
"D. A. Storchak",
"Min Chen 0001"
] | [] | [] | [] |
SciVis | 2,016 | Combined Visualization of Vessel Deformation and Hemodynamics in Cerebral Aneurysms | 10.1109/TVCG.2016.2598795 | We present the first visualization tool that combines patient-specific hemodynamics with information about the vessel wall deformation and wall thickness in cerebral aneurysms. Such aneurysms bear the risk of rupture, whereas their treatment also carries considerable risks for the patient. For the patient-specific rupture risk evaluation and treatment analysis, both morphological and hemodynamic data have to be investigated. Medical researchers emphasize the importance of analyzing correlations between wall properties such as the wall deformation and thickness, and hemodynamic attributes like the Wall Shear Stress and near-wall flow. Our method uses a linked 2.5D and 3D depiction of the aneurysm together with blood flow information that enables the simultaneous exploration of wall characteristics and hemodynamic attributes during the cardiac cycle. We thus offer medical researchers an effective visual exploration tool for aneurysm treatment risk assessment. The 2.5D view serves as an overview that comprises a projection of the vessel surface to a 2D map, providing an occlusion-free surface visualization combined with a glyph-based depiction of the local wall thickness. The 3D view represents the focus upon which the data exploration takes place. To support the time-dependent parameter exploration and expert collaboration, a camera path is calculated automatically, where the user can place landmarks for further exploration of the properties. We developed a GPU-based implementation of our visualizations with a flexible interactive data exploration mechanism. We designed our techniques in collaboration with domain experts, and provide details about the evaluation. | false | false | [
"Monique Meuschke",
"Samuel Voß",
"Oliver Beuing",
"Bernhard Preim",
"Kai Lawonn"
] | [] | [] | [] |
SciVis | 2,016 | Comparing Cross-Sections and 3D Renderings for Surface Matching Tasks Using Physical Ground Truths | 10.1109/TVCG.2016.2598602 | Within the visualization community there are some well-known techniques for visualizing 3D spatial data and some general assumptions about how perception affects the performance of these techniques in practice. However, there is a lack of empirical research backing up the possible performance differences among the basic techniques for general tasks. One such assumption is that 3D renderings are better for obtaining an overview, whereas cross sectional visualizations such as the commonly used Multi-Planar Reformation (MPR) are better for supporting detailed analysis tasks. In the present study we investigated this common assumption by examining the difference in performance between MPR and 3D rendering for correctly identifying a known surface. We also examined whether prior experience working with image data affects the participant's performance, and whether there was any difference between interactive or static versions of the visualizations. Answering this question is important because it can be used as part of a scientific and empirical basis for determining when to use which of the two techniques. An advantage of the present study compared to other studies is that several factors were taken into account to compare the two techniques. The problem was examined through an experiment with 45 participants, where physical objects were used as the known surface (ground truth). Our findings showed that: 1. The 3D renderings largely outperformed the cross sections; 2. Interactive visualizations were partially more effective than static visualizations; and 3. The high experience group did not generally outperform the low experience group. | false | false | [
"Andreas J. Lind",
"Stefan Bruckner"
] | [] | [] | [] |
SciVis | 2,016 | Correlated Photon Mapping for Interactive Global Illumination of Time-Varying Volumetric Data | 10.1109/TVCG.2016.2598430 | We present a method for interactive global illumination of both static and time-varying volumetric data based on reduction of the overhead associated with re-computation of photon maps. Our method uses the identification of photon traces invariant to changes of visual parameters such as the transfer function (TF), or data changes between time-steps in a 4D volume. This lets us operate on a variant subset of the entire photon distribution. The amount of computation required in the two stages of the photon mapping process, namely tracing and gathering, can thus be reduced to the subset that are affected by a data or visual parameter change. We rely on two different types of information from the original data to identify the regions that have changed. A low resolution uniform grid containing the minimum and maximum data values of the original data is derived for each time step. Similarly, for two consecutive time-steps, a low resolution grid containing the difference between the overlapping data is used. We show that this compact metadata can be combined with the transfer function to identify the regions that have changed. Each photon traverses the low-resolution grid to identify if it can be directly transferred to the next photon distribution state or if it needs to be recomputed. An efficient representation of the photon distribution is presented leading to an order of magnitude improved performance of the raycasting step. The utility of the method is demonstrated in several examples that show visual fidelity, as well as performance. The examples show that visual quality can be retained when the fraction of retraced photons is as low as 40%-50%. | false | false | [
"Daniel Jönsson",
"Anders Ynnerman"
] | [
"HM"
] | [] | [] |
SciVis | 2,016 | Corresponding Supine and Prone Colon Visualization Using Eigenfunction Analysis and Fold Modeling | 10.1109/TVCG.2016.2598791 | We present a method for registration and visualization of corresponding supine and prone virtual colonoscopy scans based on eigenfunction analysis and fold modeling. In virtual colonoscopy, CT scans are acquired with the patient in two positions, and their registration is desirable so that physicians can corroborate findings between scans. Our algorithm performs this registration efficiently through the use of Fiedler vector representation (the second eigenfunction of the Laplace-Beltrami operator). This representation is employed to first perform global registration of the two colon positions. The registration is then locally refined using the haustral folds, which are automatically segmented using the 3D level sets of the Fiedler vector. The use of Fiedler vectors and the segmented folds presents a precise way of visualizing corresponding regions across datasets and visual modalities. We present multiple methods of visualizing the results, including 2D flattened rendering and the corresponding 3D endoluminal views. The precise fold modeling is used to automatically find a suitable cut for the 2D flattening, which provides a less distorted visualization. Our approach is robust, and we demonstrate its efficiency and efficacy by showing matched views on both the 2D flattened colons and in the 3D endoluminal view. We analytically evaluate the results by measuring the distance between features on the registered colons, and we also assess our fold segmentation against 20 manually labeled datasets. We have compared our results analytically to previous methods, and have found our method to achieve superior results. We also prove the hot spots conjecture for modeling cylindrical topology using Fiedler vector representation, which allows our approach to be used for general cylindrical geometry modeling and feature extraction. | false | false | [
"Saad Nadeem",
"Joseph Marino",
"Xianfeng Gu",
"Arie E. Kaufman"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/1810.08850v1",
"icon": "paper"
}
] |
SciVis | 2,016 | Decal-Maps: Real-Time Layering of Decals on Surfaces for Multivariate Visualization | 10.1109/TVCG.2016.2598866 | We introduce the use of decals for multivariate visualization design. Decals are visual representations that are used for communication; for example, a pattern, a text, a glyph, or a symbol, transferred from a 2D-image to a surface upon contact. By creating what we define as decal-maps, we can design a set of images or patterns that represent one or more data attributes. We place decals on the surface considering the data pertaining to the locations we choose. We propose a (texture mapping) local parametrization that allows placing decals on arbitrary surfaces interactively, even when dealing with a high number of decals. Moreover, we extend the concept of layering to allow the co-visualization of an increased number of attributes on arbitrary surfaces. By combining decal-maps, color-maps and a layered visualization, we aim to facilitate and encourage the creative process of designing multivariate visualizations. Finally, we demonstrate the general applicability of our technique by providing examples of its use in a variety of contexts. | false | false | [
"Allan Rocha",
"Usman R. Alim",
"Julio Daniel Silva",
"Mario Costa Sousa"
] | [] | [] | [] |
SciVis | 2,016 | Direct Multifield Volume Ray Casting of Fiber Surfaces | 10.1109/TVCG.2016.2599040 | Multifield data are common in visualization. However, reducing these data to comprehensible geometry is a challenging problem. Fiber surfaces, an analogy of isosurfaces to bivariate volume data, are a promising new mechanism for understanding multifield volumes. In this work, we explore direct ray casting of fiber surfaces from volume data without any explicit geometry extraction. We sample directly along rays in domain space, and perform geometric tests in range space where fibers are defined, using a signed distance field derived from the control polygons. Our method requires little preprocess, and enables real-time exploration of data, dynamic modification and pixel-exact rendering of fiber surfaces, and support for higher-order interpolation in domain space. We demonstrate this approach on several bivariate datasets, including analysis of multi-field combustion data. | false | false | [
"Kui Wu 0003",
"Aaron Knoll",
"Benjamin J. Isaac",
"Hamish A. Carr",
"Valerio Pascucci"
] | [] | [] | [] |
SciVis | 2,016 | GlyphLens: View-Dependent Occlusion Management in the Interactive Glyph Visualization | 10.1109/TVCG.2016.2599049 | Glyphs are a powerful multivariate visualization technique used to visualize data through their visual channels. To visualize a 3D volumetric dataset, glyphs are usually placed on a 2D surface, such as a slicing plane or a feature surface, to avoid occluding each other. However, the 3D spatial structure of some features may be missing. On the other hand, placing a large number of glyphs over the entire 3D space results in occlusion and visual clutter that make the visualization ineffective. To avoid this occlusion, we propose a view-dependent interactive 3D lens that removes the occluding glyphs by pulling the glyphs aside through an animation. We provide two space deformation models and two lens shape models to displace the glyphs based on their spatial distributions. After the displacement, the glyphs around the user-interested region are still visible as context information, and their spatial structures are preserved. In addition, we attenuate the brightness of the glyphs inside the lens based on their depths to provide a stronger depth cue. Furthermore, we developed an interactive glyph visualization system to explore different glyph-based visualization applications. In the system, we provide a few lens utilities that allow users to pick a glyph or a feature and look at it from different view directions. We compare different display/interaction techniques to visualize/manipulate our lens and glyphs. | false | false | [
"Xin Tong",
"Cheng Li",
"Han-Wei Shen"
] | [] | [] | [] |
SciVis | 2,016 | Glyphs for General Second-Order 2D and 3D Tensors | 10.1109/TVCG.2016.2598998 | Glyphs are a powerful tool for visualizing second-order tensors in a variety of scientific data as they allow encoding physical behavior in geometric properties. Most existing techniques focus on symmetric tensors and exclude non-symmetric tensors where the eigenvectors can be non-orthogonal or complex. We present a new construction of 2d and 3d tensor glyphs based on piecewise rational curves and surfaces with the following properties: invariance to (a) isometries and (b) scaling, (c) direct encoding of all real eigenvalues and eigenvectors, (d) one-to-one relation between the tensors and glyphs, (e) glyph continuity under changing the tensor. We apply the glyphs to visualize the Jacobian matrix fields of a number of 2d and 3d vector fields. | false | false | [
"Tim Gerrits",
"Christian Rössl",
"Holger Theisel"
] | [] | [] | [] |
SciVis | 2,016 | Hairy Slices: Evaluating the Perceptual Effectiveness of Cutting Plane Glyphs for 3D Vector Fields | 10.1109/TVCG.2016.2598448 | Three-dimensional vector fields are common datasets throughout the sciences. Visualizing these fields is inherently difficult due to issues such as visual clutter and self-occlusion. Cutting planes are often used to overcome these issues by presenting more manageable slices of data. The existing literature provides many techniques for visualizing the flow through these cutting planes; however, there is a lack of empirical studies focused on the underlying perceptual cues that make popular techniques successful. This paper presents a quantitative human factors study that evaluates static monoscopic depth and orientation cues in the context of cutting plane glyph designs for exploring and analyzing 3D flow fields. The goal of the study was to ascertain the relative effectiveness of various techniques for portraying the direction of flow through a cutting plane at a given point, and to identify the visual cues and combinations of cues involved, and how they contribute to accurate performance. It was found that increasing the dimensionality of line-based glyphs into tubular structures enhances their ability to convey orientation through shading, and that increasing their diameter intensifies this effect. These tube-based glyphs were also less sensitive to visual clutter issues at higher densities. Adding shadows to lines was also found to increase perception of flow direction. Implications of the experimental results are discussed and extrapolated into a number of guidelines for designing more perceptually effective glyphs for 3D vector field visualizations. | false | false | [
"Andrew H. Stevens",
"Thomas Butkiewicz",
"Colin Ware"
] | [] | [] | [] |
SciVis | 2,016 | Hybrid Tactile/Tangible Interaction for 3D Data Exploration | 10.1109/TVCG.2016.2599217 | We present the design and evaluation of an interface that combines tactile and tangible paradigms for 3D visualization. While studies have demonstrated that both tactile and tangible input can be efficient for a subset of 3D manipulation tasks, we reflect here on the possibility to combine the two complementary input types. Based on a field study and follow-up interviews, we present a conceptual framework of the use of these different interaction modalities for visualization both separately and combined-focusing on free exploration as well as precise control. We present a prototypical application of a subset of these combined mappings for fluid dynamics data visualization using a portable, position-aware device which offers both tactile input and tangible sensing. We evaluate our approach with domain experts and report on their qualitative feedback. | false | false | [
"Lonni Besançon",
"Paul Issartel",
"Mehdi Ammi",
"Tobias Isenberg 0001"
] | [] | [] | [] |
SciVis | 2,016 | In Situ Distribution Guided Analysis and Visualization of Transonic Jet Engine Simulations | 10.1109/TVCG.2016.2598604 | Study of flow instability in turbine engine compressors is crucial to understand the inception and evolution of engine stall. Aerodynamics experts have been working on detecting the early signs of stall in order to devise novel stall suppression technologies. A state-of-the-art Navier-Stokes based, time-accurate computational fluid dynamics simulator, TURBO, has been developed at NASA to enhance the understanding of flow phenomena undergoing rotating stall. Despite the proven high modeling accuracy of TURBO, the excessive size of the simulation data prohibits post-hoc analysis, in terms of both storage and I/O time. To address these issues and allow the expert to perform scalable stall analysis, we have designed an in situ distribution guided stall analysis technique. Our method summarizes statistics of important properties of the simulation data in situ using a probabilistic data modeling scheme. This data summarization enables statistical anomaly detection for flow instability in post analysis, which reveals the spatiotemporal trends of rotating stall for the expert to conceive new hypotheses. Furthermore, the verification of the hypotheses and exploratory visualization using the summarized data are realized using probabilistic visualization techniques such as uncertain isocontouring. Positive feedback from the domain scientist has indicated the efficacy of our system in exploratory stall analysis. | false | false | [
"Soumya Dutta",
"Chun-Ming Chen",
"Gregory Heinlein",
"Han-Wei Shen",
"Jen-Ping Chen"
] | [
"HM"
] | [] | [] |
SciVis | 2,016 | Jacobi Fiber Surfaces for Bivariate Reeb Space Computation | 10.1109/TVCG.2016.2599017 | This paper presents an efficient algorithm for the computation of the Reeb space of an input bivariate piecewise linear scalar function f defined on a tetrahedral mesh. By extending and generalizing algorithmic concepts from the univariate case to the bivariate one, we report the first practical, output-sensitive algorithm for the exact computation of such a Reeb space. The algorithm starts by identifying the Jacobi set of f, the bivariate analogs of critical points in the univariate case. Next, the Reeb space is computed by segmenting the input mesh along the new notion of Jacobi Fiber Surfaces, the bivariate analog of critical contours in the univariate case. We additionally present a simplification heuristic that enables the progressive coarsening of the Reeb space. Our algorithm is simple to implement and most of its computations can be trivially parallelized. We report performance numbers demonstrating orders of magnitude speedups over previous approaches, enabling for the first time the tractable computation of bivariate Reeb spaces in practice. Moreover, unlike range-based quantization approaches (such as the Joint Contour Net), our algorithm is parameter-free. We demonstrate the utility of our approach by using the Reeb space as a semi-automatic segmentation tool for bivariate data. In particular, we introduce continuous scatterplot peeling, a technique which enables the reduction of the cluttering in the continuous scatterplot, by interactively selecting the features of the Reeb space to project. We provide a VTK-based C++ implementation of our algorithm that can be used for reproduction purposes or for the development of new Reeb space based visualization techniques. | false | false | [
"Julien Tierny",
"Hamish A. Carr"
] | [
"BP"
] | [] | [] |
SciVis | 2,016 | Molecular Surface Maps | 10.1109/TVCG.2016.2598824 | We present Molecular Surface Maps, a novel, view-independent, and concise representation for molecular surfaces. It transfers the well-known world map metaphor to molecular visualization. Our application maps the complex molecular surface to a simple 2D representation through a spherical intermediate, the Molecular Surface Globe. The Molecular Surface Map concisely shows arbitrary attributes of the original molecular surface, such as biochemical properties or geometrical features. This results in an intuitive overview, which allows researchers to assess all molecular surface attributes at a glance. Our representation can be used as a visual summarization of a molecule's interface with its environment. In particular, Molecular Surface Maps simplify the analysis and comparison of different data sets or points in time. Furthermore, the map representation can be used in a Space-time Cube to analyze time-dependent data from molecular simulations without the need for animation. We show the feasibility of Molecular Surface Maps for different typical analysis tasks of biomolecular data. | false | false | [
"Michael Krone",
"Florian Frieß",
"Katrin Scharnowski",
"Guido Reina",
"Silvia Fademrecht",
"Tobias Kulschewski",
"Jürgen Pleiss",
"Thomas Ertl"
] | [] | [] | [] |
SciVis | 2,016 | OSPRay - A CPU Ray Tracing Framework for Scientific Visualization | 10.1109/TVCG.2016.2599041 | Scientific data is continually increasing in complexity, variety and size, making efficient visualization and specifically rendering an ongoing challenge. Traditional rasterization-based visualization approaches encounter performance and quality limitations, particularly in HPC environments without dedicated rendering hardware. In this paper, we present OSPRay, a turn-key CPU ray tracing framework oriented towards production-use scientific visualization which can utilize varying SIMD widths and multiple device backends found across diverse HPC resources. This framework provides a high-quality, efficient CPU-based solution for typical visualization workloads, which has already been integrated into several prevalent visualization packages. We show that this system delivers the performance, high-level API simplicity, and modular device support needed to provide a compelling new rendering framework for implementing efficient scientific visualization workflows. | false | false | [
"Ingo Wald",
"Gregory P. Johnson",
"Jefferson Amstutz",
"Carson Brownlee",
"Aaron Knoll",
"Jim Jeffers",
"Johannes Günther 0001",
"Paul A. Navrátil"
] | [] | [] | [] |