14 - 19 OCTOBER, 2012. SEATTLE, WASHINGTON, USA

2012 IEEE Papers

SCIVIS Papers: Evaluation
Session : 
Evaluation
Date & Time : October 16 02:00 pm - 03:40 pm
Location : Grand Ballroom D
Chair : Dan Keefe
Papers : 
Evaluation of Fast-Forward Video Visualization
Authors:
Markus Höferlin, Kuno Kurzhals, Benjamin Höferlin, Günther Heidemann, Daniel Weiskopf
Abstract :
We evaluate and compare video visualization techniques based on fast-forward. A controlled laboratory user study (n = 24) was conducted to determine the trade-off between support of object identification and motion perception, two properties that have to be considered when choosing a particular fast-forward visualization. We compare four different visualizations: two representing the state-of-the-art and two new variants of visualization introduced in this paper. The two state-of-the-art methods we consider are frame-skipping and temporal blending of successive frames. Our object trail visualization leverages a combination of frame-skipping and temporal blending, whereas predictive trajectory visualization supports motion perception by augmenting the video frames with an arrow that indicates the future object trajectory. Our hypothesis was that each of the state-of-the-art methods satisfies just one of the goals: support of object identification or motion perception. Thus, they represent both ends of the visualization design. The key findings of the evaluation are that object trail visualization supports object identification, whereas predictive trajectory visualization is most useful for motion perception. However, frame-skipping surprisingly exhibits reasonable performance for both tasks. Furthermore, we evaluate the subjective performance of three different playback speed visualizations for adaptive fast-forward, a subdomain of video fast-forward.
Human Computation in Visualization: Using Purpose Driven Games for Robust Evaluation of Visualization Algorithms
Authors:
Nafees Ahmed, Ziyi Zheng, Klaus Mueller
Abstract :
Due to the inherent characteristics of the visualization process, most of the problems in this field have strong ties with human cognition and perception. This makes the human brain and sensory system the only truly appropriate evaluation platform for evaluating and fine-tuning a new visualization method or paradigm. However, getting humans to volunteer for these purposes has always been a significant obstacle, and thus this phase of the development process has traditionally formed a bottleneck, slowing down progress in visualization research. We propose to take advantage of the newly emerging field of Human Computation (HC) to overcome these challenges. HC promotes the idea that rather than considering humans as users of the computational system, they can be made part of a hybrid computational loop consisting of traditional computation resources and the human brain and sensory system. This approach is particularly successful in cases where part of the computational problem is considered intractable using known computer algorithms but is trivial to common sense human knowledge. In this paper, we focus on HC from the perspective of solving visualization problems and also outline a framework by which humans can be easily seduced to volunteer their HC resources. We introduce a purpose-driven game titled Disguise which serves as a prototypical example for how the evaluation of visualization algorithms can be mapped into a fun and addicting activity, allowing this task to be accomplished in an extensive yet cost effective way. Finally, we sketch out a framework that transcends from the pure evaluation of existing visualization methods to the design of a new one.
Evaluation of Multivariate Visualization on a Multivariate Task
Authors:
Mark A. Livingston, Jonathan W. Decker, Zhuming Ai
Abstract :
Multivariate visualization techniques have attracted great interest as the dimensionality of data sets grows. One premise of such techniques is that simultaneous visual representation of multiple variables will enable the data analyst to detect patterns amongst multiple variables. Such insights could lead to development of new techniques for rigorous (numerical) analysis of complex relationships hidden within the data. Two natural questions arise from this premise: Which multivariate visualization techniques are the most effective for high-dimensional data sets? How does the analysis task change this utility ranking? We present a user study with a new task to answer the first question. We provide some insights to the second question based on the results of our study and results available in the literature. Our task led to significant differences in error, response time, and subjective workload ratings amongst four visualization techniques. We implemented three integrated techniques (Data-driven Spots, Oriented Slivers, and Attribute Blocks), as well as a baseline case of separate grayscale images. The baseline case fared poorly on all three measures, whereas Data-driven Spots yielded the best accuracy and was among the best in response time. These results differ from comparisons of similar techniques with other tasks, and we review all the techniques, tasks, and results (from our work and previous work) to understand the reasons for this discrepancy.
A Data-Driven Approach to Hue-Preserving Color-Blending
Authors:
Lars Kühne, Joachim Giesen, Zhiyuan Zhang, Sungsoo Ha, Klaus Mueller
Abstract :
Color mapping and semitransparent layering play an important role in many visualization scenarios, such as information visualization and volume rendering. The combination of color and transparency is still dominated by standard alpha-compositing using the Porter-Duff over operator which can result in false colors with deceiving impact on the visualization. Other more advanced methods have also been proposed, but the problem is still far from being solved. Here we present an alternative to these existing methods specifically devised to avoid false colors and preserve visual depth ordering. Our approach is data driven and follows the recently formulated knowledge-assisted visualization (KAV) paradigm. Preference data, that have been gathered in web-based user surveys, are used to train a support-vector machine model for automatically predicting an optimized hue-preserving blending. We have applied the resulting model to both volume rendering and a specific information visualization technique, illustrative parallel coordinate plots. Comparative renderings show a significant improvement over previous approaches in the sense that false colors are completely removed and important properties such as depth ordering and blending vividness are better preserved. Due to the generality of the defined data-driven blending operator, it can be easily integrated also into other visualization frameworks.

Effects of Stereo and Screen Size on the Legibility of Three-Dimensional Streamtube Visualization
Authors:
Jian Chen, Haipeng Cai, Alexander P. Auchus, David H. Laidlaw
Abstract :
We report the impact of display characteristics (stereo and size) on task performance in diffusion magnetic resonance imaging (DMRI) in a user study with 12 participants. The hypotheses were that (1) adding stereo and increasing display size would improve task accuracy and reduce completion time, and (2) the greater the complexity of a spatial task, the greater the benefits of an improved display. Thus we expected to see greater performance gains when detailed visual reasoning was required. Participants used dense streamtube visualizations to perform five representative tasks: (1) determine the higher average fractional anisotropy (FA) values between two regions, (2) find the endpoints of fiber tracts, (3) name a bundle, (4) mark a brain lesion, and (5) judge if tracts belong to the same bundle. Contrary to our hypotheses, we found that task completion time was not improved by the use of the larger display and that performance accuracy was hurt rather than helped by the introduction of stereo in our study with dense DMRI data. Bigger was not always better. Thus caution should be taken when selecting displays for scientific visualization applications. We explored the results further using the body-scale unit and subjective size and stereo experiences.
INFOVIS Papers: Evaluation and Methodology
Session : 
Evaluation and Methodology
Date & Time : October 16 10:30 am - 12:10 pm
Location : Grand Ballroom C
Chair : Leland Wilkinson
Papers : 
How Capacity Limits of Attention Influence Information Visualization Effectiveness
Authors:
Steve Haroz, David Whitney
Abstract :
In this paper, we explore how the capacity limits of attention influence the effectiveness of information visualizations. We conducted a series of experiments to test how visual feature type (color vs. motion), layout, and variety of visual elements impacted user performance. The experiments tested users' abilities to (1) determine if a specified target is on the screen, (2) detect an odd-ball, deviant target, different from the other visible objects, and (3) gain a qualitative overview by judging the number of unique categories on the screen. Our results show that the severe capacity limits of attention strongly modulate the effectiveness of information visualizations, particularly the ability to detect unexpected information. Keeping in mind these capacity limits, we conclude with a set of design guidelines which depend on a visualization's intended use.
Different Strokes for Different Folks: Visual Presentation Design between Disciplines
Authors:
Steven R. Gomez, Radu Jianu, Caroline Ziemkiewicz, Hua Guo, David H. Laidlaw
Abstract :
We present an ethnographic study of design differences in visual presentations between academic disciplines. Characterizing design conventions between users and data domains is an important step in developing hypotheses, tools, and design guidelines for information visualization. In this paper, disciplines are compared at a coarse scale between four groups of fields: social, natural, and formal sciences; and the humanities. Two commonplace presentation types were analyzed: electronic slideshows and whiteboard chalk talks. We found design differences in slideshows using two methods: coding and comparing manually-selected features, like charts and diagrams, and an image-based analysis using PCA called eigenslides. In whiteboard talks with controlled topics, we observed design behaviors, including using representations and formalisms from a participant's own discipline, that suggest authors might benefit from novel assistive tools for designing presentations. Based on these findings, we discuss opportunities for visualization ethnography and human-centered authoring tools for visual information.
Does an Eye Tracker Tell the Truth about Visualizations?: Findings while Investigating Visualizations for Decision Making
Authors:
Sung-Hee Kim, Zhihua Dong, Hanjun Xian, Benjavan Upatising, Ji Soo Yi
Abstract :
For information visualization researchers, eye tracking has been a useful tool to investigate research participants' underlying cognitive processes by tracking their eye movements while they interact with visual techniques. We used an eye tracker to better understand why participants with a variant of a tabular visualization called 'SimulSort' outperformed ones with a conventional table and typical one-column sorting feature (i.e., Typical Sorting). The collected eye-tracking data certainly shed light on the detailed cognitive processes of the participants; SimulSort helped with decision-making tasks by promoting efficient browsing behavior and compensatory decision-making strategies. However, more interestingly, we also found unexpected eye-tracking patterns with SimulSort. We investigated the cause of the unexpected patterns through a crowdsourcing-based study (i.e., Experiment 2), which elicited an important limitation of the eye tracking method: its inability to capture peripheral vision. This particular result would be a caveat for other visualization researchers who plan to use an eye tracker in their studies. In addition, the method of using a testing stimulus (i.e., influential column) in Experiment 2 to verify the existence of such limitations would be useful for researchers who would like to verify their eye tracking results.
Design Study Methodology: Reflections from the Trenches and the Stacks
Authors:
Michael Sedlmair, Miriah Meyer, Tamara Munzner
Abstract :
Design studies are an increasingly popular form of problem-driven visualization research, yet there is little guidance available about how to do them effectively. In this paper we reflect on our combined experience of conducting twenty-one design studies, as well as reading and reviewing many more, and on an extensive literature review of other field work methods and methodologies. Based on this foundation we provide definitions, propose a methodological framework, and provide practical guidance for conducting design studies. We define a design study as a project in which visualization researchers analyze a specific real-world problem faced by domain experts, design a visualization system that supports solving this problem, validate the design, and reflect about lessons learned in order to refine visualization design guidelines. We characterize two axes - a task clarity axis from fuzzy to crisp and an information location axis from the domain expert's head to the computer - and use these axes to reason about design study contributions, their suitability, and uniqueness from other approaches. The proposed methodological framework consists of 9 stages: learn, winnow, cast, discover, design, implement, deploy, reflect, and write. For each stage we provide practical guidance and outline potential pitfalls. We also conducted an extensive literature survey of related methodological approaches that involve a significant amount of qualitative field work, and compare design study methodology to that of ethnography, grounded theory, and action research.
Graphical Tests for Power Comparison of Competing Designs
Authors:
Heike Hofmann, Lendie Follett, Mahbubul Majumder, Dianne Cook
Abstract :
Lineups [4, 28] have been established as tools for visual testing similar to standard statistical inference tests, allowing us to evaluate the validity of graphical findings in an objective manner. In simulation studies [12] lineups have been shown as being efficient: the power of visual tests is comparable to classical tests while being much less stringent in terms of distributional assumptions made. This makes lineups versatile, yet powerful, tools in situations where conditions for regular statistical tests are not or cannot be met. In this paper we introduce lineups as a tool for evaluating the power of competing graphical designs. We highlight some of the theoretical properties and then show results from two studies evaluating competing designs: both studies are designed to go to the limits of our perceptual abilities to highlight differences between designs. We use both accuracy and speed of evaluation as measures of a successful design. The first study compares the choice of coordinate system: polar versus Cartesian coordinates. The results show strong support in favor of Cartesian coordinates in finding fast and accurate answers to spotting patterns. The second study is aimed at finding shift differences between distributions. Both studies are motivated by data problems that we have recently encountered, and explore using simulated data to evaluate the plot designs under controlled conditions. Amazon Mechanical Turk (MTurk) is used to conduct the studies. The lineups provide an effective mechanism for objectively evaluating plot designs.
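For readers unfamiliar with the lineup protocol the abstract builds on, the sketch below (ours, not the authors' code) hides one plot of real data among decoy plots drawn from a null model; if viewers reliably pick out the real panel, the pattern is visually significant. The data, null model, and panel count are illustrative.

```python
# Minimal lineup sketch: the real-data plot is hidden among decoys
# drawn from a null model (permuting y destroys any x-y association).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
x = rng.uniform(0, 1, 80)
y = 0.8 * x + rng.normal(0, 0.2, 80)      # real data: a linear trend

n_panels = 20
target = rng.integers(n_panels)            # where the real plot hides
fig, axes = plt.subplots(4, 5, figsize=(10, 8))
for i, ax in enumerate(axes.flat):
    yy = y if i == target else rng.permutation(y)  # null: permuted y
    ax.scatter(x, yy, s=5)
    ax.set_xticks([]); ax.set_yticks([])
    ax.set_title(str(i + 1), fontsize=8)
plt.show()
print("real data is in panel", target + 1)
```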
INFOVIS Papers: Graphs and Networks
Session : 
Graphs and Networks
Date & Time : October 16 02:00 pm - 03:40 pm
Location : Grand Ballroom C
Chair : Nathalie Henry-Riche
Papers : 
A User Study on Curved Edges in Graph Visualization
Authors:
Kai Xu, Chris Rooney, Peter Passmore, Dong-Han Ham, Phong H. Nguyen
Abstract :
Recently there has been increasing research interest in displaying graphs with curved edges to produce more readable visualizations. While there are several automatic techniques, little has been done to evaluate their effectiveness empirically. In this paper we present two experiments studying the impact of edge curvature on graph readability. The goal is to understand the advantages and disadvantages of using curved edges for common graph tasks compared to straight line segments, which are the conventional choice for showing edges in node-link diagrams. We included several edge variations: straight edges, edges with different curvature levels, and mixed straight and curved edges. During the experiments, participants were asked to complete network tasks including determination of connectivity, shortest path, node degree, and common neighbors. We also asked the participants to provide subjective ratings of the aesthetics of different edge types. The results show significant performance differences between the straight and curved edges and clear distinctions between variations of curved edges.
Compressed Adjacency Matrices: Untangling Gene Regulatory Networks
Authors:
Kasper Dinkla, Michel A. Westenberg, Jarke J. van Wijk
Abstract :
We present a novel technique, Compressed Adjacency Matrices, for visualizing gene regulatory networks. These directed networks have strong structural characteristics: out-degrees with a scale-free distribution, in-degrees bound by a low maximum, and few and small cycles. Standard visualization techniques, such as node-link diagrams and adjacency matrices, are impeded by these network characteristics. The scale-free distribution of out-degrees causes a high number of intersecting edges in node-link diagrams. Adjacency matrices become space-inefficient due to the low in-degrees and the resulting sparse network. Compressed adjacency matrices, however, exploit these structural characteristics. By cutting open and rearranging an adjacency matrix, we achieve a compact and neatly-arranged visualization. Compressed adjacency matrices allow for easy detection of subnetworks with a specific structure, so-called motifs, which provide important knowledge about gene regulatory networks to domain experts. We summarize motifs commonly referred to in the literature, and relate them to network analysis tasks common to the visualization domain. We show that a user can easily find the important motifs in compressed adjacency matrices, and that this is hard in standard adjacency matrices and node-link diagrams. We also demonstrate that interaction techniques for standard adjacency matrices can be used for our compressed variant. These techniques include rearrangement clustering, highlighting, and filtering.
Visualizing Network Traffic to Understand the Performance of Massively Parallel Simulations
Authors:
Aaditya G. Landge, Joshua A. Levine, Katherine E. Isaacs, Abhinav Bhatele, Todd Gamblin, Martin Schulz
Abstract :
The performance of massively parallel applications is often heavily impacted by the cost of communication among compute nodes. However, determining how to best use the network is a formidable task, made challenging by the ever-increasing size and complexity of modern supercomputers. This paper applies visualization techniques to aid parallel application developers in understanding the network activity by enabling a detailed exploration of the flow of packets through the hardware interconnect. In order to visualize this large and complex data, we employ two linked views of the hardware network. The first is a 2D view that represents the network structure as one of several simplified planar projections. This view is designed to allow a user to easily identify trends and patterns in the network traffic. The second is a 3D view that augments the 2D view by preserving the physical network topology and providing a context that is familiar to the application developers. Using the massively parallel multi-physics code pF3D as a case study, we demonstrate that our tool provides valuable insight that we use to explain and optimize pF3D's performance on an IBM Blue Gene/P system.
Memorability of Visual Features in Network Diagrams
Authors:
Kim Marriott, Helen Purchase, Michael Wybrow, Cagatay Goncu
Abstract :
We investigate the cognitive impact of various layout features (symmetry, alignment, collinearity, axis alignment, and orthogonality) on the recall of network diagrams (graphs). This provides insight into how people internalize these diagrams and what features should or shouldn't be utilised when designing static and interactive network-based visualisations. Participants were asked to study, remember, and draw a series of small network diagrams, each drawn to emphasise a particular visual feature. The visual features were based on existing theories of perception, and the task enabled visual processing at the visceral level only. Our results strongly support the importance of visual features such as symmetry, collinearity and orthogonality, while not showing any significant impact for node-alignment or parallel edges.
Interactive Level-of-Detail Rendering of Large Graphs
Authors:
Michael Zinsmaier, Ulrik Brandes, Oliver Deussen, Hendrik Strobelt
Abstract :
We propose a technique that allows straight-line graph drawings to be rendered interactively with adjustable level of detail. The approach consists of a novel combination of edge cumulation with density-based node aggregation and is designed to exploit common graphics hardware for speed. It operates directly on graph data and does not require precomputed hierarchies or meshes. As proof of concept, we present an implementation that scales to graphs with millions of nodes and edges, and discuss several example applications.
INFOVIS Papers: Representation and Perception
Session : 
Representation and Perception
Date & Time : October 16 04:15 pm - 05:55 pm
Location : Grand Ballroom C
Chair : Enrico Bertini
Papers : 
Visual Semiotics & Uncertainty Visualization: An Empirical Study
Authors:
Alan M. MacEachren, Robert E. Roth, James O'Brien, Bonan Li, Derek Swingley, Mark Gahegan
Abstract :
This paper presents two linked empirical studies focused on uncertainty visualization. The experiments are framed from two conceptual perspectives. First, a typology of uncertainty is used to delineate kinds of uncertainty matched with space, time, and attribute components of data. Second, concepts from visual semiotics are applied to characterize the kind of visual signification that is appropriate for representing those different categories of uncertainty. This framework guided the two experiments reported here. The first addresses representation intuitiveness, considering both visual variables and iconicity of representation. The second addresses relative performance of the most intuitive abstract and iconic representations of uncertainty on a map reading task. Combined results suggest initial guidelines for representing uncertainty and discussion focuses on practical applicability of results.
Comparing Clusterings Using Bertin's Idea
Authors:
Alexander Pilhöfer, Alexander Gribov, Antony Unwin
Abstract :
Classifying a set of objects into clusters can be done in numerous ways, producing different results. They can be visually compared using contingency tables [27], mosaicplots [13], fluctuation diagrams [15], tableplots [20], (modified) parallel coordinates plots [28], Parallel Sets plots [18] or Circos diagrams [19]. Unfortunately the interpretability of all these graphical displays decreases rapidly with the numbers of categories and clusterings. In his famous book Semiology of Graphics [5], Bertin writes that "the discovery of an ordered concept appears as the ultimate point in logical simplification since it permits reducing to a single instant the assimilation of series which previously required many instants of study". Or in more everyday language, if you use good orderings you can see results immediately that with other orderings might take a lot of effort. This is also related to the idea of effect ordering [12], that data should be organised to reflect the effect you want to observe. This paper presents an efficient algorithm based on Bertin's idea and concepts related to Kendall's τ [17], which finds informative joint orders for two or more nominal classification variables. We also show how these orderings improve the various displays and how groups of corresponding categories can be detected using a top-down partitioning algorithm. Different clusterings based on data on the environmental performance of cars sold in Germany are used for illustration. All presented methods are available in the R package extracat, which is used to compute the optimized orderings for the example dataset.
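The joint-ordering idea can be made concrete with a small sketch. The heuristic below is a simple reciprocal-averaging pass that pulls a contingency table's mass toward the diagonal; it illustrates the spirit of Bertin's reordering, not the optimized algorithm of the paper or the extracat implementation, and it assumes strictly positive row and column sums.

```python
# Reorder rows and columns so that large cells concentrate near the
# diagonal: repeatedly sort rows by the mean column position of their
# mass, then columns by the mean row position of theirs.
import numpy as np

def reorder(T, iters=10):
    T = np.asarray(T, dtype=float)
    r = np.arange(T.shape[0])            # current row order
    c = np.arange(T.shape[1])            # current column order
    for _ in range(iters):
        M = T[np.ix_(r, c)]
        row_score = M @ np.arange(M.shape[1]) / M.sum(axis=1)
        r = r[np.argsort(row_score)]
        M = T[np.ix_(r, c)]
        col_score = np.arange(M.shape[0]) @ M / M.sum(axis=0)
        c = c[np.argsort(col_score)]
    return r, c

T = np.array([[2, 0, 9], [8, 1, 1], [1, 9, 2]])
r, c = reorder(T)
print(T[np.ix_(r, c)])    # mass now concentrates near the diagonal
```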
Perception of Visual Variables on Tiled Wall-Sized Displays for Information Visualization Applications
Authors:
Anastasia Bezerianos, Petra Isenberg
Abstract :
We present the results of two user studies on the perception of visual variables on tiled high-resolution wall-sized displays. We contribute an understanding of, and indicators predicting how, large variations in viewing distances and viewing angles affect the accurate perception of angles, areas, and lengths. Our work, thus, helps visualization researchers with design considerations on how to create effective visualizations for these spaces. The first study showed that perception accuracy was impacted most when viewers were close to the wall but differently for each variable (Angle, Area, Length). Our second study examined the effect of perception when participants could move freely compared to when they had a static viewpoint. We found that a far but static viewpoint was as accurate but less time consuming than one that included free motion. Based on our findings, we recommend encouraging viewers to stand further back from the display when conducting perception estimation tasks. If tasks need to be conducted close to the wall display, important information should be placed directly in front of the viewer or above, and viewers should be provided with an estimation of the distortion effects predicted by our work, or encouraged to physically navigate the wall in specific ways to reduce judgement error.
Visualizing Flow of Uncertainty through Analytical Processes
Authors:
Yingcai Wu, Guo-Xun Yuan, Kwan-Liu Ma
Abstract :
Uncertainty can arise in any stage of a visual analytics process, especially in data-intensive applications with a sequence of data transformations. Additionally, throughout the process of multidimensional, multivariate data analysis, uncertainty due to data transformation and integration may split, merge, increase, or decrease. This dynamic characteristic along with other features of uncertainty pose a great challenge to effective uncertainty-aware visualization. This paper presents a new framework for modeling uncertainty and characterizing the evolution of the uncertainty information through analytical processes. Based on the framework, we have designed a visual metaphor called uncertainty flow to visually and intuitively summarize how uncertainty information propagates over the whole analysis pipeline. Our system allows analysts to interact with and analyze the uncertainty information at different levels of detail. Three experiments were conducted to demonstrate the effectiveness and intuitiveness of our design.
Assessing the Effect of Visualizations on Bayesian Reasoning through Crowdsourcing
Authors:
Luana Micallef, Pierre Dragicevic, Jean-Daniel Fekete
Abstract :
People have difficulty understanding statistical information and are unaware of their wrong judgments, particularly in Bayesian reasoning. Psychology studies suggest that the way Bayesian problems are represented can impact comprehension, but few visual designs have been evaluated and only populations with a specific background have been involved. In this study, a textual and six visual representations for three classic problems were compared using a diverse subject pool through crowdsourcing. Visualizations included area-proportional Euler diagrams, glyph representations, and hybrid diagrams combining both. Our study failed to replicate previous findings in that subjects' accuracy was remarkably lower and visualizations exhibited no measurable benefit. A second experiment confirmed that simply adding a visualization to a textual Bayesian problem is of little help, even when the text refers to the visualization, but suggests that visualizations are more effective when the text is given without numerical values. We discuss our findings and the need for more such experiments to be carried out on heterogeneous populations of non-experts.
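As a reminder of the kind of task involved, here is a worked instance of a classic Bayesian problem, using the textbook mammography numbers (not necessarily the exact values of this study): a 1% base rate, an 80% true-positive rate, and a 9.6% false-positive rate.

```latex
% Posterior probability of disease D given a positive test:
\[
P(D \mid +) \;=\; \frac{P(+ \mid D)\,P(D)}
{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
\;=\; \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.096 \times 0.99}
\;\approx\; 0.078 .
\]
```

The counterintuitive smallness of the posterior (under 8% despite a "positive" test) is what makes such problems a demanding test bed for visual aids.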
VAST Papers: Text and Categorical Data Analysis
Session : 
Text and Categorical Data Analysis
Date & Time : October 16 02:00 pm - 03:40 pm
Location : Grand Ballroom B
Chair : David Ebert
Papers : 
Reinventing the Contingency Wheel: Scalable Visual Analytics of Large Categorical Data
Authors:
Bilal Alsallakh, Wolfgang Aigner, Silvia Miksch, Eduard Gröller
Abstract :
Contingency tables summarize the relations between categorical variables and arise in both scientific and business domains. Asymmetrically large two-way contingency tables pose a problem for common visualization methods. The Contingency Wheel has been recently proposed as an interactive visual method to explore and analyze such tables. However, the scalability and readability of this method are limited when dealing with large and dense tables. In this paper we present Contingency Wheel++, new visual analytics methods that overcome these major shortcomings: (1) regarding automated methods, a measure of association based on Pearson's residuals alleviates the bias of the raw residuals originally used, (2) regarding visualization methods, a frequency-based abstraction of the visual elements eliminates overlapping and makes analyzing both positive and negative associations possible, and (3) regarding the interactive exploration environment, a multi-level overview+detail interface enables exploring individual data items that are aggregated in the visualization or in the table using coordinated views. We illustrate the applicability of these new methods with a use case and show how they enable discovering and analyzing nontrivial patterns and associations in large categorical data.
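The residual measure mentioned in (1) is easy to state concretely. A minimal sketch with made-up counts:

```python
# Pearson residuals for a contingency table:
# r_ij = (O_ij - E_ij) / sqrt(E_ij), with E the expected counts
# under independence of rows and columns.
import numpy as np

O = np.array([[30, 10],
              [20, 40]], dtype=float)                  # observed counts
E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()   # expected counts
R = (O - E) / np.sqrt(E)                               # Pearson residuals
print(R)   # positive: stronger-than-expected association; negative: weaker
```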
Visual Classifier Training for Text Document Retrieval
Authors:
Florian Heimerl, Steffen Koch, Harald Bosch, Thomas Ertl
Abstract :
Performing exhaustive searches over a large number of text documents can be tedious, since it is very hard to formulate search queries or define filter criteria that capture an analyst's information need adequately. Classification through machine learning has the potential to improve search and filter tasks encompassing either complex or very specific information needs, individually. Unfortunately, analysts who are knowledgeable in their field are typically not machine learning specialists. Most classification methods, however, require a certain expertise regarding their parametrization to achieve good results. Supervised machine learning algorithms, in contrast, rely on labeled data, which can be provided by analysts. However, the effort for labeling can be very high, which shifts the problem from composing complex queries or defining accurate filters to another laborious task, in addition to the need for judging the trained classifier's quality. We therefore compare three approaches for interactive classifier training in a user study. All of the approaches are potential candidates for the integration into a larger retrieval system. They incorporate active learning to various degrees in order to reduce the labeling effort as well as to increase effectiveness. Two of them encompass interactive visualization for letting users explore the status of the classifier in context of the labeled documents, as well as for judging the quality of the classifier in iterative feedback loops. We see our work as a step towards introducing user controlled classification methods in addition to text search and filtering for increasing recall in analytics scenarios involving large corpora.
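The active-learning core that the compared approaches build on can be sketched compactly. The following is a generic uncertainty-sampling loop on synthetic data, not the authors' system; the model choice, pool size, and feature construction are illustrative stand-ins for real text vectors.

```python
# Uncertainty sampling: repeatedly ask the analyst to label the
# document the current classifier is least sure about.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # hidden true labels

# small seed set with both classes present
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
for _ in range(20):
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    pool = np.setdiff1d(np.arange(500), labeled)
    proba = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(proba - 0.5)))]  # most uncertain
    labeled.append(query)       # in the real loop, the analyst labels it
print("accuracy on all data:", clf.score(X, y))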
Relative N-Gram Signatures: Document Visualization at the Level of Character N-Grams
Authors:
Magdalena Jankowska, Vlado Keselj, Evangelos Milios
Abstract :
The Common N-Gram (CNG) classifier is a text classification algorithm based on the comparison of frequencies of character n-grams (strings of characters of length n) that are the most common in the considered documents and classes of documents. We present a text analytic visualization system that employs the CNG approach for text classification and uses the differences in frequency values of common n-grams in order to visually compare documents at the sub-word level. The visualization method provides both an insight into n-gram characteristics of documents or classes of documents and a visual interpretation of the workings of the CNG classifier.
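A minimal sketch of the underlying representation may help: a profile of a document's most frequent character n-grams, and a CNG-style relative-frequency dissimilarity between two profiles. The profile size and example strings are illustrative.

```python
# Common N-Gram (CNG) profiles and a relative-difference dissimilarity.
from collections import Counter

def profile(text, n=3, size=100):
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(grams.values())
    return {g: c / total for g, c in grams.most_common(size)}

def cng_distance(p, q):
    # squared relative frequency differences over the union of n-grams
    return sum(((p.get(g, 0) - q.get(g, 0)) /
                ((p.get(g, 0) + q.get(g, 0)) / 2)) ** 2
               for g in set(p) | set(q))

a = profile("the quick brown fox jumps over the lazy dog " * 20)
b = profile("pack my box with five dozen liquor jugs " * 20)
print(cng_distance(a, b))
```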
LeadLine: Interactive Visual Analysis of Text Data through Event Identification and Exploration
Authors:
Wenwen Dou, Xiaoyu Wang, Drew Skau, William Ribarsky, Michelle Zhou
Abstract :
Text data such as online news and microblogs bear valuable insights regarding important events and responses to such events. Events are inherently temporal, evolving over time. Existing visual text analysis systems have provided temporal views of changes based on topical themes extracted from text data. But few have associated topical themes with events that cause the changes. In this paper, we propose an interactive visual analytics system, LeadLine, to automatically identify meaningful events in news and social media data and support exploration of the events. To characterize events, LeadLine integrates topic modeling, event detection, and named entity recognition techniques to automatically extract information regarding the investigative 4 Ws: who, what, when, and where for each event. To further support analysis of the text corpora through events, LeadLine allows users to interactively examine meaningful events using the 4 Ws to develop an understanding of how and why. Through representing large-scale text corpora in the form of meaningful events, LeadLine provides a concise summary of the corpora. LeadLine also supports the construction of simple narratives through the exploration of events. To demonstrate the efficacy of LeadLine in identifying events and supporting exploration, two case studies were conducted using news and social media data.
The Deshredder: A Visual Analytic Approach to Reconstructing Shredded Documents
Authors:
Patrick Butler, Prithwish Chakraborty, Naren Ramakrishnan
Abstract :
Reconstruction of shredded documents remains a significant challenge. Creating a better document reconstruction system enables not just recovery of information accidentally lost but also understanding our limitations against adversaries' attempts to gain access to information. Existing approaches to reconstructing shredded documents adopt either a predominantly manual (e.g., crowd-sourcing) or a near automatic approach. We describe Deshredder, a visual analytic approach that scales well and effectively incorporates user input to direct the reconstruction process. Deshredder represents shredded pieces as time series and uses nearest neighbor matching techniques that enable matching both the contours of shredded pieces and the content of the shreds themselves. More importantly, Deshredder's interface supports visual analytics through user interaction with similarity matrices as well as higher-level assembly through more complex stitching functions. We identify a functional task taxonomy leading to design considerations for constructing deshredding solutions, and describe how Deshredder applies to problems from the DARPA Shredder Challenge through expert evaluations.
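The time-series matching idea can be sketched in a few lines: treat each shred edge as a 1-D sequence of contour samples and rank candidate neighbors by nearest-neighbor distance. The synthetic edges below stand in for scanned shreds; this illustrates the matching step only, not Deshredder itself.

```python
# Match each shred's right edge to the most similar left edge.
import numpy as np

rng = np.random.default_rng(1)
n, h = 8, 64
right = rng.normal(size=(n, h))          # right-edge contour per shred
left = np.roll(right, 1, axis=0)         # shred i's left edge matches
left = left + rng.normal(0, 0.05, left.shape)  # shred i-1's right, + noise

d = np.linalg.norm(right[:, None, :] - left[None, :, :], axis=2)
succ = d.argmin(axis=1)                  # nearest-neighbor successor
print(succ)                              # expect [1, 2, ..., 7, 0]
```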
VAST Papers: Clustering, Classification, and Correlation
Session : 
Clustering, Classification, and Correlation
Date & Time : October 16 10:30 am - 12:10 pm
Location : Grand Ballroom B
Chair : Daniel Weiskopf
Papers : 
A Correlative Analysis Process in a Visual Analytics Environment
Authors:
Abish Malik, Ross Maciejewski, Yun Jang, Whitney Huang, Niklas Elmqvist, David Ebert
Abstract :
Finding patterns and trends in spatial and temporal datasets has been a long studied problem in statistics and different domains of science. This paper presents a visual analytics approach for the interactive exploration and analysis of spatiotemporal correlations among multivariate datasets. Our approach enables users to discover correlations and explore potentially causal or predictive links at different spatiotemporal aggregation levels among the datasets, and allows them to understand the underlying statistical foundations that precede the analysis. Our technique utilizes Pearson's product-moment correlation coefficient and factors in the lead or lag between different datasets to detect trends and periodic patterns amongst them.
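The lead/lag idea reduces to computing Pearson's r over shifted copies of the series. A minimal sketch on synthetic data (the lag and noise level are made up):

```python
# Lagged Pearson correlation between two time series.
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = np.roll(x, 5) + rng.normal(0, 0.3, 200)   # y lags x by 5 steps

def lagged_corr(x, y, max_lag=10):
    r = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[:len(x) - lag], y[lag:]
        else:
            a, b = x[-lag:], y[:len(y) + lag]
        r[lag] = np.corrcoef(a, b)[0, 1]
    return r

r = lagged_corr(x, y)
best = max(r, key=r.get)
print(best, round(r[best], 2))   # should report lag 5, high correlation
```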
Scatter/Gather Clustering: Flexibly Incorporating User Feedback to Steer Clustering Results
Authors:
Mahmud Hossain, Praveen Ojili, Cindy Grimm, Rolf Müller, Layne Watson, Naren Ramakrishnan
Abstract :
Significant effort has been devoted to designing clustering algorithms that are responsive to user feedback or that incorporate prior domain knowledge in the form of constraints. However, users desire more expressive forms of interaction to influence clustering outcomes. In our experiences working with diverse application scientists, we have identified an interaction style, scatter/gather clustering, that helps users iteratively restructure clustering results to meet their expectations. As the names indicate, scatter and gather are dual primitives that describe whether clusters in a current segmentation should be broken up further or, alternatively, brought back together. By combining scatter and gather operations in a single step, we support very expressive dynamic restructurings of data. Scatter/gather clustering is implemented using a nonlinear optimization framework that achieves both locality of clusters and satisfaction of user-supplied constraints. We illustrate the use of our scatter/gather clustering approach in a visual analytic application to study baffle shapes in the bat biosonar (ears and nose) system. We demonstrate how domain experts are adept at supplying scatter/gather constraints, and how our framework incorporates these constraints effectively without requiring numerous instance-level constraints.
Inter-Active Learning of Ad-Hoc Classifiers for Video Visual Analytics
Authors:
Benjamin Höferlin, Rudolf Netzel, Markus Höferlin, Daniel Weiskopf, Günther Heidemann
Abstract :
Learning of classifiers to be used as filters within the analytical reasoning process creates new challenges and aggravates existing ones. Such classifiers are typically trained ad-hoc, with tight time constraints that affect the amount and the quality of annotation data and, thus, also the users' trust in the classifier trained. We approach the challenges of ad-hoc training by inter-active learning, which extends active learning by integrating human experts' background knowledge to a greater extent. In contrast to active learning, not only does inter-active learning include the users' expertise by posing queries of data instances for labeling, but it also supports the users in comprehending the classifier model by visualization. Besides the annotation of manually or automatically selected data instances, users are empowered to directly adjust complex classifier models. Therefore, our model visualization facilitates the detection and correction of inconsistencies between the classifier model trained by examples and the user's mental model of the class definition. Visual feedback of the training process helps the users assess the performance of the classifier and, thus, build up trust in the filter created. We demonstrate the capabilities of inter-active learning in the domain of video visual analytics and compare its performance with the results of random sampling and uncertainty sampling of training sets.
Visual Cluster Exploration of Web Clickstream Data
Authors:
Jishang Wei, Zeqian Shen, Neel Sundaresan, Kwan-Liu Ma
Abstract :
Web clickstream data are routinely collected to study how users browse the web or use a service. It is clear that the ability to recognize and summarize user behavior patterns from such data is valuable to e-commerce companies. In this paper, we introduce a visual analytics system to explore the various user behavior patterns reflected by distinct clickstream clusters. In a practical analysis scenario, the system first presents an overview of clickstream clusters using a Self-Organizing Map with Markov chain models. Then the analyst can interactively explore the clusters through an intuitive user interface. They can either obtain a summary of a selected group of data or further refine the clustering result. We evaluated our system using two different datasets from eBay. Analysts who were working on the same data have confirmed the system's effectiveness in extracting user behavior patterns from complex datasets and enhancing their ability to reason.
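The Markov-chain summary of clickstreams amounts to estimating a page-to-page transition matrix from sessions. A minimal sketch with invented page names:

```python
# Estimate a first-order Markov transition matrix from clickstreams.
import numpy as np

sessions = [
    ["home", "search", "item", "buy"],
    ["home", "search", "item", "search", "item"],
    ["home", "item", "buy"],
]
pages = sorted({p for s in sessions for p in s})
idx = {p: i for i, p in enumerate(pages)}

counts = np.zeros((len(pages), len(pages)))
for s in sessions:
    for a, b in zip(s, s[1:]):
        counts[idx[a], idx[b]] += 1

row = counts.sum(axis=1, keepdims=True)
P = np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)
print(pages)
print(P.round(2))   # row-stochastic where a page has outgoing clicks
```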
An Adaptive Parameter Space-Filling Algorithm for Highly Interactive Cluster Exploration
Authors:
Zafar Ahmed, Chris Weaver
Abstract :
For a user to perceive continuous interactive response time in a visualization tool, the rule of thumb is that it must process, deliver, and display rendered results for any given interaction in under 100 milliseconds. In many visualization systems, successive interactions trigger independent queries and caching of results. Consequently, computationally expensive queries like multidimensional clustering cannot keep up with rapid sequences of interactions, precluding visual benefits such as motion parallax. In this paper, we describe a heuristic prefetching technique to improve the interactive response time of KMeans clustering in dynamic query visualizations of multidimensional data. We address the tradeoff between high interaction and intense query computation by observing how related interactions on overlapping data subsets produce similar clustering results, and characterizing these similarities within a parameter space of interaction. We focus on the two-dimensional parameter space defined by the minimum and maximum values of a time range manipulated by dragging and stretching a one-dimensional filtering lens over a plot of time series data. Using calculation of nearest neighbors of interaction points in parameter space, we reuse partial query results from prior interaction sequences to calculate both an immediate best-effort clustering result and to schedule calculation of an exact result. The method adapts to user interaction patterns in the parameter space by reprioritizing the interaction neighbors of visited points in the parameter space. A performance study on Mesonet meteorological data demonstrates that the method is a significant improvement over the baseline scheme in which interaction triggers on-demand, exact-range clustering with LRU caching. We also present initial evidence that approximate, temporary clustering results are sufficiently accurate (compared to exact results) to convey useful cluster structure during rapid and protracted interaction.
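The prefetching idea, reusing results from the nearest previously visited point in parameter space, can be approximated with a warm start. The sketch below seeds scikit-learn's KMeans with the centroids cached for the nearest prior (min, max) range; it is a simplification of the paper's scheme (no scheduling of exact results, no reprioritization of neighbors), and all names and data are ours.

```python
# Warm-start K-means on a filtered subset using centroids cached for
# the nearest previously visited (lo, hi) time range.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
X = rng.normal(size=(5000, 2)) + np.repeat([[0, 0], [6, 6], [0, 6]],
                                           [2000, 2000, 1000], axis=0)
t = np.sort(rng.uniform(0, 100, 5000))      # a time stamp per point

cache = {}                                   # (lo, hi) -> centroids
def cluster(lo, hi, k=3):
    sel = X[(t >= lo) & (t <= hi)]
    if cache:
        key = min(cache, key=lambda q: (q[0]-lo)**2 + (q[1]-hi)**2)
        km = KMeans(k, init=cache[key], n_init=1).fit(sel)   # warm start
    else:
        km = KMeans(k, n_init=10).fit(sel)                   # cold start
    cache[(lo, hi)] = km.cluster_centers_
    return km

cluster(0, 50)    # cold start
cluster(0, 55)    # warm start: converges in a few iterations
```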
VAST Papers: Visual-Computational Analysis of Multivariate Data
Session : 
Visual-Computational Analysis of Multivariate Data
Date & Time : October 16 04:15 pm - 05:55 pm
Location : Grand Ballroom B
Chair : Jean-Daniel Fekete
Papers : 
Dis-Function: Learning Distance Functions Interactively
Authors:
Eli T. Brown, Jingjing Liu, Carla E. Brodley, Remco Chang
Abstract :
The world's corpora of data grow in size and complexity every day, making it increasingly difficult for experts to make sense out of their data. Although machine learning offers algorithms for finding patterns in data automatically, they often require algorithm-specific parameters, such as an appropriate distance function, which are outside the purview of a domain expert. We present a system that allows an expert to interact directly with a visual representation of the data to define an appropriate distance function, thus avoiding direct manipulation of obtuse model parameters. Adopting an iterative approach, our system first assumes a uniformly weighted Euclidean distance function and projects the data into a two-dimensional scatterplot view. The user can then move incorrectly-positioned data points to locations that reflect his or her understanding of the similarity of those data points relative to the other data points. Based on this input, the system performs an optimization to learn a new distance function and then re-projects the data to redraw the scatterplot. We illustrate empirically that with only a few iterations of interaction and optimization, a user can achieve a scatterplot view and its corresponding distance function that reflect the user's knowledge of the data. In addition, we evaluate our system to assess scalability in data size and data dimension, and show that our system is computationally efficient and can provide an interactive or near-interactive user experience.
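The feedback loop can be caricatured in a few lines: a weighted Euclidean distance whose weights shift toward the dimensions on which user-identified similar points agree. The paper solves a proper optimization over the full set of user feedback; the multiplicative update below only illustrates the direction of the adjustment.

```python
# Nudge distance-function weights so that two points the user drags
# together become closer under the learned metric.
import numpy as np

def wdist(a, b, w):
    return np.sqrt(np.sum(w * (a - b) ** 2))

rng = np.random.default_rng(4)
X = rng.normal(size=(50, 5))
w = np.ones(5) / 5                       # start from uniform weights

i, j = 0, 1                              # user feedback: these are similar
for _ in range(10):
    diff = (X[i] - X[j]) ** 2
    w *= np.exp(-0.5 * diff / diff.mean())   # shrink disagreeing dims
    w /= w.sum()
print("d before:", wdist(X[i], X[j], np.ones(5) / 5))
print("d after :", wdist(X[i], X[j], w))
```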
Visual Pattern Discovery using Random Projections
Authors:
Anushka Anand, Leland Wilkinson, Tuan Nhon Dang
Abstract :
An essential element of exploratory data analysis is the use of revealing low-dimensional projections of high-dimensional data. Projection Pursuit has been an effective method for finding interesting low-dimensional projections of multidimensional spaces by optimizing a score function called a projection pursuit index. However, the technique is not scalable to high-dimensional spaces. Here, we introduce a novel method for discovering noteworthy views of high-dimensional data spaces by using binning and random projections. We define score functions, akin to projection pursuit indices, that characterize visual patterns of the low-dimensional projections that constitute feature subspaces. We also describe an analytic, multivariate visualization platform based on this algorithm that is scalable to extremely large problems.
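A minimal sketch of the mechanics, with a crude concentration score standing in for the paper's pattern-specific score functions:

```python
# Score random 2-D projections by binning: keep the projection whose
# bin counts are least uniform (a crude "clumpiness" index).
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (500, 10)),
               rng.normal(4, 1, (500, 10))])       # two hidden clusters

best_score, best_P = -np.inf, None
for _ in range(200):
    P, _ = np.linalg.qr(rng.normal(size=(10, 2)))  # random orthogonal basis
    Y = X @ P
    H, _, _ = np.histogram2d(Y[:, 0], Y[:, 1], bins=10)
    p = H.ravel() / H.sum()
    score = np.sum(p ** 2)        # high when mass is concentrated
    if score > best_score:
        best_score, best_P = score, P
print(best_score)                 # best_P projects onto a revealing view
```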
iLAMP: Exploring High-Dimensional Spacing Through Backward Multidimensional Projection
Authors:
Elisa Portes dos Santos Amorim, Emilio Vital Brazil, Joel Daniels II, Paulo Joia, Luis Gustavo Nonato
Abstract :
Ever improving computing power and technological advances are greatly augmenting data collection and scientific observation. This has directly contributed to increased data complexity and dimensionality, motivating research of exploration techniques for multidimensional data. Consequently, a recent influx of work dedicated to techniques and tools that aid in understanding multidimensional datasets can be observed in many research fields, including biology, engineering, physics and scientific computing. While the effectiveness of existing techniques to analyze the structure and relationships of multidimensional data varies greatly, few techniques provide flexible mechanisms to simultaneously visualize and actively explore high-dimensional spaces. In this paper, we present an inverse linear affine multidimensional projection, coined iLAMP, that enables a novel interactive exploration technique for multidimensional data. iLAMP operates in reverse to traditional projection methods by mapping low-dimensional information into a high-dimensional space. This allows users to extrapolate instances of a multidimensional dataset while exploring a projection of the data to the planar domain. We present experimental results that validate iLAMP, measuring the quality and coherence of the extrapolated data; as well as demonstrate the utility of iLAMP to hypothesize the unexplored regions of a high-dimensional space.
Subspace Search and Visualization to Make Sense of Alternative Clusterings in High-Dimensional Data
Authors:
Andrada Tatu, Fabian Maaß, Ines Färber, Enrico Bertini, Tobias Schreck, Thomas Seidl, Daniel Keim
Abstract :
In explorative data analysis, the data under consideration often resides in a high-dimensional (HD) data space. Currently many methods are available to analyze this type of data. So far, proposed automatic approaches include dimensionality reduction and cluster analysis, whereby visual-interactive methods aim to provide effective visual mappings to show, relate, and navigate HD data. Furthermore, almost all of these methods conduct the analysis from a singular perspective, meaning that they consider the data in either the original HD data space, or a reduced version thereof. Additionally, HD data spaces often consist of combined features that measure different properties, in which case the particular relationships between the various properties may not be clear to the analysts a priori since they can only be revealed if appropriate feature combinations (subspaces) of the data are taken into consideration. Considering just a single subspace is, however, often not sufficient since different subspaces may show complementary, conjoint, or contradictory relations between data items. Useful information may consequently remain embedded in sets of subspaces of a given HD input data space. Relying on the notion of subspaces, we propose a novel method for the visual analysis of HD data in which we employ an interestingness-guided subspace search algorithm to detect a candidate set of subspaces. Based on appropriately defined subspace similarity functions, we visualize the subspaces and provide navigation facilities to interactively explore large sets of subspaces. Our approach allows users to effectively compare and relate subspaces with respect to involved dimensions and clusters of objects. We apply our approach to synthetic and real data sets. We thereby demonstrate its support for understanding HD data from different perspectives, effectively yielding a more complete view on HD data.
Just-in-Time Annotation of Clusters, Outliers, and Trends in Point-based Data Visualizations
Authors:
Eser Kandogan
Abstract :
We introduce the concept of just-in-time descriptive analytics as a novel application of computational and statistical techniques performed at interaction time to help users easily understand the structure of data as seen in visualizations. Fundamental to just-in-time descriptive analytics is (a) automatically identifying visual features, such as clusters, outliers, and trends, that users might observe in visualizations, (b) determining the semantics of such features by performing statistical analysis as the user is interacting, and (c) enriching visualizations with annotations that not only describe the semantics of visual features but also facilitate interaction to support high-level understanding of the data. In this paper, we demonstrate just-in-time descriptive analytics applied to a point-based multi-dimensional visualization technique to identify and describe clusters, outliers, and trends. We argue that it provides a novel user experience of computational techniques working alongside users, allowing them to build qualitative mental models of data faster, and we demonstrate its application on a few use cases. Techniques used to facilitate just-in-time descriptive analytics are described in detail along with their run-time performance characteristics. We believe this is just a starting point and much remains to be researched, as we discuss open issues and opportunities in improving accessibility and collaboration.
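As an illustration of the identify-analyze-annotate loop (ours, not the paper's system), the sketch below flags z-score outliers in a scatterplot and attaches generated annotations:

```python
# Detect outliers statistically, then enrich the plot with annotations.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(8)
X = rng.normal(size=(300, 2))
X[:3] += 6                                 # plant a few outliers

mu, sd = X.mean(axis=0), X.std(axis=0)
z = np.abs((X - mu) / sd).max(axis=1)
outliers = np.where(z > 3)[0]              # simple z-score rule

plt.scatter(X[:, 0], X[:, 1], s=8)
for i in outliers:
    plt.annotate(f"outlier (z={z[i]:.1f})", X[i], fontsize=8)
plt.show()
```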
TVCG Papers: SciVis: Volume Visualization
Session : 
Volume Visualization
Date & Time : October 16 04:15 pm - 05:55 pm
Location : Grand Ballroom D
Chair : Deborah Silver
Papers : 
A Versatile Optical Model for Hybrid Rendering of Volume Data
Authors:
Fei Yang, Qingde Li, Dehui Xiang, Yong Cao and Jie Tian
Unified Boundary-Aware Texturing for Interactive Volume Rendering
Authors:
Timo Ropinski, Stefan Diepenbrock, Stefan Bruckner, Klaus Hinrichs and Eduard Gröller
Parallel Computation of 2D Morse-Smale Complexes
Authors:
Nithin Shivashankar, Senthilnathan Maadaswamy and Vijay Natarajan
Computing Reeb Graphs as a Union of Contour Trees
Authors:
Harish Doraiswamy and Vijay Natarajan
Multimodal Data Fusion Based on Mutual Information
Authors:
Roger Bramon, Imma Boada, Anton Bardera, Joaquim Rodriguez, Miquel Feixas, Josep Puig and Mateu Sbert
SCIVIS Papers: Topology and Fields
Session : 
Topology and Fields
Date & Time : October 17 04:15 pm - 05:55 pm
Location : Grand Ballroom D
Chair : Georgeta-Elisabeta Marai
Papers : 
Computing Morse-Smale Complexes with Accurate Geometry
Authors:
Attila Gyulassy, Peer-Timo Bremer, Valerio Pascucci
Abstract :
Topological techniques have proven highly successful in analyzing and visualizing scientific data. As a result, significant efforts have been made to compute structures like the Morse-Smale complex as robustly and efficiently as possible. However, the resulting algorithms, while topologically consistent, often produce incorrect connectivity as well as poor geometry. These problems may compromise or even invalidate any subsequent analysis. Moreover, such techniques may fail to improve even when the resolution of the domain mesh is increased, thus producing potentially incorrect results even for highly resolved functions. To address these problems we introduce two new algorithms: (i) a randomized algorithm to compute the discrete gradient of a scalar field that converges under refinement; and (ii) a deterministic variant which directly computes accurate geometry and thus correct connectivity of the MS complex. The first algorithm converges in the sense that on average it produces the correct result and its standard deviation approaches zero with increasing mesh resolution. The second algorithm uses two ordered traversals of the function to integrate the probabilities of the first to extract correct (near optimal) geometry and connectivity. We present an extensive empirical study using both synthetic and real-world data and demonstrate the advantages of our algorithms in comparison with several popular approaches.

Visualization of Temporal Similarity in Field Data
Authors:
Steffen Frey, Filip Sadlo, Thomas Ertl
Abstract :
This paper presents a visualization approach for detecting and exploring similarity in the temporal variation of field data. We provide an interactive technique for extracting correlations from similarity matrices which capture temporal similarity of univariate functions. We make use of the concept to extract periodic and quasiperiodic behavior at single (spatial) points as well as similarity between different locations within a field and also between different data sets. The obtained correlations are utilized for visual exploration of both temporal and spatial relationships in terms of temporal similarity. Our entire pipeline offers visual interaction and inspection, allowing for the flexibility that in particular time-dependent data analysis techniques require. We demonstrate the utility and versatility of our approach by applying our implementation to data from both simulation and measurement.
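The central data structure is easy to sketch: a self-similarity matrix whose entry (i, j) compares a signal's values at time steps i and j, so periodic behavior appears as diagonal stripes. A minimal example for a single spatial point, with a made-up signal:

```python
# Temporal self-similarity matrix of a univariate signal.
import numpy as np

t = np.linspace(0, 8 * np.pi, 400)
f = np.sin(t) + 0.1 * np.random.default_rng(6).normal(size=400)

S = -np.abs(f[:, None] - f[None, :])   # similarity = negative distance
# e.g. matplotlib's plt.imshow(S) reveals the stripe pattern of the period
print(S.shape)
```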

Visualizing Nuclear Scission through a Multifield Extension of Topological Analysis
Authors:
David Duke, Hamish Carr, Aaron Knoll, Nicolas Schunck, Hai Ah Nam, Andrzej Staszczak
Abstract :
In nuclear science, density functional theory (DFT) is a powerful tool to model the complex interactions within the atomic nucleus, and is the primary theoretical approach used by physicists seeking a better understanding of fission. However DFT simulations result in complex multivariate datasets in which it is difficult to locate the crucial scission point at which one nucleus fragments into two, and to identify the precursors to scission. The Joint Contour Net (JCN) has recently been proposed as a new data structure for the topological analysis of multivariate scalar fields, analogous to the contour tree for univariate fields. This paper reports the analysis of DFT simulations using the JCN, the first application of the JCN technique to real data. It makes three contributions to visualization: (i) a set of practical methods for visualizing the JCN, (ii) new insight into the detection of nuclear scission, and (iii) an analysis of aesthetic criteria to drive further work on representing the JCN.

Augmented Topological Descriptors of Pore Networks for Material Science
Authors:
Daniela M. Ushizima, Dmitriy Morozov, Gunther H. Weber, Andrea G. C. Bianchi, James A. Sethian, E. Wes Bethel
Abstract :
One potential solution to reduce the concentration of carbon dioxide in the atmosphere is the geologic storage of captured CO2 in underground rock formations, also known as carbon sequestration. There is ongoing research to guarantee that this process is both efficient and safe. We describe tools that provide measurements of media porosity, and permeability estimates, including visualization of pore structures. Existing standard algorithms make limited use of geometric information in calculating permeability of complex microstructures. This quantity is important for the analysis of biomineralization, a subsurface process that can affect physical properties of porous media. This paper introduces geometric and topological descriptors that enhance the estimation of material permeability. Our analysis framework includes the processing of experimental data, segmentation, and feature extraction and making novel use of multiscale topological analysis to quantify maximum flow through porous networks. We illustrate our results using synchrotron-based X-ray computed microtomography of glass beads during biomineralization. We also benchmark the proposed algorithms using simulated data sets modeling jammed packed bead beds of a monodispersive material.

Generalized Topological Simplification of Scalar Fields on Surfaces
Authors:
Julien Tierny, Valerio Pascucci
Abstract :

We present a combinatorial algorithm for the general topological simplification of scalar fields on surfaces. Given a scalar field f, our algorithm generates a simplified field g that provably admits only critical points from a constrained subset of the singularities of f, while guaranteeing a small distance ||f − g||_∞ for data-fitting purposes. In contrast to previous algorithms, our approach is oblivious to the strategy used for selecting features of interest and allows critical points to be removed arbitrarily. When topological persistence is used to select the features of interest, our algorithm produces a standard ε-simplification. Our approach is based on a new iterative algorithm for the constrained reconstruction of sub- and sur-level sets. Extensive experiments show that the number of iterations required for our algorithm to converge is rarely greater than 2 and never greater than 5, yielding O(n log(n)) practical time performance. The algorithm handles triangulated surfaces with or without boundary and is robust to the presence of multi-saddles in the input. It is simple to implement, fast in practice, and more general than previous techniques. Practically, our approach allows a user to arbitrarily simplify the topology of an input function and robustly generate the corresponding simplified function. An appealing application area of our algorithm is in scalar field design, since it enables, without any threshold parameter, the robust pruning of topological noise as selected by the user. This is needed, for example, to get rid of inaccuracies introduced by numerical solvers, thereby providing the topological guarantees needed for certified geometry processing. Experiments show this ability to eliminate numerical noise as well as validate the time efficiency and accuracy of our algorithm. We provide a lightweight C++ implementation as supplemental material that can be used for topological cleaning on surface meshes.

SCIVIS Papers: Flow and Turbulence
Session : 
Flow and Turbulence
Date & Time : October 17 08:30 am - 10:10 am
Location : Grand Ballroom D
Chair : Eugene Zhang
Papers : 
Analysis of Streamline Separation at Infinity Using Time-Discrete Markov Chains
Authors:
Wieland Reich, Gerik Scheuermann
Abstract :
Existing methods for analyzing the separation of streamlines are often restricted to a finite time or a local area. In our paper we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles. We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular finite-time Lyapunov exponents and discuss the discrepancies.
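As a rough illustration of the core idea, the sketch below (a simplification, not the authors' algorithm) discretizes a steady planar vector field into a row-stochastic transition matrix over grid cells and evolves a particle distribution rather than individual streamlines; the toy vortex field, grid resolution, and nearest-cell transition rule are all assumptions.

```python
import numpy as np

N = 32                        # N x N grid over [0, 1]^2
h = 1.0 / N

def velocity(x, y):           # toy steady vector field (a single vortex)
    return -(y - 0.5), (x - 0.5)

P = np.zeros((N * N, N * N))  # row-stochastic transition matrix
for i in range(N):
    for j in range(N):
        x, y = (i + 0.5) * h, (j + 0.5) * h
        u, v = velocity(x, y)
        xn = min(max(x + 0.5 * h * u, 0.0), 1 - 1e-9)   # one Euler step
        yn = min(max(y + 0.5 * h * v, 0.0), 1 - 1e-9)
        P[i * N + j, int(xn / h) * N + int(yn / h)] = 1.0

dist = np.ones(N * N) / (N * N)          # uniform initial distribution
for _ in range(100):                     # 100 discrete time steps
    dist = dist @ P
print(dist.reshape(N, N).max())          # mass gathers on attracting cells
```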
Derived Metric Tensors for Flow Surface Visualization
Authors:
Harald Obermaier, Kenneth I. Joy
Abstract :
Integral flow surfaces constitute a widely used flow visualization tool due to their capability to convey important flow information such as fluid transport, mixing, and domain segmentation. Current flow surface rendering techniques limit their expressiveness, however, by focusing virtually exclusively on displacement visualization, visually neglecting the more complex notion of deformation such as shearing and stretching that is central to the field of continuum mechanics. To incorporate this information into the flow surface visualization and analysis process, we derive a metric tensor field that encodes local surface deformations as induced by the velocity gradient of the underlying flow field. We demonstrate how properties of the resulting metric tensor field are capable of enhancing present surface visualization and generation methods and develop novel surface querying, sampling, and visualization techniques. The provided results show how this step towards unifying classic flow visualization and more advanced concepts from continuum mechanics enables more detailed and improved flow analysis.
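The deformation measure at the heart of this abstract can be illustrated with the right Cauchy-Green tensor C = F^T F of a flow map, whose eigenvalues separate stretching from shear; the analytic shear flow and the finite-difference Jacobian below are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

def flow_map(p, T=1.0):
    """Advect a 2D point analytically for time T (toy shear flow)."""
    x, y = p
    return np.array([x + T * y, y])     # simple shear: u = y, v = 0

def cauchy_green(p, eps=1e-5):
    F = np.empty((2, 2))                # flow-map Jacobian, central differences
    for k in range(2):
        dp = np.zeros(2); dp[k] = eps
        F[:, k] = (flow_map(p + dp) - flow_map(p - dp)) / (2 * eps)
    return F.T @ F                      # metric tensor of the deformed patch

C = cauchy_green(np.array([0.3, 0.7]))
lam = np.linalg.eigvalsh(C)
print(lam)   # eigenvalues != 1 indicate stretching; pure shear: lam1 * lam2 == 1
```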
Lagrangian Coherent Structures for Design Analysis of Revolving Doors
Authors:
Benjamin Schindler, Raphael Fuchs, Stefan Barp, Jurgen Waser, Armin Pobitzer, Robert Carnecky, Kre_i
Abstract :
Room air flow and air exchange are important aspects for the design of energy-efficient buildings. As a result, simulations are increasingly used prior to construction to achieve an energy-efficient design. We present a visual analysis of air flow generated at building entrances, which uses a combination of revolving doors and air curtains. The resulting flow pattern is challenging because of two interacting flow patterns: On the one hand, the revolving door acts as a pump, on the other hand, the air curtain creates a layer of uniformly moving warm air between the interior of the building and the revolving door. Lagrangian coherent structures (LCS), which by definition are flow barriers, are the method of choice for visualizing the separation and recirculation behavior of warm and cold air flow. The extraction of LCS is based on the finite-time Lyapunov exponent (FTLE) and makes use of a ridge definition which is consistent with the concept of weak LCS. Both FTLE computation and ridge extraction are done in a robust and efficient way by making use of the fast Fourier transform for computing scale-space derivatives.
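A minimal sketch of the FTLE computation underlying the LCS extraction (the paper's FFT-based scale-space derivatives and ridge extraction are omitted): advect a particle grid, finite-difference the flow map, and take the logarithm of its largest singular value. The toy velocity field, integration scheme, and resolution are assumptions.

```python
import numpy as np

def vel(x, y):                          # toy steady field with saddle-type behavior
    return np.sin(np.pi * y), np.sin(np.pi * x)

def advect(x, y, T=2.0, steps=200):     # midpoint-rule particle integration
    dt = T / steps
    for _ in range(steps):
        u1, v1 = vel(x, y)
        u2, v2 = vel(x + 0.5 * dt * u1, y + 0.5 * dt * v1)
        x, y = x + dt * u2, y + dt * v2
    return x, y

n, T = 64, 2.0
xs = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(xs, xs)
FX, FY = advect(X, Y, T)
dFXdy, dFXdx = np.gradient(FX, xs, xs)  # flow-map gradient on the grid
dFYdy, dFYdx = np.gradient(FY, xs, xs)
ftle = np.empty_like(FX)
for i in range(n):
    for j in range(n):
        F = np.array([[dFXdx[i, j], dFXdy[i, j]],
                      [dFYdx[i, j], dFYdy[i, j]]])
        ftle[i, j] = np.log(np.linalg.svd(F, compute_uv=False)[0]) / T
print(ftle.max())                       # ridges of this field approximate LCS
```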
Turbulence Visualization at the Terascale on Desktop PCs
Authors:
Marc Treib, Kai Berger, Florian Reichl, Charles Meneveau, Alex Szalay, Rudiger Westermann
Abstract :
Despite the ongoing efforts in turbulence research, the universal properties of the turbulence small-scale structure and the relationships between small- and large-scale turbulent motions are not yet fully understood. The visually guided exploration of turbulence features, including the interactive selection and simultaneous visualization of multiple features, can further progress our understanding of turbulence. Accomplishing this task for flow fields in which the full turbulence spectrum is well resolved is challenging on desktop computers. This is due to the extreme resolution of such fields, requiring memory and bandwidth capacities going beyond what is currently available. To overcome these limitations, we present a GPU system for feature-based turbulence visualization that works on a compressed flow field representation. We use a wavelet-based compression scheme including run-length and entropy encoding, which can be decoded on the GPU and embedded into brick-based volume ray-casting. This enables a drastic reduction of the data to be streamed from disk to GPU memory. Our system derives turbulence properties directly from the velocity gradient tensor, and it either renders these properties in turn or generates and renders scalar feature volumes. The quality and efficiency of the system is demonstrated in the visualization of two unsteady turbulence simulations, each comprising a spatio-temporal resolution of 1024^4. On a desktop computer, the system can visualize each time step in 5 seconds, and it achieves about three times this rate for the visualization of a scalar feature volume.
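The compression pipeline described here (wavelet transform followed by run-length and entropy encoding) can be illustrated with a toy one-level Haar transform; the block size, quantizer, and run-length coder below are stand-ins, not the paper's GPU-decodable codec.

```python
import numpy as np

def haar_1d(x):
    """One Haar analysis level: averages followed by details."""
    a = (x[0::2] + x[1::2]) / 2.0
    d = (x[0::2] - x[1::2]) / 2.0
    return np.concatenate([a, d])

def rle(symbols):
    """(value, run length) pairs; zero runs in the detail half compress well."""
    out, run, prev = [], 1, symbols[0]
    for s in symbols[1:]:
        if s == prev:
            run += 1
        else:
            out.append((prev, run)); prev, run = s, 1
    out.append((prev, run))
    return out

block = np.sin(np.linspace(0, np.pi, 64))        # smooth data block
coeffs = haar_1d(block)
q = np.round(coeffs / 0.05).astype(int)          # coarse quantization
print(len(rle(q.tolist())), "RLE pairs for", len(q), "samples")
```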
Automatic Detection and Visualization of Qualitative Hemodynamic Characteristics in Cerebral Aneurysms
Authors:
Rocco Gasteiger, Dirk J. Lehmann, Roy van Pelt, Gabor Janiga, Oliver Beuing, Anna Vilanova, Holger T
Abstract :
Cerebral aneurysms are pathological vessel dilatations that bear a high risk of rupture. For the understanding and evaluation of the risk of rupture, the analysis of hemodynamic information plays an important role. Besides quantitative hemodynamic information, qualitative flow characteristics, e.g., the inflow jet and the impingement zone, are also correlated with the risk of rupture. However, the assessment of these two characteristics is currently based on an interactive visual investigation of the flow field, obtained by computational fluid dynamics (CFD) or blood flow measurements. We present an automatic and robust detection as well as an expressive visualization of these characteristics. The detection can be used to support a comparison, e.g., of simulation results reflecting different treatment options. Our approach utilizes local streamline properties to formalize the inflow jet and impingement zone. We extract a characteristic seeding curve on the ostium, on which an inflow jet boundary contour is constructed. Based on this boundary contour we identify the impingement zone. Furthermore, we present several visualization techniques to depict both characteristics expressively. Thereby, we consider accuracy and robustness of the extracted characteristics, minimal visual clutter, and occlusions. An evaluation with six domain experts confirms that our approach detects both hemodynamic characteristics reasonably.
SCIVIS Papers: Interaction and Rendering
Session : 
Interaction and Rendering
Date & Time : October 17 02:00 pm - 03:40 pm
Location : Grand Ballroom D
Chair : Xiaoyu Wang
Papers : 
WYSIWYP: What You See Is What You Pick
Authors:
Alexander Wiebel, Frans M. Vos, David Foerster, Hans-Christian Hege
Abstract :

Scientists, engineers, and physicians are used to analyzing 3D data with slice-based visualizations. Radiologists, for example, are trained to read slices of medical imaging data. Despite the numerous examples of sophisticated 3D rendering techniques, domain experts, who still prefer slice-based visualization, do not consider these to be very useful. Since 3D renderings have the advantage of providing an overview at a glance, while 2D depictions better serve detailed analyses, it is of general interest to better combine these methods. Recently there have been attempts to bridge this gap between 2D and 3D renderings. These attempts include specialized techniques for volume picking in medical imaging data that result in repositioning slices. In this paper, we present a new volume picking technique called WYSIWYP (what you see is what you pick) that, in contrast to previous work, does not require pre-segmented data or metadata and thus is more generally applicable. The positions picked by our method are solely based on the data itself, the transfer function, and the way the volumetric rendering is perceived by the user. To demonstrate the utility of the proposed method, we apply it to automated positioning of slices in volumetric scalar fields from various application areas. Finally, we present results of a user study in which 3D locations selected by users are compared to those resulting from WYSIWYP. The user study confirms our claim that the resulting positions correlate well with those perceived by the user.
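A hedged sketch of the picking intuition as described: march along the viewing ray, accumulate opacity from the transfer function, and pick the sample where the accumulated opacity rises most steeply. The ray data, the linear transfer function, and the exact jump criterion are illustrative assumptions rather than the paper's precise definition.

```python
import numpy as np

def pick_depth(samples, transfer):
    """samples: scalar values along one ray; returns the picked sample index."""
    alpha = transfer(samples)                       # per-sample opacity
    acc = 1.0 - np.cumprod(1.0 - alpha)             # front-to-back accumulation
    jumps = np.diff(acc, prepend=0.0)               # visibility-weighted gain
    return int(np.argmax(jumps))

# empty space, then a ramping feature, then a fully opaque region
ray = np.concatenate([np.zeros(40), np.linspace(0, 1, 20), np.ones(40)])
idx = pick_depth(ray, transfer=lambda v: 0.2 * v)   # assumed linear ramp TF
print(idx)   # lands where the visible feature starts to dominate
```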

Efficient Structure-Aware Selection Techniques for 3D Point Cloud Visualizations with 2DOF Input
Authors:
Lingyun Yu, Konstantinos Efstathiou, Petra Isenberg, Tobias Isenberg
Abstract :

Data selection is a fundamental task in visualization because it serves as a pre-requisite to many follow-up interactions. Efficient spatial selection in 3D point cloud datasets consisting of thousands or millions of particles can be particularly challenging. We present two new techniques, TeddySelection and CloudLasso, that support the selection of subsets in large particle 3D datasets in an interactive and visually intuitive manner. Specifically, we describe how to spatially select a subset of a 3D particle cloud by simply encircling the target particles on screen using either the mouse or direct-touch input. Based on the drawn lasso, our techniques automatically determine a bounding selection surface around the encircled particles based on their density. This kind of selection technique can be applied to particle datasets in several application domains. TeddySelection and CloudLasso reduce, and in some cases even eliminate, the need for complex multi-step selection processes involving Boolean operations. This was confirmed in a formal, controlled user study in which we compared the more flexible CloudLasso technique to the standard cylinder-based selection technique. This study showed that the former is consistently more efficient than the latter; in several cases the CloudLasso selection time was half that of the corresponding cylinder-based selection.
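Under strong simplifications, the density-aware lasso selection might look like the sketch below: project particles to the screen, keep those inside the drawn polygon, then keep only those in sufficiently dense regions. The orthographic projection, the radius-count density estimate, and the quantile threshold are assumptions, not the published CloudLasso algorithm.

```python
import numpy as np
from matplotlib.path import Path

def cloud_lasso(points3d, lasso2d, density_quantile=0.5, radius=0.1):
    screen = points3d[:, :2]                        # orthographic projection
    inside = Path(lasso2d).contains_points(screen)
    cand = points3d[inside]
    # crude density estimate: neighbor count within a radius, per candidate
    d = np.linalg.norm(cand[:, None, :] - cand[None, :, :], axis=2)
    counts = (d < radius).sum(axis=1)
    return cand[counts >= np.quantile(counts, density_quantile)]

pts = np.random.rand(2000, 3)
lasso = [(0.2, 0.2), (0.8, 0.2), (0.8, 0.8), (0.2, 0.8)]
print(len(cloud_lasso(pts, lasso)))                 # particles kept by the lasso
```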

Sketching Uncertainty into Simulations
Authors:
Hrvoje Ribicic, Jurgen Waser, Roman Gurbat, Bernhard Sadransky, M. Eduard Groller
Abstract :

In a variety of application areas, the use of simulation steering in decision making is limited at best. Research focusing on this problem suggests that most user interfaces are too complex for the end user. Our goal is to let users create and investigate multiple, alternative scenarios without the need for special simulation expertise. To simplify the specification of parameters, we move from a traditional manipulation of numbers to a sketch-based input approach. Users steer both numeric parameters and parameters with a spatial correspondence by sketching a change onto the rendering. Special visualizations provide immediate visual feedback on how the sketches are transformed into boundary conditions of the simulation models. Since uncertainty with respect to many intertwined parameters plays an important role in planning, we also allow the user to intuitively set up complete value ranges, which are then automatically transformed into ensemble simulations. The interface and the underlying system were developed in collaboration with experts in the field of flood management. The real-world data they have provided has allowed us to construct scenarios used to evaluate the system. These were presented to a variety of flood response personnel, and their feedback is discussed in detail in the paper. The interface was found to be intuitive and relevant, although a certain amount of training might be necessary.

A Perceptual-Statistics Shading Model
Authors:
Veronika Solteszova, Cagatay Turkay, Mark C. Price, Ivan Viola
Abstract :

The process of surface perception is complex and based on several influencing factors, e.g., shading, silhouettes, occluding contours, and top-down cognition. The accuracy of surface perception can be measured and the influencing factors can be modified in order to decrease the error in perception. This paper presents a novel concept of how a perceptual evaluation of a visualization technique can contribute to its redesign with the aim of improving the match between the distal and the proximal stimulus. During analysis of data from previous perceptual studies, we observed that the slant of 3D surfaces visualized on 2D screens is systematically underestimated. The visible trends in the error allowed us to create a statistical model of the perceived surface slant. Based on this statistical model, obtained from user experiments, we derived a new shading model that uses adjusted surface normals and aims to reduce the error in slant perception. The result is a shape enhancement of the visualization which is driven by an experimentally founded statistical model. To assess the efficiency of the statistical shading model, we repeated the evaluation experiment and confirmed that the error in perception was decreased. The results of both user experiments are publicly available as datasets.

Visual Steering and Verification of Mass Spectrometry Data Factorization in Air Quality Research
Authors:
Daniel Engel, Klaus Greff, Christoph Garth, Keith Bein, Anthony Wexler, Bernd Hamann, Hans Hagen
Abstract :

The study of aerosol composition for air quality research involves the analysis of high-dimensional single particle mass spectrometry data. We describe, apply, and evaluate a novel interactive visual framework for dimensionality reduction of such data. Our framework is based on non-negative matrix factorization with specifically defined regularization terms that aid in resolving mass spectrum ambiguity. Thereby, visualization assumes a key role in providing insight into, and allowing active control over, a heretofore elusive data processing step, thus enabling rapid analysis meaningful to domain scientists. In extending existing black-box schemes, we explore design choices for visualizing, interacting with, and steering the factorization process to produce physically meaningful results. A domain-expert evaluation of our system performed by the air quality research experts involved in this effort has shown that our method and prototype enable the discovery of unambiguous and physically correct lower-dimensional basis transformations of mass spectrometry data at significantly increased speed and with a higher degree of ease.
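To make the factorization step concrete, here is a minimal non-negative matrix factorization sketch using the classic multiplicative updates of Lee and Seung, with a simple L1 penalty standing in for the paper's domain-specific regularization terms, which are not reproduced here.

```python
import numpy as np

def nmf(V, k, sparsity=0.01, iters=500, eps=1e-9):
    """Factor V (m x n, non-negative) into W (m x k) and H (k x n)."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W, H = rng.random((m, k)), rng.random((k, n))
    for _ in range(iters):
        # L1 penalty on H enters the denominator of its multiplicative update
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.rand(60, 40)                # stand-in for spectra x particles
W, H = nmf(V, k=5)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative residual
```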

SCIVIS Papers: Volume Data Handling
Session : 
Volume Data Handling
Date & Time : October 17 10:30 am - 12:10 pm
Location : Grand Ballroom D
Chair : Chaoli Wang
Papers : 
Interactive Volume Exploration of Petascale Microscopy Data Streams Using a Visualization-Driven Virtual Memory Approach
Authors:
Markus Hadwiger, Johanna Beyer, Won-Ki Jeong, Hanspeter Pfister
Abstract :
This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data
Authors:
Nathaniel Fout, Kwan-liu Ma
Abstract :
In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
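The shared switched-prediction framework can be sketched as follows (a simplified stand-in for APE/ACE, with assumed predictors and an assumed cost model): per block, try each linear predictor, keep the cheapest, and store the predictor id plus exact residuals so decoding is lossless.

```python
import numpy as np

PREDICTORS = {
    "previous":  lambda x, i: x[i - 1],
    "linear":    lambda x, i: 2 * x[i - 1] - x[i - 2],
    "quadratic": lambda x, i: 3 * x[i - 1] - 3 * x[i - 2] + x[i - 3],
}

def encode_block(block):
    """Pick the predictor minimizing total residual magnitude for one block."""
    best = None
    for name, p in PREDICTORS.items():
        resid = [block[i] - p(block, i) for i in range(3, len(block))]
        cost = sum(abs(r) for r in resid)          # crude proxy for entropy
        if best is None or cost < best[2]:
            best = (name, resid, cost)
    return best[0], list(block[:3]), best[1]       # id + header + residuals

data = np.cumsum(np.arange(64) % 7).tolist()       # piecewise-smooth integers
name, header, resid = encode_block(data)
print(name, max(abs(r) for r in resid))            # small residuals -> few bits
```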
On the Interpolation of Data with Normally Distributed Uncertainty for Visualization
Authors:
Steven Schlegel, Nico Korn, Gerik Scheuermann
Abstract :
In many fields of science or engineering, we are confronted with uncertain data. For that reason, the visualization of uncertainty has received a lot of attention, especially in recent years. In the majority of cases, Gaussian distributions are used to describe uncertain behavior, because they are able to model many phenomena encountered in science. Therefore, in most applications uncertain data is (or is assumed to be) Gaussian distributed. If such uncertain data is given on fixed positions, the question of interpolation arises for many visualization approaches. In this paper, we analyze the effects of the usual linear interpolation schemes for the visualization of Gaussian-distributed data. In addition, we demonstrate that methods known in geostatistics and machine learning have favorable properties for visualization purposes in this case.
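A worked example of the paper's starting point: linearly interpolating two independent Gaussian samples propagates the mean with linear weights but the variance with squared weights, so the interpolated uncertainty dips between samples. The numbers below are illustrative.

```python
import numpy as np

def interp_gaussian(mu0, var0, mu1, var1, t):
    """Linear interpolation of independent Gaussians X0, X1 at weight t."""
    mu = (1 - t) * mu0 + t * mu1
    # Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) for independent X, Y
    var = (1 - t) ** 2 * var0 + t ** 2 * var1
    return mu, var

mu, var = interp_gaussian(0.0, 1.0, 10.0, 1.0, t=0.5)
print(mu, var)        # 5.0, 0.5 -- less variance than either endpoint
samples = 0.5 * np.random.randn(100000) + 0.5 * (np.random.randn(100000) + 10)
print(samples.mean(), samples.var())   # Monte Carlo check: ~5.0, ~0.5
```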
Coherency-Based Curve Compression for High-Order Finite Element Model Visualization
Authors:
Alexander Bock, Erik Sunden, Bingchen Liu, Burkhard Wuensche, Timo Ropinski
Abstract :
Finite element (FE) models are frequently used in engineering and life sciences within time-consuming simulations. In contrast with the regular grid structure facilitated by volumetric data sets, as used in medicine or geosciences, FE models are defined over a non-uniform grid. Elements can have curved faces and their interior can be defined through high-order basis functions, which pose additional challenges when visualizing these models. During ray-casting, the uniformly distributed sample points along each viewing ray must be transformed into the material space defined within each element. The computational complexity of this transformation makes a straightforward approach inadequate for interactive data exploration. In this paper, we introduce a novel coherency-based method which supports the interactive exploration of FE models by decoupling the expensive world-to-material space transformation from the rendering stage, thereby allowing it to be performed within a precomputation stage. Therefore, our approach computes view-independent proxy rays in material space, which are clustered to facilitate data reduction. During rendering, these proxy rays are accessed, and it becomes possible to visually analyze high-order FE models at interactive frame rates, even when they are time-varying or consist of multiple modalities. Within this paper, we provide the necessary background about the FE data, describe our decoupling method, and introduce our interactive rendering algorithm. Furthermore, we provide visual results and analyze the error introduced by the presented approach.
ElVis: A System for the Accurate and Interactive Visualization of High-Order Finite Element Solutions
Authors:
Blake Nelson, Eric Liu, Robert Haimes, Robert M. Kirby
Abstract :

This paper presents the Element Visualizer (ElVis), a new, open-source scientific visualization system for use with high-order finite element solutions to PDEs in three dimensions. This system is designed to minimize visualization errors of these types of fields by querying the underlying finite element basis functions (e.g., high-order polynomials) directly, leading to pixel-exact representations of solutions and geometry. The system interacts with simulation data through runtime plugins, which only require users to implement a handful of operations fundamental to finite element solvers. The data in turn can be visualized through the use of cut surfaces, contours, isosurfaces, and volume rendering. These visualization algorithms are implemented using NVIDIA's OptiX GPU-based ray-tracing engine, which provides accelerated ray traversal of the high-order geometry, and CUDA, which allows for effective parallel evaluation of the visualization algorithms. The direct interface between ElVis and the underlying data differentiates it from existing visualization tools. Current tools assume the underlying data is composed of linear primitives; high-order data must be interpolated with linear functions as a result. In this work, examples drawn from aerodynamic simulations, high-order discontinuous Galerkin finite element solutions of aerodynamic flows in particular, will demonstrate the superiority of ElVis' pixel-exact approach when compared with traditional linear-interpolation methods. Such methods can introduce a number of inaccuracies in the resulting visualization, making it unclear if visual artifacts are genuine to the solution data or if these artifacts are the result of interpolation errors. Linear methods additionally cannot properly visualize curved geometries (elements or boundaries), which can greatly inhibit developers' debugging efforts. As we will show, pixel-exact visualization exhibits none of these issues, removing the visualization scheme as a source of uncertainty for engineers using ElVis.

INFOVIS Papers: Space and Maps
Session : 
Space and Maps
Date & Time : October 17 08:30 am - 10:10 am
Location : Grand Ballroom C
Chair : Tobias Isenberg
Papers : 
Organizing Search Results with a Reference Map
Authors:
Arlind Nocaj, Ulrik Brandes
Abstract :
We propose a method to highlight query hits in hierarchically clustered collections of interrelated items such as digital libraries or knowledge bases. The method is based on the idea that organizing search results similarly to their arrangement on a fixed reference map facilitates orientation and assessment by preserving a user's mental map. Here, the reference map is built from an MDS layout of the items in a Voronoi treemap representing their hierarchical clustering, and we use techniques from dynamic graph layout to align query results with the map. The approach is illustrated on an archive of newspaper articles.
Spatial Text Visualization Using Automatic Typographic Maps
Authors:
Shehzad Afzal, Ross Maciejewski, Yun Jang, Niklas Elmqvist, David S. Ebert
Abstract :
We present a method for automatically building typographic maps that merge text and spatial data into a visual representation where text alone forms the graphical features. We further show how to use this approach to visualize spatial data such as traffic density, crime rate, or demographic data. The technique accepts a vector representation of a geographic map and spatializes the textual labels in the space onto polylines and polygons based on user-defined visual attributes and constraints. Our sample implementation runs as a Web service, spatializing shape files from the OpenStreetMap project into typographic maps for any region.
Stacking-Based Visualization of Trajectory Attribute Data
Authors:
Christian Tominski, Heidrun Schumann, Gennady Andrienko, Natalia Andrienko
Abstract :
Visualizing trajectory attribute data is challenging because it involves showing the trajectories in their spatio-temporal context as well as the attribute values associated with the individual points of trajectories. Previous work on trajectory visualization addresses selected aspects of this problem, but not all of them. We present a novel approach to visualizing trajectory attribute data. Our solution covers space, time, and attribute values. Based on an analysis of relevant visualization tasks, we designed the visualization solution around the principle of stacking trajectory bands. The core of our approach is a hybrid 2D/3D display. A 2D map serves as a reference for the spatial context, and the trajectories are visualized as stacked 3D trajectory bands along which attribute values are encoded by color. Time is integrated through appropriate ordering of bands and through a dynamic query mechanism that feeds temporally aggregated information to a circular time display. An additional 2D time graph shows temporal information in full detail by stacking 2D trajectory bands. Our solution is equipped with analytical and interactive mechanisms for selecting and ordering of trajectories, and adjusting the color mapping, as well as coordinated highlighting and dedicated 3D navigation. We demonstrate the usefulness of our novel visualization by three examples related to radiation surveillance, traffic analysis, and maritime navigation. User feedback obtained in a small experiment indicates that our hybrid 2D/3D solution can be operated quite well.
Adaptive Composite Map Projections
Authors:
Bernhard Jenny
Abstract :
All major web mapping services use the web Mercator projection. This is a poor choice for maps of the entire globe or areas of the size of continents or larger countries because the Mercator projection shows medium and higher latitudes with extreme areal distortion and provides an erroneous impression of distances and relative areas. The web Mercator projection is also not able to show the entire globe, as polar latitudes cannot be mapped. When selecting an alternative projection for information visualization, rivaling factors have to be taken into account, such as map scale, the geographic area shown, the map's height-to-width ratio, and the type of cartographic visualization. It is impossible for a single map projection to meet the requirements for all these factors. The proposed composite map projection combines several projections that are recommended in cartographic literature and seamlessly morphs map space as the user changes map scale or the geographic region displayed. The composite projection adapts the map's geometry to scale, to the map's height-to-width ratio, and to the central latitude of the displayed area by replacing projections and adjusting their parameters. The composite projection shows the entire globe including poles; it portrays continents or larger countries with less distortion (optionally without areal distortion); and it can morph to the web Mercator projection for maps showing small regions.
Algorithms for Labeling Focus Regions
Authors:
Martin Fink, Jan-Henrik Haunert, Andre Schulz, Joachim Spoerhase, Alexander Wolff
Abstract :
Focus+context techniques, data clustering, mobile and ubiquitous visualization, geographic/geospatial visualization.
INFOVIS Papers: Design Spaces
Session : 
Design Spaces
Date & Time : October 17 10:30 am - 12:10 pm
Location : Grand Ballroom C
Chair : Tamara Munzner
Papers : 
Capturing the Design Space of Sequential Space-Filling Layouts
Authors:
Thomas Baudel, Bertjan Broeksema
Abstract :
We characterize the design space of the algorithms that sequentially tile a rectangular area with smaller, fixed-surface rectangles. This space consists of five independent dimensions: Order, Size, Score, Recurse, and Phrase. Each of these dimensions describes a particular aspect of such layout tasks. This class of layouts is interesting because, beyond encompassing simple grids, tables, and trees, it also includes all kinds of treemaps involving the placement of rectangles. For instance, the Slice-and-Dice, Squarified, Strip, and Pivot layouts are various points in this five-dimensional space. Many classic statistics visualizations, such as 100% stacked bar charts, mosaic plots, and dimensional stacking, are also instances of this class. A few new and potentially interesting points in this space are introduced, such as spiral treemaps and variations on the strip layout. The core algorithm is implemented as a JavaScript prototype that can be used as a layout component in a variety of InfoVis toolkits.
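One point of this design space, the classic Slice-and-Dice layout (Order: input order; Size: value; Recurse: per child with alternating split direction), can be sketched as follows; this is a plain illustrative function, not the authors' JavaScript component.

```python
def slice_and_dice(node, x, y, w, h, horizontal=True):
    """node: (value, children). Yields (value, rect) for each leaf rectangle."""
    value, children = node
    if not children:
        yield value, (x, y, w, h)
        return
    total = sum(c[0] for c in children)
    offset = 0.0
    for child in children:
        frac = child[0] / total
        if horizontal:                       # slice along x
            rect = (x + offset * w, y, frac * w, h)
        else:                                # dice along y
            rect = (x, y + offset * h, w, frac * h)
        yield from slice_and_dice(child, *rect, horizontal=not horizontal)
        offset += frac

tree = (10, [(6, []), (4, [(3, []), (1, [])])])
for leaf, rect in slice_and_dice(tree, 0.0, 0.0, 100.0, 100.0):
    print(leaf, [round(v, 1) for v in rect])
```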
Taxonomy-Based Glyph Design-with a Case Study on Visualizing Workflows of Biological Experiments
Authors:
Eamonn Maguire, Philippe Rocca-Serra, Susanna-Assunta Sansone, Jim Davies, Min Chen
Abstract :
Glyph-based visualization can offer elegant and concise presentation of multivariate information while enhancing speed and ease in visual search experienced by users. As with icon designs, glyphs are usually created based on the designers' experience and intuition, often in a spontaneous manner. Such a process does not scale well with the requirements of applications where a large number of concepts are to be encoded using glyphs. To alleviate such limitations, we propose a new systematic process for glyph design by exploring the parallel between the hierarchy of concept categorization and the ordering of discriminative capacity of visual channels. We examine the feasibility of this approach in an application where there is a pressing need for an efficient and effective means to visualize workflows of biological experiments. By processing thousands of workflow records in a public archive of biological experiments, we demonstrate that a cost-effective glyph design can be obtained by following a process of formulating a taxonomy with the aid of computation, identifying visual channels hierarchically, and defining application-specific abstraction and metaphors.
An Empirical Model of Slope Ratio Comparisons
Authors:
Justin Talbot, John Gerth, Pat Hanrahan
Abstract :
Comparing slopes is a fundamental graph reading task and the aspect ratio chosen for a plot influences how easy these comparisons are to make. According to Banking to 45 degrees, a classic design guideline first proposed and studied by Cleveland et al., aspect ratios that center slopes around 45 degrees minimize errors in visual judgments of slope ratios. This paper revisits this earlier work. Through exploratory pilot studies that expand Cleveland et al.'s experimental design, we develop an empirical model of slope ratio estimation that fits more extreme slope ratio judgments and two common slope ratio estimation strategies. We then run two experiments to validate our model. In the first, we show that our model fits more generally than the one proposed by Cleveland et al. and we find that, in general, slope ratio errors are not minimized around 45 degrees. In the second experiment, we explore a novel hypothesis raised by our model: that visible baselines can substantially mitigate errors made in slope judgments. We conclude with an application of our model to aspect ratio selection.
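For reference, the baseline guideline the paper revisits can be sketched as median-slope banking: choose the aspect ratio that brings the median absolute normalized segment slope to 45 degrees. This is the classic heuristic, not the authors' empirical model.

```python
import numpy as np

def bank_to_45(x, y):
    """Height/width aspect ratio that banks the median |slope| to 45 degrees."""
    dx, dy = np.diff(x), np.diff(y)
    # segment slopes after normalizing both axes to a unit square
    slopes = np.abs(dy / dx) * np.ptp(x) / (np.ptp(y) + 1e-12)
    return 1.0 / np.median(slopes)   # stretch height until the median slope is 1

x = np.linspace(0, 4 * np.pi, 200)
print(round(bank_to_45(x, np.sin(x)), 3))   # suggested height/width ratio
```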
Representative Factor Generation for the Interactive Visual Analysis of High-Dimensional Data
Authors:
Cagatay Turkay, Arvid Lundervold, Astri Johansen Lundervold, Helwig Hauser
Abstract :
Datasets with a large number of dimensions per data item (hundreds or more) are challenging both for computational and visual analysis. Moreover, these dimensions have different characteristics and relations that result in sub-groups and/or hierarchies over the set of dimensions. Such structures lead to heterogeneity within the dimensions. Although the consideration of these structures is crucial for the analysis, most of the available analysis methods discard the heterogeneous relations among the dimensions. In this paper, we introduce the construction and utilization of representative factors for the interactive visual analysis of structures in high-dimensional datasets. First, we present a selection of methods to investigate the sub-groups in the dimension set and associate representative factors with those groups of dimensions. Second, we introduce how these factors are included in the interactive visual analysis cycle together with the original dimensions. We then provide the steps of an analytical procedure that iteratively analyzes the datasets through the use of representative factors. We discuss how our methods improve the reliability and interpretability of the analysis process by enabling more informed selections of computational tools. Finally, we demonstrate our techniques on the analysis of brain imaging study results that are performed over a large group of subjects.
Graphical Overlays: Using Layered Elements to Aid Chart Reading
Authors:
Nicholas Kong, Maneesh Agrawala
Abstract :
Reading a visualization can involve a number of tasks such as extracting, comparing, or aggregating numerical values. Yet, most of the charts that are published in newspapers, reports, books, and on the Web only support a subset of these tasks. In this paper we introduce graphical overlays: visual elements that are layered onto charts to facilitate a larger set of chart reading tasks. These overlays directly support the lower-level perceptual and cognitive processes that viewers must perform to read a chart. We identify five main types of overlays that support these processes; the overlays can provide (1) reference structures such as gridlines, (2) highlights such as outlines around important marks, (3) redundant encodings such as numerical data labels, (4) summary statistics such as the mean or max, and (5) annotations such as descriptive text for context. We then present an automated system that applies user-chosen graphical overlays to existing chart bitmaps. Our approach is based on the insight that generating most of these graphical overlays only requires knowing the properties of the visual marks and axes that encode the data, but does not require access to the underlying data values. Thus, our system analyzes the chart bitmap to extract only the properties necessary to generate the desired overlay. We also discuss techniques for generating interactive overlays that provide additional controls to viewers. We demonstrate several examples of each overlay type for bar, pie, and line charts.
VAST Papers: Sensemaking and Collaboration
Session : 
Sensemaking and Collaboration
Date & Time : October 17 08:30 am - 10:10 am
Location : Grand Ballroom B
Chair : Niklas Elmqvist
Papers : 
Examining the Use of a Visual Analytics System for Sensemaking Tasks: Case Studies with Domain Experts
Authors:
Youn-ah Kang, John Stasko
Abstract :
While the formal evaluation of systems in visual analytics is still relatively uncommon, particularly rare are case studies of prolonged system use by domain analysts working with their own data. Conducting case studies can be challenging, but it can be a particularly effective way to examine whether visual analytics systems are truly helping expert users to accomplish their goals. We studied the use of a visual analytics system for sensemaking tasks on documents by six analysts from a variety of domains. We describe their application of the system along with the benefits, issues, and problems that we uncovered. Findings from the studies identify features that visual analytics systems should emphasize as well as missing capabilities that should be addressed. These findings inform design implications for future systems.
Semantic Interaction for Sensemaking: Inferring Analytical Reasoning for Model Steering
Authors:
Alex Endert, Patrick Fiaux, Chris North
Abstract :
Visual analytic tools aim to support the cognitively demanding task of sensemaking. Their success often depends on the ability to leverage capabilities of mathematical models, visualization, and human intuition through flexible, usable, and expressive interactions. Spatially clustering data is one effective metaphor for users to explore similarity and relationships between information, adjusting the weighting of dimensions or characteristics of the dataset to observe the change in the spatial layout. Semantic interaction is an approach to user interaction in such spatializations that couples these parametric modifications of the clustering model with users' analytic operations on the data (e.g., direct document movement in the spatialization, highlighting text, search, etc.). In this paper, we present results of a user study exploring the ability of semantic interaction in a visual analytic prototype, ForceSPIRE, to support sensemaking. We found that semantic interaction captures the analytical reasoning of the user through keyword weighting, and aids the user in co-creating a spatialization based on the user's reasoning and intuition.
SocialNetSense: Supporting Sensemaking of Social and Structural Features in Networks with Interactive Visualization
Authors:
Liang Gou, Xiaolong (Luke) Zhang, Airong Luo, Patricia F Anderson
Abstract :
Increasingly, social network datasets contain social attribute information about actors and their relationships. Analyzing such networks with social attributes requires making sense of not only their structural features, but also the relationship between social features in attributes and network structures. Existing social network analysis tools are usually weak in supporting complex analytical tasks involving both structural and social features, and often overlook users' needs for sensemaking tools that help to gather, synthesize, and organize information about these features. To address these challenges, we propose a sensemaking framework of social-network visual analytics in this paper. This framework considers both bottom-up processes, which are about constructing new understandings based on collected information, and top-down processes, which concern using prior knowledge to guide information collection, in analyzing social networks from both social and structural perspectives. The framework also emphasizes the externalization of sensemaking processes through interactive visualization. Guided by the framework, we develop a system, SocialNetSense, to support sensemaking in the visual analytics of social networks with social attributes. The example of using our system to analyze a scholar collaboration network shows that our approach can help users gain insight into social networks both structurally and socially, and enhance their process awareness in visual analytics.
An Affordance-Based Framework for Human Computation and Human-Computer Collaboration
Authors:
R. Jordan Crouser, Remco Chang
Analyst's Workspace: An Embodied Sensemaking Environment for Large, High Resolution Displays
Authors:
Christopher Andrews, Chris North
Abstract :
Distributed cognition and embodiment provide compelling models for how humans think and interact with the environment. Our examination of the use of large, high-resolution displays from an embodied perspective has led directly to the development of a new sensemaking environment called Analyst's Workspace (AW). AW leverages the embodied resources made more accessible through the physical nature of the display to create a spatial workspace. By combining spatial layout of documents and other artifacts with an entity-centric, explorative investigative approach, AW aims to allow the analyst to externalize elements of the sensemaking process as a part of the investigation, integrated into the visual representations of the data itself. In this paper, we describe the various capabilities of AW and discuss the key principles and concepts underlying its design, emphasizing unique design principles for designing visual analytic tools for large, high-resolution displays.
TVCG Papers: InfoVis
Session : 
TVCG Papers: InfoVis
Date & Time : October 17 04:15 pm - 05:55 pm
Location : Grand Ballroom C
Chair : Stephen North
Papers : 
Empirical Studies in Information Visualization: Seven Scenarios
Authors:
Heidi Lam, Enrico Bertini, Petra Isenberg, Catherine Plaisant and Sheelagh Carpendale
Enhanced Spatial Stability with Hilbert and Moore Treemaps
Authors:
Susanne Tak and Andy Cockburn
LineAO - Improved Three-Dimensional Line Rendering
Authors:
Sebastian Eichelbaum, Mario Hlawitschka and Gerik Scheuermann
Using Patterns to Encode Color Information for Dichromats
Authors:
Behzad Sajadi, Aditi Majumder, Manuel M. Oliveira, Rosalia G. Schneider and Ramesh Raskar
Toward Visualization for Games: Theory, Design Space, and Patterns
Authors:
Brian Bowman, Niklas Elmqvist and T. J. Jankun-Kelly
SCIVIS Papers: Physical Science Applications
Session : 
Physical Science Applications
Date & Time : October 18 04:15 pm - 05:55 pm
Location : Grand Ballroom D
Chair : Heike Leitte
Papers : 
KnotPad: Visualizing and Exploring Knot Theory with Fluid Reidemeister Moves
Authors:
Hui Zhang, Jianguang Weng, Lin Jing, Yiwen Zhong
Abstract :

We present KnotPad, an interactive paper-like system for visualizing and exploring mathematical knots; we exploit topological drawing and math-aware deformation methods in particular to enable and enrich our interactions with knot diagrams. Whereas most previous efforts typically employ physically based modeling to simulate the 3D dynamics of knots and ropes, our tool offers a Reidemeister-move-based interactive environment that is much closer to the topological problems being solved in knot theory, yet without interfering with the traditional advantages of paper-based analysis and manipulation of knot diagrams. Drawing knot diagrams with many crossings and producing their equivalents is quite challenging and error-prone. KnotPad can restrict user manipulations to the three types of Reidemeister moves, resulting in a more fluid yet mathematically correct user experience with knots. For our principal test case of mathematical knots, KnotPad permits us to draw and edit their diagrams, empowered by a family of interactive techniques. Furthermore, we exploit supplementary interface elements to enrich the user experiences. For example, KnotPad allows one to pull and drag on knot diagrams to produce mathematically valid moves. Navigation enhancements in KnotPad provide still further improvement: by remembering and displaying the sequence of valid moves applied during the entire interaction, KnotPad allows a much cleaner exploratory interface for the user to analyze and study knot equivalence. All these methods combine to reveal the complex spatial relationships of knot diagrams with a mathematically true and rich user experience.

Visualization of Electrostatic Dipoles in Molecular Dynamics of Metal Oxides
Authors:
Sebastian Grottel, Philipp Beck, Christoph Muller, Guido Reina, Johannes Roth, Hans-Rainer Trebin, T
Abstract :
Metal oxides are important for many technical applications. For example, alumina (aluminum oxide) is the most commonly used ceramic in microelectronic devices thanks to its excellent properties. Experimental studies of these materials are increasingly supplemented with computer simulations. Molecular dynamics (MD) simulations can reproduce the material behavior very well and are now reaching time scales relevant for interesting processes like crack propagation. In this work we focus on the visualization of induced electric dipole moments on oxygen atoms in crack propagation simulations. The straightforward visualization using glyphs for the individual atoms, simple shapes like spheres or arrows, is insufficient for providing information about the data set as a whole. As our contribution we show for the first time that fractional anisotropy values computed from the local neighborhood of individual atoms of MD simulation data depict important information about relevant properties of the field of induced electric dipole moments. Isosurfaces in the field of fractional anisotropy as well as adjustments of the glyph representation allow the user to identify regions of correlated orientation. We present novel and relevant findings for the application domain resulting from these visualizations, like the influence of mechanical forces on the electrostatic properties.
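The anisotropy measure can be illustrated as follows: build a second-moment orientation tensor from the dipole vectors in an atom's neighborhood and compute its fractional anisotropy, which approaches 1 for aligned dipoles and 0 for isotropic ones. The neighborhood construction and the orientation-tensor choice are assumptions, not necessarily the authors' exact formulation.

```python
import numpy as np

def fractional_anisotropy(dipoles):
    """dipoles: (n, 3) array of dipole moment vectors from one neighborhood."""
    T = dipoles.T @ dipoles / len(dipoles)          # 3x3 orientation tensor
    lam = np.linalg.eigvalsh(T)
    mean = lam.mean()
    # FA = sqrt(3/2) * ||lam - mean|| / ||lam||
    return np.sqrt(1.5 * ((lam - mean) ** 2).sum() / (lam ** 2).sum())

aligned = np.tile([0.0, 0.0, 1.0], (50, 1)) + 0.05 * np.random.randn(50, 3)
isotropic = np.random.randn(50, 3)
print(fractional_anisotropy(aligned), fractional_anisotropy(isotropic))
```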
Cumulative Heat Diffusion Using Volume Gradient Operator for Volume Analysis
Authors:
Krishna Chaitanya Gurijala, Lei Wang, Arie Kaufman
Abstract :
We introduce a simple yet powerful method called cumulative heat diffusion for shape-based volume analysis that drastically reduces the computational cost compared to conventional heat diffusion. Unlike the conventional heat diffusion process, where the diffusion is carried out by considering each node separately as the source, we simultaneously consider all the voxels as sources and carry out the diffusion, hence the term cumulative heat diffusion. In addition, we introduce a new operator that is used in the evaluation of cumulative heat diffusion called the Volume Gradient Operator (VGO). VGO is a combination of the Laplace-Beltrami operator (LBO) and a data-driven operator which is a function of the half gradient. The half gradient is the absolute value of the difference between the voxel intensities. The VGO by its definition captures the local shape information and is used to assign the initial heat values. Furthermore, VGO is also used as the weighting parameter for the heat diffusion process. We demonstrate that our approach can robustly extract shape-based features and thus forms the basis for an improved classification and exploration of features based on shape.
A Novel Approach to Visualizing Dark Matter Simulations
Authors:
Ralf Kaehler, Oliver Hahn, Tom Abel
Abstract :
In the last decades, cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle-based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point-based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques, common in SPH (Smoothed Particle Hydrodynamics) codes. This paper proposes three GPU-assisted rendering approaches, based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular they preserve caustics, regions of high density that emerge when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments, and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.
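The density estimate at the core of the method can be illustrated in a few lines: each tetrahedron of the phase-space tessellation carries a fixed mass, so its density is that mass divided by its current, possibly warped, volume. The mass value and vertex positions below are illustrative.

```python
import numpy as np

def tet_density(vertices, mass=1.0):
    """vertices: (4, 3) positions of one advected tetrahedron."""
    a, b, c, d = vertices
    volume = abs(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6.0
    return mass / volume        # density rises as gravity compresses the tet

tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
print(tet_density(tet))         # 6.0 for the unit simplex (volume 1/6)
squeezed = tet * np.array([1.0, 1.0, 0.1])
print(tet_density(squeezed))    # 10x denser after compression along z
```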
Visual Data Analysis as an Integral Part of Environmental Management
Authors:
Joerg Meyer, E. Wes Bethel, Jennifer L. Horsman, Susan S. Hubbard, Harinarayan Krishnan, Alexandru R
Abstract :
The U.S. Department of Energy's (DOE) Office of Environmental Management (DOE/EM) currently supports an effort to understand and predict the fate of nuclear contaminants and their transport in natural and engineered systems. Geologists, hydrologists, physicists and computer scientists are working together to create models of existing nuclear waste sites, to simulate their behavior and to extrapolate it into the future. We use visualization as an integral part in each step of this process. In the first step, visualization is used to verify model setup and to estimate critical parameters. High-performance computing simulations of contaminant transport produce massive amounts of data, which are then analyzed using visualization software specifically designed for parallel processing of large amounts of structured and unstructured data. Finally, simulation results are validated by comparing them to measured current and historical field data. We describe in this article how visual analysis is used as an integral part of the decision-making process in the planning of ongoing and future treatment options for the contaminated nuclear waste sites. Lessons learned from visually analyzing our large-scale simulation runs will also have an impact on deciding on treatment measures for other contaminated sites.
SCIVIS Papers: Geo-Applications
Session : 
Geo-Applications
Date & Time : October 18 02:00 pm - 03:40 pm
Location : Grand Ballroom D
Chair : Jens Krueger
Papers : 
Visualization of Astronomical Nebulae via Distributed Multi-GPU Compressed Sensing Tomography
Authors:
Stephan Wenger, Marco Ament, Stefan Guthe, Dirk Lorenz, Andreas Tillmann, Daniel Weiskopf, Marcus Ma
Abstract :
The 3D visualization of astronomical nebulae is a challenging problem since only a single 2D projection is observable from our fixed vantage point on Earth. We attempt to generate plausible and realistic looking volumetric visualizations via a tomographic approach that exploits the spherical or axial symmetry prevalent in some relevant types of nebulae. Different types of symmetry can be implemented by using different randomized distributions of virtual cameras. Our approach is based on an iterative compressed sensing reconstruction algorithm that we extend with support for position-dependent volumetric regularization and linear equality constraints. We present a distributed multi-GPU implementation that is capable of reconstructing high-resolution datasets from arbitrary projections. Its robustness and scalability are demonstrated for astronomical imagery from the Hubble Space Telescope. The resulting volumetric data is visualized using direct volume rendering. Compared to previous approaches, our method preserves a much higher amount of detail and visual variety in the 3D visualization, especially for objects with only approximate symmetry.
Visualization of Flow Behavior in Earth Mantle Convection
Authors:
Simon Schroder, John A. Peterson, Harald Obermaier, Louise H. Kellogg, Kenneth I. Joy, Hans Hagen
Abstract :
A fundamental characteristic of fluid flow is that it causes mixing: introduce a dye into a flow, and it will disperse. Mixing can be used as a method to visualize and characterize flow. Because mixing is a process that occurs over time, it is a 4D problem that presents a challenge for computation, visualization, and analysis. Motivated by a mixing problem in geophysics, we introduce a combination of methods to analyze, transform, and finally visualize mixing in simulations of convection in a self-gravitating 3D spherical shell representing convection in the Earth's mantle. Geophysicists use tools such as the finite element model CitcomS to simulate convection, and introduce massless, passive tracers to model mixing. The output of geophysical flow simulation is hard to analyze for domain experts because of overall data size and complexity. In addition, information overload and occlusion are problems when visualizing a whole-earth model. To address the large size of the data, we rearrange the simulation data using intelligent indexing for fast file access and efficient caching. To address information overload and interpret mixing, we compute tracer concentration statistics, which are used to characterize mixing in mantle convection models. Our visualization uses a specially tailored version of direct volume rendering. The most important adjustment is the use of constant opacity. Because of this special area of application, i.e., the rendering of a spherical shell, many computations for volume rendering can be optimized. These optimizations are essential to a smooth animation of the time-dependent simulation data. Our results show how our system can be used to quickly assess the simulation output and test hypotheses regarding Earth's mantle convection. The integrated processing pipeline helps geoscientists to focus on their main task of analyzing mantle homogenization.
Interactive Retro-Deformation of Terrain for Reconstructing 3D Fault Displacements
Authors:
Rolf Westerteiger, Tracy Compton, Tony Bernardin, Eric Cowgill, Klaus Gwinner, Bernd Hamann, Andreas
Abstract :
Planetary topography is the result of complex interactions between geological processes, of which faulting is a prominent component. Surface-rupturing earthquakes cut and move landforms which develop across active faults, producing characteristic surface displacements across the fault. Geometric models of faults and their associated surface displacements are commonly applied to reconstruct these offsets to enable interpretation of the observed topography. However, current 2D techniques are limited in their capability to convey both the three-dimensional kinematics of faulting and the incremental sequence of events required by a given reconstruction. Here we present a real-time system for interactive retro-deformation of faulted topography to enable reconstruction of fault displacement within a high-resolution (sub 1m/pixel) 3D terrain visualization. We employ geometry shaders on the GPU to intersect the surface mesh with fault-segments interactively specified by the user and transform the resulting surface blocks in realtime according to a kinematic model of fault motion. Our method facilitates a human-in-the-loop approach to reconstruction of fault displacements by providing instant visual feedback while exploring the parameter space. Thus, scientists can evaluate the validity of traditional point-to-point reconstructions by visually examining a smooth interpolation of the displacement in 3D. We show the efficacy of our approach by using it to reconstruct segments of the San Andreas fault, California as well as a graben structure in the Noctis Labyrinthus region on Mars.
A Visual Analysis Concept for the Validation of Geoscientific Simulation Models
Authors:
Andrea Unger, Sven Schulte, Volker Klemann, Doris Dransch
Abstract :
Geoscientific modeling and simulation helps to improve our understanding of the complex Earth system. During the modeling process, validation of the geoscientific model is an essential step. In validation, it is determined whether the model output shows sufficient agreement with observation data. Measures for this agreement are called goodness of fit. In the geosciences, analyzing the goodness of fit is challenging due to its manifold dependencies: 1) The goodness of fit depends on the model parameterization, whose precise values are not known. 2) The goodness of fit varies in space and time due to the spatio-temporal dimension of geoscientific models. 3) The significance of the goodness of fit is affected by the resolution and precision of the available observation data. 4) The correlation between goodness of fit and the underlying modeled and observed values is ambiguous. In this paper, we introduce a visual analysis concept that targets these challenges in the validation of geoscientific models, focusing specifically on applications where observation data is sparse, unevenly distributed in space and time, and imprecise, which hinders a rigorous analytical approach. Our concept, developed in close cooperation with Earth system modelers, addresses the four challenges by four tailored visualization components. The tight linking of these components supports a twofold interactive drill-down in model parameter space and in the set of data samples, which facilitates the exploration of the numerous dependencies of the goodness of fit. We exemplify our visualization concept for geoscientific modeling of glacial isostatic adjustment over the last 100,000 years, validated against sea-level indicators, a prominent example of sparse and imprecise observation data. An initial use case and feedback from Earth system modelers indicate that our visualization concept is a valuable complement to the range of validation methods.
SeiVis: An Interactive Visual Subsurface Modeling Application
Authors:
Thomas Hollt, Wolfgang Freiler, Fritz M. Gschwantner, Helmut Doleisch, Gabor Heinemann, Markus Hadwiger
Abstract :
The most important resources to fulfill today's energy demands are fossil fuels, such as oil and natural gas. When exploiting hydrocarbon reservoirs, a detailed and credible model of the subsurface structures is crucial in order to minimize economic and ecological risks. Creating such a model is an inverse problem: reconstructing structures from measured reflection seismics. The major challenge here is twofold: First, the structures in highly ambiguous seismic data are interpreted in the time domain. Second, a velocity model has to be built from this interpretation to match the model to depth measurements from wells. If it is not possible to obtain a match at all positions, the interpretation has to be updated, going back to the first step. In many cases, this results in a lengthy back-and-forth between the different steps or in an unphysical velocity model. This paper presents a novel, integrated approach to interactively creating subsurface models from reflection seismics. It combines the interpretation of the seismic data, using an interactive horizon extraction technique based on piecewise global optimization, with velocity modeling. Computing and visualizing the effects of changes to the interpretation and velocity model on the depth-converted model on the fly creates an integrated feedback loop that connects the seismic data in the time domain with well data in the depth domain in a completely new way. Using a novel joint time/depth visualization, depicting side-by-side views of the original and the resulting depth-converted data, domain experts can directly fit their interpretation in the time domain to spatial ground-truth data. We have conducted a domain expert evaluation, which illustrates that the presented workflow enables exact subsurface models to be created much more rapidly than with previous approaches.
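The depth conversion at the heart of SeiVis's feedback loop is the standard time-to-depth integration with interval velocities; a minimal sketch, assuming a layer-cake model with one interval velocity per horizon (the array layout is an illustrative assumption):

```python
import numpy as np

def time_to_depth(horizon_twt, layer_velocity):
    """Convert horizon two-way travel times to depths with a layered
    interval-velocity model: each layer contributes v * dt / 2.
    horizon_twt: (k, n) two-way times in seconds for k horizons at n
    traces, increasing downward; layer_velocity: (k,) m/s above each
    horizon."""
    dt = np.diff(horizon_twt, axis=0, prepend=0.0)    # per-layer two-way time
    depth = np.cumsum(layer_velocity[:, None] * dt / 2.0, axis=0)
    return depth                                      # meters, shape (k, n)
```

Recomputing this whole cumulative sum after an edit is cheap, which is what makes the paper's on-the-fly update of the depth-converted model plausible.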
SCIVIS Papers: Volume Rendering
Session : 
Volume Rendering
Date & Time : October 18 10:30 am - 12:10 pm
Location : Grand Ballroom D
Chair : Carlos Correa
Papers : 
Fuzzy Volume Rendering
Authors:
Nathaniel Fout, Kwan-liu Ma
Abstract :
In order to assess the reliability of volume rendering, it is necessary to consider the uncertainty associated with the volume data and how it is propagated through the volume rendering algorithm, as well as the contribution to uncertainty from the rendering algorithm itself. In this work, we show how to apply concepts from the field of reliable computing in order to build a framework for management of uncertainty in volume rendering, with the result being a self-validating computational model to compute a posteriori uncertainty bounds. We begin by adopting a coherent, unifying possibility-based representation of uncertainty that is able to capture the various forms of uncertainty that appear in visualization, including variability, imprecision, and fuzziness. Next, we extend the concept of the fuzzy transform in order to derive rules for accumulation and propagation of uncertainty. This representation and propagation of uncertainty together constitute an automated framework for management of uncertainty in visualization, which we then apply to volume rendering. The result, which we call fuzzy volume rendering, is an uncertainty-aware rendering algorithm able to produce more complete depictions of the volume data, thereby allowing more reliable conclusions and informed decisions. Finally, we compare approaches for self-validated computation in volume rendering, demonstrating that our chosen method has the ability to handle complex uncertainty while maintaining efficiency.
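To make the idea of a posteriori uncertainty bounds concrete, here is a minimal sketch of front-to-back compositing where each sample carries interval bounds; plain interval arithmetic stands in for the paper's possibility-based calculus and fuzzy-transform propagation rules, so the bounds are conservative rather than the authors' exact ones.

```python
def composite_with_bounds(samples):
    """Front-to-back compositing where every sample carries interval
    bounds on color and opacity, yielding bounds on the final pixel value.
    samples: iterable of ((c_lo, c_hi), (a_lo, a_hi)) along one ray."""
    c_lo = c_hi = 0.0        # accumulated color bounds
    t_lo, t_hi = 1.0, 1.0    # remaining transparency bounds
    for (col_lo, col_hi), (a_lo, a_hi) in samples:
        # Everything is nonnegative, so products of lower (upper) bounds
        # bound the true contribution from below (above).
        c_lo += t_lo * a_lo * col_lo
        c_hi += t_hi * a_hi * col_hi
        t_lo *= 1.0 - a_hi   # high opacity removes the most transparency
        t_hi *= 1.0 - a_lo
    return (c_lo, c_hi), (t_lo, t_hi)
```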
Automatic Tuning of Spatially Varying Transfer Functions for Blood Vessel Visualization
Authors:
Gunnar Lathen, Stefan Lindholm, Reiner Lenz, Anders Persson, Magnus Borga
Abstract :
Computed Tomography Angiography (CTA) is commonly used in clinical routine for diagnosing vascular diseases. The procedure involves the injection of a contrast agent into the blood stream to increase the contrast between the blood vessels and the surrounding tissue in the image data. CTA is often visualized with Direct Volume Rendering (DVR), where the enhanced image contrast is important for the construction of Transfer Functions (TFs). For increased efficiency, clinical routine relies heavily on preset TFs to simplify the creation of such visualizations for a physician. In practice, however, TF presets often do not yield optimal images due to variations in the mixture concentration of contrast agent in the blood stream. In this paper we propose an automatic, optimization-based method that shifts TF presets to account for general deviations and local variations of the intensity of contrast-enhanced blood vessels. This method has several advantages: It computationally automates large parts of a process that is currently performed manually. It performs the TF shift locally and can thus optimize larger portions of the image than is possible with manual interaction. The optimization criterion is based on a well-known vesselness descriptor. The performance of the method is illustrated by clinically relevant CT angiography datasets displaying both improved structural overviews of vessel trees and improved adaptation to local variations of contrast concentration.
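A brute-force toy version of the optimization idea: try a set of global intensity shifts of a TF preset and keep the one that responds most strongly inside vessel-like voxels. The paper instead fits the shift locally and spatially varying; here the vesselness volume is assumed precomputed (e.g., with a Frangi-style filter), and all names are illustrative.

```python
import numpy as np

def best_tf_shift(intensity, vesselness, tf_opacity, shifts):
    """Pick the intensity offset that makes a TF preset respond most
    strongly inside vessel-like voxels. tf_opacity is a callable mapping
    (shifted) intensities to opacities in [0, 1]; a global brute-force
    simplification of the paper's local optimization."""
    scores = [float(np.sum(vesselness * tf_opacity(intensity + s)))
              for s in shifts]
    return shifts[int(np.argmax(scores))]
```

For CT data, a caller might search something like shifts = np.linspace(-200, 200, 81) in Hounsfield units.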
Hierarchical Exploration of Volumes Using Multilevel Segmentation of the Intensity-Gradient Histograms
Authors:
Cheuk Yiu Ip, Amitabh Varshney, Joseph JaJa
Abstract :
Visual exploration of volumetric datasets to discover the embedded features and spatial structures is a challenging and tedious task. In this paper we present a semi-automatic approach to this problem that works by visually segmenting the intensity-gradient 2D histogram of a volumetric dataset into an exploration hierarchy. Our approach mimics user exploration behavior by analyzing the histogram with the normalized-cut multilevel segmentation technique. Unlike previous work in this area, our technique segments the histogram into a reasonable set of intuitive components that are mutually exclusive and collectively exhaustive. We use information-theoretic measures of the volumetric data segments to guide the exploration. This provides a data-driven coarse-to-fine hierarchy for a user to interactively navigate the volume in a meaningful manner.
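A sketch of the histogram-segmentation step, using scikit-learn's spectral clustering as a stand-in for the paper's normalized-cut technique (spectral clustering optimizes a closely related criterion); the feature construction and all parameters are illustrative, and only a single level of the exploration hierarchy is produced.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_histogram(volume, n_segments=8, bins=64):
    """Build the 2D intensity / gradient-magnitude histogram of a volume
    and partition its occupied bins with spectral clustering. Features
    mix bin coordinates with log-counts; one level of a hierarchy."""
    gmag = np.linalg.norm(np.gradient(volume.astype(float)), axis=0)
    hist, _, _ = np.histogram2d(volume.ravel(), gmag.ravel(), bins=bins)
    iy, ix = np.nonzero(hist)                     # occupied bins only
    feats = np.column_stack([iy, ix, np.log1p(hist[iy, ix])])
    labels = SpectralClustering(n_clusters=n_segments,
                                affinity="nearest_neighbors",
                                n_neighbors=10).fit_predict(feats)
    seg = np.full(hist.shape, -1, dtype=int)      # -1 marks empty bins
    seg[iy, ix] = labels
    return seg
```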
Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping
Authors:
Daniel Jonsson, Joel Kronander, Timo Ropinski, Anders Ynnerman
Abstract :
In this paper, we enable interactive volumetric global illumination by extending photon mapping techniques to handle interactive transfer function (TF) and material editing in the context of volume rendering. We propose novel algorithms and data structures for finding and evaluating parts of a scene affected by these parameter changes, and thus support efficient updates of the photon map. In direct volume rendering (DVR), the ability to explore volume data using parameter changes, such as editable TFs, is of key importance. Advanced global illumination techniques are in most cases computationally too expensive, as they prevent the desired interactivity. Our technique decreases the amount of computation caused by parameter changes by introducing Historygrams, which allow us to efficiently reuse previously computed photon media interactions. Along the viewing rays, we utilize properties of the light transport equations to subdivide a view-ray into segments and independently update them when invalid. Unlike segments of a view-ray, photon scattering events within the volumetric medium need to be updated sequentially. Using our Historygram approach, we can identify the first invalid photon interaction caused by a property change, and thus reuse all valid photon interactions. Combining these two novel concepts supports interactive editing of parameters when using volumetric photon mapping in the context of DVR. As a consequence, we can handle arbitrarily shaped and positioned light sources, arbitrary phase functions, bidirectional reflectance distribution functions, and multiple scattering, which has previously not been possible in interactive DVR.
Structure-Aware Lighting Design for Volume Visualization
Authors:
Yubo Tao, Hai Lin, Feng Dong, Chao Wang, Gordon Clapworthy, and Hujun Bao
Abstract :
Lighting design is a complex, but fundamental, problem in many fields. In volume visualization, direct volume rendering generates an informative image without external lighting, as each voxel itself emits radiance. However, external lighting further improves the shape and detail perception of features, and it also determines the effectiveness of the communication of feature information. The human visual system is highly effective in extracting structural information from images, and to assist it further, this paper presents an approach to structure-aware automatic lighting design by measuring the structural changes between the images with and without external lighting. Given a transfer function and a viewpoint, the optimal lighting parameters are those that provide the greatest enhancement to structural information: the shape and detail information of features are conveyed most clearly by the optimal lighting parameters. Besides lighting goodness, the proposed metric can also be used to evaluate lighting similarity and stability between two sets of lighting parameters. Lighting similarity can be used to optimize the selection of multiple light sources so that different light sources can reveal distinct structural information. Our experiments with several volume data sets demonstrate the effectiveness of the structure-aware lighting design approach. It is well suited for use by novices as it requires little technical understanding of the rendering parameters associated with direct volume rendering.
SCIVIS Papers: Analytics
Session : 
Analytics
Date & Time : October 18 08:30 am - 10:10 am
Location : Grand Ballroom D
Chair : Helwig Hauser
Papers : 
Multivariate Data Analysis Using Persistence-Based Filtering and Topological Signatures
Authors:
Bastian Rieck, Hubert Mara, Heike Leitte
Abstract :
The extraction of significant structures in arbitrary high-dimensional data sets is a challenging task. Moreover, classifying data points as noise in order to reduce a data set bears special relevance for many application domains. Standard methods such as clustering serve to reduce problem complexity by providing the user with classes of similar entities. However, they usually do not highlight relations between different entities and require a stopping criterion, e.g. the number of clusters to be detected. In this paper, we present a visualization pipeline based on recent advancements in algebraic topology. More precisely, we employ methods from persistent homology that enable topological data analysis on high-dimensional data sets. Our pipeline inherently copes with noisy data and data sets of arbitrary dimensions. It extracts central structures of a data set in a hierarchical manner by using a persistence-based filtering algorithm that is theoretically well-founded. We furthermore introduce persistence rings, a novel visualization technique for a class of topological features, the persistence intervals, of large data sets. Persistence rings provide a unique topological signature of a data set, which helps in recognizing similarities. In addition, we provide interactive visualization techniques that assist the user in evaluating the parameter space of our method in order to extract relevant structures. We describe and evaluate our analysis pipeline by means of two very distinct classes of data sets: First, a class of synthetic data sets containing topological objects is employed to highlight the interaction capabilities of our method. Second, in order to affirm the utility of our technique, we analyse a class of high-dimensional real-world data sets arising from current research in cultural heritage.
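For readers unfamiliar with the machinery, the simplest ingredient of such a pipeline, the 0-dimensional persistence intervals of a point cloud, can be computed with a plain union-find; a sketch under that restriction (the paper's filtering and persistence rings operate on richer topological features):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def zero_dim_persistence(points):
    """0-dimensional persistence intervals of a point cloud: the scales
    at which connected components merge, found Kruskal-style with a
    union-find over pairwise distances."""
    d = squareform(pdist(points))
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    pairs = np.dstack(np.triu_indices(n, k=1))[0].tolist()
    pairs.sort(key=lambda e: d[e[0], e[1]])
    intervals = []
    for i, j in pairs:
        ri, rj = find(i), find(j)
        if ri != rj:                        # a component dies at this scale
            parent[ri] = rj
            intervals.append((0.0, float(d[i, j])))
    return intervals                        # the one infinite interval is omitted
```

Components that die at small scales are the ones a persistence-based filter would classify as noise first.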
Surface-Based Structure Analysis and Visualization for Multifield Time-Varying Datasets
Authors:
Samer S. Barakat, Markus Rutten, Xavier Tricoche
Abstract :
This paper introduces a new feature analysis and visualization method for multifield datasets. Our approach applies a surface-centric model to characterize salient features and form an effective, schematic representation of the data. We propose a simple, geometrically motivated, multifield feature definition. This definition relies on an iterative algorithm that applies existing theory of skeleton derivation to fuse the structures from the constitutive fields into a coherent data description, while addressing noise and spurious details. This paper also presents a new method for non-rigid surface registration between the surfaces of consecutive time steps. This matching is used in conjunction with clustering to discover the interaction patterns between the different fields and their evolution over time. We document the unified visual analysis achieved by our method in the context of several multifield problems from large-scale time-varying simulations.
[TVCG Invited Paper] Integrating Isosurface Statistics and Histograms
Authors:
Brian Duffy, Hamish Carr and Torsten Moller
[TVCG Invited Paper] ViSizer: A Visualization Resizing Framework
Authors:
Yingcai Wu, Xiaotong Liu, Shixia Liu and Kwan-Liu Ma
[TVCG Invited Paper] TripAdvisor^(N-D): A Tourism-Inspired High-Dimensional Space Exploration Framework with Overview and Detail
Authors:
Julia EunJu Nam and Klaus Mueller
INFOVIS Papers: Text and Time
Session : 
Text and Time
Date & Time : October 18 02:00 pm - 03:40 pm
Location : Grand Ballroom C
Chair : Melanie Tory
Papers : 
Facilitating Discourse Analysis with Interactive Visualization
Authors:
Jian Zhao, Fanny Chevalier, Christopher Collins, Ravin Balakrishnan
Abstract :
A discourse parser is a natural language processing system which can represent the organization of a document based on a rhetorical structure tree, one of the key data structures enabling applications such as text summarization, question answering and dialogue generation. Computational linguistics researchers currently rely on manually exploring and comparing the discourse structures to get intuitions for improving parsing algorithms. In this paper, we present DAViewer, an interactive visualization system that assists computational linguistics researchers in exploring, comparing, evaluating and annotating the results of discourse parsers. An iterative user-centered design process with domain experts was conducted in the development of DAViewer. We report the results of an informal formative study of the system to better understand how the proposed visualization and interaction techniques are used in a real research environment.
Whisper: Tracing the Spatiotemporal Process of Information Diffusion in Real Time
Authors:
Nan Cao, Yu-Ru Lin, Xiaohua Sun, David Lazer, Shixia Liu, Huamin Qu
Abstract :
When and where is an idea dispersed? Social media, like Twitter, has been increasingly used for exchanging information, opinions and emotions about events that are happening across the world. Here we propose a novel visualization design, Whisper, for tracing the process of information diffusion in social media in real time. Our design highlights three major characteristics of diffusion processes in social media: the temporal trend, social-spatial extent, and community response to a topic of interest. These social, spatiotemporal processes are conveyed based on a sunflower metaphor, whose seeds are often dispersed far away. In Whisper, we summarize the collective responses of communities to a given topic based on how tweets were retweeted by groups of users, representing the sentiments extracted from the tweets and tracing the pathways of retweets on a spatial hierarchical layout. We use an efficient flux line-drawing algorithm to trace multiple pathways so that temporal and spatial patterns can be identified even for a bursty event. A focused diffusion series highlights key roles such as opinion leaders in the diffusion process. We demonstrate how our design facilitates the understanding of when and where a piece of information is dispersed and what the social responses of the crowd are, for large-scale events including political campaigns and natural disasters. Initial feedback from domain experts suggests promising use for today's information consumption and dispersion in the wild.
Exploring Flow, Factors, and Outcomes of Temporal Event Sequences with the Outflow Visualization
Authors:
Krist Wongsuphasawat, David Gotz
Abstract :
Event sequence data is common in many domains, ranging from electronic medical records (EMRs) to sports events. Moreover, such sequences often result in measurable outcomes (e.g., life or death, win or loss). Collections of event sequences can be aggregated together to form event progression pathways. These pathways can then be connected with outcomes to model how alternative chains of events may lead to different results. This paper describes the Outflow visualization technique, designed to (1) aggregate multiple event sequences, (2) display the aggregate pathways through different event states with timing and cardinality, (3) summarize the pathways' corresponding outcomes, and (4) allow users to explore external factors that correlate with specific pathway state transitions. Results from a user study with twelve participants show that users were able to learn how to use Outflow easily with limited training and perform a range of tasks both accurately and rapidly.
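The aggregation in steps (1)-(3) reduces to counting state transitions and averaging outcomes over the sequences that traverse each transition; a minimal sketch, assuming each record is a (list of states, numeric outcome) pair, which is an illustrative input format rather than Outflow's actual data model:

```python
from collections import defaultdict

def aggregate_outflow(sequences):
    """Aggregate event sequences into per-transition statistics: how many
    sequences traverse each (state, next_state) edge and their average
    outcome."""
    stats = defaultdict(lambda: [0, 0.0])        # edge -> [count, outcome sum]
    for states, outcome in sequences:
        for a, b in zip(states, states[1:]):
            stats[(a, b)][0] += 1
            stats[(a, b)][1] += outcome
    return {edge: (n, total / n) for edge, (n, total) in stats.items()}
```

For example, aggregate_outflow([(['A', 'B', 'C'], 1.0), (['A', 'B'], 0.0)]) reports edge ('A', 'B') with count 2 and mean outcome 0.5.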
RankExplorer: Visualization of Ranking Changes in Large Time Series Data
Authors:
Conglei Shi, Weiwei Cui, Shixia Liu, Panpan Xu, Wei Chen, Huamin Qu
Abstract :
For many applications involving time series data, people are often interested in the changes of item values over time as well as their ranking changes. For example, people search many words via search engines like Google and Bing every day. Analysts are interested in both the absolute search volume for each word and its relative ranking. Both sets of statistics may change over time. For very large time series data with thousands of items, how to visually present ranking changes is an interesting challenge. In this paper, we propose RankExplorer, a novel visualization method based on ThemeRiver to reveal ranking changes. Our method consists of four major components: 1) a segmentation method which partitions a large set of time series curves into a manageable number of ranking categories; 2) an extended ThemeRiver view with embedded color bars and changing glyphs to show the evolution of aggregate values related to each ranking category over time as well as the content changes in each ranking category; 3) a trend curve to show the degree of ranking change over time; 4) rich user interactions to support interactive exploration of ranking changes. We have applied our method to real time series data, and the case studies demonstrate that our method can reveal underlying patterns related to ranking changes which might otherwise be obscured in traditional visualizations.
Design Considerations for Optimizing Storyline Visualizations
Authors:
Yuzuru Tanahashi, Kwan-Liu Ma
Abstract :
Storyline visualization is a technique used to depict the temporal dynamics of social interactions. This visualization technique was first introduced as a hand-drawn illustration in XKCD's Movie Narrative Charts [21]. If properly constructed, the visualization can convey both global trends and local interactions in the data. However, previous methods for automating storyline visualizations are overly simple, failing to achieve some of the essential principles practiced by professional illustrators. This paper presents a set of design considerations for generating aesthetically pleasing and legible storyline visualizations. Our layout algorithm is based on evolutionary computation, allowing us to effectively incorporate multiple objective functions. We show that the resulting visualizations have significantly improved aesthetics and legibility compared to existing techniques.
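To show the flavor of an evolutionary layout search, the toy below assigns each character a vertical slot per time step and hill-climbs a score combining crossings, wiggles, and group cohesion; the encoding, weights, and single-mutation scheme are all illustrative simplifications of the paper's multi-objective evolutionary computation.

```python
import random

def storyline_layout(sessions, n_chars, n_slots, iters=20000, seed=0):
    """Assign each character a vertical slot per time step by hill-climbing
    a score that penalizes line crossings, wiggles, and scattered groups.
    sessions[t] lists groups (lists of character indices) interacting at
    step t; groups are assumed nonempty."""
    rng = random.Random(seed)
    T = len(sessions)
    layout = [[rng.randrange(n_slots) for _ in range(n_chars)] for _ in range(T)]

    def score(l):
        s = 0
        for t in range(T - 1):
            for a in range(n_chars):
                s += abs(l[t + 1][a] - l[t][a])              # wiggle
                for b in range(a + 1, n_chars):              # crossing
                    if (l[t][a] - l[t][b]) * (l[t + 1][a] - l[t + 1][b]) < 0:
                        s += 10
        for t, groups in enumerate(sessions):                # group cohesion
            for g in groups:
                s += max(l[t][c] for c in g) - min(l[t][c] for c in g)
        return s

    best = score(layout)
    for _ in range(iters):                                   # (1+1)-style search
        t, c = rng.randrange(T), rng.randrange(n_chars)
        old = layout[t][c]
        layout[t][c] = rng.randrange(n_slots)
        new = score(layout)
        if new <= best:
            best = new                                       # keep improvement
        else:
            layout[t][c] = old                               # revert mutation
    return layout
```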
INFOVIS Papers: Interaction
Session : 
Interaction
Date & Time : October 18 08:30 am - 10:10 am
Location : Grand Ballroom C
Chair : Miriah Meyer
Papers : 
Beyond Mouse and Keyboard: Expanding Design Considerations for Information Visualization Interactions
Authors:
Bongshin Lee, Petra Isenberg, Nathalie Henry Riche, Sheelagh Carpendale
Abstract :
The importance of interaction to Information Visualization (InfoVis) and, in particular, of the interplay between interactivity and cognition is widely recognized [12][15][32][55][70]. This interplay, combined with the demands from increasingly large and complex datasets, is driving the increased significance of interaction in InfoVis. In parallel, there have been rapid advances in many facets of interaction technologies. However, InfoVis interactions have yet to take full advantage of these new possibilities in interaction technologies, as they largely still employ the traditional desktop, mouse, and keyboard setup of WIMP (Windows, Icons, Menus, and a Pointer) interfaces. In this paper, we reflect more broadly on the role of more natural interactions for InfoVis and identify opportunities for future research. We discuss and relate general HCI interaction models to existing InfoVis interaction classifications by looking at interactions from a novel angle, taking into account the entire spectrum of interactions. Our discussion of InfoVis-specific interaction design considerations helps us identify a series of underexplored attributes of interaction that can lead to new, more natural, interaction techniques for InfoVis.
Intelligent Graph Layout Using Many Users' Input
Authors:
Xiaoru Yuan, Limei Che, Yifan Hu, Xin Zhang
Abstract :
In this paper, we propose a new strategy for graph drawing that utilizes layouts of many subgraphs supplied by a large group of people in a crowdsourcing manner. We developed an algorithm based on Laplacian-constrained distance embedding to merge subgraphs submitted by different users, while attempting to maintain the topological information of the individual input layouts. To facilitate the collection of layouts from many people, a lightweight interactive system has been designed to enable convenient dynamic viewing, modification and traversal between layouts. Compared with existing graph layout algorithms, our approach can achieve more aesthetic and meaningful layouts with higher user preference.
PivotPaths: Strolling through Faceted Information Spaces
Authors:
Marian Dörk, Nathalie Henry Riche, Gonzalo Ramos, Susan Dumais
Abstract :
We present PivotPaths, an interactive visualization for exploring faceted information resources. During both work and leisure, we increasingly interact with information spaces that contain multiple facets and relations, such as authors, keywords, and citations of academic publications, or actors and genres of movies. To navigate these interlinked resources today, one typically selects items from facet lists resulting in abrupt changes from one subset of data to another. While filtering is useful to retrieve results matching specific criteria, it can be difficult to see how facets and items relate and to comprehend the effect of filter operations. In contrast, the PivotPaths interface exposes faceted relations as visual paths in arrangements that invite the viewer to 'take a stroll' through an information space. PivotPaths supports pivot operations as lightweight interaction techniques that trigger gradual transitions between views. We designed the interface to allow for casual traversal of large collections in an aesthetically pleasing manner that encourages exploration and serendipitous discoveries. This paper shares the findings from our iterative design-and-evaluation process that included semi-structured interviews and a two-week deployment of PivotPaths applied to a large database of academic publications.
Interaction Support for Visual Comparison Inspired by Natural Behavior
Authors:
Christian Tominski, Camilla Forsell, Jimmy Johansson
Abstract :
Visual comparison is an intrinsic part of interactive data exploration and analysis. The literature provides a large body of existing solutions that help users accomplish comparison tasks. These solutions are mostly of a visual nature and custom-made for specific data. We ask whether more general support is possible by focusing on the interaction aspect of comparison tasks. As an answer to this question, we propose a novel interaction concept that is inspired by the real-world behavior of people comparing information printed on paper. In line with real-world interaction, our approach supports users (1) in interactively specifying pieces of graphical information to be compared, (2) in flexibly arranging these pieces on the screen, and (3) in performing the actual comparison of side-by-side and overlapping arrangements of the graphical information. Complementary visual cues and add-ons further assist users in carrying out comparison tasks. Our concept and the integrated interaction techniques are generally applicable and can be coupled with different visualization techniques. We implemented an interactive prototype and conducted a qualitative user study to assess the concept's usefulness in the context of three different visualization techniques. The obtained feedback indicates that our interaction techniques mimic the natural behavior quite well, can be learned quickly, and are easy to apply to visual comparison tasks.
RelEx: Visualization for Actively Changing Overlay Network Specifications
Authors:
Michael Sedlmair, Annika Frank, Tamara Munzner, Andreas Butz
Abstract :
We present a network visualization design study focused on supporting automotive engineers who need to specify and optimize traffic patterns for in-car communication networks. The task and data abstractions that we derived support actively making changes to an overlay network, where logical communication specifications must be mapped to an underlying physical network. These abstractions are very different from the dominant use case in visual network analysis, namely identifying clusters and central nodes, that stems from the domain of social network analysis. Our visualization tool RelEx was created and iteratively refined through a full user-centered design process that included a problem characterization phase before tool design began, paper prototyping, iterative refinement in close collaboration with expert users for formative evaluation, deployment in the field with real analysts using their own data, usability testing with non-expert users, and summative evaluation at the end of the deployment. In the summative post-deployment study, which entailed domain experts using the tool over several weeks in their daily practice, we documented many examples where the use of RelEx simplified or sped up their work compared to previous practices.
INFOVIS Papers: Sketching and Designing Visualizations
Session : 
Sketching and Designing Visualizations
Date & Time : October 18 04:15 pm - 05:55 pm
Location : Grand Ballroom C
Chair : Caroline Ziemkiewicz
Papers : 
Evaluating the Effect of Style in Information Visualization
Authors:
Andrew Vande Moere, Martin Tomitsch, Christoph Wimmer, Christoph Boesch, Thomas Grechenig
Abstract :
This paper reports on a between-subjects, comparative online study of three information visualization demonstrators that each displayed the same dataset by way of an identical scatterplot technique, yet differed in style in terms of visual and interactive embellishment. We validated stylistic adherence and integrity through a separate experiment in which a small cohort of participants assigned our three demonstrators to predefined groups of stylistic examples, after which they described the styles in their own words. From the online study, we discovered significant differences in how participants executed specific interaction operations, and in the types of insights that followed from them. However, in spite of significant differences in apparent usability, enjoyability and usefulness between the style demonstrators, no variation was found in the self-reported depth, expert-rated depth, confidence or difficulty of the resulting insights. Three different methods of insight analysis were applied, revealing how style impacts the creation of insights, ranging from higher-level pattern seeking to a more reflective and interpretative engagement with the content that underlies the patterns. As this study only forms the first step in determining how the impact of style in information visualization can best be evaluated, we propose several guidelines and tips on how to gather, compare and categorize insights through an online evaluation study, particularly in terms of analyzing the concise, yet wide variety of insights and observations in a trustworthy and reproducible manner.
Sketchy Rendering for Information Visualization
Authors:
Jo Wood, Petra Isenberg, Tobias Isenberg, Jason Dykes, Nadia Boukhelifa, Aidan Slingsby
Abstract :
We present and evaluate a framework for constructing sketchy-style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives, including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, for conveying spatial imprecision, and for enhancing the aesthetic and narrative qualities of visualization. We evaluate user perception of the sketchiness of areal features through a series of stimulus-response tests in order to assess users' ability to place sketchiness on a ratio scale and to estimate area. Results suggest that relative area judgment is compromised by sketchy rendering and that its influence depends on the shape being rendered. They show that the degree of sketchiness may be judged on an ordinal scale but that this judgment varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or as an indication of general uncertainty.
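The primitive-level idea, replacing a crisp line with jittered, overdrawn strokes whose perturbation scales with a sketchiness parameter, can be sketched in a few lines; this NumPy version is only illustrative of the flavor of the framework's Processing renderer, not its actual algorithm.

```python
import numpy as np

def sketchy_line(p0, p1, sketchiness=1.0, n_strokes=2, n_seg=12, seed=None):
    """Render a straight segment as hand-drawn-looking polylines: resample
    it, jitter interior points perpendicular to the segment with amplitude
    proportional to the sketchiness parameter, and overdraw a few strokes.
    Returns a list of (n_seg, 2) polylines for any 2D drawing backend."""
    rng = np.random.default_rng(seed)
    p0 = np.asarray(p0, dtype=float)
    p1 = np.asarray(p1, dtype=float)
    d = p1 - p0
    normal = np.array([-d[1], d[0]]) / (np.hypot(*d) + 1e-12)
    t = np.linspace(0.0, 1.0, n_seg)[:, None]
    strokes = []
    for _ in range(n_strokes):
        jitter = rng.normal(0.0, sketchiness, n_seg)
        jitter[[0, -1]] *= 0.2            # keep endpoints nearly anchored
        strokes.append(p0 + t * d + jitter[:, None] * normal)
    return strokes
```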
An Empirical Study on Using Visual Embellishments in Visualization
Authors:
Rita Borgo, Alfie Abdul-Rahman, Farhan Mohamed, Philip W. Grant, Irene Reppa, Luciano Floridi, Min Chen
Abstract :
In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, evidence for the benefits of using rhetorical illustrations or embellishments in visualization has so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces divided attention, and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.
Evaluating Sketchiness as a Visual Variable for the Depiction of Qualitative Uncertainty
Authors:
Nadia Boukhelifa, Anastasia Bezerianos, Tobias Isenberg, Jean-Daniel Fekete
Abstract :
We report on the results of a series of user studies on the perception of four visual variables that are commonly used in the literature to depict uncertainty. To the best of our knowledge, we provide the first formal evaluation of the use of these variables to facilitate an easier reading of uncertainty in visualizations that rely on line graphical primitives. In addition to blur, dashing and grayscale, we investigate the use of 'sketchiness' as a visual variable because it conveys visual impreciseness that may be associated with data quality. Inspired by work in non-photorealistic rendering and by the features of hand-drawn lines, we generate line trajectories that resemble hand-drawn strokes of various levels of proficiency, ranging from child to adult strokes, where the amount of perturbation in the line corresponds to the level of uncertainty in the data. Our results show that sketchiness is a viable alternative for the visualization of uncertainty in lines and is as intuitive as blur, although people subjectively prefer the dashing style over blur, grayscale and sketchiness. We discuss the advantages and limitations of each technique and conclude with design considerations on how to deploy these visual variables to effectively depict various levels of uncertainty for line marks.
Understanding Pen and Touch Interaction for Data Exploration on Interactive Whiteboards
Authors:
Jagoda Walny, Bongshin Lee, Paul Johns, Nathalie Henry Riche, Sheelagh Carpendale
Abstract :
Current interfaces for common information visualizations such as bar graphs, line graphs, and scatterplots usually make use of the WIMP (Windows, Icons, Menus and a Pointer) interface paradigm with its frequently discussed problems of multiple levels of indirection via cascading menus, dialog boxes, and control panels. Recent advances in interface capabilities such as the availability of pen and touch interaction challenge us to re-think this and investigate more direct access to both the visualizations and the data they portray. We conducted a Wizard of Oz study to explore applying pen and touch interaction to the creation of information visualization interfaces on interactive whiteboards without implementing a plethora of recognizers. Our wizard acted as a robust and flexible pen and touch recognizer, giving participants maximum freedom in how they interacted with the system. Based on our qualitative analysis of the interactions our participants used, we discuss our insights about pen and touch interactions in the context of learnability and the interplay between pen and touch gestures. We conclude with suggestions for designing pen and touch enabled interactive visualization interfaces.
VAST Papers: Applications, Design Studies and Tools
Session : 
Applications, Design Studies and Tools
Date & Time : October 18 10:30 am - 12:10 pm
Location : Grand Ballroom B
Chair : Enrico Bertini
Papers : 
Visual Analytics Methodology for Eye Movement Studies
Authors:
Gennady Andrienko, Natalia Andrienko, Michael Burch, Daniel Weiskopf
Abstract :
Eye movement analysis is gaining popularity as a tool for the evaluation of visual displays and interfaces. However, the existing methods and tools for analyzing eye movements and scanpaths are limited in terms of the tasks they can support and their effectiveness for large data and data with high variation. We have performed an extensive empirical evaluation of a broad range of visual analytics methods used in the analysis of geographic movement data. The methods have been tested for their applicability to eye tracking data and their capability to extract useful knowledge about users' viewing behaviors. This allowed us to select suitable methods and match them to the analysis tasks they can support. The paper describes how the methods work in application to eye tracking data and provides guidelines for method selection depending on the analysis tasks.
AlVis: Situation Awareness in the Surveillance of Road Tunnels
Authors:
Harald Piringer, Matthias Buchetics, Rudolf Benedik
Abstract :
In the surveillance of road tunnels, video data plays an important role for a detailed inspection and as an input to systems for an automated detection of incidents. In disaster scenarios like major accidents, however, the increased amount of detected incidents may lead to situations where human operators lose a sense of the overall meaning of that data, a problem commonly known as a lack of situation awareness. The primary contribution of this paper is a design study of AlVis, a system designed to increase situation awareness in the surveillance of road tunnels. The design of AlVis is based on a simplified tunnel model which enables an overview of the spatio-temporal development of scenarios in real-time. The visualization explicitly represents the present state, the history, and predictions of potential future developments. Concepts for situation-sensitive prioritization of information ensure scalability from normal operation to major disaster scenarios. The visualization enables an intuitive access to live and historic video for any point in time and space. We illustrate AlVis by means of a scenario and report qualitative feedback by tunnel experts and operators. This feedback suggests that AlVis is suitable to save time in recognizing dangerous situations and helps to maintain an overview in complex disaster scenarios.
Spatiotemporal Social Media Analytics for Abnormal Event Detection using Seasonal-Trend Decomposition
Authors:
Junghoon Chae, Dennis Thom, Harald Bosch, Yun Jang, Ross Maciejewski, David S. Ebert, Thomas Ertl
Abstract :
Recent advances in technology have enabled social media services to support space-time indexed data, and internet users from all over the world have created a large volume of time-stamped, geo-located data. Such spatiotemporal data has immense value for increasing situational awareness of local events, providing insights for investigations and understanding the extent of incidents, their severity, and consequences, as well as their time-evolving nature. In analyzing social media data, researchers have mainly focused on finding temporal trends according to volume-based importance. Hence, a relatively small volume of relevant messages may easily be obscured by a huge data set indicating normal situations. In this paper, we present a visual analytics approach that provides users with scalable and interactive social media data analysis and visualization, including the exploration and examination of abnormal topics and events within various social media data sources, such as Twitter, Flickr and YouTube. To find and understand abnormal events, the analyst can first extract major topics from a set of selected messages and rank them probabilistically using Latent Dirichlet Allocation, and then apply seasonal-trend decomposition together with traditional control chart methods to find unusual peaks and outliers within topic time series. Our case studies show that situational awareness can be improved by incorporating the anomaly and trend examination techniques into a highly interactive visual analysis process.
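The anomaly detection described here, seasonal-trend decomposition followed by a control chart on the remainder, can be approximated for one topic's time series with statsmodels; the period and the k-sigma rule are illustrative defaults, and the LDA topic extraction is assumed to have happened upstream.

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

def abnormal_steps(topic_counts, period=24, k=3.0):
    """Flag abnormal peaks in one topic's message-count series: remove
    seasonality and trend with STL, then mark time steps whose remainder
    leaves a k-sigma control band. topic_counts is a 1D series, e.g.
    hourly counts for one LDA topic."""
    resid = np.asarray(STL(topic_counts, period=period, robust=True).fit().resid)
    outliers = np.abs(resid - resid.mean()) > k * resid.std()
    return np.nonzero(outliers)[0]        # indices of abnormal time steps
```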
Visual Analytics for the Big Data Era -- A Comparative Review of State-of-the-Art Commercial Systems
Authors:
Leishi Zhang, Andreas Stoffel, Michael Behrisch, Sebastian Mittelstadt, Tobias Schreck, Rene Pompl, Stefan Weber, Holger Last, Daniel Keim
Smart Super Views - A Knowledge-Assisted Interface for Medical Visualization
Authors:
Gabriel Mistelbauer, Hamed Bouzari, Rudiger Schernthaner, Ivan Baclija, Arnold Kochl, Stefan Bruckner, M. Eduard Groller, Milos Sramek
Abstract :
Due to the ever-growing volume of acquired data and information, users have to be constantly aware of the methods available for exploring and interacting with them. Not all of these methods are applicable to the data at hand, and not all reveal the desired result; as a consequence, innovations may be used inappropriately and users may become skeptical. In this paper we propose a knowledge-assisted interface for medical visualization which reduces the effort necessary to use new visualization methods by providing only the most relevant ones in a smart way. Consequently, we are able to extend such a system with innovations without users having to worry about when, where, and especially how they may or should use them. We present an application of our system in the medical domain and give qualitative feedback from domain experts.
VAST Papers: Space and Time, and The Analysis Process
Session : 
Space and Time, and The Analysis Process
Date & Time : October 18 08:30 am - 10:10 am
Location : Grand Ballroom B
Chair : Bill Ribarsky
Papers : 
The User Puzzle - Explaining the Interaction with Visual Analytics Systems
Authors:
Margit Pohl, Michael Smuc, Eva Mayr
Abstract :
Visual analytics emphasizes the interplay between visualization, analytical procedures performed by computers, and human perceptual and cognitive activities. Human reasoning is an important element in this context. There are several theories in psychology and HCI explaining open-ended and exploratory reasoning. Five of these theories (sensemaking theories, gestalt theories, distributed cognition, graph comprehension theories and skill-rule-knowledge models) are described in this paper. We discuss their relevance for visual analytics. In order to do this more systematically, we developed a schema of categories relevant for visual analytics research and evaluation. All these theories have strengths but also weaknesses in explaining interaction with visual analytics systems. One way to overcome the weaknesses would be to combine two or more of these theories.
Enterprise Data Analysis and Visualization: An Interview Study
Authors:
Sean Kandel, Andreas Paepcke, Joseph M. Hellerstein, Jeffrey Heer
Visual Analytics Methods for Categoric Spatio-Temporal Data
Authors:
Tatiana von Landesberger, Sebastian Bremm, Natalia Andrienko, Gennady Andrienko, Maria Tekusova
Abstract :
We focus on the visual analysis of space- and time-referenced categorical data, which describe possible states of spatial (geographical) objects or locations and their changes over time. The analysis of these data is difficult, as there are only limited possibilities to analyze the three aspects (location, time and category) simultaneously. We present a new approach which interactively combines (a) visualization of categorical changes over time; (b) various spatial data displays; and (c) computational techniques for task-oriented selection of time steps. Together, they provide an expressive visualization with regard to either the overall evolution over time or unusual changes. We apply our approach to two use cases demonstrating its usefulness for a wide variety of tasks: we analyze data from the movement tracking and meteorology domains. Using our approach, expected events could be detected and new insights were gained.
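One plausible reading of the task-oriented time-step selection in (c) is to rank steps by how strongly the overall category distribution changes; the sketch below uses the Jensen-Shannon divergence between consecutive distributions as an assumed stand-in for the paper's computational measures.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def rank_time_steps(categories):
    """Rank time steps by how strongly the overall category distribution
    changes from the previous step. categories: (T, n_objects) integer
    array of category codes. Returns step indices, most changed first."""
    n_cat = int(categories.max()) + 1
    hists = np.stack([np.bincount(row, minlength=n_cat) for row in categories])
    probs = hists / hists.sum(axis=1, keepdims=True)
    change = np.array([jensenshannon(probs[t - 1], probs[t])
                       for t in range(1, len(probs))])
    return np.argsort(change)[::-1] + 1   # +1: change[i] compares step i+1 to i
```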
A Visual Analytics Approach to Multi-scale Exploration of Environmental Time Series
Authors:
Mike Sips, Patrick Kothur, Andrea Unger, Christian Hege, Doris Dransch
Abstract :
We present a Visual Analytics approach that addresses the detection of interesting patterns in numerical time series, specifically from environmental sciences. Crucial for the detection of interesting temporal patterns are the time scale and the starting points one is looking at. Our approach makes no assumption about time scale and starting position of temporal patterns and consists of three main steps: an algorithm to compute statistical values for all possible time scales and starting positions of intervals, visual identification of potentially interesting patterns in a matrix visualization, and interactive exploration of detected patterns. We demonstrate the utility of this approach in two scientific scenarios and explain how it allowed scientists to gain new insight into the dynamics of environmental systems.
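The first step, computing a statistic for every time scale and starting position, is easy to state precisely; a deliberately brute-force sketch that fills the matrix the approach then visualizes (the paper's actual algorithm is more economical than this O(n^3) loop):

```python
import numpy as np

def interval_statistics(series, stat=np.mean):
    """Evaluate a statistic for every (start, scale) interval of a series.
    Row i of the result holds intervals of length i + 1; columns are
    starting positions (NaN where the interval would overrun the series)."""
    n = len(series)
    M = np.full((n, n), np.nan)
    for scale in range(1, n + 1):
        for start in range(n - scale + 1):
            M[scale - 1, start] = stat(series[start:start + scale])
    return M
```

Plotting M as a matrix puts short-scale patterns near the top row and whole-series behavior at the bottom, which is the overview the approach hands to the analyst.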
Watch This: A Taxonomy for Dynamic Data Visualization
Authors:
Joseph Cottam, Andrew Lumsdaine, Chris Weaver
Abstract :
Visualizations embody design choices about data access, data transformation, visual representation, and interaction. To interpret a static visualization, a person must identify the correspondences between the visual representation and the underlying data. These correspondences become moving targets when a visualization is dynamic. Dynamics may be introduced in a visualization at any point in the analysis and visualization process. For example, the data itself may be streaming, shifting subsets may be selected, visual representations may be animated, and interaction may modify presentation. In this paper, we focus on the impact of dynamic data. We present a taxonomy and conceptual framework for understanding how data changes influence the interpretability of visual representations. Visualization techniques are organized into categories at various levels of abstraction. The salient characteristics of each category and task suitability are discussed through examples from the scientific literature and popular practices. Examining the implications of dynamically updating visualizations warrants attention because it directly impacts the interpretability (and thus utility) of visualizations. The taxonomy presented provides a reference point for further exploration of dynamic data visualization techniques.
TVCG Papers: VAST
Session : 
TVCG Papers: VAST
Date & Time : October 18 04:15 pm - 05:55 pm
Location : Grand Ballroom B
Chair : Silvia Miksch
Papers : 
Evaluating the Role of Time in Investigative Analysis of Document Collections
Authors:
Bum Chul Kwon, Waqas Javed, Sohaib Ghani, Niklas Elmqvist, Ji Soo Yi and David Ebert
Coherent Time-Varying Graph Drawing with MultiFocus+Context Interaction
Authors:
Kun-Chuan Feng, Chaoli Wang, Han-Wei Shen and Tong-Yee Lee
TimeSeer: Scagnostics for High-Dimensional Time Series
Authors:
Tuan Dang, Anushka Anand and Leland Wilkinson
Simplification of Node Position Data for Interactive Visualization of Dynamic Datasets
Authors:
Paul Rosen and Voicu Popescu
Visualization and Visual Analysis of Multi-faceted Scientific Data: a Survey
Authors:
Johannes Kehrer and Helwig Hauser
INFOVIS Papers: Education and Popular Applications
Session : 
Education and Popular Applications
Date & Time : October 19 08:30 am - 10:10 am
Location : Grand Ballroom C
Chair : Danyel Fisher
Papers : 
The DeepTree Exhibit: Visualizing the Tree of Life to Facilitate Informal Learning
Authors:
Florian Block, Michael S. Horn, Brenda Caldwell Phillips, Judy Diamond, E. Margaret Evans, Chia Shen
Abstract :
In this paper, we present the DeepTree exhibit, a multi-user, multi-touch interactive visualization of the Tree of Life. We developed DeepTree to facilitate collaborative learning of evolutionary concepts. We describe an iterative process in which a team of computer scientists, learning scientists, biologists, and museum curators worked together throughout design, development, and evaluation. We highlight the importance of designing the interactions and the visualization hand-in-hand in order to facilitate active learning. The outcome of this process is a fractal-based tree layout that reduces visual complexity while being able to capture all life on Earth; a custom rendering and navigation engine that prioritizes visual appeal and smooth fly-through; and a multi-user interface that encourages collaborative exploration while offering guided discovery. We present an evaluation showing that the large dataset encourages free exploration, triggers emotional responses, and facilitates visitor engagement and informal learning.
Living Liquid: Design and Evaluation of an Exploratory Visualization Tool for Museum Visitors
Authors:
Joyce Ma, Isaac Liao, Kwan-Liu Ma, Jennifer Frazier
Abstract :
Interactive visualizations can allow science museum visitors to explore new worlds by seeing and interacting with scientific data. However, designing interactive visualizations for informal learning environments, such as museums, presents several challenges. First, visualizations must engage visitors on a personal level. Second, visitors often lack the background to interpret visualizations of scientific data. Third, visitors have very limited time at individual exhibits in museums. This paper examines these design considerations through the iterative development and evaluation of an interactive exhibit as a visualization tool that gives museumgoers access to scientific data generated and used by researchers. The exhibit prototype, Living Liquid, encourages visitors to ask and answer their own questions while exploring the time-varying global distribution of simulated marine microbes using a touchscreen interface. Iterative development proceeded through three rounds of formative evaluations using think-aloud protocols and interviews, each round informing a key visualization design decision: (1) what to visualize to initiate inquiry, (2) how to link data at the microscopic scale to global patterns, and (3) how to include additional data that allows visitors to pursue their own questions. Data from visitor evaluations suggests that, when designing visualizations for public audiences, one should (1) avoid distracting visitors from data that they should explore, (2) incorporate background information into the visualization, (3) favor understandability over scientific accuracy, and (4) layer data accessibility to structure inquiry. Lessons learned from this case study add to our growing understanding of how to use visualizations to actively engage learners with scientific data.
Visualizing Student Histories Using Clustering and Composition
Authors:
David Trimm, Penny Rheingans, Marie desJardins
Abstract :
While intuitive time-series visualizations exist for common datasets, student course history data is difficult to represent using traditional visualization techniques due to its concurrent nature. A visual composition process is developed and applied to reveal trends across various groupings. By working closely with educators, analytic strategies and techniques were developed that leverage the visualization composition to reveal unknown trends in the data. Furthermore, clustering algorithms were developed to group common course-grade histories for further analysis. Lastly, variations of the composition process were implemented to reveal subtle differences in the underlying data. These analytic tools and techniques enabled educators to confirm expected trends and to discover new ones.
SnapShot: Visualization to Propel Ice Hockey Analytics
Authors:
Hannah Pileggi, Charles D. Stolper, J. Michael Boyle, John T. Stasko
Abstract :
Sports analysts live in a world of dynamic games flattened into tables of numbers, divorced from the rinks, pitches, and courts where they were generated. Currently, these professional analysts use R, Stata, SAS, and other statistical software packages for uncovering insights from game data. Quantitative sports consultants seek a competitive advantage both for their clients and for themselves as analytics becomes increasingly valued by teams, clubs, and squads. In order for the information visualization community to support the members of this blossoming industry, it must recognize where and how visualization can enhance the existing analytical workflow. In this paper, we identify three primary stages of today's sports analyst's routine where visualization can be beneficially integrated: 1) exploring a dataspace; 2) sharing hypotheses with internal colleagues; and 3) communicating findings to stakeholders. Working closely with professional ice hockey analysts, we designed and built SnapShot, a system to integrate visualization into the hockey intelligence gathering process. SnapShot employs a variety of information visualization techniques to display shot data, and, given the importance of a specific hockey statistic, shot length, we introduce a technique, the radial heat map. Through a user study, we received encouraging feedback from several professional analysts, both independent consultants and professional team personnel.
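The radial heat map boils down to binning shots in polar coordinates around the goal so that shot length stays a primary axis; a minimal binning sketch, with rink geometry, units, and bin counts as illustrative assumptions (rendering omitted):

```python
import numpy as np

def radial_heat_map(dx, dy, n_rings=8, n_wedges=24, r_max=60.0):
    """Bin shot locations, given as offsets (dx, dy) from the goal, into
    (ring, wedge) polar cells so that shot length stays a primary axis.
    Returns an (n_rings, n_wedges) count grid ready for plotting."""
    r = np.hypot(dx, dy)                              # shot length
    theta = np.mod(np.arctan2(dy, dx), 2.0 * np.pi)   # angle in [0, 2*pi)
    grid, _, _ = np.histogram2d(
        r, theta, bins=[n_rings, n_wedges],
        range=[[0.0, r_max], [0.0, 2.0 * np.pi]])
    return grid
```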
TVCG Papers: SciVis: Vector Field Visualization
Session : 
TVCG Papers: SciVis: Vector Field Visualization
Date & Time : October 19 08:30 am - 10:10 am
Location : Grand Ballroom D
Chair : Hans Hagen
Papers : 
Mesh-Driven Vector Field Clustering and Visualization: An Image-Based Approach
Authors:
Zhenmin Peng, Edward Grundy, Robert S. Laramee, Guoning Chen and T. Nick Croft
Hierarchical Streamline Bundles
Authors:
Hongfeng Yu, Chaoli Wang, Ching-Kuang Shene, Jacqueline H. Chen
Efficient Computation of Combinatorial Feature Flow Fields
Authors:
Jan Reininghaus, Jens Kasten, Tino Weinkauf and Ingrid Hotz
A Time-Dependent Vector Field Topology Based on Streak Surfaces
Authors:
Markus Uffinger, Filip Sadlo and Thomas Ertl
Robust Morse Decompositions of Piecewise Constant Vector Fields
Authors:
Andrzej Szymczak and Eugene Zhang