14 - 19 OCTOBER, 2012. SEATTLE, WASHINGTON, USA

2012 IEEE VIS Papers

SCIVIS Papers: Evaluation
Session : 
Evaluation
Date & Time : October 16 02:00 pm - 03:40 pm
Location : Grand Ballroom D
Chair : Dan Keefe
Papers : 
Evaluation of Fast-Forward Video Visualization
Authors:
Markus Höferlin, Kuno Kurzhals, Benjamin Höferlin, Gunther Heidemann, Daniel Weiskopf
Abstract :
We evaluate and compare video visualization techniques based on fast-forward. A controlled laboratory user study (n = 24) was conducted to determine the trade-off between support of object identification and motion perception, two properties that have to be considered when choosing a particular fast-forward visualization. We compare four different visualizations: two representing the state-of-the-art and two new variants of visualization introduced in this paper. The two state-of-the-art methods we consider are frame-skipping and temporal blending of successive frames. Our object trail visualization leverages a combination of frame-skipping and temporal blending, whereas predictive trajectory visualization supports motion perception by augmenting the video frames with an arrow that indicates the future object trajectory. Our hypothesis was that each of the state-of-the-art methods satisfies just one of the goals: support of object identification or motion perception. Thus, they represent both ends of the visualization design. The key findings of the evaluation are that object trail visualization supports object identification, whereas predictive trajectory visualization is most useful for motion perception. However, frame-skipping surprisingly exhibits reasonable performance for both tasks. Furthermore, we evaluate the subjective performance of three different playback speed visualizations for adaptive fast-forward, a subdomain of video fast-forward.
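A minimal Python sketch of the two baseline fast-forward visualizations compared in the study, frame-skipping and temporal blending of successive frames. The uniform blending weights, window size, and (T, H, W, C) frame layout are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np

def frame_skip(frames, factor=8):
    """Fast-forward by keeping every `factor`-th frame."""
    return frames[::factor]

def temporal_blend(frames, window=8):
    """Fast-forward by averaging each window of successive frames.

    `frames` is a (T, H, W, C) array; uniform weights are an assumption and
    any normalized weighting could be substituted.
    """
    t = (frames.shape[0] // window) * window
    chunks = frames[:t].reshape(-1, window, *frames.shape[1:]).astype(np.float32)
    return chunks.mean(axis=1).astype(frames.dtype)
```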
Human Computation in Visualization: Using Purpose Driven Games for Robust Evaluation of Visualization Algorithms
Authors:
Nafees Ahmed, Ziyi Zheng, Klaus Mueller
Abstract :
Due to the inherent characteristics of the visualization process, most of the problems in this field have strong ties with human cognition and perception. This makes the human brain and sensory system the only truly appropriate evaluation platform for evaluating and fine-tuning a new visualization method or paradigm. However, getting humans to volunteer for these purposes has always been a significant obstacle, and thus this phase of the development process has traditionally formed a bottleneck, slowing down progress in visualization research. We propose to take advantage of the newly emerging field of Human Computation (HC) to overcome these challenges. HC promotes the idea that rather than considering humans as users of the computational system, they can be made part of a hybrid computational loop consisting of traditional computation resources and the human brain and sensory system. This approach is particularly successful in cases where part of the computational problem is considered intractable using known computer algorithms but is trivial to common sense human knowledge. In this paper, we focus on HC from the perspective of solving visualization problems and also outline a framework by which humans can be easily seduced to volunteer their HC resources. We introduce a purpose-driven game titled Disguise which serves as a prototypical example for how the evaluation of visualization algorithms can be mapped into a fun and addicting activity, allowing this task to be accomplished in an extensive yet cost effective way. Finally, we sketch out a framework that transcends from the pure evaluation of existing visualization methods to the design of a new one.
Evaluation of Multivariate Visualization on a Multivariate Task
Authors:
Mark A. Livingston, Jonathan W. Decker, Zhuming Ai
Abstract :
Multivariate visualization techniques have attracted great interest as the dimensionality of data sets grows. One premise of such techniques is that simultaneous visual representation of multiple variables will enable the data analyst to detect patterns amongst multiple variables. Such insights could lead to development of new techniques for rigorous (numerical) analysis of complex relationships hidden within the data. Two natural questions arise from this premise: Which multivariate visualization techniques are the most effective for high-dimensional data sets? How does the analysis task change this utility ranking? We present a user study with a new task to answer the first question. We provide some insights to the second question based on the results of our study and results available in the literature. Our task led to significant differences in error, response time, and subjective workload ratings amongst four visualization techniques. We implemented three integrated techniques (Data-driven Spots, Oriented Slivers, and Attribute Blocks), as well as a baseline case of separate grayscale images. The baseline case fared poorly on all three measures, whereas Data-driven Spots yielded the best accuracy and was among the best in response time. These results differ from comparisons of similar techniques with other tasks, and we review all the techniques, tasks, and results (from our work and previous work) to understand the reasons for this discrepancy.
A Data-Driven Approach to Hue-Preserving Color-Blending
Authors:
Lars Kühne, Joachim Giesen, Zhiyuan Zhang, Sungsoo Ha, Klaus Mueller
Abstract :

Color mapping and semitransparent layering play an important role in many visualization scenarios, such as information visualization and volume rendering. The combination of color and transparency is still dominated by standard alpha-compositing using the Porter-Duff over operator which can result in false colors with deceiving impact on the visualization. Other more advanced methods have also been proposed, but the problem is still far from being solved. Here we present an alternative to these existing methods specifically devised to avoid false colors and preserve visual depth ordering. Our approach is data driven and follows the recently formulated knowledge-assisted visualization (KAV) paradigm. Preference data, that have been gathered in web-based user surveys, are used to train a support-vector machine model for automatically predicting an optimized hue-preserving blending. We have applied the resulting model to both volume rendering and a specific information visualization technique, illustrative parallel coordinate plots. Comparative renderings show a significant improvement over previous approaches in the sense that false colors are completely removed and important properties such as depth ordering and blending vividness are better preserved. Due to the generality of the defined data-driven blending operator, it can be easily integrated also into other visualization frameworks.

Effects of Stereo and Screen Size on the Legibility of Three-Dimensional Streamtube Visualization
Authors:
Jian Chen, Haipeng Cai, Alexander P. Auchus, David H. Laidlaw
Abstract :
We report the impact of display characteristics (stereo and size) on task performance in diffusion magnetic resonance imaging (DMRI) in a user study with 12 participants. The hypotheses were that (1) adding stereo and increasing display size would improve task accuracy and reduce completion time, and (2) the greater the complexity of a spatial task, the greater the benefits of an improved display. Thus we expected to see greater performance gains when detailed visual reasoning was required. Participants used dense streamtube visualizations to perform five representative tasks: (1) determine the higher average fractional anisotropy (FA) values between two regions, (2) find the endpoints of fiber tracts, (3) name a bundle, (4) mark a brain lesion, and (5) judge if tracts belong to the same bundle. Contrary to our hypotheses, we found the task completion time was not improved by the use of the larger display and that performance accuracy was hurt rather than helped by the introduction of stereo in our study with dense DMRI data. Bigger was not always better. Thus caution should be taken when selecting displays for scientific visualization applications. We explored the results further using the body-scale unit and subjective size and stereo experiences.
SCIVIS Papers: Topology and Fields
Session : 
Topology and Fields
Date & Time : October 17 04:15 pm - 05:55 pm
Location : Grand Ballroom D
Chair : Georgeta-Elisabeta Marai
Papers : 
Computing Morse-Smale Complexes with Accurate Geometry
Authors:
Attila Gyulassy, Peer-Timo Bremer, Valerio Pascucci
Abstract :

Topological techniques have proven highly successful in analyzing and visualizing scientific data. As a result, significant efforts have been made to compute structures like the Morse-Smale complex as robustly and efficiently as possible. However, the resulting algorithms, while topologically consistent, often produce incorrect connectivity as well as poor geometry. These problems may compromise or even invalidate any subsequent analysis. Moreover, such techniques may fail to improve even when the resolution of the domain mesh is increased, thus producing potentially incorrect results even for highly resolved functions. To address these problems we introduce two new algorithms: (i) a randomized algorithm to compute the discrete gradient of a scalar field that converges under refinement; and (ii) a deterministic variant which directly computes accurate geometry and thus correct connectivity of the MS complex. The first algorithm converges in the sense that on average it produces the correct result and its standard deviation approaches zero with increasing mesh resolution. The second algorithm uses two ordered traversals of the function to integrate the probabilities of the first to extract correct (near optimal) geometry and connectivity. We present an extensive empirical study using both synthetic and real-world data and demonstrate the advantages of our algorithms in comparison with several popular approaches.

Visualization of Temporal Similarity in Field Data
Authors:
Steffen Frey, Filip Sadlo, Thomas Ertl
Abstract :

This paper presents a visualization approach for detecting and exploring similarity in the temporal variation of field data. We provide an interactive technique for extracting correlations from similarity matrices which capture temporal similarity of univariate functions. We make use of the concept to extract periodic and quasiperiodic behavior at single (spatial) points as well as similarity between different locations within a field and also between different data sets. The obtained correlations are utilized for visual exploration of both temporal and spatial relationships in terms of temporal similarity. Our entire pipeline offers visual interaction and inspection, allowing for the flexibility that in particular time-dependent data analysis techniques require. We demonstrate the utility and versatility of our approach by applying our implementation to data from both simulation and measurement.

Visualizing Nuclear Scission through a Multifield Extension of Topological Analysis
Authors:
David Duke, Hamish Carr, Aaron Knoll, Nicolas Schunck, Hai Ah Nam, Andrzej Staszczak
Abstract :

In nuclear science, density functional theory (DFT) is a powerful tool to model the complex interactions within the atomic nucleus, and is the primary theoretical approach used by physicists seeking a better understanding of fission. However DFT simulations result in complex multivariate datasets in which it is difficult to locate the crucial scission point at which one nucleus fragments into two, and to identify the precursors to scission. The Joint Contour Net (JCN) has recently been proposed as a new data structure for the topological analysis of multivariate scalar fields, analogous to the contour tree for univariate fields. This paper reports the analysis of DFT simulations using the JCN, the first application of the JCN technique to real data. It makes three contributions to visualization: (i) a set of practical methods for visualizing the JCN, (ii) new insight into the detection of nuclear scission, and (iii) an analysis of aesthetic criteria to drive further work on representing the JCN.

Augmented Topological Descriptors of Pore Networks for Material Science
Authors:
Daniela M. Ushizima, Dmitriy Morozov, Gunther H. Weber, Andrea G. C. Bianchi, James A. Sethian, E. W
Abstract :

One potential solution to reduce the concentration of carbon dioxide in the atmosphere is the geologic storage of captured CO2 in underground rock formations, also known as carbon sequestration. There is ongoing research to guarantee that this process is both efficient and safe. We describe tools that provide measurements of media porosity, and permeability estimates, including visualization of pore structures. Existing standard algorithms make limited use of geometric information in calculating permeability of complex microstructures. This quantity is important for the analysis of biomineralization, a subsurface process that can affect physical properties of porous media. This paper introduces geometric and topological descriptors that enhance the estimation of material permeability. Our analysis framework includes the processing of experimental data, segmentation, and feature extraction, and makes novel use of multiscale topological analysis to quantify maximum flow through porous networks. We illustrate our results using synchrotron-based X-ray computed microtomography of glass beads during biomineralization. We also benchmark the proposed algorithms using simulated data sets modeling jammed packed bead beds of a monodispersive material.

Generalized Topological Simplification of Scalar Fields on Surfaces
Authors:
Julien Tierny, Valerio Pascucci
Abstract :

We present a combinatorial algorithm for the general topological simplification of scalar fields on surfaces. Given a scalar field f, our algorithm generates a simplified field g that provably admits only critical points from a constrained subset of the singularities of f, while guaranteeing a small distance ||f - g||∞ for data-fitting purpose. In contrast to previous algorithms, our approach is oblivious to the strategy used for selecting features of interest and allows critical points to be removed arbitrarily. When topological persistence is used to select the features of interest, our algorithm produces a standard ε-simplification. Our approach is based on a new iterative algorithm for the constrained reconstruction of sub- and sur-level sets. Extensive experiments show that the number of iterations required for our algorithm to converge is rarely greater than 2 and never greater than 5, yielding O(n log(n)) practical time performances. The algorithm handles triangulated surfaces with or without boundary and is robust to the presence of multi-saddles in the input. It is simple to implement, fast in practice and more general than previous techniques. Practically, our approach allows a user to arbitrarily simplify the topology of an input function and robustly generate the corresponding simplified function. An appealing application area of our algorithm is in scalar field design since it enables, without any threshold parameter, the robust pruning of topological noise as selected by the user. This is needed for example to get rid of inaccuracies introduced by numerical solvers, thereby providing topological guarantees needed for certified geometry processing. Experiments show this ability to eliminate numerical noise as well as validate the time efficiency and accuracy of our algorithm. We provide a lightweight C++ implementation as supplemental material that can be used for topological cleaning on surface meshes.

SCIVIS Papers: Flow and Turbulence
Session : 
Flow and Turbulence
Date & Time : October 17 08:30 am - 10:10 am
Location : Grand Ballroom D
Chair : Eugene Zhang
Papers : 
Analysis of Streamline Separation at Infinity Using Time-Discrete Markov Chains
Authors:
Wieland Reich, Gerik Scheuermann
Abstract :
Existing methods for analyzing separation of streamlines are often restricted to a finite time or a local area. In our paper we introduce a new method that complements them by allowing an infinite-time evaluation of steady planar vector fields. Our algorithm unifies combinatorial and probabilistic methods and introduces the concept of separation in time-discrete Markov chains. We compute particle distributions instead of the streamlines of single particles. We encode the flow into a map and then into a transition matrix for each time direction. Finally, we compare the results of our grid-independent algorithm to the popular finite-time Lyapunov exponent and discuss the discrepancies.
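A toy sketch of the core construction: a steady 2D field sampled on a grid is encoded as a transition matrix over grid cells, and a particle distribution is iterated instead of tracing single streamlines. The one-step Euler map and nearest-cell deposit below are simplifications for illustration, not the paper's discretization.

```python
import numpy as np

def transition_matrix(vx, vy, dt=0.5):
    """Row-stochastic matrix for a steady 2D field sampled on an (ny, nx) grid.

    Each cell is a Markov state; its centre is advected one Euler step and its
    probability mass is deposited in the nearest cell it lands in. This is a
    crude stand-in for the paper's encoding of the flow into a map.
    """
    ny, nx = vx.shape
    P = np.zeros((ny * nx, ny * nx))
    for j in range(ny):
        for i in range(nx):
            ii = int(np.clip(np.rint(i + dt * vx[j, i]), 0, nx - 1))
            jj = int(np.clip(np.rint(j + dt * vy[j, i]), 0, ny - 1))
            P[j * nx + i, jj * nx + ii] = 1.0
    return P

# iterate a particle distribution forward in time
X, Y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
P = transition_matrix(-Y, X)                 # simple rotational demo field
p = np.full(P.shape[0], 1.0 / P.shape[0])    # uniform initial distribution
for _ in range(100):
    p = p @ P                                # distribution after each step
```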
Derived Metric Tensors for Flow Surface Visualization
Authors:
Harald Obermaier, Kenneth I. Joy
Abstract :
Integral flow surfaces constitute a widely used flow visualization tool due to their capability to convey important flow information such as fluid transport, mixing, and domain segmentation. Current flow surface rendering techniques limit their expressiveness, however, by focusing virtually exclusively on displacement visualization, visually neglecting the more complex notion of deformation such as shearing and stretching that is central to the field of continuum mechanics. To incorporate this information into the flow surface visualization and analysis process, we derive a metric tensor field that encodes local surface deformations as induced by the velocity gradient of the underlying flow field. We demonstrate how properties of the resulting metric tensor field are capable of enhancing present surface visualization and generation methods and develop novel surface querying, sampling, and visualization techniques. The provided results show how this step towards unifying classic flow visualization and more advanced concepts from continuum mechanics enables more detailed and improved flow analysis.
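As a rough illustration of such a derived tensor, the sketch below builds a linearised deformation gradient from a local velocity gradient and forms the induced metric C = FᵀF, whose eigenvalues expose local stretching and shearing. The approximation F = I + dt·∇v is an assumption made for brevity here, not the paper's surface integration scheme.

```python
import numpy as np

def surface_metric(vel_grad, dt):
    """Right Cauchy-Green metric C = F^T F for a short advection time dt.

    `vel_grad` is the 3x3 velocity gradient at a surface point; the linearised
    deformation gradient F = I + dt * grad(v) is a simplification.
    """
    F = np.eye(3) + dt * vel_grad
    return F.T @ F

grad_v = np.array([[0.0, 2.0, 0.0],   # a simple shear flow, for illustration
                   [0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])
C = surface_metric(grad_v, dt=0.1)
principal_stretch = np.sqrt(np.linalg.eigvalsh(C))  # local stretching factors
```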
Lagrangian Coherent Structures for Design Analysis of Revolving Doors
Authors:
Benjamin Schindler, Raphael Fuchs, Stefan Barp, Jurgen Waser, Armin Pobitzer, Robert Carnecky, Kre_i
Abstract :
Room air flow and air exchange are important aspects for the design of energy-efficient buildings. As a result, simulations are increasingly used prior to construction to achieve an energy-efficient design. We present a visual analysis of air flow generated at building entrances, which uses a combination of revolving doors and air curtains. The resulting flow pattern is challenging because of two interacting flow patterns: On the one hand, the revolving door acts as a pump, on the other hand, the air curtain creates a layer of uniformly moving warm air between the interior of the building and the revolving door. Lagrangian coherent structures (LCS), which by definition are flow barriers, are the method of choice for visualizing the separation and recirculation behavior of warm and cold air flow. The extraction of LCS is based on the finite-time Lyapunov exponent (FTLE) and makes use of a ridge definition which is consistent with the concept of weak LCS. Both FTLE computation and ridge extraction are done in a robust and efficient way by making use of the fast Fourier transform for computing scale-space derivatives.
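A minimal FTLE sketch under the standard definition (largest eigenvalue of the Cauchy-Green tensor of the flow-map gradient); ridges of the resulting scalar field are the LCS candidates. The flow-map computation (particle advection through the simulated air flow) and the weak-LCS ridge extraction are omitted, and the shear flow used as input is an invented stand-in.

```python
import numpy as np

def ftle(flow_map, T, dx=1.0):
    """Finite-time Lyapunov exponent on a 2D grid of flow-map positions.

    `flow_map` has shape (ny, nx, 2) and stores where each seed point ends up
    after advection time T.
    """
    dphi_dy, dphi_dx = np.gradient(flow_map, dx, axis=(0, 1))
    ny, nx, _ = flow_map.shape
    out = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            F = np.column_stack((dphi_dx[j, i], dphi_dy[j, i]))  # 2x2 gradient
            C = F.T @ F                                          # Cauchy-Green
            lam_max = max(np.linalg.eigvalsh(C).max(), 1e-12)
            out[j, i] = np.log(lam_max) / (2 * abs(T))
    return out

ys, xs = np.meshgrid(np.arange(64.0), np.arange(64.0), indexing="ij")
flow_map = np.stack([xs + 0.1 * ys, ys], axis=-1)   # a simple shear as stand-in
field = ftle(flow_map, T=1.0)
```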
Turbulence Visualization at the Terascale on Desktop PCs
Authors:
Marc Treib, Kai Berger, Florian Reichl, Charles Meneveau, Alex Szalay, Rüdiger Westermann
Abstract :
Despite the ongoing efforts in turbulence research, the universal properties of the turbulence small-scale structure and the relationships between small- and large-scale turbulent motions are not yet fully understood. The visually guided exploration of turbulence features, including the interactive selection and simultaneous visualization of multiple features, can further progress our understanding of turbulence. Accomplishing this task for flow fields in which the full turbulence spectrum is well resolved is challenging on desktop computers. This is due to the extreme resolution of such fields, requiring memory and bandwidth capacities going beyond what is currently available. To overcome these limitations, we present a GPU system for feature-based turbulence visualization that works on a compressed flow field representation. We use a wavelet-based compression scheme including run-length and entropy encoding, which can be decoded on the GPU and embedded into brick-based volume ray-casting. This enables a drastic reduction of the data to be streamed from disk to GPU memory. Our system derives turbulence properties directly from the velocity gradient tensor, and it either renders these properties in turn or generates and renders scalar feature volumes. The quality and efficiency of the system is demonstrated in the visualization of two unsteady turbulence simulations, each comprising a spatio-temporal resolution of 1024^4. On a desktop computer, the system can visualize each time step in 5 seconds, and it achieves about three times this rate for the visualization of a scalar feature volume.
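The abstract derives turbulence properties from the velocity gradient tensor; one standard example of such a property is the Q-criterion, sketched below for a batch of gradient tensors. The compression and streaming machinery that makes this feasible at terascale is not reproduced, and the random gradients are placeholders.

```python
import numpy as np

def q_criterion(vel_grad):
    """Q-criterion for a batch of 3x3 velocity gradient tensors, shape (N, 3, 3).

    Q = 0.5 * (||Omega||^2 - ||S||^2), with S and Omega the symmetric and
    antisymmetric parts of the gradient; Q > 0 marks rotation-dominated regions.
    """
    S = 0.5 * (vel_grad + np.swapaxes(vel_grad, -1, -2))
    Omega = 0.5 * (vel_grad - np.swapaxes(vel_grad, -1, -2))
    return 0.5 * ((Omega ** 2).sum(axis=(-1, -2)) - (S ** 2).sum(axis=(-1, -2)))

grads = np.random.default_rng(0).normal(size=(1000, 3, 3))  # stand-in gradients
q = q_criterion(grads)
vortex_mask = q > 0        # candidate scalar feature volume: rotating regions
```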
Automatic Detection and Visualization of Qualitative Hemodynamic Characteristics in Cerebral Aneurysms
Authors:
Rocco Gasteiger, Dirk J. Lehmann, Roy van Pelt, Gabor Janiga, Oliver Beuing, Anna Vilanova, Holger T
Abstract :
Cerebral aneurysms are a pathological vessel dilatation that bear a high risk of rupture. For the understanding and evaluation of the risk of rupture, the analysis of hemodynamic information plays an important role. Besides quantitative hemodynamic information, also qualitative flow characteristics, e.g., the inflow jet and impingement zone are correlated with the risk of rupture. However, the assessment of these two characteristics is currently based on an interactive visual investigation of the flow field, obtained by computational fluid dynamics (CFD) or blood flow measurements. We present an automatic and robust detection as well as an expressive visualization of these characteristics. The detection can be used to support a comparison, e.g., of simulation results reflecting different treatment options. Our approach utilizes local streamline properties to formalize the inflow jet and impingement zone. We extract a characteristic seeding curve on the ostium, on which an inflow jet boundary contour is constructed. Based on this boundary contour we identify the impingement zone. Furthermore, we present several visualization techniques to depict both characteristics expressively. Thereby, we consider accuracy and robustness of the extracted characteristics, minimal visual clutter and occlusions. An evaluation with six domain experts confirms that our approach detects both hemodynamic characteristics reasonably.
SCIVIS Papers: Interaction and Rendering
Session : 
Interaction and Rendering
Date & Time : October 17 02:00 pm - 03:40 pm
Location : Grand Ballroom D
Chair : Xiaoyu Wang
Papers : 
WYSIWYP: What You See Is What You Pick
Authors:
Alexander Wiebel, Frans M. Vos, David Foerster, Hans-Christian Hege
Abstract :

Scientists, engineers and physicians are used to analyzing 3D data with slice-based visualizations. Radiologists for example are trained to read slices of medical imaging data. Despite the numerous examples of sophisticated 3D rendering techniques, domain experts, who still prefer slice-based visualization, do not consider these to be very useful. Since 3D renderings have the advantage of providing an overview at a glance, while 2D depictions better serve detailed analyses, it is of general interest to better combine these methods. Recently there have been attempts to bridge this gap between 2D and 3D renderings. These attempts include specialized techniques for volume picking in medical imaging data that result in repositioning slices. In this paper, we present a new volume picking technique called WYSIWYP (what you see is what you pick) that, in contrast to previous work, does not require pre-segmented data or metadata and thus is more generally applicable. The positions picked by our method are solely based on the data itself, the transfer function, and the way the volumetric rendering is perceived by the user. To demonstrate the utility of the proposed method, we apply it to automated positioning of slices in volumetric scalar fields from various application areas. Finally, we present results of a user study in which 3D locations selected by users are compared to those resulting from WYSIWYP. The user study confirms our claim that the resulting positions correlate well with those perceived by the user.

Efficient Structure-Aware Selection Techniques for 3D Point Cloud Visualizations with 2DOF Input
Authors:
Lingyun Yu, Konstantinos Efstathiou, Petra Isenberg, Tobias Isenberg
Abstract :

Data selection is a fundamental task in visualization because it serves as a pre-requisite to many follow-up interactions. Efficient spatial selection in 3D point cloud datasets consisting of thousands or millions of particles can be particularly challenging. We present two new techniques, TeddySelection and CloudLasso, that support the selection of subsets in large particle 3D datasets in an interactive and visually intuitive manner. Specifically, we describe how to spatially select a subset of a 3D particle cloud by simply encircling the target particles on screen using either the mouse or direct-touch input. Based on the drawn lasso, our techniques automatically determine a bounding selection surface around the encircled particles based on their density. This kind of selection technique can be applied to particle datasets in several application domains. TeddySelection and CloudLasso reduce, and in some cases even eliminate, the need for complex multi-step selection processes involving Boolean operations. This was confirmed in a formal, controlled user study in which we compared the more flexible CloudLasso technique to the standard cylinder-based selection technique. This study showed that the former is consistently more efficient than the latter; in several cases the CloudLasso selection time was half that of the corresponding cylinder-based selection.

Sketching Uncertainty into Simulations
Authors:
Hrvoje Ribičić, Jürgen Waser, Roman Gurbat, Bernhard Sadransky, M. Eduard Gröller
Abstract :

In a variety of application areas, the use of simulation steering in decision making is limited at best. Research focusing on this problem suggests that most user interfaces are too complex for the end user. Our goal is to let users create and investigate multiple, alternative scenarios without the need for special simulation expertise. To simplify the specification of parameters, we move from a traditional manipulation of numbers to a sketch-based input approach. Users steer both numeric parameters and parameters with a spatial correspondence by sketching a change onto the rendering. Special visualizations provide immediate visual feedback on how the sketches are transformed into boundary conditions of the simulation models. Since uncertainty with respect to many intertwined parameters plays an important role in planning, we also allow the user to intuitively set up complete value ranges, which are then automatically transformed into ensemble simulations. The interface and the underlying system were developed in collaboration with experts in the field of flood management. The real-world data they have provided has allowed us to construct scenarios used to evaluate the system. These were presented to a variety of flood response personnel, and their feedback is discussed in detail in the paper. The interface was found to be intuitive and relevant, although a certain amount of training might be necessary.

A Perceptual-Statistics Shading Model
Authors:
Veronika Solteszova, Cagatay Turkay, Mark C. Price, Ivan Viola
Abstract :

The process of surface perception is complex and based on several influencing factors, e.g., shading, silhouettes, occluding contours, and top down cognition. The accuracy of surface perception can be measured and the influencing factors can be modified in order to decrease the error in perception. This paper presents a novel concept of how a perceptual evaluation of a visualization technique can contribute to its redesign with the aim of improving the match between the distal and the proximal stimulus. During analysis of data from previous perceptual studies, we observed that the slant of 3D surfaces visualized on 2D screens is systematically underestimated. The visible trends in the error allowed us to create a statistical model of the perceived surface slant. Based on this statistical model, obtained from user experiments, we derived a new shading model that uses adjusted surface normals and aims to reduce the error in slant perception. The result is a shape-enhancement of visualization which is driven by an experimentally founded statistical model. To assess the efficiency of the statistical shading model, we repeated the evaluation experiment and confirmed that the error in perception was decreased. Results of both user experiments are publicly available datasets.

Visual Steering and Verification of Mass Spectrometry Data Factorization in Air Quality Research
Authors:
Daniel Engel, Klaus Greff, Christoph Garth, Keith Bein, Anthony Wexler, Bernd Hamann, Hans Hagen
Abstract :

The study of aerosol composition for air quality research involves the analysis of high-dimensional single particle mass spectrometry data. We describe, apply, and evaluate a novel interactive visual framework for dimensionality reduction of such data. Our framework is based on non-negative matrix factorization with specifically defined regularization terms that aid in resolving mass spectrum ambiguity. Thereby, visualization assumes a key role in providing insight into and allowing to actively control a heretofore elusive data processing step, and thus enabling rapid analysis meaningful to domain scientists. In extending existing black box schemes, we explore design choices for visualizing, interacting with, and steering the factorization process to produce physically meaningful results. A domain-expert evaluation of our system performed by the air quality research experts involved in this effort has shown that our method and prototype admits the finding of unambiguous and physically correct lower-dimensional basis transformations of mass spectrometry data at significantly increased speed and a higher degree of ease.

SCIVIS Papers: Volume Data Handling
Session : 
Volume Data Handling
Date & Time : October 17 10:30 am - 12:10 pm
Location : Grand Ballroom D
Chair : Chaoli Wang
Papers : 
Interactive Volume Exploration of Petascale Microscopy Data Streams Using a Visualization-Driven Virtual Memory Approach
Authors:
Markus Hadwiger, Johanna Beyer, Won-Ki Jeong, Hanspeter Pfister
Abstract :
This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data
Authors:
Nathaniel Fout, Kwan-liu Ma
Abstract :
In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
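A toy version of the switched-prediction idea, assuming 1D integer-quantized samples: for each block, the predictor with the smallest residual cost is chosen from a preset group of linear predictors, and the predictor id plus residuals would then be entropy-coded. The actual predictors, floating-point partitioning, and coders in APE/ACE are more involved.

```python
import numpy as np

# a preset group of simple linear predictors over the two preceding samples
PREDICTORS = [
    lambda a, b: b,             # previous value
    lambda a, b: 2 * b - a,     # linear extrapolation
    lambda a, b: (a + b) // 2,  # average
]

def encode_block(block, prev2, prev1):
    """Pick the predictor with the smallest absolute residual sum for a block."""
    best = None
    for pid, pred in enumerate(PREDICTORS):
        residuals, a, b = [], prev2, prev1
        for x in block:
            residuals.append(int(x) - int(pred(a, b)))
            a, b = b, x
        cost = sum(abs(r) for r in residuals)
        if best is None or cost < best[0]:
            best = (cost, pid, residuals)
    return best[1], best[2]          # predictor id + residuals to entropy-code

def decode_block(pid, residuals, prev2, prev1):
    out, a, b = [], prev2, prev1
    for r in residuals:
        x = int(PREDICTORS[pid](a, b)) + r
        out.append(x)
        a, b = b, x
    return out

data = np.arange(0, 64, 3)                              # toy signal
pid, res = encode_block(data[2:], data[0], data[1])
assert decode_block(pid, res, data[0], data[1]) == list(data[2:])
```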
On the Interpolation of Data with Normally Distributed Uncertainty for Visualization
Authors:
Steven Schlegel, Nico Korn, Gerik Scheuermann
Abstract :
In many fields of science or engineering, we are confronted with uncertain data. For that reason, the visualization of uncertainty received a lot of attention, especially in recent years. In the majority of cases, Gaussian distributions are used to describe uncertain behavior, because they are able to model many phenomena encountered in science. Therefore, in most applications uncertain data is (or is assumed to be) Gaussian distributed. If such uncertain data is given on fixed positions, the question of interpolation arises for many visualization approaches. In this paper, we analyze the effects of the usual linear interpolation schemes for visualization of Gaussian distributed data. In addition, we demonstrate that methods known in geostatistics and machine learning have favorable properties for visualization purposes in this case.
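A small illustration of the underlying issue: linearly interpolating two independent Gaussian samples gives a mean that interpolates linearly but a variance of (1-w)²σ₀² + w²σ₁², which is not what naive interpolation of the variances would report. Correlated data, as treated in the paper via geostatistical and machine-learning methods, would add a covariance term that is omitted here.

```python
import numpy as np

def interp_gaussian(mu0, var0, mu1, var1, w):
    """Linear interpolation of two independent Gaussian random variables.

    Returns mean and variance of (1-w)*X0 + w*X1; independence is an assumption.
    """
    mean = (1 - w) * mu0 + w * mu1
    var = (1 - w) ** 2 * var0 + w ** 2 * var1
    return mean, var

# midpoint between two unit-variance samples
print(interp_gaussian(0.0, 1.0, 10.0, 1.0, 0.5))  # (5.0, 0.5): the variance dips
# naive interpolation of the variances would instead report 1.0 at the midpoint
```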
Coherency-Based Curve Compression for High-Order Finite Element Model Visualization
Authors:
Alexander Bock, Erik Sunden, Bingchen Liu, Burkhard Wuensche, Timo Ropinski
Abstract :
Finite element (FE) models are frequently used in engineering and life sciences within time-consuming simulations. In contrast with the regular grid structure facilitated by volumetric data sets, as used in medicine or geosciences, FE models are defined over a non-uniform grid. Elements can have curved faces and their interior can be defined through high-order basis functions, which pose additional challenges when visualizing these models. During ray-casting, the uniformly distributed sample points along each viewing ray must be transformed into the material space defined within each element. The computational complexity of this transformation makes a straightforward approach inadequate for interactive data exploration. In this paper, we introduce a novel coherency-based method which supports the interactive exploration of FE models by decoupling the expensive world-to-material space transformation from the rendering stage, thereby allowing it to be performed within a precomputation stage. Therefore, our approach computes view-independent proxy rays in material space, which are clustered to facilitate data reduction. During rendering, these proxy rays are accessed, and it becomes possible to visually analyze high-order FE models at interactive frame rates, even when they are time-varying or consist of multiple modalities. Within this paper, we provide the necessary background about the FE data, describe our decoupling method, and introduce our interactive rendering algorithm. Furthermore, we provide visual results and analyze the error introduced by the presented approach.
ElVis: A System for the Accurate and Interactive Visualization of High-Order Finite Element Solutions
Authors:
Blake Nelson, Eric Liu, Robert Haimes, Robert M. Kirby
Abstract :

This paper presents the Element Visualizer (ElVis), a new, open-source scientific visualization system for use with high-order finite element solutions to PDEs in three dimensions. This system is designed to minimize visualization errors of these types of fields by querying the underlying finite element basis functions (e.g., high-order polynomials) directly, leading to pixel-exact representations of solutions and geometry. The system interacts with simulation data through runtime plugins, which only require users to implement a handful of operations fundamental to finite element solvers. The data in turn can be visualized through the use of cut surfaces, contours, isosurfaces, and volume rendering. These visualization algorithms are implemented using NVIDIA's OptiX GPU-based ray-tracing engine, which provides accelerated ray traversal of the high-order geometry, and CUDA, which allows for effective parallel evaluation of the visualization algorithms. The direct interface between ElVis and the underlying data differentiates it from existing visualization tools. Current tools assume the underlying data is composed of linear primitives; high-order data must be interpolated with linear functions as a result. In this work, examples drawn from aerodynamic simulations, high-order discontinuous Galerkin finite element solutions of aerodynamic flows in particular, will demonstrate the superiority of ElVis' pixel-exact approach when compared with traditional linear-interpolation methods. Such methods can introduce a number of inaccuracies in the resulting visualization, making it unclear if visual artifacts are genuine to the solution data or if these artifacts are the result of interpolation errors. Linear methods additionally cannot properly visualize curved geometries (elements or boundaries) which can greatly inhibit developers' debugging efforts. As we will show, pixel-exact visualization exhibits none of these issues, removing the visualization scheme as a source of uncertainty for engineers using ElVis.
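A one-dimensional illustration of the interpolation error the abstract describes: resampling a quadratic (high-order) element only at its endpoints and interpolating linearly misses the interior extremum, whereas evaluating the basis directly at every sample, pixel-exact in spirit, does not. The quadratic field is an invented stand-in for a real finite element solution.

```python
import numpy as np

def element_field(t):
    """A quadratic solution on one element, t in [0, 1] (illustrative only)."""
    return 4.0 * t * (1.0 - t)        # peaks at t = 0.5, zero at both endpoints

samples = np.linspace(0.0, 1.0, 9)
exact = element_field(samples)                      # evaluate the basis directly
linear = np.interp(samples, [0.0, 1.0],             # endpoint-only resampling
                   [element_field(0.0), element_field(1.0)])
print(np.abs(exact - linear).max())                 # 1.0: the interior bump is lost
```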

SCIVIS Papers: Physical Science Applications
Session : 
Physical Science Applications
Date & Time : October 18 04:15 pm - 05:55 pm
Location : Grand Ballroom D
Chair : Heike Leitte
Papers : 
KnotPad: Visualizing and Exploring Knot Theory with Fluid Reidemeister Moves
Authors:
Hui Zhang, Jianguang Weng, Lin Jing, Yiwen Zhong
Abstract :

We present KnotPad, an interactive paper-like system for visualizing and exploring mathematical knots; we exploit topological drawing and math-aware deformation methods in particular to enable and enrich our interactions with knot diagrams. Whereas most previous efforts typically employ physically based modeling to simulate the 3D dynamics of knots and ropes, our tool offers a Reidemeister move based interactive environment that is much closer to the topological problems being solved in knot theory, yet without interfering with the traditional advantages of paper-based analysis and manipulation of knot diagrams. Drawing knot diagrams with many crossings and producing their equivalent is quite challenging and error-prone. KnotPad can restrict user manipulations to the three types of Reidemeister moves, resulting in a more fluid yet mathematically correct user experience with knots. For our principal test case of mathematical knots, KnotPad permits us to draw and edit their diagrams empowered by a family of interactive techniques. Furthermore, we exploit supplementary interface elements to enrich the user experiences. For example, KnotPad allows one to pull and drag on knot diagrams to produce mathematically valid moves. Navigation enhancements in KnotPad provide still further improvement: by remembering and displaying the sequence of valid moves applied during the entire interaction, KnotPad allows a much cleaner exploratory interface for the user to analyze and study knot equivalence. All these methods combine to reveal the complex spatial relationships of knot diagrams with a mathematically true and rich user experience.

Visualization of Electrostatic Dipoles in Molecular Dynamics of Metal Oxides
Authors:
Sebastian Grottel, Philipp Beck, Christoph Muller, Guido Reina, Johannes Roth, Hans-Rainer Trebin, T
Abstract :
Metal oxides are important for many technical applications. For example, alumina (aluminum oxide) is the most commonly used ceramic in microelectronic devices thanks to its excellent properties. Experimental studies of these materials are increasingly supplemented with computer simulations. Molecular dynamics (MD) simulations can reproduce the material behavior very well and are now reaching time scales relevant for interesting processes like crack propagation. In this work we focus on the visualization of induced electric dipole moments on oxygen atoms in crack propagation simulations. The straightforward visualization using glyphs for the individual atoms, simple shapes like spheres or arrows, is insufficient for providing information about the data set as a whole. As our contribution we show for the first time that fractional anisotropy values computed from the local neighborhood of individual atoms of MD simulation data depict important information about relevant properties of the field of induced electric dipole moments. Isosurfaces in the field of fractional anisotropy as well as adjustments of the glyph representation allow the user to identify regions of correlated orientation. We present novel and relevant findings for the application domain resulting from these visualizations, like the influence of mechanical forces on the electrostatic properties.
Cumulative Heat Diffusion Using Volume Gradient Operator for Volume Analysis
Authors:
Krishna Chaitanya Gurijala, Lei Wang, Arie Kaufman
Abstract :
We introduce a simple, yet powerful method called the Cumulative Heat Diffusion for shape-based volume analysis, while drastically reducing the computational cost compared to conventional heat diffusion. Unlike the conventional heat diffusion process, where the diffusion is carried out by considering each node separately as the source, we simultaneously consider all the voxels as sources and carry out the diffusion, hence the term cumulative heat diffusion. In addition, we introduce a new operator that is used in the evaluation of cumulative heat diffusion called the Volume Gradient Operator (VGO). VGO is a combination of the Laplace-Beltrami operator (LBO) and a data-driven operator which is a function of the half gradient. The half gradient is the absolute value of the difference between the voxel intensities. The VGO by its definition captures the local shape information and is used to assign the initial heat values. Furthermore, VGO is also used as the weighting parameter for the heat diffusion process. We demonstrate that our approach can robustly extract shape-based features and thus forms the basis for an improved classification and exploration of features based on shape.
A Novel Approach to Visualizing Dark Matter Simulations
Authors:
Ralf Kaehler, Oliver Hahn, Tom Abel
Abstract :
In the last decades cosmological N-body dark matter simulations have enabled ab initio studies of the formation of structure in the Universe. Gravity amplified small density fluctuations generated shortly after the Big Bang, leading to the formation of galaxies in the cosmic web. These calculations have led to a growing demand for methods to analyze time-dependent particle based simulations. Rendering methods for such N-body simulation data usually employ some kind of splatting approach via point based rendering primitives and approximate the spatial distributions of physical quantities using kernel interpolation techniques, common in SPH (Smoothed Particle Hydrodynamics)-codes. This paper proposes three GPU-assisted rendering approaches, based on a new, more accurate method to compute the physical densities of dark matter simulation data. It uses full phase-space information to generate a tetrahedral tessellation of the computational domain, with mesh vertices defined by the simulation's dark matter particle positions. Over time the mesh is deformed by gravitational forces, causing the tetrahedral cells to warp and overlap. The new methods are well suited to visualize the cosmic web. In particular they preserve caustics, regions of high density that emerge, when several streams of dark matter particles share the same location in space, indicating the formation of structures like sheets, filaments and halos. We demonstrate the superior image quality of the new approaches in a comparison with three standard rendering techniques for N-body simulation data.
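A minimal sketch of the density estimate that underlies the rendering approaches: each tetrahedron of the phase-space tessellation carries a fixed mass, so its contribution to the density is that mass divided by its current, possibly warped, volume. The tessellation of the particle set, overlap handling, and GPU rendering are not shown; the unit tetrahedron and mass are illustrative values.

```python
import numpy as np

def tet_density(vertices, mass=1.0):
    """Density carried by one tetrahedron of the phase-space tessellation.

    `vertices` is a (4, 3) array of current particle positions; the cell keeps
    a fixed mass, so warping (volume change) directly changes its density.
    """
    a, b, c, d = vertices
    volume = abs(np.dot(b - a, np.cross(c - a, d - a))) / 6.0
    return mass / volume

unit_tet = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
print(tet_density(unit_tet))            # 6.0: mass 1 in volume 1/6
print(tet_density(unit_tet * 0.5))      # 48.0: compression raises the density
```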
Visual Data Analysis as an Integral Part of Environmental Management
Authors:
Joerg Meyer, E. Wes Bethel, Jennifer L. Horsman, Susan S. Hubbard, Harinarayan Krishnan, Alexandru R
Abstract :
The U.S. Department of Energy's (DOE) Office of Environmental Management (DOE/EM) currently supports an effort to understand and predict the fate of nuclear contaminants and their transport in natural and engineered systems. Geologists, hydrologists, physicists and computer scientists are working together to create models of existing nuclear waste sites, to simulate their behavior and to extrapolate it into the future. We use visualization as an integral part in each step of this process. In the first step, visualization is used to verify model setup and to estimate critical parameters. High-performance computing simulations of contaminant transport produce massive amounts of data, which are then analyzed using visualization software specifically designed for parallel processing of large amounts of structured and unstructured data. Finally, simulation results are validated by comparing simulation results to measured current and historical field data. We describe in this article how visual analysis is used as an integral part of the decision-making process in the planning of ongoing and future treatment options for the contaminated nuclear waste sites. Lessons learned from visually analyzing our large-scale simulation runs will also have an impact on deciding on treatment measures for other contaminated sites.
SCIVIS Papers: Geo-Applications
Session : 
Geo-Applications
Date & Time : October 18 02:00 pm - 03:40 pm
Location : Grand Ballroom D
Chair : Jens Krueger
Papers : 
Visualization of Astronomical Nebulae via Distributed Multi-GPU Compressed Sensing Tomography
Authors:
Stephan Wenger, Marco Ament, Stefan Guthe, Dirk Lorenz, Andreas Tillmann, Daniel Weiskopf, Marcus Ma
Abstract :
The 3D visualization of astronomical nebulae is a challenging problem since only a single 2D projection is observable from our fixed vantage point on Earth. We attempt to generate plausible and realistic looking volumetric visualizations via a tomographic approach that exploits the spherical or axial symmetry prevalent in some relevant types of nebulae. Different types of symmetry can be implemented by using different randomized distributions of virtual cameras. Our approach is based on an iterative compressed sensing reconstruction algorithm that we extend with support for position-dependent volumetric regularization and linear equality constraints. We present a distributed multi-GPU implementation that is capable of reconstructing high-resolution datasets from arbitrary projections. Its robustness and scalability are demonstrated for astronomical imagery from the Hubble Space Telescope. The resulting volumetric data is visualized using direct volume rendering. Compared to previous approaches, our method preserves a much higher amount of detail and visual variety in the 3D visualization, especially for objects with only approximate symmetry.
Visualization of Flow Behavior in Earth Mantle Convection
Authors:
Simon Schroder, John A. Peterson, Harald Obermaier, Louise H. Kellogg, Kenneth I. Joy, Hans Hagen
Abstract :
A fundamental characteristic of fluid flow is that it causes mixing: introduce a dye into a flow, and it will disperse. Mixing can be used as a method to visualize and characterize flow. Because mixing is a process that occurs over time, it is a 4D problem that presents a challenge for computation, visualization, and analysis. Motivated by a mixing problem in geophysics, we introduce a combination of methods to analyze, transform, and finally visualize mixing in simulations of convection in a self-gravitating 3D spherical shell representing convection in the Earth's mantle. Geophysicists use tools such as the finite element model CitcomS to simulate convection, and introduce massless, passive tracers to model mixing. The output of geophysical flow simulation is hard to analyze for domain experts because of overall data size and complexity. In addition, information overload and occlusion are problems when visualizing a whole-earth model. To address the large size of the data, we rearrange the simulation data using intelligent indexing for fast file access and efficient caching. To address information overload and interpret mixing, we compute tracer concentration statistics, which are used to characterize mixing in mantle convection models. Our visualization uses a specially tailored version of Direct Volume Rendering. The most important adjustment is the use of constant opacity. Because of this special area of application, i. e. the rendering of a spherical shell, many computations for volume rendering can be optimized. These optimizations are essential to a smooth animation of the time-dependent simulation data. Our results show how our system can be used to quickly assess the simulation output and test hypotheses regarding Earth's mantle convection. The integrated processing pipeline helps geoscientists to focus on their main task of analyzing mantle homogenization.
Interactive Retro-Deformation of Terrain for Reconstructing 3D Fault Displacements
Authors:
Rolf Westerteiger, Tracy Compton, Tony Bernardin, Eric Cowgill, Klaus Gwinner, Bernd Hamann, Andreas
Abstract :
Planetary topography is the result of complex interactions between geological processes, of which faulting is a prominent component. Surface-rupturing earthquakes cut and move landforms which develop across active faults, producing characteristic surface displacements across the fault. Geometric models of faults and their associated surface displacements are commonly applied to reconstruct these offsets to enable interpretation of the observed topography. However, current 2D techniques are limited in their capability to convey both the three-dimensional kinematics of faulting and the incremental sequence of events required by a given reconstruction. Here we present a real-time system for interactive retro-deformation of faulted topography to enable reconstruction of fault displacement within a high-resolution (sub 1m/pixel) 3D terrain visualization. We employ geometry shaders on the GPU to intersect the surface mesh with fault-segments interactively specified by the user and transform the resulting surface blocks in realtime according to a kinematic model of fault motion. Our method facilitates a human-in-the-loop approach to reconstruction of fault displacements by providing instant visual feedback while exploring the parameter space. Thus, scientists can evaluate the validity of traditional point-to-point reconstructions by visually examining a smooth interpolation of the displacement in 3D. We show the efficacy of our approach by using it to reconstruct segments of the San Andreas fault, California as well as a graben structure in the Noctis Labyrinthus region on Mars.
A Visual Analysis Concept for the Validation of Geoscientific Simulation Models
Authors:
Andrea Unger, Sven Schulte, Volker Klemann, Doris Dransch
Abstract :

Geoscientific modeling and simulation helps to improve our understanding of the complex Earth system. During the modeling process, validation of the geoscientific model is an essential step. In validation, it is determined whether the model output shows sufficient agreement with observation data. Measures for this agreement are called goodness of fit. In the geosciences, analyzing the goodness of fit is challenging due to its manifold dependencies: 1) The goodness of fit depends on the model parameterization, whose precise values are not known. 2) The goodness of fit varies in space and time due to the spatio-temporal dimension of geoscientific models. 3) The significance of the goodness of fit is affected by resolution and preciseness of available observational data. 4) The correlation between goodness of fit and underlying modeled and observed values is ambiguous. In this paper, we introduce a visual analysis concept that targets these challenges in the validation of geoscientific models; specifically focusing on applications where observation data is sparse, unevenly distributed in space and time, and imprecise, which hinders a rigorous analytical approach. Our concept, developed in close cooperation with Earth system modelers, addresses the four challenges by four tailored visualization components. The tight linking of these components supports a twofold interactive drill-down in model parameter space and in the set of data samples, which facilitates the exploration of the numerous dependencies of the goodness of fit. We exemplify our visualization concept for geoscientific modeling of glacial isostatic adjustments in the last 100,000 years, validated against sea levels indicators; a prominent example for sparse and imprecise observation data. An initial use case and feedback from Earth system modelers indicate that our visualization concept is a valuable complement to the range of validation methods.

SeiVis: An Interactive Visual Subsurface Modeling Application
Authors:
Thomas Höllt, Wolfgang Freiler, Fritz M. Gschwantner, Helmut Doleisch, Gabor Heinemann, Markus Hadwi
Abstract :
The most important resources to fulfill today's energy demands are fossil fuels, such as oil and natural gas. When exploiting hydrocarbon reservoirs, a detailed and credible model of the subsurface structures is crucial in order to minimize economic and ecological risks. Creating such a model is an inverse problem: reconstructing structures from measured reflection seismics. The major challenge here is twofold: First, the structures in highly ambiguous seismic data are interpreted in the time domain. Second, a velocity model has to be built from this interpretation to match the model to depth measurements from wells. If it is not possible to obtain a match at all positions, the interpretation has to be updated, going back to the first step. This results in a lengthy back and forth between the different steps, or in an unphysical velocity model in many cases. This paper presents a novel, integrated approach to interactively creating subsurface models from reflection seismics. It integrates the interpretation of the seismic data using an interactive horizon extraction technique based on piecewise global optimization with velocity modeling. Computing and visualizing the effects of changes to the interpretation and velocity model on the depth-converted model on the fly creates an integrated feedback loop that enables a completely new connection of the seismic data in time domain and well data in depth domain. Using a novel joint time/depth visualization, depicting side-by-side views of the original and the resulting depth-converted data, domain experts can directly fit their interpretation in time domain to spatial ground truth data. We have conducted a domain expert evaluation, which illustrates that the presented workflow enables the creation of exact subsurface models much more rapidly than previous approaches.
SCIVIS Papers: Volume Rendering
Session : 
Volume Rendering
Date & Time : October 18 10:30 am - 12:10 pm
Location : Grand Ballroom D
Chair : Carlos Correa
Papers : 
Fuzzy Volume Rendering
Authors:
Nathaniel Fout, Kwan-liu Ma
Abstract :
In order to assess the reliability of volume rendering, it is necessary to consider the uncertainty associated with the volume data and how it is propagated through the volume rendering algorithm, as well as the contribution to uncertainty from the rendering algorithm itself. In this work, we show how to apply concepts from the field of reliable computing in order to build a framework for management of uncertainty in volume rendering, with the result being a self-validating computational model to compute a posteriori uncertainty bounds. We begin by adopting a coherent, unifying possibility-based representation of uncertainty that is able to capture the various forms of uncertainty that appear in visualization, including variability, imprecision, and fuzziness. Next, we extend the concept of the fuzzy transform in order to derive rules for accumulation and propagation of uncertainty. This representation and propagation of uncertainty together constitute an automated framework for management of uncertainty in visualization, which we then apply to volume rendering. The result, which we call fuzzy volume rendering, is an uncertainty-aware rendering algorithm able to produce more complete depictions of the volume data, thereby allowing more reliable conclusions and informed decisions. Finally, we compare approaches for self-validated computation in volume rendering, demonstrating that our chosen method has the ability to handle complex uncertainty while maintaining efficiency.
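As a rough illustration of propagating per-sample uncertainty through compositing, the sketch below uses plain interval arithmetic (lower and upper bounds on opacity and color) instead of the possibility-based fuzzy-transform machinery described in the abstract; it is our own simplification, not the authors' method.

import numpy as np

def composite_with_bounds(alpha_lo, alpha_hi, color_lo, color_hi):
    """Front-to-back compositing of one ray where each sample carries interval
    bounds on opacity and (scalar) color. Returns bounds on the accumulated
    color; lower and upper bounds are propagated separately, which is a
    deliberate simplification."""
    acc_lo = acc_hi = 0.0
    trans_lo = trans_hi = 1.0  # remaining transmittance bounds
    for alo, ahi, clo, chi in zip(alpha_lo, alpha_hi, color_lo, color_hi):
        acc_lo += trans_lo * alo * clo
        acc_hi += trans_hi * ahi * chi
        trans_lo *= (1.0 - ahi)   # most opaque case leaves the least light
        trans_hi *= (1.0 - alo)
    return acc_lo, acc_hi

# Samples along one ray with +/-10% uncertainty on opacity and color
a = np.array([0.1, 0.3, 0.2]); c = np.array([0.8, 0.5, 0.9])
print(composite_with_bounds(a * 0.9, a * 1.1, c * 0.9, c * 1.1))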
Automatic Tuning of Spatially Varying Transfer Functions for Blood Vessel Visualization
Authors:
Gunnar Lathen, Stefan Lindholm, Reiner Lenz, Anders Persson, Magnus Borga
Abstract :
Computed Tomography Angiography (CTA) is commonly used in clinical routine for diagnosing vascular diseases. The procedure involves the injection of a contrast agent into the blood stream to increase the contrast between the blood vessels and the surrounding tissue in the image data. CTA is often visualized with Direct Volume Rendering (DVR), where the enhanced image contrast is important for the construction of Transfer Functions (TFs). For increased efficiency, clinical routine heavily relies on preset TFs to simplify the creation of such visualizations for a physician. In practice, however, TF presets often do not yield optimal images due to variations in the concentration of contrast agent in the blood stream. In this paper we propose an automatic, optimization-based method that shifts TF presets to account for general deviations and local variations of the intensity of contrast-enhanced blood vessels. This method has several advantages. It computationally automates large parts of a process that is currently performed manually. It performs the TF shift locally and can thus optimize larger portions of the image than is possible with manual interaction. Its optimization criterion is based on a well-known vesselness descriptor. The performance of the method is illustrated on clinically relevant CT angiography datasets, showing both improved structural overviews of vessel trees and improved adaptation to local variations of contrast concentration.
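The core idea, shifting a preset transfer function so that it lines up with the locally observed intensity of contrast-enhanced vessels, can be sketched as a small per-region 1D search. The Gaussian opacity preset, the scoring criterion, and all names below are our own assumptions rather than the authors' formulation.

import numpy as np

def best_local_shift(intensities, vesselness, preset_center, candidate_shifts):
    """For one local region, pick the transfer-function shift that maximizes the
    vesselness-weighted response of a Gaussian opacity preset centered at
    preset_center (Hounsfield units). Purely illustrative criterion."""
    best_shift, best_score = 0.0, -np.inf
    for s in candidate_shifts:
        response = np.exp(-((intensities - (preset_center + s)) ** 2) / (2 * 50.0 ** 2))
        score = np.sum(vesselness * response)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

rng = np.random.default_rng(1)
voxels = rng.normal(330.0, 40.0, size=5000)   # local vessel intensities (HU), toy data
vness = rng.uniform(0.0, 1.0, size=5000)      # precomputed vesselness per voxel
print(best_local_shift(voxels, vness, preset_center=280.0,
                       candidate_shifts=np.arange(-100, 101, 5)))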
Hierarchical Exploration of Volumes Using Multilevel Segmentation of the Intensity-Gradient Histograms
Authors:
Cheuk Yiu Ip, Amitabh Varshney, Joseph JaJa
Abstract :
Visual exploration of volumetric datasets to discover the embedded features and spatial structures is a challenging and tedious task. In this paper we present a semi-automatic approach to this problem that works by visually segmenting the intensity-gradient 2D histogram of a volumetric dataset into an exploration hierarchy. Our approach mimics user exploration behavior by analyzing the histogram with the normalized-cut multilevel segmentation technique. Unlike previous work in this area, our technique segments the histogram into a reasonable set of intuitive components that are mutually exclusive and collectively exhaustive. We use information-theoretic measures of the volumetric data segments to guide the exploration. This provides a data-driven coarse-to-fine hierarchy for a user to interactively navigate the volume in a meaningful manner.
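The starting point of this pipeline, the intensity/gradient-magnitude 2D histogram, is simple to compute; the normalized-cut multilevel segmentation is then applied to it. Below is a minimal Python sketch of the histogram construction only (the segmentation is omitted, and the toy volume is our own):

import numpy as np

def intensity_gradient_histogram(volume, n_bins=256):
    """Build the 2D histogram over (intensity, gradient magnitude) from which
    the exploration hierarchy is derived. The normalized-cut segmentation of
    this histogram is not shown here."""
    gx, gy, gz = np.gradient(volume.astype(float))
    gmag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    hist, i_edges, g_edges = np.histogram2d(volume.ravel(), gmag.ravel(), bins=n_bins)
    return hist, i_edges, g_edges

# Toy volume: a soft sphere plus noise
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
vol = np.exp(-(x**2 + y**2 + z**2) / 400.0) \
      + 0.05 * np.random.default_rng(2).normal(size=x.shape)
hist, _, _ = intensity_gradient_histogram(vol, n_bins=64)
print(hist.shape, hist.sum())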
Historygrams: Enabling Interactive Global Illumination in Direct Volume Rendering using Photon Mapping
Authors:
Daniel Jonsson, Joel Kronander, Timo Ropinski, Anders Ynnerman
Abstract :
In this paper, we enable interactive volumetric global illumination by extending photon mapping techniques to handle interactive transfer function (TF) and material editing in the context of volume rendering. We propose novel algorithms and data structures for finding and evaluating the parts of a scene affected by such parameter changes, and thus support efficient updates of the photon map. In direct volume rendering (DVR), the ability to explore volume data through parameter changes, such as editable TFs, is of key importance. Advanced global illumination techniques are in most cases computationally too expensive, as they prevent the desired interactivity. Our technique decreases the amount of computation caused by parameter changes by introducing Historygrams, which allow us to efficiently reuse previously computed photon media interactions. Along the viewing rays, we utilize properties of the light transport equations to subdivide a view ray into segments and update them independently when they become invalid. Unlike segments of a view ray, photon scattering events within the volumetric medium need to be updated sequentially. Using our Historygram approach, we can identify the first invalid photon interaction caused by a property change, and thus reuse all valid photon interactions. Combining these two novel concepts supports interactive editing of parameters when using volumetric photon mapping in the context of DVR. As a consequence, we can handle arbitrarily shaped and positioned light sources, arbitrary phase functions, bidirectional reflectance distribution functions, and multiple scattering, none of which was previously possible in interactive DVR.
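The reuse idea can be illustrated with a toy version of the per-path bookkeeping: each scattering event stores the property values it was computed with, and after a transfer-function edit only the suffix of the path starting at the first invalidated event is recomputed. The data structure and names below are our assumptions, not the paper's Historygram implementation.

from dataclasses import dataclass

@dataclass
class PhotonEvent:
    position: tuple          # where the interaction happened
    sample_value: float      # scalar value sampled from the volume there
    opacity_used: float      # TF opacity this event was computed with

def first_invalid_event(path, transfer_function, tol=1e-6):
    """Return the index of the first scattering event whose stored opacity no
    longer matches the edited transfer function; events before it can be
    reused, events from it onward must be recomputed sequentially."""
    for i, ev in enumerate(path):
        if abs(transfer_function(ev.sample_value) - ev.opacity_used) > tol:
            return i
    return len(path)  # whole path is still valid

old_tf = lambda v: min(1.0, v)          # opacity before the edit
new_tf = lambda v: min(1.0, 0.5 * v)    # user lowered the opacity
path = [PhotonEvent((0, 0, i), 0.2 * i, old_tf(0.2 * i)) for i in range(5)]
print(first_invalid_event(path, new_tf))  # -> 1: only the first event is reusable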
Structure-Aware Lighting Design for Volume Visualization
Authors:
Yubo Tao, Hai Lin, Feng Dong, Chao Wang, Gordon Clapworthy, and Hujun Bao
Abstract :
Lighting design is a complex, but fundamental, problem in many fields. In volume visualization, direct volume rendering generates an informative image even without external lighting, as each voxel itself emits radiance. However, external lighting further improves the perception of the shape and detail of features, and it also determines how effectively feature information is communicated. The human visual system is highly effective at extracting structural information from images, and to assist it further, this paper presents an approach to structure-aware automatic lighting design that measures the structural changes between the images with and without external lighting. Given a transfer function and a viewpoint, the optimal lighting parameters are those that provide the greatest enhancement of structural information, so that the shape and detail of features are conveyed most clearly. Besides lighting goodness, the proposed metric can also be used to evaluate lighting similarity and stability between two sets of lighting parameters. Lighting similarity can be used to optimize the selection of multiple light sources so that different light sources reveal distinct structural information. Our experiments with several volume data sets demonstrate the effectiveness of the structure-aware lighting design approach. It is well suited for use by novices, as it requires little technical understanding of the rendering parameters associated with direct volume rendering.
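A crude stand-in for the lighting-goodness metric is a gradient-based score that rewards the light direction adding the most structural (edge) content relative to the unlit rendering. The criterion and the dummy renderer below are our own simplifications, not the authors' metric.

import numpy as np

def structural_gain(unlit, lit):
    """Toy 'structure' score: how much the gradient-magnitude content of the
    image increases when external lighting is added."""
    def grad_energy(img):
        gy, gx = np.gradient(img.astype(float))
        return np.sqrt(gx ** 2 + gy ** 2).sum()
    return grad_energy(lit) - grad_energy(unlit)

def pick_light(unlit, render_with_light, candidate_directions):
    """Select the light direction that maximizes the structural gain.
    render_with_light(direction) -> image is assumed to be provided."""
    return max(candidate_directions,
               key=lambda d: structural_gain(unlit, render_with_light(d)))

# Dummy usage with a fake renderer that adds a stronger ramp for one direction
rng = np.random.default_rng(3)
base = rng.uniform(size=(64, 64))
fake_render = lambda d: base + (0.3 if d == "left" else 0.1) * np.linspace(0.0, 1.0, 64)
print(pick_light(base, fake_render, ["left", "right"]))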
SCIVIS Papers: Analytics
Session : 
Analytics
Date & Time : October 18 08:30 am - 10:10 am
Location : Grand Ballroom D
Chair : Helwig Hauser
Papers : 
Multivariate Data Analysis Using Persistence-Based Filtering and Topological Signatures
Authors:
Bastian Rieck, Hubert Mara, Heike Leitte
Abstract :
The extraction of significant structures in arbitrary high-dimensional data sets is a challenging task. Moreover, classifying data points as noise in order to reduce a data set bears special relevance for many application domains. Standard methods such as clustering serve to reduce problem complexity by providing the user with classes of similar entities. However, they usually do not highlight relations between different entities and require a stopping criterion, e.g. the number of clusters to be detected. In this paper, we present a visualization pipeline based on recent advancements in algebraic topology. More precisely, we employ methods from persistent homology that enable topological data analysis on high-dimensional data sets. Our pipeline inherently copes with noisy data and data sets of arbitrary dimensions. It extracts central structures of a data set in a hierarchical manner by using a persistence-based filtering algorithm that is theoretically well-founded. We furthermore introduce persistence rings, a novel visualization technique for a class of topological features, the persistence intervals, of large data sets. Persistence rings provide a unique topological signature of a data set, which helps in recognizing similarities. In addition, we provide interactive visualization techniques that assist the user in evaluating the parameter space of our method in order to extract relevant structures. We describe and evaluate our analysis pipeline by means of two very distinct classes of data sets: First, a class of synthetic data sets containing topological objects is employed to highlight the interaction capabilities of our method. Second, in order to affirm the utility of our technique, we analyse a class of high-dimensional real-world data sets arising from current research in cultural heritage.
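The persistence-based filtering step, keeping only topological features whose persistence (death minus birth) exceeds a noise threshold, can be sketched independently of any particular persistent-homology library. The toy diagram and threshold below are our own illustration, not the authors' pipeline.

import numpy as np

def filter_by_persistence(pairs, threshold):
    """Keep only (birth, death) pairs whose persistence exceeds the threshold;
    short-lived pairs are treated as noise. Varying the threshold yields a
    hierarchical, coarse-to-fine view of the data's structures."""
    pairs = np.asarray(pairs, dtype=float)
    persistence = pairs[:, 1] - pairs[:, 0]
    return pairs[persistence > threshold]

# Toy persistence diagram: two significant features, three noise pairs
diagram = [(0.0, 5.0), (1.0, 4.2), (2.0, 2.1), (2.5, 2.6), (3.0, 3.05)]
print(filter_by_persistence(diagram, threshold=1.0))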
Surface-Based Structure Analysis and Visualization for Multifield Time-Varying Datasets
Authors:
Samer S. Barakat, Markus Rutten, Xavier Tricoche
Abstract :
This paper introduces a new feature analysis and visualization method for multifield datasets. Our approach applies a surface-centric model to characterize salient features and form an effective, schematic representation of the data. We propose a simple, geometrically motivated, multifield feature definition. This definition relies on an iterative algorithm that applies existing theory of skeleton derivation to fuse the structures from the constituent fields into a coherent data description, while addressing noise and spurious details. This paper also presents a new method for non-rigid surface registration between the surfaces of consecutive time steps. This matching is used in conjunction with clustering to discover the interaction patterns between the different fields and their evolution over time. We document the unified visual analysis achieved by our method in the context of several multifield problems from large-scale time-varying simulations.
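As a very rough placeholder for the matching between surfaces of consecutive time steps, the sketch below establishes nearest-neighbor correspondences between two point-sampled surfaces; the paper's non-rigid registration and the subsequent clustering of interaction patterns are considerably more involved, and all names here are our own.

import numpy as np

def nearest_neighbor_correspondence(surface_t0, surface_t1):
    """For each vertex of the surface at time t, find the closest vertex of the
    surface at time t+1 (brute force; a spatial index would be used in
    practice). A crude stand-in for non-rigid surface registration."""
    a = np.asarray(surface_t0, dtype=float)   # (n, 3) vertex positions at time t
    b = np.asarray(surface_t1, dtype=float)   # (m, 3) vertex positions at time t+1
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
    match = d2.argmin(axis=1)
    return match, np.sqrt(d2[np.arange(len(a)), match])

rng = np.random.default_rng(4)
s0 = rng.uniform(size=(100, 3))
s1 = s0 + rng.normal(scale=0.01, size=(100, 3))   # slightly deformed copy
idx, dist = nearest_neighbor_correspondence(s0, s1)
print(idx[:5], dist.mean())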

[TVCG Invited Paper] Integrating Isosurface Statistics and Histograms
Authors:
Brian Duffy, Hamish Carr and Torsten Moller
Abstract :
[TVCG Invited Paper] ViSizer: A Visualization Resizing Framework
Authors:
Yingcai Wu, Xiaotong Liu, Shixia Liu and Kwan-Liu Ma
Abstract :

[TVCG Invited Paper] TripAdvisor^(N-D): A Tourism-Inspired High-Dimensional Space Exploration Framework with Overview and Detail
Authors:
Julia EunJu Nam and Klaus Mueller
Abstract :