Joseph Cottam, Peter Wang, Jeff Baumes, Jeff Heer
Contact: Joseph Cottam (email@example.com)
Visualization and analytical tools face major challenges as datasets become larger and more dynamic. The Defense Advanced Research Projects Agency (DARPA) XData initiative is funding projects to meet those challenges. The full catalog of XData projects provides a wide spectrum of analytical and visualization tools. This tutorial will introduce participants to several tools of particular interest to the visualization and visual analytics communities. The selected tools are Bokeh, Tangelo, Vega, Lyra, and Blaze. These tools incorporate current research in ready-to-use packages and represent excellent avenues for moving research into practice. This tutorial will provide a basic orientation for each tool and showcase interoperation between them.
Stéfan van der Walt
Contact: Stéfan van der Walt (firstname.lastname@example.org)
From a wider perspective, Data Science can be seen as the management and interpretation of data through computation and statistics. This tutorial highlights several of these core elements through an interactive computational workshop. To work with data, we need to access a data source, after which the data can be visualized to explore its structure. Based on intuitions gained about this structure, exploratory statistical analyses can then be made. Finally, more sophisticated machine learning models can be fit to the data to draw inferences and make predictions about data yet unseen. This tutorial systematically leads attendees through these steps by way of practical, real-world examples, augmented by hands-on computations in the Python language.
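The access-explore-model-predict loop described above can be sketched in a few lines of Python. This is a minimal illustration, not tutorial material: a synthetic dataset stands in for a real-world data source, and summary statistics plus a least-squares line stand in for richer visualization and machine learning steps.

```python
import numpy as np

# Synthetic stand-in for a real-world data source: noisy linear measurements.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, size=x.size)

# Explore structure: basic summary statistics before any modeling.
print(f"mean={y.mean():.2f}  std={y.std():.2f}")

# Fit a simple model (here, a least-squares line) to draw inferences.
slope, intercept = np.polyfit(x, y, deg=1)

# Predict for data yet unseen.
y_new = slope * 12.0 + intercept
print(f"slope={slope:.2f}  prediction at x=12: {y_new:.2f}")
```

In the tutorial's full workflow, the exploration step would use plots rather than printed statistics, and the model would typically come from a machine learning library rather than a polynomial fit.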
Contact: Bernice Rogowitz (email@example.com)
Imaging, visualization and computer graphics provide visual representations of data in order to communicate, provide insight and enhance problem solving. The human observer actively processes these visual representations using perceptual and cognitive mechanisms that have evolved over millions of years. The goal of this tutorial is to provide an introduction to these processing mechanisms, and to show how this knowledge can guide the decisions we make about how to represent data visually, how we visually represent patterns and relationships in data, and how we can use human pattern recognition to extract features in the data.
This course will help the student:
Rejuvenated Medical Visualization -- Large-scale, whole-body visualization, visualizing physiology, non-standard imaging and simulations, and cohort studies
SUNDAY, OCTOBER 25
Steffen Oeltze-Jafra, Anders Ynnerman, Stefan Bruckner, Helwig Hauser
Contact: Steffen Oeltze-Jafra (firstname.lastname@example.org)
Medicine is one of the primary drivers of visualization research, and medical visualization (MedViz) is a vibrant and successful field with a tradition spanning decades. Traditionally, much MedViz research has focused on the visualization of a single, uni-modal patient dataset, usually defined on a regular grid in 3D and capturing a selected part of the human anatomy. As a prominent example, volume rendering has been studied extensively, together with advanced lighting simulation.

In recent years, however, the most pressing challenges in MedViz have broadened, not least paralleling new developments in image acquisition, growing data complexity, and advances in medical diagnosis and patient treatment. It is now increasingly common that several datasets are acquired, also at different points in time, and that in-vivo information related to physiology complements the more traditional anatomical information. Different imaging modalities are applied, and whole-body scans facilitate screening for disease and expand the opportunities of forensic pathology. Data may also be measured or computed in a numerical simulation over complex grids, e.g., in ultrasound imaging or in the simulation of blood flow in cerebral and aortic aneurysms. All of this data needs to be integrated with the anatomical scans.

While traditional MedViz usually focuses on data of a single patient, the large data pools acquired in longitudinal cohort studies, for example in epidemiology, involving hundreds to thousands of individuals (the cohort), pose tremendous new challenges. These include the combined visualization of image and non-image data as well as the integrated visualization of heterogeneous data. The effective and efficient interactive exploration of large medical data requires innovative technology and dedicated interaction techniques such as table-top user interfaces and gesture-based interaction.
In this tutorial, we discuss the above-mentioned modern challenges in MedViz, followed by examples of and strategies for the development of new solutions to cope with these challenges with respect to specific (clinical) problems. We explore a variety of advanced MedViz topics. In particular, we discuss the interactive visualization of whole-body medical volume data, visualization techniques addressing the readability problem of ultrasound by enriching the data with other types of medical data, the visualization of more abstract physiological data in its anatomical context, and the interactive visual analysis of heterogeneous image-centric cohort study data. Sufficient room for discussion is also planned as part of this tutorial.
Alexander Wiebel, Tobias Isenberg, Stefan Bruckner, Timo Ropinski
Contact: Alexander Wiebel (email@example.com)
Natural sciences, medicine and engineering are only a small selection of the application domains where volumetric data, continuous as well as scattered, are close to ubiquitous. While the visualization of such data is itself not straightforward, interaction with and manipulation of volumetric data - essential aspects of effective data analysis - pose further challenges. Due to the three-dimensional nature of the data, it is not obvious how to select features, pick positions, segment regions, or otherwise interact with the rendering or the data themselves in an intuitive manner. In this tutorial we will present state-of-the-art approaches and methods for addressing these challenges with a special focus on the users' analysis and interaction tasks, as well as on the application of the methods in a large variety of application domains.
The tutorial will start by reviewing common classes of interaction tasks in volume visualization, motivating the need for direct interaction and manipulation, and describing the difficulties usually encountered. Interaction with visualization traditionally happens in PC-based environments with mouse and 2D displays. The second part of the tutorial discusses specific interaction methods that deal with the challenges in this context. Furthermore, an overview of the range of applications of these techniques is given to demonstrate their utility. The use of alternative paradigms for interaction with volumes is discussed in the third part. Such paradigms, e.g. in the context of touch interfaces or immersive environments, provide novel opportunities for volume exploration and manipulation, but also pose specific challenges themselves. The last part completes the tutorial with a treatment of higher-level interaction techniques that guide users in navigating and exploring the data using automatic or semi-automatic methods for identifying relevant parameter ranges. Such techniques employ additional, sometimes workflow-specific, information to assist in choosing effective volume visualization techniques and related attributes.
Martin Falk, Sebastian Grottel, Michael Krone, Guido Reina
Contact: Sebastian Grottel (firstname.lastname@example.org)
We propose a half-day tutorial that covers fundamental techniques for interactive particle-based visualization. Particle data typically originates from measurements and simulations in various fields such as the life sciences or physics. Often, the particles are visualized directly, that is, by simple representations such as spheres. Interactive rendering facilitates the exploration and visual analysis of the data. With increasing data set sizes in terms of particle numbers, interactive high-quality visualization is a challenging task. This is especially true for dynamic data or abstract representations that are based on the raw particle data. Our intermediate-level tutorial will cover direct particle visualization using simple glyphs as well as abstractions that are application-driven, such as clustering and aggregation. It targets visualization researchers and developers who are interested in visualization techniques for large, dynamic particle-based data. We will focus on GPU-accelerated algorithms for high-performance rendering and data processing that run in real-time on modern desktop hardware. Consequently, we will discuss the implementation of said algorithms and the required data structures to make use of the capabilities of modern graphics APIs. Furthermore, we will discuss GPU-accelerated methods for the generation of application-dependent abstract representations. This includes various representations commonly used in application areas such as structural biology, systems biology, thermodynamics, and astrophysics.
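To give a flavor of the aggregation abstractions mentioned above (this is an illustrative CPU sketch with made-up parameters, not the tutorial's GPU implementation): binning particle positions into a uniform grid yields a density field that can stand in for the raw particles at coarse zoom levels.

```python
import numpy as np

# Illustrative only: aggregate a large particle set into a coarse
# density grid, a common abstraction for rendering at low zoom levels.
rng = np.random.default_rng(42)
particles = rng.random((100_000, 3))  # positions in the unit cube

cells = 16  # grid resolution per axis (arbitrary choice)
# Map each position to an integer cell index per axis.
idx = np.minimum((particles * cells).astype(int), cells - 1)
# Flatten the 3D cell indices and count particles per cell.
flat = np.ravel_multi_index((idx[:, 0], idx[:, 1], idx[:, 2]),
                            (cells, cells, cells))
density = np.bincount(flat, minlength=cells**3).reshape(cells, cells, cells)

print(density.sum())  # every particle lands in exactly one cell
```

A GPU version of this idea, as covered in the tutorial, would perform the binning in parallel (e.g. in a compute shader) so the abstraction can keep up with dynamic data.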
Kenneth Moreland, Alan Scott, David DeMarle
Contact: David DeMarle (email@example.com)
ParaView is a powerful open-source turnkey application for analyzing and visualizing large data sets in parallel. Designed to be configurable, extendible, and scalable, ParaView is built upon the Visualization Toolkit (VTK) to allow rapid deployment of visualization components. This tutorial presents the architecture of ParaView and the fundamentals of parallel visualization. Attendees will learn the basics of using ParaView for scientific visualization with hands-on lessons. The tutorial features detailed guidance on scripting and extending ParaView and an introduction to visualizing the massive simulations run on today’s supercomputers. Attendees should bring laptops to install ParaView and follow along with the demonstrations.
Contact: Theresa-Marie Rhyne (firstname.lastname@example.org)
We examine the foundations of color theory & how these methods apply to building effective visualizations. We define color harmony & demonstrate the application of color harmony to case studies. Case studies include ensemble scientific visualizations, historic & new infographics, correlation in biological data, rainbow color deficiency safe examples, & time series animations. The Pantone Matching System, Munsell Color System and other hue systems are reviewed. The features of ColorBrewer, Adobe’s Color app & Josef Albers’ “Interaction of Color” app are examined. We also introduce “Gamut Mask” & “Color Proportions of an Image” analysis tools. Our tutorial concludes with a hands-on session that teaches how to use online and mobile apps to successfully capture, analyze and store color schemes for future use in visual analytics. This includes evaluating visualizations for color deficiencies using Coblis. These color suggestion tools are available online for your continued use in creating new visualizations. Please bring small JPEG examples of your visualizations for performing color analyses during the hands-on session.
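As a minimal sketch of one idea behind color harmony (the helper and hue offsets below are illustrative assumptions, not material from the tutorial): complementary and triadic schemes can be derived by rotating a base color's hue around the color wheel.

```python
import colorsys

def rotate_hue(rgb, degrees):
    """Return rgb with its hue rotated by `degrees` (illustrative helper)."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    return colorsys.hls_to_rgb((h + degrees / 360.0) % 1.0, l, s)

base = (0.8, 0.2, 0.2)                                # a red
complement = rotate_hue(base, 180)                    # complementary: opposite hue
triad = [rotate_hue(base, d) for d in (0, 120, 240)]  # triadic: three equidistant hues

print(complement)  # -> roughly (0.2, 0.8, 0.8), a cyan
```

Tools like Adobe's Color app automate exactly this kind of hue-wheel geometry, alongside analogous, split-complementary, and other harmony rules.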
Contact: Tamara Munzner (email@example.com)
This introductory tutorial will provide a broad foundation for thinking systematically about visualization systems, built around the idea that becoming familiar with analyzing existing systems is a good springboard for designing new ones. The major data types of concern in visual analytics, information visualization, and scientific visualization will all be covered: tables, networks, and sampled spatial data. This tutorial is focused on data and task abstractions, and the design choices for visual encoding and interaction; it will not cover algorithms. No background in computer science or visualization is assumed.