Accepted Tutorials
Here is the list of accepted tutorials.
- Visualization Analysis and Design
- Color Basics for Creating Visualizations
- ParaView Tutorial
- How to Make Your Empirical Research Transparent
- Theory and Application of Visualization Color Tools and Strategies
- Scientific Visualization in Houdini: How to use Visual Effects Software for a Cinematic Presentation of Science
- Topological Data Analysis Made Easy with the Topology ToolKit, What is New?
- Artifact-Based Rendering: VR Visualization by Hand
Visualization Analysis and Design
Sunday, October 25: 8:00am-11:25am
Tamara Munzner, University of British Columbia
This introductory tutorial will provide a broad foundation for thinking systematically about visualization systems, built around the idea that becoming familiar with analyzing existing systems is a good springboard for designing new ones. The major data types of concern in visual analytics, information visualization, and scientific visualization will all be covered: tables, networks, and sampled spatial data. The tutorial focuses on data and task abstractions and on the design choices for visual encoding and interaction; it will not cover algorithms. No background in computer science or visualization is assumed.
Color Basics for Creating Visualizations
Sunday, October 25: 8:00am-11:25am
Theresa-Marie Rhyne, Visualization Consultant
We provide an overview of the basics of color theory and demonstrate how to apply these concepts to visualization. Our tutorial is intended for a broad audience of individuals interested in understanding the mysteries of color. Our journey includes an introduction to the concepts of color models and harmony, a review of color vision principles, definitions of color gamuts, spaces, and systems, and demonstrations of online and mobile apps for performing color analyses of digital media. Freely available commercial and research tools for your continued use in color selection and color-deficiency assessment are highlighted. The tutorial includes concepts from art and design, such as extending the fundamentals of the Bauhaus into data visualization, as well as overviews of color perception and appearance principles for vision. Emerging trends in automated color selection and deep-learning colorization are also highlighted.
ParaView Tutorial
Sunday, October 25: 8:00am-11:25am
John M Patchett, Los Alamos National Laboratory
Ethan Stam, Los Alamos National Laboratory
Dan Lipsa, Kitware
Michael Migliore, Kitware
Kenneth Moreland, Sandia National Laboratories
ParaView is a flexible, extensible, open-source visualization tool that can operate both serially and in distributed-memory parallel mode. It can be used to visualize and analyze both large and small data, interactively and in batch. In this tutorial, we will explain the tool's pipeline-based design and show attendees how they can get meaningful visualizations from their data. We will cover frequently used ParaView workflows, including data wrangling, common filters for transforming data for visualization, choosing representations, exploring the rendering capabilities, and manipulating color maps. Attendees will leave the tutorial confident in their ability to run ParaView and find solutions to their problems.
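As a taste of the pipeline model described above, here is a minimal sketch of ParaView's Python interface (run with pvpython), assembling a reader, a contour filter, and a colored rendering. It assumes ParaView's bundled disk_out_ref.ex2 sample dataset, which carries a point array named 'Temp'; substitute your own file and array names.

    from paraview.simple import *

    # Build a small pipeline: reader -> contour filter.
    reader = OpenDataFile('disk_out_ref.ex2')   # sample dataset shipped with ParaView
    contour = Contour(Input=reader)
    contour.ContourBy = ['POINTS', 'Temp']      # contour the 'Temp' point array
    contour.Isosurfaces = [400.0]

    # Show the result, color it by the same array, and save an image.
    display = Show(contour)
    ColorBy(display, ('POINTS', 'Temp'))
    Render()
    SaveScreenshot('contour.png')

The same pipeline can be built interactively in the ParaView GUI, whose Python trace feature records GUI actions as scripts like this one.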
How to Make Your Empirical Research Transparent
Sunday, October 25: 11:50am-3:15pm
Steve Haroz, Inria
Two fundamental tenets of scientific research are that it can be scrutinized and built upon. Both require that the collected data, supporting materials, and decision timing be shared so that others can examine, reuse, and extend them. This tutorial will teach you how to share the artifacts of your own research. You will learn about the benefits gained by making different components or stages of research transparent, including decision timing, data-collection procedures, raw data, and analysis code. For each, there will be tips and tricks as well as a walkthrough of how to share your work using the Open Science Framework. Bringing your laptop is highly encouraged.
We will also discuss what to do (and what you do not need to do) when reviewing a paper with open materials. You will hopefully walk away with an improved ability to make your own research more empirically replicable and computationally reproducible. These skills can enable you to have a greater impact on the field by facilitating reuse and further development of your ideas by both other researchers and those who wish to apply your work.
Theory and Application of Visualization Color Tools and Strategies
Sunday, October 25: 11:50am-3:15pm
Francesca Samsel, University of Texas at Austin
Danielle Albers Szafir, University of Colorado
Karen Schloss, University of Wisconsin-Madison
In this tutorial, we will discuss the theory and usage of color encoding design for visualization, including state-of-the-art strategies and tools. Several new color palette and colormap construction tools, such as Colorgorical, Color Crafter, ColorMoves, and the CCCtool, have recently been released. This tutorial is designed to help participants understand these tools' features, strengths, and constraints so that they can select the tool that best aligns with their needs. We will first discuss principles, decisions, and mechanisms for designing effective color encodings, then explore the tools in hands-on sessions, creating color palettes and colormaps and testing them on data. We will close with a discussion of how future tools may help address open challenges in using color for visualization.
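To make the hands-on portion concrete, here is a minimal sketch, assuming Python with numpy and matplotlib, of turning a hand-picked palette (for example, hex values exported from a tool such as Colorgorical or ColorMoves) into a continuous colormap and testing it on synthetic data; the palette values below are hypothetical.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colors import LinearSegmentedColormap

    # Hypothetical palette: hex colors exported from a palette-design tool.
    palette = ['#0b3954', '#087e8b', '#bfd7ea', '#ff5a5f', '#c81d25']
    cmap = LinearSegmentedColormap.from_list('my_palette', palette)

    # Test the colormap on a simple 2D scalar field.
    x, y = np.meshgrid(np.linspace(-3, 3, 256), np.linspace(-3, 3, 256))
    z = np.exp(-(x**2 + y**2) / 4) * np.cos(2 * x)
    plt.imshow(z, cmap=cmap)
    plt.colorbar()
    plt.show()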
Scientific Visualization in Houdini: How to use Visual Effects Software for a Cinematic Presentation of Science
Monday, October 26: 8:00am-11:25am
Kalina Borkiewicz, University of Illinois at Urbana-Champaign
AJ Christensen, University of Illinois at Urbana-Champaign
Cinematic scientific visualization makes three-dimensional scientific phenomena approachable for mass audiences by using the artistic language of film, including camera choreography, lighting design, comprehensive scenic environments, and more. Cinematic scientific visualizations are an engaging way for domain experts to communicate niche information to the public, to refute widely held misconceptions, and to inspire the scientists of the future. Science films that feature these visualizations are screened at science centers to millions of viewers over spans of ten or more years and bridge different languages and cultures. They are shared widely on social media, featured regularly in television programs, and contribute to the success of public lectures.
If you are a domain expert looking to share your data more widely, or a visualization designer who has focused on more analytical tools, what better way to get started with a Hollywood style than by using Hollywood tools? This tutorial will introduce participants to Houdini, a visual effects software package that can generate cinematic-quality data visualizations with ease and efficiency. It is used and appreciated by most major animation and visual effects film studios for its procedural architecture, its modular design, and its out-of-the-box rendering algorithms, all of which are important features for ease of use in data visualization. Houdini is a general-purpose image-making package that differs from most traditional scientific visualization tools in that it is optimized for look development and design functionality.
In this tutorial, participants will learn how to use Houdini to create a production-quality visualization from start to finish. They will translate a tornado cloud simulation into a Houdini-friendly format using Python (sketched below), then ingest it into Houdini, transform it, and add an environment, a camera, and a light source that mimics a sunset. Participants will be able to experiment with their own transfer functions, camera movement, and lighting design. Several pre-made Houdini sample scenes will be explored to show how to create derivative data, how to turn 2D images into 3D height fields, and how to manipulate polygons, point clouds, and volumes.
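The exact translation step depends on the source data; as one hedged illustration, the sketch below flattens a NumPy simulation volume into a CSV point table of the kind Houdini can ingest (for instance, through its Table Import node). The file name, threshold, and column names are hypothetical placeholders.

    import csv
    import numpy as np

    density = np.load('tornado_density.npy')      # hypothetical (nx, ny, nz) volume
    nx, ny, nz = density.shape

    with open('tornado_points.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['P.x', 'P.y', 'P.z', 'density'])
        for i, j, k in np.ndindex(nx, ny, nz):
            value = float(density[i, j, k])
            if value > 0.01:                      # skip near-empty voxels
                writer.writerow([i, j, k, value])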
Topological Data Analysis Made Easy with the Topology ToolKit, What is New?
Monday, October 26: 8:00am-11:25am
Martin Falk, Linköping University
Christoph Garth, Technische Universität Kaiserslautern
Charles Gueunet, Kitware
Pierre Guillou, Sorbonne Université
Attila Gyulassy, University of Utah
Lutz Hofmann, Heidelberg University
Christopher P Kappe, Technische Universität Kaiserslautern
Joshua A Levine, University of Arizona
Jonas Lukasczyk, Technische Universität Kaiserslautern
Julien Tierny, Sorbonne Université
Jules Vidal, Sorbonne Université
This tutorial presents topological methods for the analysis and visualization of scientific data from a user's perspective, using the Topology ToolKit (TTK), an open-source library for topological data analysis. Topological methods have gained considerably in popularity and maturity over the last twenty years, and success stories of established methods have been documented in a wide range of applications (combustion, chemistry, astrophysics, material sciences, etc.) with both acquired and simulated data, in both post-hoc and in-situ contexts.
This tutorial aims to fill a gap by providing a beginner's introduction to topological methods for practitioners, researchers, students, and lecturers. In particular, instead of focusing on theoretical aspects and algorithmic details, it focuses on how topological methods can be useful in practice for concrete data analysis tasks such as segmentation, feature extraction, or tracking, and it describes in detail how to achieve these tasks with TTK. In comparison to the last two iterations of this tutorial, this iteration emphasizes the features of TTK that now appear to be the most popular, as well as the latest additions to the library.
First, we will provide a general introduction to topological methods and their application in data analysis, along with a brief overview of TTK's main entry point for end users, namely ParaView. Second, we will proceed to a hands-on session demoing the main features of TTK as well as its most recent additions. Third, we will present advanced usages of TTK, including the use of TTK with Python, the development of a new TTK module, and the integration of TTK into a pre-existing system.
Presenters of this tutorial include experts in topological methods, core authors of TTK, and active users from academia and industry. A large part of the tutorial will be dedicated to hands-on exercises, and a rich material package will be provided to participants. The tutorial mostly targets students, practitioners, and researchers who are not necessarily experts in topological methods but who are interested in using them in their daily tasks. We also target researchers already familiar with topological methods who are interested in using or contributing to TTK.
We kindly ask potential attendees to optionally pre-register at the following address so that we can reach out to them ahead of the tutorial with information updates (for instance, last-minute updates or instructions for downloading the tutorial material package): https://forms.gle/CvrY3oWZB9hWSQJb9
Tutorial web page (including all material, TTK pre-installs in virtual machines, code, data, demos, video tutorials, slides, etc.): https://topology-tool-kit.github.io/ieeeVisTutorial.html
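As a preview of the Python usage covered in the third part, here is a minimal hedged sketch of driving TTK through ParaView's pvpython. It assumes a TTK-enabled ParaView build; the input file and scalar field name are hypothetical, and filter and property names can vary across TTK versions.

    from paraview.simple import *

    data = OpenDataFile('scalar_field.vti')        # hypothetical input dataset
    diagram = TTKPersistenceDiagram(Input=data)    # TTK filter exposed in ParaView
    diagram.ScalarField = ['POINTS', 'scalars']    # assumes a point array 'scalars'
    SaveData('persistence_diagram.vtu', diagram)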
Artifact-Based Rendering: VR Visualization by Hand
Monday, October 26: 11:50am-3:15pm
Daniel F. Keefe, University of Minnesota
Francesca Samsel, University of Texas at Austin
Bridger Herman, University of Minnesota
Artifact-Based Rendering (ABR) is a new approach to designing immersive, data-driven visualizations entirely from physical materials. Introduced in last year's technical program, ABR is built on the theory that harnessing the richness of nature and traditional artistic materials can help us create more effective data-driven visualizations by expanding the visual language available to us in digital space. Participants will arrive to see tables with clay, paints, drawing materials, and more. The tutorial consists of four sections: 1) an introduction to ABR concepts and tools, 2) active physical crafting and then digitizing of artifacts, 3) attaching the artifacts to example datasets, and 4) a VR/slideshow presentation of the visualizations created, with reflections on the experience and future potential.