2012 IEEE VIS Tutorials

TUTORIAL: Natural Language Processing for Text Visualization
Date & Time : Sunday, October 14 02:00 pm - 05:55 pm
Location : Grand Ballroom B
Contributors: Daniela Oelke, Saeedeh Momtazi, Daniel A. Keim


Large amounts of information are available not in structured form but as text. Automatic text analysis techniques are thus an important means of dealing with this kind of data. However, due to the impressive flexibility and complexity of natural language, automatic techniques reach a limit where the analysis questions require background knowledge or a thorough understanding of the semantics. Consequently, there is growing interest in visual analytics techniques that use visualization methods to incorporate the user into the process, thereby helping to bridge this semantic gap.

In contrast to structured data, text cannot be visualized directly but requires at least a preprocessing step with suitable automatic techniques. Complex tasks require a tight connection between the automatic and the visual techniques, as suggested by the visual analytics pipeline. However, the effectiveness of such systems depends heavily on the visualization researcher's ability to make an informed choice about which text analysis algorithms to use and how to integrate them into the visual analytics system.

The goal of this tutorial is to equip interested researchers of the visualization community with the necessary competencies in natural language processing (NLP). We will first introduce the basic concepts and techniques of automatic text processing, including stemming, part-of-speech tagging, parsing, topic modeling, concept representation, and information extraction. In the second part we will survey state-of-the-art NLP components that are freely available in the NLP research community and can be leveraged for the development of powerful visual analytics tools. The tutorial will conclude with application examples that illustrate the usage of the introduced concepts.
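As a rough illustration of the preprocessing steps named above, the sketch below implements naive tokenization and suffix-stripping stemming. The helper names and suffix rules are our own simplified assumptions for illustration, not the Porter algorithm; real systems would use an established NLP library.

```javascript
// Naive tokenizer: lower-case the text and split on non-letter characters.
function tokenize(text) {
  return text.toLowerCase().split(/[^a-z]+/).filter(t => t.length > 0);
}

// Toy suffix-stripping stemmer (a deliberate simplification of real stemmers).
const SUFFIXES = ['ing', 'edly', 'ed', 'ly', 'es', 's'];
function stem(token) {
  for (const suffix of SUFFIXES) {
    if (token.length > suffix.length + 2 && token.endsWith(suffix)) {
      return token.slice(0, -suffix.length);
    }
  }
  return token;
}

// Preprocess a document into stemmed tokens -- a typical first step
// before term-frequency visualization or topic modeling.
function preprocess(text) {
  return tokenize(text).map(stem);
}
```

Output of such a step (stemmed token streams) is what text visualizations typically consume, rather than the raw character data.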


TUTORIAL: Perception and Cognition for Imaging, Visualization, Visual Data Analysis and Computer Graphics
Date & Time : Sunday, October 14 08:30 am - 12:10 pm
Location : Grand Ballroom B
Contributor: Bernice E. Rogowitz

Imaging, visualization and computer graphics provide visual representations of data in order to communicate, provide insight and enhance problem solving. The human observer actively processes these visual representations using perceptual and cognitive mechanisms that have evolved over millions of years. The goal of this tutorial is to provide an introduction to these processing mechanisms, and to show how this knowledge can guide the decisions we make about how to represent data visually, how to convey patterns and relationships in data, and how to use human pattern recognition to extract features from the data.

This course will help the attendee:

  • Understand basic principles of spatial, temporal, and color processing by the human visual system.
  • Explore basic cognitive processes, including visual attention and semantics.
  • Develop skills in applying knowledge about human perception and cognition to interactive visualization and computer graphics applications.


TUTORIAL: Visualizing data in R and ggobi
Date & Time : Sunday, October 14 08:30 am - 05:55 pm
Location : Grand Ballroom A
Contributors: Di Cook, Heike Hofmann, Hadley Wickham

R is an open-source statistical programming environment. It is widely used by academic, industry and government statisticians and is becoming increasingly popular in many applied domains. ggobi is an open-source interactive graphics package for visualizing high-dimensional data. 
In this full-day tutorial, you'll learn about:

  • Extracting knowledge from data by making plots using the ggplot2 package in R.
  • Approaches to visualization from a different tradition, one that embraces the study of variation and variability.
  • The use of a command-line interface that provides vast flexibility but requires that users are comfortable with high-level programming.
  • Connecting to R to take advantage of cutting-edge statistical and machine learning models, and linking these with data plots using the rggobi package, to become a data explorer.

The course is split into two parts:

  • In the morning, we will introduce working with R and the ggplot2 package. Why is learning R worth your time? We will showcase some examples of problems that have been addressed using R and ggobi, and the methods that were employed to solve them. Following this, we will delve deeper into the use of a command-line system to create graphics, getting beyond the defaults. The ggplot2 package allows us to easily gain insight into complex relationships in the data and produce graphics that help to uncover the unknown.
  • In the afternoon, we will discuss interactive graphics with ggobi and its link to R, which makes ggobi's interactive capabilities available to R developers. We will highlight how to use modern algorithmic techniques in an interactive setting that allows us to inspect their output more closely and lift the veil from the `black box' reputation that many of these techniques are hampered by. We will finish with a discussion on determining whether structure seen in plots is "real" or consistent with randomness.


TUTORIAL: Connecting the Dots – Showing Relationships in Data and Beyond
Date & Time : Monday, October 15 02:00 pm - 05:55 pm
Location : Grand Ballroom B
Contributors: Marc Streit, Hans-Jörg Schulz, Alexander Lex

Relationships are omnipresent in data, in views, and in how we interact with visualization tools. This tutorial discusses how such relationships can be visually expressed, a process we call linking. The tutorial addresses three questions - what, how, and when to link - in three separate parts. The first part - what to link - explains that not only data, but also views and interactions can be linked. The second part - how to link - discusses how relationships can be visually expressed based on perceptual grouping principles. While we discuss a wide range of methods, we focus on the connectedness grouping principle, specifically on visual links, as it is the most powerful in some respects but also the most difficult to employ. The final part - when to link - gives an introduction to scenarios where linking is beneficial, taking into account issues such as unconventional display devices and collaboration.

TUTORIAL: Introduction to Data Visualization on the Web with D3.js
Date & Time : Monday, October 15 08:30 am - 12:10 pm
Location : Grand Ballroom B
Contributors: Scott Murray, Jeffrey Heer, Jérôme Cukier

A practical introduction to D3.js, a JavaScript-based tool for creating web-based, interactive data visualizations. D3 (Data-Driven Documents) is an extremely powerful tool, but it has a steep learning curve for people with little prior programming experience. This tutorial is intended for beginners familiar with basic concepts of visualization, but with little or no experience with web development technologies such as HTML, CSS, JavaScript, and SVG. We begin with a brief introduction to those technologies, and then quickly introduce D3, a JavaScript library for expressing data as visual elements in a web page. Participants will learn how to load their own data sets into a page, bind the data to page elements, and then manipulate those elements based on data values and user input. No programming experience is required, although some familiarity with markup or scripting languages may be helpful.
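D3's central idea of binding data to page elements can be sketched, independent of the DOM, as a data join that partitions items into enter, update, and exit sets. The `dataJoin` helper below is a conceptual illustration of our own, not D3's actual API:

```javascript
// Conceptual data join in the spirit of D3's selection.data():
// given existing elements and incoming data, each identified by a key
// function, compute which items must be created (enter), kept and
// updated (update), or removed (exit).
function dataJoin(existing, data, key) {
  const existingKeys = new Set(existing.map(key));
  const dataKeys = new Set(data.map(key));
  return {
    enter: data.filter(d => !existingKeys.has(key(d))),
    update: data.filter(d => existingKeys.has(key(d))),
    exit: existing.filter(d => !dataKeys.has(key(d))),
  };
}
```

In actual D3 code, `selection.data(data, key)` performs this partition, and the enter, update, and exit selections are then mapped to SVG element creation, attribute updates, and removal.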

TUTORIAL: Uncertainty and Parameter Space Analysis in Visualization
Date & Time : Monday, October 15 08:30 am - 05:55 pm
Location : Grand Ballroom A
Contributors: Christoph Heinzl, Stefan Bruckner, Eduard Gröller, Alex T. Pang, Hans-Christian Hege, Kristi Potter, Rüdiger Westermann, Torsten Möller

Within the past decades, visualization has advanced into a powerful means of exploring and analyzing data. Recent developments in both hardware and software have enabled previously unthinkable evaluations and visualizations of data of strongly increasing size and complexity.

Providing insight into the available data of a problem alone no longer seems sufficient: Uncertainty and parameter space analyses in visualization are becoming more prevalent and may be found in astronomic, (bio)medical, industrial, and engineering applications. The major goal is to find out how much uncertainty is introduced at each stage of the pipeline - from data acquisition to the final rendering of the output image - and consequently how the desired result (e.g., a dimensional measurement feature) is affected. Domain specialists therefore require effective methods and techniques that help them understand how the data is generated, how reliable the generated data is, and where and why the data is uncertain.

Furthermore, as the problems to investigate become increasingly complex, finding suitable algorithms that provide the desired solution also tends to be more difficult. Additional questions may arise, e.g., how a slight parameter change modifies the result, how stable a parameter is, in which range it is stable, or which parameter set is optimal for a specific problem. Metaphorically speaking, an algorithm for solving a problem may be seen as a path through some rugged terrain (the core problem) ranging from the high grounds of theory to the haunted swamps of heuristics. There are many different paths through this terrain with different levels of comfort, length, and stability. Finding all possible paths corresponds, in our case, to analyzing all possible parameters of a problem-solving algorithm, which yields a typically multi-dimensional parameter space. This parameter space allows for an analysis of the quality and stability of a specific parameter set.
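The stability questions raised above can be probed with a simple one-dimensional parameter sweep: evaluate the algorithm over a grid of parameter values and record how strongly the output reacts to a small perturbation of each value. The sketch below is generic; `runAlgorithm` is a stand-in for an arbitrary problem-solving algorithm, not a method from the tutorial:

```javascript
// Sweep one parameter over [min, max] in `steps` increments and report,
// for each value, the output and its sensitivity to a small perturbation.
// High sensitivity values flag unstable regions of the parameter space.
function sweepParameter(runAlgorithm, min, max, steps, eps) {
  const results = [];
  for (let i = 0; i <= steps; i++) {
    const p = min + (i * (max - min)) / steps;
    const out = runAlgorithm(p);
    // Finite-difference estimate of how strongly the output reacts to p.
    const sensitivity = Math.abs(runAlgorithm(p + eps) - out) / eps;
    results.push({ p, out, sensitivity });
  }
  return results;
}
```

Plotting `sensitivity` against `p` reveals the ranges in which a parameter is stable; multi-dimensional parameter spaces require sweeping several parameters jointly, which is where visual analysis becomes essential.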

Conventional visualization approaches often neglect uncertainty and parameter space analyses. For a long time, uncertainty - if visualized at all - used to be depicted as blurred data. In most cases, however, the uncertainty in the base data is not considered at all and just the quantities of interest are calculated. And even to calculate these quantities of interest, too often an empirically found parameter set is used to parameterize the underlying algorithms, without exploring its sensitivity to changes and without exploring the whole parameter space to find the global or a local optimum.

This tutorial aims to open minds and to encourage looking at our data and the parameter sets of our algorithms with a healthy skepticism. In the tutorial we combine uncertainty visualization and parameter space analysis, which we believe is essential for the acceptance and applicability of future algorithms and techniques. The tutorial comprises six sessions, starting with an overview of uncertainty visualization including a historical perspective, uncertainty modeling, and statistical visualization. The second part of the tutorial is dedicated to structural uncertainty, parameter space analysis, industrial applications of uncertainty visualization, and an outlook on this domain.

TUTORIAL: Good practice of visual communication design in scientific and data visualization
Date & Time : Tuesday, October 16 02:00 pm - 05:55 pm
Location : Willow AB
Contributor: Marek Kultys

Graphic design is a broad discipline of organising visual communication in a meaningful, accessible and engaging way. As many fields of scientific activity have become increasingly reliant on visual information, many scientists may find it beneficial to acquire a better understanding of the diverse means of visual explanation, narration and enquiry in order to communicate their research more efficiently and effectively.

In this interdisciplinary tutorial the students will learn the basics of good graphic design practice necessary to create legible, clear and engaging scientific/data visualisations that make an impact. Through a series of lectures, hands-on exercises and practical demonstrations the students will become familiar with the visual structures, methods and software tools that will be most useful in communicating their own scientific work. Following a sketching activity (quick prototyping of design ideas) and a group critique, students will learn how to apply their new design skills when working with popular office software (Microsoft Office).

TUTORIAL: Color Theory Methods for Visualization
Date & Time : Wednesday, October 17 02:00 pm - 05:55 pm
Location : Willow AB
Contributor: Theresa-Marie Rhyne

We highlight the usage of various color theory methods and tools for creating effective visualizations and visual analytics. Our tutorial features mobile apps for performing color analyses and steps through recent colormap studies performed for an Isotropic Inverse Model Data Visualization, an Uncertainty Visualization technique using a fiber orientation distribution function, and Visualizing Correlation in Molecular Biological Data. This includes evaluations for color vision weaknesses using Vischeck. Various artists' and scientists' theories of color, and how to apply these theories to creating your own digital media work, are reviewed. We also feature the application of color theory to time series animations. Our tutorial includes a hands-on session that teaches you how to build and evaluate color schemes with Adobe's Kuler, Color Scheme Designer, and Color Brewer online tools. We cover the usage of mobile applications like Color Schemer Touch, the myPANTONE mobile app, and the Color Companion mobile app. Each of these color tools is available for your continued use in creating visualizations. Please bring small JPEG examples of your visualizations for performing color analyses during the hands-on session.
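As a minimal sketch of the kind of computation behind such colormap tools, the following interpolates linearly between two anchor colors in RGB. This is a simplification we introduce purely for illustration; perceptually-based color spaces such as CIELAB are generally preferable for visualization colormaps:

```javascript
// Build an n-entry colormap by linearly interpolating between two
// anchor colors, each given as [r, g, b] with components in 0-255.
function linearColormap(c0, c1, n) {
  const map = [];
  for (let i = 0; i < n; i++) {
    const t = n === 1 ? 0 : i / (n - 1);
    map.push(c0.map((v, k) => Math.round(v + t * (c1[k] - v))));
  }
  return map;
}
```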

TUTORIAL: Interactive Visual Analysis of Scientific Data
Date & Time : Thursday, October 18 02:00 pm - 05:55 pm
Location : Willow AB
Contributors: Steffen Oeltze, Helmut Doleisch, Helwig Hauser, Gunther Weber

In a growing number of application areas, a subject or phenomenon is investigated by means of multiple datasets acquired over time (spatiotemporal), comprising several attributes per data point (multi-variate), stemming from different data sources (multi-modal), or from multiple simulation runs (multi-run/ensemble). Interactive visual analysis (IVA) comprises concepts and techniques for user-guided knowledge discovery in such complex data. Through a tight feedback loop of computation, visualization and user interaction, it provides new insight into the data and serves as a vehicle for hypothesis generation and validation. It is often implemented via a multiple coordinated view framework, where each view is equipped with interactive drill-down operations for focusing on data features. Two classes of views are integrated: physical views show information in the context of the spatiotemporal observation space, while attribute views show relationships between multiple data attributes. The user may drill down into the data by selecting interesting regions of the observation space or attribute ranges, leading to a consistent highlighting of this selection in all other views (brushing-and-linking).
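The brushing-and-linking loop described above can be modeled as a shared selection that every view re-reads. The sketch below is our own minimal model, not code from the tutorial: a brush in one view selects record indices by an attribute range, and each other view maps the same shared selection to its own highlight flags:

```javascript
// Brushing: select the indices of all records whose attribute value
// falls within the brushed range. The Set of indices is the shared
// selection state that links all coordinated views.
function brushByRange(records, attribute, min, max) {
  const selected = new Set();
  records.forEach((r, i) => {
    if (r[attribute] >= min && r[attribute] <= max) selected.add(i);
  });
  return selected;
}

// Linking: each view translates the shared selection into per-record
// highlight flags for its own rendering.
function highlightFlags(records, selected) {
  return records.map((_, i) => selected.has(i));
}
```

In a real coordinated-view framework, changing the brush would dispatch an event that triggers every view to recompute its flags and redraw, closing the interaction loop.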

In this tutorial, we discuss examples for successful applications of IVA to scientific data from various fields: automotive engineering, climate research, biology, and medicine. We base our discussions on a theoretical foundation of IVA which helps the tutorial attendees in transferring the subject matter to their own data and application area. This universally applicable knowledge is complemented in a tutorial part on IVA of very large data, which accounts for the tera- and petabytes being generated by simulations and experiments in many areas of science, e.g., physics, astronomy, and climate research. The tutorial further provides an overview of off-the-shelf IVA solutions. It concludes with a summary of the gained knowledge and a discussion of open problems in IVA of scientific data. The tutorial slides will be available before the conference start date at: www.vismd.de/doku.php?id=teaching_tutorials:start.