Every visualization researcher and practitioner knows the painful experience of a beautifully designed network layout breaking down once the input graph scales up to realistic node and edge counts. The resulting "hairball" suffers from clutter and over-plotting to an extreme that renders it unusable for any practical purpose. Since researchers have had this experience for decades, various approaches have been developed at all stages of the visualization pipeline to alleviate this problem. They range from filtering and clustering techniques on the data level to modern GPU-based techniques on the image level. This tutorial gives an overview of these techniques and discusses their applicability and interplay in different application scenarios. In doing so, it provides a unique problem-oriented perspective on the field of scalable network visualization, an area of active research today more than ever. The tutorial serves mainly to further the understanding of network visualization beyond the point of creating an initial layout. It thus caters to an intermediate-level audience with some basic knowledge of graph layout and visualization, but it will certainly present an interesting cross-section through the larger domains of network visualization and graph drawing for established researchers as well.
Flow visualization is a central topic in scientific visualization and has been a focused research area for many years. New challenges have arisen as the size and complexity of flow field data continue to grow at astonishing rates. For instance: How do we strike a balance between complexity and clarity when visualizing large and complex 3D flow fields? How do we design scalable solutions for line integral convolution and particle tracing? And how do we detect recurrent flow dynamics through topological analysis? Traditional flow visualization solutions were not designed with big data in mind; those algorithms and techniques need to be reexamined, or new solutions proposed, to handle large-scale flow fields.
In an effort to survey recent progress in addressing the above set of diverse challenges, our tutorial covers the following topics: (i) streamlines in 3D: techniques beyond seed placement; (ii) texture-based flow visualization; (iii) graph-based analysis of large-scale flow fields; (iv) foundations of data-parallel particle advection; and (v) vector field topology in flow analysis and visualization. The goal of this tutorial is to inform visualization researchers and practitioners about the state-of-the-art technologies that have greatly enriched the toolset for analyzing and visualizing large-scale flow field data sets.
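At the core of the particle advection techniques in topic (iv) sits a simple sequential kernel: numerically integrating a particle position through the vector field. The sketch below, in Python/NumPy, uses classic fourth-order Runge-Kutta on an analytic circular field that stands in for sampled flow data; the field, step size, and step count are illustrative choices, not part of any specific method covered in the tutorial.

```python
import numpy as np

def velocity(p):
    # Analytic circular flow v = (-y, x); a real pipeline would
    # interpolate a sampled vector field here instead.
    x, y = p
    return np.array([-y, x])

def advect_rk4(p0, dt, steps):
    """Trace one particle with classic 4th-order Runge-Kutta."""
    path = [np.asarray(p0, dtype=float)]
    for _ in range(steps):
        p = path[-1]
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        path.append(p + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(path)

# ~one full revolution around the origin (628 steps of 0.01 ~ 2*pi)
path = advect_rk4((1.0, 0.0), dt=0.01, steps=628)
# The circular field preserves the distance to the origin, so the
# traced radius is a quick sanity check on integration accuracy.
radii = np.linalg.norm(path, axis=1)
```

Data-parallel variants distribute many such particles (or blocks of the field) across processors; the numerical core per particle stays the same.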
Providing an accurate, comprehensive picture of the data is a cardinal virtue of good practice in visual information design. "Graphical excellence begins with telling the truth about the data" (Tufte). Yet it still poses difficulties for visualization makers all too often, usually due merely to unawareness or a lack of understanding of the basic design rules that help maintain high graphical integrity. Familiarizing oneself with the practical rules of truthful, unambiguous data presentation therefore remains in the best interest of both the authors and the viewers of data visualizations.
In this interdisciplinary hands-on tutorial, the students will learn the basics of good visual design practice necessary to create clear, coherent, unequivocal and impactful visualizations. Through a series of lectures and case study demonstrations, the students will learn the rules of graphical integrity, become familiar with the most common visualization traps and come to appreciate the importance of data visualization accuracy. Following this primer, the students will engage in a hands-on sketching activity, in which they will explore and exercise various ways of controlling and distorting the meaning of data by manipulating its visual presentation. Equipped with this valuable first-hand knowledge of the mechanisms of visual design misrepresentation, the students will be able to make informed and better design decisions in their own visualization work. In the concluding group critique, the students will also learn to critically judge the accuracy and effectiveness of visualizations made by themselves as well as by their peers.
This tutorial continues and expands on the VIS tutorial "Good Practice of Visual Communication Design…", which was very positively received at the 2012 conference in Seattle, WA. Digital handouts and a presentation synopsis will be provided to all tutorial participants.
This tutorial will introduce participants to Python tools for data analysis and visualization. The tools covered include NumPy, Numba, Blaze, Pandas, Bokeh and Matplotlib. Together they cover the whole data analysis pipeline: data ingestion (Blaze, NumPy), manipulation (Numba, NumPy, Pandas), visualization (Bokeh and Matplotlib) and publishing to a web service (Bokeh). A brief introduction to the Python language itself will also be included (though some Python experience is strongly suggested). By the end of this workshop, a participant can expect to be able to load a large dataset, perform basic analysis and construct web-enabled interactive visualizations.
If you are planning on attending this workshop, please visit our GitHub repo and download the listed software to reduce setup time at the workshop. (You can still attend if you don't download the software in advance, but you can get started faster if you do.)
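As a rough illustration of the ingest, manipulate, and visualize stages of that pipeline, here is a minimal self-contained sketch using Pandas and Matplotlib. The dataset is synthesized inline for the example's sake; a real workflow would read a large file with `pd.read_csv` and could publish an interactive chart through Bokeh instead of saving a static image.

```python
import io

import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt
import pandas as pd

# Ingest: a tiny inline CSV stands in for a large on-disk dataset.
csv = io.StringIO(
    "year,region,sales\n"
    "2019,east,10\n2019,west,14\n"
    "2020,east,12\n2020,west,18\n"
)
df = pd.read_csv(csv)

# Manipulate: group and aggregate with Pandas.
totals = df.groupby("year")["sales"].sum()

# Visualize: a static Matplotlib bar chart rendered to a PNG buffer.
ax = totals.plot(kind="bar", title="Sales per year")
buf = io.BytesIO()
ax.get_figure().savefig(buf, format="png")
plt.close("all")
```

The column names and figures here are invented for demonstration; only the tool choices come from the tutorial description.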
The objective of this half-day introductory tutorial is to increase awareness of what constitutes a sound scientific approach to evaluation in Visualization and to provide basic theoretical knowledge of, and practical skills in, current research practice of usability and evaluation. The content presents the current challenges and trends related to how to characterize and optimize the complex interactive visual displays present in Visualization today. It will cover the most basic and relevant issues to consider during the different phases of evaluation: planning, design, execution, analysis of results and reporting. The content outlines how to proceed to achieve high-quality results and points out common pitfalls and mistakes that threaten them. Taking part in this tutorial will not train a novice participant to be fully capable of designing and conducting an evaluation study and analyzing its outcome; such a goal would require a substantially larger course. The aim is to introduce the topic and to provide general knowledge about what is important to consider and what resources are available to guide participants in further study of this area. Participants will also learn to better judge the relevance and quality of a publication presenting an evaluation when reviewing such work, since the same rules apply.
In a growing number of application areas, a subject or phenomenon is investigated by means of multiple datasets being acquired over time (spatiotemporal), comprising several attributes per data point (multi-variate), stemming from different data sources (multi-modal) or multiple simulation runs (multi-run/ensemble) [KH13]. Interactive visual analysis (IVA) comprises concepts and techniques for user-guided knowledge discovery in such complex data. Through a tight feedback loop of computation, visualization and user interaction, it provides new insight into the data and serves as a vehicle for hypothesis generation or validation. It is often implemented via a multiple coordinated view framework where each view is equipped with interactive drill-down operations for focusing on data features. Two classes of views are integrated: physical views, such as direct volume rendering, show information in the context of the spatiotemporal observation space, while attribute views, such as scatter plots and parallel coordinates, show relationships between multiple data attributes. The user may drill down into the data by selecting interesting regions of the observation space or attribute ranges, leading to a consistent highlighting of this selection in all other views (brushing-and-linking). Three patterns of exploratory/analytical procedure can be accomplished in this way. In a feature localization, the user searches for places in the 3D/4D observation space where certain attribute values are present. In a multi-variate analysis, relations between data attributes are investigated, e.g., by searching for correlations. In a local investigation, the user inspects the values of selected attributes with respect to certain spatiotemporal subsets of the observation space.
In this tutorial, we discuss examples of successful applications of IVA to scientific data from various fields: climate research, medicine, epidemiology, and flow simulation / computation, in particular for automotive engineering. We base our discussions on a theoretical foundation of IVA which helps attendees transfer the subject matter to their own data and application area. In the course of the tutorial, the attendees will become acquainted with techniques from statistics and knowledge discovery which have proved particularly useful for specific IVA applications. The tutorial further comprises an overview of off-the-shelf IVA solutions, which may be particularly interesting for visualization practitioners. It concludes with a summary of the gained knowledge and a discussion of open problems in IVA of scientific data.
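At its core, the brushing-and-linking mechanism described above reduces to a boolean selection mask shared between views: a brush in one view defines the mask, and every linked view highlights the same rows. A minimal sketch with Pandas on a synthetic multi-variate dataset (the attribute names and thresholds are illustrative, not taken from any application in the tutorial):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
# Toy multi-variate data: each row is one point in the observation space.
df = pd.DataFrame({
    "x": rng.uniform(0, 10, 200),               # spatial coordinate
    "temperature": rng.uniform(0, 40, 200),     # attribute 1
    "pressure": rng.uniform(900, 1100, 200),    # attribute 2
})

# "Brushing": the user selects an attribute range in one view ...
brush = (df["temperature"] > 30) & (df["pressure"] < 1000)

# "Linking": the same boolean mask drives highlighting in every other
# view, e.g. by mapping rows to focus vs. context colors.
df["highlight"] = np.where(brush, "selected", "context")
selected = df[df["highlight"] == "selected"]
```

In a coordinated-view system, each view would re-render using the shared `highlight` column, which keeps all views consistent with the current brush.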
A full-day, intermediate-level tutorial covering the up-and-coming topics of "Mobile and Cloud Web-Based Graphics and Visualization." The complete rationale and justification for this tutorial is given in the Introduction section below. The organization of the tutorial is provided in the Tentative Topics section, following the Introduction. Finally, a short biography of the instructor is provided.
This tutorial will provide a modern view of visualization and the necessary background to understand the issues in the development and usage of visualization and visual analytics systems. We will give a brief history and overview of data visualization, of analysis, of their integration, and of the role of reasoning, all from a modern viewpoint. We will examine systems that integrate visualization and analysis and explore what a system in 2020 would look like. Many slides, videotapes and demonstrations will be provided.
Video data, generated by the entertainment industry, security and traffic cameras, video conferencing systems, video emails, and so on, is particularly time-consuming for human beings to process. The field of visualization has addressed this challenging problem with a collection of techniques that transform videos into different visual forms in order to reduce the time required to watch them. In this tutorial, we will introduce the concept of video visualization and several elementary techniques for processing and rendering a video into a compact visual representation. We will describe a family of visual representations, a set of insights gained from empirical studies, and a collection of applications.
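One elementary compact representation in this spirit reduces each frame to a summary value and lays the results out along a time axis, sometimes called a "video barcode." The NumPy sketch below synthesizes a tiny video and collapses it to such a strip; the frame sizes, colors, and reduction (per-frame mean color) are illustrative choices, not a specific technique from the tutorial.

```python
import numpy as np

# Synthesize a tiny "video": 60 RGB frames of 32x32 pixels that fade
# from blue to red over time.
t = np.linspace(0.0, 1.0, 60)
frames = np.zeros((60, 32, 32, 3))
frames[..., 0] = t[:, None, None]        # red channel ramps up
frames[..., 2] = 1.0 - t[:, None, None]  # blue channel ramps down

# Compact representation: reduce each frame to its mean color, then
# lay the colors out left-to-right, one column per frame.
barcode = frames.mean(axis=(1, 2))       # shape (60, 3): one color/frame
summary = np.repeat(barcode[np.newaxis, :, :], 32, axis=0)  # 32x60 strip
```

Reading the strip left to right conveys the video's temporal color structure at a glance, without playing it back.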