Most of us agree that evaluation is a critical aspect of any visualization research paper. Evaluation takes many forms: performance-based approaches, such as measuring computational speed and memory requirements, and human-centered approaches, such as user studies and domain-expert reviews, to name a few. To demonstrate that a piece of visualization work yields a contribution, it must undergo some type of evaluation. The peer-review process itself is a form of evaluation: when we referee a research paper, we evaluate whether the visualization work being described has been evaluated adequately and appropriately. In an increasing number of cases, papers are rejected because the evaluation was judged inadequate, even though the technical or design contributions are acknowledged. However, opinions differ as to what constitutes an adequate or appropriate evaluation when it comes to visualization. In this panel, we discuss precisely this topic: What constitutes an adequate and appropriate evaluation of a piece of visualization work?
While interpretations of the term big data vary across stakeholders, there is no denying that we live in the era of big, complex, and rich data. This presents obvious opportunities for visualization to tap into the analytical value of that data. As a relatively nascent field compared to its exploratory data analysis siblings (statistics, machine learning, data mining, and so on), visualization needs introspection into how it can best fit into the big data analysis pipeline. In this pipeline, infrastructure building for big data has received most of the attention, while much less focus has been devoted to devising ways to improve the interpretation of such complex data. We believe that, by introspecting on the role of visualization in the big data era, we can achieve a two-fold purpose: i) highlight the visualization-specific challenges of handling big data and chart out a roadmap for the immediate future, and ii) establish visualization as a first-class citizen in the big data pipeline and thereby make a significant impact on the state of the art in the interpretation and analysis of such complex data.
Over the last twenty-five years, visualization software has evolved into robust frameworks that can be used for research projects, for rapid prototype development, or as the basis of richly featured end-user tools. In this panel, we describe upcoming challenges facing visualization software in five categories: programming models for future architectures, maximizing performance on future architectures, application architecture and data management, data models, and rendering. Further, for each of these categories, we describe where evolutionary advances are sufficient to meet the visualization software challenges and posit areas in which revolutionary advances are required.