A fundamental question in visualization is what constitutes a “good” visualization. A related question is whether one visualization is better than another. In general, these hard questions are addressed by running user studies. However, evaluating visualizations with user studies a posteriori, in an inductive approach, is neither sufficient nor efficient. Ideally, we would like models that not only define what a good visualization is but also tell us how to construct one. Historically, general theories have been born from the elimination and/or unification of competing and complementary theories that emerged from specific domains. Clearly, we need more theories of this kind in visualization. In this panel we will discuss example theories of visualization and ponder how they relate to one another.
Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces. One key aspect that separates visual analytics from other related fields (InfoVis, SciVis, HCI) is the focus on analytical reasoning. While the final products generated from an analytical process are of great value, research has shown that the processes of the analyses themselves are just as important, if not more so. These processes not only contain information on individual insights discovered, but also on how users arrive at these insights. This area of research, which focuses on understanding a user’s reasoning process through the study of their interactions with a visualization, is called analytic provenance, and it has demonstrated great potential to become a foundation of the science of visual analytics. This panel builds upon a successful CHI 2011 workshop to present the current state of analytic provenance tools, the needs of users, and perspectives on areas for future work, ending with a call to action for further research. The goal of the panel is to draw attention to this fertile research area and to elicit interest and critiques from the research community at large.
In his "How to Run a Papermill" essay, J. Woodwark stated: "In technical journals, [...] there is a special procedure in place by which your paper is vetted by the editor of the journal -- usually a cynical person -- who sends it out to a few cronies to demolish if they can. This is called refereeing, and the tougher it is (folklore has it) the better is the journal and the more -- not fewer -- submissions it receives."
The overall goal of this panel is to dispel certain myths associated with journal publications (such as moderate impact factors or unacceptably long timelines to publication), while starting a dialogue between the main visualization journal editorial boards and the visualization community at large. The panelists will present and discuss five major visualization journal venues available to researchers for disseminating their work. The goal is to inform the visualization audience of challenges from both sides and to encourage a discussion of how to optimize the publication and editing process. Ultimately, we hope to connect people and to promote interaction between them. To that end, this panel is one part of the larger, more ambitious VisWeek Compass agenda.
Each panelist will have up to 10 minutes to present and quantitatively compare their journal with the others in terms of mission, scope, impact factor, refereeing procedure, number of papers published, acceptance rate, and publication timeline, in order to establish the context for the audience. The rest of the time will be open for questions. Questions may also be submitted electronically, in advance or during the panel, through the VisWeek Compass wiki (http://vis.cs.pitt.edu/vis11/).
Over the past decade, there has been a concerted effort within the computational science and engineering (CS&E) community to articulate both the principles and the implementation of validation and verification (V&V) for numerical simulation. It is not that no V&V was accomplished within computational engineering prior to this effort, but rather that the community identified a need to assert a common language for the understanding and testing of computational algorithms and their implementations. By defining a common language and articulating a paradigm for critical examination, comparison, and testing, the CS&E community has attempted to generate a culture of V&V.
As visualization is the lens through which scientists examine their data, it too should undergo the same rigorous V&V analysis as other components of the simulation science pipeline. As in the CS&E community, it is not that this is not done in practice, but rather that there is not necessarily a common language or coherency of perspective that unites the visualization community into a common culture of V&V. The purpose of this panel is to discuss what this common language and paradigm might look like within visualization. Following the lead of earlier work calling for such a culture: what might “Verifiable Visualizations” look like, and what makes them different from what is already done?
The literature and practice in the areas of information visualization, graphics and information display, and visual facilitation for thinking and strategy are rapidly expanding. The various fields of visualization are diverse and exciting, generating considerable enthusiasm among practitioners as applications spread to different disciplines and practice domains, including public policy-making and management. Scholars and practitioners in information visualization have a strong user orientation and, more generally, a conviction that better data, linked data, and better representations will inform and improve decision-making and policy-making. However, with the exception of work in the security and crisis management domains, there has been little consideration of how visual representations compete with other streams of information and types of visualization for the attention of policy-makers, often in highly contested, stressful circumstances with high flows of information. Despite evidence that well-presented visualizations can inform sense-making, there is little, if any, discussion of how the products of any of the visualization domains would fit in with, enhance, or compete with other forms of information used in policy-making. Conversely, the literature on policy and public management has not begun to explore the potential of visualization for improving analysis, advising, and engagement.
The purpose of this panel is to explore these questions, to consider the possibility of linking theorizing in the field of information visualization with frameworks developed in the fields of knowledge utilization and public policy development, and to set an agenda for future research and theoretical development. The format is designed to stimulate dialogue, rather than have panellists talk “at” the audience.