The IEEE Information Visualization conference (“InfoVis”) solicits research papers on a diverse set of topics related to information visualization. Broadly defined, information visualization is the design of visual data representations and interaction techniques that support human activities where the spatial layout of the visual representation is not a direct mapping of spatial relationships in the data. Papers may contribute novel visual encoding or interaction techniques, evaluations of InfoVis techniques and tools, models or theories related to InfoVis, systems that support visual data analysis, or applications of information visualization to domain-specific problems. None of these guidelines is prescriptive; in fact, many successful papers combine two contribution types, and the very best often combine several.
Please note that topics primarily involving spatial data (such as scalar, vector, and tensor fields) might be a better match for the IEEE SciVis Conference at IEEE VIS. Similarly, topics that clearly focus on visual analytics, e.g., computational solutions facilitated by visual interfaces to support analysis, might be a better match for the IEEE VAST Conference, also at IEEE VIS. The papers chairs reserve the right to move papers between conferences based on their topic and perceived fit.
Research contributions are welcomed across a range of topics including, but not limited to:
- Information visualization techniques
- Interaction techniques for visualizations or for supporting the data analysis process
- Integration of visualizations into the context of use
- Information visualization fundamentals and methodologies
- Applied information visualization
VIS papers typically fall into one of five categories: technique, system, design study, evaluation, or model. We briefly discuss these categories below. Although your main paper type must be specified during the paper submission process, papers can include elements of more than one of these categories; in fact, successful papers sometimes combine elements from several paper types. Please see “Process and Pitfalls in Writing Information Visualization Research Papers” by Tamara Munzner for a more detailed discussion of how to write a successful VIS paper.
Each paper type below is accompanied by several example papers. All of these examples were selected from Best Papers or Honorable Mentions at past InfoVis conferences.
Technique papers introduce novel techniques or algorithms that have not previously appeared in the literature, or that significantly extend known techniques or algorithms, for example by scaling to datasets of much larger size than before or by generalizing a technique to a larger class of uses. The technique or algorithm description provided in the paper should be complete enough that a competent graduate student in visualization could implement the work, and the authors should create a prototype implementation of the methods. Relevant previous work must be referenced, and the advantage of the new methods over that prior work should be clearly demonstrated. There should be a discussion of the tasks and datasets for which this new method is appropriate, as well as of its limitations. Evaluation is likely to strengthen technique papers.
System papers present a blend of algorithms, technical requirements, user requirements, and design that solves a major problem. The system described is both novel and important, and has been implemented. The rationale for significant design decisions is provided, design alternatives and final design choices are discussed, and the system is compared to documented, best-of-breed systems already in use. The comparison includes specific discussion of how the described system differs from, and is in some significant respects superior to, those systems. For example, the described system may offer substantial advances in the performance or usability of visualization systems, or novel capabilities. Every effort should be made to eliminate external factors (such as advances in processor performance, memory sizes, or operating system features) that would affect this comparison. For further suggestions, please review “How (and How Not) to Write a Good Systems Paper” by Roy Levin and David Redell, and “Empirical Methods in CS and AI” by Toby Walsh.
Application/Design Study papers explore the choices made when applying visualization and visual analytics techniques in an application area, for example by relating the visual encodings and interaction techniques to the requirements of the target task. Application/Design Study papers are also the customary venue for describing how visualization techniques were used to glean insights from problems in engineering and science. Although a significant amount of application domain background can be useful to frame the discussion of the target task, the primary focus of the case study must be on the use of visualization in that domain. The results of the study, including insights generated in the application domain, should be clearly conveyed. Describing new techniques and algorithms developed to solve the target problem will strengthen a Design Study paper, but the requirements for novelty are less stringent than in a Technique paper. Where relevant, the paper should clearly describe the underlying parameter space and how it was searched efficiently. The work will be judged by the design lessons learned or the insights gleaned for visualization research, on which future contributors can build. We invite submissions on any application area.
Evaluation papers explore the use of visualization and visual analytics by human users, and typically present an empirical study of visualization techniques or systems. Authors are not necessarily expected to have implemented the systems used in these studies themselves; the research contribution will be judged on the validity and importance of the results rather than on the novelty of the systems or techniques under study. The conference committee appreciates the difficulty and importance of designing and performing rigorous evaluation, including the definition of appropriate hypotheses, tasks, and data sets; the selection of subjects and cases; and data collection, validation, and conclusions. The goal of such efforts should be to move from description toward prediction and explanation.
Carpendale (2008) provides excellent advice to guide research in InfoVis evaluation:
Theory/Model papers present new interpretations of the foundational theory of visualization and visual analytics, including models, typologies, or taxonomies of the design, development, or use of visualization in particular contexts. Implementations are usually not relevant for papers in this category. Papers should focus on fundamental advances in our understanding of how visualization techniques complement and exploit properties of human vision and cognition.