The following workshops went through our submission/review process.
- 3rd Workshop on Accessible Data Visualization (Half day)
- BELIV 2026: Learning What’s True, Doing What’s Right (Full day)
- Considering Context: Approaches for Responsible Data Practices (Half day)
- EduVis: 4th IEEE VIS Workshop on Visualization Education, Literacy, and Activities (Half day)
- 2nd GenAI, Agents, and the Future of VIS (Half day)
- Grand Unified Grammar of Graphics (GUGOG) (Half day)
- SciFi-VIS: Way Out There — How SciFi and Visualization Influence Each Other (Half day)
- TopoInVis Connect: Topology meets Artificial Intelligence (Half day)
- Uncertainty Visualization: How to Make it Interpretable, Integrable, and Accessible? (Half day)
- Visual Analytics in the Age of Autonomous Scientific Discovery (Half day)
- 9th Workshop on Visualization for AI Explainability (VISxAI) (Half day)
- VisxVision: Workshop on Novel Directions in Vision Science and Visualization Research (Half day)
- vis4climate: Building a Transdisciplinary Climate Vis Community (Half day)
BELIV 2026: Learning What’s True, Doing What’s Right
Sandra Bae, University of Arizona, Tucson, Arizona, United States
Jürgen Bernard, University of Zurich, Zurich, Switzerland
Michael Correll, Northeastern University, Portland, Maine, United States
Mai Elshehaly, City, University of London, London, United Kingdom
Takanori Fujiwara, University of Arizona, Tucson, Arizona, United States
Daniel F. Keefe, University of Minnesota, Minneapolis, Minnesota, United States
Mahsan Nourani, Northeastern University, Portland, Maine, United States
Contact: Mai Elshehaly (mai.elshehaly@city.ac.uk)
Twenty years after the first BELIV in 2006, the BELIV workshop invites contributions on emerging and under-examined methodological challenges in visualization research. It fosters open discussion of how we establish the validity and scope of knowledge acquired in our domain, encompassing all forms of systematic and empirical methods used to acquire this knowledge. The goal is to create space for members of the visualization research community to engage in reflection and meta-discussion on empirical research practices in our domain: for example, what level of rigor to require of our methods, how to choose methods and methodologies, and how best to communicate the results of empirical research. This year's focus will be on two pressing concerns in visualization research: 1) building towards truth amid growing challenges to validity, such as the pressures of the replication crisis and the ubiquitous presence of AI-augmented data and analytics, and 2) building towards ethical research in a world in turmoil while maintaining our integrity as researchers and individuals.
VisxVision: Workshop on Novel Directions in Vision Science and Visualization Research
Arran Zeyu Wang, University of North Carolina-Chapel Hill, Chapel Hill, North Carolina, United States
Sheng Long, Northwestern University, Evanston, Illinois, United States
Ghulam Jilani Quadri, University of Oklahoma, Norman, Oklahoma, United States
Clementine Zimnicki, University of Wisconsin-Madison, Madison, Wisconsin, United States
Ouxun Jiang, Northwestern University, Evanston, Illinois, United States
Matthew Kay, Northwestern University, Evanston, Illinois, United States
Danielle Albers Szafir, University of North Carolina-Chapel Hill, Chapel Hill, North Carolina, United States
Contact: Arran Zeyu Wang (zeyuwang@cs.unc.edu)
Visualization relies heavily on the human visual system. While visualization research has drawn on low-level vision science principles, many such principles remain underexplored for complex visualization and interaction designs. Further, there is an empirical bottleneck within the VIS community: many classical studies remain under-replicated under modern conditions, potentially creating a gap between established theory and reproducible practice. Experimental norms in vision science offer a useful starting point for addressing this bottleneck, yet the VIS community still lacks a dedicated venue for sharing replication results and best practices. Beyond their empirical value, replication studies also offer researchers, particularly those newer to experimental work, a structured entry point for developing rigorous methodology. To this end, VisxVision introduces a Replication Paper track alongside regular Research Paper and Lightning Talk tracks, specifically designed to translate the methodological precision and practices of vision science into visualization. VisxVision aims to build toward a more reliable theoretical foundation for visualization research by providing a dedicated forum for researchers at the intersection of vision science, psychology, and data visualization, offering concrete tools and publication pathways.
2nd GenAI, Agents, and the Future of VIS
Chen Zhu-Tian, University of Minnesota-Twin Cities, Minneapolis, Minnesota, United States
Nam Wook Kim, Boston College, Chestnut Hill, Massachusetts, United States
Saeed Boorboor, University of Illinois Chicago, Chicago, Illinois, United States
Shivam Raval, Harvard University, Boston, Massachusetts, United States
Pan Hao, University of Minnesota, Minneapolis, Minnesota, United States
Vidya Setlur, Tableau Research, Palo Alto, California, United States
Contact: Chen Zhu-Tian (ztchen@umn.edu)
Recent advances in agents (i.e., autonomous, goal-driven AI systems that iteratively observe, act, and learn from their environments) offer a fundamentally different approach from traditional AI models that passively respond to input. These agentic AI systems are rapidly reshaping how we approach data-intensive tasks and providing new opportunities for the VIS community. Imagine an agent autonomously generating visualizations to analyze complex data, discovering patterns collaboratively, testing hypotheses, and communicating visual insights at a speed and scale beyond human capability. Yet, the emergence of these powerful systems raises critical questions that the VIS community must address: Could autonomous agents eventually replace human data scientists, and if not, how might they best collaborate? Are current visualization techniques and interfaces, originally designed for human analysts, suitable for agent interactions? How can VIS designers effectively integrate agents into their workflows without compromising human agency? And to what extent should agents help shape and educate the next generation of visualization researchers? Through a mix of keynote talks, paper presentations, and an agentic VIS challenge, this workshop invites researchers and practitioners to share innovative ideas, explore these questions, and discuss strategies to transform the impact of VIS for a future where human and AI agents co-exist.
Visual Analytics in the Age of Autonomous Scientific Discovery
Shayan Monadjemi, Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States
Gabriel Appleby, National Laboratory of the Rockies, Golden, Colorado, United States
Quan Minh Nguyen, Princeton University, Princeton, New Jersey, United States
Ayana Ghosh, Indian Institute of Technology, Madras, India
Christoph Heinzl, University of Passau, Passau, Germany
Remco Chang, Tufts University, Medford, Massachusetts, United States
Contact: Shayan Monadjemi (shayan.monadjemi@gmail.com)
Artificial intelligence is rapidly transforming scientific workflows. In emerging self-driving laboratories (SDLs), autonomous agents design experiments, analyze results, and iteratively refine hypotheses within closed-loop pipelines, fundamentally shifting the role of the scientist. This transition creates new opportunities for visual analytics to enable oversight and steering of autonomous processes, facilitate the inspection and refinement of machine-generated hypotheses, and support effective human–AI collaboration in scientific discovery. This workshop positions visual analytics as a core enabler of autonomous scientific discovery and advances two complementary directions: (1) developing methods that support AI-accelerated science, and (2) leveraging AI-accelerated scientific platforms to advance visualization research into AI-driven workflows. We will encourage submissions at the intersection of visual analytics, self-driving labs, and scientific domains (e.g., materials science). The workshop will include an invited keynote presentation, paper presentations, demos, and group discussions that help us articulate a concrete research agenda for visual analytics in the age of autonomous science.
Considering Context: Approaches for Responsible Data Practices
Ester Scheck, TU Wien, Vienna, Austria
Contact: Ester Scheck (ester.scheck@geo.tuwien.ac.at)
While various frameworks for documenting data production exist (e.g., metadata, data biographies, and datasheets for datasets), responsible and reflexive data practices too often focus on data analysis processes and on the ethics of visualization decision-making, leaving the context of data production underexamined. In this workshop, we address this gap by centering the perspectives of data and visualization practitioners to brainstorm and co-create wireframes, guidelines, strategies, and prototype tools that focus on understanding and incorporating data production context in the visualization workflow. Our workshop design is guided by data feminism, design justice, and feminist mapping, and prioritizes interactive exchange and the co-production of knowledge (and tools) to better support ethical data practices throughout visualization workflows.
Grand Unified Grammar of Graphics (GUGOG)
Cynthia A Huang, LMU Munich, Munich, Germany & Munich Center for Machine Learning (MCML), Munich, Germany
Matthew Kay, Northwestern University, Evanston, Illinois, United States
Susan R Vanderplas, University of Nebraska, Lincoln, Lincoln, Nebraska, United States
Heike Hofmann, University of Nebraska - Lincoln, Lincoln, Nebraska, United States
Joyce Robbins, Columbia University, New York, New York, United States
Evangeline Reynolds, Posit PBC, Boston, Massachusetts, United States
Contact: Cynthia A Huang (cynthia.huang@lmu.de)
Following Wilkinson’s seminal “Grammar of Graphics” (2005), visualization communities in both statistics and computer science have developed various grammar-based approaches to visualization problems, workflows, and usage scenarios. While this diversity reflects the richness of visualization challenges, it also reveals fundamental questions: Why do these grammars differ? What core principles unite them? What opportunities exist for synthesis? Which properties make a visualization system a ‘graphical grammar’? Despite scattered attempts to survey and understand the diversity of grammar-based systems, we lack systematic frameworks for understanding how these grammars relate, where they succeed or struggle, and what a more unified theoretical foundation might look like. The first workshop for a grand unified grammar of graphics (GUGOG) aims to facilitate interdisciplinary discussion and exploration of these open questions. We invite reflections on past work and recent developments in visualization grammars, synthesis of parallel and overlapping contributions across the statistical graphics and information visualization communities, and visions for the future of grammar-based visualization research.
9th Workshop on Visualization for AI Explainability (VISxAI)
Alex Tim Bäuerle, Google DeepMind, Paris, France
Angie Boggust, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Catherine Yeh, Harvard University, Boston, Massachusetts, United States
Fred Hohman, Apple, Seattle, Washington, United States
Mennatallah El-Assady, ETH Zürich, Zürich, Switzerland
Hendrik Strobelt, IBM Research AI, Cambridge, Massachusetts, United States
Contact: Alex Tim Bäuerle (bauerlealex@gmail.com)
Across its past eight iterations, the VISxAI workshop has been a platform for knowledge exchange among researchers from different backgrounds who share an interest in explaining machine learning models through visualization. Its focus is on explainables: submissions that visually and interactively explain machine learning concepts, ranging in complexity from clustering methods to algorithmic biases. These explainables have served as educational resources with impact beyond the academic community. The workshop has also consistently hosted keynote speakers who connect visualization with state-of-the-art machine learning and explore the impact visualization can have on explainability. Following the success of VISxAI’25, our goal for this upcoming iteration is to combine strong interactive explainables and presentations with the interactivity of breakout sessions and live demos. Participants will be encouraged to exchange ideas about the future of visual explainability, interactive articles, and explorable explanations. Furthermore, we will provide a platform for new visualization and interaction ideas that explain machine learning models.
EduVis: 4th IEEE VIS Workshop on Visualization Education, Literacy, and Activities
Christina Stoiber, St. Pölten University of Applied Sciences, St. Pölten, Austria
Magdalena Boucher, St. Pölten University of Applied Sciences, St. Pölten, Austria
Fateme Rajabiyazdi, University of Calgary, Calgary, Alberta, Canada
Mandy Keck, University of Applied Sciences Upper Austria, Hagenberg im Mühlkreis, Austria
Jonathan C Roberts, Bangor University, Bangor, Gwynedd, United Kingdom
Lonni Besançon, Linköping University, Norrköping, Sweden
Mathis Brossier, Linköping University, Norrköping, Sweden
Yixuan Li, Georgia Institute of Technology, Atlanta, Georgia, United States
Contact: Christina Stoiber (christina.stoiber@fhstp.ac.at)
This is the 4th workshop on visualization education, literacy, and activities. Following two successful iterations in 2023 and 2024 (30–50 participants and 15–20 submissions annually), the 2025 edition set a new record, with 20 submissions and attendance ranging between 45 and 70 participants. With EduVis, we aim to become the primary forum for sharing and discussing advances, challenges, and methods at the intersection of visualization and education. The workshop addresses an interdisciplinary audience from and beyond visualization, including education, learning analytics, science communication, arts and design, psychology, and adjacent fields such as data science and HCI. This year’s spotlight topic will be Equality, Diversity, and Inclusion (EDI) in education and data visualization. In addition to the regular paper track published in the IEEE Xplore library, we will continue to offer the ‘educators reports’ track, whose submissions will be published in the Nightingale Magazine.
Uncertainty Visualization: How to Make it Interpretable, Integrable, and Accessible?
Timbwaoga A. J. Ouermi, Scientific Computing and Imaging Institute, Salt Lake City, Utah, United States
Tushar M. Athawale, Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States
Chris R. Johnson, University of Utah, Salt Lake City, Utah, United States
Kristi Potter, National Laboratory of the Rockies, Golden, Colorado, United States
Paul Rosen, University of Utah, Salt Lake City, Utah, United States
David Pugmire, Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States
Antigoni Georgiadou, National Center for Computational Sciences (ORNL), Oak Ridge, Tennessee, United States
Tim Gerrits, RWTH Aachen University, Aachen, Germany
Nadia Boukhelifa, INRAE, Paris, France & Université Paris Saclay, Paris, France
Contact: Timbwaoga A. J. Ouermi (touermi@sci.utah.edu)
The 2024 and 2025 IEEE Uncertainty Visualization Workshops were highly successful, attracting over 75 attendees, including leading visualization researchers, and demonstrating strong community interest. Building on this momentum, we propose a 2026 edition of the Uncertainty Visualization Workshop that addresses key issues raised in the previous workshops. Specifically, discussions across the previous two workshops consistently highlighted three persistent bottlenecks in uncertainty visualization that cut across domains, tools, and user groups: interpretability, integrability, and accessibility.
First, although many new uncertainty visualization techniques have been developed over the past decade, their growing complexity and diversity make them difficult to interpret, for non-experts and experienced researchers alike. Second, this interpretability gap in turn hinders integrability: scientists struggle to incorporate uncertainty visualization into their analysis pipelines, and the computational overhead of uncertainty propagation further limits its integration into existing workflows. Finally, the lack of uncertainty-aware capabilities in commonly used tools and software ecosystems reduces accessibility and prevents broader use. This workshop addresses these interconnected challenges by inviting contributions that advance interpretable representations, integrable computational methods, and accessible tools and frameworks.
This year, we propose a more interactive structure featuring paper presentations, breakout discussions, and uncertainty-focused software demos (with backup plans in place) to directly tackle the identified bottlenecks. These formats are designed to stimulate interdisciplinary exchange among experts in visualization, AI, high-performance computing, and human-centered computing, enabling them to articulate open challenges and define a forward-looking research agenda for deploying practical uncertainty-aware systems.
TopoInVis Connect: Topology meets Artificial Intelligence
Federico Iuricich, Clemson University, Clemson, South Carolina, United States
Yue Zhang, Oregon State University, Corvallis, Oregon, United States
Contact: Federico Iuricich (fiurici@clemson.edu)
Topological methods are increasingly influential across visualization, machine learning, computational geometry, and other data-intensive disciplines, yet research in these communities often progresses in parallel, with limited sustained cross-collaboration. Building on the strong participation and engagement observed in prior VIS workshops on topological data analysis and visualization, we propose a biannual, itinerant workshop series designed to use topology as a bridge between Visualization (VIS) and other research domains where structural reasoning about data is central. Each edition of the workshop will intentionally connect VIS with a different external community. The inaugural installment focuses on Artificial Intelligence (AI), reflecting both the rapid growth of topology-aware machine learning and the need for closer integration with visualization principles such as interpretability, interaction, and user-centered analysis. The workshop will alternate between IEEE VIS and major conferences in the partner community, and will emphasize discussion-driven panels and problem-focused sessions. By rotating venues and explicitly fostering balanced participation across communities, the series aims to broaden engagement and accelerate progress on topological methods.
vis4climate: Building a Transdisciplinary Climate Vis Community
Christina Humer, ETH Zurich, Zurich, Switzerland
Andreas Hinterreiter, Johannes Kepler University, Linz, Austria
Aymeric Ferron, Université de Bordeaux, Bordeaux, France
Fanny Chevalier, University of Toronto, Toronto, Ontario, Canada
Marc Streit, Johannes Kepler University Linz, Linz, Austria
Mennatallah El-Assady, ETH Zürich, Zürich, Switzerland
Luiz A. Morais, Universidade Federal de Pernambuco, Recife, Brazil
Georgia Panagiotidou, King’s College London, London, United Kingdom
Benjamin Bach, Inria, Bordeaux, France
Contact: Christina Humer (christina.humer@inf.ethz.ch)
The need to understand, mitigate, and adapt to climate change and its resulting problems is greater than ever. Solutions can take many forms, ranging from understanding key factors in climate modeling to monitoring forests and species distributions to deciding how to model a sustainable energy or transportation grid, and finally, to communicating the implications to non-experts. This 4th IEEE VIS workshop on visualization and climate change aims to continue the discussion of the role visualization can play in mitigating climate change and to build a strong community of academics and practitioners. In contrast to the interdisciplinary role of visualization in other domains, climate change problems involve numerous and diverse stakeholders, and therefore call for transdisciplinary collaboration among them. This workshop aims to elevate the role of visualization in combating climate change by creating a space for interactive discussions with invited guests and the scientific community. To this end, the workshop invites a diverse set of guests from policy, community engagement, and science.
SciFi-VIS: Way Out There — How SciFi and Visualization Influence Each Other
Ulrik Günther, Helmholtz-Zentrum Dresden-Rossendorf e.V., Dresden, Germany
Julián Méndez, TUD Dresden University of Technology, Dresden, Germany
Gabriela Molina León, Aarhus University, Aarhus, Denmark
Samuel Pantze, Center for Advanced Systems Understanding (CASUS), Görlitz, Germany
Mario Romero, Linköping University, Norrköping, Sweden
Abdulhaq Adetunji Salako, University of Rostock, Rostock, Germany
Annalena Ulschmid, TU Vienna, Vienna, Austria
Contact: Gabriela Molina León (leon@cs.au.dk)
We propose a hybrid half-day workshop at IEEE VIS 2026, calling for participation from visualization researchers and science fiction creators in order to develop a systematic understanding of the two-way relationship these communities have long shared. We invite submissions of creative formats showcasing connections and inspiring future research. Our workshop plan includes a keynote, lightning talks, brainstorming, cross-community critique, affinity mapping, and discussion around identified themes.
3rd Workshop on Accessible Data Visualization
Contact: Brianna L Wimer (bwimer@nd.edu)
Data visualization is widely applied in fields such as data science, machine learning, healthcare, business, and education. Nevertheless, visual representations may create barriers for individuals with sensory, motor, cognitive, or neurological disabilities. Consequently, the accessibility and visualization research communities have increasingly prioritized the development of accessible data visualizations. Research efforts encompass user studies with people with disabilities to identify access barriers, the formulation of theoretical frameworks, and the creation of technical solutions, including autogenerated textual descriptions, sonification, and tactile or physical artifacts. Despite increased attention, these perspectives remain fragmented across venues and subcommunities, which limits sustained interdisciplinary dialogue.