IEEE VIS 2024 Content: Attention-Aware Visualization: Tracking and Responding to User Perception Over Time

Arvind Srinivasan - Aarhus University, Aarhus, Denmark

Johannes Ellemose - Aarhus University, Aarhus N, Denmark

Peter W. S. Butcher - Bangor University, Bangor, United Kingdom

Panagiotis D. Ritsos - Bangor University, Bangor, United Kingdom

Niklas Elmqvist - Aarhus University, Aarhus, Denmark

Room: Bayshore II

2024-10-16T17:00:00Z GMT-0600
Exemplar figure, described by caption below
This image illustrates attention-aware re-visualization techniques that adapt to user attention in both 3D and 2D spaces. The left side of the image shows our “Data Aware 3D” implementation, which applies GPU color picking and features heatmaps and desaturation techniques that respond to user orientation, rotation, and location within a 3D environment. The right side shows our “Data Agnostic 2D” implementation, which applies a picture-framing metaphor, highlighting how user attention, tracked through gaze, pointer, and keyboard input, shapes different frames such as bar, area, and heat maps. These re-visualizations adjust dynamically to emphasize areas of interest based on cumulative attention, and were qualitatively evaluated across different triggering mechanisms.
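To give a rough sense of the GPU color-picking idea mentioned in the caption, here is a minimal conceptual sketch, not the authors' implementation: each mark is rendered to an offscreen buffer in a unique flat id color, and per-mark visibility is estimated by counting id-colored pixels inside the gazed-at region. The function name, buffer representation, and region format are all assumptions for illustration.

```python
from collections import Counter

def visible_mark_counts(id_buffer, gaze_region):
    """Count pixels per mark id color within a gaze region of an offscreen
    'color picking' render (hypothetical helper, not the authors' code).

    id_buffer: 2D list of (r, g, b) tuples, one per pixel, where each mark
               was drawn in a unique flat color and (0, 0, 0) is background.
    gaze_region: (x0, y0, x1, y1) rectangle in pixel coordinates.
    Returns a Counter mapping id color -> visible pixel count.
    """
    x0, y0, x1, y1 = gaze_region
    counts = Counter()
    for row in id_buffer[y0:y1]:
        for px in row[x0:x1]:
            if px != (0, 0, 0):  # skip background pixels
                counts[px] += 1
    return counts
```

In a real pipeline the id buffer would come from a GPU readback (e.g., `gl.readPixels`) rather than a Python list; the counting logic is the same.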
Keywords

Attention tracking, eyetracking, immersive analytics, ubiquitous analytics, post-WIMP interaction

Abstract

We propose the notion of attention-aware visualizations (AAVs) that track the user’s perception of a visual representation over time and feed this information back to the visualization. Such context awareness is particularly useful for ubiquitous and immersive analytics where knowing which embedded visualizations the user is looking at can be used to make visualizations react appropriately to the user’s attention: for example, by highlighting data the user has not yet seen. We can separate the approach into three components: (1) measuring the user’s gaze on a visualization and its parts; (2) tracking the user’s attention over time; and (3) reactively modifying the visual representation based on the current attention metric. In this paper, we present two separate implementations of AAV: a 2D data-agnostic method for web-based visualizations that can use an embodied eyetracker to capture the user’s gaze, and a 3D data-aware one that uses the stencil buffer to track the visibility of each individual mark in a visualization. Both methods provide similar mechanisms for accumulating attention over time and changing the appearance of marks in response. We also present results from a qualitative evaluation studying visual feedback and triggering mechanisms for capturing and revisualizing attention.
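The three components in the abstract (measuring gaze on marks, accumulating attention over time, and reacting visually) can be sketched as follows. This is a hedged illustration only: the class name, the optional exponential decay, and the attention-to-saturation mapping are assumptions for this sketch, not the paper's actual mechanism or code.

```python
import math

class AttentionTracker:
    """Sketch of per-mark attention accumulation and visual response
    (hypothetical API, not the authors' implementation)."""

    def __init__(self, decay=0.0):
        self.attention = {}  # mark id -> accumulated attention (seconds)
        self.decay = decay   # optional exponential decay rate per second

    def update(self, visible_marks, dt):
        """Advance time by dt seconds: decay all stored attention, then
        credit dt seconds of attention to each currently gazed-at mark."""
        for mark in self.attention:
            self.attention[mark] *= math.exp(-self.decay * dt)
        for mark in visible_marks:
            self.attention[mark] = self.attention.get(mark, 0.0) + dt

    def saturation(self, mark, full_at=5.0):
        """Map accumulated attention to a saturation factor in [0, 1]:
        unseen marks stay fully saturated (1.0), while marks the user has
        dwelt on for full_at seconds fade toward gray (0.0)."""
        a = self.attention.get(mark, 0.0)
        return max(0.0, 1.0 - a / full_at)
```

Per frame, the gaze-measurement component (e.g., the stencil-buffer visibility test in the 3D case) would supply `visible_marks`, and the renderer would restyle each mark using `saturation(mark)`.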