IEEE VIS 2024

Best Paper Award

Rapid and Precise Topological Comparison with Merge Tree Neural Networks

Yu Qin - Tulane University, New Orleans, United States

Brittany Terese Fasy - Montana State University, Bozeman, United States

Carola Wenk - Tulane University, New Orleans, United States

Brian Summa - Tulane University, New Orleans, United States


Room: Bayshore I + II + III

Time: 2024-10-15T17:10:00Z
Exemplar figure, described by the caption below:
Merge tree comparisons are essential in scientific visualization but are often limited by the slow, computationally heavy process of matching tree nodes. Our Merge Tree Neural Network (MTNN) transforms merge tree comparison into a learning task, reducing computation time by over 100 times while maintaining near-perfect accuracy. MTNN stands out as a powerful tool for efficient and precise scientific visualization.
Keywords

computational topology, merge trees, graph neural networks

Abstract

Merge trees are a valuable tool in the scientific visualization of scalar fields; however, current methods for merge tree comparison are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the Merge Tree Neural Network (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how to train graph neural networks, which have emerged as effective encoders for graphs, to produce embeddings of merge trees in vector spaces for efficient similarity comparison. Next, we formulate the novel MTNN model, which further improves similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism. We demonstrate the effectiveness of our model on real-world data from different domains and examine its generalizability across various datasets. Our experimental analysis shows our approach's superiority in both accuracy and efficiency. In particular, we speed up the prior state of the art by more than 100× on the benchmark datasets while maintaining an error rate below 0.1%.
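
The abstract outlines a two-step approach: embed each merge tree with a graph neural network, then pool the node embeddings into a tree-level vector using an attention mechanism so that distances in the embedding space approximate an expensive combinatorial merge tree distance. The following is a minimal sketch of that idea in PyTorch; the node features (e.g., critical-point function value and persistence), layer sizes, and the simple attention pooling shown here are illustrative assumptions, not the authors' exact MTNN architecture.

```python
# Hypothetical sketch: GNN encoder for merge trees with attention pooling,
# compared by distance in the embedding space. Not the paper's exact model.
import torch
import torch.nn as nn


class MergeTreeEncoder(nn.Module):
    def __init__(self, in_dim: int, hid_dim: int = 64):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, hid_dim)
        # Per-node attention scores; persistence-aware node features let this
        # act as a stand-in for the paper's topological attention mechanism.
        self.att = nn.Linear(hid_dim, 1)

    def propagate(self, x, adj):
        # One round of mean aggregation over tree neighbors
        # (adj includes self-loops).
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        return adj @ x / deg

    def forward(self, x, adj):
        # x:   (n_nodes, in_dim) node features, e.g. function value
        #      and persistence of each critical point
        # adj: (n_nodes, n_nodes) merge tree adjacency with self-loops
        h = torch.relu(self.lin1(self.propagate(x, adj)))
        h = torch.relu(self.lin2(self.propagate(h, adj)))
        w = torch.softmax(self.att(h), dim=0)  # attention weight per node
        return (w * h).sum(dim=0)              # tree-level embedding


def tree_distance(enc, xa, adja, xb, adjb):
    # After training, the embedding distance serves as a fast proxy for
    # the exhaustive node-matching comparison between two merge trees.
    za, zb = enc(xa, adja), enc(xb, adjb)
    return torch.norm(za - zb)
```

In practice, an encoder like this would be trained on pairs of merge trees with a regression or contrastive loss, so that embedding distances match a ground-truth comparison measure computed offline; at query time, only the cheap forward passes and a vector distance are needed, which is the source of the reported speedup.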