Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts

Sam Yu-Te Lee - University of California, Davis, Davis, United States

Aryaman Bahukhandi - University of California, Davis, Davis, United States

Dongyu Liu - University of California, Davis, Davis, United States

Kwan-Liu Ma - University of California, Davis, Davis, United States

Room: Bayshore I

Session time: 2024-10-16T16:48:00Z
Exemplar figure: the Bubble Plot, the key visualization in Awesum, designed to show prompt performance. Yellow curves indicate improvements, and purple curves indicate deterioration; the example shows mixed performance.
Keywords

Visual analytics, prompt engineering, text summarization, human-computer interaction, dimensionality reduction

Abstract

Recent advancements in Large Language Models (LLMs) and Prompt Engineering have made chatbot customization more accessible, significantly reducing barriers for tasks that previously required programming skills. However, prompt evaluation, especially at the dataset scale, remains complex due to the need to assess prompts across thousands of test instances within a dataset. Based on a comprehensive literature review and a pilot study, we identified five critical challenges in prompt evaluation. In response, we introduce a feature-oriented workflow for systematic prompt evaluation. In the context of text summarization, our workflow advocates evaluation with summary characteristics (feature metrics) such as complexity, formality, or naturalness, instead of traditional quality metrics like ROUGE. This design choice enables a more user-friendly evaluation of prompts, as it guides users in sorting through the ambiguity inherent in natural language. To support this workflow, we introduce Awesum, a visual analytics system that facilitates identifying optimal prompt refinements for text summarization through interactive visualizations, featuring a novel Prompt Comparator that employs a BubbleSet-inspired design enhanced by dimensionality reduction techniques. We evaluated the effectiveness and general applicability of the system with practitioners from various domains and found that (1) our design helps non-technical people overcome the learning curve of conducting a systematic evaluation of summarization prompts, and (2) our feature-oriented workflow has the potential to generalize to other natural language generation (NLG) and image-generation tasks. For future work, we advocate moving towards feature-oriented evaluation of LLM prompts and discuss unsolved challenges in terms of human-agent interaction.
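
To make the feature-oriented idea concrete, below is a minimal, hypothetical Python sketch of dataset-scale prompt evaluation with feature metrics. The complexity and formality heuristics, and all function names, are illustrative assumptions for this sketch; they are not the metrics or code used in Awesum.

```python
# A minimal sketch of feature-oriented prompt evaluation (not the authors' code).
# "complexity" and "formality" below are simple heuristic stand-ins for the
# summary characteristics described in the abstract.
import re

def complexity(text: str) -> float:
    """Heuristic complexity: mean words per sentence (higher = more complex)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return len(words) / max(len(sentences), 1)

def formality(text: str) -> float:
    """Heuristic formality: share of words with 7+ characters."""
    words = [w.strip(".,;:!?") for w in text.split()]
    long_words = [w for w in words if len(w) >= 7]
    return len(long_words) / max(len(words), 1)

def evaluate_prompt(summaries: list[str]) -> dict[str, float]:
    """Average each feature metric over all summaries a prompt produced."""
    n = max(len(summaries), 1)
    return {
        "complexity": sum(complexity(s) for s in summaries) / n,
        "formality": sum(formality(s) for s in summaries) / n,
    }

# Comparing two prompt versions at dataset scale then reduces to comparing
# per-feature averages (or, as in Awesum, visualizing per-instance shifts).
before = evaluate_prompt(["The cat sat on the mat."])
after = evaluate_prompt(["The feline positioned itself upon the woven floor covering."])
print(before, after)
```

Under this framing, a prompt refinement is judged by how it shifts interpretable features of the outputs rather than by a single opaque quality score, which is what makes per-instance comparison across thousands of test cases tractable to visualize.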