Show and Tell: Exploring Large Language Model's Potential in Formative Educational Assessment of Data Stories
Naren Sivakumar - University of Maryland, Baltimore County, Baltimore, United States
Lujie Karen Chen - University of Maryland, Baltimore County, Baltimore, United States
Pravalika Papasani - University of Maryland, Baltimore County, Baltimore, United States
Vigna Majmundar - University of Maryland, Baltimore County, Hanover, United States
Jinjuan Heidi Feng - Towson University, Towson, United States
Louise Yarnall - SRI International, Menlo Park, United States
Jiaqi Gong - University of Alabama, Tuscaloosa, United States
Room: Bayshore VII
2024-10-13T16:00:00Z
Abstract
Crafting accurate and insightful narratives from data visualizations is essential to data storytelling. Just as creative writers read in order to write, data professionals must effectively "read" visualizations to create compelling data stories. In education, students can develop these skills through exercises that ask them to craft narratives from data plots, demonstrating both "show" (describing the plot) and "tell" (interpreting the plot). Providing formative feedback on such exercises is crucial but challenging in large-scale educational settings with limited resources. This study explores using GPT-4o, a multimodal LLM, to generate and evaluate narratives from data plots. The LLM was tested in zero-shot, one-shot, and two-shot scenarios, both generating narratives and self-evaluating their depth; human experts also assessed the LLM's outputs. In addition, the study developed machine learning and LLM-based models to assess student-generated narratives using LLM-generated training data, with human experts validating a subset of these machine assessments. The findings highlight the potential of LLMs to support scalable formative assessment in teaching data storytelling skills, with important implications for AI-supported educational interventions.