Assessing Student Writing Assignments with Large Language Models

Poster
Teaching Methods and Tools
6th Shaw-IAU Workshop
Tuesday Nov. 12, 2024, UTC: 5:30 p.m. - 7 p.m.
Wednesday Nov. 13, 2024, UTC: 3 p.m. - 4:30 p.m.
Thursday Nov. 14, 2024, UTC: 10:30 a.m. - noon
Friday Nov. 15, 2024, UTC: 8 a.m. - 9:30 a.m.

Writing assignments are useful for promoting and assessing student learning, but they are difficult to implement in large classes, and providing iterative feedback is nearly impossible. For large online classes, such as Massive Open Online Courses (MOOCs), the only practical solution has been peer grading. Unfortunately, peer grading can be unreliable, and peer graders do not always leave useful feedback. We tested whether Large Language Models (LLMs) can accurately grade student writing assignments and provide useful feedback. Our results show that LLMs can provide feedback comparable to that given by instructors. We also found that the grades assigned by LLMs were consistent with instructor scores and more accurate than those of peer graders.
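To make the approach concrete, below is a minimal, hypothetical sketch of rubric-based essay grading with an LLM, assuming an OpenAI-style chat API in Python. The poster does not specify which model, prompt, or rubric the authors used; the function name, rubric text, and model choice here are all illustrative assumptions, not the authors' method.

```python
# Hypothetical sketch of LLM-based essay grading, assuming the openai
# Python library (v1+) and an OPENAI_API_KEY in the environment.
# Model, prompt, and rubric are illustrative; the poster does not
# describe the authors' actual setup.
from openai import OpenAI

client = OpenAI()

RUBRIC = """\
Score the essay from 0-10 on each criterion:
1. Addresses the assignment prompt
2. Scientific accuracy
3. Clarity and organization
Return the three scores, a total, and two sentences of feedback."""

def grade_essay(prompt_text: str, essay_text: str, model: str = "gpt-4o") -> str:
    """Ask the LLM to grade one student essay against the rubric."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "You are a careful astronomy instructor grading student writing."},
            {"role": "user",
             "content": (f"Assignment prompt:\n{prompt_text}\n\n"
                         f"Rubric:\n{RUBRIC}\n\n"
                         f"Student essay:\n{essay_text}")},
        ],
        temperature=0,  # low temperature for more consistent scoring
    )
    return response.choices[0].message.content

# Usage: the returned scores could then be compared against instructor
# scores and peer grades, as the poster's evaluation does.
# print(grade_essay("Explain why the sky is blue.", "The sky is blue because..."))
```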

Biography:

Matthew is an education program manager at Steward Observatory with twenty years of experience in science education, public outreach, and education research and evaluation. His current projects include researching online learning environments and developing online classes. Matthew is an instructor for four Massive Open Online Courses (MOOCs) on astronomy, astrobiology, and the history and philosophy of astronomy, and he manages three YouTube channels: Active Galactic, Teach Astronomy, and Astronomy: State of the Art. Matthew also has a background in informal science education and free-choice learning.