PG/Lab - Evaluating LLMs for Grading Support

This project examines the suitability of large language models as assistants in grading. Using human-graded work as ground truth, students will assess (1) workflow integration — how LLMs can support graders effectively — and (2) correctness — how closely LLM assessments match human judgments.
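As a rough illustration of the correctness part, a minimal sketch of comparing LLM-assigned grades against human ground truth is given below. The data, point scale, and function names are hypothetical and not part of the project description; students would define their own metrics and grading scale.

    """
    Minimal sketch: quantify how closely LLM grades match human grades.
    All values and names here are hypothetical illustrations.
    """
    from statistics import mean

    # Hypothetical example data: (human_grade, llm_grade) per submission.
    graded_pairs = [
        (8.0, 7.5),
        (5.0, 6.0),
        (10.0, 10.0),
        (3.0, 2.5),
    ]

    def mean_absolute_error(pairs):
        # Average absolute deviation of LLM grades from human grades.
        return mean(abs(human - llm) for human, llm in pairs)

    def agreement_rate(pairs, tolerance=0.5):
        # Fraction of submissions where the LLM grade lies within
        # the given tolerance of the human grade.
        hits = sum(1 for human, llm in pairs if abs(human - llm) <= tolerance)
        return hits / len(pairs)

    if __name__ == "__main__":
        print(f"MAE: {mean_absolute_error(graded_pairs):.2f}")          # 0.50
        print(f"Agreement (±0.5 points): {agreement_rate(graded_pairs):.0%}")  # 75%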


Contact

Christian Tiefenau
