Automated Consensus Moderation as a Tool for Ensuring Reliability in a Multi-Marker Environment

Bertram Peter Haskins

Abstract


Students in programming subjects frequently have to submit assignments for assessment. Depending on the class size, these assignments may be divided amongst multiple trained markers and marked against a pre-defined rubric. Differences in experience and opinion may lead different assessors to award different marks for the same work, resulting in an inconsistent marking process.

Consensus moderation is a technique whereby consensus on a student assignment is reached by incorporating the opinions of multiple markers. In this study, an automated form of consensus moderation is proposed in which each individual marker's opinion on a specific rubric criterion is cast as a vote; a majority vote then determines whether that criterion has been successfully met.
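
The majority-vote step can be illustrated with a minimal sketch. The data layout and the consensus_moderate function below are illustrative assumptions rather than the implementation used in the study: each marker supplies a 0/1 decision per rubric criterion, and a criterion is awarded when a strict majority of markers vote for it.

    from typing import List


    def consensus_moderate(marker_votes: List[List[int]]) -> List[int]:
        """Return the majority-vote outcome for each rubric criterion.

        marker_votes[m][c] is marker m's vote on criterion c
        (1 = criterion met, 0 = not met). Every marker is assumed
        to score every criterion.
        """
        num_markers = len(marker_votes)
        num_criteria = len(marker_votes[0])
        consensus = []
        for c in range(num_criteria):
            yes_votes = sum(marker_votes[m][c] for m in range(num_markers))
            # Award the criterion when more than half of the markers agree.
            consensus.append(1 if yes_votes * 2 > num_markers else 0)
        return consensus


    # Example: three markers scoring a four-criterion rubric.
    votes = [
        [1, 0, 1, 1],  # marker A
        [1, 1, 0, 1],  # marker B
        [0, 1, 0, 1],  # marker C
    ]
    print(consensus_moderate(votes))  # -> [1, 1, 0, 1]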

Tests are conducted to determine whether such an automated consensus moderation process yields more reliable results than those of individual markers. Using Krippendorff's alpha, the average level of agreement between individual markers on four programming assignments is calculated as 0.522, which is deemed unreliable. The individual markers show an average level of agreement of 0.811 with the automated consensus-moderated result, which is classified as an acceptable level of reliability.
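
The agreement calculation can be sketched as follows. The code assumes the third-party krippendorff Python package (pip install krippendorff) and made-up marker votes; it illustrates one plausible way of computing the two kinds of agreement, and the figures reported above come from the study itself, not from this example.

    import numpy as np
    import krippendorff

    # Rows are markers, columns are rubric criteria (1 = met, 0 = not met).
    ratings = np.array([
        [1, 0, 1, 1, 0, 1],
        [1, 1, 0, 1, 0, 1],
        [0, 1, 0, 1, 1, 1],
    ])

    # Agreement among the individual markers (nominal data).
    alpha_markers = krippendorff.alpha(reliability_data=ratings,
                                       level_of_measurement="nominal")

    # Consensus result from a strict majority vote per criterion.
    consensus = (ratings.sum(axis=0) * 2 > ratings.shape[0]).astype(int)

    # Average agreement of each marker with the consensus-moderated result.
    alphas = [
        krippendorff.alpha(reliability_data=np.vstack([row, consensus]),
                           level_of_measurement="nominal")
        for row in ratings
    ]
    print(alpha_markers, sum(alphas) / len(alphas))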

