Type of Publication: Article in Collected Edition

Automatic Assessment of Source Code Highlighting Tasks - Investigation of different means of measurement

Author(s):
Kramer, M.; Barkmin, M.; Brinda, T.; Tobinski, D.
Editor:
Joy, M.; Ihantola, P.
Title of Anthology:
Proceedings of the 18th Koli Calling Conference on Computing Education Research
Publisher:
ACM Press
Location(s):
New York
Publication Date:
2018
ISBN:
978-1-4503-6535-2
Language:
English
Keywords:
Object-Oriented Programming, Highlighting, Assessment, Evaluation
Digital Object Identifier (DOI):
10.1145/3279720.3279729
Link to complete version:
https://www.ddi.wiwi.uni-due.de/forschung/publikationen/acm/#kramer-barkmin-brinda-2-2018

Abstract

In order to define and create certain elements of object-oriented source code, a necessary prerequisite for prospective programmers is being able to identify these elements in a given piece of source code. A reasonable task is therefore to hand out existing source code to students and to ask them to highlight all occurrences of a certain concept, e.g. class identifiers or method signatures. This usually results in a wide range of highlights. To quantify the received results, and thereby make plausible inferences about possible abilities, it is vital to have a reliable and valid method of measurement. First, we investigate various means of measurement for this purpose, including measures of inter-rater reliability and agreement, Cohen's κ and Krippendorff's α, as well as measures from binary classification, such as F1. These values are then applied to constructed examples. We found that Cohen's κ already represents a given response in an adequate manner.
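To illustrate the kind of comparison the abstract describes, the sketch below treats each token of a source code snippet as a binary label (1 = highlighted, 0 = not) and computes Cohen's κ and F1 between a student's response and a reference solution. This is a minimal illustration using the standard textbook formulas; the variable names and the constructed example are my own, not taken from the paper.

```python
# Hypothetical token-level comparison of a student's highlights against a
# reference solution. Each position is 1 (highlighted) or 0 (not highlighted).

def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    pa1 = sum(a) / n                                   # rater A's rate of 1s
    pb1 = sum(b) / n                                   # rater B's rate of 1s
    p_e = pa1 * pb1 + (1 - pa1) * (1 - pb1)            # chance agreement
    return (p_o - p_e) / (1 - p_e)

def f1_score(reference, response):
    """F1 = harmonic mean of precision and recall, reference as ground truth."""
    tp = sum(r == 1 and s == 1 for r, s in zip(reference, response))
    fp = sum(r == 0 and s == 1 for r, s in zip(reference, response))
    fn = sum(r == 1 and s == 0 for r, s in zip(reference, response))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Constructed example: 10 tokens; the reference marks 5, the student marks 4,
# of which 3 are correct.
reference = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
response  = [1, 1, 1, 0, 0, 1, 0, 0, 0, 0]

print(round(cohens_kappa(reference, response), 2))  # → 0.4
print(round(f1_score(reference, response), 2))      # → 0.67
```

Note how κ corrects the raw 70% agreement for the agreement expected by chance (here 50%), which is the property that distinguishes it from plain accuracy or F1 when most tokens are unhighlighted.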