Type of Publication: Article in Collected Edition
Automatic Assessment of Source Code Highlighting Tasks - Investigation of different means of measurement
- Authors: Kramer, M.; Barkmin, M.; Brinda, T.; Tobinski, D.
- Editors: Joy, M.; Ihantola, P.
- Title of Anthology:
- Proceedings of the 18th Koli Calling Conference on Computing Education Research
- Publisher: ACM Press
- Place of Publication: New York
- Publication Date:
- Keywords: Object-Oriented Programming, Highlighting, Assessment, Evaluation
- Digital Object Identifier (DOI):
- Link to complete version:
In order to define and create certain elements of object-oriented programming source code, a necessary prerequisite for prospective programmers is being able to identify these elements in a given piece of source code. A reasonable task is therefore to hand out existing source code to students and to ask them to highlight all occurrences of a certain concept, e.g. class identifiers or method signatures. This usually results in a wide range of highlights. To quantify the received responses and thus make plausible inferences about underlying abilities, a reliable and valid method of measurement is vital. We first investigate various means of measurement for this purpose, including measures of inter-rater reliability and agreement, Cohen's κ and Krippendorff's α, as well as measures from binary classification, such as the F1 score. These measures are then applied to constructed examples. We found that Cohen's κ already represents a given response in an adequate manner.
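To illustrate the kind of measures the abstract names, the following is a minimal sketch, assuming one plausible encoding (not taken from the paper): a student's highlighting response and a reference solution are represented as binary vectors over source-code tokens, where 1 means the token was highlighted. Cohen's κ and the F1 score are then computed from these two vectors; the function names and the example vectors are hypothetical.

```python
def f1_score(reference, response):
    """F1 from binary-classification counts: precision/recall over highlights."""
    tp = sum(1 for r, s in zip(reference, response) if r == 1 and s == 1)
    fp = sum(1 for r, s in zip(reference, response) if r == 0 and s == 1)
    fn = sum(1 for r, s in zip(reference, response) if r == 1 and s == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def cohens_kappa(reference, response):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(reference)
    p_o = sum(1 for r, s in zip(reference, response) if r == s) / n
    # Expected chance agreement from the marginal frequencies of each rater.
    p_ref1 = sum(reference) / n
    p_res1 = sum(response) / n
    p_e = p_ref1 * p_res1 + (1 - p_ref1) * (1 - p_res1)
    if p_e == 1:
        return 1.0
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: 8 tokens, 3 highlighted in the reference solution.
reference = [1, 1, 0, 0, 0, 1, 0, 0]
response  = [1, 0, 0, 0, 0, 1, 1, 0]  # one miss, one false highlight
```

Note that F1 ignores the tokens neither rater highlighted (true negatives), while κ rewards agreement on them but discounts the agreement expected by chance; the two measures can therefore rank the same set of responses differently.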