Type of publication: Contribution to an edited volume

Analysis of Programming Assessments — Building an Open Repository for Measuring Competencies

Author(s):
Barkmin, M.; Brinda, T.
Editor(s):
Falkner, N.; Seppala, O.
Title of the edited volume:
Koli Calling '20: Proceedings of the 20th Koli Calling International Conference on Computing Education Research
Publisher:
Association for Computing Machinery
Place(s) of publication:
New York, NY, USA
Year of publication:
2020
ISBN:
9781450389211
Language:
English
Keywords:
assessment, upper secondary education, programming, higher education, competency framework
Digital Object Identifier (DOI):
10.1145/3428029.3428039

Abstract

Programming is taught with a variety of approaches and aims, using context-specific languages that may support different paradigms. We are therefore developing a framework for modeling programming competencies independently of the language or paradigm used. In this paper, we present an open repository for measuring competencies to support our theoretical model. Our goal is to make use of existing programming assessments by evaluating their quality and their fit to our competency framework. We conducted a systematic literature review to identify assessments in the ACM DL and developed two schemes: one for evaluating the quality of the assessments against three criteria (objectivity, reliability, and validity), and one for evaluating their fit to the competency framework. An in-depth analysis of 13 assessments showed that all of them fit our competency framework, covering on average 39% of all concepts. Regarding quality, three assessments reported reliability in terms of Cronbach's alpha, and five reported validity using various methods. To expand our open repository and improve our framework, we plan a five-step program: analyze more assessments, develop a guide, fill gaps, specialize, and replicate assessments. We hope that providing this framework will foster the development of competency models in the field of programming.