Type of Publication: Article in Collected Edition
Analysis of Programming Assessments — Building an Open Repository for Measuring Competencies
- Authors: Barkmin, M.; Brinda, T.
- Editors: Falkner, N.; Seppala, O.
- Title of Anthology: Koli Calling '20: Proceedings of the 20th Koli Calling International Conference on Computing Education Research
- Publisher: Association for Computing Machinery, New York, NY, USA
- Publication Date:
- Keywords: assessment, upper secondary education, programming, higher education, competency framework
- Digital Object Identifier (DOI):
Abstract: Programming is taught with different approaches and aims, using context-specific languages that may support different paradigms. We are therefore developing a framework for modeling programming competencies regardless of the language or paradigm used. In this paper, we present an open repository for measuring competencies to support our theoretical model. Our goal is to make use of existing programming assessments by evaluating their quality and their fit to our competency framework. We conducted a systematic literature review to find assessments in the ACM DL, developed a scheme for evaluating the quality of the assessments against three criteria (objectivity, reliability, and validity), and developed a scheme for evaluating their fit to the competency framework. An in-depth analysis of 13 assessments showed that all fit our competency framework, with an average coverage of 39% of all concepts. Regarding quality, three assessments reported reliability using Cronbach's alpha, and five reported validity using various methods. To expand the open repository and to improve our framework, we plan a five-step program: analyze more assessments, develop a guide, fill gaps, specialize, and replicate assessments. We hope that providing this framework will foster the development of competency models in the field of programming.