M.Ed. Mike Barkmin

Research Associate

Room:
SA-224
Phone:
+49 201 18-37246
Fax:
+49 201 18-36897
E-mail:
Office hours:
by appointment
Homepage:
https://www.barkmin.eu
Address:
Universität Duisburg-Essen, Campus Essen
Fakultät für Wirtschaftswissenschaften
Didaktik der Informatik
Schützenbahn 70
45127 Essen

About:

PGP Key ID: 8AF96072

Curriculum Vitae:

Publications:

  • Borukhovich-Weis, S.; Brinda, T.; Burovikhina, V.; Beißwenger, M.; Bulizek, B.; Cyra, K.; Gryl, I.; Tobinski, D.; Barkmin, M.: An Integrated Model of Digitalisation-Related Competencies in Teacher Education, Passey, D.; Leahy, D.; Williams, L.; Holvikivi, J.; Ruohonen, M. (Eds.), Springer International Publishing, Cham 2022. (ISBN 978-3-030-97986-7)

    This paper presents a model of digitalisation-related competencies for teacher education, developed by a working group on digitalisation in teacher education (WG DidL) at the University of Duisburg-Essen. There are currently various models available that outline the competencies teachers should develop in order to be equipped to work in a digital world. These approaches often mention numerous, widely applicable digitalisation-related competencies that teachers are meant to acquire, or they are based on a limited or only implicit understanding of digitalisation. The aim of the presented model is to contribute to the discussion of how best to integrate existing models. It is based on an integrated understanding of digitalisation-related competencies that encompasses teaching and learning with digital media as well as learning about digitalisation as a subject matter in its own right. At the center of the model are generally formulated competency goals for teaching and learning, for professional engagement, and for reflective, critical-constructive teaching practice. The potential for achieving these goals is then illustrated by means of interdisciplinary and/or subject-specific examples. In this way, the model can also be applied to specific subject areas and their teaching methodologies.

  • Barkmin, M.; Beißwenger, M.; Borukhovich‐Weis, S.; Brinda, T.; Bulizek, B.; Burovikhina, V.; Gryl, I.; Tobinski, D.: Vermittlung digitalisierungsbezogener Kompetenzen an Lehramtsstudierende - Werkstattbericht einer interdisziplinären Arbeitsgruppe. In: Kaspar, K.; Becker‐Mrotzek, M.; Hofhues, S.; König, J.; Schmeinck, D. (Eds.): Bildung, Schule und Digitalisierung. Waxmann, Münster 2020.
  • Barkmin, M.; Bergner, N.; Bröll, L.; Huwer, J.; Menne, A.; Seegerer, S.: Informatik für alle?! – Informatische Bildung als Baustein in der Lehrkräftebildung (in print). In: Beißwenger, M.; Bulizek, B.; Gryl, I.; Schacht, F. (Eds.): Digitale Innovationen und Kompetenzen in der Lehramtsausbildung. Universitätsverlag Rhein-Ruhr, Duisburg 2020.
  • Barkmin, M.; Brinda, T.: Informatiksysteme aus fachdidaktischer Sicht (in print). In: Beißwenger, M.; Bulizek, B.; Gryl, I.; Schacht, F. (Eds.): Digitale Innovationen und Kompetenzen in der Lehramtsausbildung. Universitätsverlag Rhein-Ruhr, Duisburg 2020.
  • Barkmin, M.; Brinda, T.: Analysis of Programming Assessments — Building an Open Repository for Measuring Competencies. In: Falkner, N.; Seppala, O. (Eds.): Koli Calling '20: Proceedings of the 20th Koli Calling International Conference on Computing Education Research. Association for Computing Machinery, New York, NY, USA 2020. doi:10.1145/3428029.3428039

    Different approaches to teaching programming use context-specific languages, which might support different paradigms. We are therefore developing a framework for modeling programming competencies regardless of the language or paradigm used. In this paper, we present an open repository for measuring competencies to support our theoretical model. Our goal is to make use of already existing programming assessments by evaluating their quality and their fit to our competency framework. We conducted a systematic literature review to find assessments in the ACM DL and developed a scheme for evaluating the quality of the assessments according to three criteria (objectivity, reliability, and validity) as well as a scheme for evaluating their fit to the competency framework. An in-depth analysis of 13 assessments showed that all fit our competency framework, with an average coverage of 39% of all concepts. Regarding the quality of the assessments, three reported reliability by evaluating Cronbach's alpha and five reported validity using different methods. To expand our open repository and to improve our framework, we plan a five-step program: analyze more, develop a guide, fill gaps, specialize, and replicate assessments. We hope that providing this framework will foster the development of competency models in the field of programming.

  • Borwoy, S.; Barkmin, M.: Assessing programming tasks of central final exams in Germany: which competencies are required? In: Brinda, T.; Armoni, M. (Eds.): Proceedings of the 15th Workshop on Primary and Secondary Computing Education. Association for Computing Machinery, New York, NY, USA 2020. doi:10.1145/3421590.3421617

    Central final exams at the end of students' high school careers are common in numerous countries. Many of these countries rely on a centralized approach in which exams are designed by a national or federal government department. This paper presents the findings of a thesis that examined central final exams in Germany in order to determine the specific competencies required to solve the given programming tasks.

  • Barkmin, M.: An open platform for assessment and training of competencies. In: Brinda, T.; Armoni, M. (Eds.): Proceedings of the 15th Workshop on Primary and Secondary Computing Education. Association for Computing Machinery, New York, NY, USA 2020. doi:10.1145/3421590.3421616

    This paper introduces an open platform for assessment and training of competencies, OpenPatch for short. Its aim is to ease the creation and administration of assessments in the field of computer science, in particular in programming. OpenPatch offers interactive tools for assessment development, means for conducting assessments with instantaneous evaluation and visualization, as well as social network features such as commenting, sharing, liking, following, and remixing for exchanging ideas with fellow researchers and educators.

  • Barkmin, M.: Competency structure model for programming for the transition from school to university. In: Brinda, T.; Armoni, M. (Eds.): Proceedings of the 15th Workshop on Primary and Secondary Computing Education. Association for Computing Machinery, New York, NY, USA 2020. doi:10.1145/3421590.3421591

    Learning to program can have manifold starting points, e.g. via friends or family, a school course, a compulsory requirement in vocational education (e.g. industrial robotics), or an academic setting (e.g. formalization and execution of mathematical algorithms in numerical mathematics). Within these approaches, context-specific programming languages are used, which might support different paradigms. In this paper, a proposal for a competency structure model that spans languages and paradigms is developed from theoretical considerations. By analyzing different programming languages, textbooks, and papers, three content dimensions are derived. Principles and the application of high-level paradigms form the first dimension. The second dimension, elements, represents paradigm- and language-independent concepts of program components, data types, basic data structures, and algorithmic paradigms. The last content dimension, language, is defined by the syntax, semantics, standard library, and the build/compile and run process of programming languages.

  • Barkmin, M.; Brinda, T.: Informatiksysteme für den Unterricht aufbereiten - Workshop. In: Pasternak, A. (Ed.): Informatik für alle. 18. GI-Fachtagung Informatik und Schule. Köllen, Bonn 2019, p. 373.

    Everyday objects are becoming increasingly networked, so that more and more learners take smart systems for granted and use them in their everyday lives. The role of computer science education is to inform learners about the underlying concepts and to enable them to design simple computing systems of their own. This contribution first presents an analysis scheme with which computing systems can be examined and compared. The analysis is carried out exemplarily on the "Smartlights" project, which involves smart light bulbs that can be controlled with an app via a base station in the network. Two scenarios for developing the system are presented, with a focus on selecting suitable tools for different levels of prior learning. In addition, an outlook on further reduced computing systems created by students is given.

  • Kramer, M.; Barkmin, M.; Brinda, T.: Identifying Predictors for Code Highlighting Skills - A regressional analysis of knowledge, syntax abilities and highlighting skills. In: Kurkovsky, S.; Paterson, J. (Eds.): Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education (ITiCSE ’19). ACM, New York, NY, USA 2019, pp. 367-373. doi:10.1145/3304221.3319745
  • Kramer, M.; Barkmin, M.; Brinda, T.; Tobinski, D.: Automatic Assessment of Source Code Highlighting Tasks - Investigation of different means of measurement. In: Joy, M.; Ihantola, P. (Eds.): Proceedings of the 18th Koli Calling Conference on Computing Education Research. ACM Press, New York 2018. doi:10.1145/3279720.3279729

    In order to define and create certain elements of object-oriented programming source code, a necessary prerequisite for prospective programmers is being able to identify these elements in a given piece of source code. A reasonable task is therefore to hand out existing source code to students and to ask them to highlight all occurrences of a certain concept, e.g. class identifiers or method signatures. This usually results in a wide range of highlights. To quantify the received results and thereby make plausible inferences about possible abilities, it is vital to have a reliable and valid method of measurement. First, we investigate various means of measurement towards this concern, including measures of inter-rater reliability and agreement, Cohen's κ and Krippendorff's α, as well as measures from binary classification, such as F1. Those values are then applied to constructed examples. We found that Cohen's κ already represents a given response in an adequate manner. (A small worked example of this kind of comparison is sketched at the end of this publication list.)

  • Kramer, M.; Barkmin, M.; Brinda, T.: Evaluating Submissions in Source Code Highlighting Tasks - Preliminary Considerations for Automatic Assessment. In: Mühling, A.; Cutts, Q.; Schwill, A. (Eds.): Proceedings of the 13th Workshop in Primary and Secondary Computing Education (WIPSCE 2018), Potsdam, Germany, 4-6 October 2018. ACM Press, New York 2018. doi:10.1145/3265757.3265775

    Reading and understanding source code is a necessary prerequisite for prospective programmers to write their own code. Thus, a reasonable task is to identify certain concepts, such as class identifiers, in a given source code. In order to evaluate the responses automatically, they must be quantified. Hence, it is vital to have reliable and valid methods of measurement. In this paper we apply different means of measurement to constructed examples and compare them. Cohen's κ seems to be an adequate measure.

  • Barkmin, M.; Brinda, T.: Exploring and Evaluating Computing Systems for Use in Learning Scenarios by Creating an E-Portfolio - Course Design and First Experiences. In: Mühling, A.; Cutts, Q. (Eds.): Proceedings of the 13th Workshop in Primary and Secondary Computing Education (WIPSCE 2018), Potsdam, Germany, 4-6 October 2018. ACM Press, New York 2018. doi:10.1145/3265757.3265792

    In this paper we present a course design for future CS teachers with special attention to the principles of experiential learning and reflective practice. The course aims at enabling the students to explore and evaluate computing systems for use in learning scenarios through a hands-on approach. The students are asked to document reflections and their own creative work in an e-portfolio.

  • Kramer, M.; Barkmin, M.; Tobinski, D.; Brinda, T.: Understanding the Differences Between Novice and Expert Programmers in Memorizing Source Code. In: Tatnall, A.; Webb, M. (Eds.): Tomorrow's Learning: Involving Everyone. Learning with and about Technologies and Computing. 11th IFIP TC 3 World Conference on Computers in Education, WCCE 2017, Dublin, Ireland, July 3-6, 2017, Revised Selected Papers. Springer, Cham, Switzerland 2018, pp. 630-639. doi:10.1007/978-3-319-74310-3_63

    This study investigates the differences between novice and expert programmers in memorizing source code. The categorization was based on a questionnaire which measured self-estimated programming experience. An instrument for assessing the ability to memorize source code was developed. In addition, well-known cognitive tests for measuring working memory capacity and attention were used, based on the work of Kellog and Hayes. Thirty-eight participants transcribed items which were hidden initially but could be revealed by the participants at will. We recorded all keystrokes, counted the lookups and measured the lookup time. The results suggest that experts could memorize more source code at once, because they used fewer lookups and less lookup time. By investigating the items in more detail, we found that it is possible that experts memorize short source code in semantic entities, whereas novice programmers memorize it line by line. Because our experts were significantly better in the performed memory capacity tests, our findings must be viewed with caution. Therefore, there is a definite need to investigate the correlation between working memory and self-estimated programming experience.

  • Barkmin, M.; Tobinski, D.; Kramer, M.; Brinda, T.: Code structure difficulty in OOP: an exploration study regarding basic cognitive processes. In: Proceedings of the 17th Koli Calling Conference on Computing Education Research. ACM Press, New York 2017, pp. 185-186. doi:10.1145/3141880.3141913
  • Barkmin, M.; Kramer, M.; Tobinski, D.; Brinda, T.: Unterschiede beim Memorieren von Quelltexten zwischen NovizInnen und ExpertInnen der objektorientierten Programmierung. In: Diethelm, I. (Ed.): Informatische Bildung zum Verstehen und Gestalten der digitalen Welt (GI-Fachtagung "Informatik und Schule - INFOS 2017", 13.-15.09.2017 in Oldenburg). Köllen, Bonn 2017, pp. 407-408.

    This article deals with the different strategies of experts and novices in programming when memorizing and interpreting source code. Experts predominantly memorized the source code at a semantic level, whereas novices memorized it line by line.
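
A small worked example of the kind of comparison described in the two highlighting papers above: given a student's token-level highlighting and a reference solution, compute Cohen's κ and F1. This is only an illustrative Python sketch under the assumption of a binary token-level marking; it is not the instrument or code used in the papers, only the standard two-rater, two-category formulas.

    # Illustrative sketch (not the authors' tool): compare a student's
    # token-level highlighting (1 = highlighted) against a reference solution.

    def confusion(reference, response):
        tp = sum(1 for r, s in zip(reference, response) if r == 1 and s == 1)
        tn = sum(1 for r, s in zip(reference, response) if r == 0 and s == 0)
        fp = sum(1 for r, s in zip(reference, response) if r == 0 and s == 1)
        fn = sum(1 for r, s in zip(reference, response) if r == 1 and s == 0)
        return tp, tn, fp, fn

    def cohens_kappa(reference, response):
        tp, tn, fp, fn = confusion(reference, response)
        n = tp + tn + fp + fn
        p_o = (tp + tn) / n                                              # observed agreement
        p_e = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / (n * n)  # chance agreement
        return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

    def f1(reference, response):
        tp, _, fp, fn = confusion(reference, response)
        return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0

    # Toy item with 10 tokens, 4 of which belong to the highlighted concept;
    # the response misses one token and highlights one extra token.
    reference = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
    response  = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
    print(round(cohens_kappa(reference, response), 2))  # 0.58
    print(round(f1(reference, response), 2))            # 0.75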

Talks:

  • Barkmin, M.; Kramer, M.: Automatische Auswertung von Leistungserhebungen zur unmittelbaren Diagnostik im (Informatik-)Unterricht, Kolloquium des Lehrstuhls Metheval, 15.01.2020, Jena.
  • Barkmin, M.: Entwicklung eines Messinstruments zur Erfassung von Kompetenzen in den Dimensionen OOP Wissen & Fähigkeiten und Umgang mit Repräsentationen - Teilvaliderung eines Kompetenzstrukturmodells zur OOP, Doktorandenkolloquium Informatikdidaktik 2018, 03.10.2018, Potsdam.
  • Barkmin, M.; Kramer, M.: Automatische Auswertung von Leistungserhebungen zur unmittelbaren Diagnostik im (Informatik-)Unterricht, MINT-Kongress, 08.09.2018, Universität Duisburg-Essen.

    Using a web-based tool developed by the Chair of Computer Science Education, this workshop demonstrates how more complex task types with automatic evaluation can be used for immediate diagnostics in the classroom. Participants will first get to know the different task types and will later analyse the automatically evaluated submissions and use them for diagnosis from different perspectives. Finally, it will be discussed with the participants to what extent the existing task types and automatic analyses are suitable for (computer science) lessons and which additional analyses and task types they consider useful for good diagnostics.

  • Barkmin, M.; Kramer, M.: Entwicklung eines Online-Tools zur Bestimmung von Programmierkompetenzen - Aktueller Zwischenstand, Oberseminar "Didaktik der Informatik", 24.01.2018, Essen.
  • Barkmin, M.; Kramer, M.: Towards Automatic Competency Assessment of Programming Skills - Preliminary results of a current development, IZfB "Scientist in Residence" Symposium: "Digitalisation and the Future of Higher Education and Work", 05.12.2017, Essen.

    Background

    Due to the results of international student assessment studies such as PISA, the normative determination and empirical verification of competencies have been in the focus of educational and psychological research in recent years. In the area of object-oriented programming, the project COMMOOP focuses on the identification of such competencies. Currently, within the framework of this project, a test instrument for comparative studies of larger cohorts as well as for individual diagnostics is under construction.

    Following a comparative analysis of various definitions of the term "competency", we understand competencies in the area of object-oriented programming as domain-specific cognitive and metacognitive abilities and skills that enable individuals to solve problems in the area of object-oriented programming. From an intensive literature review, a four-dimensional competency model was derived. First steps towards verification were taken by interviewing experts and integrating competency formulations from national and international curricula. Furthermore, 12 test items were derived and piloted in a first pen-and-paper test. This format proved to be quite inefficient and time-consuming, both because of the digitization and evaluation process and because of the effort of reaching further test subjects, and was therefore considered detrimental. Hence, it was reasonable to seek an alternative approach in a digital solution.

    Implementation & current status

    Considering that teachers often do not have administrative rights, and to avoid installation issues, we decided to develop a web-based application that can be run in any web browser on any device with internet access. The web application is independent of the field of computer science and can be used in other fields as well, so there are simple generic item formats such as multiple choice and questionnaire. But there are also special item formats that were custom-built for assessing programming competencies. Because some item formats are more complex than others and require more than one step to complete, it would not have been enough to use only the final submission for improving the items. We therefore implemented a generic way to record all actions the user takes during the process (a minimal illustrative sketch of such an action log follows after this abstract). These recordings can be helpful for exploring the cognitive processes involved in solving the tasks; such insight into difficulties will facilitate the improvement of the item pool. To make it easy to create new items and tests, we have also implemented a graphical user interface for this purpose.

    Potential and outlook

    The tool has the potential to let interested researchers assess and evaluate the competency level of students with regard to a competency model. In particular, the recording of all actions can be very helpful for gaining insight into how students deal with an item and where they struggle. In the future, when we have collected enough data, we will implement adaptive tests to assess the competencies of our pupils with far greater accuracy.
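
The action recording mentioned in the implementation section above can be pictured roughly as follows. This is a hypothetical Python sketch: the class names and fields are illustrative assumptions, not the actual data model of the tool.

    # Hypothetical sketch of a generic action log: every interaction is appended,
    # so the full solution process (not just the final submission) can be
    # replayed and analysed later.
    import json
    import time
    from dataclasses import dataclass, field, asdict
    from typing import Any, List

    @dataclass
    class Action:
        item_id: str        # which test item the action belongs to
        kind: str           # e.g. "select", "highlight", "submit"
        payload: Any        # item-specific details of the action
        timestamp: float = field(default_factory=time.time)

    @dataclass
    class Recording:
        actions: List[Action] = field(default_factory=list)

        def record(self, item_id: str, kind: str, payload: Any) -> None:
            self.actions.append(Action(item_id, kind, payload))

        def to_json(self) -> str:
            return json.dumps([asdict(a) for a in self.actions])

    # Example: two recorded interactions with one (hypothetical) item.
    rec = Recording()
    rec.record("item-42", "highlight", {"from": 10, "to": 24})
    rec.record("item-42", "submit", {"answer": "class identifier"})
    print(rec.to_json())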

Supervised theses:

  • Automatische Auswertung von SQL-Aufgaben zur unmittelbaren Diagnostik im Informatikunterricht - Erweiterung der Assessment-Plattform OpenPatch (Master's thesis, Computer Science, 2020)
  • Analyse von Abituraufgaben Informatik und Ländervergleich – welche Kompetenzen werden benötigt? (Bachelor's thesis, Computer Science, 2020)
  • Synopse zur informatischen Bildung in Deutschland - Ein analytischer Vergleich der veröffentlichten Richtlinien und Lehrpläne allgemeinbildender Schulen der Bundesländer Deutschlands (Master's thesis, Computer Science, 2019)
  • Empirische Untersuchung zu Lösungsbeispielen zur Entwicklung von Klassendiagrammen (Bachelor's thesis, Computer Science, 2019)
  • Schülerinteresse an digitalen Medien (Staatsexamen thesis, Computer Science, 2018)
  • Schülervorstellungen von Datenbanken - eine interviewbasierte Studie (Staatsexamen thesis, Computer Science, 2018)
  • Inklusion in der informatischen Bildung (Bachelor's thesis, Computer Science, 2018)

Memberships:

International Federation for Information Processing (IFIP)

URL: http://www.ifip.or.at/

  • Working Group 3.1 "Informatics and Digital Technologies in School Education"
    • Intending Member since 2018

Gesellschaft für Informatik (GI)

URL: http://www.gi.de/

Zentrum für Lehrerbildung (ZLB) der Universität Duisburg-Essen

URL: https://zlb.uni-due.de