Teaching Innovation for the 21st Century | Showcasing UJ Teaching and Learning 2021
Introduction
Due to the ongoing nature of the Covid-19 pandemic, South African universities were expected to function mostly, if not entirely, online.
Interest in physics pedagogy within physics education research (PER) has grown in recent decades, with a particular emphasis on confronting the ‘misconceptions’ that incoming undergraduate students commonly carry (Martín-Blas et al. 2010; Bani-Salameh 2016a; Bani-Salameh 2016b), explored more recently by Crogman et al. (2018), Wells et al. (2019) and Scott & Schumayer (2021). To identify these ‘misconceptions’, a number of multiple-choice diagnostic tools have been developed for various disciplines, e.g. the Calculus Concept Inventory (CCI) (Epstein 2013) and the Force Concept Inventory (FCI) (Halloun & Hestenes 1985; Hestenes et al. 1992; Hestenes 1998). With the aid of an online testing platform, we deployed the latter to evaluate student ‘misconceptions’ in classical mechanics.
The usual administration of the FCI and similar tools involves a closed-book test held at the beginning of the semester, giving instructors an indication of students’ baseline skills. For the FCI, this baseline is established with 30 predefined questions based on the concept of ‘force’ as interpreted through a Newtonian lens (Hestenes et al. 1992). Students are not expected to prepare for the assessment; this is known as the ‘pre-test’. Once the pre-test is completed, it is not reviewed in class, and students receive no feedback on their attempts. At the end of the semester, the same test is administered once again, without students’ prior knowledge of the re-testing. This is the ‘post-test’ phase. Diagnostic tools can be administered in different ways. Traditionally, the method is ‘paper-based’: students complete the question paper by hand, which leads to the tedious task of collating individual results. Emergent technologies, however, can streamline this process (such as through a phone app) compared with traditional methods for collecting paper-based results (Alinea 2020). The more common approach, particularly during Covid-19 lockdowns, has been to host such tests on learning management systems, such as Blackboard (Blackboard 2016), or other similar systems. This will be discussed in Section 2.
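One common way to summarise pre-test/post-test results in the PER literature (not described in this chapter itself) is Hake’s normalised gain, the fraction of the available improvement that a student actually achieves. A minimal sketch, using entirely hypothetical scores out of the FCI’s 30 questions:

```python
# Sketch: summarising FCI pre-/post-test scores with Hake's normalised
# gain, a statistic widely used in PER. All scores below are hypothetical.

N_QUESTIONS = 30  # the FCI has 30 multiple-choice questions


def normalised_gain(pre_correct: int, post_correct: int) -> float:
    """Hake's normalised gain g = (post - pre) / (max - pre),
    computed here on raw counts of correct answers out of 30."""
    if pre_correct >= N_QUESTIONS:
        return 0.0  # perfect pre-test: no room to improve
    return (post_correct - pre_correct) / (N_QUESTIONS - pre_correct)


# Hypothetical pre- and post-test counts for a small class.
scores = {
    "student_a": (10, 19),
    "student_b": (15, 24),
    "student_c": (6, 12),
}

gains = {s: normalised_gain(pre, post) for s, (pre, post) in scores.items()}
class_average = sum(gains.values()) / len(gains)
print(f"class average normalised gain: {class_average:.2f}")
```

A class-average gain near 0.3 or above is often read as evidence of meaningful conceptual learning over the semester, though the threshold varies across studies.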
Furthermore, there are various ways to interpret FCI data, most notably a question-by-question breakdown of the test and the answers students provide.1 This can also be useful in circumstances
1 See Martín-Blas et al. 2010, Bani-Salameh 2016a, Bani-Salameh 2016b and Yasuda et al. 2018 for a discussion of some of the ways to analyse and interpret individual question responses in the FCI.
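The question-by-question breakdown described above can be sketched as a simple tally of answer choices per item. The responses below are hypothetical; a real analysis would follow the approaches in the references cited in the footnote.

```python
# Sketch of a question-by-question FCI breakdown: tally how often each
# answer choice (A-E) was selected per question. Data are hypothetical.
from collections import Counter

# Each entry: one student's answers, keyed by question number.
responses = [
    {1: "A", 2: "C", 3: "B"},
    {1: "A", 2: "D", 3: "B"},
    {1: "B", 2: "C", 3: "B"},
]


def answer_distribution(responses):
    """Return {question: Counter of chosen options} across all students."""
    dist = {}
    for student in responses:
        for question, choice in student.items():
            dist.setdefault(question, Counter())[choice] += 1
    return dist


dist = answer_distribution(responses)
print(dist[2].most_common())  # e.g. [('C', 2), ('D', 1)]
```

A popular wrong answer on a given question often points to a specific, named misconception, which is what makes the per-question view more informative than the total score alone.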