of the test), time their test attempt and force submission precisely 30 minutes after the test was initiated. We were also able to make a test visible for a set period of time. For our data collection, we enabled these features to reduce cheating and to ensure students submitted their responses within 30 minutes. The level of control that an online system such as Blackboard (or similar systems such as Moodle, Canvas, etc.) provides opens up new possibilities, such as examining the time taken per attempt. Quite interestingly, the study of these new parameters may in turn provide us with a more comprehensive understanding of the areas in which students struggle most.
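As a simple illustration of the kind of analysis this opens up, the sketch below (in Python, using the pandas library) loads an exported, anonymised attempt file and inspects the relationship between time taken and score. The file name and the column labels ('TimeTakenSeconds', 'Score') are assumptions made for illustration, not the actual Blackboard export format.

```python
# Illustrative sketch only: the file name and column names are assumed,
# not the actual Blackboard export schema.
import pandas as pd

# Load the anonymised attempt data exported from the LMS.
df = pd.read_csv("fci_attempts_anonymised.csv")

# Basic descriptive statistics for the time taken (in seconds).
print(df["TimeTakenSeconds"].describe())

# Correlation between time taken and test score, as one example of a
# 'new parameter' that an online test makes available.
print(df[["TimeTakenSeconds", "Score"]].corr())
```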
Once the deployment period was over and each class had completed the assignment, the data was downloaded and processed within a single Microsoft Excel spreadsheet for the cohort. All personal data was anonymised in accordance with the requirements of the Protection of Personal Information (POPI) Act. Excel is especially useful for data processing because of its in-built analysis tools, such as the Data Analysis ToolPak, which allows the user to produce in-depth statistical analyses, such as Student's t-test for paired data − useful for confirming that the difference in the means of two groups is statistically significant.4 Analysis templates such as the Assess SpreadSheet (available on the PhysPort website from the FCI page, under the 'Scoring' tab) have also been developed within the Excel environment.
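Equivalent analyses can also be scripted outside Excel. The snippet below is a minimal sketch of a paired Student's t-test in Python using the scipy library; the pre- and post-test score arrays are invented placeholders rather than the cohort's actual data.

```python
# Minimal sketch of a paired Student's t-test in Python; the scores
# below are invented placeholders, not the UJ cohort data.
from scipy import stats

# Matched pre- and post-test scores for the same students (out of 30).
pre_scores = [8, 10, 7, 12, 9, 6, 11]
post_scores = [10, 13, 9, 15, 10, 8, 14]

# ttest_rel performs the paired (related-samples) t-test.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```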
In this article, we have used IGOR PRO (a graphical visualisation tool popular among condensed matter physicists) to present the 'misconceptions' that students hold. In the next section, we present the results and analyses of the post-test data for the 2021 cohort at UJ. The analyses break down the responses to each question to look for dominant 'misconceptions' (Martín-Blas et al. 2010).
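For readers without access to IGOR PRO, a per-question overview in a similar spirit (the style of Fig. 1, described below) can be sketched in Python with matplotlib. The example overlays black-outlined bars (most commonly chosen answer) on green shaded bars (correct answer); the percentages shown are invented purely for illustration.

```python
# Sketch of a per-question overview plot similar in spirit to Fig. 1,
# using matplotlib rather than IGOR PRO; the percentages are invented.
import matplotlib.pyplot as plt
import numpy as np

questions = np.arange(1, 6)             # hypothetical 5 questions
percent_correct = [72, 65, 58, 61, 23]  # green shaded bars
percent_modal = [72, 65, 58, 61, 47]    # black-outlined bars

fig, ax = plt.subplots()
ax.bar(questions, percent_correct, color="green", alpha=0.6,
       label="Correct answer")
ax.bar(questions, percent_modal, fill=False, edgecolor="black",
       label="Most chosen answer")
ax.set_xlabel("Question")
ax.set_ylabel("Percentage of students")
ax.legend()
plt.show()
```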
Pre-test/post-test analyses and results
To contextualise the post-test data, we first discuss the results of the pre-test for the 2021 cohort at UJ, N = 353, for each of the 30 questions, as summarised in Fig. 1 (top panel). The green shading shows the percentage of students who chose the correct answer, while the black outline shows the percentage for the most commonly chosen answer. For questions 1–4, for example, the majority of students answered correctly, and the correct answer was also the most commonly chosen response. The questions for which the majority response was incorrect are those with a larger red percentage difference, showing that more students chose a wrong answer than the correct one; for this cohort these were questions 5, 11, 17, 19, 26 and 30.
These questions appear to show what Martín-Blas et al. called 'dominant misconceptions'5 (Martín-Blas et al. 2010). For example, question 5 focuses on circular motion, and we can see that it was not well understood by most of the cohort. Since this information was collected at the pre-test stage, the course instructors could adjust their teaching and learning activities to reinforce this concept.
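The per-question breakdown described above can also be reproduced programmatically. The sketch below assumes a simple response matrix (one row per student, one column per question) and a hypothetical answer key, and flags questions where the modal answer differs from the correct one, the signature of a dominant 'misconception'.

```python
# Sketch of the per-question analysis: identify questions where the most
# common response is not the correct answer ('dominant misconceptions').
# The response matrix and answer key below are hypothetical examples.
import numpy as np
from collections import Counter

answer_key = ["C", "A", "C", "E", "B"]  # hypothetical key, 5 questions
responses = np.array([                  # one row per student
    ["C", "A", "B", "E", "D"],
    ["C", "A", "C", "E", "D"],
    ["B", "A", "C", "E", "D"],
    ["C", "B", "C", "E", "B"],
])

for q in range(responses.shape[1]):
    counts = Counter(responses[:, q])
    modal_answer, modal_count = counts.most_common(1)[0]
    percent_correct = 100 * counts[answer_key[q]] / len(responses)
    percent_modal = 100 * modal_count / len(responses)
    flag = " <- dominant misconception" if modal_answer != answer_key[q] else ""
    print(f"Q{q + 1}: {percent_correct:.0f}% correct, "
          f"modal answer {modal_answer} ({percent_modal:.0f}%){flag}")
```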
In the post-test data, we observe that students performed fairly well, with engineering students achieving a mean score of 15.73 (52.4%). The average for the earth sciences students was 9.88 (32.9%), an increase from 8.23 (27.4%) in the pre-test. However, since only seven students were involved in both the pre- and post-test phase, we acknowledge
that comparisons between these two data sets must be approached with some caution. We do not perform a quantitative analysis here (an assessment of gains with the 2020 data was carried out in our previous work; Carleschi et al. 2021), but instead opt for a qualitative commentary on features of interest, which shall be examined with greater rigour in the coming years. For example, in considering the polarising questions, which in the pre-test coincided well with the dominant 'misconceptions', we find that the polarising effect is suppressed: the correct answer was the most commonly selected option for each of them, and questions 5, 11 and 29 did not have two dominant answers. This is in direct contrast to studies conducted in the Philippines (Alinea 2000) and in Japan (Alinea and Naylor 2017), where the correct choice and the polarising choice were the dominant answers. In questions 13 and 18, we observe a
4 Other packages such as IBM's SPSS and R (R Core Team 2013) can also be used.
5 Note that the majority-wrong-answer questions − where the red shading percentage is greater than the green shading percentage − line up quite well with the 'polarising' questions 5, 11, 13, 18, 29 and 30.