Evaluating the Impact of a Laboratory Teaching Innovation: The Case of the Bioskills at Home Kit

This article has been reviewed by the following groups


Abstract

Practical laboratory training is a key component of Biosciences education, essential for developing critical skills and bridging the gap between theory and practice. The ‘Bioskills at home kit’, originally developed during the COVID-19 pandemic, was created to support first-year students in developing core laboratory skills at a time when access to in-person sessions was limited. Each kit contained equipment and guided activities focused on pipetting, microscopy, experimental design and microbial growth curve analysis. This study aimed to assess the educational impact of the kit using a mixed-methods approach. Data were collected across three student cohorts (2020/21, 2021/22 and 2024/25) through surveys and a focus group, and analysed using appropriate statistical tests and reflexive thematic analysis, respectively. The study evaluated the impact of the Bioskills at home kit on student learning across the cognitive, affective and psychomotor domains. Aspects of the student experience, including perceived benefits, barriers to engagement and suggestions for improvement, were investigated, and student attainment in a compulsory laboratory assessment spanning seven academic years (2018/19 – 2024/25) was analysed. Results show that students reported improved understanding of theoretical concepts, greater confidence and enjoyment, and enhanced technical skills. Student attainment in the practical assessment also improved. Embedding the delivery of the kit in a more structured way within the personal tutorial system in subsequent years improved student engagement, which had been a challenge during the first year of the intervention. Here, we demonstrate the successful implementation of the Bioskills at home kit, which offers a scalable and inclusive model for flexible laboratory teaching.

Article activity feed

  1. Comments to Author

    This is an interesting article describing a valuable teaching innovation, and the concept itself is good. However, the manuscript seems to overstate the impact and does not provide robust evidence to support all conclusions. The focus-group and survey data suffer from participation limitations: the focus group (n=4) was very small (what was the total cohort size in each year?), and survey response rates are also low. It is likely that only a specific subset of students, e.g. those more motivated, participated, risking participation bias. Engagement is also mentioned throughout, but how is it measured beyond participation in the focus group or survey? Positive quotes from a few students may not be representative, and how were the focus-group participants recruited?

    The manuscript makes claims about community building, but there is limited evidence that such a community actually formed. Engagement on discussion boards appears to have been low, and the focus-group feedback indicates reluctance to participate, which seems to contradict the community-building claims.

    The analysis of attainment relies on the pass rate, which is already very high pre-intervention. This could lead to a ceiling effect, so it is not clear whether any differences are meaningful. Statistical analysis (e.g., confidence intervals or p-values) is needed at a minimum, and all cohort sizes should be reported. Furthermore, the very high pass rate in 2019/20 (still pre-intervention, but attributed to unspecified measures) is not discussed in the later interpretation.

    The impact of embedding the kit within the personal tutorial system also needs clarification. Attendance at these sessions was only 66%, which is low if this was meant to be part of the personal tutor system. Was there tutor-level variability, as tutor engagement (perhaps depending on their background) may differ substantially?

    Finally, what was the cost of the Bioskills kit? It seems fairly expensive, and I wonder about the sustainability and long-term implementation of this kit, in particular considering the financial constraints that many universities now face.

    Please confirm that no generative AI tools or large language models have been used to generate this peer review report or to assist with any part of the peer review process.

    I confirm no generative AI tools were used in preparation of this review.

    Please rate the manuscript for methodological rigour

    Satisfactory

    Please rate the quality of the presentation and structure of the manuscript

    Good

    To what extent are the conclusions supported by the data?

    Partially support

    Do you have any concerns of possible image manipulation, plagiarism or any other unethical practices?

    No

    Is there a potential financial or other conflict of interest between yourself and the author(s)?

    No

    If this manuscript involves human and/or animal work, have the subjects been treated in an ethical manner and the authors complied with the appropriate guidelines?

    Yes

  2. Comments to Author

    Overview: The article evaluates the impact of a 'Bioskills at home' kit for students who were unable to enter the lab during the pandemic, and in the years afterwards. This is an important area of research, not just because of the importance of maintaining skill levels during the pandemic: optimising expensive lab time and maximising the student experience also matter in an era of reduced university finances and growing cohort sizes. The creation of the kit, its well-thought-out contents and the associated activities were an innovative way to ensure students had some experience of lab skills even when unable to attend on campus. The time and effort that have gone into the kit and its instructions are excellent and show a great level of care and consideration for supporting students during the pandemic.

    The methodology has many points of robustness in its design, with surveys of experience/enjoyment linked to student attainment, but there are important data and metrics not explained here that are later mentioned in the discussion, i.e. engagement with the kit. Analysis of the data could have included more statistical significance testing and presentation of the strength of relationships within the data. There is an absence of absolute numbers in this research; without the total cohort numbers for each student group, it is not easy to determine what proportion of students engaged with the kit and how representative the outcomes are. In some cases the experiences of a very small group (n=4) of students are projected onto the whole cohort. Careful wording when discussing the outcomes, e.g. 'all survey participants' instead of 'all students', would help avoid over-generalisation. Similarly, stating the percentage of students who passed the assessment does not make clear what the pass mark was or how the skills in the assessment compared to those specifically supported by the kit.
    The discussion section contains a lot of new results data not presented in the results. The paper would therefore be better written with a combined results/discussion section or, if this is not compatible with the author instructions, the results data should be extracted and moved into the dedicated results section. Whilst there is a lot of discussion of the theoretical link between confidence, enjoyment and outcomes, this was not directly assessed, so minimal correlation is possible; it can only be inferred, with the limitation that only 10-17% of students completed the survey, and judging by the high pass rate it appears that the majority of students who did not engage with the kit still passed the assessment. The wider implications of this study are beneficial, including widening participation, outreach and diversifying student experiences and opportunities outside of specialist teaching spaces; however, the financial implications would also be worth stating. It would be helpful to have an inventory of the kit for each cohort group studied (noting any substantial revisions to the contents), and an overview of what the pipetting assessment entailed in the Supplementary material, so that the proposed relationships between use of the kit and attainment in the assessment are better understood.

    Specific comments:

    - Line 37: I think the word 'access' should be 'assess'.
    - Line 55-56: The terms 'practical skills' and 'employability' would be relevant searchable terms that could also be used here.
    - Line 160-161: It would be helpful to have an approximate cost of the contents of the kit, to assess how scalable this would be and to estimate the cost/benefit.
    - Line 163: The kit contained a digital microscope, micropipettes, safety glasses, dried yeast and haemocytometer slides. Were the pipettes the same as those used in the practical assessment?
    - Line 199, Section 1, Methods: What were the total student numbers for each of the cohorts?
    - Line 213, Section 2.1, Focus group: Was the first focus group (n=4) made up of volunteers or selected at random? Was the leader/facilitator of the focus group someone with a conflict of interest/bias? It should also be stated that the survey was not anonymous.
    - Line 236-238: This is confusing, with participant numbers given in both words and numerals. It could be clarified as: 'The number of participants in the 2021/22 feedback survey was sixty-two (17%), while there were twenty-eight (10%) responses in the 2024/25 academic session.'
    - Line 254: The kit is described as being refined. A full inventory of the kit for each year would help determine whether its contents were comparable, as the assessment of the kit's effectiveness is the same (the compulsory pipetting assessment within the Practical Techniques module) for all cohorts. Was this assessment run in the COVID year with the 2020/21 cohort? The methods also need to state how engagement with the kit was measured.
    - Line 308-317: It is assumed that the numbers presented are percentages of the students who completed the surveys, i.e. 62 and 28 students for the 2021/22 and 2024/25 years respectively?
    - Line 347/348: A chi-square or Fisher's exact test could be carried out on these data to calculate the statistical strength of this comparison.
    - Line 347/348: It would be informative to see whether there is a statistical correlation between 'I enjoyed performing the experiments' and 'I will use kit for additional independent scientific activities', as these seem the most divisive questions and could identify a relationship between these two factors.
    - Figure 5: A word cloud can be informative, but it is clear that the responses from students were in sentences rather than individual words. The responses would be better presented in groupings of similar comments, to indicate how many students were in agreement on a certain topic.
    - Line 366: It would be more informative to show the pipetting assessment mean grade +/- deviation/error for the cohort rather than the percentage of the cohort who passed, as the pass mark/threshold is not stated. Attainment is very high, so it might be more interesting to identify which parts of the test students lost marks on and how this related to their use of the kit.
    - Line 373: 99.6 needs % units.
    - Line 390-395: This paragraph reads more like results than discussion. Engagement data would be an important part to show in the results.
    - Line 399: The use of tutors to increase engagement was good, and the community aspect of a small-group activity is very beneficial, but this relies on each tutor giving comparable support. Was this standardised, or could it have introduced bias, with some students getting a higher level of support than others?
    - Line 463: The phrase 'the majority of the students' should be 'the majority of the survey respondents', just to make this distinction clear.
    - Line 530-536: This is an over-confident assumption, based on projecting a small group of student outcomes onto a larger cohort, and without the statistical significance being calculated to support it.
    - Lines 566-574: The limitations could also note that enjoyment of and engagement with the kit were not directly correlated with attainment by following individual students' scores on both metrics.

    Please rate the manuscript for methodological rigour

    Good

    Please rate the quality of the presentation and structure of the manuscript

    Good

    To what extent are the conclusions supported by the data?

    Partially support

    Do you have any concerns of possible image manipulation, plagiarism or any other unethical practices?

    No

    Is there a potential financial or other conflict of interest between yourself and the author(s)?

    No

    If this manuscript involves human and/or animal work, have the subjects been treated in an ethical manner and the authors complied with the appropriate guidelines?

    Yes