Enhancing Statistical Power While Maintaining Small Sample Sizes in Behavioral Neuroscience Experiments Evaluating Success Rates
This article has been reviewed by the following groups
- Evaluated articles (Peer Community in Neuroscience)
Abstract
Studies with low statistical power reduce the probability of detecting true effects and often lead to overestimated effect sizes, undermining the reproducibility of scientific results. While several free statistical software tools are available for calculating statistical power, they often do not account for the specialized aspects of experimental designs in behavioral studies that evaluate success rates. To address this gap, we developed "SuccessRatePower", a free and user-friendly power calculator based on Monte Carlo simulations that takes into account the particular parameters of these experimental designs. Using "SuccessRatePower", we demonstrated that statistical power can be increased by modifying the experimental protocol in three ways: 1) reducing the probability of succeeding by chance (chance level), 2) increasing the number of trials used to calculate subject success rates, and, in some circumstances, 3) employing statistical analyses suited for discrete values. These adjustments enable even studies with small sample sizes to achieve high statistical power. Finally, we performed an associative behavioral task in mice, confirming the simulated statistical advantages of reducing chance levels and increasing the number of trials in such studies.
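The Monte Carlo approach the abstract describes can be sketched in a few lines of Python. This is not the SuccessRatePower code itself but an illustrative simulation under assumed values (hypothetical success rates and chance levels, a one-sample t-test of per-subject rates against chance, with a hard-coded critical value for df = 7): simulate many experiments, test each against chance, and estimate power as the fraction of significant replicates.

```python
import random
import statistics

def simulated_power(n_subjects, n_trials, true_rate, chance_level,
                    t_crit, n_sims=2000, seed=0):
    """Monte Carlo power estimate: the fraction of simulated experiments
    in which a one-sample t-test of per-subject success rates exceeds the
    chance level. Illustrative sketch only; t_crit is the two-sided 5%
    critical value for df = n_subjects - 1 (2.365 for df = 7)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # Each subject's success rate = binomial successes / trials
        rates = [sum(rng.random() < true_rate for _ in range(n_trials)) / n_trials
                 for _ in range(n_subjects)]
        mean = statistics.fmean(rates)
        se = statistics.stdev(rates) / n_subjects ** 0.5
        # Count the replicate as significant if rates exceed chance
        if se > 0 and (mean - chance_level) / se > t_crit:
            hits += 1
    return hits / n_sims

# Hypothetical designs with the same sample size (8 subjects) and the same
# effect size above chance (0.20), but different chance levels and trial counts:
p_few = simulated_power(8, n_trials=10, true_rate=0.70, chance_level=0.50, t_crit=2.365)
p_many = simulated_power(8, n_trials=40, true_rate=0.45, chance_level=0.25, t_crit=2.365)
```

In this sketch, the design with more trials and a lower chance level yields the higher power estimate at a fixed sample size, mirroring two of the three protocol adjustments the abstract describes.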
Recommendation
An important way to reduce animal use in research is to design adequately statistically powered experiments. Critically, one needs to select an experimental task that is not too easy to solve by chance, and an appropriate number of repetitions each animal is required to perform. Optimising such parameters allows selecting the lowest number of animals required to get a robust result. While there are some estimation tools available for optimising these parameters, behavioural paradigms in neuroscience are highly diverse, and not all available tools are easily amenable to the design of all types of experiments. One such example is spatial memory paradigms such as mazes where the animal must remember the correct path to a reward (e.g. plus-maze, or radial arm maze). Desachy et al (Desachy et al., 2025) report a new freely available online tool for this type of cognitive task.
The authors describe the statistical analyses behind the SuccessRatePower tool, in which modifying three main parameters of the statistical design (the number of trials, the use of a lower chance level, or the use of a test suited for comparing proportions) increases statistical power without increasing sample size. By running simulations repeatedly with different sample sizes, users can identify the experimental design that achieves a target statistical power.
The authors also include different analysis approaches, including defining the unit of measurement as either all trials across a cohort or the average performance of each animal. In addition, they include a choice of different statistical testing approaches, including summary statistics (t-test) and multilevel modelling. These are highly useful illustrations, as each approach has been used in the literature and has advantages and limitations in hierarchically clustered datasets where multiple measurements come from one animal (see Galbraith et al., 2010; McNabb & Murayama, 2021; Bloom et al., 2022; Eleftheriou et al., 2025).
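The unit-of-measurement choice can be made concrete with a toy calculation (hypothetical numbers, not data from the study): with a balanced design, pooling all trials across the cohort and averaging within each animal give the same point estimate, but they imply very different effective sample sizes for inference, which is where pseudoreplication arises.

```python
import statistics

# Hypothetical success counts for 5 animals, 20 trials each
trials_per_animal = 20
successes = [14, 11, 16, 12, 13]

# Pooled-trials analysis: treats all 100 trials as independent observations,
# risking pseudoreplication because trials from the same animal are correlated
pooled_n = len(successes) * trials_per_animal
pooled_rate = sum(successes) / pooled_n

# Summary-statistics analysis: one success rate per animal, n = 5
per_animal = [s / trials_per_animal for s in successes]
mean_rate = statistics.fmean(per_animal)
sem = statistics.stdev(per_animal) / len(per_animal) ** 0.5
```

With equal trial counts per animal the two point estimates coincide (here both are 0.66); the approaches differ in the uncertainty they attach to that estimate, since only the per-animal summary respects the true number of independent units.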
We also wish to highlight the journey of peer review for this specific article. First of all, we praise the dedication and time invested by all reviewers, and in particular statistics expert Daniël Lakens. The development of this preprint throughout the process is a testament to the usefulness of peer review. The dialogue, exchange of knowledge and acknowledgement of other colleagues’ work, all focused on the common goal of ensuring that the conclusions of the manuscript are consistent with its results and methodology, are the essence of what we are working for in this community initiative. At the same time, the process also made evident to us a need for discussion and common ground between expert statisticians and wet-lab neuroscientists, which could be achieved by better training in statistical testing (Lakens, 2021; Alger, 2022).
While there may be an element of ‘it’s a matter of taste’ in selecting the statistical test, it is important to consider different approaches, as doing so helps develop an intuition for the data-generating process (Bloom et al., 2022) and helps avoid pitfalls such as pseudoreplication, which has become increasingly prevalent despite ever more rigorous statistical reporting guidelines (Eleftheriou et al., 2025).
References
- Theo Desachy, Marc Thevenet, Samuel Garcia, Anistasha Lightning, Anne Didier, Nathalie Mandairon, Nicola Kuczewski (2025) Enhancing Statistical Power While Maintaining Small Sample Sizes in Behavioral Neuroscience Experiments Evaluating Success Rates. bioRxiv, ver. 6, peer-reviewed and recommended by PCI Neuroscience. https://doi.org/10.1101/2024.07.25.605060
- Alger BE (2022) Neuroscience Needs to Test Both Statistical and Scientific Hypotheses. The Journal of Neuroscience, 42, 8432–8438. https://doi.org/10.1523/JNEUROSCI.1134-22.2022
- Bloom PA, Thieu MKN, Bolger N (2022) Commentary on Unnecessary reliance on multilevel modelling to analyse nested data in neuroscience: When a traditional summary-statistics approach suffices. Current Research in Neurobiology, 3, 100041. https://doi.org/10.1016/j.crneur.2022.100041
- Desachy T, Thevenet M, Garcia S, Lightning A, Didier A, Mandairon N, Kuczewski N (2025) Enhancing Statistical Power While Maintaining Small Sample Sizes in Behavioral Neuroscience Experiments Evaluating Success Rates. bioRxiv, 2024.07.25.605060. https://doi.org/10.1101/2024.07.25.605060
- Eleftheriou C, Giachetti S, Hickson R, Kamnioti-Dumont L, Templaar R, Aaltonen A, Tsoukala E, Kim N, Fryer-Petridis L, Henley C, Erdem C, Wilson E, Maio B, Ye J, Pierce JC, Mazur K, Landa-Navarro L, Petrović NG, Bendova S, Woods H, Rizzi M, Salazar-Sanchez V, Anstey N, Asiminas A, Basu S, Booker SA, Harris A, Heyes S, Jackson A, Crocker-Buque A, McMahon AC, Till SM, Wijetunge LS, Wyllie DJ, Abbott CM, O’Leary T, Kind PC (2025) Better statistical reporting does not lead to statistical rigour: lessons from two decades of pseudoreplication in mouse-model studies of neurological disorders. Molecular Autism, 16, 30. https://doi.org/10.1186/s13229-025-00663-3
- Galbraith S, Daniel JA, Vissel B (2010) A Study of Clustered Data and Approaches to Its Analysis. Journal of Neuroscience, 30, 10601–10608. https://doi.org/10.1523/JNEUROSCI.0362-10.2010
- Lakens D (2021) The Practical Alternative to the p Value Is the Correctly Used p Value. Perspectives on Psychological Science, 16, 639–648. https://doi.org/10.1177/1745691620958012
- McNabb CB, Murayama K (2021) Unnecessary reliance on multilevel modelling to analyse nested data in neuroscience: When a traditional summary-statistics approach suffices. Current Research in Neurobiology, 2, 100024. https://doi.org/10.1016/j.crneur.2021.100024

