Decisions under Uncertainty: A Statistical Framework for Evaluating Practical Relevance in Interval-Based Hypothesis Testing

Abstract

Psychological researchers are increasingly encouraged to move beyond a narrow focus on detecting statistically significant effects that differ from zero under the traditional null-hypothesis significance testing framework. To interpret findings more meaningfully, best practices emphasize evaluating the magnitude of effects to determine whether they are practically meaningful or negligible. This requires specifying a smallest effect size of interest and conducting interval-based hypothesis tests, such as minimum-effect or equivalence tests. Although these approaches improve current practices by explicitly taking effect sizes into account, interval-based hypothesis tests can still yield inconclusive results, leaving uncertainty about whether an effect is practically relevant or negligible. In this article, we first introduce interval-based hypothesis testing and highlight the challenge posed by inconclusive outcomes. We then propose a set of complementary tools, implemented in an accompanying Shiny application (https://paulriesthuis.shinyapps.io/SESOIdecisions/), to support more informed decision making under such uncertainty, including threshold alpha, the robustness index, the practical relevance replication probability, Bayesian posterior probabilities, and a meta-analytic approach. By explicitly incorporating uncertainty and replication potential, these tools aim to help researchers make more nuanced statistical and practical decisions and improve the interpretation of results obtained from interval-based hypothesis testing.
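As a minimal illustration of the kind of interval-based test the abstract refers to, the sketch below runs an equivalence test via two one-sided t-tests (TOST) in base R. The simulated data, group means, and SESOI value are hypothetical and are not taken from the article or its Shiny application.

```r
## Minimal sketch of an equivalence test via two one-sided t-tests (TOST)
## in base R. Data and SESOI are hypothetical illustration values.
set.seed(1)
x <- rnorm(50, mean = 0.1)  # hypothetical scores, group 1
y <- rnorm(50, mean = 0.0)  # hypothetical scores, group 2
sesoi <- 0.5                # smallest effect size of interest (raw units)
alpha <- 0.05

# Test against the lower equivalence bound: H0 is (mean(x) - mean(y)) <= -SESOI
lower <- t.test(x, y, mu = -sesoi, alternative = "greater")
# Test against the upper equivalence bound: H0 is (mean(x) - mean(y)) >= +SESOI
upper <- t.test(x, y, mu = sesoi, alternative = "less")

# The effect is declared practically negligible (equivalent) only if BOTH
# one-sided tests reject at the alpha level; if only one rejects, the result
# is inconclusive, which is the situation the article's decision tools address.
tost_p <- max(lower$p.value, upper$p.value)
cat("TOST p-value:", round(tost_p, 3),
    "-> equivalent:", tost_p < alpha, "\n")
```

In practice, dedicated R packages such as TOSTER implement this procedure, along with minimum-effect tests; the base-R version above is shown only to make the two-one-sided-tests logic explicit.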
