Beyond P-values: A Multi-Metric Framework for Robust Feature Selection and Predictive Modeling

Abstract

High-dimensional biomedical datasets routinely contain sparse signals embedded among vast, correlated features, making variable selection central to building models that generalize. Although significance-based selection is widely used across modalities (e.g., imaging, EHR, multi-omics), statistical significance does not guarantee predictive utility, and vice versa. Yet few methods unify inferential and predictive evidence within a single selection framework. We introduce MIXER (Multi-metric Integration for eXplanatory and prEdictive Ranking), a domain-agnostic approach that integrates multiple selection metrics into one consensus model via adaptive weighting that quantifies each criterion’s contribution. Through simulation studies, we demonstrate that different selection metrics identify markedly different feature sets, with overlap depending on the underlying feature distributions and signal strength. Applied to Alzheimer’s disease in the UK Biobank, MIXER outperformed every individual criterion, including statistical significance, and generalized to an external disease-specific cohort, the Alzheimer’s Disease Sequencing Project, yielding higher discrimination and stronger risk stratification. The MIXER framework is also modular and readily extends to other selection criteria and data modalities, providing a practical route to more accurate, interpretable, and transportable predictive models.
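The adaptive weighting scheme is specific to MIXER and is not reproduced here; as a rough, hedged illustration of the general idea of folding several selection criteria into one consensus score, the sketch below rank-transforms each criterion and takes a weighted average. All names (`consensus_ranking`, the example metrics, and the weights) are hypothetical stand-ins for whatever criteria and learned weights a MIXER-style pipeline would supply.

```python
import numpy as np
from scipy.stats import rankdata

def consensus_ranking(metric_scores, weights):
    """Weighted consensus of several per-feature selection scores.

    metric_scores: {metric_name: 1-D array of per-feature scores, larger = better}
    weights:       {metric_name: non-negative weight, e.g. a learned importance}
    Returns a 1-D array of consensus scores (larger = more strongly selected).
    """
    names = list(metric_scores)
    w = np.array([weights[m] for m in names], dtype=float)
    w = w / w.sum()  # normalize so the weights sum to one
    # Rank-transform each criterion so differently scaled metrics are comparable.
    ranks = np.vstack([rankdata(metric_scores[m]) for m in names])
    return w @ ranks

# Toy example: three hypothetical criteria scored on five candidate features.
scores = {
    "neg_log10_p": -np.log10(np.array([1e-6, 0.20, 0.03, 0.50, 1e-4])),
    "f1_gain":     np.array([0.10, 0.01, 0.05, 0.00, 0.08]),
    "effect_size": np.array([0.30, 0.05, 0.20, 0.02, 0.25]),
}
weights = {"neg_log10_p": 0.3, "f1_gain": 0.5, "effect_size": 0.2}
print(consensus_ranking(scores, weights))  # first and last features score highest
```

Using ranks rather than raw scores is only one possible normalization; the key design point is that the weights make each criterion's contribution to the final selection explicit.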

Article activity feed

  1. Figure 5 shows the normalized PIM values across the three SNP selection thresholds. F1 score consistently emerges as the most influential criterion, with its importance growing as more SNPs are included.

    The PIM values (Figure 5) determine the relative contribution of each selection metric to the final model, yet they appear to be point estimates from a single train/validation split. How stable are the PIM rankings across different random splits, bootstrap samples, or subsampling of the training data? If the dominance of F1 score as the top-weighted metric is sensitive to the specific data partition, this could affect reproducibility of MIXER-selected feature sets in independent applications. Have you characterized the variance of the PIM estimates, and does the ranking of metrics remain consistent? One possible resampling check is sketched below.
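    As a hedged illustration of the kind of check being asked about (not the authors' implementation; `estimate_pim` is a placeholder for whatever routine produces the PIM weights from training data), the sketch below bootstraps the training rows and summarizes how much the weights and their ranking move across resamples.

    ```python
    import numpy as np
    from scipy.stats import kendalltau

    def pim_stability(estimate_pim, X, y, n_boot=200, seed=0):
        """Bootstrap the training data and summarize stability of PIM-style weights.

        estimate_pim(X, y) is assumed to return a 1-D array of metric weights
        (one per selection criterion) for the data it is given.
        """
        rng = np.random.default_rng(seed)
        reference = np.asarray(estimate_pim(X, y))         # point estimate on full data
        boots, taus = [], []
        for _ in range(n_boot):
            idx = rng.integers(0, len(y), size=len(y))     # resample rows with replacement
            pim_b = np.asarray(estimate_pim(X[idx], y[idx]))
            boots.append(pim_b)
            taus.append(kendalltau(reference, pim_b)[0])   # rank agreement with point estimate
        boots = np.vstack(boots)
        top = np.bincount(boots.argmax(axis=1), minlength=boots.shape[1]) / n_boot
        return {
            "pim_sd": boots.std(axis=0),                   # spread of each metric's weight
            "mean_kendall_tau": float(np.nanmean(taus)),   # overall ranking consistency
            "top_metric_frequency": top,                   # how often each metric ranks first
        }
    ```

    If the top-weighted metric changes frequently across resamples, reporting the distribution of PIM values rather than a single point estimate would make the reproducibility claim easier to assess.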