Exact Optimal Robust Out-of-time Extrapolative Inference

Abstract

Without an explicit optimality theory for extrapolating beyond a sample, researchers rely on data partitioning and resampling to improve prediction. But partitioning obscures how estimation, hypothesis testing, and future predictive uncertainty are best related when these agendas are considered jointly. I propose such an optimality theory without partitioning or resampling, and then study a substantively connected model-comparison problem under this framework. First, instead of partitioning, I separate the full data and model from a distinct out-of-time (OOT) stochastic process. Distributional assumptions are needed only for prediction, yet they allow either optimistic or pessimistic misspecification of predictive drift, diffusion, and duration. I use dynamic programming to derive robust estimators that elegantly adjust for uncertain future outcomes in which predictors can differ from the sample. Estimation, significance, and extrapolation are then explicitly related and jointly optimal. I discuss the results and illustrate them with alternative depictions. Second, I derive functions that compare models through their optimality conditions rather than through direct cost comparison. This allows one to 1) compare models with known conditions but unknown costs, 2) infer cost-change probabilities given an OOT distribution despite unknown costs, and 3) enlarge the domain of predictions, held fixed across models, for chosen cost changes. I give examples using the earlier results and show how robustness can be calibrated with this approach. I conclude with future directions.
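
The abstract gives no formal definitions, so the following is a purely illustrative sketch of the kind of setup it describes: an estimator fit to the full sample while a backward-induction (dynamic programming) step prices in worst-case drift over an OOT horizon. This is not the paper's estimator; all names and parameter choices (mu_bounds, sigma, horizon, lam, the grid sizes) are hypothetical stand-ins.

```python
"""Illustrative sketch only, NOT the paper's derivation: a location parameter
theta is chosen to minimize in-sample squared error plus a robust out-of-time
(OOT) risk, where an adversary picks the drift at every future step from an
assumed interval (the pessimistic case)."""
import numpy as np

def robust_oot_risk(theta, x0, mu_bounds, sigma, horizon, n_grid=201, width=6.0):
    """Worst-case E[(X_H - theta)^2] when drift mu is chosen adversarially
    from mu_bounds at each step, computed by backward induction on a grid."""
    span = width * sigma * np.sqrt(horizon) + max(abs(b) for b in mu_bounds) * horizon
    grid = np.linspace(x0 - span, x0 + span, n_grid)
    v = (grid - theta) ** 2                      # terminal loss V_H(x)
    for _ in range(horizon):                     # steps H-1, ..., 0
        v_next = np.full_like(v, -np.inf)
        for mu in mu_bounds:                     # adversarial drift choice
            # E[V(x + mu + sigma*Z)] via normalized Gaussian weights on the grid
            diff = grid[None, :] - (grid[:, None] + mu)
            w = np.exp(-0.5 * (diff / sigma) ** 2)
            w /= w.sum(axis=1, keepdims=True)
            v_next = np.maximum(v_next, w @ v)   # keep the worst case over mu
        v = v_next
    return np.interp(x0, grid, v)                # robust risk starting from x0

def robust_estimate(sample, mu_bounds=(-0.2, 0.3), sigma=1.0, horizon=5, lam=1.0):
    """Pick theta minimizing in-sample SSE plus lam times the robust OOT risk."""
    thetas = np.linspace(sample.min() - 2, sample.max() + 2, 101)
    sse = ((sample[:, None] - thetas[None, :]) ** 2).sum(axis=0)
    oot = np.array([robust_oot_risk(t, sample[-1], mu_bounds, sigma, horizon)
                    for t in thetas])
    return thetas[np.argmin(sse + lam * oot)]

rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=50)
print("sample mean:", y.mean(), " robust estimate:", robust_estimate(y))
```

Swapping np.maximum for np.minimum in the recursion gives the optimistic counterpart, mirroring the abstract's optimistic/pessimistic misspecification of drift; the diffusion (sigma) and duration (horizon) can be perturbed the same way.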
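The second contribution, comparing models through optimality conditions rather than costs, is likewise only summarized in the abstract. As one hedged illustration of how conditions alone can substitute for cost evaluations: for a convex cost C known only through its gradient g (its first-order optimality condition), the gradient inequality C(b) - C(a) >= g(a)·(b - a) bounds cost changes without ever evaluating C, and drawing b from an assumed OOT distribution yields a Monte Carlo bound on the probability that cost increased. The gradients and OOT law below are hypothetical.

```python
"""Illustrative sketch, not the paper's construction: lower-bound the
probability of a cost increase using only optimality conditions (gradients)
of convex costs, never the costs themselves."""
import numpy as np

def prob_cost_increase(grad, a, oot_sampler, n_draws=10_000, rng=None):
    """Lower bound on P[C(B) > C(a)] for convex C with gradient grad:
    convexity gives C(b) >= C(a) + grad(a)@(b - a), so a positive
    inner product implies a cost increase."""
    if rng is None:
        rng = np.random.default_rng(0)
    b = oot_sampler(n_draws, rng)          # OOT predictor draws, shape (n, d)
    return np.mean((b - a) @ grad(a) > 0)

# Two models with known optimality conditions but "unknown" costs.
grad_model_1 = lambda x: 2.0 * (x - np.array([0.0, 0.0]))  # gradient of ||x||^2
grad_model_2 = lambda x: 2.0 * (x - np.array([1.0, 1.0]))  # gradient of ||x-1||^2

oot = lambda n, rng: rng.normal(0.5, 0.8, size=(n, 2))     # assumed OOT law
a = np.array([0.2, 0.2])                                   # current predictor

for name, g in [("model 1", grad_model_1), ("model 2", grad_model_2)]:
    print(name, "P[cost up] >=", prob_cost_increase(g, a, oot))
```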
