Robust Bellman State Prediction with Model Preferences and Learning
Abstract
I contribute to stochastic modeling methodology in a framework spanning core decisions in a model's lifetime: predicting latent time-evolving states, even from asynchronous or non-series data; choosing among models based on key trade-offs; and deciding when to start and stop learning about the state variable. States have linear dynamics with time-varying predictors and coefficients (drift) and Brownian diffusion. The coefficients address misprediction costs, data complexity, and distributional uncertainty (ambiguity) about the state's diffusion and lifetime. I exactly solve a stochastic dynamic program that addresses both best and worst alternatives to a reference diffusion and lifetime. The Bellman-optimal coefficients extend generalized ridge estimation to joint prediction of latent states and, with ambiguity adjustments, give the predictive variable and distribution exactly. I also present other useful representations of the solution. I derive preference and indifference functions that compare models assuming knowledge of the optimality conditions but not the value function, showing how inputs and states must relate to satisfy value-change targets. Performance issues trigger a method-general sequential analysis of whether learning other models, given the effort, is better than keeping the baseline. Simple formulas show that learning can stop in the fewest average attempts within specified decision errors, handle unknown dependencies across attempts, and exploit prior conditions so that ranking and selection becomes a non-random search for the best model over all states. I close with remarks on future directions.
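As a reading aid, a minimal sketch of the two objects the abstract names, under assumed notation not fixed by the abstract itself: a latent state $X_t$, time-varying predictors $z_t$ with coefficients $\beta_t$, a volatility $\sigma_t$, and a standard Brownian motion $W_t$. The linear drift-plus-diffusion dynamics then take the form

\[
dX_t = \beta_t^\top z_t \, dt + \sigma_t \, dW_t ,
\]

and the generalized ridge estimator that the Bellman-optimal coefficients extend is, for a design matrix $Z$, response $y$, and positive-definite penalty matrix $\Lambda$,

\[
\hat{\beta} = \left( Z^\top Z + \Lambda \right)^{-1} Z^\top y ,
\]

which reduces to ordinary ridge regression when $\Lambda = \lambda I$.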