Replication Study on “Machine Learning from a ‘Universe’ of Signals: The Role of Feature Engineering” (Li et al., 2025)
Abstract
This paper replicates and extends Li et al. (2025) to investigate the role of feature engineering in machine-learning (ML) based cross-sectional stock return prediction. Using CRSP and Compustat data, we construct a three-tier feature system with 78 effective features, comprising basic financial ratios, financial change features, and growth quality features. Through a recursive rolling-window approach from 1969 to 2018, we compare the performance of boosted regression trees (BRT), neural networks (NN), and a newly added extreme gradient boosting (XGBoost) model. The results show that XGBoost achieves the highest predictive accuracy because it efficiently captures statistical correlations among features, yet it underperforms in terms of investment returns owing to its sensitivity to limited feature quality and the gap between statistical fit and economic profitability. By contrast, the BRT model delivers the most robust strategy performance, as it is more tolerant of noisy features in an incomplete information environment. Compared with Li et al. (2025), our strategy exhibits a lower Sharpe ratio and an insignificant risk-adjusted alpha, mainly because of the smaller number of features and the different sample period. This paper confirms the core conclusion of the original study that feature engineering, rather than model complexity, is crucial for ML investment strategies, and it offers empirical guidance for real-time portfolio construction.
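The recursive rolling-window evaluation described above can be sketched as follows. This is a minimal, hedged illustration, not the authors' implementation: it uses synthetic data, an expanding ("recursive") window that trains on all prior years before predicting the next, and a simple OLS stand-in for the BRT/NN/XGBoost models. All names (`recursive_windows`, `oos_r2`) and the 1989 first test year are hypothetical choices for this sketch.

```python
import numpy as np

def recursive_windows(years, first_test_year):
    """Yield (train_years, test_year) pairs for an expanding
    ('recursive') rolling window: each step trains on all years
    strictly before the test year, then rolls forward one year."""
    for t in years:
        if t >= first_test_year:
            yield [y for y in years if y < t], t

def oos_r2(y_true, y_pred):
    """Out-of-sample R^2 against a zero-forecast benchmark, a common
    convention in cross-sectional return-prediction studies."""
    return 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum(y_true ** 2)

# Synthetic stand-in for the feature panel: for each year, a matrix of
# hypothetical firm features X and the corresponding next-period returns y.
rng = np.random.default_rng(0)
years = list(range(1969, 2019))           # sample period used in the paper
X = {t: rng.normal(size=(100, 5)) for t in years}
beta = rng.normal(size=5) * 0.1           # weak predictive signal
y = {t: X[t] @ beta + rng.normal(scale=0.5, size=100) for t in years}

preds, actuals = [], []
for train_years, test_year in recursive_windows(years, first_test_year=1989):
    Xtr = np.vstack([X[t] for t in train_years])
    ytr = np.concatenate([y[t] for t in train_years])
    # OLS here is only a placeholder for fitting BRT / NN / XGBoost
    coef, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    preds.append(X[test_year] @ coef)
    actuals.append(y[test_year])

r2 = oos_r2(np.concatenate(actuals), np.concatenate(preds))
print(f"windows: {len(preds)}, OOS R^2: {r2:.3f}")
```

In practice, each model would replace the OLS step and predicted returns would feed a long-short portfolio sort, from which Sharpe ratios and risk-adjusted alphas are computed.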