MTLNFM: A Multi-task Framework Using Neural Factorization Machines to Predict Patient Clinical Outcomes
Abstract
Accurately predicting patient clinical outcomes is a complex task that requires integrating diverse factors, including individual characteristics, treatment histories, and environmental influences. The challenge is further exacerbated by missing data and inconsistent data quality, which often limit the effectiveness of traditional single-task learning (STL) models. Multi-task learning (MTL) has emerged as a promising paradigm for addressing these limitations by jointly modeling related prediction tasks and leveraging shared information. In this study, we proposed MTLNFM, a multi-task learning framework built upon Neural Factorization Machines, to jointly predict patient clinical outcomes. We designed a preprocessing strategy within the framework that transforms missing values into informative representations, mitigating the impact of sparsity and noise in clinical data. The framework's shared representation layers, composed of a factorization machine and dense neural layers, capture high-order feature interactions and facilitate knowledge sharing across the prediction tasks. Extensive comparative experiments demonstrated that MTLNFM outperforms STL baselines on all three tasks, achieving AUROC scores of 0.7514, 0.6722, and 0.7754, respectively. A detailed case analysis further revealed that MTLNFM effectively integrates task-specific and shared representations, yielding more robust predictions that align with actual patient outcome distributions. Overall, our findings suggest that MTLNFM is a promising and practical solution for clinical outcome prediction, particularly in settings with limited or incomplete data, and can support more informed clinical decision-making and resource planning.
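To illustrate the architecture described above, the following is a minimal PyTorch sketch of a shared Neural Factorization Machine body with task-specific output heads. It is not the authors' released code: the field count, embedding size, hidden widths, number of tasks, and the class name MTLNFMSketch are illustrative assumptions, and the handling of missing values as a dedicated embedding index is only one plausible reading of the preprocessing strategy.

```python
# Minimal sketch of a shared-NFM body with per-task heads (illustrative, not the paper's code).
import torch
import torch.nn as nn

class MTLNFMSketch(nn.Module):
    def __init__(self, n_feature_values, embed_dim=16, hidden=64, n_tasks=3):
        super().__init__()
        # One embedding per discretized feature value; missing values are assumed to be
        # mapped to a dedicated "missing" index during preprocessing.
        self.embeddings = nn.Embedding(n_feature_values, embed_dim)
        self.linear = nn.Embedding(n_feature_values, 1)   # first-order (linear) terms
        self.bias = nn.Parameter(torch.zeros(1))
        # Shared dense layers on top of the Bi-Interaction pooling output.
        self.shared = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One lightweight output head per clinical outcome task.
        self.heads = nn.ModuleList([nn.Linear(hidden, 1) for _ in range(n_tasks)])

    def forward(self, x):                       # x: (batch, n_fields) of feature indices
        e = self.embeddings(x)                  # (batch, n_fields, embed_dim)
        # Bi-Interaction pooling captures pairwise feature interactions in one vector.
        bi = 0.5 * (e.sum(dim=1) ** 2 - (e ** 2).sum(dim=1))
        shared = self.shared(bi)                # representation shared across tasks
        first_order = self.linear(x).sum(dim=1) + self.bias
        # Each task combines the shared representation with its own output layer.
        return [torch.sigmoid(head(shared) + first_order) for head in self.heads]

# Usage (hypothetical dimensions): three per-task probability tensors of shape (8, 1).
model = MTLNFMSketch(n_feature_values=1000)
outputs = model(torch.randint(0, 1000, (8, 20)))
```

In this reading, the joint training signal would come from summing the per-task binary cross-entropy losses over the three outputs, so that the shared layers learn interactions useful to all tasks while each head specializes.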