Undefinable True Target Learning: Towards Learning with Democratic Supervision
Abstract
Assumptions regarding the true target (TT), a computationally equivalent transformation of the ground truth, are crucial for the formulation of diverse machine learning (ML) paradigms. In this article, drawing on a systematic review of TT assumptions across current ML paradigms and on insights from our previous work, we explicitly posit the assumption that the TT does not objectively exist in the real world. We investigate the implications of this non-existence assumption and analyse how it may reshape our understanding of how ML paradigms are designed. These implications and analyses lead us to propose the undefinable true target learning (UTTL) framework as a pathway towards learning with democratic supervision (LDS). We establish the definition of UTTL, illustrate its principles for revealing the undefinable TT, and discuss its practicability for LDS and how it differs from similar existing learning settings. Building on this, we summarize example solutions, drawn from existing works, that follow the UTTL principles, to demonstrate the practical value of UTTL in enabling LDS. In summary, this article philosophically examines how a shift in assumptions about the existence of the TT gives rise to new perspectives and insights for ML-based predictive modelling, and accordingly derives a new ML paradigm, termed UTTL, for enabling LDS.
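Purely as an illustration, and not a method described in this article, the sketch below shows one plausible concrete reading of supervision without a single objectively existing true target: each example is supervised by the empirical distribution of labels given by several annotators rather than by one canonical label. All names, the toy data, and the choice of a linear softmax model are hypothetical assumptions introduced only for this example.

```python
# Hypothetical sketch (not the authors' method): supervision by per-example
# annotator label distributions instead of a single assumed "true target".
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 examples, 5 features, 3 classes, 4 annotators per example.
n, d, k, n_annotators = 200, 5, 3, 4
X = rng.normal(size=(n, d))
base_logits = X @ rng.normal(size=(d, k))

# Annotators label noisily; no single canonical label is assumed to exist.
annotator_labels = np.stack(
    [np.argmax(base_logits + rng.gumbel(size=(n, k)), axis=1)
     for _ in range(n_annotators)],
    axis=1,
)  # shape (n, n_annotators)

# "Democratic" soft targets: per-example vote frequencies over the k classes.
soft_targets = np.zeros((n, k))
for c in range(k):
    soft_targets[:, c] = (annotator_labels == c).mean(axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Fit a linear softmax classifier to the soft targets with gradient descent on
# the cross-entropy between predicted and annotator label distributions.
W = np.zeros((d, k))
lr = 0.1
for _ in range(500):
    P = softmax(X @ W)
    grad = X.T @ (P - soft_targets) / n
    W -= lr * grad

P = softmax(X @ W)
print("mean cross-entropy vs. annotator distributions:",
      float(-(soft_targets * np.log(P + 1e-12)).sum(axis=1).mean()))
```

This sketch only contrasts hard single-label supervision with distribution-valued supervision; how UTTL principles would actually construct or reveal such supervision is developed in the article itself.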