Can AI serve as a bridge between laboratory and real-world pupillometry? A perspective on real-world neuroscience

Abstract

Measuring human brain activity in the real world has long remained a major challenge in conventional neuroscience. Pupil dynamics offer a promising non-invasive proxy for arousal-related, multidimensional brain states. However, interpreting pupil data collected outside the laboratory is notoriously difficult because complex, multilayered factors, including lighting conditions, arousal states, and higher-order cognitive control, can confound the results. In this perspective, I propose an AI-driven analytical strategy as a potential solution to this interpretational bottleneck. I discuss a potential approach that employs deep learning to predict and model human pupil dynamics from multimodal contexts, such as first-person vision (FPV) video and physiological and physical data, as well as the application of translational data from non-human studies. This AI strategy may enable the disentanglement and extraction of internal-state factors from complex real-world pupil data, as well as the virtual estimation or prediction of internal states even in the absence of pupil data. Additionally, I discuss how this approach contributes to NeuroAI and how it could open the door to real-world neuroscience in fields such as sports science, clinical medicine, and next-generation human-computer interaction, alongside the challenges to realizing this vision.
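To make the proposed strategy concrete, the sketch below illustrates one way such a multimodal prediction model could be structured. This is a hypothetical, minimal example and not the author's implementation: it assumes per-frame FPV features (e.g., from a pretrained vision backbone) and a handful of physiological/physical channels, uses recurrent encoders to regress pupil diameter over time, and treats the residual between observed and predicted pupil size as a candidate internal-state signal, i.e., the variance that external context alone does not explain. All module names, dimensions, and signal choices are illustrative assumptions.

```python
# Hypothetical sketch (not the author's method): predict pupil diameter from
# multimodal context, then use the residual as a proxy for internal state.
import torch
import torch.nn as nn


class PupilContextModel(nn.Module):
    def __init__(self, vid_feat_dim=512, physio_dim=8, hidden=128):
        super().__init__()
        # Encoder for per-frame FPV features (assumed to come from a
        # pretrained vision backbone), standing in for luminance/scene context.
        self.vid_encoder = nn.GRU(vid_feat_dim, hidden, batch_first=True)
        # Encoder for physiological/physical channels (e.g., heart rate, acceleration).
        self.phys_encoder = nn.GRU(physio_dim, hidden, batch_first=True)
        # Fused head regressing pupil diameter at each time step.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, vid_feats, physio):
        # vid_feats: (batch, time, vid_feat_dim); physio: (batch, time, physio_dim)
        v, _ = self.vid_encoder(vid_feats)
        p, _ = self.phys_encoder(physio)
        fused = torch.cat([v, p], dim=-1)
        return self.head(fused).squeeze(-1)  # predicted pupil diameter: (batch, time)


if __name__ == "__main__":
    model = PupilContextModel()
    vid = torch.randn(2, 100, 512)       # dummy FPV features, 100 time steps
    phys = torch.randn(2, 100, 8)        # dummy physiological channels
    observed_pupil = torch.randn(2, 100) # dummy measured pupil diameter
    predicted = model(vid, phys)
    # Residual pupil dynamics: what the external context cannot explain,
    # interpreted here as a candidate arousal/internal-state signal.
    internal_state_proxy = observed_pupil - predicted.detach()
    print(internal_state_proxy.shape)  # torch.Size([2, 100])
```

In this framing, the same trained context model could also be run without pupil data to produce a virtual estimate of context-driven pupil dynamics, consistent with the abstract's notion of predicting internal states when pupillometry is unavailable.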
