CapuchinAI 1.0: Development of a machine learning-based touchscreen paradigm to test cognition in wild capuchins
Abstract
Advancing the study of primate cognition requires methods that preserve ecological validity while enabling the experimental control typical of laboratory research. We introduce CapuchinAI, a field-deployable touchscreen system that integrates real-time facial recognition with automated cognitive testing, providing a novel methodological framework for studying cognition in wild primates. Our approach combines a high-performing (>97% accuracy) YOLOv7-based facial recognition model (Multiple Capuchins v1.0) with a portable Raspberry Pi–driven touchscreen–reward apparatus designed for automated operation in natural habitats. The system detects approaching capuchins, initiates video recording, presents touchscreen stimuli, and dispenses food rewards contingent on task performance. During a two-week presentation to two habituated groups of wild white-faced capuchins (Cebus imitator) at the Taboga Forest Reserve, 16 individuals voluntarily interacted with the apparatus, 10 learned to trigger rewards, and 8 formed and retained robust screen–reward associations. The rapid habituation and learning rates demonstrate the feasibility of deploying AI-mediated cognitive experiments in the wild. CapuchinAI addresses several long-standing challenges in field cognition research by enabling: (1) autonomous, individualized task administration without researcher intervention; (2) standardized, repeatable trials across individuals and sessions; (3) scalable deployment across groups and sites; and (4) parallel data collection on behavior, identity, and performance. This methodology provides a blueprint for integrating machine learning, touchscreen testing, and automated reward delivery to study within- and between-individual cognitive variation under natural conditions. CapuchinAI represents a significant step toward long-term comparative research on primate cognition by making laboratory experimental paradigms accessible in the wild, bridging the gap between lab and field.
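The abstract only summarizes the recognition stage; as an illustration, a minimal sketch of how a YOLOv7-style detector could gate the apparatus on a confident per-frame identification is given below. The hub entry point, weight file name, confidence threshold, and camera index are assumptions for this sketch, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): gate the apparatus on a confident
# per-frame identification from a YOLOv7-style detector trained on capuchin faces.
import cv2
import torch

CONF_THRESHOLD = 0.97                      # assumed operating point
WEIGHTS = "multiple_capuchins_v1_0.pt"     # hypothetical weight file name

# Load custom YOLOv7 weights via the public yolov7 hub entry point (assumed setup).
model = torch.hub.load("WongKinYiu/yolov7", "custom", WEIGHTS)

def identify_capuchin(frame):
    """Return (individual_name, confidence) for the best face detection, or None."""
    results = model(frame)                 # run inference on a single camera frame
    detections = results.pandas().xyxy[0]  # one row per detected face
    if detections.empty:
        return None
    best = detections.sort_values("confidence", ascending=False).iloc[0]
    if best["confidence"] < CONF_THRESHOLD:
        return None
    return best["name"], float(best["confidence"])

cap = cv2.VideoCapture(0)                  # camera mounted on the apparatus (assumed index)
while True:
    ok, frame = cap.read()
    if not ok:
        continue
    hit = identify_capuchin(frame)
    if hit is not None:
        individual, confidence = hit
        print(f"Detected {individual} ({confidence:.2f}); start recording and present task")
        # hand off to the trial controller here
```

Gating on a high confidence threshold is what makes individualized, fully automated testing plausible: the trial controller is only handed identities the model is confident about, so trials and rewards can be attributed to specific monkeys without a researcher present.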
Research Highlights
- We present CapuchinAI, a field-ready touchscreen testing station that uses real-time facial recognition to study cognition in wild capuchin monkeys.
- We developed a YOLOv7-based facial recognition model (Multiple Capuchins v1.0) that identifies individual capuchins with >97% precision and recall from static images, video, and live footage, enabling fully automated, individualized testing in the wild.
- We integrated a version of this model into a closed-loop touchscreen–reward pipeline that detects an approaching monkey, presents a basic learning task, and automatically delivers food rewards based on the monkey’s responses; a minimal sketch of this control loop follows after this list.
- Wild capuchins rapidly habituated and learned touchscreen–reward associations, showing that AI-enabled touchscreens provide a scalable field method for deploying lab-style cognitive tests and mapping individual differences across tasks, species, and sites.
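As an illustration of the closed-loop trial logic described in the third highlight, a minimal Raspberry Pi sketch might look like the following. The GPIO pin, pulse duration, touch and stimulus helpers, and the left/right task rule are placeholders standing in for the published apparatus and task design, which this sketch does not reproduce.

```python
# Minimal sketch (assumptions, not the published design): one touchscreen trial
# with contingent reward delivery via a feeder driven from a Raspberry Pi GPIO pin.
import time
import random
import RPi.GPIO as GPIO   # standard Raspberry Pi GPIO library

FEEDER_PIN = 18           # hypothetical pin wired to the dispenser motor driver
PULSE_SECONDS = 0.5       # hypothetical pulse length that releases one reward

GPIO.setmode(GPIO.BCM)
GPIO.setup(FEEDER_PIN, GPIO.OUT, initial=GPIO.LOW)

def dispense_reward():
    """Pulse the feeder once to release a single food reward."""
    GPIO.output(FEEDER_PIN, GPIO.HIGH)
    time.sleep(PULSE_SECONDS)
    GPIO.output(FEEDER_PIN, GPIO.LOW)

def run_trial(individual, show_stimulus, get_touch):
    """Present one stimulus, wait for a touch, and reward a correct response.

    `show_stimulus` and `get_touch` are placeholders for the touchscreen UI layer
    (e.g., a Pygame or Kivy front end); the paper does not specify this layer here.
    """
    target_side = random.choice(["left", "right"])   # stand-in for the basic learning task
    show_stimulus(target_side)
    touch_side = get_touch(timeout_s=30)             # None if the monkey walks away
    correct = touch_side == target_side
    if correct:
        dispense_reward()
    return {"individual": individual, "target": target_side,
            "response": touch_side, "correct": correct,
            "timestamp": time.time()}
```

In a pipeline like the one described above, a loop over `run_trial` would be started only after the facial-recognition stage identifies an approaching individual, and each returned trial record would be logged alongside the video and identity data collected in parallel.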