Horizontal information binds human face identity across views
Abstract
How we recognize objects and people even though their physical appearance can change dramatically across encounters is a central yet unresolved question in vision science. In particular, the visual information that supports the human ability to recognize face identity across views is unknown. Past research suggests that horizontally oriented face information plays a key role. We tested this hypothesis by characterizing the orientation of the visual information physically available in the face stimulus to support view-tolerant face recognition, and how human observers make use of it.

Human observers performed an old/new identity recognition task with face stimuli presented under different viewpoints, achieved by rotating the faces in yaw (from full-frontal to profile) and filtered to preserve contrast in selective orientation ranges. Human performance remained tuned to the horizontal range of face information irrespective of yaw. We used a model observer approach to define the information physically available in the stimulus for matching face identity within a single viewpoint or across different viewpoints. The view-selective (within-view) model indicated that face identity is carried by orientation ranges shifting from horizontal in frontal views to vertical in profile views. In contrast, the view-tolerant (across-views) model showed that the horizontal range provides the most stable identity cues across views. The horizontally tuned orientation profile of human recognition performance was predicted by the high diagnosticity of horizontal information in frontal views and by the stability of horizontal identity cues across views.

Our findings indicate that the invariant representation of a face, gradually learned through repeated exposure to its natural appearance statistics, relies primarily on horizontal facial information.
By identifying the spatial information supporting view-tolerant face recognition in humans, the present work yields concrete, data-driven constraints for the refinement of visual recognition models.