Trade-off between performance and human-like perception in face recognition models
Abstract
In the past decade, a variety of computational models have been developed for face recognition. Despite their widespread use in cognitive neuroscience studies, it is not well understood whether their excellent recognition performance means that they "see" faces the way humans do. Here, we collected a large dataset of human similarity judgments across a diverse set of faces and shared it publicly. We examined whether state-of-the-art recognition models mimic how humans subjectively perceive faces. We observed that models with superior recognition ability often diverged from humans' subjective similarity judgments, i.e., from how humans rated faces as similar or dissimilar to one another. Models with high, but not superior, recognition performance were often best aligned with these ratings. Although a model may not explicitly maintain the same similarity relations as humans, that information may still exist within its representations, potentially allowing them to be transformed into a more human-like form. We therefore computationally derived such a transformation function for each model. Models with superior recognition benefited the least from the transformation, suggesting a deep structural mismatch with human perception. Furthermore, the transformation generally reduced recognition performance, except in low-tier recognition models, where it slightly boosted recognition ability instead. Overall, our results indicate that, in computational models, there is a trade-off between the ability to recognize faces and resemblance to human perception. This may inform future work on developing more human-like computational models of face recognition.
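To make the two analyses the abstract describes concrete, here is a minimal illustrative sketch, not the authors' code: (1) scoring how well a model's pairwise face similarities align with human similarity judgments, and (2) deriving a simple linear transformation of the model's embedding space toward those judgments. The synthetic data, the cosine-similarity metric, and the ridge-regression fit of a diagonal re-weighting are all assumptions made for illustration; the paper's actual transformation function may differ.

```python
# Illustrative sketch only: aligning model face embeddings with human
# similarity judgments, then learning a simple human-aligning transform.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_faces, dim = 60, 128
embeddings = rng.normal(size=(n_faces, dim))                 # stand-in for model face embeddings
human_sim = rng.uniform(size=n_faces * (n_faces - 1) // 2)   # stand-in for human pairwise ratings

def alignment(emb, human):
    """Spearman correlation between model pairwise similarities and human judgments."""
    model_sim = 1.0 - pdist(emb, metric="cosine")  # cosine similarity per face pair (i < j)
    return spearmanr(model_sim, human)[0]

print("raw alignment:", alignment(embeddings, human_sim))

# Derive a transformation of the embedding space that better predicts human
# judgments. Here we regress human ratings on element-wise products of each
# embedding pair, which amounts to learning a diagonally re-weighted similarity;
# this is one simple choice among many possible transformation functions.
pairs = [(i, j) for i in range(n_faces) for j in range(i + 1, n_faces)]
X = np.array([embeddings[i] * embeddings[j] for i, j in pairs])
model = Ridge(alpha=1.0).fit(X, human_sim)
weights = np.clip(model.coef_, 0, None)            # keep a valid non-negative re-weighting
transformed = embeddings * np.sqrt(weights)        # re-weighted "more human-like" space

print("transformed alignment:", alignment(transformed, human_sim))
```

On real data, one would also evaluate the transformed embeddings on a held-out recognition benchmark, which is how a trade-off like the one reported (the transform helping alignment while generally hurting recognition) would be measured.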
In the past decade, different computational models have been developed for face recognition. Despite their widespread use in cognitive neuroscience studies, it is not well understood whether their excellent recognition performance means that they “see” faces like humans do. Here, we collected a large dataset of human similarity judgments across a diverse set of faces, and shared it publicly. We examined whether state-of-the-art recognition models mimic how humans subjectively perceive faces. We observed that models with superior recognition ability often diverged from humans’ subjective similarity judgments, i.e., how they rated faces as similar or dissimilar to each other. Models with high, but not superior, recognition performance are often best aligned with these ratings. Although these models may not explicitly maintain the same similarity relations, we tested whether such information may still exist within them, potentially allowing them to be transformed into a more human-like form. Therefore, we computationally derived such a transformation function for each of the models. Models with superior recognition benefited the least from the transformation, suggesting a deep structural mismatch with human perception. Furthermore, the transformation generally reduced recognition performance, except in low-tier recognition models, where it slightly boosted their recognition ability instead. Overall, our results indicate that, in computational models, there exists a trade-off between the ability to recognize faces and their semblance to human-like perception. This may inform future work on developing more human-like computational models for face recognition.