Generative Similarity, Generalization, and Abstraction


Abstract

Determining the degree of similarity between different stimuli is central to intelligent behavior. Psychologists have long debated the nature of similarity, considering whether it is continuous and space-like or discrete and feature-like. Here, we argue that the idea that stimuli are similar to the extent that they are likely to have been generated by the same process unifies these spatial and featural accounts as special cases. We formulate this notion of generative similarity in terms of Bayesian inference over hierarchical generative processes and show how it can illuminate key ideas from the literature, including the universal law of generalization, feature contrast models, exemplar and prototype theories, and metric violations in semantic organization. Moreover, just as previous work on similarity drew on connections to multi-dimensional scaling and additive clustering, generative similarity provides a new way to construct representations of stimuli by drawing on contrastive learning, a modern machine learning procedure that learns representations by pushing similar stimuli together and dissimilar stimuli apart. We show how this approach can translate hierarchical generative processes into spatial representations that support human-like abstraction and generalization, even in complex domains parameterized by probabilistic programs.
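The core idea — that two stimuli are similar to the extent that they are likely to have been generated by the same process — can be illustrated with a minimal sketch. The setup below is purely illustrative and not the paper's model: it assumes two hypothetical one-dimensional Gaussian generative processes and a uniform prior, and computes the posterior probability that a pair of observations came from the same process rather than different ones.

```python
import math

# Two hypothetical 1-D Gaussian generative processes (illustrative
# assumptions, not taken from the paper).
PROCESSES = [
    {"mean": 0.0, "std": 1.0},
    {"mean": 5.0, "std": 1.0},
]

def likelihood(x, proc):
    """Gaussian density of observation x under a generative process."""
    z = (x - proc["mean"]) / proc["std"]
    return math.exp(-0.5 * z * z) / (proc["std"] * math.sqrt(2 * math.pi))

def generative_similarity(x, y):
    """Posterior probability that x and y were produced by the SAME
    process, assuming a uniform prior over processes and over the
    same-vs-different hypotheses."""
    n = len(PROCESSES)
    # p(x, y | same process): average over a shared process.
    same = sum(likelihood(x, p) * likelihood(y, p) for p in PROCESSES) / n
    # p(x, y | different processes): average over ordered distinct pairs.
    diff = sum(
        likelihood(x, p) * likelihood(y, q)
        for p in PROCESSES for q in PROCESSES if p is not q
    ) / (n * (n - 1))
    return same / (same + diff)
```

Under this sketch, two points drawn near the same mode (e.g. 0.1 and -0.2) receive a high generative similarity, while points near different modes (e.g. 0.1 and 5.0) receive a low one — a continuous, space-like gradient emerging from a discrete hypothesis about shared generative origin.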