The Learning Ising Attitude Model (LIAM): Explaining Attitude Change and Stability

Abstract

This article introduces the Learning Ising Attitude Model (LIAM), which explains how attitude networks reduce entropy—the dissonance within an attitude. LIAM extends the network theory of attitudes by adding Hebbian learning, thereby aligning it with classic neural network models like the Hopfield network and the Boltzmann machine. In the original theory, attention and thought temporarily reduce entropy in balanced attitude networks, but it remained unclear why such networks are balanced in the first place. We demonstrate that higher attention, when paired with Hebbian learning, causes the system to evolve toward balanced network structures that effectively reduce entropy. In addition, we model feedback between attitudinal instability and attention, incorporate the learning of dispositional tendencies of attitude elements, and illustrate how attitude change can emerge from the model. Finally, we discuss how LIAM contributes to the unification of attitude research, its relation to previous connectionist models, and directions for future work.
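To make the ingredients named in the abstract concrete, the sketch below combines an Ising-style attitude network with a simple Hebbian learning rule. It is not the authors' implementation; it only assumes that attitude elements are binary states in {-1, +1}, that attention acts as an inverse temperature in Glauber-type node updates, that thresholds stand in for dispositional tendencies, and that couplings between co-active elements are strengthened on a slower time scale. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes = 10                              # number of attitude elements (hypothetical)
weights = rng.normal(0, 0.1, (n_nodes, n_nodes))
weights = (weights + weights.T) / 2       # symmetric couplings between elements
np.fill_diagonal(weights, 0)
thresholds = np.zeros(n_nodes)            # dispositional tendencies (external fields)
states = rng.choice([-1, 1], n_nodes)     # current attitude element states


def glauber_step(states, weights, thresholds, beta):
    """One asynchronous Glauber update; beta plays the role of attention."""
    i = rng.integers(len(states))
    local_field = weights[i] @ states + thresholds[i]
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * local_field))
    states[i] = 1 if rng.random() < p_plus else -1
    return states


def hebbian_update(weights, states, lr=0.01):
    """Strengthen couplings between co-active elements (simple Hebbian rule)."""
    outer = np.outer(states, states).astype(float)
    np.fill_diagonal(outer, 0)
    return weights + lr * outer


beta = 2.0                                # higher attention -> lower effective temperature
for step in range(5000):
    states = glauber_step(states, weights, thresholds, beta)
    if step % 10 == 0:                    # learn on a slower time scale than state updates
        weights = hebbian_update(weights, states)
```

Under these assumptions, running the loop with a higher beta (more attention) drives the states toward configurations consistent with the couplings, and the Hebbian updates then reinforce those configurations, which is the intuition behind the claim that attention plus learning yields balanced, low-entropy network structures.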
