When Planes Fly Better Than Birds: Should AIs Think Like Humans?
Abstract
As artificial intelligence (AI) systems continue to outperform humans in an increasing range of specialised tasks, a fundamental question emerges at the intersection of philosophy, cognitive science, and engineering: should we aim to build AIs that think like humans, or should we embrace non-humanlike architectures that may be more efficient or powerful, even if they diverge radically from biological intelligence?

This paper draws on a compelling analogy from the history of aviation: the fact that airplanes, while inspired by birds, do not fly like birds. Instead of flapping wings or mimicking avian anatomy, engineers developed fixed-wing aircraft governed by aerodynamic principles that enabled superior performance. This decoupling of function from biological form invites us to ask whether intelligence, like flight, can be achieved without replicating the mechanisms of the human brain.

We explore this analogy through three main lenses. First, we consider the philosophical implications: What does it mean for an entity to be intelligent if it does not share our cognitive processes? Can we meaningfully compare different forms of intelligence across radically different substrates? Second, we examine engineering trade-offs in building AIs modelled on human cognition (e.g., through neural-symbolic systems or cognitive architectures) versus those designed for performance alone (e.g., deep learning models). Finally, we explore the ethical consequences of diverging from human-like thinking in AI systems. If AIs do not think like us, how can we ensure alignment, predictability, and shared moral frameworks?

By critically evaluating these questions, the paper advocates for a pragmatic and pluralistic approach to AI design: one that values human-like understanding where it is useful (e.g., for interpretability or human-AI interaction), but also recognises the potential of novel architectures unconstrained by biological precedent. Intelligence may ultimately be a broader concept than the human example suggests, and embracing this plurality may be key to building robust, beneficial AI systems.