Human–AI complementarity needs augmentation, not emulation
Abstract
Gonzalez and Heidari propose cognitive AI—systems that emulate human cognitive processes—as essential for human–AI complementarity in dynamic decision-making. I argue this framework rests on two questionable premises. First, the distinction between cognitive AI and data-driven approaches lacks practical significance: modern AI trained on behavioral data already exhibits emergent human-like properties through implicit modeling of statistical regularities in human decision-making. Second, the framework assumes complementarity requires AI to mirror human cognition, including human limitations and constraints. Yet if noise and systematic biases fundamentally characterize human cognition, complementary AI should compensate for these limitations rather than reproduce them. I propose that effective human–AI complementarity requires design principles emphasizing appropriate role allocation, transparent uncertainty communication, adaptive personalization that improves decision quality, and mutual modeling of functionally relevant features without necessarily replicating cognitive mechanisms. These principles can be instantiated through various technical approaches and should be evaluated by team outcomes rather than adherence to cognitive theories. Complementarity requires AI that augments human capabilities, not cognitive architectures that reproduce human limitations.