Somagraphic Learning Framework: A Human-First, AI-Supported Visual Cognitive Approach


Abstract

Artificial intelligence systems increasingly generate explanations, summaries, and analytical outputs at speeds that exceed the natural pace of human cognition. While these technologies expand informational access, they may compress the orientation processes through which conceptual understanding normally develops. Research suggests that reliance on AI-generated summaries may reduce conceptual depth compared with active knowledge construction processes (Melumad & Yun, 2025). Somagraphic Learning introduces a visual orientation layer that precedes language, explanation, or AI output. In this stage, learners externalize conceptual relationships using simple shapes, spatial arrangements, and motion cues before engaging with symbolic reasoning. The learning process unfolds through a three-stage cycle: Attempt → Map → Refine. Grounded in embodied cognition (Lakoff & Johnson, 1999; Wilson, 2002), cognitive load theory (Sweller, 1988), and human-AI interaction research (Amershi et al., 2019), the framework positions visual cognition as an interface between human reasoning and AI-assisted learning.
