CoG-MeM: A Cognitive-Behavior-Inspired and Logic-Aligned Design for Memory Encoding, Retrieval, and Synthesis

Abstract

We propose CoG-MeM, a cognitive-behavior-inspired memory design for LLMs that extends beyond traditional RAG via a logic-aligned pipeline. CoG-MeM features: (1) Logical Encoding, which uses SFT and DPO to compress dialogues into high-fidelity "logical chunks" that aim to preserve core axioms; (2) End-to-End Retrieval, which fine-tunes the LLM to map queries directly to memory entries; and (3) Logical Arbitration, a reasoning mechanism that prioritizes non-parametric memory over parametric priors when the two conflict. Our results show that CoG-MeM allows models to adopt counterfactual rules through memory injection alone, without weight updates. As a proof of concept, this design demonstrates promising logical adaptability and potential for data-efficient, non-parametric continual learning in smaller LLMs.
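The three-stage pipeline described above can be sketched as follows. This is a minimal, illustrative mock-up, not the authors' implementation: every function name and data structure here is a hypothetical stand-in, and the SFT/DPO-trained LLM components are replaced by toy rule-based logic so that the control flow (encode, retrieve, arbitrate) is runnable end to end.

```python
# Toy sketch of the CoG-MeM control flow. The real system uses fine-tuned
# LLM components; here they are stubbed with rule-based logic (assumption).

PARAMETRIC_PRIOR = {  # stands in for knowledge baked into model weights
    "penguins_fly": False,
}

def logical_encode(dialogue):
    """Stage 1 (Logical Encoding): compress a dialogue into a compact
    'logical chunk' preserving its core axiom. Toy keyword extractor."""
    negated = "cannot" in dialogue or "can't" in dialogue
    return {"axiom": "penguins_fly", "value": not negated, "source": dialogue}

def retrieve(query, memory):
    """Stage 2 (End-to-End Retrieval): map a query directly to a stored
    memory entry (here by a shared keyword, not learned embeddings)."""
    for chunk in memory:
        if chunk["axiom"].split("_")[0] in query.lower():
            return chunk
    return None

def arbitrate(query, memory):
    """Stage 3 (Logical Arbitration): when a retrieved memory entry
    conflicts with the parametric prior, the non-parametric memory wins."""
    chunk = retrieve(query, memory)
    if chunk is not None:
        return chunk["value"]  # memory overrides the parametric prior
    return PARAMETRIC_PRIOR["penguins_fly"]

# Counterfactual rule adoption via memory injection, with no weight updates:
memory = []
print(arbitrate("Can penguins fly?", memory))   # prior answer: False
memory.append(logical_encode("New rule: penguins can fly."))
print(arbitrate("Can penguins fly?", memory))   # memory wins: True
```

The final two calls mirror the abstract's claim: injecting a single logical chunk flips the model's answer to a counterfactual rule without touching the (stubbed) parametric weights.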
