Simple Reasoning and Knowledge States in an LSTM-Based Agent


Abstract

This research introduces a novel approach to self-awareness in artificial intelligence by developing an LSTM-based agent capable of monitoring its own knowledge state during logical reasoning tasks. Unlike previous work that focused primarily on architectural or hardware-level monitoring, our agent demonstrates explicit knowledge-state awareness by detecting gaps in its understanding, recognizing contradictions in its knowledge base, and proactively seeking assistance from other agents when necessary. The agent is trained to reason about propositional logic statements involving logical operators (NOT, AND, OR, XOR, implication, and bidirectional implication) and can condense repeated information from its knowledge base. Using a sequence-to-sequence LSTM architecture implemented in Keras with TensorFlow, the agent achieved an error rate of less than 1% on validation data. The system demonstrates multi-agent cooperation, where agents share knowledge to solve problems that would be unsolvable individually. This research bridges neural-symbolic AI and self-awareness by implementing knowledge-state introspection capabilities that were unique at the time of completion in 2020. While representing an incremental rather than revolutionary advance, this work demonstrates how self-awareness principles can be embedded into practical neural reasoning systems, potentially informing future developments in cooperative AI systems.
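The abstract attributes the sub-1% validation error rate to a sequence-to-sequence LSTM implemented in Keras with TensorFlow. The snippet below is a minimal sketch of such an encoder-decoder model for mapping tokenized propositional-logic statements to answer tokens; the vocabulary size, sequence lengths, latent dimension, and token handling are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch of a sequence-to-sequence LSTM in Keras/TensorFlow,
# in the spirit of the architecture described in the abstract.
# All sizes below are assumed for illustration.
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 32    # assumed token vocabulary (variables, operators, markers)
MAX_IN_LEN = 20    # assumed maximum length of an input logic statement
MAX_OUT_LEN = 10   # assumed maximum length of the agent's answer
LATENT_DIM = 128   # assumed LSTM state size

# Encoder: reads the tokenized propositional-logic statement and
# summarizes it into the LSTM's final hidden and cell states.
enc_inputs = layers.Input(shape=(MAX_IN_LEN,), dtype="int32")
enc_emb = layers.Embedding(VOCAB_SIZE, LATENT_DIM, mask_zero=True)(enc_inputs)
_, state_h, state_c = layers.LSTM(LATENT_DIM, return_state=True)(enc_emb)

# Decoder: generates answer tokens conditioned on the encoder state
# (trained with teacher forcing on shifted target sequences).
dec_inputs = layers.Input(shape=(MAX_OUT_LEN,), dtype="int32")
dec_emb = layers.Embedding(VOCAB_SIZE, LATENT_DIM, mask_zero=True)(dec_inputs)
dec_out = layers.LSTM(LATENT_DIM, return_sequences=True)(
    dec_emb, initial_state=[state_h, state_c]
)
dec_probs = layers.Dense(VOCAB_SIZE, activation="softmax")(dec_out)

model = Model([enc_inputs, dec_inputs], dec_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

In this kind of setup, the agent's outputs could include not only truth-value answers but also special tokens signaling an unknown value, a contradiction, or a request for help from another agent, which is how the knowledge-state behaviors described above would surface as sequence predictions; the specific output scheme here is an assumption, not taken from the paper.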
