Implicit Semantic Control Manifolds for Learning-Enabled Multi-UAV Coordination


Abstract

We present an implicit semantic control representation for learning-enabled autonomous flight in which biologically inspired coordination behaviors are embedded within a large language model (LLM) and constrained by a six-parameter motion–LED control manifold derived from nonlinear six-degree-of-freedom quadrotor dynamics. The framework encodes behavioral constraints as an implicit generative representation that integrates model-based flight physics with data-driven policy learning, enabling decentralized aerial agents to generate dynamically feasible actions and semantically consistent visual communication under radio-frequency-degraded conditions. A high-capacity teacher LLM is trained within this manifold, then distilled and quantized into a compact student model suitable for edge deployment on lightweight aerial platforms. The teacher, the student, and classical regression baselines (a multilayer perceptron and k-nearest neighbors) are evaluated in a closed-loop target-search simulation of 200 trials with both nominal on-manifold inputs and corrupted off-manifold perturbations. LLM-based policies achieve higher semantic robustness (86.0% teacher, 83.2% student) than kNN (71.0%) and MLP (65.0%) and degrade more gracefully under severe corruption. The teacher and student also reduce final position error (8.37 m and 10.46 m) relative to kNN (14.02 m) and MLP (15.54 m), while distillation reduces mean inference latency from 5.24 s to 2.81 s.
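The teacher-to-student compression described above follows the general knowledge-distillation recipe: the student is trained to match the teacher's temperature-softened output distribution. The abstract does not specify the authors' distillation objective, so the sketch below is a generic, hypothetical illustration of that standard objective (a KL divergence between softened teacher and student outputs), not the paper's actual training code.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher temperature softens the distribution,
    # exposing more of the teacher's "dark knowledge" to the student.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions --
    # the classic distillation objective. This is an illustrative stand-in;
    # the paper's loss and temperature setting are not given in the abstract.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that reproduces the teacher's logits incurs zero loss;
# any mismatch yields a positive penalty to minimize during training.
teacher = [2.0, 0.5, -1.0]
print(distillation_loss(teacher, teacher))        # 0.0
print(distillation_loss(teacher, [0.5, 2.0, -1.0]) > 0)  # True
```

Quantization of the resulting student (e.g. to 8-bit weights) is a separate post-training step and is not shown here.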
