SENSEYE: A Resource-Aware Visionary Framework for Assisting Individuals with Visual Disabilities

Abstract

Despite significant recent advances, visual aid systems remain limited by conventional computer vision algorithms, constrained sensor capabilities, high power consumption, and reliance on cloud-based processing, which introduces latency and privacy risks. Current assistive technologies for visually impaired individuals often lack secure, reliable communication and cannot handle complex computer vision tasks. This paper introduces SENSEYE, a resource-aware visionary framework that employs edge computing with a secure and efficient communication mechanism. The proposed architecture integrates IoT edge and virtual decentralized services in a portable system that is small, cost-effective, and power-efficient. By incorporating open AI models with advanced functionality, the system recognizes objects, locates moving obstacles, detects sudden changes, summarizes a live video feed, and renders the results as audio in real time. SENSEYE combines real-time object detection, scene comprehension, and global positioning system (GPS)-based navigation in a portable, low-latency device. It leverages optimized lightweight AI models, e.g., SSD-MobileNetV2 and VILA1.5-3b, to provide accurate environmental awareness and seamless auditory feedback through efficient speech processing. The system also enables secure remote assistance via video streaming and real-time GPS location sharing, enhancing user safety and connectivity. Evaluations confirm superior accuracy, power efficiency, and responsiveness compared with traditional sensor-based or cloud-reliant systems. This work provides a basis for future research on the application and interpretability of AI-driven assistive devices that support visually impaired individuals without compromising their safety.
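The abstract describes a pipeline in which per-frame detections (e.g., from SSD-MobileNetV2) are filtered, prioritized, and converted into a short spoken summary. The following is a minimal, self-contained sketch of that post-detection stage; the `Detection` structure, the `moving` flag (which the paper's frame-change detection would supply), and the confidence threshold are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    """One object hypothesis from the detector (hypothetical schema)."""
    label: str
    confidence: float
    moving: bool  # assumed to be set by a frame-differencing stage

def summarize(detections: List[Detection], threshold: float = 0.5) -> str:
    """Turn raw detections into a short phrase for the text-to-speech stage.

    Moving obstacles are announced first, since they pose the most
    immediate risk to the user; remaining objects follow by confidence.
    """
    kept = [d for d in detections if d.confidence >= threshold]
    kept.sort(key=lambda d: (not d.moving, -d.confidence))
    if not kept:
        return "Path clear."
    parts = [("moving " if d.moving else "") + d.label for d in kept]
    return "Ahead: " + ", ".join(parts) + "."

# Example: detections as a lightweight detector might emit them
frame_detections = [
    Detection("person", 0.91, moving=True),
    Detection("chair", 0.64, moving=False),
    Detection("dog", 0.32, moving=True),  # below threshold, dropped
]
print(summarize(frame_detections))  # → "Ahead: moving person, chair."
```

In a real deployment the returned string would be handed to an on-device speech synthesizer, keeping the whole loop on the edge device to avoid the cloud latency and privacy issues the paper highlights.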
