Enhancing Smart Tourism through Conversational AI and Real-Time Visual Translation

Abstract

Smart tourism is a rapidly evolving field that integrates emerging technologies with the changing needs of modern travelers. In recent years, tourist experiences have been transformed through advances in artificial intelligence, embedded systems, computer vision, and the Internet of Things (IoT). The integration of chatbots, automatic translators, and personalized assistants into mobile devices represents a significant shift in how users interact with their surroundings. In this context, this work presents the design and implementation of an intelligent mobile application for tourist assistance that offers personalized services such as real-time recommendations, visual translation, and support for visually impaired users through object detection and audio-based orientation. The proposed system combines several technologies: a conversational chatbot, a camera for environmental visual analysis, a multilingual translation module, an interactive map for geolocation, and audio output (via devices like AirPods) to deliver spoken feedback. The architecture is decentralized and includes both the user’s smartphone and a Raspberry Pi board for embedded processing. This research aims to provide an inclusive, mobile, intelligent, and context-aware solution for all types of travelers, with a particular focus on enhancing accessibility and autonomy for visually impaired individuals.
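The audio-based orientation described above can be illustrated with a minimal sketch. The class names, position thresholds, and phrasing below are illustrative assumptions, not the authors' implementation; a real deployment would take detections from the camera's vision model and feed the resulting cues to a text-to-speech engine over the user's earphones.

```python
# Hypothetical sketch: turn object detections (label + horizontal position
# in the camera frame) into short spoken orientation cues for a visually
# impaired user. All names and thresholds here are assumptions for
# illustration only.

from dataclasses import dataclass


@dataclass
class Detection:
    label: str       # object class reported by the vision model, e.g. "door"
    x_center: float  # horizontal center of the bounding box, in [0.0, 1.0]


def orientation_cue(det: Detection) -> str:
    """Map a detection's horizontal position to a spoken direction."""
    if det.x_center < 0.33:
        side = "on your left"
    elif det.x_center > 0.66:
        side = "on your right"
    else:
        side = "ahead"
    return f"{det.label} {side}"


def describe_scene(detections: list[Detection]) -> list[str]:
    """Produce one cue per detection, ordered left to right across the frame."""
    ordered = sorted(detections, key=lambda d: d.x_center)
    return [orientation_cue(d) for d in ordered]
```

For example, `describe_scene([Detection("stairs", 0.8), Detection("door", 0.2)])` yields `["door on your left", "stairs on your right"]`, which the system could then voice through the user's AirPods.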