AI-Driven Multi-Modal Assessment of Visual Impression in Architectural Event Spaces: A Cross-Cultural Behavioral and Sentiment Analysis
Abstract
Visual Impression in Architectural Space (VIAS) plays a central role in user response to environments, yet designer-controlled spatial variables often produce uncertain perceptual outcomes across cultural contexts. This study develops a multi-modal framework integrating VIAS theory, spatial documentation, and sentiment-aware NLP to evaluate temporary event spaces. Using a monthly market in Matsue, Japan, as a case study, we introduce (1) systematic documentation of controlled spatial variables (layout, visibility, advertising strategy), (2) culturally balanced datasets comprising native Japanese and international participants across onsite, video, and virtual interviews, and (3) an adaptive sentiment-weighted keyword extraction algorithm that suppresses interviewer bias and verbosity imbalance. Results demonstrate systematic modality effects: onsite participants exhibit a festive-atmosphere bias (+18% positive sentiment vs. video), while remote modalities elicit more balanced critique of signage clarity and missing amenities. Cross-linguistic analysis reveals that native participants emphasize holistic atmosphere, whereas international participants identify discrete focal points. The adaptive algorithm reduces verbosity-driven score inflation by 45%, enabling fair cross-participant comparison. By integrating spatial variable documentation with sentiment-weighted linguistic patterns, this framework provides a replicable methodology for validating architectural intent through computational analysis, offering evidence-based guidance for inclusive event space design.
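The abstract does not specify the adaptive sentiment-weighted extraction in detail; the sketch below illustrates one plausible scheme, assuming per-utterance sentiment scores in [-1, 1], a stoplist for interviewer-introduced tokens, and a sublinear length normalizer. The function name `score_keywords` and the tuning parameter `alpha` are hypothetical, not from the paper.

```python
from collections import Counter

def score_keywords(utterances, interviewer_stopwords=frozenset(), alpha=0.75):
    """Sentiment-weighted keyword scores for one participant.

    utterances: list of (tokens, sentiment) pairs, sentiment in [-1, 1].
    Tokens echoed from the interviewer are dropped via the stoplist, and
    the final scores are divided by (token count)**alpha, so a participant
    who says four times as much does not get four times the score.
    """
    scores = Counter()
    total_tokens = 0
    for tokens, sentiment in utterances:
        kept = [t for t in tokens if t not in interviewer_stopwords]
        total_tokens += len(kept)
        for t in kept:
            scores[t] += sentiment  # each mention weighted by its utterance sentiment
    norm = max(total_tokens, 1) ** alpha  # sublinear verbosity damping
    return {w: s / norm for w, s in scores.items()}
```

With `alpha = 1` the normalizer becomes plain frequency normalization; values below 1 trade off between rewarding repeated emphasis and suppressing verbosity-driven inflation, which is the kind of balance the 45% reduction figure refers to.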