Affective Tactility: Cross-Modal Translation from Natural Language to Procedural Height Maps
Abstract
Text-to-image generation has matured rapidly, yet translating paralinguistic cues in text into physically perceivable tactility remains underexplored. We present Affective Tactility, a procedural framework that maps natural-language input to affect coordinates, valence (V) and arousal (A), and renders a manufacturable height map through an interpretable procedural texture generator. For end-to-end reproducibility, we describe a transparent lexicon-based baseline for estimating (V, A) and also validate the downstream mapping using controlled affect coordinates. The design is grounded in cross-modal correspondences (e.g., bouba–kiki) and is physiologically motivated by known tactile pathways. As a proof of concept, we demonstrate qualitative diversity of the generated height maps and computational auditability: two simple descriptors, root-mean-square (RMS) roughness (Rq) and a spatial-frequency centroid, allow a linear support vector machine (SVM) to separate affect regimes with 96.7% test accuracy; ablations and a permuted control are reported to address trivial separability. Beyond purely computational checks, we outline a no-participant physical verification protocol based on surface metrology and controlled scanning dynamics (profilometry, vibration, and friction measurements), positioning the method as a stimulus-generation foundation for future psychophysics and accessible haptic media.
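To make the computational audit concrete, the sketch below implements the two descriptors named in the abstract, RMS roughness (Rq) and a spatial-frequency centroid, for a height map stored as a 2D NumPy array. This is a minimal illustration, not the authors' released code: the detrending step (mean-plane removal), the frequency units (cycles per sample), and the toy two-regime demonstration are assumptions made here for clarity.

```python
# Minimal sketch of the two audit descriptors from the abstract.
# Assumptions (not specified in the abstract): the height map is a 2D array,
# detrending is mean-plane removal, and frequencies are in cycles per sample.
import numpy as np


def rms_roughness(height_map: np.ndarray) -> float:
    """Rq: root-mean-square deviation of the surface about its mean plane."""
    h = height_map - height_map.mean()  # assumed detrending: subtract the mean plane
    return float(np.sqrt(np.mean(h ** 2)))


def spatial_frequency_centroid(height_map: np.ndarray) -> float:
    """Power-weighted mean radial spatial frequency of the height map."""
    h = height_map - height_map.mean()
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(h))) ** 2
    ny, nx = height_map.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))
    fx = np.fft.fftshift(np.fft.fftfreq(nx))
    radial = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)  # radial frequency grid
    return float((radial * spectrum).sum() / spectrum.sum())


if __name__ == "__main__":
    # Two toy "affect regimes": a smooth low-frequency surface vs. a rough noisy one.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 2 * np.pi, 256)
    smooth = 0.2 * np.sin(2 * x)[None, :] * np.sin(2 * x)[:, None]
    rough = 0.05 * rng.standard_normal((256, 256))
    for name, hm in [("smooth", smooth), ("rough", rough)]:
        print(f"{name}: Rq={rms_roughness(hm):.4f}, "
              f"centroid={spatial_frequency_centroid(hm):.4f}")
```

In this two-dimensional (Rq, centroid) feature space, a linear SVM of the kind mentioned in the abstract would amount to fitting a single separating line between descriptor pairs labelled by affect regime.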