Enhancing robot understandability - a model to estimate varying levels of discrepancy
Abstract
For robots to be widely deployed in everyday tasks, a robot's actions, decisions, and intentions must be understood by users. A fundamental factor affecting robot understandability is the underlying discrepancy between the robot's and the human's states of mind. This paper contributes to the field of robot understandability by providing insights into how discrepancy and robot understandability are connected, how human behavior indicates underlying discrepancy, and how robots can use hidden Markov models to estimate varying discrepancy levels during interaction. We propose a systematic method for studying human behavior indicators that reveal discrepancy. An exploratory study in which 36 participants interacted with a robot showed that the smaller the discrepancy between robot and human, the more efficient and successful the interaction, even when the robot's instructions were vague or short. The findings of the exploratory study were used to implement and train hidden Markov models that estimate varying levels of discrepancy. With such a model, a robot can continuously assess discrepancy during an interaction and adapt its behavior to decrease it.
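To make the modeling idea concrete, the sketch below shows how a discrete hidden Markov model could decode a sequence of discrepancy levels (hidden states) from observed human behavior indicators, using the standard Viterbi algorithm. All state names, observation labels, and probabilities here are illustrative placeholders, not the parameters trained in the paper.

```python
# Hypothetical sketch: the hidden states are discrepancy levels, the
# observations are human behavior indicators; Viterbi decoding recovers
# the most likely level sequence. All numbers below are made up for
# illustration and do not come from the study's trained models.

STATES = ["low", "medium", "high"]  # discrepancy levels (hidden states)
OBS = ["fluent_action", "hesitation", "clarify_request"]  # behavior indicators

start_p = {"low": 0.6, "medium": 0.3, "high": 0.1}
trans_p = {
    "low":    {"low": 0.7, "medium": 0.2, "high": 0.1},
    "medium": {"low": 0.3, "medium": 0.5, "high": 0.2},
    "high":   {"low": 0.1, "medium": 0.3, "high": 0.6},
}
emit_p = {
    "low":    {"fluent_action": 0.7, "hesitation": 0.2, "clarify_request": 0.1},
    "medium": {"fluent_action": 0.3, "hesitation": 0.5, "clarify_request": 0.2},
    "high":   {"fluent_action": 0.1, "hesitation": 0.4, "clarify_request": 0.5},
}

def viterbi(observations):
    """Return the most likely sequence of discrepancy levels."""
    # Probability of the best path ending in each state, plus the path itself.
    best = {s: start_p[s] * emit_p[s][observations[0]] for s in STATES}
    paths = {s: [s] for s in STATES}
    for obs in observations[1:]:
        new_best, new_paths = {}, {}
        for s in STATES:
            # Best predecessor state for reaching s at this step.
            prev = max(STATES, key=lambda p: best[p] * trans_p[p][s])
            new_best[s] = best[prev] * trans_p[prev][s] * emit_p[s][obs]
            new_paths[s] = paths[prev] + [s]
        best, paths = new_best, new_paths
    final = max(STATES, key=lambda s: best[s])
    return paths[final]

# As hesitations and clarification requests accumulate, the decoded
# discrepancy level rises.
print(viterbi(["fluent_action", "hesitation", "clarify_request", "clarify_request"]))
```

In an online setting one would use forward filtering rather than full Viterbi decoding, updating the state distribution after each observed indicator so the robot can adapt mid-interaction.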