Improving Performance in a Speeded, Goal-Directed Object Placement Task Using Familiar and Novel Visual Cues to Weight

Abstract

Efficient manual interactions rely on predictive cues to object weight. While familiar cues (e.g., size) readily support motor planning, it is unclear whether newly learned cues can enhance performance in demanding tasks. Understanding the predictive utility of novel cues has implications for sensory augmentation. We investigated this using a goal-directed hitting task in which participants earned a performance-based reward for squashing virtual bugs. Targets appeared briefly at unpredictable locations on a touchscreen, and participants used light and heavy containers to squash the bugs. Container weight was signaled by a familiar cue (visual volume), a novel cue (checkerboard patterns), or no cue (identical containers). The number of bugs hit was highest with the familiar cue, intermediate with the novel cue, and lowest with no cue. Response time followed a corresponding pattern: movements were fastest with the familiar cue, intermediate with the novel cue, and slowest with no cue. No differences were found in precision, but constant bias was largest with the novel cue, intermediate with no cue, and smallest with the familiar cue. These findings demonstrate that novel cues, though less effective than familiar ones, can be rapidly integrated to guide efficient motor behavior, suggesting their potential for augmenting human performance in real-world tasks.