The Origin of Movement Biases During Reaching



Abstract

Goal-directed movements can fail due to errors in our perceptual and motor systems. While these errors may arise from random noise within these sources, they also reflect systematic motor biases that vary with the location of the target. The origin of these systematic biases remains controversial. Drawing on data from an extensive array of reaching tasks conducted over the past 30 years, we evaluated the merits of various computational models regarding the origin of motor biases. Contrary to previous theories, we show that motor biases do not arise from systematic errors associated with the sensed hand position during motor planning or from the biomechanical constraints imposed during motor execution. Rather, motor biases are primarily caused by a misalignment between eye-centric and body-centric representations of position. This model can account for motor biases across a wide range of contexts, encompassing movements with the right versus left hand, proximal and distal effectors, visible and occluded starting positions, as well as before and after sensorimotor adaptation.

Article activity feed

  1. eLife Assessment

    This valuable study uses an original approach to address the longstanding question of why reaching movements are often biased. The combination of a wide range of experimental conditions and computational models is a strength. However, the modeling assumptions are not well-substantiated, the modeling analysis is insufficient with its focus on fits to average and not individual subject data, and the results are limited to biases in reach direction and do not consider biases in reach extent. Taken together, the evidence supporting the main claims is incomplete.

  2. Reviewer #1 (Public review):

    Wang et al. studied an old, still unresolved problem: Why are reaching movements often biased? Using data from a set of new experiments and from earlier studies, they identified how the bias in reach direction varies with movement direction, and how this depends on factors such as the hand used, the presence of visual feedback, the size and location of the workspace, the visibility of the start position and implicit sensorimotor adaptation. They then examined whether a visual bias, a proprioceptive bias, a bias in the transformation from visual to proprioceptive coordinates and/or biomechanical factors could explain the observed patterns of biases. The authors conclude that biases are best explained by a combination of transformation and visual biases.

    A strength of this study is that it used a wide range of experimental conditions, a high resolution of movement directions, and large numbers of participants, which produced a much more complete picture of the factors determining movement biases than previous studies did. The study used an original, powerful, and elegant method to distinguish between the various possible origins of motor bias, based on the number of peaks in the motor bias plotted as a function of movement direction. The biomechanical explanation of motor biases could not be tested in this way, but this explanation was excluded in a different way using data on implicit sensorimotor adaptation. This was also an elegant method as it allowed the authors to test biomechanical explanations without the need to commit to a certain biomechanical cost function.
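
    The peak-counting logic can be made concrete with a minimal sketch. Each bias source is idealized here, purely for illustration (an assumption, not a claim from the paper), as a sinusoid of target direction whose period determines the number of peaks over a full cycle: one for a proprioceptive bias, two for a transformation bias, four for a visual bias.

```python
import numpy as np

def bias_function(theta_deg, n_peaks, amplitude=5.0):
    """Idealized bias (deg) as a sinusoid of target direction.

    A period of 360 / n_peaks deg yields n_peaks local maxima per
    cycle: 1 ~ proprioceptive, 2 ~ transformation, 4 ~ visual,
    per the assumption under discussion.
    """
    return amplitude * np.sin(np.deg2rad(n_peaks * theta_deg))

def count_peaks(values):
    """Count local maxima on a circular (wrap-around) direction grid."""
    prev, nxt = np.roll(values, 1), np.roll(values, -1)
    return int(np.sum((values > prev) & (values > nxt)))

theta = np.arange(0.0, 360.0, 0.5)  # target directions in degrees
peaks = {n: count_peaks(bias_function(theta, n)) for n in (1, 2, 4)}
# peaks == {1: 1, 2: 2, 4: 4}
```

    The circular (wrap-around) neighbor comparison avoids missing a peak that falls at the 0/360 boundary.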

    The main weakness of the study is that it rests on the assumption that the number of peaks in the bias function is indicative of the origin of the bias. Specifically, it is assumed that a proprioceptive bias leads to a single peak, a transformation bias to two peaks, and a visual bias to four peaks, but these assumptions are not well substantiated. The assumption that a transformation bias leads to two peaks is especially questionable. It is motivated by the fact that biases found when participants matched the position of their unseen hand with a visual target are consistent with this pattern. However, it is unclear why that task would measure only the effect of transformation biases, and not also the effects of visual and proprioceptive biases in the sensed target and hand locations. Moreover, it is not explained why a transformation bias would lead to this specific bias pattern in the first place. The assumption that a visual bias leads to four peaks is also not well substantiated, as one of the papers on which it was based (Yousif et al., 2023) found a similar pattern in a purely proprioceptive task. Another weakness is that the study looked only at biases in movement direction, not at biases in movement extent. The models also predict biases in movement extent, so it is a missed opportunity not to take these into account to distinguish between the models.

    Overall, the authors have done a good job mapping out reaching biases in a wide range of conditions, revealing new patterns in one of the most basic tasks, but unambiguously determining the origin of these biases remains difficult, and the evidence for the proposed origins is incomplete. Nevertheless, the study will likely have a substantial impact on the field, as the approach taken is easily applicable to other experimental conditions. As such, the study can spark future research on the origin of reaching biases.

  3. Reviewer #2 (Public review):

    Summary:

    This work examines an important question in the planning and control of reaching movements - where do biases in our reaching movements arise and what might this tell us about the planning process? They compare several different computational models to explain the results from a range of experiments including those within the literature. Overall, they highlight that motor biases are primarily caused by errors in the transformation between eye and hand reference frames. One strength of the paper is the large number of participants studied across many experiments. However, one weakness is that most of the experiments follow a very similar planar reaching design - with slicing movements through targets rather than stopping within a target. Moreover, there are concerns with the models and the model fitting. This work provides valuable insight into the biases that govern reaching movements, but the current support is incomplete.

    Strengths:

    The work uses a large number of participants, both in laboratory studies, which can be well controlled, and in online studies with a huge number of participants. In addition, they use a large number of reaching directions, allowing careful comparison across models. Together, these allow a comparison between models that is much stronger than is usually performed.

    Weaknesses:

    Although the topic of the paper is very interesting and potentially important, there are several key issues that currently limit the support for the conclusions. In particular, I highlight:

    Almost all studies within the paper use the same basic design: slicing movements through a target with the hand moving on a flat planar surface. First, this means that the authors cannot examine the second component of a bias - the error in the extent of a reach - which is often much larger than the error in reach direction. Second, several studies have examined biases in three-dimensional reaching movements, showing important differences from two-dimensional reaching movements (e.g. Soechting and Flanders 1989). It is unclear how well the authors' computational models could explain the biases present in these much more common reaching movements.

    The model fitting section is currently under-explained and under-detailed. This makes it difficult to accurately assess the model fitting and its strength in supporting the conclusions. If my understanding of the methods is correct, then I have several concerns. For example, the manuscript states that the transformation bias model is based on studies mapping out the errors that might arise across the whole 2D workspace. In contrast, the visual bias model appears to be based on a study that presented targets within a circle (but not tested across the whole workspace). If the visual bias had been measured across the workspace (similar to the transformation bias model), would the model, and therefore the conclusions, be different? Other visual bias models are theoretically possible and might fit the experimental data better than this one; such possibilities also exist for the other models.

    Although the authors do mention that the evidence against biomechanical contributions to the bias is fairly weak in the current manuscript, this needs to be further supported. Importantly, both proprioceptive models of the bias are purely kinematic and appear to ignore the dynamics completely. One imagines a perceived vector error in Cartesian space, whereas the other imagines an error in joint coordinates. These simply result in identical movements that are offset by either a vector or an angle. However, we know that the motor plan is converted into muscle activation patterns which are sent to the muscles; that is, the motor plan is converted into an approximation of joint torques. Joint torques sent to the muscles from a different starting location would not produce an offset in the trajectory as detailed in Figure S1; instead, the movements would curve in complex patterns away from the original plan due to the non-linearity of the musculoskeletal system. In theory, this could also bias some of the other predictions. The authors should consider how the biomechanical plant would influence the measured biases.

  4. Reviewer #3 (Public review):

    The authors make use of a large dataset of reaches from several studies run in their lab to try to identify the source of direction-dependent radial reaching errors. While this has been investigated by numerous labs in the past, this is the first study where the sample is large enough to reliably characterize the systematic biases associated with these radial reaches and to identify possible sources of errors.

    The sample size is impressive, but the authors should include confidence intervals and ideally, the distribution of responses across individuals along with average performance across targets. It is unclear whether the observed "averaged function" is consistently found across individuals, or if it is mainly driven by a subset of participants exhibiting large deviations for diagonal movements. Providing individual-level data or response distributions would be valuable for assessing the ubiquity of the observed bias patterns and ruling out the possibility that different subgroups are driving the peaks and troughs. It is possible that the Transformation or some other model (see below) could explain the bias function for a substantial portion of participants, while other participants may have different patterns of biases that can be attributable to alternative sources of error.

    The different datasets across experimental settings/target sets consistently show that, when the start position is visible, people show smaller deviations when making cardinal-directed movements than when making movements along the diagonals. This reminds me of a phenomenon referred to as the oblique effect: people show greater accuracy for vertical and horizontal stimuli compared to diagonal ones. While the oblique effect has been shown in visual and haptic perceptual tasks (both in the horizontal and vertical planes), there is some evidence that it applies to movement direction. The systematic reach deviations in the current study may thus reflect this phenomenon, which applies across modalities. That is, estimating the direction of a diagonally located visual target from a visual start position may be less accurate, and more biased toward the horizontal axis, than estimating the direction of targets strictly above, below, left, or right of the start position. Other movement biases may stem from poorer estimation of diagonal directions and thus reflect more of a perceptual error than a motor one. This would explain why the bias function appears in both the in-lab and online studies even though the visual targets are in very different locations (different planes, different distances), since the oblique effect arises independent of the plane, distance, or size of the stimuli.

    When the start position is not visible, as in the Vindras study, it is possible that this oblique effect is less pronounced, masked by other sources of error that dominate when looking at 2D reach endpoints made from two separate start positions rather than only at directional errors from a single start position. Or perhaps the participants in the Vindras study were too variable and too few (only 10) to detect this rather small direction-dependent bias.

    A bias in estimating visual direction or the visual movement vector is a more realistic and relevant source of error than the proposed visual bias model. The Visual Bias model is based on data from a study by Huttenlocher et al in which participants "point" to indicate the remembered location of a small target presented on a large circle. The resulting patterns of errors could therefore be due to localizing a remembered visual target, to relative or allocentric cues from the clear contour of the display within which the target was presented, or even to the movements used to indicate the target. This may explain the observed 4-peak bias function or zig-zag pattern of "averaged" errors, although this pattern may not even exist at the individual level, especially given the small sample size. The visual bias argument does not seem well-supported, as the data used to derive this pattern likely reflect a combination of other sources of errors or factors that may not be applicable to the current study, where the target is continuously visible and relatively large. Also, any visual bias should be expressed in coordinates centered on the eye and should vary as a function of the location of the visual targets relative to the eyes. Where the visual targets were located relative to the eyes (or at least the head) is not reported.

    The Proprioceptive Bias Model is supposed to reflect errors in the perceived start position. However, in the current study, there is only a single, visible start position, which is not the best design for studying this contribution. In fact, my paradigms also use a single, visual start position to minimize the contribution of proprioceptive biases, or at least remove one source of systematic biases. The Vindras study aimed to quantify the effect of start position by using two sets of radial targets from two different, unseen start positions on either side of the body midline. When fitting the 2D reach errors at both the group and individual levels (which showed substantial variability across individuals), the start position predicted most of the 2D errors at the individual level - and substantially more than the target direction. While the authors re-plotted the data to only illustrate angular deviations, they only showed averaged data without confidence intervals across participants. Given the huge variability across their 10 individuals and between the two target sets, it would be more appropriate to plot performance separately for the two target sets and show confidence intervals (or individual data). Likewise, even the V-T model predictions should differ across the two target sets, since the visual-proprioceptive matching errors from the Wang et al study that the model is based on are larger for targets on the left side of the body.

    I am also having trouble fully understanding the V-T model and its associated equations, and whether visual-proprioception matching data is a suitable proxy for estimating the visuomotor transformation. I would be interested to first see the individual distributions of errors and a response to my concerns about the Proprioceptive Bias and Visual Bias models.

  5. Author response:

    We are pleased that the reviewers found our study thought-provoking and appreciate the care they have taken in providing constructive feedback. Focusing on the main issues raised by the reviewers, we provide here a provisional response to the Public Comments and outline our revision plan.

    A) Reviewers 1 and 2 were concerned that our task and analyses were limited by the fact that we only tested the model based on biases in movement direction (angular biases) and did not examine biases in movement extent (radial biases).

    While we think the angular biases provide a sufficient test to compare the set of models presented in the paper, we appreciate that there was a missed opportunity to also look at movement extent. Looking at predictions concerning both movement direction and extent would provide a stronger basis for model comparison. To this end, we will take a two-step approach:

    (1) Re-analysis of existing datasets from experiments that involve a pointing task (movements terminate at the target position) rather than a shooting task (movements terminate beyond the target distance). We will conduct a model comparison using these data.

    (2) If we are unable to obtain a suitable dataset or datasets because we cannot access individual data or there are too few participants, we will conduct a new experiment using a pointing task. We will use these new data to evaluate whether the transformation model can accurately predict biases in both movement direction and extent.

    We will incorporate those new results in our revision.

    B) Reviewer 3 noted that model fitting was based on group average data. They questioned if this was representative across individuals and how well the model would account for individual patterns of reach biases.

    To address this issue, we propose to do the following:

    (1) We will first fit the model to individual data in Exp 1 and assess whether a two-peak function, the signature of the transformation model, is characteristic of most of the fits. We recognize that the results at the individual level may not support the model. This could occur because the model is not correct. Alternatively, the model could be correct but difficult to evaluate at the individual level for several reasons. First, the data set may be underpowered at the individual level. Second, motor biases can be idiosyncratic (e.g., within-subject correlation is greater than between-subject correlation), a point we noted in the original submission. Third, as observed in previous studies, transformation biases also show considerable individual variability (Wang et al, 2020); as such, even if the model is correct, a two-peaked function may not hold for all individuals.

    (2) If the individual variability is too large to draw meaningful conclusions, we will conduct a new experiment in which we measure motor and proprioceptive biases. Our plan would be to collect a large data set from a limited number of participants. These data should allow us to evaluate the models on an individual basis, including using each participant’s own transformation/proprioceptive bias function to predict their motor biases.
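
    The individual-level analysis in step (1) could be sketched as follows, with simulated data standing in for one participant (the two-peak amplitude, phase, and noise level are assumptions, not values from the study). Sinusoids with one, two, or four peaks per cycle are fit by linear least squares and compared by R²:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "individual participant": a two-peak (period-180 deg)
# bias function plus trial noise (amplitude and noise are assumptions).
theta = np.deg2rad(np.arange(0, 360, 15))
bias = 4.0 * np.sin(2 * theta + 0.6) + rng.normal(0.0, 1.0, theta.size)

def fit_r2(theta, y, n_peaks):
    """R^2 of the best-fitting sinusoid with n_peaks cycles per 360 deg.

    a*sin(k*theta) + b*cos(k*theta) + c is linear in (a, b, c), so the
    fit reduces to ordinary least squares.
    """
    X = np.column_stack([np.sin(n_peaks * theta),
                         np.cos(n_peaks * theta),
                         np.ones_like(theta)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return 1.0 - resid.var() / y.var()

r2 = {n: fit_r2(theta, bias, n) for n in (1, 2, 4)}
best = max(r2, key=r2.get)  # the two-peak model should win here
```

    Because the sine and cosine terms enter linearly, each candidate model reduces to ordinary least squares, making the comparison cheap enough to run per participant.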

    C) The reviewers have comments regarding the assumptions and form of the different models. Reviewer 3 questioned the visual bias model presented in the paper, and Reviewers 2 and 3 suggested additional visual bias and biomechanical models to consider.

    We agree that what we call a visual bias effect is not confined to the visual modality: It is observed when the target is presented visually or proprioceptively, and is manifest in reaching movements, saccades, and key presses used to adjust a dot to match the remembered target (Kosovicheva & Whitney, 2017; Yousif et al. 2023). As such, the bias may reflect a domain-general distortion in the representation of goals within polar space. We refer to this component as a "visual bias" because it is associated with the representation of the visual target in the reaching task.

    We do think the version of the visual bias model in the original submission is reasonable given that the bias pattern has been observed in perceptual tasks with stimuli that were very similar to ours (e.g., Kosovicheva & Whitney, 2017). We have explored other perceptual models in evaluating the motor biases observed in Experiment 1. For example, several models discuss how visual biases may depend on the direction of a moving object or the orientation of an object (Wei & Stocker, 2015; Patten, Mannion & Clifford, 2017). However, these models failed to account for the motor biases observed in our experiments, an unsurprising outcome since the models were not designed to capture biases in perceived location. There are also models of visual biases associated with viewing angle (e.g., based on retinal/head position). Since we allow free viewing, these biases are unlikely to make substantive contributions to the biases observed in our reaching tasks.

    Given that some readers are likely to share the reviewers’ concerns on this issue, we will extend our discussion to describe alternative visual models and provide our arguments about why these do not seem relevant/appropriate for our study.

    In terms of biomechanical models, we plan to explore at least one alternative model, the MotorNet Model (https://elifesciences.org/articles/88591). This recently published model combines a six-muscle planar arm model with artificial neural networks (ANNs) to generate a control policy. The model has been used to predict movement curvature in various contexts. We will focus on its utility to predict biases in reaching to visual targets.

    D) Reviewer 1 had concerns with how we measured the transformation bias. In particular, they asked why the data from Wang et al (2020) are used as an estimate of transformation biases, and not as the joint effects of visual and proprioceptive biases in the sensed target and hand location, respectively.

    We define transformation error as the misalignment between the visual target and the hand position. We quantify this transformation bias by referencing studies that used a matching task in which participants match their unseen hand to a visual target, or vice versa. Errors observed in these tasks are commonly attributed to proprioceptive bias, although they could also reflect a contribution from visual bias. We utilized the same data set to simulate both the transformation bias model and the proprioceptive bias model.

    Although it may seem that we are simply renaming concepts, the concept of transformation error addresses biases that arise during motor planning. In the proprioceptive bias model, the bias influences only the perceived start position, not the goal, since proprioception informs the perceived position of the hand, but not the target, before the movement begins. In contrast, the transformation bias model proposes that movements are planned toward a target whose location is biased due to discrepancies between visual and proprioceptive representations.
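
    This distinction can be illustrated with a small geometric sketch (the start and goal offsets below are hypothetical, chosen only to show the contrast): in the proprioceptive account the movement is planned from a biased start position to a veridical target, whereas in the transformation account it is planned from a veridical start to a mislocalized goal, and in this example the two predict directional errors of opposite sign.

```python
import numpy as np

def reach_angle(start, goal):
    """Direction of the planned movement vector, in degrees."""
    v = np.asarray(goal, dtype=float) - np.asarray(start, dtype=float)
    return np.degrees(np.arctan2(v[1], v[0]))

start = np.array([0.0, 0.0])
target = 10.0 * np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])  # 45 deg

# Proprioceptive-bias account: the *felt* start position is shifted
# (hypothetical rightward offset), so the movement is planned from the
# biased start to the veridical target.
felt_start = start + np.array([1.0, 0.0])
prop_angle = reach_angle(felt_start, target)    # steeper than 45 deg

# Transformation-bias account: the goal is mislocalized when mapped
# between eye- and body-centered frames (hypothetical downward shift);
# the start position is veridical.
biased_goal = target + np.array([0.0, -1.0])
trans_angle = reach_angle(start, biased_goal)   # shallower than 45 deg
```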

    The question then arises whether measurements of proprioceptive bias also reflect a transformation bias. We believe that the transformation bias is influenced by proprioceptive feedback, or at the very least, that proprioceptive and transformation biases share a common source of error and are thus highly correlated. We will revise the Introduction and Results sections to more clearly articulate these relationships and assumptions.

    E) Reviewer 3 asked whether the oblique effect in visual perception could account for our motor bias.

    The potential link between the oblique effect and the observed motor bias is an intriguing idea, one that we had not considered. However, after giving this some thought, we see several arguments against the idea that the oblique effect accounts for the pattern of motor biases.

    First, under the oblique effect, variance is greater for diagonal orientations than for cardinal orientations. These differences in perceptual variability can explain the bias pattern in visual perception through a Bayesian efficient coding model (Wei & Stocker, 2015). We note that even though participants showed large variability for stimuli at diagonal orientations, the bias for these stimuli was close to zero. As such, we do not think the oblique effect can explain the motor bias function, given the large bias for targets along the diagonal axes.

    Second, the reviewer suggested an "oblique effect" within the motor system, proposing that motor variability is greater for diagonal directions due to increased visual bias. If this hypothesis is correct, a visual bias model should account for the motor bias observed, particularly for diagonal targets. In other words, when estimating the visual bias from a reaching task, a similar bias pattern should emerge in tasks that do not involve movement. However, this prediction is not supported in previous studies. For example, in a position judgment task that is similar to our task but without the reaching response, participants exhibited minimal bias along the diagonals (Kosovicheva & Whitney, 2017).

    Despite our skepticism, we will keep this idea in mind during the revision, investigating variability in movement across the workspace.