Putting perception into action with inverse optimal control for continuous psychophysics

Curation statements for this article:
  • Curated by eLife

    Evaluation Summary:

    The paper presents a Bayesian model framework for estimating individual perceptual uncertainty from continuous tracking data, taking into account motor variability, action cost, and possible misestimation of the generative dynamics. While the contribution is mostly technical, the analyses are well done and clearly explained. The paper therefore provides a didactic resource for students wishing to implement similar models on continuous action data.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #1 agreed to share their name with the authors.)


Abstract

Psychophysical methods are a cornerstone of psychology, cognitive science, and neuroscience, where they have been used to quantify behavior and its neural correlates for a vast range of mental phenomena. Their power derives from the combination of controlled experiments and rigorous analysis through signal detection theory. Unfortunately, they require many tedious trials and preferably highly trained participants. A recently developed approach, continuous psychophysics, promises to transform the field by abandoning the rigid trial structure involving binary responses and replacing it with continuous behavioral adjustments to dynamic stimuli. However, what has precluded wide adoption of this approach is that current analysis methods do not account for the additional variability introduced by the motor component of the task and therefore recover perceptual thresholds that are larger than those obtained in equivalent traditional psychophysical experiments. Here, we introduce a computational analysis framework for continuous psychophysics based on Bayesian inverse optimal control. We show via simulations and previously published data that this framework not only recovers the perceptual thresholds but additionally estimates subjects’ action variability, internal behavioral costs, and subjective beliefs about the experimental stimulus dynamics. Taken together, we provide further evidence for the importance of including acting uncertainties, subjective beliefs, and, crucially, the intrinsic costs of behavior, even in experiments seemingly only investigating perception.

Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    The paper presents a Bayesian model framework for estimating individual perceptual uncertainty from continuous tracking data, taking into account motor variability, action cost, and possible misestimation of the generative dynamics. While the contribution is mostly technical, the analyses are well done and clearly explained. The paper therefore provides a didactic resource for students wishing to implement similar models on continuous action data.

    First off, the paper is lucidly written, which made it a very pleasant read, especially compared to many other modeling papers, and the authors are to be congratulated for this. As such, the paper provides a valuable resource for didactic purposes alone. While the employed methods are not necessarily individually novel, the assembly of various parts into a coherent framework appears nonetheless valuable.

    Thank you for the positive evaluation!

    I have two major concerns, though:

    1. My main comment regards the model comparison using WAIC (Figure 4E) or cross-validation (Figure S4a): if we translate these numbers into Bayes factors, they are extraordinarily high. I assume that the p(x_i|\theta_s) in equation 7 are calculated assuming that the motor noise on u_{i,t} is independent? This would assume that motor processes act i.i.d. on a timescale of 60 ms, which is probably not a very realistic assumption, given that much of the motor variability (as stated by the authors) likely comes from a central (i.e. planning) origin. Would the delta-WAIC not be much smaller if motor noise was assumed to be correlated across time points? Would this assumption change the \sigma estimates?

    Thank you for posing this question. First, for sequential data, differences in likelihood between models tend to be much larger because of the large number of individual data points within a single sequence. Thus, it is not uncommon for model comparisons on sequential data to show much more extreme differences between models, as is the case in the present manuscript.

    Second, since our computational framework is based on LQG control, the model indeed assumes that motor noise is independent across time steps. We agree that this assumption might not be realistic for time steps of 16 ms duration. While this assumption is certainly a simplification, independent noise across time steps is a very common assumption both in perceptual models and in models of motor control, and there is to our knowledge no computationally straightforward way around it in the LQG framework. It thus applies to all of the models considered in this paper, as they all assume temporally uncorrelated noise, both in perception and action. Therefore, the ranking of the models in the model comparison should hopefully not be affected in a systematic way that favors individual models disproportionately more than others, although the magnitudes of the differences in WAIC might be smaller. Since the differences in WAIC are currently in the range of 1e4, we think that they would still be significant even when accounting for correlated noise.

    Third, we think that the simplifying assumption of independent noise does not invalidate the calculation of the WAIC, which assumes independence across trials. The p(x_i | \theta_s) in equation (8) are the likelihoods of whole trials. To compute them, we assume independence of the motor noise across time steps.
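
    In standard WAIC notation (a sketch; the exact expressions are given in the manuscript), with posterior samples \theta_s, s = 1, ..., S, trials x_i, i = 1, ..., n, and T_i denoting the number of time steps in trial i,

        \mathrm{WAIC} = -2 \Big[ \sum_{i=1}^{n} \log \Big( \tfrac{1}{S} \sum_{s=1}^{S} p(x_i \mid \theta_s) \Big) - \sum_{i=1}^{n} \widehat{\mathrm{Var}}_s \big( \log p(x_i \mid \theta_s) \big) \Big], \qquad p(x_i \mid \theta) = \prod_{t=1}^{T_i} p(x_{i,t} \mid x_{i,1:t-1}, \theta).

    The factorization of the per-trial likelihood over time steps follows from the chain rule; the independence assumption enters through the one-step predictive densities, which under temporally independent noise are the Gaussian predictive densities of the model's Kalman filter.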

    We have added a short passage in the subsection ‘model comparison’:

    “Note that the assumption of independent noise across time steps might lead to WAIC values that are larger than those obtained under a more realistic noise model involving correlations across time. However, this should not necessarily affect the ranking between models in a systematic way, i.e. favoring individual models disproportionately more than others.”

    and a passage in the discussion that points out that modeling the noise as being independent across time points is a simplifying assumption:

    “Finally, assuming independent noise across time steps at the experimental sampling rate of 60 Hz is certainly a simplification. Nevertheless, the assumption of independent noise across time steps is very common both in models of perceptual inference and in models of motor control, and there is to our knowledge no computationally straightforward way around it in the LQG framework.”

    2. While the results in Figure 4a are interesting, the deviation of the \sigma estimates from the standard psychophysical estimates for the most difficult condition remains unexplained. What are the limits of this method in estimating perceptual acuity near the perceptual threshold? Is there a problem that subjects just "give up" and the motor cost becomes overwhelming? Would this not invalidate the method for threshold detection?

    We fully agree that for the most difficult conditions at the lowest contrasts, all sequential models we considered are biased with respect to the uncertainties obtained with the 2AFC experiment, which is supposed to be equivalent. Interestingly, when considering synthetic data, we did not see such a discrepancy. Thus, the observed bias points towards an additional mechanism, such as a computational cost or computational uncertainty, that is not captured by the current models at very low contrast.

    For the results in Fig. 4, we assumed a constant behavioral cost across all conditions. The assumption that the cost is independent of perceptual uncertainty might not hold in reality, exactly in line with your hypothesis that subjects might just "give up". There are other possible explanations, though, that could be relevant here. For example, the visual system is known to integrate visual signals over longer times when contrast is lower. This may introduce additional non-linearities in the integration, which could affect the sensitivity, as already pointed out in the study by Bonnen et al. (2015).

    We have added the following passage in the discussion section:

    “In the lowest contrast conditions, all models we considered show a large and systematic deviation in the estimated perceptual uncertainty compared to the equivalent 2AFC task. Note that when considering synthetic data, we did not see such a discrepancy. Thus, the observed bias points towards additional mechanisms, such as a computational cost or computational uncertainty, that are not captured by the current models at very low contrast. One reason for this could be that the assumption of constant behavioral costs across different contrast conditions might not hold at very low contrasts, because subjects might simply give up tracking the target although they can still perceive its location. Another possible explanation is that the visual system is known to integrate visual signals over longer times at lower contrasts [Dean & Tolhurst, 1986; Bair & Movshon, 2004], which could affect not only sensitivity in a nonlinear fashion but could also lead to nonlinear control actions extending across a longer time horizon. Further research will be required to isolate the specific reasons.”

    Reviewer #2 (Public Review):

    This manuscript develops and describes a framework for the analysis of data from so-called continuous psychophysics experiments, a relatively recent approach that leverages continuous behavioral tracking in response to dynamic stimuli (e.g. targets following a position random walk). Continuous psychophysics has the potential to dramatically improve the pace of data collection without sacrificing the ability to accurately estimate parameters of psychophysical interest. The manuscript applies ideas from optimal control theory to enrich the analysis of such data. The authors develop a nested set of data-analytic models: Model 1, the Kalman filter (KF); Model 2, the optimal actor (a special case of a linear quadratic regulator appropriate for linear dynamics and Gaussian variability); Model 3, the bounded actor with behavioral costs; and Model 4, the bounded actor with behavioral costs and subjective beliefs. Each successive model incorporates parameters that the previous model did not. Each parameter is of potential importance in any serious attempt to model human visuomotor behavior. They advertise that their methods improve the accuracy of the inferred values of certain parameters relative to previous methods. And they advertise that their methods enable the estimation of certain parameters that previous analyses did not.

    What were the parameters? In this context, the Kalman filter model has one free parameter: perceptual uncertainty of target position (\sigma). The optimal actor (Model 2) incorporates perceptual uncertainty of cursor position (\sigma_p) and motor variability (\sigma_m), in addition to the perceptual uncertainty of target position (\sigma) that is included in the Kalman filter (Model 1). The bounded actor with behavioral costs (Model 3) incorporates a control cost parameter (c) that penalizes effort ('movement energy'). And the bounded actor with behavioral costs and subjective beliefs (Model 4) further incorporates the human observer's possibly mistaken 'beliefs' about the target dynamics, i.e. how the human's internal model of target motion differs from the true generative model: Model 4 allows the true target dynamics (a position random walk with drift \sigma_rw) to be mistakenly believed to be governed by a position random walk with drift \sigma_s plus a velocity random walk with drift \sigma_v.
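
    In state-space form, these dynamics can be sketched roughly as follows (notation adapted from the description above; \sigma_rw, \sigma_s, and \sigma_v are the standard deviations of the respective random-walk increments, and the manuscript's exact parameterization may differ). True target dynamics:

        x_{t+1} = x_t + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \sigma_{rw}^2).

    Dynamics assumed by the subject under Model 4:

        x_{t+1} = x_t + v_t + \eta_t, \qquad v_{t+1} = v_t + \zeta_t, \qquad \eta_t \sim \mathcal{N}(0, \sigma_s^2), \quad \zeta_t \sim \mathcal{N}(0, \sigma_v^2),

    i.e. a position random walk with an additional slowly drifting velocity component.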

    The authors develop each of these models, show on simulated data that true model parameters can be accurately inferred, and then analyze previously collected data from three papers that helped to introduce the continuous psychophysics approach (Bonnen et al. 2015, 2017 & Knoll et al. 2018). They report that, of the considered models, the most sophisticated model (Model 4) provides the best accounting of previously collected data. This model more faithfully approximates the cross-correlograms relating target and human tracking velocities than the Kalman filter model, and is favored by the widely applicable information criterion (WAIC).
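
    As an illustration of the kind of cross-correlogram referred to here, a minimal Python sketch (the variable names, first-difference velocities, and 60 Hz lag range are illustrative assumptions, not the authors' actual analysis code):

        import numpy as np

        def velocity_cross_correlogram(target_pos, cursor_pos, max_lag=60):
            """Correlation between target and cursor velocities at a range of lags.

            target_pos, cursor_pos: 1D position traces sampled at a fixed rate.
            max_lag: maximum lag in samples (e.g. 60 samples = 1 s at 60 Hz).
            Returns (lags, corr), where corr[k] is the correlation between the
            target velocity and the cursor velocity k samples later.
            """
            # Velocities as first differences of the position traces
            v_t = np.diff(np.asarray(target_pos, dtype=float))
            v_c = np.diff(np.asarray(cursor_pos, dtype=float))
            # Standardize so that the averages below are correlation coefficients
            v_t = (v_t - v_t.mean()) / v_t.std()
            v_c = (v_c - v_c.mean()) / v_c.std()
            lags = np.arange(max_lag + 1)
            corr = np.array([np.mean(v_t[:v_t.size - k] * v_c[k:]) for k in lags])
            return lags, corr

    A sharp, early peak in such a cross-correlogram indicates fast, reliable tracking; a broad, delayed, or low peak indicates sluggish or noisy tracking.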

    The manuscript makes clear and timely contributions. Methods that are capable of accurately estimating the parameters described above from continuous psychophysics experiments have obvious value to the community. The manuscript tackles a difficult problem and seems to have made important progress.
    Some topics of central importance were not discussed with sufficient detail to satisfy an interested reader, so I believe that additional discussion and/or analyses are required. But the work appears to be well-executed and poised to make a nice contribution to the field.

    The manuscript, however, was an uneven read. Parts of it were very nicely written and clearly explained the issues of interest. Other parts seemed organized around debatable logic, making inappropriate comparisons to, and misleading characterizations of, previous work. Still other parts were weakened by poor editing, typos, and grammatical mistakes.

    Overall, it is a nice piece of work. But the authors should provide substantially more discussion so that readers will develop a better intuition for how and why the inference routines enable accurate estimation, and for how the values of certain parameters trade off with one another. Most especially, the authors should be very careful to accurately describe and appropriately use the previous literature.

    Thanks for the generous overall assessment and the thorough review! We hope that we can address the points you raised in our revised manuscript with the answers to your specific comments below.

    To summarize, we have substantially revised the discussion section to clarify our reasoning and avoid potential misinterpretations of parts of our manuscript as a misrepresentation of previous work. We have also extended the introduction and the exposition of our models in the results section to help readers develop an intuition about the models and inference routines.

  2. Evaluation Summary:

    The paper presents a Bayesian model framework for estimating individual perceptual uncertainty from continuous tracking data, taking into account motor variability, action cost, and possible misestimation of the generative dynamics. While the contribution is mostly technical, the analyses are well done and clearly explained. The paper therefore provides a didactic resource for students wishing to implement similar models on continuous action data.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #1 agreed to share their name with the authors.)

  3. Reviewer #1 (Public Review):

    The paper presents a Bayesian model framework for estimating individual perceptual uncertainty from continuous tracking data, taking into account motor variability, action cost, and possible misestimation of the generative dynamics. While the contribution is mostly technical, the analyses are well done and clearly explained. The paper therefore provides a didactic resource for students wishing to implement similar models on continuous action data.

    First off, the paper is lucidly written, which made it a very pleasant read, especially compared to many other modeling papers, and the authors are to be congratulated for this. As such, the paper provides a valuable resource for didactic purposes alone. While the employed methods are not necessarily individually novel, the assembly of various parts into a coherent framework appears nonetheless valuable. I have two major concerns, though:

    1. My main comment regards the model comparison using WAIC (Figure 4E) or cross-validation (Figure S4a): if we translate these numbers into Bayes factors, they are extraordinarily high. I assume that the p(x_i|\theta_s) in equation 7 are calculated assuming that the motor noise on u_{i,t} is independent? This would assume that motor processes act i.i.d. on a timescale of 60 ms, which is probably not a very realistic assumption, given that much of the motor variability (as stated by the authors) likely comes from a central (i.e. planning) origin. Would the delta-WAIC not be much smaller if motor noise was assumed to be correlated across time points? Would this assumption change the \sigma estimates?
    2. While the results in Figure 4a are interesting, the deviation of the \sigma estimates from the standard psychophysical estimates for the most difficult condition remains unexplained. What are the limits of this method in estimating perceptual acuity near the perceptual threshold? Is there a problem that subjects just "give up" and the motor cost becomes overwhelming? Would this not invalidate the method for threshold detection?

  4. Reviewer #2 (Public Review):

    This manuscript develops and describes a framework for the analysis of data from so-called continuous psychophysics experiments, a relatively recent approach that leverages continuous behavioral tracking in response to dynamic stimuli (e.g. targets following a position random walk). Continuous psychophysics has the potential to dramatically improve the pace of data collection without sacrificing the ability to accurately estimate parameters of psychophysical interest. The manuscript applies ideas from optimal control theory to enrich the analysis of such data. The authors develop a nested set of data-analytic models: Model 1, the Kalman filter (KF); Model 2, the optimal actor (a special case of a linear quadratic regulator appropriate for linear dynamics and Gaussian variability); Model 3, the bounded actor with behavioral costs; and Model 4, the bounded actor with behavioral costs and subjective beliefs. Each successive model incorporates parameters that the previous model did not. Each parameter is of potential importance in any serious attempt to model human visuomotor behavior. They advertise that their methods improve the accuracy of the inferred values of certain parameters relative to previous methods. And they advertise that their methods enable the estimation of certain parameters that previous analyses did not.

    What were the parameters? In this context, the Kalman filter model has one free parameter: perceptual uncertainty of target position (\sigma). The optimal actor (Model 2) incorporates perceptual uncertainty of cursor position (\sigma_p) and motor variability (\sigma_m), in addition to the perceptual uncertainty of target position (\sigma) that is included in the Kalman filter (Model 1). The bounded actor with behavioral costs (Model 3) incorporates a control cost parameter (c) that penalizes effort ('movement energy'). And the bounded actor with behavioral costs and subjective beliefs (Model 4) further incorporates the human observer's possibly mistaken 'beliefs' about the target dynamics, i.e. how the human's internal model of target motion differs from the true generative model: Model 4 allows the true target dynamics (a position random walk with drift \sigma_rw) to be mistakenly believed to be governed by a position random walk with drift \sigma_s plus a velocity random walk with drift \sigma_v.

    The authors develop each of these models, show on simulated data that true model parameters can be accurately inferred, and then analyze previously collected data from three papers that helped to introduce the continuous psychophysics approach (Bonnen et al. 2015, 2017 & Knoll et al. 2018). They report that, of the considered models, the most sophisticated model (Model 4) provides the best accounting of previously collected data. This model more faithfully approximates the cross-correlograms relating target and human tracking velocities than the Kalman filter model, and is favored by the widely applicable information criterion (WAIC).

    The manuscript makes clear and timely contributions. Methods that are capable of accurately estimating the parameters described above from continuous psychophysics experiments have obvious value to the community. The manuscript tackles a difficult problem and seems to have made important progress.
    Some topics of central importance were not discussed with sufficient detail to satisfy an interested reader, so I believe that additional discussion and/or analyses are required. But the work appears to be well-executed and poised to make a nice contribution to the field.

    The manuscript, however, was an uneven read. Parts of it were very nicely written and clearly explained the issues of interest. Other parts seemed organized around debatable logic, making inappropriate comparisons to, and misleading characterizations of, previous work. Still other parts were weakened by poor editing, typos, and grammatical mistakes.

    Overall, it is a nice piece of work. But the authors should provide substantially more discussion so that readers will develop a better intuition for how and why the inference routines enable accurate estimation, and for how the values of certain parameters trade off with one another. Most especially, the authors should be very careful to accurately describe and appropriately use the previous literature.

  5. Reviewer #3 (Public Review):

    This paper represents a powerful extension of previous work on continuous or "tracking-based" psychophysics, an approach introduced by Bonnen et al. 2015. Bonnen et al. showed that one can infer an observer's psychophysical sensitivity far more rapidly using a tracking task than with traditional binary or 2AFC psychophysical experiments. Bonnen et al. modeled an observer performing a tracking task with a Kalman filter, which assumes that the observer produces an optimal estimate of the stimulus in each time bin by combining knowledge of the task statistics with noisy measurements of a moving sensory stimulus. However, the quantitative estimates of an observer's perceptual sensitivity using the Kalman filter method were an order of magnitude larger than the sensitivity estimates obtained using traditional psychophysics experiments. Here, Straub & Rothkopf show that this mismatch can be overcome by building a more accurate model of the psychophysical observer, incorporating realistic aspects of sensory-motor behavior that are missing from the Kalman filter model, namely sensory uncertainty about the observer's cursor or hand position, motor noise, movement costs, and a possible mismatch between the true dynamic stimulus statistics and the observer's assumptions about those statistics. The paper can therefore be viewed as a natural extension of Bonnen et al. and other work on continuous psychophysics. However, it is an extremely powerful and important extension, which allows for a far more accurate description of observer behavior during a tracking task and for a more precise connection between sensitivity estimates obtained from continuous and traditional psychophysics. The paper is thus a theoretical tour de force, and I expect it to have a major impact on the field.
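
    For readers who want to see the basic idea in code, here is a minimal sketch of such a Kalman-filter tracking observer for a one-dimensional position random walk with noisy measurements (parameter names are illustrative; this is not the authors' implementation):

        import numpy as np

        def simulate_kf_observer(n_steps=1200, sigma_rw=1.0, sigma=6.0, seed=0):
            """Ideal-observer position estimates for a random-walk target.

            sigma_rw: standard deviation of the target's random-walk steps.
            sigma: standard deviation of the observer's noisy measurements of
                   target position (the perceptual uncertainty of interest).
            Returns (target, estimate): the true target trajectory and the
            observer's posterior-mean position estimate at each time step.
            """
            rng = np.random.default_rng(seed)
            target = np.cumsum(rng.normal(0.0, sigma_rw, n_steps))
            x_hat, p = 0.0, 1e3  # prior mean and variance over target position
            estimate = np.empty(n_steps)
            for t in range(n_steps):
                p += sigma_rw ** 2                       # predict: target takes another step
                y = target[t] + rng.normal(0.0, sigma)   # noisy measurement of target position
                k = p / (p + sigma ** 2)                 # Kalman gain
                x_hat += k * (y - x_hat)                 # posterior mean update
                p *= (1.0 - k)                           # posterior variance update
                estimate[t] = x_hat
            return target, estimate

    The lag and smoothness of the estimate relative to the target grow with sigma, and fitting sigma to a human's tracking response is the basic logic of the Kalman-filter analysis described above; the contribution of the present paper is to additionally account for motor noise, behavioral costs, and subjective beliefs when doing so.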