Perceptual learning improves discrimination while distorting appearance

Curation statements for this article:
  • Curated by eLife

    eLife logo

    eLife assessment

    This work presents a potentially important behavioral finding: that perceptual learning may not only improve but also distort the appearance of visual stimuli. The strength of the presented evidence in support of the main claim is however incomplete, and requires further analyses to confirm that perceptual learning does increase overestimation bias, and clarify why a very large baseline overestimation bias is present in the data.


Abstract

Perceptual sensitivity often improves with training, a phenomenon known as ‘perceptual learning’. Another important perceptual dimension is appearance, the subjective sense of stimulus magnitude. Are training-induced improvements in sensitivity accompanied by more accurate appearance? Here, we examine this question by measuring both discrimination and estimation capabilities for near-horizontal motion perception, before and after training. Observers trained on either discrimination or estimation exhibited improved sensitivity, along with increases in already-large estimation biases away from horizontal. To explain this counterintuitive finding, we developed a computational observer model in which perceptual learning arises from changes in the precision of underlying neural representations. For each observer, the fitted model accounted for both discrimination performance and the distribution of estimates, and their changes after training. Our empirical findings and modeling suggest that learning enhances distinctions between categories, a potentially important aspect of real-world perception and perceptual learning.

Article activity feed

  2. Reviewer #1 (Public Review):

    In this manuscript the authors report an experiment to assess how training on a perceptual task may not only increase performance on that task but also affect the appearance of the trained stimuli. They compare discrimination performance, coherence thresholds, and estimation biases for random dot motion direction relative to horizontal rightward in three groups of observers before and after 3 days in which they either trained on a discrimination task, an estimation task, or did not train. The authors report significant increases in discrimination performance post training compared to not training. They also report increases in estimation biases when assessed as the average estimate (over a bimodal distribution that crosses 0) but not when assessed as the mode of the bimodal distribution. They conclude that training resulted in "increases in already-large estimation biases away from horizontal".

    The methods and results are strengthened by the combination of classical psychophysical techniques and sophisticated computational modelling. One weakness is the possibility of misleading summary statistics when dealing with bimodal distributions. Convincing evidence that observers perceived stimulus directions as further from horizontal (in the absolute sense) following training is not presented in the current manuscript. Nevertheless, this work is likely to impact the field.

  3. Reviewer #2 (Public Review):

    It is well-known that repeated exposure to perceptual stimuli improves discrimination performance, but less is known about the effects on perceptual appearance. In the present work, the authors tackle this question and focus on one particular effect on perceptual appearance termed boundary avoidance, i.e. the tendency to perceive (or report) a stimulus as biased away from a discrimination boundary.

    In the study, participants performed either a motion discrimination task (clockwise or counterclockwise with respect to a reference axis) or an estimation task (reproducing the orientation of the motion stimulus). Participants were divided into three groups which either i) trained on the discrimination task, ii) trained on the estimation task, or iii) received no training (control group). Performance in both tasks was assessed before and after training. The main behavioral finding is that training (which did not involve feedback) improved discrimination performance and increased estimation precision, but at the same time appeared to increase the boundary avoidance effect. Thus, the authors conclude that perceptual learning improved performance at the cost of appearance.

    To explain these effects, the authors created a computational model in which performance improvements were implemented as a gain increase of neurons sensitive to the trained motion directions. Repulsive biases away from the reference orientation were implemented by a combination of two modeling choices: i) Even during estimation, participants perform an implicit categorization such that they assume that their percept was created by a stimulus in line with their categorization (clockwise or counterclockwise). This effectively biases their response away from the boundary. ii) There is an abundance of neurons tuned to the horizontal reference axis (the "boundary") which likewise leads to a repulsive bias. Overall, the authors conclude that the model was able to explain the major behavioral effects, including the a priori presence of repulsive biases, the increase in performance, the increase in estimation precision and the increase of the repulsive bias.

    At first glance, it was a pleasure reading this paper due to a number of aspects the authors got quite right in my opinion:
    - A clear and well-explained research question.
    - The results are generally well-presented. Much effort and expertise was put into the Figures and many helpful auxiliary Figures are included as a Supplement.
    - The writing was concise and clear.

    However, as outlined below, I'm afraid that the main conclusion of the study and the main motivation for computational modeling are not backed up by the data.

    (1) No evidence for a change in overestimation
    Overestimation is (rightly) defined by the authors as a bias of the perceived orientations towards more extreme values (visualized also in Fig. 2F). However, as acknowledged by the authors, there is nearly no evidence for such an effect. The modal estimation response (correct trials) doesn't change significantly between the sessions. The mean, which is the primary measure used by the authors, is not an appropriate measure of overestimation, as it is severely biased by accuracy. It was unclear to me why it was chosen as the primary measure for nearly all figures and analyses, given that the authors were aware of (and reported) a more suitable measure.

    In my opinion, the mode of the correct responses would be the best way to quantify the overestimation bias. An alternative would be looking at the average absolute (unsigned) distance from the boundary, possibly including both correct and incorrect responses. However, such a "mean of absolute differences" approach would be affected by lucky guessing trials, which could manifest in a probability mass close to the boundary (and the proportion of which changes with overall accuracy). Therefore I see the mode as the strongest and least confounded measure.
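    The mean-vs-mode point above can be made concrete with a minimal simulation (all numbers are hypothetical and not taken from the study): with a bimodal estimate distribution whose correct-response lobe sits near +20° and whose error lobe sits near -20°, the overall mean shifts substantially as accuracy improves, while the mode of the correct responses stays put.

```python
import numpy as np

rng = np.random.default_rng(0)


def simulate_estimates(p_correct, n=100_000, peak=20.0, sd=8.0):
    """Toy bimodal estimate distribution: correct responses cluster at
    +peak deg, errors at -peak deg (hypothetical parameter values)."""
    correct = rng.random(n) < p_correct
    centers = np.where(correct, peak, -peak)
    return rng.normal(centers, sd), correct


def modal_estimate(x, bins=np.arange(-60.0, 60.0, 1.0)):
    """Mode via histogram: center of the most populated 1-deg bin."""
    counts, edges = np.histogram(x, bins=bins)
    i = np.argmax(counts)
    return 0.5 * (edges[i] + edges[i + 1])


results = {}
for p in (0.7, 0.9):
    est, correct = simulate_estimates(p)
    results[p] = (est.mean(), modal_estimate(est[correct]))
    print(f"accuracy={p:.1f}  mean={results[p][0]:5.1f} deg  "
          f"mode(correct)={results[p][1]:5.1f} deg")
```

    With these toy numbers the mean roughly doubles (from about 8° to about 16°) purely because accuracy rises, even though the location of the correct-response lobe never moves — which is exactly why the mean confounds accuracy with overestimation.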

    (2) Nature of the biases
    Although, as outlined in 1), there might actually be no evidence for a *change* in overestimation bias, there clearly was a baseline overestimation bias. However, the reported biases appear extremely large. For instance, for the 2° orientation the modal estimate is close to 20°. To me this raises the question whether we're really dealing with a pure perceptual effect (an 18° misperception seems quite suboptimal) or whether there are other psychological effects at work that could rather be classified as a response bias.

    In particular, I wondered whether the baseline bias is partly explained by participants "wanting to make sure" they indicate the correct category in estimation, and therefore biasing their estimation response away from the ambiguous proximity of the cardinal axes. Does it require more effort to set the estimation orientation close to a cardinal axis while still making sure that it has the correct categorical orientation? I guess there was no horizontal reference line on the screen which would have helped with this?

    The overall discrimination-focused task design might have contributed to this bias. First, the participants trained on estimation also performed a discrimination task (pre/post), which could very likely have affected their response style. Second, the presented orientations during estimation were likewise split 50:50 around the horizontal reference, which could shift the focus towards "getting the sign right" rather than "getting the precise orientation right".

    (3) The mechanism of the model
    As a disclaimer a priori, I am not very familiar with this particular modeling literature (but this may be the case for other readers as well). For this reason I could have used a bit more guidance about how the model works. My understanding is that there are three key mechanisms: 1) gain modulation, which explains the improvement in discrimination; 2) warping, which partly explains boundary avoidance; 3) implicit categorization, which likewise partly explains boundary avoidance. In addition, there are two levels of analysis: 1) the pre-training state (a priori presence of a repulsive bias) and 2) learning effects (bias and performance increase through training). If the models were to be kept as part of a revised manuscript, my suggestion would be to structure the corresponding section in the Results ("Observer Model") a bit more along these anchors. I suggest also providing a bit more explanation already at this point. For instance, I consider it very relevant that implicit categorization effectively works through Bayes' rule by assuming a uniform(?) prior over either the negative or positive orientation axis. I assume that other priors would have been conceivable for conditioning on the response, e.g. taking into account the actual (objective or subjective) distribution of orientations for the particular choice category, so this is a non-trivial modeling choice.
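    The repulsive effect of such implicit categorization can be sketched in a few lines, assuming a Gaussian likelihood, a flat prior, and conditioning on the chosen category (direction > 0); the sensory-noise value below is hypothetical, not the study's. Conditioning turns the posterior into a truncated Gaussian whose mean is pushed away from the boundary:

```python
import math


def conditioned_estimate(m, sigma):
    """Mean of a Gaussian posterior N(m, sigma^2) over direction theta,
    truncated to the implicitly chosen category theta > 0 (flat prior).
    Standard truncated-normal mean: m + sigma * phi(m/sigma) / Phi(m/sigma).
    """
    z = m / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # std normal pdf
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # std normal cdf
    return m + sigma * phi / Phi


sigma = 5.0  # hypothetical sensory noise, deg
for m in (0.0, 2.0, 4.0):
    print(f"measurement {m:3.1f} deg -> conditioned estimate "
          f"{conditioned_estimate(m, sigma):4.1f} deg")
```

    Even a measurement exactly on the boundary (m = 0°) yields an estimate about 0.8·sigma away from it, which is the qualitative signature of boundary avoidance in this kind of model.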
    Intuitively, I would have also thought that if more resources are devoted to the cardinal directions (and the decoder is unaware of this), this would lead to a bias *towards* the cardinal directions. If more neurons fire particularly strongly to near-cardinal orientations (such as the ±4° in training), why would the decoder be repulsed *away* from the cardinal orientation? I trust the authors that the presentation is correct, but to me this was not obvious, and I would have wished for some guidance.