Judgments of agency are affected by sensory noise without recruiting metacognitive processing

Curation statements for this article:
  • Curated by eLife



Abstract

Acting in the world is accompanied by a sense of agency, or experience of control over our actions and their outcomes. As humans, we can report on this experience through judgments of agency. These judgments often occur under noisy conditions. We examined the computations underlying judgments of agency, in particular under the influence of sensory noise. Building on previous literature, we studied whether judgments of agency incorporate uncertainty in the same way that confidence judgments do, which would imply that the former share computational mechanisms with metacognitive judgments. In two tasks, participants rated agency, or confidence in a decision about their agency, over a virtual hand that tracked their movements either synchronously or with a delay, and under either high or low noise. We compared the predictions of two computational models to participants’ ratings and found that agency ratings, unlike confidence, were best explained by a model involving no estimates of sensory noise. We propose that agency judgments reflect first-order measures of the internal signal, without involving metacognitive computations, challenging the assumed link between the two cognitive processes.
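
To illustrate the kind of model comparison described above in a purely schematic way (this is not the authors' analysis code, which is shared with the paper; the data are simulated and all function names, parameter values, and the least-squares criterion are hypothetical), one could fit two candidate mappings from an internal signal to an agency rating and compare how well each reproduces the ratings:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical simulated data: noisy internal delay estimates and agency ratings.
rng = np.random.default_rng(1)
n = 400
delay = rng.choice([0.0, 0.3], size=n)                   # synchronous vs delayed feedback (s)
sigma = rng.choice([0.05, 0.15], size=n)                 # low vs high sensory noise (s)
signal = delay + rng.normal(0.0, sigma)                  # internal estimate of the delay
rating = 1.0 / (1.0 + np.exp((signal - 0.15) / 0.05))    # fake agency ratings in [0, 1]

def rescaling_model(params, x, noise_level):
    """First-order account: the rating is a squashed version of the signal,
    with a separate gain per noise condition but no use of a noise estimate."""
    gain_low, gain_high, centre = params
    gain = np.where(noise_level > 0.1, gain_high, gain_low)
    return 1.0 / (1.0 + np.exp(gain * (x - centre)))

def bayesian_model(params, x, noise_level):
    """Second-order account: the rating tracks the posterior probability of
    'synchronous' given the signal and an estimate of the sensory noise."""
    assumed_delay, = params
    p_sync = norm.pdf(x, 0.0, noise_level)
    p_delay = norm.pdf(x, assumed_delay, noise_level)
    return p_sync / (p_sync + p_delay)

def fit(model, x0):
    """Least-squares fit of a model's free parameters to the ratings."""
    res = minimize(lambda p: np.sum((rating - model(p, signal, sigma)) ** 2), x0=x0)
    return res.fun  # residual sum of squares of the best fit

print("rescaling RSS:", fit(rescaling_model, x0=[10.0, 5.0, 0.15]))
print("bayesian  RSS:", fit(bayesian_model, x0=[0.3]))
```

The actual analysis in the paper compares the models' predictions to the real ratings and need not use this particular fitting criterion; the sketch only illustrates the logic of pitting a noise-blind mapping against a noise-informed one.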

Article activity feed

  1. Evaluation Summary:

    This is a well-designed and well-executed study on the computational mechanisms underlying judgments of agency in an action-outcome delay task. The authors report that unlike judgments of confidence, judgments of agency do not recruit metacognitive processes. This difference between agency and confidence could be an important insight, but more needs to be done to address conceptual issues associated with the definition of metacognition, and the specific features of the task and modeling approach used to obtain and interpret the findings.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #1 agreed to share their name with the authors.)

  2. Reviewer #1 (Public Review):

    This paper examines whether judgments of agency (JoA) are best characterized as 1st order measures of internal signals (prediction error type) or as metacognitive reports - i.e. 2nd order measures of lower-order agency signals. The authors make a clear prediction: if JoAs are metacognitive reports, they should treat noise/uncertainty in the same way as other metacognitive reports such as confidence.

    To test this prediction, the authors designed two tasks in which participants were asked to report their (maximum) agency over movements of a virtual hand presented with or without delay, under high or low noise, and/or their confidence in their agency decision. The hypothesis tested is the following: if confidence and agency judgments share the same computational characteristics (2nd order uncertainty monitoring), then they should show similar sensitivity to internal estimates of sensory noise in the task.

    The predictive power of two models is compared. The model that best explains participants' agency ratings does not involve any estimates of sensory noise (no metacognitive monitoring of noise), unlike the model that best explains confidence measures. Based on these results, the authors conclude that JoAs are not metacognitive (in the sense that they are not 2nd order reports of agency signals) but rather reflect 1st order measures of internal signals.

    The study features a nice combination of experimental tasks and computational models specifically designed to address the question at hand. The originality of the approach is to take into account the uncertainty associated with the processing of one's agency, which classical experimental work on JoA generally treats as nuisance variability, and to offer a nice computational characterization of how JoA responds to this uncertainty (i.e. through scaling of subjective ratings). All of this is well connected to the existing literature and to previous models of agency - e.g. the comparator model.

    This paper is important. It offers convincing evidence that confidence measures, but also the (pre-reflexive) feeling of agency and the (explicit) judgment of one's agency, tap into different computational mechanisms and factor in different contextual information, which is relevant for research on action control but also for the science of consciousness, and which could potentially inform methodological choices about how to measure cognition (e.g. implicit or explicit measures).

  3. Reviewer #2 (Public Review):

    This study investigated the influence of sensory noise on judgements of agency (JoA): the subjective feeling that an action is caused by ourselves. The idea is that this influence can reveal whether JoAs are only like metacognitive judgements conceptually, in that they entail cognition about cognition, or also computationally, in that they incorporate the uncertainty of the signal in a similar way to how confidence judgements do. An elegant combination of pre-registered hypotheses, psychophysics and computational modelling is used to answer this question. The authors find convincing evidence for a 'rescaling model' in which JoAs are rescaled differently depending on the noise condition. This is different to the computational mechanism underlying confidence judgements, which instead incorporates sensory noise to make Bayesian optimal decisions about confidence. The paper is well written, the experiments are well designed, the analyses are sophisticated and appropriate, and the conclusions largely follow from the results. My comments are about conceptual clarification, the interpretation of the two tasks, and whether the results can support one of the main claims.
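
    To make this contrast concrete, here is a back-of-envelope illustration with made-up numbers (the paper's actual model equations and parameter values may differ): for the same internal signal, a Bayesian mapping becomes less extreme under high noise because the noise estimate enters the likelihoods, whereas a rescaling mapping becomes less extreme only because its gain is set per noise condition.

```python
import math

def bayesian_rating(signal, sigma, assumed_delay=0.3):
    """Noise-informed mapping: posterior probability of 'no delay' given the
    signal and an explicit estimate of the sensory noise (flat prior)."""
    like_sync = math.exp(-signal ** 2 / (2 * sigma ** 2))
    like_delay = math.exp(-(signal - assumed_delay) ** 2 / (2 * sigma ** 2))
    return like_sync / (like_sync + like_delay)

def rescaled_rating(signal, high_noise, gain_low=30.0, gain_high=10.0, centre=0.15):
    """Noise-blind mapping: a squashed function of the signal whose gain is
    simply set per noise condition; no trial-wise noise estimate is used."""
    gain = gain_high if high_noise else gain_low
    return 1.0 / (1.0 + math.exp(gain * (signal - centre)))

for sigma, high_noise in [(0.05, False), (0.15, True)]:
    print(f"sigma={sigma}: Bayesian={bayesian_rating(0.05, sigma):.2f}, "
          f"rescaled={rescaled_rating(0.05, high_noise):.2f}")
# Prints roughly: sigma=0.05: Bayesian=1.00, rescaled=0.95
#                 sigma=0.15: Bayesian=0.79, rescaled=0.73
```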

    It is really great that the authors have pre-registered the research questions, methods and analyses in such detail, and also explicitly indicate where they diverge from this pre-registration. I also want to applaud their data and code sharing: this paper is an exemplary piece of open science. The shared data and code will also make it easier for other researchers to use these methods in future studies, which is valuable because the authors have developed very elegant paradigms to study (metacognition of) the sense of agency.

    I was a bit confused about the relationship between the confidence task and the judgement of agency task. The confidence task (Fig. 1) measures confidence about a discrimination based on judgements of agency, while the judgement of agency task (Fig. 2) is about directly inferring agency from one stimulus. So, in a way, the confidence task by design reflects a higher-order judgement about judgements of agency and, in this task setting, the JoAs are treated as the first-order judgements. I wonder whether this set-up leads to the observed difference in the computations underlying confidence responses and JoAs, and whether alternative set-ups could show similar effects for confidence and JoA. This does not question the main point of the manuscript, which is about determining which kind of computations underlie JoAs, but it is important in relating the results of these two tasks.

    Finally, the authors state that their results show that judgements of agency are not computationally metacognitive, but I am not sure whether this conclusion fully follows from their results. They found evidence for a rescaling model over a Bayesian model, but if I understood the rescaling model correctly, it still requires participants to estimate whether their signals contain high or low noise, and then rescale their JoAs accordingly (less extreme judgements when there is more noise). This means that participants do take into account the noise to make their judgements of agency, but they do so in a different way from what has been found for confidence judgements. I am not sure whether this is enough to say that JoAs are not metacognitive in nature.

  4. Reviewer #3 (Public Review):

    Constant et al. describe a study investigating an important issue - are judgements of agency metacognitive in nature? While this topic has received a lot of theoretical attention, empirically the issue is underexplored, partly due to a lack of appropriate frameworks and tools. Here the authors suggest the issue can be tackled by thinking more precisely about the computations involved both in judging agency over an outcome and in forming a (metacognitive) confidence report. This focus on constituent computations is an important conceptual strength of the paper.

    The authors choose to operationalise metacognitive computations as those where agents have "second order access to sensory noise" and design two similar tasks - a confidence judgement task and an agency judgement task - where observers report their experience of controlling a virtual hand that can move synchronously or be delayed. Crucially, the uncertainty of the incoming sensory signals is varied, and the authors explore whether agency and confidence judgements are influenced by this sensory noise, and which kind of computational process can best explain how. While the authors find empirically that noise has an effect on both kinds of judgements, computational modelling suggests that agency judgements are best explained by a 'rescaling' model which does not include an explicit representation of the noise, whereas confidence judgements are better explained by a 'Bayesian' model which does represent noise.

    There is lots to enjoy about this paper. It is particularly inspired to have agency and confidence tasks that are so similar, making them more directly comparable. Indeed, they are compared in the paper with basically identical computational models, something which to my knowledge has never been achieved in this field of work. The models themselves all seem well chosen given certain design assumptions, though I suspect the more general insight of generating explicit computational models of agency-like judgements is one that could inspire other researchers in this field, and charts a route to progress on thorny issues in this and related areas.

    However, while this approach is intriguing, I think the main weakness of this study relates to the core experimental manipulation: introducing temporal delays between actions and outcomes to influence ratings of control. While this is a popular approach in the field, recent authors (e.g., Wen, 2020, Consciousness and Cognition) have suggested that this manipulation may be problematic for a number of reasons. In similar types of paradigm, Wen (2020) notes that agents are able to accurately judge their control over action outcomes that are substantially delayed (e.g., well over 1000 ms) and thus it is possible that 'delay manipulation' designs actually introduce response biases, where participants are somewhat artificially reporting variance in the delays they experience rather than their actual experience/belief about what they can and cannot control. Indeed, in the methods of this present paper, the authors note participants were asked to "focus specifically on the timing of the movement" of the virtual hand, which may make this concern particularly apposite.

    Because of this manipulation, all of the computational modelling (naturally) assumes that agents are engaged in a task where they have to detect the delay and compare this to some criterion value. Indeed, there is nothing else they could be doing in these tasks. The report of "agency" is thus generated directly from this internal variable that encodes "did I detect a delay?", and any confidence report is a metacognitive judgement about that decision.
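
    In other words, under this reading the task reduces to something like the following toy generative sketch (the variable names, numbers, and the particular confidence rule are illustrative assumptions, not the authors' implementation): a noisy internal delay estimate is compared to a criterion, the agency report is read off that first-order estimate, and confidence is a judgement about the resulting decision.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def trial(true_delay, sigma, criterion=0.15, assumed_delay=0.3):
    """One toy trial under the 'delay detection' reading of the task."""
    estimate = true_delay + rng.normal(0.0, sigma)   # noisy internal delay estimate
    detected_delay = estimate > criterion            # first-order decision
    # Agency report read directly off the first-order signal (clipped to [0, 1]).
    agency_report = 1.0 - min(max(estimate / assumed_delay, 0.0), 1.0)
    # Confidence as a judgement about that decision: probability the decision
    # is correct, given the estimate and the assumed noise level.
    p_delay = norm.pdf(estimate, assumed_delay, sigma)
    p_sync = norm.pdf(estimate, 0.0, sigma)
    p_delay_given_estimate = p_delay / (p_delay + p_sync)
    confidence = p_delay_given_estimate if detected_delay else 1.0 - p_delay_given_estimate
    return detected_delay, round(agency_report, 2), round(confidence, 2)

print(trial(true_delay=0.3, sigma=0.05))   # delayed feedback, low noise
print(trial(true_delay=0.0, sigma=0.15))   # synchronous feedback, high noise
```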

    This raises an important issue of conceptual validity: is a judgement of agency equivalent to judging whether an outcome was delayed or not? Many results (see the review by Wen, 2020) suggest that agents can simultaneously tell that an action outcome was delayed but still judge themselves to be the agent, suggesting that an equivalence along these lines is unlikely. If so, this would mean acknowledging that the generalisability of these conclusions is potentially limited: rather than concluding that agency judgements in general are non-metacognitive, the conclusion would be that sensorimotor delay judgements in particular are non-metacognitive. The latter conclusion is by no means uninteresting, but it has a somewhat narrower theoretical significance for the key debate used to frame this paper ("do agency judgements monitor uncertainty in a metacognitive way?").

    A second important issue relates to what exactly makes a computation 'metacognitive'. For example, the authors argue their Bayesian model is a metacognitive one, because it requires the observer to have second-order access to an estimate of their own sensory noise. I am not completely sure this follows: the Bayesian model in this paper clearly incorporates an estimate of the noise/uncertainty in the signal, but not all representations of noise are second-order or metacognitive. For example, Shea (2012) has noted that in precision-weighted Bayesian inference models throughout neuroscience (e.g., Bayesian cue combination, also discussed in this paper), the models contain noise estimates but are not metacognitive in nature. When we combine a noisy visual estimate and a noisy auditory estimate, for instance, the Bayesian solution requires accounting for the noise in the unimodal signals. But - as Shea argues - the precision parameters in these models do not necessarily refer to uncertainty in the agent's perceptions or beliefs, but to uncertainty in the outside world. It seems a similar argument is relevant to the Bayesian model of agency offered by the authors in the present paper. It is not clear to me why we should think the uncertainty parameter in the Bayesian model is something metacognitive (e.g., about the agent's internal comparator representations) rather than something about the outside world too (e.g., the sensory environment is noisy).
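
    For reference, the precision weighting involved in Bayesian cue combination amounts to a simple inverse-variance weighted average (a standard textbook computation, not code from the paper under review), and the variances in it can be read as describing noise in the external signals rather than as a second-order representation of the observer's own uncertainty, which is the crux of Shea's point:

```python
def combine_cues(x_visual, var_visual, x_auditory, var_auditory):
    """Precision-weighted (Bayesian) cue combination: each cue is weighted by
    its inverse variance, and the combined variance is smaller than either
    single-cue variance."""
    w_visual = (1.0 / var_visual) / (1.0 / var_visual + 1.0 / var_auditory)
    combined_mean = w_visual * x_visual + (1.0 - w_visual) * x_auditory
    combined_var = 1.0 / (1.0 / var_visual + 1.0 / var_auditory)
    return combined_mean, combined_var

# The variances describe noise in the incoming signals; nothing here is a
# second-order representation of the observer's own decisions or beliefs.
print(combine_cues(x_visual=10.0, var_visual=1.0, x_auditory=14.0, var_auditory=4.0))
# -> (10.8, 0.8): the estimate is pulled toward the more reliable (visual) cue.
```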

    References:

    Shea (2012). Reward prediction error signals are meta-representational. Noûs. DOI: 10.1111/j.1468-0068.2012.00863.x

    Wen (2020). Does delay in feedback diminish sense of agency? A review. Consciousness and Cognition. DOI: 10.1016/j.concog.2019.05.007