Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    In this study, Guggenmos proposes a process model for predicting confidence reports following perceptual choices, via the evidence available from stimuli of various intensities. The mechanisms proposed are principled, but a number of choices are made that should be better motivated - I develop below a number of concerns by order of importance.

    I’d like to thank the reviewer for their thorough and excellent review. It is no empty phrase to say that this review substantially improved the manuscript.

    1. Lack of separability of the two metacognitive modules.

    Can the author show that the proposed model can actually discriminate between the noisy readout module and the noisy report module? The two proposed modules have a different psychological meaning, but seem to similarly impact the confidence output. Are these two mutually exclusive (as Fig 1 suggests), or could both sources of noise co-exist? It will be important to show model recovery for introducing readout vs. report at the metacognitive level, e.g., show that a participant best-fitted by a nested model or a subpart of the full model, with a restricted number of modules (some of the parameters set to zero or one), is appropriately recovered? (focusing on these two modules) This raises the question of how the two types of sigma_m are recoverable/separable from each other (and should they both be called sigma_m, even if they both represent a standard deviation)? If they capture independent aspects of noise, one could imagine a model with both modules. More evidence is needed to show that these two capture separate aspects of noise.

    Testing the separability of the two noise types (readout, report) is a great idea and I have now performed a corresponding recovery analysis. Specifically, I have simulated data with both noise types for different regimes of sensory and metacognitive noise. As shown in the new Figure 7—figure supplement 6, the noise type can be precisely recovered in the most typical regimes.

    I now refer to this analysis in the subsection 2.4 Model recovery (Line 521ff):

    “One strength of the present modeling framework is that it allows testing whether inefficiencies of metacognitive reports are better described by metacognitive noise at readout (noisy-readout model) or at report (noisy-report model). To validate this type of application, I performed an additional model recovery analysis which tested whether data simulated by either model are also best fitted by the respective model. Figure 7—figure supplement 6 shows that the recovery probability was close to 1 in most cases, thus demonstrating excellent model identifiability. With fewer trials per observer, recovery probabilities decrease expectedly, but are still at a very good level. The only edge case with poorer recovery was a scenario with low metacognitive noise and high sensory noise. Model identification is particularly hard in this regime because low metacognitive noise reduces the relevance of the metacognitive noise source, while high sensory noise increases the general randomness of responses.”

    In principle, both noise modules can co-exist, and model inversion should be possible (though mathematically more complicated). On the other hand, I anticipate that parameter recovery would be extremely noisy in such a scenario. For this work, I decided not to test this possibility, as it would add considerable complexity with a high probability of ultimately proving unfeasible.

    2. The trade-off between the flexibility of the model (modularity of the metacognitive part, choice of the link functions) and the generalisability of the process proposed seems in favor of the former. Does the current framework really allow to disambiguate between the different models? Or at least, the process modeled is so flexible that I am not sure it allows us to draw general conclusions? Fig 7 and section 3 of the results explain that all models are similar, regardless of module of functions specified; Fig 7 supp shows that half of participants are best fitted by noisy readout, while the other half is best fitted by noisy report; plus, idiosyncrasies across participants are all captured. Does this compromise the generalisability of the modeling of the group as a whole?

    This is a fair point and I understand the question has two components: a) is the model too flexible, potentially preventing generalized conclusions? b) is the flexibility of the model recoverable?

    Regarding a), I should emphasize that the manuscript (and toolbox) provides a modeling framework, rather than a single specific model. In other words, researchers applying the framework/toolbox must make a number of decisions: which noise type? which metacognitive biases should be considered? which link function? To ensure interpretability / generalizability, researchers have to sufficiently constrain the model. Due to this framework character, it makes sense that the manuscript is submitted under the Tools & Resources Article format rather than the Research Article format.

    On the other hand, I agree that it is the duty of the manuscript introducing the framework to provide all necessary information to help the researcher make these decisions. This is where the reviewer’s point b) is critical and I hope that with the new parameter and model recovery analyses in the present revision (see other comments) I meet this requirement to a satisfactory degree.

    To clarify the scope and aim of the paper, I now put a new subsection in front of the example application to the data from Shekhar and Rahnev, 2021 (Line 534ff):

    “It is important to note that the present work does not propose a single specific model of metacognition, but rather provides a flexible framework of possible models and a toolbox to engage in a metacognitive modeling project. Applying the framework to an empirical dataset thus requires a number of user decisions: which metacognitive noise type is likely more dominant? which metacognitive biases should be considered? which link function should be used? These decisions may be guided either by a priori hypotheses of the researcher or can be informed by running a set of candidate models through a statistical model comparison. As an exemplary workflow, consider a researcher who is interested in quantifying overconfidence in a confidence dataset with a single parameter to perform a brain-behavior correlation analysis. The concept of under/overconfidence already entails the first modeling decision, as only a link function that quantifies probability correct (Equation 6) allows for a meaningful interpretation of metacognitive bias parameters. Moreover, the researcher must decide on a specific metacognitive bias parameter. The researcher may not be interested in biases at the level of the confidence report, but, due to a specific hypothesis, rather in metacognitive biases at the level of readout/evidence, thus leaving a decision between the multiplicative and the additive evidence bias parameter. Also, the researcher may have no idea whether the dominant source of metacognitive noise is at the level of the readout or report. To decide between these options, the researcher computes the evidence (e.g., AIC) for all four combinations and chooses the best-fitting model (ideally, this would be in a dataset independent of the main dataset).”
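The model-comparison step of this workflow can be sketched as follows. The negative log-likelihoods and parameter counts below are hypothetical placeholders, not values from any actual dataset; the sketch simply shows how AIC scores and Akaike weights would be computed for the four candidate models:

```python
import math

# Hypothetical fit results: (negative log-likelihood, number of free
# parameters) for the four candidate models from the example workflow.
candidates = {
    ('noisy-readout', 'multiplicative'): (1510.2, 4),
    ('noisy-readout', 'additive'):       (1498.7, 4),
    ('noisy-report',  'multiplicative'): (1503.1, 4),
    ('noisy-report',  'additive'):       (1495.4, 4),
}

def aic(neg_ll, k):
    """Akaike information criterion: 2k + 2 * negative log-likelihood."""
    return 2 * k + 2 * neg_ll

scores = {name: aic(nll, k) for name, (nll, k) in candidates.items()}
best = min(scores, key=scores.get)

# Akaike weights express the relative evidence for each candidate model
min_aic = min(scores.values())
raw = {n: math.exp(-(s - min_aic) / 2) for n, s in scores.items()}
total = sum(raw.values())
weights = {n: w / total for n, w in raw.items()}
print(best, round(weights[best], 3))
```

The model with the lowest AIC wins; the Akaike weight quantifies how decisively, which can inform whether a single best-fitting model or model averaging is appropriate.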

    In addition, the website of the toolbox now provides a lot more information about typical use cases: https://github.com/m-guggenmos/remeta

    3. More extensive parameter recovery needs to be done/shown. We would like to see a proper correlation matrix between parameters, and recovery across the parameter space, not only for certain regimes (i.e. more than fig 6 supp 3), that is, the full grid exploration irrespective of how other parameters were set.

    The recovery of the three metacognitive bias parameters is displayed in Fig 4, but what about the other parameters? We need to see that they each have a specific role. The point in the Discussion "the calibration curves and the relationships between type 1 performance and confidence biases are quite distinct between the three proposed metacognitive bias parameters may indicate that these are to some degree dissociable" is only very indirect evidence that this may be the case.

    A comprehensive parameter recovery analysis is indeed a key analysis that was missing in the first version of the manuscript. I have now performed several analyses to address this and have rewritten and extended section 2.3 on parameter recovery. The new parameter recovery analysis was performed as follows (Line 455ff):

    “To ensure that the model fitting procedure works as expected and that model parameters are distinguishable, I performed a parameter recovery analysis. To this end, I systematically varied each parameter of a model with metacognitive evidence biases and generated data. Specifically, each of the six parameters (σs, ϑs, δs, σm, 𝜑m, δm) was varied in 500 equidistant steps between a sensible lower and upper bound. The model was then fit to each dataset. To assess the relationship between fitted and generative parameters, I computed linear slopes between each generative parameter (as the independent variable) and each fitted parameter (as the dependent variable), resulting in a 6 x 6 slope matrix. Note that I computed (robust) linear slopes instead of correlation coefficients, as correlation coefficients are sample-size-dependent and approach 1 with increasing sample size even for tiny linear dependencies. Thus, as opposed to correlation coefficients, slopes quantify the strength of a relationship. Comparability between the slopes of different parameters is given because i) slopes are – like correlation coefficients – expected to be 1 if the fitted values precisely recover the true parameter values (i.e., the diagonal of the matrix) and ii) all parameters have a similar value range which makes a comparison of off-diagonal slopes likewise meaningful. To test whether parameter recovery was robust against different settings of the respective other parameters, I performed this analysis for a coarse parameter grid consisting of three different values for each of the six parameters except σm, for which five different values were considered. This resulted in 3⁵·5¹ = 1215 slope matrices for the entire parameter grid.”
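The slope-matrix logic can be illustrated with a deliberately simple stand-in model (a Gaussian with parameters μ and σ instead of the six ReMeta parameters; this is not the toolbox code). Varying one generative parameter and regressing each fitted parameter on it with a robust Theil–Sen slope should yield ≈1 for the matching parameter and ≈0 for the other:

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(0)

def simulate_and_fit(mu, sigma, n=500):
    """Stand-in model with two parameters (mu, sigma); the maximum-
    likelihood fit is simply the sample mean and standard deviation."""
    data = rng.normal(mu, sigma, n)
    return data.mean(), data.std(ddof=1)

# Vary the generative mu in equidistant steps (sigma held at 1.0),
# then compute robust slopes of fitted parameters vs. generative mu.
gen_mu = np.linspace(-1, 1, 100)
fits = np.array([simulate_and_fit(m, 1.0) for m in gen_mu])

slope_mu_on_mu = theilslopes(fits[:, 0], gen_mu)[0]     # diagonal: ~1
slope_sigma_on_mu = theilslopes(fits[:, 1], gen_mu)[0]  # off-diagonal: ~0
print(round(slope_mu_on_mu, 2), round(slope_sigma_on_mu, 2))
```

Repeating this for each parameter fills one row of the slope matrix per generative parameter; a near-identity matrix indicates that the parameters are independently identifiable.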

    In addition, I performed supplementary analyses assessing a case with fewer trials, a model with confidence biases, and models with mixed evidence and confidence biases. For details about these analyses, I kindly refer the reviewer to section 2.3. Together, these new analyses demonstrate that parameter recovery works extremely well across different regimes and for all model parameters, including the metacognitive bias parameters mentioned in the reviewer’s comment.

    4. Section 1.8: It would be important to report under what regimes of other parameters these simulations were conducted. This is because, even if dependence of Mratio onto type 1 performance is reproduced, and that is not the case for sigma_m, it would be important to know whether that holds true across different combinations of the other parameter values.

    I now repeated this analysis for various settings of other parameters and include the results as new Figure 6—figure supplement 2. While the settings of other parameters affect the type 1 performance dependency of Mratio (with some interesting effects such as Mratio > 1), parameter recovery of sigma_m is largely unaffected. The same basic point thus holds: Mratio shows a nonlinear dependency with type 1 performance, but sigma_m can be recovered largely without bias under most regimes (the main exception is a combination of low sensory noise and high metacognitive noise under the noisy-readout model, which is also mentioned in the manuscript).

    Is lambda_m meaningfully part of the model, and if so, could it be introduced into the Fig 1 model, and be properly part of the parameter recovery?

    I now reworked the part about metacognitive biases to make it more consistent and to introduce lambda_m on an equal footing with the other metacognitive bias parameters. I now distinguish between metacognitive evidence biases (the two main bias parameters of the original model, phi_m and delta_m) and metacognitive confidence biases, i.e., lambda_m and a new additive confidence bias parameter kappa_m. The schematic presentation of the model framework in Figure 1 has been updated accordingly.

    This change also complies with reviewer 2, who rightfully pointed out that the original model framework put much stronger emphasis on bias parameters loading on evidence than on confidence. The metacognitive confidence bias parameters are now also part of the parameter recovery analyses (Figure 7—figure supplement 2).

    While it is still feasible to combine the two evidence-related bias parameters and lambda_m – as queried by the reviewer – not all mixed combinations of evidence- and confidence-related bias parameters perform well in terms of model recovery (in particular, combining all four parameters; cf. Figure 7—figure supplement 3). Hence, a decision on the side of the modeler is required. I comment on this important aspect at the end of the section 1.4 about metacognitive biases (Line 276ff):

    “Finally, note that the parameter recovery shown in Figure 4 was performed with four separate models, each of which was specified with a single metacognitive bias parameter (i.e., 𝜑m, δm, λm, or κm). Parameter recovery can become unreliable when more than two of these bias parameters are specified in parallel (see section 2.3; in particular, Figure 7—figure supplement 3). In practice, the researcher thus must make an informed decision about which bias parameters to include in a specific model (in most scenarios one or two metacognitive bias parameters are a good choice). While the evidence-related bias parameters 𝜑m and δm have a more principled interpretation (e.g., as an under/overestimation of sensory noise), it is not unlikely that metacognitive biases also emerge at the level of the confidence report (λm, κm). The first step thus must always be a process of model specification or a statistical comparison of candidate models to determine the final specification (see also section 3.1).”

    5. An important nuance in comparing the present sigma_m to Mratio is that the present model requires that multiple difficulty levels are tested, whereas instead, the Mratio model based on signal detection theory assumes a constant signal strength. How does this impact the (unfair?) comparison of these two metrics on empirical data that varied in difficulty level across trials? Relatedly, the Discussion paragraph that explained how the present model departs from type 2 AUROC analysis similarly omits to account for the fact that studies relying on the latter typically intend to not vary stimulus intensity at the level of the experimenter.

    I thank the reviewer for this comment, which made me realize that I incorrectly assumed that my model requires multiple stimulus difficulty levels. The only parameter that requires multiple stimulus intensities is the sensory threshold parameter, for which I already state that additional stimulus difficulties close to threshold are needed (Line 147ff). Otherwise, I have now extensively tested that the model works well with constant stimuli. My reasoning mistake was related to the fact that I fit a metacognitive link function, which I thought would require variance on the x-axis; but of course, plenty of variance is already introduced through noise at the sensory level, so multiple difficulty levels are not required to fit the metacognitive level. I have now removed the relevant references to this requirement from the manuscript.

    Nevertheless, I agree that it is interesting to perform the comparison between Mratio and sigma_m also for a scenario with constant stimuli. See both the new Figure 6—figure supplement 1 with constant stimuli and the (updated) main Figure 6 with multiple stimulus levels for comparison.

    The general point still holds also for constant stimuli: Mratio is not independent of type 1 performance. Thus, the observed dependence on type 1 performance is not due to the presence of varying stimulus levels. I now reference this new supplementary figure in Result section 1.8 (Line 389).

    6. 'Parameter fitting minimizes the negative log-likelihood of type 1 choices (sensory level) or type 2 confidence ratings (metacognitive level)'. Why not fitting both choices and confidence at the same time instead of one after the other? If I understood correctly, it is an assumption that these are independent, why not allow confidence reports to stem from different sources of choice and metacognitive noise? Is it because sensory level is completely determined by a logistic (but still, it produces the decision values that are taken up to the metacognitive level)?

    The decision to separate the two levels during parameter inference was deliberate. I now explain this choice in the beginning of Result section 2 (Line 416ff):

    “The reason for the separation of both levels is that choice-based parameter fitting for psychometric curves at the type 1 / sensory level is much more established and robust compared to the metacognitive level for which there are more unknowns (e.g., the type of link function or metacognitive noise distribution). Hence, the current model deliberately precludes the possibility that the estimates of sensory parameters are influenced by confidence ratings.”

    Indeed, I would regard it as highly problematic if the estimates of sensory parameters were influenced by confidence ratings, which are shaped by a manifold of interindividual quirks and biases and for which computational models are still in a developmental stage. Yet, from a pure simulation-based parameter recovery perspective, in which the true confidence model is known, using confidence ratings would indeed make sensory parameter estimation more precise (because of the rich information contained in continuous confidence ratings which is lost in the binarization of type 1 choices).

    7. Fig 4 left panels: could you clarify the reasoning that due to sensory noise, overconfidence is expected, instead of having objective and subjective probability correct aligning on the diagonal? Shouldn't the effects of sensory noise average out? In other words, why would the presence of sensory noise systematically push towards overconfidence rather than canceling out on average?

    As an intuitive explanation consider the case that no signal is present in a stimulus, e.g., a line grating in a clockwise/counterclockwise orientation discrimination task with an angle of 0 degrees. Since there is no true information in the stimulus, type 1 performance will be at chance level irrespective of sensory noise.

    However, sensory noise matters for the metacognitive level. Assuming no sensory noise (i.e., sigma_s = 0), the observer’s stimulus/decision variable would be zero and thus confidence would be zero. Thus, confidence would exactly match type 1 performance. Yet, assuming the presence of sensory noise, the stimulus estimate (“decision value”) will be always different from point-zero, if ever so slightly. While the average estimate of the stimulus variable across trials will indeed cancel out to zero, each individual trial will be different from zero (in either direction) and hence also the confidence will be different from zero in each trial. Since confidence is unsigned, the average confidence will be greater than zero and thus give the impression of an overconfident observer.

    Note that this explanation was implicitly included in the paragraph on the 0.75 signature of confidence (“When evidence discriminability is zero, an ideal Bayesian metacognitive observer will show an average confidence of 0.75 and thus an apparent (over)confidence bias of 0.25. Intuitively this can be understood from the fact that Bayesian confidence is defined as the area under a probability density in favor of the chosen option. Even in the case of zero evidence discriminability, this area will always be at least 0.5 − otherwise the other choice option would have been selected, but often higher.”, Line 257ff).
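This intuition, including the 0.75 signature, can be verified with a few lines of simulation, assuming an ideal Bayesian observer who knows their own sensory noise (the specific confidence rule below, the posterior probability of being correct given the decision value, is the standard Bayesian formulation, not the exact ReMeta link function):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma_s = 1.0   # sensory noise (any value > 0 yields the same result)
n = 200_000

# Zero-signal trials: the true category carries no information and the
# decision value is pure sensory noise.
truth = rng.choice([-1, 1], n)
dv = rng.normal(0, sigma_s, n)
choice = np.sign(dv)
accuracy = (choice == truth).mean()  # chance level, ~0.5

# Bayesian confidence: posterior probability that the choice is correct,
# given the decision value and known sensory noise. Confidence is unsigned,
# so despite dv averaging to zero, confidence does not average to chance.
confidence = norm.cdf(np.abs(dv) / sigma_s)
print(round(accuracy, 2), round(confidence.mean(), 3))  # ~0.5 vs ~0.75
```

Because Φ(|Z|) for standard normal Z is uniformly distributed on [0.5, 1], mean confidence is exactly 0.75 while accuracy stays at chance, producing the apparent overconfidence bias of 0.25.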

    8. The same analysis as Fig 6 but for noisy readout instead of noisy reports does not show the same results: both sigma_m and m-ratio vary as a function of type 1 performance. Does this mean that the present model with readout module does not solve the issue of dependency upon type 1 performance?

    I refer to this in the Result section: “The exception is a regime with very high metacognitive noise and low sensory noise under the noisy-readout model, in which recovery becomes biased” (Line 391ff). Indeed, the type 1 performance dependency of sigma_m recovery in this edge case is not as good as in the noisy-report model. However, note that recovery is stable across a large range of d’, including the range typically aimed for in metacognition experiments (i.e., medium performance levels to ensure sufficient variance in confidence ratings).

    It is also important to point out that a failure to recover true parameters under certain conditions is not a failure of the model, but a reflection of the fact that information can be lost at the level of confidence reports. For example, if sensory noise is very high, the relationship between evidence and confidence becomes essentially flat (Figure 3), producing confidence ratings close to zero irrespective of the level of stimulus evidence. It becomes increasingly impossible to recover any parameters in such a scenario. Vice versa if sensory noise is extremely low, confidence ratings approach a value of 1 irrespective of stimulus evidence, and the same issue arises. In both cases there is no meaningful variance for an inference about latent parameters. This issue is more pronounced in the noisy-readout case because it requires an inversion of precisely the relationship between evidence and confidence.

    9. In Eq8, could you explain why only the decision values consistent with the empirical choice are filtered. Is this an explicit modeling of the 'decision-congruence' phenomenon reported elsewhere (eg. Peters et al 2017)? What are the implications of not keeping only the congruent decision values?

    I apologize, this was a mistake in the manuscript. The integration is over all decision values, not just those consistent with the choice. I corrected it accordingly.

    Reviewer #2 (Public Review):

    This paper presents a novel computational model of confidence that parameterises links between sensory evidence, metacognitive sensitivity and metacognitive bias. While there have been a number of models of confidence proposed in the literature, many of these are tailored to bespoke task designs and/or not easily fit to data. The dominant model that sees practical use in deriving metacognitive parameters is the meta-d' framework, which is tailored for inference on metacognitive sensitivity rather than metacognitive biases (over- and underconfidence). This leaves a substantial gap in the literature, especially as in recent years many interesting links between metacognitive bias and mental health have started to be uncovered. In this regard, the ReMeta model and toolbox is likely to have significant impact on the field, and is an excellent example of a linked publication of both paper and code. It's possible that this paper could do for metacognitive bias what the meta-d' model did for metacognitive sensitivity, which is to say have a considerable beneficial impact on the level of sophistication and robustness of empirical work in the field.

    The rationale for many of the modelling choices is clearly laid out and justified (such as the careful handling of "flips" in decision evidence). My main concern is that the limits to what can be concluded from the model fits need much clearer delineation to be of use in future empirical work on metacognition. Answering this question may require additional parameter/model recovery analysis to be convincing.

    I thank the reviewer for these encouraging and constructive comments!

    Specific comments:

    • The parameter recovery demonstrated in Figure 4 across range of d's is impressive. But I was left wondering what happens when more than one parameter needs to be inferred, as in real data. These plots don't show what the other parameters are doing when one is being recovered (nor do the plots in the supplement to Figure 6). The key question is whether each parameter is independently identifiable, or whether there are correlations in parameter estimates that might limit the assignment of eg metacognitive bias effects to one parameter rather than another. I can think of several examples where this might be the case, for instance the slope and metacognitive noise may trade off against each other, as might the slope and delta_m. This seems important to establish as a limit of what can be inferred from a ReMeta model fit.

    This is an excellent point and was also raised by reviewer #1. See major comment 3 of reviewer #1 for a detailed response. In short, I now provide comprehensive analyses that demonstrate successful parameter recovery across different regimes and both noise types (noisy-readout, noisy-report). See Figure 7.

    Regarding the anticipated trade-offs between the confidence slope (now referred to as multiplicative evidence bias) and metacognitive noise / delta_m (now additive evidence bias), there is a single scenario in which this becomes an issue. I describe this in the Results section as follows (Line 480ff):

    “Here, the only marked trade-off emerges between metacognitive noise σm and the metacognitive evidence biases (𝜑m, δm) in the noisy-readout model, under conditions of low sensory noise. In this regime, the multiplicative evidence bias 𝜑m becomes increasingly underestimated and the additive evidence bias δm overestimated with increasing metacognitive noise. Closer inspection shows that this dependency emerges only when metacognitive noise is high – up to σm ≈ 0.3 no such dependency exists. It is thus a scenario in which there is little true variance in confidence ratings (due to low sensory noise many confidence ratings would be close to 1 in the absence of metacognitive noise), but a lot of measured variance due to high metacognitive noise. It is likely for this reason that parameter inference is problematic. Overall, except for this arguably rare scenario, all parameters of the model are highly identifiable and separable.”

    In my experience, certain trade-offs in specific edge cases are almost inescapable for more complex models. Overall, I think it is fair to say that parameter recovery works extremely well, including the ‘trinity’ of metacognitive noise / multiplicative evidence bias / additive evidence bias.

    • Along similar lines, can the noisy readout and noisy report models really be distinguished? I appreciate they might return differential AICs. But qualitatively, it seems like the only thing distinguishing them is that the noise is either applied before or after the link function, and it wasn't clear whether this was sufficient to distinguish one from the other. In other words, if you created a 2x2 model confusion matrix from simulated data (see Wilson & Collins, 2019 eLife) would the correct model pathway from Figure 1 be recovered?

    Great point. I introduced a new subsection 2.4 “Model recovery”, in which I demonstrate successful recovery of noisy-readout versus noisy-report models. See also my response to the first comment of Reviewer #1, which includes the new model recovery figure and the associated paragraph in the manuscript. The key new figure is Figure 7—figure supplement 6.

    • Again on a similar theme: isn't the slope parameter rho_m better considered a parameter governing metacognitive sensitivity, given that it maps the decision values onto confidence? If this parameter approaches zero, the function flattens out which seems equivalent to introducing additional metacognitive noise. Are these parameters distinguishable?

    Indeed, the parameter recovery analysis shows a slight negative correlation between the slope parameter (now termed multiplicative evidence bias) and metacognitive noise (Figure 7). As the reviewer mentions, this is likely caused by the fact that both parameters lead to a flattening/steepening of the evidence-confidence relationship. For reference, in the empirical dataset by Shekhar & Rahnev, the correlation between AUROC2 and the multiplicative evidence bias is almost absent at r = −0.017. Critically, however, while an increase of the metacognitive noise parameter σm will ultimately lead to a truly flat/indifferent relationship between evidence and confidence, the multiplicative evidence parameter 𝜑m only affects the slope (i.e., asymptotically confidence will still reach 1). This is one reason why parameter recovery for both σm and 𝜑m works overall very well. The differential effects of σm and 𝜑m are now better illustrated in the updated Figure 3.
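A toy simulation makes this differential effect concrete (a tanh link, report-level noise, and clipping serve as illustrative stand-ins for the actual model components; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)  # metacognitive evidence (arbitrary units)

def mean_conf(phi=1.0, sigma_m=0.0, reps=5000):
    """Average reported confidence under a tanh link with multiplicative
    evidence bias phi and Gaussian report noise sigma_m (clipped to [0, 1])."""
    noise = rng.normal(0, sigma_m, (reps, x.size))
    conf = np.clip(np.tanh(phi * x) + noise, 0, 1)
    return conf.mean(axis=0)

biased = mean_conf(phi=0.3)               # shallow slope, no noise
noisy = mean_conf(phi=1.0, sigma_m=1.5)   # steep slope, heavy report noise

# With a multiplicative bias alone, mean confidence still saturates near 1
# at high evidence; heavy metacognitive noise keeps it away from 1 everywhere.
print(round(biased[-1], 3), round(noisy[-1], 3))
```

The multiplicative bias merely stretches the x-axis of the evidence-confidence curve, whereas metacognitive noise caps how informative confidence can ever be, which is why the two parameters remain largely separable in recovery.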

    Also conceptually, the multiplicative evidence parameter 𝜑m plausibly represents a metacognitive bias, with either interpretation that I suggest in the manuscript: as an under/overestimation of the evidence or as an over/underestimation of one’s own sensory noise, leading to under- or overconfidence, respectively. In sum, I think there are strong arguments for the present formalization and interpretation.

    • The final paragraph of the discussion was interesting but potentially concerning for a model of metacognition. It explains that data on empirical trial-by-trial accuracy is not used in the model fits. I hadn't appreciated this until this point in the paper. I can see how in a process model that simulates decision and confidence data from stimulus features, accuracy should not be an input into such a model. But in terms of a model fit, it seems odd not to use trial by trial accuracy to constrain the fits at the metacognitive level, given that the hallmark of metacognitive sensitivity is a confidence-accuracy correlation. Is it not possible to create accuracy-conditional likelihood functions when fitting the confidence rating data (similar to how the meta-d' model fit is handled)? Psychologically, this also makes sense given that the observer typically knows their own response when giving a confidence rating.

    While I agree, of course, that metacognitive sensitivity quantifies the confidence-accuracy relationship, a process model is a distinct approach and requires distinct methodology. Briefly, the current model fit cannot be improved upon, as it is based on a precise inversion of the forward model. Computing accuracy-conditional likelihoods would lead to biased parameter estimates, because it would incorrectly imply that the observer has access to the accuracy of their choice. While the observer knows their choice, as the reviewer correctly notes, they do not know the true stimulus category and hence do not know their accuracy.

    I argue in the manuscript that both approaches (descriptive meta-d’, explanatory process model) have their advantages and disadvantages. The concept of meta-d’ / metacognitive sensitivity does not care why a particular confidence rating is the way it is, or whether an incorrect response is caused by sensory noise or by an attentional lapse. On the one hand, this implies that one cannot draw any conclusions about the causes and mechanisms of metacognitive inefficiency, which could be perceived as a major drawback. In this respect, it is a purely descriptive measure (cf. last comment of Reviewer #1). On the other hand, because it is descriptive, it can simply compare the confidence between correct and incorrect choices and thus, in a sense, capture a more thorough picture of metacognitive sensitivity; that is, being metacognitively aware not only of the consequences of one’s own sensory noise (as in typical process models), but also of all other sources of error (attentional lapses, finger errors, etc.). I now added an additional paragraph in which I summarize the comparison of type 2 ROC / meta-d’ and process models along these lines (Line 800ff):

    “In sum, while a type 2 ROC analysis, as a descriptive approach, does not allow any conclusions about the causes of metacognitive inefficiency, it is able to capture a more thorough picture of metacognitive sensitivity: that is, it quantifies metacognitive awareness not only about one’s own sensory noise, but also about other potential sources of error (attentional lapses, finger errors, etc.). While it cannot distinguish between these sources, it captures them all. On the other hand, only a process model approach allows one to draw specific conclusions about mechanisms – and pin down sources – of metacognitive inefficiency, which arguably is of major importance in many applications.”

    • I found it concerning that all the variability in scale usage was being assumed to load onto evidence-related parameters (eg delta_m) rather than being something about how subjects report or use an arbitrary confidence scale (eg the "implicit biases" assumed to govern the upper and lower bounds of the link function). It strikes me that you could have a similar notion of offset at the level of report - eg an equivalent parameter to delta_m but now applied to c and not z. Would these be distinguishable? They seem to have quite different interpretations psychologically: one is at the level of a bias in confidence formation, and the other at the level of a public report.

    I substantially reworked the section about metacognitive biases, including an additive metacognitive bias (κm) also at the level of confidence. The previous version of the manuscript already included a multiplicative bias parameter loading onto confidence (previously referred to as the ‘confidence scaling’ parameter, now the multiplicative confidence bias λm), but it was considered optional and was, for example, not part of the parameter recovery analyses.

    My previous emphasis on biases that load onto evidence-related variables was due to their more principled interpretation (e.g. ‘underestimation of sensory noise’), but I agree that metacognitive biases need not be principled and may be driven, for example, by the idiosyncratic usage of a particular confidence scale. The updated Figure 1 sketches the new, more complete model.

    Is a mix of evidence- and confidence-related metacognitive bias parameters distinguishable? I tested this in Figure 7—figure supplement 3.

    The slope matrices show that, for example, the model suggested by the reviewer (two evidence-related bias parameters 𝜑m and δm plus an additive confidence-related bias parameter κm) is to some degree dissociable, although slight trade-offs start to emerge with such a complex model. By contrast, a mix of only one evidence-related and one confidence-related bias parameter is much more robust. In general, I thus recommend using at most two metacognitive bias parameters, which are selected either based on a priori hypotheses or on a model comparison. I comment on the necessity of choosing one’s bias parameters in a new paragraph in section 1.4 about metacognitive biases (Line 276ff):

    “Finally, note that the parameter recovery shown in Figure 4 was performed with four separate models, each of which was specified with a single metacognitive bias parameter (i.e., 𝜑m, δm, λm, or κm). Parameter recovery is more unreliable when more than two of these bias parameters are specified in parallel (see section 2.3; in particular, Figure 7—figure supplement 3). In practice, the researcher thus must make an informed decision about which bias parameters to include in a specific model (in most scenarios, one or two metacognitive bias parameters are a good choice). While the evidence-related bias parameters 𝜑m and δm have a more principled interpretation (e.g., as an under-/overestimation of sensory noise), it is not unlikely that metacognitive biases also emerge at the level of the confidence report (λm, κm). The first step thus must always be a process of model specification or a statistical comparison of candidate models to determine the final specification (see also section 3.1).”
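
    To make the distinction concrete, here is a minimal toy sketch (my own construction, not the ReMeta implementation) of how an additive evidence-level bias (delta_m, applied before the link function) differs from an additive report-level bias (kappa_m, applied after it): the two coincide in the middle of the scale but diverge near its bounds, which is what makes them partly dissociable in recovery analyses.

    ```python
    import numpy as np

    def confidence(z, delta_m=0.0, kappa_m=0.0):
        """Toy confidence link: tanh of absolute decision evidence.

        delta_m shifts the evidence *before* the link (evidence-level bias);
        kappa_m shifts the confidence *after* the link (report-level bias).
        Both are illustrative stand-ins, not the ReMeta equations.
        """
        c = np.tanh(np.abs(z) + delta_m)       # evidence-level offset
        return np.clip(c + kappa_m, 0.0, 1.0)  # report-level offset, bounded scale

    z = np.linspace(-3, 3, 7)
    # Near the bounds of the confidence scale the two biases produce different
    # shapes; in the middle of the scale they are nearly interchangeable.
    evid = confidence(z, delta_m=0.2)
    rep = confidence(z, kappa_m=0.2)
    print(np.round(evid - rep, 3))
    ```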

  2. Evaluation Summary:

    This paper presents a novel computational model of metacognition that parameterizes links between sensory evidence and confidence. The proposed model relies on perceptual decision-making to formalize different sources of noise and bias that impact confidence, with the aim of developing metacognitive metrics that are independent of perceptual sensitivity - a continued endeavor in the field. Despite the clear merits of this approach, more evidence is needed to validate the proposed architecture, which is particularly modular; this modularity may impair the generalizability of the proposed mechanisms.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #2 agreed to share their name with the authors.)

  3. Reviewer #1 (Public Review):

    In this study, Guggenmos proposes a process model for predicting confidence reports following perceptual choices, via the evidence available from stimuli of various intensities. The mechanisms proposed are principled, but a number of choices are made that should be better motivated - I develop below a number of concerns by order of importance.

    1. Lack of separability of the two metacognitive modules.

    Can the author show that the proposed model can actually discriminate between the noisy readout module and the noisy report module? The two proposed modules have a different psychological meaning, but seem to similarly impact the confidence output. Are these two mutually exclusive (as Fig 1 suggests), or could both sources of noise co-exist? It will be important to show model recovery when introducing readout vs. report noise at the metacognitive level - e.g., to show that a participant best fitted by a nested model or a subpart of the full model, with a restricted number of modules (some of the parameters set to zero or one), is appropriately recovered (focusing on these two modules).

    This raises the question of how the two types of sigma_m are recoverable/separable from each other (and should they both be called sigma_m, even if they both represent a standard deviation)? If they capture independent aspects of noise, one could imagine a model with both modules. More evidence is needed to show that these two capture separate aspects of noise.

    2. The trade-off between the flexibility of the model (modularity of the metacognitive part, choice of the link functions) and the generalisability of the proposed process seems in favor of the former. Does the current framework really allow one to disambiguate between the different models? At the least, the process modeled is so flexible that I am not sure it allows us to draw general conclusions.

    Fig 7 and section 3 of the results explain that all models perform similarly, regardless of the modules or functions specified; Fig 7 supp shows that half of the participants are best fitted by noisy readout, while the other half are best fitted by noisy report; plus, idiosyncrasies across participants are all captured. Does this compromise the generalisability of the modeling of the group as a whole?

    3. More extensive parameter recovery needs to be done/shown. We would like to see a proper correlation matrix between parameters, and recovery across the parameter space, not only for certain regimes (i.e. more than fig 6 supp 3), that is, the full grid exploration irrespective of how other parameters were set.

    The recovery of the three metacognitive bias parameters is displayed in Fig 4, but what about the other parameters? We need to see that they each have a specific role. The point in the Discussion "the calibration curves and the relationships between type 1 performance and confidence biases are quite distinct between the three proposed metacognitive bias parameters may indicate that these are to some degree dissociable" is only very indirect evidence that this may be the case.

    1.8: It would be important to report under what regimes of other parameters these simulations were conducted. This is because, even if dependence of Mratio onto type 1 performance is reproduced, and that is not the case for sigma_m, it would be important to know whether that holds true across different combinations of the other parameter values.

    Is lambda_m meaningfully part of the model, and if so, could it be introduced into the Fig 1 model, and be properly part of the parameter recovery?
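
    The recovery analysis requested here is typically summarized in a single cross-correlation matrix between generative and recovered parameters. A minimal, generic sketch of the procedure (using a toy two-parameter psychometric model of my own, not the ReMeta model or its API):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(1)
    x = np.tile(np.linspace(-1, 1, 11), 40)  # stimulus intensities, 440 trials

    def p_choice(x, sens, bias):
        """Toy logistic psychometric function with two free parameters."""
        return 1.0 / (1.0 + np.exp(-(sens * x + bias)))

    def negll(theta, x, choices):
        p = np.clip(p_choice(x, *theta), 1e-6, 1 - 1e-6)
        return -np.sum(choices * np.log(p) + (1 - choices) * np.log(1 - p))

    true_params, fitted = [], []
    for _ in range(50):                      # 50 simulated observers
        sens, bias = rng.uniform(1, 5), rng.uniform(-1, 1)
        choices = rng.random(x.size) < p_choice(x, sens, bias)
        res = minimize(negll, x0=[2.0, 0.0], args=(x, choices.astype(float)))
        true_params.append([sens, bias])
        fitted.append(res.x)

    true_params, fitted = np.array(true_params), np.array(fitted)
    # Full cross-correlation between generative and recovered parameters:
    # the diagonal indicates recovery quality, off-diagonals indicate trade-offs.
    R = np.corrcoef(true_params.T, fitted.T)[:2, 2:]
    print(np.round(R, 2))
    ```

    The same bookkeeping applies to any model: only the simulate and fit steps need to be swapped for the model under scrutiny.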

    4. An important nuance in comparing the present sigma_m to Mratio is that the present model requires that multiple difficulty levels are tested, whereas instead, the Mratio model based on signal detection theory assumes a constant signal strength. How does this impact the (unfair?) comparison of these two metrics on empirical data that varied in difficulty level across trials?

    Relatedly, the Discussion paragraph that explained how the present model departs from type 2 AUROC analysis similarly omits to account for the fact that studies relying on the latter typically intend to not vary stimulus intensity at the level of the experimenter.

    5. 'Parameter fitting minimizes the negative log-likelihood of type 1 choices (sensory level) or type 2 confidence ratings (metacognitive level)'. Why not fit both choices and confidence at the same time instead of one after the other? If I understood correctly, it is an assumption that these are independent; why not allow confidence reports to stem from different sources of choice and metacognitive noise? Is it because the sensory level is completely determined by a logistic (but still, it produces the decision values that are taken up to the metacognitive level)?
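
    As I understand it, the two-step procedure can be sketched as follows (a generic toy model with a tanh link and Gaussian report noise on an unbounded scale; my own simplification, not the ReMeta likelihood): the sensory parameter is fitted from choices alone, and the metacognitive parameter is then fitted from confidence, marginalizing over the decision values consistent with each observed choice.

    ```python
    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)
    stim = rng.choice([-0.5, -0.2, 0.2, 0.5], size=2000)
    sigma, sigma_m = 0.4, 0.15                     # generative (toy) values
    dv = stim + rng.normal(0, sigma, stim.size)    # noisy decision values
    choice = np.where(dv > 0, 1, -1)
    conf = np.tanh(np.abs(dv)) + rng.normal(0, sigma_m, dv.size)

    # Stage 1: sensory noise from type 1 choices alone (probit likelihood)
    def nll_type1(s):
        p = np.clip(norm.cdf(stim / s), 1e-6, 1 - 1e-6)
        return -np.sum(np.where(choice > 0, np.log(p), np.log(1 - p)))

    s_hat = minimize_scalar(nll_type1, bounds=(0.05, 2.0), method='bounded').x

    # Stage 2: metacognitive noise from confidence, with s_hat held fixed.
    # For each (stimulus, choice) cell, marginalize over decision values that
    # are consistent with the observed choice: the observer knows the choice,
    # but not the accuracy.
    cells = {}
    for s0 in np.unique(stim):
        sims = s0 + rng.normal(0, s_hat, 2000)
        for ch in (-1, 1):
            cells[(s0, ch)] = np.tanh(np.abs(sims[np.sign(sims) == ch]))

    def nll_type2(sm):
        ll = 0.0
        for (s0, ch), mu in cells.items():
            m = (stim == s0) & (choice == ch)
            # p(conf) averaged over choice-consistent decision values
            p = norm.pdf(conf[m][:, None], mu[None, :], sm).mean(axis=1)
            ll += np.log(np.clip(p, 1e-12, None)).sum()
        return -ll

    sm_hat = minimize_scalar(nll_type2, bounds=(0.02, 1.0), method='bounded').x
    print(round(s_hat, 2), round(sm_hat, 2))
    ```

    The point of the staging is that the type 2 fit never touches trial accuracy: stage 2 conditions only on the stimulus and the observed choice.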

    6. Fig 4 left panels: could you clarify the reasoning that due to sensory noise, overconfidence is expected, instead of having objective and subjective probability correct aligning on the diagonal? Shouldn't the effects of sensory noise average out? In other words, why would the presence of sensory noise systematically push towards overconfidence rather than canceling out on average?

    7. The same analysis as Fig 6 but for noisy readout instead of noisy report does not show the same results: both sigma_m and m-ratio vary as a function of type 1 performance. Does this mean that the present model with the readout module does not solve the issue of dependency upon type 1 performance?

    8. In Eq8, could you explain why only the decision values consistent with the empirical choice are filtered? Is this an explicit modeling of the 'decision-congruence' phenomenon reported elsewhere (eg. Peters et al 2017)? What are the implications of not keeping only the congruent decision values?
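
    For what it's worth, in a Gaussian toy model this filtering step corresponds to conditioning the decision-value distribution on the observed choice, i.e., a truncated normal (my own construction, not Eq. 8 itself); dropping it would mean averaging over decision values that contradict the choice the observer has just made.

    ```python
    import numpy as np
    from scipy.stats import truncnorm

    s0, sigma = 0.3, 1.0          # toy stimulus value and sensory noise
    # Decision values consistent with choice = +1: a normal truncated at zero
    a = (0 - s0) / sigma          # lower truncation point in standard units
    cond = truncnorm(a, np.inf, loc=s0, scale=sigma)

    # Sanity check against brute-force filtering of simulated decision values
    rng = np.random.default_rng(0)
    dv = rng.normal(s0, sigma, 200_000)
    kept = dv[dv > 0]             # keep only values congruent with the choice
    print(round(cond.mean(), 3), round(kept.mean(), 3))
    ```

    Note that the conditional mean exceeds the stimulus value: conditioning on the choice shifts decision values away from the category boundary.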

  4. Reviewer #2 (Public Review):

    This paper presents a novel computational model of confidence that parameterises links between sensory evidence, metacognitive sensitivity and metacognitive bias. While there have been a number of models of confidence proposed in the literature, many of these are tailored to bespoke task designs and/or not easily fit to data. The dominant model that sees practical use in deriving metacognitive parameters is the meta-d' framework, which is tailored for inference on metacognitive sensitivity rather than metacognitive biases (over- and underconfidence). This leaves a substantial gap in the literature, especially as in recent years many interesting links between metacognitive bias and mental health have started to be uncovered. In this regard, the ReMeta model and toolbox is likely to have significant impact on the field, and is an excellent example of a linked publication of both paper and code. It's possible that this paper could do for metacognitive bias what the meta-d' model did for metacognitive sensitivity, which is to say have a considerable beneficial impact on the level of sophistication and robustness of empirical work in the field.

    The rationale for many of the modelling choices is clearly laid out and justified (such as the careful handling of "flips" in decision evidence). My main concern is that the limits to what can be concluded from the model fits need much clearer delineation to be of use in future empirical work on metacognition. Answering this question may require additional parameter/model recovery analysis to be convincing.

    Specific comments:

    - The parameter recovery demonstrated in Figure 4 across a range of d's is impressive. But I was left wondering what happens when more than one parameter needs to be inferred, as in real data. These plots don't show what the other parameters are doing when one is being recovered (nor do the plots in the supplement to Figure 6). The key question is whether each parameter is independently identifiable, or whether there are correlations in parameter estimates that might limit the assignment of eg metacognitive bias effects to one parameter rather than another. I can think of several examples where this might be the case; for instance, the slope and metacognitive noise may trade off against each other, as might the slope and delta_m. This seems important to establish as a limit of what can be inferred from a ReMeta model fit.

    - Along similar lines, can the noisy readout and noisy report models really be distinguished? I appreciate they might return differential AICs. But qualitatively, it seems like the only thing distinguishing them is that the noise is applied either before or after the link function, and it wasn't clear whether this was sufficient to distinguish one from the other. In other words, if you created a 2x2 model confusion matrix from simulated data (see Wilson & Collins, 2019, eLife), would the correct model pathway from Figure 1 be recovered?

    - Again on a similar theme: isn't the slope parameter rho_m better considered a parameter governing metacognitive sensitivity, given that it maps the decision values onto confidence? If this parameter approaches zero, the function flattens out which seems equivalent to introducing additional metacognitive noise. Are these parameters distinguishable?

    - The final paragraph of the discussion was interesting but potentially concerning for a model of metacognition. It explains that data on empirical trial-by-trial accuracy is not used in the model fits. I hadn't appreciated this until this point in the paper. I can see how in a process model that simulates decision and confidence data from stimulus features, accuracy should not be an input into such a model. But in terms of a model fit, it seems odd not to use trial by trial accuracy to constrain the fits at the metacognitive level, given that the hallmark of metacognitive sensitivity is a confidence-accuracy correlation. Is it not possible to create accuracy-conditional likelihood functions when fitting the confidence rating data (similar to how the meta-d' model fit is handled)? Psychologically, this also makes sense given that the observer typically knows their own response when giving a confidence rating.

    - I found it concerning that all the variability in scale usage was being assumed to load onto evidence-related parameters (eg delta_m) rather than being something about how subjects report or use an arbitrary confidence scale (eg the "implicit biases" assumed to govern the upper and lower bounds of the link function). It strikes me that you could have a similar notion of offset at the level of report - eg an equivalent parameter to delta_m but now applied to c and not z. Would these be distinguishable? They seem to have quite different interpretations psychologically: one is at the level of a bias in confidence formation, and the other at the level of a public report.
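
    The 2x2 confusion-matrix check suggested above can be sketched generically. The toy models below (my own stand-ins, not the ReMeta equations) place the same Gaussian noise either before or after a tanh link; each simulated data set is fitted by both models and the winner is tallied, as in Wilson & Collins (2019):

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    SIGMAS = np.linspace(0.1, 0.5, 5)           # candidate noise levels (grid fit)

    def simulate(model, sigma_m, n=1000):
        dv = np.abs(rng.normal(0.5, 0.5, n))    # absolute decision values (toy)
        noise = rng.normal(0, sigma_m, n)
        if model == 'readout':
            return np.tanh(np.abs(dv + noise))  # noise applied before the link
        return np.tanh(dv) + noise              # noise applied after the link

    def best_nll(model, conf):
        """Grid-fit sigma_m, scoring with a KDE-based Monte-Carlo likelihood."""
        scores = []
        for sm in SIGMAS:
            kde = gaussian_kde(simulate(model, sm, 3000))
            scores.append(-np.log(np.clip(kde(conf), 1e-12, None)).sum())
        return min(scores)

    models = ['readout', 'report']
    confusion = np.zeros((2, 2))
    for i, gen in enumerate(models):
        for _ in range(3):                      # a few simulated data sets each
            conf = simulate(gen, 0.3)
            fits = [best_nll(m, conf) for m in models]
            confusion[i, int(np.argmin(fits))] += 1
    print(confusion / 3)  # rows: generating model; columns: winning model
    ```

    A diagonal matrix indicates the two noise placements are recoverable; off-diagonal mass quantifies exactly the confusability the review is asking about.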
