Distancing alters the controllability of emotional states by affecting both intrinsic stability and extrinsic sensitivity

Curation statements for this article:
  • Curated by eLife


    eLife Assessment

    This important manuscript proposes a dual behavioral/computational approach to assess emotional regulation in humans. The authors present solid evidence for the idea that emotional distancing (as routinely used in clinical interventions for e.g. mood and anxiety disorders) enhances emotional control.


Abstract

Emotion regulation strategies such as distancing are a core component of many evidence-based, effective psychotherapeutic interventions. They allow individuals to exert more ‘control’ over their emotional state. However, objectively disentangling how emotion regulation increases control has been difficult, for reasons including the lack of a coherent theoretical framework for emotion control and insufficient experimental control over external inputs. Here, we apply a well-established theoretical framework for controllability to a tightly controlled experimental setup to examine the computational mechanisms by which emotion regulation interventions enhance emotional controllability.

109 participants were randomized to either a short emotion regulation intervention (distancing) or a control intervention. Both before and after the intervention, participants repeatedly reported their emotional state along five dimensions while watching a series of short, standardized, emotional video clips. A Kalman filter was used to quantify how multidimensional emotional states changed with video inputs. The consequences of the emotion regulation intervention were examined by Bayesian model comparison, comparing models allowing for a change in intrinsic dynamics and/or input weights. Controllability was quantified using the controllability Gramian.

The Kalman filter captured participants’ emotional trajectories, showing that emotional states were affected by the emotional videos; persisted; and interacted with each other. The distancing strategy made emotional states less externally controllable. It did so by altering two aspects of the dynamical system: by stabilizing specific emotional patterns and by reducing the impact of the external video clips.

Our study used a novel approach to examine emotion regulation, finding that a brief distancing intervention increased perceived emotion control by reducing how much external stimuli can control emotional states. This is due both to an increase in the intrinsic stability of certain emotional states and to a reduction in sensitivity to certain extrinsic affective stimuli.
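The modelling pipeline summarized in the abstract can be sketched in a few lines. This is a minimal illustration under assumed values, not the authors' code: the number of input features, the parameter values, and the horizon are all assumptions, and in the paper the dynamics matrix A and input weights B are estimated from participants' ratings via the Kalman filter rather than drawn at random.

```python
import numpy as np

rng = np.random.default_rng(0)
n_emotions = 5  # five rated emotion dimensions, as in the study

# Illustrative linear dynamics: x_{t+1} = A x_t + B u_t + noise,
# where x_t is the latent emotional state and u_t the video input.
A = 0.8 * np.eye(n_emotions) + 0.05 * rng.standard_normal((n_emotions, n_emotions))
B = rng.standard_normal((n_emotions, 3))  # 3 hypothetical video-input features

def controllability_gramian(A, B, horizon=50):
    """Finite-horizon discrete controllability Gramian:
    W = sum_{k=0}^{T-1} A^k B B' (A^k)'."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)
    for _ in range(horizon):
        W += Ak @ B @ B.T @ Ak.T
        Ak = Ak @ A
    return W

W = controllability_gramian(A, B)
# Large eigenvalues of W correspond to directions in emotion space that
# external inputs can move easily; the paper's finding is that distancing
# shrinks this external controllability.
print(np.linalg.eigvalsh(W))
```

In this framing, comparing the Gramian before and after an intervention (with A and B re-estimated in each block) quantifies how much the intervention changed the reachability of emotional states by external stimuli.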

Article activity feed

  1. eLife Assessment

    This important manuscript proposes a dual behavioral/computational approach to assess emotional regulation in humans. The authors present solid evidence for the idea that emotional distancing (as routinely used in clinical interventions for e.g. mood and anxiety disorders) enhances emotional control.

  2. Reviewer #1 (Public review):

    Summary:

    Using sequences of short videos to elicit emotional changes in participants, Malamud & Huys demonstrate how a brief, controlled emotion regulation intervention (distancing) can effectively alter subsequent emotion ratings. The authors employ latent state-space models to capture the trajectories of emotion ratings, leveraging tools from control theory to quantify the intervention's impact on emotion dynamics.

    Strengths:

    The experiment is well-designed and tailored to the computational modeling approach advanced in the paper. It also relies on a selection of stimuli that were previously validated. Within the constraints of a controlled experiment, the intervention successfully implements a relatively common tool of psychotherapeutic treatment, ensuring its clinical relevance.

    The computational modeling is grounded in the well-established framework of dynamical systems and control theory. This foundation offers a conceptually clear formalization along with powerful quantification tools that go beyond previous more data-driven approaches.

    Overall, the study presents a coherent approach that bridges concepts from clinical psychology and computational theories, providing a timely stepping stone toward advancing quantified, evidence-based psychological interventions targeting emotion control.

    Weaknesses:

    A primary limitation of this study, acknowledged by the authors, is its reliance on self-reports of participants' emotional states. Although considerable effort was made to minimize expectation effects, further research is needed to confirm that the observed behavioral changes reflect genuine alterations in emotional states. Additionally, the generalizability of the findings to long-term remediation strategies remains an open question.

    Second, the statistical analysis, particularly the computational approach, sometimes lacks sufficient detail and refinement. While I will not elaborate on specific points here, one notable issue is the interpretation of the intrinsic matrix (A). The model-free analysis reveals correlations between emotions at a given time or within an emotional state across time points. However, it does not provide evidence to support lagged interactions across states that would justify non-diagonal elements in A. The other result concerning the dynamics matrix only highlights a trend in the dominant eigenvalue, which is difficult to interpret in isolation. The absence of a statistically significant group x intervention interaction furthermore makes this finding less compelling. This weakens the study's conclusions about the importance of intrinsic dynamics, as claimed in the title.

    Finally, to avoid potential misunderstandings of their work, the authors should be more careful about their use of terms pertaining to control theory and take the time to properly define them. For example, the "controllability" of emotional states can either denote that those states are more changeable (control theory definition), or, conversely, more tightly regulated (common interpretation, as used in the abstract). This is true for numerous terms (stability, sensitivity, Gramian, etc.) for which no clear definitions or references are provided. Readers unfamiliar with the framework of control theory will likely be at a loss without more guidance.

  3. Reviewer #2 (Public review):

    Summary:

    In this well-conceived and timely study, the authors assess the controllability of emotions in a quantitative way using the framework of control theory. They use a controlled distancing intervention halfway through an emotion rating task where emotion-inducing short videos from a validated database are shown, and find that the intervention enables better controllability of externally induced emotions in the experimental group.

    Strengths:

    It is a highly original idea to address the external controllability of emotions using the formal framework of control theory. It is also a very propitious approach to take what could be called a 'micro-therapeutic' perspective which looks at the immediate effect of an intervention instead of the 'macro-therapeutic' mid- or long-term effect of a whole course of therapy.

    Weaknesses:

    Acquiring data online inevitably gives rise to selection and self-selection effects. This needs to be acknowledged clearly. Exacerbating this, participant remuneration seems low at an amount below the minimum or living wage in Western countries (do the authors know where their participants came from?).

    Another concern is that the intervention does not simply take place before the second block begins but is ongoing during the whole of the second block in that it is integrated into the phrasing of the task on each trial. It is therefore somewhat misleading to speak of a period 'after the intervention', and it would have been interesting to assess the effect of this by including a third group where the phrasing does not change, but the floating leaves intervention takes place.

    As mentioned in the Limitations section, observation noise was assumed and not estimated. While this is understandable in this case, the effect of this assumption could have been assessed by simulation with varying levels of observation (and process) noise.

    Relatedly, the reliance on formal model comparison is unfortunate since the outcome of such comparisons is easily influenced by slight changes to assumptions such as noise levels. An alternative approach would have been to develop a favoured model based on its suitability to address the research question and its ability, established by simulation, to distill relevant changes of behaviour into reliable parameter estimates.

    The statistical analyses clearly show the limitations of classical statistical testing with highly complex models of the kind the authors (commendably) use. Hunting for statistically significant interactions in a multivariate repeated-measures design relying on inputs from time series-derived point estimates is a difficult proposition. While the authors make the best of the bad situation they create by using null-hypothesis significance testing, a more promising approach would have been to estimate parameters using a sampler like Stan or PyMC and then draw conclusions based on posterior predictive simulations.

  4. Reviewer #3 (Public review):

    Summary:

    The manuscript takes a dynamical systems perspective on emotion regulation, meaning that rather than a simplistic model conceptualising regulation as applying to a single emotion (e.g. regulation of sadness), emotion regulation could cause a shift in the dynamics of a whole system of emotions (which are linked mathematically to one another). This builds on the idea that there are 'attractor states' of emotions between which people transition, governed by both the system's intrinsic characteristics (e.g. temporal autocorrelation of a particular emotion/person) and external driving forces (having a stressful week). Conceptually this is a very useful advance because it is very unlikely that emotions are elicited (or reduced) singly, without affecting other emotions. This paper is a timely implementation of these ideas in the context of psychotherapeutic intervention, distancing, which participants were trained (randomised) to perform while watching emotion-inducing videos.

    The authors' main conclusion is that distancing both stabilises specific emotional patterns and reduces the impact of external video clips. I would consider these results strong and believable, and to have the potential to impact models of emotion regulation as well as the field's broader views on the mechanisms of psychological therapies.

    Strengths:

    This paper has very many strengths: I would especially note the authors' very-well-matched active control condition and the robustness of their model comparison approach. One feature of the authors' approach is that they explicitly add noise - not what you typically see in an emotion time-series analysis - which allows participants to make errors in their own subjective ratings (a reasonable thing to assume); this noise can then be smoothed during filtering. In their model comparison approach, they explicitly test whether a true dynamical system explains emotion change/emotion regulation effect on emotions - demonstrating that both intrinsic dynamics and external inputs were needed to explain subjective emotion. Powerfully, they also used this approach to test the differential effects of the treatment groups (see below).

    The main result seems quite robust statistically. Verifying the effects of the distancing intervention on emotion, the authors found an interaction between time (pre- to post-intervention) and intervention group (distancing vs. relaxation) suggesting that distancing (but not relaxation) reduced ratings of almost all emotions. Participants allocated to the distancing intervention also showed decreased variability of emotion ratings compared to those in the relaxation intervention (though note this interaction was not significant).

    Using a model comparison approach, the authors then demonstrated that whilst the control group was best explained by a model that did not change its dynamics of emotions, the active intervention (distancing) group was best explained by a model that captured both changing emotion dynamics and changing input weights (influence of the videos) - results confirmed in follow-up analyses. This is convincing evidence that emotion regulation strategies may specifically affect the dynamics of emotions - both their relationships to one another and their susceptibility to changes evoked by external influences.

    The authors also perform analyses that suggest their result is not attributable to a demand effect (finding that participants were quicker during the control intervention, which one would expect if they had already decided how to respond in advance of the emotion question). I personally also think a demand effect is unlikely given the robustness of their control intervention (which participants would be just as likely to interpret as mental health-enhancing training as distancing), and I am convinced by the notion that demand effects would be unlikely to elicit their more specific effects on the dynamic quality of emotions.

    Weaknesses:

    An interesting but perhaps at present slightly confusing aspect of their described results relates to the 'controllability' of emotions, which they define as their susceptibility to external inputs. Readers should note this definition is (as I understand it) quite distinct from, and sometimes even orthogonal to, concepts of emotional control in the emotion literature, which refer to intentional control of emotions (by emotion regulation strategies such as distancing). The authors also use this second meaning in the discussion. Because of the centrality of control/controllability (in both meanings) to this paper, at present it is key for readers to bear these dual meanings in mind for juxtaposed results that distancing "reduces controllability" while causing "enhanced emotional control".

    As above, the authors use an active control - a relaxation intervention - which is extremely closely matched with their active intervention (and a major strength). However, there was an additional difference between the groups (as I currently understand it): "in the group allocated to the distancing intervention, the phrasing of the question about their feelings in the second video block reminded participants about the intervention, stating: 'You observed your emotions and let them pass like the leaves floating by on the stream.'" I do wonder if the effects of distancing may also have been partially driven by some degree of reappraisal (considered a separate emotion regulation strategy), since this reminder might have evoked retrospective changes in ratings.

    Not necessarily a weakness, but an unanswered question is exactly how distancing is producing these effects. As the authors point out, there is a possibility that eye-movement avoidance of the more emotionally salient aspects of scenes could be changing participants' exposure to the emotions somewhat. Not discussed by the authors, but possibly relevant, is the literature on differences between emotion types on oculomotor avoidance, which could have contributed to differential effects on different emotions.

  5. Author response:

    Reviewer 1:

    A primary limitation of this study, acknowledged by the authors, is its reliance on self-reports of participants’ emotional states. Although considerable effort was made to minimize expectation effects, further research is needed to confirm that the observed behavioral changes reflect genuine alterations in emotional states.

    Thank you very much for raising this point. We fully agree that self-reported emotional states are inherently subjective and that the ramifications of this need to be clarified in the manuscript. However, we would suggest that the focus on self-report may be a strength rather than a limitation. First, the regularities and rules underlying and determining emotional self-report are of primary importance and interest in their own right, and the work presented here does, we believe, shed light on a rich structure present in multivariate time series of subjective self-reports and their response to external inputs. Second, there is no clear definition of what a "genuine emotional state" might be, particularly if there is a discrepancy with self-reported emotions.

    Additionally, the generalizability of the findings to long-term remediation strategies remains an open question.

    Yes, we agree that what we have described is limited to a short-term intervention and change.

    Whether these changes bear on longer-term changes remains to be assessed. Furthermore, the mechanisms or processes that would support such a maintenance are of substantial interest, and will be the focus of future work.

    Second, the statistical analysis, particularly the computational approach, sometimes lacks sufficient detail and refinement. While I will not elaborate on specific points here, one notable issue is the interpretation of the intrinsic matrix (A). The model-free analysis reveals correlations between emotions at a given time or within an emotional state across time points. However, it does not provide evidence to support lagged interactions across states that would justify non-diagonal elements in A. The other result concerning the dynamics matrix only highlights a trend in the dominant eigenvalue, which is difficult to interpret in isolation. The absence of a statistically significant group x intervention interaction furthermore makes this finding less compelling. This weakens the study's conclusions about the importance of intrinsic dynamics, as claimed in the title.

    We appreciate the reviewer’s detailed feedback on the statistical analysis and interpretation of the intrinsic dynamics matrix. It is true that the model-free analysis as presented focuses on within-state correlations and that we have not provided such model-free evidence for lagged interactions across states. We do note that the model comparison suggested that the intervention caused changes in the full A matrix. This would be unlikely if there had not been meaningful cross-emotion lagged effects. Similarly, inference of the A matrix could have revealed a diagonal matrix, and we preferred not to impose such an assumption a priori, as it is very restrictive. Nevertheless, in the absence of a statistically significant group x intervention interaction, the findings regarding the A matrix are less compelling than those related to the control analyses. While this is likely due to a lack of statistical power, these are important points which we will consider in more detail in the revision.
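    The point about cross-emotion lagged effects can be made concrete with a small simulation. This is an illustrative sketch under assumed values, not the fitted model: a non-zero off-diagonal entry in A produces a lag-1 cross-correlation between states, which is exactly the kind of signal a purely diagonal A cannot generate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-emotion system: A[0, 1] > 0 couples emotion 2's past
# to emotion 1's present (an off-diagonal, cross-lagged effect).
A = np.array([[0.7, 0.3],
              [0.0, 0.7]])

T = 5000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A @ x[t - 1] + 0.1 * rng.standard_normal(2)

# Lag-1 cross-correlation: emotion 2 at time t-1 predicts emotion 1 at time t.
r = np.corrcoef(x[:-1, 1], x[1:, 0])[0, 1]
print(r)  # positive, because of the off-diagonal coupling A[0, 1]
```

    A diagonal A would drive this cross-lagged correlation toward zero (up to sampling noise), which is why model comparison favouring changes in the full A matrix is informative even without model-free evidence for lagged cross-state effects.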

    Finally, to avoid potential misunderstandings of their work, the authors should be more careful about their use of terms pertaining to control theory and take the time to properly define them. For example, the "controllability" of emotional states can either denote that those states are more changeable (control theory definition), or, conversely, more tightly regulated (common interpretation, as used in the abstract). This is true for numerous terms (stability, sensitivity, Gramian, etc.) for which no clear definitions or references are provided. Readers unfamiliar with the framework of control theory will likely be at a loss without more guidance.

    Thank you for this point. We recognize the potential for misunderstanding due to the dual usage of terms such as "controllability" and will improve clarity to avoid misunderstanding.

    Reviewer 2:

    Acquiring data online inevitably gives rise to selection and self-selection effects. This needs to be acknowledged clearly. Exacerbating this, participant remuneration seems low at an amount below the minimum or living wage in Western countries (do the authors know where their participants came from?).

    Thank you for this point. We certainly agree that different experimental settings can induce different biases, and this is no different for online settings. However, online tasks such as the one used here have become accepted, and there is now a substantial literature showing that in-lab effects are often well replicated in online settings (Gillan and Rutledge, 2021). For the current study, it is not clear that an in-person setting would not induce comparably complex biases, e.g. to do with differences between experimenters. All participants were from the UK. Remuneration rates were comparable to other experimental settings, in keeping with other online studies and UK living wage recommendations, and were ultimately determined according to institutional ethical guidance.

    Another concern is that the intervention does not simply take place before the second block begins but is ongoing during the whole of the second block in that it is integrated into the phrasing of the task on each trial. It is therefore somewhat misleading to speak of a period 'after the intervention', and it would have been interesting to assess the effect of this by including a third group where the phrasing does not change, but the floating leaves intervention takes place.

    Thank you for this point. We acknowledge that the phrasing of the emotion question in the second block may have influenced the observed effects. Including a third group without the reminder would have provided valuable insights and is an important consideration for future studies. We will acknowledge this limitation.

    As mentioned in the Limitations section, observation noise was assumed and not estimated. While this is understandable in this case, the effect of this assumption could have been assessed by simulation with varying levels of observation (and process) noise.

    Thank you for this comment. We would like to clarify that both observation noise and process noise were estimated in the analyses. We will ensure this is emphasized better in the revised version to avoid future misunderstandings.

    Relatedly, the reliance on formal model comparison is unfortunate since the outcome of such comparisons is easily influenced by slight changes to assumptions such as noise levels. An alternative approach would have been to develop a favoured model based on its suitability to address the research question and its ability, established by simulation, to distill relevant changes of behaviour into reliable parameter estimates.

    We agree that model comparison alone is insufficient. This is why we have also included extensive simulations, including posterior predictive checks, and have followed established best-practice procedures (Wilson and Collins, 2019). We have focused on a relatively simple model space to avoid overfitting to the dataset, and hence reduce the risk of spurious findings. While we agree that outcomes will be influenced by underlying assumptions, this would persist with the suggested approach of relying on a favoured model. Simulations themselves rely on predefined structures and noise specifications, which inherently shape parameter recovery and inference. Relying only on a favoured model might risk model misspecification, whereby the model may not actually capture the data, and the parameters intended to capture the intervention effect could be confounded. We will clarify the reasoning behind our approach in the revised version.

    The statistical analyses clearly show the limitations of classical statistical testing with highly complex models of the kind the authors (commendably) use. Hunting for statistically significant interactions in a multivariate repeated-measures design relying on inputs from time series-derived point estimates is a difficult proposition. While the authors make the best of the bad situation they create by using null-hypothesis significance testing, a more promising approach would have been to estimate parameters using a sampler like Stan or PyMC and then draw conclusions based on posterior predictive simulations.

    This comment raises several interesting points. First, we agree that the value of classical tests on individual parameters in such complex situations is limited. This is why our main focus is on global measures like model comparison. Our use of classical tests is more to support the understanding of the nature of the data, i.e. they have a more descriptive aim. We hope to clarify this further in the revision. Second, in terms of sampling, we would like to emphasize that the Kalman filter is both efficient and analytically tractable, making it well suited to our data and research question. It may have been possible to use sampling to obtain posterior distributions rather than point estimates. However, we did not judge this to be worth the (substantial) additional computational cost.
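    The analytic tractability referred to here is a standard property of the linear-Gaussian Kalman filter: both the predict and the update steps are closed-form Gaussian computations, so no sampling is required. A minimal sketch (the toy matrices in the example call are assumptions, not the fitted parameters):

```python
import numpy as np

def kalman_step(x, P, y, u, A, B, C, Q, R):
    """One analytic predict/update step of a linear-Gaussian Kalman filter.
    x, P: prior state mean and covariance; y: observed ratings; u: input."""
    # Predict: propagate mean and covariance through the linear dynamics.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update: condition on the observation (closed-form, no sampling).
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Illustrative call on a hypothetical 2-state, 1-input toy system.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
B = np.array([[0.5], [0.2]])
C = np.eye(2)
Q = 0.01 * np.eye(2)   # process noise
R = 0.05 * np.eye(2)   # observation noise
x, P = kalman_step(np.zeros(2), np.eye(2),
                   y=np.array([1.0, 0.5]), u=np.array([1.0]),
                   A=A, B=B, C=C, Q=Q, R=R)
```

    Each step costs only a few small matrix operations, which is why filtering over many participants and time points remains cheap compared with full posterior sampling.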

    Reviewer 3:

    An interesting but perhaps at present slightly confusing aspect of their described results relates to the 'controllability' of emotions, which they define as their susceptibility to external inputs. Readers should note this definition is (as I understand it) quite distinct from, and sometimes even orthogonal to, concepts of emotional control in the emotion literature, which refer to intentional control of emotions (by emotion regulation strategies such as distancing). The authors also use this second meaning in the discussion. Because of the centrality of control/controllability (in both meanings) to this paper, at present it is key for readers to bear these dual meanings in mind for juxtaposed results that distancing "reduces controllability" while causing "enhanced emotional control".

    We fully agree with the reviewer's observation that "controllability" can be interpreted in different ways. We will revise the text to ensure consistent usage and explicitly state the distinction between the control theory definition of controllability and its interpretation in the emotion regulation literature.

    As above, the authors use an active control - a relaxation intervention - which is extremely closely matched with their active intervention (and a major strength). However, there was an additional difference between the groups (as I currently understand it): "in the group allocated to the distancing intervention, the phrasing of the question about their feelings in the second video block reminded participants about the intervention, stating: 'You observed your emotions and let them pass like the leaves floating by on the stream.'" I do wonder if the effects of distancing may also have been partially driven by some degree of reappraisal (considered a separate emotion regulation strategy), since this reminder might have evoked retrospective changes in ratings.

    We appreciate this substantial point. While our study was designed to isolate the effects of distancing, we acknowledge that elements of reappraisal may also have influenced the results. We will discuss this in the revised version. Additionally, as noted in our response to Reviewer 2, including a third group without the reminder could have provided valuable information, and we consider this to be an important direction for future research.

    Not necessarily a weakness, but an unanswered question is exactly how distancing is producing these effects. As the authors point out, there is a possibility that eye-movement avoidance of the more emotionally salient aspects of scenes could be changing participants’ exposure to the emotions somewhat. Not discussed by the authors, but possibly relevant, is the literature on differences between emotion types on oculomotor avoidance, which could have contributed to differential effects on different emotions.

    Thank you very much for these suggestions. It is very true that different emotions can elicit different patterns of oculomotor avoidance, which could have contributed to our observed effects. Research suggests that emotions such as disgust are associated with visual avoidance (Armstrong et al., 2014; Dalmaijer et al., 2021), whereas anxiety and other negative emotions are associated with increased attentional bias after fear conditioning (Kelly and Forsyth, 2009; Pischek-Simpson et al., 2009). It would be very interesting to repeat the experiment with eye-tracking to examine these possibilities. What would be particularly interesting to examine is whether a distancing intervention induces multiple, emotionally specific behaviours, or not.

    References

Armstrong, T., McClenahan, L., Kittle, J., and Olatunji, B. O. (2014). Don’t look now! Oculomotor avoidance as a conditioned disgust response. Emotion, 14(1):95–104.

Dalmaijer, E. S., Lee, A., Leiter, R., Brown, Z., and Armstrong, T. (2021). Forever yuck: Oculomotor avoidance of disgusting stimuli resists habituation. Journal of Experimental Psychology: General, 150(8):1598–1611.

Gillan, C. M. and Rutledge, R. B. (2021). Smartphones and the neuroscience of mental health. Annual Review of Neuroscience, 44:129–151.

Kelly, M. M. and Forsyth, J. P. (2009). Associations between emotional avoidance, anxiety sensitivity, and reactions to an observational fear challenge procedure. Behaviour Research and Therapy, 47(4):331–338.

Pischek-Simpson, L. K., Boschen, M. J., Neumann, D. L., and Waters, A. M. (2009). The development of an attentional bias for angry faces following Pavlovian fear conditioning. Behaviour Research and Therapy, 47(4):322–330.

Wilson, R. C. and Collins, A. G. (2019). Ten simple rules for the computational modeling of behavioral data. eLife, 8:e49547.