Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    When theta phase precession was discovered (O'Keefe & Recce, 1993; place cell firing shifting from late to early theta phases as the rat moves through the firing field, averaged over many runs), it was realized that, correspondingly, firing moves from cells with firing fields that have been run through (early phase) to those whose fields are being entered (late phase), with the consequence that a broader range of cells will be firing at this late phase (Skaggs et al., 1996; Burgess et al., 1993; see also Chadwick et al., 2015). Thus, these sweeps could represent the distribution of possible future trajectories, with the broadening distribution representing greater uncertainty in the future trajectory.

    Using data from Pfeiffer and Foster (2013), they examine how neurons could encode the distribution of future locations, including its breadth (i.e. uncertainty), testing a couple of proposed methods and suggesting one of their own. The results show that decoded location has increasing variability at later phases (corresponding to locations further ahead), and greater deviation from the actual trajectory. Further results (when testing the models below) include that population firing rate increased from early to late phases; decoding uncertainty does not change within-cycle, and the cycle-by-cycle variability (CCV) increases from early to late phases more rapidly than the trajectory encoding error (TEE).

    They then use synthetic data to test ideas about neural coding of the location probability distribution, i.e. that: a) place cell firing corresponds to the tuning functions on the mean future trajectory (w/o uncertainty); b) the distribution is represented in the immediate population firing as the product of the tuning functions of active cells or c) (DDC) the distribution is represented by its overlap with the tuning curves of individual neurons; d) (their suggestion) that different possible trajectories are sampled from the target distribution in different theta cycles.

    The product scheme has decreasing uncertainty with population firing rate, so would have to have maximal firing at early phases (corresponding to locations behind the rat), contradicting what was observed in the data, so this scheme is discarded.

    The DDC scheme has an increased diversity of cells firing as the target distribution gets wider within each cycle, whereas the mean and sampling schemes do not have increasing variance within-cycle (representing a single trajectory throughout). The decoding uncertainty in the data did not vary within-cycle, so the DDC scheme was discarded.

    The mean and sampling schemes are distinguished by the increase in CCV vs TEE with phase, which is consistent with the sampling scheme.

    The analyses are well done and the results with synthetic data (assuming future trajectories are randomly sampled from the average distribution) and real data match nicely, although there is excess variability in the real data. Overall, this paper provides the most thorough analyses so far of place cell theta sweeps in open fields.

    We thank the Reviewer for the accurate summary and the encouragement.

    I found the framing of the paper confusing in a way that made it harder to understand the actual contribution made here. As noted in the discussion, the field has moved on from the 1990s, and cycle-by-cycle decoding of theta sweeps has consistently shown that they correspond to specific trajectories moving from the current trajectory to potential future trajectories, consistent with continuous attractor-based models (in which the width of the activity bump cannot change, e.g. Hopfield, 2010). Thus it seems odd to use theta sweeps to test models of encoding uncertainty - since Johnson & Redish (2007) we know that they seem to encode specific trajectories (e.g. either going one way or the other at a choice point) rather than an average direction with variance covering the possible alternatives.

    We thank the reviewer for emphasising the connections to earlier work on theta sweeps during decision making, which suggests that alternative options before a decision point are assessed individually by hippocampal neuron populations in a simple maze. However, as also noted by the reviewer below, previous analyses of theta sweeps in the hippocampus were limited to discrete decisions in a linear maze, which permits only a limited exploration of the alternative hypotheses an animal might entertain in a planning situation.

    In particular, the dominant source of future uncertainty in a binary decision task is the chosen option (left or right), yielding a distinctly bimodal predictive distribution. Bimodal distributions cannot be easily approximated by variational methods (which include the DDC and product schemes) but can be efficiently approximated by sampling. In contrast, in an open field the available options (changes in direction and speed) are not restricted by the geometry of the environment, and the predictive distribution is relatively similar to a Gaussian, which can be efficiently approximated by all of the investigated encoding schemes.
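    To make the contrast concrete, here is a minimal numerical sketch (illustrative values only, not the manuscript's model) of why a bimodal predictive distribution defeats a single-Gaussian variational fit but is captured faithfully by samples:

```python
import numpy as np

rng = np.random.default_rng(0)

# Bimodal predictive distribution: equal mixture of a "left" and a "right"
# option. All numbers here are illustrative placeholders.
means, sd = np.array([-1.0, 1.0]), 0.2

def gauss(x, mu, s):
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def mixture(x):
    return 0.5 * gauss(x, means[0], sd) + 0.5 * gauss(x, means[1], sd)

# Sampling represents the mixture faithfully: draws land on the two modes.
samples = rng.normal(means[rng.integers(0, 2, 10_000)], sd)

# The best single-Gaussian (moment-matched) fit concentrates its mass at
# x = 0, exactly where the true predictive density is negligible.
mu, sigma = samples.mean(), samples.std()
assert mixture(0.0) < 0.01 * gauss(0.0, mu, sigma)  # Gaussian overstates mass between modes
assert np.mean(np.abs(samples) > 0.5) > 0.99        # samples avoid the inter-mode region
```

In an open field, by contrast, the predictive distribution is close to unimodal, so all of the schemes discussed here can in principle approximate it.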

    Moreover, it has been widely reported that the hippocampal spatial code has somewhat different properties on linear tracks, where the physical movement of the animal is restricted by the geometry of the environment, than in open field navigation. Specifically, on linear tracks most neurons develop unidirectional place fields and the hippocampal population uses different maps to represent the two opposite running directions, whereas a single map and omnidirectional place fields are used in open fields (Buzsaki, 2005). In terms of representing future alternatives, it remains an open question whether the scheme that is compatible with planning in a 1D environment generalises to 2D environments. Our detailed comparison of the alternative encoding schemes provides an opportunity to demonstrate that a sampling scheme can be applied as a general computational algorithm to represent quantities necessary for probabilistic planning, while also demonstrating that the alternative schemes are incompatible with the data.

    Moreover, these previous studies did not rule out the possibility that, in addition to alternating between discrete options, specific features of the population activity might also represent uncertainty (conditional to the chosen option) instantaneously as in the product or the DDC schemes.

    We added a new paragraph (lines 74-88) to the introduction to clarify that one of the novel contributions of the paper is the generalisation of previous intuitions, largely based on work on binary decision tasks in mazes, to unrestricted open field environments.

    The point that schemes assuming a varying-width activity distribution might be unfit for modelling hippocampal theta activity is an interesting insight. Let us note that new results have pointed out that a fixed-width activity bump is not a necessary feature of attractor networks. It has recently been shown that in continuous attractors (modelling head direction cells in the fly) the amplitude of the bump can change, and the changes can be consistent with the represented uncertainty (Kutschireiter et al., 2021, bioRxiv; https://doi.org/10.1101/2021.12.17.473253). We believe that similar principles also apply to higher-dimensional continuous attractor networks, and therefore it is entirely possible to represent uncertainty via the amplitude of the bump (equivalent to the population gain) in the hippocampus.

    Thus, the main outcomes of the simulations could reasonably be predicted in advance, and the possibility of alternative neural models of uncertainty explaining firing data remains: in situations where it is more reasonable to believe that the brain is in fact encoding uncertainty as the breadth of a distribution.

    Having said that, most previous examples of trajectory decoding of theta sweeps have not been for navigation in open fields, and the analysis of Pfeiffer and Foster (2013; in open fields) was restricted to sequential 'replay' during sharp-wave ripples rather than theta sweeps. This paper provides the nicest decoding analyses so far of place cell theta sweeps in open field data. However, there are already examples of theta sweeps in entorhinal cortex in open fields (Gardner et al., 2019) showing the same alternating left/right sweeps as seen on mazes (Kay et al., 2020). Such alternation could explain the additional cycle-by-cycle variability observed (cf. random sampling).

    We thank the reviewer for encouraging us to more directly test the idea that alternating left/right sweeps could explain the increased cycle-to-cycle variability in the data. We thoroughly analysed the data (see our answer to Essential Revision 1) and found that trajectories in subsequent theta cycles are strongly anticorrelated (Fig. 7, Fig. S11, lines 375-415).

    Reviewer #2 (Public Review):

    This study investigates how uncertainty about spatial position is represented in hippocampal theta sequences. Understanding the neural coding of uncertainty is an important issue in general, because computational and theoretical work clearly demonstrates the advantages of tracking uncertainty to support decision-making, behavioural work in many domains shows that animals and humans are sensitive to it in myriad ways, and signatures of the neural representations of uncertainty have been demonstrated in many different systems/circuits.

    We thank the reviewer for the comment.

    However, whether and how uncertainty is signalled in the hippocampus remains understudied. The question of how spatial uncertainty is represented is interesting in its own right, but recent interest in interpreting hippocampal sequences as important for planning and decision-making provides additional motivation.

    A variety of experimental paradigms, such as recordings in light vs. darkness, dual rotation experiments in which different cues are placed in conflict with one another, "morph" and "teleportation" experiments and so on, all speak to this issue in some sense (and as I note below, could nicely complement the present study); and a number of computational models of the hippocampus have included some representation of uncertainty (e.g. Penny et al. PLoS Comp Biol 2013, Barron et al. Prog Neurobiol 2020). However, the present study fills an important gap in that it connects a theory-driven approach of when and how uncertainty could be represented in principle, with experimental data to determine which is the most likely scheme.

    The analyses rely on the fundamental insight that states/positions further into the future are associated with higher uncertainty than those closer to the present. In support of this idea, the authors first show that in the data (navigation in a square environment, using the wonderful data from Pfeiffer & Foster 2013), decoding error increases within a theta sequence, even after correcting for the optimal time shift.

    The authors then lay out the leading theoretical proposals of how uncertainty can be represented in principle in populations of neurons, and apply them to hippocampal place cells. They show that for all of these schemes, the same overall pattern results. The key advance of the paper seems to be enabled by a sophisticated generative model that produces realistic probability distributions to be encoded (that take into account the animal's uncertainty about its own position). Using this model, the authors show that each uncertainty coding scheme is associated with distinct neural signatures that they then test against the data. They find that the intuitive and commonly employed "product" and "DDC" schemes are not consistent with the data, but the "sampling" scheme is.

    The final conclusion that the sampling scheme is most consistent with the data is perhaps not surprising, because similar conclusions have been reached from showing alternating representation of left and right at choice points cited by the authors (Johnson and Redish 2007; Kay et al. 2020; Tang et al. 2021) and "flickering" from one theta cycle to the next (Jezek et al. 2011). So, the most novel parts of the work to me are the rigorous ruling out of the alternative "product" and "DDC" schemes.

    We thank the reviewer for helping us to clarify the main novelty of our work compared to previous studies. We have updated the introduction (lines ~74–88) to state more clearly how our analysis extends previous work largely restricted to binary decision tasks in mazes and not explicitly considering alternative probabilistic representations.

    Overall I am very enthusiastic about this work. It addresses an important open question, and the structure of the paper is very satisfying, moving from principles of uncertainty encoding to simulated data to identifying signatures in actual data. In this structure, the generative model that produces the synthetic data is clearly playing an important role, and intuitively, it seems the conclusions of the paper depend on how well this testbed maps onto the actual data. I think this model is a real strength of the paper and moves the field forward in both its conceptual sophistication (taking into account the agent's uncertainty) and in how carefully it is compared to the actual data (Figures S2, S3).

    We thank the reviewer for the encouraging words.

    I have two overall concerns that can be addressed with further analyses.

    First, I think the authors should test which of the components of this model are necessary for their results. For instance, if the authors simply took the successor representation (distribution of expected future state occupancy given current location) and compressed it into theta timescale, and took that as the probability distribution to be encoded under the various schemes, would the same predictions result? Figuring out which elements of the model are necessary for the schemes to become distinguishable seems important for future empirical work inspired by this paper.

    The crucial part of our generative model is its probabilistic nature. Explicit formulation of the generative model under different coding schemes enables us to quantitatively account for the different factors contributing to the variability in the data. Specifically, when we compared sampling and mean codes, we partitioned the variability of the represented locations across theta cycles into specific factors related to 1) decoding error; 2) the difference between the true position of the animal and its own location estimate; 3) the animal's own uncertainty about its spatial location; and 4) the updating of this estimate in each theta cycle. This enabled us to derive quantities (CCV, TEE and the EV-index) that can discriminate between sampling and mean schemes, and that can be directly measured experimentally. This would not be possible in a simpler model lacking an explicit representation of the animal's internal uncertainty.

    We believe that the assumptions of the model are rather general and those do not limit the scope of the model. Here we list the specific features of the model for clarity (Fig S1a):

    1. Planned position (Fig S1a, left): the planned position is required to guide movements in the model. It is defined via a random walk in velocity, the simplest process generating smooth trajectories. The specific way we generated the planned position was not essential for the simulations, but we tuned the movement parameters to generate trajectories matching the real movement of the animal.

    2. The inference part (Fig S1a, middle) is crucial for the model, since we believe that hippocampal population activity is driven by the animal's own beliefs about its position, which sets our approach apart from earlier studies (see paragraph around line 466). If the animal represents its predictions optimally, then the predictions should be consistent with its movement within the environment. Thus, the consistency of the inference is a critical statistical property of the model, which can be guaranteed if the predictions are generated by the same model that is used for inferring the animal's position. The simplest model that can be used for both inference and prediction is the Kalman filter, which we opted for in our simulations.

    3. The assumptions of the encoding model (Fig S1a, right and Fig 1b) are solely determined by the representational scheme being tested. All of the schemes rely on encoding the result of inference in population activity during theta cycles and the scheme determines how this encoding happens. This part of the model is clearly necessary for the analysis.
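    As a concrete illustration of point 2, the following toy sketch (a 1D constant-velocity Kalman filter with placeholder noise parameters, not the manuscript's actual parameterisation) shows how the same linear-Gaussian model serves both inference and prediction, with predictive uncertainty growing with the horizon:

```python
import numpy as np

# Toy 1D constant-velocity Kalman filter: the same linear-Gaussian model is
# used both to infer the current position and to predict future ones, so
# predictions are automatically consistent with inference.
# All parameter values below are illustrative placeholders.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (position, velocity)
Q = np.diag([1e-4, 1e-2])              # process noise
H = np.array([[1.0, 0.0]])             # only position is observed
R = np.array([[0.05]])                 # observation noise

def kf_update(m, P, y):
    """One predict + correct step; m, P are the posterior mean and covariance."""
    m, P = F @ m, F @ P @ F.T + Q          # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    m = m + (K @ (y - H @ m)).ravel()      # correct with the observation
    P = (np.eye(2) - K @ H) @ P
    return m, P

def predict_ahead(m, P, n_steps):
    """Roll the dynamics forward without observations: uncertainty grows."""
    sigmas = []
    for _ in range(n_steps):
        m, P = F @ m, F @ P @ F.T + Q
        sigmas.append(np.sqrt(P[0, 0]))    # positional s.d. at each horizon
    return sigmas

m, P = np.zeros(2), np.eye(2)
for y in [0.0, 0.11, 0.19, 0.32]:          # a few noisy position observations
    m, P = kf_update(m, P, np.array([y]))
sigmas = predict_ahead(m, P, 5)
# Predictive positional uncertainty increases monotonically with horizon:
assert all(a < b for a, b in zip(sigmas, sigmas[1:]))
```

The growing `sigmas` sequence is the model analogue of the widening predictive distribution encoded at progressively later theta phases.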

    Alternatively, we could use the above-mentioned successor representation (SR) framework (Dayan, 1993) to represent possible trajectories and their associated uncertainty in our models of hippocampal population activity. However, this option introduces extra challenges. First, in the SR framework (Stachenfeld et al., 2017) neuronal firing rates are proportional to the discounted expected number of future visits to a particular location given the current policy and position. Thus, the SR sums over all possible future visits and does not specify when exactly a particular state might be reached in the future, which is inconsistent with the idea that trajectories are represented during theta sequences. Second, the SR represents the probability of occupying all future states in parallel without providing possible trajectories defining specific combinations of future state visits. This property is consistent with the product and the DDC encoding schemes but not with the other two. These two properties of the SR imply that this framework per se does not provide a fine-scale temporal description of how expected future state probabilities are related to the dynamics of hippocampal population activity during theta oscillation.

    Taken together, implementing theta time-scale dynamics within the SR framework would require several additional model choices to generate consistent temporal trajectories from the expected future state occupancies, and even in this case the subjective uncertainty of the animal would not be consistently represented in the simulated data. Representing the animal's subjective uncertainty in our model was an important contributor to the EV-index and had profound implications for the signatures of generative cycling in a two-dimensional arena.

    We have to note that on a slower time scale (calculating the average firing rate over multiple theta cycles) all of our encoding schemes are consistent with the SR framework (line 548).

    Second, the analyses are generally very carefully and rigorously performed, and I particularly appreciated how the authors addressed bias resulting from noisy estimation of tuning curves (Figure S7). However, the conclusion that the "sampling" scheme is correct relies on there being additional variance in the spiking data. This is reminiscent of the discussions about overdispersion and how "multiple maps" account for it (Jackson & Redish Hippocampus 2007, Kelemen & Fenton PLoS Biol 2010), and the authors should test if this kind of explanation is also consistent with their data. In particular, the task has two distinct behavioral contexts, when animals are searching for the (not yet known) "away" location compared to returning to the known home location, which extrapolating from Jackson & Redish, could be associated with distinct (rate) maps leading to excess variance.

    We thank the reviewer for this constructive comment. We note that the signature of the sampling scheme is variability in the decoded trajectory across subsequent theta cycles while overdispersion is usually defined as the supra-Poisson variability in the spiking of individual neurons evaluated across multiple runs or trials. Nevertheless, we tested the existence of multiple maps corresponding to the two distinct task phases and found that the maps representing the two task phases are very similar (Fig S11).

    Such an analysis could also potentially speak to an overall limitation of the work (not a criticism, more of a question of scope) which is that there are no experimental manipulations/conditions of different amounts of uncertainty that are analyzed. Comparing random search (high uncertainty, I assume) to planning a path to a known goal (low uncertainty) could be one way to address this and further bolster the authors' conclusions.

    We agree with the reviewer that the proposed framework provides additional insights into the way the population activity should change with specific experimental manipulations and can therefore inspire further experiments. In particular, a hallmark of probabilistic computations is that experimental manipulations that control the uncertainty of the animal should be reflected in population responses. In visual processing such manipulations are indeed reflected in changing response variability, as predicted by sampling (Orban et al., Neuron 2016). In the current experimental paradigm there was no direct manipulation of uncertainty (we discuss this around lines 573-576). While one might argue that there are differences in planning strategy between trials where the animal was heading for the away reward and those where it was heading home, this is not a very explicit test of the question. Still, to check whether we can find traces of changes in uncertainty in the two conditions, we analysed the EV-index separately on home and away trials (Fig. S11e). We did not find systematic differences in the EV-index across these trial types.

    Reviewer #3 (Public Review):

    Summary of the goals:

    The authors set out to test the hypothesis that neural activity in the hippocampus reflects probabilistic computations during navigation and planning. They did so by assuming that neural activity during theta waves represents the animal's location, and that uncertainty about this location should grow along the path from the recent past to the future. They next generated empirical signatures for each of the main proposals for how probabilities may be encoded in neural responses (PPC, DDC, sampling) and contrasted them with each other and with a non-probabilistic representation (a scalar estimate of location). Finally, the authors compared their predictions to previously published neural activity and concluded that a sampling-based representation best explained neural activity.

    Impact & Significance: This manuscript can make a significant impact on many fields in neuroscience from hippocampal research studying the functions and neural coding in hippocampus, through theoretical works linking the representation of uncertainty to neural codes, to modeling experimental paradigms using navigation tasks. The manuscript provides the following novel contribution to cognitive neuroscience:

    • It exploits the inherent change in uncertainty about a parsimonious internal variable over time during planning to test hypotheses about probabilistic computations.
    • A full model comparison of competing hypotheses for the neural implementation of probabilistic beliefs. This is a topic of wide interest and direct comparisons using data have been elusive.
    • The study presents substantial empirical evidence for a sampling-based neural representation of the probability distribution over trajectories in the hippocampus, a finding with potential implications for other parts of neural processing.

    Strengths:
    • Creative exploitation of a naturally occurring change in uncertainty over a parsimonious latent variable (location).
    • Derivation of three empirical signatures using a combination of analytical and numerical work.
    • Novel computational modelling & linking it to neural coding using 4 existing implementational models
    • Comprehensive and rigorous data analysis of a large and high-quality neural dataset, with supplemental analyses of a second dataset
    • Mostly very clear and high-quality presentation

    We thank the Reviewer for the summary and for the positive feedback on the manuscript.

    Weaknesses:
    • It is unclear to what degree the "signatures" depend on the details of the numerical simulation used by the authors to generate them. At least two of them (gain for the product scheme and excess variability for the sampling scheme) appear very general, but the degree of robustness should be discussed for all three signatures.

    The generality of the signatures follows from the fact that we derived them from the fundamental properties of the encoding schemes. We tested their robustness using both idealised test data (Fig. S6c-d, Fig. S7b) and our simulated hippocampal model (Fig. 4c, Fig. 5b-c, Fig. 6b-g).

    The reviewer is right that sensitivity and robustness are potential issues. These schemes were originally proposed to encode static distributions, i.e., the neuronal activity was supposed to encode a specific probability distribution for an extended period of time. Therefore, when we test the signatures we make the simplifying assumption that a static distribution is encoded in each of the three separate phases of the theta cycle. It is currently unknown whether during theta sequences the trajectories are represented via discrete jumps in position or as continuously changing locations. Therefore we used our numerical simulations to test whether the proposed signatures are sufficiently sensitive to discriminate the encoding schemes using the limited amount of data available and in the face of biological noise, but also robust to the parameter choices and modelling assumptions.

    Regarding the product code, the inverse relationship between the gain and the variance has been previously derived analytically for special cases (Ma et al., 2006). In the manuscript we show numerically that the same relationship holds for general tuning curve shapes (Fig. S6d). Finally, we demonstrate that the gain is a robust signature that changes systematically along the theta cycle in the case of a product coding scheme.
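    The gain-variance relationship can be illustrated with a minimal simulation (Gaussian tuning curves with illustrative widths and gains; not the analysis pipeline used in the manuscript):

```python
import numpy as np

rng = np.random.default_rng(1)

# Product scheme with Gaussian tuning curves: the posterior over position is
# proportional to the product of the tuning curves of the cells that fired.
# For Gaussian tuning curves of width w, this product is Gaussian with
# variance w**2 / n_spikes, so higher gain (more spikes) means lower
# represented uncertainty. Centres, widths and gains are illustrative.
centres = np.linspace(0.0, 1.0, 50)
w = 0.1

def decode_product(spike_counts):
    """Mean and s.d. of the product of Gaussian tuning curves."""
    n = spike_counts.sum()
    mean = (spike_counts * centres).sum() / n
    return mean, w / np.sqrt(n)

true_x = 0.5
rates = np.exp(-0.5 * ((centres - true_x) / w) ** 2)

# Doubling the population gain shrinks the decoded s.d. (by ~sqrt(2)):
low = rng.poisson(5 * rates)       # low-gain population response
high = rng.poisson(10 * rates)     # high-gain population response
_, sd_low = decode_product(low)
_, sd_high = decode_product(high)
assert sd_high < sd_low
```

This inverse gain-uncertainty relationship is why, under the product scheme, uncertainty growing across the theta cycle would require firing rates to fall from early to late phases, contrary to the data.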

    Second, in the case of the DDC code we used the decoded variance of the posterior as the signature. Since the DDC code relies on the overlap between the target distribution and the neuronal basis functions, potentially the most important source of error is overestimating the size of the encoding basis functions. To control for this factor, we first explored this effect in an idealised setting (Fig. S7) and found that the decoded variance correlates with the encoded uncertainty whether we used the estimated basis functions or the empirical tuning curves for decoding. Next, we performed the analysis in our simulated dataset in four different ways: using either the empirical tuning curves (Fig 5c-d) or the estimated basis functions (Fig S8a-b), and either focusing on high spike count theta cycles or including all theta cycles. The fact that all these analyses led to similar results confirms the robustness of this signature.
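    A minimal DDC sketch (Gaussian basis functions with illustrative parameters) shows how the decoded variance tracks the encoded uncertainty, and why the assumed basis-function width matters for the estimate:

```python
import numpy as np

# DDC sketch: each cell's rate is the overlap (expectation) of its basis
# function under the encoded distribution Q. With Gaussian basis functions
# of width w and a Gaussian Q of width s, the rate profile across cells is
# itself Gaussian with variance s**2 + w**2, so the profile's variance minus
# w**2 recovers the encoded uncertainty. All numbers are illustrative.
centres = np.linspace(-2.0, 2.0, 201)
w = 0.3

def ddc_rates(mu, s):
    """r_i = E_{x~Q}[phi_i(x)] for Gaussian Q and Gaussian basis functions."""
    var = s ** 2 + w ** 2
    return np.exp(-0.5 * (centres - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def decoded_var(rates):
    p = rates / rates.sum()                # normalise the rate profile
    m = (p * centres).sum()
    return (p * (centres - m) ** 2).sum() - w ** 2  # subtract basis width

# A wider encoded distribution yields a larger decoded variance:
v_narrow = decoded_var(ddc_rates(0.0, 0.1))
v_wide = decoded_var(ddc_rates(0.0, 0.4))
assert v_narrow < v_wide
```

Note that the subtraction of `w**2` is where an overestimated basis-function width would bias the decoded variance downward, which is the error source controlled for above.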

    Our third measure, the EV-index, quantifies the variability of the encoded trajectories across theta cycles. The cycle-to-cycle variability is also affected by factors independent of whether a randomly sampled trajectory or the posterior mean is encoded. In particular, the encoded trajectory can start at different distances in the past and can be played at different speeds in different theta cycles. These factors are probably present in the data and all inflate the CCV. Another factor is the start and end time of the trajectories, which we may not be able to find accurately in the real data; confusing the end of a previous trajectory with the start of a new one can also inflate the CCV. In our simulations we tested how these potential errors influence our analysis, and found that the EV-index is surprisingly robust to such changes (Fig. 6f-g). An additional factor that the EV-index is sensitive to is the specific sampling algorithm used to sample the posterior: an algorithm that produces correlated samples is hard to distinguish from the MAP scheme. Our newly introduced analysis (Fig. 7b) demonstrates this and explores the level of correlation between subsequent trajectories, providing evidence that trajectories decoded during exploration reflect the properties of anticorrelated samples, also a signature of efficient inference.
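    The core logic can be sketched with a toy simulation (one-dimensional positions, illustrative noise scales, omitting decoding noise and within-cycle dynamics): under the sampling scheme, cycle-to-cycle variability appears on top of the trajectory encoding error, whereas under the mean scheme it does not.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy comparison of the mean and sampling schemes. In each "theta cycle" the
# posterior over a future position is N(mu_t, s**2), and the actual position
# realised later is one draw from it. The mean scheme encodes mu_t; the
# sampling scheme encodes a fresh draw. Values are illustrative.
n_cycles, s = 50_000, 1.0
mu = rng.normal(0.0, 3.0, n_cycles)        # posterior means drift over cycles
actual = rng.normal(mu, s)                 # realised future positions
enc_mean = mu                              # mean scheme
enc_samp = rng.normal(mu, s)               # sampling scheme

def tee(enc):                              # trajectory encoding error
    return np.mean((enc - actual) ** 2)

def ccv(enc):                              # variability around the mean trajectory
    return np.mean((enc - mu) ** 2)

# Sampling inflates CCV from ~0 to ~s**2 while TEE only grows from ~s**2 to
# ~2*s**2, i.e. CCV grows proportionally faster than TEE -- the excess
# variability used to discriminate the two schemes.
assert ccv(enc_samp) > ccv(enc_mean)
assert abs(ccv(enc_samp) - s ** 2) < 0.1
assert abs(tee(enc_samp) - 2 * s ** 2) < 0.1
```

As in the analysis above, additional factors (variable trajectory start points, playback speeds, segmentation errors) would add to `ccv` under both schemes, which is why the robustness checks in Fig. 6f-g matter.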

    • The claims about "efficiency" lack a definition of what exactly is meant by that, and empirical support.

    We thank the reviewer for pointing out this inconsistency in our terminology. What we generally meant by efficiency was a claim pertaining to the computational level in Marr's classification, i.e., that computations are probabilistic: the representation in the hippocampus takes uncertainty into account by representing a full posterior distribution. We performed an additional test, which concerns the algorithmic-level efficiency of the computations. We explored the efficiency of the sampling process by assessing a signature of efficient sampling: the expected number of sampled trajectories required to represent the distribution of possible future locations. We found that subsequent samples tended to be anticorrelated, which is a signature of efficient sampling algorithms (Fig 7). In the revised manuscript we thus use the word "efficient" solely when we refer to anticorrelated samples.
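    The intuition for why anticorrelated samples indicate algorithmic efficiency can be sketched with a generic Monte Carlo example (antithetic sampling; a standard variance-reduction device, not the hippocampal model itself): anticorrelated sample pairs estimate the mean of a distribution with lower error than independent pairs, so fewer samples are needed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Antithetic sampling: pairing each draw x with its reflection 2*mu - x
# makes the pair perfectly anticorrelated, so each pair's average equals mu
# (up to floating-point rounding). Independent pairs retain sampling error.
# mu, s and n are illustrative placeholders.
mu, s, n = 0.7, 1.0, 1000

iid = rng.normal(mu, s, (n, 2))            # independent pairs
half = rng.normal(mu, s, (n, 1))
anti = np.hstack([half, 2 * mu - half])    # perfectly anticorrelated pairs

err_iid = np.mean((iid.mean(axis=1) - mu) ** 2)   # MSE of two-sample mean
err_anti = np.mean((anti.mean(axis=1) - mu) ** 2) # essentially zero
assert err_anti < err_iid
```

Real sampling algorithms produce only partially anticorrelated draws, but the same principle applies: negative correlation between successive samples reduces the number of theta cycles needed to cover the predictive distribution.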

  2. Evaluation Summary:

    This paper will be of interest to neuroscientists interested in predictive coding and planning. It presents a novel analysis of hippocampal place cells during exploration of an open arena. It performs a comprehensive comparison of real and synthetic data to determine which encoding model best explains population activity in the hippocampus.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. The reviewers remained anonymous to the authors.)

  3. Reviewer #1 (Public Review):

    When theta phase precession was discovered (O'Keefe & Recce, 1993; place cell firing shifting from late to early theta phases as the rat moves through the firing field, averaged over many runs), it was realized that, correspondingly, firing moves from cells with firing fields that have been run through (early phase) to those whose fields are being entered (late phase), with the consequence that a broader range of cells will be firing at this late phase (Skaggs et al., 1996; Burgess et al., 1993; see also Chadwick et al., 2015). Thus, these sweeps could represent the distribution of possible future trajectories, with the broadening distribution representing greater uncertainty in the future trajectory.

    Using data from Pfeiffer and Foster (2013), they examine how neurons could encode the distribution of future locations, including its breadth (i.e. uncertainty), testing a couple of proposed methods and suggesting one of their own. The results show that decoded location has increasing variability at later phases (corresponding to locations further ahead), and greater deviation from the actual trajectory. Further results (when testing the models below) include that population firing rate increased from early to late phases; decoding uncertainty does not change within-cycle, and the cycle-by-cycle variability (CCV) increases from early to late phases more rapidly than the trajectory encoding error (TEE).

    They then use synthetic data to test ideas about neural coding of the location probability distribution, i.e. that: a) place cell firing corresponds to the tuning functions on the mean future trajectory (w/o uncertainty); b) the distribution is represented in the immediate population firing as the product of the tuning functions of active cells or c) (DDC) the distribution is represented by its overlap with the tuning curves of individual neurons; d) (their suggestion) that different possible trajectories are sampled from the target distribution in different theta cycles.

    Under the product scheme, uncertainty decreases as population firing rate increases, so encoding the growing uncertainty across the cycle would require maximal firing at early phases (corresponding to locations behind the rat). This contradicts the observed increase in firing rate toward late phases, so the scheme is discarded.
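
    This rate-uncertainty link can be made concrete with a toy product-scheme decoder: for Gaussian tuning curves the posterior p(x|r) ∝ ∏ᵢ fᵢ(x)^rᵢ is itself Gaussian with variance σ²/Σᵢrᵢ, so scaling up all spike counts sharpens it. The sketch below is purely illustrative, not the paper's decoder.

    ```python
    import numpy as np

    # Toy product-scheme decoder: p(x | r) ∝ prod_i f_i(x)^(r_i).
    # More spikes imply a narrower posterior, i.e. lower uncertainty.

    xs = np.linspace(0.0, 1.0, 1001)
    centers = np.linspace(0.0, 1.0, 30)
    sigma_tc = 0.08
    tc = np.exp(-0.5 * ((xs[:, None] - centers[None, :]) / sigma_tc) ** 2)

    rng = np.random.default_rng(1)
    base_rates = np.exp(-0.5 * ((centers - 0.5) / sigma_tc) ** 2)  # true position 0.5

    def posterior_sd(gain):
        """Std. dev. of the product-scheme posterior at a given rate gain."""
        counts = rng.poisson(gain * base_rates)
        log_post = np.log(tc + 1e-300) @ counts   # sum_i r_i * log f_i(x)
        log_post -= log_post.max()
        post = np.exp(log_post)
        post /= post.sum()
        mean = post @ xs
        return np.sqrt(post @ (xs - mean) ** 2)

    sd_low, sd_high = posterior_sd(2.0), posterior_sd(20.0)
    # sd_high < sd_low: a tenfold rate increase narrows the posterior, so
    # high firing would signal *low* uncertainty — the reverse of the data,
    # where rate rises toward late phases while uncertainty grows.
    ```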

    The DDC scheme has an increased diversity of cells firing as the target distribution gets wider within each cycle, whereas the mean and sampling schemes do not have increasing variance within-cycle (representing a single trajectory throughout). The decoding uncertainty in the data did not vary within-cycle, so the DDC scheme was discarded.

    The mean and sampling schemes are distinguished by the increase in CCV vs TEE with phase, which is consistent with the sampling scheme.
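
    This distinguishing signature can be illustrated with a toy simulation (numbers are hypothetical, not the paper's analysis): under the mean scheme every cycle decodes the same mean future position, so CCV sits at the decoding-noise floor at all phases, whereas under sampling each cycle decodes a fresh draw from p, so CCV inherits the growing width of p.

    ```python
    import numpy as np

    # Toy illustration of the CCV signature separating the mean and
    # sampling schemes. The width of p grows across the theta cycle;
    # decoding noise does not.

    rng = np.random.default_rng(2)
    n_cycles = 2000
    sigma_p = 0.02 + 0.2 * np.linspace(0.0, 1.0, 5)  # p widens with phase
    noise = 0.05                                     # phase-independent decoding noise

    ccv_mean, ccv_samp = [], []
    for s in sigma_p:
        # mean scheme: every cycle decodes the mean of p (here 0) plus noise
        dec_mean = noise * rng.standard_normal(n_cycles)
        # sampling scheme: every cycle decodes a fresh draw from p plus noise
        dec_samp = rng.normal(0.0, s, n_cycles) + noise * rng.standard_normal(n_cycles)
        ccv_mean.append(dec_mean.std())              # cycle-by-cycle variability
        ccv_samp.append(dec_samp.std())
    # ccv_mean stays near the noise floor at every phase, while ccv_samp
    # grows with sigma_p, outpacing the error of the mean trajectory.
    ```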

    The analyses are well done and the results with synthetic data (assuming future trajectories are randomly sampled from the average distribution) and real data match nicely, although there is excess variability in the real data. Overall, this paper provides the most thorough analyses so far of place cell theta sweeps in open fields.

    I found the framing of the paper confusing in a way that made it harder to understand the actual contribution made here. As noted in the discussion, the field has moved on from the 1990s, and cycle-by-cycle decoding of theta sweeps has consistently shown that they correspond to specific trajectories moving from the current location to potential future trajectories, consistent with continuous attractor-based models (in which the width of the activity bump cannot change, e.g. Hopfield, 2010). Thus it seems odd to use theta sweeps to test models of encoding uncertainty: since Johnson & Redish (2007) we have known that they seem to encode specific trajectories (e.g. going one way or the other at a choice point) rather than an average direction with variance covering the possible alternatives.

    Thus, the main outcomes of the simulations could reasonably be predicted in advance, and the possibility of alternative neural models of uncertainty explaining firing data remains: in situations where it is more reasonable to believe that the brain is in fact encoding uncertainty as the breadth of a distribution. Having said that, most previous examples of trajectory decoding of theta sweeps have not been for navigation in open fields, and the analysis of Pfeiffer and Foster (2013; in open fields) was restricted to sequential 'replay' during sharp-wave ripples rather than theta sweeps. This paper provides the nicest decoding analyses so far of place cell theta sweeps in open field data. However, there are already examples of theta sweeps in entorhinal cortex in open fields (Gardner et al., 2019) showing the same alternating left/right sweeps as seen on mazes (Kay et al., 2020). Such alternation could explain the additional cycle-by-cycle variability observed (cf random sampling).

    Refs not in paper:
    Burgess N., O'Keefe J. and Recce M. (1993) Using Hippocampal Place Cells for Navigation, Exploiting Phase Coding, Neural Information Processing Systems 5: 929-936.
    Chadwick A., van Rossum M. C. W. and Nolan M. F. (2015) Independent theta phase coding accounts for ca1 population sequences and enables flexible remapping. eLife 4: e03542
    Gardner R. J., Vollan A. Z., Moser M.-B., Moser E. I. (2019) A novel directional signal expressed during grid-cell theta sequences. Soc. Neurosci. Abstr. 604.13/AA9
    Hopfield J. J. (2010) Neurodynamics of mental exploration. PNAS 107: 1648-1653.

  4. Reviewer #2 (Public Review):

    This study investigates how uncertainty about spatial position is represented in hippocampal theta sequences. Understanding the neural coding of uncertainty is an important issue in general, because computational and theoretical work clearly demonstrates the advantages of tracking uncertainty to support decision-making, behavioral work in many domains shows that animals and humans are sensitive to it in myriad ways, and signatures of the neural representations of uncertainty have been demonstrated in many different systems/circuits.

    However, whether and how uncertainty is signaled in the hippocampus has remained understudied. The question of how spatial uncertainty is represented is interesting in its own right, but the recent interest in interpreting hippocampal sequences as important for planning and decision-making provides additional motivation.

    A variety of experimental paradigms such as recordings in light vs. darkness, dual rotation experiments in which different cues are placed in conflict with one another, "morph" and "teleportation" experiments and so on, all speak to this issue in some sense (and as I note below, could nicely complement the present study); and a number of computational models of the hippocampus have included some representation of uncertainty (e.g. Penny et al. PLoS Comp Biol 2013, Barron et al. Prog Neurobiol 2020). However, the present study fills an important gap in that it connects a theory-driven approach of when and how uncertainty could be represented in principle, with experimental data to determine which is the most likely scheme.

    The analyses rely on the fundamental insight that states/positions further into the future are associated with higher uncertainty than those closer to the present. In support of this idea, the authors first show that in the data (navigation in a square environment, using the wonderful data from Pfeiffer & Foster 2013), decoding error increases within a theta sequence, even after correcting for the optimal time shift.

    The authors then lay out the leading theoretical proposals of how uncertainty can be represented in principle in populations of neurons, and apply them to hippocampal place cells. They show that for all of these schemes, the same overall pattern results. The key advance of the paper seems to be enabled by a sophisticated generative model that produces realistic probability distributions to be encoded (that take into account the animal's uncertainty about its own position). Using this model, the authors show that each uncertainty coding scheme is associated with distinct neural signatures that they then test against the data. They find that the intuitive and commonly employed "product" and "DDC" schemes are not consistent with the data, but the "sampling" scheme is.

    The final conclusion that the sampling scheme is most consistent with the data is perhaps not surprising, because similar conclusions have been reached from the alternating representation of left and right at choice points (Johnson and Redish 2007; Kay et al. 2020; Tang et al. 2021) and from "flickering" from one theta cycle to the next (Jezek et al. 2011), all cited by the authors. So, the most novel parts of the work to me are the rigorous ruling out of the alternative "product" and "DDC" schemes.

    Overall I am very enthusiastic about this work. It addresses an important open question, and the structure of the paper is very satisfying, moving from principles of uncertainty encoding to simulated data to identifying signatures in actual data. In this structure, the generative model that produces the synthetic data is clearly playing an important role, and intuitively, it seems the conclusions of the paper depend on how well this testbed maps onto the actual data. I think this model is a real strength of the paper and moves the field forward in both its conceptual sophistication (taking into account the agent's uncertainty) and in how carefully it is compared to the actual data (Figures S2, S3).

    I have two overall concerns that can be addressed with further analyses.

    First, I think the authors should test which of the components of this model are necessary for their results. For instance, if the authors simply took the successor representation (distribution of expected future state occupancy given current location) and compressed it into theta timescale, and took that as the probability distribution to be encoded under the various schemes, would the same predictions result? Figuring out which elements of the model are necessary for the schemes to become distinguishable seems important for future empirical work inspired by this paper.
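
    The control the reviewer suggests can be sketched concretely. For a random-walk policy on a small track, the successor representation is M = (1 − γ)(I − γT)⁻¹, whose rows are discounted distributions over expected future states; row s could in principle be fed to the coding schemes in place of the paper's generative model. The toy below is illustrative only; all parameters are hypothetical.

    ```python
    import numpy as np

    # Toy successor representation (SR) on a 1-D track with a random-walk
    # policy: M = (1 - gamma) * inv(I - gamma * T). Row s of M is a
    # probability distribution over discounted future state occupancy
    # starting from state s.

    n_states, gamma = 10, 0.9
    T = np.zeros((n_states, n_states))       # transition matrix
    for s in range(n_states):
        for nb in (max(s - 1, 0), min(s + 1, n_states - 1)):
            T[s, nb] += 0.5                  # step left or right; walls reflect

    M = (1.0 - gamma) * np.linalg.inv(np.eye(n_states) - gamma * T)

    p_future = M[4]   # distribution over future states from state 4
    ```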

    Second, the analyses are generally very carefully and rigorously performed, and I particularly appreciated how the authors addressed bias resulting from noisy estimation of tuning curves (Figure S7). However, the conclusion that the "sampling" scheme is correct relies on there being additional variance in the spiking data. This is reminiscent of the discussions about overdispersion and how "multiple maps" account for it (Jackson & Redish Hippocampus 2007, Kelemen & Fenton PLoS Biol 2010), and the authors should test if this kind of explanation is also consistent with their data. In particular, the task has two distinct behavioral contexts, when animals are searching for the (not yet known) "away" location compared to returning to the known home location, which extrapolating from Jackson & Redish, could be associated with distinct (rate) maps leading to excess variance.

    Such an analysis could also potentially speak to an overall limitation of the work (not a criticism, more of a question of scope) which is that there are no experimental manipulations/conditions of different amounts of uncertainty that are analyzed. Comparing random search (high uncertainty, I assume) to planning a path to a known goal (low uncertainty) could be one way to address this and further bolster the authors' conclusions.

  5. Reviewer #3 (Public Review):

    Summary of the goals:

    The authors set out to test the hypothesis that neural activity in the hippocampus reflects probabilistic computations during navigation and planning. They did so by assuming that neural activity during theta waves represents the animal's location, and that uncertainty about this location should grow along the path from the recent past to the future. They next generated empirical signatures for each of the three main proposals for how probabilities may be encoded in neural responses (PPC, DDC, sampling) and contrasted them with each other and with a non-probabilistic representation (a scalar estimate of location). Finally, the authors compared these predictions to previously published neural activity and concluded that a sampling-based representation best explained the data.

    Impact & Significance:

    This manuscript can make a significant impact on many fields in neuroscience, from hippocampal research studying function and neural coding, through theoretical work linking the representation of uncertainty to neural codes, to the modeling of experimental paradigms using navigation tasks. The manuscript provides the following novel contributions to cognitive neuroscience:

    - It exploits the inherent change in uncertainty about a parsimonious internal variable over time during planning to test hypotheses about probabilistic computations.

    - A full model comparison of competing hypotheses for the neural implementation of probabilistic beliefs. This is a topic of wide interest and direct comparisons using data have been elusive.

    - The study presents substantial empirical evidence for a sampling-based neural representation of the probability distribution over trajectories in the hippocampus, a finding with potential implications for other parts of neural processing.

    Strengths:
    - Creative exploitation of a naturally occurring change in uncertainty over a parsimonious latent variable (location).

    - Derivation of three empirical signatures using a combination of analytical and numerical work.

    - Novel computational modelling, linked to neural coding via four existing implementational models

    - Comprehensive and rigorous data analysis of a large and high-quality neural dataset, with supplemental analyses of a second dataset

    - Mostly very clear and high quality presentation

    Weaknesses:

    - It is unclear to what degree the "signatures" depend on the details of the numerical simulation used by the authors to generate them. At least two of them (gain for the product scheme and excess variability for the sampling scheme) appear very general, but the degree of robustness should be discussed for all three signatures.

    - The claims about "efficiency" lack both a definition of what exactly is meant by the term and empirical support.
