Using adversarial networks to extend brain computer interface decoding accuracy over time
Curation statements for this article:
Curated by eLife
eLife assessment
The reviewers felt that, in its current form, the work describing the use of a CycleGAN for alignment of neural activity from a neural interface across sessions was useful, with solid evidence showing that it improved performance over previous approaches based on similar concepts.
This article has been Reviewed by the following groups
Listed in
- Evaluated articles (eLife)
Abstract
Existing intracortical brain computer interfaces (iBCIs) transform neural activity into control signals capable of restoring movement to persons with paralysis. However, the accuracy of the ‘decoder’ at the heart of the iBCI typically degrades over time due to turnover of recorded neurons. To compensate, decoders can be recalibrated, but this requires the user to spend extra time and effort to provide the necessary data, then learn the new dynamics. As the recorded neurons change, one can think of the underlying movement intent signal as being expressed in changing coordinates. If a mapping can be computed between the different coordinate systems, it may be possible to stabilize the original decoder’s mapping from brain to behavior without recalibration. We previously proposed a method based on Generative Adversarial Networks (GANs), called ‘Adversarial Domain Adaptation Network’ (ADAN), which aligns the distributions of latent signals within underlying low-dimensional neural manifolds. However, we tested ADAN on only a very limited dataset. Here we propose a method based on Cycle-Consistent Adversarial Networks (Cycle-GAN), which aligns the distributions of the full-dimensional neural recordings. We tested both Cycle-GAN and ADAN on data from multiple monkeys and behaviors, and compared them to a third, quite different method based on Procrustes alignment of axes provided by Factor Analysis. All three methods are unsupervised and require little data, making them practical for real-world use. Overall, Cycle-GAN had the best performance, and was easier to train and more robust than ADAN, making it ideal for stabilizing iBCI systems over time.
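As a concrete illustration of the idea, a minimal sketch of a Cycle-GAN-style aligner for binned firing rates might look like the following (assuming PyTorch; the layer sizes, loss weights, and names are illustrative assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

N_CH = 96  # assumed number of recorded channels

def mlp(n_in, n_out):
    # Small fully connected network; sizes here are guesses for illustration.
    return nn.Sequential(
        nn.Linear(n_in, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, n_out),
    )

G = mlp(N_CH, N_CH)                              # generator: day-k -> day-0
F = mlp(N_CH, N_CH)                              # generator: day-0 -> day-k
D0 = nn.Sequential(mlp(N_CH, 1), nn.Sigmoid())   # discriminator: "is this day-0?"
Dk = nn.Sequential(mlp(N_CH, 1), nn.Sigmoid())   # discriminator: "is this day-k?"

bce, l1 = nn.BCELoss(), nn.L1Loss()
opt_g = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(list(D0.parameters()) + list(Dk.parameters()), lr=1e-4)

def train_step(x0, xk, lam=10.0):
    """One update on equal-sized batches of single time bins, shape (B, N_CH)."""
    ones = torch.ones(x0.size(0), 1)
    zeros = torch.zeros(x0.size(0), 1)

    # Discriminators: score real bins toward 1, aligned (fake) bins toward 0.
    opt_d.zero_grad()
    loss_d = (bce(D0(x0), ones) + bce(D0(G(xk).detach()), zeros) +
              bce(Dk(xk), ones) + bce(Dk(F(x0).detach()), zeros))
    loss_d.backward()
    opt_d.step()

    # Generators: fool the discriminators, subject to cycle consistency.
    opt_g.zero_grad()
    loss_adv = bce(D0(G(xk)), ones) + bce(Dk(F(x0)), ones)
    loss_cyc = l1(F(G(xk)), xk) + l1(G(F(x0)), x0)
    (loss_adv + lam * loss_cyc).backward()
    opt_g.step()

x0 = torch.rand(64, N_CH)   # toy day-0 batch of binned firing rates
xk = torch.rand(64, N_CH)   # toy day-k batch
train_step(x0, xk)
```

Once trained, only the day-k-to-day-0 generator would be used at runtime, upstream of the fixed day-0 decoder.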
Article activity feed
Author Response
Reviewer #1 (Public Review):
Alignment of high-dimensional data that express their dynamics in a subspace is a challenge that has recently been addressed both with analytic solutions like the Procrustes transformation and, most interestingly, with deep learning approaches based on adversarial networks. The authors previously proposed an adversarial network approach to alignment that relied on first reducing the dimensionality of the binned neural spikes using an autoencoder. Here, they use an alternative approach that aligns the data without an initial dimensionality-reduction step.
The results are fairly clear: the Cycle-GAN approach works better than their previous ADAN approach and than one based on dimensionality reduction followed by the Procrustes transform. In general, a criticism of this entire field is that it remains unclear what alignment teaches us about the brain, or how specifically it will be used in a BCI context.
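For context on the Procrustes-based baseline mentioned above, a minimal sketch of aligning factor-analysis latents with an orthogonal Procrustes transform is shown below (illustrative only, on toy data, and not necessarily the paper's exact PAF procedure):

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
x0 = rng.poisson(2.0, size=(5000, 96)).astype(float)  # toy day-0 binned spikes
xk = rng.poisson(2.0, size=(5000, 96)).astype(float)  # toy day-k binned spikes

n_factors = 10  # assumed manifold dimensionality
z0 = FactorAnalysis(n_components=n_factors).fit_transform(x0)  # day-0 latents
zk = FactorAnalysis(n_components=n_factors).fit_transform(xk)  # day-k latents

# Orthogonal Procrustes: rotation R minimizing ||zk @ R - z0||_F.
# Real sessions are unpaired, so in practice one would match trial averages
# or distributions; the toy rows here are treated as paired for simplicity.
R, _ = orthogonal_procrustes(zk, z0)
zk_aligned = zk @ R  # day-k latents expressed in day-0 coordinates
```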
There are a few issues with the paper.
- To increase the impact of their work, the investigators have now used it to align data in multiple types of tasks. There was an unanswered question about this related to neuroscience: does alignment in one task predict alignment for another?
This is a great question! We anticipate that it will be challenging for an alignment learned on one task to be used on another task, because we know that M1 decoders trained on data from one behavior often do not generalize when tested using a different behavior (Naufel et al., 2019)*. The same nonlinearities that prevent zero-shot decoding across tasks are also likely to impair the ability of an aligner trained on data from one task to successfully align data from another task. Furthermore, the results of Naufel et al. indicate that even if neural alignment is successful, we would need a decoder already trained on the new task to produce reliable predictions, in which case the data needed to train that decoder could simply be used for alignment. A systematic study of the relation between the ability to align data and the ability to decode from it is well warranted, but beyond the scope of our current work.
*Naufel, S., Glaser, J. I., Kording, K. P., Perreault, E. J., & Miller, L. E. (2019). A muscle-activity-dependent gain between motor cortex and EMG. Journal of neurophysiology, 121(1), 61-73.
Action in the text: none.
- Investigators use decoding as a way of comparing alignment performance. The description of the Cycle-GAN was not super detailed, and it wasn't clear whether there was any dynamic information stored in the network that might create questions of causality in actual use. It seems that the input is simply the neural activity at the current time point, rather than neural activity across the trial, which would alleviate this concern. However, they mention temporal alignment but never describe in detail whether all periods of spikes are properly modeled by the system, or if only subsets of data (specific portions of task or non-task time) will work. Perhaps this is more a question of the Wiener filter, for which precise details are missing.
As intuited by the reviewer, we used only the neural activity at the current time point as the input for Cycle-GAN training, so the system is causal and can be used in real time. We have modified the text to clarify this.
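To make the causality point concrete, here is a hedged sketch of bin-by-bin use, in which a stand-in aligner network sees only the current bin (all sizes and names are hypothetical, not the authors' code):

```python
import torch
import torch.nn as nn

N_CH = 96  # assumed channel count

# Stand-in for a trained aligner: it maps one bin to one bin, with no memory.
aligner = nn.Sequential(nn.Linear(N_CH, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, N_CH))

def decode_stream(bins, decoder):
    """bins: iterable of (N_CH,) firing-rate vectors arriving one 50 ms bin
    at a time. Only the current bin is used, so the pipeline is causal."""
    with torch.no_grad():
        for x in bins:
            x_aligned = aligner(x.unsqueeze(0))  # align the current bin only
            yield decoder(x_aligned)             # fixed day-0 decoder

# Toy usage: a dummy linear "decoder" standing in for the day-0 decoder.
decoder = nn.Linear(N_CH, 2)                     # e.g., 2D cursor velocity
bins = [torch.randn(N_CH) for _ in range(5)]     # fake incoming bins
outputs = list(decode_stream(bins, decoder))
```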
We apologize for any confusion caused by our use of the term "temporal alignment", which was for the sake of consistency with earlier-published, CCA-based alignment methods (e.g., in Gallego et al., 2020), but is indeed confusing. In the revised manuscript, we have switched to the term ‘trial alignment’ which we believe better reflects this pre-processing step, and we have included additional explanations in the introduction.
Importantly, while CCA-style trial alignment is not required by our methods, we do still preprocess our data to exclude behaviors not related to the investigated task. Since the monkeys were resting or performing task-irrelevant movements during the inter-trial periods, we chose to use data only from trial start to trial end, but without any explicit trial matching or alignment (see Appendix 1 - Behavior tasks). In the revised manuscript, we now show that our methods still work well even when applied to continuous recordings, with Cycle-GAN significantly outperforming both ADAN and PAF.
Action in the text (page 2, lines 72-74): clarifying CCA description and replacing “temporal alignment” with “trial alignment”.
Action in the text (page 5, lines 191-192): stating that ADAN and Cycle-GAN have no knowledge of dynamics.
Action in the text (page 6, lines 258-272): documenting performance on full-day recordings without trial matching.
Action in the text (page 13, lines 647-649): again, stating that Cycle-GAN has no knowledge of dynamics.
- In general, precise details of the algorithms should have been provided.
We appreciate the reviewer noting this. In the submitted manuscript, the full descriptions of Cycle-GAN and ADAN were included as supplementary methods in Appendix 4, but we did not reference it extensively and it may have been missed. In the revised manuscript, we have added more references to Appendix 4, including in the Methods section of the main text, and we provide further details on the choice of hyperparameters for each method (including PAF) in Appendix 4 itself.
Action in the text (page 13, lines 643-644): added “For a full description of the ADAN architecture and its training strategy, please refer to “ADAN based aligner” in Appendix 4 and (Farshchian et al., 2018).”
Action in the text (page 14, line 669): added “Further details about the Cycle-GAN based aligner are provided in “Cycle-GAN based aligner”, Appendix 4.”
Action in the text (Appendix 4, Tables 1-2): added a summary table of hyperparameters for each method in Appendix 4 (ADAN: Appendix 4 Table 1; Cycle-GAN: Appendix 4 Table 2).
- Cross validation for day-0 alignment is not explained.
As mentioned above, the training and validation details of day-0 models were included in Appendix 4, which was not extensively referenced in the manuscript and may have been missed. We have now added more references to the Appendix in the revised manuscript.
Action in the text (page 13, lines 627-629): added “(Note that this LSTM based decoder is only used for latent space discovery, not the later decoding stage that is used for performance evaluation (see “ADAN day-0 training” in Appendix 4 for full details)).”
- Details of the statistical tests are not provided.
We apologize for this omission. In the revised manuscript, we have added a section in the Methods summarizing all the statistical tests. In addition, we added the sample sizes for each statistic reported in the Results section.
Action in the text (page 15, lines 754-768): new Methods section added.
- (minor) The idea that the CycleGAN can "infer the response properties" of neurons that have disappeared seems an incorrect description. A proper description should be that it "hallucinates" their response properties?
We prefer to avoid the term “hallucinate”, due to its recently increased (and appropriate) use in the context of large language models, where it describes content generation that is “nonsensical or unfaithful to the provided source content” (as per the Wikipedia article on hallucination in AI). The synthesized “responses” of vanished neurons are not nonsensical, but are indeed inferred: they are the model’s best estimate of how these neurons would have responded, had they been observed. While not explored further here, this prediction could be of potential scientific use: a strong discrepancy between predicted and observed activity might be a clue to look for further evidence of learning or remodeling of neural representations of behavior.
Action in the text: none.
Reviewer #2 (Public Review):
In this manuscript, the authors use generative adversarial networks (GANs) to manipulate neural data recorded from intracortical arrays in the context of intracortical BCIs, so that these decoders remain robust. Specifically, the authors deal with the hard problem where signals from an intracortical array change over time, and decoders that are trained on day 0 do not work on day K. Either the decoder or the neural data needs to be updated to achieve the same performance as initially. GANs try to alter the neural data from day K to make it indistinguishable from day 0, and thus in principle the decoder should perform better. The authors compare their GAN approach to an older GAN approach (by an overlapping group of authors) and suggest that this new GAN approach is somewhat better.
Major strengths are the multiple datasets from behaving monkeys performing various tasks that involve motor function, and the comparison between two different GAN approaches and a classical approach that uses factor analysis. The weakness is an insufficient comparison to another state-of-the-art approach that has been applied to the same dataset (NoMAD; Karpowicz et al., bioRxiv 2022).
The results are very reasonable, and they show that their approach, Cycle-GAN, does slightly better than the traditional GAN approach. However, the Cycle-GAN has many more modules and, as I understand it, performs a forward-backward mapping between the day-0 and day-k data, and is thus theoretically better. But it seems quite slow.
We are concerned that the reviewer may have mistaken the Cycle-GAN training time (the time it takes to find an alignment, Figure 4B) for its inference time (the time it takes to transform data once an alignment has been found). Whereas inference time is critical for practical deployment of a model, we argue that Cycle-GAN's somewhat longer training time is not a substantial barrier to use: it is still reasonably fast (a few minutes), and training need only be performed on the order of once per day. We have modified the y-axis label of Figure 4B to make this distinction clearer.
We have also now added information on the inference speed of trained models to the paper: we find that both Cycle-GAN and ADAN perform the inference step in under 1 ms per 50 ms sample of data, because the forward map in both models consists of a fully connected network with only two hidden layers. We also note that while forward-backward mapping between days does occur during Cycle-GAN training, only the forward mapping is performed during inference (a rough timing sketch follows below).
Action in the text (page 7, lines 303-306): added inference time for Cycle-GAN and ADAN.
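For intuition about why inference is fast, here is a rough, self-contained timing sketch of a forward map with two hidden layers (sizes are assumptions, and actual timings depend on hardware):

```python
import time
import torch
import torch.nn as nn

N_CH = 96  # assumed channel count
forward_map = nn.Sequential(nn.Linear(N_CH, 64), nn.ReLU(),
                            nn.Linear(64, 64), nn.ReLU(),
                            nn.Linear(64, N_CH))

x = torch.randn(1, N_CH)  # one 50 ms binned sample
with torch.no_grad():
    forward_map(x)        # warm-up pass
    t0 = time.perf_counter()
    for _ in range(1000):
        forward_map(x)
    dt = (time.perf_counter() - t0) / 1000
print(f"~{dt * 1e3:.3f} ms per bin")  # typically well under 1 ms on a CPU
```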
I think the results are interesting, but as such, I am not sure this is such a fundamental advance compared to the Farshchian et al. paper, which introduced GANs to improve decoding in the face of changing neural data. There are other approaches that also use GANs, and I think they all need to be compared against each other. Finally, these are all offline results, and what happens online is anyone's guess. Of course, this is not just a weakness of this study but of many studies of its ilk.