Multi-contrast anatomical subcortical structures parcellation
This article has been reviewed by the following groups
Listed in
- Evaluated articles (eLife)
Abstract
The human subcortex comprises more than 450 individual nuclei that lie deep in the brain. Due to their small size and close proximity, only 7% have so far been depicted in standard MRI atlases. Thus, the human subcortex can largely be considered terra incognita. Here, we present a new open-source parcellation algorithm to automatically map the subcortex. The new algorithm has been tested on 17 prominent subcortical structures based on a large quantitative MRI dataset at 7 Tesla. It has been carefully validated against expert human raters and previous methods, and can easily be extended to other subcortical structures and applied to any quantitative MRI dataset. In sum, we hope this novel parcellation algorithm will facilitate functional and structural neuroimaging research into small subcortical nuclei and help to chart terra incognita.
Article activity feed
###Reviewer #3:
In this paper, the authors propose an automated method to parcellate the subcortex given a set of manual delineations. One of its strongest points is the adoption of a Bayesian approach, combining priors derived from brain anatomy and the MRI acquisition. These priors are then used to estimate per-voxel posterior probabilities, which, after a series of operations, yield a final subcortical parcellation. The paper appears technically sound, and the proposed method is potentially relevant, given the importance of having competent tools for obtaining good subcortical delineations, especially in high-resolution datasets.
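The prior-to-posterior combination the reviewer summarizes can be sketched as follows. This is a minimal illustration, not MASSP's actual implementation: the function name, array shapes, and the assumption that the anatomical prior and intensity likelihood are supplied as precomputed probability maps are all hypothetical.

```python
import numpy as np

def voxelwise_posterior(anatomical_prior, intensity_likelihood):
    """Combine a spatial anatomical prior with an intensity likelihood
    into per-voxel posterior probabilities over structures.

    Both inputs are assumed to have shape (n_structures, X, Y, Z).
    """
    unnormalized = anatomical_prior * intensity_likelihood
    # Normalize across structures so posteriors sum to 1 in each voxel.
    total = unnormalized.sum(axis=0, keepdims=True)
    total[total == 0] = 1.0  # avoid division by zero in empty background
    return unnormalized / total

# A hard parcellation then assigns each voxel its maximum-posterior label:
# labels = voxelwise_posterior(prior, likelihood).argmax(axis=0)
```

In the actual method, the "series of operations" on these posteriors (topology and adjacency constraints, boundary refinement) would replace the simple argmax shown in the comment.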
I have some possible concerns and suggestions that might increase the quality of the paper:
- From Figure 4, it is clear that the estimated Dice coefficients decrease with age. As the authors note, this is likely because the priors were built from 10 subjects with an average age of 24.4 years; thus, the highest predicted performance is obtained for subjects whose age range (18-40) lies around this average. The authors mention that they plan to model the effects of age in the priors in future work. However, could they already address this question in the current work? Since the data used to test this age bias have already been manually delineated, what if the authors generated new priors from this set of delineations, including subjects of all ages, and tested whether the predicted Dice coefficients still depend on age, in the same way as was done in Figure 4?
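For reference, the Dice coefficient discussed here measures the overlap between an automatic segmentation and a manual delineation as twice the intersection divided by the sum of the two volumes. A minimal sketch (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice overlap between two binary masks: 2*|A ∩ B| / (|A| + |B|).

    Returns 1.0 for perfect agreement, 0.0 for no overlap.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Because small structures have a high surface-to-volume ratio, even a one-voxel boundary shift noticeably lowers this score, which is relevant when interpreting age-related decreases.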
- Automated methods are usually sensitive to the number of subjects used to build the parcellation, with results from a bigger training cohort being potentially more robust and generalizable. As said earlier, one of the strongest points of the method presented in this paper is its Bayesian approach, which usually works efficiently for small sample sizes and allows previous results to be updated as new data arrive. Still, it could be highly illustrative to show how the method's performance depends on the initial training size. Using the same set of delineations of the 105 subjects used to test the age bias, what if the authors showed the predicted performance when the priors are generated from training sets of varying size?
- What is the value of the scale parameter delta that appears in the priors? Is it a free parameter? If so, do results change when it varies?
###Reviewer #2:
In the present manuscript, Bazin and colleagues describe an automated computational approach to segment 17 subcortical nuclei from individual quantitative MRI maps derived at 7T. To this end, they trained a Bayesian "Multi-contrast Anatomical Subcortical Structure Parcellation (MASSP)" algorithm. They validate the approach in a leave-one-out fashion trained on 9/10 high-resolution scans. They assess age-related bias and report that the dilated Dice overlap, which allows 1 voxel of uncertainty, demonstrates very high accuracy of the segmentations when compared with expert delineation.
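The "dilated Dice overlap allowing 1 voxel of uncertainty" can be sketched as a boundary-tolerant variant of the Dice score, in which each mask is dilated before counting intersections so that disagreements within the tolerance are not penalized. This is one common formulation of such a metric, assumed for illustration; the paper's exact definition may differ.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilated_dice(mask_a, mask_b, tolerance=1):
    """Boundary-tolerant Dice overlap between two binary masks.

    Each mask is dilated by `tolerance` voxels; a voxel of one mask
    counts as matched if it falls inside the dilated other mask.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    a_dilated = binary_dilation(a, iterations=tolerance)
    b_dilated = binary_dilation(b, iterations=tolerance)
    matched = (np.logical_and(a, b_dilated).sum()
               + np.logical_and(b, a_dilated).sum())
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return matched / denom
```

With this definition, two identical masks shifted by a single voxel score 1.0, whereas the standard Dice coefficient would penalize every boundary voxel, which matters for small subcortical nuclei.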
This is straightforward work. It would certainly benefit from an additional step of out-of-center / out-of-cohort validation, but I have no serious concern that performance would be unsatisfactory. The most important limitation is acknowledged, which is the bias from anatomical variation through age or disease. The algorithm is shown to be affected by age and will most certainly be affected by contrast and size changes in neurodegenerative disorders.
The authors certainly know their field and are a driving force in open 7T research of the basal ganglia.
###Reviewer #1:
The main criticisms of the work fall under categories largely centered on how the method is evaluated, rather than fundamental concerns with the method itself.
Major concerns:
Relative effectiveness. While a critical advancement of this method is the ability to segment many more regions than previous subcortical atlases, there are still many regions that overlap with existing segmentation tools. Knowing how the reliability of this new approach compares to previous automatic segmentation methods is crucial in being able to know how to trust the overall reliability of the method. The authors should make a direct benchmark against previous methods where they have overlap.
Aging analysis. The analysis of the aging effects on the segmentations seemed oddly out of place. It wasn't clear if this is being used to vet the effectiveness of the algorithm (i.e., its ability to pick up on patterns of age-related changes) or the limitations of the algorithm (i.e., the segmentation effectiveness decreases in populations with lower across-voxel contrast). What exactly is the goal with this analysis? Also, why is it limited to only a subset of the regions output from the algorithm?
Clarity of the algorithm. Because of the difficulty of the parcellation problem, the algorithm being used is quite complex. The authors do a good job showing the output of each stage of the process (Figures 7 & 8), but it would substantially help general readers to have a schematic of the logic of the algorithm itself.
##Preprint Review
This preprint was reviewed using eLife’s Preprint Review service, which provides public peer reviews of manuscripts posted on bioRxiv for the benefit of the authors, readers, potential readers, and others interested in our assessment of the work. This review applies only to version 1 of the manuscript. Timothy Verstynen (Carnegie Mellon University) served as the Reviewing Editor.
###Summary:
In this study, Bazin and colleagues propose a novel segmentation algorithm for parcelling subcortical regions of the human brain, developed from multiple MRI measures derived from the MP2RAGEME sequence acquired on a 7T MRI system. The key advancement of this approach is a reliable segmentation of more subcortical areas (17 regions) in native space than is possible with currently available methods. The authors validate their algorithm by comparing against age-related measures.
This manuscript was reviewed by three experts in the field, who found that this method has strong potential to be a new "workhorse" tool in human neuroimaging that could substantially advance our ability to measure brain structures that are largely overlooked due to problems with segmentation. The main criticisms of the work are largely centered on how the method is evaluated and implemented, rather than fundamental concerns with the validity of the method itself.