Multimodal Tissue Segmentation is better


Abstract

The ability to distinguish tissues and quantify features in an image is ubiquitous in medical imaging, allowing, for instance, the study of differences between clinical groups or the investigation of the impact of an intervention. Assigning tissue types is also a fundamental preprocessing step in many neuroimaging applications such as image registration, normalisation, or simple masking. Because of this ubiquity, thorough investigations of segmentation algorithms are necessary to determine the conditions in which they work best. The SPM implementation of tissue segmentation is a commonly used tool in this context, providing voxel-wise probabilistic estimates of brain grey matter, white matter, cerebrospinal fluid, soft tissue, and bone. Different estimates of tissue density and/or volume have, however, been observed using unimodal vs. multimodal inputs. Here, we contend that these discrepancies, and possible misinterpretations, arise from mis-specifying parameters of the generative model underlying tissue segmentation. Using T1-weighted vs. T1- and T2-weighted images as input, while also varying the number of Gaussians (1 vs. 2 per brain tissue class) used in the generative model, we compared tissue volumes, tissue distributions, and accuracy at classifying non-brain intracranial tissue (arteries) and grey matter nuclei in two independent datasets (discovery N = 259, validation N = 87). Results show that, compared to unimodal tissue segmentation, multimodal tissue segmentation gives more replicable volume estimates, more replicable tissue modelling, and more accurate results with regard to non-brain tissue (e.g. meninges or vessels), but only when the model is correctly parameterized (i.e. 2 Gaussians per brain tissue class).
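For readers who want to apply the recommended parameterization, the sketch below illustrates, in a non-authoritative way, how unified segmentation with 2 Gaussians per brain tissue class could be set up through nipype's SPM interface. The nipype wrapper, the placeholder file names ('T1.nii', 'TPM.nii'), and the Gaussian counts used for the non-brain classes are assumptions for illustration, not details taken from the article.

```python
# A minimal sketch (not from the article): configuring SPM12 unified segmentation
# via nipype with 2 Gaussians per brain tissue class, as recommended in the abstract.
# File paths ('T1.nii', 'TPM.nii') are placeholders; the non-brain Gaussian counts
# follow common SPM12 defaults.
from nipype.interfaces import spm

seg = spm.NewSegment()
seg.inputs.channel_files = 'T1.nii'                     # unimodal input: T1-weighted image only
seg.inputs.channel_info = (0.0001, 60, (False, False))  # bias regularisation, bias FWHM, (save field, save corrected)

tpm = 'TPM.nii'  # SPM12 tissue probability maps
# Each tissue: ((TPM file, class index), n Gaussians, (native, DARTEL), (unmodulated, modulated))
seg.inputs.tissues = [
    ((tpm, 1), 2, (True, False), (False, False)),   # grey matter: 2 Gaussians
    ((tpm, 2), 2, (True, False), (False, False)),   # white matter: 2 Gaussians
    ((tpm, 3), 2, (True, False), (False, False)),   # CSF: 2 Gaussians
    ((tpm, 4), 3, (False, False), (False, False)),  # bone (default)
    ((tpm, 5), 4, (False, False), (False, False)),  # soft tissue (default)
    ((tpm, 6), 2, (False, False), (False, False)),  # air/background (default)
]
seg.run()
```

For the multimodal condition described in the abstract, the T2-weighted image would be added as a second channel in the SPM batch (in recent nipype versions, via the multi-channel segmentation interface), with the same tissue settings, including the number of Gaussians per class.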
