Modality-Agnostic Decoding of Vision and Language from fMRI

Abstract

Humans can perform tasks on incoming signals regardless of the modality through which the brain perceives them, thanks to representations that are agnostic to the stimulus modality. Investigating such modality-agnostic representations requires experimental datasets with multiple modalities of presentation. In this paper, we introduce and analyze SemReps-8K, a new large-scale fMRI dataset of 6 subjects viewing both images and short textual descriptions of such images, as well as a condition in which the subjects imagined visual scenes. The multimodal nature of this dataset enables the development of modality-agnostic decoders, trained to predict which stimulus a subject is perceiving irrespective of the modality in which it is presented. Further, we performed a searchlight analysis revealing that large areas of the brain contain modality-agnostic representations. These areas are also particularly suitable for decoding visual scenes from the mental imagery condition. The dataset will be made publicly available.
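
To make the decoding setup concrete, the sketch below trains a single decoder on fMRI responses pooled across image and text trials and retrieves the presented stimulus by similarity in a shared embedding space. This is a minimal illustration, not the paper's method: the ridge regression, the CLIP-style shared image–text embedding space, and all data arrays are assumptions chosen for the example.

```python
# Minimal sketch of a modality-agnostic decoder, assuming a linear (ridge)
# map from voxel patterns to a shared image-text embedding space (e.g.,
# CLIP-style). All arrays below are hypothetical placeholders; the abstract
# does not specify this exact pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import normalize

rng = np.random.default_rng(0)
n_train, n_test, n_voxels, emb_dim = 800, 100, 5000, 512

# fMRI responses pooled across image and text trials (hypothetical data).
X_train = rng.standard_normal((n_train, n_voxels))
X_test = rng.standard_normal((n_test, n_voxels))

# Shared embeddings of the stimulus shown on each trial, regardless of
# whether the subject saw the image or read its description.
Y_train = normalize(rng.standard_normal((n_train, emb_dim)))
Y_test = normalize(rng.standard_normal((n_test, emb_dim)))

# Fit one decoder on trials from both modalities at once: training across
# modalities is what makes the decoder modality-agnostic, as opposed to
# fitting a separate model per modality.
decoder = Ridge(alpha=1e3).fit(X_train, Y_train)

# Decode held-out trials and retrieve the stimulus by cosine similarity
# against the candidate set.
pred = normalize(decoder.predict(X_test))
similarity = pred @ Y_test.T                  # (n_test, n_test)
retrieved = similarity.argmax(axis=1)
top1_accuracy = (retrieved == np.arange(n_test)).mean()
print(f"top-1 retrieval accuracy: {top1_accuracy:.3f}")
```

On the random placeholder data, accuracy stays near chance (1/n_test); on real recordings, above-chance retrieval for trials of a modality the decoder was also evaluated across would indicate that the voxels carry modality-agnostic stimulus information.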
