Brainrender: a python-based software for visualizing anatomically registered data

Curation statements for this article:
  • Curated by eLife

    Evaluation Summary:

    This paper by Claudi et al. will be of interest to any scientist working in neuroanatomy and related fields. Dissemination of scientific results is one of the key products of science, and the software presented here will help scientists achieve that task more easily than ever before.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #2 agreed to share their name with the authors.)

Abstract

The recent development of high-resolution three-dimensional (3D) digital brain atlases and high-throughput brain-wide imaging techniques has fueled the generation of large datasets that can be registered to a common reference frame. This registration facilitates integrating data from different sources and resolutions to assemble rich multidimensional datasets. Generating insights from these new types of datasets depends critically on the ability to easily visualize and explore the data in an interactive manner. This is, however, a challenging task. Currently available software is dedicated to single atlases, model species or data types, and generating 3D renderings that merge anatomically registered data from diverse sources requires extensive development and programming skills. To address this challenge, we have developed brainrender: a generic, open-source Python package for simultaneous and interactive visualization of multidimensional datasets registered to brain atlases. Brainrender has been designed to facilitate the creation of complex custom renderings and can be used programmatically or through a graphical user interface. It can easily render different data types in the same visualization, including user-generated data, and enables seamless use of different brain atlases using the same code base. In addition, brainrender generates high-quality visualizations that can be used interactively and exported as high-resolution figures and animated videos. By facilitating the visualization of anatomically registered data, brainrender should accelerate the analysis, interpretation, and dissemination of brain-wide multidimensional data.
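As a minimal illustration of the programmatic use described above, the sketch below (based on brainrender's online documentation; class and argument names may vary slightly between versions) builds a scene registered to one atlas, adds a brain region, and renders it interactively. Switching atlas only requires changing the atlas name passed to the scene.

    # Minimal, illustrative sketch; "allen_mouse_25um" and "mpin_zfish_1um" are
    # example atlas names exposed through the BrainGlobe atlas API.
    from brainrender import Scene

    scene = Scene(atlas_name="allen_mouse_25um")  # the same code base works with other atlases,
                                                  # e.g. "mpin_zfish_1um" (region acronyms are atlas-specific)
    scene.add_brain_region("TH", alpha=0.4)       # add a brain region mesh (here, the thalamus)
    scene.render()                                # open the interactive 3D viewer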

Article activity feed

  1. Response to: Reviewer #2 (Public Review):

    [...] Like all software, brainrender still has limitations. For example, it's unclear from the paper exactly what input and output formats are supported, particularly from the GUI. Additionally, at publication, using the software still requires a Python installation, with all the complexity that currently entails. However, thanks to the rich and growing scientific Python ecosystem, including application packaging tools, I am confident that the authors, perhaps in collaboration with some readers, will be able to address these issues as the software matures.

    We thank the reviewer for this positive evaluation of brainrender and for the helpful recommendations on how to improve our manuscript, which we have followed.

  2. Response to Reviewer #1 (Public Review):

    Claudi et al. present a new tool for visualizing brain maps. In the era of new technologies to clear and analyze brains of model organisms, new tools are becoming increasingly important for researchers to interact with this data. Here, the authors report on a new tool for just this: exploring, visualizing, and rendering this high dimensional (and large) data. This tool will be of great interest to researchers who need to visualize multiple brains within several key model organisms.

    We thank the reviewer for the positive comments on our work.

    The authors provide a nice overview of the tool, and the reader can quickly see its utility. What I would like to ask the authors to add is more information about computational resources and computing time for rendering; i.e. in the paper, they state "Brainrender uses vedo as the rendering engine (Musy et al., 2019), a state-of-the-art tool that enables fast, high quality rendering with minimal hardware requirements (e.g.: no dedicated GPU is needed)" - but would performance be improved with a GPU, runtimes, etc?

    We have now developed a set of benchmarking tests to quantify the performance of brainrender. We tested four machine configurations, with and without GPUs and across three operating systems. In general, we found that a GPU increases the framerate of interactive renderings by a factor of ~3.5. These methods and data are now shown in the Methods section (Benchmark Tests, Tables 2 and 3) and presented in the Results section:

    "High-resolution renderings of rich 3D scenes can be produced rapidly (e.g., 10,000 cells in less than 2 secs) in standard laptop or desktop configurations. Benchmarking tests across different operating systems and machine configurations show that using a GPU can increase the framerate of interactive renderings by a factor of ~3.5 (see Tables 2 and 3 in Methods)."

    This performance increase, however, depends on the complexity of the pre-processing steps, such as data loading and mesh generation, which run on the CPU. As one of the main goals of brainrender is to produce high-resolution visualisations, we have made rendering quality independent of the hardware configuration, which only affects rendering time.
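    For illustration, a timing test along the lines of the cell-rendering benchmark quoted above could be sketched as follows. This is a simplified, hypothetical example rather than our exact benchmark code: the random coordinates, atlas name, and timing approach are assumptions for the sketch.

        import time
        import numpy as np
        from brainrender import Scene
        from brainrender.actors import Points

        # Illustrative benchmark: 10,000 randomly placed "cells" in atlas space (coordinates in microns)
        coordinates = np.random.uniform(low=1000, high=8000, size=(10000, 3))

        start = time.perf_counter()
        scene = Scene(atlas_name="allen_mouse_25um")
        scene.add(Points(coordinates, radius=20, colors="salmon"))
        scene.render(interactive=False)  # build the rendering without blocking on the interactive window
        print(f"Scene with 10,000 cells rendered in {time.perf_counter() - start:.2f} s")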

    We have also added a new discussion paragraph where we clarify that brainrender has been optimised for rendering quality rather than performance, and directly compare its rendering performance with that of alternative software tools (Page 8, Line 220).

    I would also be happy to see the limitations and directions expanded. For example, napari is a powerful n-dimensional viewer, how does performance compare (i.e. any plans for a napari plug-in, or ImageJ plug-in, or is this not compatible with this software's vision?). How does brainrender compare (run time, computing wise) to Blender, for example, or another rendering tool standard in fields outside of neuroscience?

    As suggested by all reviewers, we have now removed the Conclusions section and replaced it with a more extensive section on Limitations and Future Directions. As mentioned above, this includes a paragraph where we discuss the performance of brainrender in comparison to alternative tools. Although we do not envision the development of plugins for other software in the near future, we identify performance as an area that could be targeted for improvement in future versions of brainrender. Additional options for improvement include simplifying the installation process and expanding the GUI to minimize the need for scripting.

    As we write in the discussion and expand here, when comparing brainrender with other software it is worth emphasizing that, unlike most software tools (e.g., napari and ImageJ), brainrender is intended to work primarily with mesh data rather than three-dimensional image data. Although it can display image data (e.g., with the Volume actor), this functionality is not as fully developed as that for mesh data. To compare the performance of brainrender and napari at visualizing image data, we set up a benchmark test that involved loading and rendering, 10 times, gene expression data (51x41x67 voxels) for the gene 'Gpr161' from the Allen Atlas database (downloaded with brainrender and saved to a numpy file). On a MacBook Pro with a Radeon Pro 560X 4 GB GPU, napari was approximately 5x faster than brainrender. It is worth noting, however, that in addition to loading and visualizing the data, brainrender thresholds and interpolates the voxel data to create much clearer visualizations.
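    For concreteness, this comparison could be sketched roughly as follows. This is a hypothetical, simplified version of the benchmark: the file name is a placeholder for the saved numpy array, and the Volume actor's exact keyword arguments may differ between brainrender versions.

        import time
        import numpy as np
        import napari
        from brainrender import Scene
        from brainrender.actors import Volume

        data = np.load("gpr161_expression.npy")  # placeholder file holding the 51x41x67 voxel array

        # brainrender: thresholds/interpolates the voxel data before rendering it in a scene
        start = time.perf_counter()
        scene = Scene(atlas_name="allen_mouse_25um")
        scene.add(Volume(data))
        scene.render(interactive=False)
        print(f"brainrender: {time.perf_counter() - start:.2f} s")

        # napari: displays the raw array as an image layer
        start = time.perf_counter()
        viewer = napari.Viewer(show=False)
        viewer.add_image(data, name="Gpr161")
        print(f"napari: {time.perf_counter() - start:.2f} s")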

    In a second benchmark test, we compared performance for rendering mesh data. Napari, however, does not provide functionality to load mesh data from file, requiring users to rely on external Python libraries for this task. For this benchmark we used brainrender to load the mesh data before visualizing it in either napari or brainrender (only the time necessary to create the rendering was measured; the time for loading the data was ignored). The benchmark involved rendering 400 brain regions from the Allen mouse brain atlas hierarchy. This test showed that brainrender was approximately 20x faster at visualizing mesh data. Additionally, napari's mesh rendering functionality is somewhat limited, and we found it very difficult to achieve good-looking renderings of multiple meshes at once. We were not able to test ImageJ, as its plug-in for visualizing mesh data did not work on our machine.
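    The brainrender side of this mesh benchmark could be sketched as follows. Again, this is a simplified, hypothetical example: collecting region acronyms through the atlas lookup table is an assumption based on the BrainGlobe atlas API, and only the rendering step is timed.

        import time
        from brainrender import Scene

        scene = Scene(atlas_name="allen_mouse_25um")

        # Collect acronyms for 400 regions from the atlas hierarchy
        # (lookup_df is provided by the BrainGlobe atlas API; attribute names may vary)
        acronyms = scene.atlas.lookup_df["acronym"].tolist()[:400]
        scene.add_brain_region(*acronyms, alpha=0.3)  # mesh loading, not included in the timing

        start = time.perf_counter()
        scene.render(interactive=False)  # only the rendering step is timed
        print(f"Rendered {len(acronyms)} regions in {time.perf_counter() - start:.2f} s")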

    Regarding dedicated rendering software such as Blender, these tools are indeed capable of handling mesh data with performance that surpasses brainrender's. Although we did not compare the two directly, our benchmarking tests revealed that on most machines brainrender's performance begins to worsen noticeably when the number of rendered vertices exceeds five million, a load that Blender handles easily. However, using tools such as Blender requires learning to operate complex software that most researchers would rarely use beyond the creation of simple renderings. Moreover, it would be up to the users themselves to download, store, and access mesh data for the anatomical atlases.

    These points are now presented in Limitations and Future Directions.

    The methods are short (maybe check for all open source code citations are included, as needed), but they have excellent docs elsewhere; it would be nice to have minimal code examples in the methods though, i.e. "it's as easy as pip install brainrender" … or such.

    We have added the relevant citations for the open-source code, and we have added additional code examples as suggested by the reviewer. We now also more clearly point to the online documentation as an additional source of methods and code examples.

    Lastly, I congratulate the authors on a clear paper, excellent documentation (https://docs.brainrender.info/), and I believe this is a very nice contribution to the community.

    We thank the reviewer for the kind comments and for the very useful suggestions for improvements.

  3. Reviewer #2 (Public Review):

    Open source software for data rendering in neuroanatomy is either too specific to be generically useful (for example, designed for only one specific brain atlas, or brain atlases of a single species), or too general, and thus not integrated with atlases or other relevant software. Additionally, despite the growing popularity of the Python programming language in science, 3D rendering tools in Python are still very limited. Claudi et al. have sought to narrow both of these gaps with brainrender. Biologists can use their software to display co-registered data on any atlas available through their AtlasAPI, explore the data in 3D, and create publication quality screenshots and animations.

    The authors should be commended for the level of modularity they have achieved in the design of their software. Brainrender depends on atlasAPI (Claudi et al., 2020), which means that compatibility for new atlases can be added in that package and brainrender will support them automatically. Similarly, by supporting standard data storage formats across the board, brainrender lets users import data registered with brainreg (Tyson et al., 2020), but does not depend on brainreg for its functionality.

    Like all software, brainrender still has limitations. For example, it's unclear from the paper exactly what input and output formats are supported, particularly from the GUI. Additionally, at publication, using the software still requires a Python installation, with all the complexity that currently entails. However, thanks to the rich and growing scientific Python ecosystem, including application packaging tools, I am confident that the authors, perhaps in collaboration with some readers, will be able to address these issues as the software matures.

  4. Reviewer #1 (Public Review):

    Claudi et al. present a new tool for visualizing brain maps. In the era of new technologies to clear and analyze brains of model organisms, new tools are becoming increasingly important for researchers to interact with this data. Here, the authors report on a new tool for just this: exploring, visualizing, and rendering this high dimensional (and large) data. This tool will be of great interest to researchers who need to visualize multiple brains within several key model organisms.

    The authors provide a nice overview of the tool, and the reader can quickly see its utility. What I would like to ask the authors to add is more information about computational resources and computing time for rendering; i.e. in the paper, they state "Brainrender uses vedo as the rendering engine (Musy et al., 2019), a state-of-the-art tool that enables fast, high quality rendering with minimal hardware requirements (e.g.: no dedicated GPU is needed)" - but would performance be improved with a GPU, runtimes, etc?

    I would also be happy to see the limitations and directions expanded. For example, napari is a powerful n-dimensional viewer, how does performance compare (i.e. any plans for a napari plug-in, or ImageJ plug-in, or is this not compatible with this software's vision?). How does brainrender compare (run time, computing wise) to Blender, for example, or another rendering tool standard in fields outside of neuroscience?

    The methods are short (maybe check for all open source code citations are included, as needed), but they have excellent docs elsewhere; it would be nice to have minimal code examples in the methods though, i.e. "it's as easy as pip install brainrender" ... or such.

    Lastly, I congratulate the authors on a clear paper, excellent documentation (https://docs.brainrender.info/), and I believe this is a very nice contribution to the community.

  5. Evaluation Summary:

    This paper by Claudi et al. will be of interest to any scientist working in neuroanatomy and related fields. Dissemination of scientific results is one of the key products of science, and the software presented here will help scientists achieve that task more easily than ever before.

    (This preprint has been reviewed by eLife. We include the public reviews from the reviewers here; the authors also receive private feedback with suggested changes to the manuscript. Reviewer #2 agreed to share their name with the authors.)