SpheroScan: A User-Friendly Deep Learning Tool for Spheroid Image Analysis


Abstract

Background

In recent years, three-dimensional (3D) spheroid models have become increasingly popular in scientific research because they provide a physiologically relevant microenvironment that more closely mimics in vivo conditions. 3D spheroid assays offer better insight into cellular behavior, drug efficacy, and toxicity than traditional two-dimensional cell culture methods. However, their use is impeded by the absence of automated, user-friendly tools for spheroid image analysis, which adversely affects the reproducibility and throughput of these assays.

Results

To address these issues, we have developed a fully automated, web-based tool called SpheroScan, which uses the Mask Region-based Convolutional Neural Network (Mask R-CNN) deep learning framework for object detection and instance segmentation. To obtain a model applicable to spheroid images from a range of experimental conditions, we trained it on spheroid images captured with the IncuCyte Live-Cell Analysis System and with a conventional microscope. Performance evaluation of the trained model on validation and test datasets shows promising results.
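
As a hedged illustration (not the authors' exact pipeline), the sketch below shows how a trained Mask R-CNN model can be applied to a spheroid image with Detectron2, which SpheroScan builds on; the weights file, input image, and confidence threshold are hypothetical.

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
# Start from the standard Mask R-CNN config, then load spheroid weights.
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = "spheroid_model.pth"       # hypothetical trained weights
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1            # single class: "spheroid"
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5    # illustrative confidence cutoff
predictor = DefaultPredictor(cfg)

image = cv2.imread("well_A1.png")              # hypothetical input image
outputs = predictor(image)
masks = outputs["instances"].to("cpu").pred_masks.numpy()  # (N, H, W) booleans
areas = masks.reshape(len(masks), -1).sum(axis=1)          # pixel areas
print(f"Detected {len(masks)} spheroid(s); projected areas (px): {areas}")
```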

Conclusion

SpheroScan allows easy analysis of large numbers of images and provides interactive visualization features for a more in-depth understanding of the data. Our tool represents a significant advancement in the analysis of spheroid images and will facilitate the widespread adoption of 3D spheroid models in scientific research. The source code and a detailed tutorial for SpheroScan are available at https://github.com/FunctionalUrology/SpheroScan.

Key Points

  • A deep learning model was trained to detect and segment spheroids in images from conventional microscopes and the IncuCyte platform.

  • The model performed well on both image types, with total loss decreasing markedly during training.

  • A web tool called SpheroScan was developed to facilitate the analysis of spheroid images; it includes a prediction module and a visualization module (a usage sketch follows this list).

  • SpheroScan is efficient and scalable, making it possible to handle large datasets with ease.

  • SpheroScan is user-friendly and accessible to researchers, making it a valuable resource for the analysis of spheroid image data.
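
The following minimal sketch illustrates how the output of the two modules could be consumed downstream; the CSV column names ("image", "area", "intensity") are illustrative assumptions, not the tool's documented schema.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical prediction-module output: one row per detected spheroid.
df = pd.read_csv("predictions.csv")
summary = df.groupby("image")[["area", "intensity"]].mean()

# Recreate visualization-module-style plots and export them as PNG.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
summary["area"].plot(kind="bar", ax=ax1, title="Mean spheroid area (px)")
summary["intensity"].plot(kind="bar", ax=ax2, title="Mean intensity")
fig.tight_layout()
fig.savefig("spheroid_summary.png")
```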

Article activity feed

  1. This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giad082), which carries out open, named peer review. This review is published under a CC BY 4.0 license:

    **Reviewer name: Francesco Pampaloni**

    This study represents a significant contribution to the field of screening and analysis of three-dimensional cell cultures. The demand for reliable and user-friendly image processing tools to extract quantitative data from large numbers of spheroids or other three-dimensional tissue models is substantial. The authors have developed a tool that aims to address this need by providing a straightforward method to extract the projected area and intensity of individual cellular spheroids imaged with bright-field microscopy. The tool is compatible with "Incucyte" microscopes or any other automated microscope capable of imaging multiple specimens, typically found in high-density multiwell plates.

    An admirable aspect of this work is the authors' decision to make all the code and pipeline openly available on GitHub. This openness allows other scientists to test and validate the code, promoting transparency and collaboration in the scientific community. However, several improvements should be made to the manuscript prior to publication.

    One important aspect that the authors should address is the suitability, rationale, and extent of using a neural network-based segmentation approach for the specific analysis described in the manuscript: segmentation of single bright-field images of spheroids.

    While neural networks are anticipated to play an increasingly important role in microscopy data segmentation in the coming years, they are not a universal solution. Although there may be segmentation tasks that are challenging to accomplish with traditional approaches, where neural networks can be highly effective, other segmentation tasks can be successfully performed using conventional strategies. For example, in our research group, we were able to reliably segment densely populated bright-field images containing numerous organoids in a single field of view using a pipeline based on the ImageJ plugin MorphoLibJ (see references: https://doi.org/10.1093/bioinformatics/btw413 and https://doi.org/10.1186/s12915-021-00958-w). Therefore, it would be informative and valuable for readers if the authors compared the results obtained from the neural network with those achieved by employing simple thresholding techniques (such as Otsu or Watershed) on the same dataset, as demonstrated in a similar study (reference: https://doi.org/10.1038/s41598-021-94217-1, Figure 5).
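
    To make the suggested baseline concrete, below is a minimal sketch of Otsu thresholding with scikit-image; the file name, the assumption that spheroids appear darker than the background, and the post-processing parameters are illustrative, not taken from the manuscript.

    ```python
    from skimage import filters, io, measure, morphology

    # Load a bright-field image as grayscale (hypothetical file name).
    img = io.imread("spheroid_brightfield.png", as_gray=True)

    # Global Otsu threshold; spheroids are assumed darker than background.
    mask = img < filters.threshold_otsu(img)

    # Simple cleanup: drop small debris and smooth the mask boundary.
    mask = morphology.remove_small_objects(mask, min_size=500)
    mask = morphology.binary_closing(mask, morphology.disk(5))

    # Measure each connected component, as the neural network would.
    labels = measure.label(mask)
    for region in measure.regionprops(labels, intensity_image=img):
        print(f"Object {region.label}: area={region.area} px, "
              f"mean intensity={region.mean_intensity:.3f}")
    ```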

    Furthermore, to address the limitations of the model, the authors should provide specific examples (preferably in the supplementary material, due to space constraints) of incorrect segmentations or artifacts that arise from applying the neural network to the data. For instance, it would be beneficial to explore scenarios where spheroids are surrounded by cellular debris or where multiple spheroids are present in the field of view. These real-life situations are common, and it is important to provide insight into the challenges that may arise when spheroid images are not pristine.

  2. This work has been peer reviewed in GigaScience (see https://doi.org/10.1093/gigascience/giad082), which carries out open, named peer review. This review is published under a CC BY 4.0 license:

    **Reviewer name: Kevin Tröndle**

    The authors present a "Technical Note" about an open-source web tool called SpheroScan. As input, users can upload (large batches of) 2D bright-field spheroid images. The tool delivers two outputs: (1) a prediction module, which creates a file with the area and intensity of detected spheroids (CSV), and (2) a visualization module, which produces plots of the corresponding parameters (PNG). Performance was tested on 480 Incucyte images and 423 microscope images, with 336 (70%) and 265 used for training, 144 (30%) and 117 for validation, and 50 images for testing, respectively. The framework is based on Mask R-CNN and the Detectron2 library. Performance was evaluated against manual annotation (VGG Annotator) using Intersection over Union (IoU), which determines the overlap between the predicted and ground-truth regions, at thresholds from 0.5 to 0.95, yielding Average Precision (AP) values for masking of 0.937 and 0.972 (test) and 0.927 and 0.97 (validation), and AP values for bounding boxes of 0.899 and 0.977 (test) and 0.89 and 0.944 (validation). They show a linear runtime, demonstrated with datasets of different sizes (about 1 s per image for masking on a 16-core CPU, 64 GB RAM machine). The tool is available on GitHub and claimed to be available as a web tool at spheroscan.onrender.com.

    General evaluation: The concept of the tool serves some important needs of 3D cell culture-based assays: automated, standardized, high-throughput image analysis. As such, it represents added value for the research field.
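
    For reference, the IoU measure described above is straightforward to compute for a pair of binary masks; the sketch below is a generic NumPy illustration, not the authors' evaluation code.

    ```python
    import numpy as np

    def mask_iou(pred: np.ndarray, truth: np.ndarray) -> float:
        """IoU = |pred AND truth| / |pred OR truth| for boolean masks."""
        intersection = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        return float(intersection) / union if union > 0 else 0.0

    # COCO-style AP averages precision over IoU thresholds 0.50 to 0.95
    # in steps of 0.05; a prediction counts as correct at threshold t
    # only if its IoU with the ground-truth mask is at least t.
    thresholds = np.arange(0.50, 1.00, 0.05)
    ```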

    However, it remains open how high the impact and reproducibility will be, and how likely other researchers are to adopt the tool. This is due to some significant limitations in accessibility (i.e., a non-permanent or non-functional web tool), the potential restriction of input data (i.e., bright-field only, not validated with external data), and the limited options for analysis of the metadata (i.e., area and intensity only). The greatest value stems from the possibility to access a web interface, which is easy to use and will ideally be equipped with additional functionality in the future.

    Comment 1 (minor): The presented tool uses the Mask R-CNN deep learning model in its image processing pipeline. Several image segmentation tools based on this or other models are well established and already implemented in several commercial imaging devices; they allow segmentation of cell-containing image areas, e.g., to determine confluency or cell migration in "wound healing assays", and are mainly optimized for 2D cultures but also applicable to 2D images of 3D spheroids. The concept of automated image segmentation is thus not novel and only meets the journal's input criterion as an "update or adaptation of existing" tool. The state of the art and preliminary work are not sufficiently referenced. Several similar and alternative (open-source) tools exist and should be mentioned in the manuscript, e.g., (Lacalle et al., 2021; Piccinini et al., 2023; Trossbach et al., 2023), to give only a few examples.

    Comment 2 (major): The authors claim to present a user-friendly open-source web tool. The Python project is available on GitHub and on a demo server (https://spheroscan.onrender.com/), where the web interface can be accessed. Unfortunately, the mentioned web tool is not functional; the website states: "This is a demonstration server and the prediction module is not available for use. To utilize the prediction functionality, please run SpheroScan on your local machine." This significantly limits the applicability of the presented tool to users who are able to execute Python code on their local hardware. Therefore, the demo server should either present a functional user interface (recommended), or the statement should be removed from the manuscript, which would limit the impact of the submission significantly.

    Comment 3 (major): The presented algorithm was trained exclusively on internal bright-field images from the Incucyte and microscope platforms. Furthermore, two distinct models were generated, working with either Incucyte or microscope images. It remains unclear how the algorithm will perform on external data from prospective users. The fact that two distinct models had to be trained for different image sources (i.e., two different platforms) indicates limited robustness in this regard. This is clearly a general problem of image processing algorithms, but one that will stand in the way of applicability by external users, who will certainly use other imaging techniques. Since the web tool interface is not functional at this point, the authors will also not be able to evaluate or improve on this after publication. At least one performance test with external data, obtained from an ideally blinded user, should be performed to further elaborate on this.

    Comment 4 (major): Many assays nowadays use fluorescent labels, for example to calculate cell ratios within 3D arrangements, e.g., for cell viability or the expression of certain proteins. The authors do not state whether the algorithm (or future iterations thereof) is or will be able to process multi-channel microscope images of spheroids. This is a significant limitation of the presented work and should at least be mentioned in the corresponding section. Furthermore, a proof-of-concept test run with fluorescent images could be performed to assess the algorithm's performance and derive potentially necessary adaptations for future versions.
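
    To illustrate what such multi-channel support might look like, the sketch below reuses a mask derived from one channel to quantify intensities in the others; the file names and the (C, H, W) channel layout are assumptions.

    ```python
    from skimage import io

    # Hypothetical multi-channel stack, shape (C, H, W).
    stack = io.imread("spheroid_multichannel.tif")

    # Hypothetical binary mask, e.g., segmented from the bright-field channel.
    mask = io.imread("spheroid_mask.png").astype(bool)

    # Mean fluorescence inside the spheroid, per channel.
    for c in range(stack.shape[0]):
        print(f"Channel {c}: mean intensity = {stack[c][mask].mean():.2f}")
    ```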

    Comment 5 (minor): The output of the tool is a list of detected spheroids with the corresponding projected area (2D) and average bright-field intensity within that area. The usability of these two parameters is limited to specific assays, such as the mentioned use case of investigating collagen gel contraction assays. Several other parameters of interest could easily be derived from the metadata, such as roundness, volume estimation (assuming a spheroidal shape), or even cell count estimation. This should again be mentioned in the "limitations and considerations" section.
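
    As a sketch of how such read-outs could be derived from the existing masks, the snippet below computes roundness and a sphere-based volume estimate with scikit-image; both formulas assume an approximately circular projection, as the reviewer suggests.

    ```python
    import numpy as np
    from skimage import measure

    def derived_metrics(labels: np.ndarray) -> list[dict]:
        """Roundness and volume estimates per labeled spheroid mask."""
        rows = []
        for r in measure.regionprops(labels):
            roundness = 4 * np.pi * r.area / r.perimeter ** 2  # 1.0 = circle
            radius = np.sqrt(r.area / np.pi)     # radius of equivalent circle
            volume = (4 / 3) * np.pi * radius ** 3  # assumes a sphere
            rows.append({"label": r.label, "roundness": roundness,
                         "volume_px3": volume})
        return rows
    ```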