Segment Anything for Microscopy

Abstract

We present Segment Anything for Microscopy, a tool for interactive and automatic segmentation and tracking of objects in multi-dimensional microscopy data. Our method is based on Segment Anything, a vision foundation model for image segmentation. We extend it by training specialized models for microscopy data that significantly improve segmentation quality for a wide range of imaging conditions. We also implement annotation tools for interactive (volumetric) segmentation and tracking that speed up data annotation significantly compared to established tools. Our work constitutes the first application of vision foundation models to microscopy, laying the groundwork for solving image analysis problems in this domain with a small set of powerful deep learning architectures.

Article activity feed

  1. Thanks for the positive feedback! And especially good to hear that the installation and segmentation worked well for you!

     For tracking: overall this is still the most experimental workflow, and I agree that we need to document it better, especially how to adapt the parameters for fast-moving cells (see more on future plans below). It would also be helpful for us to see the example where you struggled with it. If you can share this data publicly, then it would be best to open a topic on https://forum.image.sc/ with the tag 'micro-sam' and we could discuss it there. If you can't share it publicly, feel free to reach out via email (constantin.pape@informatik.uni-goettingen.de).

     Regarding the overall plans: I want to extend the user tutorials and usability for tracking (and other functionality) soon. However, we are currently working on fully implementing napari plugins for micro-sam, which will take a couple more weeks to a few months. After that I will update and extend the tutorials.

  2. The ability to segment cells from images enables unbiased and high-throughput phenotyping for biological studies. I was excited to come across this preprint on Segment Anything for Microscopy, which builds upon the promising Segment Anything Model developed by a team at Facebook. Your model particularly excels at extending the utility of the Segment Anything Model to cellular images and their associated biological features.

    Having tried Segment Anything for Microscopy, I'd like to offer some comments and questions:

    1. The software was straightforward to install, and I commend the clarity of the instructions provided on the GitHub repository. I was up and running within 30 minutes, which is excellent.

    2. I was impressed with the automatic segmentation feature in 2D mode, which successfully segmented an algal cell in my test image without additional training.

    3. For tracking mode, however, I encountered some difficulties when testing an image stack of an algal cell swimming in a microchamber. I allotted 30 minutes for this test, expecting this to be sufficient based on my experience with similar tools. The algorithm seemed to struggle with tracking the cell when it moved significantly between frames. Could you include a section in the documentation that describes how to adjust tracking parameters to account for different velocities of cell movement? This would be a useful addition for users like me who deal with fast-moving cells.

    4. I couldn't find any documentation regarding batch processing of images. Is this feature available, or do you have plans to implement it?
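    To make the tracking issue in point 3 concrete, here is a minimal, self-contained sketch of greedy overlap-based (IoU) linking, a common baseline for tracking-by-segmentation. This is not micro-sam's actual implementation, and the names (`link_tracks`, `min_iou`) are illustrative: once the masks of a fast-moving cell no longer overlap between consecutive frames, the greedy link breaks and a new track is started, which matches the behavior I observed.

    ```python
    # Illustrative sketch only (not micro-sam's algorithm): greedy
    # frame-to-frame linking of segmentation masks by IoU overlap.

    def iou(a, b):
        """IoU of two masks given as sets of (row, col) pixel coordinates."""
        union = len(a | b)
        return len(a & b) / union if union else 0.0

    def link_tracks(frames, min_iou=0.5):
        """Greedily link per-frame masks into tracks.

        frames: list of frames, each a list of masks (sets of pixel coords).
        Returns a list of tracks, each a list of (frame_index, mask_index).
        """
        tracks = []
        active = []  # pairs of (track, last mask of that track)
        for t, masks in enumerate(frames):
            unmatched = list(range(len(masks)))
            next_active = []
            for track, last in active:
                # pick the best-overlapping unmatched mask in this frame
                best = max(unmatched, key=lambda i: iou(last, masks[i]),
                           default=None)
                if best is not None and iou(last, masks[best]) >= min_iou:
                    track.append((t, best))
                    unmatched.remove(best)
                    next_active.append((track, masks[best]))
                # otherwise the track ends: the object "jumped" too far
            for i in unmatched:  # unmatched masks start new tracks
                track = [(t, i)]
                tracks.append(track)
                next_active.append((track, masks[i]))
            active = next_active
        return tracks
    ```

    In this toy model, a cell that moves by a small fraction of its diameter per frame yields a single track, while one that jumps past its own footprint is split into two tracks. A tracker meant for fast-moving cells would need motion prediction or a larger search radius to bridge such gaps, which is presumably what the tracking parameters control.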

    Thank you for your contribution to the field. I look forward to future updates and improvements to Segment Anything for Microscopy. I used ChatGPT to revise an initial draft of my comments for clarity and accurate word choice, but I verify that all of the text accurately reflects my review of the software.

    Galo Garcia, Scientist at Arcadia Science