Determining growth rates from bright-field images of budding cells through identifying overlaps

Curation statements for this article:
  • Curated by eLife

Abstract

Much of biochemical regulation ultimately controls growth rate, particularly in microbes. Although time-lapse microscopy visualises cells, determining their growth rates is challenging, particularly for those that divide asymmetrically, like Saccharomyces cerevisiae, because cells often overlap in images. Here, we present the Birth Annotator for Budding Yeast (BABY), an algorithm to determine single-cell growth rates from label-free images. Using a convolutional neural network, BABY resolves overlaps through separating cells by size and assigns buds to mothers by identifying bud necks. BABY uses machine learning to track cells and determine lineages and estimates growth rates as the rates of change of volumes. Using BABY and a microfluidic device, we show that bud growth is likely first sizer- then timer-controlled, that the nuclear concentration of Sfp1, a regulator of ribosome biogenesis, varies before the growth rate does, and that growth rate can be used for real-time control. By estimating single-cell growth rates and so fitness, BABY should generate much biological insight.
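
The abstract's last step, estimating growth rates as rates of change of volumes, can be made concrete with a minimal sketch: smooth a single cell's volume time series and take its first derivative. The Savitzky-Golay filter and its settings below are illustrative assumptions, not necessarily the paper's estimator.

```python
# Minimal sketch (not the paper's estimator): growth rate as the
# smoothed first derivative of a single cell's volume time series.
# The Savitzky-Golay window and polynomial order are arbitrary choices.
import numpy as np
from scipy.signal import savgol_filter

def growth_rate(times_h, volumes_um3, window=7, polyorder=3):
    """Return dV/dt in µm³/h, assuming regularly spaced time points."""
    dt = times_h[1] - times_h[0]
    return savgol_filter(volumes_um3, window, polyorder, deriv=1, delta=dt)

# Example: a cell growing exponentially at ~0.35/h, sampled every 15 min
t = np.arange(0, 5, 0.25)          # hours
v = 20 * np.exp(0.35 * t)          # volume in µm³
print(growth_rate(t, v)[:3])       # close to 0.35 * v at early times
```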

Article activity feed

  1. Author Response

    Reviewer #3 (Public Review):

    In the work presented in “A label-free method to track individuals and lineages of budding cells”, Pietsch et al. use multiple machine learning approaches to identify, delineate, and track yeast cells in microscopy images.

    I commend the authors for putting a lot of work into this manuscript and coming up with many new ideas to solve their problem of interest. However, throughout the manuscript, I felt that this manuscript does not work well as a ‘methods’ paper. Maybe it should have been a paper about the biology, which I find very interesting. My main reason for finding this manuscript not well-suited for a methods paper is that their approach as well as the goal are so specific that it may not be readily adopted by others. I would like to list a number of limitations and particularities of their set-up to support this conclusion:

    The whole problem of small cells not being in focus in a single plane is in large part due to the high ceiling of the authors’ microfluidic chips (6 µm according to Crane et al.). Other microfluidic chips have much lower ceilings, keeping cells essentially in 2D. If Pietsch et al. used a lower ceiling, small cells would presumably not be out of focus so frequently nor appear to overlap with other cells, and the usual single z-stack approach would suffice. (Another configuration in which cells appear to overlap is in wells, e.g., 96-well plates, which are similarly not ideal for imaging.) Thus, for the problem of interest to Pietsch et al., I would have used a different chip first and then seen what remains of the identification, segmentation, and tracking problem.

    The microfluidic devices we used in this paper are 4.6 µm high. If we make the ceiling any lower (devices that pin cells under PDMS pads are 4 µm high where the pinning occurs; Lee et al., Proc Natl Acad Sci USA 2012), we will start to bias the cells we load into the device to be unusually small. Our choice is typical: for example, Liu et al. use a device with traps of height 5 µm (Cell Rep 13:634, 2015), and Jo et al. use traps of heights 5 µm and 6 µm (Proc Natl Acad Sci USA 112:9364, 2015).

    As we state in the Introduction (now moved from the Discussion), we observe overlaps in images generated by a variety of different techniques. It is a common problem and not unique to our set-up.

    The method requires a number of z-stacks (although I read somewhere how many z-stacks the method needs, I now cannot find that information any more, which highlights a general problem with the presentation, which I will get to below). This means that the already large amount of data that needs to be acquired with regular 2D images is now multiplied by “n” for each z-stack. More importantly, initially, z-stacks have to be individually labeled for training the neural network. That is n times what other segmentation methods require. So, one would presumably only invest this amount of work if one really cared about the tiniest buds because that is, from what I understand, the main selling point of the method. But how many labs care about this question and about going about it in exactly the same way as Pietsch et al.? For example, to just find the exact time a bud appears, most people could just extrapolate the size of a new bud in time to zero or simply use a fluorescent bud-neck marker. Somebody would have to want to measure the growth rates of the smallest buds without fluorescent labels, which the authors do in this present work. But unless someone wants to repeat this exact measurement, say, with other mutants, I do not see who else would invest a large amount of time and resources in this. Other quantities such as fluorescent protein levels cannot be measured with this approach anyway, i.e., by going through z-stacks with a widefield microscope. One would presumably have to use a confocal microscope.

    Our method does not require multiple Z stacks. We apologise for the confusion and have adjusted Fig. 2a to make this point obvious. In our evaluation of BABY, we now focus on the version that takes only a single Z section as input.

    With multiple Z sections, we do not annotate each section separately; instead we use all of them, via our GUI, to produce a 2D binary image of a cell’s outline from the Z section in which it is best in focus. No extra effort or resources are therefore required: taking five Z sections instead of one is straightforward, and BABY aims by design to avoid the use of any fluorescent strains because the information needed to detect small buds is already in the bright-field images.
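
    A minimal sketch of what choosing a best-focus Z section could look like if automated; the authors' GUI leaves this choice to the annotator, and the focus measure here (variance of the Laplacian) is a common stand-in, not their method.

```python
# Minimal sketch of automatically picking a best-focus Z section using
# the variance of the Laplacian, a common focus measure; the authors'
# GUI instead lets the annotator choose the section by eye.
import numpy as np
from scipy.ndimage import laplace

def best_focus_section(zstack):
    """zstack: array of shape (n_z, height, width); returns the index
    of the section with the sharpest edges."""
    scores = [laplace(section.astype(float)).var() for section in zstack]
    return int(np.argmax(scores))
```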

    Our data show that small buds are important. BABY predicts accurate single-cell growth rates partly because it identifies these buds, including those overlapping their mothers. We now include a new Figure 1 supplement 1 showing that most of the growth in volume for budding yeast occurs when buds are small, precisely when they are hardest to detect because of their higher frequency of overlap (Appendix 1 Fig. 2).

    For yeasts and other microbes, growth rate is a strong correlate of fitness (Ref. 1), and we therefore believe that accurately estimating single-cell growth rates will be of wide interest. Although BABY will be invaluable to labs investigating small buds, perhaps using mutants as the reviewer suggests, our target audience is much broader: those scientists wishing to correlate phenotype with fitness.

    Could the problem have been simplified by taking z-stacks but analyzing each as a regular 2D image with existing segmentation methods? If a new bud is detected in any of the z-stacks, it is counted as a new cell. This would allow one to use existing 2D training sets and methods and only add a few images of one’s own, whether taken in a single z-stack or not. It would only involve tweaking or augmenting existing methods slightly.

    In our experience, it is challenging to identify which cells are new and which cells are identical given different images of the same collection of objects. Rather than analysing multiple segmented Z-stacks as suggested, we tackle this problem for the three segmented images, each with cells of a different size, that we generate from an input image (Fig. 2). We solve it both by identifying cells that are more likely to be buds, using the small cells predicted by the CNN, and by optimising the post-processing to avoid spurious overlapping false positives. We therefore suspect that the method suggested will be more challenging to implement than expected.
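
    A minimal sketch of the kind of reconciliation described above, assuming candidate masks from the three size classes are compared by intersection-over-union (IoU); BABY's optimised post-processing is more involved, and the 0.5 threshold is an arbitrary assumption.

```python
# Minimal sketch of reconciling candidate outlines from the three
# size-class segmentations: two masks are treated as the same cell when
# their intersection-over-union (IoU) is high. BABY's optimised
# post-processing is more involved; the 0.5 threshold is an assumption.
import numpy as np

def iou(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def deduplicate(candidate_masks, threshold=0.5):
    """candidate_masks: boolean 2D arrays, e.g. ordered small to large.
    Keep a mask only if it does not largely coincide with one kept."""
    kept = []
    for mask in candidate_masks:
        if all(iou(mask, k) < threshold for k in kept):
            kept.append(mask)
    return kept
```

    Because a mother and its genuinely overlapping bud share only a small fraction of their areas, their IoU stays low and both masks survive; only near-identical candidates for the same cell are collapsed.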

    While a 3D image needs to be fed to the neural network, ultimately, all measurements in this manuscript are 2D measurements, e.g., all growth rates are in units of µm²/h. (Somewhat unexpectedly, the authors use a Myo1-GFP construct to identify the budded phase of cells in Fig. 4, i.e., exactly what this method was designed to avoid.) Thus, the effort of going to 3D is only to make the identification of buds more accurate. So, we are not really dealing with a method that goes from 2D to 3D and reconstructs, for example, the shape of cells in 3D. So, while z-stacks go in, it is not 3D annotations that come out.

    Our growth rates are all reported in units of volume per time: µm³/h, not µm²/h. We estimate this volume from 2D cell outlines as described in Appendix 3. As we also now make clearer, BABY does not require Z-stacks and performs competitively with a single Z section.
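
    One common way to estimate volume from a 2D outline, sketched below, is to assume rotational symmetry and integrate discs across the mask; the authors' exact procedure is given in their Appendix 3 and may differ from this baseline.

```python
# Minimal sketch of one common way to estimate cell volume from a 2D
# mask: assume rotational symmetry and integrate discs along the mask's
# rows. The paper's procedure is described in its Appendix 3 and may
# differ from this baseline.
import numpy as np

def volume_of_mask(mask, pixel_size_um):
    """mask: 2D boolean array, the cell's symmetry axis roughly along
    axis 0. Each row's width is taken as a disc diameter; the discs
    are summed to give a volume in µm³."""
    diameters = mask.sum(axis=1) * pixel_size_um
    radii = diameters / 2.0
    return float(np.sum(np.pi * radii**2) * pixel_size_um)
```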

    For our results requiring detection of cytokinesis (Fig. 4), rather than of the budded phase of cells, we used either an Nhp6A nuclear marker or the Myo1 marker. We have now removed from the main BABY algorithm the method for estimating cytokinesis from growth rates: its accuracy is too low compared to the rest of BABY. Nonetheless, neither Nhp6A nor Myo1 assists in accurately estimating growth rates; we use both only to identify cell-cycle stages.

    The authors may argue that they want to use their high-ceiling chips because they want to follow aging cells. Or, they may argue that indeed, this method is going to be used more widely because people want to study the growth rates of tiny buds in various mutants. However, then the limitation of their method to convex shapes or shapes that can be represented in cylindrical coordinates is a problem since old cells and many mutants can have strange shapes. In this way, the authors have gone a step back methodologically for reasons that I do not understand.

    We do not use high-ceiling chips, but the chips of the height converged on by multiple labs: approximately 5 µm.

    Furthermore, we have gone forwards, not backwards: our method is not limited to convex shapes but extends to star-convex ones. We can therefore detect even pinched outlines. As we show, BABY performs competitively on the YeaZ data set, which includes many examples of cells with unusual morphologies.
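
    A star-convex shape is one in which every boundary point is visible along a straight ray from some interior centre, so the outline can be encoded as radii along fixed rays. A minimal sketch, with the number of rays an arbitrary illustrative choice:

```python
# Minimal sketch of a star-convex outline: every boundary point is
# reachable along a straight ray from an interior centre, so the shape
# is fully described by (centre, radii). The 32 rays are an arbitrary
# illustrative choice.
import numpy as np

def outline_from_radii(centre, radii):
    """Return boundary (x, y) points for rays at equally spaced angles."""
    angles = np.linspace(0, 2 * np.pi, len(radii), endpoint=False)
    return (centre[0] + radii * np.cos(angles),
            centre[1] + radii * np.sin(angles))

# A pinched, non-convex but still star-convex outline:
angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
radii = 10 + 3 * np.cos(2 * angles)
xs, ys = outline_from_radii((0.0, 0.0), radii)
```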

    Given that the method is tailored to detecting small buds, I also do not understand why the authors do not use a higher magnification objective, e.g., a 100x objective instead of 60x? Maybe the problem becomes much easier that way?

    BABY accurately estimates growth rates because it segments overlapping cells, including small buds overlapping with their mothers. Using a higher-magnification objective does not remove overlaps. For example, images from the YeaZ data set are at 100x magnification and yet have overlapping cells.

    It is unclear how well the tracking method generalizes for other configurations. Here, the tracking problem is somewhat special because there are only a few cells in and around the traps and frequently cells are washed away. For a method paper, the tracking method would need to be compared and contrasted with others for different kinds of experiments. Since tracking is in the title of the manuscript, it is presumably an important selling point of the manuscript.

    We agree and have now shifted the paper’s focus to accurately determining growth rates, changing both the title and abstract.

    The same applies to the segmentation problem. The traps in the authors’ microfluidic chips only keep a small number of cells, avoiding problems that emerge when many cells of similar sizes abut.

    In fact, our segmentation method does generalise to images with large numbers of cells, as we demonstrate using YeaZ’s microcolony data.

  2. eLife assessment

    In this interesting manuscript, Pietsch et al. develop innovative machine learning approaches for automated analysis of budding yeast live-cell imaging data obtained with a dedicated microfluidic device that retains mother cells. Developing such tools is crucial to enable high-throughput image analysis. These methods will be useful for researchers studying these cells, and may also inspire similar approaches for other types of cells.

  3. Reviewer #1 (Public Review):

    This work develops new and improved methods for tracking and quantifying yeast cells in time-lapse microscopy. Overall, the manuscript presents exceedingly clever solutions to many practical data analysis problems that arise in microfluidics, some of which may be useful in other image analysis settings.

    I find the manuscript at times very dense and technical, and it is missing context for a general audience. It is hard to know what the most important contributions are, and the authors assume the reader is familiar with many details of their previous work and field. Claims are made with little explanation, context, or scientific logic.

  4. Reviewer #2 (Public Review):

    Microfluidics-assisted live-cell imaging is often the method of choice to gain insight into the growth behavior of single cells, in particular unicellular organisms with simple shapes. While growth rate measurements of symmetrically dividing and rod-shaped organisms such as E. coli or fission yeast are simplified by their geometry, measurements of the common model organism budding yeast are more complicated due to growth in three dimensions and asymmetric 'budding'. As a consequence, analysis of live-cell imaging experiments typically still requires time-consuming manual work, in particular, to correct automated segmentation and tracking, assign mother-bud pairs, and determine the time point of cell division. In the present manuscript, Pietsch et al. aim to address this important issue by developing deep-learning-based analysis software named BABY for the automated extraction of growth rate measurements performed with microfluidic traps that are designed to keep mother cells, but quickly lose newborn daughters.

    To achieve this, Pietsch et al. introduce several innovative approaches. 1) In contrast to previous deep-learning segmentation tools, they allow 3D data (z-stacks) as inputs and allow for overlapping segmentation masks. 2) By introducing three different object categories based on size, they can take approaches specific to each category and to the segmentation of overlapping objects. 3) By using cell edges and bud necks as additional predicted channels, they facilitate downstream post-processing of segmentation masks and mother-bud pairing, respectively. 4) By using machine learning to predict tracking and mother-bud pairs from multiple features (see the sketch below), they develop a novel approach to automate these steps. Using their automated analysis pipeline, the authors then study the growth behavior in different mutants and propose a novel mechanism in which growing buds are regulated by a combination of a 'sizer' and a 'timer' mechanism.
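
    Point 4 can be made concrete with a minimal sketch of mother-bud pairing as binary classification over per-pair features; the features, labels, and classifier below are illustrative assumptions, not BABY's actual design or training data.

```python
# Minimal sketch of mother-bud pairing as binary classification over
# per-pair features. The features, labels, and classifier below are
# illustrative assumptions, not BABY's actual design or training data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pairs = 200
# Hypothetical features per candidate (mother, bud) pair:
# [centroid distance, fraction of bud edge on a predicted bud neck,
#  bud-to-mother area ratio]
X = rng.random((n_pairs, 3))
# Synthetic labels purely for the sketch: 'true' pairs are close and
# have strong bud-neck support.
y = (X[:, 0] < 0.5) & (X[:, 1] > 0.5)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict_proba(X[:2])[:, 1])   # pairing probabilities
```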

    This manuscript introduces exciting steps towards a fully automated analysis of bright-field microscopy data of growing yeast cells, which makes it an important contribution to the field. However, in part the quantitative reporting on the actual performance is not sufficient. For example, what is the actual overall success rate in predicting mother-bud pairs? How accurately can cell cycle durations be predicted? This lack of information makes it hard to evaluate how appropriate using fully automated BABY actually is. In addition, the experiments supporting the major biological insight, i.e. the sizer-timer transition for bud growth, are rather limited, and further experiments would be needed to strengthen this conclusion.

  5. Reviewer #3 (Public Review):

    In the work presented in "A label-free method to track individuals and lineages of budding cells", Pietsch et al. use multiple machine learning approaches to identify, delineate, and track yeast cells in microscopy images.

    I commend the authors for putting a lot of work into this manuscript and coming up with many new ideas to solve their problem of interest. However, throughout the manuscript, I felt that this manuscript does not work well as a 'methods' paper. Maybe it should have been a paper about the biology, which I find very interesting. My main reason for finding this manuscript not well-suited for a methods paper is that their approach as well as the goal are so specific that it may not be readily adopted by others. I would like to list a number of limitations and particularities of their set-up to support this conclusion:

    - The whole problem of small cells not being in focus in a single plane is in large part due to the high ceiling of the authors' microfluidic chips (6 µm according to Crane et al.). Other microfluidic chips have much lower ceilings, keeping cells essentially in 2D. If Pietsch et al. used a lower ceiling, small cells would presumably not be out of focus so frequently nor appear to overlap with other cells, and the usual single z-stack approach would suffice. (Another configuration in which cells appear to overlap is in wells, e.g., 96-well plates, which are similarly not ideal for imaging.) Thus, for the problem of interest to Pietsch et al., I would have used a different chip first and then seen what remains of the identification, segmentation, and tracking problem.

    - The method requires a number of z-stacks (although I read somewhere how many z-stacks the method needs, I now cannot find that information any more, which highlights a general problem with the presentation, which I will get to below). This means that the already large amount of data that needs to be acquired with regular 2D images is now multiplied by "n" for each z-stack. More importantly, initially, z-stacks have to be individually labeled for training the neural network. That is n times what other segmentation methods require. So, one would presumably only invest this amount of work if one really cared about the tiniest buds because that is, from what I understand, the main selling point of the method. But how many labs care about this question and about going about it in exactly the same way as Pietsch et al.? For example, to just find the exact time a bud appears, most people could just extrapolate the size of a new bud in time to zero or simply use a fluorescent bud-neck marker. Somebody would have to want to measure the growth rates of the smallest buds without fluorescent labels, which the authors do in this present work. But unless someone wants to repeat this exact measurement, say, with other mutants, I do not see who else would invest a large amount of time and resources in this. Other quantities such as fluorescent protein levels cannot be measured with this approach anyway, i.e., by going through z-stacks with a widefield microscope. One would presumably have to use a confocal microscope.

    - Could the problem have been simplified by taking z-stacks but analyzing each as a regular 2D image with existing segmentation methods? If a new bud is detected in any of the z-stacks, it is counted as a new cell. This would allow one to use existing 2D training sets and methods and only add a few images of one's own, whether taken in a single z-stack or not. It would only involve tweaking or augmenting existing methods slightly.

    - While a 3D image needs to be fed to the neural network, ultimately, all measurements in this manuscript are 2D measurements, e.g., all growth rates are in units of µm²/h. (Somewhat unexpectedly, the authors use a Myo1-GFP construct to identify the budded phase of cells in Fig. 4, i.e., exactly what this method was designed to avoid.) Thus, the effort of going to 3D is only to make the identification of buds more accurate. So, we are not really dealing with a method that goes from 2D to 3D and reconstructs, for example, the shape of cells in 3D. So, while z-stacks go in, it is not 3D annotations that come out.

    - The authors may argue that they want to use their high-ceiling chips because they want to follow aging cells. Or, they may argue that indeed, this method is going to be used more widely because people want to study the growth rates of tiny buds in various mutants. However, then the limitation of their method to convex shapes or shapes that can be represented in cylindrical coordinates is a problem since old cells and many mutants can have strange shapes. In this way, the authors have gone a step back methodologically for reasons that I do not understand.

    - Given that the method is tailored to detecting small buds, I also do not understand why the authors do not use a higher magnification objective, e.g., a 100x objective instead of 60x? Maybe the problem becomes much easier that way?

    - It is unclear how well the tracking method generalizes for other configurations. Here, the tracking problem is somewhat special because there are only a few cells in and around the traps and frequently cells are washed away. For a method paper, the tracking method would need to be compared and contrasted with others for different kinds of experiments. Since tracking is in the title of the manuscript, it is presumably an important selling point of the manuscript.

    - The same applies to the segmentation problem. The traps in the authors' microfluidic chips only keep a small number of cells, avoiding problems that emerge when many cells of similar sizes abut.