1. Note: This rebuttal was posted by the corresponding author to Review Commons. Content has not been altered except for formatting.



    Reply to the reviewers

    Author responses are written in bold and are italicized. We have underlined the important points in the reviewers' comments. All responses have been read and authorized by all authors of this manuscript. The authors would like to thank the reviewers and the editor for their valuable time. We believe that the comments and suggestions from both reviewers will significantly improve SMorph and the manuscript.

    Reviewer #1 (Evidence, reproducibility and clarity (Required)):

    First of all, I want to apologize to the authors and the editor for my delay. Secondly, for clarity, I want to disclose that I am the author of Fiji's 'Sholl Analysis' plugin, which the authors cite extensively (Ferreira et al, Nat Methods, 2014).

    In this study, Sethi et al introduce a software tool - SMorph - for bulk morphometric analysis of neurons and glia (astrocytes and microglia), based on the Sholl technique. The authors compare it to the state-of-the-art in a series of validation experiments (stab wound injury), to conclude that it is 1000 times faster than existing tools. Empowered by the tool, the authors show that chronic administration of a tricyclic antidepressant (DMI) leads to structural changes of astrocytes in the mouse hippocampus. The paper is well written, the description of the tool is clear, and the authors make all of the source code available, as well as most of the imagery analyzed in the manuscript. The latter on its own makes me really appreciative of the authors' work.

    We thank reviewer #1 for their careful reading of the manuscript and their comments.

    Major comments:

    A major strength of SMorph is that it leverages the Python ecosystem, which allows the authors to take advantage of powerful Python packages such as sklearn, without the need for external packages or tools. However, I have strong criticisms of the claims that are made in terms of speed and broad applicability of the software, including PCA.

    Speed:

    The 1000x speed gain assumes - for the most part - that the processing in Fiji cannot be automated. This is false. I read the source code of SMorph, and with the exception of the PCA analysis, all aspects of SMorph can be automated in Fiji, using any of Fiji's scripting languages to make direct calls to the Fiji and Sholl Analysis plugin APIs (see https://javadoc.scijava.org/). Now, perhaps the authors do not have experience with ImageJ scripting, or perhaps we Fiji developers failed to provide clear tutorials and examples on how to do so. Or perhaps there is something inherently cumbersome about Fiji scripting that makes this hard (e.g., there is a current limitation with the ImageJ2 version of 'Sholl Analysis' that does not make it macro recordable). If such limitations do exist, it is perfectly fine to mention them, but do contact us at https://forum.image.sc if something is unclear. We do strive to make our work as re-usable as possible. Unfortunately, our own research does not always allow us the time required to do so. Case in point, our scripting examples (e.g., https://github.com/tferr/ASA/blob/master/scripting-examples/3D_Analysis_ImageStack.py) are not well advertised. That being said, I am still surprised that in their side-by-side comparisons the authors were not able to automate more of the processing steps (e.g., the ImageJ1 version of 'Sholl Analysis' remains fully functional and is macro recordable). If I misunderstood what was done, please provide the ImageJ macros you used. Also, I wanted to mention that i) semi-manual tracing with Simple Neurite Tracer (now "SNT") can also be scripted (see https://doi.org/10.1101/2020.07.13.179325); and that ii) Fiji commands and plugins can also be called in native Python using pyimagej (https://pypi.org/project/pyimagej/), see e.g., https://github.com/morphonets/SNT/tree/master/notebooks#snt-notebooks.
    Arguably, the fact that SMorph handles blob detection and skeletonization-based metrics directly is more advantageous from a user point of view. In Fiji, blob detection, skeletonization and Strahler analysis (https://imagej.net/Strahler_Analysis) of the skeleton are handled by different plugins. However, those are also fully scriptable and interoperate well. The point that topographic skeletonization in Fiji can introduce loops is valid; however, the authors should know that such cycles can be detected and pruned programmatically using e.g., pixel intensities (see https://imagej.net/AnalyzeSkeleton.html#Loop_detection_and_pruning and the original publication, https://pubmed.ncbi.nlm.nih.gov/20232465/).

    We completely agree with the reviewer’s assertion that most of the functionality of SMorph can be automated within ImageJ as well, and in such a comparison, the speed gains with SMorph will not be >1000X.

    However, automating the analysis in ImageJ is beyond the scope of the present manuscript. In fact, the ImageJ analysis comparison was not a part of our original manuscript at all. Upon presubmission inquiry to one of the affiliate journals of Review Commons, we were specifically asked to include a side-by-side comparison with “already available” methods. So, we decided to use ImageJ as it is, and automation, if any, was limited to simple macros to run a series of commands sequentially on batches of images. Although it is true that this analysis could be done much more efficiently with additional scripting, it would not have met the definition of “already available” tools. The ImageJ analysis was performed in the way an average biologist with no programming experience would perform it, since that group will find SMorph most useful. In no way do we intend to imply that ImageJ analysis can’t be made more efficient and automated. Perhaps this was not clear from the way the text was framed in the initial version of the manuscript. We will add additional text to make this point clearer.

    On a side note, in response to reviewer #2’s comments, we will perform the speed comparison on a per-image basis, so the speed gain (1080X) may change a little in the new comparison.

    Broad applicability:

    In our work, we made a significant effort to ensure that automated Sholl could be performed on any cell type: e.g., by supporting 2D and 3D images, by allowing repeated measures at each sampled distance, and by improving curve fitting. For linear profiles, we implemented the ability to perform polynomial fits of arbitrary degree, and implemented heuristics for 'best degree' determination. For normalized profiles, we implemented several normalizers, and alternatives for determining regression coefficients. We did not tackle segmentation of images directly (we did provide some accompanying scripts to aid users, see e.g. https://imagej.net/BAR) because in our case that is handled directly by ImageJ and Fiji's large collection of plugins. However, in SMorph, several of these parameters are hard-wired in the code. They may be suitable for the analyzed images, but they can hardly be generalized to other datasets. In detail: in terms of segmentation, SMorph is restricted to 2D images, scales data to a fixed 98th percentile, and uses a fixed auto-threshold method (Otsu). These settings are tethered to the authors' imagery. They will give poor results for someone else using a different imaging setup or staining method. In terms of curve fitting, the polynomial regression seems to be fixed at a 3rd order polynomial, which will not be suitable for different cell types (not even for all cells of 'radial morphology').

    We have indeed hard-coded the parameters that the reviewer mentions, and we agree that we can give all of these options to the end-users to choose from. The decision to hard-code the parameters was made so that SMorph stays easy and minimalistic to use. But the reviewer is right to point out that this may compromise broad applicability and accuracy. We will update the code in the revised version of the manuscript to give users control over these parameters.
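
    Concretely, the preprocessing step could expose these choices as arguments rather than constants. A minimal sketch in NumPy, assuming a grayscale image array (function names and defaults here are illustrative, not SMorph's actual API; the Otsu step mirrors what skimage.filters.threshold_otsu computes):

```python
import numpy as np

def rescale_percentile(img, percentile=98):
    """Clip and rescale intensities to [0, 1] using a user-chosen upper percentile."""
    hi = np.percentile(img, percentile)
    lo = img.min()
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def otsu_threshold(img, nbins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w1 = np.cumsum(hist)                      # pixel count at or below each bin
    w2 = np.cumsum(hist[::-1])[::-1]          # pixel count at or above each bin
    mu1 = np.cumsum(hist * centers) / np.maximum(w1, 1)
    mu2 = (np.cumsum((hist * centers)[::-1]) / np.maximum(w2[::-1], 1))[::-1]
    between = w1[:-1] * w2[1:] * (mu1[:-1] - mu2[1:]) ** 2
    return centers[np.argmax(between)]

def segment(img, percentile=98, threshold_fn=otsu_threshold):
    """Binarize an image; both the percentile and the thresholding method
    are user-selectable instead of hard-wired."""
    scaled = rescale_percentile(img, percentile)
    return scaled > threshold_fn(scaled)
```

    With this shape of API, a user whose imagery needs a different auto-threshold simply passes another `threshold_fn` without touching the library internals.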

    PCA:

    The idea of making PCA analysis of Sholl-based morphometry accessible to a broader user base has merit and is welcomed. However, it has to be done carefully, in a self-critical manner, as opposed to a black-box solution. E.g., in the text it is mentioned that 2 principal components are used, in the tutorial notebook, 3. Why not provide intuitive scree plots that empower users with the ability to criticize the choice? Also, it would be useful for users to understand which metrics correlate with each other, and their variable weights.

    Reviewer #1’s suggestions would indeed make the PCA analysis more useful. In the revised version of the code, we will provide additional data/plots so the user can make an informed choice of the significant principal components, e.g., scree/Pareto plots with the elbow method, variable weights of the different features in the principal components, and correlation/covariance matrices.
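
    As an illustration of the kind of output we have in mind, both the explained-variance ratios (the bar heights of a scree plot) and the per-feature loadings can be derived from a plain SVD of the centered feature matrix. A hedged sketch in NumPy (function names are ours, not SMorph's):

```python
import numpy as np

def pca_report(X):
    """Explained-variance ratios and loadings for a feature matrix X
    (rows = cells, columns = morphometric features)."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    var = S ** 2 / (X.shape[0] - 1)                # variance along each PC
    ratio = var / var.sum()                        # scree-plot heights
    loadings = Vt.T * S / np.sqrt(X.shape[0] - 1)  # feature weight on each PC
    return ratio, loadings

def n_components_for(ratio, coverage=0.9):
    """Smallest number of PCs whose cumulative explained variance reaches `coverage`
    (one simple, transparent alternative to a fixed '2 or 3 PCs')."""
    return int(np.searchsorted(np.cumsum(ratio), coverage) + 1)
```

    Exposing `ratio` and `loadings` directly lets users plot the scree curve themselves and see which raw metrics drive each component, rather than trusting a fixed component count.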

    When we showcased the utility of PCA to distinguish closely related morphology groups (as in Type-1 and Type-2 PV neurons), we had been unable to base the distinction on individual metrics, at least not in a robust manner (see Fig. S4 in Ferreira et al, 2014). A minor conundrum of the paper is that it does not directly highlight the advantages of "analyzes in a multidimensional space". The differences between groups in the stab wound and DMI assays are such that PCA is hardly needed: i.e., the differences depicted in Fig. 2F,G are already significant, and already convey changes in "size and branch complexity" (as per PC1). The same argument applies to Fig. 5. The paper would profit from having this discussed.

    PCA data indeed is not required to make any of the inferences we make in the paper and is superfluous in that respect. However, as mentioned in the discussion section of this manuscript, the low-dimensional PCA data can be used in future for other applications, e.g., to cluster the astrocytes into morphometrically defined subpopulations. SMorph can be further developed to perform real-time classification of these cells into morphometric clusters, which will allow researchers to investigate cluster-specific gene expression, electrophysiology etc. Preliminary results from our lab do suggest that such clusters are differentially altered by stress and antidepressant treatments. However, these results are part of a long-term future study and are too preliminary to publish at this stage, since it will require a lot of experimentation to show that these astrocyte subpopulations are indeed physiologically and functionally different. Nevertheless, we think that the utility of SMorph for such analyses may help others come up with additional innovative ways to use the PCA data. Hence, we do believe that the community will benefit from the current release of SMorph having PCA. PCA data was shown in the figures just to demonstrate the functionality of SMorph. We will add additional text to make these points clearer.

    Other:

    • All metrics and parameters should be expressed in physical units (e.g., "radii increasing by 3 pixels", axes in Figures 2, 3, 5, S2) so that readers can directly interpret them.

    In the revised manuscript, we will convert all units into actual physical distances.

    We thank the reviewer for suggesting this paper. We will include this in the discussion of the manuscript.

    Minor comments:

    • Usage of RGB images (8-bit per channel) seems hardly justifiable. Aren't you losing dynamic range of the GFAP signal?

    We agree that we could have captured the images at a higher dynamic range. However, for the changes we observe between treatment groups using GFAP immunoreactivity signal as presented in the manuscript, we do not see an advantage of using higher dynamic range. However, as the reviewer rightly pointed out, under certain conditions, imaging using a higher dynamic range may help and hence, we will include this recommendation in the materials and methods section.

    • Please explain how MaxAbsScaler "prevents sub-optimal results".

    Since the morphometric features extracted from cell images are measured in different units and on different scales, we had to normalize them before PCA. We will add further explanation in the methods section of the manuscript.
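
    To illustrate the point: MaxAbsScaler divides each feature column by its maximum absolute value, so a feature measured in thousands of pixels and a dimensionless ratio near 1 end up on comparable [-1, 1] scales before PCA. A minimal NumPy equivalent (sklearn.preprocessing.MaxAbsScaler does the same, plus fit/transform bookkeeping):

```python
import numpy as np

def max_abs_scale(X):
    """Scale each feature (column) to [-1, 1] by its maximum absolute value,
    mirroring sklearn.preprocessing.MaxAbsScaler."""
    scale = np.abs(X).max(axis=0)
    scale[scale == 0] = 1.0   # leave all-zero features untouched
    return X / scale

# Without this step, a feature in raw pixel counts (thousands) would dominate
# the principal components over a ratio-valued feature (around 1) purely
# because of its units -- the "sub-optimal results" referred to in the text.
```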

    • The fact that automated batch processing can stall on a single bad 'contrast ratio' image seems rather cumbersome to deal with.

    This problem has been resolved in the current version of SMorph, which will be uploaded with the revised version of the manuscript.

    We will add a GPLv3 license.

    • "mounted on stereotax" should be "mounted on a stereotaxic device"?

    We will make this change.

    • Ensure Schoenen is capitalized

    We will make this change.

    Reviewer #1 (Significance):

    I find the desipramine results interesting. However, given the existing claims that DMI can modulate LTP, I regret that the authors did not look at structural modifications in hippocampal neurons (e.g., by performing the experiments in Thy1-M-eGFP animals). I understand that doing so at this point would be a large undertaking.

    Another manuscript from our lab (1), along with work from other labs, has shown that stress causes significant degenerative changes in hippocampal astrocytes (2,3). In the light of these observations, we do believe that our observation of chronic antidepressant treatment inducing structural plasticity in astrocytes is significant. Structural alterations in neurons after DMI treatment are of interest. But in our experience, we have not seen gross morphological (dendritic arborization) changes in hippocampal neurons as a result of antidepressant drug treatments. Such changes are restricted to spine morphology and axonal varicosities, which is beyond the capabilities of SMorph.

    Reviewer #2 (Evidence, reproducibility and clarity):

    This paper addresses the challenge of automatic Sholl analysis of large datasets of multiple cell types such as neurons, astrocytes and microglia. The developed approach should improve the speed of morphology analysis compared to the state of the art without compromising on accuracy. The authors present an interesting application of their tool to the morphological analysis of astrocytes following chronic antidepressant treatment. The paper is well written, and the tool presented could be beneficial for different applications and contexts. However, some major aspects should be addressed by the authors concerning the description of the algorithms used and the quantification of the results.

    We thank reviewer #2 for their careful reading of the paper and their comments.

    Major comments/Questions:

    1. In the Results and/or Methods sections, the authors should better describe how their approach differs from state-of-the-art approaches in terms of the algorithms used and how these differences impact the speed and accuracy of the analysis.

    We will add these descriptions in the methods section in response to this comment as well as some comments from reviewer #1.

    2. Imaging was performed on a Zeiss LSM 880 Airyscan confocal microscope. Is this method robust to other types of imaging techniques, other microscopes, and variable levels of signal-to-noise? This should be tested and quantified.

    We will demonstrate the results obtained from images taken using different microscopes and imaging techniques, and quantify the outcome.

    3. Manual cropping of the cells with ImageJ was used. However, in the methods section, the authors mention that other machine learning tools could be used for this task. Why were these tools not implemented in this paper in order to propose a fully automated analysis approach in combination with SMorph?

    We have tried both of the machine learning tools cited in this paper (one for DAB images and the other for confocal images). However, in our experience, we do not get robust performance from these tools with our datasets, and they will perhaps need more optimization for broad applicability. We are developing an auto-cropping tool in-house, but that is beyond the scope of the current study. Another point is that these tools are tailor-made for astrocytes, and their integration into SMorph would restrict its applicability to just one cell type.

    4. In the methods section you state that cropped cells need to have a good contrast ratio for automated batch processing. Could you define what a good contrast ratio is and characterize the performance of your approach for different contrast ratios?

    In the revised manuscript, we will compare images taken on multiple microscopes, quantify the outcome, and change the text accordingly. As such, the comment on rejected cells referred to very poor-quality images. In the revised manuscript, we will make specific recommendations on imaging parameters so that this should not be an issue at all.
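
    One explicit way to define such a criterion, offered here for illustration only (the formula and cutoff below are our sketch, not SMorph's internal check), is the Michelson contrast of a grayscale image:

```python
import numpy as np

def michelson_contrast(img):
    """Michelson contrast: (Imax - Imin) / (Imax + Imin), in [0, 1]."""
    hi, lo = float(img.max()), float(img.min())
    if hi + lo == 0:
        return 0.0
    return (hi - lo) / (hi + lo)

def passes_contrast(img, cutoff=0.3):
    """Flag images whose contrast falls below an explicit, user-visible cutoff,
    instead of failing silently mid-batch."""
    return michelson_contrast(img) >= cutoff
```

    A robust variant could use, say, the 1st and 99th intensity percentiles instead of the raw min/max so that a single hot pixel does not inflate the score; the key point is that the criterion and its cutoff become explicit and reportable.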

    5. It is mentioned that the analysis routine can be interrupted by a cell with a lower contrast ratio. This is a major drawback of the approach (but I think that it could be easily improved), as such interruptions may not be practicable for many applications that need to rely on automated processing.

    We have already rectified this problem and the updated version of SMorph will be uploaded with the revised manuscript.

    6. Also, you should specify how the contrast ratio should be enhanced without modifying the raw data in order to be processed with your approach. You suggest removing cells with a lower contrast ratio from the analysis, but can this impact the findings, especially if some treatments affect the detected fluorescence signal? Can you propose ways to improve the robustness of your approach to variable signal ratios?

    It is indeed possible that removing cells from the analysis may, in certain cases, affect the results. To rectify this, we are testing the method on images obtained from different microscopes and under different imaging conditions. From these analyses, we will deduce minimum recommendations for imaging conditions so that images do not have to be edited or removed altogether for the software to work. In the materials and methods section, we will add these recommendations on the optimal range of imaging parameters. This way, rejection/modification of images should not be an issue.

    7. In the Results section, you describe the time necessary to perform the different analyses. However, giving a total time in hours is not very informative, as this will likely vary a lot depending on the size of the dataset, complexity of the images, etc. You should compare the average time per image for both methods and types of analysis.

    We compared the total time required for the entire dataset, since SMorph is meant to batch-process all the images at once. In the revised manuscript we will also report the time taken per image, by dividing the total time taken by SMorph by the number of images analysed. Note, however, that SMorph's one-time startup cost will make strict per-image comparisons slightly inaccurate.
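
    For illustration, a fair per-image figure can be obtained by timing each image separately after the one-time startup has completed, and reporting the median. The function below is a hypothetical sketch, not part of SMorph:

```python
import time

def per_image_time(analyze, images):
    """Median wall-clock time per image, measured only after any one-time
    startup cost (imports, model loading) has already been paid."""
    times = []
    for img in images:
        t0 = time.perf_counter()
        analyze(img)                      # the per-image analysis routine
        times.append(time.perf_counter() - t0)
    times.sort()
    return times[len(times) // 2]         # median is robust to outliers
```

    Reporting the median (rather than total/N) keeps a single unusually large image, or the startup itself, from skewing the comparison between tools.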

    8. You state that for the number of branch points, the lower value of the measured slope when comparing SMorph and ImageJ was related to a constant overestimation of this parameter by ImageJ. How was this quantified? I think you should further stress the comparison of both approaches with the manually annotated dataset.

    In the revised version of this manuscript, we will include some examples of skeletonized images that overestimate the number of forks. We have observed this to be a recurring problem with the skeletonization tools we have tried in ImageJ. This can be rectified in ImageJ itself, as pointed out by reviewer #1. However, that is beyond the scope of the present study and would not fit the definition of comparison with “already available” methods.

    9. How can you explain the differences in the 2D-projected area, total skeleton length and convex hull between SMorph and ImageJ, which all show a slope around 0.83? Can you quantify the performance of both methods by comparing them with your manually annotated dataset?

    In the revised version, we will include the correlation data between completely manual and SMorph comparisons. We will discuss these comparisons further in the manuscript and make specific conclusions about the accuracy.

    10. In the introduction and discussion, you mention that you present a method that works on neurons, astrocytes and microglia. However, I don't see in the paper the comparison between the accuracy for all these cell types, as you seem to have analyzed only the morphology of astrocytes.

    In the revised manuscript, we will include the Sholl analysis comparison (ImageJ vs SMorph) for images of neurons and microglia.

    11. You mention that your method is quite sensitive to variation in contrast ratio. You should quantify the contrast ratio throughout the experiments and ensure that it is not biasing the SMorph analysis for some of the treatments.

    We thank both reviewers for highlighting this issue in the initial version of SMorph. As mentioned in our response to point #6, we will perform additional analyses to make specific recommendations to the end users regarding imaging parameters so that SMorph can work on images as they are. As such, our comments on contrast ratio applied only to very poor quality images. If images are acquired conforming to the imaging parameters we will recommend in the revised manuscript, images can be analysed without any issues.

    Minor Points :

    1. Specify the exact inclusion and exclusion criteria for soma detection and rephrase: "The high-intensity blobs were detected as a position of soma..." & "Boundary blobs coming from adjacent cells...".

    We will add a complete explanation of blob detection and the exclusion criteria in the methods section.

    2. Throughout the text, make sure to always refer to an analysis time per image or per cell and not only absolute duration values without reference to the task at hand (e.g., in the discussion: "SMorph took 40 seconds to complete the analysis..." - please state which analysis you are referring to and, if applicable, whether it varies from cell to cell).

    We will change all comparisons to time taken per cell. Text will be added to mention which datasets were used when any claims of speed are made.

    3. When you state in the discussion that "Although some methods do allow Sholl analysis without manual neurite tracing, they still work on one cell at a time", please specify whether the only aspect missing from this type of analysis is batch processing (looping through the data) or whether there is a major obstacle to automating this technique. This is important, as SMorph does proceed with the analysis one cell at a time but can work in a loop/batch.

    We will elaborate further on our assertion regarding the challenges of using ImageJ plugins for Sholl analysis on large batches of cells.

    Reviewer #2 (Significance):

    This tool could be very useful to researchers in the field of cellular neuroscience working with high-throughput analysis of microscopy data. The authors show some interesting improvements over existing methods. An improved quantitative characterization of the robustness of their approach would be of great importance to ensure the significance of this tool to a large community of researchers using different types of microscopes or studying different cell types.

    My expertise is in the field of optical microscopy and high-throughput (automated) image analysis for neuroscience. My expertise to evaluate the biological findings in this study is very limited.

    We thank reviewer #2 for their careful reading of the manuscript and their insightful comments. Growing clinical and preclinical evidence shows a significant reduction in astrocyte density in key limbic brain regions in depression. We believe that the structural plasticity induced by chronic antidepressant treatment, as demonstrated in this manuscript, is an interesting novel plasticity mechanism that can counteract the deleterious effects of stress on astrocytes.

    The improvements suggested by both reviewers will help us to greatly improve SMorph in the revised version of this manuscript.

    References:

    1. Virmani, G., D’almeida, P., Nandi, A. & Marathe, S. Subfield-specific effects of chronic mild unpredictable stress on hippocampal astrocytes. Preprint at https://doi.org/10.1101/2020.02.07.938472 (2020).

    2. Czéh, B., Simon, M., Schmelting, B., Hiemke, C. & Fuchs, E. Astroglial plasticity in the hippocampus is affected by chronic psychosocial stress and concomitant fluoxetine treatment. Neuropsychopharmacology 31, 1616–1626 (2006).

    3. Musholt, K. et al. Neonatal separation stress reduces glial fibrillary acidic protein- and S100beta-immunoreactive astrocytes in the rat medial precentral cortex. Dev. Neurobiol. 69, 203–211 (2009).

    Read the original source
    Was this evaluation helpful?
  2. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

    Learn more at Review Commons


    Referee #2

    Evidence, reproducibility and clarity

    This paper addresses the challenge of automatic Sholl analysis of large dataset of multiple cell types such as neurons, astrocytes and microglia. The developed approach should improve the speed of morphology analysis compared to the state of the art without compromising on the accuracy. The authors present an interesting application of their tool to the morphological analysis of astrocytes following chronic antidepressant treatment. The paper is well written, and the tool presented could be beneficial for different applications and context. However, some major aspects should be addressed by the author concerning the description of the algorithms used and the quantification of the results.

    Major comments/Questions:

    1. In the Results and/or Methods sections, the author should better describe how their approach is different from state-of-the-art approaches in terms of algorithms used and how these difference impacts on the speed and accuracy of the analysis.
    2. Imaging was performed on a Zeiss LSM 880 airyscan confocal microscope. Is this method robust to other types of imaging techniques, other microscopes, variable levels of signal-to-noise? This should be tested and quantified.
    3. Manual cropping of the cells with ImageJ was used. However, in the methods section, the authors mention that other machine learning tools could be used for this task. Why were these tools not implemented in this paper in order to propose a fully automated analysis approach in combination with SMorph?
    4. In the methods section you state that cropped cells need to have a good contrast ratio for automated batch processing. Could you define what a good contrast ratio is and characterize the performance of your approach for different contrast ratio?
    5. It is mentioned that the analysis routine can be interupted by a cell with lower contrast ratio. This is a major drawback of the approach (but I think that it could be easily improved), as such interruptions may not be= practicable for many applications that need to rely on automated processing.
    6. Also, you should precise how the contrast ratio should be enhanced without modifying raw data in order to be processed with your approach. You suggest removing cells with lower contrast ratio from the analysis, but can this impact on the findings especially if some treatments impact on the detected fluorescence signal? Can you propose ways to improve the robustness of your approach to variable signal ratios?
    7. In the Results section, you describe the time necessary to perform different analysis. However, giving a total time in hours is not very informative as this will likely vary a lot depending on the size of the dataset, complexity of the images, etc. You should compare the average time per image for both methods and types of analysis.
    8. You state that for the number of branch point, the lower value of the measured slope when comparing SMorph and ImageJ was related to a constant overestimation of this parameter with ImageJ. How was this quantified? I think you should stress out more the comparison of both approaches with the manually annotated dataset.
    9. How can you explain the differences in the 2D-projected Area, total skeleton length and convex hull between SMorph and ImageJ, which all show a slope around 0.83? Can you quantify the performance of both methods by comparing them with your manually annotated dataset?
    10. In the introduction and discussion, you mention that you present a method that works on neurons, astrocytes and microglia. However, I don't see in the paper the comparison between the accuracy for all these cell types as you seem to have analyzed only the morphology of astrocytes.
    11. You mention that your method is quite sensitive to variation in contrast ratio. You should quantify the contrast ratio throughout the experiments and ensure that this is not biasing the SMorph analysis for some of the treatments.

    Minor Points :

    1. Precise the exact inclusion and exclusion criteria for Soma detection and rephrase: "The high-intensity blobs were detected as a position of soma..." & "Boundary blobs coming from adjacent cells...".
    2. Throughout the text, make sure to always refer to an analysis time per image or per cell and not only include absolute duration values without reference to the task at hand (e.g. in the discussion : SMorph took 40 second to complete the analysis... please state to which analysis you are exactly referring to and if applicable if it varies from cell to cell).
    3. When you state in the discussion that "Although some methods do allow Sholl analysis without manual neurite tracing, they still work on one cell at a time", please precise if the only aspect that is missing from this type of analysis is batch processing (looping through the data) or if there is a major obstacle to automate this technique. This is important a SMorph do proceed with the analysis one cell at a time but can work in a loop/batch.

    Significance

    This tool could very useful to researchers in the field of cellular neuroscience working with high-throughput analysis of microscopy data. The authors show some interesting improvements over existing methods. An improved quantitative characterization of the robustness of their approach would be of great importance to ensure the significance of this tool to a large community of researchers using different types of microscopes or studying different cell types.

    My expertise is in the field of optical microscopy and high-throughput (automated) image analysis for neuroscience. My expertise to evaluate the biological findings in this study is very limited.

    Read the original source
    Was this evaluation helpful?
  3. Note: This preprint has been reviewed by subject experts for Review Commons. Content has not been altered except for formatting.

    Learn more at Review Commons


    Referee #1

    Evidence, reproducibility and clarity

    First of all, I want to apologize the authors and editor for my delay. Secondly, for clarity, I want to disclose that I am the author of the Fiji's 'Sholl Analysis' plugin, that the authors cite extensively (Ferreira et al, Nat Methods, 2014).

    In this study, Sethi et al introduce a software tool - SMorph - for bulk morphometric analysis of neurons and glia (astrocytes and microglia), based on the Sholl technique. The authors compare it to the state-of-the-art in a series of validation experiments (stab wound injury), to conclude that it is 1000 times faster that existing tools. Empowered by the tool, the authors show that chronic administration of a tricyclic antidepressant (DMI) leads to structural changes of astrocytes in the mouse hippocampus. The paper is well written, the description of the tool is clear, and the authors make all of the source code available, as well as most of the imagery analyzed in the manuscript. The latter on its own, makes me really appreciative of the authors work.

    Major comments:

    A major strength of SMorph is that it leverages the Python ecosystem, which allows the authors to take advantage of powerful Python packages such as sklearn, without the need for external packages or tools. However, I have strong criticisms of the claims that are made in terms of speed and broad applicability of the software, including PCA.

    Speed:

    The claimed 1000x speed gain assumes, for the most part, that the processing in Fiji cannot be automated. This is false. I read the source code of SMorph, and with the exception of the PCA analysis, all aspects of SMorph can be automated in Fiji, using any of Fiji's scripting languages to make direct calls to the Fiji and Sholl Analysis plugin APIs (see https://javadoc.scijava.org/). Now, perhaps the authors do not have experience with ImageJ scripting, or perhaps we Fiji developers failed to provide clear tutorials and examples on how to do so. Or perhaps there is something inherently cumbersome with Fiji scripting that makes this hard (e.g., there is a current limitation with the ImageJ2 version of 'Sholl Analysis' that does not make it macro recordable). If such limitations do exist, it is perfectly fine to mention them, but do contact us at https://forum.image.sc if something is unclear. We do strive to make our work as re-usable as possible. Unfortunately, our own research does not always allow us the time required to do so. Case in point, our scripting examples (e.g., https://github.com/tferr/ASA/blob/master/scripting-examples/3D_Analysis_ImageStack.py) are not well advertised. That being said, I am still surprised that in their side-by-side comparisons the authors were not able to automate more of the processing steps (e.g., the ImageJ1 version of 'Sholl Analysis' remains fully functional and is macro recordable). If I misunderstood what was done, please provide the ImageJ macros you used. Also, I wanted to mention that i) semi-manual tracing with Simple Neurite Tracer (now "SNT") can also be scripted (see https://doi.org/10.1101/2020.07.13.179325); and that ii) Fiji commands and plugins can also be called in native Python using pyimagej (https://pypi.org/project/pyimagej/); see e.g., https://github.com/morphonets/SNT/tree/master/notebooks#snt-notebooks.
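    For readers weighing these scripting options, the per-ring intersection count at the heart of Sholl analysis is itself a short computation in plain Python. A simplified 2D sketch, which approximates intersections by labeling connected clusters of skeleton pixels on each sampling ring (an illustration, not the actual algorithm of either SMorph or the Fiji plugin):

```python
import numpy as np
from scipy import ndimage

def sholl_counts(skeleton, center, radii, ring_width=1.0):
    """Approximate Sholl intersections: for each radius, count
    8-connected clusters of skeleton pixels lying on that ring."""
    yy, xx = np.indices(skeleton.shape)
    dist = np.hypot(yy - center[0], xx - center[1])
    counts = []
    for r in radii:
        ring = np.abs(dist - r) < ring_width / 2
        _, n = ndimage.label(skeleton & ring, structure=np.ones((3, 3)))
        counts.append(n)
    return counts

# A '+'-shaped skeleton with four straight arms crosses every ring 4 times.
skel = np.zeros((21, 21), dtype=bool)
skel[10, 2:19] = True
skel[2:19, 10] = True
print(sholl_counts(skel, (10, 10), [3, 5, 7]))  # [4, 4, 4]
```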
Arguably, the fact that SMorph handles blob detection and skeletonization-based metrics directly is more advantageous from a user point of view. In Fiji, blob detection, skeletonization and Strahler analysis (https://imagej.net/Strahler_Analysis) of the skeleton are handled by different plugins. However, those are also fully scriptable, and interoperate well. The point that topographic skeletonization in Fiji can give rise to loops is valid; however, the authors should know that such cycles can be detected and pruned programmatically using, e.g., pixel intensities (see https://imagej.net/AnalyzeSkeleton.html#Loop_detection_and_pruning and the original publication, https://pubmed.ncbi.nlm.nih.gov/20232465/).
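The presence of such loops can also be checked programmatically from Python. A minimal sketch, using skimage and the Euler-characteristic relation (holes = components − Euler number) rather than AnalyzeSkeleton's intensity-based pruning:

```python
import numpy as np
from scipy import ndimage
from skimage.measure import euler_number
from skimage.morphology import disk, skeletonize

# Skeletonizing a filled annulus yields a closed curve, i.e., a cycle.
annulus = disk(10).astype(bool) & ~np.pad(disk(6).astype(bool), 4)
skel = skeletonize(annulus)

_, n_components = ndimage.label(skel, structure=np.ones((3, 3)))
n_holes = n_components - euler_number(skel, connectivity=2)
print(n_holes > 0)  # True: the skeleton contains at least one loop
```

    A skeleton flagged this way could then be routed to a pruning step before computing branch metrics.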

    Broad applicability:

    In our work, we made a significant effort to ensure that automated Sholl could be performed on any cell type: e.g., by supporting 2D and 3D images, by allowing repeated measures at each sampled distance, and by improving curve fitting. For linear profiles, we implemented the ability to perform polynomial fits of arbitrary degree, and implemented heuristics for 'best degree' determination. For normalized profiles, we implemented several normalizers, and alternatives for determining regression coefficients. We did not tackle segmentation of images directly (we did provide some accompanying scripts to aid users, see e.g., https://imagej.net/BAR) because in our case that is handled directly by ImageJ and Fiji's large collection of plugins. However, in SMorph, several of these parameters are hard-wired in the code. They may be suitable for the analyzed images, but they can hardly be generalized to other datasets. In detail: in terms of segmentation, SMorph is restricted to 2D images, scales data to a fixed 98th percentile, and uses a fixed auto-threshold method (Otsu). These settings are tethered to the authors' imagery. They will give poor results for someone else using a different imaging setup or staining method. In terms of curve fitting, the polynomial regression seems to be fixed at a 3rd-order polynomial, which will not be suitable for different cell types (not even for all cells of 'radial morphology').
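    On the curve-fitting point, exposing the degree as a parameter and choosing it data-driven is straightforward. A minimal sketch using AIC on ordinary least squares via np.polyfit (one possible heuristic, not the one used by either tool):

```python
import numpy as np

def best_poly_degree(radii, counts, max_degree=8):
    """Pick a polynomial degree for a Sholl profile by minimizing AIC."""
    n = len(radii)
    best = None
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(radii, counts, deg)
        rss = np.sum((np.polyval(coeffs, radii) - counts) ** 2)
        aic = n * np.log(rss / n) + 2 * (deg + 1)  # penalize extra terms
        if best is None or aic < best[0]:
            best = (aic, deg)
    return best[1]

# A noisy cubic-shaped profile selects a degree near 3 instead of a fixed one.
rng = np.random.default_rng(0)
r = np.linspace(1, 20, 40)
y = 0.02 * r**3 - 0.9 * r**2 + 9 * r + rng.normal(0, 0.5, r.size)
print(best_poly_degree(r, y))
```

    The same configurability argument applies to the percentile and the auto-threshold method: accepting them as function arguments, with the current values as defaults, would let users adapt the pipeline to their own imagery.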

    PCA:

    The idea of making PCA analysis of Sholl-based morphometry accessible to a broader user base has merit and is welcome. However, it has to be done carefully and in a self-critical manner, rather than offered as a black-box solution. E.g., in the text it is mentioned that 2 principal components are used; in the tutorial notebook, 3. Why not provide intuitive scree plots that empower users with the ability to criticize this choice? Also, it would be useful for users to understand which metrics correlate with each other, and their variable weights.
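    Both diagnostics are readily available from sklearn's PCA object. A generic sketch on synthetic data (the feature names in the comments are made up for illustration, not SMorph's actual metrics):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic "morphometric" table: 100 cells x 4 features, 3 correlated.
size = rng.normal(50, 10, 100)
X = np.column_stack([
    size,                                # e.g., convex hull area
    2 * size + rng.normal(0, 2, 100),    # e.g., total branch length
    0.5 * size + rng.normal(0, 2, 100),  # e.g., critical radius
    rng.normal(0, 1, 100),               # an uncorrelated metric
])

pca = PCA().fit(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)  # scree-plot data: variance per PC
print(pca.components_)                # loadings: each metric's weight per PC
```

    Plotting explained_variance_ratio_ against the component index gives the scree plot; components_ shows which metrics drive each principal component.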

    When we showcased the utility of PCA to distinguish closely related morphology groups (as in Type-1 and Type-2 PV neurons), we had been unable to base the distinction on individual metrics, at least not in a robust manner (see Fig. S4 in Ferreira et al, 2014). A minor conundrum of the paper is that it does not directly highlight the advantages of "analyses in a multidimensional space". The differences between groups in the stab wound and DMI assays are such that PCA is hardly needed: i.e., the differences depicted in Fig. 2F,G are already significant, and already convey changes in "size and branch complexity" (as per PC1). The same argument applies to Fig. 5. The paper would profit from having this discussed.

    Other:

    • All metrics and parameters should be expressed in physical units (e.g., "radii increasing by 3 pixels"; axes in Figures 2, 3, 5, and S2) so that readers can directly interpret them.
    • The paper would profit from the insights provided by Bird & Cuntz (https://pubmed.ncbi.nlm.nih.gov/31167149/)

    Minor comments:

    • Usage of RGB images (8-bit per channel) seems hardly justifiable. Aren't you losing dynamic range of the GFAP signal?
    • Please explain how MaxAbsScaler "prevents sub-optimal results"
    • The fact that automated batch processing can stall on a single bad 'contrast ratio' image seems rather cumbersome to deal with
    • Please add a license to https://github.com/parulsethi/SMorph/. Without it, other projects may shy away from using SMorph
    • "mounted on stereotax" should be "mounted on a stereotaxic device"?
    • Ensure Schoenen is capitalized
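    Regarding the MaxAbsScaler point, what the scaler concretely does: each feature is divided by its maximum absolute value, so features measured on very different scales (e.g., areas in thousands of pixels versus ratios in [0, 1]) all end up within [-1, 1], and no single metric dominates the PCA by sheer magnitude. A minimal illustration:

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler

# Two features on wildly different scales: an area-like metric and a ratio.
X = np.array([[12000.0, 0.30],
              [ 8000.0, 0.75],
              [15000.0, 0.50]])

Xs = MaxAbsScaler().fit_transform(X)
print(Xs.max(axis=0))  # [1. 1.]  -- each column now peaks at 1
```

    Stating this rationale in the text, rather than the bare claim that it "prevents sub-optimal results", would help users judge whether it suits their data.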

    Significance:

    I find the Desipramine results interesting. However, given the existing claims that DMI can modulate LTP, I regret that the authors did not look at structural modifications in hippocampal neurons (e.g., by performing the experiments in Thy1-M-eGFP animals). I understand, that doing so at this point would be a large undertaking.
