DeePosit: an AI-based tool for detecting mouse urine and fecal depositions from thermal video clips of behavioral experiments


Abstract

In many mammals, including rodents, social interaction is accompanied by active urination (micturition) for spatial scent marking. Urine and fecal deposits have been shown to contain multiple chemosensory cues carrying information about the identity, strain, and social rank, as well as the physiological and hormonal state, of the individual. Moreover, scent marking has been shown to depend on social context, internal state, and experience. Thus, analyzing scent-marking activity during social interactions may contribute to understanding the structure of mammalian social interactions in health and disease. So far, however, such analysis has faced multiple technical obstacles. Chiefly, the commonly used void spot assay relies on detecting urine traces left on a filter paper on which the social interaction occurred; it therefore lacks temporal information and is prone to artifacts such as urine smearing. Recently, several studies have employed thermal imaging for spatiotemporal analysis of scent marking, as urine and feces are deposited at body temperature and cool rapidly afterward. Until now, however, this analysis has been done manually, making it time-consuming and prone to observer bias. Here, we combine thermal imaging, computer vision tools, and an open-source algorithm incorporating a transformer-based video classifier to automatically detect and classify urine and fecal deposits made by male and female mice during several social behavior tests. We found distinct dynamics for urine and fecal deposition in a test- and sex-dependent manner, suggesting two distinct processes of scent marking in mice. The method and tools presented here allow researchers an easy, efficient, and unbiased spatiotemporal analysis of scent marking during behavioral experiments.

Article activity feed

  1. eLife assessment

    This manuscript presents a valuable machine-learning-based approach to the automated detection of urine and fecal deposits by rodents, key ethological behaviors that have traditionally been very poorly studied. The strength of evidence for their claim, however, that the method provides "easy, efficient, and unbiased spatiotemporal analysis of scent marking during behavioral experiments" is incomplete. In particular, there were concerns about the generalizability of the approach, the relatively limited detection capabilities of the method, and a lack of rationale for specific design choices. This manuscript could be of interest to researchers in animal behavior, neuroscience, and automated animal tracking.

  2. Reviewer #1 (Public Review):

    Summary:
    The manuscript provides a novel method for the automated detection of scent marks from urine and feces in rodents. Given the importance of scent communication in these animals and their role as model organisms, this is a welcome tool.

    Strengths:
    The method uses a single video stream (thermal video) to allow for the distinction between urine and feces. It is automated.

    Weaknesses:
The accuracy level shown is lower than may be practically useful for many studies. The detection accuracy for urine is about 80%. This is understandable given the variability of urine deposition, but it makes it challenging to know whether the data are accurate. If the same kinds of mistakes are maintained across many conditions, it may be reasonable to use the software (i.e., if all conditions are under- or over-counted to the same extent). Differences in deposition on the scale of 20% would be challenging to be confident in with the current method, though differences of that magnitude may be of biological interest. Understanding how well the data maintain the same relative ranking of individuals across various timing and spatial deposition metrics may help provide further evidence for the utility of the method.

  3. Reviewer #2 (Public Review):

    Summary:
The authors built a tool to extract the timing and location of mouse urine and fecal deposits in their laboratory setup. They indicate that they are happy with the results they achieved in this effort.

The authors note that urine is thought to be an important part of an animal's behavioral repertoire and communication toolkit, so methods that make studying these dynamics easier would be impactful.

    Strengths:
With the proposed method, the authors are able to detect 79% of the urine and 84% of the feces present, in a mostly automated way.

    Weaknesses:
The method proposed has a large number of design choices across two detection steps that aren't investigated, i.e., do other design choices make the performance better, worse, or the same? Are these choices robust across a range of laboratory environments? How much better are the demonstrated results compared to a simple object detection pipeline (e.g., Faster R-CNN or YOLO on the raw thermal images)?

    The method is implemented with a mix of MATLAB and Python.

One proposed reason why this method is better than a human annotator is that it "is not biased." While the authors may mean that it is not influenced by what the researcher wants to see, the model they present is still statistically biased, since each object class has a different recall score. This wasn't investigated. In general, there was little discussion of the quality of the model. Precision scores were not reported. Is a recall value of 78.6% good for the types of studies the authors and others want to carry out? What are the implications of using the resulting data in a study? How do these results compare to the data that would be generated by a "biased" human?

Five of the six figures in the paper relate not to the method but to results from a study whose data were generated by the method. This makes the paper, which based on its title is about the method, much longer and more complicated than if it focused on the method. Also, even in the context of the experiments, there is no discussion of the implications of analyzing data generated by a method with precision and recall values of only 70-80%. Surely this noise affects how to correctly calculate p-values, etc. Instead, the authors seem to proceed as if the generated data were simply correct.

  4. Reviewer #3 (Public Review):

    Summary:
    The authors introduce a tool that employs thermal cameras to automatically detect urine and feces deposits in rodents. The detection process involves a heuristic to identify potential thermal regions of interest, followed by a transformer network-based classifier to differentiate between urine, feces, and background noise. The tool's effectiveness is demonstrated through experiments analyzing social preference, stress response, and temporal dynamics of deposits, revealing differences between male and female mice.

    Strengths:
The method effectively automates the identification of deposits.
    The application of the tool in various behavioral tests demonstrates its robustness and versatility.
    The results highlight notable differences in behavior between male and female mice.

    Weaknesses:
    The definition of 'start' and 'end' periods for statistical analysis is arbitrary. A robustness check with varying time windows would strengthen the conclusions.
    The paper could better address the generalizability of the tool to different experimental setups, environments, and potentially other species.
    The results are based on tests of individual animals, and there is no discussion of how this method could be generalized to experiments tracking multiple animals simultaneously in the same arena (e.g., pair or collective behavior tests, where multiple animals may deposit urine or feces).

  5. Author response:

    We want to thank the reviewers for their constructive feedback.

    General

The recall values of our method range from 78.6% for all urine cases to 83.3% for feces (not 70-80%, as stated by Reviewer #2), with a mean precision of 85.6%. This is similar to other machine learning-based methods commonly used for the analysis of complex behavioral readouts. For example, in the paper presenting DeepSqueak for the analysis of mouse ultrasonic vocalizations (Coffey et al., DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations. Neuropsychopharmacology 44, 859–868 (2019). https://doi.org/10.1038/s41386-018-0303-6), the recall values reported for DeepSqueak, MUPET, and Ultravox (Fig. 2c, f) are very similar to those of our method.
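    For clarity, recall and precision here are the standard event-level metrics. Below is a minimal illustration; the function names and example counts are ours, chosen only to reproduce the quoted percentages, and are not the actual test-set numbers:

    ```python
    def recall(tp: int, fn: int) -> float:
        """Fraction of true depositions that were detected (misses lower it)."""
        return tp / (tp + fn)

    def precision(tp: int, fp: int) -> float:
        """Fraction of reported detections that were real (false alarms lower it)."""
        return tp / (tp + fp)

    # Illustrative counts only: 33 detected out of 42 true events gives
    # recall(33, 9) ~= 0.786, i.e., the 78.6% figure quoted above.
    ```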

We have analyzed and reported all the types of errors made by our method, most of which are technical. For example, a deposition that overlaps the mouse blob for too long before cooling down will be associated with the mouse and therefore will not be detected (a “miss” event). Such technical errors are not expected to create a bias toward a specific biological condition and, hence, should not interfere with the use of our method. A video showing all of the mistakes made by our algorithm on the test set was submitted (Figure 2-video 1).

Below, we relate to the specific points and describe our plan to revise the manuscript accordingly.

    Detection accuracy

a. It should be noted that when only large urine spots are considered, our algorithm achieved 100% correct classification (Figure 2, supplement 1, panel b). However, small urine deposits appear very similar to feces in the thermal image. In fact, if the feces are not shifted, discrimination can be quite challenging even for human annotators. To demonstrate the accuracy of the proposed method relative to human annotators, we plan to compare its results with those of a second human annotator.

b. As part of the revision, we plan to test general machine learning-based object detectors such as Faster R-CNN and YOLO (as suggested by Reviewer #2) and compare them with our method (see the sketch after this list).

c. To check whether our method may introduce bias into the results, we plan to check whether the errors are distributed evenly across time, space, and sexes.
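    As an illustration of what such an object-detection baseline could look like, here is a minimal sketch using the ultralytics YOLO package on exported thermal frames. The dataset config file and class names are hypothetical placeholders, not part of our pipeline:

    ```python
    from ultralytics import YOLO

    # Minimal YOLO baseline on exported thermal frames (e.g., 8-bit PNGs).
    # "deposits.yaml" is a hypothetical dataset config listing train/val
    # image folders and the class names ("urine", "feces").
    model = YOLO("yolov8n.pt")  # small pretrained model as a starting point
    model.train(data="deposits.yaml", epochs=100, imgsz=640)

    # Inference on a single frame; each box carries a class, a confidence,
    # and pixel coordinates that can be compared against our detections.
    results = model.predict("frame_001234.png", conf=0.25)
    for box in results[0].boxes:
        label = results[0].names[int(box.cls)]
        print(label, float(box.conf), box.xyxy.tolist())
    ```

    Note that a per-frame detector of this kind has no notion of the cooldown dynamics our method exploits, which is one of the comparisons we plan to report.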

    Design choices

    (A) The preliminary detection algorithm has several significant parameters. These are:

    a. Minimal temperature rise for detection: 1.1°C rise during 5 sec.

    b. Size limits of the detection: 2 - 900 pixels.

    c. Minimal cooldown during 40 sec: 1.1°C and at least half the rise.

    d. Minimal time between detections in the same location: 30 sec.

We chose low thresholds for the preliminary detection to allow detection of very small urine spots and to minimize the number of “miss” events, relying on the classifier to robustly reject false alarms. Indeed, we achieved a low miss rate: 5 miss events in the entire test set (1 miss event per ~90 minutes of video). We attribute these 5 “miss” events to partial occlusion of the detection by the mouse.

To adjust the preliminary detection parameters to a new environment, one will need to calibrate them in their own setup. Mainly, the size of a detection depends on the resolution of the video, and the cooldown rate may be affected by the floor material as well as the room temperature.
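    To make these parameters concrete, here is a minimal sketch of how such a preliminary detector might be structured, with the four parameters above exposed as constants that can be recalibrated for a new setup. The function and variable names are ours, not the actual implementation:

    ```python
    import numpy as np
    from scipy import ndimage

    RISE_C, RISE_S = 1.1, 5      # minimal rise (deg C) within 5 s
    MIN_PX, MAX_PX = 2, 900      # blob size limits (pixels)
    COOL_C, COOL_S = 1.1, 40     # minimal cooldown (deg C) within 40 s
    REFRACTORY_S = 30            # min. time between detections per location

    def preliminary_detections(frames, fps):
        """frames: (T, H, W) float array of temperatures in deg C.
        Returns a list of (frame_index, centroid) candidate events."""
        lag = int(RISE_S * fps)
        last_hit = {}            # centroid -> last detection time (s)
        events = []
        for t in range(lag, frames.shape[0]):
            # Pixels that warmed by >= RISE_C over the last RISE_S seconds.
            hot = (frames[t] - frames[t - lag]) >= RISE_C
            labels, n = ndimage.label(hot)
            for blob in range(1, n + 1):
                mask = labels == blob
                if not (MIN_PX <= mask.sum() <= MAX_PX):
                    continue
                # Cooldown: the blob must drop by >= COOL_C, and by at
                # least half of its initial rise, within COOL_S seconds.
                end = min(t + int(COOL_S * fps), frames.shape[0] - 1)
                peak = frames[t][mask].mean()
                trough = frames[t:end + 1][:, mask].min(axis=0).mean()
                rise = peak - frames[t - lag][mask].mean()
                if (peak - trough) < max(COOL_C, 0.5 * rise):
                    continue
                # Refractory period, keyed to the rounded centroid for
                # simplicity; a real implementation would match locations
                # by spatial proximity rather than exact coordinates.
                yx = tuple(int(c) for c in ndimage.center_of_mass(mask))
                if (t / fps) - last_hit.get(yx, -np.inf) < REFRACTORY_S:
                    continue
                last_hit[yx] = t / fps
                events.append((t, yx))
        return events
    ```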

    We plan to explore the robustness of these parameters in our setup and report the influence on the accuracy of the preliminary algorithm.

(B) We chose to feed the classifier with 71 seconds of video (11 seconds before the event and 60 seconds after it), as we wanted the classifier to capture the moment of the deposition and the cooldown process, as well as urine smearing or feces shifting, which might provide additional cues for classification. In the revised paper we plan to report accuracy when using a shorter video for classification.
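    As a minimal illustration, a 71 s clip around a detection at frame event_frame could be cut as follows; the helper name and the zero-padding at the edges of the recording are our assumptions for the sketch:

    ```python
    import numpy as np

    PRE_S, POST_S = 11, 60   # seconds before/after the detected event

    def event_clip(frames, event_frame, fps):
        """frames: (T, H, W) thermal video. Returns the fixed-length
        window around event_frame, zero-padded at recording edges."""
        t0 = event_frame - int(PRE_S * fps)
        t1 = event_frame + int(POST_S * fps)
        pad_before = max(0, -t0)
        pad_after = max(0, t1 - frames.shape[0])
        clip = frames[max(t0, 0):min(t1, frames.shape[0])]
        if pad_before or pad_after:
            clip = np.pad(clip, ((pad_before, pad_after), (0, 0), (0, 0)))
        return clip  # input to the transformer-based video classifier
    ```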

Generalizability

a. In the revised version, we plan to report the accuracy of the method on a different mouse strain (C57) and a different arena color (a white arena instead of a black one).

    Statistics

a. In the revised paper, we will explain why we chose each time window for analysis. We will also report statistics for different time windows, as suggested by Reviewer #3.

b. Unlike Reviewer #2, we do not think that the small difference in recall between urine and feces (78.6% vs. 83.3%, respectively) creates a bias between them. Moreover, we do not compare the urine rate to the feces rate.

c. In the revised manuscript, we will explicitly report the precision scores, although they already appear in Fig. 2-Supplement 1b.