Rodent ultrasonic vocal interaction resolved with millimeter precision using hybrid beamforming

Curation statements for this article:
  • Curated by eLife


    eLife assessment

    This study demonstrates an important method that drastically improves the precision of ultrasound localization in interacting mice. The authors present convincing evidence of the usefulness of the method for quantifying vocal behavior in various situations and demonstrate an interesting vocal dominance phenomenon between males. This tool will be of great interest to all scientists interested in vocal behavior in small animals.

This article has been reviewed by the following groups


Abstract

Ultrasonic vocalizations (USVs) fulfill an important role in communication and navigation in many species. Because of their social and affective significance, rodent USVs are increasingly used as a behavioral measure in neurodevelopmental and neurolinguistic research. Reliably attributing USVs to their emitter during close interactions has emerged as a difficult, key challenge. If addressed, all subsequent analyses gain substantial confidence. We present a hybrid ultrasonic tracking system, Hybrid Vocalization Localizer (HyVL), that synergistically integrates a high-resolution acoustic camera with high-quality ultrasonic microphones. HyVL is the first to achieve millimeter precision (~3.4–4.8 mm, 91% assigned) in localizing USVs, ~3× better than other systems, approaching the physical limits (mouse snout ~10 mm). We analyze mouse courtship interactions and demonstrate that males and females vocalize in starkly different relative spatial positions, and that the fraction of female vocalizations has likely been overestimated previously due to imprecise localization. Further, we find that when two male mice interact with one female, one of the males takes a dominant role in the interaction both in terms of the vocalization rate and the location relative to the female. HyVL substantially improves the precision with which social communication between rodents can be studied. It is also affordable, open-source, easy to set up, can be integrated with existing setups, and reduces the required number of experiments and animals.

Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    This study demonstrates that a hybrid measurement method increases the resolution of mouse USV localization 3-fold. This increased resolution enables a revision of previous occurrence-frequency measures for female vocalizations and establishes the existence of vocal dominance in triadic interactions. The method is well described and its efficiency is carefully quantified. A limitation of the study is the absence of ground-truth data, which might eventually have been generated with miniaturized loudspeakers in mouse puppets. However, a careful error estimation partially compensates for the absence of these likely challenging calibrations. In addition, the conclusions take this uncertainty into account. The gain in accuracy with respect to previous methods is clear, and the impact of localisation accuracy on biological conclusions about vocalisation behavior is clearly exemplified. This study demonstrates the impact of the new method for understanding vocal interactions in the mouse model, which should be of tremendous interest to the growing community studying social interactions in mice.

    We have performed the requested additional ground-truth estimate using a movable miniature speaker; for more details, see point 2 of Reviewer 2 and the new supplementary figure.

    Reviewer #2 (Public Review):

    Past systems for identifying and tracking rodent vocalizations have relied on triangulating positions using only a few high-quality ultrasonic microphones. There are also large arrays of less sensitive microphones, called acoustic cameras, that don't capture the detail of the sounds but do more accurately locate the sound in 3D space. The key innovation here is that the authors combine these two technologies, primarily using the acoustic camera to accurately find the emitter of each vocalization and matching it to the high-resolution audio and video recordings. They show that this strategy (HyVL) is more accurate than other methods for identifying vocalizing mice and also has greater spatial precision. They go on to use this setup to make some novel and interesting observations. The technology and the study are timely, important, and have the potential to be very useful. As machine learning approaches to behavior become more widespread in use, it is easy to imagine this being incorporated and lowering entry costs for more investigators to begin looking at rodent vocalizations. I have a few comments.

    1. What is the relationship of the current manuscript to this: https://www.biorxiv.org/content/10.1101/2021.10.22.464496v1 which has a number of very similar figures and presents a SLIM-only method that reportedly has lower precision than the current HyVL approach. Is this superseded by the submitted paper?

    The referred manuscript (now published in Scientific Reports) is indeed related to the current work: the presented system is based on the integration of SLIM (based on 4 high-quality microphones) and beamforming (based on the 64-channel microphone array). The accuracy of SLIM is generally lower than that of HyVL, but it makes essential contributions to the overall accuracy of HyVL through the integration of the complementary strengths of the two methods/microphone arrays (see Fig. 3A, L-shape of errors). To our knowledge, SLIM was the most accurate previous technique (based on 4 microphones, see comparison in the Discussion), but HyVL exceeds it by a substantial margin. Some figures appear similar mostly due to related code in the underlying analysis pipeline and visualization scripts (e.g., the half-disc densities). However, the set of dyadic and triadic recordings was collected specifically for the present study, and all top-level analyses were performed separately. The single-mouse (C57Bl/6 WT) ground-truth dataset is shared between the two studies; in the SLIM paper, only the USM4/SLIM part was evaluated (leading to a correspondingly lower, single-animal accuracy).

    We felt that this level of detail would probably impede the reading of the manuscript; we have therefore added a subset of the above clarifications to the Methods and to the first mention of the other study.

    2. Can the authors provide any data showing the accuracy of their system in localizing sounds emitted from speakers as a function of position and amplitude? I am imagining that it would be relatively easy to place multiple speakers around the arena as ground truth emitting devices to quantify the capabilities of the system.

    Ground-truth data is critical for any meaningful comparison. First, we would like to highlight that we already provided ground-truth data in the previous version of the manuscript: in Fig. 3C, we analyzed (1) vocalization data from trials with just a single mouse as well as (2) vocalizations at times when all mice were far apart relative to the accuracy of HyVL (>100 mm, i.e., >25× its accuracy), where the chance of erroneous assignment is negligible. We think that these tests are the most relevant, as they are conducted with the relevant sounds, at their actual intensity, spectral profile and emitter acoustics.

    In addition, we have now conducted a series of tests with sounds produced by a miniature speaker placed in 25 different locations to demonstrate the lower bound of accuracy achievable with the system. The tests indicate an accuracy of MAE < 1 mm under these ideal conditions, i.e., without the absorption by the mouse bodies, the varying direction of emission of the mouse snout, varying intensity, varying spectral content, duration, etc. Exploring the dependence on all these parameters is interesting in itself but would require a dedicated study. The detailed experimental conditions and results are now provided in Supplementary Fig. 4, including a quantification of the dependence on amplitude.
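
    The accuracy measure used here, the mean absolute error (MAE) between estimated and true speaker positions, is straightforward to compute; the sketch below uses hypothetical grid positions and perturbations, not the actual measurements:

    ```python
    import math

    def mae_mm(estimates, ground_truth):
        """Mean absolute (Euclidean) error between estimated and true
        2D positions, both given in millimeters."""
        errors = [math.hypot(ex - tx, ey - ty)
                  for (ex, ey), (tx, ty) in zip(estimates, ground_truth)]
        return sum(errors) / len(errors)

    # Hypothetical speaker grid positions and slightly perturbed estimates
    truth = [(x, y) for x in (50, 150, 250) for y in (50, 150, 250)]
    estimates = [(x + 0.6, y - 0.3) for x, y in truth]
    print(mae_mm(estimates, truth))  # about 0.671 mm for this toy data
    ```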

    3. How is the system's performance affected by overlapping vocalizations? It might be useful to compare the accuracy of caller identification for periods where only one animal is calling at a time vs. periods where multiple animals are simultaneously calling.

    This is an excellent question. Our current code for detecting vocalizations cannot automatically determine whether one or multiple vocalizations are concurrently present. We have therefore manually checked all vocalizations for overlapping instances, including those in triadic recordings with two males, where overlaps would be expected to occur most frequently.

    We considered vocalizations to be overlapping if the overlapping constituent time-frequency traces did not form a harmonic stack. Overall, overlaps were surprisingly rare. We did find a couple of cases (<0.1%) where our detection algorithm produced a longer vocalization interval that contained multiple, differently shaped vocalization traces that, when re-analyzed in shortened time-frequency bins with beamforming, belonged to two different males. Note that beamforming is performed separately from the onset to the end of each vocalization, so the cumulative heatmap can change depending on these onset and end times, which are normally determined by our detection algorithm.

    However, although the identity of the assigned vocalizer could shift in these very rare cases depending on which time bin was re-analyzed, the system's localization performance remained in principle unaffected: as mentioned above, shorter time bins on non-overlapping parts correctly show the origin of the vocalizations in this case. A solution to this issue would therefore be a USV detection algorithm that detects the overlap based on the spectral shapes and parses the traces apart. During beamforming, each vocalization could then be localized separately by restricting the beamforming to the corresponding time and frequency range. Further, the analysis could be refined so that multiple salient peaks can be detected in the soundfield estimate. This would, however, substantially change the analysis approach: rather than a single estimate per USV, a sequence of soundfield estimates would have to be computed and later fused again. Since such a procedure uses less data per estimate, it also increases the possibility of false positives, which, given the very few temporal overlaps in the current data, would likely reduce the overall accuracy of the system. We therefore decided not to modify the algorithm in this direction, but we agree that ideally a joint approach, combining separation at the spectrogram and soundfield levels, should be pursued. For the present data, if a time window were analyzed such that the intensity map of the sound field contained multiple hotspots of approximately equal magnitude, the USV would likely remain unassigned, because the within-soundfield uncertainty would be higher than for a single peak, and this would reduce the MPI. However, given the rarity of these cases in our dataset, we do not think that their exclusion would change the results appreciably. This information was added as a paragraph to the Discussion.
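
    For intuition, acoustic-camera localization of this kind can be sketched as delay-and-sum beamforming over a candidate grid; the toy example below is not the authors' Cam64 implementation (the geometry, sampling rate, and test signal are all assumptions), but it localizes a simulated wideband chirp and illustrates why beamforming can be restricted to any time window of interest:

    ```python
    import math

    C = 343.0        # speed of sound in air (m/s)
    FS = 250_000     # sampling rate (Hz)
    N = 1024         # samples per channel (~4 ms)

    # Hypothetical geometry: four microphones at the corners of a 0.5 m platform
    MICS = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5), (0.5, 0.5)]
    SRC = (0.30, 0.12)  # simulated emitter position

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def chirp(u):
        """Wideband test signal sweeping ~40-90 kHz, roughly the USV band."""
        rate = 50_000 / (N / FS)  # sweep rate (Hz/s)
        return math.sin(2 * math.pi * (40_000 * u + 0.5 * rate * u * u))

    # Each microphone records the chirp delayed by its propagation time
    recordings = [[chirp(i / FS - dist(m, SRC) / C) for i in range(N)] for m in MICS]

    def steered_power(p):
        """Delay-and-sum: advance each channel by its (relative) propagation
        delay to candidate point p and return the power of the coherent sum."""
        shifts = [round(dist(m, p) / C * FS) for m in MICS]
        base = min(shifts)
        rel = [s - base for s in shifts]     # relative delays in samples
        span = N - max(rel)
        acc = 0.0
        for i in range(span):
            s = sum(recordings[k][i + rel[k]] for k in range(len(MICS)))
            acc += s * s
        return acc / span

    # Scan a 20 mm grid; the peak of the power map is the location estimate
    grid = [(x / 100, y / 100) for x in range(0, 51, 2) for y in range(0, 51, 2)]
    best = max(grid, key=steered_power)
    print(best)
    ```

    Because `steered_power` only ever touches the sample window it is given, restricting the analysis to a sub-interval of a detected USV (as discussed above for overlaps) amounts to running the same computation on a shorter buffer.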

    It is worth noting that HyVL is very robust: there were a number of cases (<5%) where environmental dampening in combination with harmonic stacking produced interesting time-frequency traces in some of the USM4 microphones, but our system had no issue spatially localizing these seemingly smeared vocalization traces. We provide a few examples of this kind in a short video (see Rebuttal Video 2 and the legend at the bottom of this document), where the overlap is also reflected in the intensity map of the sound field, overlaid onto the platform.

    4. Can the authors comment on how sound shadows cast by animals standing between the caller and a USM4 affect either the accuracy of identification or the fidelity of the vocal recording?

    An important point to raise. Sound scattering and dampening caused by conspecifics of the vocalizing animal can impede the accuracy of any sound localization system but unfortunately cannot be avoided in a social setting. To address this issue, we raised all USM4 microphones ~12 cm above the interaction platform to minimize the instances of sound being blocked by the mice. Further, the Cam64 device should be largely unaffected by sound shadows, as it is centrally located above the platform. We have added a modified version of the above comment to the Discussion under the heading "Current limitations and future improvements of the presented system".

    5. I'm a bit confused about how the algorithm uses the information from the video camera. Reading through the methods, it seems like they primarily calculate competing location estimates by the two types of microphone data and then make sure that a mouse is in close proximity to one location, discarding the call if there isn't. Why did the authors choose this procedure rather than use the tracked position of the snouts as constrained candidate locations and use the microphone data to arbitrate between them? Do they think that their tracking data are not reliable or accurate enough?

    Thanks for this important suggestion, which we grappled with a lot during the analysis. First of all, the visual tracking data, in particular the manual data, is in our opinion (based on human visual identification) near perfect within the limits of the video resolution (pixel resolution = 0.8 mm), i.e., on the order of 1-2 mm, and is therefore not the source of any unattributable vocalizations. If we understand the reviewer correctly, then we indeed perform the attribution as they indicate, based on the tracked snouts of all mice, specifically by measuring the MPIs of both acoustic location estimates for all mice and then choosing the most reliable one. The attributions can be grouped into 3 cases: (i) estimated origin close to one snout, with snouts rather far apart; (ii) estimated origin close to one snout, with snouts close together; and (iii) estimated origin not close to either snout. Case (i) is easy to address, (ii) is appropriately handled by the mouse probability index, but (iii) is tricky. Since the vocalization has to come from one of the mice, this case already indicates that the localization is not working well. We therefore found it prudent (similar to Neunuebel et al., 2015) not to assign in these cases. Interestingly, the MPI is not useful here: due to the exponential dependence of the normal density on distance, a case with a distance of 50 mm to one snout and 60 mm to the other could lead to an MPI close to 1, which is likely not trustworthy. We have described this in the Methods as follows:

    "This distance threshold mainly serves to compensate for a deficiency of the 𝑀𝑃𝐼: if all mice are far from the estimate, all 𝑃𝑘 are extremely small, however, the 𝑀𝑃𝐼𝑘 will often exceed 0.95."
    Due to the inherent limit on localizing very quiet, short USVs by any system, we think this kind of selection (introduced originally by Neunuebel et al., 2015) is a valuable and necessary step in the processing to avoid misattributions (which are of course already substantially reduced through HyVL).
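
    As an illustration of this deficiency, a minimal sketch of a normalized-Gaussian MPI (the width parameter below is a stand-in, not the paper's fitted value) shows how the nearer of two distant snouts still receives an MPI near 1, even though both likelihoods are astronomically small:

    ```python
    import math

    SIGMA = 5.0  # assumed localization scale (mm); illustrative only

    def mpi(distances_mm):
        """Mouse Probability Index: Gaussian likelihood of each
        snout-to-estimate distance, normalized across mice.
        Returns (normalized MPIs, raw likelihoods)."""
        p = [math.exp(-d * d / (2 * SIGMA ** 2)) for d in distances_mm]
        total = sum(p)
        return [pi / total for pi in p], p

    # Both snouts close to the estimate: the MPI meaningfully ranks the mice
    mpi_near, p_near = mpi([8.0, 12.0])

    # Both snouts far away: each likelihood is vanishingly small, yet the
    # nearer mouse's MPI still approaches 1 -- hence the distance threshold
    mpi_far, p_far = mpi([50.0, 60.0])
    print(mpi_near, mpi_far, max(p_far))
    ```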

    6. I guess the authors have code that we can run, but I couldn't access it. The manuscript describes the algorithms and equations that are used to calculate the location, but this doesn't really give me a feel for how it works. If you want to have the broadest impact possible, I think you would do well to make the code user-friendly (maybe it is, I don't know). In pursuit of that goal, I would suggest that the authors devote some of the paper to a guided example of how to use it.

    While the code was made available to the reviewers via the link at the beginning of the manuscript (p. 2, before the abstract), we completely agree that this method of distribution is not very accessible. We have therefore created a publicly available GitHub repository (https://github.com/benglitz/HyVL) which hosts the code and details its use on the basis of a sample dataset (available to the reviewers via the repository link, and later to the public under https://doi.org/10.34973/7kgc-ta72). While we provide a sample video and analysis workflow there, our data analysis pipeline is quite integrated, and other labs will likely use different pipelines. We have therefore tried to make the core functions independent of our pipeline and thus easy for others to integrate into their own analysis pipelines.

    Reviewer #3 (Public Review):

    The present manuscript describes a new method to identify the emitter of ultrasonic vocalisations during social interactions between 2 or 3 mice. The method combines two technologies (an "acoustic camera" and a set of four microphones) and succeeds in increasing the spatial precision and the attribution of USV emission to one of the mice. The manuscript describes the characteristics and advantages of each method and the advantages of using both to optimize the identification of the USV emitter. The authors used the method to confirm that females also vocalise during male-female interactions and that females emit USVs mostly during nose-nose contact, while this was not the case for males. Interestingly, the authors identified that the vocal behaviour of two competing males was strongly asymmetric when facing a female. This was not the case for two females facing one male.

    The method is really promising since the identification of the emitter of USVs during mouse social interactions is a necessary step to speed up our understanding of this communication modality. The increase in spatial precision and in the proportion of attributed vocalisations is non-negligible and will be of great utility in the future.

    We would like to thank the reviewer for this positive perspective on the future utility of our system.

    Generally, the statistical analyses should be adjusted. Indeed, the statistical analyses do not consider the fact that the same individuals were recorded several times (if we understood the methods correctly). Each point was considered independent (in non-parametric Wilcoxon tests), while this is not the case given the repetitions with the same individuals (the number of repeated encounters per individual should, by the way, be given in the methods section). We strongly recommend revising the statistical analyses of the results in Figures 4 and 5. In addition, it could be interesting to check whether the vocal behaviour is stable within each individual (i.e., whether a male that vocalises frequently in one situation always vocalises frequently in other situations).

    We generally agree with this suggestion: in order to properly conduct the analysis for individuals as you suggest, a balanced dataset should be used. We had initially collected such a balanced dataset, which was previously not detailed in the manuscript, as the focus was on USV localization/attribution and hence only the recordings containing USVs were analyzed (this is now detailed at the beginning of the Results and Methods). However, the probability of a recording containing any vocalizations is low: in our balanced set, only 23/112 recordings contained vocalizations. We therefore collected additional recordings with the best vocalizers, which created the previously analyzed set of 83 recordings containing USVs recorded with all microphones. This dataset is therefore dominated by recordings from mice that are active vocalizers. While this does not raise any issue for the estimation of the accuracy of the method (Figure 3) or the female vocalizations (Figure 4, because recordings were always randomized across female mice), it precludes an encompassing analysis of individual differences in Figure 5, i.e., the dyadic-triadic comparison. In the new Figure 5, we address the reviewer's question for the dyadic recordings, finding that the current set of recordings does not provide sufficient evidence that individual male mice had significantly different vocalization rates. We would, however, like to point out that this is likely a consequence of the n=4 recordings compared here. For the female mice, we also did not find differences in vocalization rates, which is based on n=14 recordings and thus a more reliable result (p=0.16, 1-way ANOVA with factor individual).

    For the triadic recordings, however, due to a limitation in the experiment execution, we unfortunately do not have the complete information available at the experiment level: the video stream was accidentally started after all mice had been placed on the platform, and since same-sex animals are not visually separable (while the female mice are separable from the males based on a slightly shaved region on their head), we cannot completely assess this question in triadic recordings with the available data. When additionally including the triadic recordings and assuming a single vocalizer (combining all male USVs; see below for why the males could not be assigned in the triadic condition), the male individual comparison can be approximately performed with n=8 recordings, and then the dependence on individual becomes borderline significant (p=0.028, 2-way ANOVA with factors individual and condition).

    For the comparison of vocalization rates in the previous Figure 5 that the reviewer was referring to, we cannot perform a rigorous analysis at the individual level, due to the lack of balance. While we thus agree that differences between individual mice can contribute to the observed differences, we do not think that this would change the conclusion that one of the mice dominates the vocal emissions. If the reviewers agree, we would thus keep Figure 6 (old Fig. 5) and the new Figure 7 (behavioral confirmation of the dominant/subordinate division) as part of the manuscript, with a clear caution about the possible contribution of individual differences to the observed differences. If the reviewers find it inappropriate to retain the results based on the unbalanced dataset, all results after Figure 5 could also be excluded (although we would find this unfortunate, given the additional time and effort we have invested in them).

    It is not easy to understand the rationale behind testing animals in pairs and in triads from the beginning of the manuscript. The authors should better introduce this aspect in the manuscript, especially given the fact that biological results deal with this aspect in Figure 5. The authors might strengthen the parts of the biological results extracted from their new method.

    Thank you for pointing out the need for clarification regarding the rationale behind testing animals in pairs and in triads. Courtship interactions are of interest to many fields, e.g., research on neurodevelopmental disorders (3,4), precisely because they are particularly vocal and social. Due to the natural competitiveness between mice during courtship interactions, high accuracy is particularly beneficial here because it allows disentangling USVs at close distances. We adapted the introduction to better reflect this reasoning and included an extra paragraph both in the introduction and where the biological results from old Fig. 5 / new Fig. 6 are summarized.

    More specifically, the fact that one male takes over the vocal behaviour within a triad is of high interest. Nevertheless, some behavioural data would be needed to strengthen these findings.

    We agree that this is an interesting finding and that additional behavioral analysis is useful to complement it. To arrive at this analysis, we performed all-frame, 3-animal tracking on the 14 triadic recordings with two males. This required switching to skeleton tracking with SLEAP (5), in addition to manual post-processing to ensure that no identity switches occurred. In each recording, the dominant male was defined as the one that emitted more vocalizations, and the vocalization-independent spatial interaction histogram was then computed, similar to the ones in Fig. 4, but now separating the dominant and the subordinate males (see new Figure 7). The results are consistent with the most typical male vocalization location, in proximity to the female abdomen: the dominant male's spatial interaction histogram (Fig. 7A) was more clearly peaked at the location of the female abdomen very close to the male's snout than the subordinate male's histogram (Fig. 7B), which shows up very clearly in the difference between the normalized histograms (Fig. 7C). Significance analysis was performed using 100× bootstrapping on the relative spatial positions to estimate p=0.99 confidence bounds around the histograms of the dominant and subordinate males, respectively. Significance at a level of p<0.01 highlights multiple relative spatial positions (Fig. 7D), including the one proximal to the snout, which has the largest absolute difference (Fig. 7C). Note that these analyses were conducted on the non-balanced dataset, which contained enough vocalizations to determine the dominant male based on vocalization rates; individual traits of certain animals thus remain a possible confound.
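
    The bootstrap procedure can be sketched in one dimension as follows (toy data, not the actual relative-position histograms): resample the positions with replacement, recompute the normalized histogram each time, and take per-bin percentile bounds; bins whose bands do not overlap between the two groups are flagged as significant:

    ```python
    import random

    random.seed(1)

    def bootstrap_bounds(samples, nbins, n_boot=100, alpha=0.01):
        """Percentile confidence bounds on a normalized histogram of
        values in [0, 1), from n_boot resamples with replacement."""
        n = len(samples)
        boots = []
        for _ in range(n_boot):
            resample = [random.choice(samples) for _ in range(n)]
            counts = [0] * nbins
            for s in resample:
                counts[min(int(s * nbins), nbins - 1)] += 1
            boots.append([c / n for c in counts])
        k_lo = int((alpha / 2) * n_boot)
        k_hi = n_boot - 1 - k_lo
        lo, hi = [], []
        for b in range(nbins):
            col = sorted(boot[b] for boot in boots)
            lo.append(col[k_lo])
            hi.append(col[k_hi])
        return lo, hi

    # Toy data in [0, 1): "dominant" mass concentrated low, "subordinate" flat
    dom = [random.random() ** 2 for _ in range(500)]
    sub = [random.random() for _ in range(500)]
    dom_lo, dom_hi = bootstrap_bounds(dom, nbins=5)
    sub_lo, sub_hi = bootstrap_bounds(sub, nbins=5)

    # A bin differs significantly if the two confidence bands do not overlap
    signif = [dom_lo[b] > sub_hi[b] or sub_lo[b] > dom_hi[b] for b in range(5)]
    print(signif)
    ```

    The same idea extends directly to 2D relative-position histograms by resampling (x, y) pairs instead of scalars.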

    A small proportion of USVs was not assigned. The authors did not discuss the potential reason for this failure (Were the USVs too soft? Did they include specific acoustic characteristics that render them difficult to localise?). These points could be of interest when testing other mouse strains or other species.

    Good point; we agree that it is interesting to know the reasons for failure. As is often the case, there is not a single property that makes localization hard; multiple factors contribute. In the SLIM paper, we already identified duration and intensity as important contributors (Fig. 3E/F), and in the speaker test (see new Supplementary Fig. 4) we again demonstrated the influence of intensity. In addition, frequency bandwidth and acoustic occlusion are two other main contributors, each of which influences the availability of the information/signal-to-noise ratio at the microphones:

    • Frequency bandwidth: in very narrowband signals there are more opportunities for phase ambiguity, in particular for very high-frequency signals. These ambiguities are reduced or avoided for more wideband signals.

    • Acoustic occlusion: since ultrasonic sounds can be quite directional, if an animal vocalizes away from a microphone, additionally putting its body in the path of the sound, the intensity at that microphone can be reduced to a level where its signal-to-noise ratio is insufficient for it to contribute. This mostly affects the 4 microphones surrounding the platform, while the overhead Cam64 will likely not be affected by acoustic occlusion in the plane.
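
    To make the bandwidth point concrete, a small back-of-the-envelope sketch (with an assumed, not actual, microphone spacing) counts how many time-difference-of-arrival candidates are consistent with a single measured phase for a pure tone; wideband signals resolve this ambiguity because only the true lag aligns the phase at all frequencies simultaneously:

    ```python
    import math

    C = 343.0  # speed of sound in air (m/s)

    def ambiguous_lags(freq_hz, mic_spacing_m):
        """For a pure tone, any two time lags differing by a whole period
        produce the same inter-microphone phase, so all lags tau + k/f with
        |lag| <= spacing/c are indistinguishable. Returns their count."""
        max_lag = mic_spacing_m / C     # geometric limit on the true TDOA
        period = 1.0 / freq_hz
        return 2 * math.floor(max_lag / period) + 1

    # Hypothetical 0.5 m spacing between two microphones across the platform
    print(ambiguous_lags(75_000, 0.5))  # 219 candidates for a 75 kHz tone
    print(ambiguous_lags(300, 0.5))     # 1: a low-frequency tone is unambiguous
    ```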

    We have added a brief version of this explanation to the Discussion under the heading "Current limitations and future improvements of the presented system".
