ATRAP - Accurate T cell Receptor Antigen Pairing through data-driven filtering of sequencing information from single-cells

Curation statements for this article:
  • Curated by eLife


    eLife assessment

    This paper is of interest to immunologists conducting single-cell analyses of T-cell recognition. It provides a means of curating datasets to ensure T cell-antigen pairs are identified. The data generated through this method often suffers from a relatively high background, so the authors present a computational approach to enhance the signal-to-noise of this type of analysis. At this stage, it is unclear if the thresholds and filtering steps described by the authors can be generally applied to other datasets of different qualities than the one used here.


Abstract

Novel single-cell based technologies hold the promise of matching T cell receptor (TCR) sequences with their cognate peptide-MHC recognition motif in a high-throughput manner. Parallel capture of TCR transcripts and peptide-MHC is enabled through the use of reagents labeled with DNA barcodes. However, analysis and annotation of such single-cell sequencing (SCseq) data is challenged by dropout, random noise, and other technical artifacts that must be carefully handled in the downstream processing steps.

We here propose a rational, data-driven method termed ATRAP (Accurate T cell Receptor Antigen Pairing) to deal with these challenges by filtering away likely artifacts, enabling the generation of large sets of TCR-pMHC sequence data with a high degree of specificity and sensitivity, and outputting the most likely pMHC target per T cell. We have validated this approach across 10 different virus-specific T cell responses in 16 healthy donors. Across these samples, we have identified up to 1494 high-confidence TCR-pMHC pairs derived from 4135 single cells.

Article activity feed

  1. Author Response

    Reviewer #1 (Public Review):

    Single-cell sequencing technologies such as 10x, in conjunction with DNA-barcoded multimeric peptide-MHCs (pMHCs), have enabled high-throughput pairing of T cell receptor transcripts with antigen specificity. However, the data generated through this method often suffers from relatively high background due to ambient DNA barcodes and TCR transcripts leaking into "productive" GEMs that contain a 10X bead and a T cell decorated with antigen-specific barcoded proteins. Such contamination can affect data analysis and interpretation and has the potential to lead to spurious results such as an incorrect assessment of antigen-TCR pairs or TCR cross-reactivity. To address this problem, Povlsen and colleagues have described a data-driven algorithm called "Accurate T cell Receptor Antigen Pairing through data-driven filtering of sequencing information from single-cells" (ATRAP) that supplies a set of filtering approaches that significantly reduces background and allows for accurate pairing of T cell clonotypes with cognate pMHC antigens.

    This paper is rigorously conducted and will be useful for the field - there are some areas where further clarifications and comparisons will benefit the reader.

    Strengths:

    1. Povlsen and colleagues have systematically evaluated the extent to which parameters in the experimental metadata can be used to assess the likelihood of a GEM to correctly identify the antigen specificity of the associated T cell clonotype.
    2. Povlsen and colleagues have provided elegant data-driven scoring metrics in the form of a concordance score, a specificity score, and an optimal ratio of pMHC UMI counts between different pMHCs on a GEM, which allow for easy identification of poor-quality data points (see the sketch after this list).
    3. Based on the experimental goals, ATRAP allows for customizable filters that can achieve appropriate data quality while maximizing data retention.
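
    As an illustration of the kind of per-GEM metrics mentioned in point 2, the sketch below computes a specificity-like score (the top pMHC's share of a GEM's pMHC UMIs) and a concordance-like score (the fraction of a clonotype's GEMs agreeing on the same top pMHC). These formulations, the pandas layout, and the column names (gem, clonotype, pmhc, umi) are illustrative assumptions and may differ from the exact definitions used in ATRAP.

    ```python
    # Illustrative sketch only; assumed input is a long-format table with
    # columns: gem, clonotype, pmhc, umi (pMHC UMI count in that GEM).
    import pandas as pd

    def score_gems(df: pd.DataFrame) -> pd.DataFrame:
        # Top pMHC per GEM and its share of that GEM's total pMHC UMIs.
        per_gem = (df.sort_values("umi", ascending=False)
                     .groupby("gem")
                     .agg(clonotype=("clonotype", "first"),
                          top_pmhc=("pmhc", "first"),
                          top_umi=("umi", "first"),
                          total_umi=("umi", "sum")))
        per_gem["specificity"] = per_gem["top_umi"] / per_gem["total_umi"]
        # Fraction of the clonotype's GEMs whose top pMHC matches the
        # clonotype's most common top pMHC ("concordance"-like score).
        per_gem["concordance"] = (per_gem.groupby("clonotype")["top_pmhc"]
                                  .transform(lambda s: (s == s.value_counts().idxmax()).mean()))
        return per_gem
    ```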

    Weaknesses:

    1. The authors mention that 100% of the 6,073 "productive" GEMs contained more than one sample hashing barcode, and 65% contained pMHC multiplets. While the rest of the paper elaborates on the steps taken to deal with the pMHC multiplet issue, not much is said about the extent of the hashing multiplet issue and how it was dealt with when assigning cells to individual donors. How is this accounted for? Even a brief explanation would be beneficial.

    We agree that the issue of hashing multiplets was only very briefly discussed in the manuscript. The reason for this is that, although cell hashing multiplets exist for every GEM, they are generally a much simpler issue to resolve than pMHC multiplets, because one hashing entry most often has much higher counts than the others (see supplementary fig. 3). Moreover, in the experimental design only one hashing antibody is added to each sample, so only a single hashing signal should be associated with each GEM; i.e., the hashing data do not mirror the complex nature of the pMHC data, where cross-reactivity could result in more than one pMHC being a true binder to a given TCR. Given the simplicity of the hashing signal, we have opted for utilizing an existing tool to annotate cell hashing. We have elaborated the description of this in the revised manuscript (line 384).
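
    Since only one hashing antibody is added per sample, a GEM can in principle be assigned to the donor whose hashing barcode clearly dominates the UMI counts. The sketch below illustrates this idea only; the fold-change cutoff and data layout are assumptions made for illustration, and in the manuscript we instead rely on an existing cell-hashing annotation tool (line 384).

    ```python
    # Illustrative sketch, not the tool used in the manuscript. Assumed input:
    # dict mapping hashing-barcode name -> UMI count for a single GEM.
    def assign_donor(hash_counts: dict, min_fold: float = 5.0):
        """Return the dominant hashing barcode if it exceeds the runner-up
        by at least `min_fold`, otherwise None (ambiguous GEM)."""
        ranked = sorted(hash_counts.items(), key=lambda kv: kv[1], reverse=True)
        if not ranked or ranked[0][1] == 0:
            return None
        top = ranked[0]
        runner_up = ranked[1] if len(ranked) > 1 else (None, 0)
        if runner_up[1] == 0 or top[1] / runner_up[1] >= min_fold:
            return top[0]
        return None  # no clearly dominant hashing signal

    # Example: a GEM with counts {"HTO1": 820, "HTO2": 14, "HTO3": 6} would be
    # assigned to the sample labeled with HTO1.
    ```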

    2. It would be helpful for the authors to describe how experimental factors such as the quality of the input MHC protein may affect the output data (where different proteins may have different degrees of non-specific binding), and to what degree the ATRAP approach is robust to these changes. As an example, the authors mention that RVR/A03 was present at high UMI counts across all GEMs and RPH/B07 was consistently detected at low levels. Are these observations a property of the pMHCs or of the barcoded dextran reagent? Furthermore, are there differences in the frequency of each of these multimers in the starting staining library that manifest in consistently high vs low read counts for the pMHC barcodes?

    We understand the reviewers' concern. We have extensive experience with staining with large libraries of different pMHCs in a bulk setting (Bentzen et al. 2016), where it is part of the routine analysis to include an aliquot of the barcoded pMHC library taken prior to incubation with cells (input sample). From these data, we know that even if pMHCs are present in uneven amounts prior to cell incubation, this unevenness is not translated to the final output. That is, if a given barcode (associated with a specific pMHC) is present at levels up to 2x higher than the remaining barcodes, this does not result in that barcode also being enriched after cell incubation if T cells do not recognize the corresponding pMHC. And vice versa, a barcode present at lower levels in the input can still be enriched after incubation with cells. From the same type of data, we also have experience with differences in the background associated with different MHC/HLA molecules, i.e. a generally higher level of background related to a certain MHC irrespective of the peptide bound to it. We agree that this could potentially be a confounding factor influencing our results (as it will influence any other results related to the potentially different background signal associated with different MHC/HLA molecules). We are currently, in other studies, investigating in a broader sense whether these differences reflect an inherent biological MHC association or are experimental artifacts. In the current work, we have opted for not defining pHLA-specific UMI count thresholds, to ensure that any biological relevance remains unmasked while still allowing us to filter the data and identify the most likely true pMHC-specific interactions.
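
    For readers wishing to perform a similar check on their own data, one simple approach is to relate each barcode's post-staining share of UMIs to its share in the input (baseline) library, as is routine in the bulk barcode setting (Bentzen et al. 2016). The sketch below is a hypothetical illustration of such a check; the function and variable names are not taken from the manuscript.

    ```python
    # Illustrative sketch: enrichment of each pMHC barcode over its baseline
    # frequency in the input library. Assumed inputs: dicts mapping pMHC name
    # -> total barcode count in the output (post-staining) and input samples.
    def enrichment_over_input(output_counts: dict, input_counts: dict) -> dict:
        out_total = sum(output_counts.values()) or 1
        in_total = sum(input_counts.values()) or 1
        enrichment = {}
        for pmhc, n_in in input_counts.items():
            out_frac = output_counts.get(pmhc, 0) / out_total
            in_frac = n_in / in_total
            # Values well above 1 indicate enrichment after cell incubation,
            # regardless of whether the barcode was over- or under-represented
            # in the input library.
            enrichment[pmhc] = out_frac / in_frac if in_frac > 0 else float("nan")
        return enrichment
    ```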

    3. It would be helpful for the authors to further explain how ATRAP handles TCRs that may be present in only one (or a small number) of GEMs, as seen in Figure 7b, and potentially the large number of relatively small clonotypes observed for the RVR/A03 peptide in Figure 6 (it is difficult to know if the long tail of clonotypes for RVR is in the range of 1 or 10 GEMs based on the scale bar). Beyond that, is there any effect of expected (or observed) clonal expansion on these data analyses, for example, if samples are previously expanded with a peptide antigen ex vivo or not?

    ITRAP removes any GEM that does not meet the criteria of the selected filters. Small clones are only removed if all GEMs in a clone fail to meet the selected filter criteria. As ITRAP is based on user-defined combinations of filters, one can choose to filter away singlet specificities, i.e. a TCR-pMHC pair observed in only a single GEM. However, this might not be relevant in all cases. We believe that it is a strength of the method that it is flexible and adaptable to the needs of individual users. This also allows additional filters to be imposed by the user, if one for instance wishes to remove clones with fewer than a certain number of GEMs. With respect to figure 6, we agree that it was difficult to estimate the number of clonotypes within a given peptide plateau, and we have updated the figure to include a clonotype count on the x-axis. In relation to the effect of clonal expansion, we would first like to refer to figure 7. In panels a) and b), we display the observed T cell frequencies towards the individual pMHCs as obtained by two different experimental approaches: a) conventional fluorescent multimer staining, and b) GEM counts obtained using the single-cell pipeline described here. This analysis demonstrates a very high concordance between the two approaches, with the vast majority of the responses detected by fluorescent multimer staining also being captured in the single-cell screening (recall of 0.95). This result suggests that the sensitivity of the single-cell approach, in the context of the current pMHC epitope set, is comparable to that of conventional fluorescent multimer staining. With regard to clonal expansion, we would next like to refer back to figure 3. Even though we have not expanded the clones in vitro, this figure shows how the specificity of a TCR clone can be assigned more confidently when more GEMs map to a given TCR clone. Hence, to identify a single TCR-pMHC match, it could in many cases be valuable to expand a given clone prior to the experiments. However, since the 10x pipeline can only include a limited number of cells, we argue that it is valuable to identify pMHC-TCR pairs on unexpanded/unmanipulated material to include as many different pairs as possible.
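
    To make the filtering logic concrete, the sketch below shows how user-selected GEM-level filters could be combined, with an optional final step removing clonotypes supported by fewer than a chosen number of GEMs (e.g. singlet specificities). The filter names, thresholds, and table layout are illustrative assumptions, not the exact ITRAP implementation.

    ```python
    # Illustrative sketch of combinable GEM-level filters. Assumed input: one
    # row per GEM with columns umi_pmhc (top pMHC UMI count), ratio (top vs
    # runner-up pMHC UMI ratio), hla_match (bool), and clonotype.
    import pandas as pd

    def apply_filters(gems: pd.DataFrame,
                      min_umi: int = 2,
                      min_ratio: float = 2.0,
                      require_hla_match: bool = True,
                      min_gems_per_clonotype: int = 1) -> pd.DataFrame:
        keep = (gems["umi_pmhc"] >= min_umi) & (gems["ratio"] >= min_ratio)
        if require_hla_match:
            keep &= gems["hla_match"]
        filtered = gems[keep]
        # Optional user-defined step: drop small clonotypes, e.g. setting
        # min_gems_per_clonotype=2 removes singlet specificities.
        sizes = filtered.groupby("clonotype")["clonotype"].transform("size")
        return filtered[sizes >= min_gems_per_clonotype]
    ```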

    4. The authors mention a second method, ICON, for conducting these types of analyses, and that the approach leads to significantly more data loss. However, given that there could be differences in dataset quality, and given that the dataset used by ICON is publicly available, it would be helpful for a more explicit cross-comparison to be conducted and presented as a figure in the paper.

    We have conducted such a comparative analysis in a separate manuscript (available at BioRxiv, doi.org/10.1101/2023.02.01.526310). The overall conclusion is that both methods allow for effective denoising of the provided data, with an overall advantage in favor of ITRAP. We have extended the discussion in the current manuscript with a brief summary of the main findings from this study.

    Reviewer #2 (Public Review):

    The study by Povlsen, Bentzen et al. describes the computational pipelines the authors used to analyze the results from a single-cell sequencing experiment of pMHC-multimer-stained T cells. DNA-barcoded pMHC multimers and single-cell sequencing technologies provide an opportunity for the high-throughput discovery of novel antigen-specific TCRs and profiling of antigen-specific T-cell responses to multiple epitopes in parallel from a single sample. The authors' goal was to develop a computational pipeline that eliminates potential noise in TCR-pMHC assignments from single-cell sequencing data. With several reasonable biological assumptions about the underlying data (absence of cross-reactivity between these epitopes, same specificity for different T-cells within a clonotype, more similarity for TCRs recognizing the same epitope, HLA restriction of the T cell response), the authors identify the optimal strategy and thresholds to filter out artifacts from their data.

    It is not clear if the identified thresholds are optimal for other experiments of this kind, and how violation of the authors' assumptions (for example, inclusion of several highly similar pMHC multimers recognized by the same clone of cross-reactive T cells) would impact the algorithm's performance and threshold selection. The authors do not discuss several recent papers featuring highly similar experimental techniques and the same data filtering challenges:

    https://www.science.org/doi/10.1126/sciimmunol.abk3070

    https://www.nature.com/articles/s41590-022-01184-4

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9184244/

    As described above, we have investigated the use of ITRAP on the large data set provided by 10X Genomics, and further compared the results to those obtained by ICON in an independent publication [BioRxiv doi.org/10.1101/2023.02.01.526310]. We have included a brief summary of the findings of this study in the current manuscript. The overall results and conclusions between the two studies align very well. UMI count filtering and donor-HLA matching are in both cases the main drivers of the denoising. However, the identified UMI thresholds were found to differ between the two data sets. As stated above, we believe this to be a strength of the ITRAP framework, since it demonstrates that the tool can be robustly applied to data originating from very different technical and/or biological settings.
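
    As an illustration of the donor-HLA matching step referred to above, the sketch below flags whether the HLA restriction of a GEM's annotated pMHC is carried by the donor the GEM was assigned to via hashing. The donor names, typings, and data structures are hypothetical and chosen for illustration only.

    ```python
    # Illustrative sketch of donor-HLA matching. The donor typings below are
    # hypothetical placeholders, not data from the study.
    DONOR_HLA = {
        "donor01": {"A0201", "A0301", "B0702", "B0801"},
        "donor02": {"A0101", "A2402", "B3501", "B4402"},
    }

    def hla_match(pmhc_hla: str, donor: str, donor_hla: dict = DONOR_HLA) -> bool:
        """True if the HLA allele of the annotated pMHC is present in the
        HLA typing of the donor assigned to the GEM."""
        return pmhc_hla in donor_hla.get(donor, set())

    # Example: a GEM annotated with an A0301-restricted pMHC and assigned to
    # donor01 passes the filter; the same annotation in donor02 would be flagged.
    ```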

    We acknowledge that ITRAP is highly dependent on the data containing a set of "large" clonotypes for which a single pMHC target can be assigned using the statistical approach outlined in the manuscript. This is because the UMI filtering thresholds are defined based on these clonotypes and their associated peptide annotations. However, other than this, the method does not exclude the identification of cross-reactive TCRs (in contrast to, for instance, ICON). We have expanded the discussion to make this point clearer.
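
    One plausible way to realise this dependence on "large" clonotypes is sketched below: clonotypes supported by many GEMs and with a clearly dominant pMHC serve as a trusted reference set from which UMI filtering thresholds can subsequently be tuned. The size cutoff and dominance criterion are illustrative assumptions and do not reproduce the exact statistical procedure of the manuscript.

    ```python
    # Illustrative sketch: select "large" clonotypes with one dominant pMHC.
    # Assumed input: per-GEM table with columns clonotype and top_pmhc.
    import pandas as pd

    def reference_clonotypes(per_gem: pd.DataFrame,
                             min_gems: int = 10,
                             min_dominance: float = 0.8) -> pd.Series:
        """Return, per trusted clonotype, the pMHC supported by at least
        `min_dominance` of its GEMs, requiring `min_gems` GEMs or more."""
        def dominant(s: pd.Series):
            counts = s.value_counts()
            if len(s) >= min_gems and counts.iloc[0] / len(s) >= min_dominance:
                return counts.index[0]
            return None
        return per_gem.groupby("clonotype")["top_pmhc"].agg(dominant).dropna()
    ```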

    When it comes to the papers mentioned by the reviewer, these are clearly of high interest to us, and we are currently in the process of analyzing these data using the ITRAP framework. We however believe these analyses are beyond the scope of the current publication, in particular since we have conducted the parallel benchmark study on the 10X Genomics data mentioned above.

    Unfortunately, I was unable to validate the method on other datasets or apply other approaches to the authors' data because neither code nor raw or processed data were available at the moment of the review.

    All data sets and code have been made publicly available at https://services.healthtech.dtu.dk/suppl/immunology/ITRAP

    One of the weaknesses of this study is that the motivation for the experiment and the underlying hypothesis are unclear from the manuscript. It is unclear why these particular epitopes were selected, why these donors were selected, and whether any of the donors are seropositive for EBV/CMV/influenza. Without particular research questions, it is hard to evaluate pipeline performance and justify a particular filtering strategy: for some applications, maximum specificity (i.e. no incorrect TCR specificity assignments) is crucial, while for others the main goal is to retain as many cells as possible.

    We understand this concern and have elaborated on our motivation for the experimental design in the text. The overall motivation for this study was to generate TCR-pMHC data complementing what was available in the public domain at the start of the project, with the purpose of generating novel data for training TCR specificity prediction models. This is also the reason why we explicitly "deselected" T cells specific for the 3 negative control peptides, since these are already covered by large numbers of TCR sequences in public databases.

    We do not know the serostatus of the donors included, but we determined the antigen specificities present in the donors prior to initiating the study (evaluated for T cell recognition against 945 common viral specificities, using barcoded pMHC multimers in a bulk setting). The 945 peptides were selected from prevalent epitopes within IEDB. This means that the T cell specificities of the donors selected for inclusion in the current study were known a priori. We have updated the motivation for performing the study (lines 122-126).

  2. eLife assessment

    This paper is of interest to immunologists conducting single-cell analyses of T-cell recognition. It provides a means of curating datasets to ensure T cell-antigen pairs are identified. The data generated through this method often suffers from a relatively high background, so the authors present a computational approach to enhance the signal-to-noise of this type of analysis. At this stage, it is unclear if the thresholds and filtering steps described by the authors can be generally applied to other datasets of different qualities than the one used here.

  3. Reviewer #1 (Public Review):

    Single-cell sequencing technologies such as 10x, in conjunction with DNA-barcoded multimeric peptide-MHCs (pMHCs), have enabled high-throughput pairing of T cell receptor transcripts with antigen specificity. However, the data generated through this method often suffers from relatively high background due to ambient DNA barcodes and TCR transcripts leaking into "productive" GEMs that contain a 10X bead and a T cell decorated with antigen-specific barcoded proteins. Such contamination can affect data analysis and interpretation and has the potential to lead to spurious results such as an incorrect assessment of antigen-TCR pairs or TCR cross-reactivity. To address this problem, Povlsen and colleagues have described a data-driven algorithm called "Accurate T cell Receptor Antigen Pairing through data-driven filtering of sequencing information from single-cells" (ATRAP) that supplies a set of filtering approaches that significantly reduces background and allows for accurate pairing of T cell clonotypes with cognate pMHC antigens.

    This paper is rigorously conducted and will be useful for the field - there are some areas where further clarifications and comparisons will benefit the reader.

    Strengths:
    1. Povlsen and colleagues have systematically evaluated the extent to which parameters in the experimental metadata can be used to assess the likelihood of a GEM to correctly identify the antigen specificity of the associated T cell clonotype.
    2. Povlsen and colleagues have provided elegant data-driven scoring metrics in the form of a concordance score, a specificity score, and an optimal ratio of pMHC UMI counts between different pMHCs on a GEM, which allow for easy identification of poor-quality data points.
    3. Based on the experimental goals, ATRAP allows for customizable filters that can achieve appropriate data quality while maximizing data retention.

    Weaknesses:
    1. The authors mention that 100% of the 6,073 "productive" GEMs contained more than one sample hashing barcode, and 65% contained pMHC multiplets. While the rest of the paper elaborates on the steps taken to deal with the pMHC multiplet issue, not much is said about the extent of the hashing multiplet issue and how it was dealt with when assigning cells to individual donors. How is this accounted for? Even a brief explanation would be beneficial.

    2. It would be helpful for the authors to describe how experimental factors such as the quality of the input MHC protein may affect the output data (where different proteins may have different degrees of non-specific binding), and to what degree the ATRAP approach is robust to these changes. As an example, the authors mention that RVR/A03 was present at high UMI counts across all GEMs and RPH/B07 was consistently detected at low levels. Are these observations a property of the pMHCs or of the barcoded dextran reagent? Furthermore, are there differences in the frequency of each of these multimers in the starting staining library that manifest in consistently high vs low read counts for the pMHC barcodes?

    3. It would be helpful for the authors to further explain how ATRAP handles TCRs that may be present in only one (or a small number) of GEMs, as seen in Figure 7b, and potentially the large number of relatively small clonotypes observed for the RVR/A03 peptide in Figure 6 (it is difficult to know if the long tail of clonotypes for RVR is in the range of 1 or 10 GEMs based on the scale bar). Beyond that, is there any effect of expected (or observed) clonal expansion on these data analyses, for example, if samples are previously expanded with a peptide antigen ex vivo or not?

    4. The authors mention a second method, ICON, for conducting these types of analyses, and that the approach leads to significantly more data loss. However, given that there could be differences in dataset quality, and given that the dataset used by ICON is publicly available, it would be helpful for a more explicit cross-comparison to be conducted and presented as a figure in the paper.

  4. Reviewer #2 (Public Review):

    The study by Povlsen, Bentzen et al. describes the computational pipelines the authors used to analyze the results from a single-cell sequencing experiment of pMHC-multimer-stained T cells. DNA-barcoded pMHC multimers and single-cell sequencing technologies provide an opportunity for the high-throughput discovery of novel antigen-specific TCRs and profiling of antigen-specific T-cell responses to multiple epitopes in parallel from a single sample. The authors' goal was to develop a computational pipeline that eliminates potential noise in TCR-pMHC assignments from single-cell sequencing data. With several reasonable biological assumptions about the underlying data (absence of cross-reactivity between these epitopes, same specificity for different T-cells within a clonotype, more similarity for TCRs recognizing the same epitope, HLA restriction of the T cell response), the authors identify the optimal strategy and thresholds to filter out artifacts from their data.

    It is not clear if the identified thresholds are optimal for other experiments of this kind, and how violation of the authors' assumptions (for example, inclusion of several highly similar pMHC multimers recognized by the same clone of cross-reactive T cells) would impact the algorithm's performance and threshold selection. The authors do not discuss several recent papers featuring highly similar experimental techniques and the same data filtering challenges:
    https://www.science.org/doi/10.1126/sciimmunol.abk3070
    https://www.nature.com/articles/s41590-022-01184-4
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9184244/

    Unfortunately, I was unable to validate the method on other datasets or apply other approaches to the authors' data because neither code nor raw or processed data were available at the moment of the review.

    One of the weaknesses of this study is that the motivation for the experiment and the underlying hypothesis are unclear from the manuscript. It is unclear why these particular epitopes were selected, why these donors were selected, and whether any of the donors are seropositive for EBV/CMV/influenza. Without particular research questions, it is hard to evaluate pipeline performance and justify a particular filtering strategy: for some applications, maximum specificity (i.e. no incorrect TCR specificity assignments) is crucial, while for others the main goal is to retain as many cells as possible.

  5. Reviewer #3 (Public Review):

    The method of ATRAP provides a useful workflow for processing and analysing single-cell sequencing data of TCRs and barcoded pMHC. The method addresses an important subfield of research, as the availability of these datasets is increasing substantially due to the wider availability of commercial reagents and tools.

    Overall the study is highly technical and can be considered almost a "user manual" to assist researchers who pursue these TCR-pMHC specificity experiments by single-cell sequencing. Convincing experimental work, data analysis, appropriate controls, and technical details are provided throughout.