Plug-and-Play automated behavioral tracking of zebrafish larvae with DeepLabCut and SLEAP: pre-trained networks and datasets of annotated poses

Abstract

Zebrafish are an important model system in behavioral neuroscience due to their rapid development and suite of distinct, innate behaviors. Quantifying many of these larval behaviors requires detailed tracking of eye and tail kinematics, which in turn demands imaging at high spatial and temporal resolution, ideally using semi- or fully automated tracking methods for throughput efficiency. However, creating and validating accurate tracking models is time-consuming and labor-intensive, with many research groups duplicating efforts on similar images. With the goal of developing a useful community resource, we trained pose estimation models using a diverse array of video parameters and a 15-keypoint pose model. We deliver an annotated dataset of free-swimming and head-embedded behavioral videos of larval zebrafish, along with four pose estimation networks from DeepLabCut and SLEAP (two variants of each). We also evaluated model performance across varying imaging conditions to guide users in optimizing their imaging setups. This resource will allow other researchers to skip the laborious training steps of setting up behavioral analyses, guide model selection for specific research needs, and provide ground truth data for benchmarking new tracking methods.
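As a rough illustration of the kind of benchmarking the abstract describes, the sketch below scores a model's predicted keypoints against ground-truth annotations using two common pose-estimation metrics, RMSE and percentage of correct keypoints (PCK). The array shapes, keypoint count, and pixel threshold here are illustrative assumptions; only the 15-keypoint figure comes from the abstract, and the toy data stands in for real predictions and annotations.

```python
import numpy as np

N_KEYPOINTS = 15  # the paper's 15-keypoint pose model

def keypoint_errors(pred, gt):
    """Per-keypoint Euclidean distance between predicted and ground-truth
    coordinates, both shaped (n_frames, n_keypoints, 2)."""
    return np.linalg.norm(pred - gt, axis=-1)

def rmse(pred, gt):
    """Root-mean-square error over all frames and keypoints, in pixels."""
    return float(np.sqrt(np.mean(keypoint_errors(pred, gt) ** 2)))

def pck(pred, gt, threshold_px=5.0):
    """Fraction of keypoints landing within threshold_px of ground truth
    (threshold chosen here for illustration only)."""
    return float(np.mean(keypoint_errors(pred, gt) < threshold_px))

# Toy data: 2 frames of 15 keypoints, predictions offset by 3 px in x.
gt = np.zeros((2, N_KEYPOINTS, 2))
pred = gt + np.array([3.0, 0.0])
print(rmse(pred, gt))      # 3.0
print(pck(pred, gt, 5.0))  # 1.0
```

In practice the predictions would come from the released DeepLabCut or SLEAP networks and the ground truth from the annotated dataset, with errors summarized per keypoint and per imaging condition rather than as a single number.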

SIGNIFICANCE STATEMENT

Larval zebrafish are an emerging model in systems neuroscience, offering unique advantages for linking brain activity to behavior. However, detailed behavioral tracking, essential for such studies, requires time- and labor-intensive annotation and model training. To eliminate this bottleneck, here we provide a high-quality, annotated dataset of zebrafish behaviors for both free-swimming and head-embedded preparations, alongside four pre-trained pose estimation models built with DeepLabCut and SLEAP. We benchmark model performance across diverse imaging conditions to guide optimal setup choices. This community resource will allow researchers to bypass the most time-consuming stages of data annotation and training, enabling immediate behavioral analysis. By removing this key hurdle, this work will accelerate project initiation, support reproducibility, and provide a foundation for future tracking method development.