Capturing piglet behavior in novel object tests using pose recognition
Abstract
Novel object arena tests are commonly used to study the impact of dietary interventions on cognition and memory in animal models. Piglet-based models of early nutrition in human infants lack a precise and affordable option for recording behavior in these tests. We aimed to develop and validate a pose recognition neural network approach (DeepLabCut™, DLC) and compare it to manually coded video (the gold standard) and a commercial tracking program (Ethovision, Noldus). Piglets (n = 12) were recorded in two environmental habituation tests and eight object-based tests. For DLC, 500 video frames were randomly extracted from the training dataset (8 piglets; 10 tests), manually annotated (piglet’s head, body mid-back, tail), and used to train the network in an NVIDIA® Docker® container. The trained model was applied to the testing dataset (4 piglets; 8 tests containing objects), which was also coded manually and with the commercial tracking software. Data were summarised per animal, per test (duration with objects, latency to approach objects and number of visits to objects). Compared to manual coding, DLC provided good accuracy for duration (concordance correlation coefficient (CCC) = 0.98) and latency (CCC = 0.85), and moderate accuracy for number of object visits (CCC = 0.65). The commercial tracking software generally had lower CCCs (duration: 0.71; latency: 0.70; visits: 0.10). For DLC, the most common source of variation appeared to be the 1 s bout length required for the manual coder to record a visit; given that human reaction time varies, DLC offers an opportunity to improve the consistency and accuracy of behavioural monitoring. Reliance on the mid-back point for animal tracking reduced the accuracy of the commercial software. Pose recognition was successfully applied to piglet arena videos by non-experts in machine learning and thus offers an alternative to manual observation and off-the-shelf tracking.
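To make the analysis pipeline concrete, the sketch below shows one plausible way to derive the three summary measures (duration with objects, latency to approach, number of visits with a 1 s minimum bout) from per-frame head coordinates such as those exported by DLC, and to compare them against manual coding with Lin's concordance correlation coefficient. The frame rate, zone radius, function names, and example numbers are illustrative assumptions, not values or code from the study.

```python
"""Hypothetical sketch: object-interaction metrics from tracked head
coordinates, compared to manual coding via Lin's CCC. Frame rate, zone
radius, and the example data are assumptions for illustration only."""
import numpy as np

FPS = 25  # assumed video frame rate (frames per second)

def object_metrics(head_xy, object_xy, radius_px, min_bout_s=1.0, fps=FPS):
    """Duration (s) near an object, latency (s) to first approach, and
    number of visits, where a visit must last at least `min_bout_s`."""
    dist = np.linalg.norm(head_xy - object_xy, axis=1)  # per-frame distance
    near = dist <= radius_px                            # in/out of object zone
    duration = near.sum() / fps
    latency = near.argmax() / fps if near.any() else np.nan
    # Count bouts: runs of consecutive "near" frames lasting >= min_bout_s.
    min_frames = int(round(min_bout_s * fps))
    visits, run = 0, 0
    for flag in near:
        run = run + 1 if flag else 0
        if run == min_frames:   # count each qualifying bout exactly once
            visits += 1
    return duration, latency, visits

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between two measures."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Example (made-up numbers): agreement between DLC-derived and manually
# coded durations across four animals.
dlc_dur = [34.2, 51.0, 12.8, 40.5]
manual_dur = [33.0, 52.4, 14.1, 39.8]
print(f"CCC (duration): {lins_ccc(dlc_dur, manual_dur):.2f}")
```

Applying a fixed minimum bout length in code, as in `object_metrics`, is one way the kind of consistency advantage described above could be realised, since the same threshold is applied to every frame regardless of observer reaction time.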