Cell Behavior Video Classification Challenge, a benchmark for computer vision methods in live-cell imaging
Abstract
Live-cell imaging techniques enable the acquisition of videos capturing complex cellular morphodynamics over time. However, the classification of these videos presents unmet technological needs, including methods to effectively analyze the shape and motion of objects that lack a rigid boundary, to derive hierarchical spatiotemporal features from entire video sequences rather than static images, and potentially to account for multiple other objects in the field of view. To this end, we organized the Cell Behavior Video Classification Challenge (CBVCC), benchmarking 35 different computer vision methods that can be grouped into three approaches: classification of tracking-derived features; end-to-end deep learning architectures that learn spatiotemporal features directly from the entire video sequence without explicit cell tracking; and ensembles that combine tracking-derived with image-derived features. Here we compare the methods and discuss the potential and limitations of each approach, providing a basis for the further development of computer vision methods for studying cellular dynamics.
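To make the contrast between the approaches concrete, the Python sketch below illustrates the first one (classification of tracking-derived features) in a minimal, hypothetical form: a cell's centroid trajectory is summarized by a few hand-crafted motion statistics and passed to an off-the-shelf classifier. The feature set, toy data, and classifier choice are our own assumptions for illustration and are not taken from any challenge submission.

```python
# Hypothetical sketch of the "tracking-derived features" approach:
# summarize each cell trajectory with simple motion statistics,
# then train a generic classifier on the resulting feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def trajectory_features(track: np.ndarray) -> np.ndarray:
    """track: (T, 2) array of centroid (x, y) positions over T frames."""
    steps = np.diff(track, axis=0)                 # frame-to-frame displacement vectors
    speeds = np.linalg.norm(steps, axis=1)         # instantaneous speeds
    net_disp = np.linalg.norm(track[-1] - track[0])
    path_len = speeds.sum()
    confinement = net_disp / (path_len + 1e-9)     # ~1 for directed motion, ~0 for confined motion
    return np.array([speeds.mean(), speeds.std(), net_disp, path_len, confinement])

# Toy example: random-walk trajectories with random binary behavior labels.
rng = np.random.default_rng(0)
tracks = [np.cumsum(rng.normal(size=(50, 2)), axis=0) for _ in range(40)]
X = np.stack([trajectory_features(t) for t in tracks])
y = rng.integers(0, 2, size=len(tracks))

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```

By contrast, the end-to-end approaches described above skip this explicit tracking and feature-engineering step entirely, learning spatiotemporal representations directly from the raw video frames.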