Performance Analysis of the YOLO object detection algorithm in embedded systems: generated code vs. native implementation
Abstract
This work presents a comparative evaluation of advanced YOLO architectures for object detection, with a specific focus on their performance in traffic light detection for autonomous driving applications. Two deployment strategies were analyzed: a native implementation using PyTorch and a Model-Based Engineering (MBE) implementation through automatic code generation. Evaluation metrics included precision-recall curves and confusion matrices at varying Intersection over Union (IoU) thresholds, mean Average Precision (mAP) to assess detection quality, and inference time measurements to evaluate computational efficiency on embedded platforms. The evaluation was based on a custom video extracted from the CARLA simulator, annotated frame by frame to ensure labeling accuracy. The study highlights the trade-offs between model accuracy and computational cost, providing a reproducible framework for performance benchmarking of object detection algorithms in safety-critical environments.
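For reference, the IoU used to threshold detections is the ratio of the intersection area to the union area of a predicted and a ground-truth bounding box. The following is a minimal illustrative sketch (assuming axis-aligned boxes in (x1, y1, x2, y2) format; it is not code from the paper):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    # Coordinates of the intersection rectangle
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Intersection area (zero if the boxes do not overlap)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union area = sum of the individual areas minus the intersection
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: a detection counts as a true positive when IoU exceeds the chosen threshold
print(iou((10, 10, 50, 50), (20, 20, 60, 60)) >= 0.5)
```

A detection is matched to a ground-truth label only when this ratio exceeds the IoU threshold under evaluation, which is what drives the precision-recall curves and confusion matrices reported at different thresholds.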