Real-Time Object Detection with Quantized YOLOv11 and YOLOv8 on Raspberry Pi 5 for Low-Speed ADAS
Abstract
This study investigates the object detection performance differences between the YOLOv11 and YOLOv8 architectures for Advanced Driver Assistance Systems (ADAS) under CPU-only constraints. Speed and accuracy evaluations were conducted on a Raspberry Pi 5 (8 GB), with the CPU governor configured in performance mode to ensure stable benchmarking conditions. Since hardware accelerators such as GPUs or NPUs are not universally available across vehicle platforms, this work focuses exclusively on CPU-based inference, which remains critical for practical ADAS deployment. Four models were evaluated: YOLOv11n, YOLOv11s, YOLOv8n, and YOLOv8s. Additionally, model optimization was performed using ONNX with INT8 post-training quantization to improve inference efficiency. Experimental results on the KITTI dataset demonstrate that the quantized YOLOv11n model achieved approximately 13 FPS with an average latency of 76.78 ms, whereas the YOLOv11s model achieved around 7 FPS with a latency of 152.20 ms. This corresponds to an approximately 46% latency reduction for the nano model compared to the small variant. The findings indicate that nano-scale YOLO models provide a more favorable speed–accuracy trade-off for real-time, CPU-based ADAS applications on low-power embedded platforms.
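The throughput figures above follow directly from the reported average per-frame latencies via FPS ≈ 1000 / latency (ms), assuming sequential, single-frame inference with no pipelining. A minimal sketch of that conversion, using the latencies quoted in the abstract:

```python
def fps_from_latency_ms(latency_ms: float) -> float:
    """Throughput implied by an average per-frame latency, assuming
    frames are processed one at a time with no batching or pipelining."""
    return 1000.0 / latency_ms

# Average latencies reported for the quantized models on the Raspberry Pi 5 (CPU-only)
print(round(fps_from_latency_ms(76.78), 1))   # YOLOv11n → 13.0
print(round(fps_from_latency_ms(152.20), 1))  # YOLOv11s → 6.6
```

Note that measured FPS can fall slightly below this bound in practice because of pre/post-processing overhead outside the model's forward pass.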