Camera-LiDAR Multi-Level Sensor Fusion for Target Detection at the Network Edge
Metadata
Author
Méndez, Javier; Rodríguez Santiago, Noel; Molina, Miguel; Pegalajar Cuéllar, Manuel; Morales Santos, Diego Pedro
Publisher
MDPI
Subject
Sensor fusion; Deep learning; Edge computing; Camera sensor; LiDAR sensor; Target detection
Date
2021
Bibliographic reference
Mendez, J.; Molina, M.; Rodriguez, N.; Cuellar, M.P.; Morales, D.P. Camera-LiDAR Multi-Level Sensor Fusion for Target Detection at the Network Edge. Sensors 2021, 21, 3992. https://doi.org/10.3390/s21123992
Abstract
There have been significant advances in target detection in the context of autonomous vehicles. Sensor fusion is taking the lead in this field as a way to build more robust systems that can overcome adverse weather as well as sensor faults. Laser Imaging Detection and Ranging (LiDAR) and camera sensors are two of the most widely used sensors for this task, since they can accurately provide important features such as a target's depth and shape. However, most current state-of-the-art target detection algorithms for autonomous cars do not take into account the hardware limitations of the vehicle, such as its reduced computing power in comparison with Cloud servers, or the low latency the application demands. In this work, we propose edge-computing Tensor Processing Unit (TPU) devices as hardware support, owing to their computing capabilities for machine learning algorithms and their reduced power consumption. We developed an accurate and compact target detection model for these devices. Our proposed Multi-Level Sensor Fusion model has been optimized for the network edge, specifically for the Google Coral TPU. As a result, high accuracy is achieved on the challenging KITTI dataset while reducing both the memory consumption and the latency of the system.
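The abstract does not detail the deployment pipeline, but running a detection model on the Google Coral TPU generally requires full-integer post-training quantization with TensorFlow Lite followed by offline compilation. The sketch below illustrates that standard workflow only; the model path, input shape, and calibration data are hypothetical placeholders, not the authors' actual configuration.

```python
# Illustrative sketch (not from the paper): full-integer post-training
# quantization with TensorFlow Lite, the step typically required before a
# model can be compiled for the Google Coral Edge TPU.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Calibration samples should come from the training distribution
    # (e.g., preprocessed KITTI frames); random data is a stand-in here.
    for _ in range(100):
        yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

# "detector_saved_model" is a placeholder for the trained detector.
converter = tf.lite.TFLiteConverter.from_saved_model("detector_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# The Edge TPU requires every op to be quantized to 8-bit integers.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open("detector_quant.tflite", "wb") as f:
    f.write(converter.convert())

# The quantized model is then compiled offline for the accelerator:
#   edgetpu_compiler detector_quant.tflite
```

Quantizing to 8-bit integers is also what drives the memory and latency reductions the abstract reports, since the Edge TPU executes only integer ops and the model shrinks to roughly a quarter of its float32 size.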