A method to automatically generate radar-camera datasets for deep learning applications


Scenario in which the inter-frame Hungarian algorithm ensures that radar reflections from the same object(s) on consecutive frames are always labeled, even when YOLO intermittently fails. Credit: Sengupta, Yoshizawa and Cao.

In recent years, roboticists and computer scientists have developed a wide range of systems capable of detecting objects in their environment and navigating around them accordingly. Most of these systems are based on machine learning and deep learning algorithms trained on large image datasets.

Although there are now many image datasets available for training machine learning models, those containing data collected with radar sensors are still rare, despite the significant advantages of radar over optical sensors. In addition, many of the open-source radar datasets that do exist are difficult to adapt to different user applications.

Researchers at the University of Arizona recently developed a new approach to automatically generate datasets containing labeled radar data and camera images. The approach, presented in a paper published in IEEE Robotics and Automation Letters, applies a highly accurate object-detection algorithm (YOLO) to the camera image stream and an association technique (the Hungarian algorithm) to label the radar point cloud.

“Deep learning applications using radar require a lot of labeled training data, and labeling radar data is a non-trivial, extremely time-consuming and laborious process, mostly done by manually comparing it with an image stream captured in parallel,” Arindam Sengupta, a Ph.D. student at the University of Arizona and lead researcher on the study, told TechXplore. “Our idea here was that if the camera and the radar are looking at the same object, then instead of looking at the images manually, we can take advantage of an image-based object-detection framework (YOLO in our case) to automatically label the radar data.”

The automatic labeling algorithm at work on real camera-radar data acquired at an intersection in Tucson, Arizona. Credit: Sengupta, Yoshizawa and Cao.

Three characteristic capabilities of the approach introduced by Sengupta and his colleagues are co-calibration, clustering and association. The approach co-calibrates a radar and a camera to determine how the location of an object detected by the radar translates into the camera's pixel coordinates.
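To make the co-calibration idea concrete, the sketch below projects a 3-D radar point into camera pixel coordinates with a standard pinhole model. This is not the authors' implementation; the intrinsic matrix `K` and the extrinsics `R`, `t` are placeholder values chosen for illustration.

```python
import numpy as np

# Hypothetical co-calibration sketch: map a radar point (Cartesian radar
# frame, metres) into camera pixel coordinates via a rigid transform and a
# pinhole projection. K, R and t are illustrative, not calibrated values.

K = np.array([[800.0,   0.0, 320.0],   # fx,  0, cx
              [  0.0, 800.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                      # radar-to-camera rotation (assumed aligned)
t = np.array([0.0, 0.1, 0.0])      # radar-to-camera translation in metres

def radar_to_pixel(p_radar):
    """Project a 3-D radar point to (u, v) pixel coordinates."""
    p_cam = R @ p_radar + t        # rigid transform into the camera frame
    u, v, w = K @ p_cam            # perspective projection
    return float(u / w), float(v / w)

print(radar_to_pixel(np.array([1.0, 0.0, 10.0])))  # → (400.0, 248.0)
```

With such a mapping, every radar return can be compared directly against bounding boxes detected in the image.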

“We used a density-based clustering (DBSCAN) scheme to a) detect and suppress spurious/noise radar returns; and b) separate the radar returns into clusters to distinguish distinct objects,” Sengupta said. “Finally, intra-frame and inter-frame Hungarian algorithms (HA) are used for the association. The intra-frame HA associates the YOLO predictions with the co-calibrated radar clusters in a given frame, while the inter-frame HA associates radar clusters belonging to the same object across consecutive frames, so that radar data can still be labeled in frames where the optical sensor fails intermittently.”
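A rough illustration of these two stages is sketched below, not as the authors' code: 2-D radar returns are clustered with DBSCAN, and cluster centroids are then matched to co-calibrated YOLO detection centres with the Hungarian algorithm via SciPy's `linear_sum_assignment`. The `eps`/`min_samples` values and the toy coordinates are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from scipy.optimize import linear_sum_assignment

# Toy radar returns in a common 2-D plane (assumed values).
points = np.array([
    [1.0, 10.0], [1.1, 10.2], [0.9,  9.9],   # cluster A
    [5.0, 20.0], [5.2, 20.1], [4.8, 19.8],   # cluster B
    [9.0,  3.0],                             # isolated noise return
])

labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(points)  # -1 marks noise

centroids = np.array([points[labels == k].mean(axis=0)
                      for k in sorted(set(labels)) if k != -1])

# Co-calibrated YOLO detection centres in the same plane (assumed values).
yolo_centres = np.array([[5.0, 20.0], [1.0, 10.0]])

# Intra-frame Hungarian matching on a Euclidean-distance cost matrix.
cost = np.linalg.norm(centroids[:, None, :] - yolo_centres[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    print(f"radar cluster {r} <- YOLO detection {c}")
```

The inter-frame step works the same way, except the cost matrix compares clusters in frame *t* with clusters in frame *t+1*, so labels propagate even when YOLO misses a detection.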

In the future, the approach introduced by this team of researchers could help automate the generation of radar-camera and radar-only datasets. In their paper, the team also explored proof-of-concept classification schemes based both on radar-camera sensor fusion and on data collected solely by radar.

“We have also suggested the use of an efficient 12-dimensional radar feature vector, constructed using a combination of spatial, Doppler and RCS statistics, rather than the traditional use of the point-cloud distribution or just micro-Doppler data,” Sengupta said.
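The exact features are not spelled out in the article, but a cluster-level feature vector of this kind could be assembled as in the sketch below. The particular split (6 spatial + 3 Doppler + 3 RCS statistics) is a hypothetical choice for illustration, not the paper's definition.

```python
import numpy as np

# Hypothetical 12-D feature vector for one radar cluster, combining spatial,
# Doppler and RCS statistics. The split below is illustrative only.

def cluster_features(xyz, doppler, rcs):
    """xyz: (N, 3) positions; doppler: (N,) velocities; rcs: (N,) returns."""
    spatial = np.concatenate([xyz.mean(axis=0), xyz.std(axis=0)])     # 6 dims
    dop = np.array([doppler.mean(), doppler.std(), np.ptp(doppler)])  # 3 dims
    power = np.array([rcs.mean(), rcs.std(), rcs.max()])              # 3 dims
    return np.concatenate([spatial, dop, power])                      # 12 dims

rng = np.random.default_rng(0)
feat = cluster_features(rng.normal(size=(40, 3)),
                        rng.normal(size=40),
                        rng.normal(size=40))
print(feat.shape)  # → (12,)
```

A fixed-length vector like this can feed a conventional classifier directly, avoiding the need to process a variable-size point cloud per object.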

Ultimately, the recent study by Sengupta and colleagues may open new possibilities for the rapid development and training of deep-learning models that classify or track objects using sensor fusion. These models could help improve the performance of many robotic systems, ranging from autonomous vehicles to small robots.


Steps leading to intra-frame YOLO-radar association, which then results in the labeling of radar clusters. Credit: Sengupta, Yoshizawa and Cao.

“Our lab at the University of Arizona conducts research in data-driven millimeter-wave radar targeting the areas of autonomy, health, defense and transportation,” Dr. Siyang Cao, assistant professor at the University of Arizona and principal investigator of the study, told TechXplore. “Some of our ongoing research includes investigating robust tracking schemes based on sensor fusion and improving autonomous millimeter-wave radar perception using classical signal processing and deep learning.”


More information:
Automatic generation of radar-camera datasets for sensor fusion applications. IEEE Robotics and Automation Letters (2022). DOI: 10.1109/LRA.2022.3144524.

© 2022 Science X Network

Quote: A method to automatically generate radar-camera datasets for deep learning applications (2022, February 25) retrieved March 30, 2022 from https://techxplore.com/news/2022-02-method-automatically-radar-camera-datasets-deep.html

This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only.

