Improved Autonomous Vehicle Vision Relies on Location

In collaboration with Ford Motor Company, QUT robotics researchers have developed a method for telling an autonomous vehicle which of its cameras to use for navigation. Senior author and Australian Research Council Laureate Fellow Professor Michael Milford said the study grew out of an investigation into how cameras and LIDAR sensors, which are frequently used in autonomous vehicles, can better perceive their surroundings.

The main concept is to determine which cameras to use at different locations in the world, based on prior experience at those locations, according to Professor Milford.

For instance, after learning that a particular camera is highly useful for tracking the vehicle's position on a certain stretch of road, the system might rely on that camera on subsequent trips there.
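The paper itself doesn't publish its algorithm in this article, but the idea of per-location camera selection can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' actual method: all class, method, and segment names here are assumptions. It records the localization error each camera produced on each map segment, then picks the camera with the lowest historical error when the vehicle revisits that segment:

```python
from collections import defaultdict

class CameraSelector:
    """Pick the camera that has historically localized best on each map segment.

    Illustrative sketch only -- the real QUT/Ford system is more sophisticated.
    """

    def __init__(self):
        # (segment_id, camera_id) -> list of past localization errors in metres
        self.history = defaultdict(list)

    def record(self, segment_id, camera_id, error_m):
        """Log the localization error a camera produced on a segment."""
        self.history[(segment_id, camera_id)].append(error_m)

    def best_camera(self, segment_id, cameras, default=None):
        """Return the camera with the lowest mean past error on this segment."""
        scored = []
        for cam in cameras:
            errs = self.history.get((segment_id, cam))
            if errs:
                scored.append((sum(errs) / len(errs), cam))
        return min(scored)[1] if scored else default

# Hypothetical usage: the left camera proved most reliable on this segment.
selector = CameraSelector()
selector.record("bridge_17", "front", 0.9)
selector.record("bridge_17", "left", 0.2)
selector.record("bridge_17", "left", 0.3)
print(selector.best_camera("bridge_17", ["front", "left", "rear"]))  # left
```

On a segment with no recorded history, the sketch falls back to a caller-supplied default rather than guessing, which mirrors the article's point that the selection is driven by prior experience at a location.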

The project is being led by Dr. Punarjay (Jay) Chakravarty on behalf of the Ford Autonomous Vehicle Future Tech group.
According to Dr. Chakravarty, “Autonomous cars heavily depend on knowing where they are in the world, employing a range of sensors, including cameras.”

Knowing your location enables you to take advantage of map data, which is also helpful for spotting other dynamic objects in the scene. People may cross at a certain intersection in a specific manner, for example.

Accurate localization is crucial because it can be used as input for neural networks that recognize objects, and this research enables us to concentrate on the best camera at any given time.

The team has also had to develop new methods of measuring the performance of an autonomous vehicle positioning system in order to make headway on the problem.

“We’re focusing not just on how the system operates when it’s performing well, but what happens in the worst-case situation,” said co-lead researcher Dr. Stephen Hausler.

This study was conducted as part of a broader, more fundamental Ford research effort examining how the cameras and LIDAR sensors frequently used in autonomous vehicles might better comprehend their surroundings.

In addition to being presented at the upcoming IEEE/RSJ International Conference on Intelligent Robots and Systems in Kyoto, Japan, in October, this work was recently published in the journal IEEE Robotics and Automation Letters.

Punarjay Chakravarty, Shubham Shrivastava, and Ankit Vora from Ford collaborated with QUT researchers Stephen Hausler, Ming Xu, Sourav Garg, and Michael Milford.
By permission of Queensland University of Technology (QUT).
