Improved Autonomous Vehicle Vision Relies on Location

In collaboration with Ford Motor Company, QUT robotics researchers have developed a method for telling an autonomous vehicle which of its cameras to use for navigation. Senior author and Australian Research Council Laureate Fellow Professor Michael Milford said the study grew out of an investigation into how cameras and LIDAR sensors, which are frequently used in autonomous vehicles, can better perceive their surroundings.

The main concept is to decide which cameras to use at different locations in the world, based on the vehicle's previous experience there, according to Professor Milford.

For instance, after discovering that a specific camera is highly helpful for tracking the vehicle's position on a particular stretch of road, the system might decide to use that camera on subsequent trips there.
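The idea of remembering which camera worked best at each location can be sketched in a few lines. This is a hypothetical illustration, not the QUT/Ford implementation: the region names, the `CameraSelector` class, and the error metric are all assumptions made for the example.

```python
# Hypothetical sketch (not the published QUT/Ford method): remember the
# localization error each camera produced in each map region, then pick
# the camera with the lowest historical mean error when revisiting.
from collections import defaultdict


class CameraSelector:
    def __init__(self):
        # region id -> camera name -> list of past localization errors (meters)
        self.history = defaultdict(lambda: defaultdict(list))

    def record(self, region, camera, error_m):
        """Log the localization error observed when `camera` was used in `region`."""
        self.history[region][camera].append(error_m)

    def best_camera(self, region, default="front"):
        """Return the camera with the lowest mean past error in this region,
        falling back to a default camera for unseen regions."""
        cams = self.history.get(region)
        if not cams:
            return default
        return min(cams, key=lambda c: sum(cams[c]) / len(cams[c]))


selector = CameraSelector()
selector.record("bridge_07", "front", 1.2)
selector.record("bridge_07", "left", 0.4)
selector.record("bridge_07", "left", 0.5)
print(selector.best_camera("bridge_07"))  # -> left
print(selector.best_camera("new_road"))   # -> front (no history yet)
```

A real system would key regions by map coordinates rather than names and weigh many more signals, but the lookup-by-experience pattern is the core of the idea described above.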

The project is being led by Dr. Punarjay (Jay) Chakravarty on behalf of the Ford Autonomous Vehicle Future Tech group.
According to Dr. Chakravarty, “Autonomous cars heavily depend on knowing where they are in the world, employing a range of sensors, including cameras.”

Knowing your location enables you to take advantage of map data, which is also helpful for spotting other dynamic things in the scene. People may cross at a certain intersection in a specific manner, for example.

Accurate localization is crucial because it can be used as input for neural networks that recognize objects, and this research enables us to concentrate on the best camera at any given time.

The team also “had to develop new methods of measuring the performance of an autonomous car positioning system in order to make headway on the problem.”

“We’re focusing not just on how the system operates when it’s performing well, but what happens in the worst-case situation,” said co-lead researcher Dr. Stephen Hausler.

This study was conducted as a component of a broader, more fundamental Ford research effort that examined how cameras and LIDAR sensors, which are frequently used in autonomous vehicles, might better comprehend their surroundings.

In addition to being presented at the upcoming IEEE/RSJ International Conference on Intelligent Robots and Systems in Kyoto, Japan, in October, this work was recently published in the journal IEEE Robotics and Automation Letters.

Punarjay Chakravarty, Shubham Shrivastava, and Ankit Vora from Ford collaborated with QUT researchers Stephen Hausler, Ming Xu, Sourav Garg, and Michael Milford.
By permission of Queensland University of Technology (QUT).

