New Algorithm Helps Navigation of Autonomous Vehicles

A new algorithm developed at Caltech allows autonomous systems to identify their location simply by looking at the terrain around them. Even more fascinating, the technology works regardless of seasonal changes to that terrain.

Details about the process were published on June 23 in Science Robotics, a journal published by the American Association for the Advancement of Science (AAAS).

The general process, known as visual terrain-relative navigation (VTRN), was first developed in the 1960s. A VTRN system determines its location by comparing the terrain near it to high-resolution satellite images.
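
In rough terms, a correlation-based matcher slides the vehicle's view of the terrain across the satellite map and picks the best-scoring position. The sketch below illustrates that general idea with a normalized cross-correlation template match; the function and variable names are illustrative assumptions, not code from the paper.

```python
# Illustrative sketch of correlation-based terrain matching (not the authors' code).
import cv2
import numpy as np

def localize(camera_view: np.ndarray, satellite_map: np.ndarray):
    """Return the (row, col) in the satellite map that best matches the camera view."""
    # Normalized cross-correlation of the view against every position in the map.
    scores = cv2.matchTemplate(satellite_map, camera_view, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_xy = cv2.minMaxLoc(scores)
    col, row = best_xy  # OpenCV reports (x, y); convert to (row, col)
    return (row, col), best_score

# Hypothetical usage: a 64x64 aerial view matched against a 512x512 satellite map.
satellite_map = np.random.rand(512, 512).astype(np.float32)
camera_view = satellite_map[100:164, 200:264].copy()  # the patch the vehicle "sees"
print(localize(camera_view, satellite_map))  # roughly ((100, 200), ~1.0)
```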

The challenge is that, for it to work, the current generation of VTRN requires the terrain it is viewing to closely match the images in its database. Anything that obscures or alters the terrain, such as fallen leaves or snow, causes the images not to match up and interferes with the system. Without a database of landscape images under every conceivable condition, VTRN systems can be easily confused.

To overcome this challenge, a team from the lab of Soon-Jo Chung, Bren Professor of Aerospace and Control and Dynamical Systems and Research Scientist at JPL, which Caltech manages for NASA, turned to deep learning and artificial intelligence to remove the seasonal content that hinders VTRN systems.

‘The rule of thumb is that both images (the one from the satellite and the one from the autonomous vehicle) have to have identical content for current techniques to work. The differences that they can handle are about what can be accomplished with an Instagram filter that changes an image’s hues’, says Anthony Fragoso, Lecturer and Staff Scientist, as well as lead author of the Science Robotics paper. ‘In real systems, however, things change drastically based on season because the images no longer contain the same objects and cannot be directly compared.’

The process was developed by Chung and Fragoso in collaboration with graduate student Connor Lee and undergraduate student Austin McCoy. It uses a method known as ‘self-supervised learning’. Whereas most computer-vision strategies depend on human annotators who curate large datasets to teach an algorithm how to identify what it is seeing, this one lets the algorithm teach itself. The AI searches for patterns in images by teasing out details and features that humans would likely miss.
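
As a rough illustration of that idea, the sketch below trains a tiny convolutional transform so that paired images of the same place in different seasons produce similar outputs, with the pairing itself acting as the supervision signal instead of human labels. The architecture, loss and data here are assumptions made for illustration, not the network described in the paper.

```python
# Illustrative self-supervised training sketch (not the authors' network or loss).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeasonInvariantTransform(nn.Module):
    """Tiny convolutional transform; the real architecture is an assumption here."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),  # outputs a single-channel "season-free" image
        )

    def forward(self, x):
        return self.net(x)

def self_supervised_step(model, optimizer, summer_batch, winter_batch):
    """One step: pull transformed summer/winter views of the same place together."""
    optimizer.zero_grad()
    a = model(summer_batch).flatten(1)
    b = model(winter_batch).flatten(1)
    # No human annotation: the correspondence between the two seasonal views of the
    # same terrain is the only supervision. A real system would also need terms that
    # keep the output informative about the underlying terrain.
    loss = (1.0 - F.cosine_similarity(a, b, dim=1)).mean()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical usage: paired 64x64 crops of the same terrain in two seasons.
model = SeasonInvariantTransform()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
summer, winter = torch.rand(8, 1, 64, 64), torch.rand(8, 1, 64, 64)
print(self_supervised_step(model, opt, summer, winter))
```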

The new system complements the current generation of VTRN very nicely, yielding more accurate localization. In one experiment, the researchers attempted to localize images of summer foliage against winter leaf-off imagery using a correlation-based VTRN technique. They found that performance was no better than a coin flip, with 50% of attempts resulting in navigation failures. In contrast, inserting the new algorithm into the VTRN performed far better: 92% of attempts were correctly matched, and the remaining 8% could be identified as challenging in advance and then easily managed using other established navigation techniques.
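
A success rate like the 92% figure above can be tallied by checking, for each attempt, whether the estimated map position lands within some tolerance of the true position. The short sketch below shows that bookkeeping with made-up positions and a hypothetical pixel tolerance; it is not the evaluation protocol from the paper.

```python
# Hypothetical evaluation sketch: fraction of localization attempts within tolerance.
import numpy as np

def success_rate(estimated, truth, tolerance_px=5):
    """Fraction of attempts whose position error is within `tolerance_px` pixels."""
    estimated = np.asarray(estimated, dtype=float)
    truth = np.asarray(truth, dtype=float)
    errors = np.linalg.norm(estimated - truth, axis=1)
    return float(np.mean(errors <= tolerance_px))

# Made-up example: 3 of 4 attempts land within 5 pixels of the true location.
est = [(10, 12), (40, 41), (200, 7), (90, 95)]
gt  = [(10, 10), (42, 40), (100, 100), (91, 96)]
print(success_rate(est, gt))  # 0.75
```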

‘Computers can find obscure patterns that our eyes can’t see and can pick up even the smallest trend,’ says Lee. VTRN was on the verge of becoming an impractical technology in common but challenging environments, he says. ‘We rescued decades of work in solving this problem.’

Apart from its utility for autonomous drones on Earth, the system also has applications for space missions. The entry, descent and landing (EDL) system on JPL’s Mars 2020 Perseverance rover mission used VTRN on the Red Planet to land at Jezero Crater, a site previously considered too hazardous for safe entry. With rovers like Perseverance, ‘a certain amount of autonomous driving is necessary since transmissions could take 20 minutes to travel between Earth and Mars, and there is no GPS on Mars,’ Chung says.

The team also considered the Martian polar regions, which undergo intense seasonal changes similar to those on Earth; the new system could allow for improved navigation there in support of scientific goals, including the search for water.

The next objective for Fragoso, Lee and Chung is to expand the technology to account for changes in the weather, such as fog, rain and snow. If successful, their work could help improve navigation systems for driverless cars.

The Science Robotics paper is titled ‘A Seasonally-Invariant Deep Transform for Visual Terrain-Relative Navigation’. The project was funded by the Boeing Company and the National Science Foundation.

By Marvellous Iwendi.

Source: Caltech