Researchers at MIT are using VR to train autonomous drones in safer, less expensive environments.
Commercial drone use has skyrocketed over the last four years. Breakthroughs in unmanned aircraft technology have lowered the prices of key components to the point that filmmakers, researchers, and even casual hobbyists have been able to join the fray, resulting in a booming new market with a wide range of use cases.
Now, with a new dawn in parcel transportation just beyond the horizon (see Amazon Prime Air), engineers and developers have begun doubling down on their efforts to develop more efficient autonomous quadcopters. Unfortunately, training high-tech, fast-moving machines to automatically detect and avoid physical objects isn’t exactly a safe or cheap process. Thankfully, a handful of brilliant minds at the Massachusetts Institute of Technology are developing a new VR-based testing process that could save manufacturers hundreds of thousands of dollars in shattered hardware.
Referred to by MIT researchers as “Flight Goggles,” the cutting-edge system uses VR technology to let self-piloting vehicles “see” a virtual world full of objects to navigate around and avoid, when in reality they’re simply flying through an open, empty space. This allows engineers to properly calibrate a vehicle’s tracking components, make adjustments to specific hardware, and more, all without the risk of an expensive crash.
The project was initially inspired by the rising popularity of competitive drone racing. Sertac Karaman, associate professor of aeronautics and astronautics at MIT, watched skilled pilots perform high-speed cuts and turns with their custom racing drones and wondered whether those same advanced aerial maneuvers could be taught to an autonomous drone as well.
“We think this is a game-changer in the development of drone technology, for drones that go fast,” said Karaman. “If anything, the system can make autonomous vehicles more responsive, faster, and more efficient.”
The system is currently being tested at MIT’s new robotics facility in Building 31, a hangar-style space surrounded by motion-tracking cameras that record the position and orientation of each drone. An advanced image-rendering system then generates photorealistic surroundings based on practical, real-world locations such as living rooms and warehouses, each filled with plenty of virtual objects to “avoid.”
“The moment you want to do high-throughput computing and go fast, even the slightest changes you make to its environment will cause the drone to crash,” Karaman continues. “You can’t learn in that environment. If you want to push boundaries on how fast you can go and compute, you need some sort of virtual-reality environment.”
Early tests of the new system produced promising results. Over the course of 10 flights comprising 361 trips through a narrow virtual window, the drone “crashed” only three times. Once the vehicle’s onboard camera was turned off and the drone was put through a real-world test of the same scenario, it managed to navigate through the open window 119 times.
“It does the same thing in reality,” Karaman adds. “It’s something we programmed it to do in the virtual environment, by making mistakes, falling apart, and learning. But we didn’t break any actual windows in this process.”
Karaman and his team plan to reveal more about their program next week at the IEEE International Conference on Robotics and Automation. Karaman, however, has his eyes set on potentially bringing this advanced self-navigating technology to the pro racing circuit.
“In the next two or three years, we want to enter a drone racing competition with an autonomous drone, and beat the best human player,” Karaman says.
Research on the project has been supported, in part, by the U.S. Office of Naval Research, MIT Lincoln Laboratory, and the NVIDIA Corporation.
Image Credit: UAS Vision / MIT