If there is one thing we learned from the Google I/O conference, it is to look beyond the headline announcements and focus on the foundation Google is laying by attracting makers and developers to build for its platforms. As previously mentioned, Day 1 of Google I/O brought major announcements like Jump and Cardboard, which marked a significant advance for Google into virtual reality.
Although Google didn’t explicitly reveal how Magic Leap fits into its plans, the hand it showed was impressive: Projects Tango, Soli, and Jacquard represent intriguing cards, especially if paired.
Like Google Cardboard, Project Tango has evolved greatly since its introduction in 2014. Originally presented as a 3D imaging tool, Tango was showcased at I/O 2015 with capabilities in both virtual and augmented reality. Johnny Lee opened by stating Project Tango’s goal: “To help everything and everyone understand where they are… devices and tools that understand space.”
Demoing the Project Tango Tablet, Lee showcased Tango’s core technologies (motion tracking, area learning, and depth perception) and illustrated its functionality in multiple areas, including real-time meshing with physics in Unity, precise indoor positioning, and AR overlays for indoor navigation. Google demonstrated these points through the following live examples:
Building with virtual objects in physical space:
Using a Unity demo as an homage to a popular world-building game (Minecraft; cough, cough), Lee was able to point the tablet’s camera at arbitrary points on the stage and quickly construct a virtual building.
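Placing virtual objects at arbitrary real-world points depends on the device knowing its own pose at every instant. A minimal sketch of the idea (this is not the Tango API; the function and all values are invented for illustration): motion tracking supplies a pose, depth perception supplies points in the camera frame, and applying the pose places those points in a shared world frame.

```python
import math

def apply_pose(point, translation, yaw):
    """Rotate a camera-frame (x, y, z) point by yaw about the vertical
    axis, then translate it into the world frame."""
    x, y, z = point
    c, s = math.cos(yaw), math.sin(yaw)
    world_x = c * x + s * z
    world_z = -s * x + c * z
    tx, ty, tz = translation
    return (world_x + tx, y + ty, world_z + tz)

# Hypothetical case: the device has walked 1 m forward along z with no
# rotation, and its depth camera sees a point 2 m ahead.
print(apply_pose((0.0, 0.0, 2.0), (0.0, 0.0, 1.0), 0.0))  # (0.0, 0.0, 3.0)
```

Area learning, the third core technology, would then correct the drift that accumulates in these poses over time.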
Transferring 3D assets to AR:
Using a 3D game provided by Other World, Lee highlighted how developers could port their games to the platform, enabling six-degree-of-freedom (6DoF) input.
Bringing real-world geometry into the game engine:
Utilizing the tablet’s depth sensor and 3D tracking capabilities, Lee launched the developer app Mesh Builder and, in real time, rendered a 3D copy of the physical stage, complete with a basket he used to shoot virtual balls into. The demo highlighted the device’s capability not only to render the physical space but also to simulate physics against it in real time.
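The point of the meshing demo is that reconstructed real-world geometry becomes collision surfaces inside the game engine. A toy sketch of that idea, under the simplifying assumption that the "mesh" is just a flat floor at height zero (all constants here are invented, not from the demo):

```python
GRAVITY = -9.8       # m/s^2
RESTITUTION = 0.5    # fraction of speed kept per bounce
DT = 0.01            # simulation step, seconds

def simulate(height, velocity, steps):
    """Integrate a falling ball; bounce it off the reconstructed
    'floor' at height 0 and return its final height."""
    for _ in range(steps):
        velocity += GRAVITY * DT
        height += velocity * DT
        if height < 0.0:                  # collision with real geometry
            height = 0.0
            velocity = -velocity * RESTITUTION
    return height

# Drop a virtual ball from 1 m above the scanned floor.
print(simulate(height=1.0, velocity=0.0, steps=200))
```

In the real demo, the collision geometry is the live-scanned stage mesh rather than a flat plane, but the engine-side logic is the same.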
Measuring your physical environment:
Using a simple utility app, Lee traced the distance between the corners of an onstage table. Moving from one measurement to the next, the tablet’s software rapidly displayed the augmented measurements.
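The measuring demo reduces to simple geometry: the tablet resolves each tap into a 3D point via its depth sensor, and the augmented measurement is the Euclidean distance between consecutive points. A minimal sketch, with hypothetical corner coordinates:

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y, z) points, in meters."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Invented table corners in the device's world frame.
corner_a = (0.0, 0.0, 1.5)
corner_b = (1.2, 0.0, 1.5)
print(f"{distance(corner_a, corner_b):.2f} m")  # 1.20 m
```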
The last live demo displayed the tablet’s capability to serve as a virtual showroom. Google waited until its final announcement, however, to unveil one of Project Tango’s coolest functions:
VR devices sharing 3D data to replicate the physical world:
Opening with a static image of Durovis Dive helmets, Google explained that these head-mounted tablets were being used as VR devices. The image quickly changed from a forest scene to three floating heads in a room. Lee explained that through these devices Google was able to replicate users’ physical motions in a virtual world. The floating heads represented three users talking in a room; as they spoke, their mouths moved in sync in the virtual world. The forest example, by contrast, was a large-scale virtual forest involving the movement of multiple helmet-equipped players.
Pairing Project Tango Devices with Physical Objects:
Gesturing toward a picture of a Nerf gun, Lee spoke about Google’s partnership with Hasbro and described the creation of a 3D sight mount holding a Project Tango-enabled device that displays a virtual world.
Although the presentation lacked the dazzle of a hardware device like Microsoft HoloLens, it is another reminder of Google’s brilliant long-term strategy. While others are placing high-cost bets on luxury hardware, Google is enabling developers and makers to build for its platforms easily through the release of guidelines, SDKs, and low-priced hardware.
Not only has Google already shipped 3,000 Project Tango developer kits, it has also partnered with a significant number of companies (including NASA) to develop hundreds of apps for the platform. In fact, before closing the presentation, Google once again incentivized developers to build for the platform by announcing an app development contest in three major categories: utility, AR/VR, and entertainment.
While Project Tango’s ability to enable experiences mimicking Star Trek’s Holodeck is impressive, it shares equal footing with two of Google’s other announcements: Project Soli and Project Jacquard. These two projects blend futuristic technology to enable a world without the need for physical interfaces such as a keyboard, mouse, or gamepad.
Built by ATAP, the same team that developed Project Tango, Project Soli shrinks radar technology into a form factor equal to or smaller than a microSD card, giving users the ability to interact with their devices through micro-movements of their hands.
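A hand-wavy sketch (pun intended) of the concept, not Soli’s actual pipeline: the radar reports the hand’s distance many times per second, and a quick dip-and-return in that distance can be read as a virtual “button press.” The function, trace, and thresholds below are all invented for illustration.

```python
def detect_tap(distances, dip=0.02):
    """Return True if the distance trace dips at least `dip` meters
    below its starting value and then recovers toward it."""
    baseline = distances[0]
    dipped = False
    for d in distances:
        if d <= baseline - dip:
            dipped = True                    # finger pressed toward sensor
        elif dipped and d >= baseline - dip / 2:
            return True                      # finger released: a "tap"
    return False

# Hypothetical distance readings, in meters: hand dips, then returns.
trace = [0.10, 0.10, 0.07, 0.06, 0.08, 0.10]
print(detect_tap(trace))  # True
```

Soli itself works from much richer radar signals (velocity and fine motion, not just range), but the appeal is the same: gestures in free air replace a physical control.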
A baby step relative to Soli, Project Jacquard provides conductive threads that can be woven into clothing, enabling futuristic interactions through the fabric we already wear. To bring this technology to life, Google is partnering with Levi’s to create the next generation of wearable devices.
Google I/O has served as a platform for enabling innovative makers rather than a venue for full-fledged product introductions like the Consumer Electronics Show (CES). Makers are leaving the conference with the ability to construct 3D camera rigs and futuristic clothing, while developers lay the framework for both VR and AR experiences.
With conferences like Apple’s WWDC and E3 on the horizon, it will be interesting to see how other technology giants like Microsoft, Apple, Oculus, and Sony steer developers toward their platforms. Unsurprisingly, Google I/O confirms Google’s strong position as a content leader, both in traditional media and in its VR aspirations.