The virtual reality industry is exploding. Practically everywhere you look, VR is flooding into the mainstream consciousness at an increasingly rapid rate. Huge sums of money are being pumped in for content creation, and it’s all thanks to 360-degree video.
Let’s be honest, we would not have the level of public interest in VR that we have now without easily accessible content. Reaching millions of individuals meant that the processing capabilities to power an experience needed to be ‘dumbed down.’ Hence, the rise of smartphone-enabled VR devices.
Which is why we have products like Samsung’s GearVR and Google’s Cardboard. People need a way to experience and share content without having to buy an expensive computer or travel to a pricey technology conference. Sure, these aren’t the awe-inspiring, interactive experiences that leave people speechless, but they’re by far the easiest way to turn people on to the possibilities of virtual reality.
It’s all about throughput. For instance, let’s say a content company wanted to get a thousand people to try VR on a Rift or Vive. They could start with one headset and line up those thousand people. If the experience were 2 minutes long, it would take approximately 33 hours and 20 minutes to get everyone through. Not to mention the added time of taking off the headset, cleaning the lenses, and getting the next person set up.
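To make that math concrete, here’s the back-of-the-envelope calculation in a few lines of Python (the numbers are just the hypothetical ones above, and swap-over time is ignored):

```python
# Back-of-the-envelope throughput math for a single tethered headset.
people = 1000
minutes_per_demo = 2                       # length of the VR experience
total_minutes = people * minutes_per_demo
hours, minutes = divmod(total_minutes, 60)
print(f"{people} people x {minutes_per_demo} min each = {hours}h {minutes}m of headset time")
# -> 1000 people x 2 min each = 33h 20m of headset time
```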
On the other hand, companies have taken an array of cameras, placed them wherever the action is, and stitched the footage together. Then they get a bunch of cheap headsets and hand them out where people are already congregating. From there, they tell everyone who has a phone (which everyone does) to download an app with a sync timer that will play a 360-degree video at a specific time. This approach gets those 1,000 people into VR within a couple of minutes and is far more cost effective than attempting anything similar with a more powerful experience.
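The sync timer itself doesn’t have to be anything fancy. Here’s a minimal sketch of the idea in Python, assuming the phones’ clocks are already reasonably in sync (say, via NTP) and that `play_video` is a hypothetical hand-off to the device’s video player:

```python
import time
from datetime import datetime, timezone

# Hypothetical playback hook -- in a real app this would hand off to the video player.
def play_video(path):
    print(f"Playing {path} at {datetime.now(timezone.utc).isoformat()}")

def play_at(start_utc, path):
    """Block until the agreed-upon UTC start time, then start playback."""
    delay = (start_utc - datetime.now(timezone.utc)).total_seconds()
    if delay > 0:
        time.sleep(delay)  # assumes the phone's clock is already NTP-synced
    play_video(path)

# Every phone in the audience is handed the same start time ahead of the event.
play_at(datetime(2016, 2, 21, 19, 0, tzinfo=timezone.utc), "keynote_360.mp4")
```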
Mirada synced small groups of people in a similar fashion at Comic-Con. Wemersive ramped the numbers up to around 400 with the Nantucket Project. VRSE did it with around 1,200 individuals at TED. Samsung streamed 360-degree video to over 6,000 people at the 2016 Mobile World Congress. And they are not the only ones.
Even easier, you could take that 360-degree video and put it up on YouTube or Facebook to reach the masses.
Brands want views. Simple as that. Interactive virtual reality content running on headsets like the Rift or Vive will not deliver the same number of views as content watched on mobile HMDs anytime soon. Which means that 360-degree video is the hottest way to get people interested in VR right now.
Because of this, companies are actively searching for people to stitch those experiences together. The problem is that those who enter the industry at that level are going to be replaced by artificial intelligence fairly quickly. It’s already happening.
Look at the new JUMP camera. This camera solution uses Google’s AI algorithms to stitch the individual videos by pinpointing patterns once the footage is uploaded to Google’s servers. Granted, there are no top and bottom cameras, so those sections come out blurry or are cut from the experience, leaving much to be desired. Plus, it still uses GoPros, which are not that good in low light and don’t capture high enough resolution to create cinema-quality work.
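That “pinpointing patterns” step is, at its core, classic feature matching. Here’s a rough sketch of the textbook approach using OpenCV in Python; this is not Google’s actual pipeline, just the underlying idea, and the file names are placeholders:

```python
import cv2
import numpy as np

# Two overlapping frames from neighboring cameras (placeholder paths).
left = cv2.imread("cam_left.jpg")
right = cv2.imread("cam_right.jpg")

# Detect distinctive patterns (features) in each frame and match them up.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(left, None)
kp2, des2 = orb.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:200]

# Estimate how the right frame maps onto the left one, then warp it into place.
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
warped = cv2.warpPerspective(right, H, (left.shape[1] * 2, left.shape[0]))
warped[0:left.shape[0], 0:left.shape[1]] = left
cv2.imwrite("stitched_pair.jpg", warped)
```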
Content creators are thus experimenting with other camera solutions, like REDs, Canon EOS M3s, and Sony A7Ss (among others), which all record at higher quality and usually perform better in low-light scenarios. That is great for people entering the industry at the stitching level. The experimentation with other cameras means that stitching is a much-needed skill, and it will stay that way for a little while. But it is still only a matter of time before AI neural networks are fed enough data to handle these camera arrays as well.
Talk about having no job security. Unless those people can edit video, create VFX, operate cameras, write scripts, or move into directing, they will be kicked to the curb like it’s nothing.
Stitching is often a thankless job. It gets people in the door, but it is difficult to create a perfect seam, especially when the camera is moving. The software available today just isn’t quite there. You can work for days, going frame by frame through a few minutes of video, and it still won’t be perfect.
Kolor Autopano, VideoStitch, OpenCV, and Nuke can get the job done well enough. It’s mentally draining work, though, and 360-video projects can pile up quickly. It is easy to set up an array of cameras, but it’s difficult to appreciate the work that goes into creating a clean stitch.
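For a sense of what those tools automate, OpenCV ships a high-level stitching pipeline that covers the same broad steps: feature detection, camera estimation, warping, seam finding, and blending. A minimal sketch in Python, with placeholder file names and far less hand-tuning than a real 360 rig demands:

```python
import cv2

# Frames pulled from each camera in the rig at the same timestamp (placeholder paths).
frames = [cv2.imread(f) for f in ("cam0.jpg", "cam1.jpg", "cam2.jpg", "cam3.jpg")]

# OpenCV's built-in pipeline (panorama mode is the default): feature detection,
# camera estimation, warping, seam finding, and blending.
stitcher = cv2.Stitcher_create()
status, panorama = stitcher.stitch(frames)

if status == cv2.Stitcher_OK:  # status 0 means success
    cv2.imwrite("panorama.jpg", panorama)
else:
    # Common failure: not enough overlap or too few matching features between frames.
    print(f"Stitching failed with status {status}")
```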
For those entering the stitching world, it is recommended to document every problem encountered: show each issue and describe the process used to fix it, so that the people shooting the video can understand the process better. Obviously, it is hard to create documentation when projects keep piling up with tight deadlines, and VR teams are still small. But it is worth the extra effort to show the fine-tuning process. Those who go the extra mile can transition into something like an R&D arm of the company as it grows, even once the AI infrastructure starts to kick in.
Technology moves fast, and it is only going to increase in speed. So, those who are getting into the industry now need to keep looking towards the future, or they will surely get replaced.
Feature image credit: DataFloq