Tough Opinions: The Critic’s Guide to Sundance

Editor’s Note: We at VRScout love VR and AR. Sometimes, that means we have to put the gush-fest on hold for a second and take a hard look at what’s working and what could be working better. Contributor Nick Bicanic was brave enough to point that out to us and take up the mantle. These are opinions meant to encourage discussion.


I came, I saw, I critiqued.

I made my yearly pilgrimage to Park City. Despite the marked absence of Samsung, all the usual players were there. And it was a great place to try some headset-driven experiences that aren’t yet widely available.

Enough time and money has been invested (by VCs and tech companies alike) that audience attention is now both possible and necessary – and, as VR creators, it’s our job to honor that attention to the best of our abilities.

Because here’s the thing: we won’t get audiences if we don’t deliver compelling content. And we (the content creators) are not competing with some dude who just bought a Gear 360 and is filming his dog’s first steps – we’re competing for eyeballs with Game of Thrones, Peaky Blinders, Taylor Swift, Snapchat, and Instagram. So I’m going through some of the big-buzz pieces I was able to experience at the festival with a fine-tooth comb.

With that out of the way, let’s get to it:

Life of Us


WHO: Within (Chris Milk, Aaron Koblin; with music by Pharrell Williams)

WHAT: “This shared VR journey tells the complete story of the evolution of life on Earth.”

MECHANICS: This is a roomscale Vive experience, which at New Frontier was presented as a two-person experience – though I’ve also seen it referred to as a three-player one. An additional non-standard element is microphone pickup: when you talk, the other person in the experience can hear you.

WHAT YOU EXPERIENCE: You evolve through different phases of life and the environment around you changes as you transition from one form to another – roughly approximating the correct environment for the creature you have become.

WHAT ELSE CAN YOU DO: You hold a Vive controller in each hand, which allows varying degrees of interactivity. In the “bird” stage, waving them causes your wings to flap. In the “human” level, papers fly out of your suitcase.

There are also supposed to be mild aspects of interactivity between players – the only one I discovered was that in the “gorilla” stage, my partner and I could pull little monkeys off of each other’s backs. It was tough to figure out what throwing them away did (or didn’t) do, but it did exist as a joint feature.

I should also mention that you can talk to each other. When you speak, the other person hears your voice—through a series of modulated real-time filters. At the very beginning the voices are (presumably deliberately) entirely unintelligible, but by somewhere in the middle you can communicate with your partner.

NICE TOUCH: Shadows. Look behind you when the sun is in front of you and the shadows on the ground are of your whole body running – including your arms – for which the inverse kinematics are correctly calculated on the fly (i.e. they look like your arms). That’s a subtle thing that could’ve easily been left out – but it felt solid.

OPINION: I don’t get the point.

As a game it’s not particularly exciting—I have no agency at all—I’m just stuck on a sort of defanged Temple Run clone with no steering, jumping or point system. As a story, well, it doesn’t really have one beyond being a literal facsimile of evolution. To me it seems like the popularity of Tilt Brush might have influenced the creation of Life of Us, and the team let the desire to produce a technical demo with confetti painting features drive the storyline—rather than the other way around.

Chris has said before that VR is not a game. And it’s not a movie.

An excellent VR experience is one where my emotional engagement with the story (however it might manifest – agency, character, plot, music, fear, exhilaration, joy, etc.) is sufficient to overwhelm the technical restrictions of the medium.

Life of Us is a well-executed tech demo of a multi-player, roomscale, high-polygon-count animated landscape, but the story component doesn’t land for me.

See our other thoughts on Life of Us.

ASTEROIDS!

WHO: Baobab Studios (Eric Darnell)

WHAT: “The story chronicles a journey through the cosmos aboard the spaceship of Mac and Cheez, an alien duo so mission-focused they forget what’s important in life.”

MECHANICS: Baobab presented this on the Vive at their premiere party. I imagine it will be viewable on other headsets, but the full experience requires accurate spatial tracking and hand controllers.

WHAT DO YOU SEE: You are basically watching an animated movie take place, with mild interactivity. The gist is that you are in a spaceship with two characters and their dog-like robot, and stuff happens to them.

WHAT ELSE CAN YOU DO: You can interact with certain objects on the spaceship (mostly pushing buttons on consoles), and you can also move your position on the ship. Quite how this works was a bit of a mystery to me – I first saw this experience at VR on the Mountain, where the control mechanics were not explained and there was no intro level to familiarize myself with them. I later realized that at certain times in the story you can move the virtual camera position forward and backward.

The cutest aspect of interactivity in my opinion was throwing the ball to the dog. Since the game engine renders the animation in real-time – the robot dog responds to your throws very realistically.

OPINION: Like everything I’ve seen of Baobab’s work before, this piece is visually very cute. The dog animations are a labor of love – and everything from the sound effects to the transitions is elegantly handled. I wasn’t personally particularly taken by the story – but at least there WAS a story. Adding small aspects of agency to a strong storyline gets my vote as the way we’ll reach the holy grail of agency, emotion and speed. However, if the most memorable part of the experience for me was throwing the ball to the dog, then there is a problem afoot.

As it happens, I think this piece could also work very well as a flat animated short.

See our other thoughts on ASTEROIDS!

Journey to the Center of the Natural Machine


WHO: Meta (Daniella Segal, Daniel Lazo, Eran May-Raz, Charles Niu)

WHAT: “Journey with a friend on an augmented reality headset to come face-to-face with the earliest days of human brain development, and travel through to its evolution as the fulcrum of our extraordinary and always-growing abilities to create, communicate, and collaborate.”

MECHANICS: Two people use Meta 2 headsets together to see an educational public service announcement on brain development.

WHAT DO YOU SEE: Two people enter a room together – both wear Meta 2 headsets and stand under an audio dome of some kind. The experience floats a brain in front of you and asks you to insert pieces of the brain into the correct place.

WHAT ELSE CAN YOU DO: Nothing. You are hearing voiceover and seeing a schematic of the human brain. We spent more time trying to “grab” a piece of the brain than anything else.

OPINION: This piece makes less sense at Sundance than it does at Emerging Technologies at SIGGRAPH. Sundance is about storytelling—experiences we would want to encourage others to view or share. While I commend Daniella and the entire team that Meta acquired to create content for them—and I have no doubt that future biology educators will have some version of human organs for us to wave around like Tom Cruise in Minority Report—this tech demo was absolutely not ready for public consumption.

Tracking stability is absolutely crucial in AR experiences. I have seen the ODG R9, HoloLens and the Meta in controlled demos and so far only the HoloLens comes close to magic. And I mean actual magic – not the VC Kool-Aid-drinking Magic-Leap magic. Field of view is nice and wide on the Meta 2—but I’ll take narrow field of view with rock-solid tracking all day.

See our other thoughts on Journey to the Center of the Natural Machine.

Dear Angelica


WHO: Oculus Story Studio (Saschka Unseld, Wesley Allsbrook)

WHAT: “A visually splendid and deeply human journey through memory, loss, and the magic of a valorous life.”

MECHANICS: An Oculus experience with headset/headphone only.

WHAT DO YOU SEE: Art being drawn around you as a narrator tells you a story about memory, life and love.

WHAT ELSE CAN YOU DO: Nothing. But you don’t need to do anything else.


OPINION: While the story structure was not particularly complex—and I wasn’t overly fond of Mae Whitman’s voiceover acting—this mattered comparatively little. The visuals and music were enough to pull me into the story. As the often-quoted Scott McCloud said about comic books, the story between the panels is as important as the story in the panels (because your brain constructs the rest and makes it your own).

Something very similar is at work here. In practical terms this is a sequence of still illustrations with some minor animations (e.g. a waving sheet or a blowing blade of grass), but the transitions between the illustrations are elegant enough that the sleight of hand works.

This felt like being inside a graphic novel – not just as it was being drawn – but as it was being imagined.

This gets my vote for the most important piece of the festival – because it pushes the emotional aspects of the medium forward the most.

See our other thoughts on Dear Angelica.

Miyubi

WHO: Felix & Paul (Félix Lajeunesse, Paul Raphaël)

WHAT: “Experience love and obsolescence as a Japanese toy robot, gifted to a child in the home of a fractured family in 1982 suburban America.”

MECHANICS: This is a 360 video with ambisonic sound – I experienced it on a GearVR in the Oculus House at Sundance.

WHAT DO YOU SEE: Most shots are from the POV of the robot, which is about the size of a small 8-year-old child (so the camera angles are lower). It’s a full 360 video experience, meaning it was shot as live action – with some elements composited in later to fix errors. It’s 40 minutes long.

It’s split into a number of chapters (each about 4–6 minutes long). At each break there is a fade to black – although the fade is accomplished in a sort of 8-bit digital aesthetic reminiscent of the video game capabilities of mid-’80s toys. There are no cuts in the traditional sense (there are two moving shots, but the rest are static).

WHAT ELSE CAN YOU DO: Interestingly the answer is not “nothing,” as is the case with almost all other 360 videos. There are three randomly placed secret objects around some of the scenes, and if you happen to see all three of them (which I did), this unlocks a Jeff Goldblum scene at the end of the film (it’s not a particularly great scene, but hey, it’s there).

NICE TOUCH: Felix and Paul like to do things a little differently, so they made a custom player for their 360 video – which allows them to overlay graphics that are uncoupled from head rotation. So, for example, when Miyubi speaks the name of a child, the on-screen graphics behave like a fighter pilot’s head-up display, staying in the center of your view regardless of how you turn.

This is a cute addition to the toolset, but doesn’t add much to the narrative.

Oh, and: “Safety Dance.” Nice. Finally F&P have something in common with Pauly Shore (Yes I’ve seen Biodome. So?)

OPINION: F&P are perfectionists. It’s extremely rare to find errors in their stitching or problems with their zenith/nadir patches (there were some extremely minor ones, but this work was not being shown in its finished state).

This is both a good thing and a bad thing. It’s a good thing because it means there are few technical distractions for a viewer, but it’s a bad thing because (in Miyubi) the audience is free to focus on the performances and the narrative – neither of which is particularly exemplary in this case.

To be clear: clean 360 video post-production is hard to do well. So when a small team with their first VR camera delivers a cleanly stitched experience to a client – high fives all around. But when a significantly funded studio does this, they don’t get any extra points. That’s table stakes for them.

F&P’s role—if they wish to continue to be heralded as trailblazers by the Hollywood establishment—is to blaze a trail with stuff that will blow away the Hollywood types so they are driven to continue investing in 360 video as a medium.

In my opinion, Miyubi fails to do this. The story lacks enough heft to span its 40 minutes and the performances are weak. 360 video is extremely intolerant of weak performances. As I have said before – the additional emotional intensity of a VR headset demands an emotional authenticity in an actor’s performance. And it is the director’s job to elicit this from the talent. Ensemble casts are hard to do at the best of times.

A good example is the dinner scene in Miyubi, where multiple conversations are taking place. It all just feels… anything but real. The actors are taking their turns and not talking over each other too much – nobody is moving very much. The set is barren and there is very little going on in the background.

For an example of how to do this kind of scene right, watch the dinner scene in the Viggo Mortensen movie Captain Fantastic, Matt Ross’s tour-de-force as a first-time feature film director. That is not a 360 movie, of course – but I’m using it as an example of how a performance is supposed to feel.

As has been stated before – actors in 360 video need to have the emotional intensity of a close-up and the physicality of a wide shot, at the same time.

I’m sure a lot of money, time and effort went into this project, and the industry needs high-profile projects like this to succeed.

But we’re not going to make successful dramatic cinematic VR without a serious look at whether or not audiences are responding to stuff like this. And I don’t mean audiences of investors, partners and/or people wanting to ingratiate themselves with the anointed VR superstars (be they Felix, Paul, Chris, or others). I mean audiences that actually pay money to watch content.

Because without those guys – we’re all screwed.

Sure, we can all make a VC-dollar fueled argument that this is all experimentation and no one knows what will ultimately work anyway—and there is of course truth in that—but it strikes me that the lack of criticism in this industry overall is not healthy for its growth. Now is when this medium is being shaped the most. And I think we all agree that there is a ton of lackluster stuff out there—and the stuff that is good is very (very) hard to find (as it happens I’m trying to solve that problem, but that’s a conversation for later).

I didn’t ask to be the “Emperor has no clothes” guy, but those who know me, know that I speak my opinions openly and firmly. So go forth and experiment – attach a camera to your dog, your cousin, your car, your skydiving helmet. Cut aggressively, cut quickly, jar the audience, experiment with audio. If you’re making scripted narrative experiences, understand how to communicate with actors. A good actor doesn’t need a deep technical understanding of VR, but they do need an emotional understanding of what is required.

Just don’t make your audiences sick, confused or bored. The rest will come.

About the Scout

Nick Bicanic

A filmmaker and a technologist who mixes left-brain and right-brain stuff on an ongoing basis, Nick is equally at home on an MMA fighting mat, at a Unix command prompt, on a mast-high crushing north swell, behind the controls of a helicopter, convincing a board about strategic changes of direction, field-stripping a SIG 552 Commando blindfolded or inventing new paradigms in mobile phone user interface design. He wrote, directed and produced the four-time Leo Award-winning feature-length documentary “Shadow Company” – a unique insight into the secret world of modern-day soldiers for hire. He also produced “The War Against Boko Haram” for VICE News and published his first book, “Executive Outcomes” – a graphic novel based on his script of the same name. He is a serial entrepreneur and is considered one of the leading mobile product visionaries in Silicon Valley. Nick is currently a founder at RVLVR, a new VR/AR storytelling and software company.
