AI, music, and VR creation tools collide in this beautiful 360° video.
Artificial Intelligence and Virtual Reality are two of the most compelling and fastest-growing sectors in today’s technological landscape. Analysts often focus on which of these world-changers is “winning,” or which one investors and the general public should pay attention to. One pair of artists, however, wants to ask “¿por qué no los dos?” (why not both?)
Taryn Southern is an artist, singer, producer, and popular YouTuber. Dani Bittman is one of the most prolific creators of immersive art working today. Together they have created Life Support, a 360° music video for a song composed using artificial intelligence.
Part of YouTube’s VR Creator Lab, the video can be viewed in 360° on desktop, mobile devices, and VR headsets. We reached out to the duo to learn about the process behind this one-of-a-kind artistic undertaking.
What was the process you used to create this experience and marry it with the music?
Dani Bittman: Taryn and I first worked closely for a month to define the story. We ultimately wanted to depict someone who is plugged into a fantastical, motion-based immersion system (i.e., their life support system), but we wanted to convey that some kind of balance between VR and “RR” (real reality) is necessary in order to maintain sanity.
From the start we wanted to create as much of the experience in VR as possible, so we tossed around possible plot ideas and immediately drafted them out in Tilt Brush to get a sense of scale and flow. I then used a virtual camera in Tilt Brush to film possible 360 camera paths through the environments, sent a cut to Taryn, and discussed where to go from there.
Once we locked our story, I began the process of jumping back and forth between Blocks and Tilt Brush using the Poly integration. The house, city, and ground in the first shot, for example, were all made in Blocks, while the mountains, trees, and city lights were painted in Tilt Brush. We also used an unreleased app called AnimVR, which allowed us to import our entire scenes and paint animations in VR, like those luminescent strings and the bursts of light that the main character generates.
But I didn’t want to switch to a 2D workflow once I imported into Unity, so I used Virtual Desktop’s display overlay to work in Unity on a virtual big screen. With this approach, I was able to preview my scene in VR while still maintaining control over my variables in Unity. I just had to press play, and the scene would appear around me and my virtual monitor. I could then use my monitor’s overlay to tweak lighting colors and object scale, and even modify my camera path animation, all while still in VR.
I wanted to keep the streak going, though, so I used the virtual overlay in conjunction with Adobe Premiere’s live 360 previewer. It was quite a dream as a traditional filmmaker to use my old workflows while working in this new medium. In fact, the only difference between traditional film editing and this 360 workflow was that instead of having a 2D monitor taking up my UI space, my world became my monitor. I had so much more room in Premiere for my timeline and color correction nodes.
I did have to do all of the character and solid object animation in Maya, but in the end I’d estimate that I completed about 70% of the video with a headset on.
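To make the camera-path idea Bittman describes above a little more concrete, here is a minimal sketch, in Python and purely for illustration (his actual paths were authored in Tilt Brush and refined in Unity; the waypoints and the Catmull-Rom spline choice below are our own assumptions), of how a smooth 360 camera move can be interpolated from a handful of hand-placed waypoints:

```python
# Illustrative only: smooth a 360 camera path through keyframed waypoints
# with Catmull-Rom splines. The project itself authored paths in Tilt Brush
# and tweaked them in Unity; these waypoints are hypothetical.

def catmull_rom(p0, p1, p2, p3, t):
    """Interpolate between p1 and p2 (t in [0, 1]) using Catmull-Rom."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b + (c - a) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (3 * b - a - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def sample_path(waypoints, samples_per_segment=30):
    """Return a densely sampled camera path through the given waypoints."""
    pts = [waypoints[0]] + list(waypoints) + [waypoints[-1]]  # pad endpoints
    path = []
    for i in range(len(waypoints) - 1):
        for s in range(samples_per_segment):
            t = s / samples_per_segment
            path.append(catmull_rom(pts[i], pts[i + 1], pts[i + 2], pts[i + 3], t))
    path.append(waypoints[-1])
    return path

# Example: four hand-placed (x, y, z) waypoints rising over a painted city.
waypoints = [(0, 1.6, 0), (2, 2.0, 4), (5, 3.5, 6), (9, 6.0, 7)]
for x, y, z in sample_path(waypoints, samples_per_segment=4):
    print(f"camera at ({x:.2f}, {y:.2f}, {z:.2f})")
```

Catmull-Rom splines pass through every waypoint, which makes them a common default for camera moves that need to hit specific framings while still feeling smooth.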
Taryn Southern: Dani did a fantastic job of describing the art creation, direction, and collaborative workflow. I focused primarily on the song: arranging and editing the instrumentation stems, writing lyrics, and doing my best to match sounds with the visuals Dani was creating.
What were some of the largest challenges you faced during production and how did you overcome them?
Southern: Dani could speak better to the challenges on the VR creation side, but I’d say one of the biggest challenges we had to overcome was figuring out how to please the YouTube audience. We knew most people would be watching this video on their phones, not on headsets, so we wanted to be mindful of creating a great visual experience for them. The downside, of course, is that you then have to make choices that can negatively affect the headset experience. Ultimately we had to make some compromises to accommodate these considerations, but this problem speaks to the larger issues VR creators face in distributing their work across multiple platforms that reach different audiences.
On the music side, my biggest challenge was figuring out how to effectively “build” a song so that it feels dynamic. I felt I could have done a better job of that with my first single.
Bittman: Character animation and processing power were the biggest killers. Rendering 4K stereoscopic 360 isn’t easy for a computer to handle, so it sometimes took hours just to export and test a 30-second shot. I also had to work while on business trips, so I did most of the final processing on my Razer laptop. It surprisingly did a phenomenal job, but in the future I’ll be working with multiple GPUs. Character animation was the other monster, because this was actually my first time doing character animation for a big project. I have, of course, rigged and moved characters before, but never on this scale, so I had to spend a good amount of time teaching myself as I went.
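For a sense of why those exports took hours, consider the raw pixel counts involved. The sketch below is back-of-the-envelope only; the project’s actual export settings aren’t stated, so a 4096 × 4096 top-bottom stereo equirectangular frame at 30 fps is assumed:

```python
# Back-of-the-envelope pixel counts for a stereoscopic 360 export.
# Assumed settings (the project's exact export parameters aren't stated):
# 4096 x 4096 top-bottom stereo equirectangular frames at 30 fps.

width, height = 4096, 4096   # top-bottom stereo: one eye per vertical half
fps = 30
shot_seconds = 30            # the 30-second test shots mentioned above

pixels_per_frame = width * height
total_frames = fps * shot_seconds
total_pixels = pixels_per_frame * total_frames

hd_pixels = 1920 * 1080
print(f"{pixels_per_frame:,} pixels per frame "
      f"(about {pixels_per_frame / hd_pixels:.0f}x a 1080p frame)")
print(f"{total_frames} frames, {total_pixels / 1e9:.1f} billion pixels per shot")
```

Every one of those roughly 15 billion pixels has to be shaded and encoded, which helps explain why even a short test shot can tie up a single laptop GPU for hours.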
Taryn and I also struggled to make the shots look good both in 360 and on YouTube in 2D. We knew a large portion of her viewers might never move the camera around, so we wanted to make sure the video looked good from start to finish from a single perspective. Because of this, the piece plays more like a 180° experience. However, we were able to build this limitation into the story to reward those who view it in full 360: the main character essentially jumps in and out of immersive experiences, so when we’re looking at the character connected to the machine, all the information is in front of us, but when they launch into a virtual world, as in the flying scene, things happen all around us.
Why did you decide to create this project?
Southern: The project initially began as an experiment in creating music with AI. I started composing music with artificial intelligence platforms, and then challenged myself to write a whole album that way. I submitted a pitch to YouTube and was notified that they’d chosen my project as one of 10 creator grants. As part of the grant program, I would make several immersive 360 videos for the album.
Dani was the first person I reached out to. I was dying to collaborate with her after seeing her work online; she’s crazy talented. We iterated on a few different ideas before ultimately deciding to do a song that explored the relationship between humans and their virtual worlds. Right now, our virtual lives primarily consist of email and social media updates, but it’s not a stretch to say that many humans feel dependent on, and addicted to, their virtual lives, even to the point that their real lives suffer. That’s where the concept for “Life Support” was born.
Bittman: I’d been wanting to work on a 360 project ever since I started creating VR art in 2016, and I was extremely intrigued by Taryn’s AI music concept, so it was a perfect fit. Plus, Taryn has been an avid VR user since the beginning, so I knew the creative process would be enjoyable; she intuitively understands the medium.
What do you think makes this project valuable/interesting to the world?
Bittman: At the end of the day, while Taryn focuses on music and I focus on visuals, we are both just trying to figure out how to partner with emerging technology to prototype and polish our ideas faster. While Taryn worked with Amper’s AI to create Life Support, she still used her creativity to select, tweak, and add to the ideas her computer generated. She could focus more on sculpting her sounds rather than sourcing the clay, so to speak. On my end, I was able to cut out weeks’ worth of work by working directly in VR, without having to translate everything through a 2D screen. While both of our workflows could use more polish, I believe it’s projects like ours that will eventually help new creators generate AAA art, either by themselves or with a few friends, in a short amount of time. It’s all about flattening the learning curve for creating digital art.
Southern: From a technical standpoint, I really love how this piece incorporates different technologies for a very mixed-media feel. For instance, the 2D brain images in the chorus are actual fMRI videos of my brain! We also put videos of the AI code used to compose the song on a screen toward the end.
From a philosophical standpoint, I care a lot about the theme of the song. Last fall, I took two weeks off from social media: I deleted the apps from my phone and stopped checking email more than once a day, and I noticed an almost immediate improvement in my mood and cognitive abilities. Ultimately, I’m a technology enthusiast, and I do believe technology has changed our lives for the better, but I also think there’s value in careful reflection. How we choose to engage with our tech is a worthy conversation, particularly within the VR/AR community, as what we build now will have profound impacts on the next generation.
Who was involved in the creation of this project?
Southern: On the video side, Dani really did it all. She’s a brilliant visual artist and director. It’s been an honor working with her.
On the music side, Amper (the AI) was a most excellent collaborator. Their team was instrumental in helping me streamline the writing process, and Ethan Carlson, one of my favorite producers, really helped me take the song to the next level with his production skills.
I also have to shout out Jenn Duong, my producing partner on the VR Creator Lab videos – she’s just a kick-ass partner and friend with great instincts.
And, of course, YouTube – without the grant program and their mentorship, this video wouldn’t have been possible!
Bittman: I’m sure Taryn has this question covered, but my parents helped me beta test our cuts. They’d both tried VR before, but it was great to hear their fresh perspectives on how they wanted to look around the scenes.
What are you working on next?
Bittman: Professionally, I’m working with a few studios to generate immersive worlds with VR creation tools, along with helping other musicians create VR music videos. But I’m also working on a personal project about the similarities between dream logic and VR cinematics. I’m interested in creating a VR experience that flows and conveys symbolism like a dream, but I’m also extremely intrigued by the possibility of turning the project into an MR performance piece. Nothing is locked yet, but it’s something I’m planning to complete before winter 2018. For the most part, I’ll be pushing teasers out on my Twitter throughout the year, along with a long-overdue tutorial series.
Southern: The next VR music video, directed by Jenn Duong and animated by Vladimir Ilic with post supervision by VR Playhouse, will be released in early March. Like Life Support, it integrates 2D video into an animated 360 VR world.
After that, I’m working on a documentary focused on the future of human intelligence … and I also get to release this album!
Disclaimer: VRScout worked with both artists in a joint partnership with YouTube last year.