Simulations

The combination of virtual production and virtual intelligence is the future of film.

In the piece above, I used a variety of technologies to create an AI character that generated its own dance moves. I then created virtual sets and filmed these dances in VR using my Oculus Quest.

This approach was my first foray into a fully virtual-reality, AI-driven production.

It uses Unity, variational auto-encoders, beat detection, and traditional editing to create a cohesive animated sequence.

Variational auto-encoders are used to compress and generate dancing animations in the form of depth maps.
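For flavor, here is a rough PyTorch sketch of that kind of model. The sizes, the `DepthVAE` name, and the architecture are all illustrative placeholders rather than the exact network I used:

```python
import torch
import torch.nn as nn

class DepthVAE(nn.Module):
    """Minimal VAE over flattened 64x64 grayscale depth-map frames
    (all dimensions are illustrative, not the production model)."""
    def __init__(self, frame_dim=64 * 64, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(frame_dim, 512), nn.ReLU())
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, frame_dim), nn.Sigmoid(),  # depth values in [0, 1]
        )

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * epsilon, the standard VAE trick
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar
```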

The depth map data is created using shaders and image generation techniques within Unity.
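Once Unity has rendered those frames out, packing them into training data is straightforward. A sketch, assuming the depth maps are exported as grayscale PNGs (the `depth_frames` folder and frame size are placeholders):

```python
import glob
import numpy as np
from PIL import Image

def load_depth_frames(pattern="depth_frames/*.png", size=(64, 64)):
    """Load Unity-exported depth-map PNGs (hypothetical path) into a
    (num_frames, height * width) float array in [0, 1] for VAE training."""
    frames = []
    for path in sorted(glob.glob(pattern)):
        img = Image.open(path).convert("L").resize(size)  # grayscale depth
        frames.append(np.asarray(img, dtype=np.float32) / 255.0)
    return np.stack(frames).reshape(len(frames), -1)
```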

Off-the-shelf beat detection is used to find percussive moments in the track.
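Something like librosa's beat tracker does the job (librosa is just one option here, and the filename is a placeholder):

```python
import librosa

# Off-the-shelf beat tracking: estimate the tempo and the time in
# seconds of each beat in the track.
audio, sample_rate = librosa.load("track.wav")
tempo, beat_frames = librosa.beat.beat_track(y=audio, sr=sample_rate)
beat_times = librosa.frames_to_time(beat_frames, sr=sample_rate)
print(f"Estimated tempo: {tempo} BPM, {len(beat_times)} beats found")
```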

I fed this analysis to the auto-encoder. At every beat, the code generates a random Gaussian sample in the latent space. It then linearly interpolates between consecutive samples based on the seconds between beats and the animation frame rate.
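In numpy terms, the sampling and interpolation look roughly like this (the frame rate and latent size are placeholder values):

```python
import numpy as np

def beat_latent_path(beat_times, fps=30, latent_dim=32, seed=None):
    """Sample a random Gaussian latent vector at each beat, then linearly
    interpolate frame-by-frame between consecutive beat samples."""
    rng = np.random.default_rng(seed)
    anchors = rng.standard_normal((len(beat_times), latent_dim))
    frames = []
    for i in range(len(beat_times) - 1):
        # Number of animation frames between this beat and the next
        n = max(1, round((beat_times[i + 1] - beat_times[i]) * fps))
        for t in np.linspace(0.0, 1.0, n, endpoint=False):
            frames.append((1 - t) * anchors[i] + t * anchors[i + 1])
    frames.append(anchors[-1])
    return np.stack(frames)  # (num_frames, latent_dim), fed to the decoder
```

Each row of the result is decoded back into a depth-map frame, so the motion snaps to a new pose on every beat and glides between poses in the gaps.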

This creates the illusion that the AI is dancing, but I like to think of it as a very advanced music visualizer.

Using the technique above, you can generate hundreds of AI-choreographed dances in minutes. I used Premiere to edit my favorites into one final, cohesive dance.

I rendered the depth-map animation as voxels and filmed the dance in a pre-vis environment in VR.
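The voxel rendering itself happened inside Unity, but the underlying mapping from a depth map to voxel positions is roughly this (the depth scale is a placeholder):

```python
import numpy as np

def depth_to_voxels(depth, depth_scale=10.0):
    """Turn one (H, W) depth-map frame with values in [0, 1] into voxel
    centers: pixel column/row become x/y, depth value becomes z."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    points = np.stack(
        [xs.ravel(), ys.ravel(), depth.ravel() * depth_scale], axis=1
    )
    return points[depth.ravel() > 0]  # drop empty background pixels
```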

I did this numerous times, and also recorded some automatic camera-follow animations.

All of these Unity renders were then edited into a final piece in Premiere.
