GAN-Based Virtual Production

This untitled piece was an experiment in combining generative AI and virtual production with a classic animation aesthetic inspired by Disney's multiplane camera.

Assets are generated by several trained StyleGAN models, using a custom-built StyleGAN implementation designed to integrate with Unity.

During scene creation, the artist can generate new 2D assets on the fly or create morphing animations, resulting in a whimsical watercolor animation.

The animation and datasets were all created in Unity, with a watercolor shader used as the final post-processing effect.

Unity is used to generate a dataset large enough to train a StyleGAN, roughly 10,000 images.
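
As a rough illustration, here is a minimal Python sketch of turning Unity-rendered frames into a square, fixed-resolution training set. The folder names and the 512-pixel resolution are assumptions, not the project's actual settings.

```python
# Minimal sketch (assumed paths and resolution): collect frames rendered from
# Unity and center-crop/resize them into images suitable for StyleGAN training.
from pathlib import Path
from PIL import Image

SRC = Path("unity_captures")    # hypothetical folder of Unity screenshots
DST = Path("stylegan_dataset")  # hypothetical output folder
RES = 512                       # assumed training resolution

DST.mkdir(exist_ok=True)
for i, frame in enumerate(sorted(SRC.glob("*.png"))):
    img = Image.open(frame).convert("RGB")
    # Center-crop to a square, then resize to the target resolution.
    side = min(img.size)
    left = (img.width - side) // 2
    top = (img.height - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img = img.resize((RES, RES), Image.LANCZOS)
    img.save(DST / f"{i:06d}.png")
```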

The trained StyleGAN is then capable of generating novel assets and animations.
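
Morphing animations of this kind are typically made by interpolating between latent codes. The following is a minimal sketch, assuming a hypothetical pickled generator with a 512-dimensional latent space rather than the project's actual model.

```python
# Minimal sketch (the generator, its checkpoint path, and its 512-dim latent
# space are assumptions): morph between two assets by linearly interpolating
# their latent vectors and generating one frame per step.
import torch

G = torch.load("stylegan_watercolor.pt")  # hypothetical pickled generator
G.eval()

z_a = torch.randn(1, 512)  # latent code for the first asset
z_b = torch.randn(1, 512)  # latent code for the second asset

frames = []
with torch.no_grad():
    for t in torch.linspace(0.0, 1.0, steps=30):  # 30 in-between frames
        z = (1.0 - t) * z_a + t * z_b             # linear blend in latent space
        frames.append(G(z))                       # one generated image per step
```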

A bespoke Unity interface is used to compose shots, apply post effects, and interact with the trained StyleGAN models via a web API.
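
A hedged sketch of what the Python side of such a web API might look like, using Flask; the endpoint name, port, and generator loading are assumptions, not the project's actual interface.

```python
# Minimal sketch (endpoint, port, and generator are assumptions): a small web
# API that a Unity client could call to request a freshly generated asset by seed.
import io

import torch
from flask import Flask, send_file
from PIL import Image

app = Flask(__name__)
G = torch.load("stylegan_watercolor.pt")  # hypothetical pickled generator
G.eval()

@app.route("/generate/<int:seed>")
def generate(seed: int):
    torch.manual_seed(seed)
    z = torch.randn(1, 512)  # latent code for this request
    with torch.no_grad():
        img = G(z)           # hypothetical generator call, NCHW output in [-1, 1]
    # Convert the tensor to an 8-bit PNG and stream it back to the Unity client.
    arr = ((img[0].permute(1, 2, 0).clamp(-1, 1) + 1) * 127.5).byte().cpu().numpy()
    buf = io.BytesIO()
    Image.fromarray(arr).save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")

if __name__ == "__main__":
    app.run(port=8080)
```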
