Ancestor Saga

Ancestor Saga is my latest attempt to build an AI-driven virtual production pipeline. By combining the latest innovations in CLIP-guided diffusion models with bespoke img2img translation models, we were able to create this 60-second animation with a small team in a short period of time.

We tested various img2img models, and MUNIT ended up working the best.

To the left we see the input video side by side with the AI-translated animation. Here we see stock photography being translated, but we can also work with a fully virtual production process, as shown below.

Using data augmentation, we were able to create a usable dataset with minimal work from our illustrators: roughly 5-10 illustrations per shot, versus the hundreds a traditional approach would have required.
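The augmentation idea can be sketched as follows. This is a minimal illustration assuming simple geometric and photometric transforms (flips, crops, brightness jitter); the project's actual augmentation recipe is not specified here, and the `augment` function and its parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, n_variants=8, rng=rng):
    """Expand one illustration into several training variants via
    horizontal flips, random crops, and brightness jitter.
    (Hypothetical helper, for illustration only.)"""
    h, w, _ = image.shape
    variants = []
    for _ in range(n_variants):
        img = image
        # random horizontal flip
        if rng.random() < 0.5:
            img = img[:, ::-1]
        # random crop to 90% of the original size
        ch, cw = int(h * 0.9), int(w * 0.9)
        y = rng.integers(0, h - ch + 1)
        x = rng.integers(0, w - cw + 1)
        img = img[y:y + ch, x:x + cw]
        # brightness jitter in the range [-20%, +20%]
        img = np.clip(img * rng.uniform(0.8, 1.2), 0, 255).astype(np.uint8)
        variants.append(img)
    return variants

# a stand-in 256x256 RGB "illustration"
illustration = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)

# 8 source illustrations x 8 variants each -> 64 training samples
dataset = [v for _ in range(8) for v in augment(illustration)]
print(len(dataset))  # 64
```

Even a handful of hand-drawn frames per shot can be multiplied into a training set large enough for an img2img model this way, which is how a small illustration budget can stand in for hundreds of drawings.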

Below you can see some of the backgrounds generated by Stable Diffusion.

Many of the input shots were filmed in Unity with purchased 3D models and animations, giving us full control of the camera, characters, and action.

The AI-generated assets are composited in a traditional manner in After Effects.
