Python For Feature Film

Python is a programming language that has become integral to the movie-making process over the last few years. There’s rarely an animated feature or visual effects film, if any, that hasn’t had Python play a large part in getting it to the screen.

When people think about movies, even programmers tend to think of the artistry involved in bringing those images to life. The technical side of the film industry, however, often goes unnoticed outside a small group of people.

To that end, I’ve written a few blog posts about how I’ve used Python on several major films that I’ve been lucky enough to work on. Hopefully this shows how much it contributes to the entire life of a movie.

I’ve also recently released a course on Udemy to teach artists Python for Maya, since it’s becoming an increasingly valuable skill in the industry. These blog posts serve as companion material to the course as well.

With that intro out of the way let’s continue…

What is Python?

Some of you may not be familiar with Python.

Python is a programming language designed to be very easy to read and write. It’s incredibly popular in the feature film industry, as well as in other fields like mathematics, science and machine learning.

You can learn more about Python on the official website.

Additionally, it’s important to note that the film industry uses Python 2.7, not Python 3. A significant amount of code has been built on Python 2, and switching isn’t easy yet, despite Python 2 no longer being actively developed and Python 3 having many useful features.

The Feature Film Pipeline

The biggest use of Python is in our feature film pipeline.

This is an image that describes the pipeline at most major studios.
The Pipeline is the arrows that link each department together. It’s responsible for making sure that data flows between departments and that everyone can play well together. It’s also responsible for the toolsets in each department, so the artists themselves can work efficiently, but for now let’s focus on the inter-department flow.

A more visual demonstration of this graph is in this video of the making of Ratatouille by Pixar.

Here you can see the various stages of a visual effects film or animated feature. Studios may differ slightly from this, but it’s the general workflow.

The storyboards/footage/previs represent the data we get, and Compositing/Lighting are the last stages of the film for us.
Visual effects films differ slightly from animated films because you have to deal with the added element of film footage in the form of plates.

The Pipeline is responsible for getting the data between departments. Here’s the gist of how it works (though in practice the process is more organic than described):

  • We get data from the client or story artists in the form of plates, previsualization (previs) or storyboards that tell us what is happening in the scene.
  • Modeling look at all of this and generate 3D models of all the assets that will be required.
  • Rigging take the modelled assets for characters and apply a virtual skeleton to make them animatable.
  • Matchmove are in charge of creating virtual cameras that match the ones used to shoot the film, as well as any stand-in characters or geometry.
  • Layout take the rigs, and either create their own cameras or take Matchmove’s cameras, and set up the scene. They’re the equivalent of a virtual director of photography.
  • The scene is then handed off to Animation, who are the equivalent of the actors. They are in charge of the movement of the characters, and bring the inanimate skeletons to life.
  • CharacterFX are in charge of all the technical parts of animation. Muscle simulations, making cloth move realistically, hair, grass etc… all come from CharacterFX.
  • FX then handle the non-animation-related effects. Whether it’s destruction, fire, voxelization, etc… there’s a lot that FX is in charge of.
  • While this is happening, Texturing are in charge of giving color to the 3D Assets so they aren’t just grey objects.
  • Shading then takes these textures and gives the assets materials that tell the renderer how light should interact with them.
  • Matte Painting are the department we use when it is not logical or feasible to build an environment. We realistically can only build so much, and past that point it’s more efficient to have someone make a very high quality painting instead.
  • This all gets funnelled into Lighting who are in charge of adding lights to the shot and rendering the image out. They also do a little bit of compositing to sweeten the image. On an animated feature this may be the end of the show.
  • On a visual effects show, we have to prepare the plates by removing unwanted elements, removing noise or removing the lens warp. This is handled by Plate Prep, also known as RotoPaint.
  • Finally everything goes to Compositing who take all the images and integrate our CG renders into the actual film plate. On a visual effects show, this will be the last stage.

We use Python to tie all these bits together.
In the next section I’ll go over publishing before moving onto describing how Python is used for the individual departments.

Case Studies of Feature Films

Here are a few posts I’ve done about movies that I’ve worked on, and the role Python played for those films.

Publishing and Asset Management

This is pretty much the core of a traditional pipeline: making sure assets can be shared between departments and tracked.

The first part of this is Asset Publishing. 

When a department deems its work ready, it decides to publish it so the next department in the chain can consume it. For example, modeling exports models, but does animation export animation? That depends on who is consuming it, and on whether the data needs to stay interactive past this point or not.

For geometry, we often just publish a geometry cache using Alembic, an industry standard developed by Imageworks and ILM so that we can have a consistent cache format.
For point cloud data we use either Alembic or OpenVDB, and for images we tend to use TIFFs or OpenEXR.
Soon the industry will probably also standardize on a universal scene format from Pixar called OpenUSD.

Anyway, the idea is generally to keep the data in the most efficient format possible, while allowing for easy interchange. Cache data is often best because you really only take an IO hit, which is cheap, versus a deformation hit, which can be very expensive.

This all gets really complex though. Artists don’t need to know where their data is coming from or going to, or even how it gets there. They just need to be able to load it in and publish it out.

This is the pipeline. We develop the UIs and the tools to do this in a very user friendly manner.

To publish data, the user just has to open a publish UI that will validate their assets against some tests, and then send it off to the machine farms where the conversion to one of the open formats happens.
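As a very rough illustration, here’s the shape such a publish flow takes in Python. The validation functions and the farm hand-off below are illustrative stand-ins, not a real studio API:

```python
# A highly simplified sketch of a publish flow: run validations, then hand
# the asset off for conversion on the farm. All names are illustrative.
def naming_is_valid(asset):
    # e.g. enforce a studio naming convention.
    return asset['name'].islower()

def has_geometry(asset):
    return bool(asset.get('geometry'))

VALIDATIONS = [naming_is_valid, has_geometry]

def publish(asset):
    errors = [check.__name__ for check in VALIDATIONS if not check(asset)]
    if errors:
        raise RuntimeError('Cannot publish, failed: %s' % ', '.join(errors))
    # Hand off to the farm, where e.g. the Alembic conversion would happen.
    print('Submitting %s to the farm for conversion' % asset['name'])

publish({'name': 'teapot_a', 'geometry': ['teapotShape']})
```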

To ingest the published data, we similarly just have an asset browser that lets the artist pick which asset they want. Often they just see a thumbnail and description. The details are irrelevant to most artists.

Because these publishing and asset management systems need to be common to multiple apps, we develop them in Python and Qt (PyQt or PySide), which allows us to reuse our code without recompiling for each application and makes it easy to rapidly add functionality where needed.

This is common to pretty much every department, so I figure it warrants its own section here rather than repeating for each.

This is Pyblish, an open source publishing system similar to the ones many studios have developed internally.

In this wireframe version of Big Buck Bunny, from the Blender short, you can see how models are made up of polygons (each of the squares is a single polygon).

Modeling

Modeling is the department in charge of creating the 3D source geometry used by every other department.
Often there are quite a few repetitive tasks involved when it comes to placing or editing geometry, or even just managing the scene.

This is where Python comes in handy. Most 3D packages have a Python API that lets you program everything that you would otherwise do manually.

So instead of spending 10 minutes creating a simple asset, you can script it up and then just click a button the next time you need it. That 10 minutes saved really adds up; over the course of a project you may be saving hundreds of hours that could be better spent on building more complex assets that require artistic input.

For example, in my course (Python For Maya) I go over creating a simple gear using Python, as well as creating a simple UI so that you can specify how many teeth the gear has and how long they will be.
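To give a flavor of what scripted modeling looks like, here’s a minimal sketch along those lines. It assumes it runs inside Maya, where the maya.cmds module is available, and the face indexing follows polyPipe’s layout:

```python
# Build a simple gear: create a pipe, select every other outer face,
# then extrude those faces outwards to form the teeth.
from maya import cmds

def create_gear(teeth=10, length=0.3):
    spans = teeth * 2
    transform, constructor = cmds.polyPipe(subdivisionsAxis=spans)
    # The outer side faces of a polyPipe start at index spans * 2;
    # taking every other one gives us the teeth.
    cmds.select(clear=True)
    for face in range(spans * 2, spans * 3, 2):
        cmds.select('%s.f[%d]' % (transform, face), add=True)
    extrude = cmds.polyExtrudeFacet(localTranslateZ=length)[0]
    return transform, constructor, extrude
```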

This can get more complex though, with Python being used to create custom deformers or interactive tools, as this demo from Hans Godard demonstrates.

Rigging

Rigging’s job is to create a skeleton for the character geometry so that it can deform, just like a real human would.

Of course that’s an oversimplification.
Rigging also essentially creates the interface that animators use to bring these creatures to life, and riggers have to make sure that any movement happens convincingly.
If an animator moves the arm, the rigger must make sure that the shoulders deform properly.

Python plays an integral role in Rigging. Here are just some of the uses:

  • Creating automated builds of rigs. Rather than doing everything manually, rigs can be composed using code, which makes them easy to reuse (a small taste of this follows the list).
  • Developing custom deformers or nodes to perform operations not native to the application.
  • Developing supporting scripts for animators to switch between modes or controls, etc.
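As a small taste of code-driven rig building, here’s a hedged sketch that creates a single FK control for a joint, assuming maya.cmds. Production build systems compose many steps like this one:

```python
# Create a NURBS circle controller, zero it out over the joint using a
# temporary constraint, then drive the joint's orientation with it.
from maya import cmds

def create_fk_control(joint, radius=2.0):
    control = cmds.circle(name='%s_ctl' % joint,
                          normal=(1, 0, 0), radius=radius)[0]
    group = cmds.group(control, name='%s_grp' % control)
    # Snap the group to the joint so the control itself keeps clean,
    # zeroed-out transform values.
    cmds.delete(cmds.parentConstraint(joint, group))
    cmds.orientConstraint(control, joint)
    return control
```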

In my course (Python For Maya) I go over creating a controller library.
Controllers are, as the name suggests, the objects animators use to control the rig. Instead of manipulating the geometry directly, they use controls that move the skeleton, which in turn deforms the geometry.

I go over developing a user interface with Qt to save out and import the controllers for easy reuse.

If you’re interested in learning Rigging, I also recommend checking out Animator Friendly Rigging which is a resource that many riggers in the industry have learned from.

Kraken is a rigging framework built on top of Fabric, but it uses Python for its frontend UI.

Another good resource for rigging is Stop Staring, a book about facial rigging and modelling.

A video demonstrating how animators interact with a rig, as well as the kind of tools animators use.

Animation

Animation is the department most people would be familiar with. They take the static rigs and give them motion, and make them emote. They’re essentially the virtual actors.

But as you can imagine, there are a lot of repetitive actions that animators have to do that can also be scripted away using Python, or at least made easier.

Examples include:

  • Picking controllers. A complex rig may have hundreds of controls that clutter the view. A picker interface lets animators have a friendly UI to select the controls while hiding them in their actual view.
  • Creating keys, specifically inbetweens. In the age of 2D, animators would define inbetween keys (the keys that give motion between poses) as a weighting towards a pose. In 3D, we can have tools that let the animator say they want to weight the new key 30% of the way towards the old key.
  • Setting up constraints. Characters will pick up objects, and at that point animators must constrain the object and their character together so they move as one. This can be made easier with tools that handle the management of this common task.

We develop most of our tools for this kind of thing using Python, and most of our user interfaces using Qt, via either PyQt or PySide.

In my course (Python For Maya) I go over creating an animation tweener UI, where an animator can drag a slider to choose the weighting of their keys between two poses.
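Here’s a minimal sketch of the core tween logic, assuming maya.cmds; the slider UI around it is omitted:

```python
# Set a new key on each keyable attribute, weighted between the previous
# and next keys around the current time.
from maya import cmds

def tween(weight, obj=None, attrs=None):
    obj = obj or cmds.ls(selection=True)[0]
    attrs = attrs or cmds.listAttr(obj, keyable=True) or []
    current_time = cmds.currentTime(query=True)
    for attr in attrs:
        plug = '%s.%s' % (obj, attr)
        keyframes = cmds.keyframe(plug, query=True) or []
        previous = [k for k in keyframes if k < current_time]
        later = [k for k in keyframes if k > current_time]
        if not previous or not later:
            continue
        prev_val = cmds.getAttr(plug, time=max(previous))
        next_val = cmds.getAttr(plug, time=min(later))
        # Weight the new value between the surrounding keys.
        value = prev_val + (next_val - prev_val) * weight
        cmds.setKeyframe(plug, time=current_time, value=value)
```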

Character FX

Now that Animation is done animating the character, we still have to figure out the muscles, hair and cloth: more technical considerations, but ones that add a lot of depth to the scene.

From flowing dresses in Tangled to crazy hair in Hotel Transylvania 2 or muscle simulations in Suicide Squad, these are all handled by the Character FX department.

Like the other departments, they too have a lot of uses for Python.

For example:

  • Setting up their simulations
  • Combining multiple takes of simulations together
  • Creating brand new procedural workflows.

Often overlooked, Character FX is a very important step in getting really lifelike images on screen.

A behind the scenes look at how the clothing on Hotel Transylvania 2 was simulated

This video was created using Houdini to generate procedural FX from input motion capture data.

FX

A completely different department from Character FX, the FX department is in charge of more procedural effects.

Destroying buildings, explosions, magical particles, even entire oceans: all of this is driven by the Effects department.

They too use Python for many things, including setting up their procedural effects graphs and scripting parameters that would otherwise be time-consuming to set by hand.

Python can even be used to create entire procedural nodes that generate interesting effects. It’s very powerful.

Lighting

Now that all this 3D geometry has been created, we need to convert it to images that can be displayed on screen, but if we rendered it as-is, the image would be black.

Lighting are in charge of adding the lights to the scene and making it look interesting as cinema. They are the ones who can set the mood of the shot, making it dark and menacing or happy and vibrant, even if nothing is happening.

Even here, Python can be incredibly useful, because a scene may have many, many lights and many, many 3D assets. We can make UIs that help keep these scenes manageable.

What can often be a scene with billions, or even trillions, of objects can be distilled down to simple user interfaces.

In my course (Python For Maya) I go over the creation of a lighting manager using PyMel and Qt to generate a UI that lets us control all the lights in the scene. It also goes over importing and exporting them as JSON files, so that they can be shared among shots and artists.
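To sketch the export half of such a manager, assuming maya.cmds, something like this dumps a few light attributes to JSON (a real tool would save far more than these illustrative attributes):

```python
import json
from maya import cmds

def export_lights(path):
    lights = {}
    for shape in cmds.ls(type=['pointLight', 'directionalLight', 'spotLight']):
        transform = cmds.listRelatives(shape, parent=True)[0]
        lights[transform] = {
            'type': cmds.objectType(shape),
            'intensity': cmds.getAttr('%s.intensity' % shape),
            'color': cmds.getAttr('%s.color' % shape)[0],
            'translate': cmds.getAttr('%s.translate' % transform)[0],
        }
    # Write the scene's lights out so other shots and artists can import them.
    with open(path, 'w') as f:
        json.dump(lights, f, indent=4)
```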

Once the lighting is done, it is time to convert the scenes to images. This is done using a renderer like Arnold, RenderMan or V-Ray, among many others.

RenderMan is the render engine that Pixar developed (in fact, it predates Pixar as a movie company) and is used by many studios.

You can also download it for non-commercial use.

An example of how the results all come together

Compositing

Finally, there is Compositing.

Here we take the film footage and any rendered elements and combine them. It’s more than just layering images, because you need to make sure everything integrates well and feels like one cohesive image.

People often think of visual effects as over the top explosions and creatures, but it can also be subtle things like background replacements or adding crowds.

I bet most people didn’t realize how much work went into even something as seemingly simple as Forrest Gump.

Here too, Python is incredibly useful.

It can be used to automate setting up the compositing node graph in an application like Nuke, but it can also be used to do entire image processing operations itself, using libraries like OpenColorIO or Pillow.
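For instance, here’s a hedged sketch of scripting Nuke’s node graph. It assumes it runs inside Nuke, where the built-in nuke module is available, and the file paths are placeholders:

```python
import nuke

# Read the plate and a CG render element.
plate = nuke.nodes.Read(file='/path/to/plate.####.exr')
cg = nuke.nodes.Read(file='/path/to/cg_render.####.exr')

# Merge the CG over the plate.
merge = nuke.nodes.Merge2(operation='over')
merge.setInput(0, plate)  # B input: the background plate
merge.setInput(1, cg)     # A input: the CG element

# Write out the comp.
write = nuke.nodes.Write(file='/path/to/comp.####.exr')
write.setInput(0, merge)
```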

We can even use Python for Computer Vision to help track objects inside the scene.
The possibilities are really quite exciting.

Final Words

I hope this blog post has been a good look into how programming can be such a vital part of bringing movies to life nowadays, and more specifically, how useful Python itself is.

It lets us quickly simplify very complex relationships between all these moving pieces, and it helps each department work more efficiently.

It’s often overlooked since it’s not an artistic part of the process and you rarely see the direct results of it on screen, but nevertheless, it’s a skill that is increasingly valuable in getting these increasingly complex movies to the big screen.

Learn Python

If you're interested in learning more about Python, why not sign up for my course here?

This course takes you from learning the fundamentals of Python, to creating tools with advanced user interfaces, all while creating projects you can use immediately.
It will teach you many of the skills necessary to create tools like the ones in this article.

With over 700 students in its first week, including artists from major studios, it is sure to provide valuable knowledge to everyone.



Python For Maya: Artist Friendly Programming - $70 (retail $90)

 

If you sign up from this link, it will save you $20 off the retail price using the coupon code: DGOVILCOM

Working on Suicide Squad

In Part 5 of this blog series, I’ll be going over the work we did for Suicide Squad. One of 2016’s most hyped films, it landed with a critically derided meh.
I won’t go into the behind-the-scenes of why the movie went from A+ to D- in a heartbeat; there are plenty of articles out there that talk about what happened. Instead, I’ll talk about the challenges we went through to bring some amazing visuals to the screen.

For the record, I honestly don’t think it was as bad as it was made out to be. It was definitely not great, but it was fun enough and I’ve seen worse films get better reviews. I’ve totally seen worse superhero films, and it doesn’t excuse it, but I feel the hate was a direct response to the immense hype, and not totally objective. But then again, maybe I’m not objective myself having worked on it.

Also my favorite review that I’ve read states: “If 2016 could be summed up in a movie, it would be Suicide Squad”. Ouch!

Challenges and Craziness

So for this movie, we were responsible for two primary characters.

Incubus (the big orange dude) and Enchantress (the lady in the green dress with the eyebrows).

 

We used the tentacle technology developed on Edge of Tomorrow to handle Incubus’s tentacles of destruction

Incubus

Incubus was of course fully CG. You can see him in the trailer as the guy who destroys the subway train.

This was lost in translation in the final movie, but he actually absorbs everything he destroys. There’s a whole mini universe inside him.
If you were to pause on a frame of him and strip away his armor, there are floating heads, eyeballs, guns, and even an entire tank inside of him.

Unfortunately with all the other effects, and the armor, it totally gets lost.

He also fires tentacles outwards when destroying things. We made good use of the tentacle technology that was developed for Edge of Tomorrow to create them.

Enchantress

Played by Cara Delevingne, Enchantress is a semi-CG character. When she’s in her Jade outfit, basically the only part of her that is real is her face, and even then, we replaced her face in a few shots.

The rest of her body is all computer generated, a mixture of some great tracking, animation, simulation and shading.
It may not look as realistic in the final film, with all her glowing tattoos and other effects, but if you were to see the CG model without all of that, there are several shots where all we could do to distinguish the two was to look for her eyebrows (our model didn’t have eyebrows for a while).

We made use of some new skin shading technology, new muscle simulation technology and a lot of talented artist time to recreate Ms. Delevingne in CG.

Enchantress in all her CG glory.
Forgive the low-res image; I had to find it via Google Images and there’s not much available yet.

Python For Suicide Squad

We made a lot of use of Python for the movie to make our lives easier and create new pipelines.

Our Muscle Simulation Pipeline used new technology invented by Ziva Dynamics

Muscle Simulation Pipeline

To get Enchantress to look as realistic as possible, we had to simulate the muscles under her skin.

Around the same time we were doing this, Ziva Dynamics were running closed betas of their new muscle simulation technology.
These are the same folks who did the amazing muscle work at Weta, and they now sell their systems for both feature film and interactive video games. (Seriously, their realtime VR demo is mindblowing.)

Our character artist who was doing the sims needed to work in stages.

  1. Take the animation and prepare it
  2. Simulate the Bones (some bones are flexible) and write out their geometry cache.
  3. Simulate the muscles on top of the bones, and cache them out.
  4. Simulate a fascia on top of the muscles
  5. Simulate the fat and the skin sliding on top of this fascia.
  6. Simulate the tight clothing sliding on the skin.

While we used Ziva for the actual simulation, we still needed a new pipeline to procedurally handle all these stages.

So I built a framework where the artist could provide a list of stages and their dependencies, as well as a construction class that would set up the sims.
Then my tool would figure out the dependency graph, generate all the data needed at each stage to feed the next and finally send it on through to the lighting department.

The framework was completely built using Python and in fact does not rely on Ziva at all, but does support it where needed.
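Here’s a highly simplified sketch of that kind of stage runner; the stage names and no-op functions are illustrative, not the production code:

```python
# Run simulation stages in dependency order using a simple topological sort.
class Stage(object):
    def __init__(self, name, func, dependencies=None):
        self.name = name
        self.func = func
        self.dependencies = dependencies or []

def run_stages(stages):
    completed = set()
    remaining = list(stages)
    while remaining:
        runnable = [s for s in remaining
                    if all(d in completed for d in s.dependencies)]
        if not runnable:
            raise RuntimeError('Cyclic or missing dependency in stages')
        for stage in runnable:
            stage.func()  # e.g. simulate this layer and write out its cache
            completed.add(stage.name)
            remaining.remove(stage)

# Wiring that mirrors the list of stages above.
run_stages([
    Stage('bones', lambda: None),
    Stage('muscles', lambda: None, dependencies=['bones']),
    Stage('fascia', lambda: None, dependencies=['muscles']),
    Stage('fat_skin', lambda: None, dependencies=['fascia']),
    Stage('cloth', lambda: None, dependencies=['fat_skin']),
])
```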

This became especially useful when having to run through multiple shots at once, and it meant that setups could be reused between characters with little work.

Ingesting Motion Capture Data

For this show, we had a lot of motion capture data that needed to be ingested.
But we had some big problems that made it slow to do so manually.

  • Our motion capture vendor had their own naming conventions for scene files.
  • The motion capture rigs weren’t directly compatible with our production rigs and required some manual work.
  • We needed to then playblast these and add them to our animation library.

Doing this manually would have taken roughly 20 minutes per captured clip, if there were no issues. We had a couple hundred clips. That would be a full week’s work to do just one iteration over all of them.

This is where Python was super useful. I figured out the steps and scripted it all up. It could be done in a couple of hours instead.

Given a list of files, it would do the following (sketched in code after the list):

  • Convert it to our naming conventions
  • Create a tracking camera to playblast it
  • Transfer the animation to our production rigs
  • Add these to our animation library
  • Send out an email
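Sketched in Python, the batch loop looked roughly like this. Every helper here is a stubbed stand-in for studio-specific logic; only the overall shape of the loop is real:

```python
def convert_naming_convention(path):
    return path.replace('vendor', 'studio')  # placeholder rename

def create_tracking_camera(path):
    return 'trackCam1'  # placeholder: would build a camera in the scene

def transfer_animation(path):
    pass  # placeholder: retarget the mocap onto the production rig

def playblast_and_add_to_library(path, camera):
    pass  # placeholder: capture a preview and register the clip

def send_email(processed):
    print('Ingested %d clips' % len(processed))

def ingest_mocap(files):
    processed = []
    for path in files:
        new_path = convert_naming_convention(path)
        camera = create_tracking_camera(new_path)
        transfer_animation(new_path)
        playblast_and_add_to_library(new_path, camera)
        processed.append(new_path)
    send_email(processed)
```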

That meant I could start the process, go to my meetings, come back and it would be done.

That’s a lot of time saved.

Motion Capture lets us realistically capture animation from real world actors. This is often then used as a base to animate on top of, or even just as a reference.


Buy The Movie

Okay, I know I basically just talked about how bad the movie was, and I won’t lie, it’s not great, but maybe you’ll like it in the same campy way people like the original Evil Dead? It’s actually pretty fun as a movie and you might have a good time.


 

Buy the BluRay (Affiliate Link)

Working on Hotel Transylvania 2

In Part 4 of my blog series, I’ll be going over what it was like to work on Hotel Transylvania 2 as well as how we used Python during its production.

This was the first project I’ve done where I was on it from the very beginning, and it was a really great experience being able to help shape the foundations of the movie itself.

Additionally, I basically grew up watching Dexter’s Laboratory and Samurai Jack, so being able to work with Genndy Tartakovsky was something I was so psyched to do.

In the end it turned out to be a very difficult film to work on, with many changes to the story, a lot of crowds, large environments and a team that was fairly new to the studio.

Oh, and it’s also the show where I met my wife. She’d just started in the industry as a cloth/hair artist and, well, I guess it’s a happy story from then on!

A behind-the-scenes look at working on Hotel T2

Challenges of The Project

Hotel Transylvania 2 was probably one of our most challenging animated features at the time.
This section goes over some of the challenges we had:

 

Animation Style

If you’re at all familiar with Genndy, you know he loves to do really over the top animation, where each frame is sculpted the way a 2D animator would work.
This meant we were often pushing the character rigs way past their intended limits, and animators would sculpt their characters on a per-frame basis to get the graphic style necessary.

 

Characters and Crowds

The movie had north of 200 hero characters, and a further 250+ variation characters. With close to 500 characters, versus a typical animated feature that had under a hundred, we were breaking new ground.
Additionally, it wasn’t unheard of to have single shots with close to a hundred characters in them at a given time.

This made for a huge amount of work managing so many assets, but also heavy use of our crowd system, which helped animators push out shots even when there was so much going on.
Other movies since have continued to push past these numbers, with the upcoming Emoji movie trying to take our crown, while Storks had shots with close to 800 characters in a scene at once.

 

Cloth and Hair

We spent a lot of R&D effort upgrading our cloth and hair systems for this show.

We had Dennis, the child, who had a lot of red, curly hair.
We had characters transforming shapes while wearing multiple layers of clothing.
We had shots where animators would sculpt their characters into shapes that could never exist.

There had to be a way to get physically simulated cloth and hair, while still maintaining the graphic style that Genndy loves and is known for.

Large Environments

Environments were both incredibly large and very dense.
To tackle this, we had to make heavy use of instancing to keep our memory usage low.

We were also often traversing outside the region of floating point precision, where graphics calculations and simulations start breaking down. This required developing a new way to efficiently handle scenes where we’d travel outside this region.

Python On Hotel Transylvania 2

To get through this show, we had to work as efficiently as possible. Here’s a sampling of the Python tools I developed on this show.

Working at Origin

There were several shots that involved our characters moving great distances.

The problem with this is that eventually you move outside the limits of what computers can accurately represent and you start getting floating point precision errors.

This manifests itself as little bugs in simulations and other calculations, or even crashes of the application.
Traditionally, movies work around this by centering the scene at the world origin at the beginning of each shot. Unfortunately for us, we had single shots, and many of them, where we’d travel too far for this to work.

To work around this, I came up with a system that would take the scene’s major character, find their world position, and offset the entire scene to the center of the world.
This was implemented using Python to build a node network inside Maya, and today it is implemented as a single custom C++ node.

This gives us the illusion that from the camera’s point of view, we’re still in the right spot, but really we’re working at the center of the world.
When the animators move their character forward, it actually just moves the whole scene backwards.

Then these values are recorded and can be used to move them back to their actual world positions at render time.
This lets all our departments work in a very comfortable range.

Animators don’t have to keep moving their cameras.
Cloth, Hair and FX can all work at the world center.
Everyone is happy.
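Here’s a toy illustration of the recentering idea, assuming maya.cmds. The node names are hypothetical, and the real system built a node network rather than a one-shot script like this:

```python
from maya import cmds

def recenter_scene(hero='hero_ctrl', scene_root='scene_grp'):
    # Where the hero actually is in world space.
    x, y, z = cmds.xform(hero, query=True, worldSpace=True, translation=True)
    # Move the whole scene so the hero sits at the origin.
    cmds.setAttr('%s.translate' % scene_root, -x, -y, -z, type='double3')
    # Record the offset so render time can restore true world positions.
    for axis, value in zip('XYZ', (x, y, z)):
        attr = 'worldOffset%s' % axis
        if not cmds.attributeQuery(attr, node=scene_root, exists=True):
            cmds.addAttr(scene_root, longName=attr, attributeType='double')
        cmds.setAttr('%s.%s' % (scene_root, attr), value)
```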

This was the sequence we developed this tool for.
The car is moving pretty fast and rapidly goes outside of the comfort zone on some of the longer shots…

Reusing animation is a tale as old as time…
because it’s smart! Movies take a lot of time, money and effort.
It’s wise to save yourself time that you can spend elsewhere.

Rebuilding Animation

So we’ve got a lot of shots, a lot of characters, basically a lot of work. So let’s be smart and reuse as much as we can from the previous Hotel Transylvania film.

Should be easy, right? Wrong.
In the meantime, we’d changed our toolsets so significantly that our old geometry data was no longer that useful.
We needed to recreate all these old scenes so that we could use them. Close to 400 in total.

So they asked me to rebuild it all. This is where Python comes in handy, because I definitely was not going to do it manually.
I had the names of the characters, and I could use those to look up which characters they map to on the new show.
I could also find their old cached animation curves.

So my tool would take a text file as input that told it which characters were in the scene and where.
It would then find all the relevant data and rebuild the scene using our new, shiny assets, and within a day we had 400 scenes recreated as if they were done from scratch again.

If I did this manually, it would have taken a few months.
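Schematically, the tool looked something like this. The manifest format and the helpers are simplified stand-ins for studio-specific code:

```python
def rebuild_scene(manifest_path, asset_map, curve_store):
    """Each manifest line looks like: '<old_character_name> <x> <y> <z>'."""
    with open(manifest_path) as f:
        for line in f:
            name, x, y, z = line.split()
            new_asset = asset_map[name]            # old name -> new show asset
            node = reference_asset(new_asset)      # bring the new rig in
            place(node, float(x), float(y), float(z))
            apply_curves(node, curve_store[name])  # reattach cached animation

def reference_asset(asset):
    pass  # placeholder: e.g. reference the asset file into the scene

def place(node, x, y, z):
    pass  # placeholder: e.g. set the node's world translation

def apply_curves(node, curves):
    pass  # placeholder: connect the old animation curves to the new rig
```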


Buy The Movie or Art Book

The movie is a good one for the kids, but the art book has some really gorgeous work in there from Genndy himself. If you’re interested, click on either of these Amazon Affiliate Links:

  • Buy the Blu-Ray
  • Buy the Art Book

Working on The Amazing Spider-Man 2

In Part 3 of this blog series, I’ll be covering what it was like to work on The Amazing Spider-Man 2.

This is the 5th Spider-Man feature film from Sony, and is part of the reboot/alternate universe starring Andrew Garfield as Spidey.

This project was a last minute one for me. Imageworks was light on work post-Cloudy 2 and I was about to be let go, but a last minute shift in plans on another show meant I got to stay, and fortunately I’ve never been in that position since.

If you’re interested in Layout, or cinematography, check out these two great books! (Affiliate Links)

  • Setting The Scene: primarily about 2D animation, but a fantastic resource on traditional layout.
  • Framed Ink: one of my favorite books about composition.

Layout and Pipeline: Doing Double Duty

For a lot of our Visual Effects features, we tend to combine the Layout and Pipeline departments.
This is because layout on these shows can become quite technical and it’s to our advantage to have them combined.

Other studios do similar things, for example some studios combine their matchmove and layout departments.

Fortunately, I had been working in Layout at Rhythm and Hues, and these skills came in very handy for this.

I ended up doing a lot of layout on this show, and in fact the opening shots in that trailer are all mine. It really helped having both the artistic and technical grounding, because it let me work more efficiently than I normally could have.

What is Layout?

Layout may be a term unfamiliar to some of the people reading this.

We are essentially the equivalent of a virtual cinematographer. Our job involves:

  • Handling all camera motion
  • Staging the scene by placing the environment pieces and characters
  • Prelighting (depends on the studio)
  • Continuity of sequences

We basically take the storyboards, or (increasingly common) the previs, and recreate it with our actual set and actual characters.

For example, in the trailer above, the very first shot of him falling into the city is mine.
I animated the camera, placed the buildings in their respective positions and set up the pacing of the shot by roughly animating Spider-Man.

This is then handed off to an animator to actually animate it properly.

Tools Built With Python

Since I was more preoccupied with layout on this show, I didn’t build a ton of tools, but there were a few that came in handy.

Quick Layout

The final battle sequence with Electro required us to work in a very heavy environment where we had to swap out set pieces for ones in various stages of destruction.

For example, if Electro destroys a pylon, we then need to make sure that it stays destroyed in all the other shots after that.

Since they were really wrecking this environment, I built a simple UI using Python and PyQt that let all the layout artists (including me) simply choose the state of predefined elements in the scene.

Each element had a group of radio buttons that let the artist decide whether it was intact, partially destroyed, fully destroyed, etc…
This saved a ton of time and reduced potential errors, because layout artists didn’t need to manually find the assets and swap them out; they could just click a few buttons and, BAM!, they were done.
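A bare-bones sketch of that kind of swap UI might look like this, written here with PySide2-style Qt bindings for illustration (the show itself used PyQt). The element names and the swap function are hypothetical:

```python
from PySide2 import QtWidgets

STATES = ['Intact', 'Partially Destroyed', 'Fully Destroyed']

def swap_asset(element, state):
    # Stand-in for the studio logic that swaps in the right asset variant.
    print('Swapping %s to %s' % (element, state))

class ElementRow(QtWidgets.QWidget):
    """A row of radio buttons controlling one set element's state."""
    def __init__(self, element, parent=None):
        super(ElementRow, self).__init__(parent)
        layout = QtWidgets.QHBoxLayout(self)
        layout.addWidget(QtWidgets.QLabel(element))
        for state in STATES:
            button = QtWidgets.QRadioButton(state)
            button.toggled.connect(
                lambda checked, e=element, s=state: checked and swap_asset(e, s))
            layout.addWidget(button)

if __name__ == '__main__':
    app = QtWidgets.QApplication([])
    window = QtWidgets.QWidget()
    layout = QtWidgets.QVBoxLayout(window)
    for element in ('pylon01', 'overpass02'):
        layout.addWidget(ElementRow(element))
    window.show()
    app.exec_()
```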

 

A sequence that required a lot of layout work. Warning: Spoiler alert!

Lens Distortion

In Visual Effects movies, because we are shooting through a real camera lens, we pick up the imperfections of these lenses. Most importantly, we pick up the lens distortion.

This is non-ideal for us as our CG is undistorted, so when we ingest the plates, we use a calibration profile to undistort the images. This lets us work against a flattened plate.
When we output our final images, we then redistort them back to match the original camera.

However, clients are increasingly adamant that we present everything with the distortion, even earlier on in the process.
Even our animation playblasts need distortion these days, but those are just simple OpenGL captures from the viewport.

I set up a quick Python script that would do the following:

  • Find the lens distortion used by the shot
  • Generate a Nuke file that would read our images
  • Write out these newly distorted images

It’s pretty simple to do, and something a lot of shows are now using, even animated features like Storks.
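A rough sketch of the pattern, with the caveat that the distortion node here is a placeholder for whatever calibrated setup a show actually uses (running Nuke with -t in terminal mode is the real part):

```python
import subprocess

NUKE_TEMPLATE = """
import nuke
read = nuke.nodes.Read(file='{source}')
# Stand-in for the show's actual lens distortion node or gizmo.
distort = nuke.nodes.LensDistortion()
distort.setInput(0, read)
write = nuke.nodes.Write(file='{target}')
write.setInput(0, distort)
nuke.execute(write, {first}, {last})
"""

def redistort(source, target, first, last):
    script = NUKE_TEMPLATE.format(source=source, target=target,
                                  first=first, last=last)
    with open('/tmp/redistort.py', 'w') as f:
        f.write(script)
    # -t runs Nuke in terminal mode with our generated script.
    subprocess.call(['nuke', '-t', '/tmp/redistort.py'])
```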


Buy the BluRay

The movie was enjoyable. It’s definitely not one of the great superhero movies, but I enjoyed it as a Spider-Man fan. (Also, I got to do the shot that finishes one of Spider-Man’s most iconic story lines.)

Affiliate Link

Working on Cloudy With a Chance of Meatballs 2

In this second part of my blog series where I go over projects that I’ve worked on, with a focus on how I used Python, I’ll be analyzing Cloudy With A Chance of Meatballs 2.
This was my first animated feature film, and my first film at Sony Pictures Imageworks.

Before I continue I’d like to give a little history.
I’d just left Rhythm & Hues as my contract was expiring. Rhythm wanted to extend it, but Sony simply had a better deal: a more stable job, much higher pay and the chance to work on the sequel to one of my favorite animated films.

I was hesitant to leave because Rhythm had been a great gig, but the opportunity was too good to pass up. In hindsight, this was a great decision, because only a few weeks later Rhythm fatefully filed for bankruptcy.

So begins my journey as a Pipeline TD, having transitioned from being a layout artist at Rhythm.
Imageworks had taken a chance hiring me, and so far it looks to be one that’s worked out.

Animation vs Visual Effects Films

Sony Pictures Imageworks is unique in that it’s one of the few studios that works on animated features as well as visual effects.
Seeing as I was changing from working on a visual effects film to my first animated feature, there were many differences to take note of.

Pros of Animation

Animated features have a lot going for them, and there’s a reason why many artists try and work on them.

  • It’s so much more relaxing, at a slower pace and with less overtime.
  • There is no client, or rather, the client is on the same team as you.
    They better understand the struggles of creating the imagery because they’re in the trenches with you, and there are fewer mad crunch times.
  • Teams are larger. Just the Animation department alone can eclipse the size of an entire visual effects team.
    This means work is more spread out and crunch time is easier to deal with.
  • You can deal with tasks on a sequence level rather than a shot level most of the time.
    This is because entire sequences are cut from one source, whereas in VFX films, each shot is its own beast.
  • You really get to feel like you’re crafting the movie. Even in Pipeline, you can have some influence over the final result, rather than in Visual Effects where you often feel like a cog in the machine.

Cons of Animation

It’s not always peaches and sunshine though. There are some downsides to it as well.

  • You work on the project for much longer. It can get quite boring seeing the same shot on your screen 2 years later.
  • Teams are significantly larger, which means you don’t form as close bonds with your coworkers, and communication can be a real challenge. The show is now a giant lumbering machine, rather than an agile one.
  • As a Pipeline TD, there are fewer chances to do something really cool, because the teams are so much larger that tasks are spread around a lot, and you may have little to do.
  • There’s less of a cool factor. You’re often relegated to working on just a “kids film”. The Visual Effects films are the ones that often get the oohs and ahs.

Similarities

At the end of the day though, it’s not really all that different.

  • Our pipeline at Imageworks is largely shared between Visual Effects and Animated movies. This means for the most part, you don’t have to consider them different at all.
  • Often you still have focus tests, marketing etc… on both that require crunch time. It’s not always smooth sailing, and I never go into a project thinking it’s going to be easy.

Python Tools for Cloudy With a Chance of Meatballs 2

There were quite a few major tools that I made for this show using Python. I’ll go over them here.

Deep Compositing for Animators

Cloudy 2 had a lot, and I mean a lot, of background characters.
This meant that shots couldn’t just be animated by a single artist, and often had to be split up between multiple animators just to get them done in a realistic amount of time.

We have some great crowd tools that let us instance animation around the scene, but for many of these shots we needed unique, hero animation for (in some cases) 100+ characters in a shot.

To help with this, I came up with a tool that takes our playblasts (OpenGL captures from the animators’ scenes) alongside a depth output, and then uses these inside Nuke to combine the images using depth.
This is a bit of a remedial use of deep compositing, but it’s quick, effective, and animators can see the combined results of their scenes in under five minutes.

Since playblasts are a natural byproduct of animators working, there was no overhead other than enabling depth write outs for all their playblasts if certain criteria were met.
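This isn’t the production Nuke setup, but the core idea of a depth composite is easy to show: for each pixel, keep whichever image’s sample is closest to the camera. A minimal sketch with NumPy arrays:

```python
import numpy as np

def depth_composite(rgb_a, depth_a, rgb_b, depth_b):
    """Combine two renders using their single-channel depth maps."""
    # True where image A is nearer to the camera than image B.
    a_wins = depth_a < depth_b
    rgb = np.where(a_wins[..., None], rgb_a, rgb_b)
    depth = np.minimum(depth_a, depth_b)
    return rgb, depth
```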

This can go even further though. Using the same depth compositing, we can bring the data right back into Maya again as an image plane.
Maya’s viewport supports a single depth-composited image plane. This means an animator can bring in either a single playblast or a combined output, and put it on an image plane.

From the shot camera, this 2D image is now properly composited in depth, and you can move around the objects in the image as if they’re in the scene. It’s really quite cool to see.

Again, this process requires very little extra data, and no new workflows for the animators. It just provides a very natural way to get quick, iterative feedback on their scenes.

Here’s a video that goes over deep compositing on Planet of the Apes.
I didn’t work on this, but it’s one of the best videos describing it.

This shot is an example of where we used the deep compositing, but also where we used the texture variation tool.
When Chester shows off his giant screens or his candy bars, each one is the same geometry, and animators could pick what to show on them.

Texture Variations

Throughout the course of the movie, we’d constantly reuse the same geometry with varying textures.
Traditionally, lighting would just choose which texture they wanted, but for Cloudy 2 we wanted Animation to have control over it, because the textures fed into gags in the shots.
Rather than have these be rigged assets or anything complex, we decided to keep it simple.

I built a tool that would show the animators any available textures for their assets, let them select which one they wanted, and then apply it. They could do this for several objects at once.
Once they chose the textures, the choice would be tagged onto the geometry as an attribute that would then be picked up by the lighting template, so lighters didn’t even have to give it a second thought.
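A small sketch of the tagging half of that idea, assuming maya.cmds; the attribute name is illustrative:

```python
from maya import cmds

def tag_texture_variant(node, variant):
    # Store the chosen variant on the geometry for lighting to pick up later.
    if not cmds.attributeQuery('textureVariant', node=node, exists=True):
        cmds.addAttr(node, longName='textureVariant', dataType='string')
    cmds.setAttr('%s.textureVariant' % node, variant, type='string')

# e.g. tag every selected candy bar with a particular wrapper.
for obj in cmds.ls(selection=True):
    tag_texture_variant(obj, 'wrapper_02')
```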

We used this for a lot of objects, from candy bars, to ships, to random objects in the scene that needed a little breakup.

Sorting Characters In The Scene

Not all of the tools we build on a show are this complex.

An example of a simpler tool I built that was pretty useful was for a stadium scene in the movie. We had hundreds of characters that we needed to organize into sections.

This was a simple system (sketched in code below):

  • Get a list of all the characters in the scene.
  • Find their x,y,z positions in the world.
  • Sort them into sections based on the seats around them and their position.
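A toy version of that logic, assuming maya.cmds. The section boundaries here are made up, whereas the real tool derived them from the seats:

```python
from maya import cmds

def sort_characters_into_sections(characters, section_width=10.0):
    sections = {}
    for char in characters:
        x, y, z = cmds.xform(char, query=True, worldSpace=True,
                             translation=True)
        # Bucket characters by which slice of the stadium they stand in.
        section = int(x // section_width)
        sections.setdefault(section, []).append(char)
    return sections
```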

Like I said, something really simple but even that can prove to be really useful in production.

Don’t really have an appropriate image for this one. So here’s a gif instead!

This is Pyblish, a similar publishing tool to the one we use in production.
There’s no shared code, but the fundamental design is similar. Developed by Marcus Ottosson.
Check it out here:

https://github.com/pyblish/pyblish-qml

Publishing Frontend

Like most studios, Imageworks has a very well defined publishing system to get data from one department to another.

Unfortunately, while the backend of our system was very well defined, the frontend system that was exposed to the artists was not.

It consists of a few basic ideas:

  • Artists select which assets they want to publish
  • They can configure a few options
  • The tool runs some validation tests
  • It then publishes the scene once all tests have passed

This gives us a reasonable safeguard against bad data making it to the next department, and lets us catch issues early.
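Here’s a rough sketch of a plugin-style frontend along these lines. Real systems (like Pyblish) are far more featureful; everything here is illustrative:

```python
# Validators register themselves; publish() runs them all and only hands
# off to the backend when every test passes.
class Validator(object):
    registry = []

    @classmethod
    def register(cls, validator):
        cls.registry.append(validator)
        return validator

@Validator.register
class NoEmptyGroups(Validator):
    label = 'No empty groups in the scene'

    def validate(self, assets):
        # Return the offending assets; an empty list means the test passed.
        return [a for a in assets if a.get('empty')]

def send_to_farm(assets):
    print('Publishing %d assets via the backend' % len(assets))

def publish(assets):
    errors = []
    for validator in Validator.registry:
        failed = validator().validate(assets)
        if failed:
            errors.append((validator.label, failed))
    if errors:
        raise RuntimeError('Publish blocked: %s' % errors)
    send_to_farm(assets)

publish([{'name': 'env_set', 'empty': False}])
```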

Our old framework showed its age: while the design was good, the implementation made it very unfriendly for artists, and also really hard to maintain and add new tests to. Additionally, a lot of it was in MEL.

So a coworker and I were tasked with coming up with a new framework, built from the ground up in Python.
We’d still use the same backend for publishing on our computer farm, but the frontend would be much more artist friendly and make it much easier for a TD on a show to add tests.

 

We built this towards the end of Cloudy 2, and we decided to beta test it on the ill-fated test for Popeye.


Buy the Movie and Artbook

The movie is pretty darn funny, and the art book has some incredibly gorgeous art behind it.


  • Buy the BluRay
  • Buy the Artbook

The links are affiliate links that will take you to your local Amazon store.
