Rig Retargeting

Challenges

  • The number of unique skeletons on background characters roughly doubles with each film
  • We lack the team size to animate each body type
  • We need to retarget facial data as well as skeletons

Solution

  • A Python package of tasks: Maya’s HIK solver + custom extension
  • Retargets to any body type, and allows overrides for tricky animations
  • Delivers resulting animation clips to multiple departments

Starting with the film ‘Spies in Disguise’, we created more background characters than we had the bandwidth to animate in a reasonable amount of time. Rather than sacrifice the variety of our cast, I prototyped a HIK-based retargeting system that lets animators work on a single prototype asset while the clip propagates automatically to all similar body types. Our animators were initially a little uncomfortable with the idea of not having exact control over the final look of every character, so I worked with them to build a preview tool that shows them the retargeted results of their work.
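
Roughly, the propagation step looks like the sketch below. The hik_utils module is a hypothetical stand-in for our wrapper around Maya’s HIK characterization commands; only the structure of the loop is meant literally.

    # Minimal sketch of clip propagation; hik_utils is a hypothetical
    # placeholder for our studio wrapper around Maya's HIK setup.
    import maya.cmds as cmds
    import hik_utils  # hypothetical wrapper, not a real module

    def propagate_clip(clip_path, source_rig, target_rigs, start, end):
        """Bake one approved prototype clip onto every similar body type."""
        cmds.loadPlugin('mayaHIK', quiet=True)    # Maya's HIK solver
        cmds.file(clip_path, i=True)              # bring in the prototype anim
        src_char = hik_utils.characterize(source_rig)
        for target in target_rigs:
            dst_char = hik_utils.characterize(target)
            hik_utils.set_retarget_source(dst_char, src_char)
            # Bake the solved motion onto the target joints so the clip
            # exports with no live HIK dependency.
            joints = cmds.listRelatives(target, allDescendents=True,
                                        type='joint')
            cmds.bakeResults(joints, time=(start, end), simulation=True)
            hik_utils.export_clip(target, clip_path)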

Once retargeting proved viable, I extended it to extract additional facial data and rig correctives. I worked with the cloth sim team to deliver a retarget file they could use for simulation, and created a series of render quality-check passes that run automatically and post to email/Slack once a retarget completes. I’m currently expanding this for our next film with features like heel-height interpretation, preservation of intentional self-contact, avoidance of unintentional self-contact, and support for extendable limb lengths.
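
The notification step itself is simple; here’s its shape, assuming a standard Slack incoming webhook (the URL and message format are placeholders):

    # Post a QC result to Slack once a retarget completes. WEBHOOK_URL and
    # the message format are placeholders, not production values.
    import requests

    WEBHOOK_URL = 'https://hooks.slack.com/services/...'

    def post_qc_result(clip_name, passed, image_url):
        status = 'passed' if passed else 'FAILED'
        text = f'Retarget QC for {clip_name}: {status}\n{image_url}'
        requests.post(WEBHOOK_URL, json={'text': text}, timeout=10).raise_for_status()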

Context-Based Dialogue System for Unity

Challenges

  • Treeless dialogue, so varying story threads need less maintenance
  • Must be context-aware, allow for easy extension as gameplay grows
  • Must read from external files; our writers prefer Google Docs

Solution

  • A Unity-based managed plugin
  • Reads from an XML file, text file, or Google spreadsheet
  • Hooks into existing Unity UI system
  • Custom Editor window for debugging

When developing my current game, I quickly ran into a problem: managing a growing number of branching dialogues. I initially tried some existing dialogue tools, but none of them met my requirements for ease of use and scalability. While researching other options, I came across Valve’s GDC 2012 talk on contextual dialogue. The core idea was attractive, so I implemented the approach in Unity: a flat database of dialogue lines is built from any of several sources, queried against the state of any objects tagged as relevant, and displayed via Unity’s UI Canvas system.
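
At its core, the query works like this minimal sketch (Python here rather than the plugin’s C#, and the fact names are illustrative): every rule’s criteria are tested against the current facts, and the most specific passing rule wins.

    # Sketch of fuzzy rule matching in the style of Valve's GDC 2012 talk:
    # the passing rule with the most criteria is the most specific, so it wins.
    RULES = [
        {'criteria': {'event': 'hurt'},
         'line': 'Ouch!'},
        {'criteria': {'event': 'hurt', 'health_low': True},
         'line': 'I need a medic over here!'},
        {'criteria': {'event': 'hurt', 'health_low': True, 'ally_nearby': True},
         'line': 'Cover me, I am hit!'},
    ]

    def best_line(query):
        """Return the line from the passing rule with the most criteria."""
        best = None
        for rule in RULES:
            crit = rule['criteria']
            if all(query.get(k) == v for k, v in crit.items()):
                if best is None or len(crit) > len(best['criteria']):
                    best = rule
        return best['line'] if best else None

    # The query is built from the state of objects tagged as relevant:
    print(best_line({'event': 'hurt', 'health_low': True, 'ally_nearby': True}))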

A custom editor window lets the user keep track of the dialogue, with editing functions to tweak lines and push them back to the file or Google Doc the writer is using. It also shows the game-state data being tracked and the results of dialogue queries as they happen. I had so much fun building this that I made it a more generalized plugin rather than a tool for my specific game genre. The tool will see customers before my game comes out: a local dev company is using it for their current project, due in early 2019. I’m still actively developing the plugin, adding features and smashing bugs based on my users’ feedback as well as my own experimentation. It’s been very rewarding to support someone trying to release their own game on a short deadline.

Auto-Render, Composite, Playblast and Review Tool - The MasterBlaster

Challenges

  • The process of setting up a render for director review is time-intensive
  • New team members have a very long ramp-up period before they’re effective
  • The renders themselves are error- and crash-prone

Solution

  • Utilize playblasts from upstream, render only the crowd
  • Options to playblast or render
  • Render with different quality/lighting/DoF settings to fit the shot
  • Automatically composites images using different depth schemas

This tool is the most heavily used one I’ve created for our department, and it was one of the subjects of my SIGGRAPH 2017 talk. Looking at the typical life cycle of one of our shots, I saw that users were spending a lot of time setting up renders to get their crowd sims approved. Each artist had a preferred style of lighting and displaying a rendered sim, and the lack of consistency was slowing down our approvals. So I created this tool to speed shots along, standardize their final output, and give artists more time for creative decisions rather than render wrangling.

The tool gives the user several quality options for displaying the crowd, and automatically finds and composites image sequences from the other departments that have contributed to a shot. With this method, the crowds artist only spends time generating crowd imagery, and we re-use hero and background set imagery that already exists. The existing playblasts and CgiStudio renders use different depth schemas, so I had to do some trickery with trig and a reference image plane to get the varying image sequences to composite together without apparent distortion. The human time this tool saved directly impacted our performance, allowing us to deliver three times as many crowd shots on ‘Ferdinand’ as we had on any past film at our studio.
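
As a rough illustration of the depth fix, suppose one source stores planar camera-space Z while another stores radial distance from the eye (our actual schemas differ in the details, but the trig is the same idea):

    # Convert a planar-Z depth image to per-pixel radial distance. A point at
    # planar depth z along the ray through a pixel sits at (z*tx, z*ty, z),
    # so its radial distance is z * sqrt(1 + tx^2 + ty^2).
    import numpy as np

    def planar_to_radial(z, fov_x_deg, fov_y_deg):
        h, w = z.shape
        # Tangent of the view angle at each pixel center, in x and y.
        tx = np.tan(np.radians(fov_x_deg) / 2) * (2 * (np.arange(w) + 0.5) / w - 1)
        ty = np.tan(np.radians(fov_y_deg) / 2) * (2 * (np.arange(h) + 0.5) / h - 1)
        tx, ty = np.meshgrid(tx, ty)
        return z * np.sqrt(1.0 + tx**2 + ty**2)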

Blendskin-Based Localized Cloth Simulation

Challenges

  • We need cloth sim on characters that are close to camera
  • There are no promotion tools, so cloth sim must be applied to the clip
  • We need to support terrain adaption and post-sim fixes
  • Each character has many garments that can be mixed and matched

Solution

  • A custom solver/deformer based on the blendskins concept
  • Mesh deltas from a cloth sim are applied as a normal blendshape
  • Allows limb and head animation after the cloth sim, with surprisingly high tolerance

Cloth sim is typically one of the most expensive parts of the crowds pipeline. In our films, crowds can be found very close to camera, and the simulation needs to hold up even when the crowd simulation includes terrain adaption and last-minute animation tweaks. In some film environments, the solution would be to promote the character to a hero character and add cloth sim in the normal way; that was too expensive and cumbersome for us.

Our approach was to run a cloth simulation on the source cycle and use a custom blendskins solver to find the mesh deltas relative to the garment’s original joint weighting. The deltas were applied to a generic blendshape node, and its placement in our deformation stack meant we kept the full power of our crowd software’s post-fix tools without losing any of the original cloth data. On this project I worked with the RnD programmer who created the blendskins solver for rigging; I implemented the process that extracts the data after a sim and applies it at render time. The extraction was a straightforward Python module, and the render-time application was coded in our in-house render language, CgiStudio.
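
A minimal numpy sketch of the extraction, assuming the blendshape evaluates before the skin in the deform stack (the per-vertex blended skin matrices come from the solver and are taken as input here):

    # Rest-space deltas between the cloth sim and the plain skinned mesh.
    # skin_matrices are the per-vertex weights-blended 4x4s from the
    # blendskins solver; a standard blendshape applying these deltas
    # before the skin reproduces the sim.
    import numpy as np

    def extract_deltas(rest_points, sim_points, skin_matrices):
        deltas = np.zeros_like(rest_points)
        for i, (rest, simmed, skin) in enumerate(
                zip(rest_points, sim_points, skin_matrices)):
            # Pull the simmed position back through the vertex's blended
            # skin transform, then diff against the rest position.
            local = np.linalg.inv(skin) @ np.append(simmed, 1.0)
            deltas[i] = local[:3] - rest
        return deltas  # stored as a blendshape target per frame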

Cycle Manager Tool

Challenges

  • Anim cycles have multiple separate components that need to be managed
  • Upstream rig issues often mean all cycles have to be updated frequently
  • We need a one-stop location for cycle status, anim fixes, and file processing

Solution

  • A PyQt tool powered by a Python task platform
  • Connects to Shotgun, Slack, email, and our render farm
  • Bird’s-eye view of cycle status, with details that can be explored as needed

One of the biggest obstacles to growing our team’s capabilities was the danger of being bogged down by file management. For our first four films, the team hovered between three and five people, but each film significantly increased our number of animation clips. There was no budget for a studio-wide solution to manage all that data, so I took on scripting some of our file-handling tasks, splitting many of them into modular components that could be called independently.

This gave us the ability, via script, to re-generate joint data, blendshapes, fur objects, cloth sim, and secondary joint data for a given animation clip; these are the components our in-house renderer consumes. To track this data, I found an unused datatype in our Shotgun database and recycled it for our purposes, and I made a PyQt window to let us view all of our clips and call the modular scripts as needed. I worked with someone from RnD to connect these processes into our Maya, Houdini, and Nuke wrappers. Since that first iteration, the Production Tech team and I have worked together to flesh out the process and keep everything department-agnostic. We can now manage and regenerate our entire library, port anim clips between shows, and communicate between departments via this tool.
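
The modular layout is roughly this shape (the task names are illustrative): each component registers as an independent callable, so the PyQt window, the farm, or another department’s wrapper can run any subset.

    # Sketch of the modular task registry; names are illustrative.
    REGISTRY = {}

    def task(name):
        """Register a callable as an independently runnable component."""
        def wrap(fn):
            REGISTRY[name] = fn
            return fn
        return wrap

    @task('joints')
    def build_joint_data(clip): ...

    @task('blendshapes')
    def build_blendshapes(clip): ...

    @task('cloth')
    def build_cloth_sim(clip): ...

    def run(clip, components):
        """Called from the PyQt window, or farmed out one task per machine."""
        for name in components:
            REGISTRY[name](clip)

    run('walk_cycle_01', ['joints', 'blendshapes'])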

Asset Creation Pipeline For Rigs & Renderable Characters

Challenges

  • Crowd rigs must be performant, yet indistinguishable from lead characters
  • Rigs require data from several departments, each with a different build pattern
  • The number of crowd characters increases significantly on each film
  • Other departments would like to use our assets as well

Solution

  • A package of modular Python tasks that is stackable and farm-friendly
  • Extendable and easy to plug in to other studio processes
  • Allows for micro/macro asset management

When I was first brought onto the crowds team, the process of creating a crowds rig was only partially automated, via a monolithic Python script living in a hidden corner of the file system. For my own productivity and that of future team members, one of my first tasks was to clean it up, document it, split it into bite-size pieces, and attach each piece to a user-friendly interface.

That process is now fully modular. We can run all characters simultaneously, or just one part of a single character. Each part of the character is a Python task and can be visually stepped through for debugging. The package is an accessible list of tasks, so it’s easy to add new parts to the process. This is how we delivered rigs for our newly created VR team: writing an additional task to clean up our assets for game-engine use was much faster than having a rigger do so manually. The process scales up, so a new task can be run immediately on all characters.
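
The stackable pattern looks roughly like this (the class and character names are invented for the example): each bite-size step declares its prerequisites, so the runner can execute one step for one character, or fan the full stack out across the farm.

    # Illustrative sketch of stackable build tasks with declared prerequisites.
    class Task:
        requires = []                        # tasks that must run first
        def run(self, character): ...

    class BuildSkeleton(Task):
        def run(self, character):
            print(f'building skeleton for {character}')

    class BindGeometry(Task):
        requires = [BuildSkeleton]
        def run(self, character):
            print(f'binding geometry for {character}')

    class CleanupForGameEngine(Task):        # the kind of task added for VR
        requires = [BindGeometry]
        def run(self, character):
            print(f'stripping rig nodes for engine export: {character}')

    def run_stack(task_cls, character, done=None):
        """Depth-first: satisfy prerequisites, then run the requested task."""
        done = set() if done is None else done
        for dep in task_cls.requires:
            run_stack(dep, character, done)
        if task_cls not in done:
            task_cls().run(character)
            done.add(task_cls)

    run_stack(CleanupForGameEngine, 'villager_03')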

Training and Team Development

Challenges

  • We’ve got a renderer that’s thirty years old
  • Studio-specific practices typically take time for a new employee to absorb
  • Some tools, especially within our Maya pipeline, are entirely custom

Solution

  • Crowds101: A series of pages using the Confluence platform
  • A weekly lesson syllabus for new employees, with assignments
  • Prefabricated shots and simulations to ease new folks into the process

For several years we borrowed a couple of artists from other departments to help us weather the crunch periods. Those artists would often arrive with little to no experience in crowds or even the in-house renderer that we use. To get them up to speed quickly and maximize the time we had with them, I created a series of instructional lesson plans that each new person could experiment with.

I ran about three workshops a week where we would work through each lesson together. The rest of their onboarding time was spent practicing the material. Each new person had a test shot and personal tasks that they were encouraged to complete by the next lesson. I also created supplemental video content and gave them a link to our internal video training software. Folks who were especially engaged had the space to add features to their shots, while I helped the members who were struggling. This was a really fun project for me, and the feedback I got was overwhelmingly positive. Many years ago, I used to be a middle school math teacher, and I’m pleased that I can make use of that skillset from time to time!

Auto-Playblast for Cycle Animators

Challenges

  • Cycles approved by animation supervisors are not suitable from every angle
  • Studio policies barred us from joining in the anim review process
  • Existing review imagery was nonstandard, and frequently hid flaws
  • 90+ animators and one Greg

Solution

  • A Maya shelf tool that was officially added to the pipeline’s approval process
  • Auto-generates front, side, orbit, and run-up images at a standard size
  • Easy access for animators to grab a helpful camera rig
  • Automates email blasts to the reviewers and the end users on completion

The cycle-creation process at our studio was initially a bit clunky. Anyone could decide a new cycle was needed, call it out, and the name of that future cycle would be handed to an animator with little other information. Work was typically done in a vacuum, and the cycle would only be seen by its end users (the character simulation, crowds, and layout departments) after the animator was finished and barred from doing further work on it. This resulted in a lot of animation that was ultimately unusable, and many cycles needed expensive fixbacks to work at all angles and with all clothing garments.

To get ahead of these issues, I got together with the lead technical animator and we worked out a more official process that kept everyone in the loop. It took a lot of pitching to explain why this would save resources for the company overall, but eventually everyone agreed! I mocked up a tool that was ultimately used as-is, where reference video was standardized and delivered with almost every cycle as part of the original callout, and the end users had a chance to add their thoughts by the time animation was near 50%. Part of that process meant standardizing how playblasts were created and reviewed by animators and their leads. With this tool they were sure to review a cycle from all angles, and the automated emails meant it took minimal effort for stakeholders to stay informed.
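
A condensed sketch of the playblast pass (the shipped tool also builds the orbit and run-up cameras and sends the notification emails; the output path is a placeholder):

    # Blast the standard review angles for a cycle. Output path, image size,
    # and the camera framing are simplified placeholders.
    import maya.cmds as cmds

    ANGLES = {'front': (0, 0, 0), 'side': (0, 90, 0)}  # orbit/run-up omitted

    def blast_all(cycle_name, start, end, width=960, height=540):
        for label, rotation in ANGLES.items():
            cam, _ = cmds.camera(name=f'{cycle_name}_{label}_cam')
            cmds.xform(cam, rotation=rotation)
            cmds.lookThru(cam)
            cmds.viewFit(cam)                # frame the character
            cmds.playblast(startTime=start, endTime=end, format='image',
                           filename=f'/review/{cycle_name}_{label}',
                           widthHeight=[width, height], percent=100,
                           viewer=False, offScreen=True, forceOverwrite=True)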

Crowds Implemented in the New Universal Scene Description Format

Challenges

  • The entire studio was given six weeks to test a drastically new data pipeline
  • Must transition from legacy renderer to a render-agnostic USD file
  • USDSkel and potential USD crowd solutions would not be ready within our timeline

Solution

  • A series of custom nodes in Houdini’s experimental new lighting context, LOPs

This project is in active development. Our studio has been using its in-house rendering software for nearly thirty years, and we recently decided to attempt a large shift in how we work: deliver a trailer for one of our upcoming films entirely via the USD format, render it with Pixar’s RenderMan, and do it all within a six-week period. This was a pretty disruptive change, as some of the most basic building blocks of character deformers and materials are heavily tied to our legacy renderer. My responsibility was to prototype how we might achieve crowds work in this new environment.

Much of the asset/cycle pipeline I’ve built over the years was already platform-agnostic, so asset building, animation clips, and crowd simulation required only small adjustments. The final deliverable was trickier: USD did not support instance-level deformation, and USDSkel was still in its early stages and didn’t support the lattice or blendshape deformations we need on near-camera crowds. To meet the trailer deadline, I aimed for an MVP of “mesh-baking out all the things” from the crowd sim in Houdini. It was expensive, but it let us create the images we needed. With a couple of weeks still left before the deadline, I rebuilt the crowd sim using a set of custom nodes in Houdini’s experimental LOPs context. Through these I achieved more intelligent instancing and reduced the file size on any given frame by about 1,000x. We still have to implement finer joint deformations in future versions, and since blendshape support in USDSkel has just been announced, we’ll be adopting it as soon as possible. More info to come soon!
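
To give a feel for where the savings come from, here’s the instancing idea expressed with the stock USD Python API rather than our custom LOPs nodes (paths and counts are toy values):

    # Agents that share a mesh become prototypes on a PointInstancer, so
    # per-agent data shrinks to arrays of indices, positions, and orientations
    # instead of fully baked geometry per agent.
    from pxr import Usd, UsdGeom, Gf, Vt

    stage = Usd.Stage.CreateNew('crowd_frame.usda')
    instancer = UsdGeom.PointInstancer.Define(stage, '/crowd')

    proto = UsdGeom.Xform.Define(stage, '/crowd/protos/villager')
    instancer.CreatePrototypesRel().AddTarget(proto.GetPath())

    instancer.CreateProtoIndicesAttr(Vt.IntArray([0, 0, 0]))
    instancer.CreatePositionsAttr(Vt.Vec3fArray([(0, 0, 0), (3, 0, 1), (-2, 0, 4)]))
    instancer.CreateOrientationsAttr(Vt.QuathArray([Gf.Quath(1, 0, 0, 0)] * 3))

    stage.GetRootLayer().Save()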