Experience and Thoughts on FacewareTech

Perhaps you've heard of Faceware? The multi-application plugin has been used in an ever-increasing number of AAA titles and films to map captured facial performances onto 3D characters. The intended benefits of a tool like this are twofold. First, good facial animation is challenging to produce - especially for more limited rigs - and remapping from motion capture provides a high floor for the quality of timing and secondary motion. The second bonus is efficiency - after some set-up, the remapping of motion from actor to rig is more or less automatic, which makes it possible to scale a project to larger volumes of animation without creating much more work. I figured that experience with a cutting-edge tool like this could only be a feather in my tech artist cap, so a month ago I gave its free trial a shot.

Video courtesy of Faceware Tech, with footage from dozens of titles. View on Vimeo for full info.

Unfortunately, it seems I chose a bad time to try it out - a couple of bugs in the software made things frustrating. First, there was a problem with the AutoSolve feature's handling of head rotations. AutoSolve is, to put it simply, Faceware's "express" retargeting pipeline. The normal workflow involves repeatedly matching poses on a rig to frames of captured actor footage until the software can interpolate everything in between. The AutoSolve method, by contrast, relies on a pre-defined "Expression Set" to animate the rig autonomously. The upside is a faster, less demanding process (Faceware even offers a live version, which speaks to that speed). The downsides, it seems to me, are that the software must rely on a generic understanding of the actor's poses, and there's no opportunity for artistic exaggeration or deviation. For head movements, though, those drawbacks matter far less - a generic model of head motion just doesn't carry the same cost as it does for mouth poses. As such, AutoSolve was the recommended method for retargeting head animation. Problem was, it just didn't work in the version of Faceware I was running (5.1.0.114).

After AutoSolving, my rig's head controller often rotated in unexpected directions, despite my double- and triple-checking the expression set. Worse, the resulting rotations were minute - they reliably seemed to be scaled by ~0.017, i.e. the conversion factor from degrees to radians. Unexpected degree-to-radian conversion turned out to be a recurring theme during my Faceware trial. It hit any attribute with "rotate" in its name - my custom blendable attributes ".autoRotate" and ".rotateChildren" got caught up and had their full-effect (1.0) values converted to 0.0175. Not ideal.
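For what it's worth, the fix was mechanical once I spotted the pattern. A minimal sketch of the kind of repair script I mean - pure Python here rather than `maya.cmds`, and the function name is my own invention, not part of any Faceware or Maya API:

```python
import math

# ~0.017453: the suspicious factor observed on the affected curves,
# i.e. the degrees-to-radians conversion constant.
DEG_TO_RAD = math.pi / 180.0

def undo_radian_scaling(keyed_values):
    """Rescale keyframe values that were wrongly converted degrees -> radians.

    keyed_values: floats sampled from an affected animation curve
    (e.g. a ".rotateChildren" value of 0.0175 that should be 1.0).
    Multiplying back by 180/pi restores the intended values.
    """
    return [v / DEG_TO_RAD for v in keyed_values]
```

In Maya itself, the same correction can be applied in one step with `cmds.scaleKey` using a value scale of `180.0 / math.pi` on each affected curve.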

Scaling rig controls was not recommended, but I scaled them anyway and didn't run into any problems.

The biggest issue by far for me was with undoing. If the retargeter window (a Maya plugin) received a Ctrl+Z keystroke, Maya would die, instantly. Like, worse than a fatal error - it didn't even get the chance to perform its usual last-ditch attempt to save. It just died. Gone. Kaput. It would be one thing if the plugin simply lacked an undo feature and ignored the keystroke. But to die and take Maya with it? Undo is arguably the most sacred feature of any productivity software - what am I going to do, not use it? I found myself counting operations, saving every 30 seconds, and treating every undo as a kind of superstitious ritual.

Despite the frustrations, I can't really say anything bad about the results I ended up with. The workflow (bugs aside) was simple and the interpolations largely accurate. What's more, the demands it placed on my rig served as a good benchmark of the rig's completeness and quality - while building up the pose library, I found myself adding a couple more adjustment blendshapes to hit certain poses.

The thing that most impressed me about Faceware, however, was rather humble - it was the wide range of inputs it could work with. As this was my unpretentious, on-the-side undertaking, I didn't have the kind of resources and equipment that are generally associated with motion capture. I had Maya and a crappy laptop webcam; Faceware said, "okay, I can work with that." Of course, it can also work with fancy fixed-point or head-mounted cameras and all sorts of special equipment - and I'm sure the results elevate appropriately. But to make (essentially) the same workflow available to so many different tiers of capability can only be good for the animation community.

By the end of the trial, I had learned a valuable new workflow, been forced to improve both my test rig and my acting "skills," built an analyzer model and a pose library (for future pipelining), and ended up with a pretty sweet clip (it's in my rigging reel). Faceware's power can't be ignored - its combination of flexibility, quality, and efficiency is second to none. And now, with Snapchat/Apple/Google/Facebook's new retargeted 3D emoji bursting onto the scene, demand for this sort of thing is set to surge. Recommended.