Rock Falcon, the poster boy for Softimage’s Face Robot technology, takes digital acting to new heights with expressive facial animation.
The problem is a classic one for animators, many of whom have fallen back on the argument that there is no place for hyper-realistic human animation. When it comes to animating humans, the thinking goes, it’s better to opt for more stylized faces so the viewer doesn’t get distracted. And certainly there is a whole beautiful body of work that supports this point, including Disney’s Snow White, Hayao Miyazaki’s Spirited Away, and Pixar’s melding of classic squash-and-stretch with 3D in The Incredibles. But then an animated character like Gollum comes along: a combination of talented acting by Andy Serkis and stellar animation by at least 18 animators working for Weta. The bar is moved.
At Blur Studios, in Venice Beach, CA, the quest for good facial animation is close to an obsession. However, Blur is not the kind of studio that puts an army of technicians to work on specialized software. Rather, Blur prides itself on turning out high-quality 3D animation on time and on budget. Its body of work includes mischievous animated critters such as the Academy Award nominee Gopher Broke and plenty of human character animation for cinematics in games such as X-Men Legends II: Rise of Apocalypse.
On a recent visit with Blur Studios, we talked with Blur’s President and Creative Director Tim Miller and Jeff Wilson, animation supervisor. Like so many people in the animation business, the people at Blur are friendly and funny when they’re not being driven by murderous deadlines. On this particular day, the people at Blur were taking an earthquake training course, though Tim mused that it seemed unnecessary to take a course in “running and screaming.”
When it comes to facial animation, Miller is opinionated and outspoken. He levels plenty of criticism at the tools that have been available from Alias, Autodesk, Softimage, and LightWave, saying the Blur team worked arduously on facial expressions, only to end up with animation it considered unworthy of the effort. Wilson was also frustrated, especially when Miller pointed out places where the facial animation wasn’t working, particularly around the jaw and the mouth. “You can hack the eyebrows,” notes Wilson, “but motion capture wasn’t helping with the mouth.” Up to this point, Blur Studio’s animators had been trying to make facial animation work through brute force. Wilson notes that at one point they were using 100 markers to capture facial movement, and still the results were not what they wanted. When they tried morphs, too many steps were required to accomplish movements such as cheeks bulging, eyes opening, and eyebrows wrinkling. Blur was discovering that you can’t model and mocap every single move. Brute force is not practical.
The solution, they were certain, was a matter of getting better software. “We wanted the software to do more of the work,” explained Wilson.
Blur has worked primarily with Autodesk Media and Entertainment’s 3ds Max, and Miller has been a believer in a single pipeline for as much of the production as possible. So Wilson took time off to focus on facial animation, first working with Autodesk Media and Entertainment’s Character Studio. The Autodesk team pitched in to help, but the Blur team wasn’t getting the results it wanted, and the complexity of what it was trying to achieve slowed down the software. Somewhere around this time, the guys from Blur ran into the guys from Softimage’s Special Projects Team. (Venice, California is, after all, a small town, especially if you’re working in computer animation.) And so the collaboration with the Softimage Special Projects Group began.
Just around the corner from Venice’s famous Muscle Beach, in offices that, ironically, were formerly occupied by Arnold Schwarzenegger, the Softimage Special Projects Group tackles customer problems such as creating realistic facial animation. Yes, the heavy lifting for facial animation is now being carried out in Arnold’s former workout room.
In those offices, Michael Isner leads the team working on facial animation, which includes Thomas Kang, Dilip Singh, and Javier von der Pahlen. Isner and von der Pahlen both have backgrounds in architecture and Kang has worked in interface design. Singh is a facial production expert, and he and von der Pahlen both have production experience and a good idea of what customers are going through. They have a larger backup group at Softimage HQ in Montreal.
By the time the Softimage Special Projects Team met Blur, Michael Isner said his team was ready to tackle solving “tens of hundreds of extremely hard problems.”
At first there was some concern about turning to software from Softimage, since Blur’s pipeline has been built around 3ds Max. Tim Miller laughs in retrospect. “These guys were scared to come to me with something from another company.” Miller, a veteran of the complicated problems faced by big studios with spaghetti pipelines, compounded by proprietary tools, vowed that Blur would avoid that situation by sticking to a simplified pipeline, using software tools from one company. But, once he was convinced that the Softimage team could help with the problem of facial animation, he was willing to give it a try. Says Miller: “Hey, I said, we tried. We put in a good faith effort, have at it.”
The relationship between Softimage’s Special Projects Group and Blur is described by both as a super-accelerated beta program and, in fact, Blur’s input helped the Softimage group take its Face Robot facial animation technology to the final stages of product development. Like Wilson, the Softimage team believed that the secret to facial animation was in building a system that understands how faces work, so that the animator could work creatively with less focus on the mechanics of the process. But Isner and Kang both stress that the input from Blur’s animators gave them a critical understanding of how facial animation software needs to work to be truly useful to animators.
Thomas Kang with skull, Gregor vom Scheidt, Michael Isner, Javier von der Pahlen, and Dilip Singh of Softimage in the Special Projects studio near Muscle Beach.
“We want to be in the acting business,” said Isner, who believes Face Robot can open the door to a new community of graphics artists: people who will make facial animation a specialty. Just as there are Inferno artists, people skilled in using Autodesk’s Inferno effects software, there will be Face Robot artists.
Softimage|Face Robot differs from traditional modeling and animation tools, which have evolved as mechanical assemblies that move via software levers and pulleys or along paths. Instead, Face Robot enables soft tissue animation. When an animator grabs a control point and pulls, the face follows the point in a natural way, and the whole face is involved. Grab one side of the mouth and pull up, and you get a sneer. Create a smile, and the cheeks bulge and the eyes crinkle. Kang compares Face Robot to a Google app. “It’s simple on the outside, but there is a lot going on under the hood.”
A common problem in motion capture is the requirement for up to a hundred control points, which can cause lengthy setup times. Face Robot reduces the complexity and time of this process by using only 32 points. The result is a quicker process and better control over editing afterwards. A head created with any 3D modeling tool, for example, can be brought into Face Robot, where the 32 key facial landmarks are selected on the model. Face Robot includes an interface that prompts users to select points such as the corner of the mouth, the corner of the eye, and the center of the eyebrow, and motion capture or keyframe animation can then drive a set of handles optimized to pull the face around like a piece of rubber. Because Face Robot is bundled with the entire programming API, modeling, and character-setup environment of Softimage XSI, studios have a great deal of flexibility to fine-tune the facial system for their own needs. By layering additional rigging over the Face Robot solver, studios can make the system resemble their own internal facial animation processes and interfaces. Face Robot works with motion capture data as well as keyframe animation. It is a superset of Softimage XSI and includes a complete environment for facial animation, with tweaks to the XSI core.
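Face Robot’s actual solver is proprietary, but the basic idea of a few handles dragging an entire soft surface can be sketched with a simple distance-weighted deformer. The following toy example (names and the Gaussian falloff are illustrative assumptions, not Face Robot’s method) moves every vertex of a mesh according to a handful of handle displacements:

```python
import numpy as np

def rbf_deform(vertices, handles, handle_deltas, sigma=1.0):
    """Move every vertex according to a few handle displacements,
    weighted by a Gaussian falloff so the whole surface follows
    smoothly -- a toy stand-in for a soft-tissue solver."""
    vertices = np.asarray(vertices, dtype=float)
    handles = np.asarray(handles, dtype=float)
    handle_deltas = np.asarray(handle_deltas, dtype=float)

    # Distance from every vertex to every handle
    d = np.linalg.norm(vertices[:, None, :] - handles[None, :, :], axis=2)
    # Gaussian falloff weights, normalized per vertex
    w = np.exp(-(d / sigma) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    # Each vertex moves by its weighted blend of the handle motions
    return vertices + w @ handle_deltas

# Two handles: a "mouth corner" pulled upward, and a fixed anchor.
verts   = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
handles = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
deltas  = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0]])
moved = rbf_deform(verts, handles, deltas, sigma=2.0)
# Vertices near the pulled handle follow strongly; distant ones barely move.
```

The point of the sketch is the behavior Wilson describes: pulling one point involves the whole face, with influence falling off smoothly with distance rather than stopping at a rigid boundary.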
At Siggraph 2005, Face Robot was demonstrated by Rock Falcon, a tough-guy character created by Blur Studios. His great dramatic moment comes when he rolls a watermelon seed around in his mouth, positions it, spits it out, and turns to the camera with a satisfied smile. Blur’s Jeff Wilson, wearing mocap markers, supplied the motion for Rock’s big moment. He notes that the little smile at the end was probably involuntary, but it’s a big part of what he likes about Face Robot. “The face stays live.” Wilson notes that one drawback of painstakingly animated faces is that they can go dead and flat between movements. With its underlying network of interconnected vertices, Face Robot does not go dead. In an interview with the XSI community site XSIbase, Wilson notes: “The human face is always moving unless it is very relaxed. Even when ‘hitting’ an expression in real life, the muscles will settle or twitch a bit. There are lots of very subtle adjustments that can happen after you reach an expression, and that’s what makes the performance interesting.”
So while Face Robot has all the capabilities of XSI, it also has its own unique engine for faces. Underneath it all, there is math. Michael Isner says the engine for Face Robot is essentially a solver with a collection of algorithms tackling the problems of facial movement. “We parameterized the problem.” To meet those parameters, however, Face Robot requires a certain level of modeling. In most instances, says Isner, users will bring a head into Face Robot and may have to fine-tune it to meet the expectations of the system’s engine. Face Robot has tools for modeling and sculpting the head beyond what is originally brought into the system. Realistically, Isner says, Face Robot could at first make the lives of animators more difficult as they get used to the initial hurdle of having to create a higher level of facial detail. It will require expertise, but the work put into preparing the head for animation will be rewarded with more flexibility and power when it comes to actually creating animation.
Clearly the effort has paid off for Blur. Jeff Wilson notes that the company’s productivity has skyrocketed. Wilson says that animators at Blur have been able to turn around about one second of animation for every hour of animator time, including setup. It’s a new equation for the company.
The company believes that the ability to animate faces better will also drive new business. Where, in the past, a lot of work went into downplaying the face in animation by focusing instead on the motion of the body to convey most of the performance, now Blur can use the face and take advantage of the emotional impact that faces can provide.
Likewise, it doesn’t take much to get Isner talking excitedly about the potential for facial animation and for Softimage. The ability to create Face Robot comes to a large degree from the painful rewriting process that took Softimage to the next level with XSI. One of the new features of XSI 5 is Gator, which can apply animation from one model to another. This can be especially powerful in the case of facial animation, and especially facial animation created with Face Robot. Performances can be saved and, because of the consistent use of 32 markers, easily transferred between faces. It’s even possible that an actor’s head can be scanned, captured, mo-capped, and re-used for additional footage.
Jeff Wilson notes that working Face Robot into Blur’s workflow has been reasonably simple. In the case of Face Robot, they are easily able to bring in a head modeled in 3ds Max. At that point, though, Blur tries to avoid doing additional modeling in XSI. If a head needs more work, they’ll usually go back out to 3ds Max to do the additional modeling.
Face Robot supports a variety of output formats, allowing users to work with different modeling and animation products for their entire project. Of course, notes Isner, the process is naturally easier when the pipeline is based on XSI.
Just as Blur Studios is enjoying a period where it has an edge over the competition with Face Robot, before the product is officially rolled out, Softimage believes it has an edge with Face Robot. Softimage Vice President Gregor vom Scheidt, former CEO of Alienbrain, who came to Softimage with that product’s acquisition, will be leading the introduction of the new product. Face Robot isn’t going to be thrown to the dogs as a low-cost module. Rather, vom Scheidt says Face Robot is a high-end tool and will carry a premium price. The price tag will be closer to the Softimage of old than the XSI of today. It’s not a product for everyone, and Softimage is not going for volume. Instead, Face Robot will be offered to key customers first. “We want users to have a good experience and we need to be sure we can support them.”
Blur used Face Robot in the animation pipeline when creating the Brothers in Arms television commercials for its client Activision.
Obviously, Face Robot is enjoying a honeymoon. As of this writing only a few people have access to the software, and most of them work for Blur Studios or Softimage. Blur helped develop it, and they do see room for improvement, “especially around the mouth,” says Tim Miller, who apparently says this a lot: everyone standing around him sighs, rolls their eyes just slightly, and gives a resigned nod. It seems clear, though, that Face Robot really will change the face of animation. It could have the effect of democratizing facial animation in the same way new price points have expanded the universe of 3D animators.
ILM use SOFTIMAGE|XSI to score Quidditch points
ILM use SOFTIMAGE|XSI to score Quidditch points in “Harry Potter and the Chamber of Secrets.” With a library of mocap and the XSI animation mixer, these animation legends were able to blend individual moves together into one seamless, broom-flying, heart-racing Quidditch game.
by Michael Abraham
It’s extremely rare that the second installment in a series surpasses its predecessor. In the case of last year’s wildly successful and fantastically fun Harry Potter and the Sorcerer’s Stone, the pressure was definitely on for this year’s follow-up, Harry Potter and the Chamber of Secrets, to blow the roof off a theater near you. Such pressure, as we moviegoers know so well, often leads to the very worst kinds of cinematic disaster.
So it was with profound pleasure, then, that I watched the Chamber of Secrets when it opened last November. Not only did it exceed its predecessor in virtually every area, but the visual effects were more spectacular, exciting and, yes, frightening than any other film I’ve seen. In short, the film was an absolute blast, and for that we can thank, yet again, the extraordinary work of legendary effects facility Industrial Light & Magic (ILM), who have been stunning moviegoers for more than 28 years.
So, while Harry Potter has ILM behind him, ILM has SOFTIMAGE|XSI and SOFTIMAGE|3D on its side.
Anyone for Quidditch?
To create the pulse-racing Quidditch match scenes in Chamber of Secrets, for example, Lead Animator Paul Kavanagh and the visual effects team at ILM relied on the SOFTIMAGE|XSI Animation Mixer to mix digital doubles of the actors with a generic library of Quidditch moves.
“For the second film, we wanted to add an element of motion capture to the Quidditch match,” says Kavanagh. “We built a complete rig with a broom, and then had a rider in full mocap gear sit on it as stagehands moved it around. By doing this, we were able to shoot a variety of different scenarios with the rider kicking, punching and that kind of thing. We gradually built up a library of highly realistic Quidditch moves.”
Having a library of moves was one thing, but Kavanagh quickly found himself yearning for a way to tie various moves together and blend them into one seamless action.
“We quickly realized that, if we wanted to blend the motion capture moves together, SOFTIMAGE|XSI was the only way to go,” says Kavanagh. “We took our SOFTIMAGE|3D scenes and imported them one at a time into SOFTIMAGE|XSI.”
Using the SOFTIMAGE|XSI Animation Mixer, the ILM team were then able to blend the individual moves together into one seamless, heart-racing Quidditch play.
“SOFTIMAGE|XSI made it very easy to chop, cut and paste different pieces of motion and blend between them,” Kavanagh continues. “Once we had a shot that we liked, we were able to pare it down into a compound that was exportable as a SOFTIMAGE|3D anim file. The anim file would then be imported back into SOFTIMAGE|3D and onto one of our character models. The character would then follow the motion capture moves. We were using XSI to perform the animation we wanted, but, when it came time to export, it was only an anim file. There was no geometry or animation controllers involved in the export, just the animation data. It was fantastic, and I don’t think we could have done it any other way.”
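The mixer workflow Kavanagh describes, chopping pieces of motion apart and blending between them, can be sketched as a simple crossfade between two clips of animation curves. This is an illustrative toy under assumed conventions (clips as frames-by-channels arrays, a linear blend ramp), not ILM’s pipeline or the Animation Mixer’s actual implementation:

```python
import numpy as np

def crossfade(clip_a, clip_b, overlap):
    """Blend the tail of clip_a into the head of clip_b over
    `overlap` frames, stitching two mocap moves into one
    continuous action. Clips are (frames, channels) arrays."""
    a = np.asarray(clip_a, dtype=float)
    b = np.asarray(clip_b, dtype=float)
    # Linear blend weights ramping from 0 (all clip_a) to 1 (all clip_b)
    t = np.linspace(0.0, 1.0, overlap)[:, None]
    blended = (1 - t) * a[-overlap:] + t * b[:overlap]
    # Untouched head of a, the blended seam, then the untouched tail of b
    return np.concatenate([a[:-overlap], blended, b[overlap:]])

# Two one-channel "moves": a kick rising to 1.0, a punch rising to 0.5
kick  = np.linspace(0.0, 1.0, 10)[:, None]
punch = np.linspace(0.0, 0.5, 10)[:, None]
seam = crossfade(kick, punch, overlap=4)
# seam has 10 + 10 - 4 = 16 frames, with no hard cut at the join
```

The same idea generalizes to any number of channels per clip, so a whole skeleton’s worth of mocap curves can be blended with one call.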
ILM is now deploying version 3.0 of the SOFTIMAGE|XSI environment on Linux workstations and servers throughout its production pipeline. The story of Harry Potter and SOFTIMAGE, however, is far from over.
Please read on.
Leading by Example
Steve Rawlins admits that he likes to lead by example:
“I love to be involved at the very beginning of a project, when the animation is just starting,” says the veteran animator from his desk at ILM in San Rafael. “An early start on a character gives you a nice ramping-up period and an ideal chance to define exactly who the character will be. If you can establish that early, it’s better for everyone, I think. Otherwise, things can get kind of confusing.”
Rawlins should know. As he embarks on his eighth interesting year at Lucas Digital’s legendary effects company, the Lead Animator on such films as Star Wars Episode II: Attack of the Clones (2002) has played an early role in some seriously large projects. After an initial eighteen months in ILM’s now-closed Commercial Production division, Rawlins moved straight into animating the return of the infamous Jabba the Hutt for Star Wars Episode I: The Phantom Menace (1999). Following work as Lead Animator on the title characters for The Adventures of Rocky and Bullwinkle (2000), Rawlins assumed the same role on Episode II, helping to create Dexter, the bizarre yet entirely appropriate-looking diner owner in the film.
“Listing them like that doesn’t make it sound like I’ve worked on much in the last seven years,” says Rawlins thoughtfully. “They were all long schedules and a lot of work, however. They were all great to work on.”
Not much? I beg to differ. For my money, Harry Potter and the Chamber of Secrets alone might have taken Rawlins’ full ILM tenure to complete (much, much longer, of course, if I’d been working on it, but that’s why Steve’s at the best effects facility in the whole damn world, and I’m in Montreal writing stories about him – but I digress).
Just as he did with Dexter, Rawlins used SOFTIMAGE|3D v. 3.9 to define the animation, and, therefore, the screen character, of Dobby, a brand new character in the continuing Harry Potter story. Short, meek and troublemaking, Dobby is a “house elf,” an overwhelmingly obsequious goblin with a penchant for self-punishment. When he discovers a plot to harm Harry upon his return to school, Dobby, despite his instinctual loyalty to his nefarious masters, is determined to rescue Harry from his apparent fate. The methods Dobby uses to keep Harry from being hurt are as hilarious as they are destructive, as disastrous as they are well intentioned.
It was not long before animators Sue Campbell, Kevin Martell and Steve Aplin joined Rawlins to work on Dobby. By the end of the project, a total of fifteen animators, all working on SOFTIMAGE|3D, came together to bring Dobby completely to life.
“SOFTIMAGE|3D definitely enhanced the efficiency of this project,” says Rawlins with emphasis as he begins to explain why.
When Harry Met Dobby…
While Rawlins began his animation work in early January 2002, Dobby had been conceived and modeled in November and December 2001, just a few weeks following the opening of the inaugural Harry Potter film. Confident of the first film’s impending success, ILM had no time to lose.
Fortunately for Rawlins’ schedule, the first scene shot was Harry’s first meeting with Dobby.
“It worked out really well for me,” says Rawlins. “Because the scene in Harry’s bedroom was the first one shot, an edited version was available to us by the beginning of February. It included somewhere around 22 shots and the finished voice track: everything except Dobby himself. That had the dual advantage of setting the stage for me but still allowing me the freedom to have Dobby move the way I wanted him to.”
Rawlins eased into the project, using SOFTIMAGE|3D to create a simple walk cycle for Dobby, “just to see how he was going to move.” From there, the team chose a single shot on which to test their proposed animations.
“We chose what turned out to be the third shot in the sequence for our reference,” says Rawlins. “Fittingly, it is the very shot where Dobby introduces himself to Harry. It was a great shot to start with, because it provides a good shot of his face, and I was then able to imbue his entire character with the sort of nervousness we wanted to convey. What was great was that Chris Columbus, the director, was able to see Dobby for the first time actually interacting with Harry in the movie. He was very positive.”
When Dobby Hits Dobby…
Once established and approved by Columbus, it was time to make Dobby behave like, well, Dobby. As the conflicted yet kind little being that he is, Dobby is given to feeling profoundly guilty even when he does the right thing. Attempting to assuage that guilt, Dobby spends a good deal of time hitting himself. Both funny and kind of sad, these scenes of self-punishment were some of the most challenging on the film:
“The decision was made early on that Dobby would be created using entirely keyframe animation,” says Rawlins. “It was the right decision, but there is always a tendency with keyframing to make the animation overly ‘cartoony’ in its look. Using SOFTIMAGE|3D, Kevin Martell did a fantastic job of striking a balance between Dobby’s otherworldly look and some sort of realism. When Dobby bashes himself in the head, it isn’t entirely funny. He doesn’t come across as enjoying the pain or just being goofy. It really is rather sad, I think, and the audience feels just a little bit badly for laughing at him.”
When asked if Martell’s accomplishment was aided by SOFTIMAGE|3D, Rawlins is quick to answer in the affirmative:
“When you draw with a pencil and paper, you never have to think about how they work,” he says matter-of-factly. “Because of that, you can focus entirely on your creativity. That’s the way it is with SOFTIMAGE|3D. The tools are all right there and extremely intuitive. You can just grab your model and get to work. You never have to spend time fighting to do something.”
“My favorite feature would have to be the dope sheet,” continues Rawlins thoughtfully. “It lets me play and record basic motions and then layer in different variations quickly and easily. Together with the curve editor, the dope sheet becomes a very powerful combination.”
Just like ILM, SOFTIMAGE and Harry Potter.