Friday Flashback #405

Court Vision. Master Every Angle. XSI 3.0

“When FIFA and NBA were looking at starting
on the next generation of their product, they
decided it was time to look in the marketplace
and evaluate the products that were out there.
So we literally put everyone in a classroom and
ran them through the scenarios that they
were actually going to be doing in production
with a variety of softwares, and, in the end,
we decided that SOFTIMAGE|XSI was
the best way for us to go.”
John Rix
Director of Visual Development
EA Canada

3-D Non-linear Production Environment.
Download the free XSI Experience at:


Friday Flashback #403


by Michael Abraham

Every story needs a beginning. Ask around and most digital artists are more than happy to tell you theirs. Every story is different, of course, and usually interesting in its own way, but most of them inevitably involve New York, Los Angeles, or London. I ain’t complaining, you understand, but you’ll appreciate how my ears pricked up when Sandy Sutherland started telling me his story.

“I got into 3D when I was still living in Kenya,” says Sutherland, now Senior Animator at The Refinery in Cape Town, South Africa.

Getting started in Kenya. Now, that’s a beginning. I mean, how many people can say that? Not many can, according to Sutherland.

“I started out freelancing with my own equipment, which in those days consisted of an old Commodore Amiga equipped with Sculpt Animate. When a local post facility decided to get into high-end animation, they bought one seat of TDI Explore running on a Silicon Graphics 4D 25 and I was hired to drive it. I was really the only person in Kenya with any 3D experience, so that was my big break.”

And, of course, it is from big breaks that great careers grow. It was Sutherland’s success in Kenya that brought him here to Cape Town – and to a crucial position at one of South Africa’s largest post-production facilities – nearly six years ago. Specializing in high-end television commercials for both the African and international markets, The Refinery is based in Johannesburg, but Sutherland calls Cape Town home. He is also quick to praise SOFTIMAGE®|XSI™ v1.5, his system of choice for 3D character animation. When asked how SOFTIMAGE|XSI fits into The Refinery’s production pipeline, his answer is simple and succinct.

“XSI is our production pipeline!” says Sutherland emphatically. “It simply has everything we need for the type of work we do. The Render Tree, the Animation Mixer and the vastly improved modeling tools, which in version 1.5 cover every style of modeling, from basic polygonals to complex subdivisions and very strong NURBS tools: it’s all there.”

Sutherland’s fervor for the latest version of SOFTIMAGE|XSI is such that he can speak with enthusiastic authority about virtually every new feature.

“The entire workflow is greatly improved in the latest version,” Sutherland explains. “I can now spend more time on animation and, thanks to the Custom Parameter sets, almost no time on character or system setups. All the setups that I used to have to do using sliders are a thing of the past. Softimage really seems to have been watching how people work and have responded with a well-thought out system.”

Singled out for special praise, however, is the SOFTIMAGE|XSI Animation Mixer: “The Animation Mixer has allowed me to experiment more,” explains Sutherland. “Now, I can mix different animations until I have what I want, all without destroying animations that were close, but not quite there! What I like most about the Animation Mixer is that it functions much like a nonlinear editor: you can drop a section of footage or an animation onto the timeline to mix it into another piece. That simplicity lets me focus on particular pieces instead of the whole character, then add the bits into one using the Animation Mixer tools, all of which makes things more flexible and efficient. Weighting was a nightmare for me before, but now, with tools like weight painting, things are far easier. I have used the new weighting tools recently, and I love them!”

Nowhere were the benefits of the Animation Mixer more apparent than on a recent project for Bubbaloo Chewing Gum, a long and particularly challenging project for Sutherland and The Refinery.

“With the Bubbaloo cat character that we’ve been focusing on most recently, we had some animation on which another of our animators had worked. With an earlier setup version, it would have been far more difficult to bring the animation across to another system but, thanks to the toolset in SOFTIMAGE|XSI, it was a breeze to transfer and reuse. More than that, one of the scenes of the Bubbaloo cat included some motion capture work. When the clients saw it initially, they didn’t care for some of the larger motions we’d used, so we used the Animation Mixer to maintain the basic motion capture look, while softening the motion to the client’s satisfaction.”

In a business where clients are rarely, if ever, satisfied the first time around, Sutherland sees SOFTIMAGE|XSI’s animation editing tools as a godsend. “The non-destructive philosophy of SOFTIMAGE|XSI has been great for me so far,” elaborates Sutherland. “We always have clients who make certain creative decisions, then reject them and want to return to an earlier version. Now, all we have to do is change the strength of the sliders in the Mixer, rather than having to search through previously saved files in order to find the ‘previously preferred’ animation. The immediate updating of parameters in the Render Region also speeds things up immensely. I was working with the director of the Bubbaloo spot recently, and he asked if we could change the color and tone of the lights, as he had gone for a golden tone on his footage in telecine. I put some footage in the Rotoscope Camera, and using the render region we were able to quickly and easily adjust my lights to match the footage. The director was amazed, mainly because he was expecting to have to return later to see the changes. I was able to show him right away.”

Sutherland concludes: “We’ve found XSI’s subdivisional surfaces provide an absolutely wonderful turnaround in our modeling methods. We used to model exclusively using NURBS; now, we almost always use ‘subdees’. We use XSI pretty much exclusively now.”

Friday Flashback #393

Of Mice and Models

By Karen Moltenbrey

A series of UK television commercials for Aero candy stars some extremely talented digital mice who sing, dance, and even hula-hoop. Despite their efforts, the mice fail to impress chocolate buyers and scurry away dejected. The ad campaign began with a 30-second commercial featuring one hula-hooping mouse, who is joined in a second spot by dozens of others, including one that sings Chinese opera and performs an elaborate, oriental-style dance. The most recent spot, which is expected to air soon, contains a group of mice performing a complicated Irish step dance.

Creating these gifted photorealistic creatures and giving them naturalistic movements required a similar amount of talent and ingenuity on the part of the modelers and animators at digital effects house Glassworks and animation production company Passion Pictures, both located in London. According to Alastair Hearsum, head of 3D at Glassworks, the computer-generated mouse in the film Stuart Little set a high standard against which the audience would judge the Glassworks character. For its Aero mouse, though, Glassworks chose a different tack by dispensing with the character’s clothes and opting for a more realistic approach compared to the stylized Stuart in terms of appearance and movement.

“The challenge was to create a mouse that the audience believed was real,” says Hearsum. “It had to move like an actual mouse, and have realistic proportions and features.”

The 3D animations of the mice were inspired by 2D pencil drawings from Passion’s visual effects director Chris Knott and animation directors Tim Watts and Alyson Hamilton, who had collected various mouse images before making character drawings. The directors worked closely with Hearsum to produce the convincing personalities of the mice. Glassworks then constructed the mice models, added realistic fur, and animated them to move under Passion’s guidance and direction within a live-action environment.

“There are no complicated rigging setups, layered expressions, or complex shaders with the exception of the fur,” notes Hearsum. “But that’s not to say that the project wasn’t challenging. One of the more difficult aspects was getting the mice to look sweet and move in simplistic yet charming ways.” This was accomplished, in part, by using a fairly blank facial expression for all the creatures, so they wouldn’t look anthropomorphic, despite their humanlike actions.
Digital effects house Glassworks and animation production company Passion Pictures teamed up to create a series of television commercials for Aero candy in the UK that feature dozens of realistic mice performing some amazing but unrealistic stunts.
Mouse Modeling
Whether a scene has one mouse or a hundred, each character originated from the same model, which Hearsum built with separate NURBS patches using Softimage|3D running on an SGI Octane system. He then constructed a complex skeletal structure for the model, whereby the skeletal parts that were controlled by inverse kinematics (IK), such as the arms and legs, would be limited to two connecting joints. With this structure, the shoulders were not part of the arms, but linked to them as parents rather than children in the chain. That setup, combined with the vector constraints, gave the group total control over the movement of the arm and shoulder, for example, and eliminated rotational or other range-of-motion issues that can occur when multiple joints are interconnected within the bone structure.

To achieve the intricate and realistic foot animation in some of the scenes, Hearsum set up the root of the foot’s bone structure at the bottom of the foot rather than at the ankle. This made it easier to keep the ball of the foot stationary when the animators stood the mouse on its tiptoes. Specifically, the foot’s main bone structure contained one joint with the root positioned near the toes, and the effector (the endpoint in the chain that poses the rest of the chain) positioned at the heel and ankle areas. The leg effectors, meanwhile, were constrained to the effector of the foot joint, and the middle three toe bones were constrained to the root of the main foot bone. “Therefore, when the mouse stood up on its toes and its foot was bent, the toes remained flat to the ground,” explains Hearsum.
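
The foot setup Hearsum describes can be sketched in a few lines of Python. This is a simplified 2D illustration of the constraint idea only, not Softimage's rigging API; the names, angles, and dimensions are hypothetical:

```python
import math

# Simplified 2D sketch (x = forward, y = up) of the foot rig described above.
# The root of the foot chain sits near the toes, on the ground; the effector
# sits at the heel. Helper names here are illustrative, not Softimage calls.

FOOT_LENGTH = 3.0          # hypothetical heel-to-toe distance
root = (0.0, 0.0)          # chain root near the toes, resting on the ground

def heel_effector(pitch_deg):
    """Heel position when the foot pitches up around the toe-side root."""
    a = math.radians(pitch_deg)
    return (root[0] - FOOT_LENGTH * math.cos(a),
            root[1] + FOOT_LENGTH * math.sin(a))

# The leg effectors are constrained to the foot effector, so the leg follows
# the heel; the toe bones are constrained to the root, so they stay planted.
flat_heel   = heel_effector(0.0)    # foot flat on the ground
tiptoe_heel = heel_effector(40.0)   # mouse stands up on its tiptoes

toes = root  # constrained to the root, so unaffected by the foot's pitch
print("heel (flat):  ", flat_heel)
print("heel (tiptoe):", tiptoe_heel)
print("toes (tiptoe):", toes)      # still flat to the ground (y == 0.0)
```

Rooting the chain at the toes rather than the ankle is what makes the tiptoe pose trivial: the pivot is exactly where the contact should be preserved.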

Each mouse was constructed with (left to right) NURBS patches, a skeletal structure, a combining of the two, FurShader texturing, then various levels of shading and texturing. The finished mouse as it appears in a commercial is shown below (bottom image).

For skinning the model, the artist used Softimage’s binding box feature, which automatically includes or excludes the points associated with a particular joint that falls within the box. “They are a pain to set up, but by using them, I was able to reskin the mouse in a matter of a few minutes,” says Hearsum. “Early in the process, the mouse was in a constant state of flux, especially the facial region. Using the binding boxes enabled me to quickly and easily accommodate changes from the client.”

Fur Real

The modeling efforts only scratched the surface when it came to achieving a photorealistic appearance for the mice. The key element was the hair, created in Glassworks’ FurShader, a combination Softimage|3D plug-in/Mental Ray shader that has been in development at the studio for the past three years. It was first employed for models of spiders appearing in TV commercials for other brands. The software provided wireframe control hairs, and by directly manipulating the control hairs through their individual IK, the group was able to style the fur to create a different look for each mouse appearing in the series.

“For instance, FurShader enabled us to give the opera-singing mouse a more feminine demeanor by making its coat fluffier,” says Hearsum.

During the rendering process, FurShader fills in the gaps between the control hairs with a specified number of hairs. On average, each mouse had roughly 500,000 hairs. The software also controls the tip and root color and the thickness of individual strands, enabling the artists to create a unique appearance for many of the characters. “Its own antialiasing eliminates any ‘buzzing’ problems that can result when you have a lot of fine detail in a frame, which can interfere with the scan lines of the TV,” says Hearsum. Furthermore, each hair is fully raytraced and integrated into the scene during rendering rather than postproduction. As a result, the hairs cast and reflect shadows. “It truly made a difference in the realism of each shot,” he adds.
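
The fill-in step described above, generating many rendered hairs between sparse styled control hairs, can be sketched roughly as follows. The data layout, blending scheme, and function names are assumptions for illustration; FurShader's actual implementation is proprietary:

```python
import random

# Illustrative sketch of interpolating render hairs between two control
# hairs, blending root/tip positions and per-strand thickness. All names
# and values here are hypothetical, not FurShader's real data model.

def lerp(a, b, t):
    """Linearly interpolate between two same-length tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def fill_hairs(control_a, control_b, count, rng=random.Random(0)):
    """Generate `count` in-between hairs from two control hairs."""
    hairs = []
    for _ in range(count):
        t = rng.random()  # where this strand falls between the controls
        hairs.append({
            "root":      lerp(control_a["root"], control_b["root"], t),
            "tip":       lerp(control_a["tip"],  control_b["tip"],  t),
            "thickness": (control_a["thickness"] * (1 - t)
                          + control_b["thickness"] * t),
        })
    return hairs

a = {"root": (0, 0, 0), "tip": (0, 1, 0.2), "thickness": 0.02}
b = {"root": (1, 0, 0), "tip": (1, 1, 0.1), "thickness": 0.01}
coat = fill_hairs(a, b, 500)   # the article cites ~500,000 strands per mouse
print(len(coat), "hairs generated")
```

Only the sparse control hairs need styling by hand; the dense coat is derived at render time, which is what made restyling each mouse practical.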

For the original commercial, the artists created a single mouse using Softimage and Glassworks’ own FurShader software. The numerous mice appearing in the subsequent spots were all generated from the original model, but given slightly different features.

As a final touch, Hearsum used a great deal of motion blurring. This helped to seamlessly integrate the models into their respective live-action scenes, which was done using Discreet’s inferno. “The mice are moving quickly in the commercials, and we used motion blur so they wouldn’t look computer-generated next to the live actors and scenery,” he explains.

It’s a Wrap

Even with the mouse “cloning” process, it took Hearsum’s team several weeks to generate the numerous characters and their complex movements for the subsequent commercials. “There was more work in the first scene of the second commercial, which had five mice, than in the entire first commercial, which had just one mouse in each scene,” says Hearsum.

Although arduous, bringing the CG mice to realistic virtual life was rewarding for those at Glassworks and Passion Pictures. In fact, the studios won accolades from industry peers for their work on the initial project, receiving two British Television Advertising Awards for Best Animation and Best Computer Animation.

Karen Moltenbrey is a senior associate editor for Computer Graphics World.

Friday Flashback #313

Fantastic Faces

Rock Falcon, the poster boy for Softimage’s Face Robot technology, takes digital acting to new heights with expressive facial animation.

The problem is a classic one for animators, many of whom have fallen back on the argument that there is no place for hyper-realistic human animation. When it comes to animating humans, it’s better to opt for more stylized faces so the viewer doesn’t get distracted. And certainly there is a whole beautiful body of work that supports this point, including Disney’s Snow White, Hayao Miyazaki’s Spirited Away, and Pixar’s melding of classic squash-and-stretch with 3D in The Incredibles. But then an animated character like Gollum comes along, a combination of talented acting by Andy Serkis and stellar animation by at least 18 animators working for Weta. The bar is moved.

At Blur Studios, in Venice Beach, CA, the quest for good facial animation is close to an obsession. However, Blur is not the kind of studio that puts an army of technicians to work on specialized software. Rather, Blur prides itself on turning out high-quality 3D animation on time and on budget. Its body of work includes mischievous animated critters such as the Academy Award nominee Gopher Broke and plenty of human character animation for cinematics in games such as X-Men Legends II: Rise of Apocalypse.

Softimage’s Face Robot technology is being utilized for many projects at Blur. Most recently, it was used to complete a series of game cinematics for X-Men Legends II.

On a recent visit with Blur Studios, we talked with Blur’s President and Creative Director Tim Miller and Jeff Wilson, animation supervisor. Like so many people in the animation business, the people at Blur are friendly and funny when they’re not being driven by murderous deadlines. On this particular day, the people at Blur were taking an earthquake training course, though Tim mused that it seemed unnecessary to take a course in “running and screaming.”

When it comes to facial animation, Miller is opinionated and outspoken. He levels plenty of criticism at the tools that have been available from Alias, Autodesk, Softimage, and LightWave, saying the Blur team worked arduously on facial expressions, only to end up with animation they considered unworthy of their efforts. Wilson, too, was frustrated, especially when Miller pointed out places where the facial animation wasn’t working, particularly around the jaw and the mouth. “You can hack the eyebrows,” notes Wilson, “but motion capture wasn’t helping with the mouth.” Up to this point, Blur Studio’s animators were trying to make facial animation work through brute force. Wilson notes that at one point they were using up to 100 markers to capture facial movement, and still the results were not what they wanted. When trying to use morphs, too many steps were required to accomplish the movements of cheeks bulging, eyes opening, and eyebrows wrinkling. Blur was discovering that you can’t model and mocap every single move. Brute force is not practical.

The solution, they were certain, was a matter of getting better software. “We wanted the software to do more of the work,” explained Wilson.

Blur has worked primarily with Autodesk Media and Entertainment’s 3ds max, and Miller has been a believer in a single pipeline for as much of the production as possible. So Wilson took time off to focus on facial animation, first working with Autodesk Media and Entertainment’s Character Studio. The Autodesk team pitched in to help, but the Blur team wasn’t getting the results it wanted, and the complexity of what they wanted to achieve slowed down the software. Somewhere around this time, the guys from Blur ran into the guys from Softimage’s Special Projects Team. (Venice, California is, after all, a small town, especially if you’re working in computer animation.) And the Softimage Special Projects Group was formed.

Just around the corner from Venice’s famous Muscle Beach, in offices that, ironically, were formerly occupied by Arnold Schwarzenegger, the Softimage Special Projects Group tackles customer problems such as creating realistic facial animation. Yes, the heavy lifting for facial animation is now being carried out in Arnold’s former workout room.

In those offices, Michael Isner leads the team working on facial animation, which includes Thomas Kang, Dilip Singh, and Javier von der Pahlen. Isner and von der Pahlen both have backgrounds in architecture and Kang has worked in interface design. Singh is a facial production expert, and he and von der Pahlen both have production experience and a good idea of what customers are going through. They have a larger backup group at Softimage HQ in Montreal.

By the time the Softimage Special Projects Team met Blur, Michael Isner said his team was ready to tackle solving “tens of hundreds of extremely hard problems.”

At first there was some concern about turning to software from Softimage since Blur’s pipeline has been built around 3ds max. Tim Miller laughs in retrospect. “These guys were scared to come to me with something from another company.” Miller, a veteran of the complicated problems faced by big studios with spaghetti pipelines, compounded by proprietary tools, vowed that Blur would avoid that situation by sticking to a simplified pipeline, using software tools from one company. But, once he was convinced that the Softimage team could help with the problem of facial animation, he was willing to give it a try. Says Miller: “Hey, I said, we tried. We put in a good faith effort, have at it.”

The relationship between Softimage’s Special Projects Group and Blur is described by both as a super-accelerated beta program and, in fact, Blur’s input helped the Softimage group take its Face Robot facial animation technology to the final stages of product development. Like Wilson, the Softimage team believed that the secret to facial animation was in building a system that understands how faces work, so that the animator could work creatively with less focus on the mechanics of the process. But Isner and Kang both stress that the input from Blur’s animators gave them a critical understanding of how facial animation software needs to work to be truly useful to animators.

Thomas Kang with skull, Gregor vom Scheidt, Michael Isner, Javier von der Pahlen, and Dilip Singh of Softimage in the Special Projects studio near Muscle Beach.


“We want to be in the acting business,” said Isner, who believes Face Robot can open the door to a new community of graphics artists: people who will make facial animation a specialty. And, just as there are Inferno artists, people skilled in using Autodesk’s Inferno effects software, there will be Face Robot artists.

Softimage|Face Robot differs from traditional modeling and animation tools, which have evolved as mechanical assemblies that move via software levers and pulleys or along paths. Instead, Face Robot enables soft-tissue animation. When an animator grabs a control point and pulls, the face follows the point in a natural way, and the whole face is involved. Grab one side of the mouth and pull up, and you get a sneer. Create a smile, and the cheeks bulge and the eyes crinkle. Kang compares Face Robot to a Google app. “It’s simple on the outside, but there is a lot going on under the hood.”

A common problem in motion capture is the requirement for up to a hundred control points, which can cause lengthy setup times. Face Robot reduces the complexity and time of this process by using only 32 points. The result is a quicker process and better control over editing afterwards. A head created with any 3D modeling tool, for example, can be brought into Face Robot, where the 32 key facial landmarks are selected on the model. Face Robot includes an interface that prompts users to select points such as the corner of the mouth, the corner of the eye, and the center of the eyebrow, and motion capture or keyframe animation can then be used to drive a new set of handles that are optimized to pull the face around like a piece of rubber. Because it is bundled with the entire programming API, modeling, and character-setup environment of Softimage|XSI, it gives studios a great deal of flexibility to fine-tune the facial system for their own needs. By incorporating additional rigging on top of the Face Robot solver, studios can make the system resemble their own internal facial animation processes and interfaces. Face Robot also works with motion capture data, as well as keyframe animation. It is a superset of Softimage|XSI and includes a complete environment for facial animation, with tweaks to the XSI core.
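
The landmark-picking step might be pictured with a sketch like the following; the landmark names and vertex ids are made up for illustration and are not Face Robot's real landmark list or API:

```python
# Sketch of the prompted landmark selection described above: the user picks
# a vertex on the imported head for each named landmark, and the resulting
# mapping seeds the solver's handles. Names and ids here are hypothetical.

LANDMARKS = [
    "mouth_corner_L", "mouth_corner_R",
    "eye_corner_inner_L", "eye_corner_inner_R",
    "eyebrow_center_L", "eyebrow_center_R",
    # ...the remaining landmarks would follow, 32 in total
]

def pick_landmarks(names, picked_vertices):
    """Map each prompted landmark name to the vertex id the user chose."""
    if len(picked_vertices) != len(names):
        raise ValueError("every prompted landmark needs exactly one vertex")
    return dict(zip(names, picked_vertices))

# Hypothetical vertex ids clicked on the model, in prompt order.
rig_input = pick_landmarks(LANDMARKS, [1021, 1088, 455, 472, 230, 247])
print(rig_input["mouth_corner_L"])  # → 1021
```

Because every face is tagged with the same fixed set of named landmarks, performances captured on one head can be retargeted to another, which is what makes the transfer described later in the article possible.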

At Siggraph 2005, Face Robot was demonstrated by Rock Falcon, a tough-guy character created by Blur Studios. His great dramatic moment comes when he rolls a watermelon seed around in his mouth, positions it, spits it out, and turns to the camera with a satisfied smile. Blur’s Jeff Wilson, wearing mocap markers, supplied the motion for Rock’s big moment. He notes that the little smile at the end was probably involuntary, but it’s a big part of what he likes about Face Robot. “The face stays live.” Wilson notes that one drawback of painstakingly animated faces is that they can go dead and flat between movements. With its underlying network of interconnected vertices, Face Robot does not go dead. In an interview with Softimage’s site XSI-Base, Wilson notes, “The human face is always moving unless it is very relaxed. Even when ‘hitting’ an expression in real life, the muscles will settle or twitch a bit. There are lots of very subtle adjustments that can happen after you reach an expression, and that’s what makes the performance interesting.”

So while Face Robot has all the capabilities of XSI, it also has its own unique engine for faces. Underneath it all, there is math. Michael Isner says the engine for Face Robot is essentially a solver with a collection of algorithms tackling the problems of facial movement. “We parameterized the problem.” To meet those parameters, however, Face Robot requires a certain level of modeling. In most instances, says Isner, users will bring a head into Face Robot and may have to fine-tune it to meet the expectations of the system’s engine. Face Robot has tools for modeling and sculpting the head beyond what is originally brought into the system. Being realistic, Isner says Face Robot could make the lives of animators more difficult as they get used to the initial hurdle of having to create a higher level of facial detail. It will require expertise, but the work put into preparing the head for animation will be rewarded with more flexibility, and power, when it comes to actually creating animation.


Clearly the effort has paid off for Blur. Jeff Wilson notes that the company’s productivity has skyrocketed. Wilson says that animators at Blur have been able to turn around about one second of animation for every hour of animator time, including setup. It’s a new equation for the company.

The company believes that the ability to animate faces better will also drive new business. Where, in the past, a lot of work went into downplaying the face in animation by focusing, instead, on the motion of the body to convey most of the performance, now they could use the face and take advantage of the emotional impact that faces can provide.

Likewise, it doesn’t take much to get Isner talking excitedly about the potential for facial animation and for Softimage. The ability to create Face Robot comes to a large degree from the painful rewriting process that took Softimage to the next level with XSI. One of the new features of XSI 5 is Gator, the ability to apply animation from one model to another. This can be especially powerful in the case of facial animation, and especially facial animation created with Face Robot. Performances can be saved and because of the consistent use of 32 markers, they can be easily transferred between faces. It’s even possible that an actor’s head can be scanned, captured, mo-capped, and re-used for additional footage.

Jeff Wilson notes that working Face Robot into Blur’s workflow has been reasonably simple. In the case of Face Robot, they are easily able to bring in a head modeled in 3ds max. At that point, though, Blur tries to avoid doing additional modeling in XSI. If a head needs more work, they’ll usually go back out to 3ds max to do the additional modeling.

Face Robot supports a variety of output formats, allowing users to work with different modeling and animation products for their entire project. Of course, notes Isner, the process would naturally be easier when the pipeline is based on XSI.

Just as Blur Studios is enjoying a period where it has an edge over the competition with Face Robot, before the product is officially rolled out, Softimage believes it has an edge with Face Robot. Softimage Vice President Gregor vom Scheidt, former CEO of Alienbrain, who came to Softimage with Alienbrain’s acquisition, will be leading the introduction of the new product. Face Robot isn’t going to be thrown to the dogs as a low-cost module. Rather, vom Scheidt says, Face Robot is a high-end tool and will carry a premium price. The price tag will be closer to the Softimage of old than the XSI of today. It’s not a product for everyone, and Softimage is not going for volume. Instead, Face Robot will be offered to key customers first. “We want users to have a good experience and we need to be sure we can support them.”

Blur used Face Robot in the animation pipeline when creating the Brothers in Arms television commercials for its client Activision.


Obviously, Face Robot is enjoying a honeymoon. As of this writing, only a few people have access to the software, and most of them work for Blur Studios or Softimage. Blur helped develop it, and they do see room for improvement, “especially around the mouth,” says Tim Miller, who apparently says this a lot, because everyone standing around him will sigh, roll their eyes just slightly, and give a resigned nod. It seems clear, though, that Face Robot really will change the face of animation. It could have the effect of democratizing facial animation in the same way new price points have expanded the universe of 3D animators.