Friday Flashback #335


Interview with Joseph Kasparian
Textures & Lighting Lead on 300 at Hybride, Joseph Kasparian talks about creating 540 visual effects shots, the production process, and working on movies shot entirely on green screen.
March 8th, 2007, by Raffael Dickreuter

Joseph Kasparian, Textures &
Lighting Lead at Hybride.

Tell us how and why you got started in the cg industry?
When I was 18, I dreamed of becoming the lead guitarist of a big band that would tour around the world, or becoming a professional skateboarder. For some enigmatic reason, I ended up in Finance at the University of Montreal (HEC). But the day I saw the T-1000 in Terminator 2, I realized what I really wanted to do for a living, and that was Special Effects.

In 1992, a close friend of mine introduced me to the world of computer graphics. I realized that the possibilities were endless. I kept on studying full time in Finance but was spending 6 to 8 hours a day learning 3D at home. It was the biggest hobby I ever had.

When Jurassic Park came out, I heard the professional software used for the dinosaurs was Softimage 3D and that it was made in my hometown, Montreal. My dream of working in that field was more possible than ever. As soon as I got my degree in 1996, I took a specialized course in 3D animation at the NAD Centre. Once I finished, I got a job at Hybride in 1997 as a 3D animator. It’s been 10 years now.

What do you like to do in your spare time?
From what I remember, I used to play guitar, mountain bike and snowboard. But now I have a 4-year-old son and I’m trying to spend most of my spare time with him and my wife. So the correct answer would be that I play with Legos and Transformers and do my best not to miss the incredible kids’ channel shows.

Tell us about your responsibilities on 300 as Textures and Lighting Lead
As the Textures and Lighting Lead, I have to evaluate the complexity of each sequence with the supervisors. I establish procedures to speed up the artists’ work, which includes constant R&D on new techniques to texture and light scenes. I make sure the outputs of my team suit the compositors’ needs. I guide my colleagues technically so that the art direction is followed and the composition stays in place. And finally, I take care of delivering some of the more complex shots.

What were the biggest challenges in order to deliver the desired look on this production?
We had to transform the very stylistic look of the renowned American comic book author into film: silhouetted images, painted skies with brush-stroke effects, contrasting colors, charcoal blacks, and so on. For the environments we created, each structure had to be an obvious part of Frank Miller’s surreal world before being photoreal. The Hot Gates walls and skies were probably the most important aspect to define before starting production. It was crucial that each vendor respected the exact same look. To help that process, we received detailed documentation and concept art about the look we had to achieve.

The wolf is another good example of the style they were looking for: a surreal beast in a photoreal environment developed with an artistic touch. In the graphic novel, the wolf appears only as a huge black silhouette with red eyes. We kept those elements and integrated skin, muscles and hair to go a few steps closer to the real world.


What custom tools or techniques were used especially in the area of lighting?
Many lighting techniques were used depending on the location. At first, we had to rely on precise layouts. Once we had the client’s approval, and depending on the number of shots, we chose to move forward either with regular Textures and Lighting techniques or with matte paintings.

We created high dynamic range environments for each set. The effort on textures was critical to achieving the movie’s style. In fact, we had to reduce the contrast levels on the 3D side and bring them back with the live footage in the final comp. The combination of flat textures with dynamic lighting helped us deliver images that allowed great flexibility for extensive color-correction sessions.

The textures created were either a mixture of procedurals done in DarkTree and called within XSI, or a good use of quality pictures taken from the actual set.
The matte paintings were developed over shaded scenes lit using final gathering or light rigs.

For many locations, specific light rigs were built at a very early stage, and we used internal scripts to automate the creation of the passes required by the compositing department.
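
As an illustration only (this is not Hybride’s internal tooling and does not use the XSI scripting API; the pass list and all names below are assumptions), a pass-automation script of the kind described above essentially enumerates, for each light in a rig, the set of output layers the compositors expect:

# Hypothetical sketch: generate one pass definition per light and per output type.
from dataclasses import dataclass

# Assumed output layers; a real pass list would come from the compositing department.
PASS_TYPES = ["beauty", "diffuse", "specular", "shadow", "depth", "matte"]

@dataclass
class RenderPass:
    name: str    # e.g. "HotGates_keyLight_specular"
    light: str   # light (or rig element) isolated in this pass
    output: str  # frame path the compositors will pick up

def build_passes(location, lights, out_root):
    """Create one pass definition per light and per output type."""
    passes = []
    for light in lights:
        for ptype in PASS_TYPES:
            name = "%s_%s_%s" % (location, light, ptype)
            path = "%s/%s/%s/%s.exr" % (out_root, location, ptype, name)
            passes.append(RenderPass(name, light, path))
    return passes

if __name__ == "__main__":
    for p in build_passes("HotGates", ["keyLight", "skyDome", "bounce"], "/renders"):
        print(p.name, "->", p.output)

In a production setup, the same loop would call the 3D package’s pass-creation commands instead of just printing a manifest.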

What inspired the artists for this kind of look, besides the comic book?
Sin City and Sky Captain and the World of Tomorrow were without a doubt a major influence, because they allowed us to work on very stylized movies that were entirely shot on green screen. We gained great confidence and expertise with that kind of movie.

One of the biggest sources of inspiration was the artwork sent by the production. It definitely captured the essence of the comic book and gave us a great head start.
The detailed and very technical documents produced by the art department were essential to properly define each location and to develop the appropriate look for the backgrounds and skies. Each section was defined by a specific color palette, to the great pleasure of the artists.


How long was the production schedule for the team at Hybride?
Hybride produced 540 visual effects shots for a total of 45 minutes. A total of 95 Hybride employees worked on the project for 16 months. We were 45 in the 3D department, 35 in the compositing department and 15 in administration and technical support.

We had a wide variety of shots to do:

  • Animation: wolf, warriors (Spartans, Persians, Free Greeks, Immortals), troops animation (thousands of warriors), whip, banners, swords, weapons, spears, axes, etc…
  • Virtual Environments: backgrounds for all of our scenes, mountains, cliffs, plains, valleys, winter sceneries, oceans, skies (clouds, lightning, moon), etc.
  • Particles: Snow, embers, fire, rain, blood, smoke, dust, dirt.

Was the film entirely shot on blue screen, or to what extent is the film live action and to what extent CG?
Having shot the film entirely on a blue-screen background, each step involved in post-production required a considerable amount of work. The live footage included the heroes and the ground. Everything else was CG.

How was the live-action footage processed so that it not only blended with the CG elements but at the same time looked stylized?
We had to start by matching the 3D environment to the live footage so that they appeared to have been shot together. Once the integration was done, we stylized the whole image with the help of different passes (mattes, depth fading, etc.). The extensive use of color correction definitely eased the integration of the live footage with the CG elements.
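
Purely as an illustration of this kind of pass-based treatment (assuming a simple linear fog model and an arbitrary grade, not Hybride’s actual recipe), a minimal comp over plate, matte and depth passes might look like this:

# Toy sketch: comp a CG element over the plate with its matte, fade it toward an
# atmosphere colour using a depth pass, then apply a crude stylizing grade.
import numpy as np

def depth_fade(rgb, depth, fog_color, near, far):
    """Blend toward fog_color as depth goes from near to far."""
    t = np.clip((depth - near) / (far - near), 0.0, 1.0)[..., None]
    return rgb * (1.0 - t) + np.asarray(fog_color) * t

def comp_and_grade(plate, cg_rgb, cg_matte, cg_depth):
    # Fade the CG element into the painted sky before compositing.
    cg = depth_fade(cg_rgb, cg_depth, fog_color=(0.55, 0.5, 0.4), near=50.0, far=2000.0)
    # Straight "over" using the CG matte.
    m = cg_matte[..., None]
    out = cg * m + plate * (1.0 - m)
    # Crude stylized grade: crush blacks, lift contrast, warm the midtones.
    out = np.clip((out - 0.06) * 1.35, 0.0, 1.0) ** 1.1
    return out * np.array([1.05, 1.0, 0.9])

# Example with random arrays standing in for real render passes:
h, w = 4, 4
plate = np.random.rand(h, w, 3)
result = comp_and_grade(plate, np.random.rand(h, w, 3), np.random.rand(h, w), np.random.rand(h, w) * 3000)
print(result.shape)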

How was XSI used in this production? 
XSI was used to generate almost every 3D element. The software was very efficient for creating complex environments, doing character animation and managing army scenes with more than 100,000 soldiers.

Which features were especially useful?
The ability to build solid output pipelines with the creation of customizable passes was without any question a very important feature in XSI.

These features were also very useful:
1. GATOR, to transfer any surface attributes.
2. Ultimapper, to grab normal maps and occlusion maps.
3. The Render Tree, to build complex shaders.
4. The FX Tree, to do practically everything with image clips.

Which areas of the software should be improved?
The procedurals could be improved in XSI. More than ever, we are asked to generate full CG environments, and procedurals are very efficient for that need. It would also be nice to improve the Texture Editor with some tools such as UVLayout’s flatten tool and DeepUV’s relax tool. And since Pixologic’s ZBrush is now used everywhere, it would be interesting to include a ZTool reader in XSI. That would allow us to interactively change mesh resolution in a different way than with referenced models.

Do you think we will see more movies in the future like this and what possibilities do you see in this kind of filmmaking?
More movies are being shot entirely on green screen. The director can do what he wants with each shot. I think this way of filming will become a standard. The big advantage is the infinite creative freedom that lasts long after shooting is finished. For 300, the graphic quality quickly takes precedence over the complex and technical aspects of generating the images. This allows the audience to plunge into Frank Miller’s fantasy world. It is obvious that a new trend of film is emerging, a trend that joins more than ever the art of telling a story with the art of drawing that story.

Is there anything you would like to say to the rest of the cg community?
Nothing is more thrilling than working in a field in constant explosion. Not evolution, explosion. Each year, I’m overwhelmed by the images produced by the movie and game industries. All technical borders are falling apart. The tools are more efficient and the artists more talented. There is no doubt that everything that made this profession exciting ten years ago is more true than ever, and I’m delighted to be part of it.

 

Hybride near Montreal, Canada.


 

Friday Flashback #317


Interview With Michael Arias
Michael Arias works for the Softimage special projects team and talks about creating The Animatrix movies, the industry and the fusion of 2D & 3D techniques.

September 26th, 2003, by Raffael Dickreuter, Bernard Lebel, Will Mendez

 

Tell us a bit about your background and how you got started in the 3d industry
It always seems to me that my career’s taken many bizarre turns along the way. I started out in the film industry – in 1987 or ’88 – at a company called Dream Quest Images – later reincarnated as Disney’s Secret Lab, and now defunct, as far as I know. I was just looking for something that would make use of my electronics skills and keep me busy while my band sorted out its various personnel problems. After a little while there, I ended up helping debug Dream Quest’s new motion control system, and then being chosen to work on the stages as an assistant, since I was one of the few people who knew their way around the system. Those were great days for the effects industry – it was before the advent of much CG and people were doing tremendous things with motion control, optical effects, miniatures, pyro, whatever. And the studio was a great place for me – still 19 or 20 – a big tinkertoy factory run by car nuts and mad bikers. At DQ I got to work on some great films too – THE ABYSS, TOTAL RECALL, and some others.

working on James Cameron’s The Abyss.

After a couple years there, I moved back to the East Coast and promptly got recommended to Doug Trumbull for his BACK TO THE FUTURE, THE RIDE attraction for Universal. Another great experience – Doug was – IS – such an inspiring figure. For me and the other younger crew, including John Gaeta, now VFX Supervisor on the MATRIX films, Doug was so generous with his knowledge; such a very warm and receptive and articulate and creative guy.

It happened that our optical composites were being done by a Japanese company called Imagica. BTTF was all Omnimax so it required large format opticals, and Imagica was one of the few places in the world where one could do high-quality 15-perf opticals. And because I spoke some Japanese (from having studied Japanese in university), I ended up spending a great deal of time with the Imagica folks when they were in town. That, combined with the fact that our miniature crew was also largely Japanese, left me with a standing invitation to come visit Japan.

I ended up finally going with Doug to the Osaka expo (in ’90 I guess) and, though I’d seen the CG in THE ABYSS and a couple other shows by that time, the stuff I saw in Japan just blew me away. That, more than anything else, was what convinced me that the future of filmmaking was CG. I ended up working at Imagica for a year after that, still doing motion control camera work. And then I got the chance to direct a short “ridefilm” at Sega, for an 8-seater hydraulic motion base they had. Of course I had no real experience with 3d except for what I’d managed to learn from books and a borrowed copy of 3DStudio (rev. 1!). But the Sega folks saw this as an interesting opportunity to build up their CG team, still using Iris4D workstations at the time. And by the time I was onto the project they had chosen Softimage|3D, then called Softimage Creative Environment, version 2.4 I think. Perfect for a newbie like me. The film I did at Sega, MEGALOPOLICE TOKYO CITY BATTLE was shown in SIGGRAPH’s ’93 Electronic Theater. By today’s standards it’s pretty goofy but for the time it was quite ambitious. Insane really, considering that none of the team had any real CG experience.

After that I moved back to New York to team up with some friends, Randy Balsmeyer and Mimi Everett (their company is now called Big Film Design – BFD), to start a little CG production house called “Syzygy Digital Cinema”. Through their work in film titles and design we ended up with some great clients: David Cronenberg, the Coen brothers, Spike Lee, Jim Jarmusch. But New York wasn’t quite ready for the CG business yet, and because of our feature-film focus we couldn’t cash in on New York’s commercial business (which has since all but dried up). But it was a fun two years.

And by that time, I was ready for something new. I had really been thinking for a while that to go any further in CG I’d need to program more. I’d written a package called M/CAD for Sega and Doug’s motion base programming, and that had really whet my appetite for coding. And as my time at Syzygy started getting short, Softimage seemed like a natural fit. I’d made some close friends at SI; it was (and still is) a fairly tight-knit group. And I loved Montreal (having only been there in the Summer, thus far). David Morin, then director of Special Projects, made me a very nice offer that included lots of travel to Japan, and so I joined up. This was still the “good old days” when workstations and software were costing 40, 50, 60 thousand dollars, and up, so it really was quite different than it is now that everyone’s tightening their belts.

I think I’ve been with Softimage for eight or nine years now. I’ve now actually outlived all of the folks who hired me. And I spend so much time away from Montreal that most of my co-workers only know me from email.

Can you tell us what the role of the Special Projects team is?
We used to joke that Special Projects was where Softimage put people capable of doing everything and those capable of doing nothing. Not sure about that, but SP does seem to have always attracted people with a jack-of-all-trades sensibility. At the time I joined I think the entire team was composed of people with heavy production experience and even now I think we tend to gravitate towards hands-on work. Some of us, like myself, do a bit of programming. Others are experts in technical animation, scripting, or games development, for example.

Our focus is fairly short term, by necessity. Even our software projects are limited in scope by our commitment to a particular client, or by how much we have to work on many things at once. Not at all like how the folks in R&D spend their time. That said, though the R&D folks may not have the practical knowledge or the user’s outlook, they definitely have to deal with the software at a much lower level, and their knowledge of its workings, and of the mathematics and methods of 3D, is way beyond that of any of us in Special Projects (or at least me). The smartest people I know are all in R&D at Softimage.

Did you always work for the Special Projects team at Softimage, or what were your previous positions?
I’ve been with SP since I joined Softimage. Actually I think I’m the only original member left at this point. Kind of a dinosaur, really. Someone has to come from Montreal periodically to clean off the cobwebs. Change my fluids, filters, and hoses, that kind of thing.

What do you do in your spare time?
Play with little kids. My kids, that is. Also, I’ve been biking a lot lately. Biking seems to be the latest fad to hit the traditional animation industry in Japan these days. I just did a 120km ride with Katsuhiro Otomo (director of AKIRA and the upcoming STEAMBOY). That almost killed me. Everyone else’s bicycle cost 5 times what mine did and weighed half as much. When I showed up at the rendezvous, the first thing the others did was pump up my tires and offer to strip my bike of all the extraneous bits (kickstand, baby-seat rack, etc.).

Working in the industry do you find that projects are becoming more technical than artistic?
I’m not sure if you mean the CG industry or the film industry in general.
If you mean the CG industry, I’d have to say “not really”. Software is (slowly) getting easier to use, and this means that CG artists, in some positions anyway, require less technical knowledge. That’s opening the field up to more talented artists with a broader range of talents. That’s good.

Filmmaking has always been a mixture of the artistic and technical, but I think slowly filmmaking too, particularly with the advent of non-linear editing and digital cinema, is becoming more open to artists who might have otherwise been inclined to pursue more direct means of expression. It’s easier and cheaper to make movies these days. At least, I think one could say, the entry cost is much lower. You can shoot a movie digitally, and mix and edit it all on a home computer, achieving respectable quality at the same time. That kind of thing was unthinkable just a few years ago.

What is your view about the current situation in the industry?
I think, on the one hand, the CG software industry is in a very tough spot, even while these are good days to be a CG animator. The film industry, like always, suffers from a lack of good ideas. There’s just so few good movies made.

What are the biggest differences between the Asian market and the Western 3D market?
I think the biggest difference is in the size of the market, in general, at least for artists in the film industry. Though it wasn’t always the case, the Japanese film industry is minuscule compared to that of the US. And movie budgets here reflect that. And lower budgets typically mean fewer effects, hence less CG.

On the other hand I think the game industry, though it’s definitely seen better days, still offers some interesting work, both for artists and software developers.

How has the localization of XSI helped improve the Japanese market?
Well, the people here at Studio4ºC (Studio Four Degrees) started using the Japanese version the day they got it and haven’t switched back to English. I have no idea if the introduction of a region-specific interface has helped sales here, but I can’t imagine otherwise.

You’ve worked closely in the Animatrix project. What can you tell us about that?
Andy and Larry Wachowski, directors of the MATRIX films, and their producer, Joel Silver, contacted me through their VFX supervisor, John Gaeta, an old friend who I’d kept in touch with over the years, and who knew I was working in Japan. We all met in Tokyo, and then in LA a couple of times, and after we’d talked about their idea for an anime “dream-team” project a couple of times, they asked me to produce the project for them.

I had very little “production” experience. Nothing really, except for once having acted simultaneously as CG director and co-producer of a feature-film pilot (TEKKON KINKREET, seen at SIGGRAPH 2000). But I felt good about the Wachowskis and the folks at Silver Pictures, and, more importantly, I had great partners in Japan: Eiko Tanaka, president and producer at Studio4ºC (where we ended up making much of ANIMATRIX), and Hiroaki Takeuchi. Tanaka stayed pretty close to her studio, while Takeuchi dealt with a couple of our other studios, and oversaw legal and contractual issues. Regardless, it was an enormous responsibility, and it totally dominated my life for the three years I was on it. I think in many ways it was more complex than producing a feature might have been, simply because we had so many teams running in parallel, and each director aspired to make their own “mini”-feature. Traditional animation in Japan has so much to do with personalities: some directors and animators require a great deal of hand-holding, while others are very independent. Fortunately, because I had worked a great deal in the animation business here already, I was on a “first-name” basis with many of the staff of our various episodes.

I really had to draw on a great deal of experience that had sat unused in the background while I’d been pursuing software development. Everything I’d learned until this point: a brief career in recording studios, composing music and doing sound effects for short films in college, having my own company, working in special effects. It was a great chance to exercise some dormant (or damaged) brain cells.

Honestly, though I didn’t get involved in the computer graphics aspect of the films as much as I’d have liked to, the most enjoyable part of the process for me was post-production. None of our directors came over for the dubbing sessions and, even for the fellows who made it to their final mixes and met with the composers, I was able to act much more as a collaborator than most producers are. This was not only because of the language barriers, but also because the post-production of animated films in the US is so different from the Japanese way.

Without question, my favorite single day on the show was towards the very end, when I got to record “walla” (crowd noise and background voices) for the battle scenes in SECOND RENAISSANCE. They had me in a booth screaming at the top of my lungs for hours – enormous stress relief. I’m especially proud of my soldier, begging for mercy while having his arms ripped off by a Sentinel (“oh God, no, please, God, no, AAAARRGGGHHHH!!!!”).

To what extent was XSI used in the overall project?
I think all of the episodes done at Studio4ºC were done using XSI, though there were some models built with Softimage|3D. You have to remember that we started ANIMATRIX three years ago, and there were some major gaps in the modeling functionality then. By the end, when we were working on BEYOND, we were doing everything with XSI version 2.0.

Kawajiri’s PROGRAM had a couple of CG shots done using Softimage|3D.
CG elements for Peter Chung’s episode MATRICULATED were done by a couple shops in Seoul using mainly Maya and Max. Of course, the only full CG episode, Square’s THE FINAL FLIGHT OF THE OSIRIS, was done using Maya and Square’s in-house renderer.

As a producer, I wasn’t necessarily in the best position to deal with software choices, but I think all of the Japanese animation houses saw a clear advantage in using Softimage software. Being on close terms with a software provider definitely gives a production an edge, particularly when technical challenges arise. Just the fact that I was writing the XSI Toon Shaders and dropping off new versions at the studio almost daily was seen as reason enough to use XSI, particularly on SECOND RENAISSANCE and BEYOND, which contain so many hybrid 2D-3D shots.

The Second Renaissance

Why do you think was XSI the perfect choice for a project like the Animatrix?
Great set of tools useful for 2D/3D integration: the Toon Shaders, of course. But also the camera projection mapping features, lens center offsets, compositor, audio tracks, animation mixer. The Render Tree and interactive rendering with mental ray. Good Japanese documentation and great local support.

Were there any special techniques used in combining 2D and 3D artwork?
No rocket science really, but I think we did end up doing some wonderful shots with 3D characters (Toon rendered) and hybrid 2D-3D backgrounds. Studio4ºC has really refined the techniques involved in “perspective mapping” – projecting hand-painted artwork onto 3D geometry to allow camera movement beyond just 2D panning and zooming.
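
For readers unfamiliar with the technique, the geometry behind such projection mapping is a plain pinhole projection: every 3D point is assigned the texture coordinate at which the projecting camera sees it, so the painted artwork “sticks” to the geometry when another camera moves. The sketch below is a generic illustration of that idea, not Studio4ºC’s tool; the field of view and matrices are assumed values.

# Minimal camera-projection sketch: world points -> UVs of the projected painting.
import numpy as np

def project_to_uv(points, view_matrix, fov_deg, aspect):
    """Return (u, v) in [0,1] for 3D points, as seen by the projecting camera."""
    pts = np.c_[points, np.ones(len(points))] @ view_matrix.T   # world -> camera space
    f = 1.0 / np.tan(np.radians(fov_deg) * 0.5)
    x = (f / aspect) * pts[:, 0] / -pts[:, 2]                    # perspective divide
    y = f * pts[:, 1] / -pts[:, 2]
    return np.c_[x * 0.5 + 0.5, y * 0.5 + 0.5]                   # NDC -> UV

# Example: camera at the origin looking down -Z; the first point lands at UV (0.5, 0.5).
pts = np.array([[0.0, 0.0, -10.0], [2.0, 1.0, -10.0]])
print(project_to_uv(pts, np.eye(4), fov_deg=50.0, aspect=16 / 9))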

You’ve worked on the development of the XSI toon shader, right? What can you tell us about that?
One of the first things I wrote when I started at Softimage was a very simple shader to do two-tone rendering – a poor-man’s cel shader if you like. It was quite primitive. I think most shader-writers start with similar projects. But my boss at the time, David Morin, thought that it might be useful for a project that was then being worked on by MTV’s digital team, DTV, and he put me in touch with their director, Myles Tanaka. Myles asked me to check out the work they were doing on a television pilot called “The Cathy Sorbo Show” (or something similar). The show involved cartoon-rendering a motion-captured performance of a talk-show host. The original idea was to do everything in real-time, but my shader was eventually used to offline render all the characters.
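
As a rough idea of what such a two-tone shader does (a toy illustration only, not the actual Softimage shader code): quantize the Lambert term into a small number of flat bands and darken pixels whose normals face away from the camera to fake an ink line.

# Toy "poor man's cel shader": banded diffuse plus a facing-ratio ink line.
import numpy as np

def two_tone(normal, light_dir, view_dir, shades=2, ink_threshold=0.3):
    n = normal / np.linalg.norm(normal)
    diffuse = max(np.dot(n, light_dir / np.linalg.norm(light_dir)), 0.0)
    # Quantize continuous shading into discrete bands ("two-tone" when shades=2).
    band = min(np.floor(diffuse * shades) / max(shades - 1, 1), 1.0)
    # Silhouette/ink detection: surfaces nearly perpendicular to the view get the line.
    facing = abs(np.dot(n, view_dir / np.linalg.norm(view_dir)))
    return 0.0 if facing < ink_threshold else band

# A surface facing the camera and lit from the front returns the bright tone (1.0).
print(two_tone(np.array([0.0, 0.0, 1.0]), np.array([0.3, 0.3, 1.0]), np.array([0.0, 0.0, 1.0])))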

I kept working on the shader after that. Warner Bros. had me work with their rendering guy for the Duck Dodgers thing they did with Michael Jordan, showing him the techniques I was using to get my ink lines. They ended up incorporating the same techniques into a Renderman-based pipeline.

The big advances came when Softimage put me in touch, more or less simultaneously, with Dreamworks and Hayao Miyazaki’s studio, Studio Ghibli. Dreamworks was ramping up for PRINCE OF EGYPT at the time, and Ghibli, for PRINCESS MONONOKE. I spent the better part of the next couple years writing a library of ink-and-paint shaders to their specifications, and managed to get a patent on some of the techniques I was developing at the same time. Dreamworks went on to use the shaders on some amazing shots in THE ROAD TO EL DORADO. Full 3D characters, rendered to match the traditionally animated elements – really convincing stuff. I’m quite proud of their work. I’m no longer in touch with Dreamworks, so I don’t know how they are going about these kinds of shots now, but Ghibli continues to use the latest version of the Softimage “Toon Shaders”. While producing ANIMATRIX, I rewrote them from the ground up to take advantage of XSI’s Render Tree and interactive rendering as well as some new mental ray features. We used them quite a lot on THE SECOND RENAISSANCE 1 and 2 and BEYOND.

All in all, I think I’ve been playing around with these shaders for six or seven years now. As software, they’re really not so complex. The real key to their success in the field is the feedback that I was receiving from various animation studios testing the shaders for me. Because I had regular contact with all of the key Toon Shader users while working on each new version, I was able to incorporate a ton of suggestions as well as analyze users’ reports to pinpoint bottlenecks and deficiencies in the software. Along the way I think I managed to stumble onto a couple of clever tricks as well. I taught myself programming and math and computer graphics for the most part, so anytime I write software I end up employing a “hunt-and-peck” methodology. A more experienced programmer would no doubt arrive at a better solution faster. But I learned a great deal from the clients and Softimage people I was working with, and consequently really enjoyed working on all of this.

Is there a difference between anime animators and 3D Animators, if so what is it?
I haven’t seen any 3D animators in Japan with the chops of many of the traditional animators here. 3D is a somewhat deceptive tool: you’re able to create fantastically real (or surreal) images with comparatively very little effort. Things like motion blur, depth of field, sophisticated camera movement, rigid body dynamics, particles, hair, etc. are all included right out of the box. But if you give most 3D animators a simple skeleton with minimal rigging and automation and ask for a good “sad” pose, for example, or some athletic running and jumping, without relying on the technical animator’s bag of tricks, I think you’d be quite disappointed with the results. So when looking at various 3D work, you might see very evocative or convincing still imagery, but when you evaluate it for animation, you’ll often be disappointed by performances that are below the level of even some very rudimentary traditional work. This isn’t to say that there’s no bad trad. animation here – there’s tons of it – just, generally speaking, the training and experience and talent of traditional animators is of a higher level. That said, there’s an extraordinary shortage of good traditional animators, while there’s a phenomenal glut of 3D artists. As far as I can tell, the situation is similar, in both regards, in other parts of the world as well.

What Do you think about the fusion of 2d and 3d techniques? Will 2d disappear?
I don’t think it will ever completely disappear, not as long as there’s a few people crazy enough to want to actually hand-draw their films, frame by frame. At the same time though, everyone here complains about the lack of good animators. There seems to have been an entire generation of artists that chose other fields – CG for instance – instead of becoming traditional animators. The youngest animators on our staff were already pros when AKIRA was being made, and that’s an old film now. And these guys have never had the chance to pass on their knowledge to the next generation of animators. The budgets stay the same or shrink, while people’s expectations get higher and deadlines get tighter. It’s tough work; can you imagine actually hand-crafting a film? There’s nothing on the same scale except for animating with clay or models.
Even in the US, traditional animation has lost enormous ground to CG, though I think a great deal of this has to do with the relative brilliance of the stuff being put out by Pixar and Blue Sky. I really do feel like ANIMATRIX proves that people will watch good animation with compelling stories.

But yes, I suppose I do feel like we’re seeing the slow decline of traditional animation. Sad, really.

What challenges would you like to take on? 
I’m currently storyboarding a feature-length animated film that I’ll be directing at Studio4ºC.

Friday Flashback #273


Experience XSI 4: The Official Softimage XSI 4 Guide to Character Creation
A look at the new book by Aaron Sims and Michael Isner which guides you through the production process of a digital creature from start to finish.
July 16th, 2004, by Raffael Dickreuter

 

When you go to the bookstore you find tons of books focusing on 3d, modeling, animation and all kinds of different software packages, but very few were written for XSI. Finally there is another one, and it’s a really good one.

This book is written by industry experts Aaron Sims and Michael Isner. Aaron Sims is a character creator at Stan Winston Studios. He has been involved in projects like Terminator 3 and Steven Spielberg’s A.I. Michael Isner is the head of Softimage Special Projects and one of the key people behind XSI’s character tools. They have teamed up to explain the production process from start to finish. Sims and Isner together means a collaboration of art and technology knowledge.

The book in general is written for an audience that is already familiar with the concepts of 3D and XSI. However, it contains a few pages that will help users from other packages quickly adapt to the XSI interface and workflow. This might also help users who are somewhere between beginner and intermediate level.

Authors Michael Isner (left), Softimage Special Projects and Aaron Sims of Stan Winston Studios in front of the T1 from Terminator 3 inside Stan Winston Studios.

Chapter 1 is a quick introduction to the interface and contains a hotkey list. This makes it ideal for experienced artists to quickly find the most important commands within XSI.

Chapter 2 gets you started with designing and modeling a character.

Aaron explains the thought process used when designing a 3D character, such as how it has to appeal to the audience, and that you have to really know how you intend to animate the character, whether it’s flying, walking, swimming, etc.

Initially there is a quick overview of the modeling features, where you start with a sphere to model the basic shape of your creature. Later on you learn how to add lots of detail, and gradually the shape turns into a stunning dinosaur. Although you can see the modeling process in the images, the book doesn’t explain the exact concepts of how the details were added.

Chapter 3 deals with using the texturing tools in XSI.
After a quick overview of how to use the most important features like bump mapping, displacement mapping, texture layers, etc., you learn how to use different subprojections to texture a human face. The knowledge you gain during this process is then used in the more complex task of texturing the dinosaur. After applying the textures onto the mesh, you learn to mix the color maps, specularity maps, bump maps and transparency maps together. Aaron also shows you how to texture the nails and the eyes of the dinosaur.
Your character is now ready for the Skeleton setup. The book also offers a website where you can download the mesh, textures and scripts. People who want to compare their mesh with the original, learn from it or simply skip some chapters can benefit a lot from the accompanying materials.

Chapter 4 dives into character setup. Michael Isner shows how to use guides that let you easily define the proportions of a setup. There is a section that shows how to use the new character tools of XSI version 4 to quickly add more fingers, toes or tails. Step by step, you go through the process of creating a proper rig for the dinosaur, including creating marking sets, symmetry maps and weight painting. You will also get accustomed to the new construction modes that were introduced in version 4 and let you divide the working stack.

The first part of this chapter deals with the most important rigging features. The second part goes way deeper and explains the fundamental mechanics of character setup tools in the XSI software. Besides constraints you get a glance at scripted operators, springs and how to use different character components such as the spine, tail, legs. The portion that shows you how to solve flipping problems and takes an in-depth look at enveloping algorithms is very interesting. Later in this chapter you will learn how to use control splines for proper shape deforming, how to model different shapes and use them for shape animation. The chapter also reveals information about character engineering at a production scale and how you can use scripting for improving your rigging process and explains the fundamentals of the rigging SDK.

Modeling and texturing of a dinosaur.

Chapter 5 demonstrates how to create a walk cycle for your dinosaur. Aside from creating poses you learn how to use the controls you built earlier, such as the footroll control, to create a smooth animation. Step by step you go through the F-Curve Editor and the Dopesheet. You continue to store your animation so you can use it with the Animation Mixer for non-linear animation. At the end you modify your dinosaur with shape modeling, which lets you close and open the eyes of your character.

Up till now, most aspects of the training were directly connected to the dinosaur. In chapter 6, Michael Isner steps away from it and gives you a very good introduction to scripting. People who are not yet familiar with programming, or who have just a basic understanding, will find this straightforward chapter very helpful as he guides you through passing variables, looping, collecting selections, arrays, creating vectors, etc. After some basic exercises, you learn to create helpful tools and find solutions for problems that can only be solved with scripting. After this chapter you will also know how to create geometry entirely with scripting. You might start as a beginner and leave at an intermediate level, just by working through this chapter.

Chapter 7 then goes back to the dinosaur where you will add Lights and an environment to your scene. Several different lighting types, different rendering techniques such as raytracing, scanline rendering, global illumination, final gathering and caustics are examined throughout this chapter. The rendering of the dinosaur will be split up into different render passes and composited together in the FX Tree, where you can apply color correction and finish your composition.

Rigging, animation, then rendering in passes and compositing.

Chapter 8 covers the most important concepts of exporting your character to Softimage|Behavior, creating actions for crowd animation and bringing the crowds back to XSI for rendering. For a person entirely new to Behavior, it might be a bit difficult to understand what is really going on, but you will get an idea of the potential of the software.

Chapter 9 shows some of the stunning work that Aaron Sims and Michael Isner created during their careers.

Summary
Experience XSI 4 is a great book. You learn from start to finish how to create a complex character using advanced features within XSI. It doesn’t try to explain every feature there is; it just focuses on what you need for the specific task. Scripting, which might be difficult to get into for people without much experience in this area, is very well laid out and gives a quick start to people who want to dive into it. This book lets you quickly learn new concepts that you can use for your own work. A must-have for every enthusiastic XSI user, or for experienced users who want to switch quickly and easily from another package.

The Authors inside Stan Winston Studios
Pictures taken with permission inside Stan Winston Studios.

Friday Flashback #217


Interview With Alain Laferrière
The Program Manager of Softimage Japan talks about his career at Softimage, the Japanese Industry and XSI 4.0.
June 28th, 2004, by Raffael Dickreuter, Will Mendez

Alain Laferrière,
Program Manager,
Softimage Japan.

How did you get started in the cg industry?
I got interested in computer imagery from a young age, playing with the first video game consoles and personal computers and following the evolution of CG with a deep interest. The “demo scene” development communities for both C-64 and Amiga computers also brought a lot of innovation in realtime graphics synthesis.
At the University of Montreal, I studied computer graphics and did an M.Sc. in human-computer interfaces (agents). There, I met Réjean Gagné, Richard Laperrière and Dominique Boisvert, who would later implement the Actor Module in SI3D, and many other friends who were, or still are, working at Softimage today. After I completed my studies I got a job interview at Softimage and was hired; it was a dream come true! I started in the modeling team, then moved on to motion capture & control, then headed the new Games team, and now Special Projects Japan.

What do you do in your spare time?
I love music very much and I’ve been DJ’ing on and off since the age of 13, so I buy records and practice mixing. I also like programming personal projects, reading, cooking, nature, going out with friends, going to the gym, etc.

Tell us a bit about the highlights of your 11 year career at Softimage

  • April 14th 1993 – my first day at Softimage !
  • solving a data loss problem with an Ascension “Flock of Birds” motion capture system that was not working properly with an Onyx computer, which was being used at a Jurassic Park theme park at Universal Studios. I implemented a fix in Montreal and was sent to Los Angeles to configure the system. I had a chance to meet the movie actors, and Steven Spielberg made an appearance; it was a very impressive “Hollywood” type first experience for me.
  • then I worked on an interesting project: one of our customers was using a Waldo-like hand manipulation device to radio-control the facial expressions of the robotic heads used for the Teenage Mutant Ninja Turtles movies. The actors had to wear the Turtle heads, with servo-motors moving metal blades around their face to flex and animate the head’s latex skin, as well as a radio receiver backpack hidden in the turtle shell. Since the facial animation had to be redone live at each shot, it was not possible to produce a consistent facial animation quality. First, I wrote a motion capture driver for the Waldo device, and a designer modeled the turtle head in SI3D and connected the Waldo inputs to its facial animation shapes. Now we were able to record a live facial animation using the Waldo device, then perfect it in SI3D by editing the animation curves. After that, I modified the motion capture communication protocol to support motion control, and wrote a control driver to radio-broadcast the data which had previously been motion captured and edited in SI3D. With this pipeline it was possible to author a perfect animation sequence which could be played back identically at each shot. The project had an extra dimension of danger, since there were urban legends of actors getting mutilated by the blades attached inside the robotic heads. Something that keeps your mind busy when your head is inside… 😉
  • Fall of 1994: I went to Japan with Daniel Langlois for a two-week business trip, which mutated into a large effort between Softimage and SEGA to create a first generation of 3D game authoring tools. I ended up staying there for nearly 3 months on that first trip, and frequently paying visits to Japan afterwards. I supervised the project and other developers helped me design and implement the features (3D paint, color reduction, SEGA Saturn export / import and viewer, raycast polygon selection, etc.). It was a very intense and exciting coding experience. Following this, SI3D quickly became the standard for 3D game authoring in Japan.
  • RenderMap – a project which I started after talking to a Japanese game designer, who explained to me how he was planning to use clay models to create the characters of his next game. They would scan the clay models into high-resolution polygon mesh models with colors at the vertices, and then transform these into texture data by rendering front and back view images and re-projecting them on the low-resolution model. With this method there is no control over how the texture is distributed on the geometry, and although it works for polygons facing the camera, the texturing quality quickly degrades as polygons face away at an increasing angle. Instead, I thought it would be better to fire rays along the surface to capture the color information; if those rays could hit a high-resolution version of the object, then we could carry the texturing data from high-res to low-res. RenderMap solved this by using mental ray to fire rays at the intersection of triangles and texels (according to their location on the object) and accumulating the weighted color contributions into a final color per texel (a rough sketch of the idea follows this list). You could carry the high-res data to a low-res model by placing the high-res model in the same location as the low-res one, but making it slightly bigger. Since it uses mental ray, it also lets you bake procedural effects into texture maps, generate vertex colors, normal maps, etc. My colleague Ian Stewart implemented the much improved version in XSI.

    Manta model before applying RenderMap – we can see the results of rendering using mental ray in the render region.

    Texture data after being pre-rendered with mental ray using RenderMap.

    Manta model after applying RenderMap.
  • dotXSI file format – an initiative which I launched over 7 years ago, at a time when our customers were asking us to design a high-level 3D data interchange solution. After many discussions with my colleagues in the Games team, we finally opted to use the Microsoft .X format concept, with many new data templates to support a lot of features which were not covered in basic .X files. Since that time, the format has become very popular in the cg and games markets as a generic data interchange solution between authoring tools and pipelines. I was initially hoping to see a good level of popularity one day, but it has now reached well beyond those early expectations. This was also an opportunity for us to understand the fundamental differences between a high-level generic format and an optimized “memory image” data format which can be loaded and used directly by a game engine without further optimization. From this, we derived our strategy behind the dotXSI file format and the FTK (File Transfer Kit), which is a library to read and write dotXSI files and which can be used to write converters between dotXSI files and optimized memory image formats tailored to any custom game engine (or between any other format and dotXSI). I came up with the “.XSI” name suffix, which was later adopted by the marketing team for the name of the XSI product itself, although I had no involvement in that decision.
  • April 99 – moving to Japan
  • various Special Projects with Japanese clients
  • misc. SI3D features like vertex colors, generic user data, GC_ValidateMesh, SI3D 4.0; and now XSI tools
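
Referring back to the RenderMap item above, the sketch below illustrates the core idea under stated assumptions: the mental ray ray-cast against the high-res model is abstracted into a sample_color(point) callback, and each texel covered by a triangle’s UV footprint accumulates a weighted colour sampled at the corresponding 3D position. This is only an illustration of the concept, not the actual implementation.

# Rough RenderMap-style baking sketch: rasterize each triangle's UV footprint and
# sample a surface colour at the interpolated 3D position for every covered texel.
import numpy as np

def barycentric(p, uvs):
    """Barycentric coordinates of 2D point p with respect to triangle uvs (3x2)."""
    a, b, c = uvs
    m = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    v, w = np.linalg.solve(m, p - a)
    return np.array([1.0 - v - w, v, w])

def render_map(tri_uvs, tri_positions, sample_color, size=64):
    """tri_uvs: (T,3,2) UVs per triangle; tri_positions: (T,3,3) 3D points per triangle."""
    img = np.zeros((size, size, 3))
    weight = np.zeros((size, size))
    for uvs, pos in zip(tri_uvs, tri_positions):
        lo = np.floor(uvs.min(axis=0) * size).astype(int)
        hi = np.ceil(uvs.max(axis=0) * size).astype(int)
        for j in range(max(lo[1], 0), min(hi[1], size)):
            for i in range(max(lo[0], 0), min(hi[0], size)):
                p = np.array([i + 0.5, j + 0.5]) / size      # texel centre in UV space
                b = barycentric(p, uvs)
                if b.min() < 0.0:                            # texel centre outside triangle
                    continue
                world = b @ pos                              # interpolate the 3D position
                img[j, i] += sample_color(world)             # stands in for the high-res ray hit
                weight[j, i] += 1.0
    return img / np.maximum(weight, 1.0)[..., None]

# Example: one UV triangle, with a flat red "high-res" surface standing in for the ray-cast.
uvs = np.array([[[0.1, 0.1], [0.9, 0.1], [0.1, 0.9]]])
pos = np.array([[[0, 0, 0], [1, 0, 0], [0, 1, 0]]], dtype=float)
print(render_map(uvs, pos, lambda p: np.array([1.0, 0.0, 0.0])).shape)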

Working in Japan: Was it difficult to adjust to a new location and culture?
Not really. I got a good feel for Japan during my first trip; having a routine and working on-site at SEGA was a chance to feel how life would be living here. I came back on business trips about a dozen times, my interest growing until I relocated in April ’99. Adapting to the Japanese culture is a life-long effort, but Japan is very welcoming, so this can be relatively painless. Of course I always miss my friends and family, but fortunately I go back a few times a year and we stay connected by mail and phone.

What is the biggest difference when working for a company in Japan compared to Canada?
Since I work from home, the differences for me are very small. But traditional Japanese companies are more hierarchical than American ones, so you have to be aware of that. Also, your ability to connect with a team will completely depend on your ability to communicate in Japanese.

Do companies in Japan and Northern America work closely together on projects?
Many Japanese companies have branch offices in North America; however, I don’t know how much of that is new production work, regional adaptation or collaboration on joint projects.

How do Japanese customers differ from Canadian customers?
The main difference, of course, is the language. Japanese companies also employ many designers and programmers, so there are many sources of feedback reporting issues and submitting suggestions for improvements. Sometimes there are misunderstandings, or information missing to reproduce a problem or to understand what a specific request is about or why it’s important, so I keep an eye on all incoming feedback from Japan and work with our support teams in Japan and Montreal and our Japan resellers to make sure we have all the information needed to fully understand and address customer requirements.

Another difference is related to the fact that most customers in Japan are game developers. It is normally easier for film companies to switch pipelines from one 3D application to another than for game companies, because the final output for film is a rendered movie, while for games it is most often data which needs to be compatible with a run-time game engine. A large portion of our customers in Japan are game developers, so they tend to migrate pipelines at a slower pace than customers in other markets, since they need to validate the workflow and data compatibility with a new pipeline before they can adopt it in production. Japanese customers are very thorough in their analysis of new technologies and do not migrate without serious consideration. XSI has already been used in production for a while by many Japanese companies, and its popularity is now accelerating. This is very exciting to see!

When dealing with high profile gaming companies how do you meet their demands for new features that are not implemented?
We gather all incoming feedback from Japan, prioritize it according to severity, popularity and the amount of work involved, and then plan the features of the next version. In cases where a customer requires something specific to them, like special training (features, SDK), assistance to set up or migrate a pipeline, or custom development, there is always the possibility of purchasing R&D time from Softimage through our Special Projects team.

Part of your job is “managing escalation of critical issues”. can you give us an example?
If a customer reports something very bad like a production showstopper or something which prevents the adoption of XSI in production, then we may escalate the issue internally and implement a fix which is provided to the customer as a QFE ( Quick Fix Engineering ). This is a service available to customers under maintenance. QFEs done in answer to issues in our last released product are always integrated in the next public release : either a point release ( Service Pack ) or the next main version.

Softimage 3D has been a long-time favorite of Japanese companies, and we are seeing some migration to XSI. Why has it been so hard for them to move to XSI more quickly?
There is a cost involved in migrating, since you need to train designers and port your pipeline (workflow, plugins) from one application to another. We released our first generation of game tools in SI3D at the time when the first generation of 3D game consoles was coming out: SEGA Saturn, Sony PlayStation, Nintendo 64. Since we designed this in conjunction with SEGA, it quickly became the de-facto standard for 3D game authoring in Japan. Then, Alias released Maya in 1998. Although the first version was not ready for production use (Japanese customers called it “mada”, i.e. “not yet”), Alias had 2 years to work on Maya until we released XSI 1.0. Of course, we lost some users from SI3D to Maya during this period. It was a hard time for us, but we wanted to build a solid architecture from the beginning rather than something we would need to patch along the way. Now we can see this bet is paying off, as our development speed has greatly improved and is imposing a pace of innovation which is getting difficult for our competitors to follow. With XSI 4.0, which includes the new powerful and affordable Foundation product, and many new advanced features throughout our product line, we have everything we need to accelerate the expansion of our user community.

What features that you originally developed for SI3D have found their way into XSI?
Polygon raycasting, vertex colors, generic user data, polygon reduction, dotXSI support, RenderMap, pipeline tools (export, viewing), etc. These were implemented in XSI by my colleagues in Montreal. I wrote a few things for XSI, like an interactive User Normal Editing tool and some realtime shader examples which I published on XSINet. Now I am studying the new Xgs and CDH features of XSI 4.0 (which I think are really cool, btw!).

What features excel in XSI for game companies?
Character modeling and CDK ( Character Development Kit ), polygonal modeling, polygon reduction, texturing, RenderMap / RenderVertex, CDH ( Custom Display Host ) and Xgs ( Graphics Synthesizer ), Realtime Shaders, dotXSI pipeline and FTK, ability to attach generic user data to scene elements and have it automatically supported through the dotXSI pipeline, general ease and flexibility of customization using the XSI Net View, Relational Views, Synoptic Views, and finally, the XSI SDK itself which is rich and now provides strong support for UI customization, among many new things in 4.0.

Will 4.0 change the gaming industry in Japan, and the gaming industry in general?
Definitely. The low price point of our new Foundation product is opening up access to XSI for the middle and low-end segments of cg production. It will also become easier for 2nd and 3rd parties to collaborate on joint projects with high-end clients using XSI.

As for the features of 4.0, there are many innovations which bring exciting new opportunities to our users. For example, the new rigid body dynamics are based on ODE (Open Dynamics Engine), an open-source, royalty-free solution. It is possible for game developers to adopt ODE in their run-time engine and create realtime simulations which are entirely compatible with how things behave in XSI (or you could simply plot / bake and export the animation if you do not want to recompute it in the game engine). Also, the CDK (Character Development Kit) brings all the tools needed for making custom character animation rigs.

The CDH (Custom Display Host) allows an external application to communicate with XSI and display its output in an XSI view. It is possible for XSI to drive an external application and vice versa; this external application could be a game engine synchronized with XSI, or any custom tool which can be interacted with from within XSI. The Xgs (XSI Graphics Synthesizer) lets you create scene-level realtime rendering effects and can communicate with realtime shaders for advanced effects.

The Polygon Reduction tool in 4.0 is simply amazing, Texture Layers provide a very powerful workflow for multi-texturing management, Material Libraries simplify material management, and the SDK is both rich and powerful.

XSI v4.0 is our biggest release since v1.0. It contains many other interesting features which are not listed here, as well as a large number of fixes for issues reported by our customers around the world.

What advice would you give to an artist of Northern America or Europe who wants to start working in Japan?
Learn the whole package. Designers in Japan do not tend to specialize in one thing, but instead are called on to work on many different aspects of production: modeling, character design, animation, texturing, custom data editing, etc. The more flexible and proficient you are with the package, the easier you will connect with a Japanese production team.

Learn Japanese. Even though you can usually find a few Japanese people in each company who speak a good level of English, it is more the exception than the rule, and it would be better not to rely on that to bridge the gap with your Japanese colleagues. The better you understand and speak Japanese, the more chances you will have to connect with the team and contribute to the project. An American friend of mine who worked in a game company in Tokyo learned conversational Japanese very quickly because he was completely immersed in a Japanese working environment, but if you can learn some before heading to Japan it will make things a lot smoother and expand your opportunities.

Friday Flashback #198


Interview with Chinny Brynford-Jones
The XSI Product Manager of Softimage talks about version 4, the cg industry and his past career as a comedian.
May 12th, 2004, by Raffael Dickreuter, Bernard Lebel, Ed Harriss


What is your history with Softimage?
Over seven years of fun and an additional 40 lbs of weight; I would like to think of it as additional charisma, but I have a feeling I am just kidding myself. I started, weirdly enough, in 1996 in the Pitcher and Piano on Dean Street, London, sitting next to Adrian Hill (formerly Special Projects, now Cinegroupe) at Ben White’s (formerly Softimage, now Framestore) leaving do. I asked Adrian if I could come into the Softimage office and learn SI3D. I thought it looked like the dog’s bollocks. He said yes, and my window to weight gain had begun. An expense account and 100 flights a year around the world demoing Softimage 3D, then XSI, just sounded like a bloody good laugh. After five years of that, with the odd spell in production, I met my current girlfriend. Montreal being such a crushingly superb place, I decided it was time to make the move. After years of experience in the field, one thing followed another and now I am the XSI Product Manager. Funny how things work out.

Can you tell us something about your past comedic career?
Mostly a disaster – the rest was just plain awful. I tried doing a spot of stand-up, mainly in friends-type venues, and have never been so nervous or embarrassed in my life. So I switched tack and started to write, although no matter how serious the subject matter, it ends up as comedy. (Or “words”, as others say.) I guess always seeing the funny side of life has its good points. What has been fun, though, is that Softimage has always allowed (even encouraged) me to entertain people during my demos. So the marriage of my two loves, comedy and XSI, seems to be made in heaven.

You are also known as “Chinny”, what can you tell us about that name?
A friend of mine in London, Nick Savvy (another animator, sheesh, do I know anyone else?), and I used to write comedy stuff together (which now sits in drawers gathering dust). He had 3D Studio release 4 installed (all legit – cough – of course) and after a year of teasing me that I should “try it”, I did. I thought it looked waaaaay too complicated for my poor brain, but it really was absorbing and I was totally hooked. I conned my old man into springing for a computer; next was simply installing all the necessary software (cough, choke, splutter), which was naturally completely over my head. Although I could use 3DS4, I could not even change my login name and password, which had been set up by Nick as login “Chinny”, password “Chin Chin”. He used to make fun of my chin, it seems. When I went to Softimage, Adrian asked me for a login name, so I said, err, I dunno… “Chinny”. There was a Justin King (now EA) in the Softimage office and I used to get his calls; it seems Justin and Jason are apparently too similar. So I said why not call me Chinny and stop the confusion. The rest, they say, is the rest.

How challenging is your new title?
The title itself is not too challenging – but the job sure is. It takes up almost my every waking moment. In addition to all my regular duties, I really love to use the software, so doing both takes up all my time.

Now that you’ve been promoted, will you still do demos?
Fewer and fewer. I have little time to do it these days, except for the big occasions, although preparing for them is still an excellent way to keep my hand in with the software. Without using the software and interacting with customers, one cannot expect to really be a good product manager.

What do you do in your spare time?
(Embarrassed smiley) I work. Mainly making my own tools etc. in XSI, but also writing marketing blurb and future design ideas. I generally sit with my girlfriend in front of the TV with my laptop on top of my lap (how ironic). She says it’s a bit pathetic. (Another embarrassed smiley) I am writing this interview in exactly the manner described above, actually, while watching a DVD series called “Firefly”, which is really rather good.

What impact does Special Projects have on the development of XSI?
As Special Projects is the ultimate in front-line support, many specific production issues and requests get elevated to the top of the development cycle. This can have a direct bearing on future features and workflows being implemented inside XSI.

What is the exact nature of the relationship between Softimage and Avid? Does Avid have any input in XSI’s development?
Very little. In version 4.0 there were a few items such as Mojo support (a very cool direct output which mostly does away with the need for a DDR), but generally XSI development is balanced between customer requests and our own innovations, whilst all the time being weighed against what will make us money. We are a business after all. But one thing I have noticed is that since David Krall took over as head of Avid, I have seen more and more people prepared to lay down their life for him. He is a brilliant leader and a genuinely lovely guy. This has a huge impact on all of us. Never in my working career has one man so enriched a company of this size.

Now that Avid has bought NXN, will we be seeing some of that Alienbrain tech making its way into XSI?
Officially I cannot comment on any future development. But if we were to look at past trends, it would fit nicely into Avid’s plans for all types of pipelines.

How is Softimage structured?
Well Softimage is a business first and foremost. It just so happens that it is also a passion for the vast majority of us. Come in at the weekends or walk the halls late at night and count the people. They often have brilliant ideas which they just have to test.
As a software company we are driven by the release cycle and, to a lesser extent, by major trade shows. We have all the usual suspects: dev, support, marketing, sales, finance, etc. It is a team, no… more a family. Everyone is 100% dedicated to bringing the best product to market that they can.

How different is Softimage from when you first started with them?
More experienced, more tight-knit, more passionate.

How important are demo artists and product specialists to Softimage?
They are the well-known faces of Softimage. They are the soldiers who charge into battle waving the flags. They work and play like they demo. Without them, the message would not reach the right people. But they are still just as important a part of it as the rest of the amazing family that is Softimage.

What do you see as version 4’s best new features?
Construction modes
Character Development Kit
Rigid Body Dynamics
Custom Display Host
Customizability
SDK
Mental Ray v.3.3
Animator Audio tools
Vector and Raster Paint
Reference Animation
Material Library
UV Unwrapping
Programmable shaders
Etc etc etc

What are the most drastic changes within XSI from version 3.5 to 4?
XML UI
Customization
Construction modes
SDK

How much have the particles/simulation tools changed in XSI 4?
Not much, the core is still there, but there are new customizable controls and goals.

How much easier will it be in XSI 4 for people to create custom layouts and property pages?
Just as easy, but with waaaaay more options and controls. For instance, any layout can have a direct relation which can also trigger scripts, and if you want you could put an entire HTML front end onto XSI.
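As a rough, unofficial illustration of the kind of thing being described, here is a minimal sketch of a custom parameter set whose property page layout triggers a script from a button. It assumes the documented XSI object-model calls (AddProperty, AddParameter3, PPGLayout) and uses Python for readability, even though the script editor of that era spoke VBScript and JScript; treat the exact names and signatures as assumptions, not verbatim 4.0 SDK usage.

    # Minimal sketch only: assumes the documented XSI object-model calls and a
    # Python-enabled script editor (which arrived in later releases); in the 4.0
    # era this would have been written in VBScript or JScript.
    from win32com.client import constants as c   # XSI constants such as siDouble

    root = Application.ActiveSceneRoot            # 'Application' is provided by the host

    # A custom parameter set with one double parameter
    prop = root.AddProperty("CustomProperty", False, "DemoControls")
    prop.AddParameter3("Intensity", c.siDouble, 0.5, 0.0, 1.0)

    # Lay out its property page and wire a button to a script callback
    layout = prop.PPGLayout
    layout.AddItem("Intensity", "Intensity")
    layout.AddButton("Apply", "Apply to selection")
    layout.Language = "JScript"                   # page logic can be any host scripting language
    layout.Logic = (
        "function Apply_OnClicked() {"
        "  LogMessage('Intensity is ' + PPG.Intensity.Value);"
        "}"
    )

    Application.InspectObj(prop)                  # pop the property page for the user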

In what areas do you think XSI is way ahead of the competition?
Compositing
Real-time shaders
Hair (even with Maya’s half-hearted attempt)
Almost all types of animation
Characters
Polygon modeling
Rendering

What are your thoughts when looking back to XSI 1.0 and what it has now become with version 4?
It is roughly as predicted, but I certainly did not see all the innovations coming. My boss Gareth Morgan did, though; he is a visionary at Soft.

What do you think of the XSI community as a whole, and how has it evolved from what it was then to what it is now?
I love it… them. They are my extended family. I am amazed at how quickly it has matured; just seeing the rapid rise in the number of members of XSI Base is testament to what has become an explosion of XSI users (particularly after v3.0). With the huge number of projects being done, I am very impressed at how well they (the community) push the software. For most people 3D is now almost a religion, so imagine what it is like here at Softimage. XSI is our child. It is why there is so much passion surrounding it.

If you compare the European and the Canadian markets, what things are different, what things are similar?
Commercial production is a common thread, although there is more money in Europe (especially London) for a range of different broadcast projects, so people who want to do high-profile work often have to go to one of only a few major cities. As for the work culture, I find the differences are often in the teams. In Soho they work very, very hard, but play equally hard. They socialize together more than I have seen anywhere else. Even from my own experience, I used to go on regular snowboarding holidays with large gangs of animators. It often made for very boring conversations in local bars, always talking about work, but that is what we did. With a huge number of post houses in one square mile, I guess it more easily makes for a community. It is like a food court: all your specialist needs in one place. I would be interested to know if that type of community exists elsewhere.

What’s your advice to a person that is starting in the industry?
It depends on what they want to do – so generally I would say ask around, look on the net and drop Ed a line, or better still buy How to Get a Job in Computer Animation. If you want to be an animator, etc., then make a showreel. Look for local companies and ask if you can learn there, or just go and show them your stuff; the catch is you will probably have to do it for free. But working in a real production house really pays off in the long run. Think of it simply as slavery.

What skills are needed to join the XSI development team?
Programming is a must, but more than that, they have to be the best. Softimage has an incredibly talented bunch of devs, which is why they manage to make such wonderful software.

What are some skillsets that you believe studios are looking for these days?
TDs always, from character rigging to rendering and lighting. In film and long-form projects, TDs outnumber animators. In broadcast and commercials, generally a good all-rounder will go far. And in games, modelers, texture artists and animators make up the bulk compared to programmers. But simply put, if you can do one thing really, really well then you can find a home; it just means you might have to travel.

Where do you see the 3D industry 5 years from now?
From my private moon in the shape of an enormous pie.