Friday Flashback #223


i am 4.
customization • speed • options • power • thought • imagination • integration
SOFTIMAGE|XSI version 4.0 launch 04.19.2004


“We used both SOFTIMAGE|XSI and Avid|DS extensively in the making of the Britney Spears ‘Toxic’ video. The intelligent, fast and intuitive interface in XSI, coupled with the overall speed of the software, meant that we could get the job done faster and in a style that our competitors couldn’t match.” – Amy Yukich, Executive Producer, KromA

Avid
make manage move | media

Friday Flashback #222


Ten years ago: 64-bit SOFTIMAGE|XSI previewed
[This screenshot is] actually one we showed at GDC and IDF a few weeks ago. It’s the USS Ronald Reagan, provided to us by Digimation. The model is 1.5 million polys and there are 36 of them on screen, so there are plenty of polys in that scene too. I think that scene is taking up about 3.7 gigs of RAM.

The workstations we are using are Dell 670s and 380s, and also Intel-provided boxes, both dual Xeon and dual-core Pentium 4EEs, with 8 gigs in the dual-core boxes and 16 in the Xeon boxes. I was hoping for some SLI goodness, but the boxes are single-PCIe machines at the moment so we’re running single Quadro 3400s.
– marksch on cgtalk

Avid to Preview 64-bit SOFTIMAGE|XSI Software at Microsoft Conference
WinHEC 2005 – Seattle, WA – April 25, 2005
Avid Technology, Inc. (NASDAQ: AVID) today announced that it will be demonstrating a technology preview of a native 64-bit version of SOFTIMAGE®|XSI® 3-D animation software during WinHEC – Microsoft’s annual Windows Hardware Engineering Conference – from April 25-27. The technology preview will provide attendees with a first-hand look at a prototype of SOFTIMAGE|XSI software that is designed to take advantage of 64-bit computing architectures in order to streamline time-consuming 3-D animation tasks such as modeling, texturing, and rendering. The 64-bit architecture of SOFTIMAGE|XSI software will leverage the increased performance capabilities found in a range of technologies, including Microsoft Windows XP Professional x64 Edition, Dell Precision™ Workstations with 64-bit Intel® Xeon™ processors supporting up to 16GB of memory, and the new Dell Precision 380 with the Intel® Pentium® Processor Extreme Edition – Intel’s first dual-core processor-based platform, which also features Intel® EM64T, supporting up to 8GB of high-speed memory.

“At Softimage, our customers create incredible CG visuals for feature films, commercials, and video games – and they are consistently delivering greater realism and more sophisticated 3D content,” said Marc Stevens, director of product marketing and research & development for Softimage. “Advancements in 64-bit hardware, like dual core processors, enable artists to work at greater speeds and to produce incredible results in less time. We’re pleased to be a part of WinHEC this year in collaboration with Dell and Intel, and for the opportunity to show how 64-bit applications are poised to dramatically change the creative process for CG artists who create some of the most complex and compelling visual imagery in today’s media.”

André Bustanoby, visual effects supervisor at Stan Winston Digital, an LA-based visual effects shop, said, “At Stan Winston Digital, we are dedicated to delivering the highest level of character animation and visual effects for our projects, often with huge 3-D scenes that can slow software applications as we push system memory to its maximum capacity. With 64-bit technology, we expect that our team of artists will be able to increase the complexity of their 3-D work – without sacrificing time – to deliver greater visual experiences to viewers than ever before.” Artists at Stan Winston Digital have used SOFTIMAGE|XSI to create CG imagery for films including Sky Captain and the World of Tomorrow, Garfield, and Cat in the Hat.

The technology preview of the 64-bit version of SOFTIMAGE|XSI will demonstrate increases in performance over 32-bit memory systems. These benefits include greater interactivity, faster rendering of massively complex 3-D scenes, and a reduction in the rendering required to create special effects in longer sequences that layer 3-D, film, and video content.

Friday Flashback #219


Las Vegas, Nevada, April 7th, 2003
SOFTIMAGE ANNOUNCES SOFTIMAGE|XSI V.3.5 FEATURING SPEED, INTEROPERABILITY AND WORKFLOW ADVANCES

Continued Fast Pace of Customer-Driven New Versions Underscores Company’s Unmatched Commitment to Innovation and the Professional 3-D Market

At NAB 2003, Softimage Co., a subsidiary of Avid Technology, Inc., today announced version 3.5 of its SOFTIMAGE®|XSI® software, the industry’s leading nonlinear 3-D production environment.

In addition to including hundreds of new tools and refinements to increase creativity, productivity and reliability in any production, the SOFTIMAGE|XSI v.3.5 environment also seamlessly and completely integrates mental ray v3.2, the new version of the award-winning rendering technology from mental images GmbH & Co. KG.

The latest version of the XSI environment, which follows less than six months after the release of version 3.0, is driven by the production requirements of industry-leading customers, including Capcom, Electronic Arts, Industrial Light & Magic (ILM), Konami, Mainframe, Pixel Liberation Front (PLF), Sega, The Mill and Valve, and continues the company’s unmatched pace of development. The SOFTIMAGE|XSI v.3.5 environment further extends Softimage’s position as the professional 3-D market leader in innovation, customer responsiveness and return on investment.


Friday Flashback #217


Interview With Alain Laferrière
The Program Manager of Softimage Japan talks about his career at Softimage, the Japanese industry, and XSI 4.0.
June 28th, 2004, by Raffael Dickreuter and Will Mendez

Alain Laferrière,
Program Manager,
Softimage Japan.

How did you get started in the CG industry?
I got interested in computer imagery from a young age, playing with the first video game consoles and personal computers and following the evolution of CG with a deep interest. The “demo scene” development communities for both C-64 and Amiga computers also brought a lot of innovation in realtime graphics synthesis.
At the University of Montreal I studied computer graphics and did an M.Sc. in human-computer interfaces (agents). There, I met Réjean Gagné, Richard Laperrière and Dominique Boisvert, who would later implement the Actor Module in SI3D, and many other friends who were, or still are, working at Softimage today. After I completed my studies I got a job interview at Softimage and was hired; it was a dream come true! I started in the modeling team, then moved on to motion capture & control, then headed the new Games team, and now Special Projects Japan.

What do you do in your spare time?
I love music very much and I’ve been DJ’ing on and off since the age of 13, so I buy records and practice mixing. I also like programming personal projects, reading, cooking, nature, going out with friends, going to the gym, etc.

Tell us a bit about the highlights of your 11-year career at Softimage

  • April 14th 1993 – my first day at Softimage!
  • solving a data-loss problem with an Ascension “Flock of Birds” motion capture system that was not working properly with an Onyx computer, which was being used at a Jurassic Park theme park at Universal Studios. I implemented a fix in Montreal and was sent to Los Angeles to configure the system. I had a chance to meet the movie actors, and Steven Spielberg made an appearance; it was a very impressive “Hollywood”-type first experience for me.
  • then I worked on an interesting project: one of our customers was using a Waldo-like hand-manipulation device to radio-control the facial expressions of the robotic heads used for the Teenage Mutant Ninja Turtles movies. The actors had to wear the Turtle heads, inside which servo motors moved metal blades around their faces to flex and animate the latex skin, as well as a radio-receiver backpack hidden in the turtle shell. Since the facial animation had to be redone live at each shot, it was not possible to produce a consistent facial animation quality. First, I wrote a motion capture driver for the Waldo device, and a designer modeled the turtle head in SI3D and connected the Waldo inputs to its facial animation shapes. Now we were able to record a live facial animation using the Waldo device, then perfect it in SI3D by editing the animation curves. After that, I modified the motion capture communication protocol to support motion control, and wrote a control driver to radio-broadcast the data which had previously been motion captured and edited in SI3D. With this pipeline it was possible to author a perfect animation sequence which could be played back identically at each shot. The project had an extra dimension of danger, since there were urban legends of actors getting mutilated by the blades attached inside the robotic heads. Something that keeps your mind busy when your head is inside… ;)
  • Fall of 1994, I went to Japan with Daniel Langlois for a two-week business trip, which mutated into a large effort between Softimage and SEGA to create a first generation of 3D game authoring tools. I ended up staying there for nearly three months on that first trip, and paying frequent visits to Japan afterwards. I supervised the project, and other developers helped me design and implement the features (3D paint, color reduction, SEGA Saturn export/import and viewer, raycast polygon selection, etc.). It was a very intense and exciting coding experience. Following this, SI3D quickly became the standard for 3D game authoring in Japan.
  • RenderMap – a project which I started after talking to a Japanese game designer, who explained to me how he was planning to use clay models to create the characters of his next game. They would scan the clay models into high-resolution polygon meshes with colors at the vertices, and then transform those into texture data by rendering front and back view images and re-projecting them onto the low-resolution model. With this method there is no control over how the texture is distributed on the geometry, and although it works for polygons facing the camera, the texturing quality quickly degrades as polygons turn away from it. Instead, I thought it would be better to fire rays along the surface to capture the color information; if those rays could hit a high-resolution version of the object, then we could carry the texturing data from high-res to low-res. RenderMap solved this by using mental ray to fire rays at the intersections of triangles and texels (according to their location on the object) and accumulating the weighted color contributions into a final color per texel (a minimal sketch of this loop follows the list below). You could carry the high-res data to a low-res model by placing the high-res model in the same location as the low-res one, but making it slightly bigger. Since it uses mental ray, it also lets you burn procedural effects into texture maps, generate vertex colors, normal maps, etc. My colleague Ian Stewart implemented the much-improved version in XSI.

    Manta model before applying RenderMap – we can see the results of rendering using mental ray in the render region.

    Texture data after being pre-rendered with mental ray using RenderMap.

    Manta model after applying RenderMap.
  • dotXSI file format – an initiative which I launched over 7 years ago, at a time when our customers were asking us to design a high-level 3D data interchange solution. After many discussions with my colleagues in the Games team, we finally opted to use the Microsoft .X format concept, with many new data templates to support features which were not covered in basic .X files (a schematic example of the template syntax follows this list). Since that time, the format has become very popular in the CG and games markets as a generic data interchange solution between authoring tools and pipelines. I was initially hoping to see a good level of popularity one day, but it has now reached well beyond those early expectations. This was also an opportunity for us to understand the fundamental differences between a high-level generic format and an optimized “memory image” data format which can be loaded and used directly by a game engine without further processing. From this, we derived our strategy behind the dotXSI file format and the FTK (File Transfer Kit), a library to read and write dotXSI files which can be used to write converters between dotXSI files and optimized memory-image formats tailored to any custom game engine (or between any other format and dotXSI). I came up with the “.XSI” name suffix, which was later adopted by the marketing team for the name of the XSI product itself, although I had no involvement in that decision.
  • April 99 – moving to Japan
  • various Special Projects with Japanese clients
  • misc. SI3D features like vertex colors, generic user data, GC_ValidateMesh, SI3D 4.0; and now XSI tools
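
To make the RenderMap ray-firing idea concrete, here is a minimal C sketch of the texel accumulation loop described in the RenderMap item above. It is an editorial illustration, not the SI3D/XSI implementation (which drives mental ray): the raycast and texel-to-surface helpers are invented stubs so that the sketch compiles and runs.

```c
#include <stdio.h>

/* Illustrative RenderMap-style bake loop. All names are invented for
 * this sketch; the real tool used mental ray to trace the rays. */

typedef struct { float x, y, z; } Vec3;
typedef struct { float r, g, b; } Color;

/* Stand-in for the renderer's ray query. Here it always "hits" and
 * returns a flat grey so the sketch is self-contained and runnable. */
static int raycast(Vec3 origin, Vec3 dir, Color *hit)
{
    (void)origin; (void)dir;
    hit->r = hit->g = hit->b = 0.5f;
    return 1;                       /* 1 = ray hit the high-res model */
}

/* Stand-in inverse UV lookup: map a texel back to a position and normal
 * on the low-res surface. A real implementation rasterizes the low-res
 * triangles in UV space to build this correspondence. */
static void surface_at_texel(int u, int v, int w, int h, Vec3 *p, Vec3 *n)
{
    p->x = (u + 0.5f) / w;  p->y = (v + 0.5f) / h;  p->z = 0.0f;
    n->x = 0.0f;  n->y = 0.0f;  n->z = 1.0f;
}

#define W 256
#define H 256

int main(void)
{
    static Color tex[H][W];         /* accumulated color per texel  */
    static float wsum[H][W];        /* accumulated weight per texel */

    for (int v = 0; v < H; ++v)
        for (int u = 0; u < W; ++u) {
            Vec3 p, n;
            surface_at_texel(u, v, W, H, &p, &n);

            /* Fire a ray from the low-res surface along its normal so
             * it strikes the enclosing, slightly larger high-res model. */
            Color c;
            if (raycast(p, n, &c)) {
                float wgt = 1.0f;   /* per-sample weight, e.g. coverage */
                tex[v][u].r += wgt * c.r;
                tex[v][u].g += wgt * c.g;
                tex[v][u].b += wgt * c.b;
                wsum[v][u]  += wgt;
            }
        }

    /* Normalize the weighted contributions into final texel colors. */
    for (int v = 0; v < H; ++v)
        for (int u = 0; u < W; ++u)
            if (wsum[v][u] > 0.0f) {
                tex[v][u].r /= wsum[v][u];
                tex[v][u].g /= wsum[v][u];
                tex[v][u].b /= wsum[v][u];
            }

    printf("baked a %dx%d texture\n", W, H);
    return 0;
}
```

A real implementation fires many rays per texel, weighted by how much of each texel the surface triangles cover; placing the slightly larger high-res model around the low-res one is what lets the normal-direction rays find it.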
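
Likewise, to make the dotXSI template idea concrete, here is a schematic sketch of the text layout: named, nested templates in the spirit of Microsoft's .X format, with room for custom templates carrying engine-specific user data. The header line, template names and values below are simplified illustrations, not a byte-accurate dotXSI file.

```
xsi 0300txt 0032

SI_Model MDL-player {
   SI_Transform SRT-player {
      1.000, 1.000, 1.000,   // scale
      0.000, 0.000, 0.000,   // rotation
      0.000, 0.000, 0.000,   // translation
   }
   // A custom template of the kind used to carry engine-specific
   // user data through the pipeline (name and fields invented).
   MyGame_CollisionData {
      "collision_mesh",
      1,
   }
}
```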

Working in Japan: Was it difficult to adjust to a new location and culture?
Not really. I got a good feel for Japan during my first trip; having a routine and working on-site at SEGA was a chance to experience how life would be living here. I came back again on business trips about a dozen times, my interest growing until I relocated in April 99. Adapting to the Japanese culture is a life-long effort, but Japan is very welcoming so this can be relatively painless. Of course I always miss my friends and family, but fortunately I go back a few times a year and we stay connected by mail and phone.

What is the biggest difference when working for a company in Japan compared to Canada?
Since I work from home, the differences for me are minimal. But traditional Japanese companies are more hierarchical than American ones, so you have to be aware of that. Also, your ability to connect with a team will completely depend on your ability to communicate in Japanese.

Do companies in Japan and Northern America work closely together on projects?
Many Japanese companies have branch offices in North America; however, I don’t know how much of that is new production work, regional adaptation, or collaboration on joint projects.

How do Japanese customers differ from Canadian customers?
The main difference of course is the language. And Japanese companies employ many designers and programmers, so there are many sources of feedback reporting issues and submitting suggestions for improvements. Sometimes there are misunderstandings, or information is missing that we need to reproduce a problem, or to understand what a specific request is about and why it’s important. So I keep an eye on all incoming feedback from Japan and work with our support teams in Japan and Montreal, and our Japan resellers, to make sure we have all the information needed to fully understand and address customer requirements.

Another difference is that most customers in Japan are game developers. It is normally easier for film companies to switch pipelines from one 3D application to another than it is for game companies, because the final output for film is a rendered movie, while for games it is most often data which needs to be compatible with a run-time game engine. Game developers therefore tend to migrate pipelines at a slower pace than customers in other markets, since they need to validate the workflow and data compatibility of a new pipeline before they can adopt it in production. Japanese customers are very thorough in their analysis of new technologies and do not migrate without serious consideration. XSI has already been used in production for a while by many Japanese companies, and its popularity is now accelerating. This is very exciting to see!

When dealing with high-profile gaming companies, how do you meet their demands for new features that are not yet implemented?
We gather all incoming feedback from Japan, prioritize according to severity, popularity and the amount of work involved, and then plan the next version’s features. In cases where a customer requires something specific to them, like special training (features, SDK), assistance to set up or migrate a pipeline, or custom development, there is always the possibility of purchasing R&D time from Softimage through our Special Projects team.

Part of your job is “managing escalation of critical issues”. Can you give us an example?
If a customer reports something very bad, like a production showstopper or something which prevents the adoption of XSI in production, then we may escalate the issue internally and implement a fix which is provided to the customer as a QFE (Quick Fix Engineering). This is a service available to customers under maintenance. QFEs done in answer to issues in our last released product are always integrated into the next public release: either a point release (Service Pack) or the next main version.

Softimage 3D has been a long-time favorite of Japanese companies, and we are seeing some migration to XSI. Why has it been so hard for them to move to XSI more quickly?
There is a cost involved in migrating, since you need to train designers and port your pipeline (workflow, plugins) from one application to another. We released our first generation of game tools in SI3D at the time the first generation of 3D game consoles was coming out: SEGA Saturn, Sony PlayStation, Nintendo 64. Since we designed this in conjunction with SEGA, it quickly became the de-facto standard for 3D game authoring in Japan. Then, Alias released Maya in 1998. Although the first version was not ready for production use (Japanese customers called it “mada”, i.e. “not yet”), Alias had two years to work on Maya until we released XSI 1.0. Of course, we lost some users from SI3D to Maya during this period. It was a hard time for us, but we wanted to build a solid architecture from the beginning rather than something we would need to patch along the way. Now we can see this bet is paying off, as our development speed has greatly improved and is now imposing a pace of innovation which is getting difficult for our competitors to follow. With XSI 4.0, which includes the new powerful and affordable Foundation product and many new advanced features throughout our product line, we have everything we need to accelerate the expansion of our user community.

What features that you originally developed for SI3D have found their way into XSI?
Polygon raycasting, vertex colors, generic user data, polygon reduction, dotXSI support, RenderMap, pipeline tools (export, viewing), etc. These were implemented in XSI by my colleagues in Montreal. I wrote a few things for XSI, like a User Normal Editing interactive tool and some realtime shader examples which I published on XSINet. Now I am studying the new Xgs and CDH features of XSI 4.0 (which I think are really cool, btw!).

What features excel in XSI for game companies?
Character modeling and the CDK (Character Development Kit), polygonal modeling, polygon reduction, texturing, RenderMap / RenderVertex, CDH (Custom Display Host) and Xgs (Graphics Synthesizer), realtime shaders, the dotXSI pipeline and FTK, the ability to attach generic user data to scene elements and have it automatically supported through the dotXSI pipeline, the general ease and flexibility of customization using XSI Net View, Relational Views and Synoptic Views, and finally, the XSI SDK itself, which is rich and now provides strong support for UI customization, among many new things in 4.0.

Will 4.0 change the gaming industry in Japan, and the gaming industry in general?
Definitely. The low price point of our new Foundation product is opening up access to XSI for the middle and low-end segments of CG production. It will also become easier for 2nd and 3rd parties to collaborate with high-end clients using XSI on joint projects.

As for the features of 4.0, there are many innovations which bring exciting new opportunities to our users. For example, the new rigid body dynamics are based on ODE (Open Dynamics Engine), an open-source, royalty-free solution. It is possible for game developers to adopt ODE in their run-time engine and create realtime simulations which are entirely compatible with how things behave in XSI (or you could simply plot / bake and export the animation if you do not want to recompute it in the game engine). Also, the CDK (Character Development Kit) brings all the tools needed for making custom character animation rigs.
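
Since ODE is open source, the run-time half of that round trip is easy to try. Below is a minimal, self-contained C sketch of stepping a single ODE rigid body under gravity, the way a game engine's update loop would; it uses only standard ODE calls and is of course not XSI's own integration code.

```c
#include <stdio.h>
#include <ode/ode.h>

/* Minimal ODE (Open Dynamics Engine) sketch: one rigid body falling
 * under gravity, stepped at 60 Hz the way a game run-time would.
 * Build with: gcc odefall.c -lode -o odefall */
int main(void)
{
    dInitODE();

    dWorldID world = dWorldCreate();
    dWorldSetGravity(world, 0.0, -9.81, 0.0);

    dBodyID box = dBodyCreate(world);
    dMass mass;
    dMassSetBox(&mass, 1.0, 1.0, 1.0, 1.0);  /* density, lx, ly, lz */
    dBodySetMass(box, &mass);
    dBodySetPosition(box, 0.0, 10.0, 0.0);

    /* Step the simulation for one second of game time. */
    for (int i = 0; i < 60; ++i)
        dWorldStep(world, 1.0 / 60.0);

    const dReal *p = dBodyGetPosition(box);
    printf("box after 1s of free fall: y = %.3f\n", (double)p[1]);

    dWorldDestroy(world);
    dCloseODE();
    return 0;
}
```

The plot / bake option mentioned above corresponds to recording the body transforms (dBodyGetPosition and friends) at each step and exporting them as animation, instead of recomputing the simulation in the engine.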

The CDH (Custom Display Host) allows an external application to communicate with XSI and display its output in an XSI view. It is possible for XSI to drive an external application and vice-versa; this external application could be a game engine synchronized with XSI, or any custom tool which can be interacted with from within XSI. The Xgs (XSI Graphics Synthesizer) lets you create scene-level realtime rendering effects and can communicate with realtime shaders for advanced effects.

The Polygon Reduction tool in 4.0 is simply amazing; Texture Layers provide a very powerful workflow for multi-texturing management; Material Libraries simplify material management; and the SDK is both rich and powerful.

XSI v4.0 is our biggest release since v1.0. It contains many other interesting features which are not listed here, as well as a large number of fixes for issues reported by our customers around the world.

What advice would you give to an artist from North America or Europe who wants to start working in Japan?
Learn the whole package. Designers in Japan do not tend to specialize in one thing, but instead are called to work on many different aspects of production: modeling, character design, animation, texturing, custom data editing, etc. The more flexible and proficient you are with the package, the more easily you will connect with a Japanese production team.

Learn Japanese. Even though you can usually find a few Japanese people in each company who speak a good level of English, that is more the exception than the rule, and it would be better not to rely on it to bridge the gap with your Japanese colleagues. The better you understand and speak Japanese, the more chances you will have to connect with the team and contribute to the project. An American friend of mine who worked at a game company in Tokyo learned conversational Japanese very quickly because he was completely immersed in a Japanese working environment, but if you can learn some before heading to Japan it will make things a lot smoother and expand your opportunities.

Friday Flashback #216


Jumanji’s Amazing Animals (Computer Graphics World, 1 Jan 1996)
ILM used SOFTIMAGE|3D to animate a CG elephant walking step by step over a crushed car.

In one memorable shot in the stampede sequence, a CG elephant walks up and over a car, crunching the car underfoot. For this sequence, the run cycles were abandoned in favor of hand animation.


Here’s the complete article (PDF, 7.4MB). The quality of photocopies back in 1996 was pretty bad.

Friday Flashback #215


Produced by Softimage in Montréal, 1994-1995, Osmose was an immersive virtual environment utilizing 3D computer graphics and interactive 3D sound, a head-mounted display, and real-time motion tracking based on breathing and balance.

John Harrison, “immersed” in Osmose development

Installation at Musée d’art contemporain de Montréal

Georges Mauro, immersed in Osmose

Shadow of immersant as seen by audience

Scene from Osmose

What is Osmose

“…By changing space, by leaving the space of one’s usual sensibilities, one enters into communication with a space that is psychically innovating. For we do not change place, we change our nature.”
– Gaston Bachelard, The Poetics of Space, 1964

Osmose is an immersive virtual environment utilizing 3D computer graphics and interactive 3D sound, a head-mounted display and real-time motion tracking based on breathing and balance.
Tree

Created by a team led by artist Char Davies, Director of Visual Research at Softimage (Montréal), Osmose is a space for exploring the perceptual interplay between self and world, i.e. a place for facilitating awareness of one’s own self as embodied consciousness in enveloping space. This work challenges conventional approaches to virtual reality and explores what Davies believes to be the most intriguing aspect of the medium, namely its capacity to allow us to explore what it means, essentially, to “be-in-the-world”.

Immersion in Osmose begins with the donning of the head-mounted display and motion-tracking vest. The first virtual space encountered is a three-dimensional Cartesian Grid which functions as an orientation space. With the immersant’s first breaths, the grid gives way to a clearing in a forest. There are a dozen world-spaces in Osmose, most based on metaphorical aspects of nature. These include Clearing, Forest, Tree, Leaf, Cloud, Pond, Subterranean Earth, and Abyss. There is also a substratum, Code, which contains much of the actual software used to create the work, and a superstratum, Text, a space consisting of quotes from the artist and excerpts of relevant texts on technology, the body and nature. Code and Text function as conceptual parentheses around the worlds within. Through use of their own breath and balance, immersants are able to journey anywhere within these worlds as well as hover in the ambiguous transition areas in between. After fifteen minutes of immersion, the LifeWorld appears and slowly but irretrievably recedes, bringing the session to an end.

The Forest

In contrast to the hard-edged realism of conventional 3D computer graphics, the visual aesthetic of Osmose is soft, luminous and transparent, consisting of translucent textures and flowing particles. Figure/ground relationships are spatially ambiguous, and transitions between worlds are subtle and slow. This mode of representation serves to ‘evoke’ rather than illustrate and is derived from Davies’ previous work as a painter. The sounds within Osmose are spatially multi-dimensional and have been designed to respond to changes in the immersant’s location, direction and speed: the source of their complexity is a sampling of a male and female voice.

Tree in transition

The user interface of Osmose is based on full-body immersion in 360-degree spherical, enveloping space, through use of a head-mounted display. Solitude is a key aspect of the experience, as the artist’s goal is to connect the immersant not to others but to the depths of his or her own self. In contrast to interface techniques such as joysticks, Osmose incorporates the intuitive processes of breathing and balance as the primary means of navigating within the virtual world. By breathing in, the immersant is able to float upward; by breathing out, to fall; and by subtly altering the body’s centre of balance, to change direction – a method inspired by the scuba diving practice of buoyancy control. The experience of being spatially enveloped, of floating rather than flying or driving, is key. Whereas in conventional VR the body is often reduced to little more than a probing hand and roving eye, immersion in Osmose depends on the body’s most essential living act, that of breath – not only to navigate, but more importantly, to attain a particular state of being within the virtual world. In this state, usually achieved within ten minutes of immersion, most immersants experience a shift of awareness in which the urge for action is replaced by contemplative free-fall. Being supersedes doing.
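
As a purely editorial illustration of that buoyancy-style mapping (nothing here is Osmose’s actual code; the vest sample fields, ranges and gains are all invented), a few lines of C:

```c
#include <stdio.h>

/* Invented sketch of the breath-and-balance mapping described above:
 * inhaling floats the immersant upward, exhaling lets them sink, and a
 * subtle shift of the centre of balance changes direction. */
typedef struct {
    float breath;   /* -1 = full exhale .. +1 = full inhale (assumed) */
    float lean_x;   /* centre-of-balance offset, left/right           */
    float lean_z;   /* centre-of-balance offset, forward/back         */
} VestSample;

static void update_immersant(const VestSample *s, float pos[3], float dt)
{
    const float rise_gain  = 0.8f;  /* illustrative tuning constants */
    const float drift_gain = 0.5f;

    pos[1] += s->breath * rise_gain  * dt;  /* breathe in: float up  */
    pos[0] += s->lean_x * drift_gain * dt;  /* lean: drift sideways  */
    pos[2] += s->lean_z * drift_gain * dt;  /*       and fore/aft    */
}

int main(void)
{
    float pos[3] = { 0.0f, 0.0f, 0.0f };
    VestSample s = { 0.6f, 0.1f, 0.0f };    /* gentle inhale, slight lean */

    for (int i = 0; i < 600; ++i)           /* ten seconds at 60 Hz */
        update_immersant(&s, pos, 1.0f / 60.0f);

    printf("immersant at (%.2f, %.2f, %.2f)\n", pos[0], pos[1], pos[2]);
    return 0;
}
```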

The Subterranean

The Lifeworld

Based on the responses of several thousand individuals who have been immersed in Osmose since the summer of 1995, the after-effects of immersion can be quite profound. Many individuals feel as if they have rediscovered an aspect of themselves, of being alive in the world, which they had forgotten – an experience they find to be very emotional, leading some even to weep after immersion. Such responses have confirmed the artist’s belief that traditional interface boundaries between machine and human can be transcended even while re-affirming our corporeality, and that Cartesian notions of space as well as illustrative realism can effectively be replaced by more evocative alternatives. Immersive virtual space, when stripped of its conventions, can provide an intriguing spatio-temporal context in which to explore the self’s subjective experience of “being-in-the-world” – as embodied consciousness in an enveloping space where boundaries between inner/outer and mind/body dissolve.

The public installation of Osmose includes large-scale stereoscopic video and audio projection of imagery and sound transmitted in real-time from the point-of-view of the individual in immersion (the “immersant”): this projection enables an audience, wearing polarizing glasses, to witness each immersive journey as it unfolds. Although immersion takes place in a private area, a translucent screen equal in size to the video screen enables the audience to observe the body gestures of the immersant as a poetic shadow-silhouette.
Credits

Charlotte Davies: Concept and direction
Georges Mauro: Creation of graphics
John Harrison: Virtual Reality software programming
D. Blaszczak: Sound design and programming
rb@accessone.com: Music composition and programming