Friday Flashback #217


Interview With Alain Laferrière
The Program Manager of Softimage Japan talks about his career at Softimage, the Japanese CG industry, and XSI 4.0.
June 28th, 2004, by Raffael Dickreuter and Will Mendez

1088446379_al_1
Alain Laferrière,
Program Manager,
Softimage Japan.

How did you get started in the cg industry?
I got interested in computer imagery from a young age, playing with the first video game consoles and personal computers and following the evolution of CG with a deep interest. The “demo scene” development communities for both C-64 and Amiga computers also brought a lot of innovation in realtime graphics synthesis.
At the University of Montreal I studied computer graphics and did an M.Sc. in human-computer interfaces (agents). There, I met Réjean Gagné, Richard Laperrière and Dominique Boisvert, who would later implement the Actor Module in SI3D, and many other friends who have worked, or still work, at Softimage. After I completed my studies I got a job interview at Softimage and was hired; it was a dream come true! I started in the modeling team, then moved on to motion capture & control, then headed the new Games team, and now Special Projects Japan.

What do you do in your spare time?
I love music very much and I’ve been DJ’ing on and off since the age of 13, so I buy records and practice mixing. I also like programming personal projects, reading, cooking, nature, going out with friends, going to the gym, etc.

Tell us a bit about the highlights of your 11-year career at Softimage

  • April 14th, 1993 – my first day at Softimage!
  • solving a data-loss problem with an Ascension “Flock of Birds” motion capture system that was not working properly with an Onyx computer, which was being used at a Jurassic Park theme park at Universal Studios. I implemented a fix in Montreal and was sent to Los Angeles to configure the system. I had a chance to meet the movie actors, and Steven Spielberg made an appearance; it was a very impressive “Hollywood”-type first experience for me.
  • then I worked on an interesting project: one of our customers was using a Waldo-like hand manipulation device to radio-control the facial expressions of the robotic heads used for the Teenage Mutant Ninja Turtles movies. The actors had to wear the Turtle heads, inside which servo-motors moved metal blades around their faces to flex and animate the latex skin, along with a radio-receiver backpack hidden in the turtle shell. Since the facial animation had to be performed live at each shot, it was not possible to produce a consistent facial animation quality. First, I wrote a motion capture driver for the Waldo device, and a designer modeled the turtle head in SI3D and connected the Waldo inputs to its facial animation shapes. Now we were able to record a live facial animation using the Waldo device, then perfect it in SI3D by editing the animation curves. After that, I modified the motion capture communication protocol to support motion control, and wrote a control driver to radio-broadcast the data which had previously been motion captured and edited in SI3D. With this pipeline it was possible to author a perfect animation sequence which could be played back identically at each shot. The project had an extra dimension of danger, since there were urban legends of actors getting mutilated by the blades attached inside the robotic heads. Something that keeps your mind busy when your head is inside… ;)
  • Fall of 1994 – I went to Japan with Daniel Langlois for a two-week business trip, which mutated into a large effort between Softimage and SEGA to create a first generation of 3D game authoring tools. I ended up staying there for nearly three months on that first trip, and paid frequent visits to Japan afterwards. I supervised the project, and other developers helped me design and implement the features (3D paint, color reduction, SEGA Saturn Export / Import and Viewer, raycast polygon selection, etc.). It was a very intense and exciting coding experience. Following this, SI3D quickly became the standard for 3D game authoring in Japan.
  • RenderMap – a project which I started after talking to a Japanese game designer, who explained to me how he was planning to use clay models to create the characters of his next game. They would scan the clay models into high-resolution polygon meshes with colors at the vertices, and then transform those into texture data by rendering front and back view images and re-projecting them onto the low-resolution model. With this method there is no control over how the texture is distributed on the geometry, and although it works for polygons facing the camera, the texturing quality quickly degrades as polygons turn away from it. Instead, I thought it would be better to fire rays along the surface to capture the color information: if those rays could hit a high-resolution version of the object, then we could carry the texturing data from high-res to low-res. RenderMap solved this by using mental ray to fire rays at the intersections of triangles and texels (according to their location on the object) and accumulating the weighted color contributions into a final color per texel. You could carry the high-res data to a low-res model by placing the high-res model in the same location as the low-res one, but making it slightly bigger (a minimal sketch of this idea appears after this list). Since it uses mental ray, it also makes it possible to bake procedural effects into texture maps, generate vertex colors, normal maps, etc. My colleague Ian Stewart implemented the much improved version in XSI.

    1088446400_al_3
    Manta model before applying RenderMap – we can see the results of rendering using mental ray in the render region.

    1088446540_al4
    Texture data after being pre-rendered with mental ray using RenderMap.

    1088446556_al_2
    Manta model after applying RenderMap.
  • dotXSI file format – an initiative which I launched over 7 years ago, at a time when our customers were asking us to design a high-level 3D data interchange solution. After much discussion with my colleagues in the Games team, we finally opted to build on the Microsoft .X format concept, with many new data templates to support features which were not covered in basic .X files. Since then, the format has become very popular in the CG and games markets as a generic data interchange solution between authoring tools and pipelines. I was initially hoping to see a good level of popularity one day, but it has now reached well beyond those early expectations. This was also an opportunity for us to understand the fundamental differences between a high-level generic format and an optimized “memory image” data format which can be loaded and used directly by a game engine without further optimization. From this, we derived our strategy behind the dotXSI file format and the FTK (File Transfer Kit), a library for reading and writing dotXSI files which can be used to write converters between dotXSI files and optimized memory-image formats tailored to any custom game engine (or between any other format and dotXSI). I came up with the “.XSI” name suffix, which was later adopted by the marketing team for the name of the XSI product itself, although I had no involvement in that decision.
  • April 1999 – moving to Japan
  • various Special Projects with Japanese clients
  • misc. SI3D features like vertex colors, generic user data, GC_ValidateMesh, SI3D 4.0; and now XSI tools
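
To make the RenderMap idea described above concrete, here is a rough Python sketch of the texel-baking loop. It was written for this article and is not Softimage's actual implementation: surface_point_at and raycast_highres are hypothetical stand-ins for the real scene queries, and the weighted multi-sample accumulation of the real feature is reduced to a single sample per texel.

    # Sketch of the RenderMap concept: for every texel of the low-res model,
    # find the corresponding surface point and fire a ray outward; the
    # slightly bigger high-res model is hit first and supplies the color.
    def bake_texture(width, height, surface_point_at, raycast_highres):
        texels = [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                u = (x + 0.5) / float(width)     # texel center in UV space
                v = (y + 0.5) / float(height)
                sample = surface_point_at(u, v)  # (position, normal) or None
                if sample is None:
                    continue                     # texel covered by no triangle
                position, normal = sample
                color = raycast_highres(position, normal)
                if color is not None:
                    texels[y][x] = color         # carry high-res color to the low-res map
        return texels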

Working in Japan: Was it difficult to adjust to a new location and culture?
Not really. I got a good feel for Japan during my first trip; having a routine and working on-site at SEGA was a chance to experience what living here would be like. I came back on business trips about a dozen times, my interest growing until I relocated in April 1999. Adapting to the Japanese culture is a life-long effort, but Japan is very welcoming, so this can be relatively painless. Of course I always miss my friends and family, but fortunately I go back a few times a year and we stay connected by mail and phone.

What is the biggest difference when working for a company in Japan compared to Canada?
Since I work from home, the differences for me are minimal. But traditional Japanese companies are more hierarchical than American ones, so you have to be aware of that. Also, your ability to connect with a team will depend completely on your ability to communicate in Japanese.

Do companies in Japan and Northern America work closely together on projects?
Many Japanese companies have branch offices in North America; however, I don't know how much of that is new production work, regional adaptation, or collaboration on joint projects.

How do Japanese customers differ from Canadian customers?
The main difference of course is the language. Also, Japanese companies employ many designers and programmers, so there are many sources of feedback reporting issues and submitting suggestions for improvements. Sometimes there are misunderstandings, or information is missing that we need to reproduce a problem, or to understand what a specific request is about and why it's important. So I keep an eye on all incoming feedback from Japan and work with our support teams in Japan and Montreal, and with our resellers in Japan, to make sure we have all the information needed to fully understand and address customer requirements.

Another difference is that most customers in Japan are game developers. It is normally easier for film companies to switch pipelines from one 3D application to another than for game companies, because the final output for film is a rendered movie, while for games it is most often data which needs to be compatible with a run-time game engine. Our game-developer customers therefore tend to migrate pipelines at a slower pace than customers in other markets, since they need to validate workflow and data compatibility with a new pipeline before they can adopt it in production. Japanese customers are very thorough in their analysis of new technologies and do not migrate without serious consideration. XSI has already been used in production for a while by many Japanese companies, and its popularity is now accelerating. This is very exciting to see!

When dealing with high profile gaming companies how do you meet their demands for new features that are not implemented?
We gather all incoming feedback from Japan, prioritize according to severity, popularity and the amount of work involved, and then plan the features of the next version. In cases where a customer requires something specific to them, like special training (features, SDK), assistance to set up or migrate a pipeline, or custom development, there is always the possibility of purchasing R&D time from Softimage through our Special Projects team.

Part of your job is “managing escalation of critical issues”. Can you give us an example?
If a customer reports something very serious, like a production showstopper or something which prevents the adoption of XSI in production, then we may escalate the issue internally and implement a fix which is provided to the customer as a QFE (Quick Fix Engineering). This is a service available to customers under maintenance. QFEs done in response to issues in our last released product are always integrated into the next public release: either a point release (Service Pack) or the next main version.

Softimage 3D has long been a favorite of Japanese companies, and we are seeing some migration to XSI. Why has it been so hard for them to move to XSI more quickly?
There is a cost involved in migrating, since you need to train designers and port your pipeline (workflow, plugins) from one application to another. We released our first generation of game tools in SI3D at the time when the first generation of 3D game consoles was coming out: SEGA Saturn, Sony PlayStation, Nintendo 64. Since we designed this in conjunction with SEGA, it quickly became the de facto standard for 3D game authoring in Japan. Then, Alias released Maya in 1998. Although the first version was not ready for production use (Japanese customers called it “mada”, i.e. “not yet”), Alias had two years to work on Maya until we released XSI 1.0. Of course, we lost some users from SI3D to Maya during this period. It was a hard time for us, but we wanted to build a solid architecture from the beginning rather than something we would need to patch along the way. Now we can see this bet is paying off, as our development speed has greatly improved and is setting a pace of innovation which is becoming difficult for our competitors to follow. With XSI 4.0, which includes the new powerful and affordable Foundation product and many new advanced features throughout our product line, we have everything we need to accelerate the expansion of our user community.

What features that you originally developed for SI3D have found their way into XSI?
Polygon raycasting, vertex colors, generic user data, polygon reduction, dotXSI support, RenderMap, pipeline tools (export, viewing), etc. These were implemented in XSI by my colleagues in Montreal. I wrote a few things for XSI myself, like an interactive User Normal Editing tool and some realtime shader examples which I published on XSINet. Now I am studying the new Xgs and CDH features of XSI 4.0 (which I think are really cool, by the way!).

What features excel in XSI for game companies?
Character modeling and the CDK (Character Development Kit), polygonal modeling, polygon reduction, texturing, RenderMap / RenderVertex, CDH (Custom Display Host) and Xgs (Graphics Synthesizer), Realtime Shaders, the dotXSI pipeline and FTK, the ability to attach generic user data to scene elements and have it automatically supported through the dotXSI pipeline, the general ease and flexibility of customization using the XSI Net View, Relational Views and Synoptic Views, and finally the XSI SDK itself, which is rich and now provides strong support for UI customization, among many new things in 4.0.

Will 4.0 change the gaming industry in Japan, and the gaming industry in general?
Definitely. The low price point of our new Foundation product is opening up access to XSI for the middle and low-end segments of CG production. It will also become easier for 2nd and 3rd parties to collaborate with high-end clients using XSI on joint projects.

As for the features of 4.0, there are many innovations which bring exciting new opportunities to our users. For example, the new rigid body dynamics are based on ODE (Open Dynamics Engine), an open-source, royalty-free solution. Game developers can adopt ODE in their run-time engine and create realtime simulations which behave exactly as they do in XSI (or you can simply plot / bake and export the animation if you do not want to recompute it in the game engine). Also, the CDK (Character Development Kit) brings all the tools needed for making custom character animation rigs.
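
Since ODE itself is open source, a run-time engine can step the very same simulation. As a minimal illustration (not XSI code), here is how a falling rigid body looks when driven through the open-source PyODE bindings; the parameter values are arbitrary:

    import ode  # PyODE bindings for the Open Dynamics Engine

    world = ode.World()
    world.setGravity((0.0, -9.81, 0.0))

    body = ode.Body(world)
    mass = ode.Mass()
    mass.setSphere(2500.0, 0.05)        # density and radius: a small, dense ball
    body.setMass(mass)
    body.setPosition((0.0, 2.0, 0.0))

    dt = 1.0 / 30.0                     # one simulation step per frame at 30 fps
    for frame in range(90):
        world.step(dt)
        x, y, z = body.getPosition()
        print("frame %d: height %.3f" % (frame, y))  # the ball falls under gravity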

The CDH (Custom Display Host) allows an external application to communicate with XSI and display its output in an XSI view. XSI can drive the external application and vice versa; this external application could be a game engine synchronized with XSI, or any custom tool which can be interacted with from within XSI. The Xgs (XSI Graphics Synthesizer) makes it possible to create scene-level realtime rendering effects and can communicate with realtime shaders for advanced effects.

The Polygon Reduction tool in 4.0 is simply amazing, Texture Layers provide a very powerful workflow for multi-texturing management, Material Libraries simplify material management and the SDK is both rich and powerful.

XSI v4.0 is our biggest release since v1.0. It contains many other interesting features which are not listed here, as well as a large number of fixes for issues reported by our customers around the world.

What advice would you give to an artist from North America or Europe who wants to start working in Japan?
Learn the whole package. Designers in Japan do not tend to specialize in one thing, but instead are expected to work on many different aspects of production: modeling, character design, animation, texturing, custom data editing, etc. The more flexible and proficient you are with the package, the more easily you will connect with a Japanese production team.

Learn Japanese. Even though you can usually find a few people in each company who speak English well, that is the exception rather than the rule, and it would be better not to rely on it to bridge the gap with your Japanese colleagues. The better you understand and speak Japanese, the more chances you will have to connect with the team and contribute to the project. An American friend of mine who worked at a game company in Tokyo learned conversational Japanese very quickly because he was completely immersed in a Japanese working environment, but if you can learn some before heading to Japan it will make things a lot smoother and expand your opportunities.

Friday Flashback #216


Jumanji’s Amazing Animals (Computer Graphics World, 1 Jan 1996)
ILM used SOFTIMAGE|3D to animate a CG elephant walking step by step over a crushed car.
jumanji

In one memorable shot in the stampede sequence, a CG elephant walks up and over a car, crunching the car underfoot. For this sequence, the ‘run’ cycles were abandoned in favor of hand animation.

elephant

Here’s the complete article (PDF, 7.4MB). The quality of photocopies back in 1996 was pretty bad.

Friday Flashback #215


Produced by Softimage in Montréal in 1994-1995, Osmose was an immersive virtual environment utilizing 3D computer graphics and interactive 3D sound, a head-mounted display and real-time motion tracking based on breathing and balance.

john
John Harrison, “immersed” in Osmose development

install
Installation at Musée d’art contemporain de Montréal

georges
Georges Mauro, immersed in Osmose

shadow
Shadow of immersant as seen by audience

treepondred
Scene from Osmose

What is Osmose

“…By changing space, by leaving the space of one’s usual sensibilities, one enters into communication with a space that is psychically innovating. For we do not change place, we change our nature.”
–Gaston Bachelard, The Poetics of Space, 1964

Clearing with Tree

Osmose is an immersive virtual environment utilizing 3D computer graphics and interactive 3D sound, a head-mounted display and real-time motion tracking based on breathing and balance.
tree
Tree

Created by a team led by artist Char Davies, Director of Visual Research at Softimage (Montréal), Osmose is a space for exploring the perceptual interplay between self and world, i.e. a place for facilitating awareness of one’s own self as embodied consciousness in enveloping space. This work challenges conventional approaches to virtual reality and explores what Davies believes to be the most intriguing aspect of the medium, namely its capacity to allow us to explore what it means, essentially, to “be-in-the-world”.

Immersion in Osmose begins with the donning of the head-mounted display and motion-tracking vest. The first virtual space encountered is a three-dimensional Cartesian Grid which functions as an orientation space. With the immersant’s first breaths, the grid gives way to a clearing in a forest. There are a dozen world-spaces in Osmose, most based on metaphorical aspects of nature. These include Clearing, Forest, Tree, Leaf, Cloud, Pond, Subterranean Earth, and Abyss. There is also a substratum, Code, which contains much of the actual software used to create the work, and a superstratum, Text, a space consisting of quotes from the artist and excerpts of relevant texts on technology, the body and nature. Code and Text function as conceptual parentheses around the worlds within. Through use of their own breath and balance, immersants are able to journey anywhere within these worlds as well as hover in the ambiguous transition areas in between. After fifteen minutes of immersion, the LifeWorld appears and slowly but irretrievably recedes, bringing the session to an end.

forest
The Forest

In contrast to the hard-edged realism of conventional 3D computer graphics, the visual aesthetic of Osmose is soft, luminous and transparent, consisting of translucent textures and flowing particles. Figure/ground relationships are spatially ambiguous, and transitions between worlds are subtle and slow. This mode of representation serves to ‘evoke’ rather than illustrate and is derived from Davies’ previous work as a painter. The sounds within Osmose are spatially multi-dimensional and have been designed to respond to changes in the immersant’s location, direction and speed: the source of their complexity is a sampling of a male and female voice.

rosetree
Tree in transition

The user-interface of Osmose is based on full-body immersion in 360-degree spherical, enveloping space, through use of a head-mounted display. Solitude is a key aspect of the experience, as the artist’s goal is to connect the immersant not to others but to the depths of his or her own self. In contrast to interface techniques such as joysticks, Osmose incorporates the intuitive processes of breathing and balance as the primary means of navigating within the virtual world. By breathing in, the immersant is able to float upward; by breathing out, to fall; and by subtly altering the body’s centre of balance, to change direction, a method inspired by the scuba diving practice of buoyancy control. The experience of being spatially enveloped, of floating rather than flying or driving, is key. Whereas in conventional VR the body is often reduced to little more than a probing hand and roving eye, immersion in Osmose depends on the body’s most essential living act, that of breath: not only to navigate but, more importantly, to attain a particular state-of-being within the virtual world. In this state, usually achieved within ten minutes of immersion, most immersants experience a shift of awareness in which the urge for action is replaced by contemplative free-fall. Being supersedes doing.

rocks
The Subterranean

lifeworld
The Lifeworld

Based on the responses of several thousand individuals who have been immersed in Osmose since the summer of 1995, the after-effect of immersion in Osmose can be quite profound. Many individuals feel as if they have rediscovered an aspect of themselves, of being alive in the world, which they had forgotten, the experiencing of which they find to be very emotional, leading some to even weep after immersion. Such response has confirmed the artist’s belief that traditional interface boundaries between machine and human can be transcended even while re-affirming our corporeality, and that Cartesian notions of space as well as illustrative realism can effectively be replaced by more evocative alternatives. Immersive virtual space, when stripped of its conventions, can provide an intriguing spatio-temporal context in which to explore the self’s subjective experience of “being-in-the-world” — as embodied consciousness in an enveloping space where boundaries between inner/outer, and mind/body dissolve.

The public installation of Osmose includes large-scale stereoscopic video and audio projection of imagery and sound transmitted in real-time from the point-of-view of the individual in immersion (the “immersant”): this projection enables an audience, wearing polarizing glasses, to witness each immersive journey as it unfolds. Although immersion takes place in a private area, a translucent screen equal in size to the video screen enables the audience to observe the body gestures of the immersant as a poetic shadow-silhouette.
Credits

Charlotte Davies: Concept and direction
Georges Mauro: Creation of graphics
John Harrison: Virtual Reality software programming
D. Blaszczak: Sound design and programming
rb@accessone.com: Music composition and programming

Friday Flashback #213


From the Softimage Customer Stories, volume 1, issue 1, a 2001 customer story on Aldis Animation
Click to read the full story (PDF) or scroll down.
customer_story_the_aldis_ascension_2001_page1

THE ALDIS ASCENSION: SOFTIMAGE|XSI Helps Aldis Animation Keep Moving On Up
by Michael Abraham

When last we visited with Kim Aldis, founder and co-owner of London’s Aldis Animation Company, he and his crew were busy putting the beta version of SOFTIMAGE|XSI v.1 (codename Sumatra) through its paces.

Roughly a year after our first conversation, I reconnected with Aldis at his home number. I’d called the Aldis Animation offices the previous day, but the sound of holiday celebrations in the background suggested it wasn’t a very good time to talk. Damn the holidays anyway; they play havoc with the work schedules of irredeemable procrastinators such as myself. But I digress.

When I spoke with Aldis the morning after the night before, he sounded surprisingly upbeat. It’s been a good year for the company. So good, in fact, that everybody was feeling a little fried as the Christmas season approached. A party was definitely in order.

“We’ve had a pretty good year,” Aldis admits, with typical understatement. “We branded all the UEFA Cup pieces for Ford and did a whole bunch of stuff for the CITV network here in the UK. More recently, we created some titles for the British television game show Blankety-Blank (the British version of the old celebrity game show The Match Game). We also created backgrounds using SOFTIMAGE|XSI v.1.5 in conjunction with Avid|DS for a very challenging pop video promo. That project was really an interesting one.”

“Interesting” in this instance apparently means intensity of the mind-bending variety. As it turns out, the Hatiras video (entitled “Spaced Invader” and featuring the talents of nefarious MC/rapper Slarta John) was shot entirely on bluescreen, leaving Aldis to create all the backgrounds using SOFTIMAGE|XSI v.1.5. That is a lot of work in and of itself, but after careful consideration, Aldis felt he was up to it. That was before Defected Records and Vigilante Productions realized that the video should be out before Christmas, which significantly compressed the timeframe.

Even with the original deadline compressed to a fortnight, Aldis isn’t one to complain. Adhering to his philosophy that there is always a solution within Softimage, he prepared to put his beta version of SOFTIMAGE|XSI v.1.5 under some of the most intense pressure the system has ever seen.

spaced002

spaced004

spaced003

spaced001

spaced006

spaced005

“Basically, they shot all the footage of Slarta John against a bluescreen,” explains Aldis. “About the same time they were shooting, I started working on the graphics. By the end of the first week, the footage had been offlined. We took that footage into Avid|DS together with the graphics I had managed to create by the start of the second week. Aries Brooker did a great job on the compositing, editing and effects using the Avid|DS system. I carried on working on the remaining graphics in the meantime, so it was all quite efficient. What was most encouraging in the midst of all the chaos though, was how well the new version of SOFTIMAGE|XSI performed. I’ve always really liked and relied on the system in the past, of course, but we’ve never put it under pressure quite this intense. We rendered an enormous number of frames. We did a lot of rotoscoping and matching of backgrounds. And, of course, there was all that bluescreen footage to deal with. I can quite honestly say that I wouldn’t have been able to get it done without version 1.5.”

Despite the stress and hard work, however, Aldis insists that the project was a thoroughly enjoyable one. Much of the credit for that goes to co-directors Ben Hume-Paton and Steve Lowe.

“Anytime you can work with directors with whom you can get on, you know that you’ve got a good thing going,” says Aldis knowingly. “It was the first time I’d worked with Ben, and we seemed to share the same vision of things. It was a pleasure, in spite of all the problems and even though we had to work so hard.”

Aldis also gives considerable credit to the latest version of SOFTIMAGE|XSI, for both the success of the project and the positive experience it ultimately produced.

“It was great to see XSI working so well,” he says happily. “I really hope that people will sit up and take notice of the vast improvements in version 1.5. I love it: it goes from strength to strength. My favorite parts are the Render Tree and the scripting capabilities, which have both come a long, long way since the first version. The much-improved rotoscoping was key to this project. We had backgrounds that would refresh completely and consistently in real time. That means you can rotoscope, then play it back right away. That’s something you could never do before, and it’s invaluable. I also used scripts to set up my camera constraints. A single scene in the pop video might require 20 to 30 constraining objects for a single camera. I was able to script the constraints, then drop them into the timeline of the Animation Mixer. At that point, the timing is set up, and all I have to do is scroll through, position the camera on each shot and bang, I’m away!”
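
As an aside, the constraint scripting Aldis describes might look roughly like the following hypothetical Python sketch against the Softimage scripting API; the camera and null names are invented for illustration, and ApplyCns is the command for applying a constraint preset:

    # Run inside the Softimage script editor; Application is the built-in
    # application object. Constrain the camera to a series of target nulls,
    # whose constraints can then be arranged as clips in the Animation Mixer.
    app = Application
    num_targets = 20                    # e.g. 20 constraining objects per scene
    for i in range(1, num_targets + 1):
        target = "CamTarget%d" % i      # hypothetical naming scheme
        app.ApplyCns("Pose", "Camera", target)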

After the very short respite over Christmas, Kim and his team at Aldis Animation are preparing for a number of challenging projects. They are currently in negotiations to create special effects for a popular television series, and they have just signed an agreement with two character designers to create their own animated series.

“Suffice it to say that the character designers are very well thought of,” says Aldis mysteriously. “This is a new direction for us, but one that we’re all very eager to take. In fact, I’ll be working on some characterizations over the holidays, with the idea of getting a trailer out early in the New Year. We’ll also be setting up a subsidiary company to handle the in-house productions we’re planning on doing in the coming months and years, so these are exciting times.”

With 2001 now upon us, it’s fair to say that one good year deserves another at Aldis Animation.

Using ICE to do UV remapping on instances


I was playing around with Softimage, trying to set up a puzzle:
uvremap_puzzle
At first, I was using actual geometry and snapping to put together the puzzle, but then (after watching a Cinema4D tutorial that used the Cloner to assemble the pieces) I decided to use ICE to position the puzzle pieces. Halfway through that, I realized that the texturing was going to be a problem. There doesn’t seem to be an easy way to apply a texture to multiple ICE instances, and then make the texture stick when the instances fly away.

After trying a bunch of stuff (and crashing a lot), I took a look at the UV Remap parameters on the Image node:
uvremap_example

Then I created an 8×8 grid of 64 instances and put all the possible min/max values in an array:
uvremap_show_values
If you look at the point IDs and the array, the pattern is pretty obvious: you can use modulo and integer division to index into the array and get the right min/max values for each instance.
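
Here is a rough Python sketch of that arithmetic (in the scene it is built from ICE nodes, but the logic is the same; it assumes the 64 tiles are numbered row by row):

    # Map each instance/point ID 0..63 of an 8x8 puzzle to its UV sub-rectangle.
    tiles = 8
    for point_id in range(tiles * tiles):
        col = point_id % tiles          # modulo           -> column index
        row = point_id // tiles         # integer division -> row index
        u_min, u_max = col / 8.0, (col + 1) / 8.0
        v_min, v_max = row / 8.0, (row + 1) / 8.0
        print("%2d -> min (%.3f, %.3f), max (%.3f, %.3f)"
              % (point_id, u_min, v_min, u_max, v_max))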

Here it is in ICE:
uvremap_ice_tree

Finally, the shader tree that gets the UV remap values and plugs them into the Image node:
uvremap_shader_tree

Installing PyQtForSoftimage in Softimage 2015


I use the Python 2.7.3 that comes with Softimage 2015 SP1, but I also have Python 2.7 installed on my system.

  1. Download and install PyQt.
  2. Set the PYTHONPATH environment variable to point to the location of PyQt4. You could do this in setenv.bat, or in the System environment variables. In my case, I set it in setenv.bat to point to C:\Python27\Lib\site-packages, which is where I installed PyQt.
  3. Download and install the PyQtForSoftimage addon.
  4. Check that everything is working. Open the Plug-in Manager, find PyQtForSoftimage, and run some of the examples. (A quick import check is sketched below.)
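
If step 4 fails, a quick sanity check (not part of the official install steps) is to paste these lines into the Softimage script editor with Python selected:

    import sys
    print(sys.version)                # should report the Python 2.7.x XSI is using
    print(sys.path)                   # your PYTHONPATH entry should appear here

    from PyQt4 import QtCore          # ImportError here means PYTHONPATH is wrong
    print(QtCore.PYQT_VERSION_STR)    # the installed PyQt4 version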