Friday Flashback #216


Jumanji’s Amazing Animals (Computer Graphics World, 1 Jan 1996)
ILM used SOFTIMAGE|3D to animate a CG elephant walking step by step over a crushed car.

In one memorable shot in the stampede sequence, a CG elephant walks up and over a car, crunching it underfoot. For this sequence, the run cycles were abandoned in favor of hand animation.


Here’s the complete article (PDF, 7.4MB). The quality of photocopies back in 1996 was pretty bad.

Friday Flashback #215


Produced by Softimage in Montréal, 1994-1995, Osmose was an immersive, virtual environment utilizing 3D computer graphics and interactive 3D sound, a head-mounted display and real-time motion tracking based on breathing and balance.

John Harrison, “immersed” in Osmose development

Installation at Musée d’art contemporain de Montréal

Georges Mauro, immersed in Osmose

Shadow of immersant as seen by audience

Scene from Osmose

What is Osmose?

“…By changing space, by leaving the space of one’s usual sensibilities, one enters into communication with a space that is psychically innovating. For we do not change place, we change our nature.”
–Gaston Bachelard, The Poetics of Space, 1964

Osmose is an immersive, virtual environment utilizing 3D computer graphics and interactive 3D sound, a head-mounted display and real-time motion tracking based on breathing and balance.
Tree

Created by a team led by artist Char Davies, Director of Visual Research at Softimage (Montréal), Osmose is a space for exploring the perceptual interplay between self and world, i.e. a place for facilitating awareness of one’s own self as embodied consciousness in enveloping space. This work challenges conventional approaches to virtual reality and explores what Davies believes to be the most intriguing aspect of the medium, namely its capacity to allow us to explore what it means, essentially, to “be-in-the-world”.

Immersion in Osmose begins with the donning of the head-mounted display and motion-tracking vest. The first virtual space encountered is a three-dimensional Cartesian Grid which functions as an orientation space. With the immersant’s first breaths, the grid gives way to a clearing in a forest. There are a dozen world-spaces in Osmose, most based on metaphorical aspects of nature. These include Clearing, Forest, Tree, Leaf, Cloud, Pond, Subterranean Earth, and Abyss. There is also a substratum, Code, which contains much of the actual software used to create the work, and a superstratum, Text, a space consisting of quotes from the artist and excerpts of relevant texts on technology, the body and nature. Code and Text function as conceptual parentheses around the worlds within. Through use of their own breath and balance, immersants are able to journey anywhere within these worlds as well as hover in the ambiguous transition areas in between. After fifteen minutes of immersion, the LifeWorld appears and slowly but irretrievably recedes, bringing the session to an end.

The Forest

In contrast to the hard-edged realism of conventional 3D computer graphics, the visual aesthetic of Osmose is soft, luminous and transparent, consisting of translucent textures and flowing particles. Figure/ground relationships are spatially ambiguous, and transitions between worlds are subtle and slow. This mode of representation serves to ‘evoke’ rather than illustrate and is derived from Davies’ previous work as a painter. The sounds (447K WAV) (1.8M AIFF) within Osmose are spatially multi-dimensional and have been designed to respond to changes in the immersant’s location, direction and speed: the source of their complexity is a sampling of a male and female voice.

Tree in transition

The user-interface of Osmose is based on full-body immersion in 360-degree spherical, enveloping space, through use of a head-mounted display. Solitude is a key aspect of the experience, as the artist’s goal is to connect the immersant not to others but to the depths of his or her own self. In contrast to interface techniques such as joysticks, Osmose incorporates the intuitive processes of breathing and balance as the primary means of navigating within the virtual world. By breathing in, the immersant is able to float upward; by breathing out, to fall; and by subtly altering the body’s centre of balance, to change direction, a method inspired by the scuba diving practice of buoyancy control. The experience of being spatially enveloped, of floating rather than flying or driving, is key. Whereas in conventional VR the body is often reduced to little more than a probing hand and roving eye, immersion in Osmose depends on the body’s most essential living act, that of breath, not only to navigate, but more importantly, to attain a particular state-of-being within the virtual world. In this state, usually achieved within ten minutes of immersion, most immersants experience a shift of awareness in which the urge for action is replaced by contemplative free-fall. Being supersedes doing.

The Subterranean

The Lifeworld

Based on the responses of several thousand individuals who have been immersed in Osmose since the summer of 1995, the after-effect of immersion in Osmose can be quite profound. Many individuals feel as if they have rediscovered an aspect of themselves, of being alive in the world, which they had forgotten, the experiencing of which they find to be very emotional, leading some to even weep after immersion. Such response has confirmed the artist’s belief that traditional interface boundaries between machine and human can be transcended even while re-affirming our corporeality, and that Cartesian notions of space as well as illustrative realism can effectively be replaced by more evocative alternatives. Immersive virtual space, when stripped of its conventions, can provide an intriguing spatio-temporal context in which to explore the self’s subjective experience of “being-in-the-world” — as embodied consciousness in an enveloping space where boundaries between inner/outer, and mind/body dissolve.

The public installation of Osmose includes large-scale stereoscopic video and audio projection of imagery and sound transmitted in real-time from the point-of-view of the individual in immersion (the “immersant”): this projection enables an audience, wearing polarizing glasses, to witness each immersive journey as it unfolds. Although immersion takes place in a private area, a translucent screen equal in size to the video screen enables the audience to observe the body gestures of the immersant as a poetic shadow-silhouette.
Credits

Charlotte Davies: Concept and direction
Georges Mauro: Creation of graphics
John Harrison: Virtual reality software programming
Dorota Blaszczak: Sound design and programming
Rick Bidlack: Music composition and programming

Friday Flashback #213


From the Softimage Customer Stories, volume 1, issue 1, a 2001 customer story on Aldis Animation
Click to read the full story (PDF), or scroll down.

THE ALDIS ASCENSION: SOFTIMAGE|XSI Helps Aldis Animation Keep Moving On Up
by Michael Abraham

When last we visited with Kim Aldis, founder and co-owner of London’s Aldis Animation Company, he and his crew were busy putting the beta version of SOFTIMAGE|XSI v.1 (codename Sumatra) through its paces.

Roughly a year after our first conversation, I reconnected with Aldis at his home number. I’d called the Aldis Animation offices the previous day, but the sound of holiday celebrations in the background suggested it wasn’t a very good time to talk. Damn the holidays anyway; they play hell with the work schedules of irredeemable procrastinators such as myself. But I digress.

When I spoke with Aldis the morning after the night before, he sounded surprisingly upbeat. It had been a good year for the company. So good, in fact, that everybody was feeling a little fried as the Christmas season approached. A party was definitely in order.

“We’ve had a pretty good year,” Aldis admits, with typical understatement. “We branded all the UEFA Cup pieces for Ford and did a whole bunch of stuff for the CITV network here in the UK. More recently, we created some titles for the British television game show Blankety-Blank (the British version of the old celebrity game show The Match Game). We also created backgrounds using SOFTIMAGE|XSI v.1.5 in conjunction with Avid|DS for a very challenging pop video promo. That project was really an interesting one.”

“Interesting” in this instance apparently means intensity of the mind-bending variety. As it turns out, the Hatiras video (entitled “Spaced Invader” and featuring the talents of nefarious MC/rapper Slarta John) was shot entirely on bluescreen, leaving Aldis to create all the backgrounds using SOFTIMAGE|XSI v.1.5. That is a lot of work in and of itself, but after careful consideration, Aldis felt he was up to it. That was before Defected Records and Vigilante Productions realized that the video should be out before Christmas, which significantly compressed the timeframe.

Even with the original deadline compressed to a fortnight, Aldis isn’t one to complain. Adhering to his philosophy that there is always a solution within Softimage, he prepared to put his beta version of SOFTIMAGE|XSI v.1.5 under some of the most intense pressure the system has ever seen.

[Stills from the “Spaced Invader” video]

“Basically, they shot all the footage of Slarta John against a bluescreen,” explains Aldis. “About the same time they were shooting, I started working on the graphics. By the end of the first week, the footage had been offlined. We took that footage into Avid|DS together with the graphics I had managed to create by the start of the second week. Aries Brooker did a great job on the compositing, editing and effects using the Avid|DS system. I carried on working on the remaining graphics in the meantime, so it was all quite efficient. What was most encouraging in the midst of all the chaos though, was how well the new version of SOFTIMAGE|XSI performed. I’ve always really liked and relied on the system in the past, of course, but we’ve never put it under pressure quite this intense. We rendered an enormous number of frames. We did a lot of rotoscoping and matching of backgrounds. And, of course, there was all that bluescreen footage to deal with. I can quite honestly say that I wouldn’t have been able to get it done without version 1.5.”

Despite the stress and hard work, however, Aldis insists that the project was a thoroughly enjoyable one. Much of the credit for that goes to co-directors Ben Hume-Paton and Steve Lowe.

“Anytime you can work with directors with whom you can get on, you know that you’ve got a good thing going,” says Aldis knowingly. “It was the first time I’d worked with Ben, and we seemed to share the same vision of things. It was a pleasure, in spite of all the problems and even though we had to work so hard.”

Aldis also gives considerable credit to the latest version of SOFTIMAGE|XSI, both for the success of the project and for the positive experience it ultimately produced.

“It was great to see XSI working so well,” he says happily. “I really hope that people will sit up and take notice of the vast improvements in version 1.5. I love it: it goes from strength to strength. My favorite parts are the Render Tree and the scripting capabilities, which have both come a long, long way since the first version. The much-improved rotoscoping was key to this project. We had backgrounds that would refresh completely and consistently in real time. That means you can rotoscope, then play it back right away. That is an invaluable feature; it simply wasn’t possible before. I also used scripts to set up my camera constraints. A single scene in the pop video might require 20 to 30 constraining objects for a single camera. I was able to script the constraints, then drop them into the timeline of the Animation Mixer. At that point, the timing is set up, and all I have to do is scroll through, position the camera on each shot and bang, I’m away!”

After the very short respite over Christmas, Kim and his team at Aldis Animation are preparing for a number of challenging projects. They are currently in negotiations to create special effects for a popular television series, and they have just signed an agreement with two character designers to create their own animated series.

“Suffice it to say that the character designers are very well thought of,” says Aldis mysteriously. “This is a new direction for us, but one that we’re all very eager to take. In fact, I’ll be working on some characterizations over the holidays, with the idea of getting a trailer out early in the New Year. We’ll also be setting up a subsidiary company to handle the in-house productions we’re planning on doing in the coming months and years, so these are exciting times.”

With 2001 now upon us, it’s fair to say that one good year deserves another at Aldis Animation.

Using ICE to do UV remapping on instances


I was playing around with Softimage, trying to set up a puzzle:
[Screenshot: the finished puzzle]
At first, I was using actual geometry and snapping to put together the puzzle, but then (after watching a Cinema4D tutorial that used the Cloner to assemble the pieces) I decided to use ICE to position the puzzle pieces. Halfway through that, I realized that the texturing was going to be a problem. There doesn’t seem to be an easy way to apply a texture to multiple ICE instances, and then make the texture stick when the instances fly away.

After trying a bunch of stuff (and crashing a lot), I took a look at the UV Remap parameters on the Image node:
[Screenshot: the UV Remap parameters on the Image node]

Then I created an 8×8 grid of 64 instances, and put all the possible min/max values in an array:
[Screenshot: the array of min/max values]
If you look at the point IDs, and the array, the pattern is pretty obvious, and it allows you to use modulo and integer division to index into the array and get the right min/max values for each instance.
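The same lookup can be sketched in plain Python. The grid size and the tile arithmetic below are my assumptions for the 8×8 example, not the actual ICE nodes, but they show how modulo and integer division turn an instance ID into the min/max values for the UV Remap:

```python
# For an 8x8 grid, instance i sits at column i % 8 and row i // 8.
# Each tile covers 1/8 of UV space, which gives the min/max values
# to plug into the Image node's UV Remap parameters.
GRID = 8

def uv_min_max(instance_id, grid=GRID):
    col = instance_id % grid    # modulo -> column index
    row = instance_id // grid   # integer division -> row index
    tile = 1.0 / grid           # size of one tile in UV space
    return (col * tile, (col + 1) * tile,   # U min, U max
            row * tile, (row + 1) * tile)   # V min, V max
```

Instance 9, for example, lands in column 1, row 1, so it gets the UV tile (0.125, 0.25) in both U and V.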

Here it is in ICE:
[Screenshot: the ICE tree]

Finally, the shader tree that gets the UV remap values and plugs them into the Image node:
[Screenshot: the shader tree]

Installing PyQtForSoftimage in Softimage 2015


I use the Python 2.7.3 that comes with Softimage 2015 SP1, but I also have a standalone Python 2.7 installed on my system.

  1. Download and install PyQt.
  2. Set the PYTHONPATH environment variable to point to the location of PyQt4. You could do this in setenv.bat, or in the System environment variables. In my case, I set it in setenv.bat to point to C:\Python27\Lib\site-packages, which is where I installed PyQt.
  3. Download and install the PyQtForSoftimage addon.
  4. Check that everything is working. Open the Plug-in Manager, find PyQtForSoftimage, and run some of the examples.
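For step 2, the setenv.bat change amounts to a single line. The path below is the one from my setup; use whatever location you installed PyQt to:

```shell
rem Added to Softimage's setenv.bat so its Python can find PyQt4.
rem C:\Python27\Lib\site-packages is my install location; adjust as needed.
set PYTHONPATH=C:\Python27\Lib\site-packages
```

After restarting Softimage, `import PyQt4` in the script editor is a quick way to confirm the path took effect before trying the addon examples.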