Graphics Processors (GPUs) Revisited

Another Telecosm brought another great talk by Jules Urbach. He was showing some new stuff (I do not even know if I can share it here, as he was asking the cameraman to stop taping what was on the screens a number of times...). But anyway. You know - they have full ray tracing in the GPU. And he was showing how his models perform on stage. OK, I mean the computerized models of virtual reality. Humans with skin modeled several layers deep... some layers reflective, some absorbing different parts of the light spectrum, with veins and bones below them... Or a model of Spider-Man, all of it generated in high-definition, theater-like quality in real time. This "real-time" part is the breakthrough. We have seen many computer-generated movies already, but nobody but OTOY can do it in real time. And all it takes is a number of clustered NVIDIA cards. This GPU trend is turning the computing industry upside down. Suddenly we have discovered GPUs are not only for graphics... they are supercomputers themselves.

Researchers at the University of Antwerp in Belgium have created a new supercomputer with just four NVIDIA 9800 GX2 graphics cards; it costs less than EUR 4,000 to build. They say the eight NVIDIA GPUs deliver the same performance for their work as more than 300 Intel Core 2 Duo 2.4GHz processors. And thanks to the CUDA framework NVIDIA delivers (abstracting heterogeneous manycore computing), those chips can be programmed with standard tools and techniques to unleash their power.
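To give a feel for what that programming model looks like, here is a minimal CUDA sketch of my own (an illustration, not the Antwerp team's actual code): a plain C-style function marked as a kernel, which the hardware then runs across thousands of threads at once.

```cuda
// Minimal CUDA sketch: ordinary C with a small extension, compiled by nvcc.
// Illustrative only -- not the Antwerp cluster's real code.
#include <cuda_runtime.h>

// Each thread scales exactly one array element.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;                    // one million elements
    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));      // give the buffer defined contents
    // Launch enough 256-thread blocks to cover all n elements.
    scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```

The point is how little novelty there is here: it compiles with NVIDIA's nvcc like ordinary C, and the `<<<blocks, threads>>>` launch syntax is about the only new thing a programmer has to learn.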

NVIDIA themselves put a lot of R&D money into software libraries and platforms that help unleash the power of the GPU. They recently acquired RayScale, a University of Utah spin-off. It means that in the next generation of graphics cards we may move from simple rasterization and polygon rendering to full ray tracing.
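Why does ray tracing suit GPUs so well? Every pixel's ray can be traced independently of every other, which is exactly the kind of parallelism a GPU eats for breakfast. Here is a toy CUDA sketch of my own (a hypothetical one-sphere scene - nothing like OTOY's or RayScale's actual renderers) with one thread per pixel:

```cuda
// Toy ray tracer: one GPU thread per pixel, each tracing its own ray.
// A hypothetical one-sphere scene, for illustration only.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void trace(unsigned char *image, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Ray from the origin through this pixel on a virtual screen at z = 1.
    float dx = (x - width / 2.0f) / width;
    float dy = (y - height / 2.0f) / height;
    float dz = 1.0f;
    float len = sqrtf(dx * dx + dy * dy + dz * dz);
    dx /= len; dy /= len; dz /= len;

    // Intersect with a sphere at (0, 0, 3), radius 1: solve the quadratic
    // |t*d - c|^2 = r^2 for the ray parameter t; a positive discriminant
    // means the ray hits the sphere.
    float cz = 3.0f, r = 1.0f;
    float b = -2.0f * dz * cz;
    float c = cz * cz - r * r;
    float disc = b * b - 4.0f * c;

    // Shade hit pixels by how directly the ray strikes the sphere.
    image[y * width + x] = disc > 0.0f ? (unsigned char)(255 * dz) : 0;
}

int main()
{
    const int w = 64, h = 64;
    unsigned char *d_img, *h_img = (unsigned char *)malloc(w * h);
    cudaMalloc(&d_img, w * h);

    dim3 block(16, 16), grid((w + 15) / 16, (h + 15) / 16);
    trace<<<grid, block>>>(d_img, w, h);
    cudaMemcpy(h_img, d_img, w * h, cudaMemcpyDeviceToHost);

    // Print the sphere's silhouette as ASCII art.
    for (int y = 0; y < h; y += 2) {
        for (int x = 0; x < w; x++)
            putchar(h_img[y * w + x] ? '#' : '.');
        putchar('\n');
    }
    cudaFree(d_img);
    free(h_img);
    return 0;
}
```

Real ray tracers pile bounces, shadows and materials on top of that intersection test, but the one-thread-per-ray structure stays the same - which is why the technique maps onto thousands of GPU cores so naturally.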

There is an excellent interview with NVIDIA's David Kirk on bit-tech.net, if you want to follow up on this subject further. The takeaway is that the CPU guys suddenly have a big competitor to worry about. Will the x86 architecture stand up to the new challenges, or will the new generation of [not only personal] computers be powered by GPUs, with the CPU offloaded to just housekeeping tasks?

This [GPU] trend was initially ignited by the Sony PlayStation 3, which is now becoming a common building block for supercomputing centers (see "PS3's Cell CPU tops high-performance computing benchmark"). We can easily see that computing power is no longer a function of megahertz and gigahertz clock speeds. It is not even a function of having several standard CPU cores on one chip. It is a matter of architecture and a new paradigm of algorithms and code design. Intel seems to be a little bit lost in all those [multiple] threads... but the strange thing is I have not heard much on this subject from Microsoft... well... but what would you expect from a company run by a business-oriented bean-counter? They are too busy chasing Yahoo, after all...

And the final loser in this game may be Hollywood. Movie production will shift up one abstraction layer. With technologies from guys like Jules Urbach, all an artist will be needed for is to design a computer model of a new actor. And then the NVIDIA cluster will work through the screenplay to output the new production. Who needs humans for that job, after all?
