Friday, March 26, 2010

Nvidia gets back in the game with long-awaited Fermi graphics chip

Since Advanced Micro Devices launched its newest generation of desktop graphics chips in September, Nvidia has been a distant second in the performance game for graphics chips. But today, Nvidia is launching its new graphics chip, the GeForce GTX 400 series, code-named Fermi, in the hope of taking the speed-demon title back.
These are high-end graphics chips that hardcore gamers care about, but in six months to a year they’ll be widely available to all sorts of PC consumers. The GTX 480 and GTX 470 graphics chips will debut in graphics cards for PCs at $499 and $349, respectively. The company is announcing the products at its GeForce LAN party event at the Penny Arcade Expo East in Boston.
Fermi should have been out six months ago (Nvidia chief executive Jen-Hsun Huang showed it off on Sept. 30). Drew Henry, general manager of the desktop graphics division at Nvidia, acknowledged in a conference call that the chips are late. The company originally tried to match AMD’s chip launch in September and wanted to launch Fermi alongside the Windows 7 operating system, which also went on sale in the fall. That’s because Fermi supports DirectX 11, the latest graphics technology for making whiz-bang 3-D software.
But Nvidia’s chip design wasn’t ready and its manufacturing partner, Taiwan Semiconductor Manufacturing Co., wasn’t able to make enough of the complicated chips. TSMC had trouble producing 40-nanometer chips, a problem that also hurt AMD. But AMD’s ATI Radeon 5000 series chips were smaller and easier to manufacture, so TSMC’s troubles didn’t hurt AMD as much as Nvidia, which had designed an aircraft carrier of a chip.
The Fermi chips are the most complicated Nvidia has ever designed, with more than 3 billion transistors, or basic electrical components. The GTX 480 chip is about 1.5 to 2 times faster in raw performance than the predecessor GTX 285. For tasks such as geometry processing, the GTX 480 is eight times faster.
Henry said Nvidia has also built better graphics features into the chip, such as depth of field, a common movie technique in which an object in the foreground is in focus while an object in the background is out of focus. The technique helps focus the viewer’s eyes on the exact part of the image that the director wants the person to see. The Nvidia chip also has built-in interactive ray tracing, for photo-like animation of objects such as shiny cars.
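The core idea behind a depth-of-field effect can be sketched in a few lines: objects farther from the focal plane get a larger blur radius. This is only an illustrative model (the function and its `focal_distance` and `aperture` parameters are made up for this sketch, not Nvidia's actual shader pipeline):

```python
# Toy circle-of-confusion model for depth of field: the farther an object
# sits from the focal plane, the larger its blur radius. Illustrative only;
# focal_distance and aperture are hypothetical parameters, not an Nvidia API.

def blur_radius(object_distance, focal_distance=10.0, aperture=0.5):
    """Blur grows with distance from the focal plane (arbitrary units)."""
    return aperture * abs(object_distance - focal_distance) / object_distance

# An object at the focal plane stays sharp; near and far objects blur.
print(blur_radius(10.0))  # in focus -> 0.0
print(blur_radius(2.0))   # foreground object, strongly blurred
print(blur_radius(40.0))  # background object, moderately blurred
```

A real renderer would use such a per-pixel blur radius to drive a post-processing filter; the principle of steering the viewer's eye to the in-focus plane is the same.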
The Nvidia chip can also be used with Nvidia’s 3D Vision glasses. A single 400 series chip can power up to three displays with 3D Vision imagers. The Radeon 5000 series chips can power up to six displays, but not with the custom 3D glasses features.
The Nvidia chips are also bigger than the rival chips because they have built-in computing functions that are used for non-graphics processing. Based on the CUDA programming architecture, the 480 processors in the Nvidia chip are able to accelerate non-graphics processing such as physics.
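The data-parallel idea behind this kind of GPU compute is that the same small "kernel" runs independently on every data element, so the chip's hundreds of cores can each take a slice of the work. A minimal sketch of that pattern, with the parallelism only simulated by `map` and all numbers illustrative:

```python
# Sketch of the data-parallel model behind GPU physics: one small kernel
# function is applied independently to every particle, which is what lets
# hundreds of cores (480 on the GTX 480) work simultaneously. Here map()
# stands in for the hardware's parallel threads; values are illustrative.

def integrate(particle, dt=0.1, gravity=-9.8):
    """One kernel invocation: advance a single particle's position/velocity."""
    y, vy = particle
    vy += gravity * dt   # apply gravity for one time step
    y += vy * dt         # move the particle
    return (y, vy)

# On a GPU, each particle would be handled by its own thread.
particles = [(10.0, 0.0), (5.0, 2.0), (0.0, -1.0)]
stepped = list(map(integrate, particles))
print(stepped)
```

Because each particle's update depends only on its own state, the work divides cleanly across cores, which is why effects like water and debris are a natural fit for this architecture.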
Analysts have observed that Nvidia is making a huge bet on CUDA in an attempt to make graphics processing units (GPUs) as powerful and versatile as central processing units (CPUs) made by Intel. But it comes at some cost. If it means, for instance, that the graphics chip becomes too complex, too hard to design, and slower and later than AMD’s graphics-focused chip, then the CUDA technology is a kind of albatross. AMD has taken a completely different design approach. It has created Radeon series chips that are smaller, less powerful, and more power efficient. They’re so small that AMD can fit two graphics chips on a single graphics card. That results in a graphics system that AMD argues is more powerful and cheaper than what Nvidia can deliver.
Huang at Nvidia has said many times that CUDA is one of the most important advances in computing and that its promise will become evident as programmers learn how to make use of it. Nvidia is already getting lots of its chips designed into supercomputers and servers, thanks to CUDA. And because of CUDA, the 480 chip can do physics processing 2.5 times faster than the previous generation. That means a game’s environment, such as water in a stream, behaves far more realistically, adding to the overall illusion of a graphics animation. The question at hand is whether CUDA is really helping, or hurting, Nvidia’s cause to bring graphics to the entire world.
This whole philosophical debate is kind of a distraction for gamers, who care only about which graphics chip produces better imagery. Existing games will run much faster on the 400 series chips. The game Dark Void, for instance, runs 2.1 times faster on a 480 than on the previous-generation Nvidia chip. But game publishers who design for DirectX 11 technology will see much better graphics running on the same chips; Aliens vs. Predator is one of the few DirectX 11 games available now. When you add a second graphics card to one PC, using Nvidia’s SLI technology, the performance is about 90 percent better.
One of the things that Nvidia added to improve graphics performance was tessellation. That built-in hardware — 16 tessellation engines in the 480 chip — enables much more movie-like effects. Nvidia claims that the chip is much faster than AMD’s ATI Radeon HD 5870 chip, but it remains to be seen how much gamers will care about that, given that the 5870 chip is available in cheaper graphics cards.
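Tessellation itself is a simple idea: the hardware splits coarse triangles into many smaller ones so curved surfaces and silhouettes look smooth. A minimal midpoint-subdivision sketch shows how fast the detail grows — each pass turns every triangle into four (real hardware tessellators use patch-based schemes driven by shaders; this toy version is just to show the principle):

```python
# Minimal midpoint tessellation sketch: each subdivision pass splits every
# triangle into four smaller ones, so n passes yield 4**n triangles from one.
# Illustrative only; DirectX 11 hardware tessellation is patch/shader-driven.

def midpoint(a, b):
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

def subdivide(tri):
    """Split one triangle into four using its edge midpoints."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tris = [((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))]
for _ in range(3):  # three refinement passes
    tris = [small for tri in tris for small in subdivide(tri)]
print(len(tris))  # 4**3 = 64 triangles from a single input triangle
```

The geometric blowup is why dedicated tessellation engines matter: the chip can generate and shade this extra geometry on the fly instead of the game shipping millions of pre-built triangles.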
The technical blogs will soon figure out just how much of an advantage Nvidia has. But AMD hasn’t been sitting still. The odds are quite good that AMD will leapfrog the Nvidia Fermi chip sometime soon, and the graphics horse race will start all over again. If you want to check out what Fermi chips can do with DirectX 11 technology, check out the video below. In the middle of the video, a bridge explodes into a million parts. It’s one of the coolest images you’ll see with next-generation graphics today, and it’s far better-looking than anything the game consoles can do.
