NVIDIA Gets Physical

A press release titled NVIDIA and Havok Demonstrate World's First GPU-Powered Game Physics Solution at Game Developers Conference announces plans for Havok physics support on upcoming NVIDIA accelerators:

SANTA CLARA, Calif., March 20 /PRNewswire-FirstCall/ -- NVIDIA Corporation (Nasdaq: NVDA - News), the worldwide leader in programmable graphics processor technologies, and Havok, the game industry's leading supplier of cross-platform middleware, will be demonstrating a physics effects solution that runs completely on a graphics processing unit (GPU) -- an industry first -- at this year's Game Developer Conference (GDC) in San Jose, California (March 21st through 24th).

The result of an ongoing engineering collaboration between Havok and NVIDIA, this new software product from Havok -- called Havok FX(TM) -- enables the simulation of dramatically-detailed physical phenomena in PC games, when powered by GPUs such as NVIDIA GeForce® 7 or 6 Series GPUs and further amplified with NVIDIA SLI multi-GPU technology. The Havok FX product is currently in early release to select developers and is expected to be available this summer.

48 Replies. 3 pages. Viewing page 1.

48. Re: This could help PhysX Mar 21, 2006, 09:24 Fartacus
 
If by "support" you mean "can draw polygons for", then this is correct.

The pages you refer to are circa GDC 2003. If you think that the Ageia folks only began to contemplate this stuff sometime in the last three or four years, you're nuts.

Nope, they're circa GDC 2001. Here's the link: http://developer.nvidia.com/object/gdc2001_clothsim.html
And if you read the paper, you'll realize that it's not just drawing the polygons, it is also simulating the cloth. The only thing being handled on the CPU is the collision detection. But GPUs have come very far in the 5 years since GDC 2001.
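For a sense of what that simulation actually involves: the per-particle update in a cloth sim is just a Verlet integration step, which maps straight onto a fragment program over a position texture. A minimal C++ sketch of the math (my illustration, not the demo's actual shader code):

    #include <vector>

    struct Vec3 { float x, y, z; };

    Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    Vec3 operator*(float s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }

    // One Verlet step for every cloth particle. On the GPU the loop body
    // becomes the fragment program; "prev" and "curr" live in textures.
    void verletStep(std::vector<Vec3>& curr, std::vector<Vec3>& prev,
                    const std::vector<Vec3>& force, float dt, float damping)
    {
        for (std::size_t i = 0; i < curr.size(); ++i) {
            Vec3 next = curr[i] + damping * (curr[i] - prev[i])
                      + (dt * dt) * force[i];      // gravity, wind, springs
            prev[i] = curr[i];
            curr[i] = next;
        }
    }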

Hell, even I was thinking about dedicated physics hardware back when I got a Voodoo 2, and I can assure you that there were no shader languages or fragment programs running on that thing.

I remember that a friend of mine assured me at the time that, what with the relentless progress of CPUs and the additional I/O issues inherent in moving such operations to an external card, dedicated physics processors would never happen.

And you weren't the only one thinking of it. The guys at MathEngine were thinking about it too (before they went under). So were the guys at Nuron, who came out with an FPGA-based reconfigurable coprocessor board for PCs, and one of the things they envisioned accelerating was physics. I was thinking about it in 2000-2001 when I was doing FPGA design. And I didn't discount the idea back then. But I also didn't predict where GPUs would be today. Physics is a task that is so incredibly well suited to GPUs that there is no point in building dedicated physics hardware.

 
 
47. Re: This could help PhysX Mar 21, 2006, 09:14 Fartacus
 
Sigh, you know very well that the implementation of "cloth simulation" in current GPUs is merely acceleration of the vertex operations. They do very little for actual collision. And fluid simulation along the lines of the implementation in PhysX is simply not possible on current graphics hardware without severe slowdowns. They just don't have the extra power available.

I was referring to the implementation of cloth simulation in Geforce 3 class hardware, which is far from current generation hardware. And collision response is handled on the hardware in that demo; only the detection is handled on the CPU. Current generation GPUs can do the entire simulation, including collision detection. Fluid simulation along the lines of PhysX is very possible on today's GPUs without severe slowdowns.
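Collision detection itself is stream-friendly, which is the point: each candidate pair is an independent little test, exactly the kind of batch a shader pass eats. A toy C++ sketch of the per-pair kernel (my illustration, assuming a simple sphere broad phase):

    struct Sphere { float x, y, z, r; };

    // Per-pair overlap test. No pair depends on any other, so a GPU can
    // evaluate the whole batch in parallel and write hits to a texture.
    bool overlaps(const Sphere& a, const Sphere& b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        float rr = a.r + b.r;
        return dx*dx + dy*dy + dz*dz <= rr*rr;
    }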

Frankly, I don't like the elitist overtones you've got going on there. Don't insult my intelligence.

It's no more of an elitist overtone than you're carrying, buddy. I'll assume that "don't insult my intelligence" means that you acknowledge that there is no specialized hardware for physics processing on the PPU.

I suppose that everything is only a matter of time, but I don't operate on those sorts of assumptions. I operate on what I've seen from Ageia, compared with the knowledge I've been given in that press release. It paints a very bright picture, yes, but where are the hard numbers? The GDC presentation will better clarify things.

You don't operate on those sorts of assumptions? Translation: you ignore technology trends. And don't assume you're the only person who has seen Ageia technology either.

My, aren't we abrasive. I presume nothing. Ageia is a company full of physics experts. NVidia and ATI are not. I'm well aware of the major role physics plays in graphics engineering, but physics acceleration is simply not their focus. I have seen the technology at Ageia in action on more than one occasion. I'm sorry if you haven't, but I encourage you to visit their booth at the GDC. The presentation they have planned is pretty amazing.

You assume that disagreement is equivalent to abrasiveness? My, aren't you the patronizing one? You presume that Ageia is a company full of physics experts, and that NVidia and ATI are not. You ignore the fact that Havok, a company that has been doing physics middleware for much longer than Ageia, is working with Nvidia and ATI on physics acceleration. And you ignore the fact that ATI and NVidia are on product refresh cycles that make companies like Intel and AMD look like they're standing still. If Ageia's PPU is faster on initial release, the performance advantage will disappear by the next GPU refresh cycle.

Enjoy GDC. And don't drink too much of the kool-aid at the Ageia booth.

 
 
46. Re: This could help PhysX Mar 21, 2006, 03:02 SquirrelZero
 
I have news for you, GPUs have supported fluid and cloth simulation since before PPUs were a gleam in someone's eye. Just search for "cloth simulation" on NVidia's developer pages.

There is no fundamental difference between a GPU and a PPU in terms of capability. The PPU uses symmetric multiprocessing vector ALUs to achieve the same thing that is done in a GPU's shader ALUs (which are essentially arrays of stream-oriented vector + scalar ALUs). Physics processing is primarily geometric operations for collisions, plus linear-system and matrix solvers. GPU hardware is very well suited for these things.

Sigh, you know very well that the implementation of "cloth simulation" in current GPUs is merely acceleration of the vertex operations. They do very little for actual collision. And fluid simulation along the lines of the implementation in PhysX is simply not possible on current graphics hardware without severe slowdowns. They just don't have the extra power available.

Are you under the impression that PPUs have "physics circuits" that make them inherently faster for physics processing? Well guess what, they don't.

Frankly, I don't like the elitist overtones you've got going on there. Don't insult my intelligence.

"Exceed" is not being hopeful, and "far exceed" is not a distant hope. It's only a matter of time.

I suppose that everything is only a matter of time, but I don't operate on those sorts of assumptions. I operate on what I've seen from Ageia, compared with the knowledge I've been given in that press release. It paints a very bright picture, yes, but where are the hard numbers? The GDC presentation will better clarify things.

You presume a lot. Graphics and simulation are intersecting disciplines. And both NVidia and ATI have physics experts on staff. You also seem to discount the crew at Havok that has been involved in the effort. And I'll believe Ageia are physics experts when I see it.

My, aren't we abrasive. I presume nothing. Ageia is a company full of physics experts. NVidia and ATI are not. I'm well aware of the major role physics plays in graphics engineering, but physics acceleration is simply not their focus. I have seen the technology at Ageia in action on more than one occasion. I'm sorry if you haven't, but I encourage you to visit their booth at the GDC. The presentation they have planned is pretty amazing.

This comment was edited on Mar 21, 03:10.
 
 
45. Re: This could help PhysX Mar 21, 2006, 01:29 Shadowcat
 
I have news for you, GPUs have supported fluid and cloth simulation since before PPUs were a gleam in someone's eye.
If by "support" you mean "can draw polygons for", then this is correct.

The pages you refer to are circa GDC 2003. If you think that the Ageia folks only began to contemplate this stuff sometime in the last three or four years, you're nuts.

Hell, even I was thinking about dedicated physics hardware back when I got a Voodoo 2, and I can assure you that there were no shader languages or fragment programs running on that thing.

I remember that a friend of mine assured me at the time that, what with the relentless progress of CPUs and the additional I/O issues inherent in moving such operations to an external card, dedicated physics processors would never happen.

 
 
44. Re: More for the GPU? Mar 21, 2006, 01:13 ExcessDan
 
As for the Geforce video guy -- why put that in this thread?

It was a thread about nvidia and people seemed to know what they were talking about. That's why.

I would read the manual if I knew where it was. Instead I checked the nvidia support site, and there is nothing at all about video in -- only info on video out, which I've already set up on my own.

As for not knowing my model number.. haha that's mostly because I'm an idiot. As soon as I saw Zathrus say it was a 6200 I realized I said the wrong model. Probably because I upgraded from a ti4200.

Rigs, I will think about that ATI card, but the only thing with it is that my TV is 6' away and directly behind me. I watch TV while I'm sitting at my computer on a bigger separate screen. So if I can spend 15-20 dollars on this S-Video plug for the VIVO box that came with my video card, I'd rather take that option than spend $100+ on the TV Wonder.

I appreciate the suggestion and the eventual defence though.

Mayor Dan
------------
http://www.last.fm/user/danorama/
Mario Kart DS Friend Code: 137498-739291
Animal Crossing DS Friend Code: 536936-348405
 
 
43. Re: This could help PhysX Mar 20, 2006, 22:26 Fartacus
 
>I expect that once the drivers mature, the GPU-based solution will far exceed what PhysX is capable of.

I'll believe it when I see it. I'd be willing to bet the NVidia solution will be acceptable but subpar compared to Ageia's. I don't even think they intend to offer advanced functionality like fluid and cloth simulation. But having only seen PhysX in action, I guess the GDC will clarify things, won't it? "Exceed" is being hopeful, "far exceed" is a distant hope at best -- GPUs are GPUs, they're not PPUs.

I have news for you, GPUs have supported fluid and cloth simulation since before PPUs were a gleam in someone's eye. Just search for "cloth simulation" on NVidia's developer pages.

There is no fundamental difference between a GPU and a PPU in terms of capability. The PPU uses symmetric multiprocessing vector ALUs to achieve the same thing that is done in a GPU's shader ALUs (which are essentially arrays of stream-oriented vector + scalar ALUs). Physics processing is primarily geometric operations for collisions, plus linear-system and matrix solvers. GPU hardware is very well suited for these things. Are you under the impression that PPUs have "physics circuits" that make them inherently faster for physics processing? Well guess what, they don't.
"Exceed" is not being hopeful, and "far exceed" is not a distant hope. It's only a matter of time.

>And it is not a software solution, it is a hardware solution. Shader model 3.0 exposes all of the functionality that you need to do simulation entirely on the GPU.

Read what I said again. I said that it is a new software solution that takes advantage of existing hardware. So it is indeed both a hardware and software solution.

GPU physics processing is no more a software solution than PPU physics processing is. Both are stream-oriented processes, so the relevant scene data must be fed to and read back from the hardware in order to interact with (affect and be affected by) the game logic. That is the only software-centric part of the processing in either solution.
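Per frame, either solution boils down to the same flow. A C++ sketch with hypothetical placeholder names (uploadState, runKernels, and readBack stand in for whatever the vendor's driver actually exposes):

    struct Bodies { /* positions, velocities, contacts... */ };
    struct Scene {
        Bodies state;
        Bodies& bodies() { return state; }
        void applyGameplayEvents() {}          // sounds, damage, AI reactions
    };
    struct Device {                            // stands in for the driver API
        void uploadState(const Bodies&) {}     // feed scene data to the card
        void runKernels(float) {}              // simulation happens on-chip
        void readBack(Bodies&) {}              // results return to the game
    };

    // Identical per-frame flow for a PPU and for a GPU solution; the
    // upload/readback across the bus is the only "software" part either way.
    void physicsFrame(Scene& scene, Device& dev, float dt)
    {
        dev.uploadState(scene.bodies());
        dev.runKernels(dt);
        dev.readBack(scene.bodies());
        scene.applyGameplayEvents();
    }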

>And with NVidia and ATI's aggressive refresh cycles, there's no way Ageia will be able to keep up.

Again, I'll believe it when I see it. Neither NVidia nor ATI are experts on physics. Ageia is.

You presume a lot. Graphics and simulation are intersecting disciplines. And both NVidia and ATI have physics experts on staff. You also seem to discount the crew at Havok that has been involved in the effort. And I'll believe Ageia are physics experts when I see it.

 
 
42. Re: This could help PhysX Mar 20, 2006, 21:52 SquirrelZero
 

I expect that once the drivers mature, the GPU-based solution will far exceed what PhysX is capable of.

I'll believe it when I see it. I'd be willing to bet the NVidia solution will be acceptable but subpar compared to Ageia's. I don't even think they intend to offer advanced functionality like fluid and cloth simulation. But having only seen PhysX in action, I guess the GDC will clarify things, won't it? "Exceed" is being hopeful, "far exceed" is a distant hope at best -- GPUs are GPUs, they're not PPUs.

And it is not a software solution, it is a hardware solution. Shader model 3.0 exposes all of the functionality that you need to do simulation entirely on the GPU.

Read what I said again. I said that it is a new software solution that takes advantage of existing hardware. So it is indeed both a hardware and software solution.

And with NVidia and ATI's aggressive refresh cycles, there's no way Ageia will be able to keep up.

Again, I'll believe it when I see it. Neither NVidia nor ATI are experts on physics. Ageia is.

 
 
41. Re: This could help PhysX Mar 20, 2006, 21:46 Fartacus
 
I expect that once the drivers mature, the GPU-based solution will far exceed what PhysX is capable of. And it is not a software solution, it is a hardware solution. Shader model 3.0 exposes all of the functionality that you need to do simulation entirely on the GPU. And with NVidia and ATI's aggressive refresh cycles, there's no way Ageia will be able to keep up.

 
 
40. Re: This could help PhysX Mar 20, 2006, 21:35 SquirrelZero
 
It's an interesting development from NVidia, but I have a feeling it can't quite do what the PhysX card can. PhysX can pull off about 8000 rigid bodies at once -- on beta drivers, no less. It also has joint, cloth, volumetric particle and fluid simulation, which the NVidia press release above makes no mention of.

Another thing to note is that this is not the same as a PPU. It sounds more like the Havok engineers have found a way to utilize existing GPU routines to improve the performance of physics. There's no mention of new hardware, only a "software product" from Havok that uses GF6/7 GPU's. If so, what effect will this have on GPU performance? Does this mean that power is being taken away from the GPU to aid physics processing? I can imagine that some people wouldn't be very happy about that.

There's also the fact that one solution favors Havok (Source Engine, other games) and one favors NovodeX (Unreal Engine 3, Reality Engine, other games). That's a key split. Not having a unified physics solution is like the DirectX/GL wars all over again -- except this time, cards won't support both.

Ageia will be demonstrating the first actual game using PhysX at GDC as well. I've seen it in action already, and let me tell you, this is the future. That said, I'm looking forward to NVidia's demonstration, because that'll really draw the battle lines. If NVidia's solution can't keep up with PhysX, PhysX won't be quite as dead as people think.

This comment was edited on Mar 20, 21:39.
 
 
39. Re: This could help PhysX Mar 20, 2006, 20:25 Dev
 
Yeah, except for one thing: once it takes off, Nvidia will just add dedicated physics hardware to the next-gen graphics cards so they can charge more money for them.

So if you can get a kick-butt, latest-gen, top-of-the-line graphics card for $500, or $550 for a version with hardware-accelerated physics, and the standalone PhysX card is $100, then what do you think people will do?

Nvidia could also do something similar in mid-range solutions, at $200 or $250, etc.

People will save money and get the integrated solution. So even if the market grows, Nvidia can easily kill any demand for standalone cards.

 
 
38. This could help PhysX Mar 20, 2006, 19:52 Shadowcat
 
I see a lot of people saying that this will kill PhysX, but I beg to differ.

The problem with the first dedicated physics card is that the market for it is tiny. Game developers can't make games that require it because few people will have one, so the initial uses for it will be just a bit of added glitz. I want it to become prevalent due to the long-term benefits, but it seems like a hard sell even to me.

But if nVidia can add hardware-accelerated physics with nothing more than a driver update, suddenly there's a vast customer base with hardware-accelerated physics already in their machine. That means developers actually have a market to develop for, and the hardest problem for PhysX is suddenly solved by someone else! (provided the two approaches are sufficiently compatible).

So 'all' the PhysX guys have to do is create a superior solution.

The games will come, and if CPU+GPU+PPU gives you better performance than just CPU+GPU then people will buy that PPU.


edit: oops, I see that Beamer actually said a lot of this already.
This comment was edited on Mar 20, 19:57.
 
 
37. Re: More for the GPU? Mar 20, 2006, 16:38 Rigs
 
As for the Geforce video guy -- why put that in this thread?

Because he can...it's not like it's the first time a Blues thread has strayed off-topic...

However, read your manual.

Don't state the fucking obvious, we're not that stupid...

Many Geforce cards have VIVO -- Video IN, Video Out. No TV tuner of course, but you should be able to plug in your gamecube.

Indeed, as do ATI's. But like computers themselves, they come in all different kinds and colors. Hell, my X850 alone (which isn't a particularly noteworthy or popular card, to say the least) has many different configurations available from different manufacturers. Dan didn't even know the model of his card at first, which is fine. Just goes to show how hard it is to nail down a particular piece of hardware. I mentioned the TV Wonder (or Remote Wonder, whatever the hell ATI is calling it nowadays) because he could bypass the vid card altogether, as I did to play my Xbox in a window and catch the news at the same time...

Moral of the story... most Blues vets don't open their mouths unless they've exhausted almost all of the possibilities...

EDIT: Beamer just nailed the freakin' thread to the wall... congratulations...

=-Rigs-=



"If we do not succeed, then we run the risk of failure." - Dan Quayle
This comment was edited on Mar 20, 16:41.
 
 
'I know what you think you are, what you want us to believe! But I don't buy it! For three years now you've been pulling everyone's strings, getting us to do all the work, and you haven't done a damn thing except stand there and look cryptic.'
 
36. Re: Cool Mar 20, 2006, 16:26 Beamer
 
PhysX never had a chance. Most computers are low on room these days anyway, although on-board Ethernet and the obsolescence of the modem have helped.

But there's no demand. Graphics cards accelerated something every game has and needs. They were used constantly. Physics cards need to be supported, and without a user base, support might not come quickly. Graphics cards caught on by winning over early adopters. Once people saw VQuake or GLQuake they knew they needed one. With physics cards? What will be the killer app? Will there be a game that runs on a non-PhysX rig but runs noticeably better on a PhysX rig, the way games with GL ran so much differently and better? Will print and internet coverage be able to sell it as well as they could by simply showing screenshots of clear water in Quake?

There's really just no huge demand, not enough for a separate product. For most people that money would be better spent on more RAM.
However, if graphics cards start incorporating this, even if they do throw a PPU on next to the GPU, it will take off. People will be more willing to pay a little extra, plus the card makers can land tons of OEM deals, opening up a reason for developers to actually support the product.

-------------
Doomriders: the first new band worth a signature - http://www.deathwishinc.com/
 
-------------
Music for the discerning:
http://www.deathwishinc.com
http://www.hydrahead.com
http://www.painkillerrecords.com
 
35. Re: Cool Mar 20, 2006, 15:34 Fartacus
 
You can't ever forget preconceptions on the part of buyers, whether correct or incorrect. If the majority of customers who would be interested in this feature harbors some suspicion that sharing the GPU would reduce framerates -- not an unreasonable suspicion, either -- then it will fail in the marketplace.

Maybe you can't for the first generation. But you ultimately can. The consumers will go with the best value for the $. And consumers can be educated.

 
 
34. Re: Cool Mar 20, 2006, 15:21 Zathrus
 
the question is how long they can keep on coming up with beefier GPUs. They are bound to hit a similar computational limit

When they eventually hit that limit they can refine the way they make the cards. Right now GPUs are designed at a very, very high level. It's fast to develop, but leads to a lot of redundancy. If they designed them more like CPUs are designed then they could optimize things much better. Of course, then we're talking about a new GPU every 3-5 years with minor revisions (higher clock speed, some new minor features) every few months. Just like CPUs.

I don't know that the market would really support that change though; maybe they'll just go for multiple chips with further parallelization.

I think that you're right about the PPU market though -- the CPU and GPU markets are likely to eat away the demand from underneath it.

 
 
33. More for the GPU? Mar 20, 2006, 15:19 Ahnteis
 
=(

I'm already limited in the resolution and AA/AF I can run modern games at. I have a hard time believing that the CPU is affecting those.

Of course, physics is also slowing down my CPU -- so really, I have to lean towards a dedicated physics card, as long as it's under $100.

As for the Geforce video guy -- why put that in this thread? However, read your manual. Many Geforce cards have VIVO -- Video IN, Video Out. No TV tuner of course, but you should be able to plug in your gamecube.

 
 
32. Re: Cool Mar 20, 2006, 15:15 DedEye
 
I have no issues with including physics processing onboard, but they're including proprietary technology. Now there's a good chance that ATI adopts a totally different standard, and then we have fragmentation.

Just what we need, another Glide.

 
 
31. Re: Cool Mar 20, 2006, 15:05 Ratty
 
You can't ever forget preconceptions on the part of buyers, whether correct or incorrect. If the majority of customers who would be interested in this feature harbors some suspicion that sharing the GPU would reduce framerates -- not an unreasonable suspicion, either -- then it will fail in the marketplace.

 
 
30. re: one of the first comments Mar 20, 2006, 15:03 Dev
 
RTFA, it says Geforce 6 and 7, not some new series of $700 cards. I.e. a driver update.
In fact, just read the blurb that Blue's has.

Also, I agree with the earlier comments; this has killed PhysX before it even started (as long as it gives a decent performance boost).
For instance (a totally made-up example with made-up numbers), let's say you can give up 10% of graphics card performance for a 30% increase in physics, which offloads 20% of the CPU load, for an overall game performance increase of 15%.

Then everyone will go nuts over this because they get extra FPS out of the game, despite the graphics card actually going slower.
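To see how FPS can rise while the graphics card itself gets slower, model a CPU-bound frame (these numbers are just as made up as the ones above):

    #include <algorithm>
    #include <cstdio>

    int main()
    {
        // A CPU-bound frame: the GPU finishes early and waits (made-up ms).
        float cpu = 20.0f, gpu = 12.0f;
        std::printf("before: %.1f fps\n", 1000.0f / std::max(cpu, gpu)); // 50.0

        // Offload physics: the CPU sheds 20% of its load, the GPU slows 10%.
        cpu *= 0.8f;   // 16.0 ms
        gpu *= 1.1f;   // 13.2 ms
        std::printf("after:  %.1f fps\n", 1000.0f / std::max(cpu, gpu)); // 62.5
        return 0;
    }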

This comment was edited on Mar 20, 15:05.
 
 
29. Re: Cool Mar 20, 2006, 14:51 Fartacus
 
>It'd also be a huge waste of real estate to have a separate core for this

That's what they said for GPUs in the first place.

It didn't work out.

Now, this nVidia trick is a nice quick fix, but it's not gonna hold in the long term. The market's gonna want dedicated PPUs - and the first graphics card maker to put said PPU on their own graphics card as a standard will have a HUGE advantage in the early-to-medium period.

(Still waiting for PhysX's card...)

The example of GPUs vs. CPUs is an extremely bad one. CPUs are very bad at the kind of stream-oriented vector processing that is needed for 3D graphics, which is exactly what GPUs were designed to handle.

The type of computation and dataflow that occurs for physics and graphics processing is very similar. So a GPU is well-suited to the task of physics processing.
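Concretely, a vertex transform and a physics integration step have the same shape: one small function mapped independently over a big array. An illustrative C++ sketch (not any actual shader code):

    #include <cstddef>

    struct Vec4 { float x, y, z, w; };

    // Graphics: transform every vertex by a 4x4 matrix (column-major),
    // which is what a vertex shader spends most of its time doing.
    void transformVertices(Vec4* out, const Vec4* in,
                           const float m[16], std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i) {
            const Vec4& v = in[i];
            out[i] = { m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w,
                       m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w,
                       m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w,
                       m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w };
        }
    }

    // Physics: advance every particle by its velocity. Same dataflow:
    // independent elements in, independent elements out.
    void advanceParticles(Vec4* pos, const Vec4* vel,
                          float dt, std::size_t n)
    {
        for (std::size_t i = 0; i < n; ++i) {
            pos[i].x += vel[i].x * dt;
            pos[i].y += vel[i].y * dt;
            pos[i].z += vel[i].z * dt;
        }
    }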

If I have a choice between a GPU with N transistors dedicated only to graphics processing plus a PPU with M transistors dedicated only to physics processing, or a single GPU with N + M transistors that can handle both equally well, I'll take the GPU over the GPU + PPU any day. The transistor usage is more efficient (i.e., there is no redundancy for MMUs, I/O, etc.), and the hardware usage can be tuned for a particular application.

 
 