NVIDIA/AGEIA Q&A

The NVIDIA AGEIA PhysX Acquisition Q&A on FiringSquad talks with NVIDIA's Derek Perez about the acquisition of AGEIA, a fellow all-caps hardware company. He uses the same "heterogeneous computing model" phrase used yesterday to announce the move (story), an interesting take, since it sounds like they plan to homogenize AGEIA's PPU onto their GPUs, saying "Physics is a natural for processing on the GPU." As for existing stand-alone PhysX cards, he says: "We will continue to support the current line of Ageia products that are on the market today."
12. Re: I wanna get - Feb 5, 2008, 21:05
"...physics aren't remotely complex enough, like graphics, as to necessitate the need for external processing and B) physics complexity in games will never, ever outpace the CPU's ability to handle it."

Bullshit. Physics is probably more complex than graphics. Rendering transformed polygons, rasterizing to 2D scenes, and blending layer upon layer of polygons of varying materials is work naturally suited to a dedicated GPU.

Physics is the same way. Games have had to avoid many aspects of physics because of their complexity and the inability to process them in a real-time game. True geometric shattering, fluid dynamics, accurate deformation of models, and countless other aspects of physics remain untouched due to the difficulty of modelling them in real time. CPUs are not up to the task of all that. The only thing that keeps physics within current CPUs' ability to handle is developers' careful use of only specific aspects of it. We've only scratched the surface of physics in games. More realistic collision responses by material type, vehicles, and various joint types are but a small portion of what can be done with physics, and even within those domains the implementations in games are simplified quite a bit.
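To make "collision responses by material type" concrete, here's a minimal C++ sketch of an impulse-style contact response where the bounciness is looked up per material pair. The Body struct, the table values, and resolveContact are made-up illustrations, not any real engine's API.

#include <algorithm>

// Minimal impulse response along a contact normal between two rigid bodies.
// The per-material restitution table stands in for "collision response by
// material type"; real engines use far richer models (friction, stacking, etc.).
struct Body {
    float invMass;    // 0 for static geometry
    float velAlongN;  // velocity projected onto the contact normal
};

// Hypothetical lookup: how "bouncy" a contact between two materials is.
float restitution(int matA, int matB) {
    static const float table[2][2] = {
        { 0.1f, 0.3f },   // e.g. metal vs. metal, metal vs. rubber
        { 0.3f, 0.8f },   // e.g. rubber vs. metal, rubber vs. rubber
    };
    return table[matA][matB];
}

// Apply an impulse so the bodies separate with material-dependent bounce.
// Assumes at least one body is dynamic (invMass sum is non-zero).
void resolveContact(Body& a, Body& b, int matA, int matB) {
    float relVel = a.velAlongN - b.velAlongN;
    if (relVel >= 0.0f) return;  // already separating, nothing to do
    float e = restitution(matA, matB);
    float j = -(1.0f + e) * relVel / (a.invMass + b.invMass);
    a.velAlongN += j * a.invMass;
    b.velAlongN -= j * b.invMass;
}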

Physics on the GPU will probably remain an eye-candy focus for a while at least, but it's not impossible to have gameplay-affecting physics on a separate processor (GPU or PPU). Reading back object transforms or collision results is well within the capability of GPU read operations, since it doesn't approach the bandwidth cost of reading back textures or procedural meshes. Until there is enough market penetration from this sort of thing, developers probably won't use it for gameplay-affecting features, though. In the end, whatever reduces CPU load gives the rest of the game more headroom. Physics and collision happen to be among the more expensive aspects of a game apart from rendering. The quicker we parallelize that, whether on the GPU or on multicore CPUs, the more is opened up for the rest of the game, like AI.
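As a rough illustration of how small that readback is, here's a hedged C++ sketch that pulls per-object transforms back to the CPU with the CUDA runtime's cudaMemcpy. The Transform layout and the assumption that some physics kernel (not shown) already wrote d_transforms on the device are mine; only the cudaMemcpy call itself is real API.

#include <cuda_runtime.h>
#include <vector>

// A 4x3 affine transform per rigid body, as a physics step might produce.
struct Transform { float m[12]; };

// Copy per-object transforms back to the CPU after a GPU physics step.
// A few thousand of these is a tiny transfer compared with reading back
// textures or generated meshes, which is the point made above.
std::vector<Transform> readBackTransforms(const Transform* d_transforms,
                                          int objectCount) {
    std::vector<Transform> hostCopy(objectCount);
    cudaMemcpy(hostCopy.data(), d_transforms,
               objectCount * sizeof(Transform), cudaMemcpyDeviceToHost);
    return hostCopy;
}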

Also, regarding AI: as a professional AI developer (Pandemic Studios, on Mercs2) as well as a hobby developer (Omni-bot), I can mention that collision and physics are a large cost of AI, for vision, pathfinding, cover determination, etc., and the cheaper that ultimately gets, the better it is for that aspect of AI. If NVIDIA helps us work toward putting gameplay-affecting physics onto the GPU, it's a win for developers in general, just as offloading graphics was way back with the introduction of video cards.
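For a sense of how AI leans on collision queries, here's a small hypothetical C++ sketch of a line-of-sight (vision) check built on a raycast against the collision world. The CollisionWorld interface and canSee are stand-ins for whatever a given engine actually exposes, not real API.

#include <cmath>

struct Vec3 { float x, y, z; };

// Hypothetical collision-query interface; in practice this would be the
// engine's physics/collision layer (the expensive part mentioned above).
struct CollisionWorld {
    // Returns true if the segment from 'from' to 'to' hits world geometry.
    virtual bool raycastBlocked(const Vec3& from, const Vec3& to) const = 0;
    virtual ~CollisionWorld() {}
};

// AI vision check: the bot "sees" the target if it is within range and no
// geometry blocks the eye-to-target ray. Every such query costs a collision
// test, which is why cheaper physics directly benefits AI.
bool canSee(const CollisionWorld& world, const Vec3& eye,
            const Vec3& target, float maxRange) {
    float dx = target.x - eye.x;
    float dy = target.y - eye.y;
    float dz = target.z - eye.z;
    if (dx * dx + dy * dy + dz * dz > maxRange * maxRange) return false;
    return !world.raycastBlocked(eye, target);
}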
