We were also told that DirectX 12 will support all of this across multiple GPU architectures, simultaneously. What this means is that Nvidia GeForce GPUs will be able to work in tandem with AMD Radeon GPUs to render the same game – the same frame, even.
This is especially interesting because it lets you leverage the strengths of both hardware platforms if you wish. If you like Nvidia's GeForce Experience software and 3D Vision, but you want to use AMD's TrueAudio and FreeSync, chances are you'll be able to do that when DirectX 12 comes around. What will likely happen is that one card will operate as the master, while the other is used for additional rendering power.
What we're seeing here is that DirectX 12 is capable of aggregating graphics resources, whether compute or memory, in the most efficient way possible. Don't forget, however, that this isn't only beneficial for systems with multiple discrete desktop GPUs. Laptops with dual-graphics solutions, or systems running an APU alongside a discrete GPU, will be able to benefit too. DirectX 12's aggregation will allow GPUs that would be completely mismatched today to work together, possibly making technologies like SLI and CrossFire obsolete in the future.
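To make the aggregation idea concrete, here is a minimal sketch of how an engine might discover every usable GPU in a system under Direct3D 12 before deciding how to divide work between them. This is our own illustration of the adapter-enumeration pattern, not code from Microsoft's presentation; it only needs the public DXGI and D3D12 headers and linking against dxgi.lib and d3d12.lib.

```cpp
// Hypothetical illustration: enumerate every hardware adapter in the system --
// GeForce, Radeon, or the integrated GPU on an APU -- and create a D3D12 device
// on each one that supports it. An engine using explicit multi-adapter could
// then divide rendering work across whichever devices this loop produces.
#include <windows.h>
#include <dxgi1_4.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <vector>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;

    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc = {};
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue; // ignore the software rasterizer

        // Vendor does not matter here: any adapter exposing feature level 11_0
        // can back a D3D12 device and take part in the aggregated pool.
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
        {
            wprintf(L"D3D12 device created on: %ls\n", desc.Description);
            devices.push_back(device);
        }
    }

    wprintf(L"%u device(s) available for multi-adapter rendering\n",
            static_cast<unsigned>(devices.size()));
    return 0;
}
```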
There is a catch, however. Lots of the optimization work for the spreading of workloads is left to the developers – the game studios. The same was true of older APIs, though, and DirectX 12 is intended to be much friendlier. Advanced uses may be a bit tricky, but according to the source, implementing SFR (split-frame rendering) should be a relatively simple and painless process for most developers.
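To illustrate why SFR is considered relatively simple, the sketch below shows one way a renderer could split a single frame between two Direct3D 12 devices: both command lists record the same scene, but each is clipped to its own half of the screen with a scissor rectangle. This is a hypothetical outline, not the method described in the source; RecordHalfFrame is an invented helper, and the cross-adapter copy is only noted in a comment.

```cpp
// Hypothetical illustration of split-frame rendering (SFR): two command lists,
// one per GPU, record the same scene, but each is clipped to its own half of
// the render target with a scissor rectangle. RecordHalfFrame and the
// cross-adapter copy mentioned at the end are assumptions for this sketch,
// not something DirectX 12 does automatically.
#include <windows.h>
#include <d3d12.h>
#include <cstdint>

// Records state for one GPU's share of the frame; actual draw calls would follow.
void RecordHalfFrame(ID3D12GraphicsCommandList* cmdList,
                     const D3D12_VIEWPORT& fullViewport,
                     const D3D12_RECT& region)
{
    // Both GPUs see the same camera and viewport, so no geometry has to be re-sorted...
    cmdList->RSSetViewports(1, &fullViewport);
    // ...the rasterizer simply discards pixels outside this GPU's scissor rectangle.
    cmdList->RSSetScissorRects(1, &region);
}

void BuildSplitFrame(ID3D12GraphicsCommandList* primaryList,
                     ID3D12GraphicsCommandList* secondaryList,
                     uint32_t width, uint32_t height)
{
    const D3D12_VIEWPORT full = { 0.0f, 0.0f,
                                  static_cast<float>(width), static_cast<float>(height),
                                  0.0f, 1.0f };

    const D3D12_RECT topHalf    = { 0, 0,
                                    static_cast<LONG>(width), static_cast<LONG>(height / 2) };
    const D3D12_RECT bottomHalf = { 0, static_cast<LONG>(height / 2),
                                    static_cast<LONG>(width), static_cast<LONG>(height) };

    RecordHalfFrame(primaryList, full, topHalf);      // e.g. the "master" card
    RecordHalfFrame(secondaryList, full, bottomHalf); // e.g. the second vendor's card

    // After both lists execute, the secondary adapter's half would be copied to
    // the primary adapter (a cross-adapter resource copy) and presented as one frame.
}
```

The split itself is cheap; the per-frame synchronization and the copy of the secondary GPU's half back to the primary adapter are presumably where most of the developer effort mentioned above would go.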
ForgedReality wrote on Feb 25, 2015, 18:14:
Rigs wrote on Feb 25, 2015, 10:19:
ForgedReality wrote on Feb 25, 2015, 10:00:
There's no reason to do that these days though, since AMD can't seem to compete, and nVidia is constantly forcing them to drop their prices.
Psh, bite your tongue! There are ATI/AMD fanboys here...I know because I'm one of them. AMD might not be able to compete CPU-wise, but their videocards still give Team Green a run for their money, no doubt. I'm sure a lot of Radeon users would agree. My HD7850 easily stands up to today's games, even AC: Unity, with only some minor ticks down on the ol' graphics slider...
=-Rigs-=
My GTX580 is two years older and it still outperforms...
http://gpuboss.com/gpus/Radeon-HD-7850-vs-GeForce-GTX-580
Slick wrote on Feb 25, 2015, 20:00:
RaZ0r! wrote on Feb 25, 2015, 19:34:
Sounds like LucidLogix Virtu MVP
indeed. that was my first thought.
but 2 cards working on THE SAME FRAME? that sounds like a bridge too far. I'd love to see it work, but from what i know ATI and Nvid use very different architectures to achieve the performance relative to each other. It would seem like a real fustercluck to get them to be working on the same frame. does the CPU only spit out the geometry and z-buffer from the bottom half of the scene to one card? and have to re-calculate for the top half? I'd think that there would be a lot of unoptimized shit going on.
still a nifty idea. shows progress towards true parallelization, which to me is the real goal here. However i wouldn't bank on cross-vendor cards working on the same FRAME, that seems like a long bet. the 900% i'm still onboard with though, if a game was 900% more demanding on the CPU side of things using a CPU-optimized API that is...
RaZ0r! wrote on Feb 25, 2015, 19:34:
Sounds like LucidLogix Virtu MVP
Rigs wrote on Feb 25, 2015, 10:19:
ForgedReality wrote on Feb 25, 2015, 10:00:
There's no reason to do that these days though, since AMD can't seem to compete, and nVidia is constantly forcing them to drop their prices.
Psh, bite your tongue! There are ATI/AMD fanboys here...I know because I'm one of them. AMD might not be able to compete CPU-wise, but their videocards still give Team Green a run for their money, no doubt. I'm sure a lot of Radeon users would agree. My HD7850 easily stands up to today's games, even AC: Unity, with only some minor ticks down on the ol' graphics slider...
=-Rigs-=
Mashiki Amiketo wrote on Feb 25, 2015, 13:29:
Shineyguy wrote on Feb 25, 2015, 11:27:
To be fair, their statement about it was that they couldn't support such setups as that so they disabled it in their drivers.
From a support standpoint, I can completely agree with what they did. I would not want to have to try to support a competing vendor's software and its interaction with my own.
That's why it worked perfectly fine, right? If it works fine and people don't have a problem, and then they go out of their way to disable it, all that says is they threw a hissy fit. All they have to do in a support environment is say "we don't support that configuration and will offer no support on it." Problem solved.
jdreyer wrote on Feb 25, 2015, 12:31:
Nope cause if you had a high end AMD card for primary, you'd only need a 2 or 3 generation old Nvidia card for PhysX (for example). You'd probably buy used and not new so Nvidia wouldn't make any money.
I don't see why they would. They'd still be selling cards to people that use AMD primarily. Wouldn't a sale in that case be better than no sale at all?
Creston wrote on Feb 25, 2015, 14:33:
Maybe DX12 will have some amazing magic in place that would just allow you to parallel boost performance like that, but this bit does not seem to indicate so: "Lots of the optimization work for the spreading of workloads is left to the developers."
I really don't think this will wind up having much, if any, effect at all.
{PH}88fingers wrote on Feb 25, 2015, 13:38:
on Nvidia 970: I laughed https://www.youtube.com/watch?v=spZJrsssPA0
jdreyer wrote on Feb 25, 2015, 12:33:
Creston wrote on Feb 25, 2015, 11:15:
I'll believe it when I see it, but it would be pretty awesome.
I think few devs would ever bother putting in the work required, though (even if it's not difficult), since it'd be a pretty rare occurrence where people are using a hybrid system like that.
I think a lot of people have extra vid cards hanging around they'd throw in their system if they thought they could get extra performance from it.
Shineyguy wrote on Feb 25, 2015, 11:27:
To be fair, their statement about it was that they couldn't support such setups as that so they disabled it in their drivers.
From a support standpoint, I can completely agree with what they did. I would not want to have to try to support a competing vendor's software and its interaction with my own.
Dagnamit wrote on Feb 25, 2015, 12:42:
Does anyone know if the XBone could support hardware upgrades? Seems to me like with the API you could sell an upgraded XBone or an upgrade kit without changing out the primary SoC that's already being sold. Shift the costs of upgraded games to devs, who need to code the support into the game and can then turn around and charge $80.00 for the HD version.