NVIDIA's Cloud Rendering

NVIDIA announces CloudLight, a cloud-based "system for amortizing indirect lighting in real-time rendering." They say this new framework "explores tradeoffs in different partitions of the global illumination workload between Cloud and local devices, with an eye to how available network and computational power influence design decisions and image quality." This video offers a look at what this means in case the following explanation isn't crystal clear:
We introduce CloudLight, a system for computing indirect lighting in the Cloud to support real-time rendering for interactive 3D applications on a user's local device. CloudLight maps the traditional graphics pipeline onto a distributed system. That differs from a single-machine renderer in three fundamental ways. First, the mapping introduces potential asymmetry between computational resources available at the Cloud and local device sides of the pipeline. Second, compared to a hardware memory bus, the network introduces relatively large latency and low bandwidth between certain pipeline stages. Third, for multi-user virtual environments, a Cloud solution can amortize expensive global illumination costs across users. Our new CloudLight framework explores tradeoffs in different partitions of the global illumination workload between Cloud and local devices, with an eye to how available network and computational power influence design decisions and image quality. We describe the tradeoffs and characteristics of mapping three known lighting algorithms to our system and demonstrate scaling for up to 50 simultaneous CloudLight users.

Re: NVIDIA's Cloud Rendering
Jul 29, 2013, 10:45
eRe4s3r wrote on Jul 29, 2013, 10:08:
Basically, the bandwidth required is extremely small... latency is the real linchpin for moving light-sources. And if it can stay within 100ms it's fine.

Correct, latency is the issue. Part of that latency is processing time, but the point of this tech is precisely to eliminate that component. 100ms might be tolerable, although in my experience with virtual audio piping, 50ms is what you really want... and honestly, that is feasible.

More interestingly, it hints at grid-based geographical load balancing, something I predicted would become big more than five years ago (it was, and still is, being researched at Palo Alto). Even the old experts (I've seen network designers from the '80s make "informed" posts on this technology recently) miss the point that network bottlenecks are the main issue. IPv6 will remove the barriers to an effective grid, but I'm getting ahead of myself with regard to this specific NVIDIA announcement. Needless to say, being able to rely on locality means lower latency and less redundant data transfer.

Long story short, it optimizes rendering through prediction and the use of cloud networking.
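The latency argument above can be put in back-of-envelope form: remotely computed lighting is usable only if network round trip plus server compute fits inside a staleness budget (roughly 100ms per the quoted comment, 50ms as a safer target). The sketch below is illustrative only; the specific millisecond figures are assumptions, not numbers from NVIDIA's paper.

```python
# Back-of-envelope check: does cloud-computed indirect lighting arrive
# within a tolerable staleness budget? All numbers are illustrative.

def lighting_latency_ms(rtt_ms, server_compute_ms, codec_ms):
    """Total delay before remotely computed lighting is usable locally."""
    return rtt_ms + server_compute_ms + codec_ms

def within_budget(total_ms, budget_ms=100):
    """Indirect lighting changes slowly, so ~100 ms of staleness is often
    tolerable; ~50 ms is a safer target for fast-moving light sources."""
    return total_ms <= budget_ms

# Geographical load balancing keeps the server nearby, so RTT stays low.
nearby = lighting_latency_ms(rtt_ms=20, server_compute_ms=15, codec_ms=5)
distant = lighting_latency_ms(rtt_ms=90, server_compute_ms=15, codec_ms=5)

print(nearby, within_budget(nearby, 50))    # 40 True
print(distant, within_budget(distant, 50))  # 110 False
```

This is why locality matters in the comment's argument: the server compute and codec terms are fixed, so the round trip is the only term a grid can shrink.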