User information for Aelith

Real Name: Aelith
Nickname: None given.
Email: Concealed by request
Description: None given.
Homepage: None given.
Supporter
Signed On: March 24, 2009
Total Posts: 10 (Suspect)
User ID: 54849
10 Comments. 1 page. Viewing page 1.

22. Re: OnLive on OnLive Skeptics Mar 31, 2009, 12:11

You're missing the most important source of latency, which is multi-threaded frame buffering. Modern game engines split up the work across numerous CPUs/threads. There are different strategies for handling that, but some form of pipelining is almost always used. So one CPU thread is doing physics for frame N, another CPU thread is doing rendering work for frame N-1, the GPU is working on frame N-2, and the monitor/TV is displaying frame N-3.

This easily adds up to 3-4 frames of lag for some games, which is over 100ms of latency just playing the game locally.
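
To put rough numbers on it (the stage counts and frame rates here are just illustrative, not measurements of any particular engine):

[code]
# Back-of-envelope pipeline lag: each in-flight frame adds one
# full frame time of delay. Numbers are illustrative assumptions.

def pipeline_lag_ms(fps, frames_in_flight):
    frame_time_ms = 1000.0 / fps
    return frames_in_flight * frame_time_ms

print(pipeline_lag_ms(30, 3))  # 100.0 ms - a 30 fps console engine
print(pipeline_lag_ms(30, 4))  # ~133 ms - with one more buffered stage
[/code]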

Total lag can actually be measured using a digital camera - see this Gamasutra article: [url]http://www.gamasutra.com/view/feature/3725/measuring_responsiveness_in_video_.php?print=1[/url]

So if you have a good internet connection and can get a 30-50ms ping to their servers, then you can still get an equal or lower total delay playing the game on the server - if the server is much faster than your local machine.

So this system can work if the server can (see the rough comparison sketched after this list):
1.) render the game at really high FPS (in order to eliminate the multi-frame buffering delays mentioned above)
2.) compress the video with near zero latency (which apparently they have solved - my bet is they are using special hardware for that)
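
Here's the kind of back-of-envelope budget I mean - every number here is a hypothetical assumption, just to show the totals can balance:

[code]
# Hypothetical latency budgets: local 30 fps console vs. a much
# faster server plus the network. None of these are OnLive's numbers.

def local_lag_ms(fps=30, frames_in_flight=4):
    return frames_in_flight * 1000.0 / fps

def streamed_lag_ms(server_fps=120, frames_in_flight=4,
                    encode_ms=5, ping_ms=40, decode_ms=5):
    # The faster server shrinks every pipeline stage; the codec and
    # the network round trip get added on top.
    return (frames_in_flight * 1000.0 / server_fps
            + encode_ms + ping_ms + decode_ms)

print(local_lag_ms())     # ~133 ms played locally
print(streamed_lag_ms())  # ~83 ms streamed, under these assumptions
[/code]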

But consider that most games are now multi-platform and are designed for the consoles as the lowest common denominator. Take a recent example like Left For Dead - which gets 30 fps on the 360 but six times that on a modern high end machine.

I'm more skeptical about the economics of having one GPU per user. Perhaps they've found a way to get multiple game instances running per GPU. There certainly is plenty of power for that with console ports (aka, almost everything today except Crysis).

But if this service actually takes off and reaches a million simultaneous users, and they need one GPU per user, those data centers are going to break every super-computer record many times over.

10. Re: OnLive Interviews Mar 25, 2009, 05:55

Yeah yeah - prediction works to a degree, but it probably only buys you a frame or maybe two before it's noticeable. Console controller pads are a little more forgiving there, because they are typically less precise and have momentum. Most people simply can't tell that when you press a button, the action actually happens 60-90 ms later.

100ms of network latency to their server probably wouldn't work. I think you'll need under 50ms, and under 30ms for it to work really well, but we'll see.

Low latency video means their compression starts immediately after the GPU finishes rendering the frame (or maybe, just possibly, it actually compresses in small tiles as it renders, but I digress). This is atypical. Video compression usually buffers many frames. Theirs can't.
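
If they are doing something like the tile trick, the latency win would look roughly like this (pure speculation on my part, with made-up timings):

[code]
# Why encoding tiles as they finish rendering could cut latency,
# vs. buffering the whole frame first. Timings are invented.

RENDER_MS = 16.0  # time to render one frame
ENCODE_MS = 8.0   # time to encode one frame
TILES = 8

# Whole-frame: encoding starts only after the last pixel is done.
whole_frame = RENDER_MS + ENCODE_MS    # 24.0 ms

# Tiled: encoding overlaps rendering; only the last tile's encode
# remains after the frame finishes.
tiled = RENDER_MS + ENCODE_MS / TILES  # 17.0 ms

print(whole_frame, tiled)
[/code]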

I think what they have done is simply this - they have carefully removed every other source of latency under their control to buy back the network ping, such that the total lag is the same (under 100ms, which is some human sweet spot).

Notice they have a special low latency wireless controller; typical controllers have more latency than they can afford.

And actually, I think fighting games would be the hardest - not FPS games. I remember Tekken had some (insanely hard to pull off) special moves which required frame-precise combos.

8. Re: OnLive Interviews Mar 25, 2009, 02:39

Yeah, seeing is believing, so in the end it all comes down to how well the servers scale and how well it actually works on most people's connections. If it works as advertised (and from what people are saying, it does), latency is well hidden and image quality is reasonable.

But the most successful console by far right now, the Wii, doesn't even run at 720p.

And furthermore, running the game at a high PC resolution with a high AA factor and then downsampling and compressing will *still* probably look better than most console games.

As an extreme example, you can run Quake1 at 4000x3000 and it still looks like... 1996 low poly graphics. Resolution quickly hits diminishing returns - pixel quality matters far more than quantity, even at 720p.

To really sell the system though, they are going to need a lot of games, and at least one killer exclusive.

This comment was edited on Mar 25, 2009, 03:22.

56. Re: OnLive a Game Changer? Mar 24, 2009, 21:37

No lag at all is an impossibility in the real world. It's simple physics.

This is true, but even single player games can have plenty of lag - as anyone with the wrong GPU/drivers/LCD etc. has experienced. As I described in a previous post, it's actually *possible* for them to stream a game to you from their servers with LESS total lag than you running the SAME game on your local console. Simple physics. Their servers can be much faster than your 360 or PS3, and if your ping to their servers is low enough, this can completely compensate or even overcompensate for the network delay.

Just because packets are crossing the network does NOT necessarily mean more total system lag. Especially for a multiplayer game, where the packets have to traverse the network *anyway*.

Also, to point out again, they apparently are doing some form of server side prediction - rendering frames with a little prediction.

This comment was edited on Mar 24, 2009, 21:48.

55. Re: OnLive a Game Changer? Mar 24, 2009, 21:30

Well, from what I've heard from some devs who saw it at our office, everyone was surprised by the (lack of) latency - ie, you couldn't tell.

Scalability is an issue, but it's completely controllable by simply limiting how many accounts they sign up regionally, to ensure they have enough server capacity at any given time. That part is really quite simple - no rocket science. It seems you guys are just fishing for reasons for this to fail.

Which brings me to some legitimate reasons it could fail:

Well, I agree that hardcore PC gamers may hold off on this until they have some killer Crysis-munching game you can only play on OnLive, and you can play it at high res with low latency.

But the console market is different - the majority of the games are still 720p at 30fps (1080p just isn't worth it yet), and the controllers and gameplay are suitable for higher total latency, for a variety of reasons.

To me, the immediate wins are console games of all flavors, but especially multiplayer games (where they can offer a much better experience), and of course, MMO's, where there's huge potential.

And BTW, it's also available on the PC, and the device has mouse and keyboard compatibility.

54. Re: OnLive a Game Changer? Mar 24, 2009, 21:21

Google has massive data centers to be sure, but they don't need <30 ms latency - it's just not a design issue. Google is crazy enough that they *are* quite concerned about latency, just not at the same time scales - they are concerned with getting you back your search results within <1000ms or so. Their big technical challenge is search CPU latency, not network latency. OnLive has very different technical challenges.

And yes, actually I'm a graphics engineer. I'm not talking about multi-gigabit internet connections - are you inane? - read my post - or read the actual OnLive announcements:

- 1 Mbps for SD resolution (average DSL speed)
- 5 Mbps for 720p resolution (typical cable speed)
- (presumably 10-12 Mbps for 1080p resolution - Fios, etc. And no, it doesn't scale linearly with resolution - but this is probably uneconomical for their outgoing bandwidth right now, though those prices are dropping - fast)

These numbers are in line with high quality H.264 compression - or maybe a little worse, but their codec has the additional constraints of extremely low latency and low CPU requirements. (However, having access to the rendered frame's depth buffer and potential motion vectors offers a large speedup potential, not to mention they probably run the compression partly on the GPU.) Apparently they have 100 patents on their compression tech, so it's not typical off-the-shelf stuff.
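
You can sanity-check those bitrates against the raw pixel rates yourself - this is my arithmetic, assuming 30fps streams, not anything they've published:

[code]
# Bits per pixel implied by the announced bitrates, assuming 30 fps.
def bits_per_pixel(bitrate_mbps, width, height, fps=30):
    return bitrate_mbps * 1e6 / (width * height * fps)

print(bits_per_pixel(1, 640, 480))    # ~0.11 bpp for SD
print(bits_per_pixel(5, 1280, 720))   # ~0.18 bpp for 720p
# Both are in the ballpark of decent H.264 streams.
[/code]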

You consider 50 million households with broadband a SMALL market? From Broadband Reports, current deployment looks like this:

25 million households with cable internet
20 million DSL (which includes some newer high speed, like ATT UVerse)
5 or so million other (including 2 million Verizon Fios - 20-50 Mbps)

Overall broadband deployment is over 80% of American households today.

Do you know how many 360s + Wiis are in North America? (Look it up - <50 million total.) Again, from a market size perspective - the broadband market (DSL + cable + next gen) is comparable to or *larger* than the console market in size - *today*. Not in 5 years, not 15 years - right now.

As to which is growing faster, I haven't seen that data yet, but my guess would definitely be that broadband is signing up customers faster than the consoles are selling. Not to mention that Obama is going to sink $8 billion into upgrading America's broadband - and it's not like the Fed is about to subsidize the PS4 for Sony.

5 years? Things are moving a little faster than that . . .

If this service has a good launch, I can also see it enticing more people to consider high speed internet, and I imagine they are partnering with the telcos to encourage exactly that.

There's a lot of ifs here to be sure, such as - are they going to have exclusive content? It would seem hard to compete without at least one killer system seller...

45. Re: OnLive a Game Changer? Mar 24, 2009, 18:05

In one millisecond, light moves 200 kilometers through fiber. This means the physical limit of a ping from LA to san fran is about 6 ms. Some of the newer home networks like FIOS are actually getting close to this physical limit (ie, with gigahertz routing hardware, the packets are barely slowed down at network hops).

First. LA is not 700 miles away from New York. Sacramento is 500 miles away from LA roughly. By car it's about 2700 miles, or 4300-4400 kilometers. That's a minimum of 22 ms travel time, not 6.

Did you notice I was talking about LA to san fran?

"The straight line distance between San Francisco and Los Angeles is approximately 346 miles or 557 kilometers" - 2.5ms one way at the speed of light through fiber - double that, round up - 6ms is about the theoretical minimum ping to LA to san fran

And actually, you forgot to double it for round trip time - LA to NY across backbone transit right now is about 50ms actual round trip time, close to theoretical. But of course, that doesn't really matter, because they aren't inane enough to try and connect users to servers that far away.
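
The physics is a one-liner if anyone wants to check it - 200 km per millisecond through fiber, doubled for the round trip:

[code]
# Theoretical minimum round-trip time through fiber.
def min_rtt_ms(distance_km, km_per_ms=200.0):
    return 2 * distance_km / km_per_ms

print(min_rtt_ms(557))    # ~5.6 ms: LA to San Francisco, straight line
print(min_rtt_ms(4400))   # ~44 ms: LA to NY by the road distance above
[/code]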

I'm glad you took some classes on networking a while back. As I'm typing this, over a simple 1 Mbps DSL connection, I have a 10-15ms ping to backbones and colocations in LA - far worse than the physical limit, since my packets go to Irvine first to switch backbones. And my ping to some locations in San Francisco is 30-50ms. So I guess that's just impossible.

90% of the US population lives within a couple hundred kilometers (not miles) from a major metropolitan network hub.

[snip worthless assumptions]

They announced 7 or 10 data centers in the US (I forget exactly), so I don't know why you are talking about 900 miles. Read a little before you post.

And god no, they aren't using TCP, and no, the decoder does not have to wait 1/30 of a second to buffer a full frame of packets before it starts decompressing. Thankfully their engineers took a bunch of networking classes - or didn't need to.

And I mention the LA to san fran connection because they demoed it at our company and that was the test connection - not to a server in LA - and it was smooth. And in reality that's a greater distance than what they are planning.

Impossible Magic? Or maybe a few more networking classes...

44. Re: OnLive a Game Changer? Mar 24, 2009, 17:42

I used Fios as an example of what the next generation will look like, and I think we should view that as the beginning of the next console generation. They will be able to play at HD+ resolutions with no lag. There is also ATT's UVerse which is fiber to the curb and is comparable and growing, and next generation cable internet.

But for current generation games, we have about 25 million cable households and 20 million DSL households. Most of the cable users have or can get up to 5 Mbps, while most DSL users are closer to 1 Mbps, so in a rough look at the market data these are comparable to or greater than the current console penetrations of the Wii and 360 in North America. It's a little more complex because DSL often has lower latency than cable, especially the older cable equipment, but it's getting better all around.

100ms total delay is about the maximum for a console game, and my point was that some console games are already at that now, just playing singleplayer, solely because of pipelined threading techniques.

So, if you run your game at much higher FPS, the frame times are shorter, and the pipeline delays are reduced, and this can make up for the network latency. You can also reduce the pipeline stages - not all games do this btw, and some engines will work better than others. But still, by simply running the game at much higher FPS you can easily shave off 30ms of delay which can compensate for the network latency.

30ms of network latency may seem low to you, and it really depends on your connection AND the location of their data centers, but again my example was to show that many people on high speed connections can already get <30ms ping from LA to san fran. If I were them I would probably have separate NW and SW data centers - and I think they mentioned 7-10 or something for the US.

You can't really compare this latency-wise to existing games; they're building data centers directly connected to ISP backbones and geographically distributed for really low latency connections. Some people with cable connections have 30ms just to their backbone, but most of the DSL users and all the people on the newer high speed services will be fine.

They also may be doing something more tricky than the obvious, such as forward predicting the camera motion a few frames ahead on the server renderer. That could buy a few more frames with some slight game tweaks.

The changes to the game would really be about making it N-way splitscreen. One server process could run a number of clients and render many views of the same game. Some games will be relatively easy to convert to this model, others less so, but they are claiming about 2 weeks - so it's mostly renderer changes.
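
Something like this, I'd imagine - a hypothetical sketch of the server loop, not anything from their SDK:

[code]
# "N-way splitscreen": one shared simulation, many camera views.
# world and clients are hypothetical stand-ins.

def encode(frame):
    # Stand-in for the low-latency video encoder.
    return frame

def server_tick(world, clients, dt):
    world.update(dt)  # physics/AI/logic run ONCE, shared by everyone
    for client in clients:
        frame = world.render(client.camera)  # per-client work: one view
        client.send(encode(frame))
[/code]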

And you need to look at benchmarks of what really high end GPUs can do right now with current multi-platform games. Not Crysis, but games like Left for Dead that are designed for the 360 or PS3 as the lowest common denominator are getting 200 fps right now on a single high end GPU. Nvidia's latest GTX295 has two of those GPUs on one card and you can get 2 or 3 way SLI, so you can build a system today that could render Left for Dead for 16, maybe more, views at 720p or even higher. I imagine they need this kind of scalability to make this profitable - having one GPU per logged on customer ... well, those data centers would have to be world records.
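
The capacity math I'm hand-waving at (the benchmark figure and GPU count are my own assumptions):

[code]
# How many 30 fps client views one box might host, if a single GPU
# can render the game at ~200 fps. Illustrative numbers only.
single_view_fps = 200
target_fps = 30
gpus_per_box = 4      # e.g. two dual-GPU cards (assumed)

views_per_gpu = single_view_fps // target_fps  # 6
views_per_box = views_per_gpu * gpus_per_box   # 24
print(views_per_gpu, views_per_box)
[/code]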

As for the multiplayer benefits, it really depends on the game, but the difference is the server and clients or peers would all be connected on a very high speed local network, so scaling to large numbers of players would suddenly be very feasible - you see?

And the interesting thing, again, is IF and WHEN they get developers deals to develop new games for this system.

35. Re: OnLive a Game Changer? Mar 24, 2009, 16:07

Creston - they don't stream textures or any of the game data. They stream the actual rendered frames, compressed using their own video codec. So the game is played entirely on their servers; you are playing the game over the internet via remote control. This is radically different from any current streaming system - like Steam, where you download the game data.

With OnLive, since you are just playing the game via remote control, you don't have to load, install, etc. - any games residing on their servers will be immediately available to play, with no wait - and you can play them on a simple device, because a video codec like theirs is not very CPU intensive compared to a full game. You could watch a commercial for a game, then literally pick up your controller and start playing it for 5 minutes for free, and then get an option to purchase. You don't even need demos, installs, DVDs, etc.
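
Conceptually, the client is nothing more than this loop - to be clear, this is my own sketch of the remote-control model, with placeholder names, not their actual protocol or API:

[code]
# Thin-client sketch: all game logic and rendering happen on the
# server; the client only ships input up and decodes video down.
# Every name here is a hypothetical placeholder.

def decode(packet):
    # Stand-in for a low-latency video decoder.
    return packet

def thin_client_loop(server, controller, display):
    while True:
        server.send_input(controller.poll())  # a few bytes upstream
        packet = server.recv_frame()          # one compressed frame down
        display.show(decode(packet))          # cheap vs. running the game
[/code]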

Those are the immediate selling points for consumers, and I pointed out the selling points for developers in my previous post. But it gets really interesting when we devs start making games designed for the cloud, not just ports of console games. It will be like skipping several console generations, and it will open all kinds of insane possibilities.

33. Re: OnLive a Game Changer? Mar 24, 2009, 15:56

As a game developer, I really hope this succeeds. I hope they have the capital required to really pull this off.

The timing is about right. In the US, Verizon Fios alone is up to about 2 million homes; add in the total numbers from high speed ADSL and the better cable telco packages and you have a substantial market which has >5 Mbps with low latency - no, not quite everyone yet, but comparable to or greater than, say, the Xbox 360 market in North America, and I expect it to grow quickly now.

In one millisecond, light moves 200 kilometers through fiber. This means the physical limit of a ping from LA to san fran is about 6 ms. Some of the newer home networks like FIOS are actually getting close to this physical limit (ie, with gigahertz routing hardware, the packets are barely slowed down at network hops).

So if the datacenter is close and has peering connections to the backbones (which they need anyway to get bandwidth cheap enough), you could get pings from LA to san fran of 10-15ms. This is less than the refresh latency of even the newest LCD monitors, and less than the average half-frame latency of a CRT scan.

So the network latency is not necessarily a big deal - if you are reasonably close and, more importantly, you have a good ISP. In fact, it's just one little part of the total latency when playing a game. Most of the latency is actually internal, caused by the multiple pipelined frame buffering techniques we use in modern console engines - your controller input comes in on frame N, physics is still processing frame N-1, the rendering thread is working on frame N-2, the GPU is actually rendering frame N-3, and the screen is displaying frame N-4. Since most console games run near 30fps, this can easily cause 100ms of delay, without even going across the network.

But if your game is running on a scary PC rig with a modern GPU, it can run that same console game at 200fps at 720p, no problem - so all the internal delays are reduced dramatically. So you could actually have less latency playing the game over the network, IF you have a good connection. Strange, but true. However, they are probably using those hefty GPUs to render several clients per GPU, so the mileage may vary... but still - it's completely possible for this to work right now, on the current internet, with less total lag than playing the same game rendered locally on the console.

But that's not what is exciting about this. Playing existing games designed for the now-aging current generation is important for launching the service, but what is really interesting is when we start making new games designed specifically to run on a super-computer.

This will change multiplayer games the most. Suddenly, getting a thousand players into an FPS is as easy as getting 8. Right now it is very difficult and extremely inefficient to scale complex simulations across many client machines - there simply isn't enough bandwidth to move all the data. But if you cluster all those machines together into a super-computer, you can combine all the resources effectively.

Actually, remarkably, it will be more efficient to have many players in the same game, because the memory cost of the environment and the bulk of all the calculations for physics, AI, lighting, etc. can be shared. Only the final rendering for a camera view needs to be done uniquely for each client, and that is already a small fraction of the work on a high end system. You wouldn't need a GPU for each connected client - it can be orders of magnitude more efficient than that.
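
A toy cost model makes the point - the per-frame costs are invented, only the shape of the curve matters:

[code]
# Shared simulation amortizes across players; only the view is per-client.
SIM_COST = 80.0   # physics, AI, lighting - paid once per frame
VIEW_COST = 5.0   # rendering one extra camera view

def cost_per_player(n):
    return (SIM_COST + VIEW_COST * n) / n

print(cost_per_player(1))    # 85.0 units for a lone player
print(cost_per_player(100))  # 5.8 units each once the sim is shared
[/code]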

Can you imagine a game engine designed to run on 1,000 high end GPUs, like AMD's new Fusion Render Cloud? I can.

Our current multi-platform and console games get to use maybe 100 gigaflops. A modern cloud center will have 10-100,000X that ... petaflops. Imagine Left for Dead in a real city, like New York, with millions of humans and zombies, tens of thousands of simultaneous players, photo perfect voxel traced graphics, complete physics modelling - hell, with that power, you could start doing particle physics for trillions of particles. In fact, a 1,000 GPU server center - like AMD's new system - will rival the top supercomputers in the world, which the government uses for massive real-time particle simulations. It's just insane.

Now I just hope that AMD, NVIDIA, Microsoft, and Sony don't each launch their own competing cloud models - or if they do, that it's all based on PC standards.