As a game developer, I really hope this succeeds. I hope they have the capital required to really pull this off.
The timing is about right. In the US, Verizon Fios alone is up to about 2 million homes; add in the totals from high-speed ADSL and the better cable packages and you have a substantial market with >5 Mbps and low latency - no, not quite everyone yet, but comparable to or greater than, say, the Xbox 360 install base in North America, and I expect it to grow quickly from here.
In one millisecond, light moves about 200 kilometers through fiber. That puts the physical lower limit on a ping (round trip) from LA to San Francisco at about 6 ms. Some of the newer home networks like Fios are actually getting close to this physical limit (i.e., with gigahertz routing hardware, packets are barely slowed down at network hops).
So if the datacenter is close and has peering connections to the backbones (which they have to do anyway to get bandwidth cheap enough), you could get real-world pings from LA to San Francisco of 10-15 ms. That is less than the refresh interval of even the newest LCD monitors, and in the same ballpark as the average half-frame latency of a CRT scan.
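Just to make the back-of-the-envelope math explicit, here is a tiny sketch of that latency floor - the distance and the fiber speed are my own rough assumptions (roughly 560 km LA to San Francisco, light in fiber at about 200 km per millisecond), not measured figures:

```python
# Speed-of-light latency floor for a fiber round trip.
# Assumed numbers: ~560 km LA <-> San Francisco path, light in fiber ~200 km/ms (~2/3 c).
KM_PER_MS_IN_FIBER = 200.0
DISTANCE_KM = 560.0

one_way_ms = DISTANCE_KM / KM_PER_MS_IN_FIBER
ping_floor_ms = 2 * one_way_ms  # a ping is a round trip

print(f"one way: {one_way_ms:.1f} ms, ping floor: {ping_floor_ms:.1f} ms")
# -> one way: 2.8 ms, ping floor: 5.6 ms, before any routing or queuing overhead
```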
So the network latency is not necessarily a big deal - if you are reasonably close and, more importantly, you have a good ISP. In fact, it's just one small part of the total latency when playing a game. Most of the latency is actually internal, caused by the pipelined frame buffering we use in modern console engines: your controller input comes in on frame N, physics is still processing frame N-1, the rendering thread is working on frame N-2, the GPU is actually rendering frame N-3, and the screen is displaying frame N-4. Since most console games run near 30 fps, this can easily add up to 100 ms of delay without even going across the network.
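As a rough sketch of that input-to-photon pipeline - the five stages and the frame rates are just the illustration above, not numbers from any particular engine:

```python
# Input-to-display latency of a pipelined frame loop.
# Assumes 5 stages (input, physics, render thread, GPU, display), each one frame apart,
# matching the N .. N-4 description above.
def pipeline_latency_ms(fps: float, stages: int = 5) -> float:
    frame_ms = 1000.0 / fps
    return stages * frame_ms

print(pipeline_latency_ms(30))  # ~166 ms at a console-ish 30 fps, no network involved
print(pipeline_latency_ms(60))  # ~83 ms at 60 fps
```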
But if your game is running on a scary PC rig with a modern GPU, it can run that same console game at 200 fps at 720p, no problem - so all of those internal delays shrink dramatically. So you could actually have less latency playing the game over the network, IF you have a good connection. Strange, but true. Of course, they are probably using those hefty GPUs to render several clients per GPU, so your mileage may vary - but still, it's completely possible for this to work right now, on the current internet, with less total lag than playing the same game rendered locally on the console.
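Putting the two pieces together, the same back-of-the-envelope comparison looks like this - again with my own illustrative numbers (a 15 ms ping, the 5-stage pipeline from above, and ignoring video encode/decode time):

```python
# Total input-to-photon latency: local console vs. remote rendering (illustrative numbers only).
stages = 5                               # same 5-stage pipeline as above
local_ms = stages * 1000.0 / 30          # ~166 ms: 30 fps console, no network
remote_ms = stages * 1000.0 / 200 + 15   # ~40 ms: 200 fps server pipeline + 15 ms ping
print(f"local: {local_ms:.0f} ms, remote: {remote_ms:.0f} ms")
```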
But that's not what is exciting about this. Playing existing games designed for the now-aging current generation matters for getting the service launched, but what is really interesting is when we start making new games designed specifically to run on a supercomputer.
This will change multiplayer games the most. Suddenly, getting a thousand players into an FPS is as easy as getting 8. Right now it is very difficult and extremely inefficient to scale complex simulations across many client machines - there simply isn't enough bandwidth to move all the data. But if you cluster all those machines together into a supercomputer, you can combine all the resources effectively.
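Here is a toy illustration of that bandwidth wall - the update size and tick rate are invented, but the shape of the scaling is the point:

```python
# Why distributing a big simulation across clients hits a bandwidth wall (invented numbers).
# Assume each player's state update is ~100 bytes, broadcast 20 times per second.
def per_client_kbps(players: int, bytes_per_update: int = 100, tick_hz: int = 20) -> float:
    # every client must receive updates about every other player
    return (players - 1) * bytes_per_update * tick_hz * 8 / 1000.0

print(per_client_kbps(8))     # ~112 kbps - easy
print(per_client_kbps(1000))  # ~16,000 kbps (16 Mbps) per client - already past a typical home line
```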
Actually, remarkably, it will be more efficient to have many players in the same game, because the memory cost of the environment and the bulk of the calculations for physics, AI, lighting, etc., can be shared. Only the final rendering of a camera view needs to be done uniquely for each client, and on a high-end system that is already a small fraction of the work. You wouldn't need a GPU for each connected client! It can be orders of magnitude more efficient than that.
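A toy cost model of that sharing, with numbers I made up purely to show the shape of the argument (a big shared world simulation amortized across players, plus a fixed per-player rendering cost):

```python
# Toy cost model: shared simulation vs. per-client camera rendering (all figures invented).
SHARED_SIM_GFLOPS = 5000.0        # physics/AI/lighting for the whole world, paid once
PER_CLIENT_RENDER_GFLOPS = 50.0   # one camera view per connected player

def gflops_per_player(players: int) -> float:
    return SHARED_SIM_GFLOPS / players + PER_CLIENT_RENDER_GFLOPS

for n in (8, 100, 1000):
    print(n, gflops_per_player(n))
# 8 -> 675.0, 100 -> 100.0, 1000 -> 55.0: per-player cost falls toward the render-only floor
```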
Can you imagine a game engine designed to run on 1,000 high-end GPUs, like AMD's new Fusion Render Cloud? I can.
Our current multi-platform and console games get to use maybe 100 gigaflops. A modern cloud center will have 10,000-100,000X that ... petaflops. Imagine Left 4 Dead in a real city, like New York, with millions of humans and zombies, tens of thousands of simultaneous players, photo-perfect voxel-traced graphics, complete physics modeling - hell, with that power, you could start doing particle physics with trillions of particles. In fact, a 1,000-GPU server center like AMD's new system will rival the top supercomputers in the world, the kind governments use for massive real-time particle simulations. It's just insane.
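The arithmetic behind that claim, treating per-GPU throughput as a rough assumption (on the order of 1 teraflop per high-end GPU in this era):

```python
# Back-of-the-envelope aggregate compute for a 1,000-GPU cluster (per-GPU figure is an assumption).
CONSOLE_GAME_GFLOPS = 100.0
GPU_TFLOPS = 1.0
GPUS_IN_CLUSTER = 1000

cluster_gflops = GPUS_IN_CLUSTER * GPU_TFLOPS * 1000.0
print(cluster_gflops / CONSOLE_GAME_GFLOPS)   # -> 10000.0x the single-console budget
print(cluster_gflops / 1e6, "PFLOPS")         # -> 1.0 PFLOPS
```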
Now I just hope that AMD, NVIDIA, Microsoft, and Sony don't each launch their own competing cloud models - or if they do, that it's all based on PC standards.