The way Half-Life determines the "latency" value shown in its scoreboard is different from what users generally call "ping". The word "latency" was chosen deliberately, to emphasize that the value shown is a better representation of actual network play than a raw ping. So how is ping calculated, and how is latency different?

Ping is a simple round-trip time for a message from one computer to another. It is independent of whether the machine is playing a game, and it generally represents a best-case round-trip time for communication.
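A simple round-trip measurement can be sketched like this. The `send` and `recv` callables here are placeholders standing in for the actual network I/O (a real ping uses ICMP echo packets); the point is only that ping is "time out minus time in", with nothing game-related involved:

```python
import time

def measure_ping(send, recv):
    """Round-trip time for one echo message, in milliseconds.

    `send` and `recv` are illustrative stand-ins for real network
    send/receive calls; real ping uses ICMP echo request/reply.
    """
    start = time.monotonic()
    send(b"ping")
    recv()  # block until the echo reply arrives
    return (time.monotonic() - start) * 1000.0
```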

In Half-Life, the server tracks round-trip times for packets that it sends to the client. The problem that arises is that, if the client is not running at a high framerate, a message can sit in the client's network queue for a significant amount of time. For instance, a client that is chugging along at 10 fps ( yuck! ) is using about 100 milliseconds to process each frame. The scenario goes like this: on average, you can assume that a message arrives some time during that 100 ms window. If it can arrive anywhere in that window, it's quite possible that the client has already read messages from the network for that frame. If so, the message waits up to 100 ms until the next time the queue is read. Then the client must act on the messages. Finally, the client sends its next movement command to the server.

When the server receives this movement command, it looks up when it sent out the message that the command corresponds to and computes the latency from that round-trip time. If the server is not running at a high framerate -- even if it's running at 40 fps or so -- its own message queue can inflate the round-trip time further, as the client's reply sits in that queue waiting to be read. These numbers are somewhat exaggerated, but you see the point. The server computes the ping over the last 64 messages it received from the client ( ignoring dropped packets ), so if the client sees any kind of transient network backlog, or a low framerate, it can really skew the overall average latency that is reported.
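The server-side bookkeeping described above can be sketched roughly as follows. This is an illustrative model, not the engine's actual code: the server remembers when it sent each sequenced message, and when a client command acknowledging that message comes back, the elapsed time becomes one sample in a 64-sample average; acks for dropped packets simply contribute no sample:

```python
from collections import deque

class LatencyTracker:
    """Illustrative server-side latency estimate: the average round-trip
    time of the last 64 acknowledged messages from one client."""

    WINDOW = 64

    def __init__(self):
        self.samples = deque(maxlen=self.WINDOW)  # recent round trips, seconds
        self.sent = {}  # sequence number -> time the server sent it

    def on_send(self, seq, now):
        self.sent[seq] = now

    def on_receive(self, acked_seq, now):
        send_time = self.sent.pop(acked_seq, None)
        if send_time is None:
            return  # ack for a dropped or unknown packet: ignored
        self.samples.append(now - send_time)

    def latency(self):
        if not self.samples:
            return 0.0
        return sum(self.samples) / len(self.samples)
```

Because the window is a plain average, one transient backlog leaves inflated samples in the window for up to 64 commands, which is why a brief hiccup skews the reported number for a while afterward.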

So the road to better latency values involves improving your framerate as much as possible, as well as playing on servers that run at decent tickrates. ( Such things as video mode, the maximum number of decals, and other settings can have a huge impact on framerate. )

This brings up another misconception, about benchmarking HL performance. In HL, the timedemo and playdemo commands do not work the same way as in other Quake/Quake2-engine games. In particular, demos always play back in the same amount of time they took to record. Thus a demo that is 10 seconds long and contains 240 frames will always play back in roughly 10 seconds. A slow playback machine may render fewer frames, but at most 240 frames will be rendered ( in this example ). The bottom line is: don't use HL demos for benchmarking. It is much better to use "timerefresh" in a known spot ( or several of them ) to get an idea of your framerate, or to run with host_speeds set to 1, which prints the current frames per second to the notification area at the top of the screen. These provide a usable benchmark.
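The playback behavior described above can be modeled in a few lines. This is a simplified sketch of the wall-clock-locked scheme, not engine code: playback can never render more frames than were recorded, and a slow machine simply renders fewer of them in the same elapsed time:

```python
def frames_rendered(demo_length_s, recorded_frames, playback_fps):
    """Frames actually rendered when a demo plays back locked to its
    original wall-clock duration (illustrative model, not engine code)."""
    renderable = int(demo_length_s * playback_fps)  # what the machine can draw
    return min(recorded_frames, renderable)         # capped at what was recorded
```

For the 10-second, 240-frame demo above, a machine capable of 60 fps still renders only 240 frames in 10 seconds, while a 15 fps machine renders about 150 -- so the timing tells you nothing comparable between machines.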

Finally, some folks are wondering how the HL frontend determines network speed ( the number of green or red dots ). In particular, there is a misconception that those numbers somehow reflect the round-trip message time between the master server and the particular server listed. This is simply not the case. The protocol works like this: your machine makes a quick connection to the master server to request a raw list of IP addresses for all currently running servers. Once you have this list, you don't talk to the master server anymore. Instead, your machine contacts each server it has an IP address for. The time you send out a message to a server is marked, and so is the time you get a response; the difference is used to determine the network speed. It's really that simple. We do, however, try to get a more accurate number by sending ten simple "ping" requests to each server and waiting for responses to each one. We count the number of responses received, along with the round-trip time of each response, and average these to arrive at a more accurate round-trip time. But these numbers do not reflect the framerate dependencies described above for the in-game experience. If you run hl.exe with -numericping, the green dots are replaced with two numbers: the round-trip time in milliseconds and the percentage of packets that did not generate a response to the "ping" request.
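The ten-probe averaging can be sketched as follows. Here `send_ping` is a placeholder for the real query exchange: assume it returns the round-trip time in milliseconds, or None when no response came back. The two values this returns correspond to the two numbers -numericping displays:

```python
def frontend_ping(send_ping, tries=10):
    """Average round-trip time and loss percentage over `tries` probes.

    `send_ping` is an illustrative stand-in: it returns an RTT in
    milliseconds, or None if the probe got no response.
    """
    rtts = [rtt for rtt in (send_ping() for _ in range(tries)) if rtt is not None]
    lost_pct = 100.0 * (tries - len(rtts)) / tries
    avg_rtt = sum(rtts) / len(rtts) if rtts else None
    return avg_rtt, lost_pct
```

Note that lost probes are excluded from the average rather than counted as huge round trips, which is why the loss percentage is reported separately.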

In 1008, we added a way for server operators to broadcast certain information about the quality of their servers. We don't do this by default, since some server operators might not want such information exposed to users. But if the server operator sets the cvar "sv_type" to 1, then when the server is queried by the HL frontend, GameSpy, PingTool, or some other server-querying program, the returned value will include the type of operating system being used by the server ( e.g., Win32 or Linux -- note, the Linux server has not been released yet ), the CPU MHz of the server ( e.g., 450 ), and whether the server is a listen server ( "hl.exe" ) or a dedicated server ( "hlds.exe" ). Of course, the best experiences occur on a dedicated server running at high speed on Linux. The only information we don't have at our disposal is the server's bandwidth ( e.g., T1, etc. ).
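The opt-in behavior can be illustrated with a small sketch. The field names and the dictionary shape here are hypothetical, not the real query protocol; the point is simply that the extra details are appended to the reply only when sv_type is set to 1:

```python
def query_reply(sv_type, os_name, cpu_mhz, is_dedicated):
    """Illustrative server query reply: extra fields only when sv_type is 1.

    Field names are hypothetical, not the actual wire format.
    """
    reply = {"hostname": "my server"}  # always-present info (illustrative)
    if sv_type == 1:
        reply.update({
            "os": os_name,        # e.g. "Win32" or "Linux"
            "cpu_mhz": cpu_mhz,   # e.g. 450
            "exe": "hlds.exe" if is_dedicated else "hl.exe",
        })
    return reply
```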