
CS:GO – Netsettings for competitive play

Netsettings have always been a complex and hotly discussed topic in Counter-Strike, and there is still a lot of misunderstanding about the netcode in CS:GO. We will try to shed some light on the whole netsettings debate and explain which netsettings you should use for competitive play.


Recommended Netsettings

Straight to the point: these are our recommended netsettings for a typical high-speed connection (DSL 6000 or faster). They are optimized for competitive play on 128 tick servers. On Valve's official Matchmaking servers, which only run at 64 tick, your netsettings are adapted automatically.
rate "128000"
cl_cmdrate "128"
cl_updaterate "128"
cl_interp "0"
cl_interp_ratio "1"


Explanation of the config variables

  • rate "128000" (default "80000")
    Max bytes/sec the host can receive data.

  • cl_cmdrate "128" (default "64", min. 10, max. 128)
    Max number of command packets sent to the server per second.

  • cl_updaterate "128" (default "64")
    Number of update packets per second you are requesting from the server.

  • cl_interp "0" (default "0.03125", min. 0, max. 0.5)
    Sets the interpolation amount (bounded on the low side by the server interp ratio settings).

  • cl_interp_ratio "1" (default "2.0")
    Sets the interpolation amount (the final amount is cl_interp_ratio / cl_updaterate); see the worked example below this list.
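
To make the interpolation math concrete, here is a quick sanity check with the values from above, assuming the final interpolation amount is the larger of cl_interp and cl_interp_ratio / cl_updaterate (the "lerp" value shown in net_graph):

cl_interp "0", cl_interp_ratio "1":
  128 tick server:  1 / 128 = 0.0078125 s ≈ 7.8 ms lerp
  64 tick Matchmaking (cl_updaterate capped at 64):  1 / 64 = 0.015625 s ≈ 15.6 ms lerp

With the default cl_interp_ratio "2" on 64 tick:  2 / 64 = 0.03125 s ≈ 31.3 ms lerp (which is exactly the cl_interp default).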


Official explanation of the new net_graph

The following questions were answered by Vitaliy Genkin (Valve employee) via the CS:GO mailing list after Valve decided to restrict all net_graph values above 1 in April 2014.

[Screenshot: the new net_graph display]
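
For reference, this is how you bring up the display the questions below refer to. net_graph "1" is the value that remains available after the restriction; the two positioning variables are optional cosmetic tweaks and their defaults may vary:

net_graph "1"         // enable the net_graph display
net_graphpos "2"      // optional: horizontal position of the display
net_graphheight "64"  // optional: offset from the bottom of the screen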

What does the value represented by “sv” mean now?
The "sv" value shows how many milliseconds the server simulation step took on the last networked frame.

What does the +- next to the sv represent?
The value following "sv +-" shows the standard deviation of the server simulation step duration, measured in milliseconds over the last 50 server frames.

What does the current value for var represent?
When server performance is meeting tickrate requirements, the "sv var" value represents the standard deviation of the accuracy of the server OS sleep function (nanosleep), measured in microseconds over the last 50 server frames. The latest update relies on it for efficiently sleeping and waking up to start the next frame simulation; it should usually be fractions of a millisecond.

The client "var" value near the fps display in net_graph shows the standard deviation of the client framerate, measured in milliseconds over the last 1000 client frames. By using fps_max to restrict client rendering to a consistent fps, the client can keep framerate variability very low, but keep in mind that system processes and third-party software can influence framerate variability as well.
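
A simple way to follow the fps_max advice is to cap your framerate slightly below what your system can hold at all times. The concrete value here is only an illustration and depends on your hardware:

fps_max "240"  // cap the framerate a bit below your stable minimum to keep the client var low
fps_max "0"    // removes the cap again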

Originally, it was considered respectable to have a var of less than 1, reasonable to have it spike as high as 2, but pretty much horrible to have a variance remain above 2 for any length of time. What would be the equivalent values for the three new measurements (sv, +-, and var)?
For a 64-tick server, as long as the sv value stays mostly below 15.625 ms, the server is meeting the 64-tick rate requirements correctly. For a 128-tick server, as long as the sv value stays mostly below 7.8 ms, the server is meeting the 128-tick rate requirements correctly. If the standard deviation of frame start accuracy exceeds fractions of a millisecond, the server OS has lower sleep accuracy and you might want to keep the sv simulation duration within the maximum duration minus the OS sleep precision (e.g. for a 64-tick Windows server with a sleep accuracy variation of 1.5 ms, you would want to make sure that the server simulation doesn't take longer than 15.625 - 1.5 ≈ 14 ms to ensure the best experience).
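
The thresholds in that answer come directly from the tick interval, so you can recompute them yourself:

tick interval = 1000 ms / tickrate
  64 tick:  1000 / 64  = 15.625 ms per simulation step
  128 tick: 1000 / 128 = 7.8125 ms per simulation step

Example safety margin (64-tick Windows server with 1.5 ms sleep jitter, as in the answer above):
  15.625 ms - 1.5 ms ≈ 14 ms of usable simulation time per frame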

Simplified & Summarized

Client-side:
fps var: low value = good
fps var: high value = bad

Server-side:
64 tickrate: sv < 15.625 ms = good
64 tickrate: sv > 15.625 ms = bad
128 tickrate: sv < 7.8 ms = good
128 tickrate: sv > 7.8 ms = bad


FAQ

This guide is still a work in progress and will be updated soon.