Exploring CS:GO Player Consistency During 2017


One important and often unmeasured aspect of player performance in Counter-Strike: Global Offensive is consistency. The ability to perform well in individual matches at critical moments matters, but consistently performing at a high level is what statistically separates lucky players from highly skilled ones. As a result, measuring empirical variance is an important part of determining a player's skill level.

Data Used

Data comes from my stats database and includes all matches played during 2017. To narrow the set of teams in focus, only the top 25 teams ranked by our Glicko-2 skill model are included. Finally, players must have more than 50 maps played to be plotted.

Since teams change their lineups over the course of the year, players are placed on the chart based on their team. This means that players like GuardiaN show up twice: once under FaZe and once under NaVi. It also means that teams like NRG have more than five players on the graph, since their lineups were altered during the year.
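For illustration, here is a minimal pandas sketch of these filters. The file name, column names, and team list are hypothetical stand-ins, not the schema of the actual stats database:

```python
import pandas as pd

# Hypothetical schema: one row per player-map, tagged with the team
# the player was on when that map was played.
maps = pd.read_csv("player_maps.csv", parse_dates=["date"])

# Placeholder for the top 25 teams from the Glicko-2 skill model.
top_teams = {"FaZe", "NaVi", "NRG"}  # ...plus 22 more

# Keep only 2017 maps played under a top-25 team.
maps = maps[(maps["date"].dt.year == 2017) & maps["team"].isin(top_teams)]

# Players are keyed by (player, team), so a mid-year transfer like
# GuardiaN's produces two entries; each entry needs more than 50 maps.
map_counts = maps.groupby(["player", "team"])["player"].transform("size")
maps = maps[map_counts > 50]
```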

Measurement

To measure player consistency, we borrow a concept from probability theory called the coefficient of variation (CV), which measures a player's standardized performance dispersion. Since the sample is large and approximately normal, this model uses the unbiased form of the coefficient of variation:

$$\widehat{c_v} = \left(1 + \frac{1}{4n}\right)\frac{s}{\bar{x}}$$

where $s$ is the sample standard deviation, $\bar{x}$ is the sample mean, and $n$ is the number of maps played.

Lower CV values indicate less variation in performance and therefore a higher degree of consistency; as the CV increases, players become less and less consistent. This gives us an empirical range with which we can determine a player's ability to perform consistently at a given level of performance.
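As a concrete sketch of the estimator above (assuming per-map values in a NumPy array; the example numbers are made up):

```python
import numpy as np

def unbiased_cv(values: np.ndarray) -> float:
    """Unbiased coefficient of variation for a roughly normal sample:
    the small-sample correction (1 + 1/(4n)) applied to s / x̄."""
    n = len(values)
    s = values.std(ddof=1)  # sample standard deviation
    return (1 + 1 / (4 * n)) * s / values.mean()

# Hypothetical per-map ratings for one player.
ratings = np.array([1.12, 0.94, 1.30, 1.05, 0.88, 1.21])
print(unbiased_cv(ratings))  # lower means more consistent
```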

Categories

Since the CV is an abstract statistical concept, we can apply it to any player performance measure. This analysis focuses on those popularized by HLTV: K/D¹, ADR, KAST%, and Rating. Each chart uses the CV formula described above, where each player is plotted based on his performance under a given team during the year.
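Computing each chart then reduces to one grouped aggregation per metric. A sketch, again with hypothetical column names and made-up rows:

```python
import pandas as pd

def unbiased_cv(x: pd.Series) -> float:
    n = len(x)
    return (1 + 1 / (4 * n)) * x.std(ddof=1) / x.mean()

# Hypothetical per-map rows for two player-team entries.
maps = pd.DataFrame({
    "player": ["GuardiaN"] * 3 + ["Zeus"] * 3,
    "team":   ["FaZe"] * 3 + ["NaVi"] * 3,
    "kd":     [1.40, 1.10, 1.30, 0.80, 0.90, 0.70],
    "adr":    [88.0, 75.5, 82.3, 61.2, 70.4, 58.9],
    "kast":   [74.0, 70.5, 72.8, 65.1, 68.3, 63.0],
    "rating": [1.25, 1.10, 1.21, 0.85, 0.95, 0.82],
})

# The mean drives the x-axis and the CV the y-axis of each chart.
summary = maps.groupby(["player", "team"])[["kd", "adr", "kast", "rating"]] \
              .agg(["mean", unbiased_cv])
print(summary)
```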

Analysis

The data breaks down into four quadrants. In the top left are players with a low average value and high consistency; players in the top right have higher average values with similarly high consistency. Players in the lower half follow the same pattern of average values, only with lower consistency. Put simply: the further right a player is, the higher their average value for that metric; the further up, the more consistently they perform on that metric.
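In code, a player's quadrant is just two comparisons against the population averages. Here is a sketch using the rating figures quoted below; the three-player "population" and the quadrant labels are illustrative only:

```python
import pandas as pd

# Mean rating and CV for a few player-team entries discussed below.
summary = pd.DataFrame(
    {"mean": [1.19, 1.09, 0.88], "cv": [0.31, 0.43, 0.31]},
    index=["GuardiaN (FaZe)", "GuardiaN (NaVi)", "Zeus (NaVi)"],
)

right = summary["mean"] > summary["mean"].mean()  # above-average rating
up = summary["cv"] < summary["cv"].mean()         # lower CV plots higher

summary["quadrant"] = [
    ("top-" if u else "bottom-") + ("right" if r else "left")
    for u, r in zip(up, right)
]
print(summary)
```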

To view a fullscreen interactive version of this visualization, click here. On the webpage you can highlight individual players or teams as well as adjust the matches played filter to expand or contract the number of players featured in the chart.

Rating

Xantares, the player with the highest average rating in the data, is also extremely close to the average when it comes to consistency. This is not because he suffers from poor performances: only 20% of his maps fell below the average rating of 1.06². Rather, Xantares fell victim to his own success: a plethora of his ratings (almost 10% of his maps) are above 2 on the Rating 2.0 scale. These high ratings inflate his variance and therefore punish his consistency score. Most players do not have as many ratings above 2.0, making this a unique case of a player being too good for the rating system to account for.

On the other side of things, Zeus's stint on NaVi was troubling for him. While his performance on Mousesports was relatively lackluster, his average rating of 0.88 on NaVi was the worst in the dataset. Compounding this, he was also the second most consistent player overall, with a CV of 0.31. In other words, one of the most consistent players in the top 25 teams was also the worst player as far as ratings are concerned. Causal variables aside, it is interesting to see how much he underperformed on NaVi.

Another player who underperformed on NaVi was GuardiaN. In 2017, he posted an average rating of 1.09 with a coefficient of variation of 0.43 on NaVi. Under FaZe, his average rating increased to 1.19 with a coefficient of variation of 0.31, making him the most consistent player in the dataset. Not only did he perform better under the FaZe roster, he did so with more consistency.

Virtus.pro, the lowest-rated team in the dataset, entirely populates the bottom-left quadrant. The veteran lineup was unable to perform successfully through the year, and the numbers bear that out: the team's average rating falls 0.08 below the population average, and the team's CV also indicates below-average consistency.


It is important to consider the coefficient of variation alongside the metric used to calculate it. Consistency alone does not paint the entire picture, but combined with other metrics it can tell a story about which players are empirically world-class.

Discussion: /r/GlobalOffensive


  1. Note that 7 players with a K/D above 20 on a map had those maps excluded, since they skew the data too much. The CV is sensitive to outliers, and maps where a player has one or zero deaths are too infrequent to include. ↩︎

  2. Note that the average Rating 2.0 is not 1.0 as claimed by HLTV but rather 1.06. ↩︎