With Call of Duty returning to league play, the commitment to statistically measuring player performance returns with it. Building on a very successful year, the General Player Evaluation is back for the Call of Duty World League. GPEs use a set of normal distribution models to rank players on several individual statistics; these scores are then averaged into a single figure that reflects a player's ability against his peers across Call of Duty's many facets. As always, a full methodology is included below.
OpTic Scump reigns supreme in the first iteration of General Player Evaluations for the Call of Duty World League, with dominant performances in every included statistical category. His skill shows through in objective, slaying, and performance statistics alike. All four OpTic players sit in the top twenty, with Karma at #6, Crimsix at #14, and Formal at #20.
FaZe's roster also landed entirely in the top twenty, with Enable and Clayster coming in at #4 and #5 respectively. Lacefield, a young player who gained exposure on the XGN rosters of late Advanced Warfare, rockets to #2 in the opening evaluations; a closer look shows a statistically solid approach that rivals Scump's in its consistency. SlasheR is EnVy's most impressive player, demonstrating a flawless transition into the new game.
Despite two wins this season, three members of team Question Mark round out the bottom of the evaluations. The team's intangibles and coordinated teamwork, which individual statistics do not capture, may explain the gap. On the other hand, ColeChan has asserted himself as a solidly above-replacement-level player. My theory for why Question Mark is winning despite poor individual performances is that their record is an artifact of small sample size (they did receive a free win when playing eLevate), and one that Pythagorean Win Analysis can resolve with ease. I will release win predictions shortly, based on a model similar to last year's, once the sample size permits.
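For readers unfamiliar with the method, Pythagorean Win Analysis estimates a team's "true" win rate from how much it scores versus how much it allows, which smooths out lucky or forfeited wins. The sketch below is illustrative only: the exponent and the sample numbers are assumptions, not the parameters actually used in these evaluations.

```python
def pythagorean_win_pct(points_for, points_against, exponent=2.0):
    """Expected win rate from points scored and points allowed.

    The classic Pythagorean expectation: a team's scoring ratio,
    raised to an exponent, predicts its long-run win percentage.
    The exponent here (2.0) is a placeholder assumption; in practice
    it would be fit to the league's historical data.
    """
    pf = points_for ** exponent
    pa = points_against ** exponent
    return pf / (pf + pa)

# Hypothetical example: a team that has outscored opponents 250-200
# projects to a win rate above 50%, even if its actual record differs.
expected = pythagorean_win_pct(250, 200)
```

A team whose actual record sits well above its Pythagorean expectation (as a free win would cause) can reasonably be expected to regress toward that expectation as more maps are played.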
My rankings are always a work in progress; they have adapted, and will continue to adapt, as the data presents itself over the course of the game. The statistics selected for this evaluation model are overall kills per respawn, outslay percentage, Uplink captures, time in hill, CTF captures, defends per game, SnD KDR, and, notably, team map win percentage.
I run a normal distribution model on each statistic, producing a score relative to the general population, then average these scores to produce the final, well-rounded evaluation. Most of these choices are easy to justify. Kills per respawn may reward high-engagement players, but outslay percentage and defends per game compensate for this. Uplink captures, time in hill, and CTF captures account for the objective players in a fair manner. SnD KDR is a statistic I have used throughout my year of running these evaluations, as it is consistently predictive of map win rate. The final statistic, team map win percentage, solves an issue my rankings had in their early stages: with its inclusion, players are no longer punished for having above-average teammates.
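The pipeline described above can be sketched in a few lines: score each player on each statistic relative to the population (a standard z-score under a normal model), then average a player's scores across all statistics. The player names, statistic names, and values below are hypothetical placeholders, not data from the actual evaluations.

```python
from statistics import mean, pstdev

# Hypothetical raw statistics for a small player pool (illustrative only).
players = {
    "PlayerA": {"kills_per_respawn": 1.10, "snd_kdr": 1.30, "map_win_pct": 0.65},
    "PlayerB": {"kills_per_respawn": 0.95, "snd_kdr": 1.05, "map_win_pct": 0.55},
    "PlayerC": {"kills_per_respawn": 0.85, "snd_kdr": 0.90, "map_win_pct": 0.40},
}

def z_scores(stat):
    """Score every player on one statistic relative to the population.

    A z-score of 0 is exactly average; +1 is one standard deviation
    above the population mean, and so on.
    """
    values = [p[stat] for p in players.values()]
    mu, sigma = mean(values), pstdev(values)
    return {name: (p[stat] - mu) / sigma for name, p in players.items()}

stats = ["kills_per_respawn", "snd_kdr", "map_win_pct"]
per_stat = {stat: z_scores(stat) for stat in stats}

# Final evaluation: the average of a player's z-scores across all statistics.
gpe = {name: mean(per_stat[s][name] for s in stats) for name in players}
```

Because each statistic is standardized before averaging, no single category can dominate the final score simply by being measured on a larger scale (e.g., time in hill in seconds versus a KDR near 1.0).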