Does Overwatch’s new Skill Rating undervalue supports and tanks?

The new competitive mode in Overwatch went live just a few days ago, and everyone’s grinding for their golden guns

The mode uses a new Skill Rating metric to rank players, with higher ratings earning more points towards buying special cosmetic rewards at the end of the season. The Rating, a number on a 1 to 100 scale, is designed to measure just how good a player is at Overwatch. A Skill Rating of 50 is about average, meaning you’re better than about half the playerbase. Bump that up to a 60 rating, and you’re in the top 10 percent of players. The highest rating at the moment belongs to Swedish pro André “iddqd” Dahlström at 83. Only 14 players rank 80 and above, most of them superstars on pro teams. Skill Rating is clearly doing its job well—for the most part.

There seem to be some quirks in how Skill Ratings are calculated that affect the best way to earn them: it appears tougher to rank up with certain heroes. Support players, for example, may not climb as fast as the players they’re healing.

Of the top 100 players in the world, only four spent most of their competitive playtime on support heroes, according to data from MasterOverwatch, which scrapes Battle.net profiles to fill a rankings database. Considering that fielding a support is a necessity at high levels of play, you’d expect at least one for every six players, or about 17 in that top 100—and that’s a low estimate. In esports games, teams usually field two supports, and groups of four to six players queueing together are common in Competitive Play. It’s hard to imagine that at least some competitive games don’t feature Lúcio and Mercy together, or a Zenyatta or Symmetra alongside them.

That Skill Rating difference has played out in controlled sets of games, too.

Cloud9 carry Lane “SureFour” Roberts was the first player to hit 80 Skill Rating when Competitive Play launched, queueing almost exclusively with his professional player teammates. When Roberts finished his 10 placement matches, he received a 77 Skill Rating, commensurate with his talent as one of the best players in Overwatch. Cloud9’s support Adam Eckel, who played the exact same 10 placement matches as part of a Cloud9-stacked queue, only received a 67 Skill Rating. His tank teammates hit 71. Derrick “reaver” Nowicki, the team’s other carry player, hit 74. All of those numbers rank in the top one percent of Overwatch players, according to MasterOverwatch, but that’s a pretty big discrepancy for players who contributed to winning the exact same games against the exact same opponents.

“If I’m playing with people that are rated 10 levels above me because they are DPS why wouldn’t I be a support at that high level as well?” Eckel said in an interview with the Daily Dot. “Why does the system put me 10 ranks below them to start?”

The biggest reason? Likely the performance component of Skill Rating, since other variables, like opponents’ skill, were the same for the two players. In his Competitive Play developer update, game director Jeff Kaplan said that Skill Rating directly correlates to an internal matchmaking rating (MMR). He also gave some insight into just how that MMR, which is used to match players with similarly skilled teammates and opponents, is calculated in a forum post on June 21. MMR goes up or down when you win or lose, but the magnitude of the change depends on other factors, like your individual performance on the heroes you played compared to a baseline performance average for those heroes.

“Everyone has better and worse heroes and we have tons of data showing us what performance levels should be like on those heroes,” Kaplan wrote.
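Blizzard has never published the actual formula, but Kaplan’s description suggests something like an Elo-style update scaled by how you performed relative to the average for your hero. The sketch below is purely illustrative: the K-factor, baseline, and multiplier bounds are invented assumptions, not Blizzard’s numbers.

```python
# Illustrative Elo-style update with a per-hero performance multiplier.
# The K-factor, baseline, and clamp values here are invented for
# illustration; Blizzard's real formula is not public.

def expected_win(mmr: float, opponent_mmr: float) -> float:
    """Standard Elo expectation of beating an opponent."""
    return 1.0 / (1.0 + 10 ** ((opponent_mmr - mmr) / 400.0))

def mmr_change(mmr: float, opponent_mmr: float, won: bool,
               performance: float, hero_baseline: float,
               k: float = 24.0) -> float:
    """Scale the usual win/loss delta by how the player performed
    relative to the average for the hero they played."""
    base = k * ((1.0 if won else 0.0) - expected_win(mmr, opponent_mmr))
    # Clamp the multiplier so one stat-heavy game can't swing too far.
    multiplier = max(0.5, min(1.5, performance / hero_baseline))
    return base * multiplier

# A player who wins while performing 20 percent above the hero baseline
# gains more than one who performs exactly at it.
print(mmr_change(2500, 2500, True, performance=1.2, hero_baseline=1.0))
print(mmr_change(2500, 2500, True, performance=1.0, hero_baseline=1.0))
```

Under a scheme like this, a support whose stats sit at or below the hero baseline would consistently gain a little less per win than a carry who outperforms his, which matches the gap Cloud9 saw.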

So in Cloud9’s case, SureFour ranked higher than Reaver thanks to outperforming him a little bit on similar heroes. Perhaps in Eckel’s case, he simply didn’t stand out on Mercy as much as Roberts did on Pharah, Genji, and Tracer. While that could be possible, even for a top-tier support player (and it’s dubious whether that’s the case, considering Eckel ranks No. 2 in healing and No. 1 in deaths per game among all players, not just supports), the overall player rankings seem to indicate it’s a systemic problem, at least at the very top level of performance.

The top of the rankings is heavily skewed towards damage-dealing beasts. Like supports, most tanks are underrepresented in the top 100. There are currently 19 players with a tank as their most-played character, but the vast majority of them mained Zarya and Roadhog, two tanks who pump out lots of statistics—both damage and healing or shielding. Many of those Roadhog players featured DPS heroes in their other slots. Only one Reinhardt main is in the top hundred, and only two players featured Winston or D.Va, two characters who deal low damage but specialize in wreaking havoc and picking off vulnerable targets with high kill participation, among their top three heroes, despite Winston’s omnipresence in the esports metagame.

How performance is measured remains a black box, but it looks like there are some trends related to hero selection.

Based on anecdotal evidence from top-level players climbing the ladder, it seems like the now-hidden “score” metric is a large part of the performance adjustment in Skill Rating loss and gain. “Score” was a catch-all that translated statistics like damage dealt, damage blocked, healing done, deaths, and objective time into a single number. Some heroes, though, are better at compiling those numbers than others: the aforementioned Zarya or Roadhog, say, compared to someone like Winston.
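To see why that matters, consider a toy version of such a catch-all. Every stat name and weight below is invented for illustration; the real score formula was never published.

```python
# Hypothetical "score" roll-up. All weights are made up for this example,
# chosen only to show how a stat-heavy hero compiles score faster than
# a disruption-focused one of equal real impact.

def match_score(stats: dict) -> float:
    weights = {
        "damage_dealt":   1.0,
        "damage_blocked": 0.6,
        "healing_done":   0.8,
        "objective_time": 2.0,   # per second on the objective
        "deaths":        -50.0,  # dying is penalized
    }
    return sum(weights[k] * stats.get(k, 0) for k in weights)

# A Zarya-style game pads both the damage and blocked-damage columns...
zarya = {"damage_dealt": 9000, "damage_blocked": 12000,
         "objective_time": 60, "deaths": 4}
# ...while a Winston-style game of similar impact shows smaller numbers.
winston = {"damage_dealt": 5000, "damage_blocked": 3000,
           "objective_time": 90, "deaths": 5}

print(match_score(zarya) > match_score(winston))  # stat-padding wins on paper
```

Any linear roll-up like this rewards heroes whose contribution shows up in the counted columns, and undercounts disruption, zoning, and enabling teammates.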

In Cloud9’s case, Eckel likely spent many of his games boosting SureFour’s damage as he flew around the map as Pharah, his most-played hero during the climb. Does Eckel receive credit for all the extra damage he made Pharah deal? Other support measurables, like healing, depend heavily on your teammates’ performance. If they play in a way that minimizes the damage they take, you can’t put up big numbers. And if your teammates never die, well, Mercy will never need to say those words during a game.

Eckel admits that it’s difficult to really gauge the impact Mercy has on a game, but he feels the current MMR system needs to do a better job of it.

As Kaplan said, Blizzard adjusts for the hero you play. If you’re on Mercy, they’re comparing your low damage numbers to other Mercys, not to heroes who are actually supposed to shoot stuff regularly. The problem is, even if you’re the best Mercy player in the universe, it’s hard to put up statistics that prove it. You’re more reliant on variable factors, like your teammates’ ability to deal damage while you boost them or escape fights alive to be healed, than if you were on McCree and hitting a headshot with every click.

The statistics show that the best support and tank players are struggling to stand out from their peers at the highest ranks of the competitive play queue, excluding semi-carry heroes Roadhog and Zarya.

Perhaps that means Blizzard needs to do more calibration on its performance metrics, so those strong Mercy and Reinhardt games get a little more recognition. Perhaps it’s designed to be that way. You could easily argue that the more mechanically intensive carry heroes, which add a heavy aim component to positioning, teamwork, and timing, really do take more skill to play than most supports and tanks, though that may be a tough argument to win.

However Skill Rating is calculated, it ultimately boils down to one thing: winning. Your rating will only rise when you win a match, so picking the hero that maximizes your chances of doing that, whether it’s Mercy or Soldier 76, is always going to produce the best results. So will playing to win the game over trying to “farm” statistics, like by spamming enemy shields to boost your damage and accuracy when there are real targets to shoot.

Measuring support performance is difficult in other games, too. In League of Legends, supports are underrepresented in the Challenger Series, despite their high impact on pro level matches. It may just be a fact of Overwatch that climbing the rankings is easier while picking certain heroes.

Eckel currently sits at a 77 Skill Rating, better than all but 65 players in the world, so no matter what you play, anything is possible. He’s noticed some other quirks in the queueing system, though, like how difficult it is to gain rating when queueing as a group of six. Many players feel rating gains favor queueing as a duo or trio, which allows you to avoid stacks of five or six players but gives you a strong enough base to win consistently.

Still, if you really are the best Mercy player in the world, your Skill Rating will eventually reflect it. It may just take longer than if you’re on a hero that makes it easier to stand out. Keep that in mind during the grind for your golden Caduceus Staff. Or maybe the Caduceus Blaster: whipping out that pistol and racking up some damage numbers certainly wouldn’t hurt, considering how they seem to impact rating. Right?

