League of Legends players get a bad rap in video game culture as chronic misbehavers. People believe the stories because League is so hyper-competitive. It turns out the stories aren’t true.
Jeffrey Lyn holds a PhD in Cognitive Neuroscience from the University of Washington. He is also the Lead Designer of Social Systems at Riot Games. In a new post on the League of Legends forums on Thursday, Lyn announced that in 2014, 95 percent of all League players received no punishment of any kind. That means these players received no “chat restrictions, ranked restrictions, game bans or permanent bans,” according to Lyn.
Riot Games takes bad behavior, or “toxic behavior” as Lyn calls it, very seriously. This makes perfect sense: League is an online-only game, so Riot Games’s business model ultimately depends on whether anyone wants to be in the League environment. Bad behavior, including the sort of racism, sexism, and homophobia for which the hardcore gaming community at large is often known, could sink League if it went unchecked.
That’s why the company doesn’t simply moderate forums or take other, reactive measures to curb toxic behavior. Riot has put numerous systems in place, with the efforts led largely by Lyn, to proactively influence the behavior of League players in a positive fashion.
The Tribunal is a player-run justice system through which LoL players review complaints about player behavior, with evidence such as chat logs, and hand out punishments as they see fit. In March 2013, in a panel at the Game Developers Conference, Lyn revealed that punishments handed down by the Tribunal agreed 80 percent of the time with what Riot moderators themselves would have handed down.
The success of the Tribunal led to the formation of a team, led by Lyn, to take further steps to improve player behavior. The team decided to turn off cross-team chat by default, so that players had to take an extra step to turn the feature on. The result: positive chat increased by more than 30 percent, and negative chat decreased by more than 30 percent.
Reform Cards were added to the Tribunal system so that players punished through the Tribunal could see precisely why, by being shown the chat logs that led to their “convictions.” This resulted in decreases of up to 10 percent in negative chat following a temporary ban from playing League delivered via the Tribunal. League players who served as arbitrators/judges through the Tribunal also received public recognition for their efforts on their profile pages, which led to a 100 percent increase in Tribunal participation.
Lyn and his team are currently running more radical experiments in taking proactive steps against toxic behavior. One experiment tested priming, the psychological principle that exposure to a stimulus shortly before a task can influence how a person performs it, by showing players messages on the loading screen before the beginning of a match. The color of the messages, and the sort of message they delivered, were the variables.
In July 2014, Lyn also announced that Riot would be experimenting with a machine-learning-based system to hand out instant bans to badly behaving players. Machine learning uses algorithms to make predictions based on observed patterns. Also at the Game Developers Conference in 2013, Lyn revealed that Riot’s player behavior team had carefully hand-coded thousands of chat logs to develop dictionaries of key words that were positive, neutral, or negative. Those dictionaries, when applied to a single chat log, could predict bad behavior with 80 percent accuracy.
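To get a feel for how a dictionary-based approach like this works, here is a minimal sketch in Python. Riot has not published its actual dictionaries, features, or model, so the word lists, function names, and scoring threshold below are invented purely for illustration; a real system would use far larger hand-coded dictionaries and a trained classifier rather than a fixed cutoff.

```python
# Hypothetical illustration of dictionary-based chat classification.
# The word lists and threshold are invented; Riot's real dictionaries
# were hand-coded from thousands of chat logs and are not public.

POSITIVE_WORDS = {"gg", "wp", "nice", "thanks", "np"}
NEGATIVE_WORDS = {"noob", "trash", "uninstall", "feeder"}

def score_chat_log(messages):
    """Return net sentiment for a chat log: positive hits minus negative hits."""
    positive = negative = 0
    for message in messages:
        for word in message.lower().split():
            if word in POSITIVE_WORDS:
                positive += 1
            elif word in NEGATIVE_WORDS:
                negative += 1
    return positive - negative

def is_toxic(messages, threshold=-2):
    """Flag a log as toxic when negative words outweigh positive ones."""
    return score_chat_log(messages) <= threshold
```

In practice, a score like this would be one feature among many fed into a learned model, which is what lets such a system issue predictions (and, in Riot’s case, near-instant punishments) without a human reading every log.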