Twitch is taking steps to tackle harassment and toxicity on its platform, introducing an in-house chat filtering system for the first time.
The new ‘AutoMod’ feature will catch messages that could be offensive or contribute to a negative chat experience, and flag them for channel moderators.
The level of filtering can be set by individual channel owners, ranging from no moderation at all to strict moderation across four specific categories—identity language, sexually explicit language, aggressive language, and profanity.
Twitch already allows moderators to manually remove messages by banning or timing out the user, but these messages always appear before they are removed and can be missed, particularly in busy chats. With AutoMod, any message flagged will not appear until approved by a moderator.
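To make the hold-for-review workflow concrete, here is a minimal sketch of how such a filter might behave. Everything in it is illustrative: the keyword lists, the level-to-category mapping, and the class design are all assumptions, since Twitch hasn’t published AutoMod’s internals.

```python
# Illustrative sketch only: a hold-for-review chat filter in the spirit of
# AutoMod. Category word lists and the level mapping are hypothetical.
from dataclasses import dataclass, field

# Hypothetical keyword lists per category (real systems use ML, not lists)
CATEGORIES = {
    "aggressive": {"idiot", "loser"},
    "profanity": {"darn", "heck"},
}

@dataclass
class Channel:
    # 0 = no filtering; higher levels enable more categories
    level: int = 0
    pending: list = field(default_factory=list)   # held for mod review
    visible: list = field(default_factory=list)   # shown in chat

    def enabled_categories(self):
        # Each level switches on one more category, strictest last
        return list(CATEGORIES)[: self.level]

    def post(self, message: str):
        # Flagged messages are held; everything else appears immediately
        words = set(message.lower().split())
        flagged = any(words & CATEGORIES[c] for c in self.enabled_categories())
        (self.pending if flagged else self.visible).append(message)

    def approve(self, message: str):
        # A moderator releases a held message into chat
        self.pending.remove(message)
        self.visible.append(message)
```

The key design point the article describes is the ordering: a flagged message goes into `pending` and never touches `visible` until a moderator approves it, unlike the old ban/timeout flow where the message was briefly seen by everyone.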
However, it remains to be seen whether moderators in busy chats will simply be overwhelmed, left with huge queues of messages awaiting review. Tests of the strictest setting show the filter currently catches just about anything you can think of (the video is pretty obviously NSFW).
Twitch has repeatedly stated its commitment to working with partners to tackle harassment and abuse on its platform, but many channels either cultivate a toxic culture or simply don’t bother to moderate it. AutoMod is a welcome step forward, but it still demands plenty of manual moderator work, and whether channel owners will put in the time to clean up the service is an open question.