Twitch To Use Machine Learning To Combat Cyber Harassment

Twitch has created a new tool that uses machine learning to detect people who may be attempting to evade channel bans, part of its effort to combat hate raids, harassment, trolling, and the like. The tool was reportedly inspired in large part by community feedback calling for better ways to curb ban evaders.


The new tool, called Suspicious User Detection, flags users as “likely” or “possible” ban evaders on a streamer’s channel. The machine learning model powering the tool identifies potential evaders by evaluating signals such as a user’s behavior and account characteristics and comparing them against accounts that have already been banned from that streamer’s channel.
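
As a rough illustration of that comparison step, here is a minimal sketch in Python. The signals (account age, chat rate, shared IP with a banned account, username similarity), weights, and thresholds are all illustrative assumptions, not Twitch's actual model; the point is simply how a score might be bucketed into the two flag levels the article describes.

```python
# Hypothetical sketch: score a new account against signals associated with
# previously banned accounts and bucket the result as "likely" or "possible".
# Feature names, weights, and thresholds are made up for illustration.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AccountSignals:
    account_age_days: int
    chat_rate_per_min: float          # how quickly the account posts in chat
    shares_ip_with_banned: bool       # overlaps with a banned account's IP
    name_similarity_to_banned: float  # 0.0-1.0 username similarity


def evasion_score(signals: AccountSignals) -> float:
    """Combine a few signals into a rough 0-1 evasion score."""
    score = 0.0
    if signals.account_age_days < 7:
        score += 0.3                  # very new accounts are more suspicious
    if signals.shares_ip_with_banned:
        score += 0.4
    score += 0.2 * min(signals.chat_rate_per_min / 10.0, 1.0)
    score += 0.1 * signals.name_similarity_to_banned
    return min(score, 1.0)


def classify(signals: AccountSignals) -> Optional[str]:
    """Map the score onto the two flag levels described in the article."""
    score = evasion_score(signals)
    if score >= 0.7:
        return "likely"
    if score >= 0.4:
        return "possible"
    return None


if __name__ == "__main__":
    fresh_account = AccountSignals(2, 12.0, True, 0.8)
    print(classify(fresh_account))    # -> "likely"
```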


Messages from “likely” evaders won’t be sent to chat, but streamers and their mods can still see them. Streamers and mods can choose to monitor a likely ban evader, which adds the user to a monitoring list and places a note next to their name indicating they are being monitored (as shown in the GIF below), or they can ban the user outright. “Possible” evaders’ messages will appear in chat by default, but streamers and mods can opt to have those messages blocked as well.

Twitch tackles cyberbullying with a new machine learning tool called Suspicious User Detection.
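
The sketch below illustrates that routing behavior, assuming a hypothetical channel-settings object and the flag values from the earlier example: “likely” evaders’ messages are held so only the streamer and mods see them, while “possible” evaders’ messages reach chat unless the channel opts to restrict them too. It is an interpretation of the article, not Twitch's actual chat pipeline.

```python
# Illustrative sketch of the chat-routing behavior described above.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ChannelSettings:
    restrict_possible_evaders: bool = False  # off by default in this sketch


def route_message(flag: Optional[str], settings: ChannelSettings) -> str:
    """Decide where a message goes based on the sender's evasion flag."""
    if flag == "likely":
        return "held_for_mods"               # hidden from chat, visible to mods
    if flag == "possible" and settings.restrict_possible_evaders:
        return "held_for_mods"
    return "sent_to_chat"


print(route_message("likely", ChannelSettings()))        # held_for_mods
print(route_message("possible", ChannelSettings()))      # sent_to_chat
print(route_message("possible", ChannelSettings(True)))  # held_for_mods
```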


The company said it will turn Suspicious User Detection on by default, but streamers can tweak the tool’s settings or turn it off entirely. Streamers and mods can also manually flag users they find suspicious for monitoring. Twitch added that the tool was designed to give mods and creators more information about potential ban evaders so they can make faster, better-informed decisions within their channels.
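
Put together, the channel-level controls described here might look something like the following sketch. The class and field names are assumptions for illustration only; they are not part of any Twitch API.

```python
# Hypothetical channel settings: the feature defaults to on, can be tuned or
# disabled, and mods can manually add users to the monitoring list.
from dataclasses import dataclass, field


@dataclass
class SuspiciousUserDetectionSettings:
    enabled: bool = True                      # on by default, per the article
    restrict_possible_evaders: bool = False   # optional stricter handling
    monitored_users: set[str] = field(default_factory=set)

    def monitor(self, username: str) -> None:
        """Manually flag a user the streamer or a mod finds suspicious."""
        self.monitored_users.add(username)


settings = SuspiciousUserDetectionSettings()
settings.monitor("suspected_alt_account")     # manual monitoring
settings.enabled = False                      # streamers can turn the tool off
```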


The tool has some potential to curb cyber harassment, but it remains to be seen how effective it will be in practice, or whether ban evaders will find ways around it.
