
Twitch streamers can finally block banned users from watching their streams, thanks to a recent update to the platform’s anti-harassment features.

Twitch first announced the feature in August. Though tools to prevent banned users from participating in a stream’s chat have existed for years, streamers could do little to stop banned users from actually watching their streams.

Now, channel owners can turn on the restriction by toggling on “Stop banned users from viewing stream” in the moderation settings of their Creator Dashboard.

Twitch rolled out the feature in response to community feedback. During the August episode of Patch Notes, its monthly update series, Twitch Senior Product Manager Trevor Fisher said that banning unwanted viewers is the first step in dealing with the platform’s harassment issues.

“We’ve gotten a lot of feedback over the years, to be honest, that people want their channel bans to do more,” Fisher said.

The feature is also built into Twitch’s blocking tools: if a streamer blocks a user, that user will automatically be banned from watching their streams. However, the restriction only applies to users who are logged into their Twitch accounts, and for now, the site doesn’t offer IP blocking. That means someone blocked from viewing a stream could still watch it by logging out of their account. Twitch eventually plans to add features to stop unwanted viewers from watching VODs, highlights and clips.

“Everyone in the comments claiming this is silly or a negative thing has never had a stalker or feared for their safety,” Twitch streamer Divatron9000 said on X (formerly Twitter). “There’s a reason we’ve been asking for this feature for YEARS and I’m happy they are FINALLY listening.”

The feature isn’t available to everyone yet. In comments replying to streamers, Twitch said it may take time for all channel owners to have access. Twitch did not specify a timeline.

“These updates roll out over time, so some people get it a bit sooner than others,” the Twitch Support page posted.

 

Twitch has been rolling out stronger moderation tools in an effort to build a “layered” approach to safety. Last year, the platform added “Ban Evasion Detection” to catch users who try to get around channel bans. The tool uses machine learning to flag suspicious accounts and alert channel moderators. The platform also launched banned list swaps, so that channels can request and trade lists of banned users with each other. By accepting another channel’s request to swap lists, a channel will automatically restrict all users on the other channel’s list. Moderators can choose to manually approve or monitor the other channel’s banned users.

Marginalized communities, particularly Black and trans streamers, are especially vulnerable to targeted harassment on Twitch. In March, streamers used the #TwitchDoBetter campaign to pressure Twitch to crack down on hate raids, which flood a targeted streamer’s channel with vitriolic harassment.

In an interview with TechCrunch earlier this year, Twitch Product VP Alison Huffman said that the company has conducted “extensive” interviews with mods to better understand what safety tools they need.

“For a problem like targeted harassment, that is not solved anywhere on the internet,” Huffman said. “And, like it is in the non-internet world, it is a forever problem — and it’s not just one that has a singular solution.”

“What we’re trying to do here is just build out a really robust set of tools that are highly customizable, and then put them in the hands of the people who know their needs best, which are the creators and their moderators, and just allow them to tailor that suite of tools to meet their particular needs.”

 


