Twitter Now Prompts Users To Revise ‘Harmful Replies’
Trolls beware. Twitter releases a feature that will deliver a ‘reconsider prompt’ to users if they try to tweet something nasty or mean
Twitter continues its effort to clean up the toxic atmosphere on its platform, with the deployment of a feature it has been testing for the best part of a year.
This time last year, Twitter began giving users a chance to rethink an offensive or hurtful reply, by testing a prompt shown when they reply to a tweet using “harmful language.”
That experiment came after Twitter in January 2020 warned it would experiment with limiting replies to a user’s tweet, in an effort to combat online abuse.
Rethinking replies
The feature works like this: when a user hits “send” on a reply, they are told if the words in their tweet are similar to those in posts that have been reported, and asked whether they would like to revise it.
This, Twitter hopes, will give users a chance to reconsider a potentially mean or nasty reply when things get heated.
Now, a year later, Twitter has confirmed in a blog post that it is rolling out the feature, which automatically detects mean replies and prompts people to review them before sending, to Android and iOS users with English-language settings enabled.
Users will have three options when faced with this prompt – tweet as is, edit or delete.
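To make the flow concrete, here is a purely illustrative Python sketch of the decision logic described above. The similarity check, threshold, function names and prompt wording are all hypothetical assumptions for illustration; Twitter has not disclosed how its actual detection model works.

```python
# Illustrative sketch only: the similarity check, threshold and option
# handling are hypothetical, not Twitter's actual implementation.

def looks_harmful(reply_text: str, reported_phrases: list[str]) -> bool:
    """Crude stand-in for Twitter's model: flag a reply whose words
    overlap heavily with language from previously reported posts."""
    words = set(reply_text.lower().split())
    for phrase in reported_phrases:
        phrase_words = set(phrase.lower().split())
        # Flag if most of a reported phrase's words appear in the reply.
        if phrase_words and len(words & phrase_words) / len(phrase_words) > 0.6:
            return True
    return False

def on_send(reply_text: str, reported_phrases: list[str]) -> str | None:
    """Mimics the prompt flow: tweet as is, edit, or delete."""
    if not looks_harmful(reply_text, reported_phrases):
        return reply_text  # no prompt needed, send immediately

    choice = input("This reply looks potentially harmful. [s]end / [e]dit / [d]elete? ")
    if choice == "s":
        return reply_text  # tweet as is
    if choice == "e":
        # Re-check the revised reply, mirroring the "edit" option.
        return on_send(input("Revised reply: "), reported_phrases)
    return None  # delete: nothing is sent

if __name__ == "__main__":
    # Hypothetical reported-language corpus; Twitter's real signal is unknown.
    final = on_send("you are a total fool", ["total fool"])
    print("Sent:" if final else "Deleted.", final or "")
```

Note that in this sketch an edited reply is checked again before sending, which matches the idea that the prompt applies to whatever text the user ultimately submits.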
“People come to Twitter to talk about what’s happening, and sometimes conversations about things we care about can get intense and people say things in the moment they might regret later,” Twitter blogged. “That’s why in 2020, we tested prompts that encouraged people to pause and reconsider a potentially harmful or offensive reply before they hit send.”
“Based on feedback and learnings from those tests, we’ve made improvements to the systems that decide when and how these reminders are sent,” it wrote. “Starting today, we’re rolling these improved prompts out across iOS and Android, starting with accounts that have enabled English-language settings.”
Twitter said it is seeking to get people to reconsider insults, strong language, or hateful remarks.
The microblogging service said its testing of the feature resulted in people sending fewer potentially offensive replies across the service, and improved behaviour on Twitter.
Twitter found that if prompted, 34 percent of people revised their initial reply or decided to not send their reply at all.
It also found that, after being prompted once, people composed on average 11 percent fewer offensive replies in the future.
Additionally, people who were prompted were less likely to receive offensive and harmful replies back.
Going forward Twitter said it will continue to explore how prompts – such as reply prompts and article prompts – and other forms of intervention can encourage healthier conversations on Twitter.
Toxic atmosphere
The move follows previous attempts by Twitter to ease the toxic atmosphere and comments the microblogging website is unfortunately famous for.
In May 2020 Twitter tested ‘new conversation settings’ that would allow users to limit who can reply to their tweets. Twitter said it was looking into the feature because “unwanted replies make it hard to have meaningful conversations.”
Twitter co-founder and chief executive officer (CEO) Jack Dorsey said in April 2019 that he wanted to change the platform and move “away from outrage and mob behaviour and towards productive, healthy conversation.”
One such measure, intended to stop the platform being used to distort the political landscape, saw Twitter ban all political advertising worldwide in November 2019.