Twitter will now warn you about your mean tweets


Twitter is rolling out prompts that warn users when it detects that their replies to tweets contain potentially harmful or offensive language.

This follows tests of the feature earlier this year.

On 5 May, the company announced that it is rolling out improved prompts to the iOS and Android apps, starting with accounts that have English-language settings enabled.

The prompt aims to encourage more considered tweeting by getting people to pause and take a moment to reflect.

How does the Twitter mean tweet prompt work?

The prompt asks users whether they want to review their tweet if the platform detects potentially harmful or offensive language.

This includes insults, strong language, and hateful remarks.

In Twitter’s mockup of the feature, the prompt says: “This is a mean Tweet that features the word [example] and [example] and might need to be reviewed.”

However, the prompt doesn’t prevent you from tweeting out your reply. It gives you the option to tweet as-is, edit your reply, or delete it.

It also includes an option for users to report if Twitter has incorrectly detected harmful language.

Twitter notes that sometimes people receive the prompt unnecessarily since the algorithms are unable to detect sarcasm or friendly banter. However, the company says this has been improved since testing.

Ultimately, Twitter considers the feature one that can lead to improved behavior.

Tests showed that 34% of people who received the prompt revised or deleted their initial reply. Accounts that received a prompt once composed, on average, 11% fewer offensive replies in future.

Feature image: Twitter

Read more: Instagram adds tools to Direct Messages to prevent harassment

Megan Ellis

