Twitter is rolling out prompts to warn users when it detects that your replies to tweets contain potentially harmful or offensive language.
This follows tests of the feature earlier in the year.
On 5 May, the company announced that it is rolling out improved prompts to the iOS and Android apps, starting with accounts which have enabled English-language settings.
The prompt aims to encourage more considerate tweeting by getting people to pause and reflect before posting.
How does the Twitter mean tweet prompt work?
The prompt asks users whether they want to review their tweet if the platform detects potentially harmful or offensive language.
This includes insults, strong language, and hateful remarks.
In Twitter’s mockup of the feature, the prompt says: “This is a mean Tweet that features the word [example] and [example] and might need to be reviewed.”
After testing and improving prompts that ask you to review a potentially harmful or offensive reply, we learned that this feature can help encourage more meaningful convos.
— Twitter Support (@TwitterSupport) May 5, 2021
However, the prompt doesn’t prevent you from tweeting out your reply. It gives you the option to tweet as-is, edit your reply, or delete it.
It also includes an option for users to report if Twitter has incorrectly detected harmful language.
Twitter notes that people sometimes receive the prompt unnecessarily, since the algorithms cannot always distinguish sarcasm or friendly banter from genuinely harmful language. However, the company says detection has improved since testing.
Ultimately, Twitter considers the feature one that can lead to improved behavior.
Tests showed that 34% of people who received the prompt revised or deleted their initial reply. Accounts that received a prompt once composed, on average, 11% fewer offensive replies in the future.
Feature image: Twitter