Twitter never seems to get it right when it comes to online safety on its platform. The company has fought many battles with users who feel it isn’t doing enough to combat hate speech and harassment. After implementing a few changes in November, Twitter announced this week that it has rolled out even more in an attempt to create a safer platform for all.
The changes implemented include safer search results (which will no longer feature blocked or muted accounts), as well as keeping “low-quality replies” away from the top of threads.
The bulk of the criticism leveled against the site concerns the rise of the extreme right in the US, and the role the platform played in the election of President Donald Trump. It is unlikely that these new changes will appease those concerned.
‘Low-quality replies’ and safer search are the new features heading to Twitter in the coming weeks
The biggest problem with these changes is that they are cosmetic: they do not eliminate issues, they cover them up. It is also unclear what Twitter means by “low-quality” replies. The company asserts that only “relevant conversation” will be shown, but does that mean the site will fall into the trap of becoming an echo chamber like Facebook? If an original tweet uses hate speech, will “relevant conversation” mean “similar conversation”?
CEO Jack Dorsey asserts that the company is continuing to work on ways to improve the site’s safety features, but it is unlikely Twitter will ever win this arduous ethical battle.
Making progress every day. We’ll fix this! https://t.co/wRiFAsjmbN
— jack (@jack) February 7, 2017
While these features are not yet live, Twitter says the changes will roll out to the site over the coming weeks.