Twitter’s year-end calendar outlines steps to make it safer
The year I stopped getting advent calendars in December was a sad one. Not only did it mean I was getting old, but the joy of going to sleep every night knowing a chocolate was waiting for me the next day was a feeling I’d sorely miss.
But now there’s a new calendar in town as we creep towards the Christmas season: one not filled with chocolates, but rather Twitter’s updates on its safety work. And really, who can say which one is better?
In a blog post Thursday, the company wrote that it had “far too often in the past said [it’d] do better”, but had “fallen short” of its goals for transparency. So now we have dates on when the company will remind us that world leaders threatening war on its platform is in the public interest. Neat!
If you want to follow along at home, here’s when you’ll be getting some sweet updates on Twitter’s policies.
27 October
Non-consensual nudity: Twitter’s rules on non-consensual nudity will now cover content where the victims weren’t aware images were being taken (such as upskirt photos or hidden-webcam footage). Seems basic, but this is Twitter.
Suspension appeals: Twitter says that if an account is suspended for abuse and its owner appeals the suspension, the company will provide “detailed descriptions” of how the account violated its rules (provided the company didn’t make an error, of course).
1 November
Educating abusers: When accounts are locked for rule violations, Twitter says it will explain — both via email and inside the app — which policies were violated.
3 November
Violent groups (new): Twitter will be suspending accounts of organisations “that use violence to advance their cause”. (If you’re worried this may be dangerously vague — many protests turn violent for a variety of reasons, even when the cause itself is anti-violence — then you’re not alone.)
Hateful imagery and hate symbols (new): Twitter is cracking down on hate symbols in avatars and profile headers. Hateful imagery attached to tweets will come with a warning. The company probably means the likes of swastikas, but says that examples will only be announced once the policy is finalised.
Unwanted sexual advances: Apparently Twitter has already been “taking enforcement action on this content”, but it will now be explicitly stated in its rules.
Media policy updates: Again, Twitter will be “more explicit” about what it considers “sensitive media” (like porn or graphic violence).
Spam, and “related behaviours”: Ever mysterious, Twitter says it will be adding “additional definitions around spam”. Will this include Russian bots? Find out 3 November.
13 November
New tech for report monitoring: Twitter will launch improved technology to rank reports “most likely” to violate its rules.
14 November
Process for reviewing reports (new): Look forward to an article in which Twitter explains “factors [it] considers when enforcing [its] rules” — and probably more reasoning as to why provoking World War III is newsworthy.
22 November
Hateful display names: “Abusive” display names will be disallowed.
14 December
Condoning and glorifying violence: Twitter will now also remove content that glorifies or condones acts of violence that could “result in death or serious physical harm”. Will the company define serious? Is a broken toe serious? A concussion?
Witness reporting (new): Changes will begin rolling out for reports filed not by the target of abuse but by third-party “witnesses”. The company also says that third-party reporters will receive notifications — which is confusing, as I personally have reported multiple tweets on behalf of a “group of people” and received notifications for all of them.
20 December
Unwanted sexual advances, part II: Twitter says that when it reviews reports of unwanted sexual advances, it will look at previous interactions to better determine whether an exchange was consensual. The company’s wording is confusing, but I imagine it refers to an outsider reporting something they believe to be non-consensual that is actually friendly banter.
10 January
Witness reporting, part II: Vague changes to witness reporting that will be “rolled out to all”.
And there you have it. Merry Christmas, and a happy new year.
Featured image: Dan Taylor-Watt via Flickr (CC BY 2.0, edited)