A simple "Get to know me" post on Instagram or TikTok poses a serious security risk, as it often aligns with common security questions used…
Labeling AI-generated images is long overdue: some users create realistic images of prominent people without any markers. We need labels, and we need them fast.
It’s clear that apps need to exercise some form of control, and Meta says it will label AI-generated images across its platforms, Facebook, Instagram, and Threads, as a measure to improve transparency.
Meta has partnered with other firms in an effort to pinpoint common technical standards for identifying AI content in both video and audio.
Meta will be labeling images that users post on Facebook, Instagram and Threads when it can detect industry-standard indicators that the content is AI-generated.
The goal is for people to know which images were imagined with AI and which weren’t.
According to Nick Clegg, Meta has been working with other companies to develop common standards for identifying AI images:
“Since AI-generated content appears across the internet, we’ve been working with other companies in our industry to develop common standards for identifying it through forums like the Partnership on AI (PAI). The invisible markers we use for Meta AI images – IPTC metadata and invisible watermarks – are in line with PAI’s best practices.”
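Meta hasn’t published its detection code, but one of the markers it names, IPTC metadata, lives in a well-known place: in JPEG files it is carried in the APP13 marker segment. A minimal first step for any detector is simply checking whether that segment exists. Below is a stdlib-only sketch; the function name and overall approach are my own illustration, not Meta’s implementation.

```python
import struct

def find_app13_segment(data: bytes):
    """Scan raw JPEG bytes for the APP13 segment (marker 0xFFED),
    the container where Photoshop-style IPTC metadata is stored.
    Returns the segment payload, or None if no APP13 is present.

    Illustrative sketch only -- a real detector would go on to parse
    the IPTC records inside the payload."""
    if not data.startswith(b"\xff\xd8"):  # every JPEG begins with the SOI marker
        return None
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:  # lost marker sync; give up
            return None
        marker = data[i + 1]
        if marker == 0xDA:  # start-of-scan: metadata segments come before this
            return None
        # Segment length is big-endian and includes its own 2 bytes
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        if marker == 0xED:  # APP13 found: return its payload
            return data[i + 4:i + 2 + length]
        i += 2 + length
    return None
```

Presence of APP13 only proves the file carries IPTC-style metadata; deciding whether it declares AI provenance requires parsing the records inside, which is where industry standards like those PAI promotes come in.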
While encouraging creativity with AI tools such as the Meta AI image generator, which lets people create pictures from text prompts, Meta says it’s important to show users where the boundary lies.
In its blog post, Meta wrote: “People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology. So it’s important that we help people know when photorealistic content they’re seeing has been created using AI. We do that by applying “Imagined with AI” labels to photorealistic images created using our Meta AI feature, but we want to be able to do this with content created with other companies’ tools too.”
What’s it all for?
While we wait for Meta to build systems that detect AI content, it’s worth asking why this shift toward labeling images is happening.
The first reason is simple: transparency. Labeling allows users to make informed decisions about how they engage with, and rely on, the information presented.
The second is copyright and attribution: AI images may incorporate elements from copyrighted material, and labeling can help clarify the source and flag potential copyright issues.
The third is misinformation: these days it’s easy to use AI to create realistic but fake images and videos. Labeling AI content can help mitigate the spread of harmful material and protect users from deception.
Where did it come from?
The idea of labeling images is not new: Google was working on image recognition and automatic labeling as early as 2016, Pinterest introduced automatic board suggestions around 2017, and YouTube implemented a system that flags violent or potentially harmful content for review.
All of these systems laid the groundwork, and we are likely to see them improve as AI is added to the mix.