Twitter is testing a new feature that lets in-house contributors attach notes to tweets that may contain misleading images and videos.
Rolling out on May 31, the new pilot feature addresses misinformation associated with AI-generated images and manipulated videos.
Community Notes for media will appear on all tweets that contain the flagged media, not just the one originally noted.
From AI-generated images to manipulated videos, it’s common to come across misleading media. Today we’re piloting a feature that puts a superpower into contributors’ hands: Notes on Media
— Community Notes (@CommunityNotes) May 30, 2023
Notes attached to an image will automatically appear on recent & future matching images. pic.twitter.com/89mxYU2Kir
Only contributors with a Writing Impact score of 10 or above will have the option to flag tweets containing potentially misleading media, regardless of who tweets them. Writing Impact scores increase when other users rate a contributor’s Community Notes as “Helpful.”
Once a contributor attaches a note to an image, the image will carry an “about the image” label to inform readers how the media should be interpreted. The label is tied to the image rather than to any single tweet.
While the beta version only supports adding notes to images, the social media platform plans to expand the feature to videos and to tweets containing multiple images and videos.
Community Notes first launched in 2021 as “Birdwatch.” Following Elon Musk’s takeover, the fact-checking system was renamed Community Notes and made available to all Twitter users, along with many other changes to the platform.
Previously, only users in the U.S. could become Community Notes contributors, but early this year the social media company opened the program to contributors in the U.K., Ireland, Australia, and New Zealand.
Since then, the tool has been used to debunk false statements in popular tweets by adding context to them via links and supporting sources.