3 April 2024

Social Media Takes on the Challenge of Differentiating AI-Generated Content

As generative AI technology accelerates, distinguishing content crafted by humans from content generated by AI is becoming an increasingly complex task. The better AI becomes at producing content that rivals human work, the harder that distinction grows for the general public. The World Economic Forum underscored the severity of this development in a recent report, ranking AI-fueled misinformation as the foremost short-term threat to global stability, one with the potential to deeply polarize societies and weaken democratic foundations.

Consequently, social media platforms are compelled to take a proactive stance and safeguard their users against the proliferation of AI-generated disinformation. Beyond individual vigilance, these platforms must institute robust guidelines to curtail the misuse of AI-generated content. Below is a summary of the initiatives and policies major social media companies have introduced to address this emerging issue:

YouTube:

YouTube creators will be required to check a box disclosing whether their video “is altered or synthetic and seems real”; checking the box triggers a new label displayed on the video. Not all AI-generated content requires disclosure: exemptions include simple AI-powered color adjustments and special effects, as well as content that is “clearly unrealistic.”

Meta (Instagram & Facebook):

Photorealistic AI-generated images created with Meta AI features will carry visible markers as well as invisible watermarks to help users and platforms identify AI-generated content. Users can also disclose that content is AI-generated by adding a label themselves.
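
To make the invisible-marker idea concrete, here is a minimal sketch in Python (using the Pillow imaging library) of how an AI-provenance tag could be written into, and read back from, an image's metadata. The "ai_generated" key is a hypothetical placeholder for illustration; Meta's actual markers follow standards-based metadata and watermarking schemes whose field names are not reproduced here.

```python
# Minimal sketch of metadata-based AI provenance; the "ai_generated" key is
# hypothetical and stands in for real standards-based fields.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def add_ai_marker(src: str, dst: str) -> None:
    """Save a PNG copy of the image with the hypothetical tag embedded as a text chunk."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")  # a generator would set this at creation time
    with Image.open(src) as img:
        img.save(dst, pnginfo=meta)


def has_ai_marker(path: str) -> bool:
    """Return True if the image carries the hypothetical AI-generation tag."""
    with Image.open(path) as img:
        # Format-specific metadata (e.g. PNG text chunks) is exposed via img.info.
        metadata = img.info or {}
        return str(metadata.get("ai_generated", "")).lower() == "true"
```

Metadata tags like this are trivial to strip or forge, which is one reason a platform would pair them with invisible watermarks embedded in the image pixels themselves rather than relying on metadata alone.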

X:

X, formerly known as Twitter, added the following clause to its policy in April 2023: “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (“misleading media”).” X will label or remove content that falls under this category, and may lock accounts that repeatedly share heavily manipulated content for deceptive purposes.

TikTok:

Users must clearly disclose realistic content created with generative AI, and AI-generated content that resembles real figures is prohibited. Users can toggle an “AI-generated content” label when posting, and applying the label does not affect the post’s engagement. Users can also report problematic AI-generated content as misinformation, including deepfakes, synthetic media, and manipulated media.

Despite these proactive efforts by social media platforms to combat AI-generated disinformation, formidable obstacles persist. As AI grows more refined, manipulated content becomes harder to identify, and the sheer volume of daily uploads taxes the monitoring capabilities of these platforms. Self-disclosure policies are only as reliable as the users behind them, and malicious actors have every incentive to ignore them. Vague criteria for determining what constitutes “clearly unrealistic” content may lead to inconsistent enforcement, and the ongoing evolution of AI necessitates regular updates to content moderation practices. Furthermore, there is a critical balance to be struck: excessively rigorous regulations might stifle legitimate and innovative uses of AI. These challenges underscore the need for a dynamic and comprehensive approach to content moderation, coupled with persistent efforts to educate users about media literacy and the ethical deployment of AI technologies.