Meta revises AI content policies after Oversight Board critique of incoherent and confusing rules

Meta has announced changes to its policies on manipulated and AI-generated content in response to recommendations from its Oversight Board. The changes come after the board found that Meta’s previous policies were unclear and should be reconsidered. The review was prompted by an edited video of President Biden, posted on Facebook, that had been manipulated to make it look as though the president was behaving inappropriately towards his adult granddaughter. The Oversight Board determined that the video did not violate Meta’s policies because it was not manipulated with artificial intelligence and did not show the president saying or doing anything he did not actually do. However, the board criticized Meta’s policy as lacking justification and focusing on how content is created rather than on preventing specific harms, such as disrupting electoral processes.

In response to the Oversight Board’s recommendations, Meta will begin labeling AI-generated content in May and will apply informational labels and context to manipulated media rather than removing it outright, a departure from its previous approach of taking down content that violated its community standards. The company will now add labels to a broader range of content beyond manipulated media, particularly content that poses a high risk of deceiving the public on important matters. Meta acknowledged that its previous policy on manipulated videos was too narrow, covering only videos created or altered by AI to make a person appear to say something they did not say. With advancements in AI technology, the company recognizes the need to address manipulation in other forms as well, including audio and photos.

Meta’s Vice President of Content Policy, Monika Bickert, emphasized the importance of providing the public with more information and context when it comes to manipulated and AI-generated content. The company’s new approach aims to give users a better understanding of the content they are viewing and reduce the risk of misinformation spreading online. Bickert acknowledged that the company’s policy on manipulated videos was outdated and did not account for the evolving landscape of AI technology. By updating its policies and implementing labeling for a wider range of content, Meta hopes to improve transparency and accountability on its platform.

The changes to Meta’s policies on manipulated and AI-generated content come as the company prepares for the fall elections. By taking action to address the spread of misinformation and deceptive content, Meta aims to protect the integrity of the electoral process and ensure that users have access to accurate information. The Oversight Board’s recommendations served as a catalyst for these policy changes, highlighting the need for Meta to reassess its approach to content moderation. As technology continues to advance, it is crucial for companies like Meta to adapt their policies to address emerging challenges and protect users from harmful content online.
