Regulatory Board Urges Meta to Revise Guidelines for AI-Created Adult Content

by admin

In light of investigations scrutinizing the way Meta governs AI-created explicit media, the company’s quasi-independent review panel, known as the Oversight Board, has prompted Meta to refine its guidelines concerning these images. The board has recommended that Meta transition from using the term “derogatory” to “non-consensual” in describing such content, and to shift its policy on these images to the “Sexual Exploitation Community Standards” category rather than the “Bullying and Harassment” category.

Presently, Meta’s approach to handling AI-created explicit images originates from a rule against “derogatory sexualized photoshop” located within its Bullying and Harassment category. The Board is also encouraging Meta to adopt more general wording in place of “photoshop” to denote manipulated media.

Separately, Meta already prohibits non-consensual imagery, but only when it is “non-commercial or produced in a personal setting.” The Oversight Board has recommended that this condition no longer be a prerequisite for removing or banning images that were created or altered by AI without consent.

These proposals follow two notable incidents involving explicit, AI-created images of prominent figures shared on Instagram and Facebook, putting Meta under scrutiny.

In one incident, an AI-created nude photo of a public personality from India was circulated on Instagram. Despite multiple reports from users, Meta failed to remove the image, closing the complaints within 48 hours without further investigation. The situation was only rectified after the Oversight Board intervened, leading to the content’s removal and the associated account’s ban.

Another case involved an AI-generated image mimicking a public figure from the U.S., shared on Facebook. Because Meta had previously added the image to its Media Matching Service (MMS), a database of policy-violating images used to identify similar content, the company promptly took down the photo when it was re-uploaded to Facebook.

Significantly, Meta only added the image of the Indian figure to the MMS after the Oversight Board’s prodding, explaining that it was not included initially due to lack of media attention on the matter.

The Oversight Board expressed concern, noting that many victims of deepfake intimate photos are not celebrities and are either left to deal with the spread of these non-consensually created images or must report each incident individually.

The Breakthrough Trust, an Indian nonprofit working against online gender-based violence, pointed out that these problems and Meta’s policies carry cultural significance. In commentary submitted to the Oversight Board, the organization highlighted how non-consensual images are often dismissed as matters of identity theft rather than recognized as gender-based violence.

“Victims frequently endure further victimization when reporting such incidents to police or courts, often being questioned about their role in the dissemination of their images—even when they are victims of deepfakes. Once online, these images rapidly spread beyond the original posting site, making removal from just the initial platform insufficient to stop their circulation,” Barsha Charkorborty, media head at the organization, conveyed to the Oversight Board.

During a discussion with TechCrunch, Charkorborty noted that users are often unaware when their complaints are automatically marked “resolved” after 48 hours. She argued that Meta should not apply a uniform timeline to every case and should do more to raise awareness of these issues among its users.

Devika Malik, an expert on platform policy who previously worked with Meta’s South Asia policy group, shared with TechCrunch that the reliance on user reports for removing non-consensual content is particularly deficient for AI-generated media.

“This unfairly requires affected individuals to verify their identity and the absence of consent, which becomes more challenging with synthetic media. This delay in verification allows the harmful content to spread extensively,” Malik remarked.

Aparajita Bharti, Founding Partner at The Quantum Hub (TQH), a Delhi-based think tank, suggested that Meta should make it easier for users to provide additional context when flagging content, helping them navigate Meta’s categorization of rule violations.

“We are optimistic that Meta will extend beyond the Oversight Board’s decision to offer flexible and user-centric options for reporting such material,” Bharti commented.

“Acknowledging that a perfect understanding of the subtle differences in reporting categories is unreasonable to expect from users, we argue for frameworks that ensure genuine concerns are not overlooked due to the technicalities of Meta’s content moderation policies,” she added.

