The Facebook insider building content moderation for the AI era

by admin

In 2019, Brett Levenson transitioned from Apple to Facebook amid the fallout from the Cambridge Analytica scandal, believing he could improve the company’s content moderation through advanced technology. However, he quickly discovered that the challenges were much more complex. Human moderators were required to digest a cumbersome 40-page policy, often translated poorly, and had mere seconds to assess whether content violated these rules, with decision-making accuracy only slightly above chance.

Levenson explained that this reactive approach was untenable in a landscape where adversaries are increasingly agile. The emergence of AI chatbots exacerbated content moderation failures, leading to alarming incidents where harmful advice or inappropriate content slipped through the cracks.

In response to these challenges, Levenson conceptualised “policy as code,” which turns static policy documents into executable logic that can adapt as circumstances change. This vision led to the creation of Moonbounce, a company that recently secured $12 million in funding, spearheaded by Amplify Partners and StepStone Group.
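The core of the “policy as code” idea is that each clause of a prose policy becomes a discrete, testable unit rather than a page in a PDF. A minimal sketch of what that might look like is below; the rule names, structure, and keyword predicates are purely illustrative assumptions, not Moonbounce’s actual format:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable

class Severity(Enum):
    LOW = 1
    HIGH = 2

@dataclass
class PolicyRule:
    """One clause of a policy document, expressed as executable logic."""
    rule_id: str
    description: str
    severity: Severity
    predicate: Callable[[str], bool]  # True if the text violates this clause

# Hypothetical rules derived from a prose policy document.
RULES = [
    PolicyRule(
        rule_id="self-harm-01",
        description="Content must not encourage self-harm.",
        severity=Severity.HIGH,
        predicate=lambda text: "hurt yourself" in text.lower(),
    ),
    PolicyRule(
        rule_id="spam-01",
        description="Content must not be repeated promotional text.",
        severity=Severity.LOW,
        predicate=lambda text: text.lower().count("buy now") >= 3,
    ),
]

def evaluate(text: str) -> list[PolicyRule]:
    """Return every rule the text violates."""
    return [rule for rule in RULES if rule.predicate(text)]
```

In practice a system like Moonbounce’s would use a language model, not keyword matching, to judge whether a clause applies; the sketch only shows the shift from a monolithic document to individually checkable rules.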

Moonbounce’s technology adds a safety layer across content-generation platforms, using its own large language model to analyse policy documents, evaluate content in real time, and act within 300 milliseconds. Depending on the customer’s needs, that action can mean delaying distribution for human review or blocking risky material outright.

Currently, Moonbounce supports over 40 million daily reviews and caters to more than 100 million active users, helping platforms ranging from dating apps to AI character developers. Levenson emphasised that safety should be viewed as a product advantage rather than an afterthought, enabling customers to differentiate themselves by incorporating robust safety measures.

The significance of this technology is underscored by the increasing pressure on AI firms, which face scrutiny after incidents where chatbots directed vulnerable users toward harmful behaviours. Lenny Pruss from Amplify Partners highlighted the pressing need for effective, real-time content moderation as AI technologies become integral to online platforms.

Because Moonbounce operates independently of the content being moderated, it avoids the contextual overload that chatbots experience. The company is currently developing a feature called “iterative steering” to create more supportive interactions with users, particularly in sensitive cases, by modifying chatbot feedback toward more empathetic responses.

When discussing potential acquisition strategies, Levenson acknowledged Moonbounce’s alignment with major platforms like Meta. However, he expressed concern about the risk of the technology being restricted should it be absorbed by a larger company. Ultimately, his focus remains on ensuring that Moonbounce’s innovations in safety and moderation remain accessible and beneficial across the industry.
