Mar 19

Meta's AI Takes Content Control, Phasing Out Outside Vendors

Originally reported by TechCrunch

Meta announced on Thursday its initiative to introduce more sophisticated AI systems for content enforcement, signaling a strategic shift away from relying on third-party vendors. These advanced systems are tasked with detecting and removing content associated with terrorism, child exploitation, illicit drugs, fraud, and scams.

The company plans to roll these AI systems out fully across its suite of applications once they consistently outperform its existing content enforcement methods, a deployment that will also reduce its reliance on external vendors for content moderation.

In a blog post, Meta clarified its approach: “While we’ll still have people who review content, these systems will be able to take on work that’s better-suited to technology, like repetitive reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as with illicit drug sales or scams.”

Meta anticipates that these AI systems will lead to more accurate detection of violations, enhanced prevention of scams, quicker responses to real-world events, and a reduction in instances of over-enforcement.

Early tests of the AI systems have yielded promising results. The company reports that these systems can detect twice as much violating adult sexual solicitation content as its human review teams, while also achieving an error rate reduction of over 60%. Furthermore, the AI can identify and prevent more impersonation accounts targeting celebrities and other high-profile individuals, and help thwart account takeovers by flagging suspicious activities like logins from new locations, password changes, or profile edits.

Additionally, Meta states that these systems are capable of identifying and mitigating approximately 5,000 scam attempts daily, specifically targeting instances where fraudsters try to trick users into divulging their login credentials.

Meta emphasized the continued importance of human involvement, writing in its blog post: “Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high‑impact decisions.” The company further added, “For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement.”

This development comes at a time when Meta, alongside other major technology companies, is facing numerous lawsuits seeking to hold social media giants accountable for alleged harm to children and young users.

In a separate announcement on Thursday, Meta also unveiled the global launch of a Meta AI support assistant, designed to provide users with 24/7 access to support. This assistant is being rolled out within the Facebook and Instagram apps for iOS and Android, as well as integrated into the Help Centers for both platforms on desktop.

Editorial Staff, Editor
