Meta revealed on Thursday its plan to introduce more sophisticated AI systems for content enforcement, as the company aims to decrease its dependence on third-party vendors. The AI will be tasked with identifying and removing content related to terrorism, child exploitation, drugs, fraud, and scams.
Meta intends to implement these advanced AI systems across its applications once they can consistently surpass the effectiveness of current methods. Concurrently, the company will lessen its reliance on external vendors for content enforcement.
“Although we will still employ human content reviewers, these systems will be better suited for tasks like repetitive reviews of graphic content or areas where malicious actors frequently change tactics, such as in illicit drug sales or scams,” Meta stated in a blog post.
Meta is confident that these AI systems will detect more violations with improved precision, better prevent scams, react swiftly to real-world incidents, and decrease over-enforcement.
The company reports promising early tests, with the AI identifying twice as much violating adult sexual solicitation content as human review teams while reducing error rates by over 60%. The systems are also said to detect and block more impersonation accounts targeting celebrities and other high-profile figures, and to help prevent account takeovers by spotting signals such as logins from new locations, password changes, or alterations to profiles.
Moreover, Meta claims these systems are capable of identifying and thwarting around 5,000 scam attempts daily, where scammers attempt to trick individuals into providing their login information.
“Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high-impact decisions,” Meta noted in the blog post. “For example, human involvement will remain crucial in making the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement.”
This development coincides with Meta’s recent relaxation of content moderation policies following the start of President Donald Trump’s second term in office. Earlier this year, the company ended its third-party fact-checking program in favor of a Community Notes model similar to X’s. It also relaxed restrictions on mainstream discourse topics, encouraging users to engage with political content in a more personalized manner.
At the same time, Meta and other major tech companies face multiple lawsuits seeking to hold social media giants responsible for alleged harm to children and young users.
In addition, Meta announced the launch of a Meta AI support assistant that provides users with 24/7 support. The assistant is rolling out globally in the Facebook and Instagram apps for iOS and Android, as well as in the Help Center on desktop.