AI-Powered System to Evaluate Potential Harms and Privacy Risks of Meta App Updates
Internal documents reportedly viewed by NPR suggest that an AI-powered system could soon be responsible for evaluating up to 90% of updates to Meta apps such as Instagram and WhatsApp, assessing their potential harms and privacy risks.
According to NPR, a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission mandates privacy reviews of products to evaluate the potential risks of updates. Until now, these reviews have been conducted predominantly by human evaluators.
The new system will require product teams to fill out a questionnaire about their work, after which they receive an “instant decision” listing AI-identified risks and the requirements an update or feature must meet before it can launch.
While this AI-centric approach may enable Meta to update products more swiftly, a former executive warned of “higher risks,” arguing that the negative consequences of changes are less likely to be caught before they cause problems in the world.
Meta has acknowledged the shift in its review system, saying that only “low-risk decisions” will be automated and that “human expertise” will be retained for complex and novel issues.