An artificial intelligence-based system could soon be responsible for assessing the potential harm and privacy risks of up to 90% of updates made to Meta’s apps, such as Instagram and WhatsApp, according to internal documents seen by NPR.
NPR notes that a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission requires the company to conduct privacy reviews of its products, assessing the risks of any potential updates.
Under the new system, according to Meta, product teams will be asked to fill out a questionnaire about their work, after which they will usually receive an “instant decision” listing the AI-identified risks, along with the requirements that the update or feature must meet before launch.
This AI-focused approach would allow Meta to update its products faster, but one former executive told NPR that it also creates “higher risks” because “the negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”
“As risks evolve and our program matures, we are improving our processes to better identify risks, optimize decision-making, and improve people’s experience,” a Meta spokesperson said. “We use technology to add consistency and predictability to low-risk decisions, and rely on human expertise to carefully evaluate and oversee new or complex issues.”