Google has made one of the most significant changes to its artificial intelligence principles since first publishing them in 2018. The search giant has edited the document to remove its pledge not to "design or deploy" AI tools for use in weapons or surveillance technology. Previously, the principles included a section titled "applications we will not pursue," which is absent from the current version of the document.
Instead, there is a section called “responsible development and deployment.” In it, Google says it will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and generally accepted principles of international law and human rights.”
Those commitments are far vaguer than the ones the company made as late as last month, when the earlier version of its AI principles was still live on its website. On weapons, for instance, the company previously said it would not design AI for use in "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." As for AI surveillance tools, the company had said it would not develop technologies that violate "internationally accepted norms."