Google has confirmed that it will sign the European Union's General-Purpose AI Code of Practice, a voluntary framework intended to help AI developers put processes and systems in place that comply with the EU Artificial Intelligence Act.
Notably, earlier this month, Meta announced that it would not sign the code, calling the implementation of EU AI legislation “excessive” and stating that Europe is “heading down the wrong path in AI.”
Google's commitment comes a few days before the rules for providers of "general-purpose AI models with systemic risk" take effect on August 2. These rules are likely to affect major companies such as Anthropic, Google, Meta, and OpenAI, as well as several other developers of large generative models, all of which will have two years to fully comply with the AI Act.
In a blog post on Wednesday, Kent Walker, Google’s president of global affairs, acknowledged that the final version of the code of practice was better than what the EU had originally proposed, but he still noted reservations about the AI Act and the code.
“We remain concerned that the AI Act and the Code risk slowing Europe’s development and deployment of AI. In particular, departures from EU copyright law, steps that slow approvals, or requirements exposing trade secrets could chill European model development and deployment, harming Europe’s competitiveness,” Walker wrote.
By signing up to the EU code of practice, AI companies agree to adhere to a number of guidelines, including providing updated documentation about their AI tools and services, not training AI on pirated content, and respecting content owners’ requests not to use their works in their datasets.
The EU’s landmark AI Act, which regulates the use of AI according to the level of risk it poses, prohibits certain “unacceptable risk” uses, such as cognitive behavioral manipulation and social scoring. It also defines a set of “high-risk” applications, including biometrics and facial recognition, as well as the use of AI in areas such as education and employment, and requires developers to register their AI systems and meet risk- and quality-management obligations.