US, Britain and Brussels to sign agreement on AI standards

The three major western jurisdictions building artificial intelligence technologies have signed the first legally binding international treaty on the use of AI, as companies worry that a patchwork of national regulations could hinder innovation.

The US, EU and UK signed the Council of Europe’s convention on AI on Thursday, which emphasises human rights and democratic values in its approach to the regulation of public and private-sector systems. Other countries were continuing to sign the pact on Thursday.

The convention was drafted over two years by more than 50 countries, also including Canada, Israel, Japan and Australia. It requires signatories to be accountable for any harmful and discriminatory outcomes of AI systems. It also requires that outputs of such systems respect equality and privacy rights, and that victims of AI-related rights violations have legal recourse.

“[With] innovation that is as fast-moving as AI, it is really important that we get to this first step globally,” said Peter Kyle, the UK’s minister for science, innovation and technology. “It’s the first [agreement] with real teeth globally, and it’s bringing together a very disparate set of nations as well.”

“The fact that we hope such a diverse group of nations are going to sign up to this treaty shows that actually we are rising as a global community to the challenges posed by AI,” he added.

While the treaty is billed as “legally enforceable”, critics have pointed out that it has no sanctions such as fines. Compliance is measured primarily through monitoring, which is a relatively weak form of enforcement.

Hanne Juncher, the director in charge of the negotiations for the council, said: “This is confirmation that [the convention] goes beyond Europe and that these signatories were super invested in the negotiations and . . . satisfied with the outcome.”

A senior Biden administration official told the FT the US was “committed to ensuring that AI technologies support respect for human rights and democratic values” and saw “the key value-add of the Council of Europe in this space”.

The treaty comes as governments develop a host of new regulations, commitments and agreements to oversee fast-evolving AI software. These include Europe’s AI Act, the G7 deal agreed last October, and the Bletchley Declaration, signed last November by 28 countries including the US and China.

While the US Congress has not passed any broad framework for AI regulation, lawmakers in California, where many AI start-ups are based, did so last week. That bill, which has split opinion in the industry, is awaiting the state governor’s signature.

The EU regulation, which came into force last month, is the first major regional law, but the UK’s Kyle pointed out that it remains divisive among companies building AI software.

“Companies like Meta, for example, are refusing to roll out their latest Llama product in the EU because of it. So it’s really good to have a baseline which goes beyond just individual territories,” he said. 

Although the EU’s AI Act was seen as an attempt to set a precedent for other countries, the signing of the new treaty illustrates a more cohesive, international approach, rather than relying on the so-called Brussels effect.

Věra Jourová, vice-president of the European Commission for values and transparency, said: “I am very glad to see so many international partners ready to sign the convention on AI. The new framework sets important steps for the design, development and use of AI applications, which should bring trust and reassurance that AI innovations are respectful of our values — protecting and promoting human rights, democracy and rule of law.”

“This was the basic principle of . . . the European AI Act and now it serves as a blueprint around the globe,” she added.

Additional reporting by James Politi in Washington