Major blow to Paris summit as US and UK refuse to sign global AI declaration
The Paris AI Action Summit suffered a significant setback as the United States and the United Kingdom declined to sign a global declaration advocating for “inclusive and sustainable” artificial intelligence. While 60 nations endorsed the agreement, which aims to establish ethical, transparent, and secure AI governance frameworks, Washington and London expressed reservations regarding its effectiveness in tackling broader regulatory and security concerns.
‘Not enough clarity’
As reported by The Guardian, a UK government representative stated that although the country aligns with many aspects of the declaration, it “didn’t provide enough practical clarity on global governance, nor sufficiently address harder questions around national security and the challenge AI poses to it.”
“We continue to collaborate with our global partners, as demonstrated by our commitments on sustainability and cybersecurity at the Paris AI Action Summit,” the spokesperson added.
Vance’s critique
The announcement came shortly after US Vice President JD Vance delivered a sharp critique of European AI regulations, cautioning that “excessive regulation of the AI sector could kill a transformative industry.”
Addressing global leaders, including French President Emmanuel Macron and Indian Prime Minister Narendra Modi, Vance warned that stringent oversight could hinder technological progress. He was particularly critical of the European Union’s regulatory approach, arguing that “we need international regulatory regimes that foster the creation of AI technology rather than strangle it, and we need our European friends, in particular, to look to this new frontier with optimism rather than trepidation.”
‘Little strategic room’
While some speculated that the UK’s refusal to sign the declaration was influenced by US policy, a government source dismissed such claims, according to The Guardian. However, a Labour MP suggested that “we have little strategic room but to be downstream of the US,” warning that overly restrictive policies could discourage collaboration with major American AI firms.
Campaigners and AI experts raised concerns over the UK’s stance, warning that it could undermine the country’s credibility as a leader in ethical AI development. Andrew Dudfield of Full Fact emphasised the need for “bolder government action to protect people from corrosive AI-generated misinformation,” while Gaia Marcus from the Ada Lovelace Institute argued that rejecting the declaration “goes against the vital global governance that AI needs.”
AI and authoritarian regimes
Additionally, Vance warned against AI cooperation with authoritarian regimes, indirectly referencing China. Highlighting concerns over surveillance technology and data security, he stressed that “partnering with such regimes, it never pays off in the long term. Some of us in this room have learned from experience that partnering with them means chaining your nation to an authoritarian master that seeks to infiltrate, dig in and seize your information infrastructure. Should a deal seem too good to be true, just remember the adage that we learned in Silicon Valley: if you aren’t paying for the product, you are the product.”
The balance between risk and innovation
Vance also reflected on the earlier AI Safety Summit in the UK, implying that excessive caution could hinder progress. He argued that discussions on emerging technologies must strike a balance between risk management and fostering innovation. His speech underscored a fundamental divide in global AI governance: one that pits regulatory caution against technological advancement.
The refusal of the US and UK to sign the declaration signals deeper tensions in the global AI regulatory landscape. While European leaders push for stricter governance, the Anglo-American approach appears to prioritise innovation over regulation. This divergence raises pressing questions about the future of international AI cooperation and the ethical implications of unchecked technological growth.