AI regulation falters as UK and US snub international accord
Efforts to establish a unified global framework for artificial intelligence (AI) regulation encountered a major hurdle last week, as the United Kingdom and the United States declined to endorse an international statement signed by 60 other countries. Despite strong advocacy from French President Emmanuel Macron, the push for a cohesive AI governance model took a step backwards at the AI Action Summit in Paris.
The summit, attended by representatives from nearly 100 nations, culminated in a communiqué titled the ‘Statement on Inclusive and Sustainable Artificial Intelligence’. The declaration underscored the responsibility of governments to ensure that AI is deployed ethically and equitably, committing signatories to a vision where AI is ‘open, inclusive, transparent, ethical, safe, secure and trustworthy’. A central tenet of the statement was the call for a robust, inclusive global governance structure for AI, driven by fairness and progress.
UK and US withhold support
While 60 governments, including China, the European Union, and the African Union, lent their support to the accord, the UK and US conspicuously withheld their signatures—albeit for differing reasons. The UK government deemed the declaration too vague, citing a lack of practical clarity on AI governance and insufficient focus on national security concerns. An unnamed government spokesperson told the BBC that the UK sought a more concrete regulatory framework to address AI-related security risks.
Conversely, the US provided no official reason for its non-participation. However, Vice President JD Vance’s speech at the summit offered insight into the American stance. Expressing concerns over excessive regulation, Vance criticised international efforts that could place undue constraints on US technology firms. He highlighted the challenges posed by existing regulatory regimes, such as the EU’s Digital Services Act and the General Data Protection Regulation (GDPR), which he claimed impose ‘onerous compliance costs’ on businesses.
‘The Trump administration is troubled by reports that some foreign governments are considering tightening the screws on US tech companies with international footprints,’ Vance stated. ‘America cannot and will not accept that, and we think it’s a terrible mistake.’
While the speech was largely directed at the EU, it also underscored the UK government’s stance as it prepares its own approach to AI regulation. The government has promised a ‘distinctively British approach’ to AI governance, focusing on balancing innovation with regulatory oversight.
Intellectual property and AI regulation in the UK
One of the key issues shaping the UK’s AI strategy is the legal status of data mining for training AI models. The current legal ambiguity surrounding the use of copyrighted material in AI development has been cited as a significant obstacle to innovation. Technology entrepreneur Matt Clifford, author of the government’s AI action plan, has stressed the need for swift legal clarity: ‘This has gone on too long and needs to be urgently resolved.’
A consultation document published just before Christmas echoed these concerns, emphasising that waiting for ongoing legal cases to provide clarity is not a viable option. Instead, it proposes direct legislative intervention to establish clear rules. The document outlines four potential approaches:
- Strengthen copyright law to mandate licensing for all AI-related data mining, ensuring creators are compensated but potentially deterring AI firms from operating in the UK.
- Introduce a broad data mining exception, allowing AI developers to use copyrighted materials without requiring permission, similar to the approach taken in Singapore and the US.
- Create a data mining exception with a rights reservation mechanism, permitting AI training on copyrighted content unless rights-holders explicitly opt out.
- Maintain the status quo, an option already ruled out due to the existing legal uncertainty.
The government has indicated a preference for the third option, arguing that it strikes a balance between control, access, and transparency. However, the proposal has drawn criticism. Lord Foster of Bath, chair of the Lords justice and home affairs committee, likened the opt-out system to requiring individuals ‘to stick “do not steal” labels on all your worldly goods’. He warned that such a move could undermine the UK’s well-regarded copyright framework, which has historically supported growth and investment in the creative industries.
The future of AI governance
The AI Action Summit’s declaration highlighted the transformative impact of artificial intelligence, stating that the technology is ‘unleashing a power of action unprecedented in the history of humanity, which will create immense opportunities, entail risks, and rapidly transform the main economic, political and social balances’.
Despite global recognition of AI’s disruptive potential, achieving consensus on regulatory measures remains elusive. While the UK and US hesitate to align with broader international efforts, the challenge of balancing innovation with oversight continues to shape the future of AI governance worldwide. Whether meaningful regulation can be achieved at a global level—or if nations will forge their own regulatory paths—remains an open question.