A former Chinese official sparred with renowned AI scientist Professor Yoshua Bengio over an international AI safety report, setting the tone for a contentious discussion on the future of artificial intelligence.
Fu Ying, who previously served as China’s ambassador to the UK and as vice minister of foreign affairs, now holds an academic position at Tsinghua University in Beijing. She joined Professor Bengio, who is often referred to as an “AI Godfather”, on a panel discussion preceding the two-day global AI summit in Paris, which commences on Monday.
The summit aims to bring together world leaders, technology executives, and scholars to assess the societal, governance, and environmental implications of AI. Among the high-profile attendees are OpenAI chief executive Sam Altman, Microsoft president Brad Smith, and Google’s chief executive Sundar Pichai. While Elon Musk is not officially on the guest list, it remains uncertain whether he will make an appearance.
During the discussion, Fu Ying humorously acknowledged the length of the AI safety report, which was co-authored by 96 global experts and led by Professor Bengio. She remarked that its Chinese translation spanned around 400 pages and admitted she had yet to finish reading it. She also subtly criticised the name of the AI Safety Institute, of which Professor Bengio is a member, noting that China had opted to establish The AI Development and Safety Network instead. She explained that while there were already many institutes, the chosen name underscored the importance of collaboration rather than just oversight.
One of the key focuses of the summit is addressing AI regulation in an increasingly divided world. The event follows significant industry disruption, including the recent unveiling of a powerful and cost-effective AI model by China’s DeepSeek, which has challenged US dominance in the sector.
Fu Ying and Professor Bengio’s exchange highlighted the ongoing geopolitical struggle surrounding AI development. However, Fu Ying also lamented the detrimental impact of US-China tensions on global AI safety efforts.
“At a time when the science is progressing rapidly, the relationship between the two nations is deteriorating, affecting unity and collaboration needed to manage risks,” she said. “It is very unfortunate.”
Offering insight into China’s AI progress, Fu Ying described an “explosive period” of innovation since China first unveiled its AI development strategy in 2017—five years before ChatGPT made a global impact. She acknowledged that rapid development comes with risks but refrained from detailing specific concerns.
She further argued that AI development should be based on open-source principles, allowing for transparency and collective problem-solving. Most major US tech firms, she noted, do not publicly share the underlying technologies behind their products.
“Open-source technology offers humans better opportunities to detect and resolve problems,” she asserted. “The lack of transparency among tech giants makes people nervous.”
Professor Bengio, however, countered this perspective, cautioning that open-source AI systems could also be exploited by malicious actors. Nonetheless, he conceded that, from a safety standpoint, open-source AI such as DeepSeek is easier to scrutinise than proprietary models like ChatGPT, whose inner workings remain undisclosed.
On Tuesday, world leaders including French President Emmanuel Macron, India’s Prime Minister Narendra Modi, and US Vice President JD Vance will take centre stage at the summit. Their discussions will revolve around AI’s impact on employment, its potential benefits for public services, and strategies to mitigate associated risks.
Additionally, a new $400 million partnership between multiple nations has been unveiled, aiming to foster AI initiatives that serve the public good, such as advancements in healthcare.
UK Technology Secretary Peter Kyle, speaking in an interview, emphasised the importance of keeping pace with AI development, warning that the UK risks falling behind. Dr Laura Gilbert, an AI advisor to the UK government, highlighted AI’s potential efficiencies within the NHS, questioning how the healthcare system could sustain itself without integrating AI.
Meanwhile, Matt Clifford, the architect of the UK’s AI Action Plan, which has been fully embraced by the government, suggested that AI’s impact would be more transformative than previous technological revolutions.
“The industrial revolution automated physical labour; AI is the automation of cognitive labour,” echoed Marc Warner, CEO of AI firm Faculty, who speculated that traditional job roles might not exist by the time today’s young children reach adulthood.
As AI continues to shape the world, the ongoing debate over its regulation, development, and ethical implications remains as pressing as ever.