In a controversial move, Google has removed a crucial pledge from its artificial intelligence (AI) principles: a commitment not to use the technology to develop weapons. The change is part of a broader update to the guidelines, which were originally published in 2018. The revised principles no longer reference weapons or surveillance, a shift that has sparked concern among critics questioning the company’s evolving stance on AI ethics.
The original AI principles included a commitment from Google not to pursue technology “that causes or is likely to cause harm,” specifically in the areas of weapons and AI-powered surveillance that violates internationally accepted norms. The updated guidelines drop these specific provisions in favour of a new focus on “responsible development and deployment.” In this revised section, Google pledges to implement “appropriate human oversight, due diligence, and feedback mechanisms” that align with user goals, social responsibility, and the principles of international law and human rights.
Google’s decision to revise these principles comes as the company acknowledges the rapid evolution of AI technology, which has moved from a niche research area to a ubiquitous platform now utilised by billions globally. In a blog post, James Manyika, Google’s senior vice president, and Demis Hassabis, the head of Google DeepMind, stated that the company’s AI principles needed to be updated to reflect the technology’s rapid expansion and increasing global impact.
“Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications,” the blog post reads. “It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world.”
The tech giant added that as AI becomes more widely used, the need for international collaboration on common principles has grown, a development Google says it supports. At the same time, Manyika and Hassabis noted that competition over AI is intensifying within an “increasingly complex geopolitical environment.”
“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” the pair wrote. “And we believe that companies, governments, and organisations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
This shift in Google’s AI principles has raised concerns among experts, especially as debates about the ethical development and regulation of AI intensify. While many nations and technology companies have signed non-binding agreements regarding responsible AI development, the lack of enforceable international regulations remains a significant issue. Google’s decision to remove its pledge not to develop AI for weapons and surveillance has brought these concerns to the forefront, prompting calls for more robust governance frameworks.
James Fisher, chief strategy officer at AI firm Qlik, expressed concern about the change, highlighting the need for global standards and regulatory frameworks. “Changing or removing responsible AI policies raises concerns about how accountable organisations are for their technology, and around the ethical boundaries of AI deployment,” Fisher said. “AI governance will of course need to flex and evolve as the technology develops, but adherence to certain standards should be a non-negotiable.”
For the UK, which has positioned itself as a leader in AI safety and regulation, Fisher added that this decision emphasises the need for effective, enforceable AI governance. “The UK’s ability to balance innovation with ethical safeguards could set a global precedent, but it will require collaboration between government, industry, and international partners to ensure AI remains a force for good,” he concluded.
As the debate over AI governance continues, Google’s decision signals the growing complexity of AI ethics in the face of rapid technological advancement. Whether this move will shape future AI policy remains to be seen, but it underscores the delicate balance between innovation, competition, and ethics in the global AI race.