
Google’s Willingness To Develop AI For Weapons Raises Concerns: Human Rights Watch

Google’s revised Artificial Intelligence (AI) policy signals a worrying shift in the company’s stance on the development of AI for military applications, Human Rights Watch (HRW) has warned. This policy change, HRW argues, underscores the inadequacy of voluntary guidelines in governing AI ethics and the need for enforceable legal frameworks.

According to Anna Bacciarelli, Senior Researcher with HRW, Google’s Responsible AI Principles, established in 2018, previously prohibited the company from developing AI “for use in weapons” or for applications where “the primary purpose is surveillance.” These principles also committed Google to avoiding AI development that could cause “overall harm” or that “contravenes widely accepted principles of international law and human rights.”

However, in its latest policy revision, Google has removed these clear restrictions. Instead, the company now states that its AI products will “align with” human rights, but without explaining how this alignment will be ensured. The lack of specificity raises concerns about potential ethical lapses and accountability in AI deployment.

HRW has strongly criticized this shift, warning that stepping away from explicitly prohibited uses of AI is dangerous. “Sometimes, it’s simply too risky to use AI, a suite of complex, fast-developing technologies whose consequences we are discovering in real time,” Anna Bacciarelli stated. The organization argues that AI systems, particularly in military applications, can be unpredictable and prone to errors that result in severe consequences, including harm to civilians.

AI Use in Warfare

Google’s decision to remove key restrictions from its AI policy highlights the limitations of voluntary corporate ethical guidelines, HRW asserts. “That a global industry leader like Google can suddenly abandon self-proclaimed forbidden practices underscores why voluntary guidelines are not a substitute for regulation and enforceable law,” Bacciarelli said.

HRW further emphasised that existing international human rights laws and standards already apply to AI technologies, particularly in military contexts. However, regulation is crucial in translating these norms into practice and ensuring compliance. Without enforceable legal mechanisms, companies like Google may prioritize market competition over ethical considerations.

The revised AI policy also raises concerns about the broader implications of AI use in warfare. Militaries worldwide are increasingly integrating AI into combat systems, where reliance on incomplete or faulty data can lead to flawed decision-making, increasing the risk of civilian casualties. HRW warns that digital tools powered by AI can complicate accountability in battlefield decisions, making it harder to assign responsibility for actions that could have life-or-death consequences.

Need Stronger Regulations

Google’s shift in policy marks a significant departure from its previous stance. The company has gone from refusing to develop AI for weapons to now expressing a willingness to engage in national security ventures that may involve AI applications in defence. This shift aligns with Google’s executives describing a “global competition … for AI leadership” and arguing that AI should be “guided by core values like freedom, equality, and respect for human rights.”

However, HRW contends that these statements ring hollow if the company simultaneously deprioritizes considerations for the impact of powerful new technologies on human rights. The organization warns that such an approach could lead to a “race to the bottom” in AI ethics, where companies prioritize technological advancements over fundamental rights protections.

The stakes in AI development, particularly for military applications, could not be higher. The use of AI in warfare presents unprecedented ethical and legal challenges, with significant risks to civilians and global security. Consistent with the United Nations Guiding Principles on Business and Human Rights, HRW asserts that all companies, including Google, must meet their responsibility to respect human rights across all their products and services.

In light of these developments, HRW is calling for stronger regulations to ensure that AI technologies are developed and deployed responsibly. Governments and international bodies must act swiftly to establish enforceable legal frameworks that prevent AI from being used in ways that could endanger human rights and global security.

Google’s AI policy change serves as a stark reminder of the potential dangers posed by unregulated AI development. As technology advances at an unprecedented pace, the need for comprehensive legal safeguards has never been more urgent.

(By arrangement with owsa)

OB Bureau