Google Ditches Pledge Not To Use AI For Weapons Or Surveillance

February 5, 2025 by Deborah Bloomfield

Google’s parent company Alphabet has redrafted its policies guiding its use of artificial intelligence (AI), doing away with a promise to never use the technology in ways “that are likely to cause overall harm”. This includes weaponizing AI as well as deploying it for surveillance purposes.


The pledge to steer clear of such nefarious applications was made in 2018, when thousands of Google employees protested against the company’s decision to allow the Pentagon to use its algorithms to analyze military drone footage. In response, Alphabet declined to renew its contract with the US military and immediately announced four red lines that it vowed never to cross in its use of AI.

Publishing a set of principles, Google included a section entitled “AI applications we will not pursue”, under which it listed “technologies that cause or are likely to cause overall harm” as well as “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” Surveillance and “technologies whose purpose contravenes widely accepted principles of international law and human rights” were also mentioned on the AI blacklist.

However, updating its principles earlier this week, Google scrapped this entire section from the guidelines, meaning there are no longer any assurances that the company won’t use AI for the purposes of causing harm. Instead, the tech giant now offers a vague commitment to “developing and deploying models and applications where the likely overall benefits substantially outweigh the foreseeable risks.”

Addressing the policy change in a blog post, Google’s senior vice president James Manyika and Google DeepMind co-founder Demis Hassabis wrote that “since we first published our AI Principles in 2018, the technology has evolved rapidly” from a fringe research topic to a pervasive element of everyday life.

Citing a “global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” the pair say that “democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.” Among the applications they now envisage for AI are those that bolster national security – hence the backpedaling on previous guarantees not to use AI as a weapon.

With this in mind, Google says it now endeavors to utilize the technology to “help address humanity’s biggest challenges” and promote ways to “harness AI positively”, without stating exactly what this does and – more importantly – doesn’t entail.


Without making any specific statements about what kinds of activities the company won’t be getting involved with, then, the pair say that Google’s AI use will “stay consistent with widely accepted principles of international law and human rights,” and that they will “work together to create AI that protects people, promotes global growth, and supports national security.”



