
Medical Market Report


Google Ditches Pledge Not To Use AI For Weapons Or Surveillance

February 5, 2025 by Deborah Bloomfield

Google’s parent company Alphabet has redrafted its policies guiding its use of artificial intelligence (AI), doing away with a promise to never use the technology in ways “that are likely to cause overall harm”. This includes weaponizing AI as well as deploying it for surveillance purposes.


The pledge to steer clear of such nefarious applications was made in 2018, when thousands of Google employees protested against the company’s decision to allow the Pentagon to use its algorithms to analyze military drone footage. In response, Alphabet declined to renew its contract with the US military and immediately announced four red lines that it vowed never to cross in its use of AI.

Publishing a set of principles, Google included a section entitled “AI applications we will not pursue”, under which it listed “technologies that cause or are likely to cause overall harm” as well as “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.” Surveillance and “technologies whose purpose contravenes widely accepted principles of international law and human rights” were also mentioned on the AI blacklist.

However, updating its principles earlier this week, Google scrapped this entire section from the guidelines, meaning there are no longer any assurances that the company won’t use AI for the purposes of causing harm. Instead, the tech giant now offers a vague commitment to “developing and deploying models and applications where the likely overall benefits substantially outweigh the foreseeable risks.”

Addressing the policy change in a blog post, Google’s senior vice president James Manyika and Google DeepMind co-founder Demis Hassabis wrote that “since we first published our AI Principles in 2018, the technology has evolved rapidly” from a fringe research topic to a pervasive element of everyday life.

Citing a “global competition taking place for AI leadership within an increasingly complex geopolitical landscape,” the pair say that “democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights.” Among the applications they now envisage for AI are those that bolster national security – hence the backpedaling on previous guarantees not to use AI as a weapon.

With this in mind, Google says it now endeavors to use the technology to “help address humanity’s biggest challenges” and promote ways to “harness AI positively”, without specifying exactly what this entails and – more importantly – what it doesn’t.


Without making any specific statements about the kinds of activities the company won’t get involved with, the pair say that Google’s AI use will “stay consistent with widely accepted principles of international law and human rights,” and that they will “work together to create AI that protects people, promotes global growth, and supports national security.”


