Medical Market Report


AI May Infringe On Your Rights And Insult Your Dignity (Unless We Do Something Soon)

September 11, 2025 by Deborah Bloomfield

With every significant breakthrough in technology, humanity has had to reckon with equally significant drawbacks – and artificial intelligence (AI) is definitely no exception. We’re all aware of many of its downsides by now: it’s bad for the environment; it’s bad for our brains; increasingly, it’s even turning deadly – and let’s face it, it’s not really even good enough at its job to make up for it.

But according to a new paper from law professors Maria Salvatrice Randazzo and Guzyal Hill, that’s not the end of it. As things stand, they argue, AI models are little more than a “black box problem” – an input, an output, and a giant gaping “trust me, bro” in the middle where who knows what rights and boundaries are being crossed or why.

It is, Randazzo said in a statement this week, a “very significant issue” – and it’s “only going to get worse without adequate regulation.” AI algorithms are neither transparent nor accountable, she argues: they make decisions that humans can’t necessarily replicate or explain – and as a result, we can’t easily tell whether the machine’s use has violated our rights.

You don’t have to look far for a real-world example of this. Only last month, AI startup Anthropic agreed to pay thousands of authors a total of $1.5 billion – yes, with a b – to settle a class-action lawsuit after allegedly downloading potentially millions of pirated books to train its chatbot. Before that, AI systems were catching heat in the UK for their use in the justice and welfare systems – applications which, by the nature of how such machines are trained, ended up perpetuating human prejudices, unfairly penalizing innocent people, and challenging existing laws around data protection.

Of course, none of this is actually AI’s fault – it’s just working as designed. The problem is that we expect too much of it: “AI is not intelligent in any human sense at all,” Randazzo explained. “It is a triumph in engineering, not in cognitive behavior.”

“It has no clue what it’s doing or why,” she said. “There’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

What’s needed, then, is a robust system of regulation – and in that respect, we’re currently seeing three distinct trends: the “state-centric” approach favored by China; the “human-centric” approach being followed by the European Union (EU); and finally the US, with what’s being called a “market-centric” approach.

“China’s ‘state-centric’ approach to AI governance asserts that states, not private sector actors, should be driving AI governance,” Randazzo and Hill write. “The approach is instrumental in strengthening China’s authoritarian system of government” both at home and abroad, they explain.

Meanwhile, “the EU’s ‘human-centric’ approach to AI […] strives to ensure that Western human values are central to the way in which AI systems are developed, deployed and monitored,” the pair write. “Primacy is given to the protection of fundamental rights, including those set out in the Treaties of the European Union and Charter of Fundamental Rights of the European Union.”

“This also entails consideration of the natural environment and of other living beings that are part of the human ecosystem,” the paper continues, “as well as a sustainable approach aimed at preserving the rights of future generations.”

Finally, there’s the American way: “also known as ‘bottom-up multi-stakeholderism’, [this approach] asserts that the private sector and civil society should remain key players in AI governance,” the paper explains. “The USA underlines the importance of ‘AI freedom’, a generic expression that, drawing from human rights and freedom of speech rhetoric, calls for non-governmental regulatory frameworks for American AI technology companies, developed mostly through private-sector initiatives and self-regulation.”

But such a setup “may potentially pose many risks to the USA’s stability, safety, and security,” they write. It essentially leaves AI companies as their own little sovereign entities, wielding power far beyond the US’s geographical and legal footprint – and chasing profit over human interest, if necessary.

It’s perhaps not surprising, given their summation of the three systems, that Randazzo and Hill prefer the European approach, with its strong regulation and prioritization of human dignity. But without worldwide adoption, they caution, even those ruled by such human-led administrations will be at risk from the rise of unregulated AI.

“Globally, if we don’t anchor AI development to what makes us human – our capacity to choose, to feel, to reason with care, to show empathy and compassion – we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition,” Randazzo warned.

“Humankind must not be treated as a means to an end.”

The paper is published in the Australian Journal of Human Rights.


