AI May Infringe On Your Rights And Insult Your Dignity (Unless We Do Something Soon)

September 11, 2025 by Deborah Bloomfield

With every significant breakthrough in technology, humanity has had to reckon with equally significant drawbacks – and artificial intelligence (AI) is definitely no exception. We’re all aware of many of its downsides by now: it’s bad for the environment; it’s bad for our brains; increasingly, it’s even turning deadly – and, let’s face it, it’s not even good enough at its job to make up for all that.

But according to a new paper from law professors Maria Salvatrice Randazzo and Guzyal Hill, that’s not the end of it. As things stand, they argue, AI models are little more than a “black box problem” – an input, an output, and a giant gaping “trust me, bro” in the middle where who knows what rights and boundaries are being crossed or why.

It is, Randazzo said in a statement this week, a “very significant issue” – and it’s “only going to get worse without adequate regulation.” AI algorithms are neither transparent nor accountable, she argues: they make decisions that humans can’t necessarily replicate or explain – and as a result, neither can we easily know whether our rights have been violated by the machine’s use.

You don’t have to look far for a real-world example of this. Only last month, AI startup Anthropic agreed to pay thousands of authors a total of $1.5 billion – yes, with a b – to settle a class-action lawsuit after allegedly downloading potentially millions of pirated books to train its chatbot. Before that, AI systems were catching heat in the UK for their use in the justice and welfare systems – applications which, by the nature of how such machines are trained, ended up perpetuating human prejudices, unfairly penalizing innocent people, and challenging existing laws around data protection.

Of course, none of this is actually AI’s fault – it’s just working as designed. The problem is that we expect too much of it: “AI is not intelligent in any human sense at all,” Randazzo explained. “It is a triumph in engineering, not in cognitive behavior.”

“It has no clue what it’s doing or why,” she said. “There’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”

What’s needed, then, is a robust system of regulation – and in that respect, we’re currently seeing three distinct trends: the “state-centric” approach favored by China; the “human-centric” approach being followed by the European Union (EU); and, finally, what’s being called the “market-centric” approach of the US.

“China’s ‘state-centric’ approach to AI governance asserts that states, not private sector actors, should be driving AI governance,” Randazzo and Hill write. “The approach is instrumental in strengthening China’s authoritarian system of government” both at home and abroad, they explain.

Meanwhile, “the EU’s ‘human-centric’ approach to AI […] strives to ensure that Western human values are central to the way in which AI systems are developed, deployed and monitored,” the pair write. “Primacy is given to the protection of fundamental rights, including those set out in the Treaties of the European Union and Charter of Fundamental Rights of the European Union.”

“This also entails consideration of the natural environment and of other living beings that are part of the human ecosystem,” the paper continues, “as well as a sustainable approach aimed at preserving the rights of future generations.”

Finally, there’s the American way: “also known as ‘bottom-up multi-stakeholderism’, [this approach] asserts that the private sector and civil society should remain key players in AI governance,” the paper explains. “The USA underlines the importance of ‘AI freedom’, a generic expression that, drawing from human rights and freedom of speech rhetoric, calls for non-governmental regulatory frameworks for American AI technology companies, developed mostly through private-sector initiatives and self-regulation.”

But such a setup “may potentially pose many risks to the USA’s stability, safety, and security,” they write. It essentially leaves AI companies as their own little sovereign entities, wielding power far beyond the US’s geographical and legal footprint – and chasing profit over human interest, if necessary.

It’s perhaps not surprising, given their summation of the three systems, that Randazzo and Hill prefer the European approach, with its strong regulation and prioritization of human dignity. But without worldwide adoption, they caution, even those ruled by such human-led administrations will be at risk from the rise of unregulated AI.

“Globally, if we don’t anchor AI development to what makes us human – our capacity to choose, to feel, to reason with care, empathy, and compassion – we risk creating systems that devalue and flatten humanity into data points, rather than improve the human condition,” Randazzo warned.

“Humankind must not be treated as a means to an end.”

The paper is published in the Australian Journal of Human Rights.
