AI Ethicists Highlight Three Horrifying Scenarios In Which Griefbots Could Stalk The Living

May 10, 2024 by Deborah Bloomfield

Speaking to the dead is now a reality, as artificial intelligence (AI) technology has made “deadbots” and “griefbots” possible. These chatbots can simulate the language and personality of our deceased nearest and dearest, providing comfort to those who are grieving. However, University of Cambridge scientists warn that griefbots could cause more harm than good, creating digital “hauntings” that lack safety standards.

The ethics of grief tech were raised by one man’s experience with a tool known as Project December. Built on an early release of the GPT-3 language model, Project December offered paying users the chance to speak with preset chatbot characters or use the machine learning technology to create their own. Writer Joshua Barbeau went on the record about his experience of teaching Project December to speak like his fiancée, who at the time had been dead for over eight years.

By feeding the AI samples of her texts and a personal description, Barbeau enabled Project December to piece together lifelike responses, using language models to mimic her speech in text-based conversation. The authors of a new study argue that these AI creations, based on the digital footprints of the deceased, raise concerns about potential misuse, which – grim as it is to contemplate – includes the possibility of advertising being slipped in under the guise of our loved one’s thoughts.
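
For a concrete sense of the mechanism, here is a minimal sketch of that kind of persona prompting with an off-the-shelf language model. It is an illustration only, not Project December’s actual implementation: the model choice, the persona description, the writing samples, and the build_prompt helper are all hypothetical.

```python
# Minimal sketch of persona-style generation with an off-the-shelf language
# model (NOT Project December's implementation). The persona description,
# the writing samples, and the helper below are hypothetical placeholders.
from transformers import pipeline

def build_prompt(description: str, samples: list[str], user_message: str) -> str:
    """Condition the model on a persona description plus writing samples."""
    example_lines = "\n".join(f"Them: {s}" for s in samples)
    return (
        f"A chat with a person described as: {description}\n"
        f"Examples of how they write:\n{example_lines}\n"
        f"You: {user_message}\n"
        f"Them:"
    )

generator = pipeline("text-generation", model="gpt2")  # stand-in model

prompt = build_prompt(
    description="warm, witty, fond of old films",
    samples=["Miss you already!", "Let's watch Casablanca again soon."],
    user_message="I had a rough day today.",
)
result = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]
print(result[len(prompt):].strip())  # keep only the newly generated reply
```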

They also suggest that such technologies may further distress children grappling with the death of a parent by maintaining the illusion that the parent is still alive. Their concern is that, in doing so, griefbots fail to honor the dignity of the deceased while compromising the wellbeing of the living.

These thoughts were mirrored by psychologist Professor Ines Testoni of the University of Padova, who told IFLScience that the biggest thing we have to overcome after the death of a loved one is facing the fact that they are no longer with us.

“The greatest difficulty concerns the inability to separate from those who leave us, and this is due to the fact that the more you love a person, the more you would like to live together with them,” Testoni told IFLScience for the March 2024 issue of CURIOUS. “But also, the more one loves one’s habits, the more one wants to ensure that they do not change. These two factors make the work involved in separating and resetting a life that is different from what it was before death entered our relational field very time-consuming.”

Testoni says the suffering that comes with this stems from a lack of understanding of what it means to die. Much of the discourse surrounding what happens after we die is conceptually vague, making it tempting to look for evidence wherever we can find it when we’re struggling to let go.

“A vast literature describes the phenomenon of continuing bonds, i.e. the psychological strategies of the bereaved to keep the relationship with the deceased alive,” explained Testoni. “Death education can help to deal with these kinds of experiences by allowing us to become aware of these processes and especially to understand where the doubt about the existence beyond death comes from, which leads us to painfully question where the deceased is.”

To demonstrate their concerns, the Cambridge AI ethicists outline three scenarios in which griefbots could be harmful to the living:

  • MaNana – A conversational AI service that simulates the dead, such as a grandmother, without the consent of the “data donor,” aka Grandma. It may run on a premium trial that, once it ends, starts slipping in advertising in the form of Grandma suggesting the bereaved order food from a certain delivery service.
  • Paren’t – A terminally ill woman may leave a griefbot that simulates her own personality with the goal of assisting a child through their grief after she has died. The griefbot may initially provide comfort, but if it starts to generate confusing responses, such as suggesting an in-person meet-up, it could delay the child’s healing.
  • Stay – An adult may create a griefbot of themselves to engage with their children after they have died, but not all of the children may want it. One of the children, now an adult, may wish to disengage from the tech, but instead be barraged with emails from their dead parent. Suspending the griefbot may feel like a violation of the wishes of the deceased, resulting in guilt and distress as the living feel they have no way out.

“We must stress that the fictional products represent several types of deadbots that are, as of now, technologically possible and legally realizable,” wrote the authors. “Our scenarios are speculative, but the negative social impact of re-creation services is not just a potential issue that we might have to grapple with at some point in the future. On the contrary, Project December and other products and companies mentioned in [the study] illustrate that the use of AI in the digital afterlife industry already constitutes a legal and ethical challenge today.”

They urge that griefbots be crafted through consent-based design processes that implement opt-out protocols and age restrictions for users. Furthermore, if we are to bring the dead back to life in the form of a chatbot, we will need a new kind of ceremony to retire griefbots respectfully, which raises a question: if we are going to have to lose a loved one all over again, is such technology simply delaying the healing process?
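
As a rough illustration of what consent-based design could mean in practice, the sketch below gates every griefbot session on recorded donor consent, an age check, and a standing opt-out list. The record fields, the age threshold, and the opt-out mechanism are assumptions made for illustration, not a design prescribed by the study.

```python
# Illustrative sketch of consent-based gating for a griefbot service.
# The record fields, the age threshold, and the opt-out mechanism are
# assumptions for illustration, not a design prescribed by the study.
from dataclasses import dataclass, field

MINIMUM_USER_AGE = 18  # assumed age restriction

@dataclass
class GriefbotPolicy:
    donor_consented: bool                       # did the "data donor" agree?
    opted_out_users: set[str] = field(default_factory=set)

    def opt_out(self, user_id: str) -> None:
        """Let a bereaved user permanently disengage from the griefbot."""
        self.opted_out_users.add(user_id)

    def may_converse(self, user_id: str, user_age: int) -> bool:
        """Allow a session only with donor consent, an of-age user, and
        no standing opt-out from that user."""
        return (
            self.donor_consented
            and user_age >= MINIMUM_USER_AGE
            and user_id not in self.opted_out_users
        )

policy = GriefbotPolicy(donor_consented=True)
policy.opt_out("child_now_adult")
print(policy.may_converse("child_now_adult", user_age=30))  # False: opted out
print(policy.may_converse("sibling", user_age=30))          # True
```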

“Rapid advancements in generative AI mean that nearly anyone with Internet access and some basic know-how can revive a deceased loved one,” said Dr Katarzyna Nowaczyk-Basińska, study co-author and researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI), in a statement. 

“This area of AI is an ethical minefield. It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example. At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”

The study is published in Philosophy & Technology.
