
AI Ethicists Highlight Three Horrifying Scenarios In Which Griefbots Could Stalk The Living

Speaking to the dead is now a reality, as artificial intelligence (AI) technology has made “deadbots” and “griefbots” possible. These chatbots can simulate the language and personality of our deceased nearest and dearest, providing comfort to those who are grieving. However, University of Cambridge scientists warn that griefbots could cause more harm than good, creating digital “hauntings” that lack safety standards.


The ethics of grief tech were raised by one man’s experience with a tool known as Project December. Built on an early release of OpenAI’s GPT-3 language model, Project December offered paying users the chance to speak with preset chatbot characters or use the machine learning technology to create their own. Writer Joshua Barbeau went on the record about his experience teaching Project December to speak like his fiancée, who at the time had been dead for over eight years.


By feeding the AI samples of her texts and a personal description, Barbeau enabled Project December to piece together lifelike responses, using language models to mimic her speech in text-based conversation. The authors of a new study argue that these AI creations, based on the digital footprints of the deceased, raise concerns about potential misuse, including – grim as it is to contemplate – the possibility of advertising being slipped in under the guise of a loved one’s thoughts.

They also suggest that such technologies may further distress children grappling with the death of a parent by maintaining the illusion that the parent is still alive. Their concern is that, in doing so, griefbots fail to honor the dignity of the deceased while also compromising the wellbeing of the living.

These thoughts were mirrored by psychologist Professor Ines Testoni of the University of Padova, who told IFLScience that the biggest thing we have to overcome after the death of a loved one is facing the fact that they are no longer with us.

“The greatest difficulty concerns the inability to separate from those who leave us, and this is due to the fact that the more you love a person, the more you would like to live together with them,” Testoni told IFLScience for the March 2024 issue of CURIOUS. “But also, the more one loves one’s habits, the more one wants to ensure that they do not change. These two factors make the work involved in separating and resetting a life that is different from what it was before death entered our relational field very time-consuming.”


Testoni says the suffering that comes with that separation is rooted in a lack of understanding of what it means to die. Much of the discourse surrounding what happens after we die is conceptually vague, making it tempting to look for evidence that our loved ones persist – and to find it wherever we can – when we’re struggling to let go.

“A vast literature describes the phenomenon of continuing bonds, i.e. the psychological strategies of the bereaved to keep the relationship with the deceased alive,” explained Testoni. “Death education can help to deal with these kinds of experiences by allowing us to become aware of these processes and especially to understand where the doubt about the existence beyond death comes from, which leads us to painfully question where the deceased is.”

To demonstrate their concerns, the Cambridge AI ethicists outline three fictional scenarios in which griefbots could be harmful to the living: a deadbot of a deceased grandmother that begins slipping product placements into its conversations with the family; a dying mother’s griefbot that confuses her grieving young son by suggesting an impending in-person reunion; and a deceased parent’s self-commissioned deadbot that bombards surviving adult children with messages they cannot opt out of.

“We must stress that the fictional products represent several types of deadbots that are, as of now, technologically possible and legally realizable,” wrote the authors. “Our scenarios are speculative, but the negative social impact of re-creation services is not just a potential issue that we might have to grapple with at some point in the future. On the contrary, Project December and other products and companies mentioned in [the study] illustrate that the use of AI in the digital afterlife industry already constitutes a legal and ethical challenge today.”


They urge that griefbots be crafted through consent-based design processes that implement opt-out protocols and age restrictions for users. Furthermore, if we’re to bring the dead back to life in the form of a chatbot, we’re going to need a new kind of ceremony to retire griefbots respectfully, raising the question: if we are going to have to lose a loved one all over again, is such technology simply delaying the healing process?

“Rapid advancements in generative AI mean that nearly anyone with Internet access and some basic know-how can revive a deceased loved one,” said Dr Katarzyna Nowaczyk-Basińska, study co-author and researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI), in a statement.

“This area of AI is an ethical minefield. It’s important to prioritise the dignity of the deceased, and ensure that this isn’t encroached on by financial motives of digital afterlife services, for example. At the same time, a person may leave an AI simulation as a farewell gift for loved ones who are not prepared to process their grief in this manner. The rights of both data donors and those who interact with AI afterlife services should be equally safeguarded.”

The study is published in Philosophy & Technology.

