
Griefbots: Digital Twins of the Deceased or a Step Too Far?

Discover how AI griefbots simulate communication with deceased loved ones—and the ethical, legal, and emotional issues they raise.

Have you ever wished to speak with a late relative, ask a question you never had the chance to, or hear their voice one more time? Thanks to generative artificial intelligence, it’s now technically possible to create a chatbot that simulates the speech and communication style of a deceased person. While it may sound like science fiction, this technology already exists and raises significant ethical questions: Who holds the “copyright” on a deceased person’s memory and identity?

What is a Griefbot and How Does It Work?

A griefbot is a digital model based on a large language model (LLM), trained using personal material related to a specific individual—messages, notes, recordings, and interviews. Using technologies like embedding search, voice cloning, and personalized interfaces, these bots simulate communication through text or speech. Initially seen in experimental projects like Project December, griefbots have evolved on platforms such as HereAfter AI and StoryFile, which allow users to create digital avatars based only on content left behind before the person’s death.

The Tech Behind It: From RAG to Voice Cloning

Creating a griefbot involves collecting data, processing it, and using a model that does not invent answers freely but retrieves content from a defined source. The most common method is RAG (Retrieval-Augmented Generation), which enables the system to pull information directly from its archive before generating a reply. In voice-based versions, synthetic speech can closely resemble the original voice. Prompt engineers play a crucial role in setting behavioral boundaries and tone.
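The retrieval step can be illustrated with a minimal sketch. This is not any vendor's actual implementation: it uses a toy term-frequency "embedding" and cosine similarity in place of a real neural embedding model, and the archive snippets are invented for illustration.

```python
import math
from collections import Counter

def embed(text):
    """Toy embedding: term-frequency vector over lowercase tokens.
    A real griefbot would use a neural embedding model instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, archive, k=2):
    """Return the k archive snippets most similar to the query."""
    q = embed(query)
    return sorted(archive, key=lambda s: cosine(q, embed(s)), reverse=True)[:k]

# Hypothetical archive of messages the person left behind
archive = [
    "I always loved walking by the river on Sunday mornings.",
    "Remember to water the tomatoes before it gets too hot.",
    "Your grandmother's recipe calls for a pinch of cinnamon.",
]

# The best-matching snippet would then be inserted into the LLM's prompt
context = retrieve("sunday mornings by the river", archive, k=1)
```

The point of the design is grounding: the model answers from retrieved archival material rather than from its general training data, which is what keeps a griefbot tied to what the person actually said.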


The Human Motivation: Preserving the Bond

Griefbots stem not from novelty but from the human need to preserve bonds with the deceased. Some users create them to help children connect with grandparents they have never met, while others use them to prolong emotional conversations. In cultures that value oral history and remembrance, these systems can act as digital archives. However, some find them unnatural or even unsettling.

Legal Gray Areas: Do Griefbots Have Rights?

The legal framework for griefbots remains unclear and underdeveloped. In Serbia, no specific law regulates the use of digital replicas of deceased individuals. However, if family members believe a griefbot violates dignity or personality rights, they can invoke general personality protection laws or copyright regulations (such as Articles 82–86 of the Law on Copyright and Related Rights and Article 199 of the Law on Obligations).

Global Legal Practices: U.S. vs. EU

In the U.S., some states recognize posthumous rights to a person’s voice, likeness, and digital identity, which can be legally protected after death. In contrast, most EU countries lack such definitions, creating legal uncertainty and inconsistent practices.

Ethical Dilemmas: Project December and OpenAI’s Ban

Project December, which used GPT-3, was shut down after OpenAI restricted its access, citing a policy against simulating real individuals without consent. The restriction was not a legal ruling but an ethical one.

Conclusion: The Technology Exists, but Society Sets the Limits

In Serbia, griefbots are just beginning to emerge. For some, they preserve family legacies; for others, they cross an emotional or ethical line. Technologically, almost anything is possible—the real question is what we, as a society, are willing to accept.


Question: What are memorial or grief bots?

Short answer: Memorial or grief bots (digital twins of the deceased) are AI-based chatbots that imitate the speech and behavior of deceased individuals, using their messages, recordings, and other digital traces.

Detailed explanation: The technology behind memorial or grief bots relies on large language models (LLMs) and methods such as Retrieval-Augmented Generation (RAG). Data is collected from private archives – emails, text messages, voice recordings, and even social media posts. Based on this, an interactive digital entity is created that can communicate in a way similar to the deceased. While the idea promises preservation of memory, it also raises questions about the boundary between remembrance and simulation.
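The data-collection step described above can be sketched in a few lines: raw personal texts are cleaned and split into short snippets suitable for indexing. This is an illustrative simplification; real pipelines would also handle audio transcripts, deduplication, and consent filtering.

```python
import re

def chunk_archive(raw_messages, max_words=40):
    """Split raw personal texts into short, cleaned snippets for indexing.

    Illustrative preprocessing only: normalizes whitespace, then cuts each
    message into chunks of at most max_words words.
    """
    snippets = []
    for msg in raw_messages:
        text = re.sub(r"\s+", " ", msg).strip()  # collapse whitespace
        words = text.split()
        for i in range(0, len(words), max_words):
            snippets.append(" ".join(words[i:i + max_words]))
    return snippets
```

Chunking matters because retrieval works over small, self-contained passages: a ten-page email is far less useful to a RAG system than the dozen short snippets it contains.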

Question: How does the technology behind memorial or grief bots work?

Short answer: Memorial or grief bots combine several AI techniques: RAG for searching personal data, voice cloning for reproducing speech, and prompt engineering to control behavior and communication style.

Detailed explanation: RAG ensures that the system does not generate random sentences but relies on the actual archival material of the person. Voice cloning creates a convincing auditory experience, while carefully crafted prompts define tone, formality, and communication limits. The technology is similar to that used in modern virtual assistants, but here the focus is on personalized replication of the deceased, which carries significant emotional and ethical weight.
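The prompt-engineering step can also be sketched. The rules and persona name below are hypothetical examples of the kind of behavioral boundaries the text describes, not any product's actual prompt.

```python
def build_prompt(persona_name, retrieved_snippets, user_message):
    """Assemble a system prompt that constrains the model to archived
    material and sets tone and behavioral limits.

    All rule wording here is an illustrative assumption, not a real
    deployed configuration.
    """
    context = "\n".join(f"- {s}" for s in retrieved_snippets)
    return (
        f"You are a memorial assistant speaking in the style of {persona_name}.\n"
        "Rules:\n"
        "1. Draw only on the archived material below; never invent memories.\n"
        "2. If the archive contains nothing relevant, say so plainly.\n"
        "3. Keep a gentle, respectful tone.\n\n"
        f"Archived material:\n{context}\n\n"
        f"Visitor says: {user_message}"
    )

prompt = build_prompt(
    "Grandpa",
    ["I always loved walking by the river on Sunday mornings."],
    "What did you do on Sundays?",
)
```

Rule 1 is the crucial one: it is the prompt-level counterpart of RAG's grounding, telling the model that an honest "I don't know" beats a fabricated memory.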

Question: Why do people use memorial or grief bots?

Short answer: The main motivation is the desire to preserve contact with the deceased, as a form of digital remembrance and continuation of relationships.

Detailed explanation: Some see memorial or grief bots as a source of comfort, enabling them to continue conversations with loved ones. In cultures where family memory is central, they may become digital archives preserving the voices of ancestors. On the other hand, many find them unsettling, as the interaction feels unnatural and resembles emotional manipulation. This creates a divide between those who see the technology as preserving tradition and those who see it as interfering with the natural grieving process.

Question: What are the main ethical challenges of memorial or grief bots?

Short answer: The biggest concern is simulating a person’s identity without their consent, raising issues of privacy, misuse, and emotional manipulation.

Detailed explanation: Project December (based on GPT-3) showed how controversial this is – it was shut down due to ethical concerns over using identities without consent. Even if the family approves creating a bot, it raises the question of whether the deceased would ever have agreed to it. There is also the risk of data misuse, especially when sensitive information is stored that could be exploited outside the context of “remembrance.”

Question: How are memorial or grief bots regulated by law?

Short answer: Most countries lack clear regulation – in Serbia, general personality protection laws may apply, while the U.S. and EU have partial and differing approaches.

Detailed explanation: Serbia has no specific regulations on digital replicas of the deceased, but provisions of the Law on Obligations (non-material damage) and the Law on Copyright may be applied. In the U.S., some states recognize voice and digital identity rights as inheritable, while the EU still lacks a unified framework. This legal vacuum makes the field uncertain and prone to misuse, especially by international providers operating outside domestic jurisdiction.

Question: Is this technology acceptable?

Short answer: The question is not whether it is possible to build a memorial or grief bot, but what society considers ethically and culturally acceptable.

Detailed explanation: The technology already exists and is advancing quickly. Some see it as a tool for preserving collective memory and a way for future generations to better know their ancestors. Others believe it erases the boundary between life and death, which can have serious psychological consequences. This opens a wider debate about whether society should allow “virtual life after death” or set clear limits to prevent abuse.
