Griefbots: Digital Twins of the Deceased or a Step Too Far?

Discover how AI griefbots simulate communication with deceased loved ones—and the ethical, legal, and emotional issues they raise.

Have you ever wished to speak with a late relative, ask a question you never had the chance to, or hear their voice one more time? Thanks to generative artificial intelligence, it’s now technically possible to create a chatbot that simulates the speech and communication style of a deceased person. While it may sound like science fiction, this technology already exists and raises significant ethical questions: Who holds the “copyright” on a deceased person’s memory and identity?

What is a Griefbot and How Does It Work?

A griefbot is a conversational system built on a large language model (LLM) and trained or conditioned on personal material from a specific individual: messages, notes, recordings, and interviews. Using technologies such as embedding search, voice cloning, and personalized interfaces, these bots simulate communication through text or speech. First seen in experimental projects like Project December, griefbots have since evolved on platforms such as HereAfter AI and StoryFile, which let users create digital avatars based solely on content the person left behind before death.
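
To make the retrieval idea concrete, here is a minimal Python sketch of embedding search over a personal archive. It uses the open-source sentence-transformers library; the model name, archive snippets, and question are illustrative placeholders, not data or code from any real griefbot product.

```python
# Minimal sketch: embedding search over a personal archive.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical archive of messages the person left behind.
archive = [
    "I always made plum jam in September, just like my mother did.",
    "Your grandfather proposed to me on the Danube promenade in 1963.",
    "Never skip breakfast before an exam, that was my rule.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
archive_vectors = model.encode(archive, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k archive snippets most similar to the question."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = archive_vectors @ q  # cosine similarity on normalized vectors
    top = np.argsort(scores)[::-1][:k]
    return [archive[i] for i in top]

print(retrieve("How did you make your jam?"))
```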

The Tech Behind It: From RAG to Voice Cloning

Creating a griefbot involves collecting data, processing it, and using a model that does not generate answers from nothing but grounds them in content retrieved from a defined source. The most common method is Retrieval-Augmented Generation (RAG), in which the system first pulls relevant passages from its archive and then generates a reply based on them; a sketch of this retrieve-then-generate loop follows below. In voice-based versions, synthetic speech can closely resemble the original voice. Prompt engineers play a crucial role in setting behavioral boundaries and tone.
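
Building on the retrieve() function sketched above, the snippet below shows the RAG loop in its simplest form: retrieved excerpts are placed into the prompt together with behavioral rules, and the model is asked to answer only from that material. The call_llm function is a hypothetical stand-in for any chat-completion API, and the rules text is an invented example of the boundaries a prompt engineer might set.

```python
# Sketch of the RAG step: rules + retrieved excerpts go into the prompt.
# Assumes the retrieve() function from the previous sketch is in scope.

SYSTEM_RULES = (
    "You answer in the voice of the archived person, using ONLY the "
    "excerpts provided. If the excerpts do not cover the question, say "
    "you do not remember rather than inventing an answer."
)

def call_llm(prompt: str) -> str:
    # Placeholder: wire in any chat-completion client here.
    return "(model reply would appear here)"

def answer(question: str) -> str:
    # Retrieval step: pull the most relevant archive snippets.
    snippets = retrieve(question, k=2)
    context = "\n".join(f"- {s}" for s in snippets)
    # Augmented generation step: the model sees the rules and the context.
    prompt = (
        f"{SYSTEM_RULES}\n\n"
        f"Excerpts from the archive:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return call_llm(prompt)

print(answer("What did you cook in September?"))
```

The instruction to say "I do not remember" rather than invent is exactly the kind of behavioral boundary the prompt-engineering role above refers to: the archive, not the model's imagination, is meant to be the source of every reply.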

The Human Motivation: Preserving the Bond

Griefbots stem not from novelty but from the human need to preserve bonds with the deceased. Some users create them so children can get to know grandparents they never met, while others use them to continue conversations that death cut short. In cultures that value oral history and remembrance, these systems can act as digital archives. However, some find them unnatural or even unsettling.

Legal Gray Areas: Do Griefbots Have Rights?

The legal framework for griefbots remains unclear and underdeveloped. In Serbia, no specific law regulates the use of digital replicas of deceased individuals. However, if family members believe a griefbot violates dignity or personality rights, they can invoke general personality protection laws or copyright regulations (such as Articles 82–86 of the Law on Copyright and Related Rights and Article 199 of the Law on Obligations).

Global Legal Practices: U.S. vs. EU

In the U.S., some states recognize posthumous rights to a person’s voice, likeness, and digital identity, which can be legally protected after death. In contrast, most EU countries lack such definitions, creating legal uncertainty and inconsistent practices.

Ethical Dilemmas: Project December and OpenAI’s Ban

Project December, which originally ran on GPT-3, lost that access after OpenAI restricted it, citing that simulating real individuals without consent violates company policy. The restriction was not a legal ruling but an ethical decision by the company.

Conclusion: The Technology Exists, but Society Sets the Limits

In Serbia, griefbots are just beginning to emerge. For some, they preserve family legacies; for others, they cross an emotional or ethical line. Technologically, almost anything is possible—the real question is what we, as a society, are willing to accept.
