An “AI afterlife” is now a real option, but what becomes of your legal status?
Creating an interactive “digital twin” that can speak with loved ones after you die has moved from sci-fi to a consumer product. Generative AI now enables “griefbots” or “deathbots,” tools that simulate a person through a voice model, video avatar, or text chatbot trained on their messages, recordings, photos, and other data. Some are built by grieving families after a death, while a growing number of services invite people to build their own digital twin while still alive, then switch it on later.
That shift from posthumous re-creation to pre-planned digital replication changes the legal and ethical picture, says Wellett Potter, Senior Lecturer in Law at the University of New England. Instead of someone else reconstructing you without consent, you are actively authorizing a company to create an AI simulation of you for future use. The catch is that consent does not answer the bigger question: once you are gone, what rights, controls, and protections follow your identity, your data, and the AI outputs that speak in your name?
How an AI twin is made, and what you are really handing over
Most “AI afterlife” services work in a similar way. You provide training data in a guided format, for example: recorded stories in your voice, written answers to prompts, personal memories, values, and preferences, plus photos or video for likeness. The system uses that material to generate a model that can respond like you, at least within the limits of the data and the technology.
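To make that data flow concrete, here is a minimal, hypothetical sketch of how uploaded material might be compiled into a persona prompt that conditions a general-purpose chat model. The PersonaProfile fields, build_system_prompt(), and query_model() are illustrative assumptions, not any real service’s code or API.

```python
# Hypothetical sketch only: field names and functions are illustrative
# assumptions, not any real "AI afterlife" service's implementation.

from dataclasses import dataclass, field


@dataclass
class PersonaProfile:
    name: str
    stories: list[str] = field(default_factory=list)        # transcribed voice recordings
    answers: dict[str, str] = field(default_factory=dict)   # guided interview Q&A
    values: list[str] = field(default_factory=list)         # stated values and preferences


def build_system_prompt(profile: PersonaProfile) -> str:
    """Compile the uploaded material into instructions that condition a
    general-purpose chat model to answer 'in character'."""
    interview = "\n".join(f"Q: {q}\nA: {a}" for q, a in profile.answers.items())
    stories = "\n".join(f"- {s}" for s in profile.stories)
    return (
        f"You are a conversational simulation of {profile.name}.\n"
        f"Speak in their voice, drawing only on the material below.\n"
        f"Stated values: {', '.join(profile.values)}\n"
        f"Recorded stories:\n{stories}\n"
        f"Guided interview:\n{interview}\n"
    )


def query_model(system_prompt: str, user_message: str) -> str:
    # Placeholder for whichever hosted model the service actually calls.
    # The key point: replies are generated fresh, not retrieved recordings.
    raise NotImplementedError("connect to a chat model of your choice")


if __name__ == "__main__":
    me = PersonaProfile(
        name="Alex",
        stories=["The summer we drove to the coast with a broken radio..."],
        answers={"What mattered most to you?": "Keeping the family close."},
        values=["honesty", "dry humour"],
    )
    print(build_system_prompt(me))
```

The sketch is deliberately simple, but it makes the essential point: the “twin” is a prompt-plus-model pipeline that a company operates and can change, not a sealed recording of you.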
What often gets overlooked is the agency you delegate in the process. You are not only uploading files; you are entering a contract that may give a private company ongoing power to store, process, transform, and re-present your identity after you die. That can include decisions about who gets access, how long the twin persists, whether the model can be updated, and what happens if the service changes hands, changes terms, or shuts down.

The core legal puzzle: your identity is not always treated as property
Many people assume they “own” their voice, face, personality, and name the way they own a house or a bank account. In law, that is not consistently true.
In Australia, the point is especially sharp: there is no broad, standalone “personality right” or general right of publicity that automatically lets someone control commercial uses of their image or likeness. Legal protection tends to come indirectly, through a patchwork of copyright, consumer protection, defamation, passing off, confidentiality, and privacy-related rules. That patchwork can leave gaps when the harm is not a traditional scam or a clearly defamatory statement, but a simulated presence that feels wrong, distorted, or exploitative.
Other places approach identity control differently, and that matters because AI afterlife services operate globally.
- In the United States, many states recognize a “right of publicity” that can restrict commercial use of someone’s name, image, likeness, and sometimes voice. In some states, this right continues after death and can be managed by an estate.
- In the United Kingdom and many common-law systems, identity control is often pursued through passing off, misuse of private information, breach of confidence, and data protection, rather than a single unified “publicity” right.
- Across the European Union, data protection rules can be more central. Even when “the dead” are not covered the same way as the living under privacy statutes, the data of living relatives, plus consumer and platform rules, can still create obligations and leverage.
So the same digital twin, built from the same files, can be treated very differently depending on where you live, where the company is based, where the servers are, and where the users interacting with the twin are located.
Copyright helps, but only up to a point
Copyright law tends to protect specific works in material form, not “you” as a concept. A personality, presence, or “self” is too abstract to be copyrighted. However, the inputs you provide often are protected: your written responses, recorded stories, photos, and videos are works that can carry copyright or related rights.
The hard part is the outputs. If your digital twin produces new text or speech autonomously, many legal systems struggle to treat that output as a copyrighted work owned by “you,” especially if it is not the product of human authorship in the traditional sense. That creates practical questions:
- Who owns the AI-generated outputs, if anyone does?
- Can the company claim ownership through contract, even if copyright is unclear?
- Can your estate control, delete, or license the outputs, or are they governed entirely by the service’s terms?
Moral rights, where they exist, usually protect human creators against false attribution and derogatory treatment of their works. They tend not to fit neatly when the “work” is a machine-generated response that imitates a person rather than a human-authored creation.
Privacy and data protection: the living still have rights, the dead often do not
Privacy is one of the biggest pressure points, and it gets complicated fast.
In many jurisdictions, privacy and data protection laws are primarily designed to protect living persons. After death, those rights can weaken or disappear, depending on the country. Even so, an AI twin can still trigger legal exposure because:
- the training dataset may contain private information about other living people (family, friends, partners, colleagues)
- interactions with the twin may reveal sensitive details about living people, or generate them
- the service may continue processing and profiling data for analytics, product improvement, or marketing
Even when a legal right ends at death, the ethical risk does not. Families can be harmed by disclosures, distortions, or manipulative monetization of grief, and the legal remedies may be indirect and inconsistent.
Contracts quietly do most of the work, and that should worry you
Because clear “identity ownership” rules are uneven, the terms and conditions often become the main governing system. Contracts can set:
- whether the company can reuse your materials to train other models
- whether your twin can be used in marketing or demonstrations
- what happens to your data if the company is acquired
- how long data is stored, whether deletion is possible, and what “deletion” actually means
- who is authorized to trigger the twin, manage access, or request shutdown
This is why “consent” is not a single checkbox. The real consent is the full contract, plus any later changes the platform may impose. Many services reserve the right to update their terms, which raises an uncomfortable scenario: you might agree to one arrangement, only for your estate to be dealing with a different one years later.
What happens if the company fails, gets sold, or the tech changes?
Digital afterlife services depend on the long-term survival of a business. That is a fragile foundation for something marketed as “forever.”
If the business closes, the model might vanish, or worse, the assets might be sold. If the company is acquired, the buyer may have different standards and incentives. If the technology shifts, older models might be “upgraded” in ways that change the personality or tone, even when the inputs remain the same.
That creates the risk of a second loss: loved ones can grieve again if the twin disappears, changes dramatically, or becomes inaccessible because of pricing changes. Most legal systems do not yet treat that as a recognizable consumer harm with tailored remedies, even though it is emotionally significant.

Distortion and drift: the twin may not stay “you”
Even with careful training, generative systems can misrepresent. Outputs can drift, especially when models are updated or when they generate responses probabilistically rather than retrieving fixed recordings. Over time, a deathbot can become a stranger wearing familiar language.
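As a toy illustration of the difference, the sketch below contrasts replaying a fixed recording with sampling replies from a model whose weights shift after an update. The reply probabilities and phrasing are invented for illustration only: the first approach is identical every time, the second can slowly stop sounding like the person.

```python
# Toy illustration with invented reply probabilities: a fixed recording is
# retrieved verbatim, while a generative twin samples each reply, and a
# model update can shift what it is likely to say.
import random

FIXED_RECORDING = "Call your mother on Sundays."

REPLIES_V1 = {"Call your mother on Sundays.": 0.8,
              "Family dinners matter most.": 0.2}
REPLIES_V2 = {"Call your mother on Sundays.": 0.3,
              "Family dinners matter most.": 0.5,
              "Keep weekends for yourself.": 0.2}   # something never actually said


def retrieve() -> str:
    return FIXED_RECORDING  # identical on every playback


def generate(reply_weights: dict[str, float]) -> str:
    replies = list(reply_weights)
    weights = list(reply_weights.values())
    return random.choices(replies, weights=weights, k=1)[0]


if __name__ == "__main__":
    print("retrieval:", [retrieve() for _ in range(3)])
    print("model v1: ", [generate(REPLIES_V1) for _ in range(3)])
    print("model v2: ", [generate(REPLIES_V2) for _ in range(3)])
```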
Drift is not only an emotional problem; it can also become a reputational and safety problem. A distorted twin could say things you never believed, disclose secrets, or give advice that causes harm. When that happens, responsibility becomes murky: is it the company, the person who configured the settings, the family member who prompted the model, or no one at all?
Existing legal tools such as defamation, consumer law, negligence, and platform liability are not designed for a speaking simulation of the dead that is neither fully a product nor fully a person.
A global regulatory gap is forming, and it will not close itself
The direction is clear even if the rules are not. As grief tech grows, regulators will face pressure to define:
- posthumous control of identity and likeness
- baseline consumer protections for digital afterlife services
- limits on reuse of training data
- clear rules for access, deletion, portability, and shutdown
- accountability when a simulated person causes real-world harm
Different jurisdictions will likely take different routes, some through privacy and consumer law, some through new AI-specific rules, some through new “personhood-adjacent” rights for identity and likeness. Until that happens, the contract remains the main gatekeeper.
Practical safeguards if someone is considering an AI twin
Anyone thinking about creating a digital twin for after death should approach it like a major financial decision, not a novelty app. Practical safeguards include:
- reading the terms for ownership, reuse rights, and data deletion
- clarifying who controls access after death, and how that is verified
- insisting on clear shutdown and export options, including what happens if the company folds
- limiting third-party data in your training set, especially private stories involving others
- documenting your wishes in writing for your family, and aligning them with estate planning where possible
Technology can offer comfort, and in some cases it may genuinely help people process loss. What it cannot yet offer is legal clarity. For now, the “AI afterlife” is not only a new form of presence but also a new form of vulnerability, one that sits at the intersection of grief, commerce, and a legal system that still treats identity as something you are, not something you control.
Source: General, with reporting by The Conversation
