The rise of virtual beings leads to new questions regarding the rights of digital representations.
Technological advancements mean that we are rapidly approaching a time when artificial agents will be indistinguishable from human beings. For some, this marks an opportunity to breathe life into dynamic worlds with new creatures – artificially intelligent beings make excellent virtual pets.
For others, the value is in creating non-player characters that provide a rich backstory to further immerse players in the game and enrich quests. And for a third group, the value in virtual beings isn’t the ability to create a new person: it’s to recreate an existing one.
In a YouTube video that recently went viral, a woman is shown “reunited” with her late daughter in a virtual world. The mother tearfully interacts with the virtual recreation of her child, whose digital body was derived from a photogrammetry capture of the late girl’s younger sister. The virtual version of the daughter advises her siblings not to fight, a programmed response based on home videos provided by the family.
The comments on the story were divisive, to say the least. Almost immediately, commenters split into two schools of thought at odds with one another: on the one hand, virtual beings can be used to honor the memory of those who have passed; on the other, the use of a person’s likeness and behaviors carries a vast potential for abuse and misuse. While many of the arguments surfaced around this video focus on whether this particular application of the technology is the problem, in this post, I’m going to zoom out and identify opportunities for a consent-driven framework through which we can evaluate the ethical, moral, and responsible uses of personal information in generating virtual beings.
What do we mean by ‘virtual being’?
Virtual beings are a relatively fuzzy concept – people may have slightly different sets of criteria for what qualifies as a virtual being, and there is no one universally “correct” use of the term as of this moment. We’ll start wide, then narrow in on the specific set of considerations that matter in the context of our consent framework.
At its most general, a virtual being is a digital agent in a software application that is designed to mimic a set of real-world behaviors. This definition casts the widest net: it covers most agents in existing 3D applications, including rudimentary non-player characters (NPCs) in video games and animated pets that follow a user around. We may form emotional attachments to these types of virtual beings, but traditionally, their behaviors have been limited to a discrete set of programmed responses to player input. With innovations in the machine learning space, virtual beings can be given characteristics that are used to generate novel, unique responses rather than predetermined, pattern-matched ones. This allows us to programmatically create rich, non-predictable virtual beings that are trained to respond with specific mannerisms, but with their own “voice”.
Basing virtual beings on real people is not a new concept. A consent-driven framework has historically existed for representative virtual beings based on a specific person – we see this in contracts that cast actors as playable or non-playable characters in hyper-realistic games like Detroit: Become Human. However, immersive technology tools have made it easier than ever for real, living people to be turned into 3D models.
If you’ve ever made a 3D scan of yourself, do you know who has the rights to use your likeness? Did you sign something granting those rights in some way? Fortunately, if you’re reading this, you’re probably still alive, so you have well-established legal avenues to pursue damages stemming from any abuse of your likeness that runs counter to the directives you have given. But what about the likeness of those who have passed away? That is the core issue we explore today.
Death: For the Individual, or the Community?
There are many factors that shape our views on death. In individualism-oriented cultures, death has largely been treated as a taboo subject, and the wishes of the deceased individual are held as paramount. You can see this in the legal structures related to inheritance, estate planning, and probate law in the United States: when the wishes of a decedent are known in a reliable manner, those wishes are largely honored.
This is true of material possessions, and it’s true of bodily autonomy (the decedent’s wishes regarding organ donation, when specified, cannot be overridden by surviving family members even if they disagree with the deceased’s directives): death is about the individual and what they wanted. In contrast, cultures more influenced by collectivism have historically taken a more community-oriented approach, where the interests of the living are prioritized over what might have been best for the individual. Arguably, this is true in both life and death, but that’s a topic for another time.
When it comes to the conversation around virtual beings, this is the crux of the issue: is the application that creates such agents doing so to act upon the individual’s wishes, or so that the community can grieve in a healthier manner? How do our existing views on death shape our visceral reactions to an application like the one featured in this video – and what role does VR play in how we handle questions about our mortality?
At the core of this question is the model of consent that is required for a virtual being to exist, when that virtual being is designed to represent a previously-living human. Is there a statutory waiting period after which it becomes acceptable to represent anyone who has died? What if the person is living, but wants to grant their likeness to an application? Do dead people have a right to consent to being turned into a virtual being? What if their recreation revokes that consent?
Right to Representative Posthumous Dignity
Beyond the societal lens through which we view death, we can proceed from a general premise: provided appropriate consent was given (either by the individual themselves, or under the belief that it is for the benefit of the living community around the decedent), there are cases where creating a virtual being on behalf of a deceased individual is acceptable. From that point, we find ourselves presented with a question of posthumous dignity. While we talk often in immersive technology ethics about privacy and security, the question of digital dignity is rarely addressed directly.
Indeed, the broader question of how we define rights related to posthumous digital dignity has yet to be fully explored. The dignity of a deceased user also has implications for the dignity of those associated with the decedent, and technological advancements have outpaced our legislative, regulatory, and judicial processes in this arena. What damages may stem from an unchecked ability to use a virtual being however its developer sees fit, when that virtual being shares the likeness of an actual person? Would you be bothered if a 3D model of a recently deceased loved one was suddenly available as a default avatar on VRChat?
And so, the next consideration in creating these beings is dignity. It isn’t a surprise that with dignity also comes a question of… consent! The easiest way to understand how a person’s likeness should or should not be used is to ask the person directly, while they’re living and able to make that directive known. But absent an individualized way for people to make these wishes known, there is an opportunity to create an industry-standard set of best practices that companies adhere to when representing deceased users: one that protects not just the dignity of the user, but that of the surviving members of their community.
On Identity and Training Agent Behavior on Personal Data
Assuming that a user has granted consent to “seeding” a virtual agent, and that there is a generally accepted understanding of the dignity afforded to said virtual being, there remains a question of how much personal information should be used to generate the agent. Speech patterns, body language, physical movement, voice, appearance, sentiment – we have tools today that are capable of training algorithms on all of these sets of data that we produce as humans. Even where there is an implied understanding that a user consents to being represented as a virtual being, it is important to articulate the extent to which their personal data can be used.
For example, some users may be okay with only their voice or conversational mannerisms being used to seed a virtual being; others may be comfortable with their physical likeness being used; and still others may be fine with being entirely recreated as a virtual being that looks just like them and is trained on their conversation history, tweets, writings, voice recordings, and videos.
Like many other types of digital information, there is currently little legal or regulatory guidance regarding data protection for the deceased, especially in the absence of an advance directive. Furthermore, there is a question of interplay between systems: if I grant a VR application the ability to create a virtual agent of me, does that consent extend to requesting videos from family members to build a stronger likeness? To scraping the internet to find all of my conversation histories? Would the application go so far as to read my Facebook messages to understand how I communicate with other people? I would argue that consent to creation is not enough, and that an opt-in, granular system for data directives is a necessary component of creating a virtual being from a person’s likeness.
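To make that concrete, here is a minimal sketch of what such a granular, opt-in directive might look like as a data structure. This is purely illustrative – every type and field name is hypothetical, not drawn from any existing standard or platform:

```typescript
// Hypothetical, illustrative schema for granular posthumous data directives.
// "unspecified" must always be treated as "denied" – silence is not consent.
type ConsentState = "granted" | "denied" | "unspecified";

interface DataSourceDirective {
  source:
    | "voice_recordings"
    | "photos_and_3d_scans"
    | "public_posts"
    | "private_messages"
    | "videos_from_family"
    | "scraped_web_content";
  consent: ConsentState;
  restrictions?: string[]; // e.g. "only recordings I uploaded myself"
}

interface VirtualBeingDirective {
  subjectId: string;                    // the person this directive covers
  likenessConsent: ConsentState;        // may my appearance be reconstructed?
  voiceConsent: ConsentState;           // may my voice be synthesized?
  behavioralModelConsent: ConsentState; // may an agent be trained to respond "as me"?
  dataSources: DataSourceDirective[];   // per-source, opt-in grants
  expiresYearsAfterDeath?: number;      // optional sunset clause
}
```

The important design choice here is that silence is never treated as consent: any data source left unspecified stays off-limits, much as organ donation requires an affirmative opt-in.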
Contextual Use of Implied Identity
Let’s imagine that a user (or their community) has granted consent to all of these things. Now we get to another critical question to examine – the context in which your representative virtual being will be used after your death. Do you agree to a perpetual license of a specific VR platform using your representative virtual being? Can the algorithms or underlying data that power the virtual being be sold? Can multiple different virtual beings derived from your personal information be made? Should there be restrictions around the types of environments that virtual beings built from your information can appear in?
This area is probably the most difficult one for which to enumerate a specific framework. It is nearly impossible to outline all of the possible ways that a virtual being could be used, or for a consent-driven model to completely cover all future cases. For example, imagine that a user consents to having their representative virtual being used for educational purposes, but not for medical purposes. After their death, the platform that is the custodian of the virtual being implements a new category of ‘medical education’. Which consent directive applies to the existing agent? Perhaps the agent could decide for itself.
So, a concrete framework for the contextual use of one’s identity could be to provide a general directive based on an initial set of assumptions, and then consent to the process through which future, unanticipated decisions should be made.
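As a sketch of what that could look like in practice, consider the following resolution logic. Again, the context names and fallback policies here are hypothetical, meant only to illustrate the shape of such a framework:

```typescript
// Hypothetical sketch of resolving a usage context against a decedent's directive.
// Context names and fallback policies are illustrative only.
type FallbackPolicy = "deny" | "defer_to_estate" | "defer_to_platform_review";

interface ContextualUseDirective {
  allowedContexts: string[]; // e.g. ["education", "family_memorial"]
  deniedContexts: string[];  // e.g. ["advertising", "medical"]
  transferable: boolean;     // may the underlying model or data be sold on?
  fallback: FallbackPolicy;  // the process consented to for genuinely new contexts
}

function resolveUse(
  directive: ContextualUseDirective,
  context: string
): "allowed" | "denied" | FallbackPolicy {
  if (directive.deniedContexts.includes(context)) return "denied";
  if (directive.allowedContexts.includes(context)) return "allowed";
  // A new category such as "medical_education" matches neither list,
  // so it falls through to the process the person agreed to while living.
  return directive.fallback;
}
```

Under this model, the ‘medical education’ case above matches neither the allowed nor the denied list, so it falls through to whatever process the person consented to in advance – rather than being decided unilaterally by the platform, or by the agent itself.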
Where do you stand?
I’ve given this a lot of thought, and I still don’t have a concrete answer. When I’ve asked others for their thoughts, the perspectives vary widely: some say that they’d like their online information put into the public domain and view their representative agents as a form of digital immortality, while others want everything wiped out completely. This, to me, is why building a consent-based framework is so critical: there just isn’t a one-size-fits-all answer to what we want done with our identities, and we need to begin building those frameworks into our VR applications now.