A long-standing science fiction riff which may be on the verge of becoming reality is the idea of the Copy: a digital emulation of a human mind. The viability of the idea is still debatable, depending largely upon the assumptions of any given variant, but its ethical and philosophical implications are important with regard to technologies which are most definitely already becoming reality. Copies are often referred to in technical terms as “Whole Brain Emulations” (WBE), or more loosely as “Uploads” (after roboticist Hans Moravec, who in his 1988 book “Mind Children” described a destructive neural scan and called it the “download” of a mind). The philosophical issues which naturally arise when considering such a technology are the stuff of introductory college courses, usually focused on questions of personal identity, rights, and ownership. Perhaps the most extensive and definitive early treatment of such ideas can be found in the novels of Greg Egan, particularly “Permutation City” (1994) and “Diaspora” (1997). Egan used the term Copy to refer to a digital person, thereby highlighting the identity issues which arise when your mind is, in some sense, not unique.
The idea of Copies has become somewhat more prevalent in mainstream culture in recent years, as available processing power has vastly increased and people have become more familiar with information technology and its possibilities. A notable example is the dark sci-fi TV program Black Mirror, which across multiple episodes has featured Copies whose circumstances raise alarming ethical and social issues. In the episode “White Christmas”, we see Copies forced to live out extended periods of (simulated, but subjectively real) time in order to extract confessions, to coerce them into servitude, or simply to torture them. In recent years we have also seen newspaper articles speculating on the use of such simulation technologies as a way to punish criminals for inordinate periods of time, or in distinctly cruel and unusual ways. Given the human talent for unthinkingly inflicting suffering on others, and the potential power of WBE technologies, civilized people should be deeply concerned about curbing any such excesses.
Ethics aside, there are at least two issues regarding such possibilities which should give us pause for thought, and which illustrate the ways in which our human intuitions may fail us in an increasingly strange modern world. The first is the question of “digital mental health” (to coin a phrase); i.e. what effects extended periods of solitude and other tortures might have on a Copy. The kinds of ill-treatment routinely depicted in the programs and articles mentioned above simply could not be tolerated by a human being, who would fall apart after a certain point (and that is before considering the peculiar physical situation of Copies, who do not necessarily need to eat or perform other bodily functions). The “problem” of Copies losing their minds could of course be circumvented with software hacks (such as resetting the Copy’s mental state periodically), but such hacks would negate the torture to some degree. For example, say a Copy was prone to psychotic breaks after several (subjective) years of isolation, and your “fix” was to reset its memory each time. The memory reset would have to be total, as any memory of previous torture would only accelerate the Copy’s disintegration. If you do a total memory reset, however, then from the Copy’s point of view it has only ever been isolated once, and the point of extended isolation is negated (as the sketch below illustrates).
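To make the reset argument concrete, here is a minimal Python sketch. Everything in it is hypothetical and invented purely for illustration: the Copy’s state is snapshotted before isolation and restored after each cycle, so however many times the cycle runs, the restored Copy’s subjective history contains exactly one period of isolation.

```python
# Purely illustrative sketch (all names and numbers are hypothetical):
# a Copy's state is modelled as an object that is restored from a
# snapshot after each isolation period. Because the restore is total,
# every cycle begins from an identical state, so the Copy's subjective
# history never accumulates more than one period of isolation.

import copy

class CopyState:
    def __init__(self):
        self.memories = []  # subjective history accumulated so far

    def endure_isolation(self, years):
        self.memories.append(f"{years} subjective years of isolation")

baseline = CopyState()
snapshot = copy.deepcopy(baseline)  # snapshot taken before any isolation

for cycle in range(1000):           # a thousand "punishment" cycles...
    victim = copy.deepcopy(snapshot)
    victim.endure_isolation(5)
    # ...but from the victim's point of view, it has only ever been
    # isolated once; the assert passes on every single cycle.
    assert victim.memories == ["5 subjective years of isolation"]
```

The assert holds on every iteration: total restoration makes the thousandth punishment subjectively indistinguishable from the first, which is exactly why the reset defeats the purpose of extended isolation.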
The point here is that Copies are not (or will not be) human. That may sound like a trivial observation, but it carries deep implications which could easily be missed. People have clearly intuited Copies’ potential as superhuman torture victims of a sort, and we have briefly examined the limitations of applying human experience and expectations to them. To go further, however, we must stop thinking of Copies as human and instead think of them as complex software agents of at least human-level intelligence which (given access to suitable resources) could potentially upgrade their own abilities. Any Copy with time in isolation has time to plan, and may have more subjective time available within any given period than any natural human does. They could augment their perception, memory, and other cognitive abilities with software, particularly if they have access to the internet. A group of Copies could potentially solve problems much more effectively than any group of humans could in the same period of subjective time, by directly sharing memories rather than having to explain things to each other verbally.
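The subjective-time point is simple arithmetic, but it is worth spelling out. A rough sketch follows, assuming an emulation can run faster than real time; the 100× figure is an arbitrary assumption for illustration, not a prediction.

```python
# Back-of-the-envelope sketch of subjective time for a fast-running
# emulation. The speedup factor is a made-up assumption; the
# multiplication is the only claim being made here.

def subjective_years(wall_clock_years: float, speedup: float) -> float:
    """Subjective time experienced by an emulation running `speedup`
    times faster than real time."""
    return wall_clock_years * speedup

# One calendar year of "lockdown" at a (hypothetical) 100x speedup
# gives the Copy a full century of subjective planning time.
print(subjective_years(1.0, 100.0))  # -> 100.0
```

Even modest speedups compound the planning advantage: at 100×, a single month of confinement gives a Copy more than eight subjective years to think.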
At this point you may be thinking that none of these potential capabilities are a problem if a Copy does not have access to resources outside its immediate simulated environment, and does not have whatever system privileges are required to upgrade itself; if it is “locked down”, in other words. If so, then you are forgetting not only that (1) the Copy has at least human-level intelligence, but also that (2) it may well have outside help in circumventing such controls. How hard is it to imagine a “Copy Liberation” movement, when even a single “escaped” Copy or sympathetic human with the ability to write a jailbreak virus of some description would suffice? In short, you cannot be sure of your ability to completely control Copies at all times, and once they are loose they could become very dangerous indeed, so it would be wise to consider their welfare… just in case they decide to return the favour.
January 29, 2018 at 7:25 pm
Well, I certainly disagree with just about that whole thing. Go figure. A substantial component of my own writing is the deconstruction of the “copy” trope that many people fall for, yet this article takes such terminology and conceptualization as the de facto groundwork of its entire thesis. From the perspective I present throughout my own writing, the copy issue is such a misnomer that it puts any subsequent analysis fully off the map. The copy mistake must be understood and corrected *before* seriously considering any other issues, since it miscolors the whole topic so badly as to veer essentially into total error, in my opinion.
Furthermore, the article’s tone is frustrating in its undefended presumption that statements which are actually open matters of philosophical debate are settled fact.
For example:
[“The point here is that Copies are not (or will not be) human. That may sound like a trivial observation…”]
Trivial? It isn’t remotely settled, much less trivially so. There are plenty of ways in which to judge mind uploads as being human, but the article’s phrasing shuts down any subsequent consideration of the matter. The author is welcome to *argue* that uploads aren’t human, but not simply *presume* so, and further assume that the audience will obviously agree.
I would challenge the author to offer a way in which uploads are copies that doesn’t run straight into the sorts of logical conundrums I have carefully illustrated in my book or a handful of related articles.
That’s my reaction at any rate.
Cheers!
January 29, 2018 at 9:48 pm
Interesting topic. But not all technological uploads cause or require the “donor” to die. At least fictionally, we can imagine a scanning technique with very high resolution that doesn’t destroy the target brain during the scanning process. In the novel Mindclone, for one, the donor and his digital copy become friends. And rivals. They diverge based on subsequent experiences, and become more like twins than donor and copy. Many other issues come up in the novel, which explores the consequences of the new technology on the wider world: the military creating digital copies of its best soldiers, various strategic experts, and so on. The ramifications are huge.
January 31, 2018 at 12:45 pm
If we exist in a quantum (or any other kind of) multiverse, we are all copies.
January 31, 2018 at 6:00 pm
I don’t buy the multiverse theories… not a testable model, etc.
January 31, 2018 at 6:27 pm
Well, in that case you need to explain why the universe is as it is, with all its laws and constants, and could not be otherwise.
February 1, 2018 at 9:21 pm
I’m not convinced I need a multiverse theory to explain those things. What single provable point requires a multiverse?
February 2, 2018 at 1:36 pm
The fact that no theory, either now or in the foreseeable future, is likely to derive or explain the values of the existing constants as unique.
February 4, 2018 at 5:24 pm
For me, that is not enough to justify jumping to the conclusion that there is a multiverse.