Sing The Body Electric

“Mind Uploading” is the idea that the pattern of information which constitutes your perceptual awareness, memories, personality, and all other cognitive functions can be abstracted from the brain it developed in, and “run” on a different computational substrate. In other words: the stuff which makes you, you could in principle escape the inherent limitations of human biology… such as inevitable short-term mortality. If it is plausible, that is a profoundly powerful and transformative idea.

Of course, the uploading idea has a myriad of opponents. The vast majority are ill-informed people whose opposition relies more on instinct and straw-clutching than good arguments well supported by evidence. To be fair, the same could be said of the uploading idea’s many dilettante fans who simply like the notion without having seriously researched its plausibility. The paragraphs below offer a whirlwind tour of objections to uploading, and the degree to which they should be taken seriously.

Where to Begin? You Are Already A Machine

Human argumentation is rarely half as rational as we like to imagine it is. For a start, our estimates and judgments of whether an argument is correct are heavily dependent on context. More specifically, we are overly influenced by what are known as “frames” or “anchors”; i.e. by the initial point of reference we use to start thinking about… anything. For example, a million dollars sounds like a lot to a homeless person, and like considerably less to Bill Gates.

This is highly relevant to arguments about uploading, because people tend to begin those arguments from different starting points, depending on whether they like the idea or not. Opponents of uploading tend to start out with an implicit assumption that humans and machines are very different things, and never the twain shall meet (for one reason or another). Uploading advocates, however, will frequently argue that the human organism is already a machine of sorts, thus acting as a kind of living testimony to the possibility of intelligent, conscious machines.

The core issue tends to be a fundamental misunderstanding (albeit one that is often deliberate) over the question of what it is to be a machine. Opponents invariably define machines in terms of those artificial devices which already exist or have existed, whereas advocates focus on the underlying principles of known organisms and artifacts. In case you hadn’t guessed: I am an uploading advocate, and I believe that we are – in the deepest sense – already machines, and always have been.

Computational Power, S-Curves, & Technological Singularities

Of course, that still leaves a considerable (some would say intractable, even impossible) gulf between our current technical ability on the one hand, and the ability to intelligently alter, replicate, and improve upon our own biological machinery on the other. For a cogent, exhaustive argument for the ability of accelerating technological development to deliver on these promises, I would suggest reading “The Singularity Is Near” by Ray Kurzweil.

The basic premise of that book is that technological innovations make further innovation easier to produce, which in turn leads to the (already well observed) acceleration of change. Accelerating change leads to an exponential (rather than linear) pattern, by which we might reasonably expect to see twenty thousand years of technological innovation at the c.2000 CE rate by the end of the 21st Century. That is definitely enough innovation to bridge the kind of technical gap we’re talking about. Of course, opponents like to deny that accelerating change even exists, but their claims are increasingly hard to take seriously if you pay attention to the latest developments coming out of cutting-edge labs.
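Kurzweil’s headline figure can be sanity-checked with a little arithmetic. The sketch below is a toy model, not his actual calculation: it simply assumes the rate of progress doubles every decade (a hypothetical but commonly quoted figure) and integrates that rate over one century, which lands within the same order of magnitude as the “twenty thousand years” claim.

```python
import math

def equivalent_years(horizon_years=100, doubling_period=10):
    """Cumulative progress over `horizon_years`, measured in equivalent
    years of progress at the starting (t=0) rate, assuming the rate of
    progress doubles every `doubling_period` years.

    This integrates 2**(t / doubling_period) dt from 0 to the horizon.
    The doubling period is an illustrative assumption, not a measurement.
    """
    k = math.log(2) / doubling_period
    return (math.exp(k * horizon_years) - 1) / k

total = equivalent_years()
print(round(total))  # roughly 15,000 equivalent years in one century
```

Under these toy assumptions, a single century delivers on the order of fifteen thousand years of year-2000-rate progress; tweaking the doubling period moves the result around, but any plausible value yields the same qualitative conclusion.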

Minds, Bodies, and… Intestines?

Broadly speaking, on the technical level (i.e. leaving aside arguments that we can upload minds, but shouldn’t), there are two types of opponent argument. One is that the mind cannot be reduced to information and thus modelled. The most common version of that argument comes from religion, involves “souls” (whatever they are), and is addressed further below. The second is that the mind can be modelled in terms of information, but we are modelling the wrong information.

I would not want to dismiss that second argument too quickly. To be frank, more often than not it is perfectly on the money. It’s just that I believe we are moving closer and closer to modelling (and understanding) the right information all the time. Let’s be clear, here: The oft-heard refrain that “the mind and consciousness are complete mysteries, we have no idea how they work” is a ridiculous, infantile catchphrase used only by people who are wilfully ignorant of the last twenty years of developments in cognitive neuroscience and related scientific disciplines.

AI research is littered with ridiculously simplistic assumptions from people who’ve had little or nothing to do with cognitive science or any related discipline, working on their own narrow-domain problems and then somehow assuming that their models capture the intricacies of, well… everything. The first “AI Winter” and the challenge of developing competent AI chess players were perhaps the most notable early wake-up calls in that department. In short, the moral of that story is that AI researchers have a habit of making huge, unwarranted assumptions.

These days, it’s much harder to find a serious researcher who thinks you can abstract away most neurological processing without “throwing the baby out with the bathwater”. Complexity is increasingly respected and explored, which means neither dismissing it nor holding it up as some magical ‘deus ex machina’ from which consciousness will emerge if we can only hook enough artificial neurons up to each other…

Anyway, such issues lead to some interesting grey areas, which are often (in my opinion) misused for the purposes of argument. For example, certain biologists have made a lot out of observed connections between the human gut microbiome and “enteric nervous system” on the one hand and cognition as a whole on the other. The research literature essentially says that human intestinal health affects our mood and other personality aspects. In itself, that is an entirely reasonable observation, of course. It is hardly surprising that our moods and cognitive abilities are highly sensitive to the state of the body they are instantiated in!

It is quite another thing, however, to suggest (as opponents sometimes do) that this intestinal “second brain” (so-called by popular science writers) is intrinsic to intelligence or conscious awareness, or any harder to model than any other part of the extended nervous system. You could argue up this garden path for a long time, but the basic reality can be illuminated with a simple reductio ad absurdum: Do you really believe that if you could fully capture everything happening in a person’s brain but not their (personal, specific) intestines, then something fundamentally definitive about that person would be missing? If you do, then I would hazard that you have some rather, ahem, fringe notions about what information is actually processed by the enteric nervous system.

Leaping the Gap from Data to Software

Another intriguing, and yet ultimately spurious objection to uploading is to say that you can collect all the neurological data you want, but without some kind of “animating force” in the form of properly configured software, it would all be for nothing. On a certain level this argument can carry some weight, but again it’s easy to take that too far.

The force of this objection correlates with how much abstraction of human neural activity uploaders are committed to. Basically, we know that humans are intelligent and consciously aware. With a technology that modelled the human nervous system down to each individual atom, there would be no need for software with any “magic sauce” beyond faithfully replicating the physics of atomic interaction. Of course, that would require a staggering amount of computational power, if it is even possible (the jury seems to be out on that, depending upon the computational assumptions you make), so the natural temptation is to take shortcuts: model entire molecules, neurons, neuron clusters, brain regions… and so on. The more abstraction you rely upon, the more you have to rely upon software to bridge the gap.

That is an entirely fair point. It is not, however, any kind of argument that uploading is impossible. On the contrary, it is an argument about the circumstantial boundaries within which uploading is possible, given sufficient available computational power.
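The scale of the shortcut being taken can be illustrated with some rough arithmetic. The figures below are coarse, commonly cited order-of-magnitude estimates (around 10^26 atoms in a ~1.4 kg brain, around 10^15 synapses, femtosecond timesteps for molecular dynamics, millisecond timesteps for spiking-neuron models), and the ops-per-update constant is an arbitrary placeholder, not a benchmark:

```python
# Back-of-envelope comparison of simulating a brain at two levels of
# abstraction. All constants are coarse public estimates or arbitrary
# placeholders, chosen only to show the size of the gap.

ATOMS = 1e26                # rough atom count of a ~1.4 kg brain
SYNAPSES = 1e15             # rough synapse count
TIMESTEP_ATOMIC = 1e-15     # femtosecond steps, molecular-dynamics scale
TIMESTEP_NEURAL = 1e-3      # millisecond steps, spiking-model scale
OPS_PER_UPDATE = 10         # arbitrary flops per element per step

def ops_per_second(elements, timestep, ops_per_update=OPS_PER_UPDATE):
    """Operations per simulated second = elements * steps/second * ops/step."""
    return elements * (1.0 / timestep) * ops_per_update

atomic = ops_per_second(ATOMS, TIMESTEP_ATOMIC)      # ~1e42 ops/s
neural = ops_per_second(SYNAPSES, TIMESTEP_NEURAL)   # ~1e19 ops/s
print(f"atom-level:    {atomic:.0e} ops per simulated second")
print(f"synapse-level: {neural:.0e} ops per simulated second")
```

Even with these generous simplifications, the atom-level simulation comes out around twenty-three orders of magnitude more expensive than the synapse-level one, which is exactly why abstraction (and therefore software) is so tempting.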

A Final Note on Souls and Other Fictions

If you believe that every conceivable physical aspect of a person could be perfectly captured down to the atomic level (putting aside all of the technological achievement required to do such an incredible thing), and still believe that something important would be missing, then it seems fairly safe to say that you believe in souls.

Not in some metaphorical, poetic sense, but in proper old-fashioned, literal “soul stuff” which somehow acts like a physical substance but obeys none of the laws of physics, and which people only imagine exists because they read about it in a work of fiction (and/or refuse to believe that they could be made of the same stuff as literally everything else in the observable universe).

If that is your position, then I’m afraid I only have two words for you: Grow Up.

Further Reading

AI Transcends Human Cognitive Bias