
The Struldbrug Fallacy, Life Extension, and Computation

Posted: Sun, March 17, 2013 | By: Extropia DaSilva



THE STRULDBRUG FALLACY

When surveys are conducted asking if people would like to live much beyond 120 years old, the answer is often “no”. When asked to explain the reason behind their answer, replies tend to be along the lines of “too much life would be boring” or “how would the Earth support us all?”. What I want to show is that all such arguments stem from various misunderstandings and are therefore hopelessly weak.

Take the argument that life would not be worth living if it went on for too long. There may be a grain of truth in this statement, but I simply cannot believe that 120 years is sufficient time in which to have been there, seen it, done it, known it all. In fact, I think even a lifespan of 1000 years would not be nearly long enough to have exhausted all opportunities for education and entertainment. It is a big universe, after all. Yet so many people truly believe life would be boring if it were not much longer than 10 decades.

SO, WHAT IS THE STRULDBRUG FALLACY?

Why is this? I think it is because they have fallen foul of what I call the ‘Struldbrug Fallacy’. It is named after a race of people in Swift’s Gulliver’s Travels who are immortal. They can never die, but the cruel catch is that they age just as normal people do. The eponymous traveller’s initial marvelling at the gift of immortality (no end to the opportunity to improve oneself) turns very bleak as he considers how frail a 90-year-old is compared to a person in the full bloom of youth, and therefore how much frailer a 190-year-old must be.

Gulliver's Travels - Title Page

The Struldbrug Fallacy consists of pondering the prospect of extreme life extension in the following way: Beyond a certain age, the older you get, the frailer you tend to be. No 90-year-old is anywhere near as fit and healthy as they were in their youthful days. If I were much older than that, I would surely be even more frail and even less able to enjoy life. So what would be the point in living beyond 100? But such reasoning completely misunderstands the way in which extreme life extension will be brought about. The intention is NOT merely to add more years to our current life cycle. Rather, the goal is to SLOW DOWN the ageing process to the point where negligible senescence is achieved. Senescence refers to the progressive loss of physical robustness that happens as we age and so negligible senescence means very little to no increase in physical and mental frailty as time goes by.

Jonathan Swift

It is important to understand that this in no way prevents death from accident or malice. It is just that, no matter how many birthdays have passed, you are no more likely to die than you were at any earlier point in your life. Some ethicists like Leon Kass have argued that immortality would rob us of the opportunity to lay down our life in some heroic act, but that clearly mistakes indefinite life for immortal life. A fit and healthy 200-year-old firefighter would have been in as much danger, and every bit as heroic, if such a person had been caught up in the Twin Towers attack.

Once you grasp the true goal of life extension, you can see how the question “Would you like to live to be 100?” ought to be rephrased to something like “Would you want to be prevented from dying this year?”. I would hope that most fit and healthy people would reply in the positive and have no end of good reasons to want to see another 12 months go by. And if you asked them again, many, many decades in the future, way past their 100th birthday and yet just as fit, surely they would be just as reluctant to see life terminated at the end of that year, no excuses.

A FOOL’S HOPE?

Another reason for not wishing life were a great deal longer is that it seems like a fool’s hope. We know, from the fact that every person ever born has grown old and died, that avoiding a similar future is not an option. Increasing senescence leading to a non-negotiable expiry date is the inevitable fate awaiting us all. But this kind of reasoning has been used before. In 1839, Dr Alfred Velpeau stated: “The abolishment of pain in surgery is a chimera. It is absurd to go on seeking it today”. It must have seemed perfectly logical that cutting a person open with a knife could not help but cause them pain. Pain is also a phenomenon that is easy to explain in terms of evolutionary benefit - it is natural that injury should result in such strong signals. But regardless of the fact that cutting someone with a knife hurts, and regardless of the fact that there are good reasons why this should be so, you can probably see that the good doctor was quite wrong to argue that knife and pain were inseparable. The discovery of ether anesthesia in 1846 put paid to that.

The lesson we learn from this example is that advances in knowledge and technology can sometimes alter our perception of what is inevitable. Admittedly, we still lack a convenient means to disable the various underlying principles of increasing senescence, but we ARE gaining clearer understandings of exactly why our bodies grow frail with time. And, more importantly, strategies have been outlined to counter each and every one discovered so far. There really does seem to be nothing behind the aging process that could not be fixed, given suitably advanced bio and nanotechnology. You do tend to get opposing voices condemning such pursuits as defying nature but that just goes to show that the goal of negligible senescence is indeed not impossible. If it were, why fret about its eventual success?

By the way, arguing against extreme life extension on the grounds that it defies nature almost certainly places its advocate in the unfortunate position of being an utter hypocrite. We defy nature when we cure disease, when we wear glasses to correct poor vision, when we turn up the heating on a cold winter’s day, when we perform open-heart surgery on a patient with their capacity to feel pain temporarily turned off. Most importantly, modern civilisation enables many to live more than twice as long as would be expected in a state of nature. You could write a very long list of all the ways in which our species has used technology to defy nature, and it is inevitable that even the most staunch believer in the unnaturalness of indefinite lifespans has used some of them.

ENVIRONMENTAL CONCERNS

But perhaps the warning against defiance of nature has more to do with the supposed environmental consequences of extreme life extension. These are the “Where would we all live? Earth has finite resources” style of arguments, or the objection that it is immoral to hang around, spending the kids’ inheritance. I think this objection is no less weak and flawed than the others. Our species’ tendency to put short-term profit before long-term environmental consequence is due to the fact that, so far, we have tended to die before the price for our partying had to be paid. When climate scientists warn that we will find life very difficult in the year 2100 unless we change our ways, most people old enough to understand the basics know they will be dead by then. A consequence that comes into effect AFTER you die might as well be one that never happens at all. It is somebody else’s problem. And those people not old enough to understand the science are by definition not in a position to do anything about it.

A WASTE OF RESOURCES

Look, I don’t know for sure that if humans had lifespans measured in centuries rather than decades we would be less inclined to be apathetic towards drastic consequences hundreds of years in the future, but it does seem like a reasonable conclusion. On the other hand, dying only a few decades after our minds mature is a tragic waste of resources. Each person carries in their head a vast database of knowledge, all of which is lost when they die (save for what portion of it they recorded for posterity). Terry Grossman asked us to imagine that one person’s life experience equalled one book. That being the case, every year, natural death robs us of 52 million books, worldwide. You can appreciate what a waste of knowledge that is by understanding that the US Library of Congress holds 18 million volumes. Therefore, it is like burning the world’s largest collection of books to the ground, three times over.
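As a quick back-of-envelope check of that comparison, here is a minimal Python sketch using only the two figures quoted above:

```python
# A back-of-envelope check using only the figures quoted above.
deaths_per_year = 52_000_000              # ~52 million "books" lost to death each year
library_of_congress_volumes = 18_000_000  # ~18 million volumes in the US Library of Congress

libraries_lost_per_year = deaths_per_year / library_of_congress_volumes
print(f"Roughly {libraries_lost_per_year:.1f} Libraries of Congress lost per year")
# -> Roughly 2.9 Libraries of Congress lost per year ("three times over")
```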

People who think the best way to secure our future is to have children and die ‘on time’ are saying: “Let’s continue to wipe out a generation’s worth of knowledge, and wait while the ignorant generation slowly matures to the point where they can understand the problem, and then wait yet more years while they work out a solution”. But what happens if that generation takes a lifetime to solve these issues? They then have a duty to die. Wave goodbye to another 52 million books.

This attitude would make some kind of sense if the Earth’s natural resources were reset for each generation. But this is obviously not the case. The finite resources we have consumed remain consumed. So long as the human race persists, the question of what to do once certain resources are exhausted remains. Your dying will not solve this problem. Indeed, you might be somebody who possesses knowledge that could be key to solving it. So, really, we have two precious resources being wasted by the cycle of birth and death: natural resources and human knowledge. Rather than the indiscriminate loss of knowledge through death, surely it would be better if knowledge only went extinct because it could not stand up to objective scrutiny.

The issue of fertility does raise a seemingly valid objection. Nobody wants to die so long as life is worth living. Equally, nearly everybody would like to become a parent. As the Earth really does have finite resources, it seems we cannot have our cake and eat it. That objection is commonly raised in discussions about living indefinitely: how would the planet support us all? This kind of argument fails to take into consideration the full impact of the knowledge and technologies required to achieve negligible senescence. It would require exquisite control over matter at the molecular level, and technology like that would be capable of managing the world’s resources far more efficiently than today’s industries.

NANOTECHNOLOGY

If the layperson is aware of one product of nanotechnology outcompeting nature, you can bet it is the dreaded gray goo. I would like to point out that our current technology allows us to support a worldwide population far beyond anything Thomas Malthus would have thought possible, but it is becoming increasingly obvious that sustaining modern civilisation demands far more resources than the Earth has to offer. “Ah”, comes the inevitable response, “we can seek new planets to colonise”. This is often held up as a grand vision of our future, but I beg to differ. It seems to me that this notion of endlessly replicating humans consuming the resources of one planet and then spreading out to do likewise to other worlds is a picture of the human race as a galactic viral infestation.

It would be much better if we learned to use the resources of THIS planet as efficiently as possible, and it doesn’t get much more efficient than manufacturing everything we need at the level molecular nanotechnology would allow. You might wonder, though, whether nanotechnology could support the human race once we develop nanomedicine. Nanomedicine researcher Rob Freitas has estimated that preventing 99% of naturally occurring medical problems would enable us to live for more than a thousand years. Assuming that medical problems include infertility, it might be the case that nanomedicine results in humans becoming animals that never fail to bring a pregnancy to term and can expect to live for a millennium. One would think that even Drexlerian nanotechnology would be insufficient to sustain a species like that.

It seems, then, that there is a valid objection. The medical knowledge required to halt the aging process could also be used to eradicate infertility. Indeed, given the natural urge to procreate and the anguish felt by couples unable to start families, the case for eradicating infertility could be argued rather strongly. However, to argue that the technologies required for engineered negligible senescence could also be used to treat infertility, and that this would inevitably lead to explosive population growth, is to put the transhumanists’ desire for indefinite lifespans in the wrong context. People tend to treat radical life extension as the goal, rather than one more necessary step towards a richer future. You might call this poverty of imagination, the tendency to miss the bigger picture while focusing on minor details.

THE BIG PICTURE

In version 3.11 of The Principles of Extropy, Max More wrote, “Extropy means seeking more intelligence, wisdom, and effectiveness perpetually overcoming constraints on our progress and possibilities as individuals, as organisations, and as a species”. Extropians accept that the laws of physics may impose certain constraints, but even here there is a necessity to continually question our faith in the reliability of our understanding of physics and hence our assumptions of the limitations those laws impose. As soon as science finds a way through any barrier, extropians consider it an imperative to develop whatever practical means there are to achieve this.

THE EFFECT OF LITERACY ON FERTILITY RATES

So engineering negligible human senescence should be seen, not so much as a goal, but as a by-product of a greater drive toward expanding our opportunities to learn more, enjoy more, and continue to strive towards finer levels of self-development. But so what? What does this have to do with perceived explosions in population growth? Well, there is extensive evidence of a strong correlation between female literacy and fertility rate. James Martin, one of the world’s most respected authorities on the impact of technology on society, wrote in The Meaning of the 21st Century that, in the 1980s in many of the world’s poorest countries, only 3% of the women could read, and the average number of children a woman had was seven, sometimes eight. As female literacy spreads, fertility rates drop. When almost all women can read, the average number of children a woman has is often below two. A chart of literacy rate against fertility rate doesn’t have smooth mathematical curves, but its message is unmistakable. Teaching women to read slows population growth.

UNESCO Chart of Fertility versus Adult Female Literacy: 2000-2009

Of course, lessons in effective birth control are essential, too. But why should literacy have this impact on fertility rates? If I were to hazard a guess, I would say that literacy is an important step towards being educated, and being educated leads to better career prospects and higher aspirations: these women come to feel they have more to contribute than just raising children. Certainly, that is the case with women in First World societies.

The great advances expected in genetics, robotics, information technology, and nanotechnology will converge and combine to open up vast new markets, which will in turn open up a wealth of opportunities in terms of career prospects and lifestyle choices. Moreover, these technologies may very well put an end to the need to divide our lives up into distinct chapters, since this division is by and large dictated by the tick of our biological clock.

This part of a person’s life is spent in basic education, that part in higher education. After that comes career/kids, and don’t forget to save for a comfortable old age. People continue to assume that this course of events will play out, not only for their lives, but the lives of their kids and grandchildren. But already we are seeing the power of science and technology to disrupt the status quo. We see people long past natural child-bearing age nevertheless giving birth, and the sight of a 70-year-old cradling her newborn babe is but a taste of things to come.

In the future, very young children whose brains are wired directly to computer networks running formidably powerful forms of artificial intelligence may be fundamentally smarter than even the most gifted of today’s adults. Such bright eight-year-olds might have no trouble completing one of today’s courses in higher education. And what about the adults? Today there is a very good reason why the thought of old-aged people becoming parents receives the negative reaction that it does: these people can be expected to die before they have raised their child to adulthood. Like it or not, there is a window of opportunity in which it is biologically most acceptable to have offspring. But people long past their 70th birthdays who are nonetheless as physically robust as they were at age twenty could decide to have children, and it would be difficult to see why this should raise the objections it does today. Similarly, a person as old as that who still feels a need to put off having children in order to enjoy their own life could hardly be said to be risking the window of opportunity closing in their face. Thanks to engineered negligible senescence, it wouldn’t ever close.

DEATH BY ACCIDENT CAN STILL OCCUR

Well, it might. Negligible senescence only makes you as likely to die from accident or misfortune as a person in the full bloom of their youth; it is by no means immortality. Lives may still be lost to natural or human-created disasters, and so our continued existence on this planet may well necessitate the creation of new lives. So long as birthrates were no higher than the average number of lives lost each year, population levels would obviously remain sustainable. But can we really be certain that the vastly wider set of life experiences available to transhuman societies really would ensure enough people delay becoming parents to keep population levels from rising too high? One might think that such a high-tech society would be far less accident-prone than ours, and with lives measured in centuries, enough people would need to postpone parenthood for a very long time. Surely, it must still be the case that the Earth’s ability to support a technologically advanced civilisation would be pushed beyond all reasonable limits?

MOORE’S LAW…AND BEYOND!

But if you think this is the case, you are seriously underestimating the planet’s capacity to support intelligence. Intelligence is a form of information processing, and it is an oft-noted fact that, through our technology, we have continually discovered ever-more efficient forms of computation. In 1965, Gordon Moore (who went on to co-found Intel) predicted, “by 1975, economics may dictate squeezing as many as 65,000 components on a single silicon chip”. He believed this would be the case because, ever since the integrated circuit had been invented, the number of components that could be packed onto a chip had been doubling every year. Moore later revised this cycle to a doubling roughly every two years, and the rate is often quoted as a doubling of processor performance every 18 months. Carver Mead called the prediction Moore’s Law.
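To illustrate the extrapolation Moore was making, here is a minimal Python sketch. It assumes roughly 64 components per chip in 1965 (approximately what his original plot showed) and one doubling per year:

```python
# A sketch of the extrapolation behind the 1965 prediction, assuming roughly
# 64 components per chip in 1965 and one doubling per year.
components_1965 = 64
annual_doublings = 1975 - 1965                    # ten doublings in ten years
components_1975 = components_1965 * 2 ** annual_doublings
print(components_1975)                            # 65536 ~ "as many as 65,000 components"
```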

The weird thing about Moore’s Law is that rumours of its demise keep cropping up and always turn out to have been exaggerated. Obstacles are seen looming on the horizon, prompting insiders to announce that Moore’s Law will not continue, but nevertheless it does. Still, nobody would be foolish enough to predict that integrated circuits will double their performance forever, because there really are fundamental limits that will ultimately prevent us squeezing more performance out of ICs. But the consensus among computer scientists is that the demise of Moore’s Law will not mean an end to the doubling of computing power.

They believe this is so because the regular doubling was not a phenomenon that began with silicon chips. In actual fact, as Ray Kurzweil has noted, this growth in computing power runs smoothly back through five paradigms of information technology. Singularity theorist John Smart, moreover, has noted that Computation (which he defines as “forming an encoded internal representation of the laws or information of the actual environment”) has discovered ever-more-clever ways of using matter, energy, space and time to process information, and that this has been happening since long before humans came on the scene. In his own words: “our planet’s history of accelerating creation of pre-biological (atomic and molecular-based), then genetic (DNA and cell-based), neurologic (neuron-based), and memetic (mental-pattern-based) information arises out of, and controls, the continuous reorganisation of matter-energy systems”.

So any one technology will inevitably run up against limits. But a more generalised capability like computation, storage, or bandwidth tends to follow a pure exponential, bridging across a variety of technologies.

Actually, the laws of physics place ultimate limits on the growth of computing, but before we get around to discussing them, we need to look at another good reason why the growth of computing power won’t end with the integrated circuit. It is because there exists another information processor that puts contemporary computers to shame. The human brain is 100 million times more efficient in power/calculation than the best processor, and it stands as an existence proof of the levels of computation that can be reached.

It also points us towards the 6th paradigm of computing systems, which is three-dimensional molecular computing. Current ICs cannot be stacked in a 3D volume because so much heat would be generated that the silicon would melt. Carbon nanotubes, widely held to be crucial components of 6th-paradigm computing, are incredibly heat-resistant and can therefore be used to construct cubes of computing circuitry rather than today’s flat chips. Another advantage of using molecules to store memory bits and to act as logic gates is that molecules are so very tiny. Moore’s Law is fundamentally driven by miniaturisation. Semiconductor feature sizes shrink by half every 5.4 years in each dimension, which means that the number of elements per square millimetre doubles every 2.7 years. Current logic gates are under 50 nanometres wide, and chips pack in billions of components. But, incredibly, a single drop of water contains roughly 100 times more molecules than all the transistors that have ever been built.
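The relationship between those two rates is simple geometry: halving a feature’s linear size every 5.4 years quadruples the number of elements that fit in a given area over the same period, which is the same as doubling density every 2.7 years. A minimal sketch:

```python
import math

# Halving the linear size of a feature every 5.4 years packs four times as many
# elements into the same area over that period (2x in each of two dimensions),
# which is the same as doubling areal density every 2.7 years.
linear_halving_period_years = 5.4
area_growth_per_period = 2 ** 2                  # 4x more elements per halving period
density_doubling_period = linear_halving_period_years / math.log2(area_growth_per_period)
print(density_doubling_period)                   # 2.7
```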

Molecular electronics is not just theoretical. A company called ZettaCore has built molecular memories using multiporphyrin nanostructures. The key to using these molecules as a storage medium lies in the fact that they can be oxidised and reduced (electrons removed or replaced). Multiporphyrins have already demonstrated up to 8 digital states per molecule. Other nanotechnologists have proposed encoding information in fluorinated polythene molecules, where each bit is marked by the presence of fluorine or hydrogen on a certain carbon atom. Such a system would use 10 atoms per bit, which would correspond to 5*10^21 bits if diamondoid densities were reached. Analyses of existing nanotube circuits point to a one-inch cube of such circuitry performing 10^24 calculations per second (CPS). Estimates for the brain’s computational capacity range from 10^14 CPS to 10^19 CPS. Assuming the highest estimate is true, 10^24 CPS is equal to one hundred thousand human brains - one hundred thousand brains, packed into a device not much larger than a sugar cube. You can start to see how the resources the Earth can provide for intelligence might, in fact, go a very long way.
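Taking the figures above at face value, the “one hundred thousand brains” claim is just this ratio (a sketch using the highest quoted brain estimate):

```python
cube_cps = 1e24         # claimed capacity of a one-inch cube of nanotube circuitry
brain_cps_high = 1e19   # the highest estimate quoted for the human brain

brain_equivalents = cube_cps / brain_cps_high
print(f"{brain_equivalents:.0e} human-brain equivalents")   # 1e+05, i.e. one hundred thousand
```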

QUESTIONS, QUESTIONS

Two questions that might be asked are: ‘How do we calculate the computational capacity of the human brain?’ and ‘Aren’t brains different from computers?’ The second question is really an objection to AI, and it is one that does not take into account neuromorphic modelling, which involves using technologies to analyse how a brain region works and using this knowledge to develop software running functionally equivalent algorithms. The pace of building working models is only slightly behind the availability of brain scanning and neuron-structure information. We can take a region we have already reverse-engineered, take our knowledge of its capacity, and extrapolate that capacity to the entire brain by considering what portion of the brain that region represents. Various estimates of different regions all result in similar orders of magnitude for the entire brain - somewhere between 10^14 and 10^15 CPS. The highest estimate (10^19 CPS) assumes that we must simulate every nonlinearity in every neural component, but it is generally believed that this level of detail is unnecessary unless you are uploading a person (we’ll get to that later).
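The extrapolation method described here is straightforward: divide the estimated capacity of a reverse-engineered region by the fraction of the whole brain that region represents. A sketch with hypothetical numbers (the region capacity and fraction below are illustrative placeholders, not published estimates):

```python
# Hypothetical illustration of the extrapolation described above; the region
# capacity and the fraction of the brain it represents are placeholder values,
# not published estimates.
region_cps = 1e11        # assumed capacity of one reverse-engineered region
region_fraction = 0.001  # assume that region is 0.1% of the brain

whole_brain_estimate = region_cps / region_fraction
print(f"~{whole_brain_estimate:.0e} CPS for the whole brain")   # ~1e+14
```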

The common objection to AI (‘but we don’t know how the brain creates intelligence’) assumes our current level of understanding will never improve. This is clearly ridiculous. In contemporary neuroscience, models are being developed from diverse sources that include brain scans, interneural connection models, neuronal models and psychophysical testing. Thanks to increasingly sophisticated search engines, the 50 thousand neuroscientists worldwide can easily find, share, and add to this growing body of knowledge. And they are being helped by the scientists and engineers who are building ever-more accurate brain-scanning technologies. Nanotechnology expert Rob Freitas has exhaustively analysed the feasibility of using micron-scale robots to scan a living brain cell by cell, molecule by molecule, thereby allowing us to copy the neural patterns of the brain into another medium without necessarily understanding their higher-level organisation.

WHAT HAS THIS TO DO WITH INDEFINITE LIFESPANS?

You might be thinking that I have gone off on a wild tangent. Rather than talking about indefinite lifespans and finite living space, I am talking about AI. But the drive to reverse-engineer our internal organs will have many medical benefits. For instance, as UC San Diego’s Andrew McCulloch has pointed out, we can do a good job now of modelling on a computer what happens to cardiac cells in heart failure, and of predicting how a heart contraction will respond to a drug. As organ-simulation software matures, drug trials will be simulated and yield results in hours, rather than months as is the case now. Another benefit is that neuromorphic models of brain regions could be installed in living brains when the biological region fails. We have already started doing this with cochlear and retinal implants, with an artificial hippocampus in development.

FAREWELL TO THE MEATBODY

Nanobots will eventually be able to repair our bodies at the molecular level, thereby effectively halting the aging process. But why be satisfied with simply maintaining bodies as we know them today? It would be far better to inhabit morphable bodies and explore the enormous possibilities of virtual reality, if only we had a way of transferring our very consciousness into cyberspace.

MIND UPLOADING

Anders Sandberg explained that there are many advantages to life as a software (rather than a biological) being. Fewer resources are needed to sustain the being, evolution by intelligent design instead of natural selection becomes much more realisable, and the limits to the being’s existence are determined by the computing system it exists in, rather than by a fixed body. A very radical transhumanist proposal - uploading - involves scanning a brain at such a fine-grained level that everything stored in it - all the memories, personality traits, and so on of its owner - is faithfully transferred to a model of that brain running on a suitable computing platform. As you may well imagine, whether this could ever work in practice, and whether a copy of a mind can be said to be the same as the original person, are both controversial points. Here I will assume uploading is a viable future technology and explain how, given the limits of information processing and storage, we will be able to support numbers of uploaded humans that beggar the unaugmented imagination.

THE PHYSICS OF COMPUTATION

The amount of computation that can be performed is ultimately limited by the amount of information that can be stored in an isolated region of space with a finite energy content, and by access to available materials. The former tells us that there is a limit to how far computing elements can be miniaturised, thereby placing an upper bound on the computational and memory densities of a system. As we progress from micro to nano computing, it is perhaps tempting to suppose we will then progress towards pico, femto, and so on through ever smaller scales. But quantum uncertainty ultimately prevents phase space from being divided arbitrarily finely, because you cannot encode information in partitions so fine they are impossible to distinguish.

So the number of bits that can be stored on a hydrogen atom is ultimately limited to one million bits. The so-called Bekenstein bound tells us that the particles comprising one average human have the potential to store 10^45 bits. Now, if Hans Moravec is to be believed, 10^15 bits is sufficient to encode one human-brain equivalent, and assuming a thousand times as much storage would be required for the body and its surrounding environment, the person’s living space would consume 10^18 bits. As for the world and its entire population, that could be encoded in 10^28 bits. That is, of course, a very large number of bits, since it literally describes a world of information. But the optimised storage capacity of the particles in one human - 10^45 bits - is astronomically larger. It is equivalent to the biospheres of a thousand galaxies. I will say that again, just to make sure it sinks in: Encoded as properly efficient cyberspace, the bits represented by one human being would provide enough computation to support a population of uploaded people equal to 10 billion people for every star in a thousand galaxies! One person!
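To keep the chain of figures straight, here is a minimal sketch that simply strings together the numbers quoted above:

```python
# Stringing together the figures quoted above (all taken from the text).
bits_per_brain  = 1e15   # Moravec's estimate for one human-brain equivalent
bits_per_person = 1e18   # brain plus body and surrounding environment (x1000)
world_population = 1e10  # order-of-magnitude population used in the text
bits_bekenstein = 1e45   # Bekenstein bound for the particles in one human body

bits_for_whole_world = bits_per_person * world_population
people_one_body_could_host = bits_bekenstein / bits_per_person

print(f"the world, encoded: {bits_for_whole_world:.0e} bits")                         # 1e+28
print(f"uploads one body's particles could host: {people_one_body_could_host:.0e}")   # 1e+27
```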

It boggles the mind, but we need to be cautious when dealing with capabilities pushed to the very limits permitted by physical laws. The technical capability required to achieve these limits would be prodigious. In order to reach the ultimate density of one million bits per hydrogen atom, it is necessary to first convert all of the atom’s mass into energy. That is essentially what happens in a thermonuclear explosion, and Ray Kurzweil noted that “we don’t want [an explosion] so this will require some careful packaging”. Moreover, completely converting an atom’s mass into low-energy photons (each of which stores one bit) requires matter/antimatter combination and annihilation, or even the transformations of matter and energy that occur in the extreme environments of black holes. One might well question technology’s ability to scale to these levels.

Never mind, though, because the computational potential of ordinary matter is very high indeed. Kurzweil noted that a 2.2-pound rock weighs about the same as a brain, but when it comes to computation, one far outperforms the other. You would be forgiven for thinking the brain must be the winner here, but that is a prejudice brought about by an inability to easily see the activity happening at the atomic level. Here we find electrons being shared back and forth, particles changing spin, and rapidly moving electromagnetic fields. All of this activity represents one million, trillion, trillion, trillion (10^42) calculations per second.

However, the belief that a brain is a better computer than a rock is justified by something called computational efficiency - in other words, the fraction of the matter and energy in an object that is doing USEFUL computing. That is where the stone loses out; the structure of its atoms is effectively random and no good for performing useful work. A brain is somewhat better organised for useful computing, but is still far from its potential of 10^42 CPS. A 2.2-pound object, properly organised, would have a capacity equal to ten trillion Earths, each with a population of 10 billion people.
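Using the highest brain estimate quoted earlier (10^19 CPS), the “ten trillion Earths” figure falls out of simple division - a sketch:

```python
# Checking the final claim against figures already quoted in the text.
potential_cps  = 1e42   # potential computation of ~1 kg (2.2 lb) of fully organised matter
brain_cps_high = 1e19   # highest quoted estimate for a single human brain

brain_equivalents = potential_cps / brain_cps_high   # 1e23 brains' worth of computing
earth_equivalents = brain_equivalents / 1e10         # at 10 billion people per Earth
print(f"{earth_equivalents:.0e} Earths' worth of people")   # 1e+13, i.e. ten trillion Earths
```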

PROGRESSIONS IN STORING/PROCESSING KNOWLEDGE

We have progressed from storing knowledge in our minds and imparting it through language, to manipulating the environmental resources of matter, energy, space, and time to perform both roles. In modern civilisation, our accumulated wealth of knowledge exists in the cyberspace of the Internet - 600 billion pages - a massively decentralised computer with a total RAM of roughly 200 terabytes and 10 terabits of data coursing through it every second. No single neuron, no single brain component, is capable of reaching human levels of intelligence, but the ensemble clearly is. Similarly, while no individual computer has achieved the 20-petahertz threshold for intelligence, the Internet as a whole - a distributed “chip” of billions of PCs - has, and its growth has clear parallels with the way brains develop.

When I described our brain as sitting roughly halfway on a logarithmic scale between a rock and the ultimate computer, I was assuming an equivalent mass for all three. Of course, we hardly restrict information processing to a mere 2.2 pounds of matter. Once we have the nanotechnological capability to stop aging, we shall also be able to provide all material needs extremely inexpensively, at least where physical wealth is concerned. For a nanotechnological society, value is almost entirely represented by information. It seems reasonable to assume, then, that the less capable matter is of storing and processing information, the less valuable it will be. To borrow a phrase from Stross’s Accelerando, “if it isn’t thinking, it isn’t working”. Given that the potential computing power of 2.2 pounds of matter is 10^42 CPS, the potential locked in the 6*10^24 kilograms of the entire planet must be many orders of magnitude higher. But if we measure MIPS (millions of instructions per second) per cubic millimetre, we find very little useful computation occurring. In terms of its ability to process our thoughts, most of the solar system is a dead loss, and we would barely scratch the surface of its potential if humans migrated to and filled every body orbiting the Sun.

Still, the Internet represents outward growth of computing, and the number of chips is increasing at a rate of 8.3% per year. Natural selection arranged biological matter to perform crude computations, and we now use those abilities to increase the computational capacity of our resources. If we assume available energy is the total output of the Sun (roughly 4*10^26 Watts) and available matter is represented by everything orbiting it (roughly 10^26 kg), we begin to see the outlines of a future internet on a scale beyond the imagination: literally, a star-sized Internet. The current Internet is a computing system comprised of a global network of PCs. This future computing substrate will consist of enough information processors to englobe the Sun in a cloud of computing platforms.

MATRIOSHKA BRAINS

According to Robert J. Bradbury, each individual component requires a power collector, such as high-efficiency solar cells; computing elements, ideally nano-CPUs with high-bandwidth optical communications channels to similar devices; storage, ideally photonic, which would allow a bit to be stored with the lowest possible energy; and radiation protection. Some of the material locked up in planets and other celestial bodies is not usable for energy production or computing, but elements like iron, helium, or neon could be used to provide shielding against high-energy cosmic rays.

As well as designing and building such components, we must also develop an assembly process that can be scaled up to handle the mammoth task of reducing planets to streams of elements and then reassembling them into the needed parts. Bacteria demonstrate a way to provide sufficient numbers of assemblers in a short space of time, via exponential assembly. In only four days, a single bacterium produces enough replications to fill a sugar cube. In four more days, there are enough to fill a village pond, and four days later the bacterium’s offspring would fill the Pacific Ocean. Within two weeks, provided it does not run out of resources (which is, of course, what always happens), a single bacterium will have converted itself into a colony of bacteria equal in mass to an entire galaxy. Bacteria are, for this reason, often held up as a proof of principle for molecular manufacturing.
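To get a feel for the exponential arithmetic behind this example, here is a sketch. The bacterial and “galactic” masses are my own order-of-magnitude assumptions, chosen only to show how few doublings exponential growth needs, and how much the timescale depends on the assumed doubling time:

```python
import math

# Illustrative only: the bacterial and "galactic" masses below are my own
# order-of-magnitude assumptions, not figures from the article.
mass_bacterium_kg = 1e-15    # rough mass of a single bacterium (assumption)
mass_galaxy_kg    = 1e42     # rough mass of a galaxy's worth of matter (assumption)
doubling_time_min = 100      # assumed doubling time, roughly bacterial under good conditions

doublings = math.log2(mass_galaxy_kg / mass_bacterium_kg)
days = doublings * doubling_time_min / (60 * 24)
print(f"about {doublings:.0f} doublings, roughly {days:.0f} days")   # ~189 doublings, ~13 days
```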

It should be pointed out, though, that nanotechnology is not required for any stage in Bradbury’s proposal. The exponential assembly, for instance, could be handled by the kind of self-replicating automata John von Neumann described in the 1940s. Von Neumann’s work, along with Feynman’s speech “There’s Plenty of Room At The Bottom”, laid the groundwork for Drexler’s vision. But nanotechnology would be the optimal choice, so I assume here that it will be in widespread use by the time a project of this magnitude is attempted.

According to Bradbury, the construction job begins with the conversion of one or more asteroids into solar-power collectors. It will take several years to manufacture enough solar collectors to harvest the 10^23 Watts required for the next stage: building enough power collectors to harvest the Sun’s entire output. If we assume a power-to-mass harvesting capability of 10^5 W/kg, the Sun’s 4*10^26 Watts implies a mass requirement of roughly 10^21 kg for solar collectors in Earth orbit, with the mass requirement falling if we build closer to the star (for obvious reasons, we cannot build too near to the Sun). There is enough useful material locked up in the asteroid belt to provide the required solar collectors.
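The mass requirement follows directly from the two figures quoted above - a sketch:

```python
solar_output_watts = 4e26     # total output of the Sun, as quoted above
harvest_rate_w_per_kg = 1e5   # assumed power-to-mass harvesting capability, as quoted above

collector_mass_kg = solar_output_watts / harvest_rate_w_per_kg
print(f"{collector_mass_kg:.0e} kg of collectors")   # 4e+21 kg, i.e. on the order of 10^21 kg
```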

More likely, though, is that the 10^23 Watts from Stage One will be beamed to Mercury, and the bulk of that planet used to provide sufficient power collectors to harvest the total output of the Sun. That energy would then be used to run the process of disassembling all but the gas giants, whose deep gravitational wells would demand too much extra energy to lift matter out of. Assuming exponential replication of nano-assemblers, disassembling the minor planets would take weeks to months; Von Neumann-style self-replicating factories would require years or decades to complete the task. The raw materials would then be reprocessed into computronium: matter and energy organised to perform computations as efficiently as possible.

Bradbury assumes this will be rod-logic nanocomputers capable of operating in conditions hot enough to melt iron. High-temperature rod-logic nanocomputers could be made from diamond (melting point 1235 K), aluminium oxide (melting point 2345 K), or titanium carbide (melting point 3143 K). As well as the nanocomputer, each component carries a solar array facing the Sun to harvest useful energy and a radiator to dispose of waste energy, and its surface incorporates communication arrays of light transmitters and receivers - vertical-cavity surface-emitting lasers - providing high-bandwidth links to adjacent devices.

As a consequence of the 2nd Law of Thermodynamics, the computers will produce heat that must be disposed of. But, rather than just radiating this waste energy into space, another even larger shell of nested computing elements could enclose the first one and do what work is possible with that energy, and another beyond that, and so on. If the inner shell runs close to the melting point of iron, the outer shells would be almost as cold as liquid helium. The radiation emitted by the outer shells would consist of low-energy infrared photons which are extremely difficult to harvest for direct conversion into electricity. The outer layers are therefore likely to use mirrors to focus thermal energy and heat engines with Carnot cycles to gather power.
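A sketch of the thermodynamic logic of nesting: each successive shell can extract work from the previous shell’s waste heat, but only up to the Carnot limit set by the temperature drop across it. The shell temperatures below are illustrative assumptions of my own, not Bradbury’s figures:

```python
# The shell temperatures below are illustrative assumptions, not Bradbury's
# figures. Each shell can, at best, extract the Carnot fraction of the work
# available in the waste heat radiated by the shell inside it.
shell_temperatures_k = [1800, 600, 200, 60, 20]   # kelvin, hot inner shell to cold outer shell

for t_hot, t_cold in zip(shell_temperatures_k, shell_temperatures_k[1:]):
    carnot_limit = 1 - t_cold / t_hot             # Carnot efficiency = 1 - T_cold / T_hot
    print(f"{t_hot:>5} K -> {t_cold:>4} K : Carnot limit {carnot_limit:.0%}")
```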

Russia is famous for its wooden dolls that can be opened to reveal a smaller doll nested inside, and one inside that… dolls all the way down. They are called Matrioshka Dolls. Here we have a shell of computronium containing another, and another, with a star sitting in the centre providing the energy for its thought processes (actually, “cloud” might be a better description than “shell”, since the orbiting computing platforms will not be a solid sphere). Bradbury named his theoretical mega-scale computer the Matrioshka Brain (MB). How powerful a computer is it?

If no nanotechnology were used, and we instead relied on most of the silicon in Venus as raw material and current trends in silicon-wafer production, the MB would have a thought capacity a million times greater than that of 6 billion people. We can safely assume nanocomputers will be used, in which case the MB’s computational capacity would be ten million billion times GREATER than the DIFFERENCE between a human and a nematode worm. That would be sufficient capacity to emulate the entire history of human thought in a couple of microseconds. The population of uploads such a system could support would be equivalent to a population of 6 billion people for every star in the Milky Way Galaxy. Other theorists, like Anders Sandberg, have calculated that forms of computing more advanced than rod-logic nanocomputers could achieve 10^47 bits, thereby giving enough capacity to support the biospheres of more than a thousand galaxies.

At a conservative estimate, then, the resources available to us will provide room for six hundred billion uploaded people, and orders of magnitude more if 10^47 bits can be reached. Technology as advanced as a Matrioshka Brain is the result of trends already underway, like Moore’s Law, Dickerson’s Law (which tracks the rise in our ability to solve 3D protein structures), and Bell’s Law (roughly every decade, a hundred-fold drop in price gives rise to a new class of computer). It has been noted that technology is becoming organic and nature is becoming technologic. The latter is driven by attempts to understand the information technology of biological processes so that they can be reprogrammed for negligible senescence and the like via biotechnology, or upgraded with nanotechnology.

By incorporating lifelike biological architectures, computing systems are increasingly able to guide their own self-improvement, while at the same time we increasingly think of them as extensions of our own minds. Ever closer collaboration between neuroscience and computer science is enabling the gradual dissection of the brain’s components and functions, and the rise of programming methodologies better and better able to model human intelligence. Progress in these areas and others is showing that the association of increasing maturity with decreasing ability is no more unavoidable than the association of pain with surgery. We can fix it.

Solving the problem of aging will certainly have consequences, but, as we have seen, they will not be the problems many people think. In particular, the objection that we lack the resources to support people with indefinite lifespans is nonsense. Economics, the study of the allocation of scarce goods, has long driven a process known as the marginalisation of scarcity, in which we learn to produce goods with increasing efficiency at lower cost. The tools and knowledge that will enable us to engineer indefinite lifespans will also enable us to manage our local resources so efficiently that we shall comfortably provide for hundreds of billions of uploaded people.

BUT STILL THE SEARCH CONTINUES…

And yet the problem of death is only postponed by the technology of the Matrioshka Brain. These pinnacles of human civilization support uploads for as long as their host star provides energy. The Sun will continue to do so for a very long time, but it cannot do so forever. After it dies, how will the uploaded population persist? What knowledge must be applied in order for life to continue? This is a question best left for post-human civilizations to answer. The search for a cure for senescence is part of a much grander drive that defines us as a species: the desire to reach beyond our limits. Darwinian evolution provided us with the tools necessary to drive an autoevolutionary process from which minds enormously more powerful than ours will emerge.

Their problems are not ours to solve.


