
Is Price-Performance the Wrong Measure for a Coming Intelligence-Explosion?

Posted: Wed, April 03, 2013 | By: Franco Cortese

Most thinkers speculating on the coming of an intelligence explosion (whether via AGI or uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price-performance as the best measure of an impending intelligence explosion (e.g. Kurzweil’s measure is when enough processing power to satisfy his estimate of the basic processing power required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won’t be much of an explosion unless it is available to the average person. I present a scenario below suggesting that the imminence of a coming intelligence explosion is more impacted by basic processing speed – or instructions per second (IPS), regardless of cost or resource requirements per unit of computation – than it is by computational price-performance.

Loaded Uploads:

Petascale supercomputers in existence today exceed the processing-power requirements estimated by Kurzweil, Moravec, and Storrs Hall [3]. If a wealthy individual were uploaded onto a petascale supercomputer today, they would have the same computational resources that, according to Kurzweil’s figures, the average person will have in 2019, when $1,000 will buy computational processing power equal to that of the human brain, which he estimates at 20 quadrillion calculations per second.

The rate of signal transmission in electronic computers has been estimated to be roughly 1 million times as fast as the signal transmission speed between neurons, which is limited to the rate of passive chemical diffusion. Since the rate of signal transmission equates with the subjective perception of time, an upload would presumably experience the passing of time one million times faster than biological humans. Yudkowsky [4] has observed that this would be equivalent to experiencing all of history since Socrates every 18 “real-time” hours, which equates to roughly 250 subjective years for every hour, or about 4 years a minute. A day would equal 6,000 years, a week 42,000 years, and a month 180,000 years.
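The time conversion here is simple arithmetic, and a short sketch makes it easy to check the figures (using the article's assumed ratio of roughly 250 subjective years per real-time hour; the exact ratio depends on which speed-up estimate one accepts):

```python
# Subjective time experienced by an upload per unit of "real time",
# assuming a ratio of ~250 subjective years per real-time hour,
# as in Yudkowsky's estimate cited in the article.
YEARS_PER_HOUR = 250

def subjective_years(real_hours):
    """Subjective years experienced during `real_hours` of real time."""
    return real_hours * YEARS_PER_HOUR

print(subjective_years(1))        # one hour  -> 250 years
print(subjective_years(24))       # one day   -> 6,000 years
print(subjective_years(24 * 7))   # one week  -> 42,000 years
print(subjective_years(24 * 30))  # one month -> 180,000 years
```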

Moreover, these figures use the signal transmission speed of current, electronic paradigms of computation, and thus the projected increase in signal-transmission speed brought about through the use of alternative computational paradigms, such as optical computation or Drexler’s nanoscale rod-logic [5], can only be expected to increase such estimates of “subjective speed-up”.

Petascale supercomputers have surpassed the rough estimates made by Kurzweil (20 petaflops, or 20 quadrillion calculations per second), Moravec (100,000 MIPS), and others. Most argue that we still need to wait for software improvements to catch up with hardware improvements. Others argue that even if we don’t understand how the operation of the brain’s individual components (e.g. neurons, neural clusters, etc.) converges to create the emergent phenomenon of mind – or even basic functional modalities having nothing to do with subjective experience – we would still be able to create a viable upload. Nick Bostrom and Anders Sandberg, for instance, argued in their 2008 Whole Brain Emulation Roadmap [6] that if we understand the operational dynamics of the brain’s low-level components, we can simulate these, and the emergent functionalities and phenomena will follow.
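It is worth noting how far apart these published estimates are. A few lines of arithmetic (a sketch; the petascale figure below is an illustrative round number, not a benchmark of any specific machine) show the spread:

```python
# Spread between published estimates of the processing power needed to
# match the human brain. Values come from the article's citations; the
# supercomputer figure is an illustrative assumption, not a measurement.
KURZWEIL_CPS = 20e15          # 20 quadrillion calculations/second [1]
MORAVEC_IPS = 100_000 * 1e6   # 100,000 MIPS = 1e11 instructions/second [2]
PETASCALE_FLOPS = 20e15       # hypothetical ~20-petaflop supercomputer

print(KURZWEIL_CPS / MORAVEC_IPS)       # the estimates differ by ~200,000x
print(PETASCALE_FLOPS >= KURZWEIL_CPS)  # True: meets even the higher estimate
```

That the two best-known estimates differ by five orders of magnitude is itself a reason to treat any "hardware is sufficient" claim cautiously.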

Mind Uploading is Independent of Software Performance:

Why is this important? Because if we don’t have to understand how the separate functions and operations converge so as to instantiate higher-level functions and faculties, then we don’t need to wait for software improvements to catch up with hardware improvements. This means that a wealthy individual could hypothetically have himself destructively uploaded and run on contemporary petascale computers today, provided that we can simulate the operation of the brain at a small enough scale (which is easier, since simulating the accurate operation of a single neuron is less complex than simulating the accurate operation of higher-level neural networks or regions). Thus, I argue that a wealthy upload who did this could conceivably accelerate the coming of an intelligence explosion by so large a degree that it happens before computational price-performance drops to the point where the basic processing power required to simulate a brain is available at a widely affordable price, say $1,000 as in Kurzweil’s figures. Such a scenario would make basic processing power, or IPS, more indicative of an imminent intelligence explosion or hard take-off scenario than computational price-performance. The subjective-perception-of-time speed-up alone would greatly improve his/her ability to accelerate the coming of an intelligence explosion. Other features, like increased ease of self-modification and the ability to make as many copies of himself as he has processing power to allocate, only increase his potential to accelerate it.
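The claim that single-neuron simulation is comparatively tractable can be illustrated with a minimal leaky integrate-and-fire model, a standard textbook abstraction of a neuron (the parameters below are illustrative, not biological fits):

```python
# Minimal leaky integrate-and-fire neuron: an example of how simple a
# low-level component model can be, compared with modelling whole neural
# regions. Parameters (tau, threshold) are illustrative only.
def simulate_lif(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Return the spike times produced by a stream of input currents."""
    v, spikes = 0.0, []
    for t, i in enumerate(inputs):
        v += dt * (-v / tau + i)   # leaky integration of input current
        if v >= threshold:         # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

print(simulate_lif([0.3] * 20))    # constant drive -> periodic spiking
```

A model like this says nothing about how millions of such units give rise to cognition; that is precisely the point of the Blind Replication argument.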

So why should we be able to simulate the human brain without understanding its operational dynamics in exhaustive detail? Are there any other processes or systems amenable to this circumstance, or is the brain unique in this regard?

There is a simple reason why this claim seems intuitively doubtful. One would expect that we must understand the underlying principles of a given technology’s operation in order to implement and maintain it. This is, after all, the case for every other technology in the history of humanity. But the human brain is categorically different in this regard, because it already exists.

If, for instance, we found a technology and wished to recreate it, we could do so by copying the arrangement of its components. But in order to make any changes to it, or any variations on its basic structure or principles of operation, we would need to know how to build it, maintain it, and predictively model it with a fair amount of accuracy. In order to make any new changes, we need to know how such changes will affect the operation of the other components, and this requires being able to predictively model the system. If we don’t understand how changes will impact the rest of the system, then we have no reliable means of implementing any changes.

Thus, if we seek only to copy the brain, and not to modify or augment it in any substantial way, then it is wholly unique in that we don’t need to reverse-engineer its higher-level operations in order to instantiate it.

This approach should be considered a category separate from reverse-engineering. It would indeed involve a form of reverse-engineering at the scale we seek to simulate (e.g. neurons or neural clusters), but it lacks many features of reverse-engineering by virtue of the fact that we don’t need to understand the system’s operation on all scales. For instance, knowing the operational dynamics of the atoms composing a larger system (e.g. any mechanical system) wouldn’t necessarily translate into knowledge of the operational dynamics of its higher-scale components. The approach mind-uploading falls under – in which reverse-engineering at a small enough scale is sufficient to recreate a system, provided we don’t seek to modify its internal operation in any significant way – I will call Blind Replication.

Blind Replication disallows any significant modifications, because if one doesn’t understand how processes affect other processes within the system, then one has no way of knowing how modifications will change those other processes, and thus the emergent function(s) of the system. We would have no way to translate functional or optimization objectives into changes to the system that would facilitate them. There are also liability issues, in that one wouldn’t know how the system would behave in different circumstances, and would have no guarantee of such systems’ safety or their vicarious consequences. So the government couldn’t be sure of the reliability of systems made via Blind Replication, and corporations would have no way of optimizing such systems so as to increase profits (i.e. modifying them to optimize a specific performance metric or objective), and indeed would be unable to obtain intellectual-property rights over a technology without being able to describe how it performs its function(s).

However, government and private industry wouldn’t be motivated by such factors (that is, ability to optimize certain performance measures, or to ascertain liability) in the first place, if they were to attempt something like this – since they wouldn’t be selling it. The only reason I foresee government or industry being interested in attempting this is if a foreign nation or competitor, respectively, initiated such a project, in which case they might attempt it simply to stay competitive in the case of industry and on equal defensive/offensive footing in the case of government. But the fact that optimization-of-performance-measures and clear liabilities don’t apply to Blind Replication means that a wealthy individual would be more likely to attempt this, because government and industry have much more to lose in terms of liability, were someone to find out.

Is Upload+AGI easier to create than AGI alone?

This means that the creation of an intelligence with a subjective perception of time significantly greater than that of unmodified humans (what might be called Ultra-Fast Intelligence) may be more likely to occur via an upload than via an AGI, because the creation of an AGI depends on increases in both computational processing and software performance/capability, whereas the creation of an upload may be determined wholly by processing power and thus remain largely unaffected by the state of current software. If the premise that such an upload could significantly accelerate a coming intelligence explosion (whether by using his/her comparative advantages to recursively self-modify, to accelerate innovation and R&D in computational hardware and/or software, or to create a recursively self-improving AGI) is taken as true, it follows that even the coming of an AGI intelligence explosion specifically, despite being impacted by software improvements as well as computational processing power, may be more impacted by basic processing power (e.g. IPS) than by computational price-performance. Moreover, development in AGI may proceed faster via the vicarious method outlined here – that is, having an upload or team of uploads work on the software and/or hardware improvements AGI relies on – than by working on such improvements directly in “real-time” physicality.

But to what extent is one’s ability to accelerate those factors determining how soon an intelligence explosion occurs actually increased by running oneself as an upload on today’s petascale supercomputers?

Virtual Advantage:

The increase in subjective perception of time alone (if Yudkowsky’s estimate is correct, a ratio of 250 subjective years for every “real-time” hour) gives him/her a massive advantage. It would also allow the upload to counteract and negate any attempts made from “real-time” physicality to stop, slow, or otherwise deter him.

There is another feature of virtual embodiment that could increase the upload’s ability to accelerate such developments. Neural modification – with which he could optimize or increase his current functional modalities (e.g. what we coarsely call “intelligence”), thus amplifying his existing skills and cognitive faculties (as in Intelligence Amplification, or IA), as well as create categorically new functional modalities – is much easier from within virtual embodiment than it would be in physicality. In virtual embodiment, all such modifications become a methodological, rather than technological, problem. To enact such changes in a physically-embodied nervous system would require designing a system to implement those changes, and then actually implementing them according to plan. To enact such changes in a virtually-embodied nervous system requires only a re-organization or re-writing of information. Moreover, in virtual embodiment any changes could be made and then reversed, whereas in physical embodiment reversing such changes would require, again, designing a method and system for implementing such “reversal-changes” in physicality (thereby necessitating a whole host of other technologies and methodologies). And if those changes made further unexpected changes that we couldn’t easily reverse, we might create an infinite regress, wherein the changes made to reverse a given modification in turn create more changes that in turn need to be reversed, ad infinitum.

Thus self-modification (and especially recursive self-modification), toward the purpose of intelligence amplification into Ultraintelligence [7], is easier in virtual embodiment than in physical embodiment, requiring a smaller technological and methodological infrastructure – that is, a smaller host of supporting methods and technologies – and thus less cost as well.

These recursive modifications not only further maximize the upload’s ability to think of ways to accelerate the coming of an intelligence explosion, but also maximize his ability to further self-modify towards that very objective (thus creating the positive feedback loop critical to I. J. Good’s intelligence-explosion hypothesis) – or, in other words, maximize his ability to maximize his ability.

But to what extent is the ability to self-modify hampered by the critical feature of Blind Replication mentioned above – namely, the inability to modify and optimize various performance measures by virtue of the fact that we can’t predictively model the operational dynamics of the system-in-question? Well, an upload could copy himself, enact any modifications, and see the results – or indeed, make a copy to perform this change-and-check procedure. If the inability to predictively model a system made through the “Blind Replication” method does indeed problematize the upload’s ability to self-modify, it would still be much easier to work towards being able to predictively model it, via this iterative change-and-check method, due to both the subjective-perception-of-time speedup and the ability to make copies of himself.
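The change-and-check procedure described above is, in essence, black-box hill-climbing: copy the system, apply a blind modification, evaluate, and keep the copy only if it scores better. A toy sketch (the “system” and scoring function here are stand-ins invented purely for illustration, nothing like an actual brain emulation):

```python
import random

def change_and_check(system, evaluate, perturb, iterations=1000):
    """Black-box iterative improvement: modify a *copy* of the system,
    and keep the copy only if it scores better than the current best."""
    best = system
    best_score = evaluate(best)
    for _ in range(iterations):
        candidate = perturb(list(best))  # always perturb a copy
        score = evaluate(candidate)
        if score > best_score:           # never accept a worse copy
            best, best_score = candidate, score
    return best

# Toy stand-in: the "system" is a list of numbers, scored by their sum.
toy = [0.0] * 5
result = change_and_check(
    toy,
    evaluate=sum,
    perturb=lambda s: [x + random.uniform(-0.1, 0.2) for x in s],
)
print(sum(result) >= sum(toy))  # True: the score never decreases
```

The point of the sketch is that the loop requires no model of *why* a modification helps, only the ability to copy, modify, and score, which is exactly the position a Blind-Replication upload would be in.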

It is worth noting that it might be possible to predictively model (and thus make reliable or stable changes to) the operation of neurons without being able to model how this scales up to the operational dynamics of higher-level neural regions. Thus modifying, increasing, or optimizing existing functional modalities (e.g. increasing synaptic density, or increasing the range of usable neurotransmitters, thus increasing the potential information density of a given signal or synaptic transmission) may be significantly easier than creating categorically new functional modalities.

Increasing the Imminence of an Intelligence Explosion:

So what ways could the upload use his/her new advantages and abilities to actually accelerate the coming of an intelligence explosion? He could apply his abilities to self-modification, or to the creation of a Seed-AI (or more technically a recursively self-modifying AI).

He could also accelerate its imminence vicariously by working on accelerating the foundational technologies and methodologies (or in other words the technological and methodological infrastructure of an intelligence explosion) that largely determine its imminence. He could apply his new abilities and advantages to designing better computational paradigms, new methodologies within existing paradigms (e.g. non-Von-Neumann architectures still within the paradigm of electrical computation), or to differential technological development in “real-time” physicality towards such aims – e.g. finding an innovative means of allocating assets and resources (i.e. capital) to R&D for new computational paradigms, or optimizing current computational paradigms.

Thus there are numerous methods of indirectly increasing the imminence of a coming intelligence explosion (or the likelihood of its occurrence within a certain time-range, which is a less ambiguous measure) – and no doubt many new ones that will be realized only once such an upload acquires such advantages and abilities.

Intimations of Implications:

So… Is this good news or bad news? Like much else in this increasingly future-dominated age, the consequences of this scenario remain morally ambiguous. It could be both bad and good news. But the answer to this question is independent of the premises – that is, two people can agree on the viability of the premises and reasoning of the scenario while drawing opposite conclusions as to whether it is good or bad news.

People who subscribe to the “Friendly AI” camp of AI-related existential risk will be at once hopeful and dismayed. While the scenario might increase their ability to create their AGI (or, more technically, their Coherent Extrapolated Volition Engine [8]), thus decreasing the chances of an “unfriendly” AI being created in the interim, they will also be dismayed that it may allow (though not necessitate) a recursively self-modifying intelligence, in this case an upload, to be created prior to their own – which is the very problem they are trying to mitigate in the first place.

Those who see a distributed intelligence explosion (in which all intelligences are allowed to recursively self-modify at the same rate – thus preserving “power” equality, or at least mitigating “power” disparity [though “capability to affect change” might be a better measure for power] – and in which any intelligence increasing their capably at a faster rate than all others is disallowed) as a better method of mitigating the existential risks associated with an intelligence explosion will also be dismayed. This scenario would allow one single person to essentially have the power to determine the fate of humanity – due to his massively increased “capability” or “power” – which is the very feature (capability disparity/inequality) that the “distributed intelligence explosion” camp of AI-related existential risk seeks to minimize.

On the other hand, those who see great potential in an intelligence explosion to help mitigate existing problems afflicting humanity – e.g. death, disease, societal instability, etc. – will be hopeful because the scenario could decrease the time it takes to implement an intelligence explosion.

I, for one, think that the advantages proffered by accelerating the coming of an intelligence explosion fail to supersede the disadvantages incurred through increased existential risk. That is, I think that the increase in existential risk brought about by putting so much “power” or “capability-to-affect-change” in the (hands?) of one intelligence outweighs the decrease in existential risk brought about by the accelerated creation of an existential-risk-mitigating A(G)I.


Thus, the scenario presented above yields some interesting and counter-intuitive conclusions:

  1. How imminent an intelligence explosion is, or how likely it is to occur within a given time-frame, may be more determined by basic processing power than by computational price performance, which is a measure of basic processing power per unit of cost.


  2. The creation of an upload may be independent of software performance/capability, and may be largely determined by processing power or computational performance/capability alone, whereas AGI is dependent on both computational and software advances.

    • If this second conclusion is true, it means that an upload may be possible today, whereas AGI must still wait for software to catch up with hardware.

    • Thus it may be easier to create an AGI by first creating an upload to accelerate the development of that AGI’s creation, than it would be to work on the development of an AGI directly. Upload+AGI may actually be easier to implement than AGI alone is!


[1] Kurzweil, R. (2005). The Singularity is Near. Penguin Books.

[2] Moravec, H. (1997). When will computer hardware match the human brain? Journal of Evolution and Technology, 1(1). Available at: [Accessed 01 March 2013].

[3] Hall, J. (2006). “Runaway Artificial Intelligence?” Available at: [Accessed 01 March 2013].

[4] Ford, A. (2011). Yudkowsky vs Hanson on the Intelligence Explosion – Jane Street Debate 2011. [Online Video]. August 10, 2011. Available at: [Accessed 01 March 2013].

[5] Drexler, K.E. (1989). Molecular Manipulation and Molecular Computation. In NanoCon Northwest Regional Nanotechnology Conference, Seattle, Washington, February 14-17. [Accessed 01 March 2013].

[6] Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap. Technical Report #2008-3. [Accessed 01 March 2013].

[7] Good, I.J. (1965). Speculations Concerning the First Ultraintelligent Machine. Advances in Computers.

[8] Yudkowsky, E. (2004). Coherent Extrapolated Volition. The Singularity Institute.


Petascale computers are not sufficient to emulate a human mind.  We can barely emulate a small part of a cat brain today, and that takes nearly a gigawatt of power to run.  So no, a wealthy individual today cannot upload.  Not to mention we have no idea how to upload even if we had the hardware capacity.

Raw ops per second will not get you a brain.  You need massive parallelism and the right wiring / programming to get there.  It will be some time before we know how to do that.  We do not yet understand the operational dynamics of the brain or how those give rise to intelligence.  If you are going to use raw speed with less hardware parallelism then you seem to have a software challenge to deal with.

If you are going to run the upload 10^6 times faster, then multiply the required cps accordingly.  Optical computation will not be that much faster unless you can introduce more massive parallelism.

We do need to wait for neuroscience at the very least and likely for software as well.  Personally I am not a believer in the emulation approach to AGI.  It is akin to attempting to do powered flight by emulating a bird in copious detail.  There is also a software/psychological aspect.  We don’t know how much of the human brain is needed for intelligence and sanity as an upload.  If we have to emulate a pretty full virtual world environment to keep the upload sane or even functional this is a pretty major software undertaking.

AFAIK we don’t yet know how to fully emulate even one cortical column to our satisfaction.  We are quite a ways from being able to upload the first human brain state. I would be very surprised if we are able to do this before the price/performance drops to $1000 per human brain equivalent.

An upload is an AGI as far as I am concerned.  Its entire brain and state of existence is non-biological.  That it is imprinted from a previously existing human being doesn’t change a whole lot.  If it also thinks 10^6 faster then it is not going to seem at all human rather quickly.

Why would the upload into a petascale or even exascale computational system be able to rewire said system, which would be essential to some kinds of improvements?  Granted, it could figure itself out a lot faster (if it could be done this way at all). But then what?  It is in any case a machine designed to hold a human brain emulation.  Its substrate is not at all virtual.  Nor is its brain. It is a physical embodiment.

Where is the upload going to copy itself to, exactly? 

CEV?  Even the creators disown it as workable.  Not that I have heard of a replacement yet. Trying to guarantee Friendly AI is, and always was, doomed to failure.  There is no way to guarantee any such thing when we cannot even define what it is to anyone’s satisfaction.

How imminent an intelligence explosion is depends on one or more of

a) sufficient understanding of the human brain to actually be able to emulate it given sufficient hardware;

b) sufficient hardware for the emulation at an affordable enough cost (to at least billionaires or a national government);

c) understanding enough of an uploaded brain’s requirements to design a suitable environment for its sanity/functioning;

d) development of non-emulation AGI.

You cannot get there with only (b).  Not possible.

By Samantha Atkins on Apr 03, 2013 at 7:53pm

I’m wondering if this has not already been done, or at least should be done for an animal.

Maybe we could map a pig’s brain and upload it… obviously we would then need a little software to discern its behaviour; this would give us a starting point though.

By Transhu Manism on Apr 04, 2013 at 4:29am

Thanks for the comments Sam!

I agree that massive parallelism may very well be necessary, but perhaps for different reasons (are you arguing that it’s necessary to achieve the processing power required, in the event that serial processing can’t keep up with the processing achieved via parallelism in the brain?). My stance on why we need parallelism is that the seriality of current computers may be equivalent to disconnecting a given neuron from the rest of the brain every time it fires. But I think that this would disrupt subjective continuity only, and that we could still embody all the outward empirical signs of a mind (i.e. it would act like one) using serial computers.

I argued that the estimates for required processing power made by Kurzweil, Moravec and others have been surpassed by contemporary computers, and that hypothetically, provided we can emulate the brain’s low-level components (e.g. neurons or single-neuron components, like ion channels, ion pumps and sections of membrane), it might be possible to do this today. So I’m not arguing that it is totally independent of software, but rather that it doesn’t really require massive software improvements. We do need more data from neuroscience. But the key feature differentiating uploading from other approaches to AGI is that we could emulate the low-level components with not-too-sophisticated software (at least in comparison to what would be needed to emulate higher-level neural regions on their own scale) and generate the higher-level regions and functions that otherwise would have required massive software improvements. Because the system-in-question (the brain) already exists, we can reverse-engineer it on a very low level, and thereby bypass the need to figure out the comparatively harder problem of how such low-level operations converge to create the emergent functions and phenomena of mind. So while it still needs software of course, it doesn’t need the same kind of categorical paradigm shifts in thinking and methodology that AGI does.

So, I’m not necessarily saying that their estimates are correct, but that petascale computers have surpassed their estimates. So an upload, given sufficient software to emulate the low-level dynamics (e.g. of single neurons), could hypothetically be run today, was my argument.

I would agree that Uploading isn’t required for AGI of course. I think Ben Goertzel has made a similar plane-bird metaphor. But I think that uploading may be significantly easier, because figuring out the operational dynamics on a low level (which would hypothetically translate into or generate the higher level operational dynamics, without our needing to understand them on that scale), and making software to emulate single-neuron or sub-neuron dynamics, will likely be easier than either a.) making software sufficient to simulate the operational dynamics of higher-level components on their own scale, directly, or b.) creating AGI from scratch.

The gist of the argument is that uploading doesn’t need to wait for the LARGE software improvements that AGI does. Thus, it is likely to be possible before AGI. It then follows that if someone were uploaded prior to the creation of an AGI, his subjective speed-up (and increased ease of self-modification) could allow him to create AGI, directly or vicariously, faster than it would otherwise have taken if developed in physicality.

(part 1)

By Franco Cortese on Apr 04, 2013 at 6:40am

This leads to the counter-intuitive conclusion that it MAY be easier to create Upload+AGI than AGI alone/directly. This claim would hold if uploading were easier for any reason, and doesn’t necessarily rely on the reasons used in this piece (specifically, that uploading doesn’t need MASSIVE improvements or fundamental paradigm shifts in software, whereas AGI does).

The only reason any of this is possible is that the system we seek to recreate already exists, and that an understanding of the easier-to-understand lower-level operations would translate into the higher-level, harder-to-understand operations. This is why it doesn’t rely on massive improvements to software (it will still require software, of course, but not fundamental paradigm shifts in the thinking/methodology underlying it) – which it WOULD require if we needed to understand how the lower-level operations converge to create the higher-level/emergent operations. But we don’t, as long as instantiating them on that lower level necessarily translates into instantiating those higher-level operations.

Yes, the substrate “running” the mind is indeed technically physical, but there is still a sufficient difference to make a difference, so to speak. In virtual or 2nd order embodiment, rather than the physical components say moving around, informational components run on those physical components are “moving around”. State-transitions of low-level components represent state-transitions of the virtual components.

For instance, say we used multiple logic gates/elements to simulate one single logic gate. Then we could change the type of virtual logic element simulated by the physical ones, without actually replacing the physical ones. By being embodied virtually, one can make changes to it without actually making a PERFECTLY CORRESPONDING change to the physical elements simulating them. We could replace a virtual logic gate with a different virtual logic gate, without needing to actually replace the physical logic gates simulating the virtual ones. So too with an emulated brain, if physical substrate represents virtual substrate, we can modify the virtual components without making corresponding changes to the physical substrate (other than the small changes necessary for rewriting information). Thus, to self-modify in physicality would involve developing a system to go into the brain and make physical modifications. To self-modify in virtuality involves a re-writing of information. Instead of building a system to change the morphology of a neuron, we could simply replace the information representing that neuron, which is MUCH easier. The physical substrate is changing, or being “rewired” so to speak, but negligibly in comparison to what those changes represent in virtuality. This is why I refer to it as 2nd order embodiment. If we use physical substrate to represent virtual substrate as information, rather than just implementing the brain in physical substrate directly, then making modifications means rewriting information, rather than building systems to enact actual physical changes.
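The logic-gate analogy can be made concrete in a few lines: if a gate is represented as data (here, a plain Python function stored in a dictionary, a representation chosen purely for illustration), then “rewiring” is just rebinding that data, while the machinery that executes it never changes:

```python
# A "virtual circuit": the gates are data, so modifying the circuit means
# rewriting information rather than physically replacing components.
virtual_circuit = {
    "g1": lambda a, b: a and b,  # virtual AND gate
}

def run(circuit, gate, a, b):
    """The fixed 'physical substrate': it never changes; it simply
    executes whatever virtual gate the data currently describes."""
    return circuit[gate](a, b)

print(run(virtual_circuit, "g1", True, False))  # False (AND behaviour)

# "Self-modification": replace the virtual gate by rewriting the data.
virtual_circuit["g1"] = lambda a, b: a or b     # now a virtual OR gate

print(run(virtual_circuit, "g1", True, False))  # True (OR behaviour)
```

The `run` function plays the role of the physical hardware: nothing about it changed, yet the circuit’s behaviour did, which is the sense in which 2nd-order embodiment reduces rewiring to rewriting.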

Right now our substrate isn’t virtual. If we add another level to the hierarchy however, and make the physical substrate virtual (thus using physical hardware to run a program that simulates physical hardware which THEN runs the “software” of the mind - as in brain-emulation) then we can rewire simply by rewriting information, rather than re-wiring the lowest level or actual physical substrate.

This is why I argue that the ease-of-self-modification would be much higher for an upload (2nd-order embodied person) than for a 1st-order embodied person (i.e. us).

And I’d just like to make clear that I wasn’t advocating CEV. I have many problems with it, not gone into in this piece, and I generally agree with your comment. CEV seeks to predict what would be wanted by a mind built for the express purpose of thinking that which we cannot think, and conceiving that which we cannot conceive.

And just to be perfectly clear (I don’t think you argued against this, however): the only reason an intelligence explosion could occur before the necessary hardware is available for, say, $1,000 is that it might be possible to do it earlier, on a supercomputer. An upload running him/herself on such a supercomputer could accelerate developments in either self-modification or AGI-creation, using subjective speedup and increased ease-of-self-modification (which makes intelligence amplification easier), by such a degree that an intelligence explosion occurs within the time it takes for the price of that processing power to drop to $1,000.
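A rough back-of-envelope makes the timing argument concrete. It uses the article's assumed 10^6 subjective speedup; the six-year wait for the $1,000 price point is purely an illustrative assumption, not a figure from the article.

```python
# Back-of-envelope for the timing argument, assuming the article's
# 10^6 subjective speedup. The price-point wait is an arbitrary
# illustrative figure.

speedup = 1_000_000                # subjective seconds per real second
real_years_until_1000_dollars = 6  # hypothetical wait for the $1,000 price point

subjective_years = speedup * real_years_until_1000_dollars
print(f"{subjective_years:,} subjective years of research time")
```

Even under much more conservative speedup estimates, a lone upload on a supercomputer would accumulate enormous subjective research time before the price-performance milestone arrives for everyone else, which is why basic processing speed, rather than price-performance, may be the operative measure.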

Thanks for your comments Sam, it’s nice to finally speak with you “directly”.

By Franco Cortese on Apr 04, 2013 at 6:44am

Several questionable assumptions in this article:

1. There’s no practical technology currently available to scan a large brain with sufficient detail. This problem is discussed in “WBE: A Roadmap” (Ref 6). It’s not at all clear when such technology will become available.

2. We don’t know what level of detail will be “sufficient” to emulate a brain. It’s likely that very high-resolution scanning will be necessary, given the “Blind Replication” goal: when you don’t understand something, you need to copy more detail. This makes 1. even harder to achieve.

3. Even when the technology for 1. and 2. appears, it will allow scanning only of dead brain tissue. This means we will be able to replicate a “dead brain”. Obviously, dead brains are not very intelligent, and it’s not at all clear how to bring a dead brain to life (biological or not), especially if the copy isn’t perfect (and it will be pretty hard to make a “perfect copy”).

4. To address the concern in 3., it might be possible to replicate a simpler brain structure that can potentially be grown into a brain (like an embryonic brain), and then given the opportunity to learn like a human child does.

5. Signal speed does not equate to processing speed. This is especially true when you are blindly replicating something designed to work at slow speeds. Therefore there’s no reason to assume an emulated brain will work any faster than a human brain.
This means the learning stage mentioned in 4. will likely take an amount of time comparable to what a human brain needs to learn (years). As any circuit designer will tell you, it’s hard to speed something up when you don’t understand how it works (and often it’s hard even when you do).

6. Even if the above problems are solved, the resulting intelligence will be something resembling a human being, nothing less, nothing more. What makes you think this being will want to do anything? What makes you think you can force it to do anything? It’s really hard to imagine its motivations or values. It’s not clear if it will have any.
Perhaps if you give it a physical body — a human-like body with human-like senses — then maybe it will do something, but still, its behavior is likely to be very different from that of a healthy person.

7. The central idea of the article is that WBE + AGI is easier than AGI alone.

Let’s look at the current status of both fields: in AGI, probably the most impressive achievement to date is IBM Watson. It was able to “understand” sophisticated (but very specific) questions, and answer them correctly given access to relevant information.
With all its limitations, Watson is a pretty impressive piece of technology. It’s not hard to see the future where Watson will grow into something that can do much more, given more processing power, new/better algorithms, and access to more data.

WBE progress, on the other hand, can be evaluated by looking at the Blue Brain Project (the core of the future Human Brain Project). They showed a simulation of about 100 cortical columns. It’s not entirely clear how well those columns are simulated, because no functional virtual organism has ever been demonstrated. Not even the smallest brain has been simulated: the C. elegans worm has only ~300 neurons, all fully mapped, yet no one has been able to simulate it so far. David Dalrymple is currently working on it, and he claims to be able to do it in the near future — we’ll see.
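To make concrete what simulation "at the scale of individual neurons" involves, here is a minimal leaky integrate-and-fire neuron, a standard textbook model. This is a toy far below the fidelity WBE would require; all parameter values are arbitrary illustrative choices.

```python
# Toy neuron-scale simulation: one leaky integrate-and-fire neuron.
# Real WBE would need vastly more biophysical detail per neuron;
# every parameter here is an arbitrary illustrative value.

def simulate_lif(input_current, steps, dt=1.0,
                 tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return spike times of one LIF neuron under constant input."""
    v = v_rest
    spikes = []
    for step in range(steps):
        # Membrane potential leaks toward rest and integrates the input.
        v += dt * (-(v - v_rest) / tau + input_current)
        if v >= v_thresh:          # threshold crossing: spike, then reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

print(simulate_lif(input_current=0.1, steps=100))
```

Even this single-equation caricature of a neuron must be integrated step by step; multiplying it by ~300 neurons plus their synapses and neuromodulation hints at why even C. elegans remains unsimulated.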

Bottom line — so far it seems AGI has been more successful than WBE. Of course, this may change in the future (especially given the amount of funding promised for the HBP), but I don’t see any arguments in the article to support the claim that WBE is “easier” than AGI.


By Michael on Jun 21, 2013 at 2:39pm

Franco, I don’t quite understand what you meant here:

“...seriality of current computers may be equivalent to disconnecting a given neuron from the rest of the brain every time it fires. But I think that this would disrupt subjective continuity only…”

If we are talking about “Blind Replication”, and it means copying the “structure”, not the “function”, of the brain, then it should not matter whether we perform the simulation on parallel or serial computers.

Also, I’m not sure if you actually read “WBE: A Roadmap” by Sandberg and Bostrom, but they convincingly show that “Blind Replication” is an extremely hard undertaking, and is likely to take many decades. The reason I’m questioning your knowledge of that paper is that you repeatedly call a simulation on the scale of individual neurons a “low-level simulation”. In fact, that would be a high-level simulation, at least according to S & B: they identified 11 levels of detail for WBE, and the scale of individual neurons is 4th from the top.
Also, Henry Markram repeatedly mentioned the need for molecular level simulations.

BTW, the plane/bird metaphor was made by Richard Feynman in his excellent “Lectures on Computation”.

By Michael on Jun 21, 2013 at 2:58pm
