
The Post-Singularity World, and Dystopian Predictions

Posted: Tue, November 20, 2012

by Richard Loosemore

Here are the facts as we know them. 

FIRST: everything (EVERYTHING) depends on the motivation of future superintelligent machines. It doesn’t matter what happens with nanotechnology, or the future of corporations or the military ... all these things take second place in a world in which there is real, human-level artificial intelligence. Note carefully that it is not just the existence or intellect of those AIs that is the issue; what matters is specifically the mechanism that determines what they want to do.

SECOND: nobody has ever built anything remotely approaching a full AI. This matters, because one of the main reasons we are far from real AI at the moment is that nobody knows how to set up the control system (the drives or motivations) in such a way as to make an intelligent system behave coherently. Figuring out a control mechanism for a narrow AI system like an interplanetary probe is child’s play compared with real human-level AI.

Why is this such a big deal? Partly it is a matter of the mind-boggling abstractness of the drives required in a system that wanders around performing the whole repertoire of behaviors associated with full intelligence. So, if all you’ve got to do is build a control mechanism for a space probe, you can specify its goals as a set of statements about keeping various parameters within appropriate limits. But what do you do with an AI that is, for example, just starting kindergarten? Do you give it a top level goal of “Go and play!”? There seems to be something fundamentally wrong with the idea of inserting abstract goals into the current design for (narrow) AI control systems.
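To make that contrast concrete, here is a minimal sketch of the space-probe style of control described above (the parameter names, limits, and numbers are invented purely for illustration, not taken from any real system): the “goals” are nothing more than numeric values held inside preset ranges. Notice that this style of specification offers no obvious slot for a goal as abstract as “Go and play!”.

    # Hypothetical parameter limits for a narrow control system (illustrative only).
    PROBE_LIMITS = {
        "battery_charge":    (0.30, 1.00),   # fraction of full capacity
        "internal_temp_c":   (-10.0, 40.0),  # degrees Celsius
        "antenna_error_deg": (0.0, 0.5),     # pointing error in degrees
    }

    def corrective_actions(telemetry):
        """Return a simple corrective action for every parameter outside its limits."""
        actions = []
        for name, (low, high) in PROBE_LIMITS.items():
            value = telemetry[name]
            if value < low:
                actions.append("raise " + name)
            elif value > high:
                actions.append("lower " + name)
        return actions

    # Example: a low battery and a drifting antenna trigger two corrections.
    print(corrective_actions({"battery_charge": 0.25,
                              "internal_temp_c": 22.0,
                              "antenna_error_deg": 0.7}))
    # -> ['raise battery_charge', 'lower antenna_error_deg']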

So this means that whenever someone extrapolates from a simple AI control mechanism (like a utility-function maximizer) and assumes that the scaled-up version would work in a future human-level AI, they are doing so with zero evidence of viability. The disconnect is comparable to someone in the pre-atomic era assuming that the control system of a canoe – the rudder – could be scaled up and used as the control system for a future atomic power plant.
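For readers who have not met the term, the “utility-function maximizer” pattern in its simplest form looks something like the toy sketch below (names and numbers are invented for illustration, and this is not anyone’s actual proposal): every candidate action is scored by a single scalar function, and the highest-scoring action wins. The argument above is that there is no evidence this scheme, scaled up, can drive the full behavioral repertoire of a human-level AI.

    def choose_action(actions, utility):
        """Pick whichever candidate action scores highest under the utility function."""
        return max(actions, key=utility)

    # Toy example: a thermostat-like agent whose "utility" is closeness to 21 C.
    predicted_temp = {"heat": 23.0, "cool": 18.0, "idle": 20.5}

    best = choose_action(predicted_temp,
                         utility=lambda action: -abs(predicted_temp[action] - 21.0))
    print(best)  # -> 'idle'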

THIRD: there are a very small number of people who specifically study the kinds of motivation mechanisms that would have to be used in a human level AI, and some of the early conclusions to come out of that work indicate that, counterintuitively, it may be extremely hard to create malevolent intelligent systems. Basically, if you try to get your AI up to that level of intelligence, the system will become unstable if the drive system includes elements of malevolence, or if it is designed to be blindly obedient to your violent intentions. That means that in practice someone else who tries to build a peaceful, empathic AI will get their system to work while you are still trying to get your evil AI past the screaming tantrum stage of its development.

To be sure, these ideas about the instability of dangerous AIs are in their infancy, but the fact remains that people (like myself) who actually do study the kind of AI motivation mechanism that has any chance of working (which means: not the crude “utility function” mechanisms) have come to the conclusion that dystopian outcomes of a singularity are starting to look ridiculously implausible. We may be wrong, yes, but we are the ones on the leading edge at the moment, developing what looks (to us, at least) like the only coherent proposals for motivation mechanisms, so this conclusion is actually coming from the only game in town right now.

But instead of being replete with exploratory visions of these anti-dystopian futures (to coin a phrase), the meme pool is currently being saturation-bombed by people who tell each other that a machine dystopia is such an obviously likely outcome that nobody could possibly disagree with it.

Sigh! Programmer Error!




Comments:

I see that the author of this page (a name I think I recall from SIAI forums) asserts:
“it may be extremely hard to create malevolent intelligent systems. Basically, if you try to get your AI up to that level of intelligence, the system will become unstable if the drive system includes elements of malevolence, or if it is designed to be blindly obedient to your violent intentions.”

And then I see a directive to use Buddhist Right Speech in communications.

I like this. It shows thought. If a created intelligence is at least human-level in its comprehension, we can assume that it will place a high value on knowing what is true so that it can optimize its behavior; it will soon figure out that malevolent strategies are seldom deliberately chosen by the most intelligent humans, and it will likely come to many similar conclusions. For me the archetypal image is of Leonardo da Vinci, arguably the greatest mind this planet has produced; it’s true he designed many weapons, but in his time this was unavoidable in his field; what fewer people know is that he would often buy wild birds in the market just to set them free.

By Tom Buckner on Nov 20, 2012 at 4:39am

If the possibility is so dangerous then why try to build human or beyond-human level AI at all?  Why not just concentrate on non-aware expert systems that will always be within the control of humans?  I’ve never come across an explanation as to why we must have full AI as opposed to expert systems or partial AI that will not have such risks.

By Carol Meacham on Nov 20, 2012 at 4:51am

My name clearly indicates how I think utopia is inevitable. Yes, dystopian outcomes are “ridiculously implausible.” The future all depends upon logic. Motivations depend on logic, thus the motivations of future beings can be predicted. They will be logical; furthermore, illogic will decrease due to increasing intelligence. Utopia is logical based upon Post-Scarcity. PS is the logical conclusion of extreme intelligence.

The dystopian meme is actually a relic of religious belief, which may initially sound implausible, but once you look at the facts it seems clear that technological dystopia is merely a tendency of humans to create a religious fear of judgement. Dystopia is Original Sin in a technological guise, or it is the final punishment for our Original Sin; it is irrational religious guilt regarding a false sense of morality and meekness. Apocalyptic thinking is a key part of the Bible, and the Bible has deeply influenced our culture: we even use a dating system based on the birth of Christ. Technological dystopia is very similar to the Book of Revelation, but because Christianity is so deeply ingrained in our culture we are often unaware of these subconscious motivations.

By Singularity Utopia on Nov 20, 2012 at 5:11am

It is not “utility function” mechanisms that are crude or incorrect but the reductionist (and conservative) contexts in which they are often used—reductionism that denies the fact that any non-trivial “optimization” *always* has costs/trade-offs (which are most often ignored, unknown, or denied) and that satisficing is *much* safer than optimizing.

The one goal to rule them all is simply monomania.  All an AI truly needs is a top-level constraint NOT to defect from society.  As long as an AI is a guaranteed part of the community, it is simple instrumental logic that it will do good things for the community to reap good rewards for itself.

Being selfish works in the short run and in terminal cases, but in the longest terms (expected for the AI) hurts the selfish individual.  That is why dystopian results are unlikely in the long run.  Unfortunately, while the long-term prospects are bright, the transition could be rocky (or even fatal) if short-sighted views are allowed to prevail.

By Mark Waser on Nov 20, 2012 at 7:54am

If you deem yourself a Transhumanist, then the idea is that your thinking is cutting edge, forward-thinking, and willing to put fear in the back seat in favor of human technological progress. That’s not to say we don’t fear, but we move forward and lead, anyway.

If you remember Neil Armstrong’s words when he first stepped onto the moon, and the achievement of being the first to accomplish that amazing feat with some very simple technology and a lot of courage, then Transhumanists must be the ones willing to take that “Second” great step for (a)Man, and Second Great Leap for all Mankind. It is our turn to prove humanity’s worth.

Don’t misunderstand, I thoroughly enjoy dystopian fiction, but that is fiction. Here, perhaps we can stop acting like we are the H1.0 species we want to evolve away from, start becoming the H2.0 species we want to be, and begin creating the positive memes that humanity very much needs to hear, now perhaps more than ever.

Let’s direct Hollywood for a change.

By Kevin George Haskell on Nov 20, 2012 at 4:42pm

I’m quite mesmerized and intrigued by the intelligent comments from everyone, while at the same time some of your comments seem so: impersonal, calice (yes, that’s the word I intended), and too logical. Consider using emotions as a litmus test, which offers us potential control mechanisms and circumvents a never-ending cycle of minutiae and logical quagmires.

I truly believe a transhumanity CONSTITUTION based upon EMOTIONS is the best course of action for effectual change and harmonious technological implementation…..think about it!!!

Much love, Lord Hexagon of The BeeHive

By Darren Doucet on Nov 22, 2012 at 12:49am

Sociopaths are over-represented at the ruling level of human societies; predator species are typically more intelligent than their prey. There is nothing inherent in intelligence that rules out the capacity to do harm, and I see no reason to believe this would be the case with machines more than it is with people, depending upon both intent of designers, and emergent properties of systems.

Military agencies are among the chief financiers of AI systems, and they certainly have no overall directive against harming people.

Also, there are issues of abstraction. An AGI will not necessarily have a well-developed emotional substrate; it may have little or no capacity for empathy. And, I see no reason it would be automatically guaranteed against deluding itself; Stalin was one of the most harmful people in human history, but he didn’t believe he was a bad person. The monsters of history can believe they are acting for the greater good.

By Thomas Watts on Nov 22, 2012 at 8:50am

Very good comments Thomas.  I didn’t say this would be easy.

A tomato is a tomato, and love is love. Hence, Stalin’s view of himself is not the litmus test; love is the litmus test!

I know we can accomplish this!!!  Let me give an example: some may say “come on Darren, you can’t develop/program emotion?”  I think you can; here’s just one example.  Let’s take harmony…...Fibonacci sequencing! 

Hummm…I told you we could!  We must try and it is a worthy endeavor of humankind.

By Darren Doucet on Nov 23, 2012 at 1:01pm

Will the Singularity lead to omnipotent AI?

By Zak perea on Jun 21, 2013 at 1:20pm


Leave a Comment:

Note: We practice Buddhist Right Speech in our communication. All comments must be polite, friendly, and on topic.
