Transhumanity
where strange brilliant ideas for the future intermingle and breed…


Utopia Sucks

Posted: Mon, February 04, 2013 | By: Scott Jackisch



One often stumbles upon Utopian visions in the thought-space of futurists.  H.G. Wells was supposedly one such visionary, and Kim Stanley Robinson touted him at Humanity+ last year as having had a massive positive effect on society, up to and including Bretton Woods.  But Pinker takes a dimmer view of Utopians: he suggests that any worldview promising infinite utility lasting forever rationally justifies the most horrible atrocities committed toward that end, and he pulls out Pol Pot and Hitler as his bogeymen.  The fact that a somewhat coherent set of alleged Utopians can include both Pol Pot and H.G. Wells suggests some problems.  First, terms like Utopia or Utility or Infinite Fun are poorly defined, and even if we could all agree on a universal good, the best approach to reach those ends is difficult to determine.
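Pinker's worry can be made concrete with a toy expected-value sketch (my illustration, not Pinker's actual argument, and not anyone's real numbers): once a plan carries any nonzero chance of an unbounded payoff, every finite cost gets swamped.

```latex
% Toy expected-value comparison (illustrative only).
% Plan A: commit an atrocity of finite cost C > 0 for a chance p > 0
%         of infinite utility.
% Plan B: a humane plan with some finite payoff V.
\[
  \mathbb{E}[\text{Plan A}] \;=\; p \cdot \infty \;-\; C \;=\; \infty
  \;>\; \mathbb{E}[\text{Plan B}] \;=\; V \;<\; \infty
\]
% Under naive expected-utility reasoning, Plan A "wins" for any p > 0,
% however tiny -- which is exactly the pathology Pinker is flagging.
```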

Take Kevin Kelly’s criticism of Thinkism, which might suggest that we need something more than raw intelligence to solve the world’s problems.  Michael Anissimov understandably takes exception to that argument, and Kelly’s argument is clearly flawed in some ways.  (Uh, you can already simulate biology today, Mr. Kelly.)  But progress toward any grand social goal, let alone Utopia, is deeply constrained by messy cultural artifacts like economics, politics, and even (God help us) religion.  We have enough food to feed the world, and we have the technology to get to Mars (or close enough).  So why don’t we do those things?  Clearly not everyone agrees that feeding the world or going to Mars are the right things to do.  So how to choose a Utopia?  One solution is to create a Godlike AI to rule them all, overriding all these conflicting goals by assuming everyone would agree if they were just simulated properly.

This is problematic for a bunch of reasons.  But I fear that math is a poor tool for solving the best-path-to-utopia equation, err, problem.  Too much hand-waving is required.  For example, even if we assume that Infinite Fun will be had by populating the universe with “humans,” how do we assign probabilities to different approaches to achieving that?  Even if we drink the Thinkism Kool-Aid, one could argue that Augmented Intelligence is more likely than Artificial Intelligence.  I mean, we have a good track record with Augmented Intelligence.  Arguably, every application we call AI now is just Augmented Intelligence: humans are running these programs and debugging the code.  Maybe we could just bootstrap to rulers of the universe by augmenting a bunch of humans.

More likely is that cultural artifacts like economics, politics, religion, and even taste will bog us down.  Maybe that’s okay.  Maybe static visions of Utopia are basically over-fitting and wouldn’t be adaptive to changing environments.  A caveman would probably have imagined a Utopia of endless summer with fat, lazy herds of meat passing continuously by his cave…  Actually, that doesn’t sound bad now that I think of it, but you get my point.

This essay first appeared in Scott’s blog, Oakland Futurist.



Comments:

I am not sure why anyone would think working towards utopia justifies atrocity to get there; I certainly don’t hold this view. I think the path to utopia will make life incrementally better until life is perfect. If the path to utopia does not entail incremental improvements, then the definition of utopia is misguided, or perhaps for propaganda purposes “authoritarianism” has been deceptively named “utopia.” Pol Pot was a dictator implementing communism, whereas utopia is about total freedom from all governments or dictators; utopia entails an absolute lack of controlling authority, thus there is no possibility for people to be oppressed.

Furthermore, I disagree that Hitler and Pol Pot were Utopians; they are not renowned for stating they wanted to create “utopia.” They allegedly wanted to create a better world, but this has generally been the case with all leaders throughout history: leaders typically state they are trying to create a better world. Nazism (or National Socialism, or The National Socialist German Workers’ Party) makes no reference whatsoever to “utopia,” and neither does Mein Kampf, thus it seems the linkage of utopia to Hitler or Pol Pot is merely a smear, a logical fallacy, guilt by erroneous (false) association. For whatever reason some people want to fabricate history by making connections that simply do not exist. The utopia-Hitler-Pol-Pot connection is comparable to stating artificial intelligence is bad because Hitler was an artificial intelligence researcher. Even if Hitler had been an AI researcher, this association would not condemn all AI research, similar to how Nazi genetic engineering does not condemn the whole field of gene therapy.

I have previously addressed Kelly’s ludicrous “thinkism” and I will try to address this preposterous theory again, though I struggle to respond to Kelly’s absurd attempt at rational thinking because Kelly is guilty of hand-waving and empty bluster; he is not saying anything sensible, he is building castles in the sand or sky, and his views about thinkism are pure fantasy. You see, thinking is actually very valuable, and the Singularity is about the application of thinking to create very proficient technology. Thinking is constantly creating technological devices able to function at quicker and quicker speeds; we also see how 3D printers are improving the fineness of their prints and producing them in quicker times, even at this early stage of 3D-printing evolution. The intelligence explosion will not happen in one thinking leap from our current technology to explosive super-intelligence. Before the Singularity happens there will be an increasing proficiency of technology: an exponential growth where successive tech is better (quicker) than previous tech. Due to the nature of exponential growth the progress becomes explosive, but there are logical steps, incremental gains, before the Singularity happens; it is not merely thinking isolated from the products (devices, technology) of thinking. The application of thinking does solve problems; you only need to look at the world around you to see how thinking has solved problems. Look at the computer you are using: it was created by thinking, the application of thinking to manifest ideas. Kevin Kelly actually needs to think. I think Kevin Kelly’s views from 2008 are outdated; his views come from an age before Siri, Watson, or Robot Adam had risen to prominence.

In the year 2013 we see how AIs are becoming embodied in the world via Siri, Watson, prospective Google Cars, and many other situations where rudimentary AI is being used. AI within our world will become more prolific, similar to how mobile phones have quickly become an integral and very popular part of our world; we will see a speedy evolution of AI competence. The future is clear if you have a modicum of insight. Kelly’s views are similar to someone who thinks cancer cannot be cured because it has never been cured previously.

Finally, let’s consider the issue of scarcity regarding food to feed the world. You need to realise scarcity continues to persist despite greater abundance; furthermore, greed has been a crucial survival trait during times of greater scarcity, and even today greedier people (billionaires) live longer (statistics show poor people die earlier), thus there is today an incentive to be greedy despite greater abundance. Considering how scarcity has defined life on Earth, it is natural for greed to persist even when there is less need to be greedy due to greater abundance. Bill Gates is giving away a large portion of his wealth, but before such altruism becomes more popular we will need to see greater advances towards Post-Scarcity. People will need to feel more secure regarding abundance, whereas abundance is relatively new; but without doubt greater abundance is coming.

By Singularity Utopia on Feb 05, 2013 at 2:40am

“Did you know that the first Matrix was designed to be a perfect human world? Where none suffered, where everyone would be happy. It was a disaster. No one would accept the program…I believe that, as a species, human beings define their reality through suffering and misery. The perfect world was a dream that your primitive cerebrum kept trying to wake up from.” - Agent Smith, The Matrix

By Reeve Armstrong on Feb 05, 2013 at 1:47pm

So utopia is impossible because it didn’t work out in the Matrix film? A film, a fictional film, really? So maybe we simply need to make a film where utopia is possible, if films define reality?

By Singularity Utopia on Feb 06, 2013 at 12:12am

Singularity Utopia,

If you don’t understand how (near) infinite good lasting forever justifies atrocities, then you just haven’t done the math: 

http://lesswrong.com/lw/kn/torture_vs_dust_specks/

By Scott J on Feb 17, 2013 at 2:32am

