I shouldn’t have been surprised at the controversy that arose.
The cause was an hour-long lecture with 55 slides, ranging far and wide over a series of disruptive near-future scenarios, covering both upside and downside. The basic format of the lecture was: first the good news, and then the bad news. As stated on the opening slide:
Some illustrations of the enormous potential first, then some examples of how adding a high level of ambient stupidity might mean we might make a mess of it.
Anyone can predict stuff, but only a few get it right…
Ian Pearson has been a full time futurologist since 1991, with a proven track record of over 85% accuracy at the 10 year horizon.
A Singularitarian Utopia Or A New Dark Age?
We’re all familiar with the idea of the singularity, the end-result of rapid acceleration of technology development caused by positive feedback. This will add greatly to human capability, not just via gadgets but also through direct body and mind enhancement, and we’ll mess a lot with other organisms and AIs too. So we’ll have superhumans and super AIs as part of our society.
But this new technology won’t bring a utopia. We all know that some powerful people, governments, companies and terrorists will also add lots of bad things to the mix. The same technology that lets you enhance your senses or expand your mind also allows greatly increased surveillance and control, eventually to the extremes of direct indoctrination and zombification. Taking the forces that already exist, of tribalism, political correctness, secrecy for them and exposure for us, and so on, it’s clear that the far future will be a weird mixture of fantastic capability, spoiled by abuse…
There were around 200 people in the audience, listening as Ian progressed through a series of increasingly mind-stretching technology opportunities. Judging by the comments posted online afterwards, some of the audience deeply appreciated what they heard:
Thank you for a terrific two hours, I have gone away full of ideas; I found the talk extremely interesting indeed…
I really enjoyed this provocative presentation…
Provocative and stimulating…
Very interesting. Thank you for organizing it!…
Amazing and fascinating!…
But not everyone was satisfied. Here’s an extract from one negative comment:
After the first half (a trippy sub-SciFi brainstorm session) my only question was, “What Are You On?”…
Another audience member wrote his own blogpost about the meeting:
A Singularitanian Utopia or a wasted afternoon?
…it was a warmed-over mish-mash of technological cornucopianism, seasoned with Daily Mail-style reactionary harrumphing about ‘political correctness gone mad’.
This is just a sample of the negative feedback; I’ll get to more shortly. As I review what was said in the meeting, and look at the spirited ongoing exchange of comments online, some thoughts come to my mind:
* Big ideas almost inevitably provoke big reactions; this talk had a lot of particularly big ideas
* In some cases, the negative reactions to the talk arise from misunderstandings, due in part to so much material being covered in the presentation
* In other cases, I see the criticisms as reactions to the seeming over-confidence of the speaker (“…a proven track record of over 85% accuracy”)
* In yet other cases, I share the negative reactions the talk generated; my own view of the near-future landscape significantly differs from the one presented on stage
* In nearly all cases, it’s worth taking the time to progress the discussion further
* After all, if we get our forecasts of the future wrong, and fail to make adequate preparations for the disruptions ahead, it could make a huge difference to our collective well-being.
So let’s look again at some of the adverse reactions. My aim is to present them in a way that lets people who didn’t attend the talk follow the analysis.
(1) Is imminent transformation of much of human life a realistic scenario? Or are these ideas just science fiction?
The main driver for belief in the possible imminent transformation of human life, enabled by rapidly changing technology, is the observation of progress towards “NBIC” convergence.
Significant improvements are taking place, almost daily, in our capabilities to understand and control atoms (Nano-tech), genes and other areas of life-sciences (Bio-tech), bits (Info-comms-tech), and neurons and other areas of mind (Cogno-tech). Importantly, improvements in these different fields are interacting with each other.
As Ian Pearson described the interactions:
* Nanotech gives us tiny devices
* Tiny sensors help neuroscience figure out how the mind works
* Insights from neuroscience feed into machine intelligence
* Improving machine intelligence accelerates R&D in every field
* Biotech and IT advances make body and machine connectable
Will all the individual possible applications of NBIC convergence described by Ian happen in precisely the way he illustrated? Very probably not. The future’s not as predictable as that. But something similar could well happen:
* Cheaper forms of energy
* Tissue-cultured meat
* Space exploration
* Further miniaturization of personal computing (wearable computing, and even “active skin”)
* Smart glasses
* Augmented reality displays
* Gel computing
* IQ and sensory enhancement
* Dream linking
* Human-machine convergence
* Digital immortality: “the under 40s might live forever… but which body would you choose?”
(2) Is a focus on smart cosmetic technology an indulgent distraction from pressing environmental issues?
Here’s one of the comments raised online after the talk:
Unfortunately any respect due was undermined by his contempt for the massive environmental challenges we face.
Trivial contact lens / jewellery technology can hang itself, if our countryside is choked by yoghurt factory fumes.
The reference to jewellery took issue with remarks in the talk such as the following:
Miniaturization will bring everyday IT down to jewellery size…
Decoration; Social status; Digital bubble; Tribal signalling…
In contrast, the talk positioned greater use of technology as the solution to environmental issues, rather than as something to exacerbate these issues. Smaller (jewellery-sized) devices, created with a greater attention to recyclability, will diminish the environmental footprint. Ian claimed that:
* We can produce more of everything than people need
* Improved global land management could feed up to 20 billion people
* Clean water will be plentiful
* We will also need less and waste less
* Long term pollution will decline.
Nevertheless, he acknowledged that there are some short-term problems, ahead of the time when accelerating NBIC convergence can be expected to provide more comprehensive solutions:
* Energy shortage is a short to mid term problem
* Real problems are short term.
Where there’s room for real debate is the extent of these shorter-term problems. Discussion on the threats from global warming brought these disagreements into sharp focus.
(3) How should singularitarians regard the threat from global warming?
Towards the end of his talk, Ian showed a pair of scales, weighing up the wins and losses of NBIC technologies and a potential singularity.
The “wins” column included health, growth, wealth, fun, and empowerment.
The “losses” column included control, surveillance, oppression, directionless, and terrorism.
One of the first questions from the floor, during the Q&A period in the meeting, asked why the risk of environmental destruction was not on the list of possible future scenarios. This criticism was echoed by online comments:
The complacency about CO2 going into the atmosphere was scary…
If we risk heading towards an environmental abyss let’s do something about what we do know – fossil fuel burning.
During his talk, I picked up on one of Ian’s comments about not being particularly concerned about the risks of global warming. I asked, what about the risks of adverse positive feedback cycles, such as increasing temperatures triggering the release of vast ancient stores of methane gas from frozen tundra, accelerating the warming cycle further? That could lead to temperature increases that are much more rapid than presently contemplated, along with lots of savage disturbance (storms, droughts, etc).
Ian countered that it was a possibility, but he had the following reservations:
* He thought these positive feedback loops would only kick into action when baseline temperature rose by around 2 degrees
* In the meantime, global average temperatures have stopped rising over the last eleven years
* He estimates he spends a couple of hours every day, keeping an eye on all sides of the global warming debate
* There are lots of exaggerations and poor science on both sides of the debate
* Other factors such as the influence of solar cycles deserve more research.
Here’s my own reaction to these claims:
* The view that global average temperatures have stopped rising, is, among serious scientists, very much a minority position; see e.g. this rebuttal on Carbon Brief
* Even if there’s only a small probability of a runaway spurt of accelerated global warming in the next 10-15 years, we need to treat that risk very seriously – in the same way that, for example, we would be loath to take a transatlantic flight if we were told there was a 5% chance of the airplane disintegrating mid-flight.
Nevertheless, I did not want the entire meeting to divert into a debate about global warming – “that deserves a full meeting in its own right”, I commented, before moving on to the next question. In retrospect, perhaps that was a mistake, since it may have caused some members of the audience to mentally disengage from the meeting.
(4) Are there distinct right-wing and left-wing approaches to the singularity?
Here’s another comment that was raised online after the talk:
I found the second half of the talk to be very disappointing and very right-wing.
Someone who lists ‘race equality’ as part of the trend towards ignorance has shown very clearly what wing he is on…
In the second half of his talk, Ian outlined changes in norms of beliefs and values. He talked about the growth of “religion substitutes” via a “random walk of values”:
* Religious texts used to act as a fixed reference for ethical values
* Secular society has no fixed reference point so values oscillate quickly.
* 20 years can yield 180 degree shift
* e.g. euthanasia, sexuality, abortion, animal rights, genetic modification, nuclear energy, family, policing, teaching, authority…
* Pressure to conform reinforces relativism at the expense of intellectual rigor
A complicating factor here, Ian stated, was that:
People have a strong need to feel they are ‘good’. Some of today’s ideological subscriptions are essentially secular substitutes for religion, and demand same suspension of free thinking and logical reasoning.
A few slides later, he listed examples of “the rise of nonsense beliefs”:
e.g. new age, alternative medicine, alternative science, 21st century piety, political correctness
He also commented that “99% are only well-informed on trivia”, such as fashion, celebrity, TV culture, sport, games, and chat virtual environments.
This analysis culminated with a slide that strongly resonated with me: a curve of “anti-knowledge” accelerating and overtaking a curve of “knowledge”:
In pursuit of social compliance, we are told to believe things that are known to be false.
With clever enough spin, people accept them and become worse than ignorant.
So there’s a kind of race between “knowledge” and “anti-knowledge”.
One reason this resonated with me is that it seemed like a different angle on one of my own favorite metaphors for the challenges of the next 15-30 years – the metaphor of a dramatic race:
* One runner in the race is “increasing rationality, innovation, and collaboration”; if this runner wins, the race ends in a positive singularity
* The other runner in the race is “increasing complexity, rapidly diminishing resources”; if this runner wins, the race ends in a negative singularity.
In the light of Ian’s analysis, I can see that the second runner is aided by the increase of anti-knowledge: over-attachment to magical, simplistic, ultimately misleading worldviews.
However, it’s one thing to agree that “anti-knowledge” is a significant factor in determining the future; it’s another thing to agree which sets of ideas count as knowledge, and which as anti-knowledge! One of Ian’s slides included the following list of “religion substitutes”:
Animal rights, political correctness, pacifism, vegetarianism, fitness, warmism, environmentalism, anti-capitalism
It’s no wonder that many of the audience felt offended. Why list “warmism” (a belief in human-caused global warming), but not “denialism” (denial of human-caused global warming)? Why list “anti-capitalism” but not “free market fundamentalism”? Why list “pacifism” but not “militarism”?
One online comment made a shrewd observation:
Ian raised my curiosity about ‘false beliefs’ (or nonsense beliefs as Ian calls them) as I ‘believe’ we all inhabit different belief systems – so what is true for one person may be false for another… at that exact moment in time.
And things can change. Once upon a time, it was a nonsense belief that the world was round.
There may be 15% of truth in some nonsense beliefs…or possibly even 85% truth. Taking ‘alternative medicine’ as an example of one of Ian’s nonsense beliefs – what if two of the many reasons it was considered nonsense were that (1) it is outside the world (the system) of science and technology and (2) it cannot be controlled by the pharmaceutical companies (perhaps our high priests of today)?
(5) The role of corporations and politicians in the approach to the singularity
One place where the right-wing / left-wing division becomes more acute is the question of whether anything special needs to be done to control the behaviour of corporations (businesses).
One of Ian’s strong positive recommendations, at the end of his presentation, was that scientists and engineers should become more actively involved in educating the general public about issues of technology. Shortly afterward, the question came from the floor: what about actions to educate or control corporations? Ian replied that he had very little to recommend to corporations, over and above his recommendations to the individuals within these corporations.
My own view is different. From my life inside industry, I’ve seen numerous cases of good people who are significantly constrained in their actions by the company systems and metrics in which they find themselves enmeshed.
Indeed, just as people should be alarmed about the prospects of super-AIs gaining too much power, over and above the humans who created them, we should also be alarmed about the powers that super-corporations are accumulating, over and above the powers and intentions of their employees.
The argument to leave corporations alone finds its roots in ideologies of freedom, and in the observation that government regulation of corporations often has undesirable side-effects. Nevertheless, that’s just an argument for being smarter and more effective in how the regulation works – not an argument to abstain from regulation altogether.
The question of the appropriate forms of collaborative governance remains one of the really hard issues facing anyone concerned about the future. Leaving corporations to find their own best solutions is, in my view, very unlikely to be the optimum approach.
In terms of how “laissez-faire” we should be, in the face of potential apocalypse down the road, I agree with the assessment near the end of Jeremy Green’s blogpost:
Pearson’s closing assertion that in the end our politicians will always wake up and pull us back from the brink of any disaster is belied by many examples of civilisations that did not pull back and went right over the edge to destruction.
After the presentation in Birkbeck College ended, around 40-50 of the audience regrouped in a nearby pub, to continue the discussion. The discussion is also continuing, at a different tempo, in the online pages of the London Futurists meetup. Ian Pearson deserves hearty congratulation for stirring up what has turned out to be an enlightening discussion – even though there’s heat in the comments as well as light!
Evidently, the discussion is far from complete…
This essay was originally posted at David’s blog.