
The Fourth Evolutionary Transition - technology’s role in creating a new type of organism   

Posted: Wed, December 19, 2012 | By: Extropia DaSilva



Welcome to this transcript of my Christmas 2012 Thinkers Lecture. This year, the topic is ‘the fourth evolutionary transition’. What is that, exactly? We begin to get a clue by looking at termites.

TERMITES AND THE THREE EVOLUTIONARY TRANSITIONS

The science writer Lewis Thomas described termites as one of nature’s seven wonders. What is so amazing about them cannot be seen by examining termites as individuals. As Thomas wrote, “there is nothing at all wonderful about a single, solitary termite”. But something wonderful happens when the number of termites reaches a critical mass. Thomas described what happens as if the termites:

“had suddenly received a piece of extraordinary news, they organise into platoons and begin stacking up pellets to precisely the right height, then turning the arches to connect the columns, constructing the cathedral and its chambers in which the colony will live out its life”.

So, which termite masterminded all this construction? Wrong question. Because, as Thomas pointed out, termites:

“are not the dense mass of individual insects they appear to be; they are an organism, a thoughtful, meditative brain on a million legs”. 

When a large number of animals of one species coordinate their behaviour to the extent that termites do, the collective is described as a ‘superorganism’.

Put a termite under the microscope and you will see that its body is made up of millions of cells of different types. Even more than the termite, each cell cannot be thought of as a solitary thing, because it is part of a society, and it depends on that society for its survival.

In the early history of life, single-celled organisms were all that existed. An interesting experiment was conducted in which a single-celled alga was allowed to replicate for over a thousand generations before a single-celled predator was introduced. Within two hundred generations, the alga began clumping together, with hundreds of cells in a clump at first but eventually paring down to eight cells per clump. This was an optimal number, making each clump large enough to avoid predation but small enough for every cell to pick up enough light to survive.

One can imagine how, over time, the cells in such clumps would become slightly different from one another, and how this differentiation would allow a wider range of behaviour: predators with more hunting skills, prey with more ways of defending themselves. After a billion or so years, multicellular societies had become the incredibly complex, coordinated systems we know as plants and animals.

Turn up the magnification so that you can see the structure of each cell, and you will find that it, too, is a society. Although we may think animals are powered by using oxygen to slow-burn organic compounds in order to gain energy while plants get theirs by photosynthesising light, the fact is that not one cell in your body knows what to do with oxygen and no plant cell can extract energy from light. 

Inside each and every animal cell there are other, bacteria-like organisms called mitochondria, while inside every plant cell we find chloroplasts. It is the mitochondria that know what to do with oxygen and the chloroplasts that know how to get energy from sunlight.

Scientists believe that those mitochondria and chloroplasts were once free-living single-celled organisms, living independent lives. Then a relationship was formed between some such single-celled organism and a bacterium, and over hundreds of millions of years this symbiotic relationship gave rise to the eukaryotic cell, a high-tech miniature machine that was to become the foundation for all multicellular life on Earth. As Richard Dawkins explained:

“all our cells are… stuffed with bacteria which have become so transformed by generations of cooperation with the host cell that their bacterial origins are almost lost to sight”. 

Even more than the individual cells in a multicellular organism, the bacteria-like mitochondria and the cells in which they live cannot be thought of as separate things, even if far back in the dim and distant past the ancestors of those mitochondria did live independent lives.

There is yet another example of co-operation, an event that is one of the mysteries of science. Even the simplest single-celled organism is actually quite a complex chemical system. Billions of years ago, such systems of gradually increasing complexity made the transition from non-life to life. Although scientists are increasingly learning to craft such systems in the laboratory, none seem to come with an unambiguous label defining them as either alive or not. This is probably because it is intrinsically arbitrary to ask at which point any system of increasing complexity becomes ‘alive’. 

This point was emphasised by Robert Hazen, a professor of earth science at George Mason University, Fairfax, Virginia:

“Any attempt to formulate an absolute definition that distinguishes between life and non-life represents… a false dichotomy… Rather, life must have arisen from a sequence of emergent events- diverse processes of organic synthesis followed by molecular selection, concentration, encapsulation and organisation into various molecular structures… what appears today as a yawning divide between non-life and life obscures the fact that the chemical evolution of life occurred in this stepwise sequence of successively more complex stages”.

 So, to recap, there have been three great transitions, each one resulting in a new kind of life formed from a union of existing ‘organisms’:

TRANSITION ONE: The organisation of increasingly complex biochemical systems into the first bacteria-like cells.

TRANSITION TWO: The incorporation of bacteria into host cells, resulting in the eukaryotic cell.

TRANSITION THREE: The organisation of eukaryotic cells into multicellular forms.

THE DEVELOPMENT OF ‘METAMAN’.

Vernor Vinge

So what is the fourth evolutionary transition? In order to perceive it, we need not a microscope but a ‘macroscope’- a point of view that can take in the whole Earth and dense networks of activity happening over the course of generations (but becoming increasingly fast).

In his paper on the technological Singularity, Vernor Vinge outlined several pathways that could lead to superhuman intelligence. One is particularly relevant to what I am talking about:

“The Internet Scenario: Large computer networks (and their associated users) may ‘wake up’ as a superhumanly intelligent entity”.

People often labour under a false impression when considering this scenario. They think it is suggesting that if we connect enough computers together and write or breed enough of the right kind of software then, as with termites, a ‘critical mass’ will be achieved and, behold! The Internet comes alive. But the scenario is not concerned with computer networks alone, but rather with how they are used as part of human groups. It is those humans, after all, who help create the link structure Google depends upon for its trawling of the Web for relevant searches, who engage in the ongoing arguments from which Wikipedia’s articles are created and revised, and who organise social-media-led revolutions like the Arab Spring.

Also, that network of digital devices can only function thanks to the existence of other, older networks. We plug our devices (or their battery chargers) into electric outlets, drawing power from electric grids. The hardware we buy comes from production plants, all of which rely on other factories and mines from around the world to supply them with the parts they need, and on a global network of transportation to ship those parts to the required destinations. The skills needed to design the software and hardware rely on networks like the education system and scientific research (without which, for example, we would not have the laws of electromagnetism which underpin so much of the modern world). All this requires capital, provided by economic systems, and full bellies, provided by a global agricultural system.

Greg Stock believes that, when we consider all the physical and intangible networks woven throughout the world today, we can indeed perceive the existence of a planet-sized super-organism. He refers to it as ‘Metaman’:

“Metaman processes huge amounts of information by combining human thought and computer calculation within the various organised networks of human activity”.

People who study human societies believe it is no accident that we move towards more complexity. Instead, it is an inevitable consequence of a simple fact: whenever a society solves its problems, the success that brings leads to more (and more complex) problems. For one thing, societies that prosper tend to grow in size, thereby putting a strain on available resources and requiring more elaborate means of acquiring necessities. A small tribe might sustain itself by collecting water from a watering hole, but at some point an expanding population is going to have to build an irrigation system, along with a system of management once there are too many canals for ad-hoc repairs to be practical.

As the complexity and number of problems a growing populace faces grow, it becomes increasingly necessary to divide tasks up into specialised skills. In today’s world, especially in first-world countries, people rely on the skills of others to provide nearly everything they need. And such is the complexity of most modern products that it is infeasible for any individual craftsperson to design and build them. Instead, hierarchical organisations are required, in which the manufacturing process is broken down into a series of micro-tasks overseen by layers of management.

 

But hierarchical organisations must also face the problem of increasing complexity, and the ultimate solution is to fundamentally alter the way in which society is organised, and how we think about technological and economic systems. In a hierarchy there is always a ‘head’ who must make final decisions, but once complexity grows too large for any individual to get their head around the whole thing, hierarchies have to give way to distributed decision-making facilitated by networks. As Kevin Kelly observed:

“We find you don’t need experts to control things. Dumb people, who are not as smart in politics, can govern things better if all the lower rungs, the bottom, is connected together. And so the real change that’s happening is that we’re bringing technology to connect lots of dumb things together”.

By the way, when Kelly calls people dumb he does not mean they are stupid. Instead, he means that networks of human activity, and the technological networks facilitating them, can handle problems and make decisions beyond the capabilities of any individual. Whenever we perform feats like detecting hints of ‘dark energy’ or tracking changes in global climate, such feats should really be attributed to the sum total of human and machine networks comprising ‘Metaman’. As Greg Stock put it:

 

“When I speak not of ‘humans’ or ‘society’ but of ‘Metaman’ accomplishing something, I do so to acknowledge the role played by these immense and complex collaborations that are ubiquitous in the developed world”.

THE “INTERNET OF THINGS” AND THE PROBLEM OF DATA-MANAGEMENT.

The technologies we are relying on to connect ‘dumb’ things together, in order to expand and deepen the sensory awareness of the planetary super-organism, are mostly digital technologies. The emergence of digitisation had a profound effect on how technology, and the socio-economic systems supporting (and supported by) it, are perceived.

Walk through any urban area, and the prevalence of digital devices is apparent. Almost everyone you pass is either holding a smartphone to their ear or gazing at its screen. If current rates of consumption are maintained, by 2015 there should be some 4.5 billion smartphones in the world. And this is but one example of the plethora of digital devices that are expected. As the cost of computing, sensing and communicating decreases, it becomes feasible to add connectivity to more and more everyday things.

To give some idea of the scale of this ‘Internet of Things’, consider the number of addresses the latest revision of the Internet’s primary communications protocol is designed to handle. The current version, IPv4, can provide up to about 4 billion addresses. But that is not nearly enough. IPv6 will provide up to 340 trillion, trillion, trillion addresses - enough, it is often said, to give every atom on the Earth’s surface its own unique IP address.
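The figures are simply powers of two, so a quick back-of-envelope check is easy. Here is a minimal sketch in Python (nothing here is specific to any real deployment; it just computes 2 to the 32nd and 2 to the 128th power):

```python
# Address-space arithmetic for the two protocol versions.
ipv4_addresses = 2 ** 32    # 32-bit addresses: roughly 4.3 billion
ipv6_addresses = 2 ** 128   # 128-bit addresses: roughly 3.4 x 10^38

print(f"IPv4: {ipv4_addresses:,}")
print(f"IPv6: {ipv6_addresses:,}")
# IPv4: 4,294,967,296
# IPv6: 340,282,366,920,938,463,463,374,607,431,768,211,456
```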

OK, so we probably will not go quite as far as turning every atom into a web-enabled object. But we should definitely expect a future in which the Internet expands to cover more and more of the globe, and its web becomes increasingly tightly woven as more and more nodes are added.

Along with its advance into developing nations via wireless communication and cheap mobile devices, the Internet will even encompass the oceans. This is the ambition behind so-called ‘Cabled Ocean Observatories’: a network of buoys and robotic craft carrying sensors that will measure, among other things, biological and chemical properties throughout the water column, and make geophysical observations on the sea floor.

As I said, the increasing presence of the Web and the ubiquity of digital devices are altering our perception of a great many things. One such change was anticipated back in 1995 by Eric Schmidt, then the CTO of Sun Microsystems:

“When the network becomes as fast as the processor, the computer hollows out and spreads across the network”.

This phenomenon is now happening with ‘cloud computing’, in which more and more of the files and apps once stored locally are instead kept in data farms like the ones Google operates, streamed to personal digital devices as and when needed. Google’s services require its growing cluster of servers to act as one machine, and that requires many parallel operations to be carried out at once. This move can be likened to the shift in manufacturing ushered in by the industrial age, in which factories broke production up into thousands of parts to be performed simultaneously, rather than relying on workers in separate shops turning out finished products step by step.
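The factory analogy can be made concrete with a toy sketch (it assumes nothing about Google’s actual infrastructure): a job is split into chunks, each chunk is handled by a separate worker process, and the partial results are merged as if a single machine had done the work.

```python
from concurrent.futures import ProcessPoolExecutor
from collections import Counter

# A stand-in for a large corpus, split into chunks - one chunk per worker.
documents = [
    "the web becomes the computer",
    "the computer hollows out",
    "the network is the computer",
]

def count_words(doc):
    # Each worker handles its chunk independently, in parallel with the others.
    return Counter(doc.split())

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        partial_counts = pool.map(count_words, documents)
    # Merging the partial counts gives the same answer a single sequential pass would.
    print(sum(partial_counts, Counter()))
```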

Kevin Kelly reckoned that, some time around 2015, desktop operating systems would become obsolete. He wrote:

“The Web will be the only operating system worth coding for. You will reach the same distributed computer whether you login by phone, PDA, laptop or HDTV”.

The act of turning objects into digital devices will dramatically speed up recombination. Recombination has always been the essence of invention. No new technology ever appeared out of thin air; each was instead created by combining bits and pieces that already existed. When devices become digital they are all, at heart, objects of the same type. That is, data strings. Therefore, as W. Brian Arthur (author of ‘The Nature Of Technology’) pointed out:

“Digitisation allows functionalities to be combined, even if they come from different domains”.

Moreover, the fact that these devices communicate over networks means that recombinations can happen remotely. For instance, ‘Ninja Blocks’ are small devices intended to make it very easy to add communications and sensing capability to everyday objects, allowing one to create things like phone-controlled coffee machines.

The effect of all this is likely to be a very rapid increase in the rate of invention, as we configure and reconfigure various digital objects into new combinations. The economics of the past were built on assumptions of predictability and order, befitting a world in which mechanical systems behaved with clockwork predictability. The digital age is ushering in a perception of technology as a kind of chemistry, one always recreating itself in new combinations. According to W. Brian Arthur:

“Economics is beginning to respond to these changes and reflect that the object it studies is not a system in equilibrium, but an evolving, complex system whose elements- consumers, investors, firms, governing authorities- react to patterns those elements create”.

 

Large Hadron Collider

When talking about digital devices one finds oneself using words like ‘communicating’, ‘sensing’, and in some cases ‘self-configuring’ and ‘self-healing’. These are terms that used to apply exclusively to biological systems. Perhaps, though, it is not surprising that we need to use more and more biological terms in order to describe the behaviour of our networks of digital devices. After all, we learned from studies of the origin of life that there is no fundamental divide between the animate and the inanimate. There are only systems of increasing complexity that gradually acquire more and more lifelike characteristics. We should therefore expect that, as technology becomes more sophisticated, it will become less mechanistic and more biological: sensitive and responsive to its surroundings.

However, this increase in the number of digital devices comes with a cost. This increase, along with the growth of high-speed communications networks and high-capacity storage systems, has resulted in vast amounts of data being generated every second. Modern scientific tools like the Large Hadron Collider or the Australian Square Kilometre Array are capable of generating several petabytes of data per day, and Google’s database of hundreds of petabytes is swelled daily by incoming data orders of magnitude larger than the whole web of a decade ago. The cost is a decrease in human attention, as it becomes impossible for us to even scratch the surface of such vast quantities of data.

More and more we must turn to machine assistance. One way of dealing with the data deluge is to automate the process of scientific discovery as far as possible. The popular image of astronomers looking through telescopes is not a particularly accurate portrayal of modern astronomy. Instead, we use robotic instruments with sufficient intelligence to, say, tell a star from a galaxy, and which can detect phenomena too subtle for human senses (such as a star blinking for a fraction of a second due to an asteroid passing in front of it). We also rely on automated processes. Most of the galaxy images collected by the Sloan Digital Sky Survey were never viewed by humans but were instead extracted from wide-field images reduced in an automatic pipeline.

So, modern astronomy employs autonomous, semi-intelligent instruments which relay data to datacenters, and those datacenters use various techniques to further filter the data before finally relaying it to the computer monitors which are what professional astronomers look at. 

It has been argued by some that science itself is undergoing dramatic change thanks to the petabyte age, giving rise to ‘Data-Intensive Science’. Traditionally, science has been built around testable hypotheses, and crucial to this method are models that determine underlying mechanisms. With that in hand, correlation can be confidently connected with causation. But Chris Anderson of Wired Magazine argued:

“Petabytes allow us to say: ‘Correlation is enough’…We can analyse the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot”.

It should be emphasised that we are not talking about AIs pushing human experts towards obsolescence here. Rather, we are talking about an approach to ultra-intelligence involving cooperation between networks of machines with ‘non-humanlike intelligence’ capable of exploring datasets in ways impossible for humans, and humans employing skills like pattern recognition that machines struggle with. The trick is for these to interoperate effectively, such that the strengths of one compensate for the weaknesses of the other.

No human, for example, can comprehend an equation with several hundred million variables, but Google’s clusters handle such datasets with no problem (Google converts the entire web into a big equation with several hundred million variables, which are the page ranks of all the web pages, plus billions of terms, which are all the links). But, equally, the web contains lots of information humans comprehend easily- such as the context of visual images- which is profoundly hard for machines to make sense of. So, collectively, Google and its associated users form an entity that can mine vast sets of data for relevant information and extract useful knowledge from it.
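To make the ‘big equation’ image concrete, here is a minimal sketch of the power-iteration method used to solve PageRank-style systems. The four-page toy web and the 0.85 damping factor are illustrative assumptions; the real version has billions of variables and links.

```python
import numpy as np

# A toy web of four pages: links[i] lists the pages that page i links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = len(links)
damping = 0.85  # the damping factor commonly used in the PageRank literature

# Column-stochastic link matrix: M[j, i] is the chance of hopping from page i to page j.
M = np.zeros((n, n))
for i, outgoing in links.items():
    for j in outgoing:
        M[j, i] = 1.0 / len(outgoing)

# Power iteration: keep applying the update until the ranks settle down.
ranks = np.full(n, 1.0 / n)
for _ in range(100):
    new_ranks = (1 - damping) / n + damping * M @ ranks
    if np.abs(new_ranks - ranks).sum() < 1e-9:
        break
    ranks = new_ranks

print(ranks)  # one rank per page; these are the 'variables' of the giant equation
```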

The Human Brain Project

The most important contribution computers and software tools can bring in this context is not intelligence per se, but rather knowledge management. This is necessary because science is rapidly transforming from a “cottage industry model in which one small team in a single location was responsible for the entire procedure of a particular line of inquiry, from collecting the data to writing the paper, to a more ‘industrial approach’ involving large, distributed teams of specialists collaborating around the world”.

The ‘Human Brain Project’, for instance, will rely on collaboration between teams in Switzerland, Germany, Spain and France (to name a few of the countries involved), drawing on expertise in areas like ‘clinical neuroscience’, ‘pharmacology’, ‘numerical analysis’, ‘animal physiology’ and ‘robotics and mechatronics’.

Multidisciplinary science faces a grand challenge, in that science throughout the 20th century fragmented into more and more specialised disciplines, with vocabularies largely incomprehensible to outsiders. This ultra-specialisation means that a scientist in one field might need to access the same data as another scientist, but from a very different perspective. The challenge, then, is to organise the world’s data so that it is easily accessible and simple to share across boundaries of specialised knowledge.

Fundamental to this approach is a drive to ‘objectify knowledge’, organising it into standard, machine-understandable representations. Whereas today’s cloud computing services are chiefly focused on scalable platforms for computing, tomorrow’s will be much more concerned with the management of knowledge, driven by semantic approaches such as machine encodings of terms, concepts, and relationships. Contemporary examples of this ‘knowledge layer’ include the ‘Open Web Alliance’, an “open collaborative community (seeking) to organise the massive amounts of information flooding the biological sciences and other sciences”. Another example is Wolfram Alpha, an “online service that computes answers and relevant visualisations from a knowledge base of curated, structured data”.

Ultimately, the goal is to organise the world’s data so that it is a simple matter to look at some data and find all the information relevant to it, and to gain insights by fusing data from multiple disciplines and domains. Combined with techniques like natural-language processing, the ‘semantic web’ and other methods for objectifying knowledge, it will be possible to ask things like ‘fetch me the incidence of flu outbreaks across Asia and find correlations with migrating birds’ and be presented with texts and visualisations that contain just the right information needed (provided the information is there, somewhere, among the world’s databases).
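As a toy illustration of the kind of data fusion such a query implies, here is a minimal sketch (the numbers, series names and the simple correlation measure are all illustrative assumptions; in the envisioned knowledge layer the two datasets would be fetched automatically from separate epidemiological and ornithological databases):

```python
import pandas as pd

# Hypothetical monthly figures for one region.
flu_cases = pd.Series([120, 340, 510, 280, 90, 40], name="flu_cases")
migrating_birds = pd.Series([15, 60, 85, 55, 20, 10], name="migrating_birds_thousands")

# A simple Pearson correlation between the two series.
print(flu_cases.corr(migrating_birds))
```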

WORKING TOGETHER

Jeannette Wing, professor of computer science at Carnegie Mellon University, has talked about how computer science techniques and technologies are being applied to different disciplines, resulting in ‘computational thinking’. So, we have ‘computational ecology’ (concerned with simulating ecologies) and ‘eco-informatics’ (concerned with collecting and analysing ecological information). We have ‘computational biology’ (concerned with simulating biological systems) and ‘bioinformatics’ (concerned with the study of methods for storing, retrieving, and analysing biological data). Jeannette Wing wrote:

“Computational methods and models give us the courage to solve problems and design systems that no one would be capable of tackling alone”.

Today, if you search images on Google, it does a pretty good job of finding relevant results. This is not thanks to AI alone, but a combination of human knowledge, choices about that knowledge recorded in simple acts like clicking on a hyperlink or altering a search query, and computer networks mining that data so as to organise it more effectively.
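A crude sketch of how such human signals might be folded into a ranking follows (the pages, scores and the fifty-fifty weighting are entirely hypothetical, and this is not a description of Google’s actual method):

```python
# Hypothetical content-based relevance scores and accumulated user clicks.
base_relevance = {"pageA": 0.4, "pageB": 0.7, "pageC": 0.5}
clicks = {"pageA": 900, "pageB": 50, "pageC": 300}

total_clicks = sum(clicks.values())
combined = {
    page: 0.5 * base_relevance[page] + 0.5 * (clicks[page] / total_clicks)
    for page in base_relevance
}

# Results are re-ordered by the blended score, so human choices reshape the ranking.
for page, score in sorted(combined.items(), key=lambda kv: kv[1], reverse=True):
    print(page, round(score, 3))
```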

Whereas before we relied upon hierarchical organisations to produce things like vast collections of images and encyclopaedias, now we can rely on a kind of automatic pooling of knowledge, in which patterns of user activity lay down trails and systems of knowledge self-organise into categories richer and more complex than the relatively simplistic categories we used to order our knowledge by. We see the rise of ‘meganiches’, in which social networking enables individuals with rare and specialised interests to find like-minded souls, organising into groups as large as any previously achieved by mainstream media.

A lot of this collaborative effort is conducted freely, without expectation of extrinsic reward. Kevin Kelly noted:

“One study found that only forty percent of the web is commercial. The rest runs on duty or passion”.

 

One result of this freely-given effort is a reduction in the cost of failure. By and large, organisations that have employees are biased toward steady producers. But with something like Wikipedia we see a huge imbalance in participation. A typical article will have hundreds of contributors making one edit each and only a few contributing a substantial portion of the main body of text. But, since nobody is being paid, that is absolutely fine and there is no temptation to try to address this inequality. Individually, of course, single edits amount to negligible improvement. But those simple acts accumulate. Wikipedia harnesses different levels of effort and different skills and organises them all into what is probably the top source of reference of our time. Remember, it is not the technology of Wikipedia alone that achieved this, but that technology and the society of human users it supports.

Similarly, to ask Google something is not simply to rely on large clusters of computers in some data farm somewhere. It is also to rely on human effort, much of it negligible when considered individually but producing powerful effects once those individual efforts are pooled together. 

At some point in history we crossed a threshold, from technologies whose design could, in principle, be undertaken by an individual, to those that absolutely require interdisciplinary knowledge spread across a great many people. Compare the Large Hadron Collider to the Great Pyramid. Obviously, the construction of both was of a scale no individual could undertake. But I do believe an individual could draw up a complete blueprint of the Great Pyramid, whereas no person, no matter how clever and polymathic they may be, could ever design a machine as complex as the LHC. Such machines absolutely require collaborative creation supported by networks of communications and information technologies.

So, if we now have technologies whose complexity rules out their being designed by a single human mind, are they not, by definition, the result of superhuman effort? In a private conversation, J. Storrs Hall told me:

“I think it should be clear that the Internet is already a superhuman entity. Hell, even a ten-person company is a superhuman entity. The question is, is it one that can cause a singularity?”.

CAN THE INTERNET SCENARIO CAUSE A SINGULARITY?

I think so, for a couple of reasons. One was described by Luis von Ahn (the inventor of CAPTCHA) in a TED talk called ‘Massive-Scale Online Collaboration’. You might have heard of ‘Dunbar’s Number’, which refers to the maximum number of individuals with whom one can maintain stable social relationships. If you look at the number of people involved in large-scale projects such as the Panama Canal or the Apollo moon landing, they all involved roughly the same number of participants- somewhere in the region of 120,000. This is because it has always been impossible to coordinate- let alone pay- teams whose number of participants exceeded the hundreds of thousands.

However, the Internet is enabling us to assemble teams numbering in the hundreds of millions. It is likely that you yourself have been part of some such massive-scale online collaboration. Every time you type a reCAPTCHA, for instance, you are one of hundreds of millions of people helping to digitise the world’s books.

Equipped with the right technological aids, ordinary people can achieve great things. It took teams of gamers playing ‘Foldit’ just ten days to model the retroviral protease of the Mason-Pfizer monkey virus- a feat that had eluded scientists for fifteen years.

If a hundred thousand people working together can put a man on the moon, what might a hundred million- working together with vast computing resources and ‘Data-Intensive Science’- be capable of?

The other reason this could lead to a Singularity is that the plethora of objects entering the digital domain does not only enable a dramatic speedup in the recombination of things. Thanks to an ever-denser communications network and increasingly efficient search technologies, group formation is also becoming increasingly easy. Moreover, a machine-curated knowledge layer would go some way towards meeting Vernor Vinge’s challenge:

“We need to extend the capabilities of search engines and social networks to produce services that can bridge barriers created by technical jargon and forge links between unrelated specialities, bringing research groups with complementary problems and solutions together”.

With many of the costs of group formation greatly reduced, it would be viable to pursue real blue-sky thinking and explore multiple possibilities. Mega-teams with interdisciplinary expertise would form, break apart, reform in different combinations, as the projects they are involved in fail to take off or show signs of advancing toward some goal. As Clay Shirky reasoned:

“Open systems, by reducing the cost of failure, enable their participants to fail like crazy, building on the successes as they go”.

 

David Brin

When we combine this more rapid exploration of possibility space, via recombinations of specialised knowledge, with an increasingly efficient testing of worldviews against an objective reality we can now measure so powerfully (thanks to the network of sensors monitoring the planet’s various systems), the result should be more paradigm shifts in scientific theory, happening faster.

It will not just be scientific research that is improved by the increasing effectiveness of group formation, data analysis and sensing of global systems. In private correspondence, David Brin told me:

“One important aspect is that we will see better and better tools for discourse that allow more rapid building of ad-hoc teams of humans and AI that directly solve problems in real time: “Smart mobs” that bypass slower tools like corporations and governments”.

NATURAL-BORN CYBORGS

There are multiple pathways to a technological singularity, from building artificial superintelligence to genetically engineering humans to be super-geniuses. But it seems to me that the ‘Internet Scenario’ is the one most likely to get us there first, because it relies on trends well underway, driven by basic human needs to organise into groups and communicate knowledge. This scenario does not rely on designing machines to do everything people are good at (a profoundly difficult challenge) nor does it involve turning people into machines (a moral and ethical minefield if ever there was one). It relies only on the further co-evolutionary development of humanity and its technology. Human brains are particularly suited to this form of symbiosis. 

One reason why this is so can be found by considering vision. The strange thing about vision is that there is a contradiction between the world we see and what we should see, given the construction of the eye. Our daily experience is of a full-colour, highly-detailed scene. But the middle of the retina (the fovea) is packed with colour-sensitive neurons (or ‘cones’), whereas beyond about ten degrees from the middle there are only ‘rods’- neurons that detect just light and shade. This ought to mean that the scene we actually take in has a sharp-focus, full-colour centre and a blurry, colourless edge- quite unlike our experience.

It is believed that the visual system does not construct a detailed model of what is ‘out there’ at all, but settles instead on encoding a rough gist of the scene. But, at any moment, by repositioning the fovea via the sequences of rapid eye movements known as saccades, we can acquire detailed information from any particular point ahead of us. According to Andy Clark, where possible the brain prefers to rely on ‘meta-knowledge’, which basically means ‘knowing how to find out’. In his own words:

“Having a super-rich, stable inner model of the scene could enable you to answer certain questions rapidly and fluently, but so could knowing how to rapidly retrieve the very same information as soon as the question is posed”.

In Clark’s view, the belief that the brain is the source of human intelligence is only partially correct. In fact, human intelligence can only be understood by considering interactions between the brain, the body, and cultural and technological environments. Clark explained:

“What the human brain is best at is learning to be a team player in a problem-solving field of nonbiological props, scaffoldings, instruments and resources- natural-born cyborgs ever-eager to dovetail their activity to the increasingly complex envelopes in which they develop, mature and operate”.

 

Brains like ours are poised to incorporate ubiquitous, invisible-in-use technologies into our mental models. To illustrate this point, Clark pointed out that, when asked “do you know the time?”, a person with a watch would say “yes”. But if you ask someone whether they know what such-and-such a word means, they would reply “no, but I can find out” and go and consult a dictionary. Notice, though, how both scenarios are essentially the same: a person is asked something they do not know, and they consult some tool in order to find out.

The difference lies in the ease with which that information can be retrieved. The more ‘invisible-in-use’ a technology becomes, the more akin to our neural substrates it is. While writing, for example, an author is using the posterior parietal subsystems, which make appropriate adjustments to hand orientation and finger placement. Only, nobody uses such systems in any conscious sense. Similarly, if you asked me, “can you define the word ‘happy’?”, I would not reply, “no, but I can retrieve the information from my memory systems”. I would just tell you.

Equipped with a watch, then, a person is a hybrid biotechnological system whose conscious self represents a fairly thin layer, sitting between unconscious neural subsystems ‘below’ and cultural/technological systems ‘above’, and these systems all operate harmoniously to enable ‘you’ (this system that includes the wristwatch and the knowledge of how to use it) to know the time. It seems reasonable to assume, then, that if a dictionary could be accessed as easily as a watch can inform us of the time, we would incorporate that into our mental models of who we are and what we are capable of doing.

Increasingly, of course, we are inhabiting cultural and technological environments that enable us to access all kinds of information whenever we need it. When asked how we would know if the Internet and its human users had ‘woken up’ as a superorganism, Valkyrie Ice told me:

“The creation of a ubiquitous device that contains a personal tutor/ assistant/memory manager/researcher…Oh, wait, that’s what smartphones are becoming. Gee, looks like the scenario is already underway. It’s just going to take a few more years to improve upon. Once Watson and Siri develop into something more akin to [John] Smart’s ‘digital twin’, and enable every individual to have all-the-time access to the full realm of human knowledge, along with an interface that optimises to fit each individual’s learning and thinking patterns, this will be the most likely outcome”.


 

Valkyrie Ice

Valkyrie is talking about mobile or wearable devices that offer near-constant access to cloud-based apps: knowledge-management software that ‘learns to be me’- in other words, learns how best to complement an individual’s strengths and weaknesses. It has long been known that the brain is highly plastic. Violin players, for example, show enhanced regions responsible for motor control, thanks to the complex finger movements their art requires. Neural constructivists believe the brain’s adaptability extends beyond merely fine-tuning existing circuitry and involves the actual construction of new neural circuitry. This would make the brain a constructive learning system, in which the basic computational resources alter and expand (or contract) as the system learns. As it is experience that drives this process, it would mean we come to have designer brains, purpose-built to dovetail with reliable problem-solving systems.

At the same time, those external systems are also becoming increasingly adaptable, ‘learning’ from human users so as to provide better services. Google captures the search behaviour of its users- everything from how we punctuate to how often we click on the first result, along with many other patterns of behaviour- in order to guide future improvements to the system. We are progressing from external cognitive systems that evolved over a period of generations to systems evolving in near-realtime, as petabytes of data from a plethora of networked sensors capture user behaviour to be analysed by Google-sized computing resources.

WHO REALLY BENEFITS? 

We are offloading more and more aspects of our thinking to external systems. But, who really benefits? The individual? Or those vast systems we are plugged into? It is rightly pointed out that services which appear free are actually paid for in data about ourselves. As media theorist Douglas Rushkoff pointed out, a Facebook user is not really a consumer. Rather, the user is the commodity in which the company ‘Facebook’ trades. In ‘The Blind Giant’, Nick Harkaway wrote:

“Being a consumer, a customer, implies a measure of control over the relationship…The commodity, on the other hand, gets the minimum necessary attention to keep it in a marketable state”.

In this context, being in a marketable state means being somebody who is a good target for advertising. The more the individual can be pigeonholed into categories, the more effective advertising will be. Are the friend recommendations you receive and the search results you get serving to expand your horizons and open your mind, or are they serving to put you in a bubble that narrows your view, making you a more convenient commodity?

It must surely be the case that companies like Google, fed daily with petabytes of data on social behaviour and possessing the combined computing and brain power to analyse it, know far more than the individual does about what influences us to buy and what psychological drives push us to that final decision. In a world in which we will depend so much on services like Google Now to help organise our lives, it would behove us to learn more about what influences us, so we can apply those systems in ways that help us make better, more informed decisions.

We need to know what can safely be unlearned, what knowledge that was once vital but which is now irrelevant in the digital age. We need to be sure which aspects of cognition can be offloaded to external systems and which should remain ‘within the brain’ if we do not wish to grow less intelligent. Perhaps most importantly, we need to encourage use of social networks to create smart mobs, to become a member of groups who are truly much more than the sum of their parts, rather than trap ourselves in bubbles that merely reinforce our prejudices. 

CONCLUSION

At the macroscale, where do we stand right now? Mike Wing, IBM’s Vice President of Strategic Communications, reckoned, “the planet itself- natural systems, human systems, physical objects- has always generated an enormous amount of data, but we weren’t able to see it, hear it, capture it. Now we can, because all of this stuff is instrumented. And it’s all interconnected… So, in effect, the planet has grown a central nervous system”.

This central nervous system is enabling us, as components of a superorganism, to tune in on the heartbeat of nations, to organise smart mobs that can help bring down corrupt regimes, that can track weather patterns and help reduce the human cost of hurricanes. It is bringing the world and its people into our homes, and exposing us (for better, for worse, and certainly both) to the world.

When, though, will the final push that sends us over the threshold to a post-singularity era happen? More importantly, when will we know this has happened? If we consider that the Internet scenario involves a symbiotic relationship, an alliance of mutual benefit between human and technological systems, I would say that Michael Chorost provided the best answer. He wrote:

“There may come a day when we start to see behaviour that simply does not make sense in terms of what we know about hardware, software, and human behaviour”.

That would indeed be a sign that the Fourth Evolutionary Transition had resulted in the awakening of a fundamentally new kind of entity. 

This essay was originally published at Extropia DaSilva’s blog, HERE


