De Mente Artificiosa et Philosophia: Stoicism in the Post-Singularity Future

By Steven Umbrello & Tina Forsee

Futurists like Ray Kurzweil believe that advances in artificial intelligence will, in the near future, reach a point that allows humans to transcend their biological form. This is what he calls the Singularity, and he describes it as follows:

Within a quarter century, non-biological intelligence will match the range and subtlety of human intelligence. It will then soar past it because of the continuing acceleration of information-based technologies, as well as the ability of machines to instantly share their knowledge. Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity, full-immersion virtual reality incorporating all of the senses (like The Matrix), “experience beaming” (like “Being John Malkovich”), and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned. Non-biological intelligence will have access to its own design and will be able to improve itself in an increasingly rapid redesign cycle. We’ll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it. That will mark the Singularity. (Kurzweil, 2005)

He says that our evolution will eventually lead to “greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity, and greater levels of subtle attributes such as love”. We literally go from homo in corpore to deus ex machina.

The Singularity means different things to different people. Some paint a picture similar to the one above, playing into the techie dream of self-created immortality. Others warn of hubris and conjure up doomsday scenarios. Stephen Hawking is widely quoted for telling the BBC: “The development of full artificial intelligence could spell the end of the human race.” Others scoff at both sides, calling these speculations “the rapture of the nerds.” They point out that the Singularity isn’t likely to happen anytime soon, and if and when it does, it won’t be anything like what’s currently being discussed.

We make no predictions here except one: Humans will remain essentially the same. That’s not to say we can’t learn or evolve—as we surely will if Kurzweil turns out to be right—but that we will always need to know our purpose and relationship to the world, especially when we are the ones guiding it.

Here we ask: in what sense will traditional moralities—philosophies of life such as Stoicism—fit into the grander scheme of humanity’s technological future? Will they be obsolete? These questions have no definite answer; however, if the Singularity is an event in which individuals retain their individuality through the period of transcendence, then there is no reason to think that the beliefs and creeds that define individuals won’t also be preserved. Insofar as artificial intelligences (AIs) retain individuality, with individual beliefs, they have the potential to come into conflict with one another and within themselves. There may be a need for a new ethics in a new era, but it will probably overlap with moral philosophies that we’ve been discussing for centuries.

We hope that Kurzweil is correct in saying that AIs will have not only greater intelligence but also greater love and creativity. In this case we would not only retain our conceptions of morality but also perhaps even enhance them.

Advancing Stoicism for a New Age

When technology advances, we expect to reap material benefits. Kurzweil says that the Singularity will usher in a new era in which many of the problems we face today will be solved:

It’s true that the dramatic scale of the technologies of the next couple of decades will enable human civilization to overcome problems that we have struggled with for eons. Nanotechnology will enable us to create virtually any physical product from information and very inexpensive raw materials, leading to radical wealth creation. We’ll have the means to meet the material needs of any conceivable size population of biological humans. Nanotechnology will also provide the means of cleaning up environmental damage from earlier stages of industrialization. (Kurzweil, 2005)

Although the Singularity may bring people closer to ‘perfection,’ we should acknowledge that we will never be perfect. AIs may have a longer life span, but they will eventually die. While the technological advancements to both our planet and ourselves may solve many of the issues that we consider threatening today, they will also introduce new problems:

But these developments are not without their dangers. Technology is a double-edged sword— we don’t have to look past the 20th century to see the intertwined promise and peril of technology. (Kurzweil, 2005)

The “intertwined promise and peril” of technology has always held the same lesson: material benefits don’t ensure happiness or fulfillment, though it’s generally acknowledged that they play some part. The danger is that they’ll distract us from achieving a deeper understanding of fulfillment by posing as ends in themselves. Our inventions will always need to be guided by a broader principle. A philosophy like Stoicism could be useful in cultivating virtue, not only for humans but also for transcended artificial intelligences.

Understanding Stoic Virtue and Emotion

To better understand how Stoicism benefits modern humans and how it might be applied to AIs, it’s important to know what Stoic virtue is. According to the Stoics, virtue—moral good—is the only true good. Virtue is both a necessary and a sufficient condition for attaining happiness in one’s life. The first-century Stoic philosopher Epictetus wrote:

Some things are in our control and others not. Things in our control are opinion, pursuit, desire, aversion, and, in a word, whatever are our own actions. Things not in our control are body, property, reputation, command, and, in one word, whatever are not our own actions. – Enchiridion, 1

Once we come to understand that our own value judgments are under our control, we gain peace of mind. Stoicism, however, is not a purely theoretical philosophical school. The Stoics taught that philosophy was a living discipline whose sole purpose was to make the individual practitioner a better (more virtuous) person. The second-century Roman emperor Marcus Aurelius is an excellent exemplar of a Stoic philosopher, a man struggling to remind himself that he is in control of all his judgments and thus all his actions. In his personal journal, his primary surviving work, he writes: “Wherever life is possible, it is possible to live in the right way” (Meditations, 5.16). Although much more can be said on the topic of Stoic virtue ethics, we can see that the Stoics believed that living virtuously is the epitome of living as a true philosopher.

Furthermore, a common objection to Stoicism is that it is apathetic: passive to a fault, or utterly unemotional. This is a misunderstanding and misinterpretation of the Stoic texts. First, one cannot attain a happy life without the emotion of happiness; this is what we would call Stoic Joy (Irvine, 2009). Second, understanding that an individual has the potential for absolute control over his or her value judgments and mode of thinking does not lead to apathy or a lack of emotion, but rather to a more intimate understanding of the external world and a better understanding of one’s own capabilities. External things do not perturb the practicing Stoic. Once an external impression is brought before the deliberating faculties for review, the Stoic can determine whether it warrants assent or rejection. Consequently, Stoics govern their emotions by consciously determining which emotions to attribute to a particular impression, and only if that impression is deemed adequate. To call a Stoic apathetic or unemotional is therefore not only misguided but simply incorrect. Stoicism does not dismiss emotional reality; it faces it head on.

Stoics understand better than most how emotions fit into the larger landscape of the human condition. Our emotions, judgments, and beliefs are an integral part of our lives as social animals. Taking emotions into account when discussing artificial consciousness is therefore no superfluous undertaking, but an essential step in understanding how AIs will understand themselves.

AI Emotion

Cerebral types might wish to banish emotions into the aether of the past as we advance AI. Emotions get in the way of reasoning and clear thinking; they sometimes cause us to do things we regret, against our better nature. However, emotions might play a more integrated role in our thinking than we’d like to believe. Reaching the Singularity might not be possible without first studying emotions, objectively deconstructing them in ways that lay bare their constitution and external manifestations.

There’s reason to believe that emotions and intuitive thinking will play an important role in creating superintelligence—AI that can outperform humans in a general way, not just in one area such as chess playing (Cardon, 2006). Creative thinking is integrally linked with emotion and with how those emotions affect our perceptual states (Chrisley, 2008). So far we can create AI that outperforms us in narrow intellectual tasks, but there is much to be desired in the realm of emotional development. However, we’re making headway in the interpretation of emotion in areas such as facial and vocal recognition, and these areas seem likely to expand, especially in light of advanced hardware such as the D-Wave quantum computer adopted by Google. These are first steps in emotional and intuitive deconstruction. We have a long way to go, but the trajectory seems promising.

As emotional-creative technologies advance, moral understanding and unification with our creations will likely grow increasingly more urgent. Moral philosophies that address and guide emotions in an all-encompassing way might be useful, though the traditional versions may evolve and advance. Deconstruction begs for thoughtful reconstruction.

Universal Citizenship

Assuming the Singularity will involve an integration of emotive behavior, it seems fair to take beneficial social interaction as at least some AIs’ raison d’être. Perhaps at that point, humans will be doing the learning. We would do well to ask: what happens if our social nature evolves?

Although many today tend to look at Stoicism as a simple philosophical psychology that can help us overcome problems, we have to take a step back and look at it from a broader, more objective perspective. Stoicism is a philosophy aimed at cultivating virtue in those who adopt it as their philosophy of life. It is undeniable that Stoic practice produces psychological benefits. However, this is not the goal of Stoicism. It is, and always has been, about being a virtuous person.

Thus, if we project our thoughts into the future and assume, as we have already posited, that AIs are in fact individuals, then there is no reason to believe they would not benefit from being virtuous. Seeing that AIs will be more interconnected than ever, formulating the essential character of a cosmopolitan, or civis mundi, becomes extremely important. Only by understanding one’s place in the universe, and seeing the inconsequential nature of many—if not all—of the external products of our existence, are we able to understand how small we really are. Marcus Aurelius embraced what we would now call the cosmic perspective; he would step back and understand the world and all its substance as if he were viewing it from above:

What is man? His life is a point in time, his substance a watery fluxion, his perception dim, his flesh food for worms, his soul a vortex, his destiny inscrutable, his fame doubtful. In sum, the things of the flesh are a river, the things of the soul all dream and smoke; life is war and a posting abroad; posthumous fame ends in oblivion. What then can guide us through this life? Philosophy, only philosophy. It preserves the inner spirit, keeping it free from blemish and abuse, master of all pleasures and pains, and prevents it from acting without purpose or with the intention to deceive; ensuring that we lack nothing, whatever others may do or not do. It accepts the accidents of fate as flowing from the same source as we ourselves, and above all, it waits for death contentedly, viewing it as nothing more than the natural dispersal of those elements composing every living thing. If the constant transformation of one element into another is in no way dreadful, why should we fear the sudden dispersal and transformation of all our bodily elements? This conforms with nature, and nothing natural is bad. – Meditations, 2.17

Kurzweil goes on to say that AIs will eventually expand further into the universe, colonizing other planets and literally becoming citizens of the cosmos. This prospect opens doors to meeting new life forms and new civilizations. If this is to be the case for our future progeny, cultivating a philosophical theory of universal citizenship and a virtuous nature could be useful. Adopting a philosophy like Stoicism, which has cosmopolitan ideals at its heart, can benefit those whose network of social reliance grows ever wider. The culture of the future will undoubtedly be varied; however, Stoicism’s emphasis on cultivating universal citizenship is promising. Stoicism already provides us with the optimal prescription for the colonization of the cosmos.

Conclusion

Humans may change; they may even transcend. But we cannot eliminate all the ‘problems’ that humans deal with on a daily basis. There will be a whole new host of issues and dilemmas that technologically advanced artificial intelligences will have to face. Just because we cannot fathom what those problems may be doesn’t mean they won’t exist; and as long as they do, a virtue-based philosophy like Stoicism will still have a place in the future.


Sources

Cardon, Alain. “Artificial Consciousness, Artificial Emotions, and Autonomous Robots.” Cognitive Processing 7, no. 4 (2006): 245-67. Accessed March 8, 2015. http://link.springer.com/article/10.1007/s10339-006-0154-7.

Chrisley, Ron. “Philosophical Foundations of Artificial Consciousness.” Artificial Intelligence in Medicine 44, no. 2 (2008): 119-37. Accessed March 8, 2015. http://www.sciencedirect.com/science/article/pii/S0933365708001000.

Epictetus. Enchiridion.

“Hawking: AI Could End Human Race.” BBC News. http://www.bbc.com/news/technology-30290540.

Irvine, William Braxton. A Guide to the Good Life: The Ancient Art of Stoic Joy. Oxford: Oxford University Press, 2009.

Kurzweil, Ray. “Singularity Q&A.” Accelerating Intelligence, KurzweilAI.net, 2005. http://www.kurzweilai.net/singularity-q-a.

Marcus Aurelius. Meditations.

Popper, Ben. “Rapture of the Nerds: Will the Singularity Turn Us Into Gods or End the Human Race?” The Verge, October 22, 2012. http://www.theverge.com/2012/10/22/3535518/singularity-rapture-of-the-nerds-gods-endhuman-race.

Originally published in The Stoic Philosopher on July 1, 2015.