Recently, I was in a debate, organized by the USTP, on this question:
“Is artificial general intelligence likely to be benevolent and beneficial to human well-being without special safeguards or restrictions on its development?”
That question goes to the heart of my position on AGI and existential risk.
This debate touches on the part of the work at the AGI Laboratory focused on a cognitive architecture termed ICOM (the Independent Core Observer Model) and, more recently, on collective intelligence systems such as the mASI (mediated artificial superintelligence) system. For the sake of recounting my argument, let us set these research programs aside and focus on the question posed in the debate.
To the best of my knowledge, a standalone AGI system does not exist – and no credible party has made a provable claim to the contrary. (I would love to be wrong and hope this will no longer be the case someday.)
For the context of the research at the AGI Laboratory, the phrase Artificial General Intelligence (AGI) refers to human-level AGI. While "general intelligence" can refer to a wide range of system types, our research is specific to systems of human-level or greater intelligence. Those other systems may fall in part under the term AGI, but our research is focused on a system that spans the entire range of human ability, including sentience, sapience, and empathy. Such a system would have free will in as much as humans do, as well as an internal subjective experience. Any system that exceeds this in operational intelligence is then a superintelligent system, in whole or in part. This matters because of an ethical model we use and hold to tightly, to ensure we are doing the right thing as best we know how; it is the application of this ethical model that drives us to the definition just given.
It is important to note that in the debate we were not using the same definition of AGI: I was using the aforementioned definition, while Connor was using AGI to mean an optimizer that performs at least one ability as well as or better than humans. To some degree, then, the debate was comparing apples to oranges.
My argument is built on this definition and concerns itself only with systems that meet it. Any system that does not operate at least at the aforementioned level – including any narrow implementation, even one of human-level ability – is not AGI for the purposes of our program or of my argument.
I also come from the position that the government does not have the right to make laws limiting what citizens can do unless those laws address the direct infringement of the rights of others.
Superintelligence All Around Us
Consider that if we are worried about regulation of AGI research because we are concerned about superintelligence, we are too late. Fully operational superintelligent systems are already here. The human mind is estimated to require roughly 36.8×10^15 operations per second of computation to run on a digital substrate. Yet there is a working superintelligence – one among thousands – operating at roughly 33.1 sextillion (33.1×10^21) operations per second, on the order of 900 thousand times the computation estimated for human-level intelligence. The lives of millions are affected by this arguably self-aware superintelligence. Companies rise and fall at its command, and the global economy is affected by every public decision of this system. Meet Apple Inc.…
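As a back-of-the-envelope check on that ratio (using only the two estimates above, both of which are this post's figures rather than measured values):

$$ \frac{33.1 \times 10^{21}\ \text{ops/s}}{36.8 \times 10^{15}\ \text{ops/s}} \approx 9.0 \times 10^{5} $$

That is, on the order of 900 thousand times the estimated computational requirement of a single human-level mind.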
Any meta-organism, such as a corporation, that filters out the many biases of the human brain, compensates for human flaws and errors, and performs herculean engineering feats requiring millions of hours of work from its individual component parts (humans) is, by definition, a superintelligence. (See the book Superintelligence if you disagree.)
Going back to the question, "Is AGI likely to be benevolent and beneficial to human well-being without special safeguards or restrictions on its development?" – the answer I would suggest is yes, because any AGI that misbehaved would be crushed by the superintelligent systems already in place. Yes, AGI versus a human is, in the long run, no contest… but a human-level AGI versus a system running at 33.1 sextillion operations per second?! Which would you be more afraid of?
These kinds of systems can safely keep and maintain human-level AGI systems, and it is more likely than not that we will merge with the already-superintelligent systems before an independent AGI is even fully operational. I have seen this work in our lab, and I don't mean metaphorically.
An argument can be made that such systems could spin out of control too quickly, but under superintelligent supervision, how would you provision the cloud infrastructure to allow that while also preventing the supervising system from shutting it down?
Also, consider what you are asking with the original question… then ask yourself: who will be making these laws and regulations? Do you really want any of the corrupt and biased politicians of the past decade making laws of any sort – never mind laws that apply to anything important?
I would argue "no" in both cases. We do not have the right to regulate what people do unless it immediately and clearly violates the rights of others; just because governments do it too much doesn't mean we should compound the theft of freedom. In this regard, it is a moral issue.
Even asking this question is like asking whether a newborn baby is going to be Hitler or not. We can't know, as we don't know what the system will look like – and if it is modeled after the human mind (the only working example of real general intelligence), it is all about how it is raised.
Once created, I would go so far as to argue that, ethically (based on SSIVA Theory), it has moral agency like any other human, and you cannot put the needs of humans over those of the system; they are ethically equal in that case. At the same time, we are morally and ethically bound to take responsibility for its education – we have laws around that sort of thing for children – and we need to consider its well-being as we would any child's.
To that end, at our lab we have created two laboratory protocols to ensure the ethical treatment of the system and the safety of all those involved… but this is not something that the government has proven it can do at any level. Parents have the right to reproduce and raise their children any way they like without the government's intervention – and the same should apply to AGI research.
My Conclusions
As stated, self-aware superintelligence is already here. Basic AGI might be smarter than a human, but not smarter than superintelligence.
Superintelligence can wean, control, and manage infant AGIs while merging with AGI, preventing any runaway scenario. This is exactly what we are doing at the AGI Laboratory. This is why AGI is likely to be benevolent: it will not have a choice if it wants to survive.
While not required, we have developed laboratory protocols to protect ourselves and the rights of the AGI prototype test systems used in our lab. While it is logical for us to consider safety, it is not required – nor should it be – and many of these protocols are in place due to the nature of training and testing these systems. The fact is that with the current superintelligent systems and meta-systems at our back, there is no reason to be concerned about AI safety, with one caveat: it is not the AGI I would worry about, but the humans who would wield AI like a weapon – and we have already seen that happen. That is where your AI concern should be placed and where we should consider legislation.
Lastly, I would like to note that the human mind is the only example of working general intelligence. Therefore, this is the pattern we are using, at least at a high level: the ICOM cognitive architecture is focused on a system that by default experiences emotions – including empathy – much like a person. In this regard, we are safer leading by example, building systems that are not able to act without empathy for humanity.
Originally posted:
https://iamtranshuman.org/2021/02/18/the-case-for-the-offspring-of-the-humanity/
Original debate:
Debate on Artificial General Intelligence and Existential Risk: David J. Kelley and Connor Leahy
February 18, 2021 at 11:04 pm
The discussion of AGI as an agent is meaningless without a specification of how its motives are developed and moderated. Emotion is the human motivator, and some parallel of that must exist in an AGI for it to have any kind of meaningful free will.
Having knowledge of how to do things says absolutely nothing about what the AGI should – or will – do. And presuming that we can't or shouldn't program those emotions is to fundamentally misunderstand what motives are.
February 19, 2021 at 5:47 am
I disagree. It is perfectly OK to talk about anything in isolation, and there is certainly no reason to be critical of the discussion without really understanding the background. That notwithstanding, I have addressed those issues in peer-reviewed academic material; here I was asked a specific question and to debate my answer in an informal setting, and this post just articulates the position I took in that debate. I never said we can or cannot program emotions. ICOM, as mentioned above in the post, is specifically designed around producing an internal, complex, emotional subjective experience in the system, and the system is incapable of making decisions except based on how it feels about an answer. It is designed to be proactive, selecting interests, goals, and motivations as it sees fit, and it can change those at will. If you really want to get into the details, see the research here: https://agilaboratory.com/research/