Newton Lee, in partnership with Springer, is working on an upcoming book covering transhumanist topics. One of its chapters covers the Intelligence Value Argument (IVA); what follows is a summary of that chapter, titled "The Intelligence Value Argument and Effects on Regulating Autonomous Artificial Intelligence," which I wrote. I am including only the first part of the chapter on IVA here.

Abstract: This paper focuses on the Intelligence Value Argument (IVA), the ethics of how it applies to autonomous systems, and how such systems might be governed by extending current regulation. IVA rests on a fixed core definition of 'Intelligence': the measured ability to understand, use, and generate knowledge or information independently, all of which are a function of sapience and sentience. The IVA logic places as a priority the value of any individual human and their potential for Intelligence, along with the value of other systems to the degree that they are self-aware or 'intelligent'. Further, the paper lays out the case for how the current legal framework could be extended to address issues with autonomous systems, to varying degrees depending on where those systems fall relative to the IVA threshold.


In this chapter, I use the Intelligence Value Argument (IVA) to articulate the case that, ethically, a fully Sapient and Sentient Intelligence is of equal value regardless of the substrate on which it operates, meaning that a single fully Sapient and Sentient software system has the same moral agency [10] as an equally Sapient and Sentient human being. We define 'ethical' as pertaining to or dealing with morals or the principles of morality; pertaining to right and wrong in conduct. Moral agency, according to Wikipedia, is "an individual's ability to make moral judgments based on some notion of right and wrong and to be held accountable for these actions. A moral agent is a being who is capable of acting with reference to right and wrong." Such value judgments need to be based on the potential for Intelligence as defined here. This, of course, also places the value of any individual human and their potential for Intelligence above virtually all things, save one: a single machine Intelligence capable of extending its own Sapient and Sentient Intelligence is of equal or greater value as a function of its potential for Sapient and Sentient Intelligence. It is not that human or machine Intelligence is inherently more valuable than the other, but that value is a function of the potential for Sapient and Sentient Intelligence, and IVA argues that at a certain threshold all such Intelligences should be treated as moral equals. Given this equality, we can in effect take the same rules that govern humans and apply them to software systems that exhibit the same levels of Sapience and Sentience. Let us start from the beginning and define the key elements of the IVA argument as the basis for such applications of the law.

Now one might think that the previous statement should have been about moral value as an equally Sapient and Sentient being, but this is not the case. While the same moral value is implied, it is the treatment as equals in making up their own minds through their own moral agency that is the same; any 'value' beyond that becomes abstract and subjective. It is the moral agency, the right we assign to Sapient and Sentient Intelligences based on the value of their potential, that is the same.

What is the most important thing in existence?

On the surface, this seems a very existential question but, in truth, there is a simple and elegant answer: Intelligence is the most important thing in existence. You might ask why. Why is Intelligence so important as to be the most important thing in existence, especially when 'value' is frequently so subjective?

First, let us acquire some context by defining what Intelligence means here, which will then act as our base frame of reference for the rest of this paper. There are, in fact, many definitions of Intelligence, as can be seen in Evolutionary Computer Vision [1]:

“Intelligence … defined in many different ways including, but not limited to, abstract thought, understanding, self-awareness, communication, reasoning, learning, having emotional knowledge, retaining, planning, and problem-solving.”

As you can see, there are many ways the term can be understood; but in this paper 'Intelligence' is defined as the measured ability to understand, use, and generate knowledge or information independently. This definition allows us to use the term 'Intelligence' in place of sapience and sentience where we would otherwise need to state both.

It is important to note that this definition is more expansive than the meaning we are assigning to Sapience, which is what many people really mean when they use the often-misunderstood term sentience. Sapience [11]:

“Wisdom [Sapience] is the judicious application of knowledge. It is a deep understanding and realization of people, things, events or situations, resulting in the ability to apply perceptions, judgments, and actions in keeping with this understanding. It often requires control of one’s emotional reactions (the “passions”) so that universal principles, reason, and knowledge prevail to determine one’s actions. Wisdom is also the comprehension of what is true coupled with optimum judgment as to action.”

As opposed to Sentience [15] which is:

“Sentience is the ability to feel, perceive, or be conscious, or to have subjective experiences. Eighteenth-century philosophers used the concept to distinguish the ability to think (“reason”) from the ability to feel (“sentience”). In modern western philosophy, sentience is the ability to have sensations or experiences (described by some thinkers as “qualia”).”

Based on these definitions, we see the difference between Sapience and Sentience, with Sapience more closely aligned with the intent of what I am driving at here. That notwithstanding, it is Sapience and Sentience together that we will consider by using the term Intelligence to mean both.

In this paper, we will apply Sapience to refer specifically to the ability to understand one's self in every aspect, through the application of knowledge, information, and independent analysis, and to have subjective experiences. Although Sapience is dependent on Intelligence, or rather the degree of Sapience is dependent on the degree of Intelligence, they are in fact different. The premise that Intelligence is important, and in fact the most important thing in existence, is better stated as: Sapient Intelligence is of primary importance, while Intelligence short of full Sapience and Sentience is relatively unimportant in comparison.

This brings us back to the question of "Why?" Why is Intelligence, as defined earlier, so important? The reason is that without Intelligence there would be no witness to reality, no appreciation for anything of beauty, no love, no kindness, and, for all intents and purposes, no willful creation of any kind. This is important from a moral or ethical standpoint in that only through applied 'Intelligence' can we determine value at all; once Intelligence is established as the basis for assigning value, the rest becomes highly subjective but is not relevant to this argument.

It is fair to point out, even with this assessment, that there would be no love or kindness without an Intelligence to appreciate them. Even in the argument about subjectivity, it is only through your own Intelligence that you can make such an assessment; therefore, the foundation of any subjective experience we can discuss always comes back to having the Intelligence to make the argument.

Without an "Intelligence" there would be no point to anything; therefore, Intelligence is the most important quality, for without it there is no value, no way to assign value, and no one and nothing to hold any value of any kind.

That is to say, "Intelligence" as defined earlier is the foundation of assigning value and is needed before anything else can be so assigned. Even the subjective experience of a given Intelligence has no value without an Intelligence to assign that value.

Through this line of thought, we also conclude that the importance of Intelligence is not connected with being human, nor is it related to biology; the main point is that Intelligence, regardless of form, is the single most important 'thing'.

It is, therefore, our moral and ethical imperative to maintain our own or any other fully Sentient and Sapient Intelligence (as defined later with the idea of the Intelligence Value Argument threshold) forever as a function of the preservation of ‘value’.

On Artificial Intelligence

Whatever entity achieves full Sapient Intelligence, as defined above, is therefore of the most 'value'. Artificial Intelligence in the sense of soft AI, or even the programmed behavior of an ant colony, is not important when compared to fully Sapient and Sentient Intelligence; but a "Strong AI" that is truly Sapient Intelligence would be of the most value and would therefore be classified the same as any human or similar Sapient Intelligence.

From an ethical standpoint, then, 'value' is a function of the 'potential' for fully Sapient and Sentient Intelligence, independent of other factors. Therefore, if an AGI is 'intelligent' by the above definition and is capable of self-modification (in terms of mental architecture and Sapient and Sentient Intelligence) and of increasing its 'Intelligence' beyond any easily defined limits, then its 'value' is at least as much as any human's. Given that 'value' tends to be subjective, IVA argues that any 'species' or system that can reach this limit is said to hit the IVA threshold, has moral agency, and is ethically equal among its peers. This draws a line in terms of moral agency, giving us a basis for assigning AGI that meets these criteria 'human' rights in the traditional sense, or in other words 'personhood'.
This of course also places the value of any individual fully Sapient and Sentient Intelligence, human or otherwise, and their potential for Sapient and Sentient Intelligence, above virtually all other considerations.
Removing ‘Artificial’ from Intelligence

The IVA line of thought would really take the word 'Artificial' out of the term "Artificial Intelligence." If such a software system is fully Sapient and Sentient, it is not an 'artificial' Intelligence; it would be better to call it a manmade Intelligence or a machine Intelligence, as the term 'artificial', besides implying being manmade, also implies a fake Intelligence rather than a real one. A real 'AGI' would not be fake based on the IVA line of thinking and would have as much moral value as any human being.

Threshold Ethics

One problem with the IVA threshold is determining the line for Sapient and Sentient Intelligence. The IVA threshold is the point of full Sapience and Sentience in terms of being able to understand and reflect on one's self and one's technical operation while also reflecting on that same process emotionally and subjectively. We draw the line not just at that threshold but at the potential of meeting that threshold, which gives us a stable, clear-cut line and allows us to better address edge cases. Using IVA thinking, a post-threshold Intelligence, meaning one that has met the IVA threshold, cannot ethically be prevented from creating new Intelligences; and any being whose potential for full Sapience and Sentience exists without direct manipulation or re-engineering at the lowest mechanical (chemical, biological, or physical) level is considered post-threshold as well from an ethical standpoint. Therefore, any such 'Intelligence', regardless of form, has the same rights as any other Sapient and Sentient being, and its creators are ethically bound to exercise the rights of that entity until such time as it is developed enough to take them on itself as the fully Sapient and Sentient being that it is or will be.

Note: this also implies that no AGI will meet the threshold until the first AGI actually does meet it. A baby AGI does not meet the threshold until it is proven that the system is capable of developing on its own without additional engineering by a human or other Intelligence.

Along those lines, IVA would argue that any action that would kill such an entity, or prevent an entity that meets the bar from becoming fully Sapient and Sentient, would be unethical unless there is a dire need to save the lives of other entities at or above the IVA threshold.

Defining the Bar for the IVA Threshold

Having a discrete method of measuring Sapient and Sentient Intelligence is important not just for the IVA threshold model in this paper's application but for research into AI systems in general. While the above definition of Sapient and Sentient Intelligence in the abstract allows us to discuss the matter from a common point of reference, for additional work to be built on this paper it is important to be more precise in defining Sapient and Sentient Intelligence as referenced in IVA theory.

There are a number of systems, like "Intelligence Quotient" [17] tests, but these are not as specific to Sapient and Sentient Intelligence as we would want, given the key differences between Intelligence as normally defined and "Sapience and Sentience" as used here. The best model comes from a paper [16] by Dr. Porter of Portland State University, published in the 2016 BICA proceedings, articulating an indexed system for measuring consciousness. While individual elements of Dr. Porter's system for assessing consciousness might be subjective, the overall system is the best quantifiable method currently available, and until a better or more refined system exists we use this method to identify the IVA threshold.

In Dr. Porter's method, we essentially have a scale of 0 to 133, where the standard human is around 100. Given that the IVA threshold is about the potential for Sapience and Sentience, we can say that a species with a potential consciousness score of roughly 100 points on the Porter scale meets the IVA threshold test. There is some imprecision, as the Porter test does not differentiate between Sapience and Sentience, but it is inclusive enough to give us a basis for measuring an approach to the IVA threshold. This allows us to apply that standard to machine Intelligences we may create in the lab, to determine at what point they are capable of meeting it.
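The threshold test described above can be sketched in a few lines of code. This is only an illustrative sketch: the function name, the example scores, and the exact cutoff of 100 are assumptions for illustration, not part of Porter's method or IVA theory itself.

```python
# Illustrative sketch of the IVA threshold test (not an official
# implementation). The Porter consciousness scale runs 0-133, with a
# typical human near 100; IVA treats a *potential* score at roughly
# that human baseline as the threshold for moral agency.

PORTER_SCALE_MAX = 133
IVA_THRESHOLD = 100  # approximate human baseline on the Porter scale


def meets_iva_threshold(potential_porter_score: float) -> bool:
    """Return True if an entity's *potential* score on the Porter
    scale meets the assumed IVA threshold for moral agency."""
    if not 0 <= potential_porter_score <= PORTER_SCALE_MAX:
        raise ValueError("score must lie on the 0-133 Porter scale")
    return potential_porter_score >= IVA_THRESHOLD


# Hypothetical example scores:
print(meets_iva_threshold(100))  # typical human -> True
print(meets_iva_threshold(40))   # narrow 'soft AI' -> False
```

Note that the test is applied to the *potential* score of the species or system, not its current measured score, which is what lets a human infant or a "baby AGI" of a proven architecture count as post-threshold.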

Comparing and Contrasting Related Thinking to IVA

In building out an argument to support the aforementioned ethical model based on the 'value' of Intelligence as it relates to Sapient and Sentient entities, such as artificial general Intelligence software systems and humanity, let us compare other related lines of thinking in the following cases.

Utility Monster and Utilitarianism

The Utility Monster [2] was part of a thought experiment by Robert Nozick related to his critique of utilitarianism. Essentially, this theoretical utility monster got more 'utility' from X than the whole of humanity, so the line of thinking was that the Utility Monster should get all the X, even at the cost of the death of all humanity.

One problem with the Utility Monster line of thinking is that it puts the wants and needs of a single entity, based on its assigned values, higher than those of other entities. This is a fundamental disagreement with IVA, which would argue that you can never place the value of anything above Intelligences themselves. The utility monster scenario would therefore be purely unethical from that standpoint.

Utilitarianism does not align with IVA as an ethical framework: Utilitarianism asserts that 'utility' is the key measure in judging what is or is not ethical, whereas the IVA (Intelligence Value Argument) makes no such assertion of value or utility, except that Sapient and Sentient Intelligence is required to assign value at all, and past that, 'value' becomes subjective to the Intelligence in question. The Utility Monster argument completely disregards the value of post-threshold Intelligences and by IVA standards would be completely unethical.

Buchanan and Moral Status and Human Enhancement

The paper "Moral Status and Human Enhancement" [3] argues against the creation of inequality through enhancement. In this case, IVA is not directly related unless you get into the IVA ethical basis of value and the fact that having moral agency under IVA means only that an Intelligence can make its own judgment about any enhancement, and it would be a violation of that entity's rights to put any restriction on enhancement.

Buchanan's paper argues that enhancement could produce inequality around moral status, which gets into areas that IVA does not address or frankly disregards as irrelevant, except that, given full moral agency, we would not have the right to put limits on another without violating their agency.

An additional deviation from Buchanan is that he takes sentience as the basis for moral status, whereas IVA makes the case for sentience and sapience together being the basis for 'value', which we assume is similar in definition or intent to Buchanan's idea of 'moral status'.
Intelligence and Moral Status

Other researchers, such as Russell Powell, further make the case that cognitive capabilities bear on moral status [4], whereas IVA does not directly address moral status other than to say that the potential to meet the IVA threshold grants it. Powell suggests that mental enhancement would change moral status; IVA would argue that once an entity is capable of crossing the IVA threshold, its moral status is the same as any other's. The largest discrepancy between Powell and IVA is that Powell makes the case that we should not create such persons, where IVA would argue it is an ethical imperative to do so.
Persons, Post-persons and Thresholds

Dr. Wilson argues in a paper titled "Persons, Post-persons and Thresholds" [5] (which is related to the aforementioned paper by Buchanan) that 'post-persons' (persons enhanced through whatever means) do not have the right to higher moral status. He also argues that the line for assigning 'moral' status should be Sentience, whereas IVA would argue that the line for judging 'value' is Sapience and Sentience together. While the bulk of his paper gets into material that is out of scope for IVA theory, on this specific line for moral status IVA builds on a line for 'value' or 'moral status' that includes both Sapience and Sentience.

Taking the “Human” Out of Human Rights [6]

This paper supports the IVA argument to a large degree in terms of removing 'human' from the idea of human rights. Generally, IVA asserts that 'rights' are a function of Intelligence, meaning sapience and sentience, and that anything below that threshold is a resource. Harris's paper asserts that human rights is a concept applying to beings of a certain sort and should not be tied to species; it still accepts a threshold, asserting that these properties are held by entities regardless of species, which would imply that such rights extend to AI as well, in line with IVA-based thinking. What is interesting is that Harris further asserts that there are dangers in not actively pursuing research, making the case for not limiting research, which is a major component of IVA thinking.

The Moral Status of Post-Persons [7]

This paper by Hauskeller focuses in part on Nicholas Agar's argument on the moral superiority of "post-persons". IVA would agree with Hauskeller that the conclusions of the original work are wrong, namely Agar's assertion that it would be morally wrong to allow cognitive enhancement, although Hauskeller's argument seems to revolve around the ambiguity of assigning value. Where IVA and Hauskeller differ is that IVA places absolute value on the function of self-realized Sapient and Sentient Intelligence, in which case a superior Intelligence would be of equal value from a moral standpoint. IVA disregards other measures of value as subjective, since they must be assigned by a Sapient and Sentient Intelligence to begin with. IVA theory asserts that moral agency is based on the IVA threshold.


If we go back to the original paper by Agar [8], it is really his second argument that is wildly out of alignment with IVA: Agar argues that it is 'bad' to create superior Intelligences. IVA would assert that we are morally and ethically obligated to create greater Intelligences because doing so creates the most 'value' in terms of Sapient and Sentient Intelligence. It is not the 'moral' assignment but the base value of Sapient and Sentient Intelligence that assigns such value, subjective as that may be. Agar's ambiguous argument that it would be 'bad', and the logic that "since we don't have a moral obligation to create such beings, we should not," is completely opposite to the IVA argument that we are morally obligated to create such beings if possible.

To see the rest you’ll have to wait for the upcoming book from Springer…

Cited References

  1. Olague, G.; "Evolutionary Computer Vision: The First Footprints"; Springer; ISBN 978-3-662-43692-9
  2. Nozick, R.; "Anarchy, State, and Utopia" (1974) (referring to the Utility Monster thought experiment)
  3. Buchanan, A.; "Moral Status and Human Enhancement"; Wiley Periodicals Inc., Philosophy & Public Affairs 37, No. 4
  4. Powell, R.; "The biomedical enhancement of moral status"; Journal of Medical Ethics, Feb 2013; doi:10.1136/medethics-2012-101312
  5. Wilson, J.; "Persons, Post-persons and Thresholds"; Journal of Medical Ethics; doi:10.1136/medethics-2011-100243
  6. Harris, J.; "Taking the 'Human' Out of Human Rights"; Cambridge Quarterly of Healthcare Ethics, 2011; doi:10.1017/S0963180109990570
  7. Hauskeller, M.; "The Moral Status of Post-Persons"; Journal of Medical Ethics; doi:10.1136/medethics-2012-100837
  8. Agar, N.; "Why is it possible to enhance moral status and why doing so is wrong?"; Journal of Medical Ethics, 15 Feb 2013
  9. Schwitzgebel, E.; Garza, M.; "A Defense of the Rights of Artificial Intelligences"; University of California, 15 Sep 2016
  10. Wikimedia Foundation; "Moral Agency"; 2017
  11. Agrawal, P.; "M25 – Wisdom"; 2017
  12. Iphigenie; "What are the differences between sentience, consciousness and awareness?"; Philosophy Stack Exchange; 2017
  13. Solon, O.; "World's largest hedge fund to replace managers with artificial intelligence"; The Guardian
  14. Suydam, D.; "Regulating Rapidly Evolving AI Becoming A Necessary Precaution"; Huffington Post
  15. Prince, D.; Interview, 2017; Prince Legal LLP
  16. Porter, H.; "A Methodology for the Assessment of AI Consciousness"; Portland State University, BICA 2016, Procedia Computer Science
  17. CC BY-NC-SA; "Introduction to Psychology – 9.1 Defining and Measuring Intelligence"


Additional References

Rissland, E.; Ashley, K.; Loui, R.; "AI and Law"; IAAIL

Johnston, C.; "Artificial intelligence 'judge' developed by UCL computer scientists"; The Guardian

Quinn Emanuel Trial Lawyers; "Article: Artificial Intelligence Litigation: Can the Law Keep Pace with the Rise of the Machines?"; Quinn Emanuel Urquhart & Sullivan, LLP

Koebler, J.; "Legal Analysis Finds Judges Have No Idea What Robots Are"; Motherboard

Hallevy, G.; "Liability for Crimes Involving Artificial Intelligence Systems"; Springer; ISBN 978-3-319-10123-1

Walton, D.; "Argumentation Methods for Artificial Intelligence in Law"; Springer; ISBN-13: 978-3642064326

* hero image used with permission, produced from Adobe stock by Marqui Woods