(JULY 2017) – At the AGI Lab in Provo we have several people working on different projects, and between them a number of great things are coming out, including material related to two books and some peer-reviewed papers.  The material below is listed in order of release.

BICA 2017 (Biologically Inspired Cognitive Architectures), whose proceedings appear in Procedia Computer Science, has accepted the following paper: Human-like Emotional Responses in a Simplified Independent Core Observer Model System

Abstract: Most artificial general intelligence (AGI) system developers have been focused on intelligence (the ability to achieve goals, perform tasks, or solve problems) rather than motivation (*why* the system does what it does). As a result, most AGIs have a non-human-like, and arguably dangerous, top-down hierarchical goal structure as the sole driver of their choices and actions. On the other hand, the independent core observer model (ICOM) was specifically designed to have a human-like “emotional” motivational system. We report here on the most recent versions of, and experiments with, our latest ICOM-based systems. We have moved from a partial implementation of the abstruse and overly complex Wilcox model of emotions to a more complete implementation of the simpler Plutchik model. We have seen responses that, at first glance, were surprising and seemingly illogical – but which mirror human responses and make total sense when considered more fully in the context of surviving in the real world. For example, in “isolation studies”, we find that any input, even pain, is preferred over having no input at all. We believe the fact that the system generates such unexpected but “human-like” behavior is a very good sign that we are successfully capturing the essence of the only known operational motivational system.
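
To make the Plutchik shift concrete, here is a minimal, purely illustrative sketch of how an eight-axis Plutchik-style emotion vector might drive a motivational loop, including a toy stand-in for the “isolation” effect. This is not the ICOM implementation; the class, constants, and update rules below are assumptions made for illustration only.

    # Illustrative sketch only -- NOT the ICOM implementation.
    # Shows an eight-dimensional Plutchik-style emotion state updated
    # by incoming stimuli, with a drift toward negative affect when no
    # input arrives at all. All names and constants are assumptions.

    from dataclasses import dataclass, field

    # Plutchik's eight primary emotions.
    PLUTCHIK_AXES = (
        "joy", "trust", "fear", "surprise",
        "sadness", "disgust", "anger", "anticipation",
    )

    @dataclass
    class EmotionState:
        values: dict = field(
            default_factory=lambda: {axis: 0.0 for axis in PLUTCHIK_AXES}
        )

        def step(self, stimulus=None, decay=0.9, isolation_drift=0.05):
            """Advance one cycle: decay toward neutral, apply any stimulus,
            and, when there is no input at all, drift toward sadness/fear
            (a toy stand-in for the 'isolation' effect)."""
            for axis in self.values:
                self.values[axis] *= decay
            if stimulus:
                for axis, delta in stimulus.items():
                    self.values[axis] = max(-1.0, min(1.0, self.values[axis] + delta))
            else:
                # No input: negative affect accumulates, so even a painful
                # stimulus can leave the system better off than silence.
                self.values["sadness"] = min(1.0, self.values["sadness"] + isolation_drift)
                self.values["fear"] = min(1.0, self.values["fear"] + isolation_drift)

        def valence(self):
            """Crude overall 'how the system feels' score."""
            positive = self.values["joy"] + self.values["trust"] + self.values["anticipation"]
            negative = (self.values["sadness"] + self.values["fear"]
                        + self.values["anger"] + self.values["disgust"])
            return positive - negative

    if __name__ == "__main__":
        isolated, stimulated = EmotionState(), EmotionState()
        for _ in range(20):
            isolated.step(stimulus=None)               # total isolation
            stimulated.step(stimulus={"fear": 0.04})   # mildly "painful" input
        print(f"isolated valence:   {isolated.valence():.2f}")
        print(f"stimulated valence: {stimulated.valence():.2f}")

Run with these toy constants, the mildly “painful” input stream ends up with a higher overall valence than total isolation, echoing the result reported in the abstract.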

AAAI Fall 2017 Symposium paper: The Independent Core Observer Model Theory of Consciousness and the Mathematical Model for Subjective Experience

Abstract: This paper outlines the Independent Core Observer Model (ICOM) Theory of Consciousness: a computational model of consciousness that is objectively measurable, in which subjective experience is an abstraction produced by a mathematical model. That experience is “subjective” only from the point of view of the abstracted logical core, the conscious part of the system; within the core of the system itself it is modeled objectively. Given the lack of agreed-upon definitions in consciousness theory, this paper sets precise definitions designed to act as a foundation for additional theoretical and real-world research in ICOM-based AGI (Artificial General Intelligence) systems that can be measured objectively.
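
As a rough intuition pump for the core/observer split described above, the toy sketch below separates a core whose full state is objectively inspectable from an observer that only ever receives a lossy abstraction of it. The class names and the abstraction rule are invented for illustration and do not come from the ICOM papers.

    # Toy sketch of the core/observer separation the abstract describes.
    # NOT the ICOM model itself; names and the "abstraction" rule are
    # assumptions made for illustration.

    class Core:
        """Holds the full, objectively measurable internal state."""
        def __init__(self):
            self._state = {"valence": 0.3, "arousal": 0.7, "raw_signal": [0.1, 0.9, 0.4]}

        def objective_state(self):
            # Everything in the core can be measured from outside.
            return dict(self._state)

        def abstraction_for_observer(self):
            # The observer never sees raw internals, only a lossy summary.
            return {"feeling": "tense" if self._state["arousal"] > 0.5 else "calm"}

    class Observer:
        """The 'conscious' part: experiences only the abstraction."""
        def experience(self, core: Core):
            return core.abstraction_for_observer()

    core = Core()
    print(Observer().experience(core))   # subjective view: {'feeling': 'tense'}
    print(core.objective_state())        # objective view: the full state

The point of the sketch is that the same state is “subjective” to the observer, which only sees the summary, while remaining fully measurable in the core.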

A chapter in an as-yet ‘un-named’ book edited by Newton Lee, to be published by Springer by the end of the year: The Intelligence Value Argument and Effects on Regulating Autonomous Artificial Intelligence

Abstract: This paper focuses on the Intelligence Value Argument (IVA) and the ethics of how it applies to autonomous systems, as well as how such systems might be governed by extending current regulation. IVA is based on a static core definition of ‘Intelligence’: the measured ability to understand, use, and generate knowledge or information independently, all of which are a function of sapience and sentience. The IVA logic places priority on the value of any individual human and their potential for intelligence, and on the value of other systems to the degree that they are self-aware or ‘intelligent’. Further, the paper lays out the case for how the current legal framework could be extended to address issues with autonomous systems to varying degrees, depending on the IVA threshold as applied to those systems.

Dr. Roman V. Yampolskiy’s AI Safety and Security book, which includes a chapter by Mark Waser (currently in review) covering AI safety and security around emotionally driven AGI systems, including ICOM, should be out by the end of the year. The book overview reads: “The history of robotics and artificial intelligence in many ways is also the history of humanity’s attempts to control such technologies. From the Golem of Prague to the military robots of modernity, the debate continues as to what degree of independence such entities should have and how to make sure that they do not turn on us, its inventors. Numerous recent advancements in all aspects of research, development, and deployment of intelligent systems are well publicized but safety and security issues related to AI are rarely addressed. This book is proposed to mitigate this fundamental problem. It will be comprised of chapters from leading AI Safety researchers addressing different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence. The book would be the first textbook to address challenges of constructing safe and secure advanced machine intelligence.”

* Hero image from Adobe Stock.