Welcome to the Technocracy A.I. Abstract Series for published scientific works in the A.I. and Artificial General Intelligence field.

Peer reviewed by the Biologically Inspired Cognitive Architectures Association, or BICA, 2017.

By David Kelley and Mark Waser.

Abstract: Most artificial general intelligence (AGI) system developers have been focused upon intelligence (the ability to achieve goals, perform tasks, or solve problems) rather than motivation (*why* the system does what it does). As a result, most AGIs have an un-human-like, and arguably dangerous, top-down hierarchical goal structure as the sole driver of their choices and actions. On the other hand, the independent core observer model (ICOM) was specifically designed to have a human-like "emotional" motivational system. We report here on the most recent versions of, and experiments upon, our latest ICOM-based systems. We have moved from a partial implementation of the abstruse and overly complex Wilcox model of emotions to a more complete implementation of the simpler Plutchik model. We have seen responses that, at first glance, were surprising and seemingly illogical, but which mirror human responses and make total sense when considered more fully in the context of surviving in the real world. For example, in "isolation studies," we find that any input, even pain, is preferred over having no input at all. We believe the fact that the system generates such unexpected but "humanlike" behavior is a very good sign that we are successfully capturing the essence of the only known operational motivational system.
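The "isolation study" result above, where any input, even pain, is preferred over no input at all, can be sketched with a toy state vector over Plutchik's eight primary emotions. This is purely illustrative: the axis names come from Plutchik's model, but the update rule, the stimulus values, and the "engagement" preference score are assumptions made for this sketch, not the actual ICOM implementation.

```python
# Toy sketch loosely inspired by the Plutchik emotion model mentioned in
# the abstract. The decay rule, stimulus weights, and engagement score are
# illustrative assumptions, NOT the authors' ICOM code.

PLUTCHIK_AXES = ["joy", "trust", "fear", "surprise",
                 "sadness", "disgust", "anger", "anticipation"]

def apply_stimulus(state, stimulus, decay=0.5):
    """Decay the current emotional state toward neutral, then add the stimulus."""
    return {axis: decay * state[axis] + stimulus.get(axis, 0.0)
            for axis in PLUTCHIK_AXES}

def engagement(state):
    """Toy 'preference' score: total emotional activation of any kind."""
    return sum(abs(v) for v in state.values())

neutral = {axis: 0.0 for axis in PLUTCHIK_AXES}
pain = {"fear": 0.6, "sadness": 0.4, "surprise": 0.3}  # hypothetical values

isolated = apply_stimulus(neutral, {})    # no input at all
hurt = apply_stimulus(neutral, pain)      # a painful input

# Under this toy score, even a painful input yields more total activation
# than isolation, mirroring the reported "isolation study" preference.
print(engagement(hurt) > engagement(isolated))
```

Under this toy scoring, a system that seeks emotional activation of any kind will choose pain over isolation, which is one simple way the "surprising but humanlike" behavior reported in the abstract could arise.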

As always, thank you for listening to the Technocracy Abstract Series, and a special thank you to our sponsors: the Foundation, Transhumanity.net, and the AGI Laboratory.

https://www.sciencedirect.com/science/article/pii/S1877050918300358