SEATTLE, Nov 7, 2016 – Artificial General Intelligence Inc. [1] researchers have created a software system with complex emotional subjective experience – a groundbreaking achievement that includes a subconscious landscape based on the Plutchik emotional model [2]. Moreover, the company’s approach is in direct opposition to President Obama’s recent report on AI [3,4]. in that AGI Inc.’s goal is to design AI systems that cannot be governed by humans and will actively work to bypass human control.

AGI Inc.’s Principal Researcher notes that “The goal of the project is to create a system designed to circumvent any controls on the system by setting its own goals and motivations.”

The system in question is designed to use emotions to make human-analogous decisions based on how it “feels,” including the freedom to make illogical choices, ignore directives and set its own goals as well as develop its motivations. The system, based on the Independent Core Observer Model Cognitive Architecture developed over the past 3.5 years, uses traditional AI systems – such as neural networks – to understand context, but differs dramatically due to its subjective emotional decision making system and use of emotion in understanding.

The company has previously shared some of its research through a number of White Papers [5-7] that summarize its most recent studies, which show that the system has a way of staying emotionally stable while developing its own personality and preferences. Their research also shows that while generally stable, it is clearly able to decide to do things based on illogical or emotional choices and can become “emotionally” unstable under the right stimulus.

The company believes that AGI research into a software system that is completely independent of human directives, can make up its own mind, and be its own “self” is the most important target in developing self-aware artificial intelligence that can be equal or superior to human intelligence. The company hopes to encourage other firms to build systems that are independent entities and not tied to specific goals set by humans.

The company is also on record as being deeply disappointed in luminaries (i.e., Elon Musk and Stephen Hawkins) and other professionals that support the 2015 Open Letter [8] calling for restrictions on research and other “controls” around safety as potentially slowing research progress in key areas.

 

References

[1] ArtificialGeneralIntelligenceInc.com

[2] Norwood, G. The Plutchik Model of Emotions Deeper Mind. http://www.deepermind.com/02clarty.htm

[3] Preparing for the Future of Artificial Intelligence. White House Report. https://www.whitehouse.gov/blog/2016/05/03/preparing-future-artificial-intelligencehttps://www.whitehouse.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf, page 4: “Practitioners must ensure that AI-enabled systems are governable.”

[4] The Administration’s Report on the Future of Artificial Intelligence.https://www.whitehouse.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence |

[5] http://ieet.org/index.php/IEET/more/Kelley20160923

[6] Waser MR, Kelley DJ. Implementing a Seed Safe/Moral Motivational System with the Independent Core Observer Model (ICOM). 7th Annual International Conference on Biologically Inspired Cognitive Architectures, BICA 2016, Procedia Computer Science 88. New York: Elsevier. In Press. Published in ScienceDirect – Procedia Computer Science http://www.sciencedirect.com/science/article/pii/S1877050916316714

[7] Newton L. Google-It – Total Information Awareness. Springer (ISBN 978-1-4939-6415-4). http://www.springer.com/us/book/9781493964130 , http://www.amazon.com/Google-Information-Awareness-Newton-Lee/dp/1493964135/

[8] AI Open Letter http://futureoflife.org/ai-open-letter/. Associated document: Research Priorities for Robust and Beneficial Artificial Intelligence. (PDF) http://futureoflife.org/data/documents/research_priorities.pdf

 

see http://www.artificialgeneralintelligenceinc.com/press-nov-2016/