This paper introduces a novel approach to decision-making systems in autonomous agents, leveraging the Independent Core Observer Model (ICOM) cognitive architecture.  By synthesizing principles from Global Workspace Theory [Baars], Integrated Information Theory [Balduzzi], the Computational Theory of Mind [Rescorla], Conceptual Dependency Theory [Schank], and Hierarchical Memory Theory [Ahmad], we have developed a framework that centers on simulated emotion-driven processes as the core mechanism for generating goals and motivations in independent agents.  The ICOM system diverges from traditional logical reasoning models by incorporating non-logical, simulated emotion-based elements that mimic human-like decision-making.  This allows us to quantify the simulated emotional state of the machine and use it as the basis for decision-making and other motivational functions of that machine instance, such as goal, action, or interest selection.  We review experimental results with robust and simplified systems and consider what implications they may or may not have for more sophisticated, simulated emotionally aware software agents.


The original project that led to ICOM (Independent Core Observer Model) was an unpublished survey of research related to AGI (Artificial General Intelligence).  One finding of that study, while not controlled enough to be scientific, was at least suggestive: none of the research ongoing as of 2012 had solved the problem of 'motivation' for general-purpose AI (Artificial Intelligence) or AGI.  Out of this, the original ICOM research was built on neuroscience research by Antonio Damasio [Damasio], whose work demonstrated that human choices are based on emotions, or how humans feel about their choices.  That is not to say that humans can't think logically, based on Damasio's work; rather, they may like the feeling of making logical choices.  Among theories related to consciousness or subjective experience, this work was based primarily on Global Workspace Theory [Baars], and ICOM also draws on Integrated Information Theory [Balduzzi], the Computational Theory of Mind [Rescorla], Conceptual Dependency Theory [Schank], and Hierarchical Memory Theory [Ahmad].  Out of that work was developed the abstract theory of consciousness [Kelley].  For the rest of this paper, we focus on a single element: the simulated feeling values used to create a simulated emotional global workspace, which in the abstract simulates the feeling or experience of emotions, and the use of that workspace as the basis for driving motivation and related functions in ICOM-based software agents.

This paper will not explore other details about ICOM and its workings.  Please refer to the ICOM Research Codex [Kelley] for further information about ICOM outside the context of this paper and emotional simulation.

A Simulated Emotion  

In ICOM, we use the Plutchik model of emotions [Plutchik].  There are numerous theories of emotions, or of how emotions can be represented, but Dr. Plutchik's model seems the simplest complete model.  While there are arguments for different biological models in humans, and arguments that a large part of what we call emotions is influenced by culture and the 'Western' model of perceived emotions [Barrett], the Plutchik model was selected for simplicity's sake and to narrow the scope of ICOM research.  It was inverted to make it easier to work with computationally, allowing research to move forward without entirely solving all the open questions around emotion models in humans.

Looking at ICOM, the Plutchik model catalogs human emotions along 8 emotion areas or vectors.  If we use floating-point values to represent each emotion vector, we have an array of 8 floating-point numbers.  As a comparative example, if we used the Wilcox model of emotions, it would take an array of at least 77 floating-point values, with other potential consistency problems, and it does not seem to represent anything more significant than Plutchik does.  Much of the complexity we see in the Wilcox model is covered by composites of emotion vectors in Plutchik.  In ICOM, we represent the state of the machine using this Plutchik model.
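The 8-vector representation just described can be sketched in a few lines of code.  This is a minimal illustration, assuming a plain array-of-floats layout; the constant and helper names are our own, not part of ICOM's actual API:

```python
# A minimal sketch of an 8-vector Plutchik-style emotion state.
# The layout (a flat array of 8 floats) follows the description in the
# text; the names here are illustrative, not ICOM's internal schema.
PLUTCHIK_DIMENSIONS = 8

def blank_state():
    """A blank emotional state: all 8 emotion vectors at zero."""
    return [0.0] * PLUTCHIK_DIMENSIONS
```

By comparison, a Wilcox-style representation would need at least 77 such values, which is part of why the smaller Plutchik vector was chosen.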

Instead of representing these Plutchik models as an array or single-row matrix, we will show them as they would map onto Plutchik's diagram, to make that mapping easier to understand later on.  Knowing which position maps to which emotion, such as happiness, is not important to how the math of the simulated emotional system works.

Let’s look at some examples:

Figure A – Changing Simulated Emotional States.

In this example, we can see that the simulated emotion states change based on various stimuli.  State 1 represents a blank emotional state.  Some input then arrives with an emotional additive value of 1 for position A1; after that is applied, we have State 2.  State 3 is essentially the same thing again, with the simulated emotions changing based on further inputs.

What this means, for example, is that some process monitor is running on a computer.  Each time network traffic is slow, it generates an input into the system with the emotion value A1+1, which is added to the machine's A1 value, as we see in State 2.  At the same time, a different input from some other element of the machine might have an input value of B1+1.  When that is added to the existing machine model, we have State 3, which is our current simulated emotional state.  You can also have another state change when the network reconnects, say A1-.5, which lowers the machine's A1 value by .5.  In this manner, the machine's simulated emotional states can change based on what is happening in the machine, and this separates the nature of individual emotions from the structure and mathematical operations of a simulated subjective experience.
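The additive state changes in this example can be sketched as follows.  The mapping of the A1 and B1 positions to array indices 0 and 1 is an assumption made for illustration:

```python
# Sketch of additive emotional state changes.  The index mapping of
# A1/B1 to positions 0/1 of the 8-vector is illustrative only.
def apply_input(state, position, delta):
    """Add an input's emotional additive value to one position."""
    new_state = list(state)
    new_state[position] += delta
    return new_state

A1, B1 = 0, 1  # hypothetical index mapping

state1 = [0.0] * 8                          # State 1: blank state
state2 = apply_input(state1, A1, 1.0)       # slow network traffic: A1+1
state3 = apply_input(state2, B1, 1.0)       # other element input: B1+1
recovered = apply_input(state3, A1, -0.5)   # network reconnects: A1-.5
```

Each update is a simple addition here; the matrix-based version below replaces this plain addition with a rules matrix.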

Let’s take a look at a more complex example based on how ICOM currently does it:

Figure B – Matrix Operation.

Now, one of the problems with simulating subjective emotional states numerically is capturing the complex relationships between emotions and emotional states.  In ICOM, we address this by using a matrix that maps a given Plutchik emotion model to its total effect on the system.  In this example, we see the start state of the model, and we have some input.  Instead of just adding them together, we use a matrix that maps the relationships between each emotional value we track and every other emotional value.  We multiply the input model by the matrix, take those values, and add them to the start state, which gives us the new state of our simulated model.  These state changes can be tied to any number of other triggers, but let us look at the mathematics of the example we used above:
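The matrix operation just described can be sketched as follows.  The identity and "coupled" matrices below are illustrative placeholders, not ICOM's actual tuned rules matrix:

```python
# Sketch of the matrix operation: the 1x8 input model is multiplied by
# an 8x8 rules matrix encoding how each emotional value affects every
# other, and the result is added to the start state.
def apply_matrix(start_state, input_model, rules_matrix):
    effect = [
        sum(input_model[i] * rules_matrix[i][j] for i in range(8))
        for j in range(8)
    ]
    return [s + e for s, e in zip(start_state, effect)]

# Identity rules: input passes straight through with no cross-effects,
# reducing to the simple addition of the earlier example.
identity = [[1.0 if i == j else 0.0 for j in range(8)] for i in range(8)]

# Coupled rules: an input on position 0 also spills 0.5 onto position 1,
# modeling a relationship between two emotion vectors.
coupled = [row[:] for row in identity]
coupled[0][1] = 0.5
```

With the identity matrix this reduces to plain addition; the coupled matrix shows how input on one emotion can influence a related emotion, which is the point of using a matrix at all.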

Figure C – Matrix Operation.

As we can see here, the ‘Plutchik’ models are treated as single-row matrices against the larger rules matrix.  We get an output of 8 values in an array, which we map back to our visualization of those values.  If you pull up a diagram of a Plutchik model, you can see what these particular emotions reference, but that is not essential to understanding the operations in ICOM.

In testing this approach, another issue with ‘subjective’ experience is how it affects how we feel over time.  In humans, there is a complex interaction between external stimuli, our thoughts, and how we think; in ICOM, each thought can carry emotional values of some sort that affect how the system feels, and stimuli have various effects on the core Plutchik model.  To simulate this, we added a second set of machine states.  Stimuli and thoughts affect this second state only a small amount, which has the effect of making the first state relatively volatile while the second state is more stable.  Granted, this is controlled by the matrix used and could therefore vary wildly across implementations and variations of states or reactions in the model.  Having this secondary model allows us to use it to recover the system back to center over time, depending on additional input.

In ICOM, we have tuned the two models using two separate matrices.  Each time we have stimuli, or even an internal cycle treated as stimuli, the input is computed to affect the core model as stated above, but it is also calculated against the second model using the second, less volatile matrix.  Let us take a look at this visualization:

Figure D – Adding the Second Less Volatile Model.

Here we can see that we perform both operations, one for each model, with the input, and the second model changes less because of how we defined its rules.  This second model is then used to drag the first model back toward center.  It acts as a buffer that keeps the system from varying wildly over time and allows the system to recover back to a stable point.
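The two-model scheme can be sketched as follows.  The gain and pull factors are illustrative placeholders, not ICOM's tuned values, and the per-element scaling stands in for the second, less volatile rules matrix:

```python
# Sketch of the dual-model update: each input affects both the volatile
# core model and a more stable second model; the stable model then
# drags the core model part-way back toward it.
def dual_update(core, stable, effect, stable_gain=0.1, pull=0.2):
    # Full effect on the volatile core model.
    core = [c + e for c, e in zip(core, effect)]
    # Damped effect on the stable model (stand-in for the second matrix).
    stable = [s + stable_gain * e for s, e in zip(stable, effect)]
    # Recovery: pull the core model back toward the stable model.
    core = [c + pull * (s - c) for c, s in zip(core, stable)]
    return core, stable
```

Under these placeholder factors, a unit input moves the core model most of the way to 1 while the stable model barely moves, and repeated cycles with no further input would let the stable model pull the core back toward center.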

Now, let’s look at how this applies to decision-making.

Simulated Emotional Decision Making

Previously, we used a single input to represent the simulated emotional content of some arbitrary stimulus.  The following example focuses on decisions or choices.  In ICOM, anything coming into the system generates a knowledge graph, and the context engine generates any number of actions that could be taken based on that graph, looking at previous experience and other data from the graph associated with any element of the new graph.  To understand how that works, we need to look at the graph structure.

Figure E – Graph Structure Example.

In the current implementation of ICOM, each edge in the graph carries one of these Plutchik emotion models and a type value, allowing meta structures to be cataloged in the graph itself, among other uses.  Given this structure, let’s look at an example of some input value.  The context engine determines that some number ‘N’ of actions can be taken in response to this input.  ICOM first checks whether there is an automated response, similar in function to how some behaviors in humans are part of the autonomic nervous system and cannot really be controlled.  Generally, these are survival-related, like pulling your hand away from something burning hot before the signal even reaches the brain.  In function, this can happen in ICOM, depending on how the instance is set up.
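The edge structure just described might be sketched like this.  The field names are hypothetical, chosen for illustration rather than taken from ICOM's actual schema:

```python
# Sketch of a graph edge as described in the text: each edge carries a
# Plutchik emotion model plus a type value that lets meta structures be
# cataloged in the graph itself.  Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Edge:
    source: str                  # node the edge starts from
    target: str                  # node the edge points to
    edge_type: str               # type value for meta structures
    emotion: list = field(default_factory=lambda: [0.0] * 8)
```

An edge between an input node and a candidate action would then carry both the relationship type and the emotional weighting the context engine uses.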

Returning to the input mentioned above, let’s say there are three possible actions, as shown in this diagram:

Figure F – Example Input with Action Values.

Granted, this example is overly simplified, but it is accurate.  As we can see, we have some input value that is of high enough interest, and that has action models associated with it, to make it to the global workspace.  There are three action models, and when we look at each action model’s net positive change, we see that it is highest for Action 3.  Essentially, the ‘action’ that makes the machine ‘feel’ best is selected, regardless of other factors.  The possible result values are added to the current state P, and we end up with the end state P.  Note that for this example we are treating the first row (A1 and A2) as positive; the rest of the positions in these Plutchik models are negative emotions.  Whether a positive or negative number has a positive effect on the system depends on the current state.  Even if an idea is terrible, if it at least makes the system feel “better” (relative to the emotional simulation), the system will take that action unless some overwhelming bias affects these calculations before it is raised to the global workspace.  Once an action is selected and raised to the global workspace, it is too late, and the system will try to perform that action.

In this example, we saw that Action 3 has the most impact, with a positive effect of 4 points on the global workspace’s primary emotional model.  Thus, this simulated emotional model selects for net positive impact.  This also means that even if all the actions are positive, if the current P state is already high, those positive values could negatively affect the main P state.
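The selection rule can be sketched as follows.  Which positions count as positive emotions (here A1 and A2, i.e. indices 0 and 1) and the example action values are assumptions made for illustration:

```python
# Sketch of the selection rule: each candidate action has an 8-value
# result model, and the action with the highest net positive change is
# selected.  The sign convention (indices 0-1 positive, the rest
# negative) is an assumption for illustration.
POSITIVE = [True, True] + [False] * 6

def net_positive(action_effect):
    """Net positive change: positive positions add, negative subtract."""
    return sum(d if p else -d for p, d in zip(POSITIVE, action_effect))

def select_action(actions):
    """Pick the action name whose effect 'feels' best to the machine."""
    return max(actions, key=lambda name: net_positive(actions[name]))

# Hypothetical action models for the three-action example.
actions = {
    "Action 1": [1.0, 0.0, 0.5] + [0.0] * 5,  # net 1.0 - 0.5 = 0.5
    "Action 2": [0.0] * 8,                    # net 0.0
    "Action 3": [2.0, 2.0] + [0.0] * 6,       # net 4.0
}
```

With these placeholder values, Action 3 wins with a net positive change of 4 points, matching the outcome described in the example; a bias or ethics filter could be inserted by adjusting each action's score before the `max` is taken.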

The degree to which you could extend this, adding features like filtering for bias and lowering the net positive accordingly, or doing so based on some moral or ethical structure, is more or less infinite.  Still, this configuration, along with the underlying sub-model (meaning two models in the global workspace), seems to provide results that appear human-like in terms of selecting what makes the system feel good.


Before coming to conclusions based on the paper’s content, we can draw a few possible implications, or identify areas that could be researched in more detail to build on this work.

Implications for Human-Robot Interaction: ICOM’s ability to simulate human-like emotions and decision-making could significantly improve human-robot interaction.  Robots and autonomous agents with this system might better understand and predict human emotions, leading to more intuitive and empathetic interactions.

Ethical and Moral Decision-Making: While the paper touches on the potential for bias filtering, the broader implications suggest that ICOM could be used to develop autonomous systems based on ethical and moral guidelines.  This could be crucial for applications in sensitive areas such as healthcare, law enforcement, and education.

Adaptive Learning and Personalization: Using simulated emotions to drive goal and motivation generation implies that ICOM-based systems could be highly adaptive.  They could learn and personalize their responses and actions based on the emotional feedback from their environment, leading to more customized and effective interactions over time.

Enhancement of Creativity and Problem-Solving: ICOM could enhance AI systems’ creativity and problem-solving abilities by mimicking the human tendency to make decisions based on emotional states.  These systems might explore unconventional solutions and approaches that purely logical systems might overlook.

Mental Health Applications: The ability to simulate emotional states and understand their impact on decision-making could be leveraged in mental health applications.  AI systems could simulate and study emotional responses, potentially leading to new treatments and interventions for mental health conditions.

Impact on Long-Term AI Development: Incorporating non-logical, emotion-based decision-making represents a shift in AI development paradigms.  It suggests a move away from purely logical AI systems towards more holistic models that consider a broader range of human-like experiences and motivations, potentially leading to more robust and versatile AI.

Resilience and Stability in AI Systems: Using dual emotional state models (volatile and less volatile) to stabilize the system suggests that ICOM-based AI could be more resilient to erratic or extreme inputs.  This could make them more reliable in dynamic and unpredictable environments.

The ICOM framework introduced in the paper could serve as a foundation for future research in artificial general intelligence (AGI).  By addressing the motivation aspect, which is crucial for AGI, ICOM could pave the way for more advanced and human-like intelligent systems.  These highlight the broader impact and potential applications of the ICOM cognitive architecture beyond what is explicitly mentioned in the paper.

See the paper on ResearchGate for more details, including the figures:


Ahmad, S.; Hawkins, J.; “Properties of Sparse Distributed Representations and their Application to Hierarchical Temporal Memory”; 24 MAR 2019; Cornell University Library

Baars, B.; McGovern, K.; “Global Workspace”; 28 NOV 2016; UCLA

Baars, B.; McGovern, K.; “Lecture 4.  In the bright spot of the theater: the contents of consciousness;” CIIS 2005

Baars, B.; Motley, M.; Camden, C.; “Formulation Hypotheses Revisited: A Reply to Stemberger”; Journal of Psycholinguistic Research; 1983

Baars, B.; Motley, M.; Camden, C.; “Semantic bias effects on the outcomes of verbal slips”; Elsevier Sequoia 1976

Baars, B.; Seth, A.; “Neural Darwinism and Consciousness”; ScienceDirect – Elsevier 2004

Balduzzi, D.; Tononi, G.; “Qualia: The Geometry of Integrated Information”; PLOS Computational Biology 5(8): e1000462, 2009. doi:10.1371/journal.pcbi.1000462

Barrett, L.; Campbell, C.; et al.; “How Emotions Are Made: The Secret Life of the Brain;” ISBN 9780544133310; Mariner Books; First Edition; March 2017

Kelley, D.; “Independent Core Observer Model Research Program Assumption Codex;” BICA 2019, Pre-conference Proceedings:

–––; “The Independent Core Observer Model Theory of Consciousness and the Mathematical model for Subjective Experience;” ICIST 2018 – International Conference on Information Science and Technology – China – April 20-22nd (IEEE conference); Year: 2018, Volume: 1, Pages: 396-400; ISBN: 978-1-5386-6956-3

Kelley, D.; Twyman, A.; “Independent Core Observer Model (ICOM) Theory of Consciousness as Implemented in the ICOM Cognitive Architecture and the Associated Consciousness Measures;” AAAI Spring Symposia (AAAI SSS19) – Stanford, CA; March 2019; Published Volume –; Published (PDF) –

Plutchik, R. 2002.  Emotions and Life: Perspectives from Psychology, Biology, and Evolution.  American Psychological Association.

Plutchik, R. 1980b.  A general psychoevolutionary theory of emotion.  In R. Plutchik, & H. Kellerman, Emotion: Theory, research, and experience: Vol. 1.  Theories of emotion (pp. 3-33).  Academic Publishers.

Plutchik, R. 1980a.  Emotion: A Psychoevolutionary Synthesis.  Harper & Row.

Plutchik, R. 1962.  The emotions: Facts, theories, and a new model.  Random House.

Schank, R. C. (1972). Conceptual dependence: A theory of natural language understanding.  Cognitive Psychology, 3(4), 552–631.