The BICA 2018 review board has approved the next AGI Laboratory paper, titled: “Feasibility Study and Practical Applications Using Independent Core Observer Model AGI Systems for Behavioral Modification in Recalcitrant Populations (preview)”. The paper will be published by BICA in 2018; here is a preview:

Abstract: This paper articulates the results of a feasibility study on the theoretical usage and potential impact of an Independent Core Observer Model (ICOM) based Artificial General Intelligence (AGI) system, and demonstrates why similar systems are well adapted to manage soft behaviors and judgments, in place of human judgment, to ensure compliance in recalcitrant populations. Such ICOM-based systems may prove able to enforce safer standards, ethical behaviors, and moral thinking in human populations where behavioral modification is desired. This preliminary research shows that such a system is not just possible but workable, with far-reaching implications, and that it is feasible from a strictly medical standpoint. Details around implementation, management, and control on an individual basis make this approach an easy initial application of ICOM-based systems in human populations; they also introduce certain considerations, including severe ethical concerns.

Selected components of the paper (note that this preview contains only parts of the paper):

1 Introduction

The Independent Core Observer Model (ICOM) Cognitive Architecture (Kelley), as an emotion-driven Artificial General Intelligence (AGI) system, is designed to make or evaluate emotion-based decisions that can be applied to selection choices within its training context. This paper focuses on a feasibility study of such an AGI system as applied to action control and governance of humans in recalcitrant populations, where the AGI system exercises oversight, in terms of free will, over the members of the target population to ensure compliance. This feasibility analysis is designed to explore the practical implementation of using an ICOM AGI system to manage human behavior.

1.1 Benefits and Foundation

From a theoretical standpoint, we can argue that one positive benefit is helping recalcitrant populations make better choices. ICOM provides a framework for choice control and the enforcement of ethical thinking based on IVA theory (Kelley) (as opposed to other approaches (Bostrom)) and the general biasing of the current ICOM architecture toward western ethics (Lee).
“Within the realm of human behavior, technologies based on the use of aversive contingencies can be conceptualized as default technologies because they come into play when natural contingencies or positive reinforcement fail to produce a desired behavioral outcome” (Iwata)

Given the previous success of aversion therapy (Bresolin), it has been shown that this sort of approach, combining aversion and positive reinforcement techniques (APBA), does in fact return results (Israel) and could be used as a method of control by the AGI system over human populations. ICOM-based monitoring is essentially a ‘value-sensitive design’ approach (Umbrello) to AGI oversight of behavior.

1.2 Experimental Risk

In terms of considerations, there are a number of issues to keep in mind when evaluating the fundamental research in this area. Much of the research in behavior modification is limited to special or atypical populations (Israel). The case of electrical aversion therapy raises additional considerations, such as consistency of location (Duker), and there is considerable resistance to this sort of therapy, with calls to limit it to populations for which there are no other options (Spreat). Further, much of the existing research lacks the control groups and control procedures that we should try to address in any program based on this work (Bresolin). Given the wider legal considerations as well, it is important to examine these issues in detail, even with the support of the medical field and even though this sort of manipulation and control is medically sound (Jordan).

[ see the final BICA paper when published for the bulk of the paper ]

1.7 On Technical Feasibility

All of the hardware used for the POC is commercial off-the-shelf hardware. One shortcoming of the current hardware is that the aversion bracelet device needs to be made of a durable material that subjects cannot easily remove; this could be solved in much the same way that police handcuffs are constructed. The HoloLens turns out to be very heavy; its camera system is far superior, but it is too heavy for most people to wear for long periods. The best solution using current technology is likely a smartphone worn on the subject’s chest with the camera running.

Positive reinforcement through an implanted medical dosing device with dopamine is not medically practical (Jordan) with the current state of technology; however, other drugs, such as micro-doses of MDMA (Jordan), would work to a similar effect as dopamine and are more practical (Guiard)(Hashemi)(Hagino).

One big failing is that all of these systems require strong internet access to support the HTTPS connections to the cloud. Without that connection, it would be impossible to implement the monitoring and control functionality. A non-connected system is possible but, in most situations, would require far more local resources.

Overall, the basis for this system, aside from the cloud aspect, does exist from a hardware standpoint, and it has been demonstrated that it could be built without major new development; meaning the only real problems with implementation are engineering ones.

[ see the final BICA paper when published for the bulk of the paper ]  

1.10 Contextual Framework

Our current society is awash in big data being used to create inequality (O’Neil). If we use an artificial intelligence that does not work in a way analogous to our own intelligence, it may be very alien to us (Barrat). Even if it is ‘like’ us at a high level, it could treat humanity the same way ‘we’, as humans, treat ethical rules (Yampolskiy). At the very least, ethical models more aligned with humanity can likely be biased as needed (Barrett), giving us additional control over how the system evolves.
When pursuing research like this, it is important to recognize the issues that would affect adoption. Even if the system applies only to recidivist populations, these considerations will affect the research program and product adoption.
It is important to note that AGI systems like this, even semi-sentient ones, run the risk of allowing humans or governments to implement numerous worst-case scenarios (Tegmark). This includes lowering the barriers for a ruler or ruling class by reducing how many, if any, human keys to power they require; which in turn creates even more danger when, and if, the AGI actually decides it is time to take complete control from that ruling class (Mesquita). The fact that we could literally take equipment off the shelf, throw it together, and have it work so effectively makes us think it is not AGI that is the issue so much as the people who will abuse it before it is fully awake. Current AI systems, in many cases, are beyond reproach, beyond our control, and completely opaque, and their decisions are absolute (O’Neil). It is only a matter of time before governments abuse the power we have already given the narrow AI systems we use today.
In terms of impact:

“AI is a dual-use technology like nuclear fission. Nuclear fission can illuminate cities or incinerate them. Its terrible power was unimaginable to most people before 1945. With advanced AI, we’re in the 1930s right now. We’re unlikely to survive an introduction as abrupt as nuclear fission’s.” (Barrat, J.)

In many ways, this study turned out to be more feasible than expected, and almost as scary as Roko’s Basilisk (Roko).

[ see the final BICA paper when published for the bulk of the paper ]  

2 Conclusions

Going back to the main question: “Can such a system be practically implemented, one that includes ICOM-based monitoring of, control over, and manipulation of a recalcitrant human population?” The short answer is clearly yes, it can be implemented. There are no new technological or scientific problems to solve; there are only engineering problems, such as designing a non-rubber ‘band’ for the aversion therapy device. Further, once the equipment is in place, these technologies can be applied to recalcitrant populations at scale. Further research should be done to measure the effectiveness of these techniques versus non-adoption or non-aversion therapies for those populations. Certain factors would need to be addressed. For example, regarding the problem of using dopamine, we found after additional research that the implanted medical dosing device is not effective with the current state of the technology (Jordan). Additional tests would likely need to include MDMA micro-dosing, to test that substance’s effectiveness in place of dopamine and whether such micro-doses of MDMA have the desired effect using the current implant technology. Further, there is also the possibility of heart problems; so, before wearing one of the aversion therapy bracelets, a subject would require an EKG test for heart irregularities to ensure that the aversion therapy would not cause undesired issues (Jordan).

[ see the final BICA paper when published for the bulk of the paper ]  


1. Roko, M.; “Roko’s Basilisk” –’s_basilisk
2. Kelley, D.; Waser, M.; “Human-like Emotional Responses in a Simplified Independent Core Observer Model System” – BICA 2017 Proceedings, Procedia Computer Science;
3. Waser, M.; Kelley, D.; “Architecting a Human-like Emotion-driven Conscious Moral Mind for Value Alignment and AGI Safety” – AGI Lab, Provo Utah – Pending Peer Review; AI and Society: Ethics, Safety and Trustworthiness in Intelligent Agents – Stanford University, Palo Alto, CA, March 26-28
4. Lee, N.; Kelley, D.; “The Intelligence Value Argument and Effects on Regulating Autonomous Artificial Intelligence”; chapter to be included in a book by Springer, to be published 2017 – title unannounced. Preview here:
5. Bostrom, N.; Ethical Issues in Advanced Artificial Intelligence; Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12-17
6. Kelley, D.; “The Independent Core Observer Model Theory of Consciousness and the Mathematical Model for Subjective Experience”; passed peer review; IST 2018 – The 2018 International Conference on Information Science and Technology – China – April 20-22.
7. Umbrello, S.; Frank De Bellis, A.; “A Value-Sensitive Design Approach to Intelligent Agents”; – Forthcoming chapter in Artificial Intelligence Safety and Security (2018) CRC Press (.ed) Roman Yampolskiy.
8. APBA; “Identifying Applied Behavior Analysis Interventions”; Association of Professional Behavior Analysts (APBA) 2016-2017
9. OPTUM; “Modeling Behavior Change for Better Health”; Resource Center for Health and Well-being;
10. Winters, S.; Cox, E.; Behavior Modification Techniques for the Special Educator; ISBN 084225000X
11. O’Neil, C.; “Weapons of Math Destruction”; Crown New York; 2016
12. Barrat, J.; “Our Final Invention – Artificial Intelligence and the End of the Human Era”; Thomas Dunne Books; 2013
13. Yampolskiy, R.; “Artificial Superintelligence – A Futuristic Approach”; CRC Press – Taylor & Francis Group 2016
14. Barrett, L.; “How Emotions Are Made – the Secret Life of the Brain” Houghton Mifflin Harcourt – Boston New York 2017
15. Bresolin, L.; Aversion Therapy. JAMA. 1987;258(18):2562–2566. doi:10.1001/jama.1987.03400180096035
16. Iwata, Brian A. “The Development and Adoption of Controversial Default Technologies.” The Behavior Analyst 11.2 (1988): 149–157. Print.
17. Spreat, S., Lipinski, D., Dickerson, R., Nass, R., & Dorsey, M. (1989). The acceptability of electric shock programs. Behavior Modification, 13(2), 245-256.
18. Duker, PC; Douwenga, H.; Joosten, S.; Franken, T.; “Effects of single and repeated shock on perceived pain and startle response in healthy volunteers.”; Psychology Laboratory, University of Nijmegen and Plurijn Foundation, Netherlands.
19. Pavlok; “Product Specification”;
20. Israel, M.; Blenkush, N; von Heyn, R.; Rivera, P; “Treatment of Aggression with Behavioral Programming that includes Supplementary Contingent Skin-shock”. JOBA-OVTP v1 n4 2008;
21. Israel, M.; “Behavioral Skin Shock Saves Individuals with Severe Behavior Disorders from a life of seclusion, Restraint and/or warehousing as well as the Ravages of Psychotropic Medication: Reply to the MDRI Appeal to the U.N. Special Rapporteur of Torture”, 2010
22. Jordan, Dr. R.; interview 4/7/2018; Provo Ut
23. Simeonov, A.; “Drug Delivery via Remote Control – The first clinical trial of an implantable microchip-based delivery device produces very encouraging results.” Genetic Engineering & Biotechnology News; 2012;
24. Mesquita, B.; Smith, A.; “The Dictator’s Handbook: Why Bad Behavior is Almost Always Good Politics”; Public Affairs 2012; ISBN: 1610391845
25. Tegmark, M.; “Life 3.0 – Being Human in the Age of Artificial Intelligence”; Knopf, Penguin Random House; 2017; ISBN 9781101946596
26. Guiard, B.; Mansari, M.; Merali, Z; Blier, P.; “Functional Interactions between dopamine, serotonin and norepinephrine neurons: an in-vivo electrophysiological study in rats with monoaminergic lesions”; IJON V11 I5 1AUG2008;
27. Hashemi, P.; Dandoski, E.; Lama, R.; Wood, K.; Takmakov, P.; Wightman, R.; “Brain Dopamine and serotonin differ in regulation and its consequences”; PNAS July 17, 2012. 109 (29) 11510-11515;
28. Hagino, Y.; Takamatsu, Y.; Yamamoto, H.; Iwamura, T.; Murphy, D.; Uhl, G.; Sora, I.; Ikeda, K.; “Effects of MDMA on Extracellular Dopamine and Serotonin Levels in Mice Lacking Dopamine and/or Serotonin Transporters”; CN BSP LTD.; 2011 Mar; 9(1): 91-95; doi: 10.2174/157015911795017254