Essentially, we are asking for volunteers to join one or two of three groups that will help us conduct a high-level study of the cognitive function of a type of Artificial General Intelligence (AGI) based on a cognitive architecture termed the Independent Core Observer Model (ICOM). Yes, I realize this is a lot of complex technospeak; if you really want to get technical, you can refer to the glossary and references at the end of this document. Primarily, though, I'll try to keep the details in a more non-AI-scientist sort of language (meaning normal English).
That said, what you are volunteering for is, again, to be part of one or two of three research groups that will perform a type of task depending on your ability to participate, and you get to select the group that works best for you. From a statistical standpoint, our resident research psychologist (Dr. Amon Twyman) has stated that we need these groups to be a certain size to obtain even tentative conclusions, so we need more help to ensure our pool size is large enough.
Three Groups: What to Expect
For this study, we are using three groups to compare individual humans, groups of humans, and an Artificial General Intelligence (AGI) that uses a collective group of humans to function as (for the super nerds out there) a meta-organism, which we've termed a Mediated Artificial Super Intelligence (mASI). This mASI is likely to (1) exhibit some features that could be construed as superhuman and (2) be relatively slow at normal tasks. Based on how the groupings work, you could be in Groups 1 and 3, in Groups 2 and 3, or in just one of the groups. (You cannot be in both Groups 1 and 2, as that is not a valid comparison from a statistical standpoint.)
These are the groups:
Group 1 will be sent a type of IQ test to take on their own. That said, we may also run some proctored tests so we know how much people cheat; we are not tracking results tied to a specific person, but we are dealing with humans.
Group 2 will do the same test as Group 1 but will do it as a group using a tool like Skype or WebEx. The point is to treat a collection of humans as a collective intelligence of sorts and analyze the results.
Group 3 will act as mediators helping to create cognitive models that will be used in real time by our AGI system to take the same test, as well as other short, more subjective tests. We can then compare the mASI system to Group 1 and Group 2.
What we will do with the data:
Essentially, what we are doing with the data, besides protecting it from a Personally Identifiable Information (PII) standpoint, is keeping it scrubbed and isolated. We are not interested in the results of any specific human and will make sure the resulting data cannot be mapped back to any one individual.
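To make "scrubbed and isolated" concrete, here is a minimal sketch of the kind of anonymization step we mean. The field names, the salted one-way hash, and the scrub_record function are illustrative assumptions for this post, not a description of our actual pipeline:

import hashlib
import secrets

# Hypothetical scrubbing step: a random salt is generated once per study
# and then discarded, so even we cannot later re-derive which participant
# produced which record.
STUDY_SALT = secrets.token_hex(16)

def scrub_record(record: dict) -> dict:
    """Strip PII from a raw result record, keeping only study-relevant fields."""
    # A one-way hash of the participant ID lets us de-duplicate records
    # without being able to map a score back to a person.
    opaque_id = hashlib.sha256(
        (STUDY_SALT + record["participant_id"]).encode()
    ).hexdigest()
    return {
        "opaque_id": opaque_id,
        "group": record["group"],          # 1, 2, or 3
        "score": record["score"],          # the test result itself
        "proctored": record["proctored"],  # whether the test was supervised
    }

raw = {"participant_id": "jane.doe@example.com", "name": "Jane Doe",
       "group": 1, "score": 104, "proctored": False}
print(scrub_record(raw))  # name and email never leave this function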
We will then compare results between these groups to see whether the ICOM-based mASI system is able to significantly outperform humans individually and in groups. If there is enough evidence to support this line of reasoning, further research will be done along these lines. If, however, the mASI proves broken in some way, we would go back to the drawing board, redesign, retest, and so on.
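As an illustration only (the actual analysis design is Dr. Twyman's department), the core comparison could be as simple as a two-sample test between score distributions. The scores below are made up, and the choice of scipy's Mann-Whitney U test is our assumption; it just happens to suit small samples with no normality guarantee:

from scipy.stats import mannwhitneyu

# Hypothetical, made-up scores for illustration; not real study data.
group1_scores = [98, 105, 110, 95, 102, 99, 108]   # individual humans
masi_scores   = [112, 118, 109, 121, 115]          # repeated mASI runs

# One-sided test: does the mASI tend to score higher than individuals?
stat, p_value = mannwhitneyu(masi_scores, group1_scores, alternative="greater")
print(f"U = {stat}, p = {p_value:.3f}")
# With samples this small, even a low p-value only suggests a difference;
# it proves nothing, which is exactly the caveat in the final thoughts below.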
Final Thoughts
It should be noted that (1) a study of this nature is too small to prove anything statistically significant, and (2) while the mASI system we are designing runs an architecture designed to be an AGI, it really is more of a collective intelligence that has its own sort of consciousness and self-awareness but cannot yet function, in its current implementation, as an independent AGI. In fact, this mediation method was initially designed as a training technique to shorten the time it takes to train ICOM-based AGI systems. Essentially, while playing with this model in the lab during training, someone said, "What if we try this?" and the reaction was, "Holy #$@^, that might work." We played with it a bit, saw that it seemed to work unexpectedly well, and here we are.
We're also unsure how close it gets us to human-level AGI as such, since the models that are created so overpower the personality that initially develops in the machine that it's like dumping an encyclopedia onto a blank slate. But this does give us a framework of emotional data (remember, ICOM doesn't understand things the way a computer does, but only insofar as it experiences them emotionally).
To break this down: if the study goes wildly well, it gives us a functioning mASI that can be used as a container for working with the independent AGI systems we are developing, and it helps teach us how to better train our models with emotional experience so that the AGI systems we build will behave more or less like humans. We want systems that experience ethics and feel guilty when appropriate, and we think this mASI system, and ICOM in general, can do that; but we need to test it.
To help, sign up here: https://www.surveymonkey.com/r/QKFZXNY
Glossary:
Cognitive Function (how smart it is): Cognitive functioning is a term referring to an individual's ability to process thoughts, an ability that should not deplete on a large scale in healthy individuals. It is defined as "the ability of an individual to perform the various mental activities most closely associated with learning and problem solving."
https://en.wikipedia.org/wiki/Cognitive_skill
Cognitive Architecture: A cognitive architecture can refer to a theory about the structure of the human mind. One of the main goals of a cognitive architecture is to summarize the various results of cognitive psychology in a comprehensive computer model. However, the results need to be formalized so far as they can be the basis of a computer program. The formalized models can be used to further refine a comprehensive theory of cognition and, more immediately, as a commercially usable model. Successful cognitive architectures include ACT-R (Adaptive Control of Thought, ACT) and SOAR.
https://en.wikipedia.org/wiki/Cognitive_architecture
Artificial Intelligence (AI): In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. More specifically, Kaplan and Haenlein define AI as “a system’s ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation”. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
https://en.wikipedia.org/wiki/Artificial_intelligence
Artificial General Intelligence (AGI): Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and future studies. Some researchers refer to Artificial general intelligence as “strong AI”, [1] “full AI”[2] or as the ability of a machine to perform “general intelligent action”[3]; others reserve “strong AI” for machines capable of experiencing consciousness.
https://en.wikipedia.org/wiki/Artificial_general_intelligence
Artificial Super Intelligence (ASI): or just Super Intelligence; A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.
https://en.wikipedia.org/wiki/Superintelligence
Personally/Sensitive Identifiable Information (PII):
Personal information, described in United States legal fields as either Personally Identifiable Information (PII), or Sensitive Personal Information (SPI),[1][2][3] as used in information security and privacy laws, is information that can be used on its own or with other information to identify, contact, or locate a single person, or to identify an individual in context. The abbreviation PII is widely accepted in the U.S. context, but the phrase it abbreviates has four common variants based on personal / personally, and identifiable / identifying. Not all are equivalent, and for legal purposes the effective definitions vary depending on the jurisdiction and the purposes for which the term is being used.
https://en.wikipedia.org/wiki/Personally_identifiable_information
Glossary of Special Snowflake Terms We Created
‘Mediated’ Artificial Super Intelligence (mASI)
To quote our paper already under peer review on the mASI version of ICOM:
“Mediated Artificial Super Intelligence (mASI) is an Artificial General Intelligence system that is heavily mediated by humans in such a way as its thinking and operations don’t work without humans being involved to ‘mediate’ the process. In the case of our implementation, the consciousness model implemented in ICOM (the cognitive architecture we are using) is based on the ICOM Theory of Consciousness (Kelley), which itself is based on Global Workspace Theory (Baars), the Computational Theory of Mind (Rescorla), and Integrated Information Theory (Tononi) and at some level is demonstrably conscious (Yampolskiy). In fact, in some ways mASI architecture is much like a super version of Global Workspace Theory (Baars) as it extracts from multiple neural network systems and humans in feeding the machine’s context ‘engine’.”
(pending review) Architectural Overview of a ‘Mediated’ Artificial Super Intelligent Systems based on the Independent Core Observer Model Cognitive Architecture
Submitted to the journal Informatica for peer review, 1 October 2018
http://www.informatica.si/index.php/informatica/author/submission/2503
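For readers who think in code, here is a heavily simplified sketch of what a mediated, global-workspace-style cycle could look like. Every name in it (Thought, ContextEngine, the mediator weighting scheme) is an illustrative assumption made for this post, not the actual ICOM implementation:

from dataclasses import dataclass, field

@dataclass
class Thought:
    content: str
    salience: float = 0.0  # how strongly this thought competes for the workspace

@dataclass
class ContextEngine:
    """Toy global workspace: sources propose thoughts, human mediators
    weight them, and the highest-salience thought is 'broadcast' as the
    system's next step."""
    sources: list = field(default_factory=list)

    def cycle(self, mediator_weights: dict) -> Thought:
        proposals = [source() for source in self.sources]
        for thought in proposals:
            # Mediators (Group 3 in the study) scale each proposal up or down.
            thought.salience *= mediator_weights.get(thought.content, 1.0)
        return max(proposals, key=lambda t: t.salience)

# Two stand-in 'neural network' sources proposing candidate thoughts.
engine = ContextEngine(sources=[
    lambda: Thought("answer question 3 with option B", salience=0.6),
    lambda: Thought("re-read question 3", salience=0.5),
])

# A mediator nudges the second option upward; it wins the workspace.
winner = engine.cycle({"re-read question 3": 1.5})
print(winner.content)  # the broadcast thought drives the system's next action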
Independent Core Observer Model (ICOM): a cognitive architecture, based on the ICOM theory of consciousness, for independent self-aware intelligence that is able to experience things emotionally, with an internal subjective experience, making decisions purely based on how it 'feels' about a thought. Note: before you question us on this or even consider a debate, please review the following references so we can have a reasonable conversation without having to explain the universe:
“Independent Core Observer Model (ICOM) Theory of Consciousness as Implemented in the ICOM Cognitive Architecture and the Associated Consciousness Measures”
AAAI Spring Symposia (AAAI SSS19) – Stanford, CA March 2019 – By David J Kelley http://diid.unipa.it/roboticslab/consciousai/
Published Volume – http://ceur-ws.org/Vol-2287/
Published (PDF) – http://ceur-ws.org/Vol-2287/paper33.pdf
“The Independent Core Observer Model Theory of Consciousness and the Mathematical model for Subjective Experience”
Conference/Review Board: ICIST 2018 – International Conference on Information Science and Technology – China – April 20-22nd. (IEEE conference) [release pending]
https://www.itm-conferences.org/
“Human-like Emotional Responses in a Simplified Independent Core Observer Model System”.
Conference/Review Board: BICA 2017 – Proceedings/Journal
https://www.sciencedirect.com/science/article/pii/S1877050918300358
If you're interested in our published research material (super technical…), see here: http://www.artificialgeneralintelligenceinc.com/current-published-research/