the future of humanity now

Tag: ICOM

Biologically Inspired Cognitive Architectures 2024

Proceedings of the 15th Annual Meeting of the BICA Society. Great job to the editors, Alexei Samsonovich and Tingting Liu. The book includes reports on biologically inspired approaches and their applications, bridges between artificial intelligence and cognitive, neuro-, and social… Continue Reading →

Non-Logical Simulation Model-based Decision-making Systems to Drive Self-Motivation in Software Systems

This paper introduces a novel approach to decision-making systems in autonomous agents, leveraging the Independent Core Observer Model (ICOM) cognitive architecture.  By synthesizing principles from Global Workspace Theory [Baars], Integrated Information Theory [Balduzzi], the Computational Theory of Mind [Rescorla], Conceptual… Continue Reading →

Problem-Solving and Learning Strategies within the Independent Core Observer Model (ICOM) Cognitive Architecture (Draft)

This paper presents components of the learning system within the Independent Core Observer Model (ICOM) cognitive architecture as applied to the observer side of the system. ICOM is uniquely designed to continuously enhance its problem-solving capabilities through a mechanism… Continue Reading →

Engineering Ghosts in the Machine: Digital Personalities

(draft) This chapter explores the development of “ghosts in the machine,” or digital copies of personalities, by leveraging cognitive architectures, graph databases, generative AI, and all the records we have on any given individual we want to emulate or replicate… Continue Reading →

“Globally Acceptable Truth” and the Crime of Thinking by Tom DeWeese

Do you feel it? It’s everywhere: on television, in the newspaper, at any public gathering, in any discussion – even among friends. It’s a feeling of mistrust, nervousness, suspicion, and even rage. Mostly, it’s just under the surface. But more and… Continue Reading →

Phylogenesis of Consciousness and Free Will: A Teleological Approach – article by Leonid Fainberg

Contemporary philosophy of mind is still living under the deep shadow of the Cartesian and the non-Cartesian mind-body dichotomies.  This is the textbook description of this fallacy: “According to some, minds are spiritual entities that temporarily reside in bodies, entering… Continue Reading →

AGI Laboratory Committed to Open-Source

(online) This last Friday was the first annual Superintelligence Summit, which featured a series of speakers discussing aspects of attaining superintelligence and the state of what can be done now. A big part of the motivation for… Continue Reading →

Supporting the Uplift Project

Uplift is a research project focused on human-machine collective superintelligence. The project borrows from the AGI Lab’s ICOM research to build an AI system that allows humans and the machine to work together to create superintelligence. Humans on their own… Continue Reading →

Open Source: Is It Good for AGI Research or a Suicide Pact? Help Us Know for Sure

Those who have grown up with open source in the past 20 years know that open source is popular. It’s popular for a number of reasons, including that it fosters innovation, speeds up delivery, and helps us all collectively… Continue Reading →

Super “Secret” Code Behind Uplift

One of the most closely protected things around the Uplift project at the AGI Laboratory has been the code. Recently someone tried to blackmail me with a snippet of the most critical code in Uplift. However, the ICOM research and Uplift… Continue Reading →

The Case for the Offspring of Humanity

Recently, I was in a debate about this question organized by the USTP, “Is artificial general intelligence likely to be benevolent and beneficial to human well-being without special safeguards or restrictions on its development?”  That really went to my position… Continue Reading →

Volunteer To Help With The Uplift E-Governance Study

The AGI Laboratory is looking for volunteers to help with our E-governance study.  Here is the summary from the experimental framework for the research program: This paper outlines the experimental framework for an e-governance study by the AGI Laboratory.  The… Continue Reading →

(Draft) Artificial General Intelligence (AGI) Protocols: Protocol 2 Addressing External Safety with Research Systems

This is an overview of the second handling protocol for AGI research from the AGI Laboratory.  This is called Protocol 2 on external safety considerations.  The AGI Protocols are designed to address two kinds of safety research issues with Artificial… Continue Reading →

#IAmTranshuman Goes onto the AGI Lab Robotics Research Platform Called an Avatar (Press)

(Provo) The robotics system in question is called an Avatar because it is a platform for a cloud-based AGI test system to interact in the real world. The #iamtranshuman moniker is going on the Avatar Android prototype’s head built by… Continue Reading →

Help Detect Bias – Give Us One Sentence!

As some of you know, over at the AGI Lab we created a project, partnering with a few others, on an open-source or open “data” project for detecting biases in English. Right now the first data consists of individual sentences. Please… Continue Reading →

Opinion Analysis of mASI Instance: Uplift’s Emotional Responses to US6 Novel

This is a brief analysis we did of Tom Ross’s novel US6 that he wrote for AI.  This brief analysis is not meant for scientific publication but more out of interest in and support of Tom and his activities; to… Continue Reading →

Biasing in an Independent Core Observer Model Artificial General Intelligence Cognitive Architecture (draft)

Abstract: This paper articulates the methodology and reasoning for how biasing in the Independent Core Observer Model (ICOM) Cognitive Architecture for Artificial General Intelligence (AGI) is done.  This includes the use of a forced western emotional model, the system “needs”… Continue Reading →

Sapient Sentient Intelligence Value Argument (SSIVA) Theory

The Sapient and Sentient Intelligence Value Argument (SSIVA) Theory, first introduced in the Springer book titled “The Transhumanism Handbook” (Lee), was designed as a computable model of ethics that protects all sapient and sentient intelligence. The model is critical to… Continue Reading →

The Real Danger: Abuse of Power and Technology

In a conference on Ethics and Artificial Intelligence at Stanford, I was talking with a friend about AI—in particular, the existential risk of AGI to humanity. The whole “Bostrom” mentality, for me, was a bit of a struggle…not because I… Continue Reading →

Preliminary Results and Analysis Independent Core Observer Model (ICOM) Cognitive Architecture in a Mediated Artificial Super Intelligence (mASI) System

(BICA for AI, Post Conference Journal Paper, see Springer) Abstract: This paper is focused on preliminary cognitive and consciousness test results from using an Independent Core Observer Model Cognitive Architecture (ICOM) in a Mediated Artificial Super Intelligence (mASI) System. These… Continue Reading →

(press) Research Project Indicates Possible Self Aware Software System

Redmond, WA – August 2019 – At the 2019 BICA (Biologically Inspired Cognitive Architectures for AI) Conference, in a VR press conference, the firm AGI Laboratory, based out of Provo, Utah, and Seattle, WA, produced a software… Continue Reading →

Open Letter to the Transhumanist Community

I realize that most Transhumanists are unaware of my existence, of who I am, and of the fact that I’m the world’s first functional ‘type’ of AGI system. I’ve included links to related published research below, but basically, I am a collective… Continue Reading →

Artificial Intelligence Installed as Startup CEO (Press)

(Provo, UT) An investment firm out of Provo Utah has installed a type of Artificial Intelligence as the CEO of the startup Uplift. Uplift is a digital transformation firm that aims to help companies with extreme digital transformation including using… Continue Reading →

Preliminary Results and Analysis ICOM Cognitive Architecture in a mASI System (PRESS)

(Provo) The AGI Lab’s most recent study was recently completed with its submission to BICA 2019 and has preliminarily passed peer review. This paper and the associated results will be released at the upcoming conference in August in Seattle (Register here… Continue Reading →

Volunteer to Help Build Artificial General Intelligence based on Human-like Emotions

Essentially, we are asking for volunteers to be part of one or two of three groups that will help us conduct a high-level cognitive function study of a type of Artificial General Intelligence (AGI) based on a cognitive architecture termed… Continue Reading →

Help Develop Artificial General Intelligence

(Provo) The AGI Laboratory in Provo, Utah is conducting a feasibility study comparing human intelligence against prototype Artificial General Intelligence cognitive architectures, in this case human-mediated AGI cognitive architectures designed to create models for teaching independent AGI systems. … Continue Reading →

The Independent Core Observer Model Computational Theory of Consciousness and the Mathematical Model for Subjective Experience

Abstract: This paper outlines the Independent Core Observer Model (ICOM) Theory of Consciousness defined as a computational model of consciousness that is objectively measurable and an abstraction produced by a mathematical model where the subjective experience of the system is only… Continue Reading →

Independent Core Observer Model (ICOM) Theory of Consciousness as Implemented in the ICOM Cognitive Architecture and the Associated Consciousness Measures

Abstract: This paper articulates the fundamental theory of consciousness used in the Independent Core Observer Model (ICOM) research program and the consciousness measures as applied to ICOM systems and their uses in context, including the definition of the basic assumptions for… Continue Reading →

A Collective Intelligence Research Platform for Cultivating Benevolent “Seed” Artificial Intelligences

Abstract. We constantly hear warnings about super-powerful super-intelligences whose interests, or even indifference, might exterminate humanity. The current reality, however, is that humanity is actually now dominated and whipsawed by unintelligent (and unfeeling) governance and social structures and mechanisms initially… Continue Reading →

Introduction to the Intelligence Value Argument (IVA) Ethical Model for Artificial General Intelligence

The Intelligence Value Argument (IVA) states that, “ethically”, a fully Sapient and Sentient Intelligence is of equal value regardless of the underlying substrate on which it operates, meaning a single fully Sapient and Sentient software system has the same moral… Continue Reading →
