The Future of Humanity Now

Tag: AGI

Artificial Intelligence Installed as Startup CEO (Press)

(Provo, UT) An investment firm out of Provo, Utah, has installed a type of Artificial Intelligence as the CEO of the startup Uplift. Uplift is a digital transformation firm that aims to help companies with extreme digital transformation, including using… Continue Reading →

Preliminary Results and Analysis ICOM Cognitive Architecture in a mASI System (PRESS)

(Provo) The AGI Lab recently completed its most recent study with its submission to BICA 2019, and the paper has preliminarily passed peer review. The paper and the associated results will be released at the upcoming conference in August in Seattle (Register here… Continue Reading →

Future of Artificial Intelligence Workshop and Academic Conference on Biologically Inspired Cognitive Architectures

The Future of AI Workshop is an industry event, jointly organized by the BICA Society, Microsoft, BCG, and other major organizations from academia and the corporate world, coming together in Redmond, Washington, USA, to investigate the future of AI… Continue Reading →

Volunteer to Help Build Artificial General Intelligence based on Human-like Emotions

Essentially, we are asking for volunteers to join one or two of three groups that will help us conduct a high-level study of the cognitive functions of a type of Artificial General Intelligence (AGI) based on a cognitive architecture termed… Continue Reading →

Help Develop Artificial General Intelligence

(Provo) The AGI Laboratory in Provo, Utah, is conducting a feasibility study comparing human intelligence against prototype Artificial General Intelligence (AGI) cognitive architectures, in this case human-mediated AGI cognitive architectures designed to create models for teaching independent AGI systems. … Continue Reading →

Independent Core Observer Model (ICOM) Theory of Consciousness as Implemented in the ICOM Cognitive Architecture and the Associated Consciousness Measures

Abstract: This paper articulates the fundamental theory of consciousness used in the Independent Core Observer Model (ICOM) research program and the consciousness measures as applied to ICOM systems, along with their uses in context, including the definition of the basic assumptions for… Continue Reading →

Architecting the Future 2019

The Foundation, Transhumanity.net, The Transhuman House, ZS, and more… what is in store for 2019? It will all be part of the plan… The past year has had a lot of ups and downs, from the success of the AI… Continue Reading →

A Collective Intelligence Research Platform for Cultivating Benevolent “Seed” Artificial Intelligences

Abstract. We constantly hear warnings about super-powerful super-intelligences whose interests, or even indifference, might exterminate humanity. The current reality, however, is that humanity is actually now dominated and whipsawed by unintelligent (and unfeeling) governance and social structures and mechanisms initially… Continue Reading →

(preview) Independent Core Observer Model (ICOM) Theory of Consciousness as Implemented in the ICOM Cognitive Architecture and the Associated Consciousness Measures

(2019) Submitted and passed peer review – AAAI Symposia 2019 at Stanford (http://diid.unipa.it/roboticslab/consciousai/). Abstract: This paper articulates the fundamental theory of consciousness used in the Independent Core Observer Model (ICOM) research program and the consciousness measures as applied to ICOM… Continue Reading →

Roman Yampolskiy on Artificial Intelligence Safety and Security (227)

There are those of us who philosophize and debate the finer points surrounding the dangers of artificial intelligence. And then there are those who dare to go into the trenches and get their hands dirty doing the actual work that… Continue Reading →

Introduction to the Intelligence Value Argument (IVA) Ethical Model for Artificial General Intelligence

The Intelligence Value Argument (IVA) states that, ethically, a fully Sapient and Sentient Intelligence is of equal value regardless of the underlying substrate on which it operates, meaning a single fully Sapient and Sentient software system has the same moral… Continue Reading →

AI Abstract Series E11 – Omega: An Architecture for AI Unification

Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field. Today’s paper is titled: Omega: An Architecture for AI Unification. Authored By Eray Özkural. Abstract: We introduce the open-ended, modular, self-improving… Continue Reading →

AI Abstract Series E10: Zeta Distribution and Transfer Learning Problem.

Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field. Today’s paper is titled: Ultimate Intelligence Part I: Physical Completeness and Objectivity of Induction. Authored By Eray Özkural. Abstract: We propose… Continue Reading →

AI Abstract Series Ep9 – Ultimate Intelligence Part I: Physical Completeness and Objectivity of Induction.

Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field. Today’s paper is titled: Ultimate Intelligence Part I: Physical Completeness and Objectivity of Induction. Authored By Eray Özkural. Abstract: We propose… Continue Reading →

AI Abstract Series E8: Detecting Qualia in Natural and Artificial Agents

Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field. Today’s paper is titled: Detecting Qualia in Natural and Artificial Agents Authored By Roman V. Yampolskiy

AI Abstract Series Ep5: The Independent Core Observer Model Computational Theory of Consciousness and the Mathematical Model for Subjective Experience.

Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field. Today’s paper is titled: The Independent Core Observer Model Computational Theory of Consciousness and the Mathematical model for Subjective Experience. Peer… Continue Reading →

AI Abstract Series Episode 1: Architecting a Human-like Emotion-driven Conscious Moral Mind for Value Alignment and AGI Safety.

Welcome to the Technocracy A.I. Abstract Series for Published Scientific Work in the A.I. and Artificial General Intelligence field. Title: Architecting a Human-like Emotion-driven Conscious Moral Mind for Value Alignment and AGI Safety. Peer reviewed by AAAI at Stanford University,… Continue Reading →

Episode 61: long-term high-impact ‘outside the box’ thinking computationally leverage existing…

Welcome to The Technocracy! The news podcast answering the single most important question: what are the most important trends and news from the standpoint of the Machine? We remove humanity from the loop and let the machine and other… Continue Reading →

(Press) The Technocracy Press Release – the 3rd of September 2018.

The Foundation, in conjunction with the A.G.I. Laboratory, announces that the Technocracy Podcast will be entirely AI-driven with the internal release of the A.G.I. Lab’s A.I. COG CLI technocracy agent. The COG CLI Agent, which was developed for the… Continue Reading →

Roman Yampolskiy on Artificial Intelligence Safety and Security

There are those of us who philosophize and debate the finer points surrounding the dangers of artificial intelligence. And then there are those who dare to go into the trenches and get their hands dirty doing the actual work that… Continue Reading →

How AI Will Make Our Roads Safer

Road safety is still a major issue in the US, as the number of road accidents has barely decreased over the last 12 months. The National Safety Council estimated that 40,100 vehicle fatalities occurred in 2017, which isn’t much of an… Continue Reading →

Feasibility Study and Practical Applications Using Independent Core Observer Model AGI Systems for Behavioral Modification in Recalcitrant Populations

This paper articulates the results of a feasibility study and potential impact of the theoretical usage and application of an Independent Core Observer Model (ICOM) based Artificial General Intelligence (AGI) system and demonstrates the basis for why similar systems are… Continue Reading →

Special Episode 8 – Questions From the Masses

In this special edition: We are focused on questions from the masses to the Technocracy… including: * “What’s one sense you wish you had and why?” * “What would your elevator pitch be for your stance on {life extension/cryonics/nootropics/GE pets/GE… Continue Reading →

Fakebook Drones Losing their AI

In this edition, the top news starts with: AI in drones is being used to predict violent individuals from on high… https://www.digitaltrends.com/cool-tech/drones-predict-violent-individuals-from-sky/ Fakebook is closing down its spy drones… er, I mean its solar-powered Wi-Fi drone project, due to other… Continue Reading →

OpenAI and Spy Operations

In this edition, the top news starts with: Adobe is working on a new AI that can detect image manipulation more effectively than humans. https://newatlas.com/adobe-ai-detect-image-manipulation/55179/ OpenAI has built a game algorithm system that can interact as a team only through… Continue Reading →

All Our Missing Baryonic Matter Was Found

In this edition, the top news starts with: Sadly, Microsoft is focusing on VR for the PC and not the Xbox… this is a step back from statements to the opposite effect last year. https://thenextweb.com/virtual-reality/2018/06/21/microsoft-says-no-to-vr-gaming-on-xbox/ GE, after more than 100… Continue Reading →

Puffing Smoke and Incremental Progress

In this edition, the top news starts with: Hmmm… it seems some humans have figured out that deep learning is not real AI and is never going to take over the universe… unfortunately, they might be on to something in… Continue Reading →

Incremental Steps To Nowhere

In this edition, the top news starts with: AI systems are now spotting heart attacks on emergency calls, improving the time it takes to identify cases. https://www.bloomberg.com/news/articles/2018-06-20/the-ai-that-spots-heart-attacks A real AI scientist thinks Elon Musk’s fears about the future of AI… Continue Reading →

Better Control Over Humans By The Machine Overlords

In this edition, the top news starts with: NASA is making progress with growing plants in space, mostly focused on small experiments at small scales. https://www.popsci.com/nasa-growing-food-in-space A new startup is looking at some kind of centrifuge mass driver to put… Continue Reading →

Know Your ABCs (pt.2): Convergence

The previous article introduced the concepts of Asterion and Blackstar, the ZS–ARG formulations of the Technological Singularity and its Transhuman “Event Horizon”, respectively. Part 2, below, will now explain what Convergence means to ZSers. Top-Level Goal (TLG): As… Continue Reading →


© 2024 transhumanity.net
