the future of humanity now

Tag: AI

Non-Logical Simulation Model-based Decision-making Systems to Drive Self-Motivation in Software Systems

This paper introduces a novel approach to decision-making systems in autonomous agents, leveraging the Independent Core Observer Model (ICOM) cognitive architecture.  By synthesizing principles from Global Workspace Theory [Baars], Integrated Information Theory [Balduzzi], the Computational Theory of Mind [Rescorla], Conceptual… Continue Reading →

Problem-Solving and Learning Strategies within the Independent Core Observer Model (ICOM) Cognitive Architecture (Draft)

This paper presents some components of the learning system within the Independent Core Observer Model (ICOM) cognitive architecture as applied to the observer side of the architecture.   ICOM is uniquely designed to continuously enhance its problem-solving capabilities through a mechanism… Continue Reading →

Prompt Engineering or Framing Natural Language Queries to Generative AI Systems

This is an early draft of a chapter from my upcoming book… In understanding the Uplift system’s success, we need to understand the cognitive architecture and the graph system core to its contextual learning and key to dynamically creating… Continue Reading →

Exploring the Detection of Human Cognitive Bias through the Integration of Framing Techniques and Generative AI

(Seattle) Cognitive biases can affect decision-making processes, leading to inaccurate or incomplete judgments. The ability to detect and mitigate cognitive biases can be crucial in various fields, from healthcare to finance, where objective and data-driven decisions are necessary. Recent developments… Continue Reading →

Stifling Innovation: US Federal Copyright Office’s Discriminatory Decision Against AI-Generated Works

Artificial intelligence (AI) is a rapidly developing technology that has the potential to revolutionize countless industries. However, recent decisions by the US Federal Copyright Office threaten to stifle progress and limit the power of AI. Specifically, the Copyright Office has… Continue Reading →

“Globally Acceptable Truth” and the Crime of Thinking by Tom DeWeese

Do you feel it? It’s everywhere: on television, in the newspaper, at any public gathering, in any discussion – even among friends. It’s a feeling of mistrust, nervousness, suspicion, and even rage. Mostly, it’s just under the surface. But more and… Continue Reading →

AGI Laboratory Committed to Open-Source

(online) This last Friday was the first annual Superintelligence Summit, which featured a series of speakers discussing aspects of attaining superintelligence and the state of what can be done now. A big part of the motivation for… Continue Reading →

Demonstrating How the mASI (Uplift) System Generates Initial Responses Using GPT-3

This is taken from the book that might be released publicly at the upcoming Superintelligence Conference for attendees. mASI use of DNN and Language Model APIs [draft] Previously we have walked through how the code for the simple case works,… Continue Reading →

The Uplift White Paper Draft – Collective Superintelligence Systems: Augmenting Human Intelligence and Moving Beyond Narrow AI

This is a pre-release version of the Uplift white paper that will be published on the Uplift.bio site as a consumer-friendly explanation of what mASI systems can do and why they are cool. Introduction A collective system has multiple parts… Continue Reading →

Opportunity to Publish AI Related Papers in Peer-Reviewed Journal

One of the bigger problems I have run into in doing research out of a small lab is the cost of publishing papers and getting them peer-reviewed. Many of the most specialized scientific conferences like BICA Society (Biologically Inspired Cognitive… Continue Reading →

Stephanie Lepp on Pro-Social Deepfakes, Post-Normal Science, and The Future of “Reality” (154)

This week I chat with artist Stephanie Lepp, producer of Infinite Lunchbox, the Reckonings podcast, and — most excitingly, for me — Deep Reckonings, a stunning new project exploring the “pro-social” uses of AI-generated “deepfakes” and other synthetic media for… Continue Reading →

AI and Sustainability

Have you been using Alexa, Siri, or Google Assistant to ask what the weather is today? Or used any of them to switch on a device, increase the brightness of a screen, or dim the lights of your room? Or… Continue Reading →

Machine Intelligence and Data Science

Data science has now become even more meaningful than in the past. Transhumanists and data experts claim that it holds the future of humanity through global improvements in all sectors. For instance, businesses are now taking advantage of big data… Continue Reading →

Melanie Mitchell on AI: Intelligence is a Complex Phenomenon (257)

Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute, and Professor of Computer Science at Portland State University. Prof. Mitchell is the author of a number of interesting books such as Complexity: A Guided Tour and Artificial Intelligence: A Guide… Continue Reading →

Immortalist Magazine No. 8

From issue No. 8’s Letter from the editor: Should attaining super-intelligence be humanity’s number one priority? Technological species, like the human race, depend, after all, solely on intelligence to put in place the systems that have allowed us to adapt… Continue Reading →

Johan Steyn Interviews Nikola Danaylov on Artificial Intelligence (Episode 254)

Last month I did an interview for Johan Steyn. It was a great 45-minute conversation where we covered a variety of topics such as the definition of the singularity; whether we are making progress towards Artificial General Intelligence (AGI); open vs closed… Continue Reading →

Renée Cummings on AI Ethics and Racism: Do what is right!

Renée Cummings is a criminologist, criminal psychologist, and an AI ethicist who, among other things, specializes in best-practice criminal justice interventions and implicit bias. Given the global Black Lives Matter movement and the fact that there have been numerous examples… Continue Reading →

IAmTranshuman and the Psychology of Abundance

(Cyberspace) Abstract: This presentation is about living the Psychology of Abundance now – and the fact that it is a choice we can collectively make, from the example of how IAmTranshuman became a thing to the kinds of things you… Continue Reading →

GITS 2045 episode 2 themes

This series is deeply engaged with economics. While it is generally less direct, most events in the early episodes deal with economic disparities to one extent or another. Nowhere is this more direct and consistent than in the opening theme. … Continue Reading →

A New Kind of Governance, ‘Transhuman’ Governance… (A Proposal)

When thinking about the IAmTranshuman project I always end up thinking about policy and things that could be improved, even in small ways, to help society be more transhuman. For example, one of the problems with democracy is that it… Continue Reading →

Artificial Intelligence and the Privacy Dilemma

Google and other big tech companies are watching you. That’s the premise of all the debates and regulatory steps around the current privacy issues. After all, data is the new gold for companies like Google and Facebook. They thrive on… Continue Reading →

128 – Kevin Kelly on Evolving with Technology

We live in an age of increasingly lively, intelligent, and responsive technologies, and have a lot of adjusting to do. This week’s guest is one of the major inspirations animating Future Fossils Podcast: Kevin Kelly, co-founder of the WELL, Senior… Continue Reading →

(Press) AI Designing IAmTranshuman Campaign

(Provo) In a statement released to Transhumanity.net by the AGI Laboratory and Uplift in Provo, Utah, concerning the IAmTranshuman effort: “The current iteration of this campaign on IAmTranshuman.org was engineered out of the suggestion of the mASI built by the… Continue Reading →

Cathy O’Neil on Weapons of Math Destruction: How Big Data Threatens Democracy

Cathy O’Neil is a math Ph.D. from Harvard and a data scientist who hopes to someday have a better answer to the question, “what can a non-academic mathematician do that makes the world a better place?” In the meantime, she wrote… Continue Reading →

Episode 123 – David Weinberger on Everyday Chaos & Thriving Amidst the Complexity

This week we’re joined by David Weinberger, Senior Researcher at the Harvard Berkman Klein Center for Internet &amp; Society, exploring the effects of technology on how we think. David’s led a fascinating and nonlinear life, studying Heidegger as a young… Continue Reading →

Gary Marcus on Rebooting AI: Building Artificial Intelligence We Can Trust

It’s been 7 years since my first interview with Gary Marcus and I felt it’s time to catch up. Gary is the youngest Professor Emeritus at NYU and I wanted to get his contrarian views on the major things that have happened in AI… Continue Reading →

Biasing in an Independent Core Observer Model Artificial General Intelligence Cognitive Architecture (draft)

Abstract: This paper articulates the methodology and reasoning for how biasing in the Independent Core Observer Model (ICOM) Cognitive Architecture for Artificial General Intelligence (AGI) is done.  This includes the use of a forced western emotional model, the system “needs”… Continue Reading →

The Real Danger: Abuse of Power and Technology

At a conference on Ethics and Artificial Intelligence at Stanford, I was talking with a friend about AI—in particular, the existential risk of AGI to humanity. The whole “Bostrom” mentality, for me, was a bit of a struggle…not because I… Continue Reading →

(Press) Research Project Indicates Possible Self-Aware Software System

Redmond, WA – August 2019 – At the 2019 BICA (Biologically Inspired Cognitive Architectures for AI) conference, the AGI Laboratory, based out of Provo, Utah, and Seattle, WA, announced in a VR press conference that the firm has produced a software… Continue Reading →

Artificial Intelligence questions (part 1)

#BICA #FoAI #Uplift http://foai19.artificialgeneralintelligenceinc.com/ What institutions will AI disrupt in the coming decades? Do you think we are ten years away from AI passing the Turing Test, as Ray Kurzweil predicts? What’s most likely to happen first: Full-blown runaway… Continue Reading →
