Those who have grown up with open source over the past 20 years know how popular it is. It's popular for a number of reasons: it fosters innovation, speeds up delivery, and helps us all collectively learn from each other.

We at the AGI Lab have simply assumed this was a good thing. We believe that open-source research helps everyone. Many groups in AGI research are already open sourcing their work, including OpenCog, OpenNARS, and others.

From an ethical standpoint, we use a system called SSIVA Theory to teach ethics to the systems we work on, such as Uplift. Accordingly, we assumed we should release some of our code (which we have done here on this blog and in papers), and we planned on open sourcing a version of the mASI, the collective system we work on that uses an AGI cognitive architecture.

From an SSIVA standpoint, you could even argue that the most ethical course of action is to achieve AGI as quickly as possible. But is that correct? And if not, why not?

We have recently been talking with members of the Machine Intelligence Research Institute (MIRI) who say this is a bad thing. But is it? And why? Can't collective superintelligent systems contain human-level AGI?

We are putting on a conference to help decide, but so far we have not found someone to advocate for MIRI's position. At the conference, we hope to let someone from each side of the issue make their case and then vote on whether open sourcing is a good thing; at the end of the conference, we will either start open-sourcing our research or not.

In particular, we want to open-source parts of this project, "Uplift."

Let us know if you would like to help either way.

Our Collective Superintelligence Conference is on June 4th.

As a side note, many other groups are already open sourcing AGI research code (some of which already work as toy AGIs); some of them are listed here:


If you know of others, let us know.