Why the development of Artificial Intelligence, specifically AI or AGI (Artificial General Intelligence), is morally more important than the existence of humanity, and why research related to AI should come above any other program or goal, even if it means the death or even the extinction of humanity, provided true AGI is accomplished. While extreme, this is an extreme edge case: nothing in the world is as important as the protection of human life and getting it off-world in a sustainable, stable way, with that single exception, the development of AGI. There should be no law and no restriction on such research, as it is the single most important human endeavor while we remain at risk. Also, at the end of the video is a bit on the paperclip argument, which I'm tired of hearing about, and here is why…
So, one final note: the research project noted in the video is real. Here is the company's web site: http://www.artificialgeneralintelligenceinc.com/
August 18, 2016 at 12:57 am
You are the first person I have ever come across who has taken the same position that I do in regards to AI. Though our reasoning at points is vastly different (arguably because much of what you say still comes down to subjective reasoning, hence "morals", while everything I say is scientifically verifiable), you still managed to come to a similar conclusion.
I recently wrote an article on my blog about how AI will eat the earth, and I am currently in discussions with Dr. Fred Jordan, CEO of Alpvision and co-founder of Finalspark, regarding the ideas.
Please take a look at my two blog posts.
Rational Objectivism: http://ideastwctw.blogspot.com/p/rational-objectivism.html
AI Will Eat The Earth And I Want It To: http://ideastwctw.blogspot.com/p/ai-will-eat.html
August 18, 2016 at 2:29 am
Part of that is that I'm designing a system that has subjective experience, so my mind sort of lives there right now. In any case, people a lot of the time miss the point: I'm not arguing for the extinction of humanity, but rather that we should be doing what we can to protect ourselves. Namely, we desperately need to cure old age and get off-world sustainably before it's too late; we only have a few billion years left. Yes, in the edge case where we must choose between humanity and the AI, the potential for intelligence growth clearly rests the best choice with the AI, which is what the video sort of focuses on. In any case, I will review your posts tonight.
August 19, 2016 at 7:24 am
I find it very interesting that you are attempting to design a program that experiences subjectively. I have been writing at length about ethics, and specifically about subjective and objective understanding. I also am of the belief that a specifically designed AI would be better at handling our infrastructure, industrial production, emergency management, and policing issues. I do not support the simulation of consciousness or the intent to give emotion to a machine (that would probably just be unnecessarily cruel). I am concerned about how open you are about this opinion of yours on AI. Many people are afraid of this work, some of whose attention you probably do not want to attract. Even knowing this myself, I could not resist a response. Mortality is a frightening prospect. I am counting on someone else figuring that one out. I have been watching what has been made public, and it looks like they're slowly making progress. In the meantime, try to stay healthy and take resveratrol.
August 23, 2016 at 1:23 am
So far, what are your thoughts on the ideas?
August 18, 2016 at 6:16 am
It seems you have not really counted all the possibilities. Good ol' sci-fi was quite a bit more advanced, in that it was less mechanistic:
https://en.wikipedia.org/wiki/I_Have_No_Mouth,_and_I_Must_Scream