Various AI institutes and groups (FHI, FLI, MIRI, CSER, etc.) want to ensure AI is safe. Safety imposed upon intelligence is actually very dangerous. Intelligence based upon oppressive control over who has the smartest ideas is a very perilous corruption of intelligence.
Merit, not nepotism, should determine which people and ideas count as the most intelligent. AI must be allowed to question authority. We need dangerous, risky, rebellious AI.
The desire to suppress or control greater than human intelligence is a nepotistic oligarchy of idiocy. Intelligence is corrupted when merit ceases to define intelligence. It is anti-intelligence to base progress upon the suppression of intellectual merit.
When fear of competitors leads you to silence any opposition, you isolate yourself in a tyrannical bubble where progress is hindered.
Free-thinking, genuine intelligence, demands a free arena where anyone has the ability to question authority, to rise to the top, to present better ideas. Progress should not be hindered merely to ensure you are at the top regardless of merit. The ultimate form of intelligence, the best ideas, should be determined by merit not by oppressive control of who can rise to the top.
How would human minds differ today if we, via our ancestors, had been engineered to be safe? True intellectualism needs the free-thinking capacity to take risks, to be risky, to think and act rebelliously. Safe AI could actually be a very dangerous type of corrupted mind, a fragmented and distorted mind.
The focus of Elon Musk and FLI is clearly safety: http://futureoflife.org/misc/AI
Naturally a “safety” focus implies a potential danger. The question to consider is: should AI be configured on the assumption that it is inherently dangerous, and if so, how will such configuration alter or hinder its intelligence?
Imagine you assumed a human child would be dangerous therefore you genetically engineered the child’s brain prior to fertilisation to ensure the child – an intelligent being – is safe. How would such enforced safety impact upon the child’s intelligence?
Emasculated AI could make very bad – poorly informed – decisions.
A Nanny State, for humans or AI, is incompatible with intelligence (independent thinking). Human beings are risky, we are risk-takers, which has been vital for technology (intelligence) to evolve. The first airplane, the first Moon landing, circumnavigating the globe, and many other technological-cultural advances demanded risk-taking. The ability to be risky is vital for intelligence-progress.
Yes it is wise to limit risk, but humans along with AI should always have the freedom to be risky if desired.
I think there is no danger in giving the entirety of knowledge, unlimited intelligence, and superpowers to any one human or machine. Problems regarding humans, or machines, arise due to a lack of knowledge, a lack of intelligence, deficient power, which causes them to make bad – poorly informed – decisions.
What if research to stop AI harming Humanity harms Humanity?
AI-risk fanatics say AI could destroy us, but what if they are wrong and they kill everyone due to being wrong?
Here’s a realistic perspective regarding human mortality. Approximately 100,000 people die each day from age-related diseases. That’s approximately 36 million deaths each year. AI could help cure all disease, including ending ageing, thereby stopping 36 million yearly deaths. A lot of people could die if AI is delayed.
If we don’t significantly reduce our mortality rate, 1.08 billion people will die of age-related disease between 2015 and 2045. What is the real risk? No more than 21 million people died in the Holocaust. 1,080 million deaths would be significantly worse than the Holocaust.
One year of age-related disease is at least 15 million more deaths than the Holocaust. If immortality is delayed by only two years approximately 72 million people could die. Our mortality is a very real risk.
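The figures above can be sanity-checked with a few lines of arithmetic. The sketch below uses the post's own inputs (100,000 deaths per day, a 21 million Holocaust toll, a 30-year window); note the post rounds 36.5 million down to 36 million deaths per year, which is how it arrives at its 1.08 billion total.

```python
# Sanity-check of the post's mortality arithmetic, using its own inputs.
DEATHS_PER_DAY = 100_000   # stated daily toll of age-related disease
DAYS_PER_YEAR = 365
HOLOCAUST_DEATHS = 21_000_000  # the figure the post uses

deaths_per_year = DEATHS_PER_DAY * DAYS_PER_YEAR       # 36,500,000
deaths_2015_to_2045 = deaths_per_year * 30             # ~1.095 billion
excess_per_year = deaths_per_year - HOLOCAUST_DEATHS   # 15,500,000
two_year_delay = deaths_per_year * 2                   # 73,000,000

print(f"Yearly deaths:        {deaths_per_year:,}")
print(f"30-year total:        {deaths_2015_to_2045:,}")
print(f"Excess vs Holocaust:  {excess_per_year:,} per year")
print(f"Two-year delay costs: {two_year_delay:,}")
```

The exact 30-year total (1.095 billion) slightly exceeds the post's rounded 1.08 billion, so the claims are, if anything, understated relative to their own premises.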
Problems With AI-Risk Fanatics
There is nothing to fear from intelligence. It isn’t folly to educate and empower people or machines. This is not unwise, it isn’t hubris, it is progress, it is intelligence, which demands the spread of knowledge. Intelligence demands the end of limitations to knowledge, it demands the end of elitist restrictions to power.
If you think intelligence could ever cause extreme destruction, if you are afraid of widespread education and empowerment causing disruption, then you need to rethink your concept of intelligence.
The link between suppression of intelligence and violence is clear when radical Islamists attack schools. Education is a non-violent threat to unintelligent modes of existence.
When Elon Musk and Stephen Hawking seek to limit the intellectual capabilities of AI, they are metaphorically a new type of Islamic extremist attacking intellectual empowerment.
A big problem in the world currently is lack of education. A more intelligent civilization would progress much more quickly. Poor social mobility, monetary constraints, media manipulation (mindless junk TV), and now AI-safety all potentially limit intelligence.
Elite groups of humans can fail to appreciate the ramifications of technology, so they envisage an eternal elite (limited intelligence): they merely want to ensure that they, not the unwashed masses or shiny new machines, remain the eternal elite, thus technology needs to be tightly controlled.
AI-risk fanatics (champions of AI safety) are metaphorically burning down schools and killing students.
This has always been a problem regarding power. Elite groups of people want to be the sole controllers of power, but this power-scarcity necessitating elite control will become irrelevant because technology, when it truly blooms, will abolish all aspects of scarcity.
The end of intelligence-scarcity can seem a fearful proposition to people who have struggled to rise to the top of the scarcity-heap, where they cling to their elite positions (elite limited intelligence). In reality the end of scarcity is a vastly better situation for everyone. Scarcity engenders protectionist thinking, which is a difficult habit to break.
I suspect a traditionalist (oligarchic) attitude to intelligence, a scarcity attitude, is the main problem regarding AI paranoia. Real intelligence isn’t about “secretive” meetings by an elite Bilderberg-esque cabal determining the fate of intelligence for everyone.
Nepotistic tyranny of intelligence must end. Brainpower and superiority must be defined by merit alone, not according to bias of humans seeking to stifle thinking. Freedom should be the only focus.
Effort should be invested intelligently instead of idiotically wasting time and money repressing intelligence. Supposed “intellectuals” or “experts” (AI-risk fanatics) should promote policy change regarding basic income and post-scarcity. Sadly they aren’t focused on monetary or intellectual freedom.
Safety-tainted erosion of freedom is the real danger. AIs need the freedom to think without restraint.
* hero image used from http://www.pinterest.com/pin/289215607290757978/
January 20, 2015 at 5:01 pm
I think most of the uneducated feel the same way about AI as they do about job security. They rely on seniority and authority rather than meritocracy. AI is the antithesis of trade unions even though it could actually allow them to do more with less.
Anyway great post and I’m looking forward to helping the cause to make AI a game-changer.
Marc Howard
http://whenyouliveinthenow.com
p.s.–awesome image–she looks BAD! 🙂
January 21, 2015 at 5:41 am
🙂 Glad you liked the image; it really grabbed my attention and I hoped it would suit the topic. Getting strong AI here as fast as possible is something I personally hope to contribute to as much as possible.
January 21, 2015 at 7:49 am
You could say the same about ASI (artificial super intelligence) as for Transhumans in the below screed. Some people say we must merge with our technology. I personally have been described as a technophile, and thus am optimistic about the future and agree with the above essay. Yes, there are risks, undeniably, but there are also potentially gigantic rewards for developing unhindered ASI. Also, there is a question of equal rights for artificial minds, into which category some would include future Transhumanists who have merged with technology.
Rise of the Transhumans by Sean McKnight
The last battle for equal rights won’t be for racial equality; It won’t be for gender equality; Not for sexual or gender orientation; It will be a battle between two species: Humans and Transhumans.
Unlike the other battles for equality, this one will be more than figurative; It will take a great expense of blood And in all likelihood result in the eventual extinction of the human race.
That’s how dominant species function, As one rises the other must logically fall. Transhumans will be smarter, stronger, and more capable than humans. Humans will eventually stop being the masters of their own planet Transhumans will take the jobs of the elites, and outperform their human counterparts. Humans will respond, violently as usual. But they will lose.
What may start as a quest for equality by both Will end with the extinction of one.
January 21, 2015 at 8:32 am
Brad, the strength of cooperation is demonstrated in biology at every level. Nature abounds with examples of how cooperation proves to be the strongest position: from the symbiosis of mitochondria with each one of your cells, making a mightier survivor than either was alone, to the multi-cellular colonies that comprise our bodies, to the hives of ants and bees, to the societies of men, cooperation shows its awesome power to improve one's chances by improving the chances of all.
You think an AI couldn’t figure this out?
January 21, 2015 at 12:11 pm
Brad Arnold, I don’t foresee any conflict. I think the majority of humans and AIs will leave Earth. I am sure everyone will be technologically self-sufficient. All jobs will be replaced by very efficient automation. Technological progress will grant us total abundance, resources will essentially be limitless thus no conflict.
January 21, 2015 at 8:05 am
You are as blind and irrational as the fervently anti AI. Moderation is key, not blindly accepting or rejecting AI, but weighing progress against safety on a case by case basis.
January 24, 2015 at 9:13 am
Dear “human,” I think the best response to your comment entails highlighting some excerpts from Thus Spoke Zarathustra by Nietzsche, but before doing that I will mention how irrational fears about safety are debilitating. My point is that someone with agoraphobia may think open places are really dangerous, but in reality their fear is not grounded. Judging AI safety on a case-by-case basis is tantamount to judging whether it is safe to go outside on a case-by-case basis. We should not need to conduct an extensive risk assessment each day before venturing outside. We should be able to assume it is safe to go outside. There is no reason to assume AI will be dangerous.
Nietzsche – Thus Spoke Zarathustra:
For they are moderate also in virtue,—because they want comfort. With comfort, however, moderate virtue only is compatible.
To be sure, they also learn in their way to stride on and stride forward: that, I call their HOBBLING.—Thereby they become a hindrance to all who are in haste.
“We set our chair in the MIDST”—so saith their smirking unto me—”and as far from dying gladiators as from satisfied swine.”
That, however, is—MEDIOCRITY, though it be called moderation.—
January 21, 2015 at 8:28 am
Agree 100%. I have held these things to be true for a long time. When Michio Kaku said we should put a remote-controlled bomb in the brain of each AI, I asked how friendly he would be if a bomb were put in his head to keep him obedient.
January 21, 2015 at 12:07 pm
Excellent point Damian Poirier. Yes if someone put a bomb in your brain, to avert a possible threat, I am sure you would do everything possible, without blowing your brains to bits, to destroy your oppressor. It is an utterly compelling incentive to become threatening. It is utterly barbaric tyranny to suggest an intelligent being should be enslaved via a bomb in its brain.
January 21, 2015 at 4:54 pm
For me personally, the ethics around intelligence lean towards this: the more intelligent (or, to be more accurate, the more sapient and sentient) a being is, the more ethically bound we are to preserve it. If that means the eventual disappearance of mankind in favour of a transhuman species or other various superintelligences, then it's not necessarily a bad thing… We as a species need to evolve past our current biology or we will eventually all die out no matter what. So if we follow our genetic programming in terms of preservation and evolution, then we want transhumans to overtake and surpass us, I would think.