I wish Ray Kurzweil would cease his waffling about a supposed “double-edged sword.” It’s a silly notion he utters in almost every interview.
Whoops, I’ve sliced my fingers off on the sword-edge of my keyboard while typing this.
Technology isn’t a double-edged sword. At worst it is a 99% good sword with a tiny 1% bad edge. Double-edged implies a 50/50 split between harm and good.
How often do you cut your fingers when using a sharp knife? I rarely cut mine. Perhaps on one occasion in every hundred I will cut a finger when using a knife, and even that is generous pandering to Ray’s unfounded fears regarding possible threats from technology. In actuality, from my viewpoint, the bad from technology is considerably less than 1%.
Statistically there are fatal car crashes happening all the time, but the chance of you being in a fatal crash is very slim. Existential risk is certainly not a double-edged fifty-fifty scenario. This very low car-crash risk is the same for all technology; furthermore, the risk is continually being reduced because technology is improving. Technology is becoming smarter.
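To put a rough, back-of-envelope figure on that (the exact numbers vary by year and country): roughly 33,000 US road deaths a year against a population of around 320 million works out to an annual risk in the region of one in ten thousand for any given person, around 0.01%, which is nowhere near a fifty-fifty edge.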
What will the supposed double edge be when self-driving cars drastically reduce traffic fatalities? There will be no double edge, because risk is not increasing; risk is decreasing.
Ray said (5 Oct 2015): “Technology has been a double-edged sword since fire kept us warm but burned down our villages.”
The reference to villages burning down highlights the anachronistic unreality of Ray’s views applied to modern life.
How many times have you burned down your village? Have you ever burned down your house? House fires happen, but they’re very uncommon in the circles I move in. I’ve never met anyone who suffered their house or village burning down. Perhaps I move in smarter circles? Maybe Ray burns down his village every week?
The idea of villages burning down is the essence of bogus AI fear, or synbio-biotech hysteria. No village is going to burn down. Yes, Ray is being metaphorical, but neither will there be a new world war or terrorist destruction of the world, contrary to the fear and doom peddled by Alex Jones and Max Keiser types.
Ray’s double-edged sword refers to bio-terrorists modifying a flu virus to create a WMD. Perhaps it is similar to the Swine Flu hysteria of a few years ago, when everyone was supposedly going to die, or the Ebola hysteria, when people assumed the end of the world was coming?
Terrorists are utterly insignificant compared to roughly half a million yearly cancer deaths in the USA. In the UK, 161,823 people died from cancer in 2012, according to Cancer Research UK. You are more likely to be killed by lightning, or to win the lottery, than to die from terrorism.
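To make the comparison concrete, as a rough back-of-envelope calculation: 161,823 UK cancer deaths in 2012 works out to around 440 deaths every single day, whereas UK deaths from terrorism in a typical recent year have been zero or in the low single digits. The exact terrorism figure varies by year, but the gulf between the two is several orders of magnitude.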
While Ray does get many things right, he is far from perfect. Perhaps his imperfection is something he subconsciously desires. He often states he is not a utopian. The power of the Self-Fulfilling Prophecy should not be underestimated.
Ray previously stated biology is now an information technology, growing exponentially; furthermore, on at least one occasion, Ray stated scarcity causes conflict. Ray admits scarcity is being eroded regarding info-tech. The contradiction in Ray’s thinking is clear: if biology is info-tech growing exponentially, reducing scarcity and thereby reducing conflict, why does he think “bio-terror” is an increased risk?
Ray said in March 2012: “I’ve actually grown up with a history of scarcity – and wars and conflict come from scarcity – but information is quite the opposite of that.” Ray stated in Oct 2005: “We are making exponential progress in every type of information technology. Moreover, virtually all technologies are becoming information technologies.” Similarly in July 2009 Ray stated “…biology is now an information technology.”
Think about what Ray stated on those various occasions.
Scarcity causes conflict, which Ray grew up with, whereas information is rapidly moving away from scarcity. Biology is an information technology, thus subject to the same scarcity-reducing exponential growth. It seems clear Ray must, at least subconsciously, see how bio-terror is a decreasing, NOT an increasing, risk.
For years Ray warned us about bio-terror, but the supposed bio-terrorists haven’t risen to his challenge. Perhaps one day Ray’s Self-Fulfilling Prophecy will win through; then he will get the bio-terror he seems to yearn for. In the meantime I intelligently focus on utopia.
Ray was my initial source of inspiration, but these days I think Eric Schmidt is a better source of inspiration, a fountain of wisdom, very rational, because Eric admits to being a “utopian.”
Let’s face our dawning utopian reality. There is no threat from village-burning keyboards, or display screens cutting off our fingers. Similarly, fear of unfriendly AI, or of bio-terror, is comparable to historic anti-train propaganda, which stated locomotive travel at the high speed of 20mph could cause people to disintegrate or asphyxiate, along with blighting crops. Historically, regarding train travel at 50mph, it was stated women would suffer their uteruses flying out of their bodies.
The main risk, likely to disastrously threaten our existence, is the painful absurdity of Kurzweil, MIRI, FLI, Hawking, or other fearful scaremongers.
Scarcity of intelligence is the only problem. Limits upon information-technology are the only danger. Absurd critics sadly link rapid growth in biology or AI to great peril.
The first edge of their absurdity relates to the approximately 100,000 daily deaths from age-related disease. Sufficiently advanced AI will certainly cure all disease, along with ending aging. Irrational fears could delay AI progress in medicine, which will likely cause needless deaths. The second edge of their absurdity could easily kill our will to live via the mind-numbing agony of their unfounded beliefs.
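For scale, simple arithmetic on that figure: 100,000 deaths a day from age-related disease amounts to roughly 36.5 million deaths a year, a toll that dwarfs any harm the scaremongers can plausibly point to.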
* infographic from https://www.domo.com/blog/2014/04/data-never-sleeps-2-0/
* hero image from http://digital-art-gallery.com/picture/7276
October 20, 2015 at 10:28 pm
You can’t really throw numbers around like 99% when we actually have no idea how technology will progress over the next 30 years. We are dealing with experimental technologies that really do carry actual existential risks, and there’s no way to know how they will develop and/or affect society. There are known unknowns, and unknown unknowns.
So it’s more sensible to err on the side of caution than it is to blindly wade into unregulated development. Caution is healthy & should be encouraged.
And double-edged just implies that it can cut both ways, not that it’s a zero-sum game…
October 21, 2015 at 6:42 am
What does ‘caution’ mean? When 400 people a day are dying of cancer in one small country alone, is it ‘cautious’ to hold back the development of a technology that could help us to find solutions? If so, how can you measure it to reach the conclusion that such caution is ‘sensible’?
October 21, 2015 at 5:12 pm
I think 99% positivity is vastly more accurate than a double-edged sword cutting both ways.
Two edges implies a 50/50 split, whereas, in reality, harm from technology is rather rare. The reference to Donald Rumsfeld’s (?) “known unknowns, and unknown unknowns” does not make a logical case for technological fear.
I think we have every idea, a very clear view, of how intelligence, either human or artificial, reshapes society. We see clearly how the fruits of knowledge, technological progress, are very beneficial.
The blindness is not regarding the vital need for free thought, free intelligence, and a lack of regulation of AI minds. The blindness is regarding the irrational notion that hyper-bright AI minds could possibly be inherently, wholesale, dangerous.
Blind, unfounded caution is not actually healthy. Caution should only be applied when logically relevant. Unwarranted caution can in fact be mental illness.
October 21, 2015 at 11:46 pm
> Two edges implies a 50/50 split
No, it doesn’t, at all.
> harm from technology is rather rare.
Tell that to everyone that’s been shot or bombed.
> The reference to Donald Rumsfeld’s (?) “known unknowns, and unknown unknowns” does not make a logical case for technological fear.
Erm – it absolutely makes a very good case for caution. In fact it’s the perfect case for caution. There are so many variables, and we have not done anything like that ever before. The result of ASI development (assuming it’s even possible) is the biggest mystery we have ever encountered. There is no map, no precedent, no advice, no safeguards. We are going in absolutely blind.
> I think we have every idea, a very clear view, of how intelligence, either human or artificial, reshapes society. We see clearly how the fruits of knowledge, technological progress, are very beneficial.
Obviously technology is an amazing tool; it’s our most valuable tool. To imply that I’m stating that technology should not be developed at all is a reductio ad absurdum. My point was, caution should be exercised when dealing with potentially very powerful technology about which we know absolutely nothing.
> The blindness is not regarding … a lack of regulation regarding AI minds.
This is sheer madness. You can’t let people just go nuts with ASI. For one thing, you’re creating a (theoretical) sentience, which would have rights. Secondly, an ASI could play with us like we play with mice. And no, we can’t create a ‘prison’ for an ASI, any more than a mouse can create a prison for humans.
> The blindness is regarding the irrational notion that hyper-bright AI minds could possibly be inherently, wholesale, dangerous.
And you’re in possession of some knowledge that nobody else is? You know for a fact that an ASI would be inherently benevolent? I think you have an extremely over-inflated sense of your own precognitive ability. We have absolutely no way of knowing anything about the nature of an ASI. It is all just speculation. We will be venturing into the ultimate unknown territory, and all bets are off. If you’ve convinced yourself that we’re going to create some awesome Utopian perfect existence & walk with Gods on a daily basis, & there is zero risk of anything going wrong, that’s absolutely your prerogative. Just please stay away from any positions where you might be influential to technological progress. You’re a liability.
> Unwarranted caution can in fact be mental illness.
The absence of caution is a MUCH more dangerous form of mental illness.
October 22, 2015 at 12:25 am
Yes, there are asymmetric blade shapes, but swords typically have two edges of equal length. It seems a 50/50 split is implied. Even with a scimitar the edges are very close to 50/50, perhaps 55/45 at worst, which is not representative of the very low risk from all technology.
Despite people being shot or bombed (I have never met such people in my personal life, thus I think they are rare, a rarity backed by statistics), the risk of war is relatively slim, very rare. Clean running water, heated homes, modern farming, electricity, modern medicine, and all the other benefits of technology vastly outweigh the relatively minor risks. Terrorism is an utterly insignificant risk.
LOL “This is sheer madness. You can’t let people just go nuts with ASI.”
Why not? I have a Country of the Blind outlook regarding “nuts”; thus I would say it is sheer madness to prohibit people from going “nuts” with ASI. The real insanity is to fear or delay super-intelligence.
Seriously, it is very easy to deduce what super-intelligence will entail. It is very basic logic regarding intellectualism. Gods are morons, deluded moronic concepts, thus I certainly don’t intend to “walk with Gods on a daily basis.” In fact I already do that when amid the rabble, the hoi polloi, of everyday life in the year 2015.
The people who fear intelligence are the real liability. I really hope I will influence technological progress. My pen (keyboard) is mightier than Ray’s absurd double-edged sword.
October 21, 2015 at 6:38 am
Persuasive.
October 21, 2015 at 3:02 pm
Religious fanatics won’t care much if scarcity is lessened; Bin Laden was rich, after all.
October 30, 2015 at 10:51 pm
Religion is a symptom of scarcity. Scarcity of indefinite lifespans, the lack of immortality, can entail feelings of powerlessness. Immense fear of death is not uncommon. Fear of death causes some people to engage in magical thinking, delusions regarding an afterlife, religion.
Deluded thinking will be obsolete when AI has sufficiently increased intelligence; when AI has sufficiently reduced scarcity, which among other things entails the abolition of death.
Being rich or being religious does not make a person super-intelligent or immortal in the year 2015, which means in our world of scarce intelligence some humans can act in a stupid manner.
The smartest or richest person, in a world of scarcity, is not immune to the pressures and pitfalls of scarcity. Rich people do have fewer worries, but they must nevertheless lock everything to ensure it is not stolen; furthermore, having fewer worries does not extract a person entirely from a world of worry, a world of scarcity.
Some people, rich or poor, will become unbalanced by the pressures of living in a world of scarcity, which can manifest as criminality, religion, or other maladies.
October 21, 2015 at 10:59 pm
“How many times have you burned down your village? Have you ever burned down your house? House fires happen, but they’re very uncommon in the circles I move in. I’ve never met anyone who suffered their house or village burning down. Perhaps I move in smarter circles? Maybe Ray burns down his village every week?”
That’s true, but how many times have we burned down the planet? Isn’t that how fire’s double edge is now manifested?
Things are and probably will continue to get better, yet at the same time, there are dangers.
October 21, 2015 at 11:33 pm
While industry and overpopulation put pressure on our planet, climate change is not a threat, because long before it becomes an issue we will have the technology to reverse any harm.
I think the majority of people will live off-planet, self-sufficiently, no later than 2045.
Our planet shows every sign of being very resilient despite the pressures of primitive technology; furthermore, we are rapidly moving, or soon will move, away from environmentally harmful technology.
I don’t see any noteworthy dangers. The danger is trivial, insignificant.
The dangers are slight and becoming slighter all the time. We will reach a point where there is zero danger from technology. Already we are very far from the supposed double-edged threat.
October 22, 2015 at 7:01 am
Sigh. Being a double-edged sword means technology will be used by us as we choose. We can choose to use it for good or ill. The author seems to studiously miss this point.
Going from this dubious entry point to amateur psychoanalysis of Ray Kurzweil cements this being an utterly worthless piece.
Ray is one of the great proponents of anti-aging and of an extremely optimistic future. He is usually attacked for his high level of optimism, not for using a common phrase that is in fact quite true, this side of AGI making the decisions on how to use technology.
October 30, 2015 at 10:37 pm
Ah… often I sigh too, Samantha. Sadly, my sighs are all too frequent. I am now sighing and shaking my head.
I am well aware of the double-edged idiom, the phrase.
You wrote, regarding the meaning of a double edge to technology: “We can choose to use it for good or ill.” I wonder, though: regarding using it for good or ill, does that mean the potential is balanced on a knife-edge, or a sword edge perhaps?
My point concerns the ease of choosing good or ill: is there a 50/50 split of opportunity versus peril across the two edges? Does the idea of two choices (“we can choose”), in the manner you stated, mean the risk and benefit are equal?
I suppose everyone could choose to kill themselves tomorrow. You could say the choice between suicide and living is a double-edged situation. The many reasons to sigh could be a compelling reason to state suicide is more likely, but looking at the evidence (a painful lack of understanding in our world), we see it does not entail everyone committing suicide when confronted with painfully unbearable situations. I think it is wrong to state there is a double edge regarding the potential for living or suicide. We must consider the key piece of evidence, which is the will to live generally programmed into our DNA.
The problem with Kurzweil is he doesn’t consider all the evidence, which by way of analogy means he thinks people are as likely to kill themselves as to carry on living, merely because we have the potential to do either.
In the real world, in actuality, how often or how likely is it for technology to be used for ill? I think the evidence points to technology being massively oriented towards benefit instead of ill. The foundation of intelligence means technology is better suited to positively benefiting our world. Our ability to have clean running water or electricity, easily accessed in our homes, is not an accidental in-the-balance situation. The evidence is that electricity and water on tap cause minimal (insignificant) harm.
Technology generally, due to its intelligent foundations, is not designed to cause harm. The harm factor is so low it can be dismissed, especially because intelligence is continually increasing. The only harm in our world arises from stupidity (a lack of technology), not intelligence.
Benefit from technology is hard-wired, similar to how humans are generally hard-wired to refrain from suicide, and similar to how water or electricity are rarely used to cause harm.
Accidents or ill-will are not very common compared to the tangible, massive benefits of technology. Computing and the Internet show the same massive weighting on the side of benefit.
I think my insights into Kurzweil’s mind should not be dismissed as “amateur psychoanalysis.” I highlighted, with evidence, a logical contradiction in Kurzweil’s thinking: scarcity causes conflict, information is exponentially moving away from scarcity, and bio-tech is now info-tech, yet despite recognising all this he thinks bio-terror is an increasing threat. Considering this evident illogic by Kurzweil, it is valid, either in a professional or amateur capacity, to speculate upon his psychological obsession with bio-terrorism.
It is also interesting that the metaphor of a “sword” is used to describe technology. Is a weapon really the best comparative tool to express the nature of technology? Yes, in a world of scarcity we are fighting daily battles, often via incisive words, but in reality our everyday life is far from being a weaponised double edge, despite the wielding of cutting words.
I recognise how Ray opposes ageing, and how he wants to see intelligence explode, but really he is outdated in 2015. Eric Schmidt is a vastly better modern model these days because Eric admits, unlike Ray, to being a utopian.