There are those of us who philosophize and debate the finer points surrounding the dangers of artificial intelligence. And then there are those who dare to go into the trenches and get their hands dirty by doing the actual work that may just end up making the difference. So if AI turns out to be like the Terminator, then Prof. Roman Yampolskiy may turn out to be like John Connor – but better. Because instead of fighting with guns and brawn, he is utilizing computer science, human intelligence, and code. Whether that turns out to be the case, and whether Yampolskiy will be successful, remains to be seen. But at this point, I was very happy to have Roman back on my podcast for our third interview. [See his previous interviews here and here.]

During our 80-minute conversation with Prof. Yampolskiy, we cover a variety of interesting topics such as: AI in the media; why we're all living in our own bubbles; the rise of interest in AI safety and ethics; the possibility of a 9/11-type AI event; why progress is at best making "safer" AI rather than "safe" AI; human and artificial stupidity; the need for an AI emergency task force; machine vs. deep learning; technology and AI as a magnifying mirror; technological unemployment; his latest book Artificial Intelligence Safety and Security.

As always, you can listen to or download the audio file above, or scroll down and watch the video interview in full. To show your support, you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

Read more and listen to the podcast here:


https://www.singularityweblog.com/artificial-intelligence-safety-and-security/