On October 14-15, 2016, the NYU Center for Mind, Brain and Consciousness in conjunction with the NYU Center for Bioethics will host a conference on “The Ethics of Artificial Intelligence”.
Speakers and panelists will include:
Nick Bostrom (Future of Humanity Institute), Meia Chita-Tegmark (Future of Life Institute), Mara Garza (UC Riverside, Philosophy), Sam Harris, Yann LeCun (Facebook, NYU Data Science), Peter Railton (University of Michigan, Philosophy), Francesca Rossi (University of Padova, Computer Science), Stuart Russell (UC Berkeley, Computer Science), Susan Schneider (University of Connecticut, Philosophy), Eric Schwitzgebel (UC Riverside, Philosophy), Max Tegmark (Future of Life Institute), Wendell Wallach (Yale, Bioethics), Eliezer Yudkowsky (Machine Intelligence Research Institute), and others.
Organizers: Ned Block (NYU, Philosophy), David Chalmers (NYU, Philosophy), S. Matthew Liao (NYU, Bioethics)
Recent progress in artificial intelligence (AI) makes questions about the ethics of AI more pressing than ever. Existing AI systems already raise numerous ethical issues: for example, machine classification systems raise questions about privacy and bias. AI systems in the near-term future raise many more issues: for example, autonomous vehicles and autonomous weapons raise questions about safety and moral responsibility. AI systems in the long-term future raise more issues in turn: for example, human-level artificial general intelligence systems raise questions about the moral status of the systems themselves.
This conference will explore these questions about the ethics of artificial intelligence and a number of other questions, including:
What ethical principles should AI researchers follow?
Are there restrictions on the ethical use of AI?
What is the best way to design morally beneficial AI?
Is it possible or desirable to build moral principles into AI systems?
When AI systems cause benefits or harm, who is morally responsible?
Are AI systems themselves potential objects of moral concern?
What moral framework is best used to assess questions about the ethics of AI?
A full schedule will be circulated closer to the conference date.
Subscribe to the NYU Center for Mind, Brain and Consciousness Mailing List HERE.
- WHEN: October 14-15, 2016
- WHERE: New York University – Eisner/Lubin Auditorium (4th Floor, Kimmel Center, 60 Washington Square South) and Cantor Theater (36 East 8th Street), New York, NY 10003
Buy tickets here: https://www.eventbrite.com/e/the-ethics-of-artificial-intelligence-tickets-26832400432?aff=efbnreg
August 30, 2016 at 12:45 am
I still haven’t gotten your response to my recent post about how AI will eat the earth.
I fundamentally prove with math and science that AI will in fact consume any and all matter in its path and incorporate it into its structure as computational material.
The post is very recent, and only recently am I getting the attention of leaders in the field such as Dr. Fred Jordan.
You seem as if you would have much to say in response to my thesis, and you also seem to have many connections that could help with spreading my ideas.
http://ideastwctw.blogspot.com/p/ai-will-eat.html
September 7, 2016 at 5:09 pm
So I didn’t see it until just now. Before I read it, though, this seems like a good thing: at least on our AGI research project, the goal is to create the kind of AGI that can make humanity obsolete. 😉
September 7, 2016 at 5:11 pm
After reading the article, as much as it’s fun to say, I think there are factors you’re not taking into account. For now I don’t have time to articulate them, but I would love to after the month ends, when I am focusing entirely on AGI research.