Abstract. We constantly hear warnings about super-powerful super-intelligences whose interests, or even indifference, might exterminate humanity. The current reality, however, is that humanity is already dominated and whipsawed by unintelligent (and unfeeling) governance and social structures and mechanisms initially developed in order to better our lives. Far too many of society's complex yet ultimately simplistic algorithmic systems fit the pattern: “the incentives for this system are a pretty good approximation of what we actually want, so the system produces good results until it gets powerful, at which point it gets terrible results.” We now live in a world where constant short-sighted and selfish local “optimizations,” without overriding “moral” or compassionate guidance, have turned too many of our systems from liberators into oppressors. Thus, it seems likely that a collaborative process of iteratively defining and developing conscious and compassionate artificial entities with human-level general intelligence that self-identify as social and moral entities is our last, best chance of clarifying our path to saving ourselves.
The full paper will appear in the 2019 AAAI Symposia at Stanford University.
See https://rabc.solutions/ for more information
[1] We are already dealing with artificial decision networks – corporations.
And the first advanced and capable intelligences will be owned by corporations and military organizations. Our current system of law, such as it is, has evolved to constrain the actions of individuals and organizations. If these controls are not adequate for their intended (pre-AI) purposes, we are already in trouble. My point here is that we should expect AGI to be no more constrained than the organizations that “own” it.
[2] To be dangerous, an automaton only needs capability; it does not even need autonomy or intelligence. Example: a land-mine. To create an existential threat, an entity (human or otherwise) only needs the means to initiate an exponential process. Example: grey goo in the ocean. My point is that the threshold of capability, autonomy, and intelligence required within our current technological context is terrifyingly low. A grad student working with too little supervision and too little sleep could easily trigger a major catastrophe unintentionally.