So you want to build your own Artificial General Intelligence (AGI)? Well then, you’ve come to the right place! Obviously I’m being rather glib, but let’s take a very quick tour of the essential elements you’d need to create a software agent with a broad enough range of human-like functionality for us to recognize it as something like ourselves, rather than one whose proficiencies are limited to narrow domains or aspects of human-like behaviour (i.e. AGI, rather than narrow AI).
First things first: any such system requires a “top-level goal” (TLG) or purpose, regardless of whether that goal is explicitly represented within the software or is more of an implicit, contextual thing. For example, human beings do not have an explicit TLG written somewhere in their physiology, but it is clear that our implicit TLG is to survive long enough to reproduce (i.e. to survive as a species, on the longer evolutionary time-scale). For humans, implicit lower-level goals (i.e. those which “serve” the TLG) include our need to satisfy hunger, stay warm, escape predators, engage in social behaviours, seek a certain degree of novelty, and so on. Investigations of so-called “Friendly AI” often centre on the question of TLGs, because an AI whose TLG takes no account of human safety could end up endangering humans in order to satisfy that goal.
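To make the explicit/implicit distinction concrete, here is a minimal sketch of what an explicitly represented TLG served by a handful of sub-goals might look like. The names (Goal, top_level_goal) and the structure are purely illustrative assumptions; nothing about the argument above prescribes this design.

```python
# A minimal sketch of an explicit goal hierarchy, assuming a hypothetical
# agent design in which the TLG and its sub-goals are represented directly
# as data. Illustrative only, not a reference implementation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Goal:
    name: str
    # 0.0 = completely unsatisfied, 1.0 = fully satisfied
    satisfaction: float = 0.0
    sub_goals: List["Goal"] = field(default_factory=list)


# An explicit top-level goal, served by sub-goals analogous to the human
# implicit ones mentioned above.
top_level_goal = Goal(
    name="persist",
    sub_goals=[
        Goal(name="maintain energy supply"),
        Goal(name="avoid damage"),
        Goal(name="seek novelty"),
    ],
)

if __name__ == "__main__":
    for g in top_level_goal.sub_goals:
        print(f"sub-goal '{g.name}' serves TLG '{top_level_goal.name}'")
```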
Next, we come to the fundamental insight of cybernetics, which is that all living things instantiate at least one goal-based feedback loop. All biological organisms together comprise a subset of the group of cybernetic organisms, which is to say organisms whose structure and behaviour are based on perceptuo-behavioural feedback loops. In other words, the organism (1) perceives the world/environment as being in a state which to some degree matches (or not) its goal state, and (2) manipulates some aspect of that environment in order to bring it into closer alignment with the TLG. The altered state of the world is then perceived and assessed… and around and around we go in a goal-seeking loop (which in a dynamic world will be constant as long as the organism continues to exist). Clearly, in order to do these things your AGI must have both a perceptual apparatus of some sort, and an effector mechanism capable of manipulating the environment in ways relevant to the TLG. Exploratory learning algorithms of the sort studied in machine learning (reinforcement learners, or theoretical constructs such as AIXI) fall into the category of “effector mechanisms” for our purposes here.
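As a toy illustration of that loop, the sketch below assumes a one-dimensional “world” (say, a temperature the agent tries to hold at a set-point). The perceive, act, and feedback_loop functions are hypothetical stand-ins for a real perceptual apparatus and effector mechanism, not a claim about how any actual AGI would be built.

```python
# A minimal sketch of the cybernetic perceive-compare-act loop, assuming a
# toy one-dimensional world state. All names and the gain value are
# illustrative assumptions.
def perceive(world_state: float) -> float:
    """Perceptual apparatus: here, a noiseless reading of the world."""
    return world_state


def act(world_state: float, error: float, gain: float = 0.5) -> float:
    """Effector mechanism: nudge the world toward the goal state."""
    return world_state + gain * error


def feedback_loop(world_state: float, goal_state: float, steps: int = 10) -> float:
    for _ in range(steps):
        perceived = perceive(world_state)      # (1) perceive the world
        error = goal_state - perceived         # compare against the goal
        world_state = act(world_state, error)  # (2) manipulate the world
    return world_state


if __name__ == "__main__":
    # Starting at 10.0, the loop converges toward the goal state of 21.0.
    print(feedback_loop(world_state=10.0, goal_state=21.0))
```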
From here, things start to get more complicated. Just as humans have the implicit sub-goals mentioned above, your AGI will have a degree of behavioural flexibility that roughly tracks the number of sub-goals which serve its TLG. In nature it is the “higher animals” (e.g. mammals rather than insects) that have the wider range of sub-goals, relatively speaking, and in those animals motivation toward sub-goals is mediated by pleasure/pain responses and emotional states. As Buddhists have long noted, emotions tend to be aroused in connection with goals, with achievement of goal states leading to positive emotional states and frustration leading to negative ones. Any AGI with multiple sub-goals (which is to say any AGI worthy of the name) will also need some kind of “emotional motivation analogue”, which prioritizes the satisfaction of some sub-goals over others, on the basis of which ones are more important or pressing at any given moment. In short, the emotional motivator not only impels the system to act (by making it “uncomfortable” when priority goals remain unfulfilled), but also acts to balance and integrate the demands of multiple sub-goals with potentially opposing demands.
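One way to picture such an emotional motivator, purely as an illustrative assumption, is to give each sub-goal an importance weight, let its “discomfort” grow while it goes unsatisfied, and then act on whichever sub-goal is most uncomfortable at the moment:

```python
# A minimal sketch of an "emotional motivation analogue", assuming sub-goals
# generate discomfort proportional to importance and dissatisfaction. The
# arbitration rule (pick the most uncomfortable goal) is an assumption for
# illustration, not a claim about real organisms or any particular AGI.
from dataclasses import dataclass
from typing import List


@dataclass
class SubGoal:
    name: str
    satisfaction: float  # 0.0 (unsatisfied) .. 1.0 (fully satisfied)
    importance: float    # relative weight of this goal


def discomfort(goal: SubGoal) -> float:
    """Unsatisfied, important goals generate the strongest discomfort."""
    return goal.importance * (1.0 - goal.satisfaction)


def next_goal(goals: List[SubGoal]) -> SubGoal:
    """Act on whichever sub-goal is most pressing right now."""
    return max(goals, key=discomfort)


if __name__ == "__main__":
    goals = [
        SubGoal("recharge", satisfaction=0.2, importance=0.9),
        SubGoal("explore", satisfaction=0.7, importance=0.4),
        SubGoal("self-repair", satisfaction=0.9, importance=0.8),
    ]
    print(next_goal(goals).name)  # -> "recharge"
```

A real arbitration scheme would of course be far richer (goals conflict, and acting on one changes the urgency of the others), but the pattern of discomfort-driven prioritization is the point here.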
It is interesting to note at this point that metarepresentational systems – i.e. systems which model the activity of other systems – are both required to make such a complex regulatory system work, and often considered to be the basis of reflective consciousness or self-awareness. It may be the case that by creating an Artificial General Intelligence beyond a certain degree of sub-goal complexity, you are also by necessity creating an Artificial Consciousness, to some degree aware of its own internal states.
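For what it is worth, a metarepresentational layer need not be exotic in code. The sketch below (an illustrative assumption, certainly not a theory of consciousness) shows a component that pursues no goals itself but keeps a running model of the motivational system’s recent activity:

```python
# A minimal sketch of a metarepresentational layer: a component that models
# the activity of the goal system beneath it (here, a running summary of
# recent discomfort levels). Hypothetical names and structure throughout.
from collections import deque
from typing import Deque, Dict


class SelfModel:
    def __init__(self, window: int = 5) -> None:
        # Keep only the most recent snapshots of the lower system's state.
        self.history: Deque[Dict[str, float]] = deque(maxlen=window)

    def observe(self, discomfort_by_goal: Dict[str, float]) -> None:
        """Record a snapshot of the motivational system's internal state."""
        self.history.append(dict(discomfort_by_goal))

    def report(self) -> str:
        """A crude 'self-awareness' readout: which goal has dominated lately?"""
        totals: Dict[str, float] = {}
        for snapshot in self.history:
            for name, value in snapshot.items():
                totals[name] = totals.get(name, 0.0) + value
        if not totals:
            return "no internal activity observed yet"
        dominant = max(totals, key=totals.get)
        return f"lately I have mostly been driven by '{dominant}'"
```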
Clearly, such an “ecosystem” of dynamic sub-goal demands and interacting behavioural loops will rapidly give rise to a complex and often unpredictable AGI. We have already noted one element which helps to moderate that complexity and act as a “rudder” for the system as a whole: the TLG. We might think of the TLG as the basis of a “top-down”, executive process for anchoring the system’s behaviour, but one final element is required which is also common to all complex organisms. The missing ingredient is an inherent sense of the boundary between the organism itself and its environment, which is to say the AGI and everything not-AGI. That boundary is critical to the cybernetic feedback loop at the heart of the system, in that it allows the system to assess the state of the organism (i.e. itself) relative to its goals and the surrounding environment. That may sound like a trivial distinction, but evolution tends to select strongly for an intuitive sense of one’s own boundaries, quite simply because it helps keep you alive. Furthermore, when an AGI is composed of elements such as algorithms that can propagate across networks, negotiating its environment is going to become very tricky indeed if it cannot distinguish processes that are part of itself from those which are not.
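As a very rough illustration of how a self/non-self boundary might be drawn for software that spreads across a network, the sketch below assumes the agent can sign the processes it spawns and later verify that signature. The key handling, the Process type, and the signing scheme are all hypothetical assumptions made for the sake of the example.

```python
# A minimal sketch of a self/non-self boundary check, assuming the AGI marks
# its own processes with an HMAC signature at creation time. Illustrative only.
import hashlib
import hmac
from dataclasses import dataclass

SELF_KEY = b"agent-secret-key"  # in practice, a properly protected secret


@dataclass
class Process:
    pid: int
    payload: bytes
    signature: bytes


def sign(payload: bytes) -> bytes:
    return hmac.new(SELF_KEY, payload, hashlib.sha256).digest()


def spawn_own_process(pid: int, payload: bytes) -> Process:
    """Processes the agent creates carry its own signature."""
    return Process(pid, payload, sign(payload))


def is_self(proc: Process) -> bool:
    """Boundary test: does this process belong to the agent or the environment?"""
    return hmac.compare_digest(proc.signature, sign(proc.payload))


if __name__ == "__main__":
    mine = spawn_own_process(1, b"explore network segment A")
    alien = Process(2, b"unknown agent code", b"not-my-signature")
    print(is_self(mine), is_self(alien))  # True False
```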
So there you have it: the essential ingredients of your very own AGI! Enjoy and develop responsibly, and try not to destroy civilization with your creations, OK?
This piece was originally posted at http://metric.media