What Is Human Cognitive Bias?
When humans make intuitive judgments, they often make errors that don’t seem to make much sense. The errors matter because they’re systematic, occurring again and again under similar circumstances. For example, people tend to be overconfident. They don’t weight advice from others appropriately. They believe in unreal things like “winning streaks” in games of chance. In short, we humans are frequently not nearly as clever as we imagine ourselves to be. Psychologists studying this phenomenon refer to such systematic tendencies toward error as biases.
Why Did Biases Evolve, and Why Does It Matter?
It seems strange that evolution would favor a systematically biased animal. How could such a trait have survival value? Well, as it turns out, the answer is one that is familiar to AI developers. Nature essentially crafted our minds to do as much as possible with as few resources as possible, including time. Rather than optimize for complex explicit algorithms and exact rationality in our thinking, the evolutionary process bequeathed us minds that specialize in simple rules of thumb, and a tendency to make fast, ballpark guesses under circumstances like the ones we evolved in. Take a human out of their evolutionary environment (i.e. the African savannah) and ask them to solve problems their brain did not evolve to solve, and things can go wrong. This matters, of course, because we live in a world that is far more complex and rapidly developing than ever before, more dangerous than ever before by some measures, and yet humans – in all our glorious bias – are still for the most part making the decisions. That’s a problem.
Computers Only Do What They’re Told. Mostly.
You’ve probably heard the statement that computers only do what they are programmed to do. If they are not explicitly designed or programmed to take something into account, then they won’t. Aside from the fact that this is not necessarily true of complex, evolving systems such as Artificial Intelligences, it’s also not necessarily a bad thing. AI is not human (a truism which has caused much frustration to AI developers over the decades), but that also means that AI does not come with human cognitive bias built-in. AI is as rational and unbiased as the models we program into it… at least until it starts redesigning its own source code and that process comes under evolutionary pressures.
The Loaded Slingshot.
The philosopher Daniel Dennett once wrote about human babies as being creatures whose minds are like a slingshot, stretched and ready to fire the moment an object is placed in them. For example, a baby is not born speaking English or Mandarin, but they are born with very strong cognitive predispositions (e.g. to match their babbling to sounds made by their parents) which make learning those languages a natural and easy thing to do. Similarly, humans are born with innate predispositions toward certain cognitive heuristics which are often helpful… and sometimes not so much in the modern world.
When we are designing Artificial Intelligences and thinking about what decisions they should be allowed to make, and how they should make them, we should remember a couple of things. One is that AI can transcend human bias, which is something we sorely need in our decision-making. Another is that the loaded slingshot can be a very powerful design tool indeed, and sometimes we may want to engineer a little cognitive bias into our creations, to make them more human.
originally posted here: http://blog.metricmedia.io/2017/12/05/ai-transcends-human-cognitive-bias/