The Cognitive Bias Foundation is an open-source collaborative project that documents methods for identifying cognitive biases and provides resources to help recognize them.
Want to help? Contact us at: email@example.com
What is a Cognitive Bias, you ask? (from Wikipedia)
Although the reality of most of these biases is confirmed by reproducible research, there are often controversies about how to classify these biases or how to explain them. Some are effects of information-processing rules (i.e., mental shortcuts), called heuristics, that the brain uses to produce decisions or judgments. Biases have a variety of forms and appear as cognitive (“cold”) bias, such as mental noise, or motivational (“hot”) bias, such as when beliefs are distorted by wishful thinking. Both effects can be present at the same time.
There are also controversies over some of these biases as to whether they count as useless or irrational, or whether they result in useful attitudes or behavior. For example, when getting to know others, people tend to ask leading questions which seem biased towards confirming their assumptions about the person. However, this kind of confirmation bias has also been argued to be an example of social skill: a way to establish a connection with the other person.
Although this research overwhelmingly involves human subjects, some biases have also been demonstrated in non-human animals. For example, hyperbolic discounting has been observed in rats, pigeons, and monkeys. [wiki ref]
Ways to get involved:
Whether you are talented with academic writing, coding, linguistics, psychology, analytics, engineering, mathematics, solutions architecture, or any number of other specialties, there are ways for you to contribute. Please submit these to: firstname.lastname@example.org
Additional details, clarifications/corrections, references, and notes may be submitted for any given bias to provide more data for contributors to work with on the site. Academic writers, linguists, and psychology professionals, for example, could offer great value in this.
We’re working on a collaboration with WikiBias and others to produce tagged datasets of text which may be used to analyze the structural patterns that occur in the presence of a bias. Analysts, developers, and engineers, for example, will be able to review these datasets to help locate these structural patterns individually, adding them to a given bias on the site.
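To make the idea concrete, here is a minimal sketch of what one record in such a tagged dataset might look like, and how a contributor could pull out the structural tags associated with a bias. The field names, tag names, and example sentences are all illustrative assumptions, not the actual WikiBias schema.

```python
# Hypothetical tagged-dataset records: each pairs a sentence with the bias
# it exhibits (or None) and the structural patterns found in it.
tagged_dataset = [
    {
        "text": "Everyone I know agrees, so it must be true.",
        "bias": "bandwagon_effect",
        "structures": ["appeal_to_popularity", "certainty_marker"],
    },
    {
        "text": "The committee reviewed three proposals.",
        "bias": None,
        "structures": [],
    },
]

def structures_for(bias_name, dataset):
    """Collect every structural tag seen alongside a given bias."""
    tags = set()
    for record in dataset:
        if record["bias"] == bias_name:
            tags.update(record["structures"])
    return tags
```

A query like `structures_for("bandwagon_effect", tagged_dataset)` would then surface the patterns contributors have attached to that bias so far.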
After a few such patterns have been recognized for a given bias, mathematicians, engineers, and solutions architects, for example, may propose algorithms that consider these factors together for greater accuracy than single-pattern flagging. If, for example, a dataset establishes that when Structure A is presented in tone B, in part C of the sentence, and in conjunction with Structure D, Bias E is present 60% of the time, we can take that as a starting point and tune several variables to gradually increase the accuracy without sacrificing transparency, compatibility, or the upper bound on refinement.
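One way such a multi-factor algorithm could be sketched is as a weighted score that starts from the single-pattern baseline and adjusts as co-occurring signals are observed. The signal names, weights, and the 60% baseline are illustrative assumptions taken from the example above, not measured values.

```python
# Hypothetical multi-signal scoring sketch. Each named signal corresponds
# to one structural factor (Structure A, tone B, position C, Structure D).
def bias_score(features, weights, baseline=0.6):
    """Start from the single-pattern baseline probability and nudge the
    estimate upward for each additional signal that is present."""
    score = baseline
    for name, weight in weights.items():
        if features.get(name):
            score += weight
    # Clamp to a valid probability range.
    return max(0.0, min(1.0, score))

weights = {
    "structure_a": 0.10,  # Structure A is present
    "tone_b": 0.08,       # delivered in tone B
    "position_c": 0.05,   # appears in part C of the sentence
    "structure_d": 0.07,  # co-occurs with Structure D
}
```

With no extra signals the score stays at the 0.6 baseline; with all four present it rises to 0.9, illustrating how stacking factors improves on single-pattern flagging while keeping every step of the calculation transparent.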
Once such algorithms are proposed, we can put them to the test, with developers and engineers formalizing the code and measuring their accuracy.
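The measurement step could be as simple as scoring a proposed detector against labeled examples. Here is a hedged sketch of such a harness; the toy detector and the example sentences are illustrative only, not a vetted detection rule.

```python
# Score a detector by comparing its verdicts against labeled examples.
def measure_accuracy(detector, labeled_examples):
    """Fraction of examples where the detector's verdict matches the label."""
    correct = sum(
        1 for text, label in labeled_examples if detector(text) == label
    )
    return correct / len(labeled_examples)

def toy_detector(text):
    # Single-pattern flagging: treat sweeping quantifiers as a bias marker.
    return any(word in text.lower() for word in ("everyone", "always", "never"))

examples = [
    ("Everyone knows this is the best option.", True),
    ("She always ignores evidence she dislikes.", True),
    ("The report was published on Tuesday.", False),
    ("Never trust a first impression.", True),
    ("He asked whether everyone had arrived yet.", False),  # a false positive
]
```

Running `measure_accuracy(toy_detector, examples)` on this set yields 0.8, and the one miss is exactly the kind of false positive the next refinement stage targets.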
When successful algorithms have been measured, they can be further refined by analyzing the false-positive segment where an algorithm fails, producing a second layer of structural pattern recognition to de-flag false positives. These patterns and algorithms may likewise be added to a given bias, producing an iterative cycle of refinement in detection accuracy. These cycles can function in composite or in parallel as additional positive/false-positive equations are proven successful.
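The two-layer structure described above can be sketched as a flagging pass followed by a de-flagging pass. Both pattern sets here are hypothetical examples chosen for illustration, not vetted rules.

```python
# Illustrative two-layer detector: the first pass flags candidate text,
# and a second layer of patterns de-flags known false positives.
FLAG_PATTERNS = ("everyone", "always", "never")
DEFLAG_PATTERNS = ("almost", "nearly", "not always")  # hedges that soften a claim

def first_pass(text):
    """Layer 1: flag text containing a sweeping quantifier."""
    return any(p in text.lower() for p in FLAG_PATTERNS)

def second_pass(text):
    """Layer 2: keep the flag only if no hedging pattern is present."""
    return not any(p in text.lower() for p in DEFLAG_PATTERNS)

def detect(text):
    return first_pass(text) and second_pass(text)
```

For example, "Everyone agrees with me." stays flagged, while "Almost everyone agrees with me." is de-flagged by the second layer. Each newly analyzed false-positive pattern can be appended to `DEFLAG_PATTERNS`, giving the iterative refinement cycle described above a concrete shape.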
The First Milestone:
Our next big goal is to produce a successful algorithm for the detection of a single bias, in plain text, using a single sentence. We’re aiming to meet this goal before the end of 2019.
Following this milestone, a successful algorithm may be generalized to detect similar biases, branching out over time with the goal of covering all forms of cognitive bias. After a few such algorithms have proven successful, analyzing how detection had to be modified between them may produce meta-algorithms for generating new detection algorithms. Further, since many of these patterns are likely to be expressed in other languages, a similar pattern of geometric growth in bias detection may take shape starting at the points where one algorithm proves successful when applied to a new language.
The ability to automatically detect cognitive bias is hugely important, especially in an age of increasing automation, advertising, and social media influence. Contributions to this project will go toward teaching humans and AGI, both existing and future, how to recognize cognitive biases and avoid their pitfalls.
Check out more information at: http://bias.transhumanity.net/
Hero image used from https://ritholtz.com/2016/09/cognitive-bias-codex/