(Seattle) Cognitive biases can distort decision-making, leading to inaccurate or incomplete judgments. The ability to detect and mitigate them is crucial in fields from healthcare to finance, where objective, data-driven decisions are necessary. Recent developments in generative artificial intelligence (AI) and framing techniques have the potential to aid in detecting and mitigating cognitive biases. This paper explores the use of generative AI and framing to detect human cognitive biases, discussing the benefits and limitations of these methods and what their integration could contribute to better decision-making.
Framing or engineering prompts is important when using generative models like ChatGPT because it helps to guide the model in producing relevant and meaningful responses.
Generative models like ChatGPT are designed to generate text based on the input provided to them. When a prompt is given to the model, it uses the information in the prompt to generate a response. However, if the prompt is not well-crafted or is ambiguous, the response generated by the model may not be relevant to the user’s intent.
By framing or engineering prompts, we can provide the model with more specific and relevant information, such as the conversation’s topic, tone, or context. This helps ensure that the model produces more accurate, coherent, and useful responses to the user.
For example, when using ChatGPT to generate responses for customer service inquiries, we might frame the prompt to include details such as the product or service being discussed, the customer’s issue, and any relevant account information. This helps the model understand the user’s needs and generate appropriate responses, such as providing troubleshooting steps or a solution.
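The customer-service case above can be sketched in a few lines. This is an illustrative Python sketch, not a real API: the function name and its fields (`product`, `issue`, `account_notes`) are assumptions chosen for the example.

```python
def frame_support_prompt(product, issue, account_notes):
    """Build an engineered prompt that gives the model the context it needs:
    the product, the customer's issue, and relevant account details.
    (Illustrative sketch; the field names are assumptions, not a real API.)"""
    return (
        f"You are a customer service assistant for {product}.\n"
        f"Account notes: {account_notes}\n"
        f"The customer reports: {issue}\n"
        "Respond with concrete troubleshooting steps or a solution."
    )

# Example usage with hypothetical values:
prompt = frame_support_prompt(
    product="Acme Router X200",
    issue="Wi-Fi drops every few minutes",
    account_notes="firmware updated last week",
)
```

Because the prompt now states the product, the complaint, and the account history, the model has far less room to wander off the user's intent.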
In summary, framing or engineering prompts is important when using generative models like ChatGPT to ensure that the model generates relevant and useful responses that meet the user’s needs.
Framing and generative AI can be used together to help identify and mitigate cognitive biases in decision-making processes.
Framing refers to the way information is presented, and it can influence how people perceive and interpret that information. By using framing techniques to present information in different ways, we can identify potential cognitive biases that may affect decision-making processes.
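The classic illustration of framing is presenting the same statistic as a gain or as a loss. A minimal sketch (the function and wording are my own, chosen for illustration):

```python
def frame_outcome(success_rate, positive=True):
    """Present the same statistic with positive (gain) or negative (loss)
    framing. The information is identical; only the presentation changes."""
    if positive:
        return f"{success_rate}% of patients recover with this treatment."
    return f"{100 - success_rate}% of patients do not recover with this treatment."

gain = frame_outcome(90, positive=True)
loss = frame_outcome(90, positive=False)
```

People routinely judge the first framing more favorably than the second even though both describe the same treatment, and that gap is exactly the kind of signal a bias-detection workflow looks for.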
Generative AI can assist with this by analyzing large amounts of data and generating different scenarios or responses based on that data. By analyzing the responses generated by generative AI, we can identify patterns and biases that may not be immediately apparent.
For example, in a healthcare setting, we could use generative AI to generate different scenarios based on patient symptoms, and then use framing techniques to present that information in different ways to healthcare providers. By analyzing the responses of healthcare providers to these scenarios, we may be able to identify potential biases that could impact patient care.
In a financial setting, we could use generative AI to generate different investment scenarios and present that information to investors using framing techniques. By analyzing the investment decisions made by investors in response to these scenarios, we may be able to identify potential biases that could impact investment decisions.
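The financial version follows the same pattern: describe one investment two ways and compare the decisions each framing elicits. A hedged sketch with invented numbers and names:

```python
def frame_investment(expected_return, loss_chance, positive=True):
    """Describe the same investment with gain framing or loss framing.
    (Illustrative only; in a study the framings would be randomized
    across investors and their choices compared.)"""
    if positive:
        return (f"This fund has a {100 - loss_chance}% chance of meeting "
                f"its {expected_return}% expected return.")
    return (f"This fund has a {loss_chance}% chance of missing "
            f"its {expected_return}% expected return.")

gain_frame = frame_investment(7, 20, positive=True)
loss_frame = frame_investment(7, 20, positive=False)
# If investors accept the first framing more often than the second,
# the difference points to a framing-driven bias.
```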
Overall, the integration of framing and generative AI can help to identify and mitigate cognitive biases in decision-making processes by presenting information in different ways and generating different scenarios for analysis.
Here is how I used generative AI and framing to detect human bias. Keep in mind that code before this looks for one of the four general categories, and the code below is called only if some are found; it illustrates, at a high level, how you can drill down using prompts and generative AI to do analysis, in this case bias detection:
if (BiasCollection.SubCatagories.Count > 0)
{
    for (int x = 0; x < BiasCollection.SubCatagories.Count; x++)
    {
        // Frame a prompt for the current bias category and get the
        // names of its subcategories.
        EngineeredPrompt = FramePromptBC((x + 1).ToString(), value3);
        String[] subCategoryNames = FramePromptBCNames((x + 1).ToString().ToLower());

        var completion = await api.Completions.CreateCompletionAsync(
            EngineeredPrompt, temperature: 0.1, max_tokens: 200);
        Result = completion.ToString();

        // Check which subcategory names appear in the model's response.
        for (int b = 0; b < subCategoryNames.Length; b++)
        {
            if (Result.ToLower().IndexOf(subCategoryNames[b].ToLower()) > -1)
            {
                String Indexer = (b + 1).ToString();
                Bias ThisBias = new Bias("S" + Indexer);

                // Drill down: frame a more detailed prompt around the
                // specific biases in this subcategory.
                String[] FinalBias = GetBiasArray("BC" + (x + 1).ToString(), "S" + Indexer);
                EngineeredPrompt = FramePromptDetailBias(subCategoryNames[b], FinalBias, value3);

                // One completion per subcategory (hoisted out of the loop
                // below, which otherwise re-ran the identical prompt).
                var detailCompletion = await api.Completions.CreateCompletionAsync(
                    EngineeredPrompt, temperature: 0.1, max_tokens: 200);
                String Result2 = detailCompletion.ToString();

                // Record every specific bias named in the detailed response.
                for (int a = 0; a < FinalBias.Length; a++)
                {
                    if (Result2.ToLower().IndexOf(FinalBias[a].ToLower()) > -1)
                    {
                        String Indexer2 = (a + 1).ToString();
                        ThisBias.SubCatagories.Add(new Bias("B" + Indexer2));
                    }
                }

                BiasCollection.SubCatagories[x].SubCatagories.Add(ThisBias);
            }
        }
    }
}
This C# code demonstrates one way generative AI and framing techniques can be used to detect human cognitive bias.
The code starts by checking if any subcategories of bias are present in the BiasCollection object. If there are, it loops through each subcategory to generate an engineered prompt using the FramePromptBC function. The x+1 is used to create a string that indicates the index of the subcategory, which is added to the prompt. The value3 parameter is a variable that contains relevant information that can be used to frame the prompt.
The next step is to create an array of string values that correspond to the different subcategories of the current bias category. These values are obtained using the FramePromptBCNames function.
Once the subcategory values have been obtained, the code creates a completion using the OpenAI API and the generated prompt. The temperature and max_tokens parameters control the level of randomness in the response and the maximum length of the response, respectively.
The code then loops through each subcategory value to check whether it appears in the response generated by the OpenAI API. If a subcategory value is found, the code creates a new Bias object for it; once the detailed checks are done, that object is added to the matching category in the BiasCollection object.
Next, the code uses the GetBiasArray function to obtain an array of string values that correspond to the specific biases associated with the subcategory value found earlier. The FramePromptDetailBias function is then used to create a new engineered prompt that includes the specific bias values and the value3 parameter.
Finally, the code creates another completion using the new prompt, loops through each bias value in the array, and checks if it is present in the response generated by the OpenAI API. If a bias value is found, a new Bias object is created and added to the current subcategory.
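The two-level drill-down described above can be condensed into a short, runnable Python sketch. The structure mirrors the C# code (category prompt, substring check, detailed prompt, substring check), but the function names, the category data, and the canned `fake_complete` model are all invented for illustration; a real run would pass in the OpenAI completion call instead.

```python
def detect_biases(categories, complete, text):
    """Two-level drill-down mirroring the C# code: ask the model about each
    broad category, then, for subcategories named in the response, send a
    more detailed framed prompt and record the specific biases it names.
    `complete` is any callable that takes a prompt and returns model text."""
    found = {}
    for category, subcats in categories.items():
        response = complete(f"Which of {list(subcats)} appear in: {text}").lower()
        for subcat, biases in subcats.items():
            if subcat.lower() in response:  # same substring test as the C# IndexOf
                detail = complete(
                    f"Which of {biases} best describes the {subcat} in: {text}"
                ).lower()
                found[subcat] = [b for b in biases if b.lower() in detail]
    return found

# A canned stand-in for the model so the flow can be run offline.
def fake_complete(prompt):
    if "anchoring" in prompt.lower():
        return "This shows anchoring."
    return "The text shows confirmation bias."

result = detect_biases(
    {"belief biases": {"confirmation bias": ["anchoring", "cherry-picking"]}},
    fake_complete,
    "We only cited the studies that agreed with us.",
)
```

The substring matching is deliberately naive, just as in the original: it trades precision for simplicity, which is why the C# version runs at a low temperature to keep the model's wording predictable.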
Overall, this code demonstrates how generative AI and framing techniques can be used to identify and mitigate cognitive biases. By engineering prompts that present information in different ways and analyzing the responses generated by the OpenAI API, we can identify biases that may not be immediately apparent and take steps to address them.
The most important takeaway for me is that detecting bias in language is one step toward superintelligent systems: systems that are better and smarter than we are.
title image used from here: https://commons.wikimedia.org/wiki/File:Cognitive_bias_codex_en.svg