Base Rate Fallacy

As a minimal definition, we commit a base rate error when we make a statistical inference that ignores the base rate (or prior probability) of the event of interest. The error occurs when we judge too quickly, discounting base rates or prior odds in favor of new, case-specific information. Base rate neglect is a common cognitive error that distorts decision-making: information about how often some characteristic occurs in a given population is ignored or given too little weight in the decision process. [Sources: 5, 9, 11]

When assessing the probability of an uncertain event, the tendency to ignore or underweight the base rate (the prior information) relative to current, case-specific information is a major distortion in human probabilistic inference (1, 2). This observation underpins a long-running research program in experimental psychology that uses ideal Bayesian inference as a model of human behavior and tries to understand how people estimate probabilities by examining systematic deviations of their estimates from the ideal model's predictions (3). [Sources: 7]
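For reference, the ideal Bayesian model referred to here combines the prior (the base rate) with the likelihood of the new evidence; a minimal statement of Bayes' theorem:

$$
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
$$

Base rate neglect amounts to letting the likelihood terms dominate while the prior \(P(H)\) is ignored or underweighted.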

These results indicate that the biases widely observed in human probabilistic judgments arise from how the brain weights information and how sensitive it is to the variability of that information. In particular, when combining a prior with a likelihood, sensitivity to information variability and the computation of subjective weights critically shape individual differences in base rate neglect. Base rate neglect, a serious error in assessing the probability of uncertain events, describes a person's tendency to underweight the base rate (the prior) relative to case-specific evidence (the likelihood). [Sources: 7]

Predictive information is readily used even when the actual reliability of the predictor is questionable, while the information carried by the base rate of the criterion event is often underused (Tversky & Kahneman, 1982). For example, when a witness reports that a suspicious car was blue, people tend to believe it really was blue, even when faced with evidence that the base frequency of blue cars at that particular location is low. When given both base rate information (i.e., general information about prevalence) and individuating information (i.e., information specific to the case at hand), people tend to ignore the base rate in favor of the individuating information instead of correctly integrating the two. [Sources: 4, 14]
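To make that integration concrete, here is a minimal sketch of the Bayes calculation. The base rate and witness accuracy below are illustrative assumptions, not figures from the source:

```python
# Hypothetical figures for illustration only:
# 15% of cars at this location are blue, and the witness
# reports the correct colour 80% of the time.
p_blue = 0.15                 # base rate of blue cars
p_report_blue_if_blue = 0.80  # P(witness says "blue" | car is blue)
p_report_blue_if_not = 0.20   # P(witness says "blue" | car is not blue)

# Bayes' theorem: integrate the base rate with the witness report.
posterior = (p_report_blue_if_blue * p_blue) / (
    p_report_blue_if_blue * p_blue + p_report_blue_if_not * (1 - p_blue)
)
print(f"P(car was blue | witness says blue) = {posterior:.2f}")  # ~0.41
```

Despite the confident report, the low base rate pulls the posterior well below the witness's 80% accuracy.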

The error arises from confusing the nature of two different failure rates: the probability that the alarm sounds for a given non-terrorist, and the probability that a given alarm concerns a non-terrorist. Because non-terrorists vastly outnumber terrorists, close to 100% of all alarms will sound for non-terrorists, and the alarm data alone do not even allow a false negative rate to be calculated. [Sources: 4]

If there were as many terrorists as non-terrorists in the city, and the false positive and false negative rates were nearly equal, then the probability of misidentification would be about the same as the device's false positive rate. In that balanced case, the share of false positives per positive test would be almost equal to the share of false positives per non-terrorist (or, in the pregnancy-test analogy, per non-pregnant woman). With a low base rate, however, even a very low false positive rate results in so many false alarms that such a system is practically useless. [Sources: 1]
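A short simulation of the low-base-rate case makes the point. The population size, number of terrorists, and device accuracy below are illustrative assumptions:

```python
# Illustrative assumptions: a city of one million, 100 terrorists,
# and a detector that is right 99% of the time in both directions.
population = 1_000_000
terrorists = 100
accuracy = 0.99

true_alarms = accuracy * terrorists                        # 99 real hits
false_alarms = (1 - accuracy) * (population - terrorists)  # ~9,999 false alarms

p_terrorist_given_alarm = true_alarms / (true_alarms + false_alarms)
print(f"P(terrorist | alarm) = {p_terrorist_given_alarm:.4f}")  # ~0.0098
```

A 1% per-person error rate thus yields alarms that are about 99% false, which is exactly the uselessness described above.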

In the latter case, the posterior probability p(drunk | positive result) cannot be derived simply by comparing the number of drunk drivers with the total number of people who received a positive breathalyzer result, because the base rate information is not preserved in that comparison and must be explicitly reintroduced via Bayes' theorem. In situations where the test is very accurate (say, > 95%) but the prior probability, or base rate, of the characteristic is low (say, only 1/125), the former quantity (the proportion of positive cases that are classified positive) will be high (95% in our case, simply the accuracy of the test), while the latter (the proportion of positively classified cases that are truly positive) can be very low, because the vast majority of positive classifications are false positives arising from the many tested people who lack the characteristic of interest, ultimately due to the low base rate. The probability that a positive test result is correct is therefore determined not only by the accuracy of the test but also by the composition of the sampled population. [Sources: 4, 9]
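Plugging the passage's own figures into Bayes' theorem, and assuming for illustration that the false positive rate equals 1 minus the stated 95% accuracy:

```python
base_rate = 1 / 125    # prior probability that a stopped driver is drunk
sensitivity = 0.95     # P(positive | drunk): the stated test accuracy
false_pos = 0.05       # P(positive | sober): assumed here as 1 - accuracy

p_positive = sensitivity * base_rate + false_pos * (1 - base_rate)
ppv = (sensitivity * base_rate) / p_positive  # P(drunk | positive)
print(f"P(drunk | positive result) = {ppv:.2f}")  # ~0.13
```

Under these assumptions, roughly 87% of positive results come from sober drivers, purely because sober drivers dominate the sample.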

We tend to base judgments on the specific numbers and percentages we are given, while ignoring the general statistics that are also needed. It is therefore very important, when drawing statistical conclusions, to be wary of our tendency to fall into this trap. If you cannot work out the answer, you are grappling with Bayesian probability, and you are not alone: when given data on the likelihood of breast cancer in women with a positive mammogram, an alarming 80% of doctors get it wrong. [Sources: 2, 9, 13]
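The mammogram version is easiest to see in natural frequencies. The prevalence, sensitivity, and false positive rate below are commonly quoted teaching values, not figures taken from the source:

```python
# Illustrative assumptions: 1% prevalence, 80% sensitivity,
# 9.6% false positive rate (commonly quoted teaching values).
women = 1000
with_cancer = round(0.01 * women)                 # 10 women
true_pos = round(0.80 * with_cancer)              # 8 positive mammograms
false_pos = round(0.096 * (women - with_cancer))  # 95 positive mammograms

ppv = true_pos / (true_pos + false_pos)
print(f"{true_pos} of {true_pos + false_pos} positives have cancer: "
      f"PPV ~ {ppv:.0%}")  # ~8%
```

Framed this way, the correct answer (well under 10%) is far more intuitive than when the same information is presented as conditional probabilities.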

Clinical profilers are likely to resist the idea of attaching a percentage figure to their predictions, since it seems to contradict clinical intuition or judgment. It always goes this way: people unfamiliar with the technical role of prior probability tend to ignore prior statistics because they do not seem relevant. There is still much to think about before calculating a base rate. The profiler would communicate more clearly by attaching a personal prediction percentage (e.g. 30%), so that investigators can judge how strongly the profiler believes an event will happen. [Sources: 2, 5, 8]

When predicting criterion events from predictors in a probabilistic setting, it is normatively appropriate to take two kinds of information into account: the overall base rate of the criterion events and the predictor values for the particular case. Moreover, even when the predictors carried no useful statistical information, selection was biased toward criterion events that superficially resembled the predictors, despite the lack of any contingency between predictors and criterion events (Goodie & Fantino, 1996). [Sources: 14]

Kahneman and Tversky argued that many judgments of probability or of cause and effect are based on how representative one object is of another object or category. One of the main theories holds that this is a matter of relevance: we ignore base rate information because we classify it as irrelevant and therefore believe it should be ignored. [Sources: 1, 4, 6]

The representativeness heuristic can lead to base rate errors: we may judge an event or object to be highly representative of a category and base our probability judgment solely on that, without stopping to consider the base rate. This is a common psychological bias associated with the representativeness heuristic. People are also swayed by personal stories, for example the story of a smoker who lived to be 95. [Sources: 6, 12]

Base rate neglect also occurs when a profiler feels better equipped to handle the problem on the basis of prior experience. For example, a profiler might focus on a specific criminal, obscuring useful information about a group of criminals with similar characteristics. Subjective probability judgments rest on personal beliefs, such as the belief that the perpetrator will offend again, that a particular suspect is the prime suspect, or that the perpetrator lives in a specific area. Mechanically applying Bayes' theorem to identify errors in performance is inadequate when (1) key model assumptions are unchecked or severely violated, and (2) no attempt is made to spell out the goals, values, and assumptions that remain the responsibility of the decision-makers. [Sources: 0, 5]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/base-rate-fallacy-reconsidered-descriptive-normative-and-methodological-challenges/5C0138815B364140B87110364055683B

[1]: https://psychology.fandom.com/wiki/Base_rate_fallacy

[2]: https://www.statisticshowto.com/base-rates-base-rate-fallacy/

[3]: https://www.investopedia.com/terms/b/base-rate-fallacy.asp

[4]: https://en.wikipedia.org/wiki/Base_rate_fallacy

[5]: https://ifioque.com/social-psychology/base-rate-fallacy

[6]: https://thedecisionlab.com/biases/base-rate-fallacy/

[7]: https://www.pnas.org/content/117/29/16908

[8]: https://www.adcocksolutions.com/post/no-8-of-86-base-rate-fallacy

[9]: https://www.capgemini.com/gb-en/2020/10/the-base-rate-fallacy-what-is-it-and-why-does-it-matter/

[10]: https://www.oxfordreference.com/view/10.1093/oi/authority.20110803095449924

[11]: https://tactics.convertize.com/definitions/base-rate-fallacy

[12]: https://fs.blog/mental-model-bias-from-insensitivity-to-base-rates/

[13]: https://www.wheelofpersuasion.com/technique/base-rate-neglect-base-rate-fallacy/

[14]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2441578/