Sample size neglect is a cognitive bias studied extensively by Amos Tversky and Daniel Kahneman. [Sources: 3]

In Tversky and Kahneman's hospital problem, most participants judged a day on which more than 60 percent of the babies born were boys to be equally likely at a small hospital and a large one, presumably because both events are described by the same statistic and therefore seem equally representative of the general population (Tversky and Kahneman call this "insensitivity to sample size"). They attributed these results to the representativeness heuristic, by which people intuitively expect samples to share the properties of the population they are drawn from, without taking other considerations into account. Insensitivity to sample size is the cognitive error that occurs when people estimate the likelihood of a statistical result without regard to the size of the sample that produced it. [Sources: 0, 4]
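The hospital problem can be checked directly by simulation. The sketch below is a hypothetical illustration (the daily birth counts of 15 and 45 are assumed, not taken from the cited sources): it counts the days on which more than 60 percent of babies born are boys.

```python
import random

def fraction_days_over_60(births_per_day, n_days=100_000, seed=1):
    """Simulate n_days of births with P(boy) = 0.5 and return the
    fraction of days on which more than 60% of the babies were boys."""
    rng = random.Random(seed)
    over = 0
    for _ in range(n_days):
        boys = sum(rng.random() < 0.5 for _ in range(births_per_day))
        if boys / births_per_day > 0.6:
            over += 1
    return over / n_days

# The smaller hospital records extreme days far more often.
print(fraction_days_over_60(15))   # roughly 0.15
print(fraction_days_over_60(45))   # roughly 0.07
```

Both hospitals share the same expected proportion of boys, yet the smaller one deviates from it far more often, which is exactly the point participants missed.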

In this article, we empirically explore the psychometric properties of some of the best-known statistical and logical cognitive illusions from Daniel Kahneman and Amos Tversky's heuristics-and-biases research program, which nearly 50 years ago presented fascinating puzzles such as the famous Linda problem, Wason's card selection task, and so-called Bayesian reasoning problems (e.g., the mammography problem). The cognitive illusions they presented provided empirical evidence that human reasoning can defy the laws of logic and probability. [Sources: 4]
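The mammography problem mentioned above is a Bayesian updating exercise. A minimal sketch, using illustrative numbers commonly quoted for this problem (1% prevalence, 80% sensitivity, 9.6% false-positive rate; these figures are an assumption here, not taken from this article):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Despite the positive test, the posterior stays below 8% --
# far lower than most people intuitively estimate.
print(round(posterior(0.01, 0.80, 0.096), 3))  # 0.078
```

The counterintuitive result comes from the low base rate: most positive tests are false positives, which is precisely the base-rate information people tend to neglect.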

Sample size neglect refers to the failure to consider the role of sample size in determining the reliability of a statistical statement, while base rate neglect means that people tend to ignore existing knowledge about a phenomenon when evaluating new information. People who are insensitive to sample size do not appreciate how much the sample size matters for any probability calculation. This tendency is known as the insensitivity-to-sample-size bias or, if you will, the "law of small numbers". [Sources: 1, 2, 3, 4]

Sample size neglect occurs when users of statistical information draw false conclusions by ignoring the sample size of the data in question. It is therefore very important to determine whether the sample behind a given statistic is large enough to allow meaningful conclusions. It also struck me that one could argue that many of these books and articles on cognitive errors are rather unscientific in their own way, or lack proper sample sizes, because they focus only on cases where heuristics lead to errors (and, furthermore, those errors are measured under highly unrealistic conditions in psychological laboratories, using highly unrepresentative samples of college students). [Sources: 2, 3]

Performance sampling works like any other sampling: the larger the sample size, the more uncertainty is reduced and the more likely you are to make the right decision. Earthquake ground motions, however, vary greatly in frequency composition, attenuation characteristics, and other factors, so a single prior earthquake amounts to a very small sample. Before drawing conclusions from a limited number of events (a sample), it is important to draw from a large number of events (the population) and to understand something about sample statistics. [Sources: 1]
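The claim that larger samples reduce uncertainty can be made concrete: the spread of a sample mean shrinks roughly like 1/√n. A small sketch (an assumed illustration using fair-coin samples, not an example from the cited source):

```python
import random
import statistics

def sample_mean_spread(sample_size, n_trials=2000, seed=7):
    """Standard deviation of the sample mean across repeated samples
    drawn from a fair-coin population (values 0 or 1)."""
    rng = random.Random(seed)
    means = [
        sum(rng.random() < 0.5 for _ in range(sample_size)) / sample_size
        for _ in range(n_trials)
    ]
    return statistics.stdev(means)

# Theory predicts 0.5 / sqrt(n): about 0.158, 0.050, and 0.016.
for n in (10, 100, 1000):
    print(n, round(sample_mean_spread(n), 3))
```

A hundredfold increase in sample size cuts the spread of the estimate tenfold, which is why a statistic from a tiny sample deserves far less confidence than the same statistic from a large one.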

The last part of Tversky and Kahneman's paper, on subjective probability distributions, is not covered in other books, both because it is expressed in mathematics so dense as to be almost incomprehensible, and because their critique of 1970s decision theory is too far removed from most people's everyday concerns. In short, variation is more likely to occur in smaller samples, but people may not expect it. According to the so-called "law of small numbers", we often treat small samples of information as if they spoke for a much wider group. [Sources: 0, 2, 5]

Exaggerated confidence in small samples is just one example of a more general illusion: we place more weight on the content of messages than on information about their reliability, and the result is a view of the world that is simpler and more coherent than the data justify. Of course, if the sample were extreme, say six respondents all agreeing, you would doubt it. The most common form of the illusion is the tendency to assume that small samples should be representative of their parent population; the gambler's fallacy is a special case of this phenomenon. [Sources: 1, 6]
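How unrepresentative small samples actually are can be computed exactly from the binomial distribution. The sketch below (a hypothetical illustration, with the 20-point threshold chosen arbitrarily) gives the probability that a sample's observed proportion misses a true 50/50 population split by more than 20 percentage points:

```python
from math import comb

def prob_unrepresentative(n, p=0.5, tol=0.2):
    """P(|X/n - p| > tol) for X ~ Binomial(n, p): the chance that the
    sample proportion lands far from the population proportion."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1) if abs(k / n - p) > tol)

print(round(prob_unrepresentative(6), 3))   # 0.219: small samples often mislead
print(round(prob_unrepresentative(60), 4))  # large samples rarely do
```

With six observations the sample is badly off more than a fifth of the time, so assuming it mirrors the parent population is exactly the error the "law of small numbers" describes.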

Bias can also arise from the effectiveness of a search set: imagine trying to retrieve words from random text, where some kinds of words come to mind far more easily than others. Performance records are generated by a combination of underlying ability and sample variation. Heuristics are mental shortcuts our brains use to make quick decisions: we often select past experiences that we believe should resemble future events, or that reflect an ideal outcome. [Sources: 1, 2, 5]

— Slimane Zouggari

##### Sources #####

[0]: https://en.wikipedia.org/wiki/Insensitivity_to_sample_size

[1]: https://fs.blog/mental-model-bias-from-insensitivity-to-sample-size/

[2]: https://astrofella.wordpress.com/tag/insensitivity-to-sample-size/

[3]: https://www.investopedia.com/terms/s/sample-size-neglect.asp

[4]: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.584689/full

[5]: https://thedecisionlab.com/biases/gamblers-fallacy/

[6]: https://www.oxfordreference.com/view/10.1093/oi/authority.20110803100439475

[7]: https://hyperleap.com/topic/Insensitivity_to_sample_size