Decoy Effect

The addition of MP3 Player C, which buyers are likely to avoid (why pay more for a model with less storage?), causes MP3 Player A, the dominating option, to be chosen more often than when only the two original options were offered. Since A is better than C in both respects, while B is better than C in only one respect, more consumers now prefer A than before (paraphrased from Wikipedia). [Sources: 2, 4]

So C is a bait whose sole purpose is to increase sales of A. Adding decoy C, which consumers can be expected to avoid (who would pay a higher price for a model with less memory?), does exactly that: compared with the choice set containing only the two original options, A, the dominating option, is selected more frequently. C shifts consumer preferences by serving as a reference point for comparing A and B. In other words, the decoy effect is a phenomenon in which the attractiveness of option B relative to option A can be enhanced by adding a decoy option D (Li et al., 2019). Marketers also refer to the decoy effect as the asymmetric dominance effect: when consumers face a third option in addition to the two they already have, their preferences can shift, creating an asymmetric advantage for the option the decoy was designed to promote. [Sources: 4, 7, 12, 14]

The decoy effect is technically known as asymmetric dominance and occurs when people’s preference for one option over another changes as a result of adding a third option that is similar but less attractive. It is defined as the phenomenon whereby consumers change their preference between two options when presented with a third, asymmetrically dominated option, the “bait”. In marketing, the decoy effect (also called the attraction effect or asymmetric dominance effect) therefore describes consumers’ tendency to shift their preference between two options when such a third option is also presented. [Sources: 11, 12, 15]

Viewed as a cognitive bias, the decoy effect means that the addition of a third pricing option causes consumers to change their preference in favor of the option the seller is trying to promote: when presented with a strategically chosen bait option, they become more likely to choose the more expensive of the two original options. [Sources: 0, 4, 13]
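The “better in both respects” relation can be made precise with a small dominance check. Below is a minimal Python sketch using hypothetical prices and storage sizes in the spirit of the MP3 player example above: the decoy C is dominated by target A (cheaper and more storage) but not by competitor B, which is what makes the dominance asymmetric.

```python
def dominates(a, b):
    """True if option a is at least as good as b on every attribute
    (lower price, more storage) and strictly better on at least one."""
    at_least_as_good = a["price"] <= b["price"] and a["storage"] >= b["storage"]
    strictly_better = a["price"] < b["price"] or a["storage"] > b["storage"]
    return at_least_as_good and strictly_better

# Hypothetical MP3 players (price in dollars, storage in GB)
target = {"price": 400, "storage": 30}      # A: the option to promote
competitor = {"price": 300, "storage": 20}  # B
decoy = {"price": 450, "storage": 25}       # C: worse than A on both counts

print(dominates(target, decoy))      # A dominates C
print(dominates(competitor, decoy))  # B does not: C has more storage than B
```

Because C is beaten by A on every attribute but beats B on storage, C never needs to sell; its presence only makes A look better.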

The decoy effect can also be measured by how much more a consumer is willing to pay to select the target rather than the competitor. Advertisers and marketers use decoy prices to make the targeted option look better than similar, lower-priced products, pushing customers who would usually buy the cheaper product towards a more expensive one. [Sources: 2, 8, 12]

Marketers use this particularly cunning pricing strategy to shift your choice from one option to a more expensive or more profitable one, until the only offer that seems to make sense is the “best value” one. Price is among the subtlest levers in the marketing mix, and prices are often set precisely so that we spend more money. Based on the theory that consumers tend to choose products or services of average value and price, marketers also use the compromise effect to increase customers’ desire to buy mid-range products. [Sources: 1, 8, 13]

Consumers may prefer a higher-quality product over a cheaper, lower-quality one when offered a third option that is inferior to the first in quality, price, or both. [Sources: 10]

The decoy manipulates the decision-making process by directing consumers’ attention to the target option. The bait is not meant to sell; its only role is to steer consumers away from the “competitor” and toward the “target”, usually the most expensive or most profitable option. Used as a marketing strategy, decoy pricing not only increases profits but also improves the overall image of the targeted product or service. [Sources: 8, 15]

The addition of a decoy increases the likelihood of purchasing the higher-quality product. The decoy should resemble the target but be slightly inferior, in order to create the effect; once added, it shifts your choice from the competitor to the target. [Sources: 5, 6]

This makes you less likely to choose the competitor than either of the other two options, and more likely to choose the target than the decoy: when you weigh the target against the decoy, the only differentiating factor, price, favors the target. It is also foreseeable that if, for example, the decoy’s memory size is too close to the target’s, the two products may appear nearly identical, and the decoy is unlikely to produce a preference reversal. While it is hard to imagine how such a confound would push people toward the target option, a decoy study is strongest when it can rule the confound out. [Sources: 5, 6, 14]

Note that since strong prior preferences for either the target or the competitor are known to inhibit the decoy effect (Huber et al., 2014), our study deliberately focuses on scenarios in which options can be constructed. [Sources: 5]

When the decoy option sits right between our own choice and what marketers want to sell, we base our decision on what looks like the better deal rather than on what best serves our purpose. Decoys usually work on us unnoticed; whatever we end up choosing, we believe we chose it independently. By manipulating these key attributes of the choice, the decoy guides you in a certain direction while leaving you with the feeling that you are making a rational, informed choice. [Sources: 8, 9, 15]

Thus, the decoy effect is a form of “nudge”, which Richard Thaler and Cass Sunstein (pioneers of nudge theory) defined as “any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options.” Not all nudges are manipulative, and some argue that even manipulative nudges can be justified if the goal is noble. The effect has been demonstrated in many areas, such as medical decisions (Schwartz and Chapman, 1999), consumer choice, and gambling preferences (Huber et al., 1982; Heath and Chatterjee, 1995), consistent with the claim that the decoy effect is robust (Huber and McCann, 1982). [Sources: 3, 14, 15]

In this study, the decoy condition carried a higher visual load than the control condition. In any case, with the exception of the lottery tickets, the decoy successfully increased the likelihood that the target was chosen; as expected, when a decoy was present, people were more likely to pick the target. Adding the decoy led people to judge the beer by quality and forget about the price. [Sources: 6, 9, 14, 15]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://pony.studio/design-for-growth/decoy-effect

[1]: https://scitechdaily.com/think-you-got-a-good-deal-beware-the-decoy-effect/

[2]: https://www.paulolyslager.com/decoy-effect-price-tables/

[3]: https://theconversation.com/the-decoy-effect-how-you-are-influenced-to-choose-without-really-knowing-it-111259

[4]: https://www.intelligenteconomist.com/decoy-effect/

[5]: https://www.nature.com/articles/palcomms201682

[6]: https://kenthendricks.com/decoy-effect/

[7]: https://www.segmentify.com/blog/the-3-option-decoy-effect-and-relativity

[8]: https://designbro.com/blog/industry-thoughts/decoy-effect-marketers-influence-choose-what-they-want/

[9]: https://thedecisionlab.com/biases/decoy-effect/

[10]: https://thinkinsights.net/strategy/decoy-effect/

[11]: https://www.behavioraleconomics.com/resources/mini-encyclopedia-of-be/decoy-effect/

[12]: https://en.wikipedia.org/wiki/Decoy_effect

[13]: http://humanhow.com/the-decoy-effect-complete-guide/

[14]: https://www.frontiersin.org/articles/10.3389/fpsyg.2020.523299/full

[15]: https://www.qut.edu.au/study/business/insights/the-decoy-effect-how-you-are-influenced-to-choose-without-really-knowing-it

Distinction Bias

At home, however, there is only one item, with nothing to compare it against. Shoppers in stores are often in comparison mode: they evaluate products side by side and become hypersensitive to the slightest differences. When people evaluate options independently, they tend to focus on easily measurable attributes. [Sources: 0, 11]

But when comparing two options jointly, they can also weigh attributes that are difficult to evaluate in isolation. Distinction bias says that when two options are compared side by side, as in a joint evaluation, even small differences between them become obvious; the bias occurs when a small quantitative difference between two options is amplified by direct comparison. It assigns too much value to small differences that matter little, whether in technology products or in important life decisions. [Sources: 3, 7, 11]

A study published in the Journal of Personality and Social Psychology explains that people evaluate options differently when comparing them than when experiencing them. When making a choice, say picking a TV, we are in comparison mode, sensitive to small differences between options. But once we live with our decision, we are in experience mode: there are no other options to compare our experience against. [Sources: 1, 7]

We often pay too much attention to insignificant quantitative differences and choose the option that doesn’t actually bring us the most happiness. Rather than optimizing for what makes us happiest in the long run, we play “spot the difference” over attributes that don’t really matter. While marketers can exploit this bias to sell us things that won’t make us feel better, there is no reason to keep falling for their gimmicks: by understanding cognitive quirks such as distinction bias, we can outsmart our own brains. [Sources: 1, 9]

These biases distort thinking, influence beliefs, and shape the decisions and judgments people make every day. Confirmation bias, for example, shows how people sometimes process information in an illogical, distorted way: people often interpret information from their own point of view, and this biased approach to decision-making is largely unintentional and often results in conflicting information being ignored. [Sources: 4, 8]

Confirmation bias matters because it can lead people to cling to false beliefs, or to give more weight to information supporting their beliefs than the evidence warrants. It is the tendency to seek, interpret, focus on, and remember information in a way that confirms one’s preconceptions. Bias blind spot is the tendency to see oneself as less biased than other people, or to identify more cognitive biases in others than in oneself. [Sources: 5, 8]

Social comparison bias is the tendency to decide in favor of potential candidates who do not compete with one’s own particular strengths. Social desirability bias is the tendency to over-report socially desirable characteristics or behaviors in oneself and under-report socially undesirable ones. Status quo bias is the tendency to prefer that things remain relatively unchanged (see also loss aversion, the endowment effect, and system justification). [Sources: 5, 12]

Optimism bias is the tendency to be overly optimistic, overestimating favorable and pleasant outcomes (see also wishful thinking and the valence effect). Outcome bias is the tendency to judge a decision by its eventual outcome rather than by the quality of the decision at the time it was made. [Sources: 5, 12]

A related misattribution occurs in human-computer interaction, where people tend to misjudge system errors when interacting with machines. Automation bias is the tendency to rely too heavily on automated systems, which can lead to incorrect automated information overriding correct decisions. [Sources: 5, 12]

Negativity bias is the psychological phenomenon by which people recall unpleasant memories more readily than positive ones. The negativity effect is the tendency, when explaining the behavior of someone we dislike, to attribute their positive actions to circumstances and their negative actions to their inner nature. Confirmation bias also shows up in people’s tendency to seek confirming examples: when interacting with people they believe have certain personalities, they ask those people questions biased towards confirming their existing beliefs. [Sources: 8, 12]

In the case of design reviews, however, I find that emphasizing the differences is usually useful, because the reviewers are professionals who need to understand the true meaning and importance of the various attributes. [Sources: 11]

The evaluability hypothesis describes another phenomenon psychologists observe when studying how people choose between options: when analyzing two options jointly, people tend to fixate on the quantitative values they believe will affect their happiness. The framing effect is a related cognitive bias: when presented with equivalent options, people make different decisions depending on how those options are expressed. In our earlier example, we observed this when analyzing the reliability of two designs, which could be stated either as a small increase in uptime or as a significant decrease in downtime. [Sources: 7, 11]

Real frames tend to be more nuanced, but looking at both ends of the spectrum clarifies the point. Frames, whether verbal or numerical, positive or negative, may not completely determine our decision, but they do influence it, significantly affecting how we interpret an offer and how we relate to it. For example, a higher reference value leads us to believe we are getting a better deal. [Sources: 2]

Hindsight bias, sometimes called the “knew-it-all-along” effect, is the tendency to view past events as having been predictable at the time they happened. The related impact bias is our tendency to overestimate how strongly future events will affect our emotional state. While this concept differs from distinction bias, the two are interrelated. [Sources: 5, 10]

Distinction bias, a concept in decision theory, is the tendency to view two options as more different when they are evaluated simultaneously than when they are evaluated separately. [Sources: 0, 3]

Psychological research shows how distinction bias can prevent you from accurately predicting how much happiness an object or decision will bring. Among the brain’s many biases, I fell prey to distinction bias: the tendency to overvalue the effect of small quantitative differences when comparing options. [Sources: 1, 7]

We now see that the quality difference between the two options is not as important as we expected. In our example above, the two designs initially looked very similar, but when compared side by side, the differences became more apparent. The default effect is the tendency, when choosing between multiple options, to go with the default. The focusing effect is the tendency to overemphasize one aspect of an event. [Sources: 5, 10, 11]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://www.adcocksolutions.com/post/no-17-of-36-distinction-bias

[1]: https://www.psychologytoday.com/us/blog/automatic-you/201803/distinction-bias-why-you-make-terrible-life-choices

[2]: https://boycewire.com/framing-effect-definition-and-examples/

[3]: https://nlpnotes.com/2014/03/22/distinction-bias/

[4]: https://www.verywellmind.com/cognitive-biases-distort-thinking-2794763

[5]: https://uxinlux.github.io/cognitive-biases/

[6]: https://www.alleydog.com/glossary/definition.php?term=Distinction+Bias

[7]: https://www.brescia.edu/2018/07/understanding-distinction-bias-in-decision-making/

[8]: https://www.britannica.com/science/confirmation-bias

[9]: https://www.nirandfar.com/distinction-bias/

[10]: https://thedecisionlab.com/biases/distinction-bias/

[11]: https://effectivesoftwaredesign.com/2012/10/21/the-psychology-of-reviews-distinction-bias-evaluability-hypothesis-and-the-framing-effect/

[12]: https://www.emerald.com/insight/content/doi/10.1108/978-1-78635-566-920161032/full/html

Denomination Effect

The denomination effect suggests that when a person holds two equal amounts of money, they are more likely to spend the amount held in smaller units. The denomination effect (Raghubir and Srivastava, 2009) shows that people are less likely to spend money held as large denominations (e.g., a ten-dollar bill) than as many smaller denominations (e.g., ten one-dollar bills). Previous research has documented this effect: consumers are less likely to spend large bills (e.g., a $100 bill) than the equivalent in smaller bills (e.g., five $20 bills). [Sources: 0, 4, 11]

For example, the denomination effect occurs when people are less likely to spend larger bills than their equivalent value in small bills or coins. This cognitive bias affects various forms of currency, with the result that people are less likely to spend money held as large bills than the equivalent amount in smaller bills or coins. The effect was formally identified only recently, in a 2009 research paper by marketing professors Priya Raghubir and Joydeep Srivastava, who conducted several experiments on the topic. [Sources: 2, 8]

In one experiment, Raghubir stood outside a gas station in Omaha. Priya Raghubir and Joydeep Srivastava conducted a series of experiments in the United States and China showing that people were much more willing to spend the same amount of money when they held it as lower-denomination bills rather than as large bills. Participants were also more likely to break a large worn-out bill than to pay the exact amount in smaller denominations. [Sources: 6, 10]

Our findings specifically complement the denomination-effect literature (Raghubir and Srivastava, 2009), which suggests that one reason people are less likely to spend large denominations is that holding them helps exercise self-control, and that people do not want to lose track of how much money they have (see also Raghubir et al., 2017). The denomination effect offers insight into people’s normative beliefs about money; consumers may think that smaller bills are more affected because they themselves use them more frequently. Testing this, the authors note that when people receive the same amount in small-denomination bills (four five-dollar bills), they spend more than when they hold a large bill (one twenty-dollar bill), and they find that the physical appearance of money can amplify, reduce, or even reverse this effect. [Sources: 6, 7, 9]

Despite evidence that currency denomination can affect spending, researchers had yet to explore whether the appearance of money can do the same. In addition to providing some of the earliest evidence that the appearance of money can change spending behavior, this study offers insight into the relative power of denomination on spending: it finds that the appearance of money can amplify, mitigate, and even reverse the previously documented effects of denomination. [Sources: 9]

So, like Serina, if you’ve ever wondered why we spend low-value bills more readily than the same amount held in high-value bills, one explanation is the denomination effect. We tend to value smaller bills less than large ones, without paying much attention to why. [Sources: 0]

In other situations not covered in this article, consumers may be indifferent between the two bills on hand, upholding the principle of monetary invariance, or may be more likely to spend the smaller bills, upholding the denomination effect discussed above. We hypothesize a price-denomination effect: people will pay cash when the denominations they hold match the price to be paid, and use a card when they do not. [Sources: 7]

This means that when denominations do not match the price, consumers are less likely to rely on price information when choosing which bill to pay with. The results of Study 2 show that people choose bills that match the prices of the purchases they are considering over bills that do not, and that they make those decisions faster. We also examined price matching as a potential mechanism for this effect and found that people decide faster when they choose a bill that matches the price (Study 2). [Sources: 5]

We found that when the price matches the denominations of the customer’s cash, consumers are more likely to use cash than a debit card, indicating that price denomination also affects the choice of payment method. The price-denomination effect predicts that people are more likely to pay in cash when the price matches the value of the bills they hold, and not otherwise. Consistent with the denomination effect, participants were also more likely to spend when they received four quarters instead of a one-dollar bill. [Sources: 0, 5, 7]
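The matching idea can be illustrated with a small helper (hypothetical, not taken from the cited studies) that checks whether some subset of the bills in a wallet sums exactly to the price, i.e., whether the denominations on hand “match” the price:

```python
from itertools import combinations

def can_match_price(wallet, price):
    """True if some subset of the bills/coins sums exactly to the price."""
    return any(sum(combo) == price
               for r in range(1, len(wallet) + 1)
               for combo in combinations(wallet, r))

# A single $20 bill cannot match a $15 price, but four $5 bills can:
print(can_match_price([20], 15))          # False
print(can_match_price([5, 5, 5, 5], 15))  # True
```

Under the price-denomination hypothesis above, the first customer would reach for a card while the second would pay cash.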

Study 1B replicated the effect of Study 1A at higher spending levels and higher prices, exploring whether the effect generalizes to a costlier setting with a larger set of prices and larger price reductions. As with the earlier results, these effects did not depend on the face value. [Sources: 5, 7, 9]

Consumers also violate descriptive invariance by spending more when using a credit card than when using cash (Raghubir and Srivastava, 2008), and by spending more when they hold smaller bills rather than larger ones (Mishra et al., 2006; Raghubir and Srivastava, 2009). Called the denomination effect, our tendency to avoid spending larger bills also suggests that people prefer to receive money in larger denominations. According to a 2009 study published in the Journal of Consumer Research, people tend to associate smaller bills with smaller, more varied purchases; this means that if you carry ten $10 bills, you will spend them faster than a single $100 bill. [Sources: 3, 5]

A similar effect often occurs in finance, where the unit price of an asset shapes behavior: investors tend to reduce purchases when unit prices are large. When a company’s stock trades at a relatively high price before a split, such as five-for-one, investors are more likely to switch to and invest in other companies with lower absolute prices. Lower unit prices thus increase people’s absolute expenditures. [Sources: 8]

If a person receives £20, they are more likely to spend it if it is offered in smaller denominations (such as £1 coins or change) than as a single £20 note. Using mental accounting theory (Thaler, 1985; Thaler and Shefrin, 1981), the authors suggest that people may treat large denominations of currency as “real money”, while equivalent amounts in smaller denominations are treated as petty cash or change (Raghubir and Srivastava, 2009). [Sources: 1, 9]

You are not alone, and as research shows, our reluctance to break big bills can in fact be used to trick our minds into spending less. The matching effect can also extend to other payment methods, such as gift cards. These findings underscore the growing interest in the denomination effect. [Sources: 3, 9, 11]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://absolutedecisionsblog.wordpress.com/2018/03/23/the-denomination-effect-in-consumer-spending-behaviour/

[1]: https://www.adcocksolutions.com/post/no-16-of-36-the-denomination-effect

[2]: https://www.businessinsider.com/cognitive-biases-2015-10

[3]: https://au.finance.yahoo.com/news/how-to-save-money-using-cash-235843261.html

[4]: https://www.journals.uchicago.edu/doi/10.1086/689867

[5]: https://www.frontiersin.org/articles/10.3389/fpsyg.2020.552888/full

[6]: https://www.eurekalert.org/news-releases/630713

[7]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7509091/

[8]: https://internationalbanker.com/brokerage/cognitive-bias-series-6-denomination-effect/

[9]: https://academic.oup.com/jcr/article/39/6/1330/1825424

[10]: https://www.npr.org/templates/story/story.php?storyId=104063298

[11]: https://www.sciencedirect.com/science/article/abs/pii/S0148296321000990

Berkson’s Paradox

More broadly, we might call this the Berkson effect whenever an unusual selection force is at work that is likely to create otherwise unexpected relationships. Such a correlation clearly does not exist, however, once we restrict our analysis to the NBA. [Sources: 11]

An obvious reading of Berkson’s paradox is that people who are good at basketball but short, and people who are tall but weaker players, can both make it to the NBA, while people who are both short and poor at basketball are excluded by the intense competition. For a short player, rare skill can easily outweigh the height advantage seen in the broader population; height appears unrelated to playing in the NBA because a short NBA player must have abnormally high skill to break into that tight circle. [Sources: 5, 11]
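This selection story is easy to reproduce numerically. In the hypothetical simulation below, height and skill are drawn independently, so they are essentially uncorrelated in the full population; keeping only people whose combined height-plus-skill score clears a high bar (the “NBA”) induces a clearly negative correlation among those selected:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

random.seed(0)
height = [random.gauss(0, 1) for _ in range(100_000)]
skill = [random.gauss(0, 1) for _ in range(100_000)]  # independent of height

# "NBA" selection: only people with an extreme combined score get in
nba = [(h, s) for h, s in zip(height, skill) if h + s > 2.5]
nba_height, nba_skill = zip(*nba)

print(round(pearson(height, skill), 2))          # close to 0 in the population
print(round(pearson(nba_height, nba_skill), 2))  # clearly negative in the "NBA"
```

The negative correlation among the selected group is an artifact of the cutoff: among people who cleared the bar, being short implies having been unusually skilled.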

As a result of bias in our data collection, we end up seeing that 100% of the least educated are successful (Figure 3, green), but only a fraction of the highly educated are (Figure 4, green). This is the result of a trade-off between the GPA and SAT scores of the people surveyed. [Sources: 4, 5]

It is tempting to spot counterintuitive correlations and build a story around them, but they are often unsurprising once we realize that a small, select circle is being chosen on two dominant attributes. In general, if two factors both influence selection into a sample, we say they “collide” upon selection (see Figure 2a). The lesson is that sample bias can produce spurious correlations between variables. Relatedly, we may find in an aggregated dataset that dividing the data into groups by some criterion yields results completely opposite to the aggregate picture. [Sources: 1, 5, 7, 9]

One of the most famous examples of the related Simpson’s paradox is the study of gender bias in graduate admissions at the University of California, Berkeley. Berkson’s paradox itself is named after Joseph Berkson, who pointed out the selection bias in case-control studies designed to identify causal risk factors for disease: because the samples are drawn from hospital patients rather than the general population, spurious negative associations between a disease and its risk factors can arise. [Sources: 1, 7, 13]

This example closely parallels Berkson’s original 1946 paper, in which he noted a negative correlation between cholecystitis and diabetes among hospital patients, despite diabetes being a risk factor for cholecystitis. The paradox can give the impression of a negative correlation when the two variables are in fact positively correlated or entirely independent. Berkson’s paradox arises when such an association appears in the data even though the two properties are uncorrelated, or even positively correlated, because members of the population lacking both properties are observed less often. [Sources: 3, 5, 6]

According to Berkson’s paradox, two attributes that appear related in a selected group may be unrelated in the population at large. In other words, given two independent events, if we consider only the outcomes in which at least one occurs, the events become negatively dependent: within that subset, the presence of B lowers the conditional probability of A, which explains the negative dependence of two independent events given that at least one of them occurs. The remedy for any instance of Berkson’s fallacy is to define or characterize the population properly and then statistically examine a representative portion of it to test the relationship between A and B. [Sources: 0, 13]
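The claim about two independent events becoming negatively dependent can be checked with exact arithmetic. Assuming, purely for illustration, two independent events A and B that each occur with probability 1/2:

```python
from fractions import Fraction

p_a, p_b = Fraction(1, 2), Fraction(1, 2)  # two independent events
p_both = p_a * p_b                          # P(A and B) = 1/4
p_union = p_a + p_b - p_both                # P(A or B) = 3/4

# Condition on the selected subset where at least one event occurred:
p_a_given_union = p_a / p_union  # P(A | A or B) = 2/3, since A implies A-or-B
p_a_given_b = p_both / p_b       # P(A | B) = P(A | B, A or B) = 1/2

print(p_a_given_union, p_a_given_b)  # 2/3 versus 1/2
```

Although A and B are independent overall, inside the selected subpopulation the probability of A drops from 2/3 to 1/2 once B is known to have occurred: a negative dependence created purely by selection.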

A suitable procedure has also been proposed for drawing inferences about a population from a biased sample that exhibits all the characteristics of Berkson’s paradox, in particular when the study design carries an inherent selection bias. Like other threats to causal inference, once you know about collider bias you will see it lurking everywhere: it can introduce an association where two factors were effectively independent, or diminish, exaggerate, or even reverse an existing association in ways that are difficult to predict. [Sources: 8, 9, 12]

In a recent article, we summarized how collider bias could play a role in Covid-19 research and identified hundreds of demographic, genetic, and health-related factors that influence the likelihood of a person being selected for Covid-19 testing among UK Biobank participants. Observational errors and subgroup differences can easily lead to statistical paradoxes in any data-analysis application: hidden variables, collider variables, and class imbalance can all create them. In this article, we look at three of the most common types of statistical paradoxes found in data science. [Sources: 3, 9]

A prime example is the observed negative association between Covid-19 severity and cigarette smoking (see, for example, Griffith et al., 2020, published in Nature Communications), which may be a case of collider bias, also called Berkson’s paradox. The most common illustration of Berkson’s paradox is the spurious observation of a negative correlation between two positive traits, namely that members of the population who have one positive trait tend to lack the second. [Sources: 3, 5]

Berkson’s paradox broadly refers to the tendency of subpopulations created by some selection effect, such as a cutoff, to give the impression of correlations that do not exist, or even run in the opposite direction, in the larger population. I have seen more general definitions of Berkson’s paradox (sometimes Berkson’s bias) equate it with selection effects in general, but it is mainly used for situations where a portion of a population or dataset is excluded which, if included, would weaken or reverse the observed correlation. [Sources: 11]

In fact, this erroneous observation rests on mistaken assumptions about cause and effect and on bias in data collection. The problem is most often described in medical statistics or biostatistics, as in Joseph Berkson’s original account. The related Simpson’s paradox is a statistical phenomenon in which trends appear in separate data sets but disappear or reverse when the sets are combined. [Sources: 1, 4, 8]

If we look at a phenomenon from only one side, we will keep seeing the same pattern even in data that point the opposite way. If an observer examines only the stamps on display, he will discover a spurious negative relationship between beauty and rarity as a result of selection bias (i.e., a lack of beauty implies rarity in the exhibition, but not in the general collection). [Sources: 1, 13]

Berkson’s paradox states that two independent events become negatively dependent if we consider only the outcomes in which at least one of them occurs. It is one of the results that can arise from reasoning about conditional probabilities. The paradox, also known as Berkson’s bias, collider bias, or Berkson’s fallacy, is a consequence of conditional probability that often seems illogical and is therefore a veridical paradox. [Sources: 0, 10, 13]
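The selection effect described above is easy to reproduce numerically. Below is a minimal sketch (my own illustration, not taken from the cited sources): two independent binary traits become negatively correlated once we condition on at least one of them being present. The trait probability of 0.3 is an arbitrary choice.

```python
import random

random.seed(0)

# Two independent binary traits, e.g. "beautiful" and "rare" stamps.
population = [(random.random() < 0.3, random.random() < 0.3)
              for _ in range(100_000)]

def correlation(pairs):
    """Pearson correlation between two binary variables."""
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    cov = sum((a - mx) * (b - my) for a, b in pairs) / n
    sx = (sum((a - mx) ** 2 for a, _ in pairs) / n) ** 0.5
    sy = (sum((b - my) ** 2 for _, b in pairs) / n) ** 0.5
    return cov / (sx * sy)

# In the full population the traits are independent: correlation ~ 0.
print(round(correlation(population), 3))

# Condition on the "collider": keep only items with at least one trait
# (e.g. stamps selected for display). A strong negative correlation appears.
selected = [p for p in population if p[0] or p[1]]
print(round(correlation(selected), 3))
```

The second correlation comes out strongly negative even though the traits were generated independently, which is exactly the Berkson effect.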

Berkson’s paradox was originally discovered in the context of epidemiological studies tracing the relationship between diseases and exposure to potential risk factors. Berkson’s original illustration involves a retrospective study examining a risk factor for disease in a statistical sample drawn from an inpatient hospital population. [Sources: 2, 13]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://medium.com/@darshit0111/berksons-paradox-6d27225c47cb

[1]: https://www.sergilehkyi.com/7-data-paradoxes-you-should-be-aware-of/

[2]: https://brilliant.org/wiki/berksons-paradox/

[3]: https://towardsdatascience.com/top-3-statistical-paradoxes-in-data-science-e2dc37535d99

[4]: https://www.futurescienceleaders.com/blog/2021/03/berksons-paradox-a-statistical-illusion/

[5]: https://moontowermeta.com/berksons-paradox/

[6]: https://www.jaad.org/article/S0190-9622(21)00227-9/fulltext

[7]: http://corysimon.github.io/articles/berksons-paradox-are-handsome-men-really-jerks/

[8]: https://psychology.fandom.com/wiki/Berkson%27s_paradox

[9]: https://rss.onlinelibrary.wiley.com/doi/10.1111/1740-9713.01413

[10]: https://rf.mokslasplius.lt/berkson-paradox/

[11]: https://freddiedeboer.substack.com/p/beware-berksons-paradox

[12]: https://pubmed.ncbi.nlm.nih.gov/31696967/

[13]: https://en.wikipedia.org/wiki/Berkson%27s_paradox

Sunk Cost Fallacy

Commitment bias boils down to constantly trying to convince ourselves and others that we are making rational decisions. We do this by maintaining consistency in our actions and defending our decisions in front of the people around us, because we believe it will earn us more respect. [Sources: 3]

Barry M. Staw was the first to study and describe commitment bias. He hypothesized that this pattern stems from a need for consistency, which seems to be a driving force in all humans. Inconsistency causes unpleasant feelings and is related to cognitive dissonance. Since people are reluctant to acknowledge the negative consequences of their decisions, confirmation bias also escalates commitment. A bias closely related to the sunk cost fallacy is escalation of commitment, also known as commitment bias. [Sources: 0, 4, 7, 12]

Under commitment bias, we carefully select information that makes our decision seem right, while minimizing or completely ignoring evidence that we made the wrong choice. We may be justifying our past behavior to ourselves or to others. [Sources: 7]

My hypothesis is that we feel pressure to honor sunk costs in some cases but not others because we want to tell flattering yet believable stories about our behavior over time, stories in which we have not been thwarted by diachronic adversity, and honoring sunk costs can help us do that. Another explanation for why we take sunk costs into account invokes prospect theory (Kahneman and Tversky, 1979), a descriptive theory of risky decision-making. [Sources: 9]

Economists and behavioral scientists use the term “sunk cost fallacy” to describe the justification of further investment of money or effort in a decision on the basis of prior cumulative investment (“sunk costs”), despite new evidence that the future costs of continuing exceed the expected benefit. The sunk cost fallacy increases the likelihood that a person or organization will continue an endeavor in which they have already invested money, time, or effort, even if they would never have started it without that prior investment. [Sources: 4, 6, 8]

The larger the sunk investment, the more people are inclined to invest additional funds, even when the added return is not worth it. The idea that further investment in something you’ve already invested in will “make it work” is not only a result of the sunk cost fallacy: the additional investment also deepens commitment and can increase the likelihood of still further investment. Perhaps even worse, increased commitment to a particular course of action through past investment alone can block needed change and limit innovation. [Sources: 6]

Research by Ohio University psychologist Hal Arkes and his collaborator Catherine Blumer shows that sunk costs can even shape which entertainment we choose. When deciding whether to walk away, people usually factor in the costs already invested, which makes some options feel more expensive than they are. These are the costs we weigh most heavily when making a choice. [Sources: 6, 12]

Examples of prospective costs are potential losses or replacement costs. In rational economics, sunk costs are not part of the decision-making process. People commit the sunk cost fallacy when they continue a behavior or endeavor because of previously invested resources of time, money, or effort (Arkes & Blumer, 1985). [Sources: 2, 12]
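The rational rule described here, that only prospective costs and benefits should enter the decision, can be sketched in a few lines. The function names and all the dollar figures below are hypothetical, chosen only to illustrate the contrast:

```python
def should_continue(future_cost, expected_benefit, sunk_cost=0.0):
    """Rational rule: compare only prospective costs and benefits.
    sunk_cost is accepted as an argument purely to show it is ignored."""
    return expected_benefit > future_cost

def fallacious_continue(future_cost, expected_benefit, sunk_cost):
    """The sunk cost fallacy in effect adds the sunk amount back into
    the perceived value of finishing ("otherwise it was all wasted")."""
    return expected_benefit + sunk_cost > future_cost

# A project has already consumed $900k (sunk). Finishing costs another
# $300k, but the finished product is worth only $200k.
print(should_continue(300_000, 200_000, sunk_cost=900_000))    # False
print(fallacious_continue(300_000, 200_000, 900_000))          # True
```

The rational rule says stop; the fallacious rule, weighed down by the $900k already spent, says escalate.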

There is also an interesting phenomenon, the so-called “reverse sunk cost effect” (Heath 1995) or “pro rata fallacy” (Baliga & Ely 2011), in which decision-makers respond to sunk costs by abandoning, rather than completing, the project on which the costs were spent. In Teger’s and then Ross and Staw’s studies, situations in which withdrawing from a course of action costs more than continuing it left decision-makers trapped in their current, costly behavior. In Tesco’s case, those tasked with making the best decisions often escalated their efforts irrationally, and such irrational decision-making was clearly costly. [Sources: 4, 5, 9]

First, we aimed to examine the extent to which people escalate their involvement in a sequential decision-making situation. We report the results of an experiment inspired by the seminal study (Staw, 1976) that introduced the concept of “escalation of commitment,” a variant of the sunk cost fallacy. A critical aspect of managerial decision-making is managing the risk of escalating commitment to a failing course of action. Such situations typically involve substantial resources, such as time, money, and people, already invested in a project. [Sources: 3, 4, 5, 11]

You hope that additional effort and resources will turn things around. Sometimes it makes perfect sense to want to continue a project because of the resources you’ve already invested in it. But once you have spent a large amount of resources, wishful thinking tends to take over. [Sources: 3, 9]

Don’t escalate your commitment (despite evidence to the contrary) just because you’ve already spent so much time, money, energy, and emotion on a course of action. Entrepreneurs are ripe for escalating commitment, as evidenced by their willingness to keep investing in their businesses even when things go wrong. For example, research has found that those responsible for prior losses tend to view troubled projects more positively, and are more likely to commit additional resources to them, than people who took the projects over halfway through. Reinforcement, misrepresentation, and self-justification, three psychological forces we are all exposed to, can keep us locked into projects or actions we initiate. [Sources: 1, 6, 10]

When others observe our behavior, however, most management decisions involve additional factors. Similarly, it is important for managers to be aware of the potential harm of decisions driven solely by a powerful leader. [Sources: 1, 3]

This may sound counterintuitive, but it can provide excellent insight into your thinking process when making decisions. Challenge decisions by asking open-ended, detailed, laser-focused questions that help you understand the parties’ positions and interests. The party directing you may not have the understanding, data, or cost-benefit analysis needed to decide whether to proceed. [Sources: 10, 12]

Second, I believe that once we understand why we feel pressure to honor sunk costs, it is no longer clear that doing so is always irrational. As long as this desire is not outweighed by other considerations, it need not be irrational to take sunk costs into account. [Sources: 9]

The term “sunk” refers to the irrecoverable nature of the costs incurred. The term is also used to describe bad decisions in business, government, information systems (especially software project management), politics, and gambling. [Sources: 8, 12]

Escalation of commitment is a pattern of human behavior in which an individual or group facing increasingly negative consequences from a decision, action, or investment continues the behavior rather than changing course. [Sources: 4]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://askinglot.com/what-is-escalation-bias

[1]: https://hbr.org/1987/03/knowing-when-to-pull-the-plug

[2]: https://www.behavioraleconomics.com/resources/mini-encyclopedia-of-be/sunk-cost-fallacy/

[3]: https://thinkinsights.net/strategy/escalation-of-committment/

[4]: https://en.wikipedia.org/wiki/Escalation_of_commitment

[5]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6041483/

[6]: https://leepublish.typepad.com/strategicthinking/2015/03/sunk-cost-fallacy.html

[7]: https://thedecisionlab.com/biases/commitment-bias/

[8]: https://psynso.com/escalation-of-commitment/

[9]: https://quod.lib.umich.edu/e/ergo/12405314.0006.040/–sunk-cost-fallacy-is-not-a-fallacy?rgn=main;view=fulltext

[10]: http://www.negotiations.ninja/blog/the-danger-of-the-escalation-of-commitment/

[11]: https://www.sciencedirect.com/science/article/pii/S0014292121001562?dgcid=rss_sd_all

[12]: https://neurofied.com/sunk-cost-fallacy/

Gambler’s Fallacy

By definition, independent random events cannot be predicted with certainty. Thinking otherwise is a mistake, because past events do not change the likelihood of events occurring in the future. [Sources: 2, 4]

The gambler’s fallacy is best defined as the tendency to think that future probabilities are altered by past events when in reality they are unchanged. The gambler errs in judging whether a series of events is truly random and independent, and wrongly concludes that the outcome of the next event will be the opposite of the preceding run. If we estimate the likelihood of an event from past independent events, we are committing the gambler’s fallacy. It stems from our tendency to assume that if a random event has happened many times in the past, it will happen more or less often in the future. [Sources: 2, 3, 5, 6]
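The independence claim is easy to check by simulation. The following is a minimal sketch (my own illustration, not from the cited sources): count how often a fair coin lands heads immediately after a run of five heads. The run length of five is an arbitrary choice.

```python
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Flips that immediately follow a run of five heads. If the gambler's
# fallacy were right, these would land heads less than half the time.
after_streak = [flips[i] for i in range(5, len(flips))
                if all(flips[i - 5:i])]

print(len(after_streak))                                # streaks are common
print(round(sum(after_streak) / len(after_streak), 3))  # ~0.5, not lower
```

Despite tens of thousands of five-head runs, the next flip is still heads about half the time; the streak carries no information.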

So we try to rationalize random events in order to explain them and make them predictable. When we are busy rationalizing random events, we are not thinking clearly. Because the gambler’s fallacy leads us to base expectations of future events on past performance, we do not make informed choices. It also reframes the question of “rationality” implied by the term “fallacy”: we can ask whether it is really illogical to behave as if past results help us predict future outcomes. [Sources: 3, 4, 5]

Such ideas and decisions are wrong precisely because past events do not change the probability of future events. The gambler’s fallacy describes a person who mistakenly believes that a past event, or series of events, will affect future events. The error occurs when a person wrongly believes that a particular event is more or less likely to occur depending on the outcome of a series of previous events. [Sources: 5]

The root of the fallacy is our tendency to judge whether an event will occur by the number of times it has occurred in the past. It rests on the misunderstanding that previous events make future events more or less likely: that when an event has occurred more frequently than usual, it becomes less likely to happen in the future. [Sources: 5]

There are also potentially unique issues associated with sequential events. These could plausibly reflect a different process from the gambler’s fallacy observed while gambling; it would be interesting to see whether similar effects show up in investment scenarios, where decisions are made regularly on the basis of sequential information such as daily or hourly performance. One possible approach would be to use a developmental paradigm to test whether anything like the gambler’s fallacy arises before an understanding of the properties of chance develops. In this light, the phenomena described here are easy to understand: beliefs about a relationship with previous results can provide a valuable (or, in the case of random events, the only) apparent source of information about what might happen in the future, and increase our sense of predictability. [Sources: 4]

We often select past experiences that we believe should resemble future events, or that we believe should reflect an ideal outcome. However, once a base rate is established, or at least fairly estimated, we can predict when certain events will return to the average. We see regression toward the mean because each individual event tends to bring us back toward the original baseline (assuming the baseline remains the same). [Sources: 1, 3]

In fact, the same is true in both the frequency domain and the probability domain. Flipping coins against LeBron James will not change the results, while playing a game of skill against someone at your own level will make the results more or less random. Unless you think your mindset and processes are actually changing every week, the most likely outcome for any given week is your long-term average return. [Sources: 1, 8]

I find that the gambler’s fallacy is so strong because it is difficult for people to understand how reversion to the mean occurs. The fallacy can be severe enough to influence decision-making, because people tend to “take action” after extreme events. [Sources: 1]

This can be explained by the tendency of traders to judge a decision based on its outcome rather than on the quality of the decision at the time it was made. To illustrate outcome bias, consider a trader trading a system that relies primarily on Fibonacci retracements and extensions. To avoid outcome bias, the trader should think of his trading as a bundle of many trades. [Sources: 6]

Confirmation bias can be explained as the tendency to seek, interpret, focus on, and remember information in a way that confirms a preconceived notion. Distortion: after generalizing, people distort information to fit their beliefs. Framing effect (cognitive bias): people respond differently to the same choice depending on how it is presented. [Sources: 0, 6]

Anchoring can be defined as the tendency to rely on, or over-weight, one trait or piece of information when making decisions (usually the first piece learned on the topic). Cognitive biases are best explained as systematic errors in thinking that influence people’s decisions and judgments. They arise from our brains’ attempts to simplify information processing. [Sources: 6]

Hindsight bias, sometimes called the “knew-it-all-along” effect, is the tendency to view past events as having been predictable at the time they happened, even though there was little objective basis for predicting them. The gambler’s fallacy is the tendency to think that future probabilities are altered by past events when in reality they are unchanged. [Sources: 0, 7]

Focusing effect: the tendency to attach too much importance to one aspect of an event. Illusion of control: the tendency to overestimate one’s degree of influence over external events. [Sources: 7]

Conjunction fallacy: the tendency to assume that specific conditions are more probable than general ones. Information bias: the tendency to seek information even when it cannot affect action. Base rate fallacy (or base rate neglect): the tendency to ignore base rate information (general prevalence) and focus on specific information (information relevant only to the particular case). [Sources: 0, 7]

Availability heuristic: the tendency to overestimate the likelihood of events that are more “available” in memory, which can depend on how recent the memories are or how unusual or emotionally charged they may be. The gambler’s fallacy, also known as the Monte Carlo fallacy, occurs when a person mistakenly believes that a certain random event is less or more likely to happen based on the outcome of a previous event or series of events. The hot-hand fallacy (also known as the hot-hand phenomenon, or simply the hot hand) is the belief that a person who succeeds at a random event is more likely to succeed in further attempts. The error arises from a mistaken conception of the law of large numbers. [Sources: 1, 2, 7]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://www.studystack.com/flashcard-2548692

[1]: https://www.actionnetwork.com/education/jonathan-bales-endowment-effect-gamblers-fallacy-betting-fantasy-sports

[2]: https://www.investopedia.com/terms/g/gamblersfallacy.asp

[3]: https://thedecisionlab.com/biases/gamblers-fallacy/

[4]: https://www.nature.com/articles/palcomms201650

[5]: https://bizzbucket.co/gamblers-fallacy-why-it-matters-in-business/

[6]: https://tgfx-academy.com/lessons/lesson-2-psychological-biases/

[7]: https://uxinlux.github.io/cognitive-biases/

[8]: https://marro.io/bias/interpretation/probability.html

Hot-Hand Fallacy

Among their best-known findings, the researchers appear to have correctly shown that people tend to overestimate the hot-hand effect and to spot patterns in randomness where none exist. Scientists argue that people misinterpret randomness and draw the wrong conclusions. [Sources: 0, 11]

The first of the biases is the gambler’s fallacy, which leads a person to assume that a long run of heads or tails changes the likelihood of getting heads or tails on the next flip. Gamblers’ reactions to this belief, with winners placing safer bets on the assumption that they are due to lose, and losers placing long-shot bets in the belief that their luck is about to change, have the effect of lengthening their streaks. [Sources: 8, 10]

Maybe people on winning streaks really are better at betting than people who are not hot. The idea is that during a losing streak, a player’s luck is due to turn, and he will start winning. Likewise, in gambling, when we have a winning streak, the hot-hand fallacy leads us to believe our success will continue. [Sources: 9, 10]

In other words, the hot-hand fallacy is believing that winning many times in a row increases your chance of winning the next bet, while the gambler’s fallacy is believing that losing many times in a row increases your chance of winning the next bet. Both beliefs are wrong. A hot-hand error occurs when the player thinks the probability of winning consecutively is greater than it really is. The tendency of winners to exchange their lottery tickets for new tickets rather than cash is consistent with the hot-hand phenomenon: people who have won in the past believe they are more likely to win again. Belief in the hot hand stems from the illusion of control, in which people believe that they or others can influence randomly determined events. [Sources: 8, 10]

The hot-hand fallacy is a psychological condition in which people judge a person to be “hot” or “cold” based on past performance, even though that performance has no bearing on future results. The hot hand is a social cognitive bias in which a person believes that past successes predict success in future endeavors. Key point: the hot hand is the belief that after a series of successes, a person or organization is more likely to keep succeeding. The hot-hand belief is shared by many gamblers and investors alike, and psychologists believe it comes from the same source: the representativeness heuristic. [Sources: 6, 8]

While there is some truth to the idea that a person can be in good form, or that confidence can be reinforced by initial success, the hot-hand phenomenon goes beyond that into statistically inaccurate assumptions. The fact that a team or player does well over a short period does not contradict their overall average, but the hot-hand fallacy leads us to believe it does. We are then likely to place bets reflecting the logical error and, as a result, lose money. This is why it is so easy for a trader to believe, after a series of winning trades, that his hand is hot and he cannot lose. [Sources: 3, 9, 11]

Hot-hand bias occurs when a trader believes that because they are on a winning streak, they have a hot hand and cannot lose. Belief in the hot hand is also prevalent in sports, especially basketball, where a player’s performance over some stretch is judged significantly better on the strength of a successful shooting streak. When something like this happens, most observers and players assume that the player in question has a hot hand: that for some reason he has entered a state that makes shooting easier than usual (easier still, in the case of a born scorer like Thompson). When people see a streak like Craig Hodges hitting 19 three-pointers in a row, or some other outstanding performance, they usually attribute it to a hot hand. [Sources: 0, 2, 8, 11]

The idea that basketball players can get a hot hand, a stretch in which they magically seem to hit shot after shot, resonates with sports reporters and audiences alike. This column examines whether the hot-hand idea has any real foundation. In basketball, recent research has focused mainly on controlled settings such as shooting experiments, NBA three-point contests, and free throws, and researchers find evidence of a hot hand even under these controlled conditions (Arkes 2010, Miller & Sanjurjo 2018, Miller & Sanjurjo 2019). On the other view, belief in the hot hand is simply an illusion arising from the human predisposition to see patterns in randomness: we see streaks even though the shooting data are essentially random. [Sources: 2, 5]

GVT concluded that the hot hand is a “cognitive illusion”: people’s tendency to detect patterns in randomness, and to see perfectly typical streaks as atypical, led them to believe in an illusory hot hand. Importantly, GVT found that professionals (players and coaches) were not only victims of the fallacy, but that their belief in the hot hand grew steadily. GVT generated significant interest in the hot hand across fields and sports, including baseball (Green and Zwiebel, 2018), horseshoe pitching (Smith, 2003), tennis (Klaassen and Magnus, 2001), and bowling (Dorsey-Palmateer and Smith, 2004). Research by University College London psychology professor Nigel Harvey and graduate student Juemin Xu, published in the May 2014 issue of the journal Cognition, found that gamblers on an online betting site who believed in the common gambler’s fallacy ended up experiencing the opposite effect, the hot-hand fallacy (via Cardiff Garcia). [Sources: 2, 5, 10]

There is also a hot-hand tendency in financial markets, where traders try to delegate their investment decisions to professional fund managers with proven track records, believing they can consistently maintain high performance. Just as a coin produces random streaks when tossed, a basketball player produces random streaks when shooting. Basketball players and fans alike tend to believe that a player’s chances of making a shot are higher after a hit than after a miss on the previous shot. In the classic test of this intuition, players’ field goal percentage after streaks of hits was not significantly higher than after streaks of misses. [Sources: 1, 2, 11]

The results showed that these shooters made 57.3% of their shots after a streak of three or more hits and 57.5% after a streak of three or more misses. About half of the shooters (52%) had fewer streaks of hits or misses than expected, and the other 48% had more. Thus, there was no consistent pattern linking hits, misses, and the next shot. [Sources: 4]
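The null model behind this kind of analysis, a shooter whose makes are independent with a fixed hit probability, can be sketched as follows. This is only an illustration of the idea, not the studies’ actual method, and the 50% hit rate and streak length of three are assumptions:

```python
import random

random.seed(7)

def shooting_season(p_hit=0.5, n_shots=1_000_000):
    """Simulate an iid shooter; compare hit rates after hot and cold streaks."""
    shots = [random.random() < p_hit for _ in range(n_shots)]
    after_hits = [shots[i] for i in range(3, n_shots)
                  if all(shots[i - 3:i])]        # previous 3 all makes
    after_miss = [shots[i] for i in range(3, n_shots)
                  if not any(shots[i - 3:i])]    # previous 3 all misses
    return (sum(after_hits) / len(after_hits),
            sum(after_miss) / len(after_miss))

hit_rate_after_hits, hit_rate_after_misses = shooting_season()
# Both come out near the shooter's true 50%: the streaks carry no signal.
print(round(hit_rate_after_hits, 3), round(hit_rate_after_misses, 3))
```

Even this purely random shooter produces plenty of three-hit streaks, yet the make probability after a hot streak matches the one after a cold streak. (Note that pooling over one long sequence avoids the small-sample estimator bias that Miller and Sanjurjo identified in per-player streak averages.)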

The research aimed to test whether people are wrong to believe that players have hot hands and are more likely to make another successful shot after a series of hits. The study, which examined the intuitive concept of the hot hand and belief in shooting streaks, drew on mathematical psychology, behavioral decision-making, heuristics, and cognitive psychology. [Sources: 8, 11]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://www.thecut.com/2016/08/how-researchers-discovered-the-basketball-hot-hand.html

[1]: https://www.sciencedirect.com/science/article/pii/0010028585900106

[2]: https://www.scientificamerican.com/article/momentum-isnt-magic-vindicating-the-hot-hand-with-the-mathematics-of-streaks/

[3]: http://changingminds.org/explanations/theories/hot_hand.htm

[4]: https://www.samford.edu/sports-analytics/fans/2017/The-Hot-Hand-Myth-or-Reality

[5]: https://voxeu.org/article/basketball-s-hot-hand-myth-or-reality

[6]: https://www.investopedia.com/terms/h/hot-hand.asp

[7]: https://link.springer.com/article/10.3758/BF03206327

[8]: https://corporatefinanceinstitute.com/resources/knowledge/trading-investing/hot-hand/

[9]: https://thedecisionlab.com/biases/hot-hand-fallacy/

[10]: https://www.businessinsider.com/the-gamblers-fallacy-and-the-hot-hand-2014-4

[11]: https://capital.com/hot-hand-fallacy-bias

Plan Continuation Bias

One key issue is the possibility of plan continuation bias in linear processes, and how easy it is to become path-dependent, increasing risk and closing off the capacity for adaptation as you deepen a course of action. Hence, it is important for pilots to understand that plan continuation errors can occur, and to be aware of the risks of failing to analyze changes in a situation and consider their consequences when deciding whether a different course of action is more appropriate. As workload increases, especially in single-pilot scenarios, there is less and less mental capacity available to process these changes and consider the potential impact they could have on the original plan. Plan continuation bias can prevent crews from realizing that they need to change their course of action. [Sources: 6, 12, 13]

Plan continuation bias can be defined as the tendency of individuals to continue with an original course of action that is no longer viable, often despite changing conditions and current information about the situation (APA, 2020). Perhaps the most famous application of the concept is the airline pilot who is unexpectedly confronted with bad weather (or other changing conditions) on approach, but instead of diverting to another runway or aborting the landing, decides to press on with the landing at the originally planned destination. A NASA study of nine major U.S. plane crashes between 1990 and 2000 in which crew error was considered a probable cause found that pilots’ plan continuation bias tends to increase as they get closer to their destination. [Sources: 6, 9]

In other words, the closer the aircraft is to the final approach and landing phase, the more likely the crew is to continue even as the environment changes. [Sources: 6]

Simply put, when the journey is nearly complete, people tend to run on autopilot, ignoring changing and potentially hazardous environmental factors. A “mission at any cost” mentality tends to creep in and overwhelm the crew’s capabilities. Fatigue and stress are secondary factors, but they have a major impact on a pilot’s susceptibility to deviating from rules and procedures. [Sources: 7]

Situational awareness (SA) failures occur when plan continuation keeps the pilot from detecting important cues, or when the pilot fails to recognize their meaning. Plan continuation errors are more common in single-pilot light aircraft operations, and NASA does not devote resources to forensic investigation of every small aviation incident. A 2004 NASA Ames human factors study documented plan continuation errors; it analyzed 19 plane crashes attributed to crew error between 1991 and 2000. [Sources: 0, 5, 13]

In some incidents, we observed a snowball effect, in which decisions or actions at one stage of the flight increased the crew’s vulnerability to making mistakes later. For example, a crew that continued a highly questionable approach during a thunderstorm found themselves in a high-workload situation that may have contributed to their forgetting to arm the spoilers. NASA’s accident analysis, meanwhile, focused on human behavior. [Sources: 7, 12]

The bias arises when people stick to a plan even when it is going wrong. Cues that argue for a change of plan, even when people see and recognize them, often fail to steer them in a different direction. When those cues are weak or ambiguous, and cancelling the plan is somehow costly, it is not hard to predict which way people will lean. Accident investigators often conclude that crashes result from this bias: the idea of breaking off or changing the approach becomes not merely aggravating, costly, or unpleasant, but literally unthinkable. Simply put, plan continuation bias is the tendency we all share to continue along a path we have already chosen, without honestly re-checking whether it is still the best, or even a sensible, idea. [Sources: 1, 10, 11]

This particular form of cognitive bias is more complex (especially when viewed through the lens of plane crashes) than described here, but the concept is interesting because it can be studied in relation to the many decisions and paths we persist in pursuing despite current information, or even warning signs, suggesting they may not be the best or most expedient course of action. In this article, I describe how this bias permeates our psychology by observing how it operates in plane crashes, and then examine its impact on financial markets. Investors will learn how to combat the bias and improve their trading. [Sources: 1, 9]

I think there is something in these stories that suggests plan continuation bias (or what aviation pilots call “get-there-itis”): the tendency of people to continue their original course of action despite changing conditions, even when the plan is no longer viable. In the event of a GPS failure, it may come from a feeling that the technology will sort itself out, and that we will find a better route or a way out right around the next corner. In aviation, this tendency to press on is often fatal, especially among less experienced pilots. “Get-there-itis” is a colorful name for plan continuation bias, an unconscious cognitive bias toward continuing with the original plan despite changing conditions, and it can be fatal to general aviation pilots. [Sources: 0, 4, 6]

This bias can be especially strong during the approach phase, when only a few more steps are needed to complete the original plan, and it can prevent pilots from noticing subtle indications that the initial conditions have changed. Of all the entries on the list of cognitive biases, plan continuation bias is one of the most useful to keep in your mental-model cheat sheet. The further you stray down the wrong path, the stronger the bias becomes: task saturation sets in, situational awareness erodes, and you end up reacting defensively rather than thinking ahead. [Sources: 0, 10, 12]

Pilots' quick, and often flawed, mental simulations leave out many important factors. Flying a light aircraft single-pilot requires healthy routines, excellent motor skills, and an understanding of our cognitive biases. [Sources: 1, 5]

We can be just as guilty of sticking to the plan as the pilots above. We bought a stock that seemed like a good idea at the time, and we keep holding it even after the reason for the purchase has disappeared or proven wrong. Only in retrospect do we see that we should have changed our plan to adapt to changing conditions. [Sources: 1, 9]

When studying and analyzing plane crashes, it is very easy to fall into what cognitive scientists call hindsight bias. Related entries on the list of cognitive biases include automation bias, the tendency to over-rely on automated systems, which can lead to incorrect automated information overriding correct decisions, and, in human-robot interaction, the tendency of people to make systematic errors when interacting with a robot. [Sources: 2, 12]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://generalaviationnews.com/2013/05/20/protect-yourself-from-get-there-itis/

[1]: https://timlshort.com/2019/04/24/plan-continuation-bias-in-financial-markets/

[2]: https://en.wikipedia.org/wiki/List_of_cognitive_biases

[3]: https://www.ignitecsp.com/blog/plan-continuation-bias-or-oh-crap-what-do-i-do-now/

[4]: https://sakasandcompany.com/get-there-itis/

[5]: https://www.cessnaflyer.org/flighttraining/item/799-plan-continuation-bias-just-another-name-for-get-there-itis.html

[6]: https://www.onlydeadfish.co.uk/only_dead_fish/2020/06/the-danger-of-blindly-following-machines.html

[7]: http://aviationsafetyblog.asms-pro.com/blog/the-internal-aviation-sms-threat-plan-continuation-bias

[8]: https://criticaluncertainties.com/tag/plan-continuation-bias/

[9]: https://wiseducationblog.com/2020/08/22/when-sticking-to-the-plan-makes-us-stuck-plan-continuation-bias-and-university-career-plans/

[10]: https://medium.com/10x-curiosity/plan-continuation-bias-60efcc2b4cbe

[11]: https://thedailycoach.substack.com/p/we-must-eliminate-plan-continuation

[12]: https://humansystems.arc.nasa.gov/flightcognition/article2.htm

[13]: https://skybrary.aero/articles/continuation-bias

Time-Saving Bias

When asked to estimate how much time can be saved by increasing speed, drivers tend to underestimate the time saved when increasing from a relatively low speed and overestimate the time saved when increasing from a relatively high speed. In one study, drivers were presented with a scenario of accelerating from a relatively low speed in order to reach their destination on time, and were asked to estimate the time that could be saved by switching to a higher speed. The same time-saving bias was found in a study by Svenson (1973), in which participants were asked to judge the effect of increasing the speed of a physical object and underestimated the time saved at lower speeds. This indicates that the bias is not limited to purely cognitive tasks, since it persists when information about the problem comes from perceptual cues or from active driving. [Sources: 3, 4, 6]

A driver who wants to arrive on time will accelerate, but may misjudge the time that can actually be saved by increasing speed (Svenson, 2008). The time-saving bias in active driving cannot be attributed to misjudging average speed, because participants estimated average speed accurately. In one experiment, the mean time participants gained by increasing speed from 100 km/h was 2.21 minutes, significantly less than the three minutes required, t(11) = -3.23, p = .004; the participants therefore drove faster than necessary, gaining more time than needed when increasing from a low speed. The mistaken idea of how much time faster driving saves is called the time-saving bias, and it was explored in a study at the Hebrew University of Jerusalem, whose results showed that participants were biased when judging how much time they had saved, given distance, time, and speed. [Sources: 6, 7]

Accelerating from 10 km/h to 20 km/h saves 30 minutes over 10 km, but accelerating from 20 km/h to 30 km/h (the same speed increase) saves only 10 minutes, and accelerating from 30 km/h to 40 km/h saves only 5 minutes. By the time we reach the speeds that actually interest us, the time savings are minimal. As in the example above, increasing speed by just 5 km/h saved only two minutes, even though, starting from 65 km/h, it looks as if it would save a significant amount of time. Acceleration saves time, but less and less as the starting speed increases: the time saved by traveling at 100 km/h instead of 90 km/h over 100 km is a paltry 7 minutes. It turns out that the widespread, almost universal assumption that we will get where we are going much faster if we drive at higher speeds is, if not false, then at least far less important than we imagine. Once pointed out, it seems obvious. [Sources: 1, 7]
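The diminishing returns above follow directly from the fact that travel time varies with the inverse of speed. A minimal sketch that reproduces the figures in the paragraph (the function name is my own, not from the cited studies):

```python
def minutes_saved(distance_km, v_from, v_to):
    """Minutes saved by covering distance_km at v_to km/h instead of v_from km/h."""
    return 60.0 * distance_km * (1.0 / v_from - 1.0 / v_to)

# Equal 10 km/h increases save less and less time over a 10 km trip:
print(round(minutes_saved(10, 10, 20)))   # 30
print(round(minutes_saved(10, 20, 30)))   # 10
print(round(minutes_saved(10, 30, 40)))   # 5

# At highway speeds the gain is small even over 100 km:
print(round(minutes_saved(100, 90, 100))) # 7
```

Because the saving depends on the difference of reciprocals, the same absolute speed increase yields ever smaller gains as the starting speed rises, which is exactly the pattern people fail to intuit.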

Lowe believes that if the driver increases his average speed to 65 km per hour, it will shorten his trip by about two minutes. In a number of studies, consumers have been found to err in exactly these judgments, overestimating the time saved by increasing an already high speed while underestimating the time saved by increasing a low one. [Sources: 5, 7]

To save time, drivers press the gas pedal a little harder to get there faster. In the active-driving study, an alternative speedometer displaying pace (time per unit of distance) rather than speed was used to debias the judgments; the procedure was then repeated for a different speed (and distance). [Sources: 2, 6, 7]

Great efforts are made to persuade motorists not to speed, especially during holidays. Once we see how little time speeding actually saves, we should be less frustrated when stuck behind slower drivers, and we reduce the risk of crashes or run-ins with highway patrols. [Sources: 1]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://nlpnotes.com/2014/03/23/time-saving-bias/

[1]: https://www.futurelearn.com/info/courses/logical-and-critical-thinking/0/steps/9130

[2]: https://www.tandfonline.com/doi/full/10.1080/00140139.2015.1051592

[3]: https://www.sciencedirect.com/science/article/pii/S1369847811000659

[4]: https://pubmed.ncbi.nlm.nih.gov/20728651/

[5]: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2383205

[6]: http://journal.sjdm.org/13/13309/jdm13309.html

[7]: https://www.carhistory.com.au/resources/blog/does-driving-faster-actually-save-more-time

Zero-Sum Bias

For example, zero-sum bias can lead people to believe there is competition for a limited resource in one place when that resource is actually freely available elsewhere. Zero-sum bias can also affect the way people view transactions: they mistakenly believe there must always be one party who benefits most from a deal, even when both parties benefit equally, just in different ways. Because zero-sum bias makes people believe that for one person to gain something, another must lose the same thing, it encourages people to see social relations as antagonistic. This means zero-sum bias can lead people to mistakenly believe that, within a group, they are competing for specific resources with other members of that group. [Sources: 4, 12]

When applied to judgments between groups, the zero-sum heuristic concludes that a gain for another group (the outgroup) means a corresponding loss for one's own group (the ingroup). Zero-sum bias is a cognitive bias that leads people to intuitively judge a situation as zero-sum, that is, to mistakenly believe that one party's gains are directly offset by the other party's losses, when in fact the situation is not zero-sum. [Sources: 2, 4, 5, 12]

Zero-sum bias is the cognitive tendency toward zero-sum thinking: people intuitively judge a situation to be zero-sum even when it is not. Discussions of the bias usually assume that people in general lean toward zero-sum thinking, and that the remedy is to treat more situations as positive-sum. Leading people to view a situation as positive-sum does sometimes make them more cooperative, but this factor may be overestimated. [Sources: 8, 11]

Placing two groups in a non-zero-sum situation, where cooperation leads to mutual gain and competition leads to mutual loss, as in the Robbers Cave experiment (Sherif et al., 1961), can reduce zero-sum bias (Wright, 2000). One of the central ideas of economics is that trade is mutually beneficial, leaving both sides better off than they were before. One of the mistakes we often make when thinking about trade is to view it as a win-lose contest rather than a win-win. [Sources: 5, 10]

A situation involving a set of parties is zero-sum if one party's gain is another's loss, while it is positive-sum if the parties can reach a better outcome by cooperating with each other. Zero-sum thinking covers cases where a situation is judged to be zero-sum, meaning that any party's gains are directly offset by the losses of the other parties involved, or that one domain can expand only at the expense of another's contraction. [Sources: 3, 4]
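The definition above has a precise game-theoretic form: a two-player game is zero-sum exactly when the payoffs in every cell of its payoff matrix sum to zero. A minimal sketch, with illustrative payoff values of my own (not drawn from the cited studies):

```python
def is_zero_sum(payoffs):
    """payoffs: list of rows; each cell is a (player1, player2) payoff pair."""
    return all(a + b == 0 for row in payoffs for (a, b) in row)

# Matching pennies: strictly competitive, one player's gain is the other's loss.
matching_pennies = [[(1, -1), (-1, 1)],
                    [(-1, 1), (1, -1)]]

# A stag-hunt-like game: mutual cooperation (top-left) makes both better off.
stag_hunt = [[(3, 3), (0, 2)],
             [(2, 0), (1, 1)]]

print(is_zero_sum(matching_pennies))  # True
print(is_zero_sum(stag_hunt))         # False
```

Zero-sum bias, in these terms, is treating a game like the second one as if it were the first, overlooking the cell where both players gain.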

For example, zero-sum bias may be stronger in people who are highly egalitarian (Katz and Hass, 1988), prosocial (Van Lange et al., 1997; Kurzban and Houser, 2005), or from collectivist cultures (Triandis, 1995). Emotional adaptations to such competition may include envy (Hill and Buss, 2008), and cognitive adaptations may include a zero-sum heuristic, because the gains of others often did mean one's own loss, especially for indivisible resources such as mates or senior positions in a status hierarchy. Because zero-sum thinking was beneficial to early human survival, natural selection may have preserved it as an instinctive mode of thought in modern humans. [Sources: 1, 5]

This is a harmful way of looking at the world, not only for others but also for yourself. It has been described as "a general belief system about the antagonistic nature of social relations, shared by people in a society or culture, and based on the implicit assumption that a finite amount of goods exists in the world, in which one person's winning makes others the losers, and vice versa [...] a relatively permanent and general conviction that social relations are like a zero-sum game." In other words, people who hold this belief think that one person's loss is another's gain. [Sources: 0]

In salary negotiations, some people argue from principle, pointing to the money they bring into the company or to the (higher) salaries others receive for the same work. It is worth considering, instead, how to create value for the other party. Some people strengthen their position by securing offers elsewhere and negotiating on the basis of their value to others. [Sources: 7]

In such a context, people gain more resources for themselves by taking resources from others. The ancestors who survived and reproduced most successfully were those who intuitively grasped that one entity's gain of resources could come only at the expense of another's loss. Thus a creature that "wins" in a situation, taking more of the pie, can do so only at the expense of all the other creatures, which suffer a loss or receive less pie. [Sources: 1, 3]

If a creature takes a larger piece of the pie, all the other creatures must make do with smaller pieces. So whenever another creature takes one unit of a limited supply, be it a toy, a piece of land, or prey, you have one unit less of that resource at your disposal, which makes it harder for you to survive. [Sources: 1]

Under non-zero-sum conditions, for example when resources are effectively unlimited, applying this heuristic leads to the skewed judgment that desired resources are scarcer than they really are. Conversely, the relative neglect of opportunity cost in the context of altruistic action can be seen as a form of positive-sum bias. [Sources: 5, 8]

If you feel there is no alternative but to split a fixed pie, you may fail to look for ways to create value. For example, a child may mistakenly believe he is in a zero-sum situation regarding his parents' love for him and his siblings, thinking that the love given to a sibling must be taken away from the love felt for him. This research offers only an introductory look at the roots of such thinking, but it reminds us that our common disagreements on economic issues are rooted in deeper views of human nature and the fundamental character of human relations in economic life (and beyond). [Sources: 6, 7, 10]

But a vision enriched by economic history and theology casts humans not merely as mouths devouring the Earth's resources, but as productive gardeners and sub-creators, marked with a divine creative spark. Rather than framing the economy as buyers versus sellers and employees versus employers, we can rethink our work in the global economy as creators and servants, workers and contributors, laboring with our neighbors to paint a larger picture of God's abundance and harmony in society. [Sources: 10]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://www.goalcast.com/what-is-zero-sum-thinking/

[1]: https://academy4sc.org/video/zero-sum-bias-i-win-you-lose/

[2]: https://www.mendeley.com/catalogue/dd5c3469-a59d-35bd-b6c7-16567e8b5780/

[3]: https://www.lesswrong.com/posts/aAFanvZnmPJb666EQ/fight-zero-sum-bias

[4]: https://effectiviology.com/zero-sum-bias/

[5]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3153800/

[6]: https://psych.substack.com/p/zero-sum-bias-

[7]: https://www.psychologytoday.com/us/blog/statistical-life/201804/the-zero-sum-fallacy-in-negotiation-and-how-overcome-it

[8]: https://stefanfschubert.com/blog/2020/10/6/positive-sum-bias

[9]: https://cognitivebiases.net/zerosum-bias

[10]: https://blog.acton.org/archives/122444-win-win-denial-the-roots-of-zero-sum-thinking.html

[11]: https://dbpedia.org/page/Zero-sum_thinking

[12]: https://mmpi.ie/zero-sum-bias/