Compassion Fade

Compassion fade is a cognitive bias: the compassion people feel toward those in need decreases as the number of victims increases. It may result from the psychic numbing first described by Robert Jay Lifton. [Sources: 9]

Psychologist Paul Slovic coined the phrase after observing that as suffering grows, people's compassion shrinks. Our sympathy for suffering and loss falls off rapidly as more and more victims are put before us; as the number of victims of a tragedy increases, our sympathy and our willingness to help reliably decrease. [Sources: 0, 2, 6]

Compassion fade is the tendency for empathy to diminish as the number of people needing help grows. As psychologist and researcher Paul Slovic puts it, the more people a tragedy affects, the less empathy we feel. This erosion of compassion can significantly impede individual and collective (e.g., political) responses to urgent large-scale crises such as genocide, mass famine [5], or severe environmental degradation [32]. [Sources: 4, 5, 11]

The main tenet of this study is that compassion, and with it concern for others, often diminishes rather than increases as threats grow more serious. The main purpose of the article is to understand the psychological basis of this perverse phenomenon. In it, we explore how attention-driven affective feelings may underlie the finding that, when it comes to arousing compassion, a single individual with a face and a name usually elicits a stronger reaction than a group. [Sources: 5]

These results support the idea that compassion fade is an affective phenomenon: feelings are more pronounced toward individuals or groups perceived as coherent units. The findings not only expand our understanding of the psychology of compassion but also suggest ways to counter the loss of feeling as need increases. The first evidence comes from research showing that compassion for victims decreases as the number of people needing help increases [30], as the identifiability of victims decreases [31], and as the percentage of victims who can be assisted decreases [7]. [Sources: 1]

People who have the most empathy, and who also tend to experience the most empathic distress, are actually more likely to experience a collapse of compassion than less empathic people. As I continued to explore compassion fade, I found that people tend to tune out their feelings because they are trying to avoid depression or emotional distress. Perhaps compassion fade is partly a way for people to inoculate themselves against looking too closely at the guilt or shame they may feel over their privilege and/or their contribution to the problem as a whole. And this is not an accident of human psychology; it is a real barrier to compassion that can keep people from doing things that might matter. [Sources: 2, 11]

Compassion collapses not because our ability to empathize or care is so limited, but because we cannot find a way to reach the part of compassion that includes feeling we have the resources to do something meaningful, something that will change the situation. Our common human difficulty with numbers, with grasping what large numbers actually mean, may be one reason compassion begins to fade, and it helps explain why we stop feeling that other people deserve compassion. The claim that "people expect the needs of large groups to be potentially overwhelming" suggests either that we consciously consider what such engagement might cost and back away from it, or that we sense ourselves reaching an endpoint of compassion and deliberately begin to reclassify the tragedy from persons into statistics. [Sources: 2, 10, 11]

The affect heuristic leads people to make decisions based on their emotional response to a stimulus. It is this emotional element of System 1 that produces compassion fade: we decide on the basis of attachment and feeling, emotion that goes beyond the facts of the situation. [Sources: 4]

As noted below, compassion can also be seen as encompassing many different feelings and behaviors depending on context. For example, one of the roots of compassion is caring parental behavior. As a social orientation, compassion has a flow: we can be compassionate toward others, open to compassion from others, and compassionate toward ourselves. Compassion is evoked by an awareness of the particular suffering and pain of others. [Sources: 3, 7]

Here we experience the absolute spontaneity of compassion that arises beyond all differences and distinctions, attachments and conceptual structures. We can see boundless compassion that is itself spontaneous, unfabricated, free of concepts or views of any kind. Usually, though, our most direct experience of compassion is triggered by the awareness of suffering itself. [Sources: 3]

When it seems to be needed most, compassion is felt least. Affective feelings such as empathy, sympathy, sadness, and compassion are often viewed as important motivators of helping [9], [10]. [Sources: 1, 12]

Since psychotherapy focuses on mental distress, developing the motives and skills of compassion for self and others can become a focus of psychotherapy. In Buddhism, moreover, compassion is understood as something far more extensive than a simple feeling or emotion, with all the qualities those words imply. [Sources: 3, 7]

Human compassion has a hard limit, and it is one of the most powerful psychological forces shaping human events. The response, whether to an overseas refugee crisis or a domestic health care debate, often traces back to Paul Slovic. Slovic's research shows that the human mind is not good at thinking about, or empathizing with, millions of people. Our sympathy for the plight of strangers may be capped at a number comparable to the number of people we can befriend, the number of people with whom we unconsciously identify. [Sources: 0, 10]

We use the term "compassion fade" to mean a decrease in (1) helping behavior and (2) affect as the number of people in need increases. Thus "compassion fade," as used here, denotes a decline in positive affect that leads to a decline in donations as the number of those in need grows. [Sources: 1]

The authors concluded that the largest donations went to a single identified victim, most likely because such victims evoke the strongest emotional response. The findings are consistent both with a psychophysical function for losses and with a possible fading or collapse of compassion as the number of identified victims at risk increases. [Sources: 1, 5]

The number that triggers the "collapse of compassion" may differ from person to person, but it plausibly begins to set in along a continuum correlated with Dunbar's number of 150. Picture the collapse of compassion on a graph with compassion on the y-axis and the number of victims on the x-axis. This lesson explores why our compassion sometimes collapses under the weight of great suffering, and what we can do to sustain compassion and change the world for the better. [Sources: 2, 10]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://www.vox.com/explainers/2017/7/19/15925506/psychic-numbing-paul-slovic-apathy

[1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4062481/

[2]: https://www.linkedin.com/learning/self-compassion-when-compassion-is-difficult/understanding-compassion-collapse

[3]: https://lithub.com/on-the-uses-of-compassion/

[4]: https://redefineschool.com/compassion-fade/

[5]: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0100115

[6]: https://www.nytimes.com/2015/12/06/opinion/the-arithmetic-of-compassion.html

[7]: https://www.frontiersin.org/articles/10.3389/fpsyg.2020.586161/full

[8]: https://www.researchgate.net/figure/A-model-depicting-psychic-numbing-compassion-fade-when-valuing-the-saving-of-lives_fig9_263289454

[9]: http://econowmics.com/compassion-fade/

[10]: https://bigthink.com/the-present/why-compassion-fades/

[11]: https://podcast.wellevatr.com/why-compassion-fades

[12]: https://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780190464684.001.0001/oxfordhb-9780190464684-e-20

Base Rate Fallacy

As a minimal definition, we commit the base rate fallacy when we make a statistical inference that somehow ignores the base rate (or prior probability) of the characteristic of interest. The base rate fallacy occurs when we judge too quickly, ignoring base rates or prior odds in favor of new information. Base rate neglect is a common cognitive error that distorts decision-making: information about the prevalence of some characteristic in a given population is ignored or underweighted in the decision process. [Sources: 5, 9, 11]

When assessing the probability of an uncertain event, the tendency to ignore or underweight the base rate (the prior information) relative to new, case-specific information is a central distortion of human probabilistic inference (1, 2). It highlights a long-running research program in experimental psychology that uses ideal Bayesian inference as a model of human behavior and attempts to understand how people estimate probability by examining systematic deviations of their estimates from the ideal model's predictions (3). [Sources: 7]

These results indicate that the biases widely observed in human probabilistic judgments arise from how the brain weights information and from its relative sensitivity to the variability of that information. They indicate that, in combining prior and likelihood, sensitivity to information variability and the computation of subjective weights critically shape individual differences in base rate neglect. Base rate neglect, a serious error in assessing the likelihood of uncertain events, describes a person's tendency to underweight the base (prior) rate relative to the individuating information (the likelihood). [Sources: 7]

However, although individuating information is readily used even when the actual reliability of the predictor is questionable, the information inherent in the base rates of criterion events is often underused (Tversky & Kahneman, 1982). For example, when the predictor is a witness reporting that a suspicious car was blue, participants tend to believe it was indeed blue even when confronted with evidence that the base rate of blue cars at that particular location is low. When presented with both base rate information (general information about prevalence) and specific information (information particular to the case at hand), people tend to ignore the base rate in favor of the individuating information instead of correctly integrating the two. [Sources: 4, 14]
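
To make the correct integration concrete, here is a minimal Python sketch of the witness example. The numbers are assumptions chosen only for illustration (a 15% base rate of blue cars and an 80%-reliable witness, mirroring the classic taxicab problem); the source gives no specific figures.

```python
# Hypothetical numbers: 15% of cars at this location are blue (base rate),
# and the witness identifies car color correctly 80% of the time.
p_blue = 0.15                 # prior probability the car was blue
p_say_blue_given_blue = 0.80  # witness says "blue" when the car is blue
p_say_blue_given_not = 0.20   # witness says "blue" when the car is not blue

# Bayes' theorem: P(blue | witness says "blue")
p_say_blue = (p_say_blue_given_blue * p_blue
              + p_say_blue_given_not * (1 - p_blue))
posterior = p_say_blue_given_blue * p_blue / p_say_blue

print(f"P(car was blue | witness says blue) = {posterior:.2f}")
# ~0.41: despite the 80%-reliable witness, the low base rate means
# the car is still more likely NOT to have been blue.
```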

The fallacy arises from confusing two different failure rates. In the terrorist-detection example, nearly 100% of alarms sound for non-terrorists, and from that information alone a false negative rate cannot even be computed. [Sources: 4]

If there were as many terrorists as non-terrorists in the city, and the false positive and false negative rates were nearly equal, then the probability of misidentification would be about the same as the device's false positive rate. In the analogous screening example, the proportion of false positives among all positive tests would be almost equal to the false positive rate among non-pregnant women. But when the condition is rare, even a very low false positive rate yields so many false positives that such a system is practically useless. [Sources: 1]

In the latter case, the posterior probability p(drunk | positive test) cannot be derived by comparing the number of drunk drivers who test positive with the total number of positive breathalyzer results, because the base rate information is not preserved there; it must be explicitly reintroduced via Bayes' theorem. When a test is very accurate (say, over 95%) but the prior probability, or base rate, of the characteristic is low (say, only 1/125), the sensitivity (the proportion of positive cases classified as positive) can be high, 95% in our case, simply by the accuracy of the test, while the positive predictive value (the proportion of positive classifications that truly have the characteristic) can be very low, because the vast majority of positive classifications are false positives produced by the many tested people who lack the characteristic of interest, ultimately a consequence of the low base rate. The probability that a positive test result is correct is determined not only by the accuracy of the test but also by the composition of the tested sample. [Sources: 4, 9]
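
A short sketch of that calculation in Python, using the figures from the paragraph above and assuming, for illustration, that "accuracy" means both sensitivity and specificity equal 0.95:

```python
def positive_predictive_value(prior, sensitivity, specificity):
    """P(has characteristic | positive test), via Bayes' theorem."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

# A 95%-accurate test with a base rate of 1 in 125
ppv = positive_predictive_value(prior=1 / 125,
                                sensitivity=0.95,
                                specificity=0.95)
print(f"P(has characteristic | positive) = {ppv:.3f}")
# ~0.133: roughly 7 out of 8 positives are false, despite 95% accuracy.
```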

We tend to judge by the salient specific numbers and percentages we are given and to ignore the necessary background statistics. When drawing statistical conclusions, it is therefore important to be wary of our tendency to fall into such traps. If you cannot work such a problem out, you are wrestling with Bayesian probability, and you are not alone: when given data on the likelihood of breast cancer in women with a positive mammogram, an alarming 80% of doctors get the answer wrong. [Sources: 2, 9, 13]

Clinical profilers are likely to resist attaching a percentage figure to their predictions; it seems to contradict clinical intuition or judgment. It always goes this way: people unfamiliar with the technical role of prior probability usually ignore prior statistics because they do not seem relevant. There is much to consider before dismissing the base rate. The profiler could communicate more clearly by attaching a personal probability to each prediction (e.g., 30%) so that investigators can judge how strongly the profiler believes an event will occur. [Sources: 2, 5, 8]

When predicting criterion events from predictors in probabilistic settings, it is normatively appropriate to take two kinds of information into account: the overall base rate of the criterion events and the predictor values for the particular case. Moreover, even when the predictors carried no useful statistical information, participants showed a bias toward selecting criterion events that superficially resembled the predictors, despite the absence of any contingency between predictors and criteria (Goodie & Fantino, 1996). [Sources: 14]

Kahneman and Tversky argued that many judgments of probability or of cause and effect are based on how representative one object is of another, or of a category. One major theory holds that this is a matter of relevance: we ignore base rate information because we classify it as irrelevant and therefore think it should be ignored. [Sources: 1, 4, 6]

The representativeness heuristic can lead to base rate errors: we treat an event or object as highly representative of a category and make probability judgments on that basis alone, without pausing to consider the base rate. This is a common psychological bias associated with the representativeness heuristic. People are also swayed by personal stories, for example the story of a smoker who lived to be 95. [Sources: 6, 12]

It also happens when the profiler feels better equipped to handle the problem on the basis of past experience. For example, a profiler might focus on one specific criminal, obscuring useful information about the group of criminals with similar characteristics. Subjective probability judgments rest on personal beliefs, for instance that the perpetrator will offend again, that a particular suspect is the prime suspect, or that the perpetrator lives in a specific area. And the mechanical application of Bayes' theorem can itself produce errors when (1) key model assumptions are unchecked or severely violated, and (2) no attempt is made to specify the goals, values, and assumptions that are the responsibility of the decision-makers. [Sources: 0, 5]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/base-rate-fallacy-reconsidered-descriptive-normative-and-methodological-challenges/5C0138815B364140B87110364055683B

[1]: https://psychology.fandom.com/wiki/Base_rate_fallacy

[2]: https://www.statisticshowto.com/base-rates-base-rate-fallacy/

[3]: https://www.investopedia.com/terms/b/base-rate-fallacy.asp

[4]: https://en.wikipedia.org/wiki/Base_rate_fallacy

[5]: https://ifioque.com/social-psychology/base-rate-fallacy

[6]: https://thedecisionlab.com/biases/base-rate-fallacy/

[7]: https://www.pnas.org/content/117/29/16908

[8]: https://www.adcocksolutions.com/post/no-8-of-86-base-rate-fallacy

[9]: https://www.capgemini.com/gb-en/2020/10/the-base-rate-fallacy-what-is-it-and-why-does-it-matter/

[10]: https://www.oxfordreference.com/view/10.1093/oi/authority.20110803095449924

[11]: https://tactics.convertize.com/definitions/base-rate-fallacy

[12]: https://fs.blog/mental-model-bias-from-insensitivity-to-base-rates/

[13]: https://www.wheelofpersuasion.com/technique/base-rate-neglect-base-rate-fallacy/

[14]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2441578/

Restraint Bias

Nordgren concluded that "we tend to overestimate our ability to control our impulses," a phenomenon known as the illusion of self-control, which often leads us to very bad decisions in life. Restraint bias is this tendency of people to overestimate their ability to control impulsive behavior. Relatedly, projection bias causes people to overestimate their ability to resist the temptations around them, thereby undermining their attempts at self-control. [Sources: 4, 8, 11]

Knowing about your projection bias can help you shed overconfidence in your ability to resist temptation. Projection bias can lead non-smokers to underestimate the strength and costs of addiction. Believing you are rational while finding others obviously irrational is known as the bias blind spot. [Sources: 2, 8]

Every cognitive bias exists for a reason, primarily to save the brain time or energy. Cognitive biases are simply tools, useful in some contexts and harmful in others. And what we choose to remember feeds back into these systems, making our mental processes still more biased. [Sources: 6, 7]

With these four problems and their four consequences in mind, the availability heuristic (and in particular the Baader-Meinhof phenomenon) ensures that we notice our biases more often. If you revisit this page to refresh your memory from time to time, the spacing effect will help you retain some of these thought patterns, keeping the bias blind spot and naive realism in check. Nothing we do can make the four problems disappear (until we find a way to expand our minds' processing power and memory to match the universe's), but if we accept that we are permanently biased and that there is room for improvement, confirmation bias will keep helping us find the corroborating evidence that ultimately leads us to a better understanding of ourselves. [Sources: 6, 7]

Minimizing the force of restraint bias means perceiving our impulse control more accurately and, accordingly, making better decisions. First, we can take an inventory of the areas of our lives that we believe are most affected by impulsivity or lapses of restraint. Attention has a great deal to do with this bias, with self-control, and with the impulses in our environment. [Sources: 3, 4]

Herding behavior: this effect appears when people do what others do instead of using their own information or making independent decisions. The research also tells us that impulsivity and selfishness are two sides of the same coin, as are their opposites, restraint and compassion. [Sources: 5, 9]

This may be why people with dark personality traits such as psychopathy and sadism score low on compassion but high on impulsivity. The hot-cold empathy gap occurs when people underestimate the influence of visceral states on their behavior. Projection bias, in turn, is the tendency to project current preferences onto the future, as if future tastes will be the same as current ones (Loewenstein, O'Donoghue & Rabin, 2003). [Sources: 5, 8, 9]

Projection bias: in behavioral economics, the assumption that people's tastes or preferences will remain constant over time (Loewenstein et al., 2003). Optimism bias: the tendency to overestimate the likelihood of positive events and underestimate the likelihood of negative events in the future (Sharot, 2011). The tendency to confidently assume that other people share our mentality, opinions, and beliefs is also called projection; a related effect, the false consensus effect, makes us think that other people agree with our views, reinforcing the tendency. [Sources: 8, 9]

Believing that we can control ourselves and everything around us makes us feel safe. In practice, we find it hard to imagine the force that inner impulses and emotions can exert and their power to break our willpower and self-control. Loewenstein explains that we have limited memory for visceral experience: we can remember having been in an impulsive state, but we cannot recreate the feeling of it, so we repeat the same mistake over and over, which sustains the illusion of self-control. [Sources: 11]

Self-control: in psychology, a cognitive process that serves to restrain behaviors and emotions directed at temptations and impulses. Control premium: in behavioral economics, people's willingness to forgo potential rewards in order to retain control of (avoid delegating) their outcomes. Inflated beliefs about impulse control lead people to expose themselves excessively to temptation, thereby promoting exactly the impulsive behavior they expect to resist. [Sources: 0, 9]

What's more, Soutschek showed that the degree of this bias, participants' inability to leave their own heads, predicted how impulsive and selfish they were in the first experiment. The hot-cold empathy gap holds that when people are in a cold state, not hungry for example, they underestimate how strongly the corresponding hot state will influence them. [Sources: 4, 5]

If this brain area is stimulated with an electric current, people become better at taking someone else's point of view. If the neurons within it are better connected (and well connected to other parts of the brain), people show less bias toward their own groups. New research by Alexander Soutschek of the University of Zurich suggests that self-control is also influenced by this other area of the brain, which puts the ability in a different light. The licensing effect, also known as self-licensing or moral licensing, occurs when people permit themselves to do something bad, typically after having done something good. [Sources: 5, 9]

McGonigal also suggests creating obstacles for yourself and committing to greater accountability for your impulses. Still, despite trying to absorb the information on this page many times over the years, very little seems to stick. [Sources: 3, 6, 10]

Objectives and methods: here, we selectively review structural and functional MRI studies of ADHD and disruptive behavior disorders (DBD) to identify disorder-specific and shared aberrant neural mechanisms of AI and RI. Results: in ADHD, aberrant functional activity in prefrontal and inferior brain regions was associated with increased impulsivity. [Sources: 1]

The "catch" was that some people were told before the test that they had a high level of self-control, while others were told that they could not control their impulses. With their ability to assess their own intentions disrupted, they began looking to outcomes instead. [Sources: 5, 11]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://journals.sagepub.com/doi/abs/10.1111/j.1467-9280.2009.02468.x

[1]: https://www.sciencedirect.com/science/article/pii/S0149763418300162

[2]: https://www.businessinsider.com/cognitive-biases-2015-10

[3]: https://thedecisionlab.com/biases/restraint-bias/

[4]: https://en.wikipedia.org/wiki/Restraint_bias

[5]: https://www.theatlantic.com/science/archive/2016/12/self-control-is-just-empathy-with-a-future-you/509726/

[6]: https://qz.com/776168/a-comprehensive-guide-to-cognitive-biases/

[7]: https://betterhumans.pub/cognitive-bias-cheat-sheet-55a472476b18

[8]: https://uxdesign.cc/projection-bias-how-it-affects-us-in-our-daily-lives-influence-our-design-decisions-933baa3a3084

[9]: https://www.behavioraleconomics.com/resources/mini-encyclopedia-of-be/

[10]: https://www.researchgate.net/publication/38061630_The_Restraint_Bias_How_the_Illusion_of_Self-Restraint_Promotes_Impulsive_Behavior

[11]: https://psychology-spot.com/illusion-of-self-control-hot-cold-empathy-gap/

Planning Fallacy

The intriguing aspect of the planning fallacy is that people simultaneously hold optimistic expectations about a specific future task and more realistic beliefs about how long similar tasks took them in the past. When it comes to plans and predictions, people can know the past well and still be doomed to repeat it. The hallmark of the planning fallacy is that people acknowledge that their past predictions were overly optimistic while insisting that their current predictions are realistic. Strictly, the planning fallacy requires that current predictions of task completion times be more optimistic than beliefs about past completion times for similar projects, and more optimistic than the time the activities actually take. [Sources: 2, 10, 15]

The planning fallacy results from the tendency to overlook distributional data and to adopt what might be called an "inside view" of forecasting, focusing on the components of the specific problem rather than on the distribution of outcomes in similar past cases. This inside approach to evaluating plans leads to underestimation. Academics and writers, for example, are notorious for underestimating the time needed to complete a project, even with substantial past experience of failing to meet planned schedules. The phenomenon occurs regardless of whether people realize that past tasks of a similar nature took longer than planned. [Sources: 2, 5]

First described by Daniel Kahneman and Amos Tversky, the planning fallacy is the phenomenon in which the time needed to complete a task is systematically underestimated. As discussed earlier, the planning fallacy leads students to underestimate how long their homework will take, which can leave them pulling all-nighters or missing deadlines. Predictions of how long a future task will take exhibit an optimistic bias, underestimating the required time, and this holds regardless of whether people know that similar past tasks took longer than planned. [Sources: 5, 6, 10, 15]

The bias applies only to predictions about one's own tasks; when outsiders predict when an activity will be completed, they tend instead to be pessimistic and overestimate the time needed. In 2003, Daniel Kahneman and his research partner Dan Lovallo proposed an expanded definition: the tendency to underestimate the time, costs, and risks of future actions while overestimating the benefits of those same actions. Re-examining the fallacy, they found that people underestimate not only the time required to complete a given task but also its costs and negative consequences. [Sources: 6, 8]

Kahneman and Tversky originally explained the fallacy by proposing that planners focus on the most optimistic scenario for completing a task rather than drawing on their full experience of how long similar tasks have taken. A related explanation, focalism, suggests that people fall prey to the planning fallacy because they focus only on the task ahead and fail to consider similar past tasks that took longer than expected. [Sources: 5, 8]

Examples of the planning fallacy in action can be as large as a massive public works program like Boston's Big Dig (the highway project finished nine years late and ran $22 billion over budget) or as small as a seemingly quick errand that somehow takes all day. Whatever the outcome, the planning fallacy stems from two fundamental mistakes, as Kahneman wrote in Thinking, Fast and Slow. Planning errors are hard to avoid, from miscalculating travel time to your destination (if you think you can beat your app's estimate) to believing you can leave your presentation to the last minute. [Sources: 3, 4]

The planning fallacy was first described by Daniel Kahneman and Amos Tversky in an influential 1979 article. It is a cognitive bias rooted in our reluctance to predict negative events: our forecasts of the time, cost, and risk involved in completing a task are inaccurate regardless of our past experience with similar tasks. As described in Kahneman's book Thinking, Fast and Slow, one study found that the typical homeowner expected a home renovation project to cost about $19,000. [Sources: 1, 14]

In such studies, people predict how long an upcoming project will take and also report how long very similar projects took them in the past. For example, people imagine and plan the specific steps they will take to complete the target project. In doing so, planners may set aside factors they consider irrelevant to the specifics of that project. [Sources: 8, 9]

At the same time, leaders may favor the more optimistic of competing forecasts, which gives people an incentive to plan from intuition rather than evidence. Cognitive biases such as optimism bias (the tendency to expect positive outcomes from one's actions) and overconfidence have been proposed as causes of the planning fallacy. Oxford University researcher Bent Flyvbjerg has assembled growing evidence that optimism bias is one of the most important biases affecting forecasts in project planning. [Sources: 7, 13]

Kahneman and Tversky, and later Dan Lovallo, suggested that taking an outside view of the forecast helps reduce the planning fallacy. For example, encouraging people to form "implementation intentions" while forecasting, committing to complete parts of a task at specific times and on specific dates, makes them more likely to carry out those activities and therefore less prone to the planning fallacy. [Sources: 9, 13]

Predicting unrealistically how long a task will take is a deeply ingrained behavior, and it takes practice to notice yourself doing it. Once you have an objective estimate of the time required to complete the project, make sure you have the time and resources to carry out your plan. [Sources: 4]

Make tasks urgent by setting deadlines as close to the present moment as possible. Determine whether you are primarily a prioritizer, planner, arranger, or visualizer, so you can plan accordingly. [Sources: 11]

— Slimane Zouggari

 

 

##### Sources #####

[0]: https://harappa.education/harappa-diaries/planning-fallacy-its-meaning-and-examples/

[1]: https://hbr.org/2012/08/the-planning-fallacy-and-the-i

[2]: https://www.bbntimes.com/global-economy/the-planning-fallacy-or-how-to-ever-get-anything-done

[3]: https://qz.com/work/1533324/daniel-kahnemans-planning-fallacy-explains-why-were-bad-at-time-management/

[4]: https://nesslabs.com/planning-fallacy

[5]: https://en.wikipedia.org/wiki/Planning_fallacy

[6]: https://academy4sc.org/video/planning-fallacy-bit-off-more-than-you-can-chew/

[7]: https://thedecisionlab.com/biases/planning-fallacy/

[8]: https://psynso.com/planning-fallacy/

[9]: https://spsp.org/news-center/blog/buehler-planning-fallacy

[10]: https://herdingcats.typepad.com/my_weblog/2015/05/the-fallacy-of-the-plannig-fallacy-1.html

[11]: https://www.entrepreneur.com/article/350045

[12]: https://blog.firmsone.com/overcoming-the-planning-fallacy/

[13]: https://www.washingtonpost.com/business/the-planning-fallacy-can-derail-a-projects-best-intentions/2015/03/05/fcd019a0-c1bc-11e4-9271-610273846239_story.html

[14]: https://www.mcguffincg.com/the-planning-fallacy/

[15]: https://medium.com/gravityblog/the-planning-fallacy-3af4bb20493c

Illusion Of Validity

In this article, we empirically explore the psychometric properties of some of the best-known statistical and logical cognitive illusions from Daniel Kahneman and Amos Tversky's heuristics-and-biases research program: fascinating puzzles proposed nearly 50 years ago, such as the Linda problem, the Wason selection task, and so-called Bayesian reasoning tasks (for example, the mammography problem). These cognitive illusions provided empirical evidence that human reasoning can violate the laws of logic and probability, and Kahneman and Tversky used a large number of them to demonstrate the verbal and mathematical ways in which human statistical and logical reasoning goes wrong (Tversky and Kahneman, 1974; Kahneman et al., 1982). [Sources: 0]

The basic premise of the book is that biases in human judgment run far deeper than commonly assumed. We usually think of bias in terms of underlying motives or interests, especially in the political realm; yet even when we have perfectly correct information and are free of motivational biases, we still make wrong decisions. [Sources: 2]

Surprisingly, however, solution rates for the classic hospital problem have changed a great deal since then. Integrating base rate data into intuitive judgments, regressing estimates toward the mean, yields more accurate predictions. The authors also examine the example of how Rorschach inkblot tests were once used to assess whether patients were homosexual. [Sources: 0, 2]

For example, if a student does poorly in school, an observer is more likely to conclude that the student is lazy than to consider the circumstances at home. [Sources: 2]

Even when base rate data are provided, people rarely take them into account, preferring their own impressions to the specific numbers from scientific research. And while most people would agree that skill and luck are intertwined, a full appreciation of how inextricably linked they are remains elusive. In situations of skill, there is a causal relationship between behavior and outcome. Another good example is research on how overconfident people can be about quick mental calculations. [Sources: 1, 2]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://www.frontiersin.org/articles/10.3389/fpsyg.2021.584689/full

[1]: https://www.cambridge.org/core/books/judgment-under-uncertainty/illusion-of-control/A338AC4C785CABF7E40EF3FF5F017F58

[2]: https://www.goodreads.com/book/show/125967.Judgment_Under_Uncertainty

The Overconfidence Effect

The basic assumption behind this paradigm is that consumers are inherently miscalibrated; to study the effects of overconfidence and underconfidence on behavior, researchers need only observe this natural miscalibration of knowledge (Carlson, Bearden and Hardesty, 2007; Park, Mothersbaugh and Feick, 1994; Pillai and Hofacker, 2007). In studies using general knowledge items, participants typically choose between two alternatives and report their confidence in the choice as a subjective probability between 0.5 (guessing) and 1.0 (certain). There is a large body of research on individual differences in confidence as measured by the calibration of subjective probabilities. [Sources: 0, 1, 2]

This article explores the impact of knowledge miscalibration in terms of both overconfidence and underconfidence. For overconfident consumers, an independent t-test showed that the manipulation effectively reduced subjective knowledge and the level of knowledge miscalibration in the experimental group relative to the control group (M_experimental = 3.8 versus M_control = 4.5; t = 4.04, df = 149, p = 0.000). An analysis of the calibration components, over/underconfidence, resolution, and linearity, showed significant effects of numeracy only on the calibration of subjective probabilities. The article first reviews the literature on objective knowledge, subjective knowledge, and knowledge miscalibration, and then develops a hypothetical model of the impact of overconfidence and underconfidence on perceived value. [Sources: 1, 2]

One way to assess the validity of a set of subjective probability judgments is to examine their degree of calibration. Regarding miscalibration, direct confidence scores analyzed through subjective probability calibration were related not to ANS acuity but to participants' mathematical ability. There is little general overconfidence with two-choice questions, but marked overconfidence with subjective confidence intervals. However, in contrast to overconfidence in the calibration of subjective probabilities, even the more numerate participants still committed conjunction errors at a high rate. [Sources: 0, 2, 5, 6]

Over/underconfidence is measured as the mean subjective probability minus the overall proportion correct (relative frequency). One consequence is that apparent overconfidence can be introduced by nonlinear perception of the probability scale: a calibration curve plotting proportion correct against stated probability becomes curved. When the proportion correct equals the subjective probability at every confidence level, the participant is perfectly calibrated, with a calibration score of 0. The over/underconfidence bias is the difference between mean confidence x̄ and overall proportion correct c̄, where x̄ - c̄ > 0 indicates overconfidence and x̄ - c̄ < 0 indicates underconfidence. [Sources: 0, 2]
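
To make these two measures concrete, here is a minimal Python sketch under the conventions just described (half-range confidences from 0.5 to 1.0 in 0.1 steps). The judgment data are invented purely for illustration.

```python
from collections import defaultdict

# Invented data: (stated confidence, answered correctly?) for one participant
judgments = [(0.5, True), (0.6, False), (0.7, True), (0.7, False),
             (0.8, True), (0.9, True), (0.9, False), (1.0, True)]

n = len(judgments)
mean_conf = sum(c for c, _ in judgments) / n       # x̄, mean confidence
prop_correct = sum(ok for _, ok in judgments) / n  # c̄, proportion correct

# Over/underconfidence: x̄ - c̄ (> 0 overconfident, < 0 underconfident)
over_under = mean_conf - prop_correct

# Calibration score: frequency-weighted mean squared gap between each
# stated confidence level and the proportion correct at that level
# (0 = perfectly calibrated)
by_level = defaultdict(list)
for conf, ok in judgments:
    by_level[conf].append(ok)
calibration = sum(len(v) * (conf - sum(v) / len(v)) ** 2
                  for conf, v in by_level.items()) / n

print(f"over/underconfidence = {over_under:+.3f}")  # +0.138: overconfident
print(f"calibration score    = {calibration:.3f}")
```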

One particular bias that has been shown to persist across groups of subjects is miscalibration. Poorly calibrated people overestimate the precision of their predictions or underestimate the variance of risky processes; in other words, their subjective probability distributions are too narrow. Previous research has found very consistent patterns in individuals' probability estimates, the prevailing finding being that people are overconfident. [Sources: 3, 4]

The aim of the study was to investigate how numeracy and the acuity of the approximate number system (ANS) relate to the calibration and coherence of probabilistic judgments (see "Calibration of Probabilistic Judgments," Organizational Behavior and Human Performance, 20(2), 159-83). In the present study, we examine how an individual's ability to understand numerical information relates to both the calibration and the coherence of their probabilistic judgments. [Sources: 0, 2, 3]

In the research described in this article, we rely on a half-range probability scale: participants first select one of two options (true or false) and then rate their confidence in the choice on a scale from 0.5 to 1 in increments of 0.1. [Sources: 0]

Dunning, David, Griffin, Dale W., Milojkovic, James D., and Ross, Lee (1990), "The Overconfidence Effect in Social Prediction," Journal of Personality and Social Psychology, 58(4), 568-81. Decades of laboratory experiments on people (usually students) have shown that decision-making is affected by psychological biases. Griffin, Dale W., Dunning, David, and Ross, Lee (1990), "The Role of Construal Processes in Overconfident Predictions About the Self and Others," Journal of Personality and Social Psychology, 59(December), 1128-39. [Sources: 3, 4]

Miscalibration is defined as excessive confidence in the precision of one's information (Alpert and Raiffa 1982, Lichtenstein et al.). Executives' overconfidence is reflected in confidence intervals that are too narrow and unrealistic, both for the overall stock market and for their own companies' projects. Vallone, Robert P., Griffin, Dale W., Lin, Sabrina, and Ross, Lee (1990), "Overconfident Predictions of Future Actions and Outcomes by Self and Others," Journal of Personality and Social Psychology, 58(4), 582-92. [Sources: 3, 4]

Alpert, M., and Raiffa, H. (1982), "A Progress Report on the Training of Probability Assessors," in D. Kahneman, P. Slovic, and A. Tversky (eds.), Judgment Under Uncertainty: Heuristics and Biases. Moore, D. A., and Healy, P. J. (2008), "The Trouble with Overconfidence," Psychological Review, 115, 502-517. Phillips, Lawrence D., and Wright, G. N. (1977), "Cultural Differences in Viewing Uncertainty and Assessing Probabilities," in Decision Making and Change in Human Affairs, Jungermann, Helmut, and de Zeeuw, Gerard, eds. [Sources: 3, 4]

First, we document that senior executives are severely miscalibrated. These results connect with the corporate finance literature showing that managerial characteristics have real effects on company outcomes. [Sources: 4]

 

— Slimane Zouggari

 

##### Sources #####

[0]: https://www.frontiersin.org/articles/10.3389/fpsyg.2014.00851/full

[1]: https://onlinelibrary.wiley.com/doi/full/10.1002/mar.20787

[2]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4122178/

[3]: https://journals.sagepub.com/doi/abs/10.1177/002224379202900304

[4]: https://voxeu.org/article/managers-are-miscalibrated

[5]: https://www.sciencedirect.com/science/article/pii/S0749597899928479

[6]: https://apps.dtic.mil/sti/citations/ADA033180

Illusion Of Control

In illusion-of-control experiments, participants are typically asked to what extent they believe their actions effectively controlled the outcomes. Because for half of our participants the candidate cause was an external event, we replaced the standard "controllability" wording with the more general term "effectiveness." The task has proven sensitive to illusion-of-causality effects whether the candidate cause is an external event (e.g., Matute et al., 2011) or the participant's own behavior (e.g., Blanco et al., 2011). In a series of experiments, Langer (1975) observed that when a task included skill cues, her subjects behaved as if they were controlling random events. [Sources: 7, 11]

Ellen Langer was the first to demonstrate the illusion of control, and she explained her findings in terms of the confusion of skill and chance situations. She proposed that people make control judgments based on "skill cues." Langer's research shows that when skill cues are present, people are more likely to behave as if they could exert control in situations that are actually governed by chance. [Sources: 6, 8]

Langer showed that people often act as if random events were under personal control, and that they often treat chance successes as the product of skill. Ellen Langer (1975) was among the first researchers to point out that people hold the positive illusion that they can influence random gambles when fictitious skill cues are present. Her findings on these unrealistic illusions remain the most influential challenge to the doctrine that people perceive chance accurately. [Sources: 8, 11]

The illusion of control is the tendency for people to overestimate their ability to control events, for example to feel a sense of control over outcomes they demonstrably do not influence. The illusion may arise because people have no direct introspective insight into whether or not they actually control events. [Sources: 5, 6]

However, when you ask people about their control over random events (the typical experimental setup in this literature), they can err in only one direction: believing they have more control than they actually do. Research has shown again and again that, whatever their wisdom and knowledge, people often believe they can control events in their lives even when such control is impossible. Ellen Langer of Harvard University demonstrated this in 1975: she argued that a pervasive "illusion of control" leads most people to overestimate their ability to control events, even events over which they have no influence at all. The illusion of control breeds insensitivity to feedback, inhibits learning, and encourages greater objective risk-taking (because the illusion reduces subjective risk). [Sources: 0, 3, 8, 12]

Psychologist Daniel Wegner argues that the illusion of control over external events underlies belief in psychokinesis, the purported paranormal ability to move objects directly with the mind. In laboratory games, people often report controlling randomly generated outcomes. From a motivational perspective, the illusion of control is expected to be stronger when participants judge the consequences of their own behavior (active participants) than when they judge the consequences of someone else's behavior (passive participants). [Sources: 6, 7, 12]

The illusion is weaker in people who are depressed and stronger when people have an emotional need to control the outcome; when it comes to accurately assessing control, depressed people have a much better grasp of reality. All of this stems from the psychological effect known as the illusion of control: a person's tendency to overestimate their ability to control and manage events, feeling in command of outcomes they demonstrably do not influence. [Sources: 3, 5, 9]

But in day-to-day life, where people do affect many outcomes, underestimating one's control can be a serious mistake. It is important to remember that the control we feel over our lives is often illusory: after you have taken every possible action within your sphere of influence, you must learn to recognize and accept what you cannot control. [Sources: 3, 10, 12]

When people lack control and can err in only one direction, that error is of course what gets observed. The opposite of the illusion of control is learned helplessness: when people have previously been in situations where they could not change things, they come to feel that they cannot control their lives at all, which leads them to give up more quickly when facing obstacles. [Sources: 2, 12]

In 1988, Taylor and Brown argued that positive illusions, including the illusion of control, are adaptive because they motivate people to persist at tasks they might otherwise abandon. Bandura (1989), however, was principally interested in the usefulness of optimistic beliefs about control and performance in situations where control is real rather than illusory, and he suggested that where illusions can have costly or disastrous consequences, a realistic view is needed for human survival and well-being. Lefcourt later argued that the sense of control, the illusion that one can make personal choices, plays a clear and positive role in sustaining life. [Sources: 5, 6, 11]

The illusion of control was formally identified by Ellen Langer in 1975 in her article "The Illusion of Control," published in the Journal of Personality and Social Psychology. It is the tendency of people to believe they can control, or at least influence, outcomes that they demonstrably cannot, a mentally constructed psychological illusion in which people overestimate their ability to manipulate events, as if they had paranormal or mystical powers. [Sources: 8, 9, 10]

For example, a person may feel able to influence and control outcomes on which they have little or no effect. People will readily cede control when they believe another person has more knowledge or skill, as in domains such as medicine where real skill and knowledge are involved. I believe such people are more likely to lean on the illusion of control to reinforce the hope that holding on will provide the security they crave. Ironically, there can be more "control" in a flexible stance than in one devoted to keeping everything inside a well-defined comfort zone. [Sources: 3, 6, 9]

Over the years, many studies have shown that we perceive things differently depending on whether we feel in control of them. The illusion arises both where something is plainly random, as in a lottery, and in situations where we clearly do not influence the outcome, as in sports matches. It works as powerfully as it does because we become genuinely convinced that we can manipulate events that are completely random and in fact beyond our control. [Sources: 2, 9]

— Slimane Zouggari

 

##### Sources #####

[0]: https://kathrynwelds.com/2013/01/13/useful-fiction-optimism-bias-of-positive-illusions/

[1]: https://bestmentalmodels.com/2018/09/25/illusion-of-control/

[2]: https://thedecisionlab.com/biases/illusion-of-control/

[3]: https://psychcentral.com/blog/the-illusion-of-control

[4]: https://artsandculture.google.com/entity/illusion-of-control/m02nzt4?hl=en

[5]: https://nlpnotes.com/2014/04/06/illusion-of-control/

[6]: https://en.wikipedia.org/wiki/Illusion_of_control

[7]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4013923/

[8]: https://psychology.fandom.com/wiki/Illusion_of_control

[9]: https://discover.hubpages.com/education/A-Phenomenon-called-the-illusion-of-control

[10]: https://www.interaction-design.org/literature/article/the-illusion-of-control-you-are-your-worst-enemy

[11]: http://positivepsychology.org.uk/positive-illusions/

[12]: https://www.jasoncollins.blog/the-illusion-of-the-illusion-of-control/

The Barnum Effect

These statements are popular because, most of the time, what they say applies to most people. They are vague and general enough in their wording to fit anyone, yet they somehow seem specific when people read them. Barnum descriptions typically consist of vague statements that could be true of anyone but that participants judge to be remarkably accurate. It is not uncommon for a person to hear or read a description of a disease and then worry that they have it; this reflects most people's tendency to give personal meaning to broad information. [Sources: 4, 12]

In psychology this is an example of the Forer effect, also known as the Barnum effect: the tendency for people to accept descriptions of their personality as accurate even when the descriptions are so vague that they apply to almost everyone. Reading your horoscope in the newspaper and finding it surprisingly accurate is the classic case. In simple terms, the Barnum effect is our tendency to believe that information about personality concerns us specifically, however generic it may be. [Sources: 4, 6, 9, 11]

The Barnum effect explains our tendency to believe generalized, usually positive descriptions of personality and to accept them as accurate descriptions of ourselves. The name comes from the famous showman P. T. Barnum, best known for promoting celebrated hoaxes and for founding the Barnum & Bailey Circus; it stems from a phrase often (perhaps erroneously) attributed to him, that "there's a sucker born every minute." [Sources: 1, 4, 10, 11]

The Barnum effect rests on the logical fallacies of appeal to vanity and appeal to authority, exploiting people's willingness to accept personalized flattery when they believe it comes from a trusted source. In advertising, the effect is used to persuade people that products, services, or campaigns were designed specifically for a select group of people like them. Writers of horoscopes and fortunes use it to make people feel that the predictions were made just for them. In short, the Barnum effect in psychology means that people are easily taken in when reading descriptions of themselves. [Sources: 0, 5, 7]

By personality we mean the ways people differ and are unique. According to Forer, people are distinguished not by which personal qualities they possess but by the relative degree to which they possess them. A second statement can describe the same trait yet specify more precisely the degree to which it is present. [Sources: 5, 12]

It is important to understand that the effect holds mainly when the statements are positive or flattering. That said, if people believe the person conducting the assessment is a senior professional, they are more likely to accept even a negative assessment of themselves. Positive descriptions, it turns out, routinely mislead people into accepting them, even though the same descriptions would fit anyone else. [Sources: 3, 10]

The Barnum effect is rooted in people's receptiveness to flattery and their tendency to trust seemingly authoritative sources: if the statements are flattering, people will accept quite general claims as bearing directly on them. Astrologers, fortune tellers, and magicians can thus be seen as skilled applied psychologists, putting the principles of the Barnum effect to work in their readings. The apparent precision with which psychics describe a subject's personality can seem so accurate that it must have a supernatural origin, when in fact it consists of general Barnum statements that apply to most people. [Sources: 1, 4, 8]

The conclusion to draw is that just because something seems apt and applicable to your life and personality does not mean it is accurate or reliable. When you read or hear something uncannily on target, practice running through a Barnum effect checklist, and let your friends know they probably should not make important life decisions based on their star sign. It is also a good idea to question the credibility of the sources you rely on. [Sources: 7, 9]

Derren Brown, for example, is one of the few illusionists who focuses on educating the general public about the techniques used to deceive them, the Barnum effect among them. Horoscopes, psychic readings, and similar acts all use the Barnum effect to convince people that the statements being made are personal to them. Research suggests that horoscopes do not objectively match the people they are supposed to describe; yet when horoscopes are labeled with zodiac signs, the Barnum effect takes over, and people judge the horoscope for their own sign as fitting them, even though, unlabeled, the match is so poor that they could not have picked it out. Psychologists believe this works through a combination of the Forer effect and confirmation bias. [Sources: 1, 7, 10]

Before exploring the Forer effect in detail, I understood the technique behind this cognitive distortion, but I did not appreciate how long it has been in use or how it has adapted over the years. Below, we look at what the effect is and why it is so effective. You may or may not have heard of the Barnum effect, but you have most likely fallen victim to it at some point: its basic mechanism has been used by healers, psychics, astrologers, and merchants for thousands of years. [Sources: 6, 10]

The same Barnum demonstration has been replicated with introductory psychology students for over 50 years (Forer, 1949), yet it has never quite entered public consciousness, owing in part to the systematic distortion of psychology in the popular media. The effect has also been shown with personnel managers, who need to be made aware of it during training (Stagner, 1958). It appears in the Kalat introductory textbook and deserves a place in every other introduction to psychology. The term itself was adopted after the psychologist Paul Meehl expressed frustration with colleagues who wrote vague, generic reports about their patients; Meehl saw this as negligence, especially in clinical practice. [Sources: 2, 5]

The Barnum effect is a cognitive bias identified by psychologist Bertram Forer in 1948 while he was experimenting with how error-prone personal validation can be. Forer gave a group of students a personality test and then showed each of them a supposedly individualized analysis of their personality; in reality, every student received the same set of statements. He asked the students to rate the statements on a scale from 0 (very poor accuracy) to 5 (excellent accuracy) according to how well the statements fit them. On average, the students rated the accuracy of “their” personality sketches at 4.3 out of 5. [Sources: 4, 7, 9]
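For a concrete sense of those numbers, here is a minimal sketch in Python. The individual ratings are hypothetical; only the 0-to-5 scale and the ~4.3 class average come from the account above, and the sample statement is one of Forer’s original items.

```python
# Minimal sketch of the arithmetic behind Forer's demonstration.
# The individual ratings are hypothetical; only the 0-5 scale and the
# ~4.3 average come from the account in the text.

# Every student receives the SAME generic sketch:
SKETCH = "You have a great need for other people to like and admire you."

# Hypothetical accuracy ratings, one per student, on Forer's 0-5 scale.
ratings = [5, 4, 4, 5, 3, 5, 4, 4, 5, 4]

mean_rating = sum(ratings) / len(ratings)
print(f"Shared description: {SKETCH}")
print(f"Mean accuracy rating: {mean_rating:.1f} / 5")  # -> 4.3
```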

— Slimane Zouggari

 

##### Sources #####

[0]: https://whatis.techtarget.com/definition/Barnum-effect-Forer-effect

[1]: https://scienceterms.net/psychology/barnum-effect/

[2]: https://thedecisionlab.com/biases/barnum-effect/

[3]: https://dbpedia.org/page/Barnum_effect

[4]: https://neurofied.com/barnum-effect-the-reason-why-we-believe-our-horoscopes/

[5]: https://psych.fullerton.edu/mbirnbaum/psych101/barnum_demo.htm

[6]: https://michaelgearon.medium.com/cognitive-biases-the-barnum-effect-b051e7b8e029

[7]: https://nesslabs.com/barnum-effect

[8]: https://www.abtasty.com/blog/barnum-effect/

[9]: https://www.explorepsychology.com/barnum-effect/

[10]: https://interestingengineering.com/the-power-of-compliments-uncovering-the-barnum-effect

[11]: https://www.britannica.com/science/Barnum-Effect

[12]: https://study.com/learn/lesson/barnum-effect-psychology-examples.html

Semmelweis Reflex (Semmelweis Effect)

The story goes that in the 19th century, Semmelweis noticed that the maternal mortality rate in the hospital where he worked plummeted when his fellow doctors washed their hands frequently with a chlorine-based disinfectant. He realized that the difference was due to the doctors’ habit of performing autopsies and then examining women in the maternity ward without disinfecting their hands, a practice that transmitted infection. He was later able to implement his hand-washing policy fully, first in a small maternity hospital and then at the University of Pest, where he became a professor of obstetrics. [Sources: 6, 13, 14]

Ignaz Semmelweis proposed that doctors were infecting patients with what he called “cadaveric particles” and immediately demanded that all medical personnel wash their hands in a solution of chlorinated lime before treating patients or delivering babies. Although Semmelweis published findings showing that hand washing reduced deaths from childbed fever to less than 1%, his observations were rejected by the medical community. This was partly because he could not provide a scientific explanation for them (more on this in a moment), but also because doctors were offended by the simple suggestion that they should wash their hands. Some believed that a gentleman’s hands could not transmit disease. [Sources: 0, 4, 8]

As is often the case with people who try, for good reasons, to change established beliefs, Semmelweis’s life ended badly. His theory challenged the medical community’s long-standing practices and beliefs about childbed fever and, despite the compelling evidence he presented, was ridiculed and rejected. Semmelweis was fired from the hospital, harassed by the medical community, and eventually suffered a breakdown and died in an asylum. [Sources: 0, 7, 8]

The question arises as to why the medical community did not accept, or at least seriously consider, the claims about disinfection that Semmelweis put forward. Perhaps more worrying, 150 years after the publication of Semmelweis’s treatise, we still encounter the modern equivalent of his colleagues’ thinking about hand hygiene in health care, even though hand washing during the coronavirus pandemic seems like a universal habit. [Sources: 5, 7, 10]

The reflex is named after a real person: Ignaz Semmelweis, a 19th-century Hungarian doctor who discovered in 1847 that when doctors disinfected their hands between patients, deaths from so-called childbed fever (a bacterial infection of the female reproductive tract following childbirth or abortion) declined drastically. He was one of the first scientists to demonstrate the link between hospital hygiene and infection, long before Louis Pasteur popularized the germ theory of disease. [Sources: 0, 1]

Semmelweis worked across two clinics of the same hospital in Vienna, where the mortality rates for women in childbirth differed sharply, and he spent years searching for a difference between the two that would explain why Clinic 1 was so much deadlier than Clinic 2. [Sources: 1]
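To make the contrast concrete, here is a minimal sketch in Python. The figures are illustrative assumptions; the text above states only that mortality fell below 1% once chlorine hand washing was introduced.

```python
# Illustrative (hypothetical) yearly figures for the two clinics.
# Only the "below 1% after chlorine washing" outcome comes from the text;
# the other numbers are assumptions chosen for illustration.
clinics = {
    "Clinic 1 (doctors, before handwashing)": (3000, 300),  # (births, deaths)
    "Clinic 2 (midwives)": (3000, 110),
    "Clinic 1 (after chlorine handwashing)": (3000, 25),
}

for label, (births, deaths) in clinics.items():
    rate = 100 * deaths / births
    print(f"{label}: {rate:.1f}% maternal mortality")
```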

Semmelweis hypothesized that medical personnel, and doctors in particular, were passing the disease from one patient to another. Although the germ theory of disease had not yet been established, he argued that doctors who went straight from performing autopsies to examining pregnant women in the hospital’s First Obstetric Clinic were somehow transmitting infection to those women, who were dying at an alarming rate compared with the patients of the Second Clinic, who were cared for by midwives rather than doctors. In 1846, three years after Holmes’s publication, Semmelweis, a Hungarian physician who is an icon in the community of healthcare epidemiologists, independently reached a similar conclusion through his careful comparison of maternal mortality across the two wards of his hospital. Because Semmelweis could not explain the underlying mechanism, skeptical doctors looked for other explanations. [Sources: 5, 14]

Semmelweis’s new theory did not fit the prevailing doctrine, and many physicians therefore ignored it. Modern critics have since suggested cleaner ways of testing the phenomena he described. But despite overwhelming evidence (his method stopped the ongoing infection of pregnant women), Semmelweis was unable to convince his peers of the effectiveness of his simple solution. [Sources: 1, 6, 10]

Some doctors rejected his idea on the grounds that a gentleman’s hands could not transmit disease, and most of the medical world rejected his theory for flawed reasons, medical and otherwise. Despite compelling empirical evidence of the effectiveness of his intervention, his ideas were met with skepticism, and even ridicule, by the medical community of his day, including many of its leading experts. The reaction to his discoveries was so significant that 150 years later we still use the term “Semmelweis thinking” for circumstances in which factual knowledge is reflexively and systematically rejected because the evidence contradicts an existing culture or paradigm. [Sources: 5, 6, 8]

The story of Semmelweis inspired the concept now called the Semmelweis reflex (or Semmelweis effect): the reflexive, instinctive tendency to reject new evidence or new knowledge because it contradicts established norms, beliefs, or paradigms. [Sources: 0, 6, 8, 11]

In practice, the Semmelweis reflex means that people instinctively avoid, reject, and play down any new evidence or knowledge that goes against their established beliefs, practices, or values. It is a form of belief persistence: people stick to their beliefs even when new information directly contradicts them. [Sources: 1, 12]

Thomas Szasz described the Semmelweis reflex as the “invincible social force of false truths,” a phenomenon so dangerous that it has claimed many lives throughout history. The reflex cuts both ways: it matters as much when flawed medical ideas are accepted prematurely as when sound ones are rejected. Careful study design, scientific rigor, and critical self-examination of one’s manuscript can help researchers avoid falling prey to it. The concept elegantly captures the instinctive rejection of new and unwelcome ideas. [Sources: 3, 4, 12]

The mirror image of the Semmelweis reflex is that we accept new ideas and facts too quickly when they are compatible with our thinking; when they contradict it, as in Semmelweis’s original case, we reject them too easily. This instinctive tendency to reject new evidence because it contradicts established beliefs, the “Semmelweis reflex,” makes us dismiss difficult new ideas out of hand. We can learn to avoid it by holding our beliefs less tightly and letting go of our biases when new evidence emerges. [Sources: 4, 12]

Awareness can increase the likelihood of recognizing the Semmelweis reflex before it takes hold, but as with all psychological phenomena, a number of other confounding and competing variables interact in decision making. In scenarios where evidence for alternative explanations of observed phenomena emerges, the biases described above can trigger an automatic tendency to reject the new knowledge. [Sources: 10]

— Slimane Zouggari

 

##### Sources #####

[0]: https://nutritionbycarrie.com/2020/07/weight-bias-healthcare-2.html

[1]: https://www.ideatovalue.com/curi/nickskillicorn/2021/08/the-semmelweis-reflex-bias-and-why-people-continue-to-believe-things-which-are-proved-wrong/

[2]: https://riskacademy.blog/53-cognitive-biases-in-risk-management-semmelweis-reflex-alex-sidorenko/

[3]: https://pubmed.ncbi.nlm.nih.gov/31837492/

[4]: https://nesslabs.com/semmelweis-reflex

[5]: https://www.infectioncontroltoday.com/view/contemporary-semmelweis-reflex-history-imperfect-educator

[6]: https://rethinkingdisability.net/lessons-for-the-coronavirus-pandemic-on-the-cruciality-of-peripheral-knowledge-handwashing-and-the-semmelweis-reflex/

[7]: https://iqsresearch.com/the-semmelweis-reflex-lifting-the-curtain-of-normalcy/

[8]: https://www.renesonneveld.com/post/the-semmelweis-reflex-in-corporate-life-and-politics

[9]: https://www.encyclo.co.uk/meaning-of-Semmelweis_reflex

[10]: http://theurbanengine.com/blog//the-semmelweis-reflex

[11]: https://www.alleydog.com/glossary/definition.php?term=Semmelweis+Reflex+%28Semmelweis+Effect%29

[12]: https://qvik.com/news/ease-of-rejecting-difficult-new-ideas-semmelweiss-reflex-explained/

[13]: https://whogottheassist.com/psychology-corner-the-semmelweis-reflex/

[14]: https://www.nas.org/academic-questions/34/1/beware-the-semmelweis-reflex

Selective Perception

In-group favoritism, also known as in-group bias, intergroup bias, or in-group preference, is a pattern of favoring members of one’s own group over members of an out-group. [Sources: 3]

In many different contexts, people act more prosocially toward members of their own group than toward members of out-groups. Beliefs about reciprocity are shaped by both group membership and interdependence: people have higher expectations of reciprocity from fellow group members, and this produces in-group favoritism (Locksley et al., 1980). If in-group favoritism arises from social preferences based on depersonalization, in which the in-group is included in the self, then the individuals who identify most strongly with their group should also be those who act most prosocially toward in-group members. Moreover, the social identity perspective suggests that in-group bias should be stronger among people who identify more strongly with their nation as a social group. [Sources: 7, 8]

Several theories explain why in-group prejudice appears, the most important being social identity theory. Over the years, however, research on group bias has shown that group membership affects our perceptions at a very basic level, even when people are divided into groups on the basis of completely meaningless criteria. [Sources: 10]

The classic study demonstrating the strength of this bias was conducted by psychologists Michael Billig and Henri Tajfel. Consistent with this view, participants who tended to position themselves against others through social comparison exhibited a stronger in-group bias: they may have felt more threatened by the idea that the other group might be right about politics. These results contradict other researchers’ findings that in-group bias stems from mere group membership. [Sources: 8, 10]

Rather than arising automatically wherever a group forms, group favoritism may emerge only when people expect their good deeds to be reciprocated by members of their own group. The strength of the effect varies greatly, and it may or may not be accompanied by a genuinely negative attitude toward those outside the group. Similarity bias reflects the human tendency to focus on ourselves and prefer those who are like us; in-group bias, in effect, is one of the ways managers show favoritism in their judgments. [Sources: 5, 10]

Those lucky enough to belong to the executive’s “inner” circle receive particularly positive reviews, while those outside it do not. In the same way, a teacher may have a favorite student because of in-group favoritism. Selective perception, for its part, can refer to any number of cognitive biases in psychology related to how expectations affect perception. [Sources: 0, 5]

Human judgment and decision making are distorted by a range of cognitive, perceptual, and motivational biases, and people tend to be unaware of their own biases even though they readily recognize (and even overestimate) the effect of bias on other people’s judgments. People exhibit this bias when they selectively collect or recall information, or when they interpret it in a distorted way; the effect is stronger for emotionally charged issues and deeply rooted beliefs. [Sources: 0, 2]

One form this takes is biased interpretation: people evaluate evidence against their existing beliefs asymmetrically, treating corroborating evidence differently from evidence that refutes their preconceptions. To minimize dissonance, people accommodate confirmation bias by avoiding information that contradicts their beliefs and seeking out evidence that supports them. The take-home message: confirmation bias is the tendency to favor information that confirms one’s existing beliefs or assumptions, and selective perception is a related form of bias in which we interpret information according to our existing values and beliefs. [Sources: 0, 2]
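To make the mechanism concrete, here is a toy model in Python. The weighting scheme, step size, and evidence values are assumptions chosen for illustration, not empirical parameters: an agent that discounts disconfirming evidence grows more convinced even when the evidence is perfectly balanced.

```python
# Toy model of confirmation bias: confirming evidence is weighted fully,
# disconfirming evidence is discounted. All parameters are illustrative.

def biased_update(belief, evidence, confirm_w=1.0, disconfirm_w=0.3, step=0.02):
    """Nudge a belief (0..1) by each piece of evidence (+1 supports the
    proposition, -1 refutes it), discounting evidence that contradicts
    the agent's current leaning."""
    for e in evidence:
        confirms = (e > 0) == (belief > 0.5)
        weight = confirm_w if confirms else disconfirm_w
        belief = min(max(belief + step * weight * e, 0.0), 1.0)
    return belief

# Perfectly balanced evidence: 10 supporting and 10 refuting items.
balanced = [1, -1] * 10
print(round(biased_update(0.7, balanced), 2))  # -> 0.84: belief strengthens
```

An unbiased updater (equal weights) would end back at 0.7; the discount applied to disconfirming items is what drives the belief upward.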

Although we should strive to be as fair as possible in our judgments, in reality we all have biases that affect them, and managers are no exception: many common biases affect their evaluations of employees. Among the most common are stereotyping, selective perception, confirmation bias, first-impression bias, recency bias, in-group bias, and similarity bias. [Sources: 5]

While a particular stereotype about a social group may not fit a given individual, people tend to remember stereotype-consistent information better than any evidence to the contrary (Fyock & Stangor, 1994). Hence, a stereotype is automatically activated in the presence of a member of the stereotyped group and can influence the perceiver’s thinking and behavior. However, people whose personal beliefs reject bias and discrimination may deliberately try to suppress the stereotype’s influence on their thoughts and behavior. [Sources: 2, 4]

Therefore, if implicit stereotypes amount to a potentially uncontrollable cognitive bias, the question arises of how to account for their effects in decision making, especially for a person sincerely striving for unbiased judgment. Confirmation bias also affects professional diversity, since preconceived notions about different social groups can introduce discrimination (albeit unconscious) into the recruitment process (Agarwal, 2018). Another disturbing finding is that in-group prejudice, and the discrimination associated with it, appears in people from an early age. [Sources: 2, 4, 10]

One study found that although both women and men hold more favorable views of women, women’s in-group biases were 4.5 times stronger [25] than men’s, and only women (not men) showed cognitive balance among in-group bias, identity, and self-esteem, suggesting that men lack a mechanism that reinforces automatic own-gender preference. In another series of studies, conducted in the 1980s by Jennifer Crocker and colleagues using the minimal group paradigm, people with high self-esteem who experienced a threat to their self-esteem showed greater in-group bias than people with low self-esteem who experienced the same threat. On the other hand, researchers may have used inappropriate measures of self-esteem to test the link between self-esteem and in-group bias (global personal self-esteem rather than specific social self-esteem). [Sources: 3]

Like self-serving bias, group-serving attributions can have a self-enhancing function, making people feel better by generating favorable explanations for their in-group’s behavior. Group-serving bias, sometimes called the ultimate attribution error, describes the tendency to make internal attributions for our in-group’s successes and external attributions for its failures, and to apply the opposite attribution pattern to out-groups (Taylor & Doria, 1981). In other words, we show group-serving bias when we make more favorable attributions about our in-groups than about our out-groups. [Sources: 1]

But in-group bias is not only friendliness toward our own group; it can also mean harm toward the out-group. If group-serving bias explained most cross-cultural differences in attribution, then when the actor is an American, Chinese observers should be more likely to blame out-group members by making internal attributions for their failures, while Americans should make more external and more lenient attributions for members of their in-group. Earlier empirical work from the social identity perspective on in-group bias in selective news exposure shows that low-status groups in particular exhibit this bias (Appiah et al., 2013; Knobloch-Westerwick & Hastall, 2010); perhaps the other countries represented were not a sufficiently relevant comparison group for the American participants, or did not represent the kind of higher-status group that can provoke in-group bias, as in those earlier studies. [Sources: 1, 8, 10]

— Slimane Zouggari

 

 

##### Sources #####

[0]: https://theintactone.com/2018/12/16/cb-u2-topic-9-selective-perception-common-perceptions-of-colours/

[1]: https://opentextbc.ca/socialpsychology/chapter/biases-in-attribution/

[2]: https://www.simplypsychology.org/confirmation-bias.html

[3]: https://en.wikipedia.org/wiki/In-group_favoritism

[4]: https://www.nature.com/articles/palcomms201786

[5]: https://courses.lumenlearning.com/suny-principlesmanagement/chapter/common-management-biases/

[6]: https://www.psychologytoday.com/us/basics/bias

[7]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4327620/

[8]: https://journals.sagepub.com/doi/full/10.1177/0093650217719596

[9]: https://link.springer.com/article/10.1007/s10670-020-00252-1

[10]: https://thedecisionlab.com/biases/in-group-bias/