Simpson’s paradox, also known as the Yule-Simpson effect, is a fascinating phenomenon that illustrates the importance of causal reasoning. It is often introduced as a reason why we need to study statistics, because it shows how easily we can be misled when we make decisions on instinct rather than on tried and tested facts.

Ironically, in recent times the paradox has also been used to illustrate that even statistics and facts have their limits, and that there should still be a place for intuition in decision making. When Simpson’s paradox was first described, it was widely believed that the result of a statistical analysis was enough on its own, and that there was no need to consider other factors that could influence the final outcome.

In this post, we will look at what Simpson’s Paradox is and its history, and later we will consider its effect on experimental research and how it can be avoided.

What is Simpson’s Paradox?

Simpson’s paradox is a situation in which an apparent relationship between two variables reverses, or disappears, when the data are split into subgroups. It typically happens when the data for the two variables come from groups that share a confounding variable.

In other words, a confounding variable was present during the analysis. That variable distorts what the data appear to say, and because the researcher did not expect it to be in the data, it was not accounted for before the test was conducted. When this happens, the result of the test will be inaccurate and it can lead the researcher to a false conclusion.

When two or more tables showing the frequencies for particular combinations of values are collapsed into a single table, the result of that aggregated analysis can differ from the result obtained from the original, separate tables.

For example, assume you want to study 100 people to understand the causes of weight gain, and you split them into equal sample sizes of 50 women and 50 men.

In your study, you find that not exercising is associated with weight gain. The confounding variables are the other factors you did not consider in your test. Because you did not account for them, you cannot be certain that lack of exercise alone causes weight gain. For instance, how much food do people eat? If the men in your sample eat more than the women, then gender becomes a confounding variable.

What is the age distribution of the sample groups? If, say, all the women in the test are in their teens while the men are in their 40s, age should also be considered as a factor in weight gain rather than exercise alone. Likewise, if the sample is distributed unevenly across groups, the result will be distorted. Unequal distribution of data, combined with ignoring age and gender as confounding variables, can make the test inaccurate and the researcher’s conclusion wrong, even while it appears to support lack of exercise as the cause of weight gain.

Hence, Simpson’s paradox occurs when two conditions are present: an ignored confounding variable with a powerful effect, and an unequal distribution of that confounding variable across the groups being compared.

The zero-order relationship between the independent and dependent variables can only be reversed if the effect of the confounding variable is strong enough and its distribution across the groups is uneven enough. The problem behind a paradoxical result is therefore not only the unequal distribution of sample sizes but also other issues that affect the validity of results, such as statistical power.
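To make the reversal in the exercise example above concrete, here is a minimal sketch in Python using pandas. The counts, column names, and the small helper function are invented purely for illustration; they are not from any real study. Within each gender, the people who exercise gain weight less often, yet the pooled data suggest the opposite.

```python
# Illustrative only: these counts are made up to reproduce the kind of
# reversal described above; they are not from any real study.
import pandas as pd

rows = []

def add(gender, exercises, gained, count):
    # Append `count` identical records to the dataset.
    rows.extend(
        {"gender": gender, "exercises": exercises, "gained_weight": gained}
        for _ in range(count)
    )

# Men: 40 exercise (28 gain weight), 10 do not exercise (8 gain weight)
add("male", True, True, 28)
add("male", True, False, 12)
add("male", False, True, 8)
add("male", False, False, 2)

# Women: 10 exercise (2 gain weight), 40 do not exercise (12 gain weight)
add("female", True, True, 2)
add("female", True, False, 8)
add("female", False, True, 12)
add("female", False, False, 28)

df = pd.DataFrame(rows)

# Pooled rate of weight gain by exercise status:
# exercisers 60%, non-exercisers 40% (exercise looks "worse")
print(df.groupby("exercises")["gained_weight"].mean())

# Within each gender the trend reverses:
# men 70% vs 80%, women 20% vs 30% (exercisers gain weight less often)
print(df.groupby(["gender", "exercises"])["gained_weight"].mean())
```

The pooled comparison is misleading because the men, who gain weight more often regardless of exercise, make up most of the exercisers in this made-up sample.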

Why Simpson’s Paradox Occurs in Research

Simpson’s paradox is said to have occurred when the trend present in the combined data reverses once the data are split into groups and analyzed separately. In research, this happens when the split leaves some groups with an unequal representation of a third variable compared with the others. That third variable is a confounding variable, and the imbalance in its association with the two variables of interest is what drives the reversal.

Simpson’s paradox can happen in:

  1. Psychological science studies
  2. Statistical analysis
  3. Data analysis

Simpson’s Paradox is a problem because when the analysis produces two different outcomes, it becomes difficult to draw an accurate conclusion. Intuitively, the researcher may be tempted to trust the subgroup results on the assumption that they contain more information, but there is a chance that the grouping variable is itself a confounder. Arriving at two different outcomes is possible in any analysis; a Simpson’s paradox occurs specifically when the trend in the original data is reversed in the segregated groups.

History of Simpson’s Paradox 

Edward H. Simpson first described the phenomenon in 1951. According to him, Simpson’s paradox is a statistical event in which the relationship between two variables reverses when the data of a sample group are analyzed both combined and divided by a third variable. However, Karl Pearson in 1899 and Udny Yule in 1903 had already described related effects. All three revealed relationships that vanished when the data being analyzed were aggregated.

In 1934, Cohen and Nagel were the first to report a sign reversal in an analysis, and Blyth named the reversal they observed a “paradox” in 1972. Simpson himself noted that when the background story behind the data is suggestive, the more “sensible interpretation” can sometimes be the one based on the aggregate data and sometimes the one based on the divided subgroups.

In the 1950s, large statistical datasets were considered sufficient to prove an analysis, and bringing in intuition or background knowledge was frowned upon. But in 1981, Lindley and Novick showed that no statistical criterion on its own can warn a researcher against drawing the wrong conclusion, nor tell the researcher which analysis outcome is the true one, thereby supporting Simpson’s observation.

The questions Simpson’s paradox raises are: at which level of aggregation should the data be interpreted, how can the likely confounding variables be detected, and what guidelines decide which variables should shape the researcher’s conclusion?

Simpson’s paradox is also known as the amalgamation paradox, the reversal paradox, and the Yule-Simpson effect.

How to Detect Simpson’s Paradox During Research Analysis

It can be difficult to predict the extent to which Simpson’s paradox is likely to occur in experimental research before the data have been analyzed. Therefore, at the planning stage, identify the groups that are likely to carry a confounding variable. A good way to do this is to cross-check the results of multiple analyses: if a confounding variable is present in some of them, it may show up as instability in the associations found across those analyses.

You can also carry out a pilot test to identify confounding variables and determine which grouping best describes the hypothesis being analyzed. Once you have identified the likely confounders and decided on the grouping of interest, choose a design and analysis method that can manage those confounders in your area of research. Keep in mind that if the confounding variables are distributed equally across all the groups, Simpson’s paradox cannot occur.

There are also tools that can help detect the presence of Simpson’s paradox in a dataset. For example, there is an R package that helps a researcher check for Simpson’s paradox in their data.

The researcher specifies which variable is the dependent variable, which is the independent variable, and which variable the data should be divided into groups by. The package then checks for the presence of Simpson’s paradox only across the grouping specified in advance; it does not search the entire dataset on its own.
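For researchers not working in R, a simple version of that check can be sketched in Python. This is a minimal, illustrative sketch, not any published package’s method: it compares the direction of the association between two numeric variables in the pooled data with the direction within each subgroup, and flags a full reversal. The function name and arguments are invented for this example.

```python
# Minimal detection sketch (illustrative, not a published package's algorithm):
# flag a possible Simpson's paradox when the pooled correlation between x and y
# points in the opposite direction to the correlation inside every subgroup.
import pandas as pd

def flags_simpsons_paradox(df: pd.DataFrame, x: str, y: str, group: str) -> bool:
    pooled_positive = df[x].corr(df[y]) > 0
    subgroup_positive = [
        sub[x].corr(sub[y]) > 0
        for _, sub in df.groupby(group)
        if len(sub) > 2  # need a few points for a meaningful correlation
    ]
    # Reversal: every subgroup disagrees with the pooled direction.
    return bool(subgroup_positive) and all(
        sign != pooled_positive for sign in subgroup_positive
    )
```

Here x and y stand for the independent and dependent variables and group for the candidate confounder; in practice you would also look at effect sizes and sample sizes rather than the sign of the correlation alone.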

Examples of Simpson’s Paradox

A study of smoking and long-term survival among women was published in 1996 by Appleton and colleagues. In the aggregate data, a higher percentage of the women who smoked were still alive at follow-up than of the women who did not smoke. When the sample was broken down by age group, however, the results showed that within each age group the women who did not smoke had a better chance of surviving than the women who smoked.

To understand Simpson’s paradox in this example, note that in the initial analysis the age of the women was ignored, and age is the confounding variable. The outcome of interest, the women’s survival, is strongly associated with age. Once age is included in the analysis, the survival rates differ across the age groups and the apparent survival advantage of the smokers disappears.

Both of the factors that cause Simpson’s paradox were present in this analysis: a confounding variable (age) that was unevenly distributed across the comparison groups, and the decision to ignore it. Without accounting for age, the reason the outcomes reverse would remain unclear; with it, the reversal makes sense, because the smokers were concentrated in the younger age groups while the older age groups, where deaths are naturally more common, contained relatively few smokers.
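A small arithmetic sketch makes this pattern easier to see. The counts below are invented purely to mimic the shape of the published result; they are not the study’s actual figures. Smokers appear to survive better in the pooled totals, yet non-smokers survive better inside each age band.

```python
# Hypothetical counts chosen only to reproduce the shape of the reversal;
# they are NOT the figures from Appleton et al. (1996).
# (age_band, smoker) -> (survived, total)
counts = {
    ("18-44", True):  (80, 100),   # younger smokers: 80% survive
    ("18-44", False): (45, 50),    # younger non-smokers: 90% survive
    ("65+", True):    (5, 20),     # older smokers: 25% survive
    ("65+", False):   (40, 130),   # older non-smokers: ~31% survive
}

def survival_rate(smoker, age_band=None):
    # Pool over all age bands when age_band is None.
    relevant = [
        v for (band, is_smoker), v in counts.items()
        if is_smoker == smoker and (age_band is None or band == age_band)
    ]
    survived = sum(s for s, _ in relevant)
    total = sum(t for _, t in relevant)
    return survived / total

print(survival_rate(True), survival_rate(False))                    # pooled: ~0.71 vs ~0.47
print(survival_rate(True, "18-44"), survival_rate(False, "18-44"))  # 0.80 vs 0.90
print(survival_rate(True, "65+"), survival_rate(False, "65+"))      # 0.25 vs ~0.31
```

The pooled comparison flatters the smokers only because, in these made-up counts, most of the smokers sit in the younger band, where hardly anyone dies.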

Let’s look at another example. If you want to investigate the revenue of an international organization, you would break the analysis down by the company’s products and by the time and year the revenue was generated. The result of the analysis will provide insight into what action should be taken to sustain the business, but that insight can be false if it is interpreted wrongly. It is important to know that even good data can lead to wrong decisions when the interpretation is faulty.
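As a hedged sketch with invented figures, the same reversal can appear in revenue data: the average revenue per sale seems to fall from one year to the next even though it rises within every product line, simply because the product mix shifted toward the cheaper line.

```python
# Hypothetical revenue figures, invented for illustration only.
import pandas as pd

sales = pd.DataFrame({
    "year":    [2020, 2020, 2020, 2020, 2021, 2021, 2021, 2021],
    "product": ["premium", "premium", "basic", "basic",
                "premium", "basic", "basic", "basic"],
    "revenue": [100, 110, 20, 22, 120, 24, 25, 26],
})

# Pooled: the average revenue per sale appears to fall from 2020 to 2021
# (63.0 -> 48.75) ...
print(sales.groupby("year")["revenue"].mean())

# ... yet within each product line it rises
# (basic: 21.0 -> 25.0, premium: 105.0 -> 120.0).
print(sales.groupby(["product", "year"])["revenue"].mean())
```

A report that only looked at the pooled averages would recommend the wrong action, which is exactly the kind of misinterpretation described above.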

Effects of Simpson’s Paradox in Experimental Research

The effect of Simpson’s paradox in experimental research is that a false association can lead to an incorrect conclusion. On the back of that incorrect conclusion, a researcher may adopt the wrong treatment or even build further studies on it, which is a misuse of resources and effort and a waste of time. In experimental research, the results can differ depending on whether the confounding variable is taken into account before the test is conducted.

How to Avoid Simpson’s Paradox

Simpson’s paradox can be avoided in a study if the most appropriate experimental design is used. At the planning stage, the choice of analysis should account for the confounding variables so that the study gives the most accurate possible answer to the research question. When unequal distribution of data into groups is combined with undetected confounding variables, a Simpson’s paradox can occur.

Hence, to avoid drawing a conclusion that differs from the correct outcome of the study, the most suitable experimental design should be chosen and the subjects distributed across the sample groups accordingly.

To ensure balanced groups in an experimental design, the following three designs should be considered (a short randomization sketch follows the list).

1. Simple randomization: Simple randomization means randomly assigning subjects to the sample groups. Its advantage is that every subject has an equal chance of being assigned to any group, and the method balances both known and unknown variables on average.

It works best with a large sample size; the method may not be effective when it is applied to a small sample.

2. Randomized block design: In a randomized block design, the subjects are first grouped into blocks according to a shared characteristic, and the treatment is then randomized within each block. This method reduces the impact of confounding variables.

For example, to examine abdominal pain in infants, the treatment would be administered randomly within each gender, with the females in one block and the males in the other. This way, the outcome of the treatment can be assessed against the frequency of the pain in the male and female subjects separately. The researcher should note that this method becomes difficult to manage when there are many blocking variables.

3. Minimization: Minimization assigns subjects to groups in a way that keeps the groups equivalent, so that the likely confounding variables end up equally distributed. It is the better method when a study has many confounding variables. Each new subject is assigned by calculating the level of imbalance that would result from placing them in each group, and the group that minimizes the imbalance receives that subject.

Minimization is less effective when the confounding variables are related to one another. A researcher should also know that minimization works best when the subjects are spread evenly across the factor levels, and that higher weight should be given to factors with few subjects in order to maintain balance.
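As a rough illustration of the first two designs above, here is a minimal Python sketch of simple randomization versus randomization within gender blocks. The subject records and field names are invented, and the sketch is not a substitute for a proper randomization procedure.

```python
# Illustrative sketch of simple vs. blocked randomization (invented field names).
import random

subjects = [{"id": i, "gender": "female" if i % 2 == 0 else "male"} for i in range(40)]

def simple_randomization(subjects, seed=0):
    """Shuffle everyone, then split in half: treatment vs. control."""
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

def block_randomization(subjects, block_key, seed=0):
    """Randomize separately inside each block (e.g. gender) so that every
    block contributes equally to the treatment and control groups."""
    rng = random.Random(seed)
    treatment, control = [], []
    blocks = {}
    for subject in subjects:
        blocks.setdefault(subject[block_key], []).append(subject)
    for members in blocks.values():
        members = list(members)
        rng.shuffle(members)
        half = len(members) // 2
        treatment.extend(members[:half])
        control.extend(members[half:])
    return treatment, control

treatment, control = block_randomization(subjects, "gender")
print(sum(s["gender"] == "female" for s in treatment), "females in treatment")  # 10
print(sum(s["gender"] == "female" for s in control), "females in control")      # 10
```

With simple randomization the split is only balanced on average; blocking guarantees the balance on the blocking variable, which is what keeps a confounder from being unevenly distributed across the groups.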

To avoid Simpson’s paradox, the researcher has the responsibility of choosing whichever of these designs is most appropriate for answering the research question while also managing the confounding variables.

The researcher should then determine what the actual association between the dependent and independent variables is, and check whether the effects observed within the groups or strata accurately reflect the original hypothesis about the sample.


FAQs about Simpson’s Paradox

  • How common is Simpson’s paradox?

According to reports from researchers in 2009, Simpson’s paradox may be present in more research than we might expect. It has been estimated that Simpson’s paradox can arise in about 1.67% of analyses of random data aggregated into equal groups. Other research suggests that Simpson’s paradox may also occur in experimental studies without being detected by the researcher.

  • Can Simpson’s paradox occur in correlations?

Yes. A correlation between two variables computed on the pooled data can be positive while the correlation within each subgroup is negative, or the other way around. The reversal happens when the subgroups differ enough in their effects that the pooled relationship between the two variables no longer reflects the relationship inside the groups.

Conclusion

In this post, we have seen how Simpson’s Paradox comes into play. Simpson’s paradox is no doubt tricky, but a researcher equipped with the right tools and sound knowledge can manage it well. We hope this article provides insights that help you better understand this paradox.

