Let’s say you want to study the effects of a new drug on lowering blood pressure. You could randomly assign half of the participants to receive the drug and the other half to receive a placebo. However, this isn’t fair to the patients in the placebo group.
So, instead of experimenting to find out the effect of the new drug, you use a quasi-experiment to evaluate the drug. The purpose of quasi-experimental research is to establish a causal relationship between an independent variable and a dependent variable.
This guide will discuss the different types of quasi-experimental research, their practical applications, and the best practices for conducting successful quasi-experimental research.
Quasi-experimental research is a way of finding out if there’s a cause-and-effect relationship between variables when true experiments are not possible because of practical or ethical constraints.
For example, you want to know if a new medicine is effective for migraines. Instead of randomly giving the medication to some people and withholding it from others, you use quasi-experimental research to compare people who already take the medication with people who don’t.
Quasi-experimental research doesn’t always have the same level of internal validity as true experiments. Pre-existing or self-selected groups can be biased and may not represent the broader population.
In a true experiment, participants are randomly assigned to the experimental or control group. This ensures the groups are as similar as possible, except for the treatment they receive.
Quasi-experiments don’t randomly assign participants to groups, so pre-existing differences between the groups can affect the results.
The pretest-posttest design captures change by measuring participants before and after an intervention. The pretest measures the dependent variable before the intervention, while the posttest measures it afterward.
The difference between the two measurements is taken as the change that occurred due to the intervention.
However, it is important to be aware of potential threats to the internal validity of the pretest-posttest design. One threat is selection bias, which arises when the group studied is not equivalent to the population you want to draw conclusions about.
Another is maturation, which occurs when participants change naturally over time. You can mitigate these threats by adding a control group and, where feasible, using randomization and blinding techniques.
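A minimal sketch of the one-group pretest-posttest comparison, using hypothetical blood-pressure readings (all numbers are made up for illustration):

```python
from statistics import mean

# Hypothetical systolic blood-pressure readings (mmHg) for the same
# five participants, measured before and after the intervention.
pretest = [150, 148, 152, 149, 151]
posttest = [138, 140, 141, 137, 142]

# In a one-group design, the mean change is the whole estimate; adding a
# control group would help separate the drug's effect from maturation.
change = mean(posttest) - mean(pretest)
print(round(change, 1))  # negative value = blood pressure dropped
```

Note that without a control group, this change score cannot distinguish the intervention’s effect from natural change over time.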
The posttest-only design with nonequivalent groups is similar to the pretest-posttest design, but it does not include a pretest. You assess the effect of the intervention by comparing the groups’ posttest scores.
The difference in scores determines the impact of the intervention. This design is less powerful than the pretest-posttest design because it does not account for potential differences between the groups at baseline.
However, the posttest-only design is still a valuable tool, especially when you can’t collect pretest data. You can mitigate its limitations by matching participants on important characteristics, using statistical adjustment, and performing sensitivity analyses.
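A sketch of the posttest-only comparison with made-up scores, which also shows why baseline differences are a worry: the raw group difference is the entire estimate.

```python
from statistics import mean

# Hypothetical posttest scores for two nonequivalent groups.
treated = [82, 85, 79, 88, 84]
control = [74, 78, 72, 77, 75]

# With no pretest, the estimated effect is simply the difference in group
# means, which conflates the intervention's effect with any differences
# that existed between the groups before the study began.
estimate = mean(treated) - mean(control)
print(round(estimate, 1))
```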
The regression discontinuity design uses naturally occurring cutoff points to assign participants to treatment and control groups. It’s widely adopted in education and social policy research.
For example, a talent recruiter might use the cutoff score on a standardized test to move candidates to the next phase of the application.
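A minimal sketch of the regression discontinuity logic, comparing candidates just above and just below a hypothetical cutoff (all scores and outcomes below are made up):

```python
from statistics import mean

CUTOFF = 70      # hypothetical passing score on the standardized test
BANDWIDTH = 10   # only compare candidates near the cutoff

# (test_score, later_outcome) pairs -- all values are illustrative.
candidates = [(62, 55), (65, 58), (68, 60), (69, 61),
              (71, 72), (73, 74), (76, 75), (79, 78)]

# Candidates at or above the cutoff advanced (treatment); those below did not.
above = [y for score, y in candidates if CUTOFF <= score <= CUTOFF + BANDWIDTH]
below = [y for score, y in candidates if CUTOFF - BANDWIDTH <= score < CUTOFF]

# The local difference in means near the cutoff estimates the treatment effect,
# on the assumption that candidates just above and just below are comparable.
effect = mean(above) - mean(below)
print(effect)
```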
The interrupted time series design uses time series data to determine how an intervention or event affects a population. In this design, you measure the dependent variable multiple times, both before and after the intervention.
The interrupted time series design is most commonly used to study the impact of policies or programs. For example, you are studying the impact of a new law on traffic accidents.
You could collect data on the number of traffic accidents each month for a year before and a year after the law was passed. If accidents decrease after the law takes effect, and the drop departs from the pre-existing trend, you could conclude that the law had a positive impact on traffic safety.
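The simplest version of this analysis compares the level of the series before and after the interruption; a fuller model would also account for trend and seasonality. A sketch with made-up monthly accident counts:

```python
from statistics import mean

# Hypothetical monthly traffic-accident counts: 12 months before and
# 12 months after the law took effect (all values are illustrative).
before = [40, 42, 39, 41, 43, 40, 44, 42, 41, 40, 43, 42]
after = [35, 33, 34, 32, 31, 33, 30, 32, 31, 30, 29, 31]

# The simplest interrupted-time-series summary: the shift in the series' level.
level_shift = mean(after) - mean(before)
print(round(level_shift, 1))  # negative value = fewer accidents after the law
```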
Matching techniques are a way to create balanced treatment and control groups in quasi-experimental research. This is done by matching participants on important characteristics, such as age, gender, or socioeconomic status.
Propensity score matching is one of the most popular matching methods. It uses a statistical model to estimate each participant’s probability of being in the treatment group, given their observed characteristics. Then, each treated participant is matched with one or more control participants who have similar propensity scores, making the treatment and control groups as comparable as possible.
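As a sketch of the matching step, assuming the propensity scores have already been estimated (typically by logistic regression) — the participants and scores below are made up:

```python
# Hypothetical propensity scores: participant id -> estimated probability
# of being in the treatment group.
treated = {"t1": 0.62, "t2": 0.48, "t3": 0.81}
controls = {"c1": 0.60, "c2": 0.45, "c3": 0.79, "c4": 0.30}

# Greedy nearest-neighbour matching without replacement: each treated unit
# gets the still-unmatched control with the closest propensity score.
matches = {}
available = dict(controls)
for unit, score in treated.items():
    best = min(available, key=lambda c: abs(available[c] - score))
    matches[unit] = best
    del available[best]

print(matches)  # treated unit -> matched control
```

Real applications use more refined variants (calipers, matching with replacement, full matching), but the idea is the same: compare each treated unit only to controls that look like it.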
An instrumental variable (IV) in quasi-experimental research is a variable that’s related to the independent variable, but not to the error term. It can be used to estimate how the independent variable affects the dependent variable when the treatment itself is confounded.
Let’s say you want to investigate how a new drug reduces the risk of heart attack, and you consider using the number of days a person has taken aspirin as the instrumental variable.
For this to be valid, aspirin use must be associated with taking the new drug (the independent variable) but must affect heart-attack risk (the dependent variable) only through the new drug, not directly. In practice, that assumption is hard to defend here: people who take aspirin are also more likely to take other medications, such as statins, which lower heart-attack risk on their own, so aspirin use would be a poor instrument.
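One way to see the mechanics is the simplest IV estimator, the Wald (ratio) estimator, which divides the instrument’s covariance with the outcome by its covariance with the treatment. All numbers below are fabricated for illustration:

```python
from statistics import mean

def cov(a, b):
    """Population covariance of two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

# Toy data: z is a binary instrument, x is the treatment actually taken,
# y is the outcome. A valid z moves x but affects y only through x.
z = [0, 0, 1, 1, 0, 1, 1, 0]
x = [0, 1, 1, 1, 0, 1, 0, 0]
y = [5, 9, 10, 11, 4, 12, 6, 5]

# Wald estimator: instrument-outcome covariance over instrument-treatment
# covariance, an estimate of the causal effect of x on y.
iv_estimate = cov(z, y) / cov(z, x)
print(iv_estimate)
```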
Difference-in-differences analysis is a statistical technique that can be used to compare changes in treatment and control groups over time. It is typically used in quasi-experimental research to estimate the causal effect of an intervention.
You have to first define two groups when using the difference-in-differences analysis: the treatment group and the control group. A treatment group is a group that receives an intervention, while a control group doesn’t receive an intervention.
Next, collect data on the dependent variable for both groups before and after the intervention. The difference-in-differences estimate is then calculated by comparing the change in the dependent variable for the treatment group to the change in the dependent variable for the control group.
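The calculation itself is a simple double subtraction. A minimal sketch with made-up group means:

```python
# Hypothetical mean outcomes (illustrative values only).
treat_pre, treat_post = 20.4, 21.0      # group that received the intervention
control_pre, control_post = 23.3, 21.2  # group that did not

# Difference-in-differences: the treatment group's change over time,
# minus the control group's change over the same period.
did = (treat_post - treat_pre) - (control_post - control_pre)
print(round(did, 1))
```

The control group’s change stands in for what would have happened to the treatment group without the intervention, which is why the design assumes the two groups would have followed parallel trends.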
For example, David Card and Alan Krueger compared fast-food employment in New Jersey, which raised its minimum wage, with neighboring Pennsylvania, which did not. They found that the minimum wage increase in New Jersey did not lead to job losses.
Strategies for Minimizing Validity Threats
Sample size is the number of participants in a study, while power is the probability of detecting a meaningful effect if it exists.
You have to carefully consider sample size and power when designing a quasi-experiment. Because the groups may not be equivalent at the start of the study, the design is weaker, and larger samples may be needed to detect a real effect.
Using power analysis, you can determine the sample size needed to detect an effect of a given size. A power analysis takes the expected magnitude of the effect, the variability (standard deviation) of the dependent variable, and the chosen alpha level as inputs, and returns the required sample size.
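A rough sketch of such a calculation, using the standard normal-approximation formula for comparing two group means; the effect size and standard deviation below are illustrative:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(effect, sd, alpha=0.05, power=0.80):
    """Approximate n per group for a two-sided, two-sample comparison of means."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) * sd / effect) ** 2)

# E.g., to detect a 5 mmHg drop with sd = 10 mmHg at alpha=0.05, power=0.80:
print(sample_size_per_group(effect=5, sd=10))
```

This normal approximation slightly understates the sample size a t-test would require, but it conveys the trade-off: smaller effects or noisier outcomes demand many more participants.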
A major downside of the quasi-experimental design is that its findings often don’t generalize to other settings. It is typically conducted in natural environments, where you can’t control all the factors that could influence the results.
Carefully consider the context of the study before generalizing the findings of a quasi-experiment. Also, try replicating the study in other settings to see whether the results are consistent.
A study by Universiti Kebangsaan Malaysia used a quasi-experimental design to assess the effectiveness of a new program for preventing childhood obesity. The study found that the program was effective in reducing the risk of obesity, but it was also expensive.
A study by Raj Chetty and his colleagues found that students who attended charter schools in California were more likely to attend college than students who did not. However, the study’s findings have arguably been used to justify admitting academically underqualified students to college.
A study by the RAND Corporation used a quasi-experimental design to assess the effects of a job training program on employment and earnings.
The study found that job training programs were effective in increasing employment and earnings, but they also found that the impact varied depending on the characteristics of the participants and the design of the program.
Quasi-experimental research is a valuable tool for understanding the causal effects of interventions. It is particularly useful when you can’t conduct actual experiments because of ethical or practical constraints.
However, it is important to be aware of the limitations of this type of research. Carefully design the study and consider the limitations to ensure that the findings are accurate and reliable.