Quasi-Random vs. Random Assignment

A quasi-experiment is an empirical study used to estimate the causal impact of an intervention on its target population without random assignment. Quasi-experimental research shares similarities with the traditional experimental design or randomized controlled trial, but it specifically lacks the element of random assignment to treatment or control. Instead, quasi-experimental designs typically allow the researcher to control assignment to the treatment condition, but using some criterion other than random assignment (e.g., an eligibility cutoff mark).[1] In some cases, the researcher may have no control over assignment to treatment. Quasi-experiments are subject to concerns regarding internal validity, because the treatment and control groups may not be comparable at baseline. With random assignment, study participants have the same chance of being assigned to the intervention group or the comparison group. As a result, differences between groups on both observed and unobserved characteristics would be due to chance, rather than to a systematic factor related to treatment (e.g., illness severity), and any change in characteristics post-intervention is likely attributable to the intervention. With quasi-experimental studies, it may not be possible to convincingly demonstrate a causal link between the treatment condition and observed outcomes. This is particularly true if there are confounding variables that cannot be controlled or accounted for.[2]
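The balancing property of random assignment can be illustrated with a small simulation. This is a hypothetical sketch: the covariate name, sample size, and seeds are invented for illustration, not taken from any study described here.

```python
import random
import statistics

def randomly_assign(units, seed=0):
    """Split a list of baseline covariate values into two equal groups
    completely at random (an illustrative procedure, not any study's
    actual protocol)."""
    rng = random.Random(seed)
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Simulated baseline illness-severity scores for 2,000 participants.
rng = random.Random(1)
severity = [rng.gauss(50, 10) for _ in range(2000)]

treated, control = randomly_assign(severity, seed=2)

# With random assignment, the groups are balanced in expectation:
# the baseline means differ only by chance, and the gap shrinks as
# the sample grows.
baseline_gap = abs(statistics.mean(treated) - statistics.mean(control))
```

In a quasi-experiment, by contrast, the split would follow a cutoff or self-selection rule, so the two groups could differ systematically at baseline.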


The first part of creating a quasi-experimental design is to identify the variables. The quasi-independent variable will be the x-variable, the variable that is manipulated in order to affect a dependent variable. “X” is generally a grouping variable with different levels. Grouping means two or more groups, such as two groups receiving alternative treatments, or a treatment group and a no-treatment group (which may be given a placebo - placebos are more frequently used in medical or physiological experiments). The predicted outcome is the dependent variable, which is the y-variable. In a time series analysis, the dependent variable is observed over time for any changes that may take place. Once the variables have been identified and defined, a procedure should then be implemented and group differences should be examined.[3]

In an experiment with random assignment, study units have the same chance of being assigned to a given treatment condition. As such, random assignment ensures that both the experimental and control groups are equivalent. In a quasi-experimental design, assignment to a given treatment condition is based on something other than random assignment. Depending on the type of quasi-experimental design, the researcher might have control over assignment to the treatment condition but use some criterion other than random assignment (e.g., a cutoff score) to determine which participants receive the treatment, or the researcher may have no control over the treatment condition assignment, and the criteria used for assignment may be unknown. Factors such as cost, feasibility, political concerns, or convenience may influence how or if participants are assigned to a given treatment condition, and as such, quasi-experiments are subject to concerns regarding internal validity (i.e., can the results of the experiment be used to make a causal inference?).

Quasi-experiments are also effective because they can use pre-post testing. This means that tests are administered before any treatment data are collected, to check for person confounds or pre-existing tendencies among participants. The actual experiment is then conducted and post-test results are recorded. These data can be compared as part of the study, or the pre-test data can be included in an explanation of the experimental data. Quasi-experiments have independent variables that already exist, such as age, gender, or eye color. These variables can be continuous (age) or categorical (gender). In short, naturally occurring variables are measured within quasi-experiments.[4]
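A minimal sketch of the pre-post logic, with invented scores (the groups, numbers, and helper function are hypothetical illustrations, not data from any study cited here):

```python
import statistics

def mean_change(pre, post):
    """Average pre-to-post change across paired scores for one group."""
    return statistics.mean(b - a for a, b in zip(pre, post))

# Hypothetical pre- and post-test scores for a treated group and an
# untreated comparison group.
pre_treat, post_treat = [10, 12, 11, 13], [15, 16, 14, 18]
pre_comp, post_comp = [11, 12, 10, 13], [12, 13, 11, 14]

# Comparing changes rather than post-test levels alone partially
# adjusts for baseline differences between the two groups.
effect = mean_change(pre_treat, post_treat) - mean_change(pre_comp, post_comp)
```

Here the treated group improves by 4.25 points on average and the comparison group by 1.0, so the pre-post comparison attributes a difference of 3.25 points to the treatment, subject to the usual quasi-experimental caveats about confounding.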

There are several types of quasi-experimental designs, each with different strengths, weaknesses and applications. These designs include (but are not limited to):[5]

- the nonequivalent groups design
- the regression discontinuity design
- the interrupted time-series design

Of all of these designs, the regression discontinuity design comes the closest to the experimental design, as the experimenter maintains control of the treatment assignment and it is known to “yield an unbiased estimate of the treatment effects”.[5]:242 It does, however, require large numbers of study participants and precise modeling of the functional form between the assignment and the outcome variable, in order to yield the same power as a traditional experimental design.
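The logic of the regression discontinuity design can be sketched as follows: treatment is assigned by a cutoff on a score, and the effect is estimated as the jump between regression lines fitted on each side of the cutoff. This is a toy example with noise-free invented data; real applications require careful modeling of the functional form, as the paragraph above notes.

```python
def ols_line(xs, ys):
    """Simple least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rdd_effect(scores, outcomes, cutoff):
    """Sharp RDD estimate: the gap between the two fitted lines at the cutoff."""
    below = [(s, y) for s, y in zip(scores, outcomes) if s < cutoff]
    above = [(s, y) for s, y in zip(scores, outcomes) if s >= cutoff]
    a0, b0 = ols_line([s for s, _ in below], [y for _, y in below])
    a1, b1 = ols_line([s for s, _ in above], [y for _, y in above])
    return (a1 + b1 * cutoff) - (a0 + b0 * cutoff)

# Invented noise-free data: the outcome rises with the assignment score,
# plus a treatment effect of 5 for units at or above the cutoff of 50.
scores = list(range(30, 71))
outcomes = [2 * s + (5 if s >= 50 else 0) for s in scores]
effect = rdd_effect(scores, outcomes, cutoff=50)
```

Because assignment depends only on the observed score, comparing units just on either side of the cutoff recovers the treatment effect without random assignment.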

Though quasi-experiments are sometimes shunned by those who consider themselves to be experimental purists (leading Donald T. Campbell to coin the term "queasy experiments" for them),[6] they are exceptionally useful in areas where it is not feasible or desirable to conduct an experiment or randomized controlled trial. Such instances include evaluating the impact of public policy changes, educational interventions, or large-scale health interventions. The primary drawback of quasi-experimental designs is that they cannot eliminate the possibility of confounding bias, which can hinder one's ability to draw causal inferences. This drawback is often used to discount quasi-experimental results. However, such bias can be controlled for using various statistical techniques, such as multiple regression, if one can identify and measure the confounding variable(s). Such techniques can be used to model and partial out the effects of confounding variables, thereby improving the accuracy of the results obtained from quasi-experiments. Moreover, the developing use of propensity score matching to match participants on variables important to the treatment selection process can also improve the accuracy of quasi-experimental results. In fact, data derived from quasi-experimental analyses have been shown to closely match experimental data in certain cases, even when different criteria were used.[7] In sum, quasi-experiments are a valuable tool, especially for the applied researcher. On their own, quasi-experimental designs do not allow one to make definitive causal inferences; however, they provide necessary and valuable information that cannot be obtained by experimental methods alone. Researchers, especially those interested in investigating applied research questions, should move beyond the traditional experimental design and avail themselves of the possibilities inherent in quasi-experimental designs.[5]
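As an illustration of the matching idea, here is a minimal nearest-neighbor match on an already-estimated propensity score. All names and numbers are invented; in practice the score itself is first estimated from covariates (for example, by logistic regression), and more sophisticated matching schemes are used.

```python
def matched_effect(treated, controls):
    """Match each treated unit to the control unit with the closest
    propensity score (with replacement) and average the outcome gaps."""
    gaps = []
    for score_t, outcome_t in treated:
        _, outcome_c = min(controls, key=lambda c: abs(c[0] - score_t))
        gaps.append(outcome_t - outcome_c)
    return sum(gaps) / len(gaps)

# Hypothetical (propensity score, outcome) pairs for illustration.
treated = [(0.80, 12.0), (0.60, 10.0), (0.70, 11.0)]
controls = [(0.79, 9.0), (0.61, 8.0), (0.30, 5.0), (0.71, 8.5)]

effect = matched_effect(treated, controls)
```

By comparing each treated unit only to controls with a similar estimated probability of treatment, matching mimics the balance that random assignment would have produced on the measured covariates.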


Quasi-experiments are commonly used in social sciences, public health, education, and policy analysis, especially when it is not practical or reasonable to randomize study participants to the treatment condition. A true experiment would, for example, randomly assign children to a scholarship, in order to control for all other variables.

As an example, suppose we divide households into two categories: Households in which the parents spank their children, and households in which the parents do not spank their children. We can run a linear regression to determine if there is a positive correlation between parents' spanking and their children's aggressive behavior. However, to simply randomize parents to spank or to not spank their children may not be practical or ethical, because some parents may believe it is morally wrong to spank their children and refuse to participate.
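With a binary grouping variable like this, the regression slope reduces to a simple difference in group means, which a short sketch makes concrete (the aggression scores below are hypothetical, invented purely for illustration):

```python
import statistics

# Hypothetical child-aggression scores by household type.
spank = [7, 8, 6, 9, 7]
no_spank = [4, 5, 3, 5, 4]

# Regressing aggression on a 0/1 spanking indicator yields a slope
# equal to this difference in group means -- a correlation, not proof
# of a causal effect, since the households are self-selected.
naive_slope = statistics.mean(spank) - statistics.mean(no_spank)
```

The positive gap is consistent with a causal story, but because families choose whether to spank, unmeasured differences between the groups could produce the same pattern.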

Some authors distinguish between a natural experiment and a "quasi-experiment".[1][5] The difference is that in a quasi-experiment the criterion for assignment is selected by the researcher, while in a natural experiment the assignment occurs 'naturally,' without the researcher's intervention.

Quasi-experiments have outcome measures, treatments, and experimental units, but do not use random assignment. Quasi-experiments are often the design that researchers choose over true experiments, mainly because they can usually be conducted when true experiments cannot. Quasi-experiments are interesting because they bring in features from both experimental and non-experimental designs: measured variables can be brought in, as well as manipulated variables. Experimenters usually choose quasi-experiments because they maximize internal and external validity.[8]


Since quasi-experimental designs are used when randomization is impractical or unethical, they are typically easier to set up than true experimental designs, which require random assignment of subjects.[9] Additionally, utilizing quasi-experimental designs minimizes threats to ecological validity, as natural environments do not suffer the same problems of artificiality as a well-controlled laboratory setting.[10] Since quasi-experiments are natural experiments, findings from one may be applied to other subjects and settings, allowing some generalizations to be made about the population. This method of experimentation is also efficient for longitudinal research that involves longer time periods and can be followed up in different environments.

Other advantages of quasi-experiments include the experimenter's freedom to choose which manipulations to apply. In natural experiments, researchers must let manipulations occur on their own and have no control over them whatsoever. Also, using self-selected groups in quasi-experiments reduces the chance of ethical, conditional, and similar concerns arising while conducting the study.[8]


Quasi-experimental estimates of impact are subject to contamination by confounding variables.[1] In the example above, variation in the children's responses to spanking is plausibly influenced by factors that cannot be easily measured and controlled, for example the child's intrinsic wildness or the parent's irritability. The lack of random assignment in the quasi-experimental design method may allow studies to be more feasible, but it also poses many challenges for the investigator in terms of internal validity. The absence of randomization makes it harder to rule out confounding variables and introduces new threats to internal validity.[11] Because randomization is absent, some knowledge about the data can still be gained, but conclusions about causal relationships are difficult to draw due to the variety of extraneous and confounding variables that exist in a social environment. Moreover, even if these threats to internal validity are assessed, causation still cannot be fully established because the experimenter does not have total control over extraneous variables.[12]

Disadvantages also include that the study groups may provide weaker evidence because of the lack of randomness. Randomness brings much useful information to a study because it broadens results and therefore gives a better representation of the population as a whole. Using unequal groups can also be a threat to internal validity: if the groups are not equal, which is sometimes the case in quasi-experiments, then the experimenter cannot be certain of the causes of the results.[4]

Internal validity

Internal validity is the approximate truth of inferences regarding cause-and-effect or causal relationships. Internal validity matters for quasi-experiments because they are centrally concerned with causal relationships. It is pursued when the experimenter tries to control all variables that could affect the results of the experiment. Statistical regression, history, and the participants themselves are all possible threats to internal validity. The question to ask while trying to keep internal validity high is: "Are there any other possible explanations for the outcome besides the one I am proposing?" If so, internal validity might not be as strong.[8]

External validity

External validity is the extent to which results obtained from a study sample can be generalized to the population of interest. When external validity is high, the generalization is accurate and represents the world outside the experiment. External validity is very important in statistical research because a correct depiction of the population is needed; when external validity is low, the credibility of the research comes into doubt. Threats to external validity can be reduced by ensuring random sampling of participants and random assignment as well.[13]

Design types

"Person-by-treatment" designs are the most common type of quasi-experimental design. In this design, the experimenter measures at least one independent variable and, alongside measuring that variable, also manipulates a different independent variable. Because different independent variables are both manipulated and measured, the research is mostly done in laboratories. An important consideration in person-by-treatment designs is that random assignment is needed to ensure that the experimenter has complete control over the manipulations being applied in the study.[14]

An example of this type of design was performed at the University of Notre Dame. The study was conducted to see whether being mentored for one's job led to increased job satisfaction. The results showed that many people who had a mentor reported very high job satisfaction. However, the study also showed that those who did not receive a mentor included a high number of satisfied employees. Seibert concluded that although the workers who had mentors were happy, he could not assume that the mentors themselves were the reason, because of the high number of non-mentored employees who said they were satisfied. This is why prescreening is very important: it allows flaws in the study to be minimized before they appear.[15]

"Natural experiments" are a different type of quasi-experimental design used by researchers. They differ from person-by-treatment designs in that no variable is manipulated by the experimenter. Instead of controlling at least one variable as in the person-by-treatment design, experimenters do not use random assignment and leave the experimental control up to chance; this is where the name "natural" experiment comes from. The manipulations occur naturally, and although this may seem an imprecise technique, it has proven useful in many cases. These are studies of people to whom something sudden has happened, whether good or bad, traumatic or euphoric. An example would be comparing those who have been in a car accident with those who have not. Car accidents occur naturally, so it would not be ethical to stage experiments that traumatize subjects. Such naturally occurring events have proven useful for studying posttraumatic stress disorder.[14]


  1. ^ a b c DiNardo, J. (2008). "Natural experiments and quasi-natural experiments". The New Palgrave Dictionary of Economics. pp. 856–859. doi:10.1057/9780230226203.1162. ISBN 978-0-333-78676-5.
  2. ^ Rossi, Peter Henry; Lipsey, Mark W.; Freeman, Howard E. (2004). Evaluation: A Systematic Approach (7th ed.). SAGE. p. 237. ISBN 978-0-7619-0894-4.
  3. ^ Gribbons, Barry; Herman, Joan (1997). "True and quasi-experimental designs". Practical Assessment, Research & Evaluation. 5 (14).
  4. ^ a b Morgan, G. A. (2000). "Quasi-experimental designs". Journal of the American Academy of Child & Adolescent Psychiatry. 39: 794–796. doi:10.1097/00004583-200006000-00020.
  5. ^ a b c d Shadish, William R.; Cook, Thomas D.; Campbell, Donald T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston: Houghton Mifflin. ISBN 0-395-61556-9.
  6. ^ Campbell, D. T. (1988). Methodology and Epistemology for Social Science: Selected Papers. University of Chicago Press. ISBN 0-226-09248-8.
  7. ^ Armstrong, J. Scott; Patnaik, Sandeep (2009-06-01). "Using Quasi-Experimental Data To Develop Empirical Generalizations For Persuasive Advertising" (PDF). Journal of Advertising Research. 49 (2): 170–175. doi:10.2501/s0021849909090230. ISSN 0021-8499.
  8. ^ a b c DeRue, Scott (September 2012). "A Quasi-Experimental Study of After-Event Reviews". Journal of Applied Psychology. 97 (5): 997–1015. doi:10.1037/a0028244. PMID 22506721.
  9. ^ CHARM - Controlled Experiments
  10. ^ http://www.osulb.edu/~msaintg/ppa696/696quasi.htm [permanent dead link]
  11. ^ Robson, Lynda S.; Shannon, Harry S.; Goldenhar, Linda M.; Hale, Andrew R. (2001). "Quasi-experimental and experimental designs: more powerful evaluation designs" (archived September 16, 2012, at the Wayback Machine), Chapter 4 of Guide to Evaluating the Effectiveness of Strategies for Preventing Work Injuries: How to Show Whether a Safety Intervention Really Works (archived March 28, 2012, at the Wayback Machine), Institute for Work & Health, Canada.
  12. ^ Research Methods: Planning: Quasi-Experimental Designs
  13. ^ Calder, Bobby (1982). "The Concept of External Validity". Journal of Consumer Research. 9 (3): 240–244. doi:10.1086/208920.
  14. ^ a b Meyer, Bruce (April 1995). "Natural and Quasi-Experiments in Economics". Journal of Business and Economic Statistics. 13 (2): 151–161. doi:10.1080/07350015.1995.10524589.
  15. ^ Seibert, Scott (1999). "The Effectiveness of Facilitated Mentoring: A Longitudinal Quasi-Experiment". Journal of Vocational Behavior. 54 (3): 483–502. doi:10.1006/jvbe.1998.1676.

External links

NBER Reporter: Research Summary Summer 2003

Randomized Trials and Quasi-Experiments in Education Research

Joshua D. Angrist (1)

The 2001 No Child Left Behind (NCLB) Act promises a series of significant reforms. The hope is that these reforms will jump-start under-performing American schools. Most public discussion of the Act has focused on the mandate for test-based school accountability and the federal endorsements of charter schools and other forms of school choice. Other important provisions include changes in funding rules for states and a new emphasis on reading instruction. The NCLB Act also repeatedly calls for education policy to rely on a foundation of scientifically based research. Although this appears to be a bland technical statement, it strikes me as potentially at least as significant as other components of the Act.

What is Scientifically Based Research?

NCLB defines scientifically-based research as research using rigorous methodological designs and techniques, including control groups and random assignment. In a presentation made shortly after President Bush signed NCLB into law, the deputy director of the Office of Research in the Department of Education put studies involving randomized trials and quasi-experiments at the top of the methodological hierarchy.

Randomized trials are experiments in which the division into treatment and control groups is determined at random (for example, by tossing a coin). Quasi-experimental research designs are based on naturally occurring circumstances or institutions that (perhaps unintentionally) divide people into treatment and control groups in a manner akin to purposeful random assignment.

A reliance on control groups and random assignment indeed would mark a new direction for education research. For example, an important question on the education research agenda is the role of technology in schools. Most previous research on the use of technology in the classroom (computer-aided instruction or CAI) relies on uncontrolled measurements, such as the level of satisfaction experienced by technology users. Not surprisingly, teachers and students typically report that they enjoy using new computer equipment (as shown in a recent study of laptops in Maine's public schools). But this does not establish that students who use the laptops are learning more, or that the expenditure on computers meets a cost-benefit standard (after all, computer hardware and software is expensive).

Randomized trials provide the best scientific evidence on the effects of policies like educational technology, changes in class size, or school vouchers because differences between the treatment and control group can be attributed confidently to the treatment. A good quasi- or natural experiment is the next best thing to a real experiment. In some cases, quasi-experiments also involve random assignment, such as in the lotteries sometimes used to distribute school vouchers. In addition to comparing apples to apples, randomized trials and natural experiments also rely on assessments by disinterested non-participants and on clearly defined outcomes that other researchers can reproduce and interpret. This is what science is all about. In contrast, U.S. education policy has often relied on evidence that is fragmentary or anecdotal, uses subjective outcomes, and, most importantly, fails to make rigorous comparisons of treatment and control groups.

If successful, a shift to scientifically based research will move the study of education much closer to medicine, which has been experiencing a similar transition to scientifically based research over the last half-century. NBER researchers have been in the vanguard of this transition to scientifically based research on education. We have used natural experiments -- and in some cases, actual randomized trials -- to provide powerful evidence on issues ranging from the effects of compulsory attendance laws to changing class size. I describe some of this work below, focusing on my own efforts. I have used quasi-experiments -- and in recent and ongoing projects, randomized trials -- to make scientifically grounded inferences regarding the effects of achievement incentives and school choice, school resources, and macro education policy.

School Incentives and School Choice

The desire to help disadvantaged teens get through high school is a recurring theme of school reform proposals. Most anti-dropout efforts involve the provision of support services to low-achieving students. But the results from recent demonstration projects assessing services for at-risk high-school students have been disappointing.(2) Motivated by the economic view of education, which sees student effort in school as determined partly by a comparison of the costs and benefits of effort devoted to schooling, Victor Lavy and I developed a unique program that rewards Israeli high school students who pass their high school matriculation exams (something like the New York Regents exam or British A Levels) with cash payments. Although this project was controversial in Israel (and eventually was cancelled as a political liability), it is in the spirit of a 1998 proposal by former Labor Secretary Reich, who suggested that students from low-income families in the United States be offered a $25,000 cash bonus for graduating from high school. It is also similar to the merit-based stipends common in higher education.

Perhaps most unusually, Lavy and I implemented the Achievement Awards program as a school-based randomized trial.(3) That is, we identified 40 of the lowest-achieving schools in Israel, and randomly selected 20 of them for participation in the program. Any student from the 20 treatment schools who passed their exams was eligible for a $1500 payment, quite a large sum in Israel, although small relative to the private and social costs of dropping out. Only about 18 percent of students in the control group completed their matriculation exams. Students in the treatment schools were about 7 percentage points more likely to complete their matriculation exams, a statistically significant difference with an economic benefit that easily outweighs the cost of bonus payments.

The Achievement Awards demonstration is the first of what we hope will be a series of randomized trials designed to test education incentive plans. In research in progress, Lavy and I are evaluating a package of incentives that provides awards for teachers as well as for students. A unique feature of our ongoing work is that the new demonstration project includes a component specifically designed to explore the interaction of student and teacher incentives.(4) We also plan a long-run follow up study of the Achievement Awards program.

One of the most controversial innovations highlighted by NCLB is school choice. In a recently published paper,(5) my collaborators and I studied what appears to be the largest school voucher program to date. This program provided over 125,000 pupils from poor neighborhoods in the country of Colombia with vouchers that covered approximately half the cost of private secondary school. Colombia is an especially interesting setting for testing the voucher concept because private secondary schooling in Colombia is a widely available and often inexpensive alternative to crowded public schools. (In Bogota, over half of secondary school students are in private schools.) Moreover, governments in many poor countries are increasingly likely to experiment with demand-side education finance programs, including vouchers.

Although not a randomized trial, a key feature of our Colombia study is the exploitation of voucher lotteries as the basis for a quasi-experimental research design. Because demand for vouchers exceeded supply, the available vouchers were allocated by lottery in large cities. Our study compares voucher applicants who won a voucher in the lottery to those who lost. Since the lotteries used random assignment, losers provide a good control group for winners. A comparison of voucher winners and losers shows that three years after the lotteries were held, winners were 15 percentage points more likely to have attended private school and were about 10 percentage points more likely to have finished eighth grade, primarily because they were less likely to repeat grades. Lottery winners also scored 0.2 standard deviations higher on standardized tests. A follow-up study in progress shows that voucher winners also were more likely to apply to college. On balance, our study provides some of the strongest evidence to date for the possible benefits of demand-side financing of secondary schooling, at least in a developing country setting.(6)

Research on vouchers naturally focuses on the question of whether voucher recipients benefit from the opportunity to use vouchers. A related question that gets less attention arises from the fact that voucher recipients and other school choice beneficiaries are typically low-income. For example, NCLB singles out the students in the worst schools as being eligible for choice. In particular, NCLB requires districts to allow students in schools judged to be "failing" the opportunity to change schools. Policymakers and parents in the schools that accept these students have wondered what the consequences will be for high-achieving children when low achievers from poor areas choose to attend their schools. Economists refer to research on questions of this sort as the measurement of peer effects.

Boston's long-running Metco program provides a unique opportunity to estimate peer effects in the classroom using a quasi-experimental research design. Metco gives mostly black students in the Boston public school district the opportunity to attend schools in more affluent suburban districts. Kevin Lang and I focus on the impact of Metco on the students in one of the largest Metco-receiving districts.(7) Because Metco students have substantially lower test scores than local students, this inflow generates a significant decline in average scores. Our research shows that the overall decline in scores is attributable to a composition effect, though, because we find no impact on average scores in a sample limited to non-Metco students. This weighs against the hypothesis of significant negative peer effects as a result of school choice (although we do find a short-lived negative effect on the scores of minority third graders in reading and language). Our research on Metco exploits idiosyncratic features of the process used to allocate Metco students to different schools through what is known as the "regression-discontinuity" method for analysis of quasi-experiments.

School Resources

Another strand of my work uses quasi-experiments to look at what economists call the education production function. This research links school resources, including computers and class size, with outcomes such as student achievement on standardized tests. The principal challenge in research of this type, as in most empirical research in economics, is in isolating cause and effect. Many factors make the observed correlation between school resources and student achievement hard to interpret. Rich and poor districts differ on many dimensions, teachers sort students into classes of different size, and students and parents make systematic choices that are reflected in the resources/achievement relationship.

The question of how technology affects learning has been at the center of recent debates over educational inputs. My most recent research on school resources, joint with Lavy,(8) exploits a natural experiment arising from the fact that the Israeli State lottery, which uses lottery profits to sponsor various social programs, funded a large-scale computerization effort in many elementary and middle schools. Although lottery officials did not use random assignment to allocate the computers across towns and schools, they used an idiosyncratic priority scheme that appears to have an essentially random component. We used this to estimate the impact of computerization on both the instructional use of computers and pupil achievement. Results from a survey of Israeli school-teachers show that the influx of new computers increased teachers' use of CAI in the fourth grade, with a smaller effect on CAI in eighth grade. Perhaps surprisingly, CAI does not appear to have had educational benefits that translated into higher test scores. In fact, estimates for fourth graders show lower math scores in the group that was awarded computers, with smaller (insignificant) negative effects on language scores. These results call into question the widely-held view that additional resources should be devoted to CAI.(9)

Another central question in the school resources debate is the importance of class size. Although recent years have seen renewed interest in the class-size question, academic interest in this topic is not simply a modern phenomenon: the choice of class size has been of concern to scholars and teachers for hundreds of years. One of the earliest references on this topic is the Babylonian Talmud, completed around the beginning of the 6th century, which discusses rules for the determination of class size and pupil-teacher ratios in bible study. The great 12th century Rabbinic scholar, Maimonides, interprets the Talmud's discussion of class size as follows: "Twenty-five children may be put in charge of one teacher. If the number in the class exceeds twenty-five but is not more than forty, he should have an assistant to help with the instruction. If there are more than forty, two teachers must be appointed."

In my first study of school resources, also joint with Lavy,(10) we use Maimonides' rule capping class size at 40 to construct a natural experiment to estimate the effects of class size on the scholastic achievement of Israeli pupils. To see how this experiment works, note that according to Maimonides' rule, class size increases one-for-one with enrollment until 40 pupils are enrolled, but when 41 students are enrolled, there will be a sharp drop in class size, to an average of 20.5 pupils. Similarly, when 80 pupils are enrolled, the average class size will again be 40, but when 81 pupils are enrolled the average class size drops to 27. Our use of this variation is an application of the quasi-experimental regression-discontinuity method.
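The rule's predicted class size can be written as a one-line function (a sketch of the formula described above, not the authors' actual code):

```python
import math

def predicted_class_size(enrollment, cap=40):
    """Average class size under Maimonides' rule: enrollment divided by
    the smallest number of classes keeping each class at or below the cap."""
    return enrollment / math.ceil(enrollment / cap)

# The sharp drops just past multiples of 40 create the natural experiment:
# enrollment 40 -> 40.0, 41 -> 20.5, 80 -> 40.0, 81 -> 27.0
```

These discontinuous jumps in predicted class size, driven by enrollment counts rather than by parents or teachers, are what allow a regression-discontinuity comparison of otherwise similar schools.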

Interestingly, the observed association between class size and student achievement in our data is always perverse (that is, students in larger classes tend to do better). This perverse correlation illustrates why a good experiment matters: naive comparisons can be badly confounded. Estimates of class size effects using Maimonides' rule suggest that reductions in class size induce a significant and substantial increase in math and reading achievement for fifth graders, and a modest increase in reading achievement for fourth graders. We gain confidence in this result (as opposed to the simple correlation between class size and test scores) because a randomized trial manipulating class size in Tennessee generated similar estimates.(11)

The Effects of Macro-Education Policy

The work discussed above focuses on a micro-level analysis of students and schools. I have also used natural experiments to study legislative and other macro-level education policies. Here, experiments are harder to come by and research may have to rely on simple policy shifts that affect some states and not others. Nevertheless, this work follows the natural-experiments model in that there is always a well-defined control group. For example, Jon Guryan and I recently looked at state changes in teacher certification requirements.(12) We find that states that introduced teacher tests (such as the national teachers examination) ended up paying higher teacher salaries with no measurable increase in teacher quality. This suggests that tests are more of a barrier to entry than an effective quality screen.

In earlier work, Alan Krueger and I looked at the effects of compulsory attendance laws.(13) This research exploits the interaction between individuals' quarter of birth and state laws (children born earlier in the year are allowed to drop out of school after having completed less schooling than those born later). More recently, Daron Acemoglu and I have used state compulsory attendance laws to estimate the social returns to education (that is, an economic benefit beyond that accruing to the more educated individuals themselves).(14) I also have looked at natural experiments increasing the education infrastructure, for example a large-scale expansion of higher education in the West Bank and Gaza Strip.(15) Finally, Lavy and I studied the economic consequences of the change in language of instruction in Morocco's secondary schools.(16)


In addition to providing evidence on specific questions, I believe that an important overall contribution of my work on education has been to document the feasibility and promise of both quasi-experimental methods and randomized trials in education research. Many other NBER researchers are also involved in this work and I expect that education research along these lines will be a growth area for economists in the years to come. I am certainly looking forward to doing more of it.


1. Angrist is a Research Associate in the NBER's Program in the Economics of Education and a Professor of Economics at MIT. His profile appears later in this issue.

2. M. Dynarski and P. Gleason, How Can We Help? What Have We Learned from Evaluations of Federal Dropout-Prevention Programs, Mathematica Policy Research report 8014-140, Princeton, NJ, June 1998.

3. J. D. Angrist and V. Lavy, "The Effect of High School Matriculation Awards: Evidence from Randomized Trials," NBER Working Paper No. 9389, December 2002.

4. See P. Glewwe, N. Ilias, and M. Kremer, "Teacher Incentives," NBER Working Paper No. 9671, May 2003, for a recent randomized trial of teacher incentives in Kenya.

5. J. D. Angrist, E. P. Bettinger, E. Bloom, E. King, and M. Kremer, "Vouchers for Private Schooling in Colombia: Evidence from a Randomized Natural Experiment," NBER Working Paper No. 8343, June 2001, and in American Economic Review, 92 (5) (December 2002), pp. 1535-58.

6. Evidence on voucher effects for the United States has been more mixed. Two studies involving randomization are Alan B. Krueger and P. Zhu, "Another Look at the New York City Voucher Experiment," NBER Working Paper No. 9418, January 2003 and C. E. Rouse, "Private School Vouchers and Student Achievement: An Evaluation of the Milwaukee Parental Choice Program," The Quarterly Journal of Economics, 113 (2) (May 1998). C. M. Hoxby, "Does Competition Among Public Schools Benefit Students and Taxpayers?" American Economic Review, 90 (5) (December 2000), pp. 1209-38 is a quasi-experimental study of school choice. In work in progress, J. B. Cullen, S. D. Levitt, and B. A. Jacob are using lotteries to study school choice in the Chicago public schools in, "The Impact of School Choice on Enrollment and Achievement: Evidence from over 1000 Lotteries," manuscript, 2003 (forthcoming as an NBER Working Paper).

7. J. D. Angrist and K. Lang, "How Important are Classroom Peer Effects? Evidence from Boston's METCO Program," NBER Working Paper No. 9263, October 2002.

8. J. D. Angrist and V. Lavy, "New Evidence on Classroom Computers and Pupil Learning," The Economic Journal, 112 (October 2002), pp. 735-65.

9. My MIT colleagues Abhijit Banerjee and Esther Duflo are currently running a randomized trial of CAI in India.

10. J. D. Angrist and V. Lavy, "Using Maimonides' Rule to Estimate the Effect of Class Size on Student Achievement," Quarterly Journal of Economics, 114 (2) (May 1999), pp. 533-75.

11. A. B. Krueger, "Experimental Estimates of Education Production Functions," Quarterly Journal of Economics, 114 (2) (May 1999), pp. 497-532. But see also C. M. Hoxby, "Does Competition Among Public Schools Benefit Students and Taxpayers?," which finds little evidence of a class size effect using quasi-experimental methods to analyze data from Connecticut.

12. J. D. Angrist and J. Guryan, "Does Teacher Testing Raise Teacher Quality? Evidence from State Certification Requirements," NBER Working Paper No. 9545, March 2003.

13. J. D. Angrist and A. B. Krueger, "Does Compulsory School Attendance Affect Schooling and Earnings?" Quarterly Journal of Economics, 106 (November 1991), and "The Effect of Age at School Entry on Educational Attainment: An Application of Instrumental Variables with Moments from Two Samples," Journal of the American Statistical Association, (June 1992).

14. J. D. Angrist and D. Acemoglu, "How Large are the Social Returns to Education? Evidence from Compulsory Attendance Laws," NBER Macroeconomics Annual, 15, Cambridge, MA: MIT Press, 2000.

15. J. D. Angrist, "The Economic Returns to Schooling in the West Bank and Gaza Strip," American Economic Review, 85 (5) (December 1995), pp. 1065-87. A related and more recent study in this spirit is E. Duflo, "The Medium Run Effects of Educational Expansion: Evidence from a Large School Construction Program in Indonesia," NBER Working Paper No. 8710, January 2002.

16. J. D. Angrist and V. Lavy, "The Effect of a Change in Language of Instruction on the Returns to Schooling in Morocco," Journal of Labor Economics, 15 (January 1997) S48-S76.
