Trends and Innovations in Public Policy Analysis

Deven Carlson


This essay identifies three notable advances that have influenced the field of public policy analysis in recent years: the move toward social experimentation, the use of meta-analysis and Monte Carlo simulation in benefit-cost analysis, and the rise of institutional actors that promote the practice and dissemination of high-quality policy analysis. In addition to describing these innovations, the essay discusses how each has affected the practice of public policy analysis.


Introduction

The field of public policy analysis is not constrained by traditional disciplinary boundaries or practices. It is informed by the work of political scientists, economists, sociologists, psychologists, and lawyers, as well as scholars of substantive policy areas, such as those who specialize in education, welfare, housing, health care, and many other domains. Theoretical or methodological innovations in any of these areas are often quickly added to the toolkits of policy analysts and manifest themselves in their work. As a result, the field of policy analysis is unique in its dynamism and continual evolution. This essay describes three notable advances that have taken place in recent years and discusses the effects of these innovations on the practice of public policy analysis.

Contemporary policy research places an intense focus on using exogenous variation in treatment assignment to identify the causal effects of various policies or interventions on particular outcomes of interest. Among other consequences, this focus has resulted in an increased use of social experiments as a method for estimating policy impacts. Because policy research is often used to inform policy analysis, the trend toward the increased use of social experiments has affected the field of policy analysis in several important ways.1 The first part of this essay describes the movement toward social experimentation, the advantages and drawbacks of doing so, and the impact of this movement on the field of policy analysis.

Residing under the broad umbrella of policy analysis is the methodology of benefit-cost analysis. Whereas traditional policy analysis purports to identify all potential impacts of selected policy alternatives, benefit-cost analysis is concerned with identifying all efficiency-related policy impacts and then monetizing these impacts to determine which policy alternative is most desirable from a social efficiency perspective (Boardman, Greenberg, Vining, & Weimer, 2006; Weimer & Vining, 2011). Because benefit-cost analysis falls under the larger umbrella of policy analysis, many of the innovations in policy analysis — such as the move toward experimentation — also influence the conduct of benefit-cost analysis. There are other advances, however, that are unique to the practice of benefit-cost analysis. The second part of this essay focuses on two recent advances that are predominantly in the domain of benefit-cost analysis: (i) the use of meta-analysis to enhance the generalizability of benefit-cost analyses and (ii) the utilization of Monte Carlo simulation methods to convey uncertainty in the estimates of specific benefit and cost categories, as well as in the estimate of net benefits for a particular policy.

As the field of public policy analysis lacks a single disciplinary home, it has long occupied a space where its visibility is largely a function of the efforts of particular scholars and policy domains. As a result, its use, popularity, and influence in policy debates have waxed and waned over time. There is a sense, however, that these ebbs and flows are abating and that policy analysis is beginning to consistently occupy a mainstream position in policy debates. This movement toward the mainstream can be partially attributed to an increased emphasis on the practice and dissemination of high-quality public policy analysis by a diverse set of institutional actors. The third part of this essay explores the role that three specific actors have played in ushering policy analysis and benefit-cost analysis toward the mainstream. In particular, it describes how the Congressional Budget Office (CBO), the Washington State Institute for Public Policy (WSIPP), and the John D. and Catherine T. MacArthur Foundation have advanced the cause of high-quality policy analysis. By describing the roles of these three institutions, I highlight how the recent emphasis on rigorous public policy analysis is the result of efforts by federal government agencies, state-level governmental institutions, and non-governmental entities.

By focusing on the three topics identified above — the move toward social experimentation, recent innovations in benefit-cost analysis, and institutional actors that have promoted the practice and dissemination of high-quality policy analysis — I provide a description of some of the most important recent innovations and advances in public policy analysis, as well as a portrayal of the current state of this vibrant field. Public policy analysis, which is growing in influence both inside and outside of the academy, provides scholars and analysts with a rigorous, systematic process for informing the important policy debates of the day, debates that affect the lives of millions of people in meaningful and diverse ways.

The Move Toward Social Experimentation

Traditional policy analysis is a systematic, multi-step process requiring a diverse set of skills and information from a wide variety of sources. Weimer (2009, p. 93) provides a succinct summary of the process when he writes "Canonical policy analysis defines the problem being addressed, identifies the social values, or goals, relevant to the problem, constructs concrete policy alternatives, projects the impacts of the alternative policies in terms of the identified goals, and makes a recommendation based on an explicit assessment among goals offered by the alternatives." Of these steps, projecting the impact of each policy alternative in terms of the relevant goals is perhaps the most important, yet also the most difficult (Bardach, 2009).

The task of projecting the impact of policies in terms of various goals requires analysts to assemble all available evidence that can usefully inform predictions about the likely effects of all relevant policy alternatives. While this is an inherently forward-looking endeavor, some of the most valuable evidence for informing predictions about future policy impacts comes from backward-looking policy research and evaluation; oftentimes the best predictor of the future effects of a policy is the past performance of identical or similar policies. Consequently, it is easy to see how high-quality policy research and evaluation can be an invaluable resource for practitioners of traditional policy analysis.

In recent years, policy research has exhibited an intense focus on identifying the causal effect of a policy or intervention on one or more outcomes of interest. Do charter schools increase student achievement? Does the relocation of families from public housing projects to low-poverty neighborhoods result in improved labor market outcomes? Answering questions such as these with precision and clarity is the main, and oftentimes sole, goal in contemporary policy research. In theory, a wide variety of research designs can provide valid and reliable estimates of causal relationships. In many of these designs, however, a causal interpretation of any estimates requires strong assumptions.2 As a result, preference is generally given to studies that can clearly demonstrate treatment assignment to be exogenous. While multiple designs can provide persuasive cases for exogenous assignment to treatment — regression discontinuity approaches and natural experiments are two such designs — perhaps the most convincing demonstration of treatment exogeneity is random assignment of units to treatment and control groups.
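
The intuition behind this preference can be illustrated with a short simulation. The sketch below is purely illustrative: the sample size, the constant treatment effect of 0.25 standard deviations, and all other values are hypothetical assumptions rather than estimates from any actual study. It simply shows why random assignment supports such a simple estimator: because treatment status is independent of all other characteristics, the difference in group means recovers the average treatment effect without further modeling assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical population: each unit has an untreated outcome and a
# constant treatment effect of 0.25 standard deviations. All values here
# are illustrative assumptions, not estimates from any actual study.
n = 1000
true_effect = 0.25
baseline = rng.normal(0, 1, n)

# Random assignment makes treatment status independent of baseline outcomes.
treated = rng.random(n) < 0.5
outcome = baseline + true_effect * treated

# Under random assignment, the difference in group means estimates the
# average treatment effect without additional modeling assumptions.
estimate = outcome[treated].mean() - outcome[~treated].mean()
se = np.sqrt(outcome[treated].var(ddof=1) / treated.sum()
             + outcome[~treated].var(ddof=1) / (~treated).sum())
print(f"estimated effect: {estimate:.3f} (standard error {se:.3f})")
```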

The allure of the experimental approach is clear; under what many consider to be relatively weak assumptions, experimental designs can provide unbiased, model-free estimates of the causal effect of a given treatment — often a specific policy or social intervention — on a particular outcome of interest. Many of the earliest applications of experimental designs in social policy research occurred in the context of early childhood education in the 1960s and '70s with evaluations of programs such as the Perry Preschool Project and the Carolina Abecedarian Project (see Heckman, Moon, Pinto, Savelyev, and Yavitz (2010) and Campbell and Ramey (1994) for descriptions of these programs and their respective designs).3 These evaluations signaled the beginning of a steady increase through the 1980s, 1990s, and 2000s in the use of experimental designs to evaluate the effects of a wide variety of social programs and policies on selected outcomes. See Greenberg and Shroder (2004) for a catalog of social experiments.

Case Study: Education

Although experimental evaluations have become increasingly common in nearly all social policy domains, the movement toward randomized controlled trials (RCTs) as the preferred method for evaluating the impacts of policies and interventions has perhaps been most pronounced in the area of education. Established by the Education Sciences Reform Act of 2002, the Institute of Education Sciences (IES) has made a concerted effort to improve the rigor of education research in order to provide better guidance to policymakers and practitioners. The effort of IES to increase the rigor of education research may be best exemplified by the creation and maintenance of the What Works Clearinghouse (WWC) and the associated Registry of RCTs. As its name suggests, the WWC is intended to serve as a one-stop shop for obtaining information on the effectiveness of various educational interventions and policy reforms. Each intervention or policy reform contained in the WWC is classified into one of three categories: Meets Evidence Standards, Meets Evidence Standards with Reservations, or Does Not Meet Evidence Standards. While several factors ultimately determine the rating assigned to a particular intervention or policy reform, only interventions evaluated using an RCT are eligible to receive a rating of Meets Evidence Standards. Perhaps not surprisingly, this standard has resulted in a substantial number of educational interventions and policy reforms being evaluated experimentally. To catalog the growing number of RCTs in education, IES created the Registry of RCTs. The registry serves as a repository of information about all RCTs funded by the Department of Education. As of this writing, the Registry contains descriptions of over 100 RCTs that have been funded by the Department of Education in recent years, including prominent evaluations of charter schools, private school vouchers, and dozens of other interventions and policy reforms (Gleason, Clark, Clark Tuttle, & Dwoyer, 2010; Wolf et al., 2010).4

In addition to increasing the number of experimental evaluations, the heavy focus that IES has placed on RCTs has also driven innovation in the field of experimental design and evaluation. Early experimental evaluations, such as the Perry Preschool Project, were generally small in scale with individual students being randomized at a limited number of sites. Such practices led to concerns about external validity and the presence of spillover effects (Bloom, 2005). In response to such concerns, it has become increasingly common in education for groups, or "clusters," to serve as the unit of randomization. Studies routinely randomize classrooms, schools, and even whole school districts to treatment and control groups in an effort to determine the effects of a particular intervention or policy. This evolution has spawned a literature that provides guidance on several issues that arise during the execution and analysis of cluster randomized trials. These issues include missing data, statistical models for estimating treatment effects, and strategies for improving precision, among others (Puma, Olsen, Bell, & Price, 2009; Raudenbush, 1997; Raudenbush, Martinez, & Spybrook, 2007; Schochet, 2009). This guidance can be found in peer-reviewed journals, as well as in a series of Technical Methods Reports published by the National Center for Education Evaluation and Regional Assistance, a center within IES. Taken as a whole, the intense focus on designing and executing RCTs, coupled with the accompanying methodological advances, has undoubtedly represented one of the most dramatic and influential changes in the field of education research in recent memory.
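
To illustrate why the unit of randomization matters for the analysis, consider the following sketch of a hypothetical cluster randomized trial. The design, sample sizes, and effect sizes are all assumptions invented for illustration, and the analysis shown, which compares means of school-level means, is only one simple, design-respecting option; multilevel models are a common alternative in the literature cited above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cluster randomized trial: 40 schools of 50 students each,
# with schools (not students) randomized to treatment or control.
# Every value below is an illustrative assumption.
n_schools, n_students = 40, 50
true_effect = 0.20
school_effects = rng.normal(0, 0.3, n_schools)         # between-school variation
treated_school = rng.permutation(n_schools) < n_schools // 2

school_means = []
for s in range(n_schools):
    y = (school_effects[s] + true_effect * treated_school[s]
         + rng.normal(0, 1, n_students))               # within-school variation
    school_means.append(y.mean())
school_means = np.array(school_means)

# A simple design-respecting analysis compares the means of school means;
# the standard error reflects variation between schools, not between students.
t = school_means[treated_school]
c = school_means[~treated_school]
effect = t.mean() - c.mean()
se = np.sqrt(t.var(ddof=1) / len(t) + c.var(ddof=1) / len(c))
print(f"estimated effect: {effect:.3f} (standard error {se:.3f})")
```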

Advantages, Drawbacks, and Influence of RCTs on Public Policy Analysis

As the movement toward social experiments has progressed, the advantages and drawbacks of this research design have become increasingly clear. As stated earlier, the primary appeal of experimental designs stems from the fact that, under what many consider to be relatively weak assumptions, they can provide unbiased, model-free estimates of the causal effect of a given treatment on a particular outcome of interest. A secondary advantage of experimental designs is that the logic and results of experimental studies are relatively easy to explain to policymakers. As a result, experimental evaluations have the potential to exert substantial influence on policy decisions.

Like any research design, the experimental approach has its share of drawbacks and critiques, several of which are discussed below. First, the cost of designing and executing a high-quality RCT is substantial (Weimer & Vining, 2009). Oftentimes a single RCT costs many millions of dollars; several high-quality observational studies can generally be conducted for the price of a single experiment. Second, the external validity of experimental results is often unclear (Schneider, Carnoy, Kilpatrick, Schmidt, & Shavelson, 2007). Concerns over external validity can arise from two distinct sources. One is that the study sample may not be representative of any larger population of interest: units that take part in RCTs are rarely randomly sampled from a well-defined population and, as a result, it is not clear whether the results can be generalized to a broader population. The other is that attempts to generalize experimental results often run into general equilibrium concerns (Stock & Watson, 2003) — it is uncertain whether the results of even large-scale cluster randomized trials will be replicated when an intervention or policy reform is scaled up in size. As an example, consider the disconnect between the results of California's statewide class-size reduction initiative and the Tennessee STAR experiment. The Tennessee STAR study, which was a large-scale cluster randomized trial, found reduced class size to have a significant, positive effect on student achievement. California policymakers took note of this result and implemented a statewide class-size reduction initiative — expecting it to raise student achievement — but the official evaluation of this initiative failed to find a positive effect (Bohrnstedt & Stecher, 2002). Evaluators concluded that the class-size reduction policy resulted in declines in teacher qualifications and a more inequitable distribution of credentialed teachers. These general equilibrium effects, which were not observed in the Tennessee STAR experiment, may have contributed to the failure of the class-size reduction initiative to increase student achievement.

Although many people contend that experiments rely on more plausible assumptions than other research designs, this viewpoint is not shared by everyone. Heckman and Smith (1995) observe that experiments assume members of the control group cannot obtain close substitutes for the treatment. The authors note that "substitution bias" is introduced into the experiment when this assumption is violated. Similarly, experiments assume there to be no spillover from the treatment units to the control units (Bloom, 2005). Violation of this assumption can also result in the introduction of bias into the experiment. Finally, experiments rely on randomization to generate treatment and control groups that are balanced on all dimensions, both observable and unobservable; unequal or nonrandom attrition from the treatment and control groups can undermine this balance and lead to biased results.

Other common critiques of social experiments include both ethical and epistemological concerns. Ethically, withholding a treatment or intervention that is expected to produce positive effects from the control group can be difficult to justify. Epistemologically, experiments often leave questions about particular causal mechanisms unanswered; an experiment can tell us the average effect of a specific treatment on a given outcome but it says little about particular structural parameters that may also be of interest.

Proponents of social experiments contend that the advantages of the research design outweigh any potential downsides. Skeptics are less sure of this contention. Regardless of the ultimate resolution of this debate, the fact is that experimental evaluations have become increasingly common in nearly all policy domains. How has this trend affected the practice of traditional public policy analysis? As noted earlier, projecting the impact of each policy alternative in terms of the relevant goals is both the most important and the most difficult step in policy analysis. The move toward increased social experimentation has provided policy analysts with a stronger evidence base to draw upon when generating predictions about the likely effects of various policy alternatives. This has the potential to result in more accurate predictions that can be made with greater certainty. At the same time, social experiments should not be seen as a panacea for policy analysts. Practitioners of policy analysis must guard against excessive certainty in predictions that are based on experimental evidence; analysts should consider several issues when developing such projections. For example, analysts should carefully assess the generalizability of the experimental results upon which they are basing any predictions. Similarly, analysts must assess how closely the policy reform that was evaluated experimentally aligns with the policy alternative whose impact they are attempting to predict. As long as analysts exercise appropriate caution, the movement toward social experimentation should, on the whole, greatly benefit practitioners of traditional public policy analysis.

Trends in Benefit-Cost Analysis: Meta-Analysis and Monte Carlo Simulation

Many trends and innovations that affect the practice of public policy analysis — such as the move toward social experimentation — also influence benefit-cost analysis. There are other advances, however, that are more specific to the practice of benefit-cost analysis. This section describes two recent advances that have been primarily in the domain of benefit-cost analysis: i) the use of meta-analysis to enhance the generalizability of benefit-cost analyses, and ii) the utilization of Monte Carlo simulation methods to convey uncertainty in the estimates of specific benefit and cost categories, as well as in the estimate of net benefits for a particular policy.

Meta-Analysis

Among other tasks, benefit-cost analysis requires the analyst to catalog all efficiency-related impacts of each policy alternative, project these impacts, and then monetize the impacts. As is the case in traditional policy analysis, the first step in projecting impacts in benefit-cost analysis involves assembling all available evidence — including evaluations of similar policies — that can usefully inform predictions about the likely effects of the policy alternatives. In some cases there may be only one or two evaluations of similar policies upon which predictions can be based. In other cases, however, analysts may have the luxury of drawing on a significant number of evaluations of similar policies. In cases where multiple evaluations exist, it is often beneficial to use meta-analysis as a method for summarizing the findings of the various evaluations. In its most stylized form, meta-analysis involves collecting all studies that estimate the effect of a specific policy on a particular outcome, standardizing the impact estimates presented in each study into effect sizes, and then systematically analyzing the effect sizes; comprehensive treatments of applied meta-analysis are provided by Lipsey and Wilson (2001) and Cooper, Hedges, and Valentine (2009). From the standpoint of benefit-cost analysis, the meta-analytic results of greatest utility are generally the mean effect size across studies and the variability around the mean. Analysts can use this information to guide their predictions about the effects of relevant policy alternatives on a particular outcome. The mean effect size will shape the magnitude of the prediction while the variability in effect sizes will influence the certainty with which the prediction is made. In addition to providing information about the mean and variance of effect sizes, meta-analysis also has the ability to examine whether any of the variability in effect sizes can be explained by relevant characteristics of the studies, such as research design, sample characteristics, political context, or any number of other factors. Analysts can use such information to refine their predictions.
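
For a concrete sense of the calculation involved, the sketch below computes a fixed-effect (inverse-variance weighted) mean effect size and a random-effects mean that incorporates between-study heterogeneity via the DerSimonian-Laird estimator. The five effect sizes and variances are hypothetical values invented for illustration, and the code is only a minimal sketch of the standard calculations treated at length in the texts cited above.

```python
import numpy as np

# Hypothetical effect sizes (standardized mean differences) and their
# sampling variances from five studies of the same intervention.
effects = np.array([0.10, 0.25, 0.18, 0.40, 0.05])
variances = np.array([0.010, 0.020, 0.015, 0.030, 0.012])

# Fixed-effect (inverse-variance weighted) mean effect size.
w = 1 / variances
fixed_mean = np.sum(w * effects) / np.sum(w)

# DerSimonian-Laird estimate of between-study variance (tau^2),
# then a random-effects mean that incorporates that heterogeneity.
q = np.sum(w * (effects - fixed_mean) ** 2)
df = len(effects) - 1
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)
w_re = 1 / (variances + tau2)
re_mean = np.sum(w_re * effects) / np.sum(w_re)
re_se = np.sqrt(1 / np.sum(w_re))

print(f"fixed-effect mean: {fixed_mean:.3f}")
print(f"random-effects mean: {re_mean:.3f} (SE {re_se:.3f}), tau^2 = {tau2:.4f}")
```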

The benefits that meta-analysis provides to practitioners of benefit-cost analysis can be substantial. Meta-analysis provides analysts with a rigorous, systematic method for using all available evidence to project the effect of policies on outcomes. This helps analysts avoid the dangers inherent in basing predictions on a single evaluation, which could be plagued by any of several deficiencies, such as a flawed research design or an unrepresentative sample (Boardman et al., 2006). Further, by providing a method for summarizing all available evidence, meta-analysis often enhances the generalizability of benefit-cost analyses. Using meta-analysis to inform benefit-cost analysis permits practitioners to assess the efficiency of a policy as it would be implemented broadly rather than at a single site. Relying on a single evaluation performed for a particular site makes it difficult to generalize the results of a benefit-cost analysis beyond that site, and site-specific analyses are often of only limited utility to policymakers.

Although meta-analysis can provide significant benefits to practitioners of benefit-cost analysis, there are also several challenges to its use. First, meta-analysis is often a time- and resource-intensive endeavor. Analysts must locate all relevant studies and then scour each study for the information needed to compute a standardized effect size. These tasks often require a significant amount of time and resources, which can be problematic for analysts operating under a fixed timeline and a limited budget. This constraint can be especially relevant if policy alternatives are expected to have multiple efficiency-related impacts and the analyst hopes to perform meta-analyses for each of them. Second, meta-analysis nearly always requires a number of subjective judgments to be made. Analysts must develop an operational definition of the intervention in order to guide judgments as to which evaluations should be included in the meta-analysis (Lipsey, 2009). The choice of which evaluations to include can influence the results of the meta-analysis. In addition, analysts must determine how to calculate the effect size that will serve as the basis for the meta-analysis. This task is guided by the desire to employ the most appropriate effect size measure, but is constrained by the data that are presented in the individual evaluations; it is often difficult to locate all data needed to calculate effect size statistics. Despite these challenges, analysts should strive to employ meta-analysis to inform benefit-cost analysis whenever possible. The potential gains in generalizability and certainty generally exceed the costs of conducting meta-analyses.
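
As an illustration of the effect size computation itself, the sketch below converts the kind of summary statistics an individual evaluation might report (group means, standard deviations, and sample sizes) into a standardized mean difference with a small-sample correction, along with an approximate sampling variance for use in inverse-variance weighting. All numbers are hypothetical, and the hedges_g helper is an illustrative function written for this example rather than drawn from any particular source.

```python
import numpy as np

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation across treatment and control groups.
    pooled_sd = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                        / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd
    # Small-sample correction factor applied to Cohen's d.
    correction = 1 - 3 / (4 * (n_t + n_c) - 9)
    g = d * correction
    # Approximate sampling variance, needed for inverse-variance weights.
    var_g = (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))
    return g, var_g

# Hypothetical values as they might be reported in a single evaluation.
g, var_g = hedges_g(mean_t=52.0, mean_c=48.5, sd_t=10.0, sd_c=9.5,
                    n_t=120, n_c=115)
print(f"effect size g = {g:.3f}, variance = {var_g:.4f}")
```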

Monte Carlo Analysis

Uncertainty is inherent in the practice of benefit-cost analysis. Projecting the impact of policy alternatives on various outcomes is clearly an imprecise endeavor. Monetizing the projected impacts — through the application of market or shadow prices — also involves uncertainty. Even cataloging all efficiency-related impacts of policy alternatives is not always a fully straightforward task. Because of the central, if unwelcome, role that uncertainty occupies in benefit-cost analysis, practitioners must ensure that they adequately communicate the uncertainty associated with their estimates. Although several methods can be used to convey uncertainty, there is a growing view that Monte Carlo analysis represents the most desirable approach for doing so and should be a part of every benefit-cost analysis (Weimer & Vining, 2009).

Monte Carlo analysis is appealing because its logic is straightforward and its execution is relatively feasible. In its essence, Monte Carlo analysis involves conducting a large number of trials — usually 10,000 or 100,000 — each of which performs the same calculation. In the context of benefit-cost analysis, Monte Carlo simulations are most often used to estimate net benefits, but the method can also be used to estimate specific benefit or cost categories. In each trial, all uncertain parameters are randomly drawn from probability distributions specified by the analyst (Boardman et al., 2006). When possible, these distributions should be based on published empirical results. The end result of this simulation process is a distribution of estimates. Within this distribution, the mean, standard deviation, minimum, and maximum estimates are often of interest to analysts. The mean estimate represents the expected value of the category while the standard deviation can be interpreted as a measure of the uncertainty of that estimate. The minimum and maximum estimates represent worst- and best-case scenarios, respectively. When Monte Carlo analysis is used to calculate net benefits, the proportion of estimates greater than zero can be interpreted as the probability that the policy will have positive net benefits (Weimer & Vining, 2009).
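
A minimal sketch of the procedure is shown below. The program, its impact per participant, the monetized value per unit of impact, the cost figures, and the chosen distributions are all hypothetical assumptions made for illustration; in an actual analysis, the distributions would be grounded in published empirical results as described above.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 10_000

# Hypothetical program: all parameter values and distributions below are
# illustrative assumptions, not estimates from any actual evaluation.
effect = rng.normal(0.15, 0.05, n_trials)         # impact per participant (uncertain)
value_per_unit = rng.normal(2000, 500, n_trials)  # monetized value of one unit of impact
participants = 1_000
cost = rng.normal(250_000, 25_000, n_trials)      # total program cost (uncertain)

# Each trial draws one value of every uncertain parameter and computes net benefits.
net_benefits = effect * value_per_unit * participants - cost

print(f"mean net benefits:    {net_benefits.mean():,.0f}")
print(f"std. deviation:       {net_benefits.std(ddof=1):,.0f}")
print(f"minimum / maximum:    {net_benefits.min():,.0f} / {net_benefits.max():,.0f}")
print(f"Pr(net benefits > 0): {(net_benefits > 0).mean():.2f}")
```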

From a benefit-cost perspective, the primary appeal of Monte Carlo analysis stems from its ability to simultaneously accommodate all uncertain parameters when calculating estimates of a specific benefit or cost category, or of the net benefits of a project or policy. In addition to accommodating uncertain parameters, Monte Carlo analysis has the added ability to distill the results into an easily interpretable format. The results can be effectively presented either numerically — as a table — or graphically — as a histogram. Furthermore, Monte Carlo analysis is neither time nor resource intensive; it can be performed in fairly little time by anyone with access to a spreadsheet program or nearly any statistical software package. Monte Carlo analysis represents a powerful, yet simple, method for conveying uncertainty in estimates.

Unlike other recent trends and innovations in policy analysis and benefit-cost analysis, Monte Carlo analysis presents few notable disadvantages or challenges; the principal demand it places on the analyst is the specification of plausible probability distributions for the uncertain parameters. Overall, because of the large potential benefits and the limited downsides of Monte Carlo analysis, it is easy to see why leading benefit-cost scholars believe the practice should be part of every benefit-cost analysis (Boardman et al., 2006; Weimer & Vining, 2009).

Institutional Supporters of Policy Analysis

Public policy analysis has historically faced several hurdles to becoming an influential, mainstream analytical approach in both academia and the broader policy community. In academia, public policy analysis lacks a single disciplinary home; it draws on the fields of political science, economics, sociology, law, and many others. While the eclecticism of policy analysis is a large part of its attraction, it has also resulted in policy analysis struggling to gain the institutional resources, support, and stature that often accompany approaches grounded in a single discipline. In addition, scholars often lack career-advancement incentives to work in this area. For better or worse, the incentive structure underlying career advancement generally encourages publication in disciplinary journals. This fact has likely dissuaded some scholars from focusing more heavily on policy analysis. In policy communities, policy analysis and benefit-cost analysis have previously been viewed as manipulable methodological tools that policy advocates often use to support a desired policy. This view was strengthened by the increased visibility of think tanks with particular viewpoints through the 1980s and 1990s.

Taken together, these factors have resulted in substantial variation in the use, popularity, and influence of policy analysis over the years. There is a growing sense, however, that this traditional variability is diminishing and policy analysis is beginning to consistently occupy a mainstream, influential position in policy debates. This movement toward the mainstream is the result of a confluence of factors, but increased institutional support has played a substantial role. Although a wide variety of institutions have been instrumental in increasing the visibility of high-quality policy analysis, the actions of three particular institutions — the Congressional Budget Office, the Washington State Institute for Public Policy, and the John D. and Catherine T. MacArthur Foundation — have been particularly effective in promoting policy analysis.

Although the Congressional Budget Office (CBO) has long been a source of rigorous, unbiased policy analysis, the value of such work became acutely apparent during the debate over health care reform that took place in 2009 and 2010.5 As the debate progressed, it became clear that policymakers of all viewpoints and partisan affiliations trusted and respected the analysis issued by CBO. There were numerous points where the future direction of the debate was clearly going to be shaped by a forthcoming CBO analysis. Rarely has policy analytic work played such a visible and influential role in a policy debate of such high stakes. By providing rigorous analysis with clearly stated assumptions and no policy agenda, the CBO illustrated the substantial value that such high-quality work can bring to important policy debates.

Although the CBO's influence resides primarily in the policy community, it is not exclusive to that domain; CBO employees routinely take active steps to engage the professional and scholarly communities as well. A prominent example of such engagement is a pair of 2007 pieces coauthored by Orszag and Ellis — the CBO director and a senior analyst at the time — on the topic of rising health care costs that appeared in the New England Journal of Medicine (Orszag & Ellis, 2007a, 2007b). The perspectives presented in these articles were influential in shaping discussions among policymakers, scholars, and practitioners. As an example of their influence, these pieces have each been cited approximately 60 times in the scholarly literature since their publication about three years ago.

Most states have an organization or agency that is charged with providing state policymakers with information on relevant policy options, but few are as skilled and respected as the Washington State Institute for Public Policy (WSIPP) (see Weimer and Vining 2009 for an in-depth description of the activities of the WSIPP).6 The WSIPP was founded in the early 1980s and was charged with providing policymakers with unbiased analysis on topics requested by the state legislature. A feature that makes the WSIPP unique is its ability to execute high-quality benefit-cost analysis. The WSIPP routinely releases analyses that use sound analytic practice to bring evidence to bear on a wide variety of current policy issues. For example, WSIPP recently released a benefit-cost analysis of extending foster care to age 21 (Burley & Lee, 2010). It also routinely performs analyses in the areas of mental health and criminal justice. Unlike the CBO, which only analyzes the specific policy proposal submitted by Congress, the WSIPP often analyzes several potential policy alternatives. The WSIPP provides a valuable service to Washington policymakers, and other states would likely benefit from developing an institution with similar capacities.

While institutions such as the CBO and the WSIPP have been successful at generating respect for high-quality policy analysis in the policy community, the MacArthur Foundation has focused mainly on promoting the practice of high-quality policy analysis within academia. The MacArthur Foundation has provided a substantial amount of resources and support to further the causes of policy analysis and benefit-cost analysis.7 For example, it has helped underwrite the Benefit-Cost Analysis Center at the University of Washington as well as the Society for Benefit-Cost Analysis.8 It has also financed workshops designed to strengthen the practice of benefit-cost analysis and encourages many of its grantees to perform benefit-cost analyses on a variety of policies and interventions. In short, the MacArthur Foundation has exhibited a substantial commitment to improving and increasing the practice of policy analysis and benefit-cost analysis among scholars. The tangible results of this commitment can be found in several recent publications that can serve as useful resources for both scholars and practitioners of benefit-cost analysis. These publications include an edited volume that assesses the state of the practice of benefit-cost analysis in ten distinct social policy areas (Weimer & Vining, 2009), a recent report that provides guidance for valuing benefits in benefit-cost analyses of social programs (Karoly, 2008), and articles appearing in the recently established Journal of Benefit-Cost Analysis.

Although the MacArthur Foundation has focused heavily on promoting policy analysis within academia, it also recognizes the importance of building the capacity for high-quality policy analysis in the larger policy community. As a result, the MacArthur Foundation — in collaboration with the Pew Charitable Trusts — has initiated an effort to expand benefit-cost capacity at the state level. In effect, the organizations hope to promote the creation of organizations with benefit-cost capacity on par with that of the WSIPP. Through its diverse efforts to enhance the practice of policy analysis, the MacArthur Foundation strives to positively impact policy outcomes.

The three organizations discussed above are diverse in many respects, but they share one important commonality. Specifically, these institutions are committed to conducting and promoting high-quality policy analysis. Such a commitment permits these institutions to gain the trust of policymakers with diverse viewpoints and partisan affiliations. When all interests in a policy debate respect and trust a particular organization, analysis by that organization can provide a common starting point for constructive discussions and result in an improved policy outcome.

Conclusion

The ultimate purpose of public policy analysis is to provide scholars, analysts, and practitioners with a rigorous, systematic analytical approach for identifying the most desirable policy alternative for addressing a specific problem. Similarly, the primary goal of benefit-cost analysis is to determine whether a particular policy improves social efficiency, that is, whether its social benefits exceed its social costs. Although both of these analytical approaches are based on well-accepted principles that contribute a measure of stability to their practice, they are also flexible enough to incorporate innovations that are likely to help analysts reach the goal of accurately identifying the most desirable policy alternative. The flexibility of these approaches is illustrated by their incorporation of the three advances discussed in this essay — the move toward social experimentation, the growing use of meta-analysis and Monte Carlo analysis, and the increased institutional support for high-quality policy analysis.

Each of these innovations has exhibited the ability to improve the practice of public policy analysis in a distinct manner. Social experiments provide information that may allow policy analysts to project the impact of policy alternatives on relevant goals and outcomes with more accuracy and confidence. Similarly, meta-analysis represents a systematic method for drawing on all available information when predicting the effects of a policy or intervention, a feature that can enhance the generalizability of a benefit-cost analysis. Monte Carlo simulation is an ideal tool for accommodating uncertainty that may arise from several different sources and distilling it into a single, easily interpretable expression of the uncertainty inherent in any benefit-cost estimates. The increased institutional demand for, and practice of, high-quality policy analysis has provided resources and support for improving and advancing the field of policy analysis. As the field undergoes further evolution, it has the potential to garner even more institutional support, which could in turn spark a cycle of continuous improvement and progress in public policy analysis.


Deven Carlson is a PhD candidate in the Department of Political Science, University of Wisconsin-Madison. His primary substantive research interests include education policy and housing policy. His methodological interests include policy analysis, policy evaluation, benefit-cost analysis, and experimental design.


Notes

I would like to thank Dave Weimer and Sara Dahill-Brown for comments on an earlier draft of this essay. Their comments and suggestions undoubtedly improved the quality of this essay.

  1. For a systematic discussion of the difference between policy research and policy analysis see Weimer (2009) or Weimer and Vining (2011).
  2. For example, regression coefficients can be interpreted as causal estimates under the assumption of conditional independence.
  3. Job training programs — such as the National Supported Work Demonstration (NSW) — represent another policy area that pioneered the use of experimental designs to evaluate their effects (see LaLonde, 1986 for a brief description of the NSW).
  4. See http://ies.ed.gov/ncee/wwc/references/registries/RCTSearch/RCTSearch.aspx for a listing of all evaluations contained in the Registry of RCTs.
  5. For further reading, the CBO's website can be accessed at www.cbo.gov. For a comprehensive collection of CBO publications related to the health care debate see http://www.cbo.gov/publications/collections/collections.cfm?collect=10.
  6. For further reading, the WSIPP website is located at: http://www.wsipp.wa.gov/. The website contains links to all WSIPP publications, including benefit-cost analyses.
  7. For a description of MacArthur's initiative to strengthen and promote the practice of benefit-cost analysis, see http://www.macfound.org/atf/cf/%7BB0386CE3-8B29-4162-8098-E466FB856794%7D/SOCIALBENEFITSBUFFSHEET-V5.PDF.
  8. For more information about the Benefit-Cost Analysis Center see http://evans.washington.edu/node/1262. The website for the Society for Benefit-Cost Analysis can be found at http://benefitcostanalysis.org/.

References

  • Bardach, Eugene. 2009. A Practical Guide for Policy Analysis: The Eightfold Path to More Effective Problem Solving. Washington, DC: CQ Press.
  • Beatty, Alexandra. 2009. Strengthening Benefit-Cost Analysis for Early Childhood Interventions: Workshop Summary. Washington, DC: National Academies Press.
  • Bloom, Howard S. 2005. "Randomizing Groups to Evaluate Place-Based Programs." In Learning More from Social Experiments: Evolving Analytic Approaches, ed. H.S. Bloom. New York: Russell Sage, 115–72.
  • Boardman, Anthony E., David H. Greenberg, Aidan R. Vining, and David L. Weimer. 2006. Cost-Benefit Analysis: Concepts and Practice, 3rd ed., Upper Saddle River, NJ: Prentice Hall.
  • Bohrnstedt, George W., and Brian M. Stecher, eds. 2002. What We Have Learned About Class Size Reduction in California. Sacramento, CA: California Department of Education.
  • Burley, Mason, and Stephanie Lee. 2010. Extending Foster Care to Age 21: Measuring Costs and Benefits in Washington State. Olympia: Washington State Institute for Public Policy. Document No. 10-01-3902.
  • Campbell, Frances A., and Craig T. Ramey. 1994. "Effects of Early Intervention on Intellectual and Academic Achievement." Child Development 65: 684–98.
  • Campbell, Donald T., and Julian C. Stanley. 1963. "Experimental and Quasi-Experimental Designs for Research on Teaching." In Handbook of Research on Teaching, ed. Nathaniel L. Gage. Chicago: Rand McNally, 171–246.
  • ———. 1966. Experimental and Quasi-Experimental Designs for Research. Chicago: Rand McNally.
  • Cooper, Harris, Larry V. Hedges, and Jeffrey C. Valentine, eds. 2009. The Handbook of Research Synthesis and Meta-Analysis, 2nd ed., New York: Russell Sage Foundation.
  • Fisher, Ronald A. 1935. The Design of Experiments. Edinburgh: Oliver and Boyd.
  • Gleason, Philip, Melissa Clark, Christina Clark Tuttle, and Emily Dwoyer. 2010. The Evaluation of Charter School Impacts: Final Report. Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
  • Greenberg, David H., and Mark Shroder. 2004. The Digest of Social Experiments, 3rd ed., Washington, DC: Urban Institute Press.
  • Heckman, James J., and Jeffrey A. Smith. 1995. "Assessing the Case for Social Experiments." Journal of Economic Perspectives 9: 85–110.
  • Heckman, James J., Seong Hyeok Moon, Rodrigo Pinto, Peter A. Savelyev, and Adam Yavitz. 2010. "The Rate of Return to the HighScope Perry Preschool Program." Journal of Public Economics 94: 114–28.
  • Karoly, Lynn A. 2008. Valuing Benefits in Benefit-Cost Studies of Social Programs. Santa Monica, CA: RAND Corporation.
  • LaLonde, Robert J. 1986. "Evaluating the Econometric Evaluations of Training Programs with Experimental Data." American Economic Review 76: 604–20.
  • Lipsey, Mark W. 2009. Generalizability: The Role of Meta-Analysis. Presentation for the Workshop on Strengthening Benefit-Cost Methodology for the Evaluation of Early Childhood Interventions. [Online]. http://www.bocyf.org/lipsey_presentation.pdf. Accessed July 22, 2010.
  • Lipsey, Mark W., and David B. Wilson. 2001. Practical Meta-Analysis. Thousand Oaks, CA: Sage.
  • Orszag, Peter R., and Philip Ellis. 2007a. "The Challenge of Rising Health Care Costs — A View from the Congressional Budget Office." New England Journal of Medicine 357: 1793–5.
  • ———. 2007b. "Addressing Rising Health Care Costs — A View from the Congressional Budget Office." New England Journal of Medicine 357: 1885–7.
  • Puma, Michael J., Robert B. Olsen, Stephen H. Bell, and Cristofer Price. 2009. What to Do When Data Are Missing in Group Randomized Controlled Trials. Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
  • Raudenbush, Stephen W. 1997. "Statistical Analysis and Optimal Design for Cluster Randomized Trials." Psychological Methods 2: 173–85.
  • Raudenbush, Stephen W., Andres Martinez, and Jessaca Spybrook. 2007. "Strategies for Improving Precision in Group-Randomized Experiments." Educational Evaluation and Policy Analysis 29: 5–29.
  • Schneider, Barbara, Martin Carnoy, Jeremy Kilpatrick, William H. Schmidt, and Richard J. Shavelson. 2007. Estimating Causal Effects Using Experimental and Observational Designs. Washington, DC: American Educational Research Association.
  • Schochet, Peter Z. 2009. The Estimation of Average Treatment Effects for Clustered RCTs of Education Interventions. Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
  • Stock, James H., and Mark W. Watson. 2003. Introduction to Econometrics. New York: Addison-Wesley.
  • Weimer, David L. 2009. "Making Education Research More Policy Analytic." In Handbook of Education Policy Research, eds. Gary Sykes, Barbara Schneider, and David N. Plank. New York: Routledge, 93–100.
  • Weimer, David L., and Aidan R. Vining. 2009. "An Agenda for Promoting and Improving the Use of CBA in Social Policy." In Investing in the Disadvantaged: Assessing the Benefits and Costs of Social Policies, eds. David L. Weimer and Aidan R. Vining. Washington, DC: Georgetown University Press, 249–72.
  • ———. 2011. Policy Analysis, 5th ed., New York: Longman.
  • Wolf, Patrick, Babette Gutmann, Michael Puma, Brian Kisida, Lou Rizzo, Nada Eissa, and Matthew Carr. 2010. Evaluation of the DC Opportunity Scholarship Program: Final Report. Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.