David H. Greenberg

University of Maryland, Baltimore County
Economics (Emeritus)

5531 High Tor Hill
Columbia, MD 21045
USA
dhgreenb@umbc.edu

I recently completed cost-benefit analyses of several U.S. and U.K. welfare-to-work, individual development account (IDA), and educational programs. I am currently working on several cost and cost-benefit analyses of demonstration programs targeted at disadvantaged populations. With a co-author, I have also been working on several papers on social experimentation and editing a special issue of Evaluation Review. My co-authors and I have just completed the fifth edition of a widely used textbook on cost-benefit analysis. I expect to continue doing cost and cost-benefit analyses of social programs in the future.

Citation:
Boardman, Anthony, David Greenberg, Aidan Vining, and David Weimer. 2018 (forthcoming). Cost-Benefit Analysis: Concepts and Practice, 5th edition. Cambridge, UK: Cambridge University Press.
Abstract: This is a new edition of a widely used textbook that provides thorough coverage of the major topics in cost-benefit analysis. The new edition has been considerably reorganized to aid student and practitioner understanding and learning. It now includes eight cases that demonstrate the actual practice of cost-benefit analysis, five of which involve significant new material. In addition, a number of chapter exhibits have been updated to cover contemporary research by leading cost-benefit experts on important policy problems.
Citation:
Greenberg, David, Johanna Walter, and Gennevieve Knight. 2013. "A Cost-Benefit Analysis of the UK Employment Retention and Advancement Demonstration." Applied Economics 45 (31): 4299-4318.
Abstract: This paper presents a cost-benefit analysis of Britain’s Employment Retention and Advancement (ERA) demonstration, which was evaluated through the first large-scale randomised controlled trial in the UK. ERA used a combination of job coaching and financial incentives in attempting to help long-term unemployed men and low-income lone parents sustain employment and progress in work once they were employed. Using both administrative and survey data, ERA’s effects on benefits and costs were estimated through impact analyses that exploited the experimental design. The findings indicated that ERA was cost-beneficial for long-term unemployed adult men, but not for lone parents. The key findings appear robust to sensitivity tests. Uncertainty, as implied by the standard errors of the estimated impacts, was addressed through a Monte Carlo analysis, an approach seldom previously used in cost-benefit analyses of social programs.
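A minimal sketch of this kind of Monte Carlo exercise, not drawn from the ERA analysis itself: each impact estimate is treated as normally distributed around its point estimate with its reported standard error, repeated draws are combined with program costs, and the share of draws with positive net benefits is reported. All numbers, labels, and variable names below are hypothetical and purely illustrative.

import numpy as np

# Monte Carlo sketch for cost-benefit uncertainty (hypothetical values).
# Each impact-based component of per-participant net benefits is drawn from a
# normal sampling distribution centred on its point estimate with its standard error.
rng = np.random.default_rng(0)

components = [
    ("earnings impact",         1500.0, 600.0),   # (label, point estimate, standard error)
    ("transfer-payment impact", -400.0, 250.0),
]
program_cost = 900.0   # assumed per-participant cost, treated here as known with certainty

n_draws = 10_000
net_benefits = -program_cost * np.ones(n_draws)
for label, estimate, se in components:
    net_benefits += rng.normal(estimate, se, size=n_draws)

print(f"mean net benefit per participant: {net_benefits.mean():.0f}")
print(f"probability that net benefits are positive: {(net_benefits > 0).mean():.2f}")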
Citation:
Greenberg, David, Charles Michalopoulos, and Philip K. Robins. 2006. "Do Experimental and Nonexperimental Evaluations Give Different Answers about the Effectiveness of Government-Funded Training Programs?" Journal of Policy Analysis and Management 25 (3): 523-552. Reprinted in Social Experimentation, Program Evaluation, and Public Policy, edited by Maureen Pirog, Blackwell Publishing, 2008.
Citation:
Barnow, Burt S. and David Greenberg. 2015. "Do Estimated Impacts on Earnings Depend on the Source of the Data Used to Measure Them? Evidence from Previous Social Experiments." Evaluation Review: 1-50.
Abstract: Background: Impact evaluations draw their data from two sources: surveys conducted for the evaluation or administrative data collected for other purposes. Both types of data have been used in impact evaluations of social programs. Objective: This study analyzes the causes of differences in impact estimates when survey data and administrative data are used to evaluate earnings impacts in social experiments and discusses the differences observed in eight evaluations of social experiments that used both survey and administrative data. Results: There are important trade-offs between the two data sources. Administrative data are less expensive but may not cover all income and may not cover the time period desired, while surveys can be designed to avoid these problems. We note that errors can be due to nonresponse or to reporting, and that errors can be balanced between the treatment and control groups or unbalanced. We find that earnings are usually higher in survey data than in administrative data, owing to differences in coverage and likely overreporting of overtime hours and pay in survey data. Evaluations using survey data usually find greater impacts, sometimes much greater. Conclusions: The much lower cost of administrative data makes their use attractive, but such data are still subject to underreporting and other problems. We recommend further evaluations using both types of data with investigative audits to better understand the sources and magnitudes of errors in both survey and administrative data so that appropriate corrections to the data can be made.
Citation:
Barnow, Burt S. and David Greenberg. 2014. "Flaws in Evaluations of Social Programs: Illustrations From Randomized Controlled Trials." Evaluation Review 38(5): 359-387.
Abstract: Method: The eight flaws are grouped into four categories based on how they affect impact estimates: statistical imprecision; biases; failure of impact estimates to measure effects of the planned treatment; and flaws that result from weakening an evaluation design. Each flaw is illustrated with examples from social experiments. Although these illustrations are from randomized controlled trials (RCTs), the flaws can occur in any type of evaluation; we use RCTs to illustrate them because people sometimes assume that RCTs are immune to such problems. A summary table lists the flaws, indicates the circumstances under which they occur, notes their potential seriousness, and suggests approaches for minimizing them. Results: Some of the flaws are minor hurdles, while others cause evaluations to fail; that is, the evaluation is unable to provide a valid test of the hypothesis of interest. The flaws that appear to occur most frequently are response bias resulting from attrition, failure to implement the treatment as designed, and too small a sample to detect impacts. The third of these can result from insufficient marketing, too small an initial target group, lack of interest among the target group in participating (if the treatment is voluntary), or attrition. Conclusion: To a considerable degree, the flaws we discuss can be minimized. For instance, implementation failures and too small a sample can usually be avoided with sufficient planning, and response bias can often be mitigated, for example through increased follow-up efforts in conducting surveys.
Citation:
Greenberg, David. 2014. "Are Individual Development Accounts Cost-Effective? Findings from a Long-Term Follow-Up of a Social Experiment in Tulsa." Journal of Benefit-Cost Analysis 4 (3): 263-300.
Abstract: This article presents findings from a cost-benefit analysis of the Tulsa Individual Development Account (IDA) program, a demonstration program that was initiated in the late 1990s and is being evaluated through random assignment. The program put particular emphasis on using savings subsidies to help participants accumulate housing assets. The key follow-up data used in the evaluation were collected around 10 years after random assignment, about six years after the program ended. The results imply that, during this 10-year observation period, program participants gained from the program, that the program resulted in net costs to the government and private donors, and that society as a whole was probably worse off as a consequence of the program. The article examines in some detail whether these findings are robust to a number of different considerations, including the assumptions upon which the results depend, the uncertainty reflected by the standard errors of the impact estimates used to derive the benefits and costs, and omitted benefits and costs, and it concludes that they are essentially robust. For example, a Monte Carlo analysis suggests that the probability that the societal net benefits of the Tulsa program were negative during the observation period is over 90 percent and that the probability that the loss to society exceeded $1,000 is 80 percent. Further analysis considered benefits and costs that might occur beyond the observation period. Based on this analysis, it appears plausible, although far from certain, that the societal net benefits of the Tulsa program could eventually become positive. This would occur if the program’s apparent positive net impact on educational attainment generates substantial positive effects on the earnings of program participants after the end of the observation period. However, there was no evidence that the educational impacts had begun to produce positive effects on earnings by the end of the observation period.
Citation:
Barnow, Burt S. and David Greenberg. 2013. "Replication Issues in Social Experiments: Lessons from US Labor Market Programs." Journal for Labour Market Research 46:239-252.
Abstract: When evaluating a pilot or demonstration program, there are risks in drawing inferences from a single test. This paper reviews the experience of replication efforts for demonstrations that used randomized controlled trials in both the initial evaluation and the replications. Although replications of promising programs are primarily undertaken to increase sample size, they are also used to learn whether the intervention is successful for other target groups and geographic locations, and to vary some of the intervention’s features. In many cases, replications fail to achieve the same success as the original evaluation, and the paper reviews reasons that have been suggested for such failures. The paper examines what has been learned from replications in which random assignment was used in six instances: income maintenance experiments, unemployment insurance bonus experiments, the Center for Employment Training program, job clubs, job search experiments, and the Quantum Opportunity Program. The paper concludes by summarizing lessons learned from the review and areas where more research is needed.
Citation:
Greenberg, David, Victoria Deitch, and Gayle Hamilton. 2010. "A Synthesis of Random Assignment Benefit-Cost Studies of Welfare-to-Work Programs." Journal of Benefit-Cost Analysis 1 (1), article 3.
Abstract: Over the past two decades, federal and state policymakers have dramatically reshaped the nation’s system of cash welfare assistance for low-income families. During this period, there has been considerable variation from state to state in approaches to welfare reform, which are often collectively referred to as “welfare-to-work programs.” This article synthesizes an extraordinary body of evidence: results from 28 benefit-cost studies of welfare-to-work programs based on random assignment evaluation designs. Each of the 28 programs can be viewed as a test of one of six types of welfare reform approaches: mandatory work experience programs, mandatory job-search-first programs, mandatory education-first programs, mandatory mixed-initial-activity programs, earnings supplement programs, and time-limit-mix programs. After describing how benefit-cost studies of welfare-to-work programs are conducted and considering some limitations of these studies, the synthesis addresses such questions as: Which welfare reform approaches yield a positive return on the investments made, from the perspectives of program participants, government budgets, and society as a whole? Which approaches make program participants better off financially? In which approaches do benefits exceed costs from the government’s point of view? The last two of these questions bear directly on the trade-off between reducing dependency on government benefits and ensuring adequate incomes for low-income families. Because the benefit-cost studies examined program effects separately from the distinct perspectives of government budgets and participants’ incomes, they address this trade-off directly. The article thus presents benefit-cost findings that can aid policymakers and program developers in assessing the often complex trade-offs involved in balancing the desire to ensure adequate incomes for the poor with the desire to encourage self-sufficiency.
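The trade-off among perspectives can be made concrete with a stylized accounting sketch (illustrative symbols only; it ignores taxes, work-related expenses, and other components that the actual studies include). Here \Delta E denotes the program’s impact on participants’ earnings, \Delta T the change in transfer payments participants receive, and C the government’s cost of operating the program:

\begin{align*}
NB_{\text{participants}} &= \Delta E + \Delta T \\
NB_{\text{government}}   &= -\Delta T - C \\
NB_{\text{society}}      &= NB_{\text{participants}} + NB_{\text{government}} = \Delta E - C
\end{align*}

Because \Delta T appears with opposite signs in the participant and government perspectives, transfers net out of the societal calculation; a program can therefore make participants better off while producing negative net benefits for the government and for society, which is the kind of trade-off the synthesis examines.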
Citation:
Greenberg, David, and Mark Shroder. 2004. The Digest of Social Experiments, 3rd edition. Washington, D.C.: The Urban Institute Press.
Citation:
Greenberg, David, and Philip K. Robins. 2011. "Have Welfare-to-Work Programs Improved over Time in Putting Welfare Recipients to Work?" Industrial and Labor Relations Review.
Citation:
Boardman, Anthony, David Greenberg, Aidan Vining, and David Weimer. 2011. Cost-Benefit Analysis: Concepts and Practice, 4th edition. Englewood Cliffs, NJ: Prentice Hall.
Citation:
Greenberg, David, and Philip Robins. 2008. "Incorporating Nonmarket Time into Benefit-Cost Analyses of Social Programs: An Application to the Self-Sufficiency Project." Journal of Public Economics 92: 766-794.

Substantive Focus:
Economic Policy PRIMARY
Education Policy
Social Policy SECONDARY

Theoretical Focus:
Policy Analysis and Evaluation PRIMARY

Keywords

COST-BENEFIT ANALYSIS, SOCIAL POLICY, RANDOMIZED FIELD EXPERIMENTS