Good Programs Destroyed by Bad Evaluations

This is a guest blog post by Dr. Catherine Searle Renault, Principal and Owner of Innovation Policyworks LLC, and Research Fellow at the Center for Regional Economic Competitiveness, where she specializes in evaluation research. 

Who can argue with the importance of understanding whether taxpayer dollars are being used effectively to meet agreed-upon policy goals like economic growth? Across the country, the concept of regularly evaluating economic development incentives, including those implemented as tax credits, is broadly accepted. The devil, however, is in the details. The best evaluations follow recognized policy evaluation and data analysis methodologies and principles.

In Oklahoma and Maryland, the statutory evaluation is in the hands of evaluation professionals in the economic development agencies, rather than being delegated to a watchdog or audit organization, as has been proposed in some states. This keeps the evaluations credible and professionally done, and ensures that they actually answer the questions that legislatures and the public have.

Evaluations are substantially different from audits, in that they tend to focus on policy outcomes rather than on financial management and implementation. The distinction is important because the question legislators ask is whether the economic development incentives and tax credits are meeting their policy objectives, not whether they are managed correctly. This is especially relevant because many of the incentives are simply part of the tax code and are not actively managed by anyone.

Regardless of who holds the responsibility, getting answers to questions about the effectiveness of economic development programs requires more than just having an evaluation in statute. First, there is the problem that many of the incentives and tax credits were put into law without a clearly stated goal. It is impossible to tell whether a program is meeting its policy goals and objectives if these are not spelled out. The Legislature should ensure that the statute for every existing and future incentive articulates the reasoning behind the program, with more specificity than just “create jobs.”

Second, the Legislature should also ensure that evaluation is built into the enabling bills. Each new incentive should include a section that inserts the program into the biannual evaluation, and appropriations should cover the cost of the evaluation. An unfunded evaluation is frustrating to agencies and lawmakers alike. But good evaluations are neither free nor cheap.

Third, the Legislature should ensure that the reports they receive reflect good research design and, to the maximum extent possible, show a causal relationship between the program and the outcomes observed. Just because two events coincide doesn’t mean that one caused the other. Evaluators should compare companies that have received economic development incentives with those that haven’t, matching them on industry, stage of development, location, and other relevant attributes, and then measure any differences in job creation, investment, or community development.
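To make the matched-comparison idea concrete, here is a minimal sketch in Python with pandas, using invented firm-level data and exact matching on a few attributes. The column names and figures are hypothetical; real evaluations work from confidential administrative records and typically use richer designs, such as propensity-score matching.

```python
import pandas as pd

# Hypothetical firm-level data. A real evaluation would draw on
# confidential revenue and labor records (see the data-sharing point below).
firms = pd.DataFrame({
    "firm_id":    [1, 2, 3, 4, 5, 6, 7, 8],
    "received":   [True, True, True, True, False, False, False, False],
    "industry":   ["mfg", "mfg", "tech", "tech", "mfg", "mfg", "tech", "tech"],
    "stage":      ["mature", "startup", "startup", "mature",
                   "mature", "startup", "startup", "mature"],
    "region":     ["urban", "rural", "urban", "rural",
                   "urban", "rural", "urban", "rural"],
    "job_growth": [0.12, 0.30, 0.25, 0.08, 0.05, 0.22, 0.18, 0.03],
})

# Exact matching: for each incentive recipient, find non-recipients with
# the same industry, stage of development, and location.
keys = ["industry", "stage", "region"]
recipients = firms[firms["received"]]
controls = firms[~firms["received"]]

matched = recipients.merge(controls, on=keys, suffixes=("_treated", "_control"))

# The estimated effect is the average outcome difference between each
# recipient and its matched comparison firms.
effect = (matched["job_growth_treated"] - matched["job_growth_control"]).mean()
print(f"Average job-growth difference (treated minus matched control): {effect:.3f}")
```

Exact matching keeps the logic transparent, but with many matching attributes some recipients may find no identical comparison firm, which is why evaluators often turn to propensity scores or similar methods.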

Fourth, the Legislature needs to recognize the difficulty of balancing the need for appropriate data against the burden of reporting requirements on recipients of incentives and credits. Agencies that “own” relevant data, including revenue and labor departments, should be required to share it with the designated researchers, while all involved must maintain strict data confidentiality.

Fifth, the Legislature needs to understand that evaluation is a process, not an event. Annual or biannual looks at programs need to be seen in their economic context: even the best programs cannot overcome a downturn like the 2008 recession. When programs are reviewed on a regular basis, evaluators learn more than they would from random, isolated snapshots.

Finally, evaluations are best viewed as an opportunity to improve programs, rather than as a “gotcha” exercise in which alternative uses for the expenditures are already planned before the ink is dry on the reports. A presumption that all economic development programs are wasteful only strengthens the resolve of program managers and recipients alike to avoid evaluations altogether, rather than risk having good programs destroyed by bad evaluations.