A GAO report released in late September found that fewer than half (40 percent) of surveyed federal managers say an evaluation of any program, operation, or project they were involved in had been completed within the past five years. The figure is statistically unchanged from 2013, the last time GAO surveyed federal managers on the topic.
The apparently unchanged use of evaluation comes despite years of promotion under the Obama administration, public scorecards released by outside groups like Results for America, and actions by Congress, such as the creation of the Commission on Evidence-Based Policymaking. According to the report:
For several years, OMB has encouraged agencies to use program evaluations and other forms of evidence to learn what works and what does not, and how to improve results. Yet, agencies appear not to have expanded their capacity to conduct or use evaluation in decision making since 2013.
The report detailed a number of recent activities that the Obama and Trump administrations have undertaken to address the gap.
OMB staff have … established several interagency workgroups to promote sharing evaluation expertise and have organized a series of workshops and interagency collaborations. For example, in 2016 we recommended that OMB establish a formal means for agencies to collaborate on tiered evidence grants, a new grant design in which funding is based on the level of evidence available on the effectiveness of the grantee’s service delivery model.
OMB’s Evidence Team convened an interagency working group on tiered evidence grants that meets quarterly and established a website for the group to share resources. This team also co-chairs the Interagency Council on Evaluation Policy, a group of 10 agency evaluation offices that have collaborated on developing common policies and conducting workshops.
The Trump Administration’s 2018 Budget proposal endorses a continued commitment to agencies building a portfolio of evidence on what works and how to improve results, investing in evidence infrastructure and capacity, and acting on a strong body of evidence to obtain results.
Despite the apparently mixed success of these efforts, where evaluations did occur, they often made a positive difference in performance.
About half the managers who had evaluations once again reported that they contributed to a great or very great extent to improving program management or performance and assessing program effectiveness (54 and 48 percent, respectively), while fewer reported that they contributed to allocating program resources or informing the public (35 and 22 percent, respectively).
When evaluations did not lead to improved performance, a lack of resources was often the reason:
Those who had evaluations most often cited a lack of resources as a barrier to implementing evaluation findings. Agency evaluators noted that it takes a number of studies rather than just one study to influence change in programs or policies.
Difficulty isolating a program's impact also played a role in some cases:
Some federal managers who reported having evaluations also reported that difficulty distinguishing between results produced by the program and results caused by other factors was a great or very great barrier to evaluation use (18 percent). Across the federal government, programs aim to achieve outcomes that they do not control, that are influenced by other programs or external social, economic, or environmental factors, complicating the task of assessing program effectiveness.
The report also found low levels of congressional interest in evaluation findings.
Despite GPRAMA’s requirement that agencies consult with the Congress in developing their strategic plans and priority goals, we found their communication to be one-directional, resembling reporting more than dialogue. In our 2013 interviews with evaluators, one evaluator explained that, for the most part, they conduct formal briefings for the Congress in a tense, high-stakes environment; they lack the opportunity for informal discussion of their results.
To further advance the use of evaluations, GAO recommended that agencies develop evaluation plans or learning agendas “to ensure that an agency’s scarce research and evaluation resources are targeted to its most important issues.” The report recommends that OMB direct each cabinet-level department to prepare an annual agency-wide evaluation plan that describes:
- key questions for each significant evaluation study that the agency plans to begin in the next fiscal year, and
- congressional committees; federal, state, and local program partners; researchers; and other stakeholders that were consulted in preparing the plan.