Will the new Evidence-based Policymaking Commission, created by Congress and signed into law by President Obama on March 30, actually further the cause of evidence-based policymaking?
With just one meeting under its belt, it is too early to know for sure, but the inaugural session on July 22 provided reason to worry.
Presentations by executive branch officials, including those from the Census Bureau (which is staffing the commission), focused almost entirely on improving the quality and usefulness of federal statistics. This focus was also reflected in the meeting’s attendees, most of whom represented federal statistical agencies, with few (if any) representatives of the federal evaluation community.
Even the one presentation devoted to evaluation, by Raj Chetty of Stanford University, seemed to downplay the usefulness of randomized controlled trials (RCTs), a gold-standard evaluation methodology that the authorizing legislation singles out for support. More broadly, the needs of local practitioners and evaluators, who are commonly on the front lines of evidence-based work, were almost entirely absent from a meeting that lasted over three hours.
Data, Data Everywhere
It was a curious start for a commission that drew strong bipartisan support in Congress and from the Obama administration as a way to further the evidence-based policymaking agenda. The commission is directed by law to issue a report within 15 months of its appointment, a deadline that, according to an estimate by the chair, Katharine Abraham, falls on September 6, 2017.
Between now and then, the commission has been tasked with developing recommendations to: (1) improve the federal data infrastructure while respecting privacy and security concerns; (2) incorporate outcomes measurement, RCTs, and rigorous impact analysis into federal program design; and (3) determine whether to create a clearinghouse that would facilitate access to data by the research community.
The initial presentations from the White House and congressional staff reinforced these statutory goals. Devin O’Connor, an associate director at the White House Office of Management and Budget (OMB), reiterated administration priorities for building and using rigorous evidence. John Righter, staff to Senator Patty Murray (D-WA), encouraged the commission to work with outside entities like the Evidence-based Policymaking Collaborative at the Urban Institute. Ted McCann, staff to House Speaker Paul Ryan, urged the commission to consider ways to include evidence in policymaking “writ large.”
When asked if Congress had intended to impose any boundaries on the commission, Righter answered that the commissioners should define their mission as broadly as possible, consistent with the law’s requirement that recommendations be supported by at least 75% of the members. Congress did not intend to prescribe limits, he said, but to give the commission needed room. He also urged the commission to consider issuing an interim report for the next administration soon after the new president is inaugurated in January.
Despite this broad mandate, the meeting’s next three speakers, all representing federal statistical agencies, delivered narrow presentations focused on improving linkages between surveys and federal administrative data.
Nancy Potok, deputy director of the Census Bureau, cited the need for greater efficiency heading into the 2020 census. Mary Bohman, director of the Economic Research Service, described how data from three agencies have provided descriptive insights into SNAP usage. The last speaker, Jeri Mulrow of the Bureau of Justice Statistics, noted that such statistics could be used to analyze recidivism and the effectiveness of longer or shorter sentences, but did not go into detail.
The most promising of the presentations from an evaluation perspective was saved for last. Raj Chetty, a well-regarded researcher at Stanford University, outlined recent work that he and a number of colleagues have done on economic mobility. Chetty’s work, based on a quasi-experimental design that relied extensively on administrative data, seemed to show that housing vouchers could create greater income mobility for the children of low-income families.
Interestingly, while Chetty’s example demonstrated the potential usefulness of administrative data in evaluations, it also challenged the results of an earlier, federally sponsored RCT-based evaluation, which found that housing vouchers produced few noticeable effects. Chetty did not hesitate to point out the limitations of such RCTs during his presentation. (Chetty has separately analyzed the longer-term effects of that earlier study, but this was not the focus of his presentation.)
Chetty believes that his statistical associations demonstrate causation. Left unsaid at the meeting was a possible counterargument: his study may only have shown that people who move out of poor areas on their own, usually without the help of a housing voucher, are systematically different from housing voucher recipients and other low-income people in general, perhaps more motivated and therefore more likely to succeed. A well-conducted RCT-based study would have controlled for such self-selection, and probably did in the case of the earlier research.
Regardless of the merits of these particular studies, which have proven to be controversial, correlation does not prove causation — and this has larger implications for the commission. While better access to big data will undoubtedly produce many benefits, it will also almost certainly generate many false positives and spurious associations. If the commission chooses to focus primarily on data issues while largely ignoring the needs of experimental research, it may only make such matters worse.
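For readers who want the statistical intuition, the sketch below is a minimal, purely illustrative simulation. All numbers and variable names are invented, not drawn from Chetty's work or the earlier voucher study; it simply shows how an unobserved trait such as motivation can inflate a naive comparison of movers and stayers, while random assignment of vouchers recovers the true effect.

```python
# Purely illustrative simulation of selection bias (invented numbers).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
TRUE_MOVE_EFFECT = 2_000  # assumed true earnings gain from moving, in dollars

# Unobserved confounder: "motivation" affects both moving and earnings.
motivation = rng.normal(0.0, 1.0, n)

# Observational world: more motivated people are more likely to move on their own.
moved = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * motivation))
earnings = (30_000 + 5_000 * motivation
            + TRUE_MOVE_EFFECT * moved
            + rng.normal(0.0, 3_000.0, n))
naive_estimate = earnings[moved].mean() - earnings[~moved].mean()

# RCT world: a coin flip assigns vouchers, severing the link to motivation.
voucher = rng.random(n) < 0.5
rct_earnings = (30_000 + 5_000 * motivation
                + TRUE_MOVE_EFFECT * voucher
                + rng.normal(0.0, 3_000.0, n))
rct_estimate = rct_earnings[voucher].mean() - rct_earnings[~voucher].mean()

print(f"true effect:    ${TRUE_MOVE_EFFECT:,.0f}")
print(f"naive estimate: ${naive_estimate:,.0f}")  # badly inflated by self-selection
print(f"RCT estimate:   ${rct_estimate:,.0f}")    # close to the truth
```

The particular numbers do not matter; the mechanism does. When the same hidden trait drives both the "treatment" and the outcome, observational comparisons can substantially overstate a program's impact, which is exactly why randomization remains so valuable.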
Reorienting Toward an Evidence Perspective
In fairness, it is easy to read too much into a single meeting, particularly the first one, which by law had to occur within 30 days of the commission’s appointment. Indeed, the last appointments from Senator Mitch McConnell (R-KY) had come just days before the meeting.
On the other hand, according to those familiar with its plans, the commission is only expected to meet quarterly. This suggests just another four or five meetings before it issues its final report next year (and probably only two before issuing a possible interim report at the start of the next administration, if the commission follows the advice of the congressional representatives at the meeting). This leaves very little time for adjustment.
Looking ahead, if the commission is indeed an evidence commission, then it needs to start with an evidence perspective. It should prioritize the needs of program evaluators and then work backwards from there, identifying and addressing the challenges that such evaluators face. It should not start from the point of view of federal statistical agencies, which would risk creating a large supply of data for which there may be only tangential uses.
More specifically, the commission should invite frontline (especially state and local) public sector and nonprofit actors who are known for strong use of data. It should invite comments from federal evaluation offices and from evidence-oriented programs like the Social Innovation Fund, Investing in Innovation (i3) program at the Department of Education, and teenage pregnancy prevention programs at HHS. And it should invite comments and presentations from well-known external evaluators like MDRC and Mathematica Policy Research.
It is an evidence commission, not a data commission. The needs of federal statistical agencies are secondary, and only important (in this context) to the extent they serve the needs of building evidence.
Update: On July 22, OMB released materials that were prepared for the commission’s first meeting. These materials include an overview of federal evidence-building efforts, white papers describing the uses of, and barriers to using, administrative data, and a white paper describing privacy and confidentiality concerns. OMB deserves to be applauded for these efforts.
Despite the release of these materials, the commission should recognize the significant public interest in its work and embrace additional transparency. It should make such materials publicly available at the time of future meetings and invite greater public participation. It should also avoid closed sessions, which require a statutory basis and compliance with the Federal Advisory Committee Act.