On April 24, the U.S. Department of Education announced the availability of $135 million in total grants to be made to nonprofits and other eligible organizations under the administration’s Investing in Innovation (i3) program.
Under the program, grants are made across three different tiers of evidence, with the largest grants reserved for programs with the strongest evidence. On April 24, the Department announced the availability of funds for its top two tiers, “Validation” and “Scale-Up” grants, worth up to $12 million and $20 million respectively. It announced the availability of funds for its lowest tier, “Development” grants, worth up to $3 million, on March 14.
i3’s Current Status
This year’s competition marks the fifth under the program, which was first enacted as part of the Recovery Act in 2009. More than half of the approximately $1.2 billion in funding to date (including this year) was made available in the program’s first year.
While the initial results of a planned national evaluation by Abt Associates are not expected until the fall of 2015 at the earliest, individual evaluations for some of the larger grants from the first year are beginning to appear.
According to Robert Slavin, Director of the Center for Research and Reform in Education at Johns Hopkins University (who also is associated with one of the grantees, Success for All):
All four of the first cohort of scale-up programs funded by i3 (our Success for All program, Reading Recovery, Teach for America, and KIPP) have had positive first-year findings in i3 or similar evaluations recently, but this is not surprising, as they had to pass a high evidence bar to get scale-up funding in the first place. The much larger number of validation and development projects were not required to have such strong research bases, and many of these are sure to show no effects on achievement.
Slavin is not concerned about a lower success rate among the smaller projects with lower levels of evidence. According to Slavin, if even 10 percent of those grants produce solid results, that would be a success. “Failures of individual evaluations or projects are an expected, even valued part of the process of research-based reform,” he writes.
Slavin’s assessment of the larger grants was echoed by Jon Baron of the Coalition for Evidence-Based Policy, who wrote in January that “The positive results are a notable departure from the usual findings of weak or no positive effects in large randomized trials in education. If the findings hold up in longer-term study reports, they would constitute an important validation of i3’s evidence-based approach to scale up.”
According to a GAO analysis released in February, most i3 projects funded to date have fallen into one of the Department’s four priority areas: (1) supporting effective teachers and principals; (2) using high quality standards and assessments; (3) turning around low-performing schools; and (4) improving science, technology, engineering, and math (STEM) education, with the first of these receiving the most funding.
Specific examples of the kinds of programs funded, according to a July 2013 update from the Department, include:
- Developing online training, coaching, standards-aligned resources, and videos of effective teaching practices;
- Improving the teacher and principal pipeline by creating alternate pathways to certification, developing rigorous evaluation systems, and building district capacity to train and place effective teachers and principals;
- Implementing college- and career-ready curriculum and dual enrollment programs; and
- Creating or adapting digital tools and incorporating technology to support teaching and learning.
Related
- U.S. GAO: Characteristics of the Investing in Innovation Fund (February 7, 2014)
- The New York Times: Guesses and Hype Give Way to Data in Study of Education (September 2, 2013)
- Coalition for Evidence-Based Policy: Randomized Controlled Trials Commissioned by the Institute of Education Sciences Since 2002: How Many Found Positive Versus Weak or No Effects (July 2013)