A new initiative looks to help local leaders alleviate poverty and advance economic mobility using data to invest in evidence-based programs and evaluate innovative ideas.
A new program aims to help state and local governments make better data-based decisions about initiatives that use federal COVID recovery funds.
The Leveraging Evaluation and Evidence for Equitable Recovery, or LEVER, initiative is a two-year program that offers workshops, multi-week training sprints and evaluation incubators to train public sector employees on data-based evaluation. It was developed by J-PAL North America, a research center in the Massachusetts Institute of Technology’s Economics Department, in partnership with the nonprofit Results for America.
“There are a lot of innovative ideas that are being tried all around the country leveraging these major federal relief dollars,” said Vincent Quan, co-executive director of J-PAL. “We want to be able to pair that innovation with rigorous research and science and evaluation so we can build the roadmap around what is going to be effective at alleviating poverty and advancing economic mobility for the future.”
Data and analytics are key components of this evaluation, and although most governments want to use them, many lack the technology or technical know-how required, Quan said. “Through this programming and the specific services that we are offering through the LEVER program, we are hoping to address some of these particular constraints,” he added.
Another goal is to create a community of practice through which state and local governments can engage in peer-to-peer learning about how they tackled policy issues or used data, infrastructure and systems.
LEVER hosted a two-part series of workshops in May with 100 attendees from almost 70 jurisdictions. It focused on how to get started with evidence-based evaluation and included breakout groups for peer sharing and learning.
One of the “most robust components of the workshop is bringing people together that are all working in government, are all on board with this concept of embedding data evidence evaluation into the decision-making process and want to learn from each other,” said Jen Tolentino, Results for America’s director of local practice.
Coming in September is a 10-week training sprint that will cover ways to embed evidence and evaluation into decision-making. For instance, participants will learn how to define equity “because that is going to be a really key component of … evaluating success,” she said. Applications for the sprint are currently being accepted; spots for 15 jurisdictions are available.
The third training element is an evaluation incubator targeting governments that are farther along in the evidence-based evaluation process and ready to work on a randomized impact evaluation of a specific program. Randomized impact evaluations compare a treatment group that receives a program’s intervention with a control group that does not.
“If there are changes or differences between those two groups, you can attribute it to the program itself and not something else,” Quan said.
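The logic Quan describes can be sketched in a few lines of code. The numbers below are entirely hypothetical (a simulated population, an assumed program effect of +150 on some outcome); the point is only to show why random assignment lets the difference in group means be attributed to the program rather than to pre-existing differences.

```python
import random
import statistics

random.seed(0)

# Hypothetical pool of 1,000 eligible residents, each with a baseline
# outcome (say, monthly income) drawn from the same distribution.
population = [random.gauss(2000, 300) for _ in range(1000)]

# Random assignment: shuffle the pool, then split it evenly into a
# treatment group and a control group.
random.shuffle(population)
treatment = population[:500]
control = population[500:]

# Simulate the program raising outcomes for the treatment group only
# (an assumed +150 effect, for illustration).
treatment = [x + 150 for x in treatment]

# Because assignment was random, the two groups were statistically
# alike beforehand, so the difference in means estimates the program's
# effect rather than some other difference between the groups.
effect_estimate = statistics.mean(treatment) - statistics.mean(control)
print(round(effect_estimate, 1))
```

With random assignment, the estimate lands near the true +150 effect (up to sampling noise); without it, any pre-existing gap between the groups would be mistaken for a program effect.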
During the evaluation incubator, J-PAL offers free technical support and some funding to help public-sector participants get their evaluation going. The goal is to develop evaluations with five jurisdictions this year and another five next year.
Participants do not have to go through all training elements, however. “We designed the LEVER program to have a ‘no wrong door policy,’” Quan said. “You might have some governments that have a lot of experience doing evaluation work that are ready to participate in the evaluation incubator and proceed with a high-quality, full-scale study … and some jurisdictions that are maybe evidence-curious but have less experience that would benefit from attending some of the workshops or training streams.”
Tolentino said the impetus for LEVER came from the unprecedented influx of federal funding during the COVID pandemic and from work that Results for America did with Mathematica, a data and social science company, in 2021 to create the ARP Data and Evidence Dashboard to highlight how local governments invest American Rescue Plan funds.
They released an updated dashboard June 14 that analyzed about 8,300 projects from 200 Recovery Plan Performance Reports that 50 states, 83 counties and 67 cities submitted to the Treasury Department, per compliance requirements. The dashboard found 110 projects that can serve as models for other government leaders, including 24 evidence-based ones, 44 involving evaluations and another 44 involving the development of in-house data and evidence capacity.
An example of a project with evaluation capability is Chicago’s plan to implement a 211 helpline, a comprehensive health and human services resource where residents can get information about and referrals for services they need. “An evaluation component will be included to assess the effectiveness and impact of the 211 helpline in facilitating access to services and supporting residents in their recovery process,” according to Results for America.
Both Tolentino and Quan said that state and local governments’ attitudes toward data have shifted in the decade since their organizations started. “We’ve gotten to the maturity level in government decision-making, embedding it with evidence and data and evaluation that it’s now something people are aware of and want to start to do—and now have the money to do it,” Tolentino said.
“I think that there is also an increased acknowledgement on how investing in data infrastructure makes a big difference,” Quan said. To understand “whether or not these dollars are being spent effectively, you need to have strong data infrastructure to be able to understand what is currently going on in your community, and also to answer the questions of whether or not specific programs and policies are moving the needle on outcomes that you care about,” he added.
Stephanie Kanowitz is a freelance writer based in northern Virginia.