Friday, January 6, 2023
The crises of the past few years have brought with them a rallying cry for more evidence in government; a call to “follow the science” and “lead with data.”

And governments have responded. Public sector leaders from local governments all the way to the White House have celebrated the use of evidence in practice, and many are building the infrastructure to infuse data into day-to-day operations. For those of us who sit at the intersection of research and policy, this has been a noticeable shift. Evidence – rigorous, nuanced, and policy-relevant evidence – is not being produced in ivory towers alone but also in federal agencies, through community-led efforts, and across cities and states.

But as researchers and practitioners continue to collaborate to produce evidence on the most critical public sector challenges, it’s time to ask: What happens next? How do we move from documenting “best practices” to actually adopting the evidence? If we want evidence-based policymaking to meet its promise, we have to move beyond one-off demonstration projects to transformational use of evidence at scale.

My work with Stefano DellaVigna and Woojin Kim has begun to document the size of this challenge in cities across the US. Between 2015 and 2019, over 70 city departments conducted randomized controlled trials (RCTs) – the gold standard of evaluation – in collaboration with the Behavioral Insights Team. These projects tackled pressing challenges, from how to diversify the police to how to improve code enforcement.

In many ways, these pilot projects were perfectly poised to make evidence adoption easy. First, the trials produced evidence on what works in the relevant government department. Almost 80% of trials identified a strategy with a positive impact, and almost half had findings that were both positive and statistically significant. Second, departments were testing low-cost interventions that had already received political, communications, and legal approval as part of the pilot project itself. So the types of innovations being tested were, at the very least, feasible at scale. Third, the cohort of cities doing this work was part of What Works Cities – a groundbreaking initiative that brought together cities already committed to using data and evidence to improve public policy.

Still, when we followed up five years later to see which of these best practices had been adopted, we found that less than a third of trial results were adopted by the very department that conducted the trial, beyond the timeline of the original pilot. The most surprising finding? The strength of the evidence did not matter. Put differently, the decision of whether or not to move forward after the end of a pilot program was not driven by the evidence the pilot produced.

Despite this major investment in producing rigorous evidence, and truly committed public sector leaders, the single most important factor in predicting a department’s adoption of a new strategy was whether the tested strategy was a tweak to a pre-existing process or an entirely new process developed specifically for the pilot. Some 67% of strategies built into pre-existing processes were adopted, compared to just 12% of strategies that required a new process. If we’re not seeing evidence adoption at scale, organizational inertia may be the culprit.

The good news? Behavioral scientists know a lot about how to combat inertia. If we start thinking about evidence adoption as a series of individual micro-hurdles – the same way we think about getting people to go to the gym or to show up on election day – the challenge becomes a manageable one.

My hope for 2023 is to see more evidence on how to overcome these hurdles using what we know about how humans actually behave. How do we make sure a busy public sector leader sees and understands the science? How do we make sure scientists are asking the questions that governments are actually asking, and providing evidence on the outcomes that matter most? How do we bring the communities most affected into defining success metrics? And ultimately, how do we streamline these processes so that using the evidence becomes the default and not the exception?

We don’t have the answers yet, but addressing these bottlenecks to evidence adoption requires the same level of data-driven attention we give to creating the evidence in the first place. Without this final step, all the effort and resources devoted to evidence-based policymaking will miss their full potential.

This piece by Elizabeth Linos was first published as a guest column on greenbarrett.com, a website dedicated to state and local government.