
How universities can tell when their research is shaping policy

Max Crowley of Penn State’s Evidence-to-Impact Collaborative explains why higher education leaders need stronger ways to monitor how research informs policy and how those insights guide better engagement strategies.




When we talk about research impact, we often start with data. Counts. Mentions. Metrics. But data alone doesn’t tell us whether science is shaping the decisions that affect people’s lives. The real question is what institutions do with that information. How do they use evidence to strengthen engagement, guide strategy, and demonstrate the public value of research?

This is a core focus of Penn State’s Evidence-to-Impact Collaborative, where we study the “science of scientific impact.” We examine how research moves into policy, what slows that process down, and what kinds of interventions can accelerate it. Increasingly, higher education leaders are asking the same questions, especially as they look to justify investments in research, outreach, and public engagement. Our goal is to help them determine whether, and how, the research those investments support is actually being used.

Why policy uptake matters more than ever

Across every technical area — AI, health, transportation, child welfare — policymakers face decisions that carry long-term consequences. They are elected for leadership, not for niche scientific expertise. But when policy isn’t grounded in evidence, well-intentioned ideas can create unintended harm.

We have seen the opposite as well. When policymakers work directly with researchers, when they can access relevant science in a language they understand, when they see that evidence is reflected in the policies carrying their name, their appreciation for the entire scientific enterprise grows. It strengthens trust. It strengthens institutions.

This is why it is no longer enough for universities to simply hope their work finds its way into the right policy conversations. They need ways to understand whether their research is informing decisions, how it is being used, and where gaps still exist.

Most institutions know too little about how their research is used. Traditionally, policy engagement around research happens organically — a researcher and a policymaker happen to connect — leaving institutional leadership with little visibility into what is working. To build research enterprises that shape policymaking, leaders need a systematic picture of research visibility, relevance, and use.

At the Evidence-to-Impact Collaborative, we take an experimental approach to understanding how research moves into policy. With my colleague Taylor Scott and our team, we tested the Research-to-Policy Collaboration Model through randomized controlled trials. The model creates intentional, structured encounters between researchers and policymakers. Rather than expecting legislators to navigate academic papers that are often inaccessible or not written for policy audiences, it centers direct conversations around shared priorities and relevant evidence.

Across trials with members of Congress and state policy communities, we’ve seen a consistent result: when policymakers have tailored, direct interaction with science, they use it more, and they value it more.

This is where tracking tools come into play. Institutions today have more options than ever to monitor how research appears in government reports, bills, agency publications, and other policy outputs. These tools differ in scope and methodology, but collectively they help universities move beyond anecdotes toward clearer evidence of impact.

Measuring what changes and what doesn’t

One tool my team uses, alongside other approaches, is Overton. It is useful because it captures aspects of policy use that are often difficult to observe. In addition to linking policy documents to academic research, it traces citations within policy documents themselves, helping reveal how ideas move through policy systems over time. Because the underlying data focus on policy outputs, such as legislation, agency reports, and government publications, rather than media or scholarly mentions, the resulting picture is more closely aligned with how policymakers actually reference and reuse evidence.

This type of monitoring helps institutions examine where research appears, how it circulates across policy venues, and which engagement strategies are associated with observable uptake.

Higher education needs reliable ways to understand the pathways between research and policy. Monitoring in this fashion has allowed us to evaluate which engagement strategies work, which require refinement, and where science is not being used. Those gaps are just as important. When research that could improve child welfare, health, environmental resilience, or community safety isn’t showing up in policy documents, that signals a need for new outreach, new partnerships, or new support for researchers.

From our experimental work and from observing many institutions’ approaches, a few principles are clear:

  • Impact is behavioral. Policymakers use research when it is timely, relevant, and translates clearly to their priorities. Understanding their needs is as important as disseminating findings.
  • Measurement is strategic. Tracking research visibility is not about generating dashboards for the sake of dashboards. It helps institutions identify which investments in engagement are paying off and which are not.
  • Gaps tell a story. Monitoring helps highlight overlooked populations, underused evidence bases, or emerging issues where science isn’t informing decisions yet.
  • Planning requires evidence and infrastructure. To demonstrate the value of research and improve policy engagement, universities need to know what’s working, what isn’t, and how their efforts change outcomes over time.

This is especially urgent now, as the scientific enterprise faces growing public scrutiny. Transparent, credible evidence of how research supports good policy strengthens both universities and the communities they serve.

Why simply tracking impact isn’t enough

If there is one message I emphasize to higher education leaders, it is this: It’s not the data that’s important. It’s what we do with the data.

Tracking alone won’t improve policy. But tracking can reveal whether your institution’s outreach efforts matter. It can show where engagement should deepen. It can guide researchers toward communicating in ways policymakers can use. And it can help leaders make a stronger case for investing in research.

The stakes are high. When science informs policy, society benefits. And when policymakers see that science shaping successful decisions, they value research — and the institutions behind it — even more.

Opinions expressed by SmartBrief contributors are their own.

