Search Results

You are looking at 1 - 2 of 2 items for

  • Author or Editor: Shannon Guillot-Wright

Background:

To improve the use of evidence in policy and practice, many organisations and individuals seek to promote research-policy engagement activities, but little is known about what works.

Aims and objectives:

We sought to identify (a) existing research-policy engagement activities, and (b) evidence on the impacts of these activities on research and decision making.

Methods:

We conducted systematic desk-based searches for organisations active in this area (such as funders, practice organisations, and universities) and reviewed websites, strategy documents, published evaluations and relevant research. We used a stakeholder roundtable, plus a follow-up survey and interviews with a subset of the sample, to check the quality and robustness of our approach.

Findings:

We identified 1923 initiatives in 513 organisations worldwide. However, we found that only 57 organisations had publicly available evaluations, and only 6% (141/2321) of initiatives were evaluated. Most activities aim to improve research dissemination or create relationships. Existing evaluations offer an often rich and nuanced picture of evidence use in particular settings (such as local government) or sectors (such as policing), or by particular providers (such as learned societies), but they are extremely scarce.

Discussion and conclusions:

Funders, research organisations, and decision-making organisations have contributed to a huge expansion in research-policy engagement initiatives. Unfortunately, these initiatives tend not to draw on existing evidence and theory, and are mostly unevaluated. The rudderless mass of activity therefore fails to provide useful lessons for those wishing to improve evidence use, leading to wasted time and resources. Future initiatives should draw on existing evidence about what works, seek to contribute to this evidence base, and respond to a more realistic picture of the decision-making context.

Open access

Background:

There is growing interest in and recognition of the need to use scientific evidence to inform policymaking. However, many of the existing studies on the use of research evidence (URE) have been largely qualitative, and the majority of existing quantitative measures are underdeveloped or were tested in regional or context-dependent settings. We are unaware of any quantitative measures of URE with national policymakers in the US.

Aims and objectives:

To explore how to measure URE quantitatively by validating a measure of congressional staff’s attitudes and behaviors regarding URE, the Legislative Use of Research Survey (LURS), and by discussing the lessons learned through administering the survey.

Methods:

A 68-item survey was administered to 80 congressional staff to measure their reported research use, value of research, interactions with researchers, general information sources, and research information sources. Confirmatory factor analyses were conducted on each of these five scales. We then trimmed items, based on a combination of poor factor loadings and theoretical rationale, and reran the analyses on the trimmed subscales.
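As a rough illustration of how loading-based item trimming of this kind can be carried out (a sketch, not the authors' analysis code), the Python snippet below assumes the factor_analyzer package, an invented two-factor specification, and hypothetical item names and file name; the real LURS items and five scales are not reproduced here.

import pandas as pd
from factor_analyzer import ConfirmatoryFactorAnalyzer, ModelSpecificationParser

# Hypothetical data: rows are respondents, columns are item scores.
# The file name, item names, and two-factor structure below are invented
# for illustration only.
df = pd.read_csv("survey_responses.csv")

model_dict = {
    "research_use": ["ru1", "ru2", "ru3", "ru4"],
    "value_of_research": ["vr1", "vr2", "vr3", "vr4"],
}
items = [item for scale_items in model_dict.values() for item in scale_items]

# Fit the initial confirmatory factor analysis (CFA).
spec = ModelSpecificationParser.parse_model_specification_from_dict(df[items], model_dict)
cfa = ConfirmatoryFactorAnalyzer(spec, disp=False)
cfa.fit(df[items].values)
loadings = pd.DataFrame(cfa.loadings_, index=items, columns=list(model_dict))

# Flag items whose loading on their own factor falls below an arbitrary
# illustrative threshold; in practice, theoretical rationale would also
# guide which items to drop.
THRESHOLD = 0.40
kept = {scale: [item for item in scale_items
                if abs(loadings.loc[item, scale]) >= THRESHOLD]
        for scale, scale_items in model_dict.items()}

# Refit the CFA on the trimmed item set.
trimmed_items = [item for scale_items in kept.values() for item in scale_items]
trimmed_spec = ModelSpecificationParser.parse_model_specification_from_dict(df[trimmed_items], kept)
trimmed_cfa = ConfirmatoryFactorAnalyzer(trimmed_spec, disp=False)
trimmed_cfa.fit(df[trimmed_items].values)

Fit of the trimmed models could then be compared with the original models, as reported in the Findings below.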

Findings:

With the trimmed 35-item survey, model fit for each scale improved substantially over the original models, and all items had acceptable factor loadings. We also describe the unique set of challenges and lessons learned from surveying congressional staff.

Discussion and conclusions:

This work contributes to the transdisciplinary field of URE by offering a tool for studying the mechanisms that can bridge research and policy, and by shedding light on best practices for measuring URE with national policymakers in the US.

Full Access