Search Results

You are looking at 1–2 of 2 items for:

  • Author or Editor: Cagla Giray
  • Local Government

Background

It is widely recognised that policymakers use research they deem relevant, yet little is understood about how to enhance the perceived relevance of research evidence. Observing how policymakers access research online provides a pragmatic way to investigate predictors of relevance.

Aims and objectives

This study investigates a range of relevance indicators, including committee assignments, public statements, issue prevalence, and the policymaker’s name or district.

Methods

In a series of four rapid-cycle randomised controlled trials (RCTs), the present work systematically explores science communication strategies by studying indicators of perceived relevance. State legislators, state staffers, and federal staffers were emailed fact sheets on the issues of COVID (Trial 1, N = 3403), exploitation (Trial 2, N = 6846), police violence (Trial 3, N = 3488), and domestic violence (Trial 4, N = 3888).
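
The abstract does not detail the assignment mechanism. As a rough illustration, per-recipient randomisation to a personalised versus generic subject line might be sketched as follows in Python; the recipient records, conditions, and seed are hypothetical rather than the trials’ actual protocol.

    import random

    # Illustrative recipient records; the real trials drew on legislator directories.
    recipients = [
        {"email": "leg1@example.gov", "district": "HD-12"},
        {"email": "leg2@example.gov", "district": "SD-04"},
    ]

    rng = random.Random(2021)  # fixed seed so the assignment is reproducible

    def assign_subject(recipient):
        """Randomise one recipient to a personalised or generic subject line."""
        if rng.random() < 0.5:
            return f"{recipient['district']}: New fact sheet on COVID"  # personalised arm
        return "New fact sheet on COVID"  # generic control arm

    for r in recipients:
        r["subject"] = assign_subject(r)
        print(r["email"], "->", r["subject"])

A fixed seed per trial keeps each rapid-cycle assignment auditable, and the same skeleton could extend to stratifying by chamber or staffer role.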

Findings

Across these trials, personalising the subject line with the legislator’s name or district and targeting recipients based on committee assignment consistently improved engagement. Mentions of the subject matter in public statements were inconsistently associated with engagement, and state-level prevalence of the issue was largely not associated with email engagement behaviour.

Discussion and conclusions

Together, these results indicate a benefit of targeting legislators based on committee assignments and of personalising the subject line with legislator information. This work further operationalises practical indicators of personal relevance and demonstrates a novel method for testing science communication strategies among policymakers. Building enduring capacity for testing science communication will improve tactics to cut through the noise during times of political crisis.

Restricted access

Background:

There is growing interest in and recognition of the need to use scientific evidence to inform policymaking. However, existing studies on the use of research evidence (URE) have been largely qualitative, and most existing quantitative measures are underdeveloped or were tested only in regional or context-dependent settings. We are unaware of any quantitative measures of URE with national policymakers in the US.

Aims and objectives:

To explore how to measure URE quantitatively by validating the Legislative Use of Research Survey (LURS), a measure of congressional staff’s attitudes and behaviors regarding URE, and by discussing the lessons learned through administering the survey.

Methods:

A 68-item survey was administered to 80 congressional staff to measure their reported research use, value of research, interactions with researchers, general information sources, and research information sources. Confirmatory factor analyses were conducted on each of these five scales. We then trimmed the number of items based on a combination of poor factor loadings and theoretical rationale, and ran the analyses on the trimmed subscales.
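
The abstract does not name the software or the exact retention criteria. As a rough sketch, the loading-based side of the trimming step might look like the following in Python; the item names, loading values, and the 0.40 cut-off are illustrative assumptions.

    import pandas as pd

    # Standardized loadings as they might be exported from any SEM package;
    # the items and values here are invented for illustration.
    loadings = pd.DataFrame({
        "scale": ["research_use"] * 5,
        "item": ["ru1", "ru2", "ru3", "ru4", "ru5"],
        "std_loading": [0.81, 0.74, 0.38, 0.66, 0.29],
    })

    CUTOFF = 0.40  # a common rule of thumb; the paper's criteria may differ

    # Items below the cut-off are flagged for review rather than dropped
    # automatically: the trimming described above also weighed theory.
    flagged = loadings[loadings["std_loading"] < CUTOFF]
    kept = loadings[loadings["std_loading"] >= CUTOFF]

    print("Candidates to trim, pending theoretical review:")
    print(flagged[["item", "std_loading"]].to_string(index=False))
    print(f"Retained {len(kept)} of {len(loadings)} items on this scale.")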

Findings:

With the trimmed 35-item survey, model fit improved substantially for each scale over the original models, and all items had acceptable factor loadings. We also describe the unique set of challenges and lessons learned from surveying congressional staff.

Discussion and conclusions:

This work contributes to the transdisciplinary field of URE by offering a tool for studying the mechanisms that can bridge research and policy and by shedding light on best practices for measuring URE with national policymakers in the US.

Restricted access