Abstract
Background:
To improve the use of evidence in policy and practice, many organisations and individuals seek to promote research-policy engagement activities, but little is known about what works.
Aims and objectives:
We sought to identify (a) existing research-policy engagement activities, and (b) evidence on the impacts of these activities on research and decision making.
Methods:
We conducted systematic desk-based searches for organisations active in this area (such as funders, practice organisations, and universities) and reviewed websites, strategy documents, published evaluations and relevant research. We used a stakeholder roundtable, and follow-up survey and interviews, with a subset of the sample to check the quality and robustness of our approach.
Findings:
We identified 1923 initiatives in 513 organisations worldwide. However, we found that only 57 organisations had publicly available evaluations, and only 6% (141/2321) of initiatives were evaluated. Most activities aim to improve research dissemination or create relationships. Existing evaluations offer an often rich and nuanced picture of evidence use in particular settings (such as local government), sectors (such as policing), or by particular providers (such as learned societies), but are extremely scarce.
Discussion and conclusions:
Funders, research- and decision-making organisations have contributed to a huge expansion in research-policy engagement initiatives. Unfortunately, these initiatives tend not to draw on existing evidence and theory, and are mostly unevaluated. The rudderless mass of activity therefore fails to provide useful lessons for those wishing to improve evidence use, leading to wasted time and resources. Future initiatives should draw on existing evidence about what works, seek to contribute to this evidence base, and respond to a more realistic picture of the decision-making context.
Key messages
There has been a huge expansion in research-policy engagement initiatives.
These are mostly poorly described, specified, and evaluated.
The lack of strategy may lead to significant harms (for example, increased competition, wasted time and resources).
Future initiatives should draw on and build the existing evidence about what works.
Background
For over four decades, researchers have written about how evaluation and research evidence is routinely ignored by decision makers (Weiss, 1993; Ham et al, 1995; Black and Donald, 2001; Schoemaker and Smulders, 2015). The perceived failure of decision makers to use evidence has led researchers to investigate barriers and facilitators of evidence use (Innvaer et al, 2002; Orton et al, 2011; Oliver et al, 2014), and conceptualisations of the ‘evidence-policy gap’ which seek to promote ‘bridging’ interventions (Hayre and Parker, 1997; Haines et al, 2004; Dobbins et al, 2009; Boaz et al, 2011; Davis et al, 2013; Milat and Li, 2017). This discourse may have shaped how individuals and organisations understand their role within the broader research-policy system (Gough and Boaz, 2017).
Responding to this perceived failure to use evidence, many organisations and individuals have sought to promote greater engagement between researchers and policymakers. ‘Engagement’ is often taken to mean greater interaction at the interpersonal (for example, networking events) or inter-organisational (for example, secondment schemes) level. As will be immediately obvious, a great many different types of activities may fall under the broad heading of ‘engagement’: from training courses for PhD students on how to maximise impact, to major investments by funders into centres or research programmes to deliver policy- and practice-relevant research (such as the What Works Centres in the UK).
This multiplicity has arisen for a number of reasons. First, individuals and organisations actively seeking to promote evidence use through increased research-policy engagement have different perspectives on what is meant by the goal of ‘improved evidence use’, based on the well-known assumption that evidence is rarely or poorly used. As a goal, it is poorly defined and hard to measure (Gitomer and Crouse, 2019). Frequently, proxy goals, often equally vague, are adopted. For example, researchers and funders tend to talk of ‘research impact’, ‘knowledge translation’, or ‘evidence uptake’ as goals (Armstrong et al, 2014; Boswell and Smith, 2017). Decision makers in the UK tend to talk about ‘academic-policy engagement’ or ‘optimising science advice’ (Government Office for Science, 2019; Stevenson, 2019). Terms also differ depending on sector or discipline, which further muddies the waters (Oliver, 2019; Smith et al, 2019). These terms are often conflated, confused or elided, which is a problem. As one of the reviewers of this paper commented, they ‘have distinct meanings from engagement which is often a process for knowledge translation or a process to enable research impact. More critical comparison of terms and their relationship to engagement will help differentiate the terms which are not proxies for engagement but which are enabled by engagement’. Greater clarity would enable articulation of what we are doing, why, and to what effect.
Second, each individual or organisation wishing to participate in engagement activities is constrained and incentivised in different ways through different processes (Smith and Stewart, 2017; Dunleavy and Tinkler, 2021). Researchers are incentivised primarily to seek individual ‘research impact’, often articulated in a linear narrative, whereas policymakers seeking to ‘pull’ research in may be looking to understand a policy problem by reading across different narratives and framings. This means that participants in engagement activities may not share the same aim, even when taking part in the same activity, and will likely participate in the way which most benefits their own interests.
Third, research-policy engagement activities may be selected on the basis of the familiar, rather than the effective; the ‘if you’re a hammer, problems look like nails’ effect (Africa Evidence Network, 2018; Mijumbi-Deve, 2017). For example, funders will seek to fund, whereas conveners like learned societies may seek to hold events and strategic discussions. This may work well for them, but not necessarily for the broader goal of improving evidence use. At present, relationships between organisations are not configured in ways that facilitate working towards wider, shared goals (Best and Holmes, 2010). Even where clarity around aims exists, people are most likely to reach for familiar tools and approaches, rather than identifying common aims and then utilising the most effective option available.
It is therefore understandable that there are so many approaches to promoting research-policy engagement. It is, however, a challenge for those wishing to identify the most effective approaches to achieve particular outcomes. It is also possible that individual practices (for example, secondments, training) may serve multiple goals (for example, improving decision making; improving teaching quality in universities). At present, however, few goals, approaches or practices are clearly defined and articulated, let alone linked in a theory of change. The overall effect is that phrases like ‘what works’ become demonstrably meaningless without a clear sense of what participants want to achieve and why. The pursuit of more evidence-informed policy may seem like a clear aim at first glance, but it remains a vague catch-all solution to an ill-defined problem.
Ultimately, without more information about the effects of different approaches to research-policy engagement, it is likely that activities will have limited impact. Worse, they risk undermining aspects of the broader system (such as capacity and goodwill to engage) elsewhere. Thus, it is important to answer two main questions:
What research-policy engagement activities are being used with the goal of improving evidence use?
What is known about the impacts of these activities?
Methods
To explore these questions, we undertook a large-scale mapping exercise. There are many ways of categorising these kinds of activities (see, for example, Hoppe, 2009; Michie et al, 2011; Langer et al, 2016). We have taken the approach of identifying organisations, initiatives and activities as the units of analysis. For example, the UK Parliament and the Economic and Social Research Council (ESRC) are both organisations which fund an initiative called the Parliamentary Office for Science and Technology (POST). POST in turn carries out a number of activities (for example, it runs networking events and publishes evidence syntheses (POST at 30, no date)). We included organisations in our dataset if there was evidence from their websites and associated documentation that they were now, or had ever been, actively engaged in the promotion of academic-policy engagement activities, with a particular emphasis on extracting insights for the UK.
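To make these units of analysis concrete, the following is a minimal sketch of the organisation-initiative-activity hierarchy using the POST example. It is purely illustrative: the class and field names are our own, not the study's actual data schema.

```python
from dataclasses import dataclass, field

# Illustrative only: one way to represent the paper's three units of
# analysis (organisation -> initiative -> activity); not the study's schema.

@dataclass
class Activity:
    description: str          # e.g. "runs networking events"
    evaluated: bool = False   # is a public evaluation available?

@dataclass
class Initiative:
    name: str
    funders: list[str]        # the organisations funding this initiative
    activities: list[Activity] = field(default_factory=list)

# POST: one initiative, two funding organisations, several activities.
post = Initiative(
    name="Parliamentary Office for Science and Technology (POST)",
    funders=["UK Parliament", "Economic and Social Research Council (ESRC)"],
    activities=[
        Activity("runs networking events"),
        Activity("publishes evidence syntheses"),
    ],
)
print(f"{post.name}: {len(post.activities)} activities")
```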
While it would be fascinating to explore the broad array of actors involved in academic-policy engagement worldwide, we recognise that mapping this picture globally would be an impossible, ever-changing and expanding task. Therefore, we decided to attempt a comprehensive mapping of UK organisations, with purposive samples from overseas. The overseas samples were selected in order to provide the broadest possible picture of the type of activities ongoing in this academic-policy engagement space to inform our iterative searches, through being very high profile or connected with our UK organisations. We have assumed that learning from UK universities and learned societies will be relevant to other countries with developed research-policy systems.
Thus, to identify relevant activities, we conducted systematic desk-based searches for eight types of organisation in the UK (research funders, learned societies, universities, intermediaries, policy organisations and bodies, practice organisations and bodies, think tanks and independent research organisations, and non-profits and for-profits/consultancies), and five overseas (research funders, universities, learned societies, intermediaries and policy organisations). After an initial systematic search from December 2019 to September 2020 (with results summarised in Hopkins et al, 2021), we surveyed a subsample of these stakeholders to ensure (a) we had identified as many relevant organisations as possible, and (b) we had accurately collected data on activities. This led to a further 162 organisations being added to our dataset by December 2020.
Within each organisation or initiative, we identified specific activities used to promote engagement. For example, a research funder could directly fund research relevant to policy and/or practice; could support fellowships and secondments for academics to enter policy organisations; and could host policy-academic networking events. For each identified activity, we collected data on who it was aimed at, the amount invested, and the key practices it sought to employ. We shared an initial set of activities at a workshop in February 2020 with relevant funders, policymakers, researchers, and intermediaries. We used this workshop to identify an analytical strategy to synthesise and explore these data. Using their input, we refined our analytical approach and identified nine research-policy engagement practices: (1) disseminating and communicating research; (2) formal institutional requests for evidence; (3) facilitating access to research; (4) building decision-maker skills; (5) building researcher skills; (6) building professional partnerships; (7) strategic leadership; (8) rewarding impact; and (9) creating infrastructure and posts. Coding was undertaken by two researchers (AB and KO). Full details of our methods are available (Hopkins et al, 2021).
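In effect, this analysis codes every activity against one of the nine practices and then tallies totals and evaluated counts per practice (the basis of Tables 1 and 2 below). The sketch that follows shows such a tally under our own assumptions; the record format and counting logic are our illustration, not the authors' actual pipeline.

```python
from collections import Counter

# The nine engagement practices used as the coding frame (from the paper).
PRACTICES = [
    "disseminating and communicating research",
    "formal institutional requests for evidence",
    "facilitating access to research",
    "building decision-maker skills",
    "building researcher skills",
    "building professional partnerships",
    "strategic leadership",
    "rewarding impact",
    "creating infrastructure and posts",
]

# Hypothetical coded records: (practice, was_evaluated), one per activity.
coded_activities = [
    ("disseminating and communicating research", False),
    ("building professional partnerships", True),
    ("strategic leadership", False),
]

totals = Counter(practice for practice, _ in coded_activities)
evaluated = Counter(p for p, was_eval in coded_activities if was_eval)

# Report evaluated/total per practice, as in Table 1.
for practice in PRACTICES:
    n, e = totals[practice], evaluated[practice]
    if n:
        print(f"{practice}: {e}/{n} evaluated ({100 * e / n:.0f}%)")
```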
Thus, we collected information on:
which organisations and initiatives were actively promoting research-policy engagement (who; where; when; at what cost; funded by whom);
how (what specific activities, and what types of practices they were engaged in), and to what effect (whether there was any evaluation of these activities, or other research indicating impact of these activities).
Results
Overall, we identified 513 organisations globally, across more than 40 countries, which have been or are currently promoting research-policy engagement. The majority were university-based (including university teams, networks and multi-university research centres), but the sample also included governmental departments and policy agencies, learned societies and professional bodies, and intermediary organisations (such as What Works Centres and advocacy charities). We also found businesses, primarily publishers and database owners (see Figure 1).

Figure 1: Types of organisations which host research-policy engagement initiatives
We identified 1923 activities carried out by these organisations across multiple policy and practice areas. Consistent with the literature, the majority of organisations worked in public policy (n = 129) and health (n = 63 in public, environmental and clinical health; n = 20 in health and social care). Although it was not always possible to attribute a single policy area, we identified a wide range of policy areas being targeted (see Figure 2).

Figure 2: Which policy areas were targeted by these initiatives and organisations?
Although the very oldest organisations in our dataset began over 500 years ago, by far the majority of research-policy engagement activities themselves date from 1945 onwards, with a large increase in activity from 2010 onwards. We conducted an analysis of the primary practice that initiatives were using, summarised in Table 1.
Table 1: What types of activities were carried out? (NB: categories are overlapping)
Practice | Number of organisations | Number of individual activities (e.g. networking event, website) | How many of these activities were evaluated? |
---|---|---|---|
1. Disseminating and communicating research | 404 | 503 | 26 (6%) |
2. Formal requests for evidence | 158 | 174 | 2 (1%) |
3. Facilitating access to research | 256 | 293 | 23 (8%) |
4. Building decision-maker skills | 177 | 252 | 28 (11%) |
5. Building researcher skills | 167 | 253 | 9 (4%) |
6. Building professional partnerships | 258 | 286 | 28 (8%) |
7. Strategic leadership | 238 | 257 | 4 (2%) |
8. Rewarding impact | 54 | 58 | 0 |
9. Creating infrastructure and posts | 211 | 245 | 21 (9%) |
Total | 513 organisations | 1923 individual activities | 141 (6%) |
By far the majority of activities identified (see Table 1) fell into the first category of disseminating and communicating research: practice (1), with investment increasing since the late 1990s. For many, this has meant attempting to increase the impact of one piece of research, or pulling in evidence in direct response to a policy or practice need. Examples include the writing and dissemination of policy briefs, often based on evidence syntheses (see, for example, Partners for Evidence-driven Rapid Learning in Social Systems (PERLSS, 2018)), to increase research appeal and accessibility. Existing evaluations suggest that these types of approaches to improving evidence use do little to address practical, cultural or institutional barriers to engagement (Langer et al, 2016), and that although communication and dissemination products and events (for example, newsletters and conferences) are valued by participants, they can demonstrate little impact on policy or practice.
The issuing of and response to formal evidence requests: practice (2) is one of the oldest ways in which governments seek to pull in evidence and expertise, usually to address a particular need, often using formal institutional mechanisms such as science advisory committees, or requests for evidence issued through legislatures and consultations (Beswick and Geddes, 2020). The publication of evidence priorities, such as the UK’s Areas of Research Interest, also operates as a static but public request for evidence (Nurse, 2015; Teers et al, 2018). Evaluations suggest that greater support with thinking through the purpose and goal of formal evidence requests and associated activities would benefit governments by providing a more diverse and appropriate evidence base (House of Commons Liaison Committee, 2019). ‘Push’ mechanisms that aim to inform government advice or consultation may be hampered by the low academic and public visibility of Scientific Advisory Committees and Expert Committees (Cabinet Office, 2017).
Deliberate attempts to facilitate access to evidence: practice (3) have expanded over the past two decades, including rapid response synthesis services (Mijumbi-Deve et al, 2017) and supported commissioning processes (Gough et al, 2018). Some are funded directly by government, others by research funders. We found several long-term funder-led initiatives to promote partnership working and identify policy-relevant questions (Living with Environmental Change, nd; Bednarek et al, 2016). Government- and researcher-led activities include the co-creation of research (for example, the What Works Trials Advice Panel (https://www.gov.uk/government/publications/cross-government-trial-advice-panel-role-and-membership), which has worked with the UK government on evaluation projects across 18 departments and public bodies), and the development of tools to support commissioning and help government departments set up research and evaluation projects (Sax Institute, 2019). Internally-conducted evaluations suggest that initiatives supporting government to commission and co-develop research may have more potential to deliver policy-responsive research over both short and longer-term timescales (Mays, 2018; Teers et al, 2018).
Organisations building policymaker (research) skills: practice (4) focused on training or capacity-building (for example, the Canadian Science Policy Centre, the US Coalition for Evidence-Based Policy and the UK Alliance for Useful Evidence). Training focused on understanding and using evidence is often provided by policy intermediaries (Morgan, 2020) or think tanks (Haddon and Sasse, 2018). University-based training tends to focus on developing the expertise of policy professionals in specific areas such as security and communications (see the KCL Centre for Strategic Communications (KCSC) Executive Education programme). Evaluations found such training was often too academic, with skills and knowledge at too detailed a level to apply easily (see, for example, Page, 2021). Some evaluations report increased capacity for evidence use or generation, but the impact of this on practice was often unclear. Policy fellowship programmes, found in 11 universities in the UK and US, aim to formalise or increase the exchange of people and ideas between policy organisations and campuses. More commonly, research funders, policy organisations and universities aim to build researcher skills: practice (5), that is, to offer exposure to and knowledge about how policy works. These often take the form of secondment and internship schemes (Tyler, 2017; Morgan, 2020; Wye et al, 2020); in-house training provided by university policy teams; mentoring and coaching opportunities (https://www.r4impact.org); and advocacy training (at the US Center for Child Health Policy and Advocacy). Training and professional development focused on engagement is an expanding area, but almost no evaluations exist of what works for whom, in which settings and contexts, other than to say that there is considerable variation in the needs of researchers and policy partners (Langer et al, 2016).
Building professional partnerships: practice (6) appears to be an increasingly popular approach, primarily focusing on the creation of policy/practice-research collaborations, usually of limited lifespan (Anderson et al, 2019), and/or networks. Factors which appear to make these successful include linking related collaborations through funding or networking schemes, such as the William T Grant Foundation’s Research-Practice Partnership programme, supported by a national knowledge-sharing network (Tseng et al, 2018). In the US, these partnerships are funded primarily by philanthropic donors, and in Africa through development budgets. The UK research councils and government have funded multiple such partnerships, primarily in health (such as the Collaborations for Leadership in Applied Health Research and Care, which bring together clinicians and researchers, or the NIHR-funded Policy Research Units (Policy Innovation Research Unit, no date; Policy Innovation Research Unit, 2014)), but also in local government (such as Leading Places (O’Brien, 2018) or Engaging with Scottish Local Authorities (Hardill and Baines, 2012)), and in some issue-oriented collaborations, such as sustainability (for example, Living With Environmental Change (Warburton, 2011)) and policing (Hunter et al, 2017; Teers et al, 2018; Page, 2019; May et al, 2020). Collaborative research initiatives have been more robustly evaluated than most (Delaney et al, 2010; Hunter et al, 2017; Kislov et al, 2018; Interface Associates UK Limited, 2020). These evaluations tend to show improved cross-sectoral collaboration in research, but rarely demonstrate enhanced evidence use or improved outcomes for service users (Ferlie et al, 2017). Initiatives that aim to build relationships over the long term through partnerships or networks may be limited by insecure or project-based funding (Allen et al, 2015). There is also an expanding literature on research-practice partnerships which suggests that long-term, mutualistic, collaborative working may be central to addressing the barriers to evidence use identified in research, and to improving the ability of engagement activities to provoke shifts in organisational cultures and routines (Coburn and Penuel, 2016; Farrell et al, 2019; Baginsky et al, 2019).
In addition to research collaborations, networks and networking opportunities were (according to mainly internal evaluations) valued by participants, particularly where sustained over longer periods (Frost et al, 2012). Disciplinary examples include the specialist networks run by the British Society of Criminology (BSC), while others are sector-specific (the UN Science-Policy-Business Forum on the Environment) or organised at regional or national levels (for example, the brokerage network run by the Scottish Policy and Research Exchange, or the federally-organised US Scholars Strategy Network).
Activities focused on strategic leadership: practice (7) tended either to be examples of organisations claiming to advocate for evidence-informed decision making (for example, EVIPNet, or the Coalition for Evidence-Based Policymaking in the US), or to provide training and capacity building for individuals to develop strategic leadership skills. Organisational strategic leadership was noted for some international networks (for example, Lister et al, 2015; Lester et al, 2020) and funders (for example, ERA-Net, 2005) which were able to demonstrate convening powers around contentious issues, or to set agendas and expectations for engagement. Major academies in the UK, such as the Royal Society of Edinburgh, devote resources to pooling academic expertise and convening stakeholders with the aim of influencing global policy discussions. International associations build on the work of national academies in service of policy engagement, for example through the European Academies’ Science Advisory Council (EASAC). At university level in the UK, the establishment of over a dozen dedicated policy teams in the past ten years reflects an attempt to embed policy skillsets more strategically and provide institutional strategy for knowledge exchange (Beswick and Geddes, 2020), although in practice many may work more pragmatically to support individual researchers.
Our review of initiatives to reward and incentivise engagement: practice (8) identified over 60 prizes or rewards for impact, knowledge exchange, or ‘best use of evidence’. These are often run by journals (Evidence & Policy’s Carol Weiss Prize); policy institutes (the King’s Policy Idol Competition); universities (Nottingham Universities’ Keystone Award for non-academic members of staff); research networks (Life Sciences Research Network Wales’ Research Impact Awards); learned societies (for example, the UK Political Studies Association’s ‘Best Use of Evidence’ Award); and funding bodies (the ESRC’s ‘Impact Prize’) (ESRC, 2017). For funders, recent interest in professional development and research leadership may signal this as an area for future investment (Flinders and Anderson, 2019). Prizes may be perceived as attempting to incentivise academic-policy engagement, although as far as we know none have a clearly articulated theory of change or strategy in the public domain, and none have evaluations in the public domain.
Finally, some activities seek to create and embed infrastructure: practice (9) at a more systemic level. Examples include the UK’s Areas of Research Interest (ARIs) Fellowships, which represent the first strategic attempt to align the work of public research councils with departmental priorities (Government Office for Science, 2019). More frequently, research-policy engagement activities have sought to embed infrastructure by creating longer-term relationships to ensure the (financial) sustainability of a project beyond its funded lifespan. However, most of the examples we identified showed that these outcomes depended on links between individual researchers and policymakers rather than on greater systemic connectivity (Knight and Lightowler, 2010; Allen et al, 2015). Indeed, any impacts at this level appear to result from individuals going beyond their remits to create and sustain relationships, for example by sharing resources such as staff and knowledge, which both builds and depends on trust and goodwill (Kenny et al, 2018). Most evaluations discuss job creation rather than systems-level indicators; however, there is clearly value in identifying where activities may complement one another, and the different roles organisations may play, in order to avoid competition for resources such as policymaker time.
Overall, we identified a total of 57 evaluations, of varied quality. Most of these evaluations focused on one activity within one organisation, although some activities were evaluated more than once, and some organisations evaluated more than one activity per evaluation. We estimate that between 3% and 13% of all activities were evaluated (57/2321 activities, to a maximum of 57/513 organisations) (see Table 2 and, for more details, Hopkins et al, 2021).
Table 2: Summary of evaluations of research-policy engagement initiatives
Practice | Number of evaluated initiatives (%) | Overall state of evidence for this practice |
---|---|---|
1. Disseminating and communicating research | 26 (6%) | Mostly internal evaluations suggest that organisations’ stakeholders value their research outputs; limited evidence of effect on policy or practice |
2. Formal requests for evidence | 2 (1%) | Evaluations suggest that greater support with thinking through the purpose and goal of engagement activities would benefit governments by providing a more diverse and appropriate evidence base |
3. Facilitating access to research | 23 (8%) | Collaborative research structures tend to benefit research and researchers more than partners by leading to more research; also more effective at dissemination than decision-influencing. Demand-side initiatives common |
4. Building decision-maker skills | 28 (11%) | Activities often focused on didactic, academically-heavy content, providing research skills beyond the needs of the user. Some skills development was valued by participants, but they often struggled to apply these skills in practice. Activities which focused on evidence use tended to rest on naïve interpretations of the policy process, very much the ‘do policy better’ model. Exceptions include the N8, which worked with police forces to determine their training needs (in this case, data skills) and designed a course accordingly |
5. Building researcher skills | 9 (4%) | Activities tend to focus on exposing researchers to policy or practice contexts, with individual but not organisational benefit (particularly for the home organisation). Host organisations tended to benefit temporarily from increased resources |
6. Building professional partnerships | 28 (8%) | |
7. Strategic leadership | 4 (2%) | Existing evaluations are rare. Evidence suggests that strategic leadership activities promote individual rather than organisational benefits. Where organisations claim strategic leadership roles, this is usually un-evidenced |
8. Rewarding impact | 0 | |
9. Creating infrastructure and posts | 21 (9%) | |
Some were independent and robust, but most of these evaluations took the form of annual or ‘end of project’ summaries, which described only selected aspects of the research-policy engagement activities carried out by that organisation (for example, of a Fellowship programme (Goodman Research Group, 2019) but not of the organisation’s convening or strategic advocacy activities). These should therefore be taken as indications of the state of knowledge about practices, not as assessments of whole organisations. Most evaluations were recent, appeared in the grey literature, focused on individual projects, and were often conducted by those who worked on the original research project.
Discussion
Key findings
There has been a huge expansion in research-policy engagement initiatives, probably a natural result of the longstanding (academic) focus on the perceived failure of policymakers to use evidence well (Weiss, 1979; 1998; Lomas and Brown, 2009). Overall, we found 1923 individual activities conducted by 513 organisations. As far as we know, this is the first attempt to systematically identify all research-policy engagement activities within the UK, with significant overseas coverage. This allows us to speak with confidence about the state of the evidence base, which we find to be scarce and/or hard to access. Our search methods were desk-based, relying on websites, a participant survey and a stakeholder roundtable, and we committed significant resources to accessing and collating this information in a way which we think has not been done previously. Nevertheless, we would not claim to have exhaustively identified every paper or evaluation relevant to this topic. There is likely to be information produced for internal purposes that we were unable to access, and we recognise that there are certainly a significant number of initiatives in non-UK countries which we have not collated. More research on how initiatives vary in structure, funding, activity, and efficacy, and how this relates to national context, is certainly required.
Most activity, and probably most money, is still spent on disseminating and communicating research, which, as a sole strategy, has long been known to be ineffective at producing policy and practice change, or societal impact (Knott and Wildavsky, 1980; Contandriopoulos et al, 2010). There has also recently been a focus on initiatives which seek to initiate or support interpersonal or inter-organisational relations (>700 identified). These operate on the assumption that creating direct interpersonal links leads to greater research use (Secret et al, 2011; Gainforth et al, 2014; Topp et al, 2018). Recent research suggests that interpersonal links are indeed important in the production and use of relevant evidence (Sin, 2008; Ward, 2017), but need to be underpinned by long-term strategic and institutional support (Coburn and Penuel, 2016; Tseng et al, 2018); however, few of the relational initiatives we found were designed or operated in this way. We found few initiatives which could be described as attempting to operate at this higher systemic level. Most were in practice linear or relational activities, undertaken with an awareness of, for example, sets of stakeholders or organisational constraints. The literature on evidence use suggests, however, that it is precisely this type of long-term, strategic working, which attempts to bring together organisational goals and ways of working, that is most likely to promote evidence use effectively (Holmes et al, 2017). The emphasis on the knowledge-policy gap has led to a proliferation of activities, but few have been evaluated robustly or in a way likely to help researchers or policymakers make effective decisions (Dwan et al, 2015).
We diagnose a dual design failure: unclear aims, and a lack of appreciation of the policy and practice contexts within which initiatives attempt to operate. Most initiatives appear to address the assumption that decision makers do not listen to evidence, which is still widely held despite increasing evidence to the contrary (Elliott and Popay, 2000; Fischhoff and Scheufele, 2013). These initiatives are targeting a problem which may not exist (or at least not at the scale assumed by researchers). This means that when designing initiatives, many providers may have an inappropriate or poorly-articulated goal. We find this to be the case, with few having pre-specified outcomes which would indicate success or failure. Most initiatives refer to vague goals such as ‘research impact’ or ‘policy change’. It may not be reasonable to expect a newsletter, say, to demonstrate this kind of outcome; it may instead contribute to interim goals like greater awareness of activities, interest on the part of decision makers, willingness to converse, or the initiation of relationships. This lack of specificity hinders quality evaluation, and as we have seen, very few of these initiatives had evaluations available in the public domain. The fact that few evaluations reported on potential impacts on policy and practice should not be taken as evidence of ineffectiveness. Initiatives may be having impacts on all kinds of outcomes and, as previous reports have noted, it is important that these impacts are captured (Walker et al, 2019). But because the existing evaluation evidence is so scarce, and there is so little disentangling of the mechanisms, assumptions, goals and outcomes associated with different activities, it is not possible to say which types of research-policy engagement activities will lead to which types of impact.
That does not mean, however, that it would not be possible to design evidence-informed research-policy engagement initiatives (Boaz et al, 2016; 2019). An obvious next step would be to synthesise the evidence more fully, enabling context-specific lessons to be teased out. For example, syntheses could usefully be undertaken on particular settings, such as local government, particularly on the supply versus demand of evidence and on the civic university (Grace, 2006; Mawson, 2007; Barker, 2010; Curran, 2011; Hardill and Baines, 2012; Allen et al, 2015; Wilson and Lilly, 2016; O’Brien, 2018; Rushmer and Shucksmith, 2018; UCL/LGA, 2020). Syntheses could also focus on initiatives provided by types of organisations, such as learned societies or funders (for example, Grace, 2006; Health Economics Research Group, 2008; Hardill et al, 2012; France et al, 2016; Flinders and Anderson, 2019); on specific sectors (for example, policing; see Holden, 2016); or on types of activities (Fellowship schemes, for example). There is clearly scope to further build the evidence base to capture the learning from the immense amount of activity ongoing, including building rigorous evaluation into proposals for engagement activities. Submissions to research assessment exercises (such as the Research Excellence Framework (REF) in the UK) might provide detailed accounts of how researchers perceive their activities and efficacy in policy engagement for secondary analysis; or primary research could be undertaken.
It is understandable that there has been an increase in busyness as opposed to effective action. Much of the existing advice aimed at researchers exhorts them to ‘increase your own impact or the impact of your own research’ (Oliver and Cairney, 2019). If everyone took this advice, it would lead to increasing noise and busyness, with unclear effects on decision making or outcomes. Other advice is aimed at decision makers, proposing greater use of evidence synthesis (Brownson et al, 2018) or of intermediaries (Davies et al, 2017). As has been widely discussed in the literature, however, much of this advice, and indeed activity, does not seem to be based on a good understanding of the policy world (Cairney and Oliver, 2018), or of the ways in which evidence and knowledge can inform and interact with decision making (Tseng, 2008; Grundmann, 2017). These initiatives would also be more likely to be effective if they responded to a more realistic picture of the decision-making context they sought to influence (Wellstead et al, 2018).
This would help prevent significant wasted investment (time, money and resources) in ineffective activities. It is hard to get a clear financial picture of how much has been spent on these types of activities, but it seems reasonable to assume that a very significant sum has been spent in the last decade alone on the attempt to achieve ‘impact’, what Knott and Wildavsky (1980) call the ‘tangible benefits of information’. There are likely to be multiple drivers and motivations for this increase in activity, such as research assessment exercises and increased expectation on the part of funders to see a return on investment. It would be useful to see whether the policy environment created by these drivers has led to an increase in useful engagement activity. Unfortunately, we did not find a single robust evaluation which convincingly demonstrated impacts of evidence use on social outcomes, or even evidence use by decision makers. Rather, evaluations demonstrated the impossibility of attributing distal outcomes to research investments (see, for example, Kislov et al, 2018). Evaluations of interventions tend to report on more proximal outcomes, such as influence on ideas held by decision makers or attitudes to evidence use in general, but very few track through to those later stages of implementation (Hanney et al, 2020). This matters, particularly where interventions claim impact on, or attempt to address, these distal goals (even if, in practice, more proximal ones are being targeted).
Less concretely, but we believe equally importantly, the increased number of initiatives in this space is likely to lead to competition between them (for example, for the scarce resource of policymaker attention (Cairney, 2018)). Without good evidence to help them choose where to engage, policymakers risk opportunity costs (for example, engaging with the less effective initiative). Poor experiences of engagement can reduce goodwill on all sides, harming not just the initiative in question, but the system more broadly: the next academic to knock on the door may receive a less favourable response from an unhappy policymaker (Cairney and Oliver, 2020). It is in all our interests to support effective engagement.
Increased competition may also exacerbate existing inequalities (for example, by engaging with richer sets of researchers, with all the structural inequities which that implies (Oliver et al, 2019)). Competition between research-led engagement initiatives to be the ‘go-to’ voice for academic policy engagement in particular may favour better-resourced or more ‘acceptable’ academic voices. It could also limit opportunities for conversation or deliberation about what shared goals there may be, within this crowded space. Few initiatives make their values explicit; indeed, many prefer to see research production as a morally neutral activity (Sarewitz, 2018). With the focus on delivery rather than reflection and learning, there is a real risk that important questions about the ethics and values underpinning existing activities go unchallenged and unexamined.
Conclusion
Overall, the picture is of a vast and increasing mass of rudderless activity, which is busy rather than effective. Without clear goals, and without strategic coordination, it is impossible to pick out any signal from the noise. Worse, without collecting and building on existing evidence about these types of interventions, or on a well-founded understanding of the decision-making context, there are almost certainly harms being inflicted. Harms are likely to include wasted time and resources, reduced goodwill and interaction, and increased inequalities in terms of participation in evidence production and use. We also believe we have a moral responsibility to understand and debate the moral and ethical values underpinning these activities, and to somehow create space for a shared conversation about whom the research-policy system does, and should, serve.
What could we do differently? We see three main ways forward. First, for those wishing to design and implement new initiatives and interventions, we suggest engaging with the existing literature on (a) what policy is and how it works, (b) the ethics and values of engagement, and (c) evaluations and interventions of academic-policy engagement practices, to help clarify what you are doing, why, and how it can be informed by existing studies and perspectives. There is now a wealth of empirical evidence and commentary about how policymaking works in different contexts (Boaz and Gough, 2012; Ferlie et al, 2012; Smith, 2014; Cairney, 2016; Gough et al, 2018). The evidence about research-policy engagement initiatives is more limited, but pockets of rich and nuanced evidence exist and should be used. There is also a rich multidisciplinary scholarship on evidence production and use, which should be better shared and used (Oliver and Boaz, 2019; Smith et al, 2019). This step is crucial, since too many initiatives begin with unclear aims, not discussed properly among pushers and pullers. Researchers would not get away with such an undeveloped approach to funded research, so the same standard should apply to funded engagement.
Second, if you have clarified your aims, you can use existing evidence to plan and execute engagement effectively. For example, you should establish the extent to which a new initiative complements or competes with projects in the current landscape. If competition between initiatives does indeed cause the harms we outline above, could discussions about shared goals and coordinated activities offer a way forward? Funders in particular could incentivise collaborative work and evaluation across these initiatives. Further empirical and conceptual work is required, taking a systems lens to articulate shared goals and activities to prevent working at cross purposes, select activities, and invest in them strategically. This would be more democratic, in terms of enabling a shared conversation about what different stakeholders wish to put into, and get out of, our common research-policy engagement activities (including the generation of research). It would also enable more strategic activity and investment, rather than the current ‘throw it at the wall and see what sticks’ model, which, as this paper demonstrates, is wasteful, likely to be ineffective, and may cause harm through opportunity costs and the reduction of the goodwill on which this entire endeavour patently depends. You would also be able to plan your engagement activities appropriately: to identify shared goals with relevant stakeholders, with defined short- and long-term outcomes to be evaluated. There is a significant and helpful literature on how to do this well (see, for example, the work of the National Coordinating Centre for Public Engagement).
Third, if you complete the first two steps, you can take seriously the existing evidence on ‘what works’ in relation to comparable initiatives, use it to produce a clear plan of action that can be evaluated, and establish how an evaluation of this work will aid comparable projects. All those with an interest in more effective and ethical research-policy engagement should actively seek to contribute to the evidence base. Most evaluations were conducted by the researchers involved in delivery, and were mainly reports for funders. As such, they had no incentive to connect with the broader field of study on evidence production and use, nor to draw out broader lessons. Yet it is possible to specify clear research gaps in this area. We need to better understand the goals and aims of different strategies and whether they achieve them. We can do this by asking clearer questions to guide research and evaluation of these types of activities, such as:
What are the goals and outcomes of different evidence-use activities employed by different actors (including funders, decision makers, or researchers) within the research system?
What are the specific goals of these different activities, and how do they work in practice? Is it possible for those doing these activities to articulate and share their theories of change?
Do some activities deliver particular goals or outcomes more effectively than others?
If so, what types of activities (generational; mechanism; deliverer; context) should be employed to deliver what types of outcomes?
Who is best placed to deliver which kinds of activities, and what relationships with others are necessary to support them? (For example, university policy teams may be well placed to support researchers, but not well placed to lead or shape engagement with government).
How can this kind of strategic planning and implementation for effective engagement best be undertaken and supported?
There are some signs that the wider agenda we set out is beginning to be recognised by governments and funders, in their recognition of (aspects of) the research-policy system (Leyser, 2020; Fransman et al, 2018; Government Office for Science, 2019; UKRI, 2019). Finding ways to connect these conversations, and to act on our shared learning, will be key to establishing a research system which works for us all.
Research ethics statement
The authors of this paper have declared that research ethics approval was not required since the paper does not present or draw directly on data/findings from empirical research.
Funding
This work was supported by ESRC and Research England.
Contributor statement
All authors approved and commented on drafts of the original manuscript. KO had the idea for the paper; ANH, KO and SGW collated the majority of the empirical data; AB, KO and ANH developed the analysis approach and applied it to the data; and all authors discussed the emerging conclusions and interpretation.
Acknowledgement
We would like to thank Simon Innvaer, who originally suggested looking at institutions, and the attendees of our roundtables and interviews for all their generously-given insights and time.
Conflict of interest
The authors declare that there is no conflict of interest.
References
AEN (Africa Evidence Network) (2018) Annual member survey, https://africaevidencenetwork.org/en/publications/.
Allen, T., Grace, C. and Martin, S. (2015) Making the most of research, final report of the ESRC Local Government Knowledge Navigator, https://solace.org.uk/wp-content/uploads/2019/05/SOLACE-Reports-and-Guides-Solace-Making-the-Most-of-Research.pdf.
Anderson, K., Podkul, T., Singleton, C. and D’Angelo, C. (2019) Summative evaluation of the research + practice collaboratory: final report, https://www.sri.com/publication/summative-evaluation-of-the-research-practice-collaboratory-final-report/.
Armstrong, R., Pettman, T.L. and Waters, E. (2014) Shifting sands: from descriptions to solutions, Public Health, 128(6): 525–32, doi: 10.1016/j.puhe.2014.03.013.
Baginsky, M., Manthorpe, J. and Hickman, B. (2019) Social work teaching partnerships: a discussion paper, Social Work Education, 38(8): 968–82, doi: 10.1080/02615479.2019.1616685.
Barker, A. (2010) Co-production: a series of commissioned reports (LARCI).
Bednarek, A.T., Shouse, B., Hudson, C.G. and Goldburg, R. (2016) Science-policy intermediaries from a practitioner’s perspective: the lenfest ocean program experience, Science and Public Policy, 43(2): 291–300, doi: 10.1093/scipol/scv008.
Best, A. and Holmes, B. (2010) Systems thinking, knowledge and action: towards better models and methods, Evidence & Policy, 6(2): 145–59, doi: 10.1332/174426410X502284.
Beswick, D. and Geddes, M. (2020) Evaluating academic engagement with UK legislatures, http://www.pol.ed.ac.uk/__data/assets/pdf_file/0008/268496/Evaluating_academic_engagement_with_UK_legislatures_Web.pdf.
Black, N. and Donald, A. (2001) Evidence based policy: proceed with care, BMJ, 323: 275, doi: 10.1136/bmj.323.7307.275.
Boaz, A. et al. (2016) The future of Evidence & Policy: moving forward from Valencia, Evidence & Policy, 12(1): 3–8, doi: 10.1332/174426416x14531221882404.
Boaz, A., Davies, H., Fraser, A. and Nutley, S. (2019) What Works Now? Evidence-Based Policy and Practice, Bristol: Policy Press, https://policy.bristoluniversitypress.co.uk/what-works-now.
Boaz, A., Baeza, J. and Fraser, A. (2011) Effective implementation of research into practice: an overview of systematic reviews of the health literature, BMC Research Notes, 4(4): 212, doi: 10.1186/1756-0500-4-212.
Boaz, A. and Gough, D. (2012) Complexity and clarity in evidence use, Evidence & Policy, 8(1): 3–5, doi: 10.1332/174426412X620092.
Boswell, C. and Smith, K. (2017) Rethinking policy ‘impact’: four models of research-policy relations, Palgrave Communications, 3(1): 44, doi: 10.1057/s41599-017-0042-z.
Brownson, R.C., Eyler, A.A., Harris, J.K., Moore, J.B. and Tabak, R.G. (2018) Getting the word out: new approaches for disseminating public health science, Journal of Public Health Management and Practice, 24(2): 102–111, doi: 10.1097/PHH.0000000000000673.
Cabinet Office (2017) Functional review of bodies providing expert advice to government: a review by the Cabinet Office public bodies reform team, https://www.gov.uk/government/publications/public-bodies-2016.
Cairney, P. (2016) The Politics of Evidence-based Policymaking, 1st edn, London: Palgrave Macmillan, doi: 10.1057/978-1-137-51781-4.
Cairney, P. (2018) Three habits of successful policy entrepreneurs, Policy & Politics, 46(2): 199–215, doi: 10.1332/030557318X15230056771696.
Cairney, P. and Oliver, K. (2018) How should academics engage in policymaking to achieve impact? Political Studies Review, 18(2): 228-44, doi: 10.1177/1478929918807714.
Cairney, P. and Oliver, K. (2020) How should academics engage in policymaking to achieve impact?, Political Studies Review, 18(2): 228-44.
Coburn, C.E. and Penuel, W.R. (2016) Research–practice partnerships in education: outcomes, dynamics, and open questions, Educational Researcher, 45(1): 48–54, doi: 10.3102/0013189X16631750.
Contandriopoulos, D., Lemire, M., Denis, J.L. and Tremblay, E. (2010) Knowledge exchange processes in organizations and policy arenas: a narrative systematic review of the literature, Milbank Quarterly, 88(4): 444–83, doi: 10.1111/j.1468-0009.2010.00608.x.
Curran, G. (2011) Modernising climate policy in Australia: climate narratives, and the undoing of a Prime Minister, Environment and Planning C: Government and Policy, 29(6): 1004–17, doi: 10.1068/c10217.
Davies, S., Herbert, P., Wales, A., Ritchie, K., Wilson, S., Dobie, L. and Thain, A. (2017) Knowledge into action: supporting the implementation of evidence into practice in Scotland, Health Information and Libraries Journal, 34(1): 74–85, doi: 10.1111/hir.12159.
Davis, S., Gervin, D., White, G., Williams, A., Taylor, A. and McGriff, E. (2013) Bridging the gap between research, evaluation, and Evidence-based practice, Journal of Social Work Education, 49(1): 16–29, doi: 10.1080/10437797.2013.755099.
Delaney, B., Moxham, J. and Lechler, R. (2010) Academic health sciences centres: an opportunity to improve services, teaching, and research, British Journal of General Practice, 60(579): 719–20, doi: 10.3399/bjgp10X532620.
Dobbins, M., Robeson, P., Ciliska, D., Hanna, S., Cameron, R., O’Mara, L., DeCorby, K. and Mercer, S. (2009) A description of a knowledge broker role implemented as part of a randomized controlled trial evaluating three knowledge translation strategies, Implementation Science, 4(3): 23, doi: 10.1186/1748-5908-4-23.
Dunleavy, P. and Tinkler, J. (2021) Maximizing the Impacts of Academic Research, London: Bloomsbury, https://www.macmillanihe.com/page/detail/Maximizing-the-Impacts-of-Academic-Research/?K=9780230377608.
Dwan, K.M., McInnes, P. and Mazumdar, S. (2015) Measuring the success of facilitated engagement between knowledge producers and users: a validated scale, Evidence & Policy, 11(2): 239–52, doi: 10.1332/174426414X14165029835102.
Elliott, H. and Popay, J. (2000) How are policy makers using evidence? Models of research utilisation and local NHS policy making, Journal of Epidemiology and Community Health, 54(6): 461–68, doi: 10.1136/jech.54.6.461.
ERA-Net (2005) Scientific knowledge for environmental protection (SKEP): network of funding agencies, https://ec.europa.eu/research/fp7/pdf/era-net/fact_sheets/fp6/skep_en.pdf.
ESRC (Economic and Social Research Council) (2017) ESRC impact acceleration accounts high level strategic review.
Farrell, C.C., Harrison, C. and Coburn, C.E. (2019) ‘What the hell is this, and who the hell are you?’ role and identity negotiation in research-practice partnerships, AERA Open, 5(2): 233285841984959, doi: 10.1177/2332858419849595.
Ferlie, E., Crilly, T., Jashapara, A. and Peckham, A. (2012) Knowledge mobilisation in healthcare: a critical review of health sector and generic management literature, Social Science and Medicine, 74(8): 1297–304, doi: 10.1016/j.socscimed.2011.11.042.
Ferlie, E., Nicolini, D., Ledger, J., D’Andreta, D., Kravcenko, D. and de Pury, J. (2017) NHS top managers, knowledge exchange and leadership: the early development of Academic Health Science Networks – a mixed-methods study, http://www.ncbi.nlm.nih.gov/pubmed/28650596.
Fischhoff, B. and Scheufele, D.A. (2013) The science of science communication, Proceedings of the National Academy of Sciences of the United States of America, 110(Suppl 3): 14031–032, doi: 10.1073/pnas.1312080110.
Flinders, M. and Anderson, A. (2019) Fit for the Future? Researcher Development and Research Leadership in the Social Sciences: Evidence Review, Sheffield: ESRC.
France, J., Rajana, A., Goodman, R., Ram, M., Longhurst, R., Pelka, V. and Erskine, C. (2016) Evaluating the impact of the ESRC-DFID Joint Fund for Poverty Alleviation Research, final report to ESRC and DFID: RE140168ESRC, https://esrc.ukri.org/files/research/research-and-impact-evaluation/evaluating-the-impact-of-the-esrc-dfid-joint-fund-for-poverty-alleviation-research/.
Fransman, J., Hall, B., Hayman, R., Narayanan, P., Newman, K. and Tandon, R. (2018) Promoting fair and equitable research partnerships to respond to global challenges, Rethinking Research Collaborative, http://oro.open.ac.uk/57134/.
Frost, H., Geddes, R., Haw, S., Jackson, C.A., Jepson, R., Mooney, J.D. and Frank, J. (2012) Experiences of knowledge brokering for evidence-informed public health policy and practice: three years of the Scottish Collaboration for Public Health Research and Policy, Evidence & Policy, 8(3): 347–59, doi: 10.1332/174426412X654068.
Gainforth, H.L., Latimer-Cheung, A.E., Athanasopoulos, P., Moore, S. and Martin, K.A. (2014) The role of interpersonal communication in the process of knowledge mobilization within a community-based organization: a network analysis, Implementation Science, 9: 59, doi: 10.1186/1748-5908-9-59.
Gitomer, D. and Crouse, K. (2019) Studying the Use of Research Evidence: A Review of Methods, Rutgers, NJ: William T. Grant Foundation.
Goodman Research Group (2019) Science policy fellowship has durable impact, independent evaluation shows, American Association for the Advancement of Science, https://www.aaas.org/news/science-policy-fellowship-has-durable-impact-independent-evaluation-shows.
Gough, D., Boaz, A., Pearson, M. and Smith, K. (2018) Knowledge commons for evidence use as a democratic and practice good, Evidence & Policy, 14(1): 3–6, doi: 10.1332/174426418X15168109224301.
Gough, D. and Boaz, A. (2017) Applying the rational model of evidence-informed policy and practice in the real world, Evidence & Policy, 13(1): 3–6, doi: 10.1332/174426417X14843185655356.
Gough, D., Maidment, C. and Sharples, J. (2018) UK What Works Centres: Aims, Methods and Contexts, London: EPPI Centre, UCL Institute of Education, https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3731.
Government Office for Science (2019) Realising our ambition through science: a review of government science capability, https://www.gov.uk/government/publications/government-science-capability-review.
Grace, C. (2006) A Strategic and Practical Partnership: The Research Councils of the UK and Local Government. Co-Producing Knowledge to Maximise Public Value, A Report for LARCI, Cardiff: Centre for Local and Regional Government Research, Cardiff Business School.
Grundmann, R. (2017) The problem of expertise in knowledge societies, Minerva, 55(1): 25–48, doi: 10.1007/s11024-016-9308-7.
Haddon, C. and Sasse, T. (2018) How Government Can Work with Academia, London: Institute for Government.
Haines, A., Kuruvilla, S. and Borchert, M. (2004) Bridging the implementation gap between knowledge and action for health, Bulletin of the World Health Organization, 82(10): 724–31.
Ham, C., Hunter, D.J. and Robinson, R. (1995) Evidence based policymaking, BMJ, 310: 71, doi: 10.1136/bmj.310.6972.71.
Hanney, S.R., Kanya, L., Pokhrel, S., Jones, T.H. and Boaz, A. (2020) How to strengthen a health research system: WHO’s review, whose literature and who is providing leadership?, Health Research Policy and Systems, 18: 72, doi: 10.1186/s12961-020-00581-1.
Hardill, I. and Baines, S. (2012) Process and Impact Review: Engaging with Scottish Local Authorities Scheme, ESRC End of Award Report, RES-809-19-0022-A, Swindon: ESRC.
Hardill, I., Moss, O. and Biddle, P. (2012) ESRC Follow-On-Fund (FoF) scheme: external evaluation final report, October 2012.
Hayre, J. and Parker, T. (1997) Putting research into practice: how to bridge the gap between policy and practice, NT Research, 2(1): 5–6, doi: 10.1177/136140969700200102.
Health Economics Research Group (2008) Medical Research: What’s it Worth? Estimating the Economic Benefits of Research in the UK, Uxbridge: Brunel University, Office of Health Economics, https://mrc.ukri.org/publications/browse/medical-research-whats-it-worth/.
Holden, D. (2016) Impact review: the Scottish Institute for Policing Research, www.sfc.ac.uk.
Holmes, B.J., Best, A., Davies, H., Hunter, D., Kelly, M.P., Marshall, M. and Rycroft-Malone, J. (2017) Mobilising knowledge in complex health systems: a call to action, Evidence & Policy, 13(3): 539–60, doi: 10.1332/174426416X14712553750311.
Hopkins, A., Oliver, K., Boaz, A., Guillot-Wright, S. and Cairney, P. (2021) Are research-policy engagement activities informed by policy theory and evidence? 7 challenges to the UK impact agenda, Policy Design and Practice, 4(3): 341–56.
Hoppe, R. (2009) Scientific advice and public policy: expert advisers’ and policymakers’ discourses on boundary work, Poiesis und Praxis, 6(3–4): 235–63, doi: 10.1007/s10202-008-0053-3.
House of Commons Liaison Committee (2019) The effectiveness and influence of the select committee system, Fourth Report of Session 2017–19, HC 1860.
Hunter, G., May, T. and Hough, M. (2017) An Evaluation of the ‘What Works Centre for Crime Reduction’, Final Report, London: Birkbeck University of London, Institute for Criminal Policy Research.
Innvaer, S., Vist, G. and Trommald, M. (2002) Health policy-makers’ perceptions of their use of evidence: a systematic review, Journal of Health Services Research and Policy, 7(4): 239–44, doi: 10.1258/135581902320432778.
Interface Associates UK Limited (2020) Social Work Teaching Partnerships: An Evaluation – Final Report, London: Department for Education.
Kenny, C., Hobbs, A., Tyler, C. and Blackstock, J. (2018) POST Research Study: the Work and Impact of POST, London: Houses of Parliament, https://www.parliament.uk/globalassets/documents/post/POST_Research_Study_The_work_and_impact_of_POST_Sept_2018.pdf.
Kislov, R., Wilson, P.M., Knowles, S. and Boaden, R. (2018) Learning from the emergence of NIHR Collaborations for Leadership in Applied Health Research and Care (CLAHRCs): a systematic review of evaluations, Implementation Science, 13: 111, doi: 10.1186/s13012-018-0805-y.
Knight, C. and Lightowler, C. (2010) Reflections of ‘knowledge exchange professionals’ in the social sciences: emerging opportunities and challenges for university-based knowledge brokers, Evidence & Policy, 6(4): 543–56, doi: 10.1332/174426410X535891.
Knott, J. and Wildavsky, A. (1980) If dissemination is the solution, what is the problem?, Science Communication, 1(4): 537–78, doi: 10.1177/107554708000100404.
Langer, L., Tripney, J. and Gough, D. (2016) The Science of Using Science: Researching the Use of Research Evidence in Decision-Making, London: EPPI-Centre, UCL Institute of Education, https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3504.
Lester, L. et al. (2020) Evaluation of the performance and achievements of the WHO Evidence-informed Policy Network (EVIPNet) Europe, Health Research Policy and Systems, 18(1): 109, doi: 10.1186/s12961-020-00612-x.
Leyser, O. (2020) CaSE Annual Lecture 2020 With the Chief Executive of UKRI, London: Campaign for Science and Engineering, https://www.sciencecampaign.org.uk/engaging-with-policy/events/case-annual-lecture-2020.html.
Lister, S., Dixon, F., Selvester, K. and Driscoll, Z. (2015) Evaluation of the Emergency Nutrition Network (ENN). Evaluation Report: Mokoro Limited.
Living with Environmental Change (LWEC) (no date) EPSRC website, https://epsrc.ukri.org/research/ourportfolio/themes/livingwithenvironmentalchange/ (accessed 13 February 2021).
Lomas, J. and Brown, A.D. (2009) Research and advice giving: a functional view of evidence-informed policy advice in a Canadian Ministry of Health, Milbank Quarterly, 87(4): 903–26, doi: 10.1111/j.1468-0009.2009.00583.x.
Mawson, J. (2007) Research councils, universities and local government: building bridges, Public Money and Management, 27(4): 265–72, doi: 10.1111/j.1467-9302.2007.00593.x.
May, T., Sen, R. and Hough, M. (2020) The N8 Policing Research Partnership: Examining the First Four Years, Project Report, London: ICPR.
Mays, N. (2018) Policy Research Unit Final Report 2011/18, PRU102/0001, London School of Hygiene and Tropical Medicine.
Michie, S., van Stralen, M.M. and West, R. (2011) The behaviour change wheel: a new method for characterising and designing behaviour change interventions, Implementation Science, 6: 42, doi: 10.1186/1748-5908-6-42.
Mijumbi-Deve, R., Rosenbaum, S.E., Oxman, A.D., Lavis, J.N. and Sewankambo, N.K. (2017) Policymaker experiences with rapid response briefs to address Health-system and technology questions in Uganda, Health Research Policy and Systems, 15: 37, doi: 10.1186/s12961-017-0200-1.
Milat, A.J. and Li, B. (2017) Narrative review of frameworks for translating research evidence into policy and practice, Public Health Research and Practice, 27(1): 2711704, doi: 10.17061/phrp2711704.
Morgan, K. (2020) The evidence masterclass, Alliance for Useful Evidence.
Nurse, P. (2015) Ensuring a successful UK research endeavour: a review of the UK research councils, BIS/15/625.
O’Brien, P. (2017) Interim evaluation of leading places Phase 1, https://www.local.gov.uk/sites/default/files/documents/Interim%20Evaluation%20of%20Leading%20Places%20phase%201%20-%20Peter%20O%27Brien%20%28HEFCE%29%20Slides.pdf.
O’Brien, P. (2018) Evaluation of leading places: Phase 2, https://www.local.gov.uk/topics/devolution/devolution-online-hub/local-growth/leading-places.
Oliver, K., Innvar, S., Lorenc, T., Woodman, J. and Thomas, J. (2014) A systematic review of barriers to and facilitators of the use of evidence by policymakers, BMC Health Services Research, 14: 2, doi: 10.1186/1472-6963-14-2.
Oliver, K. (2019) Studying evidence: does it matter what we call it?, Transforming Evidence blog, https://transforming-evidence.org/blog/studying-evidence-use-is-an-established-science-but-we-need-new-spaces-to-discuss-it.
Oliver, K. and Boaz, A. (2019) Transforming evidence for policy and practice: creating space for new conversations, Palgrave Communications, 5: 60, doi: 10.1057/s41599-019-0266-1.
Oliver, K. and Cairney, P. (2019) The dos and don’ts of influencing policy: a systematic review of the advice literature, Palgrave Communications, 5: 21.
Oliver, K., Kothari, A. and Mays, N. (2019) The dark side of coproduction: do the costs outweigh the benefits for health research?, Health Research Policy and Systems, 17: 33, doi: 10.1186/s12961-019-0432-3.
Orton, L., Lloyd-Williams, F., Taylor-Robinson, D., O’Flaherty, M. and Capewell, S. (2011) The use of research evidence in public health decision making processes: systematic review, PLoS ONE, 6(7): e21704, doi: 10.1371/journal.pone.0021704.
Page, G. (2021) Final Report: Evaluating the N8 Policing Research Partnership. https://documents.manchester.ac.uk/display.aspx?DocID=56473
PERLSS (Partners for Evidence-driven Rapid Learning in Social Systems) (2018) Partners for evidence-driven rapid learning in social systems (PERLSS), IDRC – International Development Research Centre, https://www.idrc.ca/en/project/partners-evidence-driven-rapid-learning-social-systems-perlss.
Policy Innovation Research Unit (no date) Progress Report 2018–19, https://piru.ac.uk/assets/files/PIRU.
Policy Innovation Research Unit (2014) Progress Report 2014, https://piru.ac.uk/assets/files/PIRU%20progress%20report,%20Jan%2011-Aug%2014%20for%20website%2026%20Oct.pdf.
POST (Parliamentary Office of Science and Technology) (no date) POST at 30: bridging research and policy, www.parliament.uk/post.
Rushmer, R. and Shucksmith, J. (2018) AskFuse origins: system barriers to providing the research that public health practice and policy partners say they need, Evidence & Policy, 14(1): 81–101, doi: 10.1332/174426416X14829329957024.
Sarewitz, D. (2018) Of cold mice and isotopes or should we do less science?, Talk delivered at Science and Politics: Exploring Relations between Academic Research, Higher Education, and Science Policy Summer School in Higher Education Research and Science Studies, Universität Bonn, Forum Internationale Wissenschaft, September 10, 2018. https://drive.google.com/file/d/1_GhlT2z-z5pVu072LH00t7SjF99IFkEb/view
Sax Institute (2019) Annual Report 2018–19, www.saxinstitute.org.au.
Schoemaker, C.G. and Smulders, Y.M. (2015) The forgotten capitulation of evidence-based medicine, Nederlands Tijdschrift voor Geneeskunde, 159: A9249.
Secret, M., Abell, M.L. and Berlin, T. (2011) The promise and challenge of practice-research collaborations: guiding principles and strategies for initiating, designing, and implementing program evaluation research, Social Work, 56(1): 9–20, doi: 10.1093/sw/56.1.9.
Sin, C.H. (2008) The role of intermediaries in getting evidence into policy and practice: some useful lessons from examining consultancy-client relationships, Evidence & Policy, 4(1): 85–103, doi: 10.1332/174426408783477828.
Smith, K.E. (2014) The politics of ideas: the complex interplay of health inequalities research and policy, Science and Public Policy, 41(5): 561–74, doi: 10.1093/scipol/sct085.
Smith, K.E., Pearson, M., Allen, W., Barwick, M., Farrell, C., Hardy, M., Harvey, B., Kothari, A., Neal, Z. and Pellini, A. (2019) Building and diversifying our interdisciplinary giants: moving scholarship on evidence and policy forward, Evidence & Policy, 15(4): 455–60, doi: 10.1332/174426419x15705227786110.
Smith, K. and Stewart, E. (2017) We need to talk about impact: why social policy academics need to engage with the UK’s research impact agenda, Journal of Social Policy, 46(1): 109–27, doi: 10.1017/S0047279416000283.
Stevenson, O. (2019) Making space for new models of academic-policy engagement, Universities Policy Engagement Network (UPEN) Blog, http://www.upen.ac.uk/blogs/?action=story&id=41.
Teers, R., Miller, N. and Braddock, R. (2018) Police knowledge fund review, http://whatworks.college.police.uk/Partnerships/Knowledge-Fund/Pages/Police-Knowledge-Fund.aspx.
Topp, L., Mair, D., Smillie, L. and Cairney, P. (2018) Knowledge management for policy impact: the case of the European Commission’s Joint Research Centre, Palgrave Communications, 4: 87, doi: 10.1057/s41599-018-0143-3.
Tseng, V. (2008) Studying the Use of Research Evidence in Policy and Practice, New York: William T. Grant Foundation, https://nyuscholars.nyu.edu/en/publications/studying-the-use-of-research-evidence-in-policy-and-practice.
Tseng, V., Easton, J.Q. and Supplee, L.H. (2018) Research-practice partnerships: building two-way streets of engagement, Social Policy Report, 30(4): 1–17, doi: 10.1002/j.2379-3988.2017.tb00089.x.
Tyler, C. (2017) Wanted: academics wise to the needs of government, Nature, 552(7683): 7, doi: 10.1038/d41586-017-07744-1.
UCL/LGA (2020) Net zero innovation programme: a UCL and local government association collaboration, UCL Public Policy. https://www.ucl.ac.uk/public-policy/home/collaborate/net-zero-innovation-programme
UKRI (2019) Delivery plan 2019, https://www.ukri.org/files/about/dps/ukri-dp-2019/.
Walker, L., Pike, L., Chambers, C., Lawrence, N., Wood, M. and Durrant, H. (2019) Understanding and navigating the landscape of evidence-based policy: recommendations for improving academic-policy engagement, https://www.bath.ac.uk/publications/understanding-and-navigating-the-landscape-of-evidence-based-policy/attachments/understanding-and-navigating-the-landscape-of-evidence-based-policy.pdf.
Warburton, D. (2011) Evaluation of the Living with Environmental Change (LWEC) Citizens’ advisory forum, final report, www.sharedpractice.org.uk.
Ward, V. (2017) Why, whose, what and how? A framework for knowledge mobilisers, Evidence & Policy, 13(3): 477–97, doi: 10.1332/174426416X14634763278725.
Weiss, C.H. (1979) The many meanings of research utilization, Public Administration Review, 39(5): 426, doi: 10.2307/3109916.
Weiss, C.H. (1993) Where politics and evaluation meet, Evaluation Practice, 14(1): 93–106, doi: 10.1177/109821409301400119.
Weiss, C.H. (1998) Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1): 21–33, doi: 10.1177/109821409801900103.
Wellstead, A., Cairney, P. and Oliver, K. (2018) Reducing ambiguity to close the science-policy gap, Policy Design and Practice, 1(2): 115–25, doi: 10.1080/25741292.2018.1458397.
What Works Trial Advice Panel (2020) https://whatworks.blog.gov.uk/trial-advice-panel/.
Wilson, S. and Lilly, A. (2016) The local government knowledge navigator. Case study. London: Institute for Government.
Wye, L. et al. (2020) Collective knowledge brokering: the model and impact of an embedded team, Evidence & Policy, 16(3): 429–52, doi: 10.1332/174426419X15468577044957.