Abstract

Target audience:

What Works Centres; other intermediary brokerage agencies; their funders and users; and researchers of research use.

Background:

Knowledge brokerage and knowledge mobilisation (KM) are generic terms used to describe activities to enable the use of research evidence to inform policy, practice and individual decision making. Knowledge brokerage intermediary (KBI) initiatives facilitate such use of research evidence. This debate paper argues that although the work of KBIs is to enable evidence-informed decision making (EIDM), they may not always be overt and consistent in how they follow the principles of EIDM in their own practice.

Key points for discussion:

Drawing on examples from existing brokerage initiatives, four areas are suggested where KBIs could be more evidence-informed in their work: (1) needs analysis: evidence-informed in their analysis of where and how the KBI can best contribute to the existing evidence ecosystem; (2) methods and theories of change: evidence-informed in the methods that the KBI uses to achieve its goals; (3) evidence standards: credible standards for making evidence claims; and (4) evaluation and monitoring: evidence-informed evaluation of their own activities and contribution to the knowledge base on evidence use. For each of these areas, questions are suggested for considering the extent to which the principles are being followed in practice.

Conclusions and implications:

KBIs work with evidence but they may not always be evidence-informed in their practice. KBIs could benefit from attending more overtly to the extent to which they apply the logic of EIDM to how they work. In doing so, KBIs can advance both the study and practice of using research evidence to inform decision making.

Background

Policy, practice and individual decisions are informed and influenced by many factors. Findings of research can be an important source of information. Over recent years, there has been a concern that research evidence is not always used to its full potential in decision making (Boaz et al, 2019), or is used to justify decisions that have really been made on other grounds (Weiss, 1979). A number of strategies have been used to enable the greater consideration and use of research evidence (Cooper, 2014; Langer et al, 2016; Gough et al, 2018). Knowledge Brokerage Intermediaries (KBIs) are individuals and organisations that aim to broker the intermediary space between the use and production of research evidence.

Examples of KBI organisations can include:

  • portals to communicate research findings to potential users of evidence;

  • knowledge brokerage organisations, including What Works Centres (WWCs) and research observatories (such as the International Public Policy Observatory on COVID-19);

  • university offices to communicate research findings;

  • evidence advisory systems for governments.

The strategies to enable the consideration of research findings in decision making can range across:

  • access: initiatives to raise awareness of research evidence and make it more available to potential users of research;

  • uptake: strategies to support and encourage decision makers to make use of research evidence in their work;

  • science advice: researchers’ availability to advise decision makers as in expert advisory committees, academic secondments to government departments, or in partnerships between universities and policymakers and professional practitioners;

  • co-production of research and its use: by researchers, users of that research, and intermediaries between the two;

  • impact: measures to encourage researchers to enable their work to influence decision making;

  • implementation: strategies to support changes in practice that are based on decisions informed by research evidence.

This paper builds on the work of Powell and colleagues (2016) to contribute to, and extend, the debate on the importance of KBIs themselves being evidence-informed in how they go about their work. If KBIs do not take an evidence-informed approach to their own work, they may be less effective than they could be. They may also lose credibility and trust by not following their own advice on the importance of making use of research evidence in decision making. This paper argues that a more overt focus on being evidence-informed can assist KBIs to reflect on and develop the theory, practice and study of their work in at least four areas:

  1. needs analysis: appraisal of the pre-existing evidence ecosystem that the initiatives wish to influence;

  2. methods and theories of change: the initiatives’ activities and methods and the basis for believing that they will produce the outcomes desired;

  3. evidence standards: the quality and relevance criteria for the evidence claims made by the KBIs;

  4. evaluation and monitoring: KBIs’ evaluation of their own activities and of their contribution to the knowledge base on evidence use.

1. Evidence-informed in their aims and needs analysis

Evidence-informed policy and practice is where relevant research findings are used in an appropriate and useful way to inform decision making. Evidence claims may be justified in some contexts but may be applied to decisions where they have no or limited relevance; for example, evidence about what works on average may be ineffective or even harmful in specific circumstances (and vice versa).

Matching the needs of the decision maker to the questions asked and the contexts in which they apply involves some engagement between decision making and research production. This can be conceived of as an evidence ecosystem that exists within a wider system of various stakeholders influencing research production and research use (Best and Holmes, 2010; Gough et al, 2019). The main components of such an evidence ecosystem are decision making, research production, and engagement between the two. All of this occurs within wider systems and contexts. An awareness of the components and functioning of such an evidence ecosystem (as in Figure 1) can help KBIs and other actors to plan and assess their work.

[Figure 1 depicts the evidence ecosystem within wider systems and contexts: policy, practice and individual decision making (with implementation of decisions) is linked through engagement to what we know from research (synthesis) and research production (primary research). Engagement involves many players and interests (actors, perspectives, issues, questions, power) and spans demand for research, prioritising issues, supporting uptake, guidance, interpretation and communication; evidence use may be enlightenment and/or instrumental, and evidence may be conceptual and/or instrumental.]
Figure 1: Evidence ecosystems (developed from Gough et al, 2011)


KBIs aim to facilitate the functioning of such evidence ecosystems in some way. An obvious starting point is therefore to assess the functioning of the evidence ecosystem that they plan to work within, or are currently working within. What is the pre-existing nature of research production, engagement with that research by users, and actual use of evidence in decision making? An assessment of this context can inform the choice of strategies for enabling such use of research evidence. KBIs can thus be seen as interventions into pre-existing evidence ecosystems, intended to improve those systems in some way.

So, to what extent do KBIs systematically undertake such appraisal of the nature and extent of any deficiency in the relationship between the use and production of research evidence in their field? And having made such an appraisal, what are their strategies for intervening to improve the functioning of that evidence ecosystem?

What Works Centres (WWCs) are one type of KBI and evidence use infrastructure. In a study of UK WWCs (Gough et al, 2018), the most common aims identified were:

  • primary research base: development of primary research (for example, Education Endowment Foundation (EEF));

  • co-production: by researchers and users of primary and secondary research (for example, What Works Scotland);

  • synthesis: clarifying the knowledge base (all of the UK WWCs);

  • user access to research: communication of the evidence to professional practitioners (all of the UK WWCs);

  • supporting evidence uptake: enabling the consideration and uptake of research (all of the UK WWCs);

  • evidence-informed guidance: developing guidelines/recommendations for practice (for example, National Institute for Health and Care Excellence (NICE));

  • enabling implementation: of decisions that have been informed by research evidence, including the use of strategies informed by the behavioural needs of users (for example, EEF, and Early Intervention Foundation (EIF)).

The centres believed that there were important needs that they could fulfil. They thought it was important for decision makers to have access to research evidence or to guidance informed by research. The logic is that it is more efficient for a national service to identify relevant research evidence than for individual policymakers and practitioners to do so. Nevertheless, it was not always clear why providing access was often the only or predominant aim, when there were other aspects of the evidence ecosystem that could be attended to. At the time of the study, some of the centres did take a more holistic approach to appraising, or enabling, all parts of their evidence ecosystems and the wider systems within which these existed (including political dynamics), though this might not be reflected in their public descriptions of their work. There was also some explicit discussion of how different KBIs might relate to and interact with each other.

There are also differences in the type of policy and practice issues and related research evidence that KBIs engage with. Many KBIs focus on the identification and implementation of effective interventions or ‘what works’. For some of these the emphasis is on manualised programmes for intervention with a concern for fidelity of application. Others emphasise effective strategies and mechanisms that can be applied differently in different contexts (Gough, 2021).

There has been development over time in the aims and methods of the UK WWCs. Most started with a focus on the synthesis and communication of evidence, and then have developed an increased focus on user engagement and implementation. EEF in particular has invested in developing and, most importantly, evaluating a number of different strategies for enabling the use of evidence (Sharples et al, 2019), including how schools use research as a result of engaging with the Research Schools Network (Gu et al, 2020) and the scale-up of research-informed practice in the use of teaching assistants in schools (Maxwell et al, 2019). Another example is the Early Intervention Foundation’s (EIF) work on ‘supporting evidence use in policy and practice’ (Waddell, 2021), which includes better understanding of the behavioural needs of users (Waddell and Sharples, 2020). Bache (2020) has also written about the role of evidence in the work of the What Works Centre for Wellbeing.

A similar principle applies to expert scientific advisory committees. Their remit is explicit: they provide science advice to parliaments and government departments. What is less clear is the rationale for developing this type of structure rather than other ways that government could access research evidence, such as academic societies and government research analysts (Gough, 2020).

In sum, there are opportunities for KBIs to be more explicit about their analysis of the ecosystem in which they are intervening, what is most needed to improve the functioning of this system, why they have chosen their specific strategy, and how their contribution fits into this wider picture. Questions to consider are the extent to which KBIs are evidence-informed in their aims and needs analysis in terms of:

  • analysis of the evidence ecosystem: how has the KBI assessed the pre-existing relationships between the use of research and its production and the ways in which it proposes to enhance this? The aims may be evident from a KBI’s name, but is there a justification of why a particular approach has been chosen over others?

  • specific aims: which particular parts of the pre-existing evidence ecosystem does a KBI wish to change, and what does it wish to change about them? What types of user issues, research evidence and evidence claims does it focus on?

  • users and beneficiaries: who will use and/or benefit from the KBI’s work?

  • KBI development over time: what changes are there in the focus of their work over time, and the reasons for this (including changes in the wider evidence ecosystem or their position within it)?

  • collaboration within the evidence ecosystem: what interactions are there with the other actors (including overlaps with the aims and work of other KBIs and collaboration with them)?

2. Evidence-informed in their methods and theory of change

Closely related to the aims of KBIs are the methods and theories of change by which these aims will be achieved. If the aim is, for example, to synthesise and communicate evidence, KBIs will likely state the methods they use to achieve this. The study of WWCs (Gough et al, 2018) and another study of evidence web portals (Gough and White, 2018) found considerable variation in the nature and extent of the description of KBIs’ methods of work in terms of:

  • the use of standardised specific methods, guidance that allows flexibility, or individual project-specific methods;

  • explaining and justifying the choice of specific methods;

  • the quality of reporting of those methods.

KBIs are increasingly developing Theories of Change, and in doing so are becoming more explicit about how their methods will be effective in achieving their fundamental aims (for example, Bache, 2020; Gough, 2021; Waddell, 2021). But there are still instances of KBIs seeming to assume that an approach will be effective and useful without being explicit about why that might be so.

This is well illustrated by the aim of communicating research findings, which is often adopted as a default approach to supporting user engagement and decision making (Davies et al, 2015). This is despite the evidence from ‘research on research use’ that the communication of research findings, on its own, is not associated with increased use of those findings (Langer et al, 2016). EEF has shown this through its multi-armed randomised controlled trial of different ways to communicate research on literacy to teachers, where no evidence was found that any of these strategies were on their own effective (Lord et al, 2017). The communication of evidence seems to be a necessary but not a sufficient condition for the use of evidence.

There are behavioural factors to consider, such as the capacity (personal attributes), opportunity (environmental attributes), and motivations (psychological processes) that enable the use of evidence (Michie et al, 2011). Research use activities can often be driven by a desire by researchers for their findings to have impact, rather than by user demand for research on particular topics and perspectives, or more nuanced interactions between evidence and policy (Boswell and Smith, 2018; Langer et al, 2016). This can be addressed by KBIs as in, for example, the previously mentioned EIF project that designed KM strategies based on an understanding of the behavioural needs of research users (Waddell and Sharples, 2020).

There are also strong examples of KBIs integrating user perspectives into their work. The National Institute for Health and Care Excellence (NICE), for example, has a stakeholder-driven process for identifying health and social care practice questions, commissioning systematic reviews to address these issues (including the cost/benefits of different actions), and then stakeholder-driven interpretation of this to make recommendations for practice. The process is based on an explicit logic that is supported by research on stakeholder engagement, synthesis of evidence, social values, and the importance of contextual information (NICE, 2020; Gough, 2021). Similar user-informed approaches to developing guidance have been adopted by the WWC for Crime Reduction. Other centres, such as the Wales Centre for Public Policy, are demand-driven by government requests for evidence to inform policy decisions.

Some KBI strategies put an emphasis on building relationships between researchers and the potential users of research as in, for example, the secondment of researchers to government departments. However, ‘research on research use’ indicates that such relationships are also a necessary, but not sufficient, condition. Relationships can have an effect on research use so long as they are accompanied by efforts to increase the capacity, opportunity and motivation for the evidence to be used in practice (Langer et al, 2016). Encouragingly, UK WWCs are currently collaborating in a project on how they can build the capacity of users to make, and act on, evidence-informed decisions, with a particular focus on effective implementation.

Similar questions about the nature of the brokerage activity can be asked of expert scientific advisory committees. There is not always clarity about how they identify and select experts to be members (including skills, topic areas, relationships with and perspectives shared with government), or the functioning of the committees and how they make decisions (Gough, 2020; Geddes, 2020). As the methods and processes are not explicit, the theories of change about their outcomes (and how this would differ from other approaches to providing science advice) are also not clear.

In sum, KBIs could build further confidence in their value and impact by demonstrating that their ways of operating are supported by or based on evidence on research use. Are KBIs evidence-informed in their methods and theories of change in terms of:

  • overt consideration: (1) of both the demand (‘pull’) as well as the production (‘push’) components of the evidence ecosystem; (2) of the engagement of the planned users and beneficiaries in the work and their role and power in such decision making; (3) of the capacity, opportunity and motivation of decision makers to use research evidence in their work; (4) of potential negative effects and risks from the KBI’s work and how these will be avoided or ameliorated; and (5) of sustainability of the aims, methods and theories of change and capacity of the KBI to achieve this over time?

  • theory of change: what specific methods are being used and what is the causal chain by which these are thought to achieve the interim and ultimate aims of the KBI?

  • fitness for purpose and effectiveness: what is the basis for believing that the methods and theory of change are appropriate and effective and that this is supported by ‘research on research use’?

3. Credible standards for making justifiable evidence claims

KBIs aim to increase the use of research evidence by decision makers. In communicating selected research evidence, they are making claims about the trustworthiness and relevance of research evidence, and so the criteria that they use for making such evidence claims are key. The strength of evidence required to inform a decision may depend, of course, on the importance of the decision and the opportunity costs of making a decision one way or another. A short-term response to an immediate crisis is not the same as a long-term policy strategy. Whatever the nature of the decision, there is the danger that if the evidence claims are based on weak or inconsistent standards (and so not justifiable), then the users of research may be misled. Similar arguments can be made for the use of evidence provided by expert witnesses in courts (Ward, 2015) and scientific advisory committees. The evidence claims need to be both justifiable from the research and relevant to the issue at hand.

An evidence claim may not be justified for many reasons including:

  • representativeness of the evidence base: the claim is made on the basis of research findings that are not representative of all the relevant and trustworthy studies on an issue;

  • quality and relevance: the research is not of sufficient quality (methodologically trustworthy) and relevance to be relied upon;

  • extent of evidence: the research is of sufficient quality and relevance but is not sufficient in extent to make the specific evidence claim (for example, trustworthy relevant evidence may be available on a large population but it might not be able to make justifiable claims about some sub-populations);

  • interpretation and application: the research findings are not being applied appropriately to the issue under consideration.

When making a claim about what is known about a particular research question, it is of course important to consider all the relevant evidence, rather than individual studies which may not be representative of all the relevant and trustworthy evidence. The appraisal of evidence therefore requires an assessment of: (1) the ways that the evidence has been identified and brought together; (2) the quality and relevance of the studies included in such reviews, including ethical issues; and (3) the nature and extent of the totality of all the relevant evidence (Gough, 2021).

There can be dangers in focusing on individual studies on their own. An example is the pressure that academics can be under for their research to have an impact. Their individual studies may be of high quality, but a decision maker would be better informed by knowing, and being able to take account of, all relevant justifiable research claims.

One way to examine the evidence standards of KBIs is to examine the evidence claims in their summaries of ‘evidence-informed’ policy and practice interventions, as in evidence portals and toolkits. The recommendations of portals may have widespread effect. Results for America, for example, has produced an Economic Mobility Catalog, a web-based resource for local governments to identify strategies that are effective in driving upward social mobility. Its ratings of ‘good enough’ evidence are based on those provided by a number of evidence portals that may apply different evidence standards.

The previously mentioned small international survey of 15 national or international evidence portals found that only six of the portals used a formal (systematic) method of identifying and synthesising evidence for informing users of research about effective interventions (Gough and White, 2018). In some cases, there was variation in the standards used within a single centre.

Two of the portals used expert reviews, whereby a researcher uses their knowledge of a field to provide an overview of what is known. Such reviews may be excellent, but they rely on the knowledge of the expert and may not be systematic in the depth and detail with which they identify, evaluate and synthesise knowledge from different studies. A similar approach is used by expert scientific advisory committees, whose interpretation of the evidence base depends upon who is selected to be on the committee.

Five of the portals in the survey made their evidence claims on the basis of one or two good studies. The danger with such an evidence standard is that it does not consider the whole knowledge base, and there may be many other good-quality studies that found an opposite result. It is interesting that most of the portals using the ‘one or two good studies’ criterion were focused on the effectiveness of intervention programmes (a specific combination of intervention components) rather than evidence on particular intervention strategies. Specific programmes are useful for indicating efficacy where there is intervention fidelity, but may be less adaptable for use in contexts that differ significantly from those in which the programmes were developed. Table 1 shows that, for the five portals providing evidence on programmes, just one or two studies were enough for the portals to inform users that the programmes were effective. The evidence standards of at least one of these portals have developed since the survey, but it is clear that the standards of some KBIs for making an evidence claim of effectiveness can be quite low.

Table 1:

Web-based portals’ evidence standards for making evidence claims (adapted from Gough and White, 2018)

                               Evidence claim of efficacy
Basis for applying criteria    Intervention programmes    Intervention approaches
Systematic reviews             -                          6
‘Narrative’/expert reviews     -                          2
Listing studies and results    -                          2
Vote counting                  -                          -
One or two good studies*       5                          -
Total (N = 15)                 5                          10

* Maybe plus no evidence of harms from the intervention.

Specific standards will depend on the research questions being asked and the evidence claims made in response to these (Gough, 2021). The nature of the evidence and the standards for making evidence claims vary between, for example, research evidence on the effectiveness of an intervention, and the evidence in support of a causal model by which it had its effect.

The term ‘evidence standards’ itself can be problematic in that it is used to describe a range of different approaches to supporting or appraising research to make justifiable evidence claims. These approaches can include (Gough, 2021):

  • methods standards criteria (methodological criteria for making an evidence claim);

  • methods guidance (advice as to appropriate research methods to make justifiable evidence claims);

  • internal quality assurance (processes for ensuring that research methods are performed appropriately);

  • reporting standards (criteria for transparent reporting of the execution of research);

  • methods appraisal (procedures for checking and reporting on the relevance and trustworthiness of research studies and the basis of their evidence claims);

  • stage of development, appraisal of effectiveness, and implementation of interventions (the extent that a certain policy or practice intervention has research evidence to justify its effectiveness and use).

All of these may be specified in extensive or minimal detail. There is thus much potential for confusion about what ‘evidence standards’ means, as well as about the basis for making judgements within each of these different types of standards. These are therefore issues on which it is important for KBIs to be clear, consistent and coherent.

In sum, inadequate or inconsistent evidence standards could lead to audiences misinterpreting or placing too much trust in the findings and guidance presented. So, are KBIs evidence-informed in their use of credible evidence standards for making evidence claims in terms of:

  • transparency: do the KBIs fully and explicitly report their specific methods and criteria for making evidence claims? Are these simple lists or are they manuals providing detailed explanations of the nature and basis of such judgements?

  • consistency: are the KBIs consistent in their methods and criteria for making different evidence claims in different outputs?

  • clarity: are KBIs clear about the nature of the evidence claim and how it is relevant and fits the needs of those to whom the claim is being communicated?

4. Evaluation of KBIs and contributing to the ‘use of research’ knowledge base

How do KBIs assess whether their aims have been achieved? Do these results inform their own development and research evidence on ‘research use’? Are we clear about ‘What works in What Works’ (ESRC, 2016)?

For KBIs to be evidence-informed, it would be expected that they would evaluate their progress in meeting their aims and modify their activity in response to their evaluations (as above on the KBIs’ theories of change). Such evaluation would also allow KBIs to provide research findings to contribute to the scientific knowledge of ‘research on research use’.

KBIs are naturally focused on undertaking the activities that they have been funded for. There may therefore be few resources available for them to commission external independent or internal self-evaluations. There are of course exceptions, with some KBIs formally evaluating most of their activities.

Where evaluation does take place, a distinction can be made between monitoring of work activity, measuring the achievement of desired outcomes (KBI goals), and examining the processes by which these are achieved. Monitoring activity can be relatively straightforward, such as recording numbers of meetings held or products produced. For measuring the extent of desired outcomes, a distinction can be made between interim and final outcome measures. Interim measures can test stages in a hypothesised theory of change and the processes involved.

Assessments of detailed theories of change are rare, and interim measures of change can be very simplistic, such as web analytics of visits to KBIs’ web pages. These may indicate that users have at least had some contact with KBIs’ resources, though this does not necessarily mean that this has then informed decision making and policy and practice. ‘Use of research’ means that research evidence was considered, though it may not always be easily apparent what role the research had in the decision-making process.

The case of expert scientific advisory committees for government is relevant, as there do not seem to be clear methods by which they are evaluated. There has been a focus on how government uses advice to respond to health emergencies such as the BSE crisis (Hinchliffe, 2001) and now the COVID-19 pandemic, with the latter subject to an inquiry by the UK Parliament’s Science and Technology Committee (Gough, 2020). There is also research on the use of evidence by legislatures (Kenny et al, 2017; Geddes, 2020).

Final outcome measures are often weakly specified. If the overall aim of a KBI is to increase the use of evidence, then any data showing that use has increased may be a measure of success (though in such ‘natural experiments’ the data is correlational and one cannot be sure what the causes of the changes are). This does not necessarily mean that the research has been used wisely or appropriately – just that it has been used. Even where the research has been ‘wisely used’, it may have led a decision maker to stop a planned action and so the influence of the KBI may be difficult to measure.

A more detailed way of appraising outcomes is to assess the effects on the intended beneficiaries of a KBI’s work. KBIs do sometimes measure changes in the achievements of their ultimate beneficiaries, such as pupil attainment, though this is rare (for example, Sibieta and Sianesi, 2019). For expert scientific advisory committees, final outcome measures could be based on whether the advice was acted upon and on the nature of the outcomes ultimately achieved.

In sum, external or self-evaluation is important in determining whether and to what extent KBIs are meeting their objectives, and how they or others can better meet such objectives in the future. Are KBIs explicit about how they are being evidence-informed in the evaluation of their work, and the contribution that this could make to the knowledge base (on research use), in terms of:

  • rigorous evaluation: to indicate how they are meeting their aims (and other positive and negative effects) through the planned interim and final outcomes and appraisal of the adequacy of their theory of change?

  • KBI development: using their evaluations to adjust and develop their work over time?

  • evidence of effect: providing justifiable and relevant evidence claims about their positive contribution to the users and/or planned ultimate beneficiaries of their work?

  • evidence standards: for making any such evidence claims (including the methods used to assess change and the use of subjective or objective measures of change)?

  • research on research use: justifiable evidence claims about the KBIs’ work that contributes to the knowledge base on ‘research on research use’?

Discussion and implications

KBI organisations are an important part of the infrastructure of evidence ecosystems, enabling the use of research evidence in decision making. However, KBIs vary in the extent to which they apply the same logic of ‘evidence use’ to their own decision making and to their contribution to the knowledge base on evidence use.

It is useful to consider why KBIs are not always evidence-informed in their work. One reason may be that the funders of new initiatives and the initiatives themselves are focused on action. The initiatives wish to progress the tasks that they are meant to undertake, and they may be evaluated and obtain further funding on the basis of such activity, products and outputs. When KBIs are initiated, particularly when the focus is on providing access to research evidence, there may be an expectation of immediate evidence products. Evidence standards may then continue to develop organically rather than systematically and not be applied consistently.

Impact may be assumed. The priorities of the funders and the initiatives are on action to increase research use, rather than on seeing this as something that itself needs to be evidence-informed. There has only been limited, though recently increasing, research on KBIs. The ESRC in the UK, for example, partly funds some UK WWCs and has funded some studies of their work, but such studies tend to be funded as administrative appraisals and development work rather than as academic studies of the nature and effectiveness of knowledge brokerage. There are relatively few social science studies of ‘research on research use’, despite it being a key area of social science with major practical implications. The use of evidence is an issue for all sciences, and so its study is the one area of social science that applies to all other sciences.

A second possible reason is that, despite KBIs being very aware of and often making use of systematic reviews, the full logic and use of reviews may not be at the core of their thinking. Some KBIs are very sophisticated methodologically about primary research methods but have not always applied the same thinking to systematic reviews.

A third possible reason could be that, even though KBIs should be major players within evidence ecosystems, they may not fully take on board an ecosystem perspective. The usefulness of KBIs simply providing access to research evidence may be considered obvious. It may seem common sense that research needs to be communicated in order to be used. Research evidence can be difficult to access and understand, and so a KBI can help to communicate it to decision makers, which indeed it can. Yet this obvious conclusion may hinder reflection on the limitations of simply ‘pushing’ research to users, and on the messages from research on such issues.

Similarly, developing relationships between researchers and decision makers, and seconding researchers in policy departments, can seem like common sense. Yet, as already discussed, ‘research on research use’ indicates that such mechanisms in themselves may not be sufficient. Such assumptions of efficacy can arise from not taking a holistic approach to examining the evidence ecosystem and having evidence for judging whether a KBI could most effectively contribute.

This paper has aimed to contribute to the debate on how KBIs can more overtly reflect on the extent to which they use the logic of EIDM to frame how they work, and so advance both the study and practice of using research evidence to inform decision making. It has provided some examples of how KBIs have become more explicit about being evidence-informed, particularly in regard to being more specific about their aims, beneficiaries, and methods to enable the uptake of research evidence. This should help increase the coherence of the planning and evaluation of KBIs and help to develop knowledge brokerage as a field. Not attending to the issue of being evidence-informed may create dangers in terms of credibility and effectiveness. On the other hand, we must acknowledge that political issues in the wider ecosystem within which evidence ecosystems exist may, of course, have a larger impact than the rational arguments of being evidence-informed.

Funding

Some of the work referred to in this paper was supported by the ESRC and the UCL Institute of Education (Gough et al, 2018), and by the Centre for Homelessness Impact (Gough and White, 2018).

Contributor statement

DG wrote the first and subsequent drafts of the manuscript, with comments from CM and JS. All authors were directly involved in previously published research referred to in this paper (Gough et al, 2018).

Conflict of interest

The authors have previously been or are still actively involved in the study of knowledge intermediary organisations. Jonathan Sharples is seconded to work for the Education Endowment Foundation, which is a KBI. There are no further known potential conflicts of interest.

References

  • Bache, I. (2020) Evidence, Policy and Wellbeing: Wellbeing in Politics and Policy, London: Palgrave Pivot.

  • Best, A. and Holmes, B. (2010) Systems thinking, knowledge and action: towards better models and methods, Evidence & Policy, 6(2): 145–59, doi: 10.1332/174426410X502284

  • Boaz, A., Davies, H., Fraser, A. and Nutley, S. (2019) What Works Now? Evidence-Informed Policy and Practice, Bristol: Policy Press.

  • Boswell, C. and Smith, K. (2018) Rethinking policy ‘impact’: four models of research-policy relations, Palgrave Communications, 4(20): 1–11, doi: 10.1057/s41599-017-0055-7

  • Cooper, A. (2014) Knowledge mobilisation in education across Canada: a cross-case analysis of 44 research brokering organisations in Canada, Evidence & Policy, 10(1): 29–59, doi: 10.1332/174426413X662806

  • Davies, H.T.O., Powell, A.E. and Nutley, S.M. (2015) Mobilising knowledge to improve UK health care: learning from other countries and other sectors: a multimethod mapping study, Health Services and Delivery Research, 3(27), doi: 10.3310/hsdr03270

  • ESRC (Economic and Social Research Council) (2016) What Works Strategic Review: report of stakeholder survey and documentary analysis, https://esrc.ukri.org/files/collaboration/what-works-strategic-review-report/.

  • Geddes, M. (2020) The webs of belief around ‘evidence’ in legislatures: the case of select committees in the UK House of Commons, Public Administration, 99(1): 40–54, doi: 10.1111/padm.12687

  • Gough, D. (2020) Written evidence submitted to Science and Technology Committee (Commons) inquiry: UK Science, Research and Technology Capability and Influence in Global Health Disease Outbreaks, Submission C190097, https://committees.parliament.uk/writtenevidence/9567/pdf/.

  • Gough, D. (2021) Appraising evidence statements, Review of Research in Education, 45(1): 1–26, doi: 10.3102/0091732X20985072

  • Gough, D., Maidment, C. and Sharples, J. (2018) UK What Works Centres: Aims, Methods and Contexts, London: EPPI-Centre, University College London, https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3731.

  • Gough, D., Thomas, J. and Oliver, S. (2019) Clarifying differences between reviews within evidence ecosystems, Systematic Reviews, 8(1): 170, doi: 10.1186/s13643-019-1089-2

  • Gough, D., Tripney, J., Kenny, C. and Buk-Berge, E. (2011) Evidence Informed Policy in Education in Europe: EIPEE Final Project Report, London: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.

  • Gough, D. and White, H. (2018) Evidence Standards and Evidence Claims in Web-Based Research Portals, London: Centre for Homelessness Impact, https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3743.

  • Gu, Q., Rea, S., Seymour, S., Smethem, L., Bryant, B., Armstrong, P., Ahn, M., Hodgen, J. and Knight, R. (2020) The Research Schools Network: Supporting Schools to Develop Evidence-Informed Practice, London: Education Endowment Foundation, https://educationendowmentfoundation.org.uk/public/files/RS_Evaluation.pdf.

  • Hinchliffe, S. (2001) Indeterminacy in-decisions: science, policy and politics in the BSE (bovine spongiform encephalopathy) crisis, Transactions of the Institute of British Geographers, 26(2): 182–204, doi: 10.1111/1475-5661.00014

  • Kenny, C., Rose, D.C., Hobbs, A., Tyler, C. and Blackstock, J. (2017) The Role of Research in the UK Parliament, Vol 1, London: Houses of Parliament, https://www.parliament.uk/globalassets/documents/post/The-Role-of-Research-in-the-UK-Parliament.pdf.

  • Langer, L., Tripney, J. and Gough, D. (2016) The Science of Using Science: Researching the Use of Research Evidence in Decision-Making, London: EPPI-Centre, Social Science Research Unit, UCL Institute of Education, University College London, https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3504.

  • Lord, P., Rabiasz, A. and Styles, B. (2017) Evaluation of the ‘Literacy Octopus’ dissemination trial, https://www.nfer.ac.uk/media/1689/eefa02.pdf.

  • Maxwell, B., Coldwell, M., Willis, B. and Culliney, M. (2019) Teaching Assistants Regional Scale-up Campaigns: Lessons Learned, London: Education Endowment Foundation, https://educationendowmentfoundation.org.uk/public/files/Campaigns/TA_scale_up_lessons_learned.pdf.

  • Michie, S., van Stralen, M.M. and West, R. (2011) The behaviour change wheel: a new method for characterising and designing behaviour change interventions, Implementation Science, 6(1): 42, doi: 10.1186/1748-5908-6-42

  • NICE (National Institute for Health and Care Excellence) (2020) Our principles: the principles that guide the development of NICE guidance and standards, https://www.nice.org.uk/about/who-we-are/our-principles.

  • Powell, A., Davies, H. and Nutley, S. (2016) Missing in action? The role of the knowledge mobilisation literature in developing knowledge mobilisation practices, Evidence & Policy, 13(2): 201–23, doi: 10.1332/174426416X14534671325644

  • Sharples, J., Albers, B., Fraser, S. and Kime, S. (2019) Putting Evidence to Work: A School’s Guide to Implementation, guidance report, 2nd edn, London: Education Endowment Foundation, https://educationendowmentfoundation.org.uk/education-evidence/guidance-reports/implementation.

  • Sibieta, L. and Sianesi, B. (2019) Impact Evaluation of the South West Yorkshire Teaching Assistants Scaleup Campaign, London: Education Endowment Foundation, https://educationendowmentfoundation.org.uk/public/files/Campaigns/TA_scale_up_lessons_learned.pdf.

  • Waddell, S. (2021) Supporting Evidence-use in Policy and Practice: Reflections for the What Works Network, London: Early Intervention Foundation, https://www.eif.org.uk/report/supporting-evidence-use-in-policy-and-practice-reflections-for-the-what-works-network.

  • Waddell, S. and Sharples, J. (2020) Developing A Behavioural Approach to Knowledge Mobilisation: Reflections for the What Works Network, London: Early Intervention Foundation, https://www.eif.org.uk/report/developing-a-behavioural-approach-to-knowledge-mobilisation-reflections-for-the-what-works-network.

  • Ward, T. (2015) A new and more rigorous approach to expert evidence in England and Wales?, International Journal of Evidence & Proof, 19(4): 228–45, doi: 10.1177/1365712715591471

  • Weiss, C. (1979) The many meanings of research utilization, Public Administration Review, 39(5): 426–31, doi: 10.2307/3109916