3: WHAT counts as evidence

Summary

It is not always obvious what is seen as valid evidence. Different stakeholders have different needs and value different kinds of information. This might include surveys, local health needs information, general media coverage as well as published research. Even for published research, it is not always easy to judge what is most reliable or relevant. This has become more of a problem with the exponential increase in scientific and other outputs, accelerated by trends towards Open Access publishing. Readers need help to filter and prioritise the evidence which is of most value to them. At our evidence centre, we did this with a community of people working and using health services who told us what research mattered to them. Researchers need to involve their target audience at all stages of their projects to ensure their research stays relevant to their readership. Early engagement will help to stay focused on the problems and outcomes that matter to that audience and to understand the ways in which they might make sense of the findings. This will help to translate formal academic knowledge into evidence which will support and inform everyday practice. There are insights on making your research inclusive and reaching diverse audiences.

Different kinds of evidence

It seems obvious what we are talking about when we are talking about evidence. It is published research, right? Not always. Evidence means different things to different people. And asking for the ‘best evidence’ or ‘most relevant evidence’ may end up with very different kinds of information, depending on who it is for and the nature of the question. Let’s take just one example – social prescribing (Box 3.1).

Research example – social prescribing

Just what the doctor ordered

Social prescribing has been widely promoted and adopted in the NHS, most recently in the Long Term Plan in England (NHS 2019). This is a general approach in which general practitioners (GPs) or other health staff can refer patients with complex needs or problems to a range of non-clinical services. Referral is usually made through a link worker and may include walking clubs, art classes or befriending schemes, often run by voluntary and community organisations. Given that many people presenting to GPs have longstanding problems not easily translated into treatment solutions, and our growing understanding of the social and wider determinants of health, this seems self-evidently a good thing.

But in terms of what difference it makes, the evidence is mixed. A comprehensive review of UK-relevant literature in 2017 (Bickerdike et al 2017) found only 15 evaluations of social prescribing activities, all of low quality and at high risk of bias. This review highlighted the complex nature of the intervention, with objectives ranging from improving physical and mental wellbeing to reducing use of GP and other services. There are no current evidence-based national guidelines for such schemes.

Interestingly, a realist review on social prescribing has since been published (Tierney et al 2020). This drew on a large range of UK-relevant material, including grey literature and local evaluations from commissioning authorities, to develop a programme theory explaining the features which need to be in place for link workers to be effective. This approach pays less attention to the methodological quality of individual studies (even uncontrolled before-after studies may usefully describe features of schemes) and instead helps to identify mechanisms of success. These include the ‘buy-in’ and relational connections of link workers within their local communities, which are conditions for success of such complex schemes. In this case, the ‘what’ of evidence is closely aligned to the ‘how’ of study design and methodology. A formal systematic review would have included only randomised trials; this realist review embraced a range of evidence for a different purpose.

What interests me is not just the paucity of evidence for an approach which has been adopted so enthusiastically in policy and practice, but also the variety of evidence which different people might want. From the GP perspective, the most important question might be – which of my many patients presenting with non-specific or complex problems would be most likely to benefit? And which schemes are most likely to be effective in terms of improved outcomes? For the local commissioner of services, it might be which of these activities are most cost-effective, and what data are available locally on provision, uptake and resources. And how can we best recruit, support and retain link workers or navigators? Individual patients might want to know what it is like to join a healthy eating cooking class – did people enjoy it and what made them stay? A local advocacy group for people with learning disabilities might want to know how these services are funded and their fit with other statutory and voluntary services in their patch. And at a national policy level, as well as questions of cost-effectiveness, there may be an appetite for ‘good news’ stories to satisfy ministers and Treasury officials as part of the story of the policy on universal personalised care.

These are all different questions demanding different kinds of evidence. Evidence could include controlled before-after studies of particular social prescribing initiatives but also mapping data on availability and use of local services, descriptive case studies with quotes from referring staff, patient diaries and videos.

The case of social prescribing evidence shows that the ‘what’ is not straightforward. This struck home with me in an exercise some time ago when I carried out a series of recorded interviews with senior health service leaders. I naively asked each of these individuals with distinguished careers as top managers to name a health service research study which had influenced them. They were stumped (luckily, not broadcast live). After a bit of gentle prompting from me, a few came up with some of the management theorists fashionable at the time, from Clayton Christensen on disruptive innovation (Christensen 2013) to Michael Porter on value chains (Porter 1985). The work of academic health services researchers was not mentioned. This was confirmed by a research study of NHS general managers, who ranked academic journals as the very lowest source of influence (Dopson et al 2013). This contrasts perhaps with a clinical leadership culture which is more closely aligned with biomedical or health services research. Over the years, I was struck at patient safety events by how many senior medical and nursing leaders appeared fluent and conversant with the complex and nuanced work of leading scholars like James Reason or Mary Dixon-Woods. In Chapter 6, I will discuss some of the differences between clinical leaders, largely embracing evidence-based healthcare cultures, and general managers who come from a different tradition. As Walshe and Rundall (2001) note, these are different audiences with different resources and notions of what counts as evidence:

Overall, the tightly defined, well-organized, highly quantitative and relatively generalizable research base for many clinical professions provides a strong and secure foundation for evidence-based practice and lends itself to a systematic process of review and synthesis and to the production of guidelines and protocols. In contrast, the loosely defined, methodologically heterogeneous, widely distributed, and hard-to-generalize research base for healthcare management is much more difficult to use in the same way. (Walshe and Rundall 2001: 443)

Understanding the expectations and cultures of your audience around research and evidence is important when framing your findings to address particular needs.

Make it relevant

It has become a bit of a tired notion to talk about the chasm between research and practice. But it is salutary to read through reviews of research in the company of a busy practitioner or manager. It prompts the question ‘so what?’ All too often, a systematic review will conclude that there was little evidence of quality on a particular question and that more research is needed. Reviewers favour precise, narrow research questions with pre-specified outcomes and parameters in searches which can be replicated by others. This is important in the interest of building up reliable and trustworthy science. But all too often the research that is found does not answer the question that the decision-maker wants to ask.

One of the achievements of research systems, like the NIHR in the UK, was to set up systematic processes to ask stakeholders about the most pressing uncertainties in a particular area and to fund research to answer these questions. This complemented more traditional forms of grant-giving where renowned researchers came up with good ideas to further knowledge and made the case for their project being funded. I support one of NIHR’s national research programmes on delivery and quality of services which identifies priorities for new research from stakeholder workshops, surveys and meetings with clinicians, managers, patients and charities. This gave rise to new research in particular areas of uncertainty, from studies to improve 24/7 care to evaluations of joined-up health and care services for the homeless.

Without a steer from the decision-makers in health and social care, there is a risk that published research is not relevant to real problems and practice. This is the first pillar of research waste, identified by Glasziou and Chalmers (2018). There is little value in a high-quality, reliable randomised trial of a technology which is not likely to be used. The first step in ensuring the value of research is asking questions which are important to clinicians and patients (Box 3.2).

Research example – protective clothing

Will it keep me safe?

Asking the wrong question, or the right question in the wrong way, is a common issue with research. During the COVID-19 crisis, I passed on a recently published rapid review of qualitative research on barriers and enablers to staff adherence to infection control measures (Houghton et al 2020) to a friend who was working as a clinician on high dependency wards with affected patients. It seemed a topical and helpful subject, with a useful focus on staff experience. But to her it was not helpful. The review found 36 papers from different countries and different healthcare settings, and came to some rather general conclusions that adherence depended on training, availability of protective clothing, trust in the organisation and so on. My friend had more specific concerns. She hadn’t been properly fit-tested for a face mask. The discomforts of wearing hot, sweaty, restricting protective equipment had been minimised at the start. There had been some confusion about supply and inconsistent advice on changing of scrubs. The issues, she felt, were very different in different clinical settings, from intensive care to general practice, which had been combined in this review. In short, she did not get any new insights or resonance with her lived experience from this research.

This is not a criticism of the review. It presented fairly and accurately the published research which met the search criteria. But until enough high-quality observational or other research capturing lived experience of patients and staff on infected wards can be added to the evidence base, formal literature reviews are likely to fall short of decision-maker needs.

The right kind of research needs to be funded to answer questions about the most appropriate and effective solutions. But in formulating and understanding the problem, a wide range of information can also be helpful. In the case of protective equipment, more informal sources, from staff surveys to free-text content analysis of WhatsApp exchanges, might have generated more immediate and vivid examples of the problem. More pragmatic approaches to rapid testing and evaluation can generate usable findings on solutions, although it is important to understand the weight of evidence and the extent to which single studies can address questions of ‘what works?’

Timing is key: a policymaker, union leader or health professional wanting to know urgently what matters to staff, or which infection control strategies seem most robust, may find that available published research does not have all the answers. Academic discussion of what counts as ‘good enough’ evidence in areas like face coverings has become part of national discourse in the current pandemic crisis (Greenhalgh et al 2020). Who knew that arcane and scholarly debate on different methodologies and paradigms would be so widely aired in viral discussion of trials of face coverings or vitamin D? It will be interesting to see if these discussions reflect a turn in public understanding of the complex nature of the many kinds of research and evidence which influence policy and practice.

Relevant to whom?

In thinking about generating new knowledge and what is produced, researchers need to be aware of who might be excluded or not heard. There are many ways in which research may reinforce existing patterns of behaviour or information which are unfair or unjust. To take one example, historic failure to record ethnicity in routine data has led to gaps in what we know about inequalities in access or outcome of services for certain groups (Public Health England 2017). It may seem difficult and costly for researchers to reach more marginalised communities to take part in research. Studies which focus on excluded groups, whether they are young offenders or recent immigrants, may be seen as high-risk in terms of delivery against recruitment targets and study milestones. But there are positive steps that can be taken by researchers, identified in a recent conversation I had (Box 3.3) with a researcher leading work on underserved communities and disadvantaged groups.

Interview – Ghazala Mir

Thinking about inclusion in research

Ghazala Mir, based at the University of Leeds, has a longstanding research interest in health and social inequalities. Historic inequalities have been brought into sharp relief recently by the pandemic and Black Lives Matter movement. It is everybody’s business, and Ghazala spoke to me about the kinds of things researchers should think about to address issues of diversity and inclusion.

Hard to reach?

People often feel overwhelmed when thinking about equity and inclusion and how to go about it in the right way. But in fact we know a fair amount from research already about best practice (Mir et al 2013). It’s partly having a different mindset – people talk about groups being hard to reach, but it depends on what your normal networks are. We all have our life experience, circles of influence and social networks. You can be mindful of the kinds of perspectives you have on your study team and advisory groups. You may not cover all bases, particularly if your research is wide-ranging, but as a research team you should expect to represent at least some of the populations you are studying in some way.

Start with the right question

Make sure your research is inclusive from the start. Is the research question relevant? Ghazala leads a large network in Leeds which brings together policymakers, practitioners, service users and voluntary organisations to work out what research will make a difference in reducing inequities in public services. Having the input of people working in and using these services every day means the research which follows is important and collectively owned, not reliant on an individual researcher identifying a gap or following their own particular interest.

Engage throughout

It’s also about how the research is done and who is in your research team. Participatory methods are important in sharing power. You need a range of perspectives in the wider study team to ensure buy-in. You may not be able to cover all bases, but you can identify some important target groups and work with trusted community partners and advocacy groups to reach them. You shouldn’t be generating knowledge about people if you haven’t got their voice in the process. Research about excluded people which isn’t validated by them can actually be harmful or unethical. Participatory methods can build in an important sense of accountability to the people who are the focus of your research.

Don’t assume you understand

Having people on the team who act as cultural brokers, with informal and personal understanding of particular communities, is important in areas like interpreting data. When making sense of interview data, you may misunderstand something when viewing through your own cultural lens. It is important to validate your findings with the right people and check out what you think you are hearing. Part of this is not treating people as a homogenous group – you need to understand how ethnicity intersects with other kinds of difference like social class, religion, age. Ghazala made the point that she has a minority ethnic background but is not an insider in every context, given her education and work status.

Tailor and test outputs

Researchers can also think about the way findings are communicated, investing effort in summaries, visuals and easy-read versions. Making materials easy to read can improve access for many people from minority ethnic groups, as well as people with limited literacy or learning disabilities. Think about the range of language and formats which might work best, as well as issues of language support throughout the study. Consider tailored events or products – for instance, Ghazala ran a separate workshop with service users to share findings from her research study on depression in Muslim communities, as well as involving them in a more general conference. Working with community partners can be very helpful in testing different versions and outputs with target readers and co-designing events with impact and reach.

Right kind of knowledge

A seminal study in the emergent knowledge mobilisation field is the research carried out by John Gabbay and Andrée le May on how staff in general practices used evidence, guidelines and other sources of information in their daily work (Gabbay and le May 2011). Their insights drew on more than two years of ethnographic research, a ‘deep dive’ into working practices and evidence-using behaviour. Through shadowing staff, observing team meetings and quality and audit reviews, and interviewing teams and individuals over time, they concluded that there was little use of published research or clinical guidelines in a formal, explicit way. Instead, knowledge was acquired through speaking to colleagues, trusted opinion leaders, patients and pharmaceutical representatives. Knowledge was laid down through ‘mindlines’ or ‘collectively reinforced, internalised, tacit guidelines’. This included their own training and experience, advice and information from trusted clinicians in their professional network, as well as memories of the ‘last worse case’ in their own or neighbouring practices. The concept of mindlines as a way of describing the complex ways that different kinds of knowledge are used has spawned other studies in the field (Wieringa and Greenhalgh 2015).

Another piece of ethnographic research on the subject of what ‘counts’ as evidence is an early study on patient safety in the UK (Currie et al 2008). In this case, researchers observed surgical teams, department and hospital approaches to identifying and recording patient safety incidents. They concluded that many common errors and patient safety incidents were not counted as they had been normalised and ‘seen as routine … within the everyday context of care delivery’. This included failings due to organisational issues such as staff shortages or availability of beds. Doctors were often reluctant to acknowledge such latent risk factors, terming them ‘organisational’ or ‘non-clinical’ issues. Other kinds of error were around communication problems, missing information at handover or on records. Again, these were often not recognised as incidents by medical staff in the same kind of way that rare but well-defined acts of error, such as wrong site surgery, were identified and reported. The authors concluded that ‘a major concern lies with mediating problems around the nature of knowledge’. In other words, defining what we mean by error or evidence is itself an important question for people trying to make services safer or improve the quality of care.

Evidence, then, is not only the research findings themselves, but how they are interpreted and made sense of together with a range of other inputs. For practitioners this may be ‘praxis’ – the clinical or professional wisdom that comes with years of experience and is embodied as tacit knowledge (Van de Ven and Johnson 2006). For policymakers, this might be a sixth sense of what will play well with ministers or meet the concerns of elected members. Cairney notes:

For scientists, ‘the word evidence is synonymous with research’, but for policymakers such as civil servants, it is ‘more synonymous with data, analysis, or investigation’; ‘evidence’ will include ‘gray literature, raw data’, advice from experts, lessons from other governments, public opinion and, in some cases, anecdotal evidence of success. (Cairney 2016: 22)

I will return again in Chapters 4–6 to the process by which the reader makes sense of findings with and through their own communities of practice. But it is worth highlighting the different kinds of evidence which might be seen as legitimate by different groups of people, including what might be termed practical and research knowledge (Van de Ven and Johnson 2006).

One overview of published evidence on knowledge translation schemes provided three helpful groupings: ‘does the knowledge arise from structured data gathering (empirical knowledge), is it from practical experience (experiential knowledge) or from abstract discourse and debate (theoretical knowledge)?’ (Davies et al 2015: 35). The authors noted the lack of agreement in the literature or in policies and practice of knowledge mobilisation agencies, such as thinktanks or policy research institutes, about what is meant by evidence.

Assessing quality of evidence

I return to the opening question – what counts as evidence? And it seems clear that it may be many different kinds of information with some link to an investigation, exploration or inquiry (research). If we only want to promote ‘high-quality’ evidence, what does this mean? Let’s think first of all about published research. In recent years, the model of academic publishing has changed almost beyond recognition.

In the past, the assumption was that quality was guaranteed by the gatekeepers, that is editors and reviewers of medical journals. We know now that flawed research has been published in good journals. And there is a growing evidence base on the flawed model of peer review for publication and funding research, since an early review found few studies in this area (Smith 2006). Many agree that asking three to five experts for their views on the soundness of methods, interpretation of findings and fit with a wider evidence base is helpful. This is a fundamental principle of science, with the norm of ‘organised scepticism’ in which emerging findings are subject to challenge and criticism by fellow scientists. But these opinions and decisions are often inconsistent and unreliable. The evidence base includes studies showing systematic failure to spot errors, failure to agree on aspects of quality, and the impact of cognitive and institutional biases which may make radical or innovative papers more likely to be rejected (Nicholson and Ioannidis 2012). Indeed, an interesting if controversial proposition has been put forward that peer review should be replaced by a lottery system for all studies meeting a minimum quality bar (Roumbanis 2019). Given the capriciousness inherent in decision-making around publication and the time taken from completion of studies to publication – a matter of years, not months (Welsh et al 2018) – this does not seem unreasonable. So we know there are weaknesses in the current system. And this is important, as recognition and progression in academic careers still rests on attracting grant income and publication in high-impact journals.

In response to these challenges, disruptive models of publishing have developed at pace. Open Access is a general term covering a range of activities by which scholarly papers are made freely available at no cost to the reader. This contrasts with traditional models of scientific publishing where papers are only available by subscription, usually through libraries. The democratic urge to make science directly accessible to the people has fervent advocates and has taken on the nature of a social movement. Open Access takes many forms, but includes models of pay-to-publish (associated at times with journals of variable quality) and platforms which encourage preprint papers, where review happens after publication. By 2016, over half of UK publications were Open Access (Hook et al 2019). This proportion will now be higher, with the commitment since 2018, through Plan S (www.coalition-s.org), by many major European research funding bodies to making funded work openly available. This trend has accelerated in recent months, with the COVID-19 pandemic seeing many of the most influential pieces of research appearing as preprint articles. There are interesting debates at present and the eco-system of academic and science publishing is changing at speed.

The explosion of research activity around COVID-19 has added to the growing volume and production of research – the 2014 estimate that scientific output doubles every nine years is probably conservative (Van Noorden 2014). This is amplified by the ‘noise’ of social media, promoting articles and threads of interest.

This adds to the burden for the consumer or reader. How to discriminate between good and bad research? What is worth reading? There is a responsibility on the researcher not to add to this information overload unthinkingly. We are all ‘cognitive misers’ (Kam 2005 cited by Cairney and Kwiatkowski 2017), looking for the least possible information to address our needs. Not every piece of research needs to find a wide readership. Some may provide a useful block or foundation for other research which can make a difference. This is partly about the status of single studies as opposed to syntheses of evidence. The issue of spin and over-claiming for single studies is discussed briefly in Chapter 9.

But it is worth underlining here that not all studies deserve active promotion to non-academic audiences. Some research will add usefully to the body of knowledge, for instance a scoping review which may set the agenda for future research or a methodological study to validate outcome measures. These are helpful for other researchers, but not likely to be of interest to wider audiences. Spending time in planning the wider promotion of research and engaging with networks and partner bodies is not always needed. Researchers have a responsibility not to ‘push’ research unthinkingly in a crowded market of health and social care information. Understanding what your research means for the intended audience and, importantly, who does not need to know, are important parts of the research planning process.

Involving end users in finding research which matters

In our work in a national health evidence centre, we produced several critical summaries a week of recent evidence. We set up a rating system to help us sift out the most important research from a user perspective. We looked at a good spread of research outputs from major funders and a hundred top biomedical, health management and public health journals each week. We recruited a panel of over a thousand nurses, allied health professionals, doctors, managers, commissioners, public health staff, social workers, patients and the public. We adapted and broadened a model developed at McMaster University in Canada, where a panel of doctors was set up to rate medical abstracts.1 We kept our system simple, asking people on a six-point scale whether they thought the research was interesting, important and worth sharing. They weren’t asked to consider the quality or reliability of research, as we had critical appraisal experts on the team who had already screened the papers using recognised quality assessment tools.

What surprised us was the richness and diversity of the comments and the value of different perspectives on a single piece of research. They told us not only what was important to them but gave us valuable bits of context about how the clinic was usually run and why this information might help with a particular area of uncertainty. This extra layer of sense-making is important, particularly for international reviews of complex service interventions where individual studies may take place in very different health systems. For instance, understanding the findings of nurse-led clinics would depend on the professional scope of practice, cultural norms and models of chronic disease management in different countries. A respiratory specialist nurse could help us judge what this evidence might mean for asthma clinics here. The comments from raters, such as the ones in Figure 3.1 on a study of models of end of life care, helped us not just to choose the best research but to add context on why it mattered to different staff and service users.

Figure 3.1: Stakeholders rating research on specialist palliative care

We cannot consider the ‘what’ of research apart from the ‘who’ or the people who might value it. We will see in the next chapters that the process of making sense of information also creates knowledge. Our rater panel (Box 3.4) was one approach to getting this engagement in a relatively high volume evidence-promoting process. Many research projects engage stakeholders – managers, service users or practitioners – in their project from the outset to make sure that the study is asking the right questions in the right way throughout. This attention to optimising the value of the evidence being produced will make later stages of promoting the research easier and better.

Practical pointers on WHAT counts as evidence

What kinds of information matter to your audience?

Find out what kinds of evidence are used by your primary audience by browsing practice journals, professional newsletters or chat forums. How is information presented and framed? For a manager, this might mean extrapolating your findings to a locality level – if this were implemented in a typical area, it could achieve this number of reduced bed days, cost savings or fewer patient journeys.

Add colour and context to your research

You can enhance and enrich your research findings by finding related news items, patient stories and case studies which may resonate with your target audience. For instance, a musculoskeletal study on referral pathways for low back pain may be strengthened by accompanying interviews with physiotherapists using new triage methods and vignettes of patient journeys.

Passing the relevance test

As your study develops, keep in close touch with people and organisations who are good proxies for the audiences you want to reach. Share emerging findings in different forms and find out what resonates with them (and why). Ask them what they think the main messages are and who might want to know. Frame your findings around the original decision problem or uncertainty which prompted the research, and be clear about the weight of your contribution to what we now know.

  • Adams, C.E., Jayaram, M., Bodart, A.Y.M., Sampson, S., Zhao, S. and Montgomery, A.A. (2016) ‘Tweeting links to Cochrane Schizophrenia Group reviews: a randomised controlled trial’, BMJ Open, 6(3): e010509.

  • Adams, R.C., Challenger, A., Bratton, L., Boivin, J., Bott, L., Powell, G., Williams, A., Chambers, C.D. and Sumner, P. (2019) ‘Claims of causality in health news: a randomised trial’, BMC Medicine, 17(1): 1–11.

  • Aiken, L.H., Sloane, D.M., Bruyneel, L., Van den Heede, K., Griffiths, P., Busse, R. et al (2014) ‘Nurse staffing and education and hospital mortality in nine European countries: a retrospective observational study’, The Lancet, 383: 1824–30.

  • Alvesson, M., Gabriel, Y. and Paulsen, R. (2017) Return to Meaning: A Social Science with Something to Say, Oxford: Oxford University Press.

  • Appleby, J., Raleigh, V., Frosini, F., Bevan, G., Gao, H. and Lyscom, T. (2011) Variations in Health Care: The Good, the Bad and the Inexplicable, London: King’s Fund.

  • Atkins, L., Smith, J.A., Kelly, M.P. and Michie, S. (2013) ‘The process of developing evidence-based guidance in medicine and public health: a qualitative study of views from the inside’, Implementation Science, 8(1): 1–12.

  • Badenoch, D. and Tomlin, A. (2015) ‘Keeping up to date with reliable mental health research: Minervation White Paper’. Available from: www.minervation.com (accessed 19 October 2020).

  • Banks, S., Herrington, T. and Carter, K. (2017) ‘Pathways to co-impact: action research and community organising’, Educational Action Research, 25(4): 541–59.

  • Bath P.M., Woodhouse, L.J., Appleton, J.P., Beridze, M., Christensen, H., Dineen, R.A. et al (2017) ‘Antiplatelet therapy with aspirin, clopidogrel, and dipyridamole versus clopidogrel alone or aspirin and dipyridamole in patients with acute cerebral ischaemia (TARDIS): a randomised, open-label, phase 3 superiority trial’, The Lancet, 391(10123): 850–9.

  • Baxter, K., Heavey, E. and Birks, Y. (2020) ‘Choice and control in social care: experiences of older self-funders in England’, Social Policy & Administration, 54(3): 460–74.

  • Bayley, J. and Phipps, D. (2019) ‘Extending the concept of research impact literacy: levels of literacy, institutional role and ethical considerations’, Emerald Open Research, 1: 14.

  • Bellos, D. (2012) Is That a Fish in Your Ear? Translation and the Meaning of Everything, London: Penguin Books.

  • Beresford, P. (2016) All Our Welfare: Towards Participatory Social Policy, Bristol: Policy Press.

  • Best, A. and Holmes, B. (2010) ‘Systems thinking, knowledge and action: towards better models and methods’, Evidence & Policy: A Journal of Research, Debate and Practice, 6(2): 145–59.

  • Bickerdike, L., Booth, A., Wilson, P.M., Farley, K. and Wright, K. (2017) ‘Social prescribing: less rhetoric and more reality. A systematic review of the evidence’, BMJ Open, 7(4): e013384.

  • Boaz, A. and Nutley, S. (2019) ‘Using evidence’, in A. Boaz, H. Davies, A. Fraser and S. Nutley (eds) What Works Now? Evidence-Informed Policy and Practice, Bristol: Policy Press, pp 251–77.

  • Boaz, A., Biri, D. and McKevitt, C. (2016) ‘Rethinking the relationship between science and society: has there been a shift in attitudes to patient and public involvement and public engagement in science in the United Kingdom?’, Health Expectations, 19(3): 592–601.

  • Boaz, A., Hanney, S., Borst, R., O’Shea, A. and Kok, M. (2018) ‘How to engage stakeholders in research: design principles to support improvement’, Health Research Policy and Systems, 16(1): 1–9.

  • Boaz, A., Davies, H., Fraser, A. and Nutley, S. (eds) (2019) What Works Now? Evidence-informed Policy and Practice, Bristol: Policy Press.

  • Bornbaum, C.C., Kornas, K., Peirson, L. and Rosella, L.C. (2015) ‘Exploring the function and effectiveness of knowledge brokers as facilitators of knowledge translation in health-related settings: a systematic review and thematic analysis’, Implementation Science, 10: 1–12.

  • Bowman, D. (2019) ‘I’m a Professor of Medical Ethics, but having cancer changed my beliefs about medicine’. Available from: www.royalmarsden.nhs.uk/im-professor-medical-ethics-having-cancer-changed-my-beliefs-about-medicine (accessed 24 October 2020).

  • Boyd, B. (2009) On the Origin of Stories: Evolution, Cognition, and Fiction, Cambridge, Mass: Harvard University Press.

  • Braithwaite, J., Glasziou, P. and Westbrook, J. (2020) ‘The three numbers you need to know about healthcare: the 60–30–10 challenge’, BMC Medicine, 18: 1–8.

  • Breckon, J. and Gough, D. (2019) ‘Using evidence in the UK’, in A. Boaz, H. Davies, A. Fraser and S. Nutley (eds) What Works Now? Evidence-Informed Policy and Practice, Bristol: Policy Press, pp 285–302.

  • Brooks, P. (1984) Reading for the Plot: Design and Intention in Narrative, New York: AA Knopf.

  • Brown, J.S. and Duguid, P. (2017) The Social Life of Information: Updated, with a New Preface, Boston, Mass: Harvard Business Review Press.

  • Cairney, P. (2016) The Politics of Evidence-Based Policymaking, London: Palgrave Macmillan.

  • Cairney, P. (2020) Understanding Public Policy (2nd edn), London: Red Globe Press.

  • Cairney, P. and Kwiatkowski, R. (2017) ‘How to communicate effectively with policymakers: combine insights from psychology and policy studies’, Palgrave Communications, 3(1): 1–8.

  • Campbell, J. (2008) The Hero with a Thousand Faces (3rd edn), Novato, Calif: New World Library.

  • Carroll, N. and Conboy, K. (2020) ‘Normalising the “new normal”: changing tech-driven work practices under pandemic time pressure’, International Journal of Information Management, 55: 102186.

  • Chakravarthy, U., Harding, S.P., Rogers, C.A., Downes, S.M., Lotery, A.J., Culliford, L.A. et al (2013) ‘Alternative treatments to inhibit VEGF in age-related choroidal neovascularisation: 2-year findings of the IVAN randomised controlled trial’, The Lancet, 382 (9900): 1258–67.

  • Chalmers, I. and Glasziou P. (2009) ‘Avoidable waste in the production and reporting of research evidence’, The Lancet, 374(9683): 86–9.

  • Chapman, A.L. and Greenhow, C. (2019) ‘Citizen-scholars: social media and the changing nature of scholarship’, Publications, 7(1): 11.

  • Chapman, S. (2017) ‘Frozen shoulder: making choices about treatment’, 12 October. Available from: www.evidentlycochrane.net/frozen-shoulder-2/ (accessed 24 October 2020).

  • Charon, R. (2008) Narrative Medicine: Honoring the Stories of Illness, Oxford: Oxford University Press.

  • Christensen, C.M. (2013) The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail, Boston, Mass: Harvard Business Review Press.

  • Clark D.M. (2018) ‘Realizing the mass public benefit of evidence-based psychological therapies: the IAPT program’, Annual Review of Clinical Psychology, 14: 159–83.

  • Cook, E. (1913) The Life of Florence Nightingale Vol 2 (1862–1910), London: Macmillan, pp 25–35. Available as ebook (2012), Urbana Illinois: Project Gutenberg, at http://www.gutenberg.org/files/40058/40058-h/40058-h.htm (accessed 22 March 2021).

  • Correll, C.U., Galling, B., Pawar, A., Krivko, A., Bonetto, C., Ruggeri, M. et al (2018) ‘Comparison of early intervention services vs treatment as usual for early-phase psychosis: a systematic review, meta-analysis, and meta-regression’, JAMA Psychiatry, 75: 555–65.

  • Cowan, K. and Oliver S. (2021) The James Lind Alliance Guidebook (Version 10), Southampton: National Institute for Health Research Evaluation, Trials and Studies Coordinating Centre. Available from: www.jla.nihr.ac.uk/jla-guidebook/ (accessed 13 March 2021).

  • Currie, G., Waring, J. and Finn, R. (2008) ‘The limits of knowledge management for UK public services modernization: the case of patient safety and service quality’, Public Administration, 86(2): 363–85.

  • Davies, H.T.O., Powell, A.E. and Nutley, S.M. (2015) ‘Mobilising knowledge to improve UK health care: learning from other countries and other sectors – a multimethod mapping study’, Health Services & Delivery Research, 3(27), https://doi.org/10.3310/hsdr03270

  • Dixon-Woods M. (2014) ‘The problem of context in quality improvement’, Perspectives on Context: A Selection of Essays Considering the Role of Context in Successful Quality Improvement, Health Foundation. Available from: www.health.org.uk/publications/perspectives-on-context (accessed 14 March 2021).

  • Dopson, S., Bennett, C., Fitzgerald, L., Ferlie, E., Fischer, M., Ledger, J., McCulloch, J. and McGivern, G. (2013) ‘Health care managers’ access and use of management research’, NIHR Service Delivery and Organisation Programme.

  • Drummond, M. and Banta, D. (2009) ‘Health technology assessment in the United Kingdom’, International Journal of Technology Assessment in Health Care, 25(S1): 178–81.

  • Dunleavy, P. and Tinkler, J. (2020) Maximizing the Impacts of Academic Research: How to Grow the Recognition, Influence, Practical Application and Public Understanding of Science and Scholarship, London: Macmillan.

  • DuVal, G. and Shah, S. (2020) ‘When does evidence from clinical trials influence health policy? A qualitative study of officials in nine African countries of the factors behind the HIV policy decision to adopt Option B+’, Evidence & Policy: A Journal of Research, Debate and Practice, 16(1): 123–44.

  • Elbow, P. (2013) ‘Maybe academics aren’t so stupid after all’, OUPblog, 6 February. Available from: https://blog.oup.com/2013/02/academic-speech-patterns-linguistics/ (accessed 20 October 2020).

  • Elliott, J.H., Turner, T., Clavisi, O., Thomas, J., Higgins, J.P., Mavergames, C. and Gruen, R.L. (2014) ‘Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap’, PLOS Medicine, 11(2): e1001603.

  • Engebretsen, M. and Kennedy, H. (eds) (2020) Data Visualization in Society, Amsterdam: Amsterdam University Press.

  • Evans S. and Scarbrough H. (2014) ‘Supporting knowledge translation through collaborative translational research initiatives: “bridging” versus “blurring” boundary-spanning approaches in the UK CLAHRC initiative’, Social Science & Medicine, 106: 119–27.

  • Fanshawe T.R., Halliwell W., Lindson N., Aveyard, P., Livingstone-Banks, J. and Hartmann-Boyce, J. (2017) ‘Tobacco cessation interventions for young people’, Cochrane Database of Systematic Reviews, 11: CD003289.

  • Featherstone, K., Northcott, A., Harden, J., Harrison-Denning, K., Tope, R., Bale, S. and Bridges, J. (2019) ‘Refusal and resistance to care by people living with dementia being cared for within acute hospital wards: an ethnographic study’, Health Services & Delivery Research, 7(11), https://doi.org/10.3310/hsdr07110

  • Franck, G. (2019) ‘The economy of attention’, Journal of Sociology, 55(1): 8–19.

  • Freeman, R. (2007) ‘Epistemological bricolage: how practitioners make sense of learning’, Administration & Society, 39(4): 476–96.

  • Fulop, N.J., Ramsay, A.I.G., Hunter, R.M., McKevitt, C., Perry, C., Turner, S.J. et al (2019) ‘Evaluation of reconfigurations of acute stroke services in different regions of England and lessons for implementation: a mixed-methods study’, Health Services and Delivery Research, 7(7), https://doi.org/10.3310/hsdr07070

  • Gabbay, J. and le May, A. (2011) Practice-based Evidence for Health Care: Clinical Mindlines, Abingdon: Routledge.

  • Gates, S., Lall, R., Quinn, T., Deakin, C.D., Cooke, M.W., Horton, J., Lamb, S.E., Slowther, A.M., Woollard, M., Carson, A. and Smyth, M. (2017) ‘Prehospital randomised assessment of a mechanical compression device in out-of-hospital cardiac arrest (PARAMEDIC): a pragmatic, cluster randomised trial and economic evaluation’, Health Technology Assessment, 21(11), https://doi.org/10.3310/hta21110

  • Gawande, A. (2014) Being Mortal: Medicine and What Matters in the End, New York: Metropolitan Books.

  • Glasziou, P. and Chalmers, I. (2018) ‘Research waste is still a scandal: an essay by Paul Glasziou and Iain Chalmers’, BMJ, 363: k4645.

  • Glenton, C. (2017) ‘How to write a plain language summary of a Cochrane intervention review’, Cochrane Norway. Available from: www.cochrane.no/sites/cochrane.no/files/public/uploads/how_to_write_a_cochrane_pls_12th_february_2019.pdf (accessed 26 February 2021).

  • Glenton, C., Rosenbaum, S. and Fønhus, M.S. (2019) ‘Checklist and guidance for disseminating findings from Cochrane intervention reviews’, Cochrane. Available from: https://training.cochrane.org/sites/training.cochrane.org/files/public/uploads/Checklist%20FINAL%20version%201.1%20April%202020pdf.pdf (accessed 26 February 2021).

  • Gough, D., Maidment, C. and Sharples, J. (2018) UK What Works Centres: Aims, Methods and Contexts, London: EPPI-Centre. Available from: https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3731 (accessed 14 March 2021).

  • Graff, G. and Birkenstein, C. (2010) ‘They Say/I Say’: The Moves that Matter in Persuasive Writing (2nd edn), New York: Norton.

  • Graham, I.D. and Tetroe, J. (2007) ‘Some theoretical underpinnings of knowledge translation’, Academy of Emergency Medicine, 14(11): 936–41.

  • Gravier, E. (2019) ‘Spending 2 hours in nature each week can make you happier and healthier, new study says’, 2 July, cnbc.com (online), www.cnbc.com/2019/07/02/spending-2-hours-in-nature-per-week-can-make-you-happier-and-healthier.html (accessed 14 March 2021).

  • Green, L.W. (2008) ‘Making research relevant: if it is an evidence-based practice, where’s the practice-based evidence?’ Family Practice, 25(Suppl 1): i20–4.

  • Greenhalgh, T. (2018) How to Implement Evidence-Based Healthcare, Oxford: Wiley Blackwell.

  • Greenhalgh, T. and Wieringa, S. (2011) ‘Is it time to drop the “knowledge translation” metaphor? A critical literature review’, Journal of the Royal Society of Medicine, 104(12): 501–09.

  • Greenhalgh, T. and Fahy, N. (2015) ‘Research impact in the community-based health sciences: an analysis of 162 case studies from the 2014 UK Research Excellence Framework’, BMC Medicine, 13(1): 1–12.

  • Greenhalgh, T., Schmid, M.B., Czypionka, T., Bassler, D. and Gruer, L. (2020) ‘Face masks for the public during the covid-19 crisis’, BMJ, 369: m1435.

  • Grey, C. (2012) Decoding Organization: Bletchley Park, Codebreaking and Organization Studies, Cambridge: Cambridge University Press.

  • Griffiths, P., Ball, J., Bloor, K., Böhning, D., Briggs, J., Dall’Ora, C. et al (2018) ‘Nurse staffing levels, missed vital signs and mortality in hospitals: retrospective longitudinal observational study’, Health Services & Delivery Research: 6(38), https://doi.org/10.3310/hsdr06380

  • Hanney, S.R., Castle-Clarke, S., Grant, J., Guthrie, S., Henshall, C., Mestre-Ferrandiz, J., Pistollato, M., Pollitt, A., Sussex, J. and Wooding, S. (2015) ‘How long does biomedical research take? Studying the time taken between biomedical and health research and its translation into products, policy, and practice’, Health Research Policy and Systems, 13(1): 1–18.

  • Harris R., Sims S., Leamy M., Levenson R., Davies N., Brearley S. et al (2019) ‘Intentional rounding in hospital wards to improve regular interaction and engagement between nurses and patients: a realist evaluation’, Health Services & Delivery Research, 7(35), https://doi.org/10.3310/hsdr07350

  • Haux, T. (2019) Dimensions of Impact in the Social Sciences: The Case of Social Policy, Sociology and Political Science Research, Bristol: Policy Press.

  • Hickey, G., Richards, T. and Sheehy, J. (2018) ‘Co-production from proposal to paper’, Nature, 562: 29–31.

  • Hogwood, B.W. and Gunn, L.A. (1984) Policy Analysis for the Real World, Oxford: Oxford University Press.

  • Holmes, A., Dixon-Woods, M., Ahmad, R., Brewster, E., Castro Sanchez, E.M., Secci, F., Zingg, W. et al (2015) Infection Prevention and Control: Lessons from Acute Care in England. Towards a Whole Health Economy Approach, Health Foundation. Available from: www.health.org.uk/publications/infection-prevention-and-control-lessons-from-acute-care-in-england (accessed 14 March 2021).

  • Holmes, B.J., Best, A., Davies, H., Hunter, D., Kelly, M.P., Marshall, M. and Rycroft-Malone, J. (2017) ‘Mobilising knowledge in complex health systems: a call to action’, Evidence & Policy, 13(3): 539–60.

  • Hook, D.W., Calvert, I. and Hahnel, M. (2019) The Ascent of Open Access: An Analysis of the Open Access Landscape since the Turn of the Millennium. Available from: https://digitalscience.figshare.com/articles/report/The_Ascent_of_Open_Access/7618751 (accessed 14 March 2021).

  • Hopkins, C. (1923) Scientific Advertising, New York: Crown Publishers.

  • Houghton, C., Meskell, P., Delaney, H., Smalle, M., Glenton, C., Booth, A. et al (2020) ‘Barriers and facilitators to healthcare workers’ adherence with infection prevention and control (IPC) guidelines for respiratory infectious diseases: a rapid qualitative evidence synthesis’, Cochrane Database of Systematic Reviews, 4.

  • Hutchinson, J.R. and Huberman, M. (1994) ‘Knowledge dissemination and use in science and mathematics education: a literature review’, Journal of Science Education and Technology, 3: 27–47.

  • Isett, K.R. and Hicks, D. (2020) ‘Pathways from research into public decision making: intermediaries as the third community’, Perspectives on Public Management and Governance, 3(1): 45–58.

  • Johnson, D., Deterding, S., Kuhn, K.A., Staneva, A., Stoyanov, S. and Hides, L. (2016) ‘Gamification for health and wellbeing: a systematic review of the literature’, Internet Interventions, 6: 89–106.

  • Kam, C.D. (2005) ‘Who toes the party line? Cues, values, and individual differences’, Political Behavior, 27(2): 163–82.

  • Kazdin, A. E. (2008) ‘Evidence-based treatment and practice: new opportunities to bridge clinical research and practice, enhance the knowledge base, and improve patient care’, American Psychologist, 63(3), 146–59.

  • Kincheloe, J.L. (2001) ‘Describing the bricolage: conceptualizing a new rigor in qualitative research’, Qualitative Inquiry, 7(6): 679–92.

  • Kingdon, J. (1995) Agendas, Alternatives and Public Policies (2nd edn), New York: Harper Collins.

  • Lamont, T. (2020) ‘But does it work? Evidence, policy-making and systems thinking: comment on “what can policy-makers get out of systems thinking? Policy partners’ experiences of a systems-focused research collaboration in preventive health”’, International Journal of Health Policy and Management, 10(5): 287–9, doi: 10.34172/ijhpm.2020.71

  • Landhuis, E. (2016) ‘Scientific literature: information overload’, Nature, 535: 457–8.

  • Langer, L., Tripney, J. and Gough, D.A. (2016) The Science of Using Science: Researching the Use of Research Evidence in Decision-Making, London: EPPI-Centre, Social Science Research Unit, UCL Institute of Education, University College London. Available from: https://eppi.ioe.ac.uk/cms/Default.aspx?tabid=3504 (accessed 14 March 2021).

  • Larivière, V., Gingras, Y. and Archambault, É. (2009) ‘The decline in the concentration of citations, 1900–2007’, Journal of the American Society for Information Science and Technology, 60(4): 858–62.

  • Lave J. and Wenger E. (1991) Situated Learning: Legitimate Peripheral Participation, New York: Cambridge University Press.

  • Lavis J.N., Permanand G., Oxman A.D., Lewin S. and Fretheim A. (2009) SUPPORT Tools for evidence-informed health policymaking (STP) 13: preparing and using policy briefs to support evidence-informed policymaking. Health Research Policy and Systems, 7(Suppl 1): S13, doi:10.1186/1478-4505-7-S1-S13

  • Layard R., Clark D.M., Knapp M. and Mayraz G. (2007) ‘Cost-benefit analysis of psychological therapy’, National Institute Economic Review, 202(1): 90–8.

  • Leder, D. (1990) The Absent Body, Chicago: University of Chicago Press.

  • Leith, S. (2012) You Talkin’ to Me: Rhetoric from Aristotle to Obama, London: Profile Books.

  • Lindstrom, M. (2012) Brandwashed: Tricks Companies Use to Manipulate our Minds and Persuade Us to Buy, London: Kogan Page Publishers.

  • Lomas J. (2000) ‘Using “linkage and exchange” to move research into policy at a Canadian foundation’, Health Affairs (Millwood), 19(3): 236–40.

  • Maben, J., Peccei, R., Adams, M., Robert, G., Richardson, A., Murrells, T. and Morrow, E. (2012) Exploring the Relationship between Patients’ Experiences of Care and the Influence of Staff Motivation, Affect and Wellbeing, Final report. Southampton: NIHR service delivery and organization programme.

  • Maguire, L.K. and Clarke, M. (2014) ‘How much do you need: a randomised experiment of whether readers can understand the key messages from summaries of Cochrane Reviews without reading the full review’, Journal of the Royal Society of Medicine, 107(11): 444–9.

  • Marshall, M.N. (2014) ‘Bridging the ivory towers and the swampy lowlands: increasing the impact of health services research on quality improvement’, International Journal for Quality in Health Care, 26(1): 1–5.

  • May, C.R. and Finch, T. (2009) ‘Implementing, embedding, and integrating practices: an outline of normalization process theory’, Sociology, 43(3): 535–54.

  • May, C.R., Eton, D.T., Boehmer, K., Gallacher, K., Hunt, K., MacDonald, S. et al (2014) ‘Rethinking the patient: using Burden of Treatment Theory to understand the changing dynamics of illness’, BMC Health Services Research, 14(1): 1–11.

  • Maybin, J. (2016) Producing Health Policy: Knowledge and Knowing in Government Policy Work, Basingstoke: Palgrave Macmillan.

  • McDonald, L. (ed) (2005) Collected Works of Florence Nightingale (Vol. 8), Waterloo ON: Wilfrid Laurier University Press.

  • McDonald, L. (2015) ‘Florence Nightingale: a research-based approach to health, healthcare and hospital safety’, in F. Collyer (ed) The Palgrave Handbook of Social Theory in Health, Illness and Medicine, New York: Springer, pp 59–74.

  • Meacock, R., Anselmi, L., Kristensen, S.R., Doran, T. and Sutton, M. (2017) ‘Higher mortality rates amongst emergency patients admitted to hospital at weekends reflect a lower probability of admission’, Journal of Health Services Research & Policy, 22(1): 12–19.

  • Mintrom, M. (2019) ‘So you want to be a policy entrepreneur?’, Policy Design and Practice, 2(4): 307–23.

  • Mir, G., Salway, S., Kai, J., Karlsen, S., Bhopal, R., Ellison, G.T. and Sheikh, A. (2013) ‘Principles for research on ethnicity and health: the Leeds Consensus Statement’, The European Journal of Public Health, 23(3): 504–10.

  • Mitchell, K.R., Purcell, C., Forsyth, R., Barry, S., Hunter, R., Simpson, S.A. et al (2020) ‘A peer-led intervention to promote sexual health in secondary schools: the STASH feasibility study’, Public Health Research, 8(15), https://doi.org/10.3310/phr08150

  • Moat, K.A., Lavis, J.N. and Abelson, J. (2013) ‘How contexts and issues influence the use of policy-relevant research syntheses: a critical interpretive synthesis’, The Milbank Quarterly, 91(3): 604–48.

  • Morris, S., Hunter, R.M., Ramsay, A.I.G., Boaden, R., McKevitt, C., Perry, C. et al (2014) ‘Impact of centralising acute stroke services in English metropolitan areas on mortality and length of hospital stay: difference-in-differences analysis’, BMJ, 349: g4757.

  • Morris, S., Ramsay A.I.G., Boaden R.J., Hunter R.M., McKevitt C., Paley L. et al (2019) ‘Impact and sustainability of centralising acute stroke services in English metropolitan areas: retrospective analysis of hospital episode statistics and stroke national audit data’, BMJ, 364: 11.

  • Morris, Z.S., Wooding, S. and Grant, J. (2011) ‘The answer is 17 years, what is the question: understanding time lags in translational research’, Journal of the Royal Society of Medicine, 104(12): 510–20.

  • National Institute for Health and Care Excellence (NICE) (2016) Psychosis and Schizophrenia in Children and Young People: Recognition and Management. Clinical Guideline (CG155). Available from: www.nice.org.uk/guidance/cg155 (accessed 4 February 2021).

  • National Institute for Health and Care Excellence (NICE) (2017) Intrapartum Care for Healthy Women and Babies. Clinical Guideline (CG190). Available from: www.nice.org.uk/guidance/cg190 (accessed 4 February 2021).

  • National Institute for Health and Care Excellence (NICE) (2020) Developing NICE Guidelines: The Manual. Process and Methods (PMG20). Available from: www.nice.org.uk/process/pmg20/chapter/glossary (accessed 4 February 2021).

  • National Institute for Health Research (NIHR) (2020) Living with COVID-19, NIHR Evidence. Available from: https://evidence.nihr.ac.uk/themedreview/living-with-covid19/ (accessed 30 October 2020).

  • National Institute for Health Research (NIHR) (2021) Living with COVID-19, Second Review, NIHR Evidence. Available from: https://evidence.nihr.ac.uk/themedreview/living-with-covid19-second-review/ (accessed 20 March 2021).

  • Newbould, J., Ball, S., Abel, G., Barclay, M., Brown, T., Corbett, J. et al (2019) ‘A “telephone first” approach to demand management in English general practice: a multimethod evaluation’, Health Service & Delivery Research, 7(17), https://doi.org/10.3310/hsdr07170

  • Newman, T.B. (2003) ‘The power of stories over statistics’, BMJ, 327(7429): 1424–7.

  • NHS (2019) The NHS Long Term Plan, https://www.longtermplan.nhs.uk/

  • Nicholson, J. and Ioannidis, J. (2012) ‘Conform and be funded’, Nature, 492(7427): 34–6.

  • Nicolini, D., Powell, J. and Korica, M. (2014) ‘Keeping knowledgeable: how NHS Chief Executives mobilise knowledge and information in their daily work’, Health Services & Delivery Research, 2(26), https://doi.org/10.3310/hsdr02260

  • Nixon, J., Smith, I.L., Brown, S., McGinnis, E., Vargas-Palacios, A., Nelson, E.A., Coleman, S., Collier, H., Fernandez, C., Gilberts, R. and Henderson, V. (2019) ‘Pressure relieving support surfaces for pressure ulcer prevention (PRESSURE 2): clinical and health economic results of a randomised controlled trial’, EClinicalMedicine, 14: 42–52.

  • Ocloo, J. and Matthews, J. (2016) ‘From tokenism to empowerment: progressing patient and public involvement in healthcare improvement’, BMJ Quality & Safety, 25(8): 626–32.

  • Oliver, K. and Boaz, A. (2019) ‘Transforming evidence for policy and practice: creating space for new conversations’, Palgrave Communications, 5: 60.

  • Oliver, K. and Cairney, P. (2019) ‘The dos and don’ts of influencing policy: a systematic review of advice to academics’, Palgrave Communications, 5: 21.

  • Oliver, K., Innvar, S., Lorenc, T., Woodman, J. and Thomas J. (2014) ‘A systematic review of barriers to and facilitators of the use of evidence by policymakers’, BMC Health Services Research, 14(1): 1–12.

  • Oliver, K., Kothari, A. and Mays, N. (2019) ‘The dark side of coproduction: do the costs outweigh the benefits for health research?’, Health Research Policy and Systems, 17(1): 33.

  • Oran, D.P. and Topol, E.J. (2020) ‘Prevalence of asymptomatic SARS-CoV-2 infection: a narrative review’, Annals of Internal Medicine, 173: 362–7.

  • Oxman, A.D., Glenton, C., Flottorp, S., Lewin, S., Rosenbaum, S. and Fretheim, A. (2020) ‘Development of a checklist for people communicating evidence-based information about the effects of healthcare interventions: a mixed methods study’, BMJ Open, 10(7): e036348.

  • Pagel, C., Rogers, L., Brown, K., Ambler, G., Anderson, D., Barron, D. et al (2017) ‘Improving risk adjustment in the PRAiS (Partial Risk Adjustment in Surgery) model for mortality after paediatric cardiac surgery and improving public understanding of its use in monitoring outcomes’, Health Services & Delivery Research, 5(23), https://doi.org/10.3310/hsdr05230

  • Petchey, R., Hughes, J., Pinder, R., Needle, J., Partington, J. and Sims, D. (2013) Allied Health Professionals and Management: An Ethnographic Study, Southampton: National Institute for Health Research.

  • Phillips, D.P., Kanter, E.J., Bednarczyk, B. and Tastad, P.L. (1991) ‘Importance of the lay press in the transmission of medical knowledge to the scientific community’, New England Journal of Medicine, 325: 1180–3.

  • Pollitt, C. and Bouckaert, G. (2011) Public Management Reform: A Comparative Analysis: NPM, Governance and the Neo-Weberian State (3rd edn), Oxford: Oxford University Press.

  • Porter, M.E. (1985) The Competitive Advantage: Creating and Sustaining Superior Performance, New York: Free Press.

  • Powell, P. (2010) The Interrogative Mood, London: Profile Books.

  • Prichard, C. (2013) ‘All the lonely papers, where do they all belong?’, Organization, 20(1): 143–50.

  • Public Health England (2017) Public Health Outcomes Framework: Health Equity Report Focus on Ethnicity, London: Public Health England. Available from: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/733093/PHOF_Health_Equity_Report.pdf (accessed 1 February 2021).

  • Pyrko, I., Dörfler, V. and Eden, C. (2017) ‘Thinking together: what makes communities of practice work?’, Human Relations, 70(4): 389–409.

  • Radford, M. (2011) ‘A manifesto for the simple scribe: my 25 commandments for journalists’, The Guardian (online) 19 January. Available from: www.theguardian.com/science/blog/2011/jan/19/manifesto-simple-scribe-commandments-journalists (accessed 24 October 2020).

  • Rangan, A., Handoll, H., Brealey, S., Jefferson, L., Keding, A., Martin, B.C. et al (2015) ‘Surgical vs nonsurgical treatment of adults with displaced fractures of the proximal humerus: the PROFHER randomized clinical trial’, JAMA, 313(10): 1037–47.

  • RECOVERY Collaborative Group (2021) ‘Dexamethasone in hospitalized patients with Covid-19’, New England Journal of Medicine, 384(8): 693–704.

  • Reed, M. (2018) The Research Impact Handbook (2nd edn), Aberdeenshire: Fast Track Impact.

  • REF2021 (2019) ‘Assessment framework and guidance on submissions’. Available from: www.ref.ac.uk/publications/guidance-on-submissions-201901/ (accessed 17 March 2021).

  • Renolen, Å., Høye, S., Hjälmhult, E., Danbolt, L.J. and Kirkevold, M. (2018) ‘“Keeping on track” – hospital nurses’ struggles with maintaining workflow while seeking to integrate evidence-based practice into their daily work: a grounded theory study’, International Journal of Nursing Studies, 77: 179–88.

  • Rickinson, M., Walsh, L., Cirkony, C., Salisbury, M. and Gleeson, J. (2020) Quality Use of Research Evidence Framework, Melbourne: Monash University. Available from: www.monash.edu/education/research/projects/qproject/publications/quality-use-of-research-evidence-framework-qure-report (accessed 17 March 2021).

  • Roumbanis, L. (2019) ‘Peer review or lottery? A critical analysis of two different forms of decision-making mechanisms for allocation of research grants’, Science, Technology, & Human Values, 44(6): 994–1019.

  • Rudd, A.G., Bowen, A., Young, G. and James, M.A. (2017) ‘National clinical guideline for stroke’, Clinical Medicine, 17: 154–5.

  • Rushmer, R.K., Cheetham, M., Cox, L., Crosland, A., Gray, J., Hughes, L. et al (2015) ‘Research utilisation and knowledge mobilisation in the commissioning and joint planning of public health interventions to reduce alcohol-related harms: a qualitative case design using a cocreation approach’, Health Services & Delivery Research, 3(33), https://doi.org/10.3310/hsdr03330

  • Rutter, H., Savona, N., Glonti, K., Bibby, J., Cummins, S., Finegood, D.T. et al (2017) ‘The need for a complex systems model of evidence for public health’, The Lancet, 390(10112): 2602–04.

  • Rycroft-Malone, J., Burton, C., Wilkinson, J., Harvey, G., McCormack, B., Baker, R. et al (2015) ‘Collective action for knowledge mobilisation: a realist evaluation of the Collaborations for Leadership in Applied Health Research and Care’, Health Services & Delivery Research, 3(44), https://doi.org/10.3310/hsdr03440

  • Sacks, O. (2014) The Man Who Mistook His Wife for a Hat, London: Pan Macmillan.

  • Santesso, N., Rader, T., Nilsen, E.S., Glenton, C., Rosenbaum, S., Ciapponi, A. et al (2015) ‘A summary to communicate evidence from systematic reviews to the public improved understanding and accessibility of information: a randomized controlled trial’, Journal of Clinical Epidemiology, 68(2): 182–90.

  • Santesso, N., Morgano, G.P., Jack, S.M., Haynes, R.B., Hill, S., Treweek, S. and Schünemann, H.J. (2016) ‘Dissemination of clinical practice guidelines: a content analysis of patient versions’, Medical Decision Making, 36(6): 692–702.

  • Scales, K., Bailey, S., Middleton, J. and Schneider, J. (2017) ‘Power, empowerment, and person-centred care: using ethnography to examine the everyday practice of unregistered dementia care staff’, Sociology of Health & Illness, 39(2): 227–43.

  • Shaw, I. and Lunt, N. (2018) ‘Forms of practitioner research’, British Journal of Social Work, 48(1): 141–57.

  • Sheikh, K. (2019) ‘How much nature is enough? 120 minutes a week, doctors say’, The New York Times, 13 June. Available from: www.nytimes.com/2019/06/13/health/nature-outdoors-health.html (accessed 17 March 2021).

  • Shojania, K.G., Sampson, M., Ansari, M.T., Ji, J., Doucette, S. and Moher, D. (2007) ‘How quickly do systematic reviews go out of date? A survival analysis’, Annals of Internal Medicine, 147(4): 224–33.

  • Smith, K.E., Bandola-Gill, J., Meer, N., Stewart, E. and Watermeyer, R. (2020) The Impact Agenda: Controversies, Consequences and Challenges, Bristol: Policy Press.

  • Smith, R. (2006) ‘Peer review: a flawed process at the heart of science and journals’, Journal of the Royal Society of Medicine, 99(4): 178–82.

  • Soares-Weiser, K. (2011) ‘Audit of the abstract, plain language summary and summary of findings tables in published Cochrane reviews’, Cochrane Collaboration. Available from: www.dropbox.com/s/39mp8t1jc7817ik/Abstract%20audit%20report%20CEU%202012.pdf (accessed 17 March 2021).

  • Squires, J.E., Hutchinson, A.M., Boström, A.M., O’Rourke, H.M., Cobban, S.J. and Estabrooks, C.A. (2011) ‘To what extent do nurses use research in clinical practice? A systematic review’, Implementation Science, 6(1): 1–17.

  • Staley, K., Crowe, S., Crocker, J.C., Madden, M. and Greenhalgh, T. (2020) ‘What happens after James Lind Alliance priority setting partnerships? A qualitative study of contexts, processes and impacts’, Research Involvement and Engagement, 6(1): 41.

  • Storr, W. (2019) The Science of Storytelling, London: William Collins.

  • Straus, S., Tetroe, J. and Graham, I.D. (eds) (2013) Knowledge Translation in Health Care: Moving from Evidence to Practice (2nd edn), Chichester: John Wiley & Sons.

  • Sugimoto, C.R., Work, S., Larivière, V. and Haustein, S. (2017) ‘Scholarly use of social media and altmetrics: a review of the literature’, Journal of the Association for Information Science and Technology, 68(9): 2037–62.

  • Sumner, P., Vivian-Griffiths, S., Boivin, J., Williams, A., Venetis, C.A., Davies, A. et al (2014) ‘The association between exaggeration in health related science news and academic press releases: retrospective observational study’, BMJ, 349.

  • Sumner, P., Vivian-Griffiths, S., Boivin, J., Williams, A., Bott, L., Adams, R.C. et al (2016) ‘Exaggerations and caveats in press releases and health-related science news’, PLoS One, 11(12): e0168217.

  • Swan, J., Clarke, A., Nicolini, D., Powell, J., Scarbrough, H., Roginski, C. et al (2012) Evidence in Management Decisions (EMD): Advancing Knowledge Utilization in Healthcare Management, NIHR Service Delivery and Organisation Programme.

  • Sword, H. (2012) Stylish Academic Writing, Cambridge, Mass: Harvard University Press.

  • Synnot, A.J., Lowe, D., Merner, B. and Hill, S.J. (2018) ‘The evolution of Cochrane evidence summaries in health communication and participation: seeking and responding to stakeholder feedback’, Evidence & Policy, 14(2): 335–47.

  • Terkel, S. (1970) Hard Times: An Oral History of the Great Depression, New York: Pantheon Books.

  • Thompson, G.N., Estabrooks, C.A. and Degner, L.F. (2006) ‘Clarifying the concepts in knowledge transfer: a literature review’, Journal of Advanced Nursing, 53(6): 691–701.

  • Thomson, H. (2013) ‘Improving utility of evidence synthesis for healthy public policy: the three Rs (relevance, rigor, and readability [and resources])’, American Journal of Public Health, 103: e17–23.

  • Tierney, S., Wong, G., Roberts, N., Boylan, A.-M., Park, S., Abrams, R. et al (2020) ‘Supporting social prescribing in primary care by linking people to local assets: a realist review’, BMC Medicine, 18(1): 1–15.

  • Timmins, N., Rawlins, M. and Appleby, J. (2017) ‘A terrible beauty: a short history of NICE, the National Institute for Health and Care Excellence [version 1; not peer reviewed]’, F1000Research, 6: 915.

  • Tricco, A.C., Cardoso, R., Thomas, S.M., Motiwala, S., Sullivan, S., Kealey, M.R. et al (2016) ‘Barriers and facilitators to uptake of systematic reviews by policy makers and health care managers: a scoping review’, Implementation Science, 11(1): 1–20.

  • Turnbull, J., McKenna, G., Prichard, J., Rogers, A., Crouch, R., Lennon, A. and Pope, C. (2019) ‘Sense-making strategies and help-seeking behaviours associated with urgent care services: a mixed-methods study’, Health Services & Delivery Research, 7(26), https://doi.org/10.3310/hsdr07260

  • Van de Ven, A.H. (2007) Engaged Scholarship: A Guide for Organizational and Social Research, Oxford: Oxford University Press.

  • Van de Ven, A.H. and Johnson, P.E. (2006) ‘Knowledge for theory and practice’, Academy of Management Review, 31(4): 802–21.

  • Van Noorden, R. (2014) ‘Global scientific output doubles every nine years’, Nature news blog (online) 7 May. Available from: http://blogs.nature.com/news/2014/05/global-scientific-output-doubles-every-nine-years.html (accessed 17 March 2021).

  • Vogel, J.P., Oxman, A.D., Glenton, C., Rosenbaum, S., Lewin, S., Gülmezoglu, A.M. and Souza, J.P. (2013) ‘Policymakers and other stakeholders perceptions of key considerations for health system decisions and the presentation of evidence to inform those considerations: an international survey’, Health Research Policy and Systems, 11(1): 1–9.

  • Vosoughi, S., Roy, D. and Aral, S. (2018) ‘The spread of true and false news online’, Science, 359(6380): 1146–51.

  • Wallace, J., Byrne, C. and Clarke, M. (2012) ‘Making evidence more wanted: a systematic review of facilitators to enhance the uptake of evidence from systematic reviews and meta-analyses’, International Journal of Evidence Based Healthcare, 10(4): 338–46.

  • Wallace, J., Byrne, C. and Clarke, M.J. (2014) ‘Improving the uptake of systematic reviews: a systematic review of intervention effectiveness and relevance’, BMJ Open, 4: e005834.

  • Waller, R. (2011) Simplification: What Is Gained and What Is Lost. Technical report. Available from: www.academia.edu/3385977/Simplification_what_is_gained_and_what_is_lost (accessed 17 March 2021).

  • Walshe, K. and Rundall, T.G. (2001) ‘Evidence-based management: from theory to practice in health care’, The Milbank Quarterly, 79(3): 429–57.

  • Ward, V., House, A. and Hamer, S. (2009) ‘Knowledge brokering: the missing link in the evidence to action chain?’ Evidence & Policy, 5(3): 267–79.

  • Weick, K. (1995) Sensemaking in Organizations, Thousand Oaks, California: Sage Publications.

  • Welsh, J., Lu, Y., Dhruva, S.S., Bikdeli, B., Desai, N.R., Benchetrit, L. et al (2018) ‘Age of data at the time of publication of contemporary clinical trials’, JAMA Network Open, 1(4): e181065.

  • Westen, D. (2008) The Political Brain: The Role of Emotion in Deciding the Fate of the Nation, New York: PublicAffairs Books.

  • White, M.P., Alcock, I., Grellier, J., Wheeler, B.W., Hartig, T., Warber, S.L. et al (2019) ‘Spending at least 120 minutes a week in nature is associated with good health and wellbeing’, Scientific Reports, 9(1): 1–11.

  • Whitty, C.J.M. (2015) ‘What makes an academic paper useful for health policy?’, BMC Medicine, 13: 301.

  • Wickremasinghe, D., Kuruvilla, S., Mays, N. and Avan, B.I. (2016) ‘Taking knowledge users’ knowledge needs into account in health: an evidence synthesis framework’, Health Policy and Planning, 31(4): 527–37.

  • Widdowson, H.G. (1979) Explorations in Applied Linguistics, London: Oxford University Press.

  • Wieringa, S. and Greenhalgh, T. (2015) ‘10 years of mindlines: a systematic review and commentary’, Implementation Science, 10(1): 45.

  • Williams, O. and Annandale, E. (2020) ‘Obesity, stigma and reflexive embodiment: feeling the “weight” of expectation’, Health, 24(4): 421–41.

  • Williams, O., Sarre, S., Papoulias, S.C., Knowles, S., Robert, G., Beresford, P. et al (2020) ‘Lost in the shadows: reflections on the dark side of co-production’, Health Research Policy and Systems, 18: 1–10.

  • Wilsdon, J.R. (2017) ‘Responsible metrics’, in T. Strike (ed) Higher Education Strategy and Planning: A Professional Guide, Abingdon: Routledge, pp 247–53.

  • Wilson, P. and Sheldon, T.A. (2019) ‘Using evidence in health and healthcare’, in A. Boaz, H. Davies, A. Fraser and S. Nutley (eds) What Works Now? Evidence-Informed Policy and Practice, Bristol: Policy Press, pp 67–88.

  • Wye, L., Brangan, E., Cameron, A., Gabbay, J., Klein, J. and Pope, C. (2015) ‘Knowledge exchange in health-care commissioning: case studies of the use of commercial, not-for-profit and public sector agencies’, Health Services & Delivery Research, 3(19).
