Abstract
Background:
Conceptualisations of what it means to use evidence in policymaking often appear divided between two extremes. On one side are works presenting it as the implementation of research findings – particularly evaluations of intervention effect. In contrast stand theoretically informed works exploring the multiple meanings of evidence use, political complexities, and the constructed nature of research evidence itself. The first perspective has been criticised as over-simplistic, while the second can make it difficult to answer questions of what might be good, or improved, uses of evidence in policymaking.
Methods:
To advance this debate, this paper develops a ‘programmatic approach’ to evidence use, drawing on theories of institutional decision making and empirical work on evidence use within 11 National Malaria Control Programmes in Africa. We apply the programmatic approach by investigating the key goals and tasks of programme officials, recognising that these shape the routines and logics that officials follow, which in turn affect evidence utilisation. We then map out the forms, sources, features, and applications of evidence that serve programme officials in their goals.
Findings:
In the case of malaria programmes, evidence use was understood in relation to tasks including: advocacy for funding, budget allocation, regulation development, national planning, and identification of information gaps – all of which might require different evidence sources, forms, and applications.
Discussion and conclusions:
Ultimately the programmatic approach aims to facilitate clearer understanding of what uses of evidence are appropriate to policymakers, while also allowing critical reflection on whether such uses are ‘good’ from both programme and broader social perspectives.
Key messages
Conceptualisations of evidence use are shaped by the goals and tasks of administrative programme officials.
Institutional logics shape perceptions of the appropriate forms and applications of evidence for policy needs.
A programmatic approach allows reflection on what constitutes improved uses of evidence within policymaking.
Introduction
An enduring concern in the field of evidence and policy has been how to conceptualise the use of evidence within a policymaking arena – both in terms of what ‘use’ can mean, and in terms of what might constitute appropriate uses of evidence for particular policymaking needs and goals (see Parkhurst, 2017; Parkhurst and Abeysinghe, 2016). To date, two main bodies of writing have dominated the literature. The first is work on knowledge translation (KT) and its related concepts of knowledge brokering, knowledge management, or knowledge exchange (Shaxson et al, 2012) – work concerned with how to maximise the uptake and implementation of results from research studies, often looking at the barriers or facilitators to ‘use’ defined in this way (see Innvaer et al, 2002; Oliver et al, 2014; van der Arend, 2014). In contrast stands the work of critical policy scholars who explore the political nature of policymaking to help understand why or how types of evidence may be used (or not used) in policy settings.
Many politically-informed authors are critical of simplifications or assumptions that lie behind the KT approach, often citing the early work of Carol Weiss (1979; 1977), who noted that there can be a range of different ways in which research can be utilised. Thus, these scholars typically reject the idea that there is a single way that evidence can be simply ‘taken up’ for policy purposes (see Lewis, 2003; Russell et al, 2008). Subsequent work by Nutley and colleagues (2007) has built on Weiss and mapped out even more ways that ‘evidence use’ has been conceptualised. Politically-oriented authors have instead approached the issue from the perspective that ‘evidence use’ can mean any number of things within a policy setting, with a vast range of concepts applied to help understand the politically-constructed nature of one or another form of usage. Examples of such concepts include bounded rationality (Cairney, 2016); diffusion of innovation (Nutley and Davies, 2000); institutionalism (Parkhurst et al, 2018); framing and cognitive sciences (Parkhurst, 2012; Cairney et al, 2016; Parkhurst, 2016); boundary work (Jasanoff, 1987; Hoppe, 2009); or combinations of multiple approaches (Gibson, 2003).
The literature can, therefore, appear divided into two extremes: either evidence use is taken for granted to be a known (assumed to be good) thing, with little consideration of political realities, or alternatively it is seen as multidimensional, the form of which is constructed by the nature of policy ideas, processes, and interactions. The first of these has been widely critiqued as over-simplistic, only valid in the most circumscribed technical decision-making situations. Yet the second, in its constructivist orientation, can leave practitioners with little clarity on how to identify what ‘evidence use’ means from a policy perspective, or how to improve its application for social goals. Ultimately, this divergence makes it difficult to discuss two normative questions – first, which evidence should be informing policy and, second, which forms of evidence utilisation are most important within a policy environment.
One effort to move past this seeming impasse has been suggested by Parkhurst (2017), who shifts discussions away from questions of ‘what is use?’, or ‘what shapes use?’, towards more applied considerations of what might constitute improvements in use from a normative perspective. This work develops ideas around what might be considered ‘good evidence for policy’, as well as the ‘good use of evidence’ within policymaking processes. It particularly notes the need to consider the appropriateness of evidence for policy in explicit relation to policymaking needs and goals (Parkhurst and Abeysinghe, 2016; Parkhurst, 2017). In this approach, ‘goal clarification’ by policymakers is a critical first step, which allows for consideration of which pieces of evidence best address the policy concerns at hand, whether evidence is constructed in policy-useful ways, and whether pieces of evidence are applicable to the local context (see Parkhurst, 2017: 123).
In this paper we build on this approach through the development of a middle-range theory in the form of what is termed a ‘programmatic approach’ to evidence use. Sociologist Robert Merton (1968) defined ‘theories of the middle range’ as those that ‘lie between the minor but necessary working hypotheses that evolve in abundance during day-to-day research and the all-inclusive systematic efforts to develop a unified theory that will explain all the observed uniformities of social behaviour, social organization, and social change’. Merton continues that such theories are useful to ‘guide empirical inquiry’ – involving abstractions that ‘are close enough to observed data to be incorporated in propositions that permit empirical testing’ (Merton, 1968: 39). In this case the empirical inquiry of interest is the question of how to study, analyse, or explain the use of evidence within technical bureaucratic administrative bodies. Focusing on specific policymaking spaces, such as administrative bodies, allows more explicit goal clarification in relation to an agency’s remit, and thus permits exploration of evidence use from an identifiable position from which to judge appropriateness.
Conceptualising a programmatic perspective – administrative goals and institutional logics
Administrative bodies within government bureaucracies have been studied widely in the fields of public policy, public administration, and public management, due to their importance in shaping the realities of government operation and policy implementation (see Wilson, 2000; Peters, 2010). Institutionalist scholars March and Olsen (2006) have particularly emphasised the importance of understanding the operation of administrative bodies due to how their functioning serves to order political life. We therefore hold that greater theorisation about the use of evidence within bureaucratic administrative spaces can provide an important step in the evidence and policy literature. A programmatic approach would reflect on the nature of evidence use in relation to how officials work to achieve their mandated tasks and goals.
By considering evidence use from this perspective, we avoid imposition of external judgements about which pieces of research necessarily should be used (as KT literature risks doing), while also moving out of the constructivist dilemma arising from recognition that evidence use can mean any number of different things. While a programmatic approach does not deny that there are long lists of possible meanings of evidence use that exist and can be constructed, it narrows the analysis to the subset of use-types that serve the achievement of goals pursued within an administrative body. Similarly, while the approach does not deny that there can be contestation over whether an occurrence of evidence use is seen positively or negatively by advocacy groups pursuing different ideological or social goals, the delineation of a particular administrative body permits specification of the set of goals and values from which the appropriateness of evidence use can be judged.
In this analysis, National Malaria Control Programmes (NMCPs) serve as the administrative body of interest on which empirical investigation has aided conceptual development. NMCPs are officially mandated bodies, typically situated within Ministries of Health in countries facing a significant burden of malaria. They are seen to have key leadership roles in terms of responsibility over policy, planning, implementation, coordination, and evaluation of the range of malaria control efforts in countries (Bryce et al, 1994; Mortality Task Force of Roll Back Malaria’s Monitoring and Evaluation Reference Group, 2014). While NMCPs will only be established within Ministries of Health in malaria-endemic countries, they can be seen as a useful case study for many administrative settings as they possess several features common to many scholars’ description of Weber’s conception of the ‘ideal-type’ bureaucracy: they are established to execute government policies and functions under legal-rational justifications; reflect a hierarchical division of labour; are staffed by specialist administrators; and address a limited set of defined objectives or goals (see Udy Jr, 1959; Hall, 1963; Parsons, 1995).
Exploration of the objectives and goals of these bodies is seen as critical in order to understand how evidence is used to serve administrative needs. However, it is not just the officially-stated goals of an agency alone that drive programmatic actors’ behaviour. Rather, institutional scholars have also explained that particular ideas, logics, or cultures can be embedded within agencies, further serving as important drivers of behaviour for officials. March and Olsen (1989), for instance, have noted that administrative institutions rely on rules and routines to shape the behaviour of individuals working within those bodies, stating that ‘[m]uch of the behaviour we observe in political institutions reflects the routine way in which people do what they are supposed to do’ (March and Olsen, 1989: 21). They further discuss the importance of institutional ‘logics’ as key to understanding the actions of decision makers within institutional bodies (March and Olsen, 2006). Peters (2010) similarly discusses ‘organisational cultures’ within bureaucratic bodies, explaining that ‘bureaucratic organizations frequently have their own well-developed ideas about what government should do (Urban, 1982; Page and Jenkins, 2005). These ideas are not general statements, such as might be found on a political party platform, but rather are confined to the narrow area of expertise of the agency’ (Peters, 2010: 198).
Obviously, administrative bodies may have multiple goals, and undertake a variety of tasks in service of those goals. This further implies that officials within such bodies may have a range of ideas or logics that will shape the forms and uses of evidence that are perceived as appropriate to their needs. This, however, does not mean that any and all possible uses of evidence are important. Rather, it becomes an empirical project to identify the subset of forms, sources, features, and applications of evidence which are appropriate from this perspective.
From this conceptual basis, a programmatic approach to studying evidence use for policy begins by identifying the institutional goals pursued by officials, followed by how they understand the key tasks they undertake to achieve those goals. This is the approach taken in the presentation of the empirical case of evidence use within NMCPs, explored below. However, as the analysis and subsequent discussion will illustrate, for each individual task it is possible to further explore a set of key elements in relation to evidence use (sketched schematically after the list below):
the forms of evidence – representing the types of data or information needed for the task;
the sources of evidence – representing judgements on who would be the most useful providers of evidence;
the features of evidence – representing aspects of evidence that help it achieve the task at hand or make it more useful to that task; and
the targets of evidence – representing any stakeholders to whom the provision of evidence would be important for achieving the task.
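To make these elements concrete, their relationship to a given task can be sketched as a simple data structure. The sketch below is purely illustrative – the type and field names are our own inventions for exposition, not part of any existing software or of the projects described later – but it shows how each programmatic task can be linked to its associated forms, sources, features, and targets of evidence, mirroring the structure Table 1 later gives for the malaria case.

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceProfile:
    """Illustrative mapping of one programmatic task to its evidence elements.

    Field names mirror the four elements of the programmatic approach;
    they are hypothetical labels for exposition, not an existing schema.
    """
    task: str                                          # the institutional task pursued
    forms: list[str] = field(default_factory=list)     # types of data or information needed
    sources: list[str] = field(default_factory=list)   # most useful providers of evidence
    features: list[str] = field(default_factory=list)  # aspects making evidence useful for the task
    targets: list[str] = field(default_factory=list)   # stakeholders to whom evidence is directed

# Example instantiation, drawing on the NMCP findings reported below:
budget_advocacy = EvidenceProfile(
    task="Advocate for budget",
    forms=["prevalence data", "effectiveness/cost-effectiveness data"],
    sources=["modelled risk maps", "research findings"],
    features=["professional quality", "visual presentation (maps)"],
    targets=["Global Fund", "Ministry of Finance", "legislature"],
)
```

Framing the elements this way emphasises that the same task-first logic can, in principle, be applied to any administrative body, not only to malaria programmes.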
The following section further details the research on which this paper is based, before exploring the empirical case from which the programmatic approach was developed.
Background and methods
Malaria represents one of the top causes of death globally, with an estimated 405,000 deaths from 228 million cases in 2018. Most (85%) of the burden of malaria falls on 20 countries, 19 of which are located in sub-Saharan Africa (WHO, 2019). Globally, most malaria-endemic countries are highly dependent on external funding for malaria control. The largest proportion of this comes from the Global Fund to Fight AIDS, Tuberculosis and Malaria (the Global Fund), although national budget allocation is also important. Of particular note, the Global Fund has made reference to the importance of ‘evidence-based’ approaches in several guidance and review documents (The Global Fund, undated; 2015); and countries are further expected to align national strategies to technical guidance provided by the World Health Organization Global Malaria Programme (WHO GMP) when applying to the Global Fund (WHO, 2017).
Empirical work informing this paper was conducted by two research programmes involved in malaria control in Africa: LINK-Data for Decision Making, and the IMPPACT project – programmes that worked directly with NMCPs to improve their use of evidence in malaria policy and planning. The LINK programme supported 13 malaria-endemic countries in sub-Saharan Africa to develop modelled malaria prevalence risk maps and epidemiological profiles. The programme hypothesised that improved epidemiological data and maps would support malaria policymakers in the development of national and subnational strategies, leading to more efficient allocation of resources for malaria control. The work included collation of prevalence survey data across each country and the development of a geostatistical model to estimate the malaria parasite prevalence rate in 2–10 year-old children (for mapping methods see Noor et al, 2014; Snow and Noor, 2015). Maps and profiles included geolocated information on the ecological context, malaria parasite prevalence over time and at sub-national level, entomological data, and coverage of interventions.
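For readers unfamiliar with this class of methods, the following sketch illustrates the general logic of model-based geostatistical prevalence mapping: point survey estimates are smoothed over space using a covariance model and predicted onto a grid. It is a minimal toy example on synthetic data – the survey values, kernel choice, and parameters here are invented purely for exposition, and the actual LINK models (Noor et al, 2014) are substantially more sophisticated in both data and structure.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Simulated cluster survey data: coordinates (degrees) and observed
# parasite prevalence in 2-10 year-olds (entirely synthetic values).
coords = rng.uniform(0, 5, size=(80, 2))            # (lon, lat) of survey clusters
true_surface = 0.3 + 0.2 * np.sin(coords[:, 0]) * np.cos(coords[:, 1])
n_tested = rng.integers(50, 200, size=80)
n_positive = rng.binomial(n_tested, true_surface)

# Empirical logit transform stabilises the bounded prevalence scale.
emp_logit = np.log((n_positive + 0.5) / (n_tested - n_positive + 0.5))

# A Matern covariance is a common choice for spatial prevalence smoothing.
gp = GaussianProcessRegressor(kernel=Matern(length_scale=1.0, nu=1.5),
                              alpha=0.1, normalize_y=True)
gp.fit(coords, emp_logit)

# Predict over a regular grid and back-transform to the prevalence scale;
# the predictive standard deviation gives a matching uncertainty surface.
gx, gy = np.meshgrid(np.linspace(0, 5, 50), np.linspace(0, 5, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
mean, sd = gp.predict(grid, return_std=True)
prevalence_map = 1 / (1 + np.exp(-mean))            # estimated PfPR(2-10) surface
print(prevalence_map.reshape(50, 50).round(2))
```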
The IMPPACT project was similarly established with a remit to inform national malaria policy using evidence from multicentre trials in sub-Saharan Africa led by the Malaria in Pregnancy Consortium – a large international research programme funded by the Bill and Melinda Gates Foundation and other agencies (see https://www.mip-consortium.org/about-us). IMPPACT partner countries were chosen because they had already participated in multi-country trials, whose findings had helped to inform global WHO recommendations. IMPPACT therefore worked to facilitate the implementation of those guidelines – assisting the development of evidence-informed policies and implementation plans to strengthen countries’ health provider practices.
Both programmes conducted a range of interviews to help inform translation activities and to evaluate both the use of their evidence and the policymaking process around malaria control. These interviews formed the bulk of the material from which the programmatic approach to evidence use was derived and refined.
Given the LINK programme’s concern with informing NMCP decisions, it undertook semi-structured interviews with NMCP officials and malaria stakeholders across seven countries – Democratic Republic of Congo, Ghana, Kenya, Malawi, Mali, Sierra Leone, and Uganda – to evaluate the needs and the use of LINK (and additional) data and risk maps. In all, 75 of these interviews, conducted in 2018, were utilised to inform this analysis. They aimed to understand the factors affecting the use of LINK data, but further explored the broader policymaking and evidence needs of the NMCPs. The LINK interviews were based on a topic guide and coded across four levels: 1) type of maps and the data used to generate them; 2) use of maps, by stakeholder and purpose; 3) value and perception of the maps; 4) suggestions and criticisms of the maps. Sub-themes were added as they emerged from the data.
Similarly, IMPPACT’s focus on national uptake of evidence led to the project undertaking two policy-related studies including an initial assessment of the architecture for malaria control policy in each of four countries: The Gambia, Kenya, Mali and Malawi. This paper is informed by 28 of these interviews, undertaken between 2016 and 2017 with key stakeholders from within government, as well as donor and partner organisations working with government officials. Interviews investigated themes related to the translation of international (WHO) policy on malaria in pregnancy into national policy and its implementation. The findings were used to inform technical support provided by IMPPACT’s African research partners to their NMCPs.
For both IMPPACT and LINK projects, interview transcripts were coded and content and thematic analyses were undertaken using the NVivo 11 qualitative software package. The IMPPACT interviews were coded using a preexisting framework that included: 1) translation of international policy to national policy: decision-making architecture; policy context, content and processes; stakeholder interest and power; and 2) translation of national policy to effective implementation, using the health system building blocks framework (governance, financing, human resources, health information, products and technology, service delivery) (WHO, 2007). An inductive process was used and sub-themes were added as they emerged from the data.
Finally, in addition to these interviews, two workshop activities were undertaken which allowed additional exploration of specific themes related to evidence use. These allowed further refinement of the programmatic approach and its concepts in a more directed empirical manner. A one-day workshop in March 2018 in Senegal was facilitated by the LINK programme with six government officials from four partner countries – Kenya, Malawi, Nigeria, and Sierra Leone – who discussed the ways that evidence is used by malaria control programme officials. Results from interviews and insights from this first workshop were then presented to a larger group of officials from Kenya, Ghana, Sierra Leone, and Uganda at a second workshop in the UK in September 2018. In the second workshop, participants were surveyed to list the five most important tasks that they formally undertake – the results of which helped to validate and finalise the themes emerging from the interview data, which ultimately served to structure the organisation and presentation of the results below.
In short, both LINK and IMPPACT aimed to inform and improve decision making through evidence provision of one kind or another, and both conducted interviews to learn more about how that process worked. From these interviews and additional engagement through workshops, it was possible to identify uses of evidence in relation to official goals and tasks. This ultimately facilitated the development of the programmatic approach, which was conceptualised broadly in advance of data analysis and elaborated in its empirical specifics through our case studies. Of note in the results section: while interviews were recorded, workshop discussions were not. Individual quotations are therefore provided only from interview data, with anonymised codes representing the country, interview number, and the parent project.
Results
The results in this section explore the uses of evidence most relevant to stakeholders in the included countries. The section is structured around the practical actions and activities undertaken by NMCP officials, in order to explore how aspects of evidence utilisation fit within their particular needs and goals. The programmatic approach developed in this paper follows from the above theorisation that it is the goals and tasks of programme actors – as well as the associated logics and beliefs held about achieving those tasks – that fundamentally shape what is considered to be appropriate evidence for policy purposes. Each subsection therefore presents data illustrating how different uses and features of evidence serve the goals of programme officials.
Indeed, one of the most notable findings from the interviews and workshop discussions was that, when asked directly about their use of evidence, NMCP officials identified a number of different activities, goals, and strategies relating to some form of evidence use or another. The LINK programme worked from the assumption that evidence (in the form of country-level epidemiological data) serves as a tool with which to inform choices between multiple malaria strategies, guiding policymakers to more effective and/or efficient resource allocation. IMPPACT, in contrast, worked from the position that evidence from multicentre trials and nested studies undertaken in the partner countries, which had informed global guidelines, would subsequently translate into national guidelines, before being incorporated into national plans. While both uses of evidence (to inform choices between strategies, or to support (inter)national guideline development and implementation) were certainly understood by NMCP officials, respondents described several other aspects of evidence use that could be traced back to their programme goals, their logics about achieving those goals, and their specific policymaking contexts.
Advocating for a budget
One of the first ways in which evidence use was linked to programmatic needs emerged when officials explained that research evidence was particularly useful when lobbying or applying for additional resources. While the provision of maps of malaria prevalence was envisaged by the LINK programme to inform decisions about where best to target resources, NMCP officials explained in workshop discussions that such maps were also particularly helpful when applying for funding to bodies such as ministries of finance or international donor agencies. Indeed, the Global Fund insists on maps or robust epidemiological information in applications, and LINK maps were particularly useful in this regard.
…each time the Global Fund is limited, we too are limited. The big problem is that we at our level we will have to do a lot more advocacy… (DRC-5-LINK)
What we will do is to market the idea that we would like to have funding and we can write national proposals to other institutions or donors including the Global Fund… we have to lobby for more funding even from government apart from the donors. (MWL-3-IMPPACT)
… the cost is not high; it is just a proportion, it will not be a high cost. Because the costs that go to quinine will now go to ACT. There is just a need to increase a bit the cost. I am sure that if we have convincing results with scientific proof today, we will get support with partners to easy understand and to implement the strategy… (MAL-11-IMPPACT)
Resource constraints were clearly a driving factor in the use of evidence. This is perhaps unsurprising when considering that most malaria-affected countries face limited budgets for public services, with Overseas Development Assistance (ODA) at times exceeding government budgets (World Bank, 2019).
Further workshop discussions went into greater depth on this theme. It was noted that, in addition to funders like the Global Fund or domestic Ministries of Finance, NMCP officials also targeted legislatures and higher levels of Ministries of Health with evidence when advocating for malaria budgets. Officials further discussed which forms of evidence were most useful in budget advocacy, noting that high-quality LINK malaria maps, for example, were seen as indicative of the competence of the NMCP and as a marker of quality in Global Fund applications. From this perspective it was the professional quality of the data, and indeed the visual nature of maps, that were particularly helpful in their advocacy role.
Finally, interviews and workshop discussions identified other forms of evidence seen to be useful for budget advocacy. These could include prevalence data to demonstrate the size of a problem (such as with LINK malaria maps), but also effectiveness or cost-effectiveness data for desired interventions or their delivery strategies. This is seen in the Mali quote above, as well as in another interview from that country (code MAL-8-IMPPACT), where an official explained that it was scientific evidence of effectiveness that could convince donors to support increasing the number of doses of a prophylactic anti-malarial for pregnant women.
Ultimately, it was the context of work within an NMCP in an aid-dependent setting that determined this particular logic of appropriate evidence use. The agency’s core task – controlling malaria – was contingent on funding from donors like the Global Fund as well as from national budget holders. If bureaucratic actors see their ability to achieve programme goals as contingent on obtaining more funds, and if advocacy for limited budgets is a standard activity in an aid-dependent environment, then a clear logic of lobbying becomes routinised as a norm, with particular forms, features, and uses of evidence serving this need.
Allocating a budget
A second key task that NMCP officials undertook related to allocating available funds. This task perhaps reflects the most common conceptualisation of evidence use to inform policy – a planning process involving choice between potential interventions. This was indeed where the LINK programme hoped its evidence would be most beneficial. Yet malaria control has its own distinctive features, which were apparent in discussions about evidence used to allocate budgets. In particular, NMCP officials were not typically reviewing comparative effectiveness or cost-effectiveness evidence to choose between alternative interventions, or combinations of interventions, as might be typical in classic health technology appraisal efforts (Garrido et al, 2008). Rather, malaria control typically involves providing sets of well-established interventions (for example, nets, prophylactic or curative drugs, insecticide spraying), considering different delivery strategies, and targeting packages to areas of (possibly changing) need. Furthermore, for many malaria-affected countries, multiple stakeholders might be involved in providing and/or funding malaria interventions, and interventions could be at various stages of scale-up or development. For NMCP officials, then, the task was often focused on trying to understand which sets of interventions were being delivered, where they were occurring, and what results they were achieving at different points in time.
So while NMCPs would occasionally need to consider the relative cost-effectiveness of one treatment over another, this form of evidence-informed choice was an infrequent part of allocative decision making. Instead, interviews and workshop discussions presented a picture of operational decisions made through a continuous monitoring and evaluation process, one requiring information on factors such as who was doing what, where, and whether activities undertaken in an area appeared to be having results.
… Global Fund, PMI [US Presidents Malaria Initiative] and government of Uganda and that is for the ACTs of course there are different players because when you go to things like iCCM you have other partners supporting like Malaria Consortium and UNICEF. (UG-13-LINK)
Given the nature of malaria control, budget allocation by NMCPs looks much more like trying to fill gaps in a puzzle than like a comprehensively rational assessment of need used to target funds accordingly. Thus, when asked about evidence used for spending decisions, basic informational needs were discussed as much as syntheses of scientific findings. Accordingly, a wide mix of evidence sources was presented as important in budget allocation decisions, including local routine data, academic research projects, advice from Technical Working Groups (TWGs) or global health bodies, as well as monitoring and evaluation (M&E) findings.
… indoor residual spraying... is an expensive intervention we cannot do a blanket indoor residual spraying... in the whole country... the stratification will help to monitor the... effectiveness of the various interventions. (GH-9-LINK)
… we are really trying to focus now on entomological activities especially in the northern region. To see maybe some kind of resistance has started developing. So if we are able to get that evidence, we will use that evidence for us to prioritize the intervention that we say okay “okay let’s target these ones in this district”… (SL-2-LINK)
Overall, then, it appeared that NMCPs were indeed greatly involved in allocative decisions over where to spend resources and on which interventions. Yet evidence-informed decisions on budget allocation did not take the form of one-off reviews of scientific research findings or local data to make a definitive choice. Instead, the forms and sources of evidence most appropriate to inform these choices were directly shaped by the realities of working in a complex context typified by multiple stakeholders, gaps in knowledge, and a need to continually reallocate resources as best as possible.
Standards, guidelines, and national plan development
There were several other key tasks identified by officials that were important in how they used evidence in their work. One activity mentioned was the development of guidelines, regulations, or standards in relation to various malaria control activities. Such activities would usually involve synthesis of evidence on effectiveness or cost-effectiveness of interventions, for example, and officials had clear ideas on the sources of evidence they felt were expert authorities for such information, including bodies such as the WHO, academic research centres, or expert advisory TWGs.
In workshop discussions, participants also noted that a key role of NMCPs was to write regular national malaria plans. As with guideline development, the forms and sources of evidence for these activities were not surprising, reflecting common conceptualisations of evidence-informed decision-making processes. Planning documents were said to be informed by pieces of evidence such as epidemiological data, modelling data, and cost-effectiveness data, while key sources of such evidence were identified in relation to their relevant expertise, including TWGs, routine data sources, and academic research centres.
Information gap assessment
The National Malaria and Control Programme and the Ministry of Health and Sanitation [are] charged with the responsibility of planning all activities relating to malaria, coordination, supervision, monitoring and operation. So the monitoring component takes care of the cases suspected, cases tested, cases treated and even the logistics management component. We want to know the quantity of drugs you received in terms of the antimalarial drugs and then how many you were able to use and how many do you have at that point in time… (SL-4-IMPPACT)
The first challenge is in relation to the treatment, the management of malaria. The big challenge we currently have is to make an effort that there is less out of stock compared to ACTs, compared to the diagnostic test, because if there is out of stock, the care will not be good, and the diagnosis itself must suffer. (DRC-13-LINK)
In workshops, NMCP officials explained that, in situations such as these, the analysis of information gaps became an important programme task. This led to broader discussion of the evidence forms that were needed to address such gaps, be it systematic reviews of scientific literature about potential treatments, or more targeted pieces of local information and data. This information was seen as best coming either from internal ministerial research departments or from external bodies such as TWGs or local research centres.
However, the discussion of information gaps also identified another case where NMCP officials felt that they could best achieve this task by serving as providers of evidence, rather than just receivers. In particular, officials noted that academic or scientific researchers could be provided with evidence of what needed to be known, to encourage researchers to generate information helpful to NMCPs. There was thus an identified need to generate evidence about informational gaps and to provide this to researchers who might, in turn, help fill those gaps.
NMCPs’ programmatic approach to evidence use
The preceding subsections have outlined the range of routine institutional tasks that NMCP officials discussed, and how these tasks related to aspects of evidence. Table 1 summarises these findings to illustrate how differing programmatic goals and tasks fundamentally shaped the logics of evidence use for officials – logics about which types, sources, features, or uses of evidence best served those goals. The table is not necessarily intended to be a comprehensive mapping, as some NMCP tasks may not have been covered by our research, or may change over time. However, it illustrates key themes from our interviews and workshop discussions, and it is constructed to illustrate how a programmatic approach can work to inform thinking about what evidence use means for administrative agencies.
Table 1: Mapping of evidence sources and targets for key NMCP tasks

NMCP key tasks | Appropriate forms of evidence | Sources of evidence | Useful forms/features of evidence | Whom to target with evidence
---|---|---|---|---
Advocate for budget | Prevalence data demonstrating the size of the problem; effectiveness and cost-effectiveness data for desired interventions and delivery strategies | Modelled risk maps and epidemiological profiles (e.g. LINK); research findings | Professional quality of data; visual presentation (maps) | Global Fund and other donors; Ministries of Finance; legislatures; higher levels of Ministries of Health
Allocate budget – choice of intervention | M&E findings; information on who is doing what, where, and with what results; (occasionally) comparative cost-effectiveness evidence | Local routine data; academic research projects; TWGs; global health bodies | |
Allocate budget – choice of location | Stratification and prevalence data; entomological data (for example, emerging insecticide resistance) | Risk maps; routine surveillance | | Donor partners (when division of responsibility for service provision was involved)
Regulation, rules and standards | Syntheses of evidence on effectiveness or cost-effectiveness of interventions | WHO; academic research centres; expert advisory TWGs | |
Future planning (national and programme plans) | Epidemiological data; modelling data; cost-effectiveness data | TWGs; routine data sources; academic research centres | |
Identify information needs and gaps | Systematic reviews of scientific literature; targeted local information and data | Internal ministerial research departments; TWGs; local research centres | Evidence of policy-relevant gaps | Local or international research programmes and centres

Note: blank cells indicate elements not clearly identified for that task in our data.
Discussion
This paper posits that while policymakers indeed ‘use’ evidence in a variety of ways, they have multiple goals, the achievement of which requires different evidentiary sources, forms, and strategies of use. An empirical analysis of NMCPs was used to inform and develop this approach by identifying the key tasks and goals driving the logics of evidence use for officials. Such logics shaped perceptions of a number of important elements. First, for each goal pursued by officials, different forms of evidence could be most appropriate to those needs. Second, the source of evidence deemed appropriate could also depend on the logic of the goal pursued. Furthermore, we identified some cases where programme officials targeted evidence at particular recipients depending on the specific task at hand. Finally, given the differing nature of the goals being pursued, certain features of evidence were at times highlighted as particularly useful to programme officials – features that may not directly relate to evidentiary strength or scientific validity, but were nonetheless believed to be helpful in achieving an outcome.
This combination of evidentiary elements (sources, forms, targets, and features) found to be important to programmatic needs stands in contrast to many idealised views held by researchers aiming to improve public policy, who still often think about evidence use in terms of a technical, problem-solving rationality. Indeed, the LINK malaria programme itself, from which many of these interviews arose, hypothesised that if NMCP officials were provided with more accurate and robust malaria data and maps, they would be able to make more effective and efficient decisions about which malaria control interventions to pursue in which specific areas of a country. IMPPACT similarly assumed that once the WHO created policy informed by research evidence, endemic country governments would adopt it (recognising technical limitations like budget constraints).
Yet, while the idea of evidence use may have particular connotations for researchers, the programmatic needs of officials clearly directed their views on evidence in our study population. While there were indeed some allocative decisions that required expert reviews of scientific evidence, these related to only some of the tasks that NMCP officials undertake. Officials also had to be reviewers and synthesisers of local information when constructing national guidelines, or providers of evidence when advocating for budgets. These are not unknown uses of evidence, of course. Evidence synthesis is a widely recognised part of policy planning (Chalmers et al, 2002; Tricco et al, 2011). The idea of ‘evidence-based advocacy’ has also been described elsewhere (see Mably, 2006; Storeng and Béhague, 2014). And while that term has typically been applied to politicians and interest groups, it has been theorised that bureaucrats may at times work as budget maximisers, advocating for funding and expansion of their programmes (Parsons, 1995; Peters, 2010). In such cases, it would make sense for evidence use to serve this administrative goal as well.
NMCP officials also had to deal with gaps in knowledge and uncertainty in undertaking their programmatic activities (further analysis of data on this topic is ongoing). Again, this is not an unknown concept in the field of evidence use. Ambiguity or uncertainty can typify a number of policymaking environments which require evidence (Cairney et al, 2016). Yet the nature and source of uncertainty will be specific to individual settings. For NMCPs, uncertainty arose from the sheer number of possible combinations of interventions that could target different regions and populations, as well as the need for multiple pieces of data to understand how combinations of interventions might be working. This was compounded by the fact that multiple stakeholders (government and non-governmental) could be involved in delivering activities, and large gaps in information could exist in who was doing what, where, and with what effect. Ultimately this presents a picture of evidence use as a continuous challenge to make incremental allocative choices within an always changing and partially obscured decision space.
Broader ethical questions
The programmatic approach detailed here is designed to allow for empirical exploration of the question of what constitutes evidence use within a specific policymaking context. This is achieved by considering the goals and tasks of policymakers and seeing how the sources, forms, and applications of evidence work to help achieve those goals or tasks. Yet, while this approach allows for the consideration of evidentiary appropriateness in relation to specific needs of decision makers, it cannot pass judgment on whether the forms and uses of evidence seen as appropriate to officials are necessarily ‘good’ from a societal perspective.
Indeed, the activities of programme officials may, at times, not align with other ideas about what government officials can or should be doing. While malaria control is typically seen as an important social goal, the same can be said for most programmes pursued by ministries of health (and indeed many other government branches). There are legitimate debates to be had, for example, about whether programmatic officials should be working as advocates to direct funding to their particular cause, when there will be a number of other deserving concerns in a country. Seeing technocrats work as advocates – and strategically using evidence in doing so – might be judged problematic by observers who feel that advocacy should take place within more democratically accountable bodies. Similarly, there are questions around whether funders of social services might end up favouring a programme simply because its officers happen to be skilled at marshalling or presenting pieces of evidence, or because one social issue (like malaria) has been a favoured topic for the generation of robust research in the first place. Yet, regardless of one’s normative position, it was clear that the incentive structures, norms, and logics of health programme management in the study countries resulted in a routine task of using evidence for advocacy purposes.
Another ethical dilemma could arise if officials use evidence in ways that violate scientific norms or best practice, but do so to achieve organisational or social goals. While this was not observed in our study, it is conceivable that evidence might be ‘misused’ in technically biased ways – for example cherry-picked or misrepresented – to encourage funding to a desired social programme intended to help people. Such uses of evidence might violate scientific integrity or social norms of honesty amongst civil servants, yet strategic manipulation of evidence can be a way to achieve goals at times, if institutional incentives are particularly aligned. Again, the framework proposed here does not make judgements on such issues. Rather, it allows such debates to arise by illustrating how various forms and uses of evidence – scientifically valid or otherwise – will be seen as appropriate within the institutionalised logics of a programmatic body. Ultimately there will always be a need to balance broader sets of principles to come to conclusions about whether particular uses of evidence are ‘good’ or not, and typically it is national governments who ultimately decide on which rules they will enforce to dictate how their systems of evidence use will be governed (see Parkhurst (2017) for a broader discussion of the multiple social concerns relevant to the good governance of evidence).
Conclusion
It has long been established that evidence use for policymaking takes multiple forms. Yet many discussions of the use (or uptake) of evidence continue to frame it as if it were a single binary variable. Such a conceptualisation contrasts with the realities of administrative bodies using evidence to inform policy decisions. It would be naïve to assume that government officials are principally motivated to improve or maximise their use of scientific evidence. Rather, officials can be seen to use evidence in ways that help them achieve their organisational goals.
Academic literature has previously highlighted the problems of equating evidence use with the implementation of specific research findings, with some authors challenging core assumptions about evidence itself, noting the social construction of knowledge or the boundary work that knowledge utilisation performs. These perspectives provide valuable understanding of the interface of science and society, yet they can leave applied social researchers struggling to find pragmatic answers to questions of what constitutes evidence use, what good evidence for policy might be, and how to improve evidence use to inform policymaking. To address this need, this paper has worked to develop a middle-range theory about what evidence use can mean within specific administrative bodies. Fundamentally, the programmatic approach, developed and explored through the empirical case of NMCPs in Africa, theorises that key elements of evidence – its forms, sources, features, and applications – will be seen as appropriate to policymaking based on the institutionalised logics of administrative bodies, developed in relation to programme goals.
The case of malaria control highlights that many uses of evidence differ from what evidence-producing researchers might initially assume. It further allows subsequent discussion of the ethical or normative issues that arise when programme officers’ uses of evidence differ from other social norms (such as those about who should be involved in policy advocacy). While this paper does not solve these dilemmas, a programmatic approach to evidence use allows them to be made clear, so that more transparent and informed debate can be had about the best ways to improve evidence use, both within specific national programme activities and across national systems of policymaking more broadly.
Acknowledgements
Contributions by JP, LG, JHo, JW and CL were supported by the LINK programme. The LINK programme is funded by UK aid from the Department for International Development (DFID) for the Strengthening the use of data for malaria decision making in Africa project (https://devtracker.dfid.gov.uk/projects/GB-1-203155); however, the views expressed do not necessarily reflect the UK government’s official policies.
Contributions from JHi, JHo, JW and LG were funded by the IMPPACT project (https://www.lstmed.ac.uk/research/collaborations/imppact) with a grant from the EDCTP2 programme supported by the European Union (grant number CSA-MI-2014–276 IMPPACT).
We gratefully acknowledge the help of Kassoum Kayentao, Samba Diarra, Mwayi Madanitsa, Lucinda Manda-Taylor, Umberto D’Alessandro, Jane Achan, Simon Kariuki and George Okoth from the IMPPACT project and George Okello, Linda Nyondo-Mipando, Chawanangwa Mahebere Chirambo, Fathi Malongo, Samba Diarra, Ahmed Vandi, Mary Attaa-Pomaa and Robert Okello from the LINK project in data collection, and Manna Mostaghim, Yovitha Sedekia and Jieun Lee for their work in data coding and extraction.
Conflict of interest
The authors declare that there is no conflict of interest.
References
Bryce, J., Roungou, J., Nguyen-Dinh, P., Naimoli, J. and Breman, J. (1994) Evaluation of national malaria control programmes in Africa, Bulletin of the World Health Organization, 72(3): 371.
Cairney, P. (2016) The Politics of Evidence-based Policymaking, London: Palgrave Pivot.
Cairney, P., Oliver, K. and Wellstead, A. (2016) To bridge the divide between evidence and policy: reduce ambiguity as much as uncertainty, Public Administration Review, 76(3): 399–402. doi: 10.1111/puar.12555
Chalmers, I., Hedges, L. and Cooper, H. (2002) A brief history of research synthesis, Evaluation & the Health Professions, 25(1): 12–37.
Garrido, M., Kristensen, F., Nielson, C. and Busse, R. (2008) Health Technology Assessment and Health Policy-Making in Europe: Current Status, Challenges and Potential, Copenhagen: WHO Regional Office for Europe on behalf of the European Observatory on Health Systems and Policies.
Gibson, B. (2003) Beyond ‘two communities’, in V. Lin and B. Gibson (eds) Evidence-Based Health Policy: Problems and Possibilities, Oxford: Oxford University Press, pp 18–30.
Hall, R. (1963) The concept of bureaucracy: an empirical assessment, American Journal of Sociology, 69(1): 32–40. doi: 10.1086/223508
Hoppe, R. (2009) Scientific advice and public policy: expert advisers’ and policymakers’ discourses on boundary work, Poiesis & Praxis, 6(3–4): 235–63. doi: 10.1007/s10202-008-0053-3
Innvaer, S., Vist, G., Trommald, M. and Oxman, A. (2002) Health policy-makers’ perceptions of their use of evidence: a systematic review, Journal of Health Services Research and Policy, 7(4): 239–44. doi: 10.1258/135581902320432778
Jasanoff, S. (1987) Contested boundaries in policy-relevant science, Social Studies of Science, 17(2): 195–230. doi: 10.1177/030631287017002001
Lewis, J. (2003) Evidence-based policy: a technocratic wish in a political world, in V. Lin and B. Gibson (eds), Evidence-based Health Policy: Problems & Possibilities, Oxford: Oxford University Press, pp 250–59.
Mably, P. (2006) Evidence-based Advocacy: NGO Research Capacities and Policy Influence in the Field of International Trade, Ottawa: International Development Research Center.
March, J. and Olsen, J. (1989) Rediscovering Institutions: The Organisational Basis of Politics, New York: The Free Press.
March, J. and Olsen, J. (2006) The logic of appropriateness, in M. Moran, M. Rein and R.E. Goodin (eds) The Oxford Handbook of Public Policy, Oxford: Oxford University Press, pp 689–708.
Merton, R. (1968) Social Theory and Social Structure, New York: Simon and Schuster.
Mortality Task Force of Roll Back Malaria’s Monitoring & Evaluation Reference Group (2014) Guidance for Evaluating the Impact of National Malaria Control Programs in Highly Endemic Countries, Rockville, MD: MEASURE Evaluation.
Noor, A., Kinyoki, D., Mundia, C., Kabaria, C., Mutua, J., Alegana, V., Fall, I. and Snow, R. (2014) The changing risk of Plasmodium falciparum malaria infection in Africa: 2000–10: a spatial and temporal analysis of transmission intensity, The Lancet, 383(9930): 1739–47. doi: 10.1016/S0140-6736(13)62566-0
Nutley, S. and Davies, H. (2000) Getting research into practice: making a reality of evidence-based practice: some lessons from the diffusion of innovations, Public Money & Management, 20(4): 35–42. doi: 10.1111/1467-9302.00234
Nutley, S., Walter, I. and Davies, H. (2007) Using Evidence: How Research can Inform Public Services, Bristol: Policy Press.
Oliver, K., Innvaer, S., Lorenc, T., Woodman, J. and Thomas, J. (2014) A systematic review of barriers to and facilitators of the use of evidence by policymakers, BMC Health Services Research, 14(1): 2. doi: 10.1186/1472-6963-14-2
Parkhurst, J. (2012) Framing, ideology and evidence: Uganda’s HIV success and the development of PEPFAR’s ‘ABC’ policy for HIV prevention, Evidence & Policy, 8(1): 19–38.
Parkhurst, J. (2016) Appeals to evidence for the resolution of wicked problems: the origins and mechanisms of evidentiary bias, Policy Sciences, 49(4): 373–93. doi: 10.1007/s11077-016-9263-z
Parkhurst, J. (2017) The Politics of Evidence: From Evidence Based Policy to the Good Governance of Evidence, Abingdon: Routledge.
Parkhurst, J. and Abeysinghe, S. (2016) What constitutes ‘good’ evidence for public health and social policy-making? From hierarchies to appropriateness, Social Epistemology, 30(5–6): 665–79. doi: 10.1080/02691728.2016.1172365
Parkhurst, J., Ettelt, S. and Hawkins, B. (2018) Evidence Use in Health Policy Making: An International Public Policy Perspective, Cham: Palgrave Macmillan.
Parsons, W. (1995) Public Policy: An Introduction to the Theory and Practice of Policy Analysis, Cheltenham: Edward Elgar.
Peters, B. (2010) The Politics of Bureaucracy, New York: Longman.
Russell, J., Greenhalgh, T., Byrne, E. and McDonnell, J. (2008) Recognizing rhetoric in health care policy analysis, Journal of Health Services Research & Policy, 13(1): 40–46. doi: 10.1258/jhsrp.2007.006029
Shaxson, L., Bielak, A., Ahmed, I., Brien, D., Conant, B., Middleton, A., Fisher, C., Gwyn, E. and Klerkx, L. (2012) Expanding Our Understanding of K* (KT, KE, KTT, KMb, KB, KM, etc), Hamilton, Ontario: United Nations University, Institute for Water, Environment and Health.
Snow, R.W. and Noor, A. (2015) Malaria Risk Mapping in Africa: The Historical Context to the Information for Malaria (INFORM) Project, Nairobi: UKaid and the Wellcome Trust.
Storeng, K. and Béhague, D. (2014) ‘Playing the numbers game’: evidence‐based advocacy and the technocratic narrowing of the safe motherhood initiative, Medical Anthropology Quarterly, 28(2): 260–79. doi: 10.1111/maq.12072
The Global Fund (2015) Report of the Technical Review Panel on the Concept Notes Submitted in the Third and Fourth Windows of the Funding Model, Geneva: The Global Fund.
The Global Fund (undated) Investing to End Epidemics: The Global Fund Strategy 2017–2022, Geneva: The Global Fund.
Tricco, A., Tetzlaff, J. and Moher, D. (2011) The art and science of knowledge synthesis, Journal of Clinical Epidemiology, 64(1): 11–20. doi: 10.1016/j.jclinepi.2009.11.007
Udy Jr, S. (1959) ‘Bureaucracy’ and ‘rationality’ in Weber’s organization theory: an empirical study, American Sociological Review, 24(6): 791–95. doi: 10.2307/2088566
Van der Arend, J. (2014) Bridging the research/policy gap: policy officials’ perspectives on the barriers and facilitators to effective links between academic and policy worlds, Policy Studies, 35(6): 611–30. doi: 10.1080/01442872.2014.971731
Weiss, C. (1977) Research for policy’s sake: the enlightenment function of social research, Policy Analysis, 3(4): 531–45.
Weiss, C. (1979) The many meanings of research utilization, Public Administration Review, 39(5): 426–31. doi: 10.2307/3109916
WHO (2007) Strengthening Health Systems to Improve Health Outcomes: WHO’s Framework for Action, Geneva: World Health Organization.
WHO (2017) WHO and the Global Fund, Achieving Impact Together, Geneva: World Health Organization.
WHO (2019) World Malaria Report 2019, Geneva: World Health Organization.
Wilson, J. (2000) Bureaucracy: What Government Agencies Do and Why They Do It, New York: Basic Books.
World Bank (2019) World Bank Open Data, Available at: https://data.worldbank.org/.