Abstract
Background:
Despite the growing attention given to the political process of evidence-based policymaking (EBPM), we still know little about how evidence is processed at the early stages of the policymaking process, especially at the agenda-setting stage. Whether and when political elites pay attention to evidence-based information is crucial to the study of EBPM but also essential to the well-functioning of democracy.
Aims and objectives:
The aim of this paper is to fill this gap by asking whether evidence increases policymaker attention to policy proposals. The working hypothesis is that, all else being equal, evidence should increase policy-maker attention.
Methods:
To test this hypothesis, this paper relies on a field experiment embedded in a real-life fundraising campaign of an advocacy organisation targeted at Members of the European Parliament (MEPs).
Findings:
Results show that information type matters to policy-maker attention, but evidence is not effective in this respect. Findings also suggest that there are no important differences between political groups and, crucially, that previous policy support does not have an impact on policy-maker attention. This paper shows that while evidence is essential to the policy process, ideas are key to attracting policymakers’ attention at the individual level in the absence of prior demand.
Discussion and conclusion:
Overall, findings suggest that empirical information is not a quick pass for policy-maker attention. In this context, other types of information and framing are likely to make a difference. Future studies should analyse how framing may alter political elites’ predisposition to attend to empirical evidence.
Key messages
This paper adds to the literature on evidence-based policymaking by looking at how policymakers react to evidence in the absence of prior demand.
To assess the causal impact of evidence, a field experiment is employed, also increasing the external validity of findings.
Results suggest that political elites pay more attention to ideas rather than evidence-based information.
Findings show that this also applies across political groups and previous policy support.
Introduction
Most work on evidence-based policymaking analyses the production, demand and use of scientifically and empirically backed information. Yet the policymaking process is not linear and is characterised by multiple stages, of which the agenda-setting stage determines which policies and issues will be placed on the political agenda for serious consideration. The aim of this paper is to analyse whether and how empirically backed information contributes to placing issues on the agenda. This paper thereby addresses the study and practice of evidence-based policymaking (EBPM). While this is a broad concept, it encompasses the idea of moving beyond ideologically driven policy proposals and debates towards a rational approach to policy outcomes, as well as a more practical pursuit of evidence-informed policymaking, in which the key actors in this process may balance the implications of their prior beliefs with factual information.
Literature on EBPM has studied in depth the demand side of evidence in the policy process, also known as the silver bullet, which refers to policymaker requests for a specific ‘killer’ piece of scientific information to support their proposals (Hardie and Cartwright, 2012). Policymakers usually rely on empirically backed information to strengthen the case for their policy proposals, ensure that they are addressing a problem effectively, or to depoliticise an issue (O’Brien, 2013; Wood, 2015). While the demand and use of empirical information is crucial, it is also important to analyse how policymakers react to empirical information even in the absence of prior demand, or in other words pay attention to the agenda-setting stage of evidence-based policymaking. It is within this identified gap of the agenda-setting process in EBPM that this paper makes a contribution.
Attention to evidence in the policy process is essential. By providing objective indicators, empirical evidence can help to signal the existence of new problems that had originally been overlooked by policymakers, or help to reassess the magnitude of an existing challenge. Evidence is also essential to problem solving: it can help to accurately determine the effectiveness of a given policy solution to a new or preexisting issue (Kingdon, 1984; for a review of how empirical information works see Cairney, 2016). Attention to evidence in the absence of demand is crucial, precisely because policymakers cannot track all issues at once or anticipate the development of new challenges. Their ability and willingness to attend and react to empirical evidence when it is produced – even in the absence of previous demand – is therefore vital to the well-functioning of the policy process and of democracy more generally, because this information can signal important developments that they could not anticipate. This reaction by policymakers to empirical information in the absence of demand is what, in this paper, I refer to as the reverse silver bullet: just as policymakers expect to gain attention and credibility when their proposals are backed by a killer piece of information, policymakers are expected to be able to allocate attention when a key piece of empirical evidence is produced. This paper addresses the issue by asking whether information signals that are backed by empirical evidence receive more attention than informational inputs which are not.
This paper builds on much existing work. Firstly, it relates to the EBPM literature, particularly the set of work interested in understanding the process through which evidence becomes policy. This area of scholarship recognises that evidence is not automatically translated into practice (Walter et al, 2005; Ward et al, 2009). There is no ‘leap’ from good quality evidence to the decision to apply such knowledge: this is very much a political and contested process, not a technocratic one (Nutley et al, 2007). While the literature has pointed towards the involvement of judgement, heuristics and the role of policy entrepreneurs in selling empirical information (Cairney, 2016), there has been hardly any work devoted to understanding whether the presence of evidence per se can increase policymaker attention at the individual level.
This work also relates to agenda-setting and information-processing theories of the policy process (Jones and Baumgartner, 2005), although it moves beyond the macro-level focus of this field by exploring the individual-level dynamics of agenda setting, that is, whether and how much political elites pay attention to informational inflows. To theorise this, I draw on individual-level studies of factual information processing (see, for example, Mondak, 1993; Barker, 2005; Kangas et al, 2014; Sides, 2015; Ceron and Negri, 2016). Although these studies have mainly focused on public opinion, they provide useful insights for theorising about political elites.
Looking at attention as an outcome variable is crucial to the policy process and evidence-based policymaking. Attention is an essential prerequisite for agenda setting, policy support and implementation. Before an issue gets on the political and institutional agenda, it must first receive attention from policymakers. Hence, looking at attention means looking at the very early stages of the policy process, the key step where the issues that will get on the agenda for serious consideration and implementation are determined. Disentangling the relationship between attention and evidence with observational data is complicated by endogeneity. Many issues on which evidence is produced are later followed by peaks of attention; however, evidence is usually the product of a prior request and, hence, of previous attention to an issue. In this sense, using observational data may complicate causal identification, as what we observe may be an endogenous relation of attention producing evidence, and evidence producing attention. To overcome these limitations and discern the causal relationship between the variables of interest, I approach this question experimentally, using a field experiment.
This field experiment is embedded in a real-life fundraising campaign targeted at members of the European Parliament (MEPs). The fundraising campaign was launched by an advocacy organisation (anonymised) promoting the basic income debate, with the objective of attending the European Youth Event at the European Parliament – a topic and event of great relevance to MEPs. Emails were sent to all members of Parliament, who were randomly assigned one of two treatments: a letter containing a policy proposal backed by empirical evidence, or one in which the proposal was presented as a policy idea, with no scientific support.
Both the methodological approach and the initiative are relevant to this study. Field experiments – especially when embedded in a real-life initiative – help increase external validity, that is, the generalisability of findings. Randomisation enables us to isolate the effect of information on the dependent variable of interest, in this case attention, enhancing internal validity. Basic income serves as a convenient case study because it is at a very early stage of the policy process and is discussed both in relation to ideas and to evidence. Here, ideas are defined as preconceptions or beliefs about potential outcomes, while evidence refers to empirically backed information about the potential consequences of a policy proposal. Basic income, as a policy, was not being discussed at the time of this field experiment and had no clear ideological or political champion, reducing potential biases. I discuss this, as well as other relevant details, in the scope conditions section of this paper.
Through this field experiment I seek to measure (a) to what extent policymakers are attentive to the treatment (information flows), and (b) to what degree they respond to the treatment. Attention is conceptualised as the capacity to react to and access the information input. Empirically, this is operationalised through the opening of emails. The response rate captures other, more costly behavioural outcomes of reacting to the informational inflows. This is measured through whether an MEP fulfils one of the three requests in the campaign email: (a) responds to the email, (b) arranges a meeting with the organisation leading the fundraising campaign, or (c) supports the campaign financially (that is, by making a donation).
Results go against the theoretical expectations. Rather than paying attention to empirically supported information, policymakers are significantly more attentive to ideas-based information. This finding is consistent across political groups, previous support for the policy and gender, suggesting there are no heterogeneous effects. These results have many implications for the relationship between policy, politics and evidence. Far from arguing that empirical evidence is irrelevant in the policy process, the findings in this paper suggest that it is not all about evidence when attracting policymaker attention to policy proposals. In fact, results show that ideas are more effective in attracting initial attention from policymakers. In the discussion section I suggest several reasons why this could be the case, including lower information-processing costs and general interest. These findings have important implications for communicating evidence to policymakers: capturing policymaker attention could be a question of reframing evidence and connecting it to ideas, although further research should examine the conditions under which such reframing will be most effective.
The structure of this paper is as follows. The next section outlines previous work on agenda setting and information processing, and the theoretical expectations derived from it. The following section provides a detailed overview of the empirical strategy employed, outlining the key strengths of this methodological approach and justifying the relevance of the context of the field experiment. The fourth section presents the main results, and the paper closes with concluding remarks and implications of the findings, as well as avenues for future research.
What we know
Information in the policy process
This paper looks at the role of evidence in the agenda-setting process, and whether informational inputs backed by evidence influence policymaker attention. The natural starting point of this work is agenda-setting theories which have been developed over time to understand patterns of attention and information processing across different venues: parliaments, government agendas and other institutions, adopting a macro-level focus (Kingdon, 1984; Jones and Baumgartner, 2005; Zahariadis, 2016). This literature shows that attention is an important resource in the policy process and an essential prerequisite for policy support and policymaking. Before a proposal gets seriously considered, and implemented, it must first receive policymaker attention.
According to the agenda-setting literature, empirical, technical and evidence-based information is essential to the agenda-setting process for various reasons. For one, it can signal the existence of a problem, for instance through the development and change of indicators (Kingdon, 1984). To cite some examples, in the study of the death penalty Baumgartner et al (2008) show that indicators of innocent deaths were key to conveying the problems arising from this policy. Similarly underlining the importance of evidence-based information, Chaqués-Bonafont (2019) shows how the absence of official indicators of the level of corruption helped keep the issue off the agenda until the 1990s, when the first indicators were developed.
Empirical information is not only key to signalling the existence of a problem, but also in pointing to a relevant solution. Deborah Stone (1989) shows that for issues to become problems worthy of attention, they have to be seen as possible to solve by human action. Information backed by evidence is crucial to explain whether a given policy proposal is feasible or effective, but also what budget is required, and the legal framework available for such implementation. In any case, policy solution survival requires a perception of effectiveness that is based on evidence (Cairney and Zahariadis, 2016).
Other sets of scholarship aside from agenda setting also highlight the relevance of empirical evidence to policymaker attention. Work on interest groups, for instance, contends that empirical information is a key good for interest groups’ access to the policy process (Bouwen, 2002; 2004), where information is a currency through which interest groups are granted privileged access to policymakers. Owing to the cognitive and institutional limitations policymakers face in working on numerous issues, taking informational resources from interest groups is comparatively advantageous, especially where interest groups hold expertise that policymakers lack (Hall and Deardorff, 2006).
According to policy diffusion scholarship, not only do policymakers pay attention to empirical information, they also learn from it. Policymakers’ learning from the consequences of policy implementation elsewhere is a key mechanism explaining why policies implemented in one context drive policy adoption in a neighbouring context (Gray, 1973; Heinze, 2011; Obinger et al, 2013; Gilardi and Wasserfallen, 2017). Recent research suggests that learning could be moderated by ideology and co-partisanship (Butler et al, 2017); in other words, policy learning can occur selectively. Other work also shows that learning occurs not only in terms of the policy consequences, but also the political consequences of policy adoptions (Gilardi, 2010). Overall, these studies of the policy process, and the role of evidence therein, suggest that policymakers pay considerable attention to empirically backed information.
While macro-level theories might be useful to show how empirical information is broadly employed in the initial stages of the policy process, this paper focuses on individual-level dynamics of information processing by elites. This is a very relevant matter given that policymakers have their own agency and agenda-setting capacity. As members of the institutions where decisions are made, they act as key filters of which issues are placed on the agenda. However, although agenda-setting theories (Jones and Baumgartner, 2005; Baumgartner and Jones, 2015) build on the notion of bounded human rationality and the cognitive limitations of information processing (Simon, 1985; Jones, 2017), there is scarcely any work on how political elites at the individual level process different types of information (some exceptions include: Sevenans et al, 2015; Sevenans et al, 2016; Sevenans, 2017; Walgrave et al, 2018). I build on this work by examining the impact of evidence as information on policymaker attention to policy proposals. To overcome the theoretical gap, I draw on the extensive literature on the scientific literacy model, motivated reasoning, and other work on how individuals process scientific information, applying this knowledge to the workings of political elites.
Information processing at the individual level
The study of how humans process empirical information has generated a large and intense theoretical and empirical discussion (Reinard, 1988; Duchon et al, 1989). Most of this research analyses the effects of empirical information on attitudes (Sides, 2015), beliefs, intentions (Zebregs et al, 2014), or on how individuals process information (Gaines et al, 2007; Liu and Ditto, 2013). This work has offered three main accounts of how empirical information should be processed in relation to non-empirically backed information, as results so far have been mixed (Baesler and Burgoon, 1994; Kopfman et al, 1998; Nisbet and Mooney, 2007; Druckman and Bolsen, 2011; Zebregs et al, 2014; Sides, 2015).
On the one hand, some work conveys that empirical evidence is less straightforward to process than information that does not contain evidence cues. Information presented in a storytelling fashion has been shown, all else being equal, to be more effective: narratives create direct links that knit causal arguments from the problem to the solution, easing judgement (Taylor and Thompson, 1982; Reinard, 1988), and they are more vivid than empirical evidence, which is more demanding to process (Kazoleas, 1993).
In the opposite direction, other studies have shown that empirical information is more effective for comprehension, attitudinal and behavioural change (see for example: Sides, 2015). This is in line with the scientific literacy model. Here, empirical information is understood more broadly, to include a form of presenting evidence through statistics, figures or causal facts (Gastel, 1983; Tufte, 2001; Dahlstrom, 2010; Tal and Wansink, 2014). This work has shown that empirical information’s impact is not so much due to its content but rather due to the form in which it is presented, through the incorporation of a graph, figures or statistics. Visual representation does not imply improved comprehension, but it does render the message more convincing (Tal and Wansink, 2014). In fact, some work showed that messages using statistical evidence are rated higher on appropriateness, effectiveness, reliability, knowledgeability, and credibility, and respondents show a greater sense of causal relevance (Kopfman et al, 1998). In line with this, the scientific literacy model of opinion formation argues that knowledge and evidence help accurate assessments of risk and benefits (see for instance, Kahan et al, 2008; Druckman and Bolsen, 2011).
A third strand of literature, on micro-level foundations of information processing, suggests that evidence processing is not only about the message, but is conditional on the recipients’ characteristics. Some of this work shows that evidence is less important when values or frames are already present in individuals’ minds (Nisbet and Mooney, 2007). This is in line with George Lakoff’s (2004) argument that new information must resonate with preexisting beliefs and frames in order to be processed. This is tightly related to motivated reasoning theory: individuals look for and process information in a biased manner, selectively processing information to fit in with their prior beliefs (Lord et al, 1979; Kunda, 1990; Kahan et al, 2008; Druckman and Bolsen, 2011) and discounting that which does not (Gaines et al, 2007; Lodge and Taber, 2000; Taber et al, 2009; Liu and Ditto, 2013).
Aside from prior beliefs, other individual characteristics matter for how new information is processed, most notably motivation and ability (Chaiken and Trope, 1999). Motivation refers to the incentives to process new information, understood as the effort invested in developing or acquiring further understanding and the payoff from that effort (Scheufele and Lewenstein, 2005). Ability, on the other hand, relates to the capacity to process new information, drawing on cognitive resources as well as time and material assets that may assist in this endeavour. It concerns the individual’s capacity for, and interest in, giving up predetermined ideas.
Most of this work is based on the dynamics of public opinion information processing. In applying these theories to elites, the key argument presented in this paper is that, due to motivation and ability, empirical information should be more effective in engaging their attention. The idea is not that elites are, on average, more sophisticated than the general public. Rather, this is a question of incentives and external resources in processing information. For instance, in terms of ability, policymakers have more resources to process information: namely, a team of assistants who may simplify and support information treatment. The processing of factual information is a routine activity for policymakers, as they are a key target of information inflows from various sources that use empirical evidence. Equally, they must support their proposals with empirical evidence to make them more consistent and credible (Vis, 2019). Regarding motivation, it is reasonable to suggest that policymakers may be more interested in evidence-supported information. As public representatives, they have incentives to be precisely informed about a diversity of issues – even outside their realm of interest. Processing unreliable information may generate extra costs: not being aware of the facts, or delivering erroneous information to their colleagues or in public speeches, might undermine their reputation and support for their proposals, and may even generate media backlash. In essence, policymakers face higher costs for processing unreliable information (Vis, 2019). This means that policymakers, in order to ensure that they are processing reliable information, must devote more attention to those informational inflows which are backed by empirics. Following this, the first expectation is that, in general, empirically supported information will receive more attention from policymakers (Hypothesis 1).
However, policymakers are not homogeneous: they vary across a broad range of characteristics, among which are their own preferences. This paper argues that individual preferences should be a key moderator of information processing. In this sense, the second hypothesis contends that policymakers who are already supportive of a policy proposal will pay equal attention to both sets of information (Hypothesis 2), and that they will show higher levels of attention and response than those MEPs who do not support the policy (Hypothesis 3).
Experimental design
This paper relies on a field experiment directed at members of the European Parliament and embedded in a real-life initiative. Political scientists are increasingly making use of experiments to address a variety of questions that are difficult to deal with in causal terms using observational data. Initially employed for canvassing (Gerber and Green, 2000), field experiments are increasingly used across a variety of research topics and areas, such as studying the attitudes and behaviour of policymakers (see, for example, Butler and Broockman, 2011; Richardson and John, 2012; Butler, 2014; Vries et al, 2016; Butler et al, 2017).
Experimental setting and context
This field experiment is embedded in a real-life fundraising campaign launched by an advocacy organisation working for the promotion of the basic income debate (name anonymised). The objective of the campaign was to raise sufficient funds to cover the attendance costs of 20 of its members at the European Youth Event, a biennial event relevant to EU institutions and one that all MEPs are aware of. This is, to my knowledge, the second experiment delivered to MEPs (for the first one see Vries et al, 2016). The experiment – the sending of emails – occurred between the 23rd and 24th of April (Monday and Tuesday afternoon) 2018, and it concluded four weeks after it started. Some of the emails bounced; these were revised on the 25th (after all of them had been sent) and resent immediately. To send this high volume of personalised emails, a Google Sheets add-on, Yet Another Mail Merge (YAMM), was used, which also enabled automatic tracking of email opening and response rates. This information was then recoded into the main database. No reminders were sent, for two reasons. First, the objective was to isolate the treatment effect from treatment intensity: sending reminders would mean that MEPs self-select into a number of reminders depending on how much time they take to reply. Secondly, a single contact attempt gives an indication of the attention and response rate, helping us to understand what the threshold of attention is.
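The assignment step described above can be sketched as follows. This is a minimal illustration only, not the study's actual procedure: the function name, seed and MEP roster are hypothetical, and in practice delivery and tracking were handled through the YAMM add-on.

```python
import random

def assign_treatments(meps, seed=2018):
    """Complete randomisation: shuffle the list of MEPs and split it in
    half, so each treatment arm receives (almost) the same number of MEPs.
    A fixed seed makes the illustrative assignment reproducible."""
    rng = random.Random(seed)
    shuffled = list(meps)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {mep: ("evidence" if i < half else "idea")
            for i, mep in enumerate(shuffled)}

# Illustrative roster: the European Parliament had 751 MEPs in 2018
assignment = assign_treatments([f"MEP_{i}" for i in range(751)])
```

With an odd number of recipients, one arm is one MEP larger than the other; any such minor imbalance does not threaten the randomisation.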
Treatment construction
Two types of email content were sent, providing two sets of information, the difference being that one contained evidence cues while the other did not. There are various ways of treating for empirical evidence, as outlined in the theoretical framework. The strategy in this paper is threefold: (1) use academic citations; (2) mention the existence of surveys, reports, and academic and scientific evidence (that is, employ scientific language); and (3) use numbers and figures to support the claims. To construct the treatment, I relied on information from academic papers and reports on the effects of basic income scheme experiments. Specifically, I drew on the evidence in Evelyn Forget’s (2011) paper on the health effects of basic income, a report on the pilot results in Namibia (BIG report, 2009) and public opinion data from Dalia Research (2017). Although these are not basic income experiments per se, they were all designed to test some aspect of basic income. The information is appropriately referenced, and the words operationalising the evidence or idea treatments are highlighted in bold to ensure that the treatment is visible. The distinction between ideas and evidence is not trivial: ideas are about beliefs, notions and perceptions of potential outcomes and courses of action, while evidence is about actual tangible effects, supported by empirical proof.
A key concern is that the treatment should manipulate only the independent variable of interest, without eliciting other considerations from policymakers. Presenting the traditional results of an experiment would involve scientific wording, but also explaining the context of the pilot or the dependent variables measured during the experiment (that is, a specific country and a specific problem like poverty, unemployment or child malnutrition rates). To avoid eliciting thoughts other than what the treatment is intended to manipulate, the country where the experiment or pilot was set is not mentioned. This strategy is unfeasible with regard to the issues that the pilot projects or experiments address, although mentioning these issues may be as problematic as mentioning the country context. For instance, saying that a pilot project improved child malnutrition or life expectancy might elicit thoughts of contexts very different from Europe – where the field experiment is set. However, removing the issues or areas in which pilot projects have provided evidence would be equally problematic, as it would result in information that is too generic (that is, ‘experiments have shown UBI has positive effects’). To tackle this issue, the treatment only mentions issues which have been considered in basic income related pilots and experiments and that are relevant to the EU context (like employment, gender equality or poverty rates). The information included in both emails is the same, but one group receives this information with data and framed as empirical evidence (evidence treatment), while the other receives the same information without the empirical backing (idea treatment). Both treatments therefore contain certain frames as understood by Entman (1993), highlighting aspects of a problem in order to make them more salient and linking them to a solution; however, this is constant across both treatments.
Variance occurs in whether evidence is included or not. The idea treatment excludes all references to evidence, and basic income is portrayed as a ‘concept’, ‘idea’, or ‘notion’. Scientific cues are replaced by terms such as ‘we believe’, ‘advocates defend that’, or ‘the idea of’. No numbers, figures, facts or citations appear, and there is no mention of any evidence or experimentation on the topic. The treatments are included in the email subject line and email body.
Content of the evidence treatment email.
Subject line: bring the EVIDENCE on basic income to the European Parliament
Email body:
Dear {name of MEP},
I am writing to you as the Secretary of Unconditional Basic Income Europe (UBIE), an organization dedicated to promoting knowledge and evidence on basic income policy. We are asking for funds and support to attend the European Youth Event in June 2018, where 20 UBIE ambassadors from 10 different countries will be sent to attend the event together with a main speaker for a roundtable on basic income.
Evidence regarding basic income
Recent studies show that basic income does not reduce labor supply. Equally, empirical evidence supports that basic income improves graduation rates (Forget, 2011). Experiment reports have shown that it reduces crime rate and poverty levels, equally improving health (BIG, 2009). Recent surveys on the matter show that many countries in Europe enjoy at least 50% of support (Dalia Research, 2017) and another survey has shown that 68% of Europeans would vote in favor of basic income (Holmes, 2017).
Bring the facts of Basic Income to the European Youth Event
Help us put the evidence about basic income in the EU agenda by supporting our assistance at the European Youth Event, 2018. This is an annual event that gathers the European youth for three days to share innovative insights. In attending the event, we will be able to participate in a roundtable about basic income policy and meet MEPs to share evidence about basic income.
Content of the idea treatment email.
Subject line: Bring the IDEA of basic income to the European Parliament
Email body:
Dear {name of MEP},
I am writing to you as the Secretary of Unconditional Basic Income Europe (UBIE), an organization destined to promote the idea of basic income policy. We are asking for funds and support to attend the European Youth Event in June 2018, where 20 UBIE ambassadors from 10 different countries will be sent to attend the event together with a main speaker for a roundtable on discussion on the notion of basic income.
The idea of Basic Income
A basic income scheme does not reduce labor supply. A key notion is that basic income improves graduation rates. Similarly, the general understanding is that basic income reduces crime and poverty levels, equally improving health. Today the concept of basic income enjoys much support across many countries in Europe, promoting the view that many Europeans would vote in favor of basic income policy.
Bring the issue of basic income in the EU agenda
Help us put the idea of basic income in the EU agenda by supporting our assistance at the European Youth Event, 2018. This is an annual event that gathers the European youth for three days to share innovative insights. In attending the event, we will be able to participate in a roundtable discussion about basic income policy and meet MEPs to share the notion of a basic income.
After the introduction to the organisation, basic income policy and the fundraising campaign, MEPs are asked whether they would be willing to contribute through the following options: (1) make a donation to the fundraising campaign; (2) meet with representatives from the organisation at the time of the event; or (3) reply to the email saying what they think about basic income generally. These three options provide alternatives with different degrees of cost, whether in time, reputation or money, ranging from minor to major.
Dependent variables and quantities of interest
In this experiment, the main outcome of interest is policymaker attention at the individual level. Attention is a necessary prerequisite for policy support and policymaking, and is defined as the process of accessing and reacting to informational inflows. Two outcomes representing different thresholds of attention are measured. The first is the email opening rate, which represents a low attention threshold, coded as 1 if an MEP opens the email and 0 otherwise.
The second variable of interest, response rate, represents a higher level of attention, coded as 1 if the MEP replies to the email, participates in the fundraising campaign or meets one of the organisational representatives during the event, and 0 otherwise. Because the treatment already appears in the email subject line, it influences the first dependent variable (opening the email), which in turn determines who is exposed to the second treatment (the email body). The subset of policymakers who access the second treatment therefore cannot be considered random. This is a clear case of one-sided non-compliance: not all subjects have taken the treatment (reading the email), because not all subjects have opened it. To deal with this, the estimation strategy for the second dependent variable accounts for non-random one-sided non-compliance (following Gerber and Green, 2012), since in this setting the average treatment effect (ATE) is a biased estimate of the treatment effect. I therefore calculate two other quantities of interest with different properties. The first is the intent-to-treat effect (ITT), which measures the effect of being assigned to treatment on the response variable and is calculated as the proportion of individuals who responded over the total number of individuals assigned to treatment. This measure does not account for who actually received the second treatment (that is, who opened and read the email); however, it is a key quantity of interest because it captures the total effect of treatment assignment on response.
The second quantity of interest is the Complier Average Causal Effect (CACE), which is the treatment effect considering the actual proportion of compliers (individuals who actually opened the email). This is calculated as the ratio of the ITT, over the proportion of compliers, and is a measure of the average treatment effect on the compliers only. Table 1 summarises the different quantities of interest for each dependent variable and how they are calculated.
Dependent variables and quantities of interest.
| Dependent variable | Quantity of interest | Concept | Calculation |
|---|---|---|---|
| Attention rate (proportion of MEPs who open the email) (1) | Average Treatment Effect (ATE) | Treatment effect on outcome | Number of individuals taking the treatment (opening the email) divided by the number of individuals assigned to treatment |
| Response rate (proportion of MEPs who respond in one of the three possible ways) (2) | Intent-to-treat effect (ITT) | Effect of treatment assignment on outcome | Number of individuals who respond divided by the number of individuals assigned to a specific treatment |
| | Complier Average Causal Effect (CACE) | Average treatment effect on the compliers | ITT / proportion of compliers |
| | Proportion of compliers | Proportion of individuals assigned to treatment who receive the treatment | Proportion of individuals assigned to one treatment (evidence or ideas) who open the email, calculated separately for each treatment; this is equivalent to the ATE of dependent variable 1 |
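The quantities in Table 1 can be illustrated with a short sketch. The counts below are hypothetical placeholders, not the study’s raw data; only the formulas (ATE as openers over assigned, ITT as responders over assigned, CACE as ITT divided by the complier share) follow the definitions above.

```python
def ate(openers: int, assigned: int) -> float:
    """Attention (DV 1): share of MEPs assigned to a treatment who open the email."""
    return openers / assigned

def itt(responders: int, assigned: int) -> float:
    """Intent-to-treat effect: share of assigned MEPs who respond."""
    return responders / assigned

def cace(itt_value: float, complier_share: float) -> float:
    """Complier Average Causal Effect: ITT scaled by the complier share,
    where compliers are those who actually opened the email (= ATE of DV 1)."""
    return itt_value / complier_share

# Hypothetical counts for one treatment arm (illustration only)
assigned, openers, responders = 400, 120, 10
attention = ate(openers, assigned)         # 0.30
itt_effect = itt(responders, assigned)     # 0.025
cace_effect = cace(itt_effect, attention)  # 0.025 / 0.30, roughly 0.083
```

Note how the CACE mechanically exceeds the ITT whenever fewer than all assigned MEPs open the email, since the same responses are attributed to a smaller group of compliers.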
Randomisation strategy
Block randomisation on previous policy support is employed as the randomisation strategy. The data used for this purpose come from a vote that took place in the European Parliament in February 2017, in which MEPs voted on whether to include basic income in a committee report on civil law rules on robotics containing recommendations to the EU Commission. Basic income was included in the original proposal, but this was a highly contentious matter, so a separate vote took place to decide on its inclusion. These voting results are used as a proxy for MEPs being in favour of or against basic income policy. This is a reliable measure for two reasons. First, although the vote took place a year before the experiment, it was a serious and important debate, so it is expected that the vote was actively considered. Second, although the proposal was led by the Socialists and Democrats group in the European Parliament, the team that further developed it within the relevant committee was an expert group containing members of other political groups. To check for confounding between prior vote and political group, a chi-squared test of independence was performed on the relationship between voting and group membership (p-value < 2.2e-16). These data are available for 84% of MEPs (640 out of 750). Table 2 shows the treatment distribution across previous policy support, gender and political group.
Treatment distribution across policy support, political group and gender
| | | Idea treatment | Evidence treatment |
|---|---|---|---|
| Policy stance | For | 50% | 50% |
| | Against | 50% | 50% |
| | Abstention | 50% | 50% |
| | Missing | 52% | 48% |
| Political group | ALDE | 51% | 49% |
| | ECR | 49% | 51% |
| | EFDD | 35% | 65% |
| | ENF | 50% | 50% |
| | EPP | 51% | 49% |
| | Greens/EFA | 58% | 42% |
| | GUE/NGL | 48% | 52% |
| | Non-attached | 44% | 56% |
| | S&D | 53% | 47% |
| Gender | Female | 49% | 51% |
| | Male | 51% | 49% |
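The randomisation strategy above can be sketched as follows. This is a minimal illustration assuming a simple 50/50 split within each block of prior policy support; the function and variable names are hypothetical and not the author’s code.

```python
import random

def block_randomise(units, block_of, seed=2018):
    """Block randomisation: shuffle units within each block (for example,
    For/Against/Abstention/Missing on the 2017 vote) and assign half of
    each block to the 'idea' treatment and half to 'evidence'."""
    rng = random.Random(seed)
    blocks = {}
    for unit in units:
        blocks.setdefault(block_of(unit), []).append(unit)
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)
        half = len(members) // 2
        for i, unit in enumerate(members):
            assignment[unit] = "idea" if i < half else "evidence"
    return assignment

# Illustration: 20 hypothetical MEPs who voted 'For' and 20 'Against'
meps = [(f"mep{i}", "For" if i < 20 else "Against") for i in range(40)]
assignment = block_randomise([name for name, _ in meps],
                             block_of=dict(meps).__getitem__)
```

Because assignment is balanced within each stratum, treatment shares stay close to 50/50 for every level of prior support, as in Table 2 (odd-sized blocks explain small departures such as 52/48).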
Ethical considerations
Before moving to the results, I briefly discuss the ethical standards of the experiment. No deception was involved in this field experiment, either in the information displayed or in the initiative itself. All the scientific information displayed in the evidence-based email was taken from published studies and reports, and all information was appropriately referenced within the email. Most importantly, the initiative in which this experiment took place was a real-life campaign that would have taken place even in the absence of the experiment. Any actions promised during the field experiment (for instance, agreeing to meet with MEPs) were carried out, again avoiding deception.
Results
Attention and response rates across treatments
Overall, the rate of attention is very low (16.95% of MEPs opened the email), but there are important differences across the two treatments, as Figure 1 shows. The attention rate (email opening) for the ideas treatment is 30.73%, while for the evidence treatment this drops to 3.13%. More importantly, the differences in attention rate across treatments are statistically significant (p-value < 2.2e-16). Results suggest that the first hypothesis, that evidence gathers more attention than ideas, should be rejected. On the contrary, these findings suggest that ideas gather more attention than evidence-based information, at least at the announcement stage, that is, the stage where the contents of the email are announced in the subject line (I discuss this in detail in the scope conditions and later in the results).
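The significance of this difference in opening rates can be checked with a standard two-sided, pooled-variance two-proportion z-test. The sketch below uses illustrative counts that roughly match the reported rates (about 31% versus 3% in arms of a few hundred MEPs each), not the study’s exact data:

```python
from math import sqrt, erf

def two_proportion_test(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions,
    using the pooled proportion for the standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative counts: 118/384 openers (idea) vs 12/384 (evidence)
z, p = two_proportion_test(118, 384, 12, 384)
# With a gap this large, p falls far below any conventional threshold
```

The same routine applies to the subgroup comparisons reported later (for example, attention rates among MEPs in favour versus against the policy).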

Attention and response rates throughout the two treatments.
Citation: Evidence & Policy 17, 3; 10.1332/174426420X16017817089543

Response rates are also very low generally: 1.43% of MEPs replied to the email and, among those who opened it, a total of 8.46% responded. For the response variable, however, there are significant differences in both the ITT and the CACE (see Table 3). The ITT response rate for the ideas treatment is 2.60%, while for the evidence treatment it is 0.26%, indicating a significant effect of being assigned to treatment on the response rate (p-value: 0.009186). Accounting for the proportion of compliers, that is, those who open the email, significant differences remain: the CACE for the ideas treatment is 9.30%, while it drops to 0.85% for the evidence treatment.
Results for the attention and response rates measuring different quantities of interest.
| Dependent variable | Quantity of interest | Idea treatment | Evidence treatment |
|---|---|---|---|
| Attention | ATE | 30.73% | 3.13% |
| Response | ITT | 2.60% | 0.26% |
| Response | CACE | 9.30% | 0.85% |
These results suggest that the announcement of ideas is more attractive to policymakers than the announcement of evidence. This finding comes from the attention rate, that is, the rate of MEPs who opened the emails depending on the treatment to which they were assigned. Results for the response rate are more mixed, although both the ITT and the CACE indicate that ideas generate more responses than evidence does. Overall, we reject the first hypothesis that policymakers are more attentive and respond more to evidence: the announcement of ideas gathers more attention, and information type does not favour evidence in terms of response rate either. This finding suggests that empirical evidence might not be as important as otherwise thought, at least for gathering attention at the individual level. Far from indicating that evidence is irrelevant to the policy process or to the agenda-setting stage, these results convey that when policymakers are not actively looking for evidence, ideas may seem more attractive to them in the first instance. The implications of these findings are examined further in the discussion section.
Differences across levels of support
The second hypothesis concerned the policy position of MEPs as a key moderator of the attention and response to information. If policymakers were already in favour of this policy, then they would be equally likely to attend and respond to both types of information. Results suggest this hypothesis should be rejected.
Figure 2 shows that, generally, attention and response levels across treatments are similar for MEPs regardless of their policy stance.¹ Results also suggest that the rate of attention is higher for those MEPs who opposed UBI, for the idea treatment only, although these differences are not statistically significant. Overall, the difference in rate of attention between those in favour and those against is not statistically significant. These results suggest that the type of information, ideas versus evidence, does not have a different effect across policy stances.

Attention and response rates across treatments and levels of support
Citation: Evidence & Policy 17, 3; 10.1332/174426420X16017817089543

Contrary to the second hypothesis, results show that even those in favour display different rates of attention across the two treatments, and these differences are statistically significant (p-value: 1.3e-07). When information is presented as ideas, the attention rate of supportive MEPs is 29%, whereas with evidence it drops to 4.9%. For those MEPs who oppose the policy proposal, the differences across treatments in attention rates are also statistically significant (p-value: 1.814e-14): 1.8% for the evidence treatment and 35.3% for the idea treatment. Taking the ITT for the response rate, significant differences appear for those in favour (p < 0.05), but not for those against (p > 0.05). Similarly, among those in favour, the CACE for the response rate is 17% for the ideas treatment and 0% for evidence. On the contrary, those against respond more to evidence (33%) than to ideas (7%).
The third hypothesis argued that those in favour would show generally higher rates of attention and response across treatments than those against the policy proposal. To ensure that these differences are due to the treatment, and as a robustness check, I perform two-proportion tests on attention and response rates for those in favour of and against the policy proposal, independently of the treatment they receive. Results indicate that the differences are not significant. In terms of attention rates, the differences between those in favour of and those against basic income are not statistically significant (p-value: 0.6592), with 17% attention among those in favour and 19% among those against. In terms of response rates, the differences are not statistically significant either (p-value: 0.5861), although response rates are lower, with 2.5% of those in favour and 1.5% of those against responding. Finally, attention and response rates are explored across other variables, namely gender and political group. As Table 4 shows, attention and response rates are consistently higher for the ideas treatment across all groups.
Attention and response rates across policy support, gender and political groups.
| Covariate | Level | Dependent variable | Quantity of interest | Idea | Evidence |
|---|---|---|---|---|---|
| Policy support | In favour | Attention | ATE | 0.29 | 0.05 |
| | Against | | | 0.35 | 0.02 |
| | In favour | Response | ITT | 0.05 | 0.00 |
| | Against | | | 0.02 | 0.01 |
| | In favour | Response | CACE | 0.17 | 0.00 |
| | Against | | | 0.07 | 0.33 |
| Gender | Male | Attention | ATE | 0.32 | 0.02 |
| | Female | | | 0.30 | 0.03 |
| | Male | Response | ITT | 0.02 | 0.00 |
| | Female | | | 0.03 | 0.00 |
| | Male | Response | CACE | 0.05 | 0.00 |
| | Female | | | 0.11 | 0.11 |
| Political group | ALDE | Attention | ATE | 0.39 | 0.00 |
| | ECR | | | 0.28 | 0.00 |
| | EFDD | | | 0.07 | 0.04 |
| | ENF | | | 0.33 | 0.00 |
| | EPP | | | 0.35 | 0.04 |
| | Greens/EFA | | | 0.24 | 0.08 |
| | GUE/NGL | | | 0.38 | 0.00 |
| | NI | | | 0.25 | 0.00 |
| | S&D | | | 0.27 | 0.05 |
| | ALDE | Response | ITT | 0.03 | 0.00 |
| | ECR | | | 0.03 | 0.00 |
| | EFDD | | | 0.00 | 0.00 |
| | ENF | | | 0.05 | 0.00 |
| | EPP | | | 0.02 | 0.01 |
| | Greens/EFA | | | 0.00 | 0.00 |
| | GUE/NGL | | | 0.00 | 0.00 |
| | NI | | | 0.00 | 0.00 |
| | S&D | | | 0.06 | 0.00 |
| | ALDE | Response | CACE | 0.07 | 0.00 |
| | ECR | | | 0.10 | 0.00 |
| | EFDD | | | 0.00 | 0.00 |
| | ENF | | | 0.14 | 0.00 |
| | EPP | | | 0.05 | 0.25 |
| | Greens/EFA | | | 0.00 | 0.00 |
| | GUE/NGL | | | 0.00 | 0.00 |
| | NI | | | 0.00 | 0.00 |
| | S&D | | | 0.22 | 0.00 |
Overall, results suggest the three hypotheses presented in this paper should be rejected. Evidence does not gather more attention than ideas (H1): in fact, the announcement of ideas gathers significantly more attention than evidence. In terms of response rates, accounting for those who already opened the email (ATE), there are no significant differences between treatments. The second hypothesis should also be rejected, as there are no significant differences in treatment effect across policy support. This is also the case for the third hypothesis regarding the expected higher levels of attention/response from those MEPs who support the policy in question. Before moving on to the implications of these findings, I first discuss the scope conditions of what we have learnt so far.
Scope conditions
Before moving on to the discussion and conclusion of the paper I consider the scope conditions of these findings. A first question that may arise is who is processing the information. It may be that MEPs’ assistants deal with informational inflows rather than MEPs themselves. This cannot be addressed unless MEPs are asked directly, which could not be done in the context of a field experiment and would have to be embedded in a survey. I argue, however, that whether it is MEPs or their assistants (or a combination of both) who open and respond to emails does not compromise the findings of this study. On the contrary, setting the experiment in a real-life context increases the external validity of the findings. The objective here is to analyse the impact of empirical information in the everyday setting and workings of political elites. Whether this occurs through assistants or directly through MEPs is not a major concern, as by design the effect of the treatment can be isolated regardless of who processes the information: results concern the effect of information type on the communication-processing dynamics of policymakers.

A second issue relates to the importance of understanding policymaker attention in a context of micro-communication. One might argue that analysing the impact of the first treatment within the subject line is not externally relevant, because the headline is a very scarce informational space and the treatment manipulation here is only one word. Yet, strikingly, there is a treatment effect from the email headline, despite how subtle the manipulation is.
However, regardless of treatment effects, studying informational dynamics in micro-communications is highly relevant to the nature of communication in the current context, as it constitutes a common form of information processing by elites and public opinion alike. Nowadays, elites receive information through many digital and social media platforms and communication channels where the contents are very limited. Key examples include several social media platforms where post length is limited to a small number of characters, but it is also applicable to how digital mainstream media is processed: citizens and political elites increasingly access news media through mobile devices, which mainly show a headline and the first sentence of a news item. In this context, I argue that the processing of micro-informational signals is becoming increasingly relevant.
It is key to acknowledge that this experiment measures both the impact of different types of information and the impact of the announcement of this information, as the latter is present in the email headline. In this context of micro-communications, understanding the impact of the announcement of information does not compromise external validity and is very relevant to the dynamics of information processing by policymakers. At the very least, this reflects a fundamental part of political elites’ communication processing, which occurs through email. It is also relevant to other platforms: selecting which news item to read first requires attending to the headline; selecting which Twitter links or threads to read in detail first requires reading the initial 140 characters. One can think of many forms in which a reduced piece of information announcing extended contents must be read, and must influence policymaker attention, before the rest of the piece is accessed. Finally, to lend further credibility to the effect of information type, only one informational signal was sent out (one email, with no reminders), to isolate the effect of the type of information fully without needing to account for informational intensity across policymakers.
A third issue that may arise is whether these results can be generalised to policy proposals or areas other than basic income. This is likely to be the case for various reasons. First, basic income is a topic that is discussed both in terms of general ideas and in terms of empirical evidence, so there are no strong a priori reasons why this policy issue should receive more attention when framed as ideas rather than evidence. The subject matter also has the advantage of lacking an ideological or political group/party champion, which enables us to rule out other confounders. Even so, this experiment accounts for other variables that might moderate or mediate how information is processed, including political group and previous policy support. By design, the experiment isolates the effect of the treatment while controlling for topic characteristics that might affect treatment effects, adding credibility to the claim that the treatment effect corresponds to information type and not to topic.
A fourth issue concerns the generalisability of the findings to other actors. A first question is whether the results would hold for other policy actors, such as government ministers or civil servants, who are the key actors selecting policy instruments. Future research could explore whether different political actors respond to empirical evidence distinctly. However, I argue that there are no prior reasons to expect such a difference. All political actors are human individuals with similar cognitive biases and incentives to process or discard information. Moreover, MEPs have a very important function in the EU policy process, namely to set the agenda and co-legislate, so selecting policy instruments is also part of their remit. It is true, however, that in this experiment they are not required to make policy as such, which may indeed influence how they process evidence and ideas. It may be that when political actors are preparing policy proposals they are more responsive to evidence, and less so when they are not, something suggested by the interviews carried out with MEPs after the experiment. This is something future research should examine in more depth. All in all, the findings of this experiment are highly informative for the agenda-setting stage.
Finally, it is important to acknowledge the wider contribution this paper makes to the field of experiments on political elites. I show that, in a context of increasing concern about attrition in surveys and experiments with political elites, using real-life initiatives to embed experiments is an effective and feasible way to accumulate knowledge in this research field. The ideal experimental setting would test many possible independent variables; in field experiments, however, this poses logistical challenges and risks compromising the statistical findings. Given the difficulty of testing many features in a single experiment, this work is intended only as a starting point for future research analysing how political elites process evidence-based information.
Discussion and conclusion
The results in this paper show that empirical information does not gather more attention than ideas. Policymakers pay more attention to the announcement of ideas-based information, and information type also appears to affect policymaker response, although less strongly. Other factors, such as support for the policy proposal in question or political group, do not play an important role in attention and response dynamics at the individual level. In essence, the type of information has a clear effect on attention, and a weaker one on the follow-up (response rate).
There are two possible accounts of why ideas-based information is more popular. On the one hand, it may be a story of interest: in the absence of prior demand, policymakers might not be interested in the evidence per se and may be more attentive to the inspiration that comes from ideas. The interviews carried out with some policymakers after the experiment suggest precisely this: when picking up proposals, ideas – in the form of conference speeches or books – are more attractive, while evidence is actively sought once an idea has been picked up and is being developed as a proposal by an MEP. This finding is in line with previous work on policy evaluations – a specific type of evidence – which shows that policymakers rely on this kind of evidence mainly for accountability purposes, rather than to fulfil legislative or agenda-setting functions (Jacob et al, 2015; Speer et al, 2015; Borrás and Højlund, 2015; Bundi, 2016). In this sense, results suggest that the agenda-setting stage is not where evidence matters most. Relatedly, other work shows that policymakers tend to perceive evidence as dull, composed of raw data and of little interest (Lomas and Brown, 2009).
Another plausible explanation, complementary rather than contrary to the one above, is that ideas are more attractive than evidence because of the information-processing costs of the latter. Evidence is more demanding to process, not only because of comprehension mechanisms but also because of the associated fact-checking costs. Such information also carries an action cost: if policymakers receive evidence on an issue, they are likely to perceive that they must act on it, or at the very least elaborate more consistent responses. Previous work has shown that when policymakers receive more elaborate informational inputs, they produce more elaborate responses (Richardson and John, 2012). In this context, policymakers might choose to ignore evidence signals if they are not prepared to act on them or to take the time to elaborate their response accordingly.
In essence, results convey that policymaker attention is biased towards ideas rather than evidence at the agenda-setting stage of the policy process: the announcement of ideas increases attention and response rates compared to evidence. An immediate implication is that evidence is not everything when communicating with policymakers. Special attention should be devoted to framing, to connection with broader ideas, and to the implications of the evidence. Evidence should be packaged and framed so that it looks appealing, attractive and straightforward to process – key tenets behind ideas. Furthermore, I show that there is room for future research in this field, and that this will be a necessary pathway to improving evidence-based policymaking. From a normative point of view, one would hope that evidence would be equally attractive to policymakers with or without prior demand, as this information may serve as a more objective indicator of both problems and solutions. Future studies should examine how other variables interact with information type in achieving policymaker attention: issue saliency, issue characteristics, message source, intensity of contact and framing are only some examples that future research could investigate. Information is only one of the key variables, and future studies could look at how it interacts with other important ones such as ideology, institutions and interests (Weiss, 1995). Overall, this paper has sought to open a research line on how to draw the attention of policymakers to evidence when they do not necessarily have a prior request for it, and shows that making use of pre-existing, real-life initiatives is a feasible and effective way to accumulate knowledge and advance this research field.
Note
Note that this graph includes only the MEPs who voted for or against basic income in the European Parliament in 2017. Those with missing votes or who abstained are excluded from the graph, but I do not find significant differences for them.
Acknowledgements
I thank Laura Chaqués-Bonafont, Camilo Cristancho, Julie Sevenans, Jordi Muñoz, Andy Eggers, Spyros Kosmidis and the anonymous referees for extremely helpful comments on the manuscript and/or during the revision process. Previous versions of this manuscript were presented at the ECPR General Conference in Wrocław, 2019; the ICPP Conference in Montréal, 2019; SISP in Lecce, 2019; and the CAP annual conference in Budapest, 2019. I thank all the attendees and participants for their thoughtful comments and feedback.
Conflict of interest
The author declares that there is no conflict of interest.
References
Baesler, J.E. and Burgoon, J.K. (1994) The temporal effects of story and statistical evidence on belief change, Communication Research, 21(5): 582–602.
Barker, D.C. (2005) Values, frames, and persuasion in presidential nomination campaigns, Political Behavior, 27(4): 375–94.
Baumgartner, F.R., De Boef, S. and Boydstun, A. (2008) The Decline of the Death Penalty and the Discovery of Innocence, Cambridge: Cambridge University Press.
Baumgartner, F.R. and Jones, B. (2015) The Politics of Information: Problem Definition and the Course of Public Policy in America, Chicago, IL: University of Chicago Press.
BIG Report (2009) Basic Income Grant Pilot Project Assessment Report, ISBN: 978-99916-842-4-6.
Borrás, S. and Højlund, S. (2015) Evaluation and policy learning, European Journal of Political Research, 54: 99–120, doi: 10.1111/1475-6765.12076.
Bundi, P. (2016) What Do We Know About the Demand for Evaluation? Insights From the Parliamentary Arena, American Journal of Evaluation, 37(4): 522–541, doi: 10.1177/1098214015621788.
Bouwen, P. (2002) Corporate lobbying in the European Union: the logic of access, Journal of European Public Policy, 9(3): 365–90.
Bouwen, P. (2004) Exchanging access goods for access: a comparative study of business lobbying in the European Union Institutions, European Journal of Political Research, 43(3): 337–69.
Butler, D.M. and Broockman, D.E. (2011) Do politicians racially discriminate against constituents? A field experiment on state legislators, American Journal of Political Science, 55(3): 463–77.
Butler, D.M. (2014) Representing the Advantaged: How Politicians Reinforce Inequality, Cambridge: Cambridge University Press.
Butler, D., Volden, C., Dynes, A. and Shor, B. (2017) Ideology and learning in policy diffusion: experimental evidence, American Journal of Political Science, 61(1): 37–49.
Cairney, P. (2016) The Politics of Evidence-Based Policy Making, London: Palgrave Macmillan.
Cairney, P. and Zahariadis, N. (2016) Multiple streams approach: a flexible metaphor presents an opportunity to operationalize agenda setting processes, in N. Zahariadis and M. Buckman (eds) Handbook of Public Policy and Agenda-Setting, Cheltenham: Edward Elgar.
Ceron, A. and Negri, F. (2016) The ‘social side’ of public policy: monitoring online public opinion and its mobilization during the policy cycle, Policy & Internet, 8(2): 131–47.
Chaiken, S. and Trope, Y. (eds) (1999) Dual-Process Theories in Social Psychology, New York: Guilford Press.
Chaqués-Bonafont, L. (2019) The agenda setting capacity of global networks, in D. Stone and K. Moloney (eds) Oxford Handbook of Global Policy and Transnational Administration, Oxford: Oxford University Press.
Dahlstrom, M.F. (2010) The role of causality in information acceptance in narratives: an example from science communication, Communication Research, 37(6): 857–75.
Dalia Research (2017) 31% of Europeans Want Basic Income as Soon as Possible, https://daliaresearch.com/31-of-europeans-want-basic-income-as-soon-as-possible/.
Druckman, J.N. and Bolsen, T. (2011) Framing, motivated reasoning, and opinions about emergent technologies, Journal of Communication, 61(4): 659–88.
Duchon, D., Dunegan, K.J. and Barton, S.L. (1989) Framing the problem and making decisions: the facts are not enough, IEEE Transactions on Engineering Management, 36(1): 25–27.
Entman, R.M. (1993) Framing: toward clarification of a fractured paradigm, Journal of Communication, 43(4): 51–58.
Forget, E. (2011) The town with no poverty: the health effects of a Canadian guaranteed annual income field experiment, Canadian Public Policy, 37(3): 283–305.
Gaines, B.J., Kuklinski, J.H., Quirk, P.J., Peyton, B. and Verkuilen, J. (2007) Same facts, different interpretations: partisan motivation and opinion on Iraq, Journal of Politics, 69(4): 957–74.
Gastel, B. (1983) Presenting Science to the Public, Philadelphia, PA: ISI Press.
Gerber, A.S. and Green, D.P. (2000) The effects of canvassing, telephone calls, and direct mail on voter turnout: a field experiment, American Political Science Review, 94(3): 653–63.
Gerber, A.S. and Green, D.P. (2012) Field Experiments: Design, Analysis, and Interpretation, New York: W. W. Norton.
Gilardi, F. (2010) Who learns from what in policy diffusion processes?, American Journal of Political Science, 54(3): 650–66.
Gilardi, F. and Wasserfallen, F. (2017) Policy Diffusion: Mechanisms and Practical Implications, Working Paper, https://www.fabriziogilardi.org/resources/papers/Gilardi-Wasserfallen-2017.pdf.
Gray, V. (1973) Innovation in the States: a diffusion study, American Political Science Review, 67(4): 1174–85.
Hall, R. and Deardorff, A.V. (2006) Lobbying as legislative subsidy, American Political Science Review, 100(1): 69–84.
Hardie, J. and Cartwright, N. (2012) Evidence-Based Policy: A Practical Guide to Doing It Better, Oxford: Oxford University Press.
Heinze, T. (2011) Mechanism-based thinking on policy diffusion: a review of current approaches in political science, KFG Working Papers, 34, https://www.polsoz.fu-berlin.de/en/v/transformeurope/publications/working_paper/wp/wp34/WP_34_Heinze.pdf.
Holmes, A. (2017) 31% of Europeans Want Basic Income as Soon as Possible, Dalia, May 3, https://daliaresearch.com/31-of-europeans-want-basic-income-as-soon-as-possible/.
Jacob, S., Speer, S. and Furubo, J.-E. (2015) The institutionalization of evaluation matters: updating the international atlas of evaluation 10 years later, Evaluation, 21(1): 6–31.
Jones, B.D. (2017) Behavioral rationality as a foundation for public policy studies, Cognitive Systems Research, 43: 63–75.
Jones, B.D. and Baumgartner, F. (2005) The Politics of Attention, Chicago, IL: University of Chicago Press.
Kahan, D., Braman, D., Slovic, P., Gastil, J. and Cohen, G. (2008) Cultural cognition of the risks and benefits of nanotechnology, Nature Nanotechnology, 4(2): 87–90.
Kangas, O.E., Niemelä, M. and Varjonen, S. (2014) When and why do ideas matter? The influence of framing on opinion formation and policy change, European Political Science Review, 6(1): 73–92.
Kazoleas, D. (1993) The impact of argumentativeness on resistance to persuasion, Human Communication Research, 20(1): 118–37.
Kingdon, J.W. (1984) Agendas, Alternatives, and Public Policies, Boston, MA: Little, Brown.
Kopfman, J.E., Smith, S.W., Ah Yun, J.K. and Hodges, A. (1998) Affective and cognitive reactions to narrative versus statistical evidence organ donation messages, Journal of Applied Communication Research, 26(3): 279–300.
Kunda, Z. (1990) The case for motivated reasoning, Psychological Bulletin, 108(3): 480–98.
Lakoff, G. (2004) Don't Think of an Elephant! Know Your Values and Frame the Debate, White River Junction, VT: Chelsea Green.
Liu, B.S. and Ditto, P.H. (2013) What Dilemma? Moral evaluation shapes factual belief, Social Psychological and Personality Science, 4(3): 316–23.
Lodge, M. and Taber, C.S. (2000) Three steps toward a theory of motivated political reasoning, in A. Lupia, M. McCubbins and S. Popkin (eds) Elements of Reason: Cognition, Choice, and the Bounds of Rationality, New York: Cambridge University Press, pp 183–213.
Lomas, J. and Brown, D. (2009) Research and advice giving: a functional view of evidence-informed policy advice in a Canadian ministry of health, Milbank Quarterly, 87(4): 903–26.
Lord, C.G., Ross, L. and Lepper, M.R. (1979) Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence, Journal of Personality and Social Psychology, 37(11): 2098–109.
Mondak, J. (1993) Source cues and policy approval: the cognitive dynamics of public support for the Reagan agenda, American Journal of Political Science, 37(1): 186–212.
Nisbet, M.C. and Mooney, C. (2007) Science and society: framing science, Science, 316(5821): 56.
Nutley, S., Walter, I. and Davies, H. (2007) Using Evidence: How Research Can Inform Public Services, Bristol: Policy Press.
Obinger, H., Schmitt, C. and Starke, P. (2013) Policy diffusion and policy transfer in comparative welfare state research, Social Policy and Administration, 47(1): 111–29.
O’Brien, D. (2013) Drowning the deadweight in the rhetoric of economism: what sport policy, free swimming, and EMA tell us about public services after the crash, Public Administration, 91(1): 69–82.
Reinard, J.C. (1988) The empirical study of the persuasive effects of evidence: the status after fifty years of research, Human Communication Research, 15(1): 3–59.
Richardson, L. and John, P. (2012) Who listens to the grass roots? A field experiment on informational lobbying in the UK, British Journal of Politics and International Relations, 14(4): 595–612.
Scheufele, D.A. and Lewenstein, B.V. (2005) The public and nanotechnology: how citizens make sense of emerging technologies, Journal of Nanoparticle Research, 7: 659–67.
Sevenans, J. (2017) Why Political Elites Respond to the Media: the Micro-Level Mechanisms Underlying Political Agenda-Setting Effects, PhD Thesis, Antwerpen: Universiteit Antwerpen.
Sevenans, J., Walgrave, S. and Epping, G.J. (2016) How political elites process information from the news: the cognitive mechanisms behind behavioral political agenda-setting effects, Political Communication, 33(4): 605–27.
Sevenans, J., Walgrave, S. and Vos, D. (2015) Political elites’ media responsiveness and their individual political goals: a study of national politicians in Belgium, Research and Politics, 2(3): 1–7.
Sides, J. (2015) Stories or science? Facts, frames, and policy attitudes, American Politics Research, 44(3): 387–414.
Simon, H.A. (1985) Human nature in politics: the dialogue of psychology with political science, American Political Science Review, 79(2): 293–304.
Speer, S., Pattyn, V. and De Peuter, B. (2015) The growing role of evaluation in parliaments: holding governments accountable, International Review of Administrative Sciences, 81(1): 37–57.
Stone, D.A. (1989) Causal stories and the formation of policy agendas, Political Science Quarterly, 104(2): 281–300.
Taber, C.S., Cann, D. and Kucsova, S. (2009) The motivated processing of political arguments, Political Behavior, 31(2): 137–55.
Tal, A. and Wansink, B. (2014) Blinded with science: trivial graphs and formulas increase persuasiveness and belief in product efficacy, Public Understanding of Science, 25(1): 117–25.
Taylor, S.E. and Thompson, S.C. (1982) Stalking the elusive ‘vividness’ effect, Psychological Review, 89(2): 155–81.
Tufte, E. (2001) The Visual Display of Quantitative Information, Cheshire, CT: Graphics Press.
Vis, B. (2019) Heuristics and political elites’ judgment and decision-making, Political Studies Review, 17(1): 41–52.
De Vries, C., Dinas, E. and Solaz, H. (2016) You have got mail! How intrinsic and extrinsic motivations shape legislator responsiveness, Working paper 140, IHS Political Science Series.
Walgrave, S., Sevenans, J., Van Camp, K. and Loewen, P. (2018) What draws politicians’ attention? An experimental study of issue framing and its effect on individual political elites, Political Behavior, 40(3): 547–69.
Walter, I., Nutley, S.M. and Davies, H.T.O. (2005) What works to promote evidence-based practice? A cross-sector review, Evidence & Policy, 1(3): 335–64.
Ward, V., House, A. and Hamer, S. (2009) Knowledge brokering: exploring the process of transferring knowledge into action, BMC Health Services Research, 9: 12.
Weiss, C.H. (1995) Have we learned anything new about the use of evaluation? American Journal of Evaluation, 19(1): 21–33.
Wood, M. (2015) Depoliticisation, resilience and the Herceptin post-code lottery crisis: holding back the tide, British Journal of Politics and International Relations, 17(4): 644–64.
Zahariadis, N. (2016) Setting the agenda on agenda setting: definitions, concepts, and controversies, in N. Zahariadis and M. Buckman (eds) Handbook of Public Policy and Agenda-Setting, Cheltenham: Edward Elgar.
Zebregs, S., Van den Putte, B., Neijens, P. and De Graaf, A. (2014) The differential impact of statistical and narrative evidence on beliefs, attitude, and intention: a meta-analysis, Health Communication, 30(3): 282–89.