Behavioural insights teams in practice: nudge missions and methods on trial

1 University of Queensland, Australia

Abstract

Behavioural and experimental projects have become increasingly popular with policymakers. Behavioural insights teams have used several policy design and implementation tools drawn from behavioural sciences, especially randomised controlled trials, to test the design of ‘nudge’ interventions. This approach has attained discursive legitimacy in government agencies seeking to use the best available evidence for behaviourally informed, evidence-based policy innovation. We examine the practices of governmental behavioural insights teams in Australia, drawing on two research projects that included interviews with key personnel. We find that teams make strong commitments to using and promoting randomised controlled trials in government policy innovation. Nevertheless, some members of these teams are beginning to appreciate the constraints of relying solely on randomised controlled trials in the development of behavioural public policy. We conclude that while an initial focus on rigorous trials helped behavioural insights teams establish themselves in policymaking, strict adherence may represent a risk to their long-term growth and relevance.

Introduction: the rise of behavioural insights in government

Understanding how incentives and constraints influence the behaviour of citizens and social groups has always been fundamental to the theory and practice of good governance and regulatory systems. The distinctive contribution of behavioural sciences in recent decades (based especially in social psychology, criminology, marketing and economics) has been to sharpen our appreciation that habitual and emotive factors shape the actual choices of citizens (Kahneman, 2011; Halpern, 2015; Thaler, 2016). Citizens’ responses to the rules and options devised by lawmakers are not only diverse, but also often against the interests of the citizens themselves. Thus, empirical research to improve policymakers’ understanding of citizen behaviour was championed as an evidence-based pathway towards improving policies and service systems. These behavioural studies gradually became more influential as the cost-effectiveness of mainstream programmes came under heavy criticism.

In government, the application of this particular framework of behavioural science is often undertaken by specialised teams. These teams have rapidly emerged all over the world – in Europe, the US, Canada, Japan, Singapore, Saudi Arabia, Peru, Australia, New Zealand and many more countries (OECD, 2017). There is no dominant model across these diverse contexts. Behavioural teams have been working in many policy domains and with a variety of partner organisations across government, industry and civil society. A recent World Bank report has highlighted the different ways in which ten countries have implemented behavioural policies. These behavioural initiatives included both centralised and decentralised organisational models, and were managed through either formal arrangements or more networked approaches. Some initiatives focused on the early stages in policy design, while others considered implementation issues and programme evaluation (World Bank, 2018). This variety illustrates both the broad intellectual foundations of the behavioural sciences and the wide spectrum of possible applications across issues and problems.

Among the variety of initiatives inspired by the behavioural sciences in government, a notable model that attracted global attention was the UK Behavioural Insights Team (BIT UK), initially developed in the UK Cabinet Office. This team, led by David Halpern, has become a distinctive and successful model; indeed, in 2014, it evolved into a not-for-profit consulting company (Halpern, 2015; John, 2018). The portfolio of projects undertaken by the BIT UK, now working with partners in several countries, has attracted both enthusiastic emulation and critical commentary. However, there has been very little research about how such teams actually operate, how they select projects that might demonstrate social benefits and how their commitment to rigorous testing methodologies not only shapes, but also limits, their effectiveness. Recent studies have noted that behavioural insights (BI) team members typically highlight their contribution to building a stronger evidence base for analysing policy options (Einfeld, 2019; Feitsma, 2019; Ball and Feitsma, forthcoming); however, the more complex perspectives of team members within their varied institutional contexts now deserve deeper attention.

This article explores how the BIT UK model was adapted and developed in Australia, with special attention to the everyday practices and choices of staff in the BI teams located in central agencies. Drawing on case studies of three BI teams in Australia, we examine how the members of these teams describe their mission and mandate, their selection of problems or projects, their research methods, and the interplay between their professed scientific rigour and the wider political and organisational contexts, such as working with stakeholder organisations on matters of vital interest to government ministers. We conclude that while an initial focus on rigorous empirical research helped BI teams establish themselves in policymaking, strict adherence to this approach may represent a risk to their long-term growth and relevance.

Background: the emergence of BI teams

Initially set up in the UK Cabinet Office, the BIT UK developed a large programme of projects and extensive publications, and later established affiliated teams working with government and community organisations in Manchester, New York, Sydney, Wellington and Singapore.1 Drawing on their own publications and broader studies, such as those by the World Bank (2018) and the Organisation for Economic Co-operation and Development (OECD, 2017), it is possible to see that the work of the BIT UK, and the many teams informed by their model, encompasses several different concepts and policy tools. It is arguably ‘best seen as a combined body of policy projects, procedures, instruments, organisational designs, disciplinary outlooks, theoretical ideas, evaluation methodologies and ethical orientations’ (Ball and Feitsma, forthcoming: 2). However, there are three key influences on their work: behavioural economics, the use of randomised controlled trials (RCTs) and nudges.

The BIT UK leans heavily on the psychological research underlying behavioural economics (Halpern, 2015; Thaler, 2016; Congdon and Shankar, 2018). In the 1970s and 1980s, psychologists Daniel Kahneman and Amos Tversky catalogued many systematic errors and predispositions in human judgement and decision-making, often in contrast to what was expected in standard economic models. These typical biases in decision-making included loss aversion, framing and the base-rate fallacy, in addition to the use of heuristics such as anchoring, availability and representativeness. The idea that there were systematic ways in which human behaviour deviated from that of an economically rational actor raised questions about existing approaches to economics, as well as policy design (Kahneman, 2011; Thaler, 2016; Congdon and Shankar, 2018), and, in time, this concern made its way into government thinking (Thaler and Sunstein, 2008; Halpern, 2015).

A popular approach was to focus on how to design choices (‘choice architecture’) in a way that supports and encourages people to make better decisions but without restricting choice among alternatives (Thaler and Sunstein, 2008). Nudges use the findings of behavioural science to inform the redesign of choice architecture in order to achieve better policy outcomes without forbidding any options or significantly changing incentives. These non-coercive techniques have been applied in diverse settings, including the design of physical environments, the layout of forms and the options presented on a service’s webpage. The positive results generated by BI teams using such techniques are often presented as evidence that small behaviourally informed ‘tweaks’ can provide cost-saving options for policymakers looking to change behaviour (Sanders and Halpern, 2014; Halpern, 2015).

Behavioural economics further influenced BI through its preferred methodology: the use of experiments and RCTs. By its nature, behavioural economics focuses on detailed empirical results rather than the development of an overarching grand theory. This focus on empirical testing led to highlighting projects that demonstrated direct policy implications, such as the Save More Tomorrow trial (Thaler and Benartzi, 2004) and the organ donation register trial (Abadie and Gay, 2006). These direct policy implications meant that the results were easily translated for policy implementation, and able to demonstrate measurable success (BIT UK, 2013; Benartzi and Thaler, 2013). In fact, between 2010 and 2015, more than 400 trials across a wide array of policies were run using RCTs or other experimental methods, and the promotion of these methods has been a core part of BIT messaging (Sanders and Halpern, 2014; BIT UK, 2015, 2016, 2017; Halpern, 2015). The OECD (2017: 48) report on Behavioural insights and public policy states that BI is ‘first and foremost an approach to policy making that embeds experimentation into the development of policy and regulation’.

To explain very briefly, RCTs represent a particular type of experimental method to test the effects of different measures. To measure the impact of an intervention, people in a social category are randomly assigned to treatment and control groups. The treatment group receives the new intervention, while the control group does not. Analysis of findings is intended to establish causal links between the intervention and the observed behavioural outcomes. A distinctive feature of the RCT approach is the random allocation of similar people into either a control or treatment arm of the study (Chalmers, 2003: 29). Trials can take place in the lab or in the field, and results can be collected through surveys or other forms of data collection. Advocates believe that a properly designed RCT is likely to produce ‘the least equivocal and least biased estimates of the effects of the program under study relative to a control condition or a competing program’ (Boruch and Rui, 2008: 43).
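To make this logic concrete, the following minimal sketch (ours, not drawn from any trial reported here) simulates random allocation to treatment and control arms and the resulting effect estimate; all sample sizes and outcome rates are hypothetical, for illustration only.

```python
# Minimal sketch (not from the article): how random assignment in an RCT
# supports an unbiased estimate of an intervention's effect.
# All numbers below are hypothetical, for illustration only.
import numpy as np

rng = np.random.default_rng(seed=42)

n = 10_000                                   # people eligible for the intervention
assigned_treatment = rng.random(n) < 0.5     # random allocation to treatment vs control

# Hypothetical outcome model: a 20% baseline completion rate, with the
# (unknown, to-be-estimated) intervention adding 3 percentage points.
baseline = 0.20
true_effect = 0.03
completed = rng.random(n) < (baseline + true_effect * assigned_treatment)

# Because allocation was random, the simple difference in outcome rates
# between the arms estimates the causal effect of the intervention.
treat_rate = completed[assigned_treatment].mean()
control_rate = completed[~assigned_treatment].mean()
estimate = treat_rate - control_rate

# Rough 95% confidence interval for the difference in proportions.
se = np.sqrt(treat_rate * (1 - treat_rate) / assigned_treatment.sum()
             + control_rate * (1 - control_rate) / (~assigned_treatment).sum())
print(f"estimated effect: {estimate:.3f} (95% CI ±{1.96 * se:.3f})")
```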

In its early years, the BIT UK (2012: 26) reported that a key characteristic of its own approach was ‘the extensive use, and encouragement of, experiments.… For example, it is often highly desirable for interventions to contain an “A–B format”, in which individuals are randomly allocated to one of two slightly different web pages, or receive two slightly different letters or policy interventions’. These trials align more closely with the trial methods used by technology innovators like Google and Facebook than with the detailed medicine-inspired RCTs of a previous era (Campbell, 1991; John, 2016). A–B trials compare two somewhat similar versions of an intervention (A and B), typically involving small changes in framing or terminology, in order to see which form of a message or other intervention increases attention and improves beneficial choices. The trials are intended to provide results quickly, with feedback provided ‘in real time’ (John, 2016). Some of the best-known trials run by the BIT UK used this approach, such as interventions to increase charitable giving or improve the payment of taxes and fines (BIT UK, 2015, 2016; Halpern, 2015). The arguments in favour of experimental policy trials have found support in many countries, including federal and state governments in Australia.
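A short illustration of how such an A–B comparison is typically analysed may be useful here. The sketch below (ours, with invented counts) compares the response rates to two hypothetical letter versions using a standard two-proportion z-test; it is not a reconstruction of any specific BIT UK trial.

```python
# Minimal sketch (not from the article) of analysing an A-B message trial:
# two letter versions are randomly allocated, and their response rates are
# compared with a two-proportion z-test. Counts are hypothetical.
import math

# Hypothetical trial results: recipients and responses for each letter version.
n_a, responded_a = 5_000, 640      # letter A (current wording)
n_b, responded_b = 5_000, 710      # letter B (simplified wording)

p_a = responded_a / n_a
p_b = responded_b / n_b

# Pooled two-proportion z-test for the difference in response rates.
p_pool = (responded_a + responded_b) / (n_a + n_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided

print(f"A: {p_a:.1%}  B: {p_b:.1%}  difference: {p_b - p_a:.1%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")
```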

Developments in Australia

The potential benefits of using BI attracted early attention in the Australian government, with an expert roundtable held in 2007 (Productivity Commission, 2008) and two discussion papers released by the Australian Public Service Commission (APSC, 2007; 2009). Linkages between the BIT UK and state government agencies in Australia began in late 2012 when a project partnership was agreed with the Department of Premier and Cabinet in New South Wales (NSW). A new central unit undertook a series of trials of policy interventions, many of which required working with other departments. These trials included redesigning notices for the payment of fines, facilitating employees returning to work sooner after an injury, and reducing the incidence of missed outpatient appointments in hospitals (NSW Behavioural Insights Unit, 2014). They later expanded their focus to include aspects of more complex policy problems, such as analysing aspects of childhood obesity and domestic violence reoffending (NSW Behavioural Insights Unit, 2016). Other initiatives evolved in the state of Victoria. David Halpern, director of the BIT UK, was invited to be a leading thinker assisting the Victorian Health Promotion Foundation (VicHealth) between 2014 and 2016. This work included the design of seven behavioural trials for delivery by VicHealth and partners, along with workshops, public lectures and the establishment of a community of practice for health practitioners interested in the use of BI (VicHealth and Halpern, 2016).

There has also been wide interest within government agencies in funding low-cost trials that can lead to measurable improvements. A former professor of economics, Andrew Leigh MP, has championed the wider use of high-quality programme evaluations through RCTs (Leigh, 2009; 2018). Around the time when the Behavioural Economics Team was being established in the Department of the Prime Minister and Cabinet, an online survey was conducted of elected members of all nine state and federal parliaments (828 invitations, attracting 109 responses). Despite this low response rate, the analysts declared that ‘73% are supportive of the use of RCTs in social policy, and 51% prioritised RCTs as a top three input to which politicians should pay attention’ (Ames and Wilson, 2016: 12). A later study of public opinion found that citizens were broadly accepting of policy testing and pilot programmes, though they were less comfortable with the random allocation of people to different treatment groups (Biddle and Gray, 2018).

In late 2015, the Australian government announced the establishment of its own central unit in the Department of the Prime Minister and Cabinet. The Australian government’s Behavioural Economics Team Australia (known as BETA) is a joint initiative funded by federal government departments and agencies (19 at the time of writing). The team brings together an interest in BI more broadly and a focus on RCTs. According to their website, BETA’s mission is ‘to advance the wellbeing of Australians through the application and rigorous evaluation of behavioural insights to public policy and administration’.2 They hope to achieve this through the promotion and monitoring of the use of evidence-based policymaking, and through the provision of technical expertise (Ames and Hiscox, 2016: 4; Ball, Hiscox and Oliver, 2017).

Drawing on exploratory case studies of three Australian BI teams, we will show that while a focus on RCTs may have proved useful in establishing BI as an approach to policy, driving legitimacy and demonstrating quick success, it also represents a risk to their longevity and longer-term success within government policymaking. We will begin with a discussion of our methodology and case studies, followed by a brief overview of the use of RCTs by BI teams in Australia. We demonstrate the strong commitments made by these teams to using and promoting RCTs in government policy innovation. Nevertheless, some members of BI teams are beginning to appreciate the constraints of relying solely on trial methodologies in contributing to the development of behaviourally informed policy.

Method

This article draws on the findings of two independent studies examining the operation of BI teams in Australia in 2017/18. Our research intent is to unpack the perspectives and practices of the policy workers and analysts in these teams. The first study entailed an exploratory ethnographic placement undertaken within the BETA unit in 2017. This study considered how BI practitioners and their project partners translated and enacted ‘behavioural insights’ in practice. This project represented an exploratory case study of the boundaries of the concept of BI in government policymaking. This research used an interpretive, abductive approach to data collection and analysis, undertaken through fieldwork by the first author as an embedded researcher at BETA’s offices. This involved overt participant observation (Lofland and Lofland, 1995: 32), which totalled 350 hours spread across 47 days. During this observation period, the sample included all members of BETA, as well as those partner agency team members who attended meetings or events. The researcher also conducted 19 formal semi-structured interviews of between 30 and 50 minutes: eight with internal staff, three with former staff and eight with members of external partner agencies. Interview participants were recruited using purposive non-probability sampling methods (Silverman, 2010: 141).

In the second study, the researchers compared the establishment processes, stated missions and project selections undertaken by three BI units working in three governments: the Australian federal government (focusing on BETA, as earlier); and two state governments in NSW and Victoria. In the first half of 2018, 30 interviews were conducted with key staff in these three jurisdictions. The researchers utilised a semi-structured thematic interview schedule, with interviews lasting between 40 and 60 minutes. By targeting staff who were directly involved with the design and implementation of experimental field studies, this project sought to document the practitioners’ own perceptions of the purposes, strengths and limitations of behavioural approaches to policy design and service delivery. Interviews also explored the perceived challenges and opportunities experienced by staff in their efforts to improve policy settings through experimental behavioural methods. Ethical clearance for both research projects was obtained from the University of Queensland, including the observational study and the interviews, and the confidentiality of data was protected in accordance with the required protocols.

Quotations taken from the interview transcripts and fieldnotes have been de-identified except where explicit permission for naming informants was obtained. Interview materials have been cited by using coded abbreviations and avoiding positional references. (Thus, the fieldnotes in the first BETA study are coded FN, and interviews with BETA staff are coded B. Interviews from the second study of Victoria, NSW and the federal BETA unit are coded as V, N and A, respectively.) In reporting quotations from the interviews, we focus on the most typical or common perceptions, while being mindful of a range of perspectives that sometimes hinted at alternative viewpoints.

We did not attempt to compare and contrast these three organisational units; rather, we undertook an exploratory survey of core themes. We used an approach of abduction or ‘puzzling out’, going back and forth between our research and the literature to ‘discern patterns of broader analytical interest’ (Boswell et al, 2019: 29). This approach does not provide generalisations, but seeks to identify patterns, resemblances and affinities (Boswell et al, 2019: 90). We hope to thereby stimulate further research and deeper consideration of the issues raised.

Commitment to the use of RCTs and rigorous evidence

Our research strongly highlighted the significant role that RCTs played in the promotion of a behavioural approach to policy design and implementation in government. Interviewees in our three case-study teams highlighted the importance of RCTs to their own work, sometimes to the exclusion of other methods and to the exclusion of policy issues not amenable to trials. For example, in the federal government, the research director of BETA stated that “our bread and butter is that we design trials” (FND7). In fact, BETA quickly developed a reputation among some other partner agencies for being particularly wedded to RCTs – a choice referred to as “RCT or the highway” (FND7). In NSW, a senior executive stated that the BI unit avoids spending “a lot of time on work that can’t be turned into a trial” (NK1). In the Victorian government, a senior official stated that the BI team undertakes “a lot of RCTs” and values the credibility attached to producing rigorous results (VH3). The Victorian team saw itself as “well placed” to promulgate the “methodology of testing, of finding out – using experimentation, testing and RCTs to establish what works” (VO5). A manager in NSW stated that “we use RCTs routinely … sometimes it’s not always feasible, but we use them in almost all our trials, and all our projects. And we do that because it is the gold standard. You know, accepted as the gold standard of evidence” (NK2).

The commitment to RCTs was also demonstrated in the skill sets and capabilities of BI team members. For BETA, more than half of the team during the early years held a psychology or economics degree, and felt that it was important that all staff had at least a basic understanding of how to design and run an RCT. The Victorian team spoke about the importance of “technical skills that run from the policy design, the intervention design, the technical data skills, to be able run and analyse an RCT” (VG4). Staff in NSW required specific technical skills, though some substantial “learning on the job” was also common (N08). The commitment to promoting RCTs within government extended to educating others about their value. BETA developed and managed training courses to enhance the technical skills of public servants by providing “introductory, intermediate and advanced training on RCTs for staff of our partner agencies” (Ames and Hiscox, 2016: 25).

There were two main reasons adduced for this strong commitment to trial-based projects. First, the teams championed building a rigorous evidence base, trialling policy initiatives to assess clear positive impacts before they were mainstreamed. Second, teams wanted to build a strong case for the use of this reliable, less ideological or politicised evidence by government decision-makers. These reasons are outlined in more detail in the following sections.

Building an evidence base

In multiple publications, BETA claims that ‘a randomised controlled trial (RCT) is the best way of telling if a policy is working’ (Copley et al, 2017; Hiscox, Oliver et al, 2017). RCTs are considered the ‘gold standard’ for assessing causal impacts ‘because a RCT determines the impact of an intervention or treatment compared to if nothing was changed’ (Hiscox, Oliver et al, 2017: 11). BETA is dedicated to ‘rigorously test policy designs to build our understanding of what works and when [they] need to adapt [their] approach’ (Ames and Hiscox, 2016: 22). Choosing to ignore rigorous testing is presented as a high-risk endeavour – policymakers need to ask themselves ‘[w]hat is the cost of not knowing what works?’ (Ames and Hiscox, 2016: 24). A senior BETA manager said that:

‘in choosing randomised control trials as our primary method of showing things, I think that’s enormously valuable, and … any incremental progress is a good thing.… So, if we can – within the realm of areas where RCTs are appropriate – show their value, show where they do lead to good outcomes, show where they maybe find a null result or a backfire and, therefore, “Isn’t this great that we didn’t … put this thing in a larger scale?”’ (A05)

The NSW team also maintained that experimental trials using RCTs should provide the base of reliable knowledge necessary for policy innovation and reform: ‘we want to determine the impact in the most rigorous way possible. Where we can, we use Randomised Controlled Trials (RCTs), which compare the effectiveness of our interventions against what would have happened if nothing had changed (the “control group”)’ (NSW Behavioural Insights Unit, 2014: 3).

Depoliticising issues

The commitment to RCTs was also presented by some participants as a response to the ideological and politicised context of the policy process. A public servant in BETA stated that “the type of rigorous analysis that truly is evidence-based policy” is often in short supply in government, and that by using RCTs, incremental progress in evidence-based policy is “valuable” (AO3). Similarly, BETA’s research director stated that government should sponsor more RCTs because otherwise ‘policy makers get to decide, and put something out, at full scale, with no rigorous scientific peer-reviewed testing as to whether or not it’s safe and effective…. There is no reason to trust politicians and policy makers more than doctors’ (cited in Easton, 2016).

Others saw this evidence base as a way to depoliticise the policy process, thus helping the government to make more effective policy decisions. One interviewee spoke about the wider value of trials: “the other really important part is testing policies and programmes so that you have an evidence base so that we are actually um, helping I think … policymakers to make better decisions by giving them some evidence about what actually works around populations” (B2). Another participant noted that “I feel like in government, we spend a whole lot of money on a whole heap of stuff and we really don’t know how effective it is. When you bring in behavioural insights and the methodology they apply, you absolutely know how effective interventions are” (NP1).

Project selection

When selecting projects, BI teams exhibited a strong preference for those that can be evaluated with trial data. For BETA, an intervention topic was given priority only if it was considered amenable to trial and included a significant behavioural aspect:

‘we have sought to build up a … niche expertise in RCTs – which means that there’ll be occasions where we look at something and say … “This looks like it’s susceptible to behavioural insights” [but actually] it’s not going to be susceptible to an RCT. Maybe somebody else should evaluate it.’ (A06)

In NSW, they similarly had a set of criteria for selecting a BI project:

‘So, within the unit, methodologically we have five … sort of gateways before we embark on doing a project. Is it feasible? Is there data that’s collected, and is the system measurable? Is it a government priority? Do we have partner support, that sort of thing? Is there a behavioural value-add?’ (NW1)

In Victoria, the approach was somewhat more circumspect; however, a senior Victorian manager commented that, for them, it came down to “the nitty gritty of designing interventions, figuring out whether there’s data that we can use to evaluate the effect. Ideally write an RCT if that’s possible” (VK5).

Perhaps not always RCTs

The three teams also reported using other information-gathering methods as background for the design of trials. For example, the BETA and NSW teams utilise a discovery or exploratory phase, where they gather qualitative data along with administrative data for analysis and interpretation (Ames and Hiscox, 2016; NSW Behavioural Insights Unit, 2016). The Victorian team was distinctive in highlighting some “projects that involve a lot of co-design, where end users are engaged” (VG4). However, some reported that exploratory work could easily be disrupted by time pressures and a focus on delivering a trial. For example, a BETA member noted that an opportunity to directly observe front-line service provision was “super helpful” but, because it took time and resources, they “can’t do it every time” (B4). In another project, limited time frames influenced a decision not to undertake qualitative work. This project had aimed at improving compliance with a government intervention, where the reasons for poor compliance were not well understood. The team had intended to conduct interviews but time constraints led to abandoning this step “because it was in the too-hard basket, and it was … convenient to drop out” (B10). Others expressed concerns that this exploratory work was harder to justify because:

‘people have an expectation of behavioural insights units based on the story and narrative that inevitably came out of the BIT in the UK … so it’s a lot harder to quantify the upstream policy development implications of doing a piece of ethnographic research that takes 300 hours of fieldwork.’ (VH3)

Even when collected, qualitative data were often absent from final reporting, which instead focused on RCT results. For example, a BETA report on retirement income planning mentioned that qualitative findings were obtained during the survey process but details were sparsely outlined by comparison with the level of detail given to the trial findings (Hiscox, Hobman et al, 2017). A note was made that a ‘more detailed analysis of the qualitative data’ may be provided in a future study (Hiscox, Hobman et al, 2017: 23); however, this did not emerge. This pattern raises concerns for external observers. Qualitative research findings are considered an important part of designing an intervention but they are not accorded significant visibility. A notable exception was a report of the Victorian BI Unit (Behavioural Insights Unit, Victoria, 2017) outlining an extensive ethnographic study of how domestic violence offenders engage with the justice and corrections system, and how this engagement could be better managed.

RCTs = better policymaking?

The BI teams did not claim that RCTs were a universal solution to all research questions and policy puzzles. Some senior staff reflected on the risks of how experimental methods might limit the issues that could be tackled, and thus constrain the potential long-term value of their ongoing work programme and the impact of BI within policymaking.

One of these risks is that a commitment to RCTs may prevent them from being consulted on complex or political issues. Governments are unlikely to risk working with a BI team if doing so implies that a trial must be undertaken and, additionally in the case of BETA, that the results will generally be published. This, in turn, prevents BI teams from undertaking the more nuanced elements of behavioural policy design, especially the early sensitising and diagnosis stages. One interviewee noted that, at least in the beginning, smaller-scale projects were more important to build confidence and relationships than running trials on “the minister’s baby” and thus risking a short life (B6).

The members of the BI teams that we spoke to were not unaware of the political aspect of the policymaking process. While the core commitment to the rigour and purity of RCTs was clear, it was often balanced by the public sector context, especially the need to respond to matters of public importance as framed by government leaders. For example, a senior executive pointed out that in a “government operating environment”, there are always important contextual matters requiring close attention, such as ministerial priorities and support from partner agencies for a rigorous trial (VH3). Thus, some BI teams have assisted strategic policy teams working on larger government priorities, and while these teams were primarily recruited for their skills in managing RCTs, some of the more experienced public servants had a broader view of the policy process, including the importance of carefully identifying the problem, the value of stakeholder engagement and the need for cross-agency collaboration.

However, despite a backstage pragmatism, as we saw earlier, these BI teams continue to present a public message that more rigorous evidence will lead to better policy. This commitment to RCTs and trials appears to have assisted all three teams in ‘justifying their existence’. BETA received special funding for a three-year extension in the May 2017 federal budget. In NSW, the chief executive of the Department of Premier and Cabinet stated that he ‘look[s] forward to further exploring new avenues for applying behavioural insights to drive better outcomes across the NSW public service’, while his Victorian equivalent stated that ‘Victoria is increasingly applying behavioural insights to complex, multi-sector issues’ (as cited in BETA, 2018: iii). These teams have continued to expand their circle of influence in policymaking over several years.

Given the role of RCTs in their work programme and their evidence-based communication, the influence of BI teams is attracting close scrutiny. The form of their commitment to RCTs and experimental evidence raises issues about the potential value of their ongoing work programme and the broader impact of BI in policymaking. Some analysts have seen the increase in this type of ‘evidence-based policy making … as “stealth advocacy”, transforming public conflicts into debates between experts and immunising political issues against critique and opposition’ (Strassheim and Kettunen, 2014: 264; see also Mols et al, 2015; Whitehead et al, 2017). Lepenies et al (2018: 6) also claim ‘that the use of RCTs in the behavioural policy arena is outpacing critical reflection about their uses and the distinct epistemic and normative presumptions of the “policy culture” in which they are drawn upon’. These claims about technocratic expertise are the focus of our discussion in the next section.

Discussion: technocracy and the politics of policymaking

Some policy reformers argue that if we had more objective and more rigorous evidence, we could convince governments to pursue more effective policies (Haynes et al, 2012; Haskins and Margolis, 2014). Such claims show a clear link between the popularity of RCTs and the ‘science’ wing of the evidence-based policy movement (Boaz et al, 2008; Fleming and Rhodes, 2018; Einfeld, 2019; Feitsma, 2019). The concept of evidence-based policy is positioned as a scientifically neutral and objective channel to drive public policy towards ‘what works’, rather than being corrupted by politics and ideology (Parkhurst, 2016: 6).

The rationalist ideal of a linear relationship between evidence and policymaking has long been disputed, with many calling into question the desirability or even feasibility of an instrumental approach to the use of evidence, whereby the political dimension of policy development is suppressed (Weiss, 1979; Head, 2008, 2016; Russell et al, 2008; Parkhurst, 2016). As Head (2013: 397) observed, policy ‘decision making is typically not derived from objective science but based on reasoned argumentation, taking account of professional judgements, stakeholder interests and political contexts’. Use of evidence undergoes a series of filters and constraints (Ingold and Monaghan, 2016). This is not necessarily to be seen as a failure of effective policymaking, but rather highlights the important role played by interpretation and meaning-making in any policy designed and implemented by and for human beings.

Despite the extensive body of research demonstrating that policymakers’ use of evidence is often tactical and selective, rather than systematic and oriented towards careful problem-solving, BI teams clearly present a public message that more rigorous evidence will lead to better policy (BIT UK, 2017; BETA, 2018; Hallsworth et al, 2018; Sanders et al, 2018). Their reliance on demonstrating measurable benefits from trials is assumed to overcome the political and negotiated nature of policy change. However, ‘policy-by-RCT is neither ideologically neutral nor purely evidence based; it depends on the policy culture in which it is utilised’ (Lepenies et al, 2018: 6–7).

The promotion of RCTs may also risk the work of the BI teams being viewed as technocratic. RCTs can be exclusionary, both in terms of the technical knowledge required to understand the process and in terms of democratic engagement with the problems and solutions. For example, while the Test, learn, adapt guidelines portray trials as simple and ‘straightforward’, the need for ‘expert support’ is also underlined (Haynes et al, 2012: 18). Interviewees in the three teams also insisted that the design and implementation of RCTs required significant skills.

The use of more technical language, requiring a degree of expertise for analysis, can lead to a decrease in transparency for the public, and even within government itself. According to Mendel (2018: 325), the RCT findings presented by teams like the BIT UK ‘leave one relying on the authority or eminence of those involved in running the trials’, leading to a form of ‘authority-based rather than evidence-based policy’. BI team publications are criticised for providing limited methodological details about sampling, or about the specific conditions of the control and intervention trials. Given that these publications are reported as providing the results of rigorous trials, the absence of these quality-assurance checks has been problematic. In attempting to allay concerns about detailed reporting, the BIT UK have published several academic papers that address some of these issues (Hallsworth et al, 2014; Hallsworth et al, 2016; Hallsworth, 2016), and BETA has pre-registered a number of trials.3 However, the continuing assumption that quantitative trials should command authoritative influence within government policy processes remains a legitimate area of concern.

A final concern is that trials-based policy advice devalues experiential knowledge, including both professional knowledge and the experience of weaker actors. The champions of RCTs specifically seek to sideline professional and stakeholder knowledge from policy analysis, equating ‘experience’ with anecdotal or intuitive evidence (Haynes et al, 2012; Haskins and Margolis, 2014). The BIT UK team has previously acknowledged that qualitative research and democratic principles are vital when designing policy interventions (Halpern and Sanders, 2016); however, this insight has not been fully developed. Harvesting and promoting the use of knowledge and insights from diverse sources is arguably consistent with maintaining a strong commitment to evidence-informed policy development (Stern et al, 2012; Head, 2008, 2016). Making use of several sources of evidence, including stakeholder perceptions and concerns, is vital for scoping and framing the nature of the problem, understanding how specific interventions might be received by affected stakeholders, and rigorously testing those options that are most likely to be both effective and legitimate.

Conclusion

These critical concerns are raised here not in order to deny that RCT research methods can provide useful opportunities for learning about ‘what works’. Rather, the argument is that the experimentalism prioritised by BI teams is only one of several methods suitable for gaining useful and reliable knowledge for policy innovation and the evidence-based analysis of options. We have noted two concerns that may inhibit the long-term impact of BI on government policymaking. First, it is important to acknowledge the limitations placed on the scope of research questions, both methodologically and politically, when projects are centred on conducting RCTs. A renewed focus on linking the micro and macro levels of analysis could address some of these concerns (Ewert, Loer and Thomann, 2020). Second, reliance on RCT-based science overlooks the political context of policymaking, as well as the widely recognised need to build civic trust in the face of technocratic and opaque government processes. The scientific nature of RCTs is supposed to ‘depoliticise’ the evaluation of policy options; however, this masks the political nature of selecting research questions and instruments, as well as the substantive exclusion of the voices of users and citizens.

We found some evidence that BI team members and their wider circle of partners were at least partly aware of these concerns and limitations. The Victorian team openly acknowledged their interest in exploring beyond the “pure … and technical” behavioural approach and instead investing energy “working up the chain”, that is, spending time exploring the mechanisms and meaning behind the problem before designing an intervention (VH3). Given the perceived constraints of experimentalism and micro-trials, it is not surprising to find evidence of growing support inside public agencies for human-centred design and stakeholder collaboration (for example, McGann et al, 2019) as an alternative pathway for policy innovation and improved services. The fast pace of policy debate, and competition among the champions of different models for policy development, suggest that BI teams may need to learn to work with broader conceptions of knowledge and experiential expertise.

However, while individual members of BI teams may privately acknowledge certain limitations of experimentalism, the official narrative of evidence-based policy built on the scientific authority of RCTs remains at the core of their promise. Investing more effort in the early discovery stages, and opening up the assumptions underlying intervention design, would allow for the exploration of a broader evidence base for specific policy decisions. While the language of rigour has proven successful in aligning behavioural intervention trials with the cost-effectiveness agendas of governments, a continuing central focus on using RCTs could ultimately limit the scope and credibility of BI for policy use.

Funding

This work was supported by an Australian Government Research Training Program Scholarship and by a research grant from the Australia and New Zealand School of Government.

Acknowledgements

Thanks to Stephen Jones for assistance in conducting interviews in the second project, and to Michele Ferguson for assistance in coding the interview materials in that project.

Conflict of interest

The authors declare that there is no conflict of interest.

References

  • Abadie, A. and Gay, S. (2006) The impact of presumed consent legislation on cadaveric organ donation: a cross-country study, Journal of Health Economics, 25(4): 599–620. doi: 10.1016/j.jhealeco.2006.01.003

  • Ames, P. and Hiscox, M. (2016) Guide to Developing Behavioural Interventions for Randomised Controlled Trials, Canberra: Australian Government.

  • Ames, P. and Wilson, J. (2016) Unleashing the Potential of Randomised Controlled Trials in Australian Governments, working paper, Boston: Harvard Kennedy School.

  • APSC (Australian Public Service Commission) (2007) Changing Behaviour: A Public Policy Perspective, discussion paper, Canberra: APSC.

  • APSC (2009) Smarter Policy: Choosing Policy Instruments and Working with Others to Influence Behaviour, discussion paper, Canberra: APSC.

  • Ball, S., Hiscox, M. and Oliver, T. (2017) Starting a behavioural insights team: three lessons from the Behavioural Economics Team of the Australian Government, Journal of Behavioral Economics for Policy, 1(S): 216.

  • Ball, S. and Feitsma, J. (forthcoming) The boundaries of behavioural insights: observations from two ethnographic studies, Evidence & Policy.

  • Behavioural Insights Unit, Victoria (2017) Applying Behavioural Insights: Improving Information Sharing in the Family Violence System, Melbourne: Victorian Government.

  • Benartzi, S. and Thaler, R.H. (2013) Behavioral economics and the retirement savings crisis, Science, 339(6124): 1152–3. doi: 10.1126/science.1231320

  • BETA (Behavioural Economics Team Australia) (2018) Behavioural Insights for Policy: Case Studies from Around Australia, Canberra: Australian Government.

  • Biddle, N. and Gray, M. (2018) Support for Policy Trials in Australia, Canberra: ANU Centre for Social Research and Methods.

  • BIT UK (Behavioural Insights Team) (2012) Annual Update 2011–2012, London: UK Cabinet Office.

  • BIT UK (2013) Applying Behavioural Insights to Charitable Giving, London: UK Cabinet Office.

  • BIT UK (2015) Update Report 2013–2015, London: UK Cabinet Office.

  • BIT UK (2016) Update Report 2015–2016, London: Behavioural Insights Team Ltd.

  • BIT UK (2017) Update Report 2016–2017, London: Behavioural Insights Team Ltd.

  • Boaz, A., Grayson, L., Levitt, R. and Solesbury, W. (2008) Does evidence-based policy work? Learning from the UK experience, Evidence & Policy, 4(2): 233–53.

  • Boruch, R. and Rui, N. (2008) From randomized controlled trials to evidence grading schemes: current state of evidence-based practice in social sciences, Journal of Evidence-Based Medicine, 1(1): 41–9. doi: 10.1111/j.1756-5391.2008.00004.x

  • Boswell, J., Corbett, J. and Rhodes, R. (2019) The Art and Craft of Comparison, Cambridge: Cambridge University Press.

  • Campbell, D.T. (1991) Methods for the experimenting society, American Journal of Evaluation, 12(3): 223–60. doi: 10.1177/109821409101200304

  • Chalmers, I. (2003) Trying to do more good than harm in policy and practice: the role of rigorous, transparent, up-to-date evaluations, Annals of the American Academy of Political and Social Science, 589: 22–40. doi: 10.1177/0002716203254762

  • Congdon, W.J. and Shankar, M. (2018) The role of behavioral economics in evidence-based policymaking, Annals of the American Academy of Political and Social Science, 678: 81–92. doi: 10.1177/0002716218766268

  • Copley, S., Brewer, J., Turenko, A., Wilson, J. and Hiscox, M. (2017) Effective Use of SMS: Timely Reminders to Report on Time, Canberra: Australian Government.

  • Easton, S. (2016) Experimental trials: a BETA way of making policy, The Mandarin, 28 November, www.themandarin.com.au/73022-experimental-trials-a-beta-way-of-making-policy

  • Einfeld, C. (2019) Nudge and evidence based policy: fertile ground, Evidence & Policy, 15(4): 509–24.

  • Ewert, B., Loer, K. and Thomann, E. (2020) Beyond nudge: advancing the state-of-the-art of behavioural public policy and administration, Policy & Politics, this issue.

  • Feitsma, J. (2019) Brokering behaviour change: the work of behavioural insights experts in government, Policy & Politics, 47(1): 37–56.

  • Fleming, J. and Rhodes, R.A.W. (2018) Can experience be evidence? Craft knowledge and evidence-based policing, Policy & Politics, 46(1): 3–26.

  • Hallsworth, M. (2016) Seven ways of applying behavioural science to health policy, in I.G. Cohen, H. Lynch and C. Robertson (eds) Nudging Health: Health Law and Behavioral Economics, Baltimore, MD: Johns Hopkins University Press.

  • Hallsworth, M., List, J., Metcalfe, R. and Vlaev, I. (2014) The behavioralist as tax collector: using natural field experiments to enhance tax compliance, Journal of Public Economics, 148(1): 14–31. doi: 10.1016/j.jpubeco.2017.02.003

  • Hallsworth, M., Chadborn, T., Sallis, A., Sanders, M., Berry, D., Greaves, F., Clements, L. and Davies, S.C. (2016) Provision of social norm feedback to high prescribers of antibiotics in general practice: a pragmatic national randomised controlled trial, The Lancet, 387(10029): 1743–52. doi: 10.1016/S0140-6736(16)00215-4

  • Hallsworth, M., Egan, M., Rutter, J. and McCrae, J. (2018) Behavioural Government, London: Behavioural Insights Ltd.

  • Halpern, D. (2015) Inside the Nudge Unit: How Small Changes can Make a Big Difference, London: W.H. Allen.

  • Halpern, D. and Sanders, M. (2016) Nudging by government: progress, impact, and lessons learned, Behavioral Science & Policy, 2(2): 53–65.

  • Haskins, R. and Margolis, G. (2014) Show Me the Evidence: Obama’s Fight for Rigor and Results in Social Policy, Washington, DC: Brookings Institution Press.

  • Haynes, L., Goldacre, B. and Torgerson, D. (2012) Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials, London: Cabinet Office.

  • Head, B.W. (2008) Three lenses of evidence-based policy, Australian Journal of Public Administration, 67(1): 1–11. doi: 10.1111/j.1467-8500.2007.00564.x

  • Head, B.W. (2013) Evidence-based policymaking – speaking truth to power?, Australian Journal of Public Administration, 72(4): 397–403. doi: 10.1111/1467-8500.12037

  • Head, B.W. (2016) Toward more evidence-informed policy-making?, Public Administration Review, 76(3): 472–84. doi: 10.1111/puar.12475

  • Hiscox, M., Hobman, D.E., Daffey, M.M. and Reeson, D.A. (2017) Supporting Retirees in Retirement Income Planning, Canberra: BETA.

  • Hiscox, M., Oliver, T., Ridgway, M., Holzinger, L.A., Warren, A. and Willis, A. (2017) Going Blind to See More Clearly: The Effects of De-identifying Job Applications in the Australian Public Service, Canberra: BETA.

  • Ingold, J. and Monaghan, M. (2016) Evidence translation: an exploration of policy makers’ use of evidence, Policy & Politics, 44(2): 171–90.

  • John, P. (2016) Randomised controlled trials, in G. Stoker and M. Evans (eds) Evidence-based Policy Making in the Social Sciences, Bristol: Policy Press, pp 69–82.

  • John, P. (2018) How Far to Nudge, Cheltenham: Edward Elgar.

  • Kahneman, D. (2011) Thinking, Fast and Slow, New York, NY: Farrar, Straus and Giroux.

  • Leigh, A. (2009) What evidence should social policymakers use?, Australian Treasury Economic Roundup, 1: 27–43.

  • Leigh, A. (2018) Randomistas: How Radical Researchers Changed Our World, Melbourne: La Trobe University Press.

  • Lepenies, R., Mackay, K. and Quigley, M. (2018) Three challenges for behavioural science and policy: the empirical, the normative and the political, Behavioural Public Policy, 2(2): 174–82. doi: 10.1017/bpp.2018.18

  • Lofland, J. and Lofland, L.H. (1995) Analyzing Social Settings: A Guide to Qualitative Observation and Analysis, Belmont, CA: Wadsworth Publishing.

  • McGann, M., Wells, T. and Blomkamp, E. (2019) Innovation labs and co-production in public problem solving, Public Management Review, doi: 10.1080/14719037.2019.1699946

  • Mendel, J. (2018) Unpublished policy trials: the risk of false discoveries and the persistence of authority-based policy, Evidence & Policy, 14(2): 323–34.

  • Mols, F., Haslam, S.A., Jetten, J. and Steffens, N.K. (2015) Why a nudge is not enough: a social identity critique of governance by stealth, European Journal of Political Research, 54(1): 81–98. doi: 10.1111/1475-6765.12073

  • NSW (New South Wales) Behavioural Insights Unit (2014) Understanding People, Better Outcomes: Behavioural Insights in NSW, Sydney: Department of Premier and Cabinet.

  • NSW Behavioural Insights Unit (2016) Behavioural Insights in New South Wales: Update Report 2016, Sydney: Department of Premier and Cabinet.

  • OECD (Organisation for Economic Co-operation and Development) (2017) Behavioural Insights and Public Policy: Insights from Around the World, Paris: OECD Publishing.

  • Parkhurst, J. (2016) The Politics of Evidence: From Evidence-based Policy to the Good Governance of Evidence, London: Routledge.

  • Productivity Commission (2008) Behavioural Economics and Public Policy, roundtable proceedings, Canberra: Productivity Commission.

  • Russell, J., Greenhalgh, T., Byrne, E. and McDonnell, J. (2008) Recognizing rhetoric in health care policy analysis, Journal of Health Services Research & Policy, 13(1): 40–6. doi: 10.1258/jhsrp.2007.006029

  • Sanders, M. and Halpern, D. (2014) Nudge Unit: our quiet revolution is putting evidence at the heart of government, The Guardian, 3 February.

  • Sanders, M., Snijders, V. and Hallsworth, M. (2018) Behavioural science and policy: where are we now and where are we going?, Behavioural Public Policy, 2(2): 144–67. doi: 10.1017/bpp.2018.17

  • Silverman, D. (2010) Doing Qualitative Research: A Practical Handbook, London: SAGE.

  • Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R. and Befani, B. (2012) Broadening the Range of Designs and Methods for Impact Evaluations, Working Paper 38, London: Department for International Development.

  • Strassheim, H. and Kettunen, P. (2014) When does evidence-based policy turn into policy-based evidence? Configurations, contexts and mechanisms, Evidence & Policy, 10(2): 259–77.

  • Thaler, R.H. (2016) Behavioral economics: past, present, and future, American Economic Review, 106(7): 1577–600. doi: 10.1257/aer.106.7.1577

  • Thaler, R.H. and Benartzi, S. (2004) Save More Tomorrow™: using behavioral economics to increase employee saving, Journal of Political Economy, 112(S1): S164–87. doi: 10.1086/380085

  • Thaler, R.H. and Sunstein, C. (2008) Nudge: Improving Decisions About Health, Wealth, and Happiness, New Haven, CT: Yale University Press.

  • VicHealth and Halpern, D. (2016) Behavioural Insights and Healthier Lives, Melbourne: Victorian Health Promotion Foundation.

  • Weiss, C.H. (1979) The many meanings of research utilization, Public Administration Review, 39(5): 426–31. doi: 10.2307/3109916

  • Whitehead, M., Jones, J., Lilley, R., Pykett, J. and Howell, R. (2017) Neuroliberalism: Behavioural Government in the Twenty-First Century, Oxford: Routledge.

  • World Bank (2018) Behavioural Science Around the World, Washington, DC: World Bank.
