This part provides a detailed how-to manual for policy officials and practitioners working with public agencies on applying behavioural insights to public policy, as well as a repository of approaches, proofs of concept and methodological standards for designing and implementing a behaviourally informed policy intervention.
Tools and Ethics for Applied Behavioural Insights: The BASIC Toolkit
Chapter 2. The BASIC Manual
Abstract
BASIC – A toolkit and ethical guidelines for applying BI in public policy
During the last 10 years, Behavioural Insights (BI) has become increasingly established in public policy, as well as in society more broadly. The term was originally coined by the UK Behavioural Insights Team (UKBIT) to refer to an evidence-based approach to integrating insights and methodologies from the behavioural sciences in public policy to provide better and more effective regulation (Halpern, 2015). As this approach spread wider into public policy circles, the resulting initiatives and outcomes are increasingly referred to as “behaviourally informed public policy”, or just “behavioural public policy” (BPP). This approach to the development, implementation and evaluation of public policy has been granted its own academic journals, associations, cross-institutional networks and an ever-increasing number of institutions and teams co‑ordinating and/or integrating BI into public policy around the world.
The core tenet of BI is the application of insights and methodologies from the behavioural sciences in public policy development and delivery. To be more precise, these insights are mainly taken from behavioural economics, cognitive and social psychology and the study of judgment and decision-making. Insights are also taken from similar disciplines sharing not only the inductive but also the causal explanatory and experimental approach to the subject matter of human behaviour, as well as dual-process and similar theories of human cognition (see Box 2.1). The aspiration is to better understand why people act as they do, and thereby to create more effective public policies, by taking into account how the limits and biases of human attention, belief formation, choice and determination, as uncovered by these sciences, influence people’s behaviour.
Box 2.1. What are “Behavioural Insights”?
Behavioural Insights (BI) constitute the evidence-based approach to integrating insights and methodologies from the behavioural sciences in public policy to provide better and more effective public policies; behavioural insights (written in lower case), on the other hand, refer to the specific insights and methodologies from the behavioural sciences. In particular, this latter concept of behavioural insights refers to a series of theories and empirical findings originating in the behavioural sciences regarding what shapes real-world human behaviour in predictable ways, including the methodologies for how to approach this subject matter.
While there is no universal definition of behavioural science (see paragraph below), this toolkit takes the concept primarily to refer to behavioural economics, cognitive and social psychology, the study of judgment and decision-making, and similar disciplines sharing not only the inductive but also the causal explanatory and experimental approach to the subject matter of human behaviour as well as dual process theories of human cognition. Consequently, the behavioural insights around which BI revolves go beyond the insights provided by the academic discipline of behavioural economics. While the latter studies the effects of psychological, social, cognitive and emotional factors on economic decisions of individuals and institutions, BI covers a wider domain than economic decision-making and thus includes a wider set of behavioural insights than those relevant for that field.
What exactly constitutes “the behavioural sciences” is up for debate. Some prefer to define the behavioural sciences very broadly so as to accommodate almost any approach that relates to human behaviour, while others prefer to define the term more narrowly so as to ensure at least some level of theoretical and methodological consistency. This manual presents an approach that falls in the latter category. Either way, it is important to emphasise that the behavioural sciences do not, by themselves, constitute a unified field but rather feature a plurality of sciences that do not readily lend themselves to policymakers and practitioners wishing to tap into them. Rather, BI tends to draw on a particular branch of psychological theories, especially those compatible with experimental methodologies (Lepenies and Małecka, 2019).
Source: Lepenies, R. and M. Małecka (2019), “The ethics of behavioural public policy”, in A. Lever and A. Paoma, The Routledge Handbook of Ethics and Public Policy, Routledge, New York.
Thus, BI stands in contrast to more traditional policy paradigms, which have tended to rely on more abstract models and ideal assumptions about human behaviour, models that do not factor in such limits and biases. Instead, traditional approaches have usually assumed that people’s behaviour could be understood as if resulting from fully rational and deliberative thinking based on full information and free of constraints on time and attention. Consequently, at least according to critics, traditional policies easily end up being naive and ineffective as they reflect assumed rather than actual behaviours. BI, in contrast, claims to provide more realistic models and assumptions about the psychological factors that shape human behaviour, tools for how to influence such behaviour and methods for how to investigate and measure actual behaviour and behaviour change. However, as BI and BPP are experiencing increasing public attention, it is becoming more and more evident that policymakers and practitioners working with policymakers have a hard time orienting themselves critically within this fast-evolving, scientifically based paradigm. This is especially true when it comes to:
grasping the basics of the scientific theories and concepts from the behavioural sciences
learning the processes and tools involved when integrating BI in public policy
understanding the scientific methodologies that are applied in validating and testing behavioural public policies
having sound ethical guidance when working responsibly within this paradigm.
To help policymakers and practitioners orient themselves, BASIC provides a process-oriented framework for integrating insights, theories and methods from the behavioural sciences when designing and implementing public policies. To a large extent, the current practices, proofs of concept and methodological standards related to behaviourally informed public policymaking have already been described in various reports and frameworks (see Box 2.1). However, a full process framework equipping practitioners with best-practice tools, methods and ethical guidance for conducting BI projects from the beginning to the end of a public policy cycle was missing (OECD, 2017). BASIC intends to fill this gap.
The aim of BASIC (Figure 2.1) is to provide the basics for orienting oneself within the world of BI and provide a framework for how to apply BI to public policy through a five‑stage process that runs from the beginning to the end of the policy cycle (Figure 2.2).
The process aspect of BASIC is important to emphasise. The OECD (2017) case study collection shows the results of a survey of over 150 cases applying BI to public policy, the majority of which were applied towards the end of the policy cycle, i.e. at the stages of implementation and enforcement/compliance. This supports a widespread perception that BI enters only after the policy issue has been identified and analysed. It is then taken to contribute by suggesting behavioural tweaks and strategies, such as providing more salient information or providing users with the “right” default option to promote certain behaviours or even behavioural change. In addition, the case collection also reflects how BI, methodologically speaking, is mainly about experimentation and trialling in the evaluation of policy outcomes.
The perception of BI as entering at the end of the policy cycle may indeed be partially self-inflicted by BI practitioners. Marking the ten-year anniversary of UKBIT, Sanders, Snijders and Hallsworth (2018) trace this perception to the continuation of UKBIT’s strategic focus on quick wins and return on costs, and warn of the danger that “behavioural science is seen to offer merely technocratic tweaks” (i.e. letters, emails, text messages) and that it focuses primarily on results (i.e. the notorious abundance of bar graphs in BI publications) rather than a more systemic change to the policymaking process.
However, a deeper look at the BI approach and its theoretical underpinnings demonstrates clearly that BI has a lot to offer in the ex ante appraisal as well as the ex post evaluation stages of the policy cycle (OECD, 2017; 2018). Indeed, it may be argued that since BI cannot be effectively applied without understanding how (mechanisms) and under what circumstances (boundary conditions) its application may be expected to cause behaviour change (Marchionni and Reijula, 2019), the responsible application of BI to public policy requires a significant amount of time and effort placed in the early stages of the policy cycle. Moreover, behaviourally informed policies are not really evidence-based unless they involve “mechanistic” and “circumstantial” evidence (Grüne-Yanoff, 2016). This means that evidence gained through testing and experimentation should be used at both the ex post and ex ante stages of the policymaking process to systematically understand what behaviours may be driving policy problems and scale “what works” from the beginning of the project.
It should thus not be overlooked that the more effective use of BI in the policy cycle depends on a close and systematic integration of ex ante evaluation and ex post evidence. Tackling “wicked problems” and contributing to a more systematic approach to policymaking in the public administration require BI to be applied coherently throughout the policy cycle to better understand and identify relevant behaviours, conduct better analyses, design better policy strategies and test policy interventions to drive change that improves policy efforts. The five stages of BASIC attempt to provide a framework for doing this.
The 5 stages of BASIC
The five stages of BASIC seek to guide the application of BI to a given policy issue in a problem-oriented way:
1. Behaviour deals with the initial stage of applying BI at the beginning of the policy cycle so as to identify and target crucial behavioural aspects of policy problems, as distinct from issues stemming from lack of information, incentives or standard regulation.
2. Analysis deals with scrutinising the target behaviours as viewed through the lens of theories, insights and methodologies from the behavioural sciences.
3. Strategies provides guidelines for the practitioner to systematically identify and conceptualise behaviourally informed strategies based on the behavioural analyses that result from the combination of Stages 1 and 2.
4. Intervention comprises core methods for systematically designing experiments for evaluating the efficacy as well as the efficiency of behavioural interventions.
5. Change provides practitioners with tools for: i) checking whether the initial assumptions and contextual factors have evolved before rolling out a BI-informed intervention; and ii) producing plans for implementation, scale, monitoring, evaluation, maintenance and dissemination of applications.
How to use BASIC
BASIC is a framework built “by practitioners, for practitioners”. It is partly a synthesis of existing approaches, frameworks, tools and guidelines already widely used implicitly by BI practitioners. It also builds partly on tools which have been specially developed by iNudgeyou – The Applied Behavioural Science Group during a decade of work with applying BI to public policy around the world.
The ordering of tools and guidelines in a framework helps to highlight the way in which BI can be used throughout the policy cycle, a component often missing in the BI literature. These tools stress the interdependency between the various stages of the policy cycle: from how the behavioural reduction of a policy issue directs what behaviours to focus on, to how the analysis of such behaviours then influences the choice of behavioural strategies to be tested. These tests then should be designed to provide the proper basis for behaviourally informed policy initiatives.
In addition, BASIC also highlights a series of ethical considerations for each stage, which need to be addressed as the practitioner works through the framework. These range from considering whether possible behaviour changes are actually aligned with the interests of citizens, to weighing the appropriateness of certain behavioural strategies relative to the problems addressed, to securing privacy and equal treatment of citizens when designing field tests.
Finally, while BASIC provides policymakers and practitioners working with them a step-by-step framework for working through a policy problem with a behaviourally informed approach, it is important to observe at least the following three caveats:
Context matters throughout the five stages: BASIC provides a step-by-step approach. However, it should not be applied without continually paying attention to the political, institutional and policy context throughout all of its stages of application. As with any other framework, policymakers and the practitioners working with them will encounter cases where they have to adapt and supplement BASIC with other resources to address the special features of the policy issue at hand.
Not all policy problems call for a behaviourally informed approach: BASIC includes a process to scope the policy problem and identify those driven by psychological limitations and biases. It argues that these are problems which are first and foremost amenable to behaviourally informed approaches. Other policy issues, such as systemic issues (e.g. financial or physical constraints) will benefit more from alternative approaches.
Not all applications will, or should, progress meticulously through each and every tool and step of BASIC. In some cases, the initial scoping of a policy project rules out a behavioural problem, only for the team to realise partway through that behavioural expertise is necessary after all. In other cases, field experimentation may be difficult or impossible to execute. Additionally, the nature of the behavioural problem itself may make certain tools or steps irrelevant.1
Scoping a BI project: What to do first?
Applying BI will always be constrained by resources in terms of time, money or institutional leverage, which is often a challenge for BI practitioners who are outside the policymaking process. Working with BI also poses some special challenges and requirements to practitioners only familiar with public policy, which may tax resources and relations, if not planned for. Thus, it is crucial for whoever is applying BI to public policy either from within or from outside government to consider the potential scope and level of a BI project even before beginning to work on the policy issue. Equally important is to take into consideration those contextual issues – political, institutional and policy related – that need to be factored in before starting up a project involving BI.
Setting up a team, group or network
BI uses an empirical approach that differs from traditional policy interventions. In particular, it makes use of a range of methodologies that might be unfamiliar to traditional policy specialists, including, amongst others, extensive data analysis, observational studies, laboratory experiments, field-testing of prototypes and extensive randomised controlled trials, creating a new set of challenges.
Some of these challenges may be addressed or mitigated by identifying, in advance, the kinds of problems and requirements a given BI project is likely to stir up as well as specific human resources needed to deal with these. In this way, the practitioner will be in a better position to manage cross-institutional expectations in relation to the project, identify what tools may realistically be applied and consider what human resources to involve at various stages of the project. The important takeaway is that the practitioner should devote time and thought into negotiating expectations to address challenges specific to BI applications before they become problems.
The first and perhaps most important thing to note in this regard is that, whether centralised or decentralised, public or private, permanent or project-based, the success of the BI approach usually depends on assembling some sort of permanent or temporary team, group or network. To this end, a set of features characterises the teams, groups or networks that get BI projects successfully off the ground and have them make meaningful contributions. These characteristics are:
1. Experienced resources: A team, group or network will significantly benefit from including and/or involving some experienced people who have first-hand experience with public policy and administration as well as behavioural science, including actual experience with running real-world experiments. Such expertise will allow the team, group or network to avoid repeating the same generic mistakes and problems that tend to arise in BI projects in public policy.
2. Diverse expertise: A team, group or network will need to encompass or be able to draw upon a variety of specialised expertise, including intimate knowledge of policy processes and standard policy instruments, applied BI, cognitive and social psychology, behavioural economics, experimental design and statistics. Thus, involving a critical mass of four to six practitioners with diverse educational backgrounds is usually needed. In some cases, institutional constraints may only allow for a team, group or network that also relies on neighbouring disciplines. While this may strengthen the work, one should also be wary of the weaknesses that may result from differing, or even inconsistent, ontologies and methodologies.
3. Mindset, social skills and diversity: As applying BI often requires practitioners to work extensively in the field and be willing to negotiate key aspects of their work, a special mindset, extensive social skills and diversity are crucial. Practitioners should be clear that developing, designing and delivering public policy is not a desk job, but involves working closely and respectfully together with, and as part of, front-line public service as well as with policy practitioners higher up the hierarchy and collaborators outside the public sector. As a team, group or network will also work across a variety of educational and social backgrounds to serve citizens living their lives outside the public sector, it is crucial that it is diverse on this account and includes people with actual real-world experience, inside as well as outside the public sector.
4. Advisory board, network participation and collaboration: As BI is a steadily evolving and international field that involves high levels of domain-specific as well as cross-disciplinary knowledge, it is highly recommended that the team, group or network establish an advisory body, if possible. This may involve government officials, academics and experts and be used to provide support, insights and direction. Also, it is crucial that the team, group or network orients itself and actively participates in national as well as international knowledge-sharing networks and events. Finally, the team, group or network should perceive itself as neither necessary nor sufficient when carrying out projects. Instead, it should welcome heavy involvement and collaboration with external partners and stakeholders, and always remember to give credit and honour where they are due.
5. Secure a two- to three-year commitment: Applying BI is an empirical effort. It takes time to conduct proper behavioural analysis of the behavioural issues underlying policy problems, develop strategies, design, set up and run experiments, as well as implement on a larger scale. In addition, the inductive process involved is inherently fallible and thus calls for a sense of psychological security that allows for failed experiments and protects the team, group or network from external pressure to take short cuts. For that reason, a team, group or network should ideally try to secure a two- to three-year commitment that allows it to develop the necessary infrastructure and cross-institutional network, identify issues to work on, and develop, design and deliver advice on behavioural strategies for public policy.
Exploring the political, institutional and policy context
Equally important is to take into consideration those contextual issues – political-, institutional- and policy-related – that need to be factored in before designing and implementing any policy intervention. This includes considerations related to:
Political leadership: Are political leaders aware of the use of BI and have they been briefed on what BI can or cannot do?
Institutional setup: Where do the expected policy interventions fit within the administrative and government structure? Have the relevant institutions been mapped? Have opportunities and needs for co-ordination been considered and planned?
Policy space: What are the connections with existing policies and interventions? Are there potential gaps and overlaps? If so, how can they be addressed?
Determining the policy level of the project
Third, a crucial question to ask from the outset is “at what policy level is the project anchored?” Not only does this define its scope, resources and constraints, it also helps clarify potential positive and negative features that will influence the project.
A way of approaching this question is by identifying at which of the following three levels a given BI project is anchored (see Table 2.1). Determining the level of the project will help the practitioner to identify in advance some crucial features, positive and negative, which will tend to shape the project. This will allow for developing suitable strategies and precautionary measures, relative to the necessary management of expectations and resources amongst all parties involved. In addition, the scoping of the problem and type of intervention also helps identify the level of maturity in the use and application of BI.
Table 2.1. Thinking aid: Considering the level of the project
Goal | Visual aid | Decision-making tool
---|---|---
Scoping the project based on the policy level of the project and the special characteristics of BI. | (figure) | Use consideration points to define the various levels of the project, from high-level institutional down to strategic and behavioural levels, and manage expectations relative to the BI project.

Level of the project | Expectation for the project
---|---
Institutional-level projects aim to apply BI to a wider institutionalised domain to provide an understanding of how this approach may help to transform public policy development and/or delivery. | Explore the ‘institutional fit’ of BI, so to speak, by: i) providing knowledge about the institutional potential and relevant processes and methods involved when working with BI; ii) carrying out interventions that may serve as proof-of-concept; and iii) identifying the possible institutional obstacles that working with BI presents to the particular institution and its domain.
Strategic-level projects aim to apply BI to one or more issues from a defined list of existing policy problems that challenge a particular institutional domain or sector. | Deliver viable and effective policy insights and solutions which are cost-effective compared to alternative policy measures by: i) extending existing knowledge about BI and building capacity for this within the institution; ii) applying the lessons learned from former institutional projects to strategic-level problems to test for their robustness; and iii) providing scalable long-term solutions to one or more existing policy issues.
Behavioural-level projects aim to apply BI directly to a specific behavioural problem in the institutional domain or sector. | Policymakers, stakeholders and collaborators usually assume that the tools and methods for applying BI in public policy design and delivery are more or less fully developed. Thus, behavioural-level projects are expected to fully integrate into the everyday decisions and processes of institutional work. The success criteria of projects at this level will usually be: i) smooth integration of process; ii) “problem solved”, not “lesson learned”; and iii) easily communicable results.
In addition to identifying the project level, the practitioner should also facilitate a discussion on the scope of the project to manage and make expectations transparent from the beginning and identify the potential point of entry in the policy cycle where this intervention should occur.
Checklist for scoping a BI project
To avoid unpleasant surprises, the practitioner may sit down with everyone involved in the project and clearly communicate the special features that a BI project may come to involve depending on the level of the project. Table 2.2 contains some discussion points that practitioners may address within their team to consider whether or not BI is appropriate for the project. The discussion points can also be used to communicate with stakeholders on what to expect when getting involved in the policy initiative under consideration.
Table 2.2. BI project scoping checklist
Run through the following points with your team to scope the problem and communicate with stakeholders
Completion check | Discussion point
---|---
 | BI does not apply to all problems. Given the surge in interest in BI, policymakers may have unrealistically high expectations about its potential. However, BI cannot be applied to any kind of policy issue and it is rarely, if ever, able to solve behavioural problems completely and on its own.
 | BI is not about raising “general public awareness”. In public policy, raising “general public awareness” is often implicitly seen as the main road to influencing behaviour. BI, on the other hand, usually focuses on generating measurable changes in concrete behaviours, without necessarily resulting in measurable changes in general perceptions.
 | Developing BI is not necessarily “cheap”. While applying the results from BI may be cost-effective, the development of these is not necessarily cheap. Groundwork needs to be carried out before coming up with ideas for what strategies to test.
 | Applying BI requires expertise. It is common to think that anyone can be an intuitive expert on behaviour, but that is in itself a bias, since behavioural insights are often counterintuitive. The practitioner should clarify what expertise is present in the project and why, as well as what working with applied BI means and how such projects tend to draw on the present expertise throughout a project.
 | Be critical of existing data and perceptions. Establish a critical, though not sceptical, attitude towards existing data and perceptions from the outset as a norm to guide the work of the team. Existing material will often have been produced using standard methodologies, which do not necessarily align with the theoretical underpinnings of BI and may thus potentially misguide its application.
 | Secure permissions and agree on due credits from the outset. Try to ensure that the team is granted permission to oversee all stages of the project as well as receives due credit. Also, secure shared user rights to the results for the team so that these are publicly available for scientific publication, public dissemination and journalists – this should, of course, also include null and negative results.
Ethical guidelines for applying BI
Applying BI to public policy raises particular ethical concerns. This is because BI approaches public policymaking on different terms than traditional public policy. Traditional public policy often operates within a formalised legislative paradigm. Citizens as well as policymakers are assumed to self-consciously attend to what is most important, adhere to the rules of rationality and stick to their choices and promises. The BI approach, on the other hand, assumes citizens as well as policymakers to be less perfect. The psychological theories underpinning the approach assume people’s attention to be scarce, the processes involved in their belief formation and choices to be biased, and their determination to be continuously challenged. As BI approaches to policymaking use different methods and means for influencing behaviour change, BI needs to conceive of ethically relevant concepts such as autonomy, consent and responsibility differently. This calls for special ethical considerations and guidelines to complement those already in place for more traditional policymaking. For this purpose, BASIC includes ethical guidelines to consider at every step.
These guidelines are intentionally both practical and aspirational – while some guidelines may not be implementable in every setting, they are intended to give the policymaker high standards to consider throughout a BI project.
Before starting the project
As ethics should be considered upfront, below are a set of ethical guidelines that should be considered before beginning a BI project.
Consider establishing an ethical review board. Ethics is an issue to be observed from the outset of BASIC. Obtaining a democratic mandate to devise public policy is not a mandate to pursue this in any way one likes. Ethics has priority. Thus, as a first step consider the possibility of establishing an ethical review board to follow the team, group or network from day one. If this is too ambitious for the project at hand, then outline the ethical issues associated with a project, how the project proposes to address these and continuously consider where ethical approval is required. Potentially, the BI team, group or network may also consider contacting an ethical review board at a university to get established third-party expert advice on particular issues. Following established ethical review processes could also be an option.
Appoint an ethics supervisor for data collection, use and storage. BI often involves data collection and analysis that goes beyond what is standard in traditional public policymaking. This includes: i) primary behavioural data (i.e. data on or related to the real-world behaviour of citizens); ii) secondary behavioural data (i.e. data on variables related to people’s attention, belief formation, preference construction, determination and more); iii) contextual data (i.e. data on contextual variables, including seemingly irrelevant aspects of choice architectures); and iv) data on people’s reflective preferences (insofar as such exist) about what people believe they ought to do given their available options. Consider appointing at least one member – either a member of the ethical review board or the BI team – to supervise ethical aspects of data collection, use and storage.
Observe existing ethical guidelines and codes of conduct. BI is characterised by working across institutional boundaries. Make sure all team members observe existing ethical guidelines and codes of conduct of the particular fields that the project involves as well as receive the necessary training to comply with these. Also, existing ethical guidelines and codes of conduct will not cover all aspects of a BI project. Establish a procedure from the outset for flagging activities and data collection that are not covered by these, and for how to perform an ethical review in such cases. Finally, the team, group or network should discuss and establish procedures for how it handles collaborating parties that fail to comply with those parties’ own ethical guidelines and codes of conduct, while also observing that honesty, anonymity and whistleblowing are protected.
Stage 1: Behaviour – Identifying and defining the problem
Stage 1: Behaviour
Under ideal circumstances, BI expertise is involved from the beginning of a policy effort. If this is the case, Behaviour1 refers to the initial stage in such an effort, where policymakers and practitioners working with them may follow four steps that apply thinking aids and decision-making tools aligned with BI to:
1. Decompose a policy problem into its behavioural components.
2. Prioritise what behaviours to assess as potential targets for a BI project.
3. Define potential target behaviours in terms of decision points and processes.
4. Select those behaviours exhibiting the best potential for a BI approach.
When
This stage is relevant when the BI team is part of the policy effort from the outset and is regarded as such. If the team is brought in superficially or is in need of “quick wins”, the tools in Behaviour might be too cumbersome to apply.
Milestone
The end of Behaviour provides a first milestone aimed at identifying what behaviour(s) to target. When arriving at this milestone, the team may consider bringing stakeholders together to ensure continued buy-in and support. Further, the team may also consider bringing additional stakeholders into the project based on their relevancy relative to the identified target behaviour(s).
1. In the BASIC manual, the core stages of BASIC are referred to in small caps (i.e. “Behaviour”) to distinguish the stage from the regular use of the word (i.e. in Behaviour you diagnose the behaviour problems).
The case collection Behavioural Insights and Public Policy: Lessons from Around the World (OECD, 2017) observes that BI has largely been applied to areas in policy implementation and enforcement/compliance at the end of the policy cycle. It also notes the potential for BI to be applied to earlier stages, and there are signs that this is already happening in many places. However, few tools and guidelines exist for how to integrate BI at these early stages. In particular, there is a lack of knowledge about how BI translates policy challenges into behavioural problems, as well as what thinking aids and decision-making tools can be used to effectively evaluate when to apply BI instead of other available policy instruments.
The Behaviour stage addresses this by focusing on a series of descriptive tools. That is, it is not aimed at explaining behaviour (Stage 2: Analysis) or identifying strategies for behaviour change (Stage 3: Strategies). The aim is simply to identify, prioritise, define and select those behaviours contained within a wider policy effort that are particularly suitable for a BI approach. This stage is fundamental but still often overlooked when planning a policy intervention (be it behavioural or more traditional) for dealing with a given policy problem. Finally, Behaviour also involves a series of ethical considerations that researchers and practitioners working with BI should consider relative to this stage; these are presented after the five tools.
Tool #1: Behavioural reduction: Decomposing policy issues into behaviours
Goal | Visual aid | Decision-making tool
---|---|---
1. Identifying constituent behaviours within wider policy issues to which behavioural insights might potentially be applied. | (figure) | Conduct a behavioural reduction to decompose policy problems, first into strategic domains and then into their constituent behaviours.
Whether speaking of big reforms or operational tinkering, public policy development and delivery is ideally driven by planned efforts. BI is well suited to this ideal as it aims to deliver slow, incremental, evidence-based and “gentle” (i.e. not purely “command and control”) regulation. Still, while befitting the mentality of planned efforts, the empirical drive of BI means that it works on the operational levels of public policy development and delivery, not on the broader level of policy strategies and big agendas.
This creates the obvious challenge: how can high-level policy planning be connected with the operational-level design and delivery of policy? On the one hand, at the early stages of the policy cycle, issues are usually formulated too vaguely and broadly for identifying what concrete behaviour changes to target at the operational level to effectively help resolve the policy issue (Soman, 2015). On the other hand, policymakers and practitioners working at the operational level will often find it difficult to assess whether the behaviours targeted are actually the most pertinent ones for addressing the larger policy issue at stake.
A “behavioural reduction” (see Figure 2.3 and Box 2.2) can help bridge the gap between the policy and the operational levels. It is a simple tool whereby the practitioner constructs a hierarchical branching tree model to map how a general policy issue connects to concrete behaviours. In its most simple form, the reduction is carried out in three steps that aim to decompose the policy problem into its many behavioural components. In this way, the behavioural reduction helps practitioners to identify the concrete behaviours tied to the policy problem and to which BI may be applied. This should be as concrete as possible (see also Tool #3 below).
Box 2.2. How to conduct a behavioural reduction in practice
The policymaker and practitioner can conduct a behavioural reduction by following this process:
1. Plot the general policy area or challenge at the top of a whiteboard. Practitioners often use whiteboards to think through behavioural problems in a group setting; if you have one, put the policy area or challenge at the top. This is referred to as the “policy level” of the behavioural reduction.
2. Connect the relevant strategic domains within which the policy issue arises. This level is referred to as the “strategic level” of the behavioural reduction.
3. Break each of the strategic domains down into concrete behaviours. The items at this level of the reduction should be concrete decisions, behaviours and procedures. Hence, this level of the behavioural reduction is also referred to as the “behavioural level” (for illustration, see Figure 2.2).
The concept of a “behavioural reduction” is not as strange as it might seem at first. It very much echoes a standard brainstorming session and may even be conducted as such using a whiteboard (though thinking in groups may have its own series of problems). To this end, assemble stakeholders and conduct a brainstorming process under a heading (policy effort or challenge), with the aim of generating a vast set of concrete examples of relevant behaviours (concrete behaviours) and ultimately sorting them into relevant categories (strategic domains). Finally, order them in the hierarchy described above – you now have a behavioural reduction.
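To make the hierarchy concrete, a behavioural reduction can be thought of as a simple three-level tree. Below is a minimal, illustrative Python sketch; the policy issue, strategic domains and behaviours shown are hypothetical examples and not content prescribed by the manual.

```python
# Minimal sketch: a behavioural reduction as a three-level tree
# (policy level -> strategic level -> behavioural level).
# The example content is hypothetical and for illustration only.

behavioural_reduction = {
    "policy_issue": "Reduce household food waste",      # policy level
    "strategic_domains": {                               # strategic level
        "Grocery shopping": [                            # behavioural level
            "Writing a shopping list before going to the store",
            "Buying multi-pack offers that exceed household needs",
        ],
        "Meal planning and storage": [
            "Checking the fridge before cooking",
            "Freezing leftovers the same evening",
        ],
    },
}


def list_concrete_behaviours(reduction: dict) -> list[str]:
    """Flatten the tree into the 'gross list' of concrete behaviours."""
    return [
        behaviour
        for behaviours in reduction["strategic_domains"].values()
        for behaviour in behaviours
    ]


if __name__ == "__main__":
    for behaviour in list_concrete_behaviours(behavioural_reduction):
        print(behaviour)
```

The flattened "gross list" produced at the behavioural level is the input to the prioritisation step in Tool #2.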
Tool #2: Prioritising potential target behaviours using priority filters
Goal | Visual aid | Decision-making tool
---|---|---
2. Evaluating and prioritising behaviours relative to how suitable they are for a BI approach. | (figure) | Apply a priority filter to prioritise which of these behaviours are behavioural problems suitable for an effective application of BI based on core features.
Whether as a result of a behavioural reduction or not, the first stage of a BI project will often present the practitioner and his or her team with a wider set of behaviours to which BI might potentially be applied (sometimes also called a “gross list”). Thus, the policymaker and the practitioner working with them will need some sort of decision-making tool to develop a short list of those behaviours that are most likely to provide for a successful project, also called a “net list”. The “priority filter” is one such decision-making tool (see Box 2.3). The priority filter is an instance of what in the BI literature is referred to as a weighted additive decision rule, where choosers assign importance weights to each attribute of a choice option and then compute an overall score for each alternative by summing up the products of the importance weights and the scores of that alternative (Payne, Bettman and Johnson, 1993). The assumption of the priority filter is that the success of a BI project crucially depends on a series of practical as well as empirical features.
Box 2.3. How to apply a priority filter in practice
1. Remove questions from and add questions to the filter (Table 2.3) as you find relevant to the project at hand, to create the relevant priority filter.
2. Inform all relevant team members and stakeholders about the general purpose of the priority filter and how to understand each question it poses.
3. Forward the priority filter as a template to all relevant team members and stakeholders and ask them to fill this out independently of each other prior to the next meeting (to avoid groupthink); one filter per relevant behaviour.
4. Gather relevant team members and stakeholders. Facilitate a discussion of the answers provided to each question, especially answers where participants differ substantially in their evaluation.
5. Agree on a “common evaluation” for each question evaluated for each behaviour.
6. Apply weights and rank each behaviour according to their total score.
Table 2.3 presents a priority filter formulated in terms of a questionnaire. Each question tries to identify the presence or absence of crucial features by means of a Likert scale evaluation (i.e. the four- or five-point scales commonly encountered in surveys). In addition, based on the specific project, each feature should be assigned a weight before the scores for each behaviour considered are added up. It is important that such weights are formulated prior to scoring behaviours, and possibly concealed from respondents, so as to avoid motivated fiddling with them. Finally, one also needs to apply a decision rule for shortlisting problems based on the result, e.g. deciding only to continue with the three behaviours that get the highest score.
Table 2.3. Priority filter questionnaire
Fill out one questionnaire per behaviour using a Likert scale: (1) = definitely not, (2) = probably not, (3) = uncertain, (4) = probably, (5) = definitely
Problem behaviour identified:

Stakeholder questions | Score
---|---
1. Does the behaviour intuitively seem to be a behavioural problem? That is, does the behaviour occur despite people having good reasons to act otherwise as judged by themselves? | 1 2 3 4 5
2. Is a change in the behaviour an institutional priority? That is, would a group of policymakers in the domain intuitively evaluate changing the behaviour as an institutional priority? | 1 2 3 4 5
3. Could changing the behaviour serve as “proof of concept”? That is, would success in changing the behaviour serve as a proof-of-concept in addressing a wider set of policy issues? | 1 2 3 4 5
4. Is targeting the particular behaviour uncontroversial? That is, will policymakers, citizens and relevant societal organisations agree that it is legitimate to try to change the behaviour with BI? | 1 2 3 4 5
5. Are relevant stakeholders motivated and ready to engage? That is, would relevant stakeholders have the time and willingness to engage in a project concerning the behaviour if you asked for their collaboration? | 1 2 3 4 5
6. Are the relevant arenas accessible for the BI project? That is, are the arenas in which the problem unfolds accessible to the BI team relative to ownership and/or privacy issues? | 1 2 3 4 5
7. Is the relevant data accessible? That is, will it be relatively easy to get hold of existing data or record behavioural data in light of practical and/or ethical issues? | 1 2 3 4 5

BI team questions | Score
---|---
8. Does the behaviour theoretically appear to be a behavioural problem? That is, is the behaviour a likely result of psychological limitations, heuristics and habits despite people having good reasons to act otherwise as judged by themselves? | 1 2 3 4 5
9. Are the reasons for a change in behaviour well documented? That is, is the evidence that supports Questions 1 and 8 produced by methodologies compatible with the psychological theories underpinning BI? | 1 2 3 4 5
10. Have similar problems been addressed with BI? That is, can you identify studies or projects where BI has been applied to a similar problem? | 1 2 3 4 5

FINAL SCORE:
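As a concrete illustration of the weighted additive decision rule described above, the sketch below scores a set of candidate behaviours from the agreed answers to the filter questions and ranks them. The weights, question keys and behaviours are hypothetical placeholders introduced for illustration; they are not values prescribed by the manual.

```python
# Minimal sketch of the priority filter as a weighted additive decision rule:
# each question gets an importance weight, each behaviour gets a 1-5 Likert
# score per question, and the overall score is the sum of weight * score.
# All weights, question keys and behaviours below are hypothetical examples.

weights = {
    "behavioural_problem": 2.0,     # Q1: intuitively a behavioural problem?
    "institutional_priority": 1.5,  # Q2: is change an institutional priority?
    "data_accessible": 1.0,         # Q7: is the relevant data accessible?
}

# Agreed "common evaluation" per behaviour (Likert scores 1-5).
evaluations = {
    "Missing medical appointments": {"behavioural_problem": 4,
                                     "institutional_priority": 5,
                                     "data_accessible": 3},
    "Late tax filing": {"behavioural_problem": 3,
                        "institutional_priority": 4,
                        "data_accessible": 5},
}


def total_score(scores: dict, weights: dict) -> float:
    """Weighted additive score: sum of weight * Likert score."""
    return sum(weights[q] * scores[q] for q in weights)


# Rank behaviours by total score; a decision rule (e.g. keep the top three)
# then produces the "net list".
ranking = sorted(evaluations.items(),
                 key=lambda item: total_score(item[1], weights),
                 reverse=True)

for behaviour, scores in ranking:
    print(f"{behaviour}: {total_score(scores, weights):.1f}")
```

Keeping the weights in one place, agreed before scoring, mirrors the manual's advice to fix weights prior to evaluation so that they cannot be adjusted to favour a preferred behaviour.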
Box 2.4. What is a behavioural problem?
A behavioural problem is a pattern in behaviour, whether regarded in terms of attention, belief formation, choice or determination, that occurs despite people having good reason to act otherwise. Hence, a behavioural problem is not a problem of lacking access to information, proper attitudes or the right incentives or sanctions, nor one calling for further regulation such as a ban or prohibition. In practice, such behaviour is often referred to as “irrational”.
Tool #3: Defining potential target behaviours in terms of decision points
Goal | Visual aid | Decision-making tool
---|---|---
3. Conceptualise prioritised behavioural problems as decision points in such a way that the lens and analytical tools of BI can readily be applied. | (figure) | Use decision points to define prioritised behaviours so as to allow for the application of concepts and methods from the behavioural sciences.
Having shortlisted potential target behaviours, the next step is to define these behaviours more closely. However, something that is often overlooked when working on a BI project is that the primary focus is not a particular individual or group, but rather behavioural tendencies as they unfold in a given context. Hence, a BI project usually derives the target group from a behavioural pattern (this is not to say that demographic groups based on gender, age, income, etc. may not be used as secondary defining traits of groups as well). This also implies that the field is different from, for instance, sociology and communication science, where one often starts by defining target groups for an intervention in demographic terms.
BI applies to a theoretical conceptualisation of behaviour, i.e. to behaviour as seen through the lens of the theories and methods underpinning the BI approach. Thus, the next step for the policymaker and the practitioner working with them is to ensure up front that the behaviours studied are defined in accordance with these theories and methods – in particular, this usually means describing behaviours in terms of decision points, even if only potential decision points.
Box 2.5. How to define behaviours in terms of decision points
A standard way to define behaviour in terms of decision points is to identify a generic agent (the “who”) and the generic set of this agent’s available choice options (the “what”). Such descriptions will be familiar to anyone trained in economics or decision theory where “decision point models” are standard. Different from this, though, behavioural decision points also include explicit references to the generic contexts within which a behaviour unfolds (the “when-and-where”). Finally, and also different from standard decision theory, descriptions of behavioural patterns as decision points ideally identify the frequency distribution over choice options (generic behaviours) recorded or observed in the generic context.
1. Define a generic agent (the “who”).
2. Provide the generic agent with a set of available choice options (the “what”).
3. Specify the generic context within which the behavioural pattern unfolds (the “when-and-where”).
4. Describe the observed frequency distribution over the choice options (how many do a, how many do b, how many do c, etc.). This point ideally requires surveys, observations or data mining, but the distribution may be estimated if this is not possible.
In particular, neither a behavioural pattern nor its conceptualisation as a decision point can be directly observed. Identifying a behavioural pattern and describing it as a decision point is a constructive act, in which a model of the mind connects empirical observations – that is, it is “a theoretical conceptualisation”. To apply BI, then, the practitioner needs to define potential target behaviours in terms of behavioural patterns conceptualised as decision points. This means defining what behaviour is enacted, by whom, when and where.
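One way to keep such a definition explicit is to record the four elements of a decision point in a small data structure. The sketch below is illustrative only; the field names and the example decision point are hypothetical assumptions, not content from the manual.

```python
# Minimal sketch: a target behaviour defined as a decision point, i.e. a
# generic agent (who), available options (what), context (when-and-where)
# and the observed or estimated frequency distribution over the options.
# The example content is hypothetical.
from dataclasses import dataclass, field


@dataclass
class DecisionPoint:
    agent: str                  # the "who" (a generic agent, not an individual)
    options: list[str]          # the "what" (available choice options)
    context: str                # the "when-and-where"
    frequency: dict[str, float] = field(default_factory=dict)  # share choosing each option


commuting = DecisionPoint(
    agent="Commuter arriving at the station during morning rush hour",
    options=["take the stairs", "take the escalator"],
    context="Main entrance of the central station, 07:00-09:00 on weekdays",
    frequency={"take the stairs": 0.12, "take the escalator": 0.88},
)

# Quick sanity check that the recorded shares form a distribution.
assert abs(sum(commuting.frequency.values()) - 1.0) < 1e-9
```

Writing the definition down in this form makes it harder to skip any of the four elements, and the frequency field records whether the distribution was observed or merely estimated.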
Tool #4: Identifying crucial decision points in processes using behavioural flowcharts
Challenge | Visual aid | Decision-making tool
---|---|---
4. Conceptualising behavioural problems as processes in such a way that crucial decision points may be identified (then return to 3). | (figure) | Use behavioural flowcharts to describe how a process unfolds and how people make choices throughout this.
At times one may encounter a potential target behaviour that is part of, or results from, a process or chain of actions. In such cases, one cannot define the behaviour as a single decision point but needs to unfold the potential target behaviour as a process of decision points and identify the most crucial one(s) before the target behaviour can be defined.
To this end, one may draft a “behavioural flowchart” (see Figure 2.5). A flowchart is a well-known tool in data science and related disciplines. Flowcharts use a defined set of arrows and shapes to represent activities and relationships in a process. The goal of the diagram is to show how the steps in a process fit together by breaking down a process into individual activities and illustrating the relationships between these activities, as well as the flow of the process. A behavioural flowchart provides a detailed description of how a process actually unfolds and attaches behavioural measures of how people make choices throughout the process. This allows for quantitative comparative analysis of the decision points in the flowchart, aimed at identifying the crucial decision points to define. The simplicity of behavioural flowcharts also makes them useful tools for understanding and sharing processes in teams, as well as for analysing these in an effort to identify, besides crucial decision points, potential loose ends and friction points that inhibit the efficiency and reliability of the process.
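To illustrate what "attaching behavioural measures" to a process can look like, the sketch below represents a flowchart as an ordered sequence of steps and computes the drop-off at each one, which helps spot the most crucial decision point. The process steps and counts are hypothetical and purely illustrative.

```python
# Minimal sketch: a behavioural flowchart as an ordered list of steps, each
# carrying a behavioural measure (how many people reach and complete it).
# Comparing completion rates across steps points to candidate crucial
# decision point(s). All step names and counts are hypothetical.

steps = [
    {"step": "Receives renewal letter",   "reached": 1000, "completed": 930},
    {"step": "Opens online renewal form", "reached": 930,  "completed": 610},
    {"step": "Uploads required documents","reached": 610,  "completed": 580},
    {"step": "Confirms and submits",      "reached": 580,  "completed": 560},
]

for s in steps:
    completion_rate = s["completed"] / s["reached"]
    s["drop_off"] = 1 - completion_rate
    print(f"{s['step']}: {completion_rate:.0%} complete, {s['drop_off']:.0%} drop off")

# The step with the largest drop-off is a candidate crucial decision point
# to be defined further with Tool #3.
crucial = max(steps, key=lambda s: s["drop_off"])
print("Candidate crucial decision point:", crucial["step"])
```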
Tool #5: Select the behavioural problem(s) with the best potential for a behavioural approach
Challenge | Visual aid | Decision-making tool
---|---|---
5. Finally, select what behavioural problem(s) to target for further analysis. | (figure) | Apply selection filters to finally select what behavioural problem(s) to target for further analysis.
At this point, a “net list” of potential behavioural problems has been identified and defined as potential target behaviours for a BI project. The next and final step of Behaviour is to select the behaviour(s) that exhibit the best possible conditions for behaviour change with a sizeable impact through the application of BI. In assessing this, three heuristic questions (i.e. mental shortcuts or intuitive judgments) may be used as consideration points as to what potential behaviour(s) to target. In BASIC, such questions are referred to as a selection filter.
1. What do base rates, combined with past policy effort, indicate about how difficult it will be to change the behaviour?
A low base rate (e.g. <10% conformity to the preferred behaviour) may indicate, for instance, that only a specific and select subset of people currently engage in this behaviour or that very little effort has been put into changing this behaviour in the past. If the latter is the case, it might not be a behavioural problem but just an informational one, and traditional strategies may be a natural first step to consider – or, of course, it might just be low-hanging fruit for a BI approach. However, if a high amount of effort has been devoted in the past to changing this behaviour, then practitioners should carefully consider what has been done so far and why base rates remain low despite past efforts.
A similar point pertains to very high base rates of conformity with the preferred behaviour (e.g. >90%). If everyone is doing the right thing with little policy intervention, then traditional policy efforts might be a first choice. However, if a lot of effort has been put into changing this behaviour, the practitioner should investigate whether there might be special challenges or reasons why a small number of people do not conform to the behaviour enacted by others. Possibly such inquiry may lead to a revision of the target group associated with the behaviour studied and thus a redefinition of the behavioural problem.
Finally, many practitioners interpret a medium base rate as a good indicator of the potential for behaviour change with a sizeable impact through the application of BI. A reason for this is that BI mainly applies to behaviours resulting from limited attention, informational complexity, weak preferences and minor friction, all of which are factors making us vulnerable to behavioural bias. However, the studies of bias that underpin BI rarely see extreme base rates. Thus, medium base rates better reflect the phenomena studied by the underlying science. Medium base rates may also indicate a high potential for change as they are often overlooked by past policy efforts because extreme cases tend to attract more attention.
2. How will a potential behaviour change translate to impact?
Another crucial question to consider when evaluating the potential of changing target behaviours is how such change will translate into individual and societal impact.
The relationship between the magnitude of the behaviour change and the resulting impact varies depending on the issue at stake. To illustrate, one might aim to reduce street litter in order to reduce a city’s cleaning costs. Yet, even a 50% reduction in litter might not have any economic impact at all. This could be the case if even mildly littered streets are unacceptable to a city and its citizens. As a consequence, the municipality will have to clean the streets with the same frequency and costs despite achieving a 50% reduction in litter. At the other extreme, sometimes even slight behavioural changes may generate a big impact. This is, for instance, the case when it comes to generating competition in complex markets. Here it may only take 5%-10% of consumers actively engaging in price comparisons to drive competition with resulting benefits for all consumers. As a result, one needs to carefully analyse how each of the potential behavioural changes considered is likely to translate into impact.
3. What is the frequency with which the behaviour occurs?
A third and final question one needs to consider when deciding which behaviour to target is how frequently the behaviour occurs. This question goes beyond asking about the relationship between behaviour change and impact, as the behavioural pattern may be so infrequent that the expected total societal impact will be negligible. On the other hand, behaviour with apparently marginal costs or benefits per instance may be so frequent that the aggregate impact of even a slight behavioural change may be considerable. Also, a measure of the frequency with which a behaviour occurs provides valuable information about approximately how long it will take to reach the number of observations required for statistical analysis, should an intervention aimed at changing the behaviour be tested.
Together, these three consideration points may be used as heuristics (i.e. mental shortcuts or intuitive judgments) to help select which behavioural problem to target in a BI project, i.e. “the target behaviour”.
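A rough way to make the three consideration points comparable across candidate behaviours is to combine base rate screening with an estimate of aggregate impact (frequency times the per-instance value of a plausible change). The sketch below illustrates that arithmetic; the thresholds, figures and behaviour names are assumptions made for illustration, not values given by the manual.

```python
# Minimal sketch of the selection filter heuristics:
#  1) screen base rates (very low / very high rates warrant extra scrutiny),
#  2) estimate aggregate impact as frequency * plausible change * value per
#     changed instance.
# All thresholds and example figures are hypothetical.

def base_rate_flag(base_rate: float) -> str:
    """Heuristic screening of the share already showing the preferred behaviour."""
    if base_rate < 0.10:
        return "very low: check whether the problem is informational or structural"
    if base_rate > 0.90:
        return "very high: check why a small group does not conform"
    return "medium: often a good candidate for a BI approach"


candidates = [
    # (behaviour, base rate, occurrences per year, value per changed instance,
    #  plausible change in rate from a BI intervention)
    ("Paying invoices on time", 0.55, 2_000_000, 3.0, 0.05),
    ("Reporting rare permit errors", 0.08, 5_000, 40.0, 0.10),
]

for name, base_rate, frequency, value, plausible_change in candidates:
    impact = frequency * plausible_change * value  # rough expected annual impact
    print(f"{name}: base rate {base_rate:.0%} ({base_rate_flag(base_rate)}); "
          f"estimated impact ~{impact:,.0f} per year")
```

The frequency figure also indicates roughly how long a trial would need to run to collect enough observations for statistical analysis, as noted in the third consideration point above.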
Ethical guidelines for identifying and defining the problem (Behaviour)
The initial stage of a BI project raises a series of special ethical issues that need to be considered from the outset. The Behaviour stage recommends ten ethical guidelines that can be summarised as:
Observe the limits of legitimate public policy interventions. Make sure that the team refrains from targeting behaviours and behaviour changes that it cannot defend in public as well as in the wider BI community. In particular, it does not automatically follow that BI will be applied in the service of people’s own interests. Always consider the legitimacy of the public motive behind targeting a given behaviour for change by comparing this to the regulatory paradigm the team is operating within.
Secure specific and robust acceptance when targeting behaviours. BI is underpinned by psychological theories that take people’s self-reported behaviour, beliefs and preferences to be easily influenced by the immediate context and subject to biases. Always evaluate the existing evidence for targeting a given behaviour change through the lens of the theories that underpin BI. Make sure that acceptance of targeting a behaviour is not obtained due to framing and similar influences and avoid inferring the acceptability of targeting specific behaviours from acceptance of the general policy goal.
Beware of oversimplifying behaviour. Behaviour conceptualises behaviours as the aggregate patterns of groups. Yet, individuals usually hold distinct preferences. Always assess the heterogeneity of preferences in groups and consider how to protect individual rights, values and liberties when targeting behaviour change. Also, a change in one behaviour often leads to changes in other behaviours and such derivative changes may not impact all citizens in the same way. Always consider what the potential side effects might be of pursuing a given behaviour change and for whom, and involve stakeholders with relevant knowledge about what these might be.
Special considerations for the use of private data. A BI project will often involve collecting and making use of types of data that are not standardly contained in, or connected to, the databases of governmental agencies. Such data will often pertain not only to attitudes and beliefs but also to the actual behaviour of citizens. This includes: i) primary behavioural data (i.e. data on or related to the real-world behaviour of citizens); ii) secondary behavioural data (i.e. data on variables related to people’s attention, belief formation, preference construction, determination and more); iii) contextual data (i.e. data on contextual variables, including seemingly irrelevant aspects of choice architectures); and iv) data on people’s reflective preferences (in so far as such exist) about what people believe they ought to do given their available options.
Table 2.4. Ethical guidelines for Stage 1: Behaviour
1. Ethics is an issue to be observed from the outset of BASIC. As a first step in Behaviour, establish an ethical review board to follow the project from day one and throughout its existence with a clear eye on the following guidelines. If this is too ambitious for the project at hand, then outline the ethical issues associated with the project, how the project proposes to address these and consider where ethical approval is required.
2. Behaviour, like other explorative stages in BI projects, often involves data collection and analysis that goes beyond what is standard in government agencies. Appoint at least one member of the ethical review board or the BI team to supervise ethical aspects of data collection and use.
3. Behaviour is characterised by working across institutional boundaries. Make sure all team members observe existing ethical guidelines and codes of conduct of the particular fields that the project involves as well as receive the necessary training to comply with these.
4. Do not be a passive bystander. Discuss and establish procedures for how the team handles collaborating parties that fail to comply with their own ethical guidelines and codes of conduct, while also observing that honesty, anonymity and whistleblowing are to be protected.
5. Existing ethical guidelines and codes of conduct will not cover all aspects of a BI project. As part of Behaviour, establish a procedure from the outset for flagging activities and data collection that are not covered by these and for how to perform an ethical review in such cases.
6. Behaviour targets behavioural problems, i.e. behaviours where people fail to achieve their preferred ends due to psychological factors. Yet, not all such behaviours fall within the legitimate confines of public policy. Make sure that the team refrains from targeting behaviours and behaviour changes that it cannot defend in public as well as in the wider BI community.
7. While Behaviour targets behavioural problems, it does not automatically follow that BI will be applied in the service of people’s own interests. Always consider the legitimacy of the public motive behind targeting a given behaviour for change by comparing this to the regulatory paradigm the team is operating within.
8. Behaviour conceptualises behaviours as the aggregate patterns of groups. Yet, individuals usually hold distinct preferences. Always assess the heterogeneity of preferences in groups and consider how to protect individual rights, values and liberties when targeting behaviour change.
9. A change in one behaviour often leads to changes in other behaviours. Always consider the potential side effects of pursuing a given behaviour change, involve stakeholders with relevant knowledge about these and create suitable measures for monitoring potential side effects throughout all relevant stages in the project.
10. BI is underpinned by psychological theories that take people’s self-reported behaviour, beliefs and preferences to be easily influenced by the immediate context. To serve the people, rather than the context, always evaluate the existing evidence for targeting a given behaviour change through the lens of the theories that underpin BI.
Stage 2: Analysis – Understanding why people act as they do
Stage 2: Analysis
In the second stage of BASIC – Analysis – the practitioner focuses on analysing the target behaviour(s) and the choice architecture(s) within which this behaviour is embedded. The aim is to understand why people act as they do. BASIC differs from other BI approaches by emphasising the importance of analysis and its systematic relationship to relevant strategies. This feature is captured through the ABCD framework, which suggests that behavioural problems may be analysed in terms of four aspects:
1. Attention.
2. Belief formation.
3. Choice.
4. Determination.
When
Analysis is held to be too important to ignore in any responsible BI project, and it is crucial for the team to highlight the stage so that it is given the time to conduct a proper analysis. ABCD, however, takes some effort to master, and other existing non-diagnostic frameworks (see Box 2.6) may be substituted if the aim is to generate ideas faster. Also, ABCD may be supplemented with more traditional approaches.
Milestone
Provided that the policymaker and practitioner have reached a satisfactory level of confidence regarding the outcomes of this stage, Stages 1 and 2 may be referred to as a “behavioural analysis” of the behavioural problem. Using ABCD for this analysis can form the basis for identifying effective behavioural Strategies for informing public policy (Stage 3).
The first stage of BASIC focused on how to approach a policy problem from a BI perspective from the outset of the policy cycle. The stage ended with the practitioner selecting one or more behavioural problems to which BI may potentially be applied.
Stage 2: Analysis aims to understand why people act as they do relative to the target behaviour(s), as seen through the lens of BI. This is particularly challenging, as insights from the behavioural sciences are often counter-intuitive and also reveal how people struggle when trying to recall facts, attribute correct causes and make successful predictions. While standard qualitative methods have a lot to offer in terms of learning about people’s perspectives, their knowledge and assumptions, they cannot easily report on why people act as they do, what psychological mechanisms are involved and what would effectively influence their behaviour. Thus, a BI approach should always be cautious about the validity of more traditional approaches.
BI and considerations in understanding why people act as they do
At the heart of BASIC one finds an iterative systematic inquiry combined with behaviourally informed indicators (i.e. “symptoms”) and mixed methods that seek to generate hypotheses about why people act as they do. This process of inquiry is the ABCD framework, which is introduced in this chapter and will be progressively developed through this and the subsequent stage.
The selection of method(s) for studying why people act as they do relative to the behaviour studied is based on what kind of information is sought, from whom and under what circumstances (Robson, 2001). Unlike traditional scientific research, the study of behaviour in the real world is characterised by the adoption of flexible research designs where the nature and number of methods used can change as data collection continues. Thus, there is no single or straightforward way to go about understanding why people act as they do in a BI perspective.
The behavioural sciences as defined earlier, in their current form, already present a wide range of methodological approaches for studying the individual, social and contextual factors that cause behaviours and how such behaviours may be modified. Unfortunately for BI, many of these methods, those from behavioural economics for example, have mainly been designed for strictly controlled conditions in laboratory settings. Hence, they are not readily applicable to studying the behaviours of citizens, consumers and employees acting in the real world, nor for designing better public policy.
This poses a challenge which is enhanced by the fact that many of those who want to work with BI have not been trained in the theories and methodologies underpinning the behavioural sciences. As a result, there is a tendency to apply more traditional methods for studying behaviour in the real world, such as classical questionnaires, interviews, focus groups, etc., with too little scrutiny of the face validity of the data produced by such methods; such traditional methods are also easier for stakeholders to understand. This tendency is further reinforced by the fact that many tend to perceive BI not only as innovative but also as a more creative, participatory and colourful approach than it really is. Consequently, in many places, BI is often merged with other innovative paradigms such as design thinking and collaboration without much regard to the fundamental differences between these paradigms and with the danger of diluting the theoretically grounded contributions that BI has to offer.
To meet this challenge, it is important for practitioners to recognise and emphasise that methodological eclecticism is constrained by theoretical consistency. One cannot, for instance, ask a person to provide a consistent explanation of why they do what they do, while at the same time assuming the drivers of behaviour to be largely outside the bounds of rationality and perhaps even outside of reflective consciousness. Further, policymakers and practitioners working with them should always pay close attention to what aspects of methodologies can be reconciled with the psychological theories underpinning the behavioural sciences and what cannot. In particular, the application of many standard methodologies only makes sense if they are suitably adapted to accommodate the insights provided by these theories.
Analysing behaviour calls for methodological eclecticism as well as theoretical consistency
BASIC adopts a flexible approach in the stage of Analysis dependent on the behavioural problems studied and where the nature and number of methods used can change as data collection continues. Thus, there is no single or straightforward way to go about understanding why people act as they do in a BI perspective. Rather than going through the various methods that might be used in Analysis, some central points for practitioners to consider are stated here.
Beware of face validity. In accommodating standard methodologies to BI, probably the first and foremost thing to observe is the consequences that dual process and similar psychological theories have for the constructs and phenomena usually studied. For instance, different from standard methodologies, such theories often approach self-reports as inherently unreliable as these are taken to be subject to cognitive limitations and bias. In particular, self-reported memories, beliefs, preferences, intentions and experiences are not regarded as mental facts fetched from our inner libraries but rather as constructions assembled or inferences made when the circumstances call for them. As behavioural scientist Nick Chater (2018) radically puts it, “the mind is flat” and the inner library is itself a cognitive illusion. It is crucial for the practitioner to continuously consider whether the methodologies applied tap into facts and not just their face validity.
Triangulate if possible. A further consequence of the theoretical underpinnings of BI is that standard methodologies should be treated with care and, if possible, be methodologically triangulated. Qualitative interviews, self-reported answers to survey questions and even observations should be treated as explorative experimentations tracing the truth rather than providing it. As in any other real-world situation where the inquirer has multiple but unreliable sources available, one needs to cross-check and possibly perform small-scale tests and experimentation to get the story right.
Get up close with reality. Due to their cost-effectiveness and face validity, participatory workshops, focus groups and expert interviews are currently standard tools when trying to find out why people act as they do (as well as what to do about it). Yet, such sources should be evaluated relative to their mental distance from the actual behaviour inquired into and should be disfavoured relative to methods such as in situ interviews and direct observational studies of first-hand experience, conducted with those involved in the behaviour as well as by practitioners themselves. Also, in this respect, the BI approach to public policy is new. An ambitious way of stating this would be to say that “no policy can truly be said to be behaviourally informed if the informant has not been there herself to observe through the lens of BI – from within as well as from the outside – how the target behaviour subject to the policy actually unfolds in its natural context”. But insofar as this is regarded as too idealistic, it should at least be an ideal to aspire to.
ABCD – A framework for structuring Analysis
At the heart of BASIC’s Analysis is the idea that through an iterative systematic inquiry, practitioners may form hypotheses based on best guesses according to the best evidence available – much like a doctor devises diagnostic procedures and tests to form hypotheses about illnesses based on their systematic relationship to symptoms. This type of reasoning is called “abductive” reasoning – reasoning to the best possible explanation, and always subject to uncertainty. Still, the reason for putting significant efforts into pursuing a diagnostic approach in the analysis of behaviour is that this ultimately provides a more effective and responsible approach to the development of behaviourally informed Strategies to be tested as part of the stage of Intervention.
To assist the process of abductive reasoning, Analysis offers a framework called ABCD for structuring the diagnostic inquiry around Attention, Belief formation, Choice and Determination (see Figure 2.6).
Like existing BI frameworks (see Box 2.6), ABCD seeks to assist practitioners in analysing behavioural problems on the basis of behavioural insights. Different from these frameworks, however, ABCD goes beyond presenting a list of selected insights. Instead, it includes a structured diagnostic approach for analysing target behaviour(s) that looks at:
1. Diagnostic aspect and indicators: The inner two circles in the figure that help narrow behaviours into their respective section(s) of the ABCD Framework. This will be developed in the remainder of Stage 2: Analysis.
2. Strategies: The second outermost circle that gives a starting point for addressing behaviours diagnosed in the respective section(s) of the ABCD Framework. This will be further developed in Stage 3: Strategies.
3. Insights: The outermost circle that gives behavioural solutions used in different contexts around the world as a starting point for testing possible behaviourally informed policy initiatives. This will be further developed in Stage 3: Strategies.
ABCD is derived from the fundamental assumption of BI that behavioural problems result from systematic deviations from what is predicted by rational models (see also DellaVigna (2009) for a similar idea); and since rational models make predictions within the aspects of Attention, Belief formation, Choice and Determination, relevant aspects of behavioural problems must be examinable according to these four domains.
Box 2.6. Some key BI frameworks
With the rise of BI around the world, a number of useful frameworks have been developed by both government and non-government agencies. Similar to ABCD, all of these frameworks use a simple mnemonic to establish an analytical tool aimed at helping a policymaker think about behavioural issues within a policy problem. While ABCD can be seen as “another framework”, it was designed and optimised to be used as the analytical heart of the BASIC process framework rather than a standalone tool.
Below is a non-exhaustive list of widely referenced frameworks that complement ABCD and could be a resource for policymakers looking for different ways to analyse a behavioural problem.
MINDSPACE (The Behavioural Insights Team, 2010): Provided an early checklist for thinking about how nine well-evidenced behavioural insights may inform public policy development, design and delivery.
Test, Learn, and Adapt (The Behavioural Insights Team, 2013): Gave an accessible introduction to the basics of using randomised controlled trials in policy evaluation.
EAST (The Behavioural Insights Team, 2013): Provided a simple framework considering how behavioural insights may help design policies based on leveraging convenience, social aspects of decision-making and the attractiveness and timeliness of policies.
World Development Report: Mind, Society, and Behavior (World Bank, 2015): Gave a comprehensive overview of how the BI perspective on human decision-making is of relevance to development policy.
Define, Diagnose, Design, Test (ideas42, 2017): Provided a practical framework for thinking through a problem and identifying behaviourally informed solutions.
US Internal Revenue Service Behavioral Insights Toolkit (IRS, 2017): Created to be a practical resource for use by IRS employees and researchers who are looking to use BI in their work.
Assess, Aim, Action, Amend (BEAR, 2018): Presented a playbook developed for applying BI in organisations outlining four steps for applying BI.
Sources: The Behavioural Insights Team (2013), Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials, https://38r8om2xjhhl25mw24492dir‑wpengine.netdna‑ssl.com/wp‑content/uploads/2015/07/TLA‑1906126.pdf (accessed on 6 November 2018); The Behavioural Insights Team (2010), MINDSPACE, https://www.behaviouralinsights.co.uk/publications/mindspace/ (accessed on 6 November 2018); World Bank (2015), The World Development Report 2015: Mind, Society and Behaviour, http://www.worldbank.org/content/dam/Worldbank/Publications/WDR/WDR%202015/WDR‑2015‑Full‑Report.pdf (accessed on 6 November 2018); ideas42 (2017), Define, Diagnose, Design, Test, http://www.ideas42.org/blog/first-step-towards-solution-beta-project/ (accessed on 6 November 2018); IRS (2017), Behavioral Insights Toolkit, https://www.irs.gov/pub/irs-soi/17rpirsbehavioralinsights.pdf; BEAR (2018), How Should Organizations Best Embed and Harness Behavioural Insights? A Playbook, http://www.rotman.utoronto.ca/‑/media/Files/Programs‑and‑Areas/BEAR/White‑Papers/BEAR_BIinOrgs.pdf?la=en (accessed on 6 November 2018).
Further, practitioners may not only use ABCD to systematically form hypotheses based on best guesses according to the best evidence available but, as will be evident from the next stage, ABCD also provides a key for systematically identifying what behavioural strategies may be relevant if the aim is to influence the target behaviour.
The four aspects of behavioural problems
The framework begins with ABCD itself: Attention, Belief formation, Choice and Determination. The assumption is that since rationality provides for prescriptions in each of these four aspects of behaviour, behavioural problems – understood here as deviations from such prescriptions – must be examinable in terms of these aspects as well. Thus, a prerequisite of applying ABCD is understanding in broad terms, what rationality prescribes in these four aspects:
Attention is about what to focus on in a given context. Here the rules of rationality are quite simple, given that people cannot focus on everything: to act rationally in this domain, people should focus on the most important aspect of the context in light of their knowledge and preferences.
Belief formation is about making judgments provided the information that one has available. Here the rules of rationality are quite complex and have been a subject matter of philosophy and theory of science since Ancient Greece. Simplified somewhat, to act rationally, people should form their beliefs according to the rules of logic as applied to well-defined propositions as well as rationally update their beliefs in light of new information according to sound probability theory.
Choice is about making decisions between the available choice options given one’s preferences. How to do this rationally has traditionally been the subject matter of philosophy of choice, decision theory and microeconomics. Again simplifying somewhat, to act rationally people should make choices so as to maximise subjective expected utility.
Determination is about sticking to one’s choices. Determination, including the subject matters of self-control and willpower, has not been studied much relative to rationality. The reason for this might be that the rules of rationality ultimately are quite simple in this domain as well. Provided that one decides to pursue certain long-term goals, one should keep to one’s plan.
Ignoring the details of academic debate, these rules of rationality as applied in the four domains are quite uncontroversial. That is, focusing on the most important priorities, forming logically sound beliefs according to the information available, making choices that maximise subjective expected utility based on one’s preferences (whatever they might be), and then sticking to those choices is advice that any reasonable person would subscribe to.
Yet, advances in the behavioural sciences have revealed that people inhabiting the real and complex world have difficulty adhering to this advice, making us predictably irrational. While we readily embrace the rules of rationality, we tend to forget that more intuitive-automatic processes provide the foundational as well as, at times, only mechanisms driving our behaviour. The consequences of this are summarised in very general terms in the four diagnostic domains and associated with certain diagnostic indicators (symptoms) that practitioners may use as cues of their relevance in the analysis.
Box 2.7. Critical steps for using the ABCD framework
In the second and third stage of BASIC, practitioners apply the ABCD framework in the Analysis (Stage 2) of target behaviour(s) as well as in the systematic identification of appropriate Strategies (Stage 3) for designing Interventions that may inform public policies aimed at creating behaviour Change.
ABCD works by focusing the practitioner’s analysis on each of four aspects of behaviour that tend to cause the biases involved in behavioural problems and changes. It is implicit but worth noting: your behavioural problem or change can connect with more than one aspect, so care must be taken to consider the full framework in your analysis.
The main steps in the application of ABCD are to:
1. Select a target behaviour for Analysis (see Stage 1: Behaviour).
2. Become familiar with the behaviour studied by observing the behaviour; engaging in the behaviour; interviewing people engaging in, or otherwise involved in the behaviour; as well as by determining what data already exists and examining it.
3. Use indicators such as those defined by ABCD to hypothesise what behavioural aspects (e.g. attention, belief formation, choice, determination) are likely to be involved in causing the behavioural problem or obstacle for change.
4. Consider all potential data that could, in principle, be recorded about the target behaviour relative to the generated hypotheses in (3) if everything was possible.
5. Determine what further data could be recorded through behaviourally informed methods to support or even test hypotheses in (3).
6. Return to the field to study, record further data and if possible, conduct falsifiable tests of the hypotheses about what behavioural factors may cause the behavioural problem or obstacle involved in the target behaviour.
Repeat Steps 2 to 6 until the team is sufficiently confident in the viability of the hypotheses given the time and cost constraints of the project (a schematic sketch of this iterative loop is given below).
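As a purely illustrative sketch of how Steps 2 to 6 can be organised, the snippet below treats diagnostic indicators (“symptoms”) as cues pointing towards one or more ABCD aspects and tallies which aspects a set of field observations suggests. The indicator list and field notes are hypothetical placeholders and not part of the toolkit itself; in practice, the team would refine both across repeated iterations of the loop.

```python
from collections import Counter

# Hypothetical mapping from observed indicators ("symptoms") to ABCD aspects.
INDICATORS = {
    "missed appointments": ["Attention"],
    "reminders arrive at irrelevant times": ["Attention"],
    "overconfident in own estimates": ["Belief formation"],
    "regrets or complains about the choice afterwards": ["Choice"],
    "starts but abandons the task": ["Determination"],
}

def hypothesise_aspects(observations):
    """Tally which ABCD aspects the observed indicators point towards (Step 3)."""
    counts = Counter()
    for observation in observations:
        for aspect in INDICATORS.get(observation, []):
            counts[aspect] += 1
    return counts.most_common()

# Step 2: familiarisation with the behaviour yields field notes (hypothetical).
field_notes = [
    "missed appointments",
    "reminders arrive at irrelevant times",
    "starts but abandons the task",
]

# Steps 4-6 would then decide what further data to collect, return to the
# field and test the hypotheses, repeating until the team is confident.
for aspect, count in hypothesise_aspects(field_notes):
    print(f"{aspect}: {count} supporting indicator(s)")
```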
Diagnostic aspect and indicators
Attention – The window of the mind
An implicit assumption is often made that our attention capacities are more or less boundless, which allows us to focus on whatever is most important. This assumption follows quite naturally from the idea that any behaviour, including what we attend to, is seen as maximising expected utility. It is also from this assumption that rationality provides its sole prescription for attention: focus on whatever is most important.
This rational treatment of human attention stands in marked contrast to that provided by cognitive and social psychology. Here human attention has been shown to be surprisingly scarce, easily distracted, quickly overwhelmed and subject to switching costs. All of these seriously affect our ability to spot what is important as well as bias our processing of whatever is in focus.
Behaviours that, from a rational perspective, appear deliberate may, from a behavioural viewpoint, be analysed in terms of at least five types of inattention.
Forgetting: If there is nothing in particular within a given context to remind people about the need to attend to a specific action, then people will tend to forget to carry out this action. As a consequence, people may miss their doctors’ appointments, fail to file taxes or to take their medication, and the like.
Overlooking: If there is nothing to attract people’s attention to the appropriate action when carrying out an alternative task, they will tend to overlook the appropriate task. As a result, people may easily overlook speed limits when driving, sanitisers when visiting family at the hospital and gorillas when they are counting passes of a basketball.
Relegating: If an action is brought to people’s attention in a context where another task holds their immediate attention, people will tend to relegate the action in favour of the immediate task. Consequently, people tend to set aside actions even when these are brought to their attention, if this happens in an irrelevant context. For example, this happens when reminding people about deadlines for filing taxes while they are seated in the cinema, or when warning young people partying about the long-term health consequences of smoking.
Multitasking: If people are engaging in multiple tasks simultaneously, their ability to detect relevant information and perform cognitive tasks is significantly impaired. As a consequence, texting while driving leads to decreased performance in traffic, multitasking while working at industrial sites leads workers to ignore potential safety risks, and multitasking in the surgical operating theatre results in doctors ignoring procedures.
Distractions: If people switch back and forth between tasks, perform tasks in rapid succession or become distracted by irrelevant cues, cognitive performance as well as memory retention will suffer significantly. This is what happens when students switch back and forth between listening to a lecture and surfing the Internet, when office workers work in open plan offices or when workers switch between emails and important decisions during a meeting.
These five types of inattention are concepts drawn from everyday language but with descriptions based on insights from the behavioural sciences. As such, they provide bridges for the policymaker and practitioner to explore relevant fields in search of more specific behavioural insights that might explain the targeted behavioural problem or challenge.
Anticipating the next stage, Strategies, the practitioner may thus look at the following behavioural insights when analysing a target behavioural problem relative to the aspect of attention:
Is the decision point located when relevant for people? In particular, is the decision point well-timed and placed in a context where people are in a suitable state of mind?
What do people attend to at the relevant decision point and, if they are not attending to what they ought to, what is seizing their attention instead and why? In particular, what part of the context is salient to them, do they get a reminder, is a decision point prompted and does attention play out in a social context?
What happens if people are inattentive at the decision point? In particular, is there a default that, through inattention, channels people into a particular outcome, or is some other sort of safety mechanism in place?
Belief formation – Making sense of the world
The second aspect to look into when analysing a target behavioural problem is the role played by belief formation – the processes involved in making sense of the world. In this domain, epistemic rationality assumes that everyone carefully searches for and scrutinises all relevant information; seeks new information and updates beliefs accordingly; and adheres to rules of logic and probability theory, even when matters get complex and extended computational power is required. However, the behavioural findings on belief formation differ in multiple ways from what is predicted by models of epistemic rationality. For humans, belief formation is mainly about quickly making sense of the world by consistently creating a coherent worldview based on our preconceptions; a model that works well enough to make successful predictions and choices, within the psychological limits imposed by attention, memory, information and processing powers. From a behavioural viewpoint, the consequence is that the following five general problems often occur, indicating that information search, intuitive understanding and judgment may be subject to biases in belief formation.
Holding on to pre-existing beliefs: In trying to maintain a coherent picture of the world, people tend to ignore information that does not fit into their existing worldview or information they fear may cause psychological discomfort. Even when searching for new information or better understanding, they tend to form beliefs according to the information they have readily available, even if this information is insufficient for reaching a conclusion. In general, this concept is referred to as “confirmation biases”.
Biased statistical sampling: In situations of uncertainty, people are prone to make sampling errors, such as neglecting base rates, or to use conveniently available information to form beliefs about the likelihood of events. This issue often falls under the heading of “sampling and statistical biases”.
Over- and under-confidence: People have difficulty distinguishing relevant from irrelevant information when assessing the correctness of their beliefs and the level of confidence that the evidence warrants. For example, if asked to evaluate the truth of information, they may confuse the confidence and/or the credibility of the messenger with the likelihood of the message being true. Likewise, they can be influenced by the availability of memory or recent past thoughts when passing judgment. This phenomenon is referred to as “confidence biases”.
Having difficulties with abstractions: People tend to estimate concepts poorly as they become more abstract, such as probabilities, money or time. For instance, people overestimate small probabilities and underestimate large ones, which distorts perceptions of chance and risk. Likewise, people underestimate the importance of abstract information and overestimate concrete anecdotes.
Relying excessively on mental shortcuts or intuitive judgments: People tend to rely on simple mental shortcuts or intuitive judgments (i.e. heuristics) to reach conclusions, especially when under uncertainty. They are easily influenced by what other people do, use simple mental models to understand complex systems and often fall prey to logical fallacies.
The practitioner may look at some categories of behavioural insights that have often proved to be relevant when problems of belief formation are present.
What principles guide people’s search for information? For instance, what are their pre-existing beliefs, do they eliminate information by aspects (as when searching for a hotel online), and what questions and doubts direct their search?
How does context interact with intuitive belief formation? Do contextual features support correct belief formation and what mental models do people use to make sense of the world?
What kinds of information support people’s judgment? What heuristics do they rely on when forming beliefs? How do physical and social contexts support the adoption of the heuristics that people apply?
Choice – Making the best of opportunities
The third aspect to consider when analysing a behavioural problem is how preferences are constructed and how choices are influenced at the point of decision. Rationality assumes people make choices based on preferences – or that preferences can be inferred from choices – by assuming people adhere to a handful of axioms prescribing how to make the best of opportunities in an instrumentally rational way.
The rational portrayal of choice behaviour has, however, been the main target in the emergence of behavioural economics. During the last 50 years, this discipline at the heart of BI has relentlessly brought forward experimental evidence showing how humans differ in their decision-making from the rational actor of traditional economics. The research agenda of behavioural economics has thus revealed that, as with belief formation, choice behaviour is often constructed on the spot and potentially influenced by a long list of cognitive biases. For instance, materially incentivising choices may crowd out intrinsic motivation; the mere arrangement, formulation and “framing” of choice options may significantly influence choice behaviour; and social aspects, such as social cues, comparisons and meanings, may attract people to, or deter them from, choosing particular options, regardless of the outcome.
As traditional economic analysis constitutes one of the core strategies of traditional public policy, it is not surprising to find that BI, especially insights from behavioural economics, has significant implications for policy analysis of why people choose as they do and what strategies to pursue when trying to influence choices. In analysing behavioural problems, practitioners should thus keep a close eye not only on the material incentives associated with outcomes but also on indicators that choices are unduly influenced by psychological factors, including:
Doubt, disappointment and regret: If a decision point presents a complex, confusing or misleading choice architecture, people may tend to express doubt ex ante and disappointment or regret ex post. As a consequence, people will lodge more unwarranted consumer complaints and bad online user reviews, spend excessive time making decisions (choice overload) and avoid talking about past choices.
Sticky status quo: If people own, create or otherwise invest time, energy or resources in a project, they may excessively stick to the status quo. This may lead people to pour more resources into an investment already lost (sunk cost fallacy) or reject reasonable offers for property they own (endowment effect) or have put effort into creating (IKEA effect).
Sensitivity to framing and arrangements: If people have weak preferences or face situations of risk or uncertainty, their choices may be overly sensitive to the mere formulation or arrangement of choices (framing). As a result, people make different choices as losses loom larger than gains (loss aversion); avoid risks when outcomes are framed as gains but become attracted to risks when outcomes are framed as losses (risk aversion for gains and risk attraction for losses); prefer options that are presented first in a series (order effect); resort to choosing the middle option in a series (compromise effect) but extreme options for more complex choices (extremeness effect) or options arranged as weakly dominant options (asymmetric dominance effect); and many more.
Social motives, meanings and norms: If choices involve social motives, meanings and norms, they will often be made in ways that deviate from predictions based on analysis of extrinsic motivation, such as material incentives. Extrinsic motives such as monetary incentives or punishment may undermine intrinsic motivation and hence lead to the opposite effect of that intended (crowding out of motives). People may also choose a non-preferred option if it takes on a social meaning (social meaning; reaction); imitate celebrities in, for example, conspicuous consumption (social imitation, status cascades); or be influenced by social norms, such as failing to blow the whistle to avoid being a “snitch” (conformity), choosing the default setting because it is perceived as the socially accepted choice (the default effect by recommendation), or giving money to strangers (reciprocity; fairness).
Again, anticipating the next stage, Strategies, the practitioner may look at some behavioural insights that have often proved to be relevant when choice problems are part of the target behavioural problem.
What makes a given choice attractive to people? For instance, what is the motive they are acting upon, what perspectives do they include, and how does it connect with emotions?
How are choices framed? Does the arrangement of choice options seem to influence people’s behaviour and are choices described in particular ways?
What social motives, meanings and norms is the target behaviour embedded within? Does the behaviour connect with social identities and how? And is the behaviour subject to social norms?
Determination – Sticking to choices over time
The fourth aspect of behavioural problems that practitioners may explore is the role played by determination, defined as behaviours requiring people to stick to their choices over time when challenged, a capacity often referred to as willpower, self-regulation or self-control. Like attention, determination has not been a core theme in studies of rationality, aside from the simple assumption that when people make long-term goals, they should stick to them.
The behavioural sciences have shown the rational assumptions about determination to be idealised and illusory. Studies have shown that the fundamental attribution error makes us liable to interpret other people’s behaviour in the realm of determination as a result of dispositional factors rather than situational factors, even though situational factors are often a more likely cause. More precisely, determination is significantly influenced by at least three dimensions: mental taxation, which affects everyone, in every aspect of ABCD, from those under cognitive pressure to those continually living in poor or impoverished conditions, especially when the consequences become very direct (Mullainathan and Shafir, 2013); learned strategies or competencies for dealing with temptations (Mischel, 2014); and situational factors (choice architecture) (Thaler and Sunstein, 2008). As a result, people often take one action with the genuine feeling that they should be taking another. The danger then is assuming that failures in determination reflect a personality issue rather than socio-economic and situational variables over which one has little control.
As such, it is important for practitioners to look for diagnostic indicators related to determination such as:
Cognitive dissonance: When people face challenges to their long-term goals, they experience mental discomfort or psychological stress. This can trigger increased pulse rates, anger and physical sway. Cognitively, people search for ways to reconcile immediate gratification with their long-term goals (motivated reasoning) or may exaggerate the desirability of the long-term goal (effort justification).
Mental taxation or exhaustion: Another indicator of one’s determination being challenged is the experience of mental taxation or exhaustion. This causes people’s minds to be less efficient (tunnelling) due to the consumption of “mental bandwidth” (Mullainathan and Shafir, 2013) that would otherwise go to less pressing concerns, planning ahead and problem-solving. This may result in cognitive deficits, self-defeating actions and an increased tendency to be distracted by inner and outer interruptions.
Inertia and procrastination: The complete (inertia) or temporary (procrastination) avoidance of a task that needs to be accomplished, managed through a series of characteristic psychological strategies (coping behaviours). People may use avoidance, denial or distractions, or may blame unrelated situational factors as reasons preventing the achievement of their goals.
Excessive self-directed blame: When challenges lead to failure, people may blame themselves and experience regret. This is why job-seekers failing to apply successfully for jobs, people failing at a diet or failing to quit smoking may come to adjust their self-perception as well as experience lower self-esteem and guilt. In extreme cases, this may also lead to clinical depression.
Again, anticipating the next stage, Strategies, the practitioner may look at some categories of behavioural insights that have often proved to be relevant when problems of determination have been part of the target behavioural problem.
What are the points of friction relative to a desired behaviour? Is there friction when people want to do the right thing, and is it too easy to do the wrong thing?
Do people have plans and are they given feedback? For example, do they have a plan for when to do what and are they given various kinds of feedback when pursuing their goals?
How do performance and goal achievement interact with the social context? How, if at all, do people commit themselves in pursuit of their long-term goals and what kinds of expectations do such commitments create in other people? Are social norms at play, such as failing to blow the whistle to avoid being seen as an informer (conformity), choosing the default setting because it is perceived as the socially accepted choice (default effect by recommendation) or giving money to strangers (reciprocity; fairness)?
Ethical guidelines for understanding why people act as they do (Analysis)
Seeking to understand why people act as they do in a BI perspective may involve a wide range of methods. These methods share a common characteristic: they usually involve observing or studying human behaviour, running the risk of affecting participants’ personal lives and infringing on people’s privacy rights.
It is important to emphasise that the ethical guidelines presented as part of Behaviour should also be observed when working with Analysis, in particular, the guideline stating that “all team-members observe existing ethical guidelines and codes of conduct of the particular fields that the project involves as well as receive the necessary training to comply with these”.
Besides these earlier guidelines, the following ten guidelines attempt to capture some of the most basic ethical considerations special to BI when closely studying behaviour as part of Analysis. These can be summarised as:
Seek ethical approvals and competencies where necessary. Use the ethical review board or relevant authorities within which the behaviour is studied to grant approval. If using a third party to conduct the study, this ethical responsibility cannot be transferred. Ensure appropriate training to develop sufficient competencies for data use and analysis.
Consider what guidelines must be followed when studying behaviour up close. These include collecting and documenting consent, revealing the purpose of the study, ensuring participants are voluntarily participating and additional safeguards when studying vulnerable populations.
Only collect data that is necessary and ensure secure handling. Ensure that those handling the data are properly instructed in the secure collection and handling of data.
Table 2.5. Ethical guidelines for Stage 2: Analysis
1. Seek ethical approval where necessary. For any non-casual study of behaviour, the team should, where necessary, seek approval from the ethical review board associated with the team as well as from the authorities or other organisations within which the behaviour studied unfolds. Also, remember that ethical responsibility cannot be transferred. If the team commissions studies from other entities, it is the team’s responsibility to ensure that the ethical guidelines for ANALYSIS are properly adhered to.
2. Ensure competency. Always ensure that those conducting observations, data analysis or any other kind of study or experiment as part of ANALYSIS have received appropriate training and are sufficiently competent to actually safeguard ethical guidelines and knowledge-based supervision in practice.
3. Collect and document consent. Remember that no research on a person may be carried out without the prior informed, free, express, specific and documented consent of that person or their guardians. This also includes, insofar as is possible, the purpose of the study (see also point 6). If consent has already been given to general collection of data, then consider whether it is actually acceptable to make use of such prior consent.
4. Voluntary and anonymous participation. Always ensure when necessary that people asked to participate understand that participation is voluntary and that the refusal to participate will not result in any consequences or any loss of benefits that the person is otherwise entitled to receive. In addition, always make sure to anonymise participants to the furthest possible extent, short of consent, and make clear to participants that this is done.
5. Additional safeguards for research with vulnerable populations. Special safeguards need to be in place for research with vulnerable populations. Vulnerable populations include school children under the age of 18, people with learning or communication difficulties, patients in hospital or people under the care of social services, people in custody or on probation, and people engaged in illegal activities, such as drug abuse.
6. Refrain from deception. Parts of the behavioural sciences have a troubled past relationship with deception, for example, deliberately making somebody believe something that is not true. The experience of deception in behavioural research may have the potential to cause distress and harm and can make the recipients cynical about the activities and attitudes of research and the institutions carrying out or sponsoring research. Always refrain from deception if possible and only make use of deception if absolutely necessary, while ensuring approval by the ethical board as well as participating organisations and after consulting appropriate resources such as the British Psychological Society’s Code of Human Research Ethics (BPS, 2018), or the like.
7. Only collect what is necessary and ensure the secure handling of data. Studying behaviour makes for collecting a wide range of data. Ensure that observational and other data generated as part of ANALYSIS is stored and handled safely as well as that only data that is necessary to collect for the purpose at hand is collected.
8. Always provide contact information. If possible, always provide the name and contact details of the team member leading the study as well as the name and contact details of another person who can receive enquiries about any matters which cannot be satisfactorily resolved with the member leading the study.
9. Always provide debriefing. After studying behaviour as part of ANALYSIS, always consider the possibility of debriefing participants as the default when the data gathering is completed, especially where any deception or withholding of information has taken place. When behaviour is studied more remotely, publishing an annual report which discloses previous experiments or hosting a small section on the organisation’s website to disclose past experiments to interested members of the public may suffice.
10. Always qualify the ANALYSIS. Make sure to collect relevant comments from participants on the results of ANALYSIS whenever possible. Also, always consider the possibility of arranging for active dialogue and equal representation from relevant citizens, groups and stakeholders when interpreting and reporting results.
Annex: Theoretical underpinnings of behavioural analysis
Dual-processing theories of reasoning, judgment and social cognition
The first and foremost principle for the practitioner to observe when seeking to accommodate more traditional methodologies is perhaps the consequences that the various versions of dual process theories often adopted in BI (e.g. Kahneman’s System 1/System 2 theory) have for the constructs and phenomena studied.
Dual-process accounts are the result of seeking to understand the processes involved in actual human reasoning, judgment and social cognition, especially when these do not seem to reflect normative models of rational reasoning and choice. They have emerged from largely disconnected literatures and experiments in cognitive and social psychology (Evans, 2008) and received attention from the general public with Daniel Kahneman’s popular intellectual autobiography Thinking, Fast and Slow (2011), covering his work with Amos Tversky, which led to Kahneman receiving the Nobel Prize in economics (Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel) in 2002 (shared with experimental economist Vernon L. Smith).
Dual process theories share the distinction between cognitive processes that are fast, intuitive, automatic and, by and large, unconscious (System 1) and those that are slow, deliberative and subject to conscious rule (System 2). Their main purpose is to study the interplay of non-rational features, automatic processes and reflective reasoning aspiring to the ideals of rationality (Gawronski, Sherman and Trope, 2014).
Table 2.6 summarises the clusters of attributes associated with dual systems accounts.
However, it should be carefully noted that dual process theories vary quite a lot, especially when it comes to the kinds of System 1 processes described by different theorists. Also, it is evident from the more detailed research that not all of the attributes often sorted into dual process theories sensibly fit into this dual structure (Evans, 2008) and that more complex structures, such as Stanovich’s (2011) tripartite model of cognition, are necessary for dealing with phenomena where, for example, individuals’ responses are observed to vary either within-subjects or between-subjects.
Thus, dual process accounts should always be treated as the simplifications they are, and policymakers and practitioners will benefit immensely from familiarising themselves with their details and boundaries when addressing specific behaviour(s) and deeper questions. This point is, for instance, evident when observing that while dual process theories often assert that automatic processes do not require conscious awareness or intentions to work, more detailed accounts make the qualification that such processes do not necessarily manifest themselves as “subconscious” influences. Different from reflexes, then, the contours of some biases may actually be observed by introspection as well as blocked by means of self-regulation (Stanovich, 2012). This is important, both when considering individual differences and when considering ethics. Although some of the most memorable examples of bias are memorable due to their non-conscious nature, such psychological phenomena should not be mindlessly characterised as subconscious influences, as they often are.
Table 2.6. Clusters of attributes associated with dual systems of thinking
| System 1 | System 2 |
|---|---|
| Cluster 1 (Consciousness) | |
| Unconscious (preconscious) | Conscious |
| Implicit | Explicit |
| Automatic | Controlled |
| Low effort | High effort |
| Rapid | Slow |
| High capacity | Low capacity |
| Default process | Inhibitory |
| Holistic, perceptual | Analytic, reflective |
| Cluster 2 (Evolution) | |
| Evolutionarily old | Evolutionarily recent |
| Evolutionary rationality | Individual rationality |
| Shared with animals | Uniquely human |
| Nonverbal | Linked to language |
| Modular cognition | Fluid intelligence |
| Cluster 3 (Functional characteristics) | |
| Associative | Rule-based |
| Domain-specific | Domain-general |
| Contextualised | Abstract |
| Pragmatic | Logical |
| Parallel | Sequential |
| Stereotypical | Egalitarian |
| Cluster 4 (Individual differences) | |
| Universal | Heritable |
| Independent of general intelligence | Linked to general intelligence |
| Independent of working memory | Limited by working memory capacity |
Source: Reproduced from Evans, J.S.B. (2008), “Dual-processing accounts of reasoning, judgment, and social cognition”, Annual Review of Psychology, Vol. 59, pp. 255‑278, www.annualreviews.org/doi/pdf/10.1146/annurev.psych.59.103006.093629.
Dual process theories have difficulties accounting for certain phenomena or detailed findings, for example, individual differences in observed behaviour. Therefore, practitioners may benefit from consulting more complex accounts of human reasoning, judgment and social cognition, such as Stanovich’s Tripartite Model, considered here in the context of individual differences in the observed choices between a smaller but present reward (USD 100 now) and a larger but future reward (USD 140 in 1 year). In this context, while some individuals choose the smaller but present reward, others prefer the larger but future reward. Such individual differences are often not random and thus important when considering why people act as they do, as well as what BI strategies may effectively inform public policy in a given situation.
Returning to the main point about BI and the nature of dual process theories, it is by identifying processes according to the simplified distinctions of dual process, or more complex but similar, theories that BI seeks to explain how supposedly irrelevant features of decision-making contexts may systematically influence human decision-making and behaviour to produce cognitive biases. In particular, it should be noted that such biases are defined as systematic behavioural deviations from the predictions of rational models. A cognitive bias, then, is a tendency for people to systematically produce an output behaviour Y’, rather than the behaviour Y predicted by the rational model, given an input variable X, and where the deviation between Y and Y’ is explained by reference to cognitive features as described, for example, by dual process theories.
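To illustrate this definition in concrete terms, the sketch below checks whether an observed behaviour Y’ varies with an input variable X that a rational model would treat as irrelevant (here, which option is the default). The enrolment counts are invented for illustration only; a systematic difference between the two conditions is what would be labelled a “default effect”, and explaining it would still require specifying the mediating mechanism discussed in the following paragraphs.

```python
from scipy.stats import chi2_contingency

# Hypothetical data: enrolment counts under two choice architectures.
# A rational model predicts that the default setting (input variable X)
# should not matter, since options and outcomes are otherwise identical.
#                  enrolled  not enrolled
opt_out_default = [420, 80]    # enrolment is the default
opt_in_default = [260, 240]    # non-enrolment is the default

chi2, p_value, _, _ = chi2_contingency([opt_out_default, opt_in_default])

share_opt_out = opt_out_default[0] / sum(opt_out_default)
share_opt_in = opt_in_default[0] / sum(opt_in_default)

print(f"Enrolment share, opt-out default: {share_opt_out:.0%}")
print(f"Enrolment share, opt-in default:  {share_opt_in:.0%}")
print(f"p-value for independence of X and Y': {p_value:.4f}")
# A systematic (non-random) deviation of the observed behaviour Y' from the
# rational prediction Y, conditional on X, is what the text calls a bias.
```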
Cognitive biases, heuristics and BI
However, a general shortcoming of the current BI literature is that cognitive biases are often misunderstood, under-described and conflated with behavioural insights.
To see this, it should be recalled that the concept of behaviour in BI is not that of our everyday understanding but a theoretical conceptualisation of a (potential) decision point. This means that “behaviour” does not necessarily refer to an overt and observable behaviour, but rather to anything that can be modelled as a decision point, which in turn means that any response which is (potentially) subject to self-regulation (System 2) counts as behaviour. Thus, behaviours include: what to attend to, how to form beliefs, what to choose, whether to stick to one’s choices and any other response that constitutes a counterfactual event conditional on volition.
Next, recall that a cognitive bias, as defined above, is a systematic tendency for behaviour to deviate from the predictions of rational models due to cognitive mechanisms. This means that a cognitive bias merely describes a systematic relationship between an input variable and an output behaviour said to deviate from the predictions of rational models due to cognitive features, without necessarily being specific about what cognitive mechanism translates input to output. When all references to cognitive mechanisms are dropped in the description, there is a tendency to refer to it as an “effect”, for instance, “the default effect”.
Yet, it is a common misunderstanding to refer to one or more cognitive biases to explain a given behaviour. This is a misunderstanding since cognitive biases do not explain behavioural biases unless they make reference to the specific psychological mechanisms thought to translate an input variable into a behavioural effect (Smets, 2018). For some, this might just appear to be an academic detail. However, without a mechanistic explanation and evidence for this, the practitioner not only fails to provide an explanation of why people act as they do. They will also be ineffective in forming hypotheses about what behavioural strategies might influence people, as well as be incapable of interpreting the effect of tests and interventions (Marchionni and Reijula, 2019); and thus, finally also, unable to know whether a given proposal for policy will be effective, robust, persistent or welfare-improving when scaled or generalised (Grüne-Yanoff, 2016).
Referring to a “heuristic” provides one type of attempt to describe the “mechanics of the mind” that provides such translation. Consider, for instance, Tversky and Kahneman’s (1974) account of the adjustment and anchoring heuristic, which states that: “In many situations, people make estimates by starting from an initial value that is adjusted to yield the final answer. The initial value, or starting point, may be suggested by the formulation of the problem, or it may be the result of a partial computation. In either case, adjustments are typically insufficient”. The takeaway for the practitioner is that, to explain why people act as they do, one needs to account for the relationship between an input variable and an output behaviour, as well as for the cognitive mechanism mediating the two.
The structure of behavioural insights
Finally, the concept of cognitive biases or behavioural effects is often used interchangeably with the concept of behavioural insights. Yet, in doing this, behavioural insights become under-described. This is because successfully applying BI analytically as well as strategically requires more than just an understanding of the relationship between: 1) an input variable; and 2) a behavioural effect. In particular, there are at least seven additional components that go into a behavioural insight in order for it to be applicable to the real world. These are as follows:
3) Mediators. As just noted, a mechanism needs to be specified as a mediator for a behavioural insight to be complete. For instance, a default effect may result from different mechanisms, for example, inattention, the normative signal of the default or its reduction of friction (Grüne‑Yanoff, 2016). Each of these mechanisms, in turn, constitutes a different behavioural insight and cannot simply be subsumed by the practitioner under one heading.
4), 5), 6) Situational, Individual and Social Moderators. What psychologists refer to as moderators also need to be included as part of a behavioural insight (Van Kleef and Van Trijp, 2018). Moderators are variables that influence the strength of the relationship between the input variable and the output behaviour. Which moderators apply to a given behavioural insight depends upon the level of abstraction at which the insight is formulated. While some moderators pertain to a cognitive bias and its effect at a general level, others pertain to more specific applications.
For simplifying purposes, BASIC divides moderators into three groups: situational, individual and social moderators. An individual moderator is, for instance, people not falling for the trick question “How many animals did Moses bring on the Ark?” because they know of the Moses Illusion; a situational moderator is, for instance, people not eating less even when given smaller plates, because they are hungry; and a social moderator is, for instance, young people not being influenced by “nine-out-of-ten” social proofs, because they do not want to be mainstream.
7) Boundary conditions. Some of the more popularised parts of BI literature may give the practitioner the impression that cognitive limitations, biases, heuristics and habits influence people’s behaviour unconditionally. For instance, if social proof exists or is provided, people will mindlessly follow this. However, this is not the case.
While moderators are variables that influence the strength of the relationship between the input variable and the output behaviour, behavioural insights also have boundary conditions. That is, a cognitive mechanism will only translate an input variable into a behavioural effect when certain conditions are satisfied. For instance, in one experiment, all but one conference participant accepted a default in the terms and conditions of the conference registration stating that they would be willing to wear a clown nose throughout the conference. Needless to say, they only “accepted” this because of inattention. The default effect, the acceptance, was thus brought about conditional on inattention and did not affect the single attentive person, who, of course, declined. Consequently, to know when a behavioural insight might explain behaviour, as well as when to expect a behavioural insight to influence behaviour, the practitioner needs to know its boundary conditions. In fact, as will be evident below, boundary conditions are crucial in BASIC, since the ABCD framework for structuring Analysis is based on four broad categories of such conditions: Attention, Belief formation, Choice and Determination.
8) Potential side effects. The potential side effects of cognitive limitations, biases, heuristics and habits when influencing people’s behaviour are also an important part of a behavioural insight. Take, for instance, the default effect by inattention. A side effect of this is that people do not know that they have been opted into the default conditions. In this way, a practitioner’s knowledge about potential side effects may play a crucial role both when seeking to understand why people act as they do and when considering the consequences of using a behavioural insight as a strategy to influence people’s behaviour, as well as when identifying important ethical issues.
9) Evidence. Finally, part of a behavioural insight is knowledge about the evidential base underpinning it. This includes the kinds of populations within which it has been tested (e.g. university students and employees), experimental designs (RCT, quasi-experiment, cross-sectional, longitudinal, etc.) and the type of study, e.g. proof of principle (laboratory experiments), proof of practice (field experiments) and proof of policy (implementation studies). For this latter distinction, see Figure 2.14 below, adapted from Van Kleef and Van Trijp (2018).
In conclusion, a (theoretical, not methodological) behavioural insight ideally comprises knowledge about nine components in total. Needless to say, the BI literature cannot always provide knowledge about all nine components and may also disagree as to their proper descriptions. Yet, the diagram below provides a template for practitioners to structure information about behavioural insights for analytical and strategic purposes alike.
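For practitioners who keep such information in structured form, the following minimal sketch illustrates one possible way of recording the nine components. The field names and example values are this sketch’s own assumptions, filled in from the examples discussed above; they are not an official BASIC template.

```python
# A purely illustrative record structure for the nine components of a behavioural insight.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BehaviouralInsight:
    input_variable: str                                               # 1) feature of the choice architecture
    behavioural_effect: str                                           # 2) observed output behaviour
    mediators: List[str] = field(default_factory=list)                # 3) cognitive mechanisms
    situational_moderators: List[str] = field(default_factory=list)   # 4)
    individual_moderators: List[str] = field(default_factory=list)    # 5)
    social_moderators: List[str] = field(default_factory=list)        # 6)
    boundary_conditions: List[str] = field(default_factory=list)      # 7)
    side_effects: List[str] = field(default_factory=list)             # 8)
    evidence: List[str] = field(default_factory=list)                 # 9) populations, designs, study types

# Example entry: an inattention-based default effect, using the examples given in the text.
default_by_inattention = BehaviouralInsight(
    input_variable="option designated as the pre-set (default) choice",
    behavioural_effect="higher likelihood of ending up with the default option",
    mediators=["inattention at the decision point"],
    boundary_conditions=["person does not notice that a choice is being made"],
    side_effects=["people may not know they have been opted in"],
    evidence=["laboratory and field studies of defaults (e.g. Johnson et al., 2002)"],
)
```

Used this way, the structure simply mirrors the diagram referred to above and can help make explicit which of the nine components are still unknown for a given insight.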
Stage 3: Strategies – BI for behaviour change
Stage 3: Strategies
Stages 1 and 2 focused on conducting a Behavioural Analysis of target behaviours relevant for addressing the policy problem. Stage 3: Strategies aims to identify behavioural insights that, via ABCD, can inform behaviourally informed strategies for changing target behaviours, which can then be tested in the subsequent stage of Intervention. In the stage Strategies, the policymaker and the practitioner working with them will:
1. Identify classes of strategies and behavioural insights that match the behavioural analysis of the behavioural problem(s) conducted in Stages 1 and 2.
2. Conceptualise a suitable intervention based on the relevant strategies and insights, which might be tested for its efficiency.
3. Screen these interventions with regard to ethics, feasibility and costs.
When
Devising strategies has always been an unavoidable step in any BI project. However, what makes BASIC different from other frameworks is the “diagnostic link” between the Behavioural Analysis of the first two stages and the third stage of Strategies. Thus, this stage should only be engaged with once a behavioural analysis has been conducted. BASIC ties the Behavioural Analysis to Strategies by using ABCD.
Milestone
The milestone of Strategies is to arrive at a suitable and acceptable policy intervention that has been ethically approved and passes a positive cost assessment. When such a policy intervention has been formulated on the basis of ABCD, the practitioner may take the BI project to the next stage: Intervention.
This stage accomplishes two key tasks for implementing a BI project. First, as mentioned in the introduction, the use of behavioural insights as active components, for instance nudges, in public policy interventions should be regarded as a core tenet of what is usually referred to as behaviourally informed public policy. Yet, while BI does have a strong focus on the application of BI as part of nudge interventions (OECD, 2017), it is important to emphasise that BI is not limited to this type of approach. The development, design and delivery of behaviourally informed public policy also comprises approaches such as “push”, “curling” and “boost”.
Second, classes of insights and specific behavioural insights are presented for each of the four aspects explained in Stage 2: Analysis, including selected cases to illustrate their uses. Twelve strategies with respective insights are presented. In this way, the ABCD framework also presents itself as a repository for systematically matching behavioural analyses with behavioural strategies that present themselves as the basis for designing policies likely to effectively and gently influence target behaviours.
In Strategies, the policymaker and BI practitioner working with them aim to identify, conceptualise and design behaviourally informed strategies conditional on the hypotheses generated in the Behavioural Analysis about what seems to cause a behavioural problem. This is possible because each diagnostic domain is associated with BI strategies, which in turn comprise classes of behavioural insights along which policy interventions may be designed.
Attention – Make it relevant, seize attention and plan for (in)attention
Attention is the window of the mind and thus the starting point of all behaviour. Hence, it is also a natural starting point both in Analysis and Strategies. In addition, while attention is scarce, easily distracted, quickly overwhelmed and subject to switching costs, practitioners will often find that attentional issues have been overlooked in the design and implementation of traditional public policies. For this reason, behavioural problems are often partially caused by attentional issues, and it may thus prove effective to revise and design policy interventions so that they become more relevant and seize attention or, where this is not possible, to plan for inattention.
Make it relevant
A prerequisite for working effectively with attention to create a behavioural effect is that one engages with people in a relevant way – that is, at the right time, at the right place and at the point where people are most willing to enact the behaviour that one aims to promote. This can be done by carefully considering the following insights.
Visceral factors: Ability and motivation are not constants (Loewenstein, 1996). If you are hungry, you are more likely to eat bad food and make bad decisions. If you are tired, you are more likely to make mistakes, make worse decisions and eat bad food. Thinking about, and even influencing, people’s state of mind relative to visceral factors and calibrating policy interventions with this in mind throughout ABCD can increase the likelihood that people will behave in the direction of what the policy is trying to promote. Taking visceral factors into account in planning when a policy intervention is to make contact with people’s attentional capacities is, therefore, a crucial strategy. However, before even thinking about applying this strategy by influencing people’s state of mind relative to visceral factors, please consult the ethical guidelines at the end of this chapter, as visceral states are an exclusively private arena and should only be influenced in ways that boost people’s capacity for autonomous decision-making.
Timing: It is everything. People feel more positive in the morning than in the afternoon (Pink, 2018). Asking people to commit well in advance to something sensible (e.g. eating fruit rather than cake) makes them more likely to commit; asking them the day before makes them less likely to commit (Read, Loewenstein and Kalyanaraman, 1999). People are more likely to take out insurance against water damage after a flood than before it; and offering farmers the opportunity to purchase fertiliser at the right time may have the same impact as a 50% monetary subsidy (Duflo, Kremer and Robinson, 2011). Thus, timing is intimately connected with visceral factors and, in some sense, may be regarded as one of that concept’s dimensions. However, thinking timing into the details of a policy intervention also has a more practical dimension. Fines and charges may be timed relative to when people receive their pay check in order to increase the likelihood of payment, reminders may be timed so as to be most likely to prompt action, for example, timing text-message alerts just in time to avoid overdrafts (Adams et al., 2018), and deadlines may be co‑ordinated with other events so as to increase the likelihood of people meeting them.
Placement: An overlooked dimension of making a policy intervention relevant is placement. Yet, as revealed by the strategies applied by supermarkets, placement is crucial when trying to influence not only choices (see Arrangements below) but also behaviour. Still, failure to get people’s attention at the place which is optimally calibrated with action is a standard issue in many behavioural problems – in supermarkets and policies alike: where are teenagers more likely to buy condoms – at the cash register in the supermarket or from a vending machine outside? Likewise, moving sanitisers in front of the door, rather than having them hang on the wall, increases hand hygiene, whether at the hospital or at a restaurant buffet. Moving blood-screening tests for diabetes and pre-diabetes that require fasting to the mosque and timing them with Ramadan leads many more people to take the test (OECD, 2017). So where do you put the new data-protection policy next time – in an email that will get ignored, or in the bathroom at work, where people are surprisingly fond of reading long texts? Examples like these abound and emphasise the importance of considering placement in policy implementation – some places are public, others private; some places are close to the action to be promoted, others are far away. Practitioners should not forget this third dimension of relevancy.
In sum, “make it relevant” involves at least three variables – timing, placement and visceral factors – to consider when designing and implementing policies. Getting “the principle of relevance” right is a precondition for making the best use of people’s attention.
Seize attention
The fundamental problem of inattention is, not surprisingly, that people usually fail to attend to what is important in a given context. This may happen even when policy interventions have been calibrated with visceral factors, timing and placement. Whether because they forget or overlook something, and whether this is due to relegating, multitasking or being distracted, focusing on one thing implies by definition that one is not paying attention to something else. Thus, policymakers and practitioners should carefully consider how to design the details of policy interventions so that people attend to what is important at that given moment for the intervention to succeed. There are at least three ways to do this using the following insights.
Salience: The salience concept denotes a feature of choice architecture – whether of a decision point, a choice option or an attribute of choice options – that draws in our attention relative to surrounding objects, information, events or options, at the expense of other features. A salience-based nudge is any attempt to influence people’s attentional systems in a particular situation with the intention of activating, guiding or retaining focus on a particular aspect of the choice architecture by making that aspect salient. Usually, this type of influence is conducted in order to make such an aspect the object of conscious processes, i.e. System 2 thinking. Said differently, the aim of salience-based nudges is to guide what people are attending to – as well as not attending to – playing on non‑rational psychological features of cognition. When it comes to the behavioural aspect of attention treated here, the relevant sense of salience is that of getting people to notice at some decision point that they are being asked to make a choice – the latter two senses of salience will be treated as aspects of “choice” (see perspectives below). There are many ways that researchers and practitioners can make a decision point salient. Most famous, perhaps, is the engraving of silhouettes of flies into the urinals of Schiphol Airport in Amsterdam (NLD), which purportedly reduced spillage by 80% and cleaning costs by 8% (Evans-Pritchard, 2013). Digital speed signs that flash when drivers are speeding have likewise been seen to decrease average speeds. Another example comes from Copenhagen, where litter bins were made salient by stickers of green footprints leading to the bins, which was measured to significantly decrease street litter.
Reminders: Another way of getting people to notice that they are being asked to make a choice at some decision point is by using reminders. The use of reminders rests on a principle very similar to that of making decision points salient. Yet, it distinguishes itself by causing a behavioural effect by means of an explicit messenger and the triggering of an association in memory, making it structurally somewhat more complex – a feature that has implications for how reminders may be designed (see, for example, Messenger effect and Create commitments below) as well as ethical implications (see Ethical guidelines at the end of this chapter). Reminders are becoming increasingly relevant due to increased digitisation and, especially in health, the potential of this principle has been documented. A meta-review of reminders in health by Stubbs et al. (2012) found 7 studies of reminders by letter, which led to an average reduction in “did not attends” (DNAs) of 7.6%, and 12 studies of reminders by text message, which led to an average reduction in DNAs of 8.6%. Likewise, the UK financial authorities have run a series of experiments examining the detailed differential effects of reminders in this domain as well.
Prompts: You can seize attention simply by asking people to pay attention through prompts. A prompt is defined as making someone do something by interrupting their ongoing action and forcing them to make a decision before being able to proceed, as illustrated by pop-up boxes on digital interfaces. Of course, this principle is increasingly relevant with increased digitisation. The Danish Business Authority, for instance, used a prompt to try to get 14 000 companies to verify their basic data in the Danish Business Register, with the result that approximately 66% either confirmed or updated their data in the registry (OECD, 2017). However, the concept of prompts goes beyond digital platforms: for instance, charities use “facers” on the streets to ask for donations, and hospitals ask patients to fill out a survey while waiting for an appointment. Text messages can also be designed to serve as both prompts and reminders. However, they only work when they are made relevant (see above). Otherwise, people will disregard and reject prompts – making prompts intuitively easy to dislike.3
Social attention: Finally, one may consider behavioural insights relative to social aspects of attention. The cocktail-party effect refers to how people’s attention may be guided by semantic content, such as when hearing one’s name mentioned at a cocktail party leads one, but not other people, to direct attention to the source. This principle has, for instance, been used to optimise the boarding of planes by seizing the attention of travellers from particular countries and directing it to specific procedures by placing their national flag at counters where paperwork needs to be carried out. The spotlight effect refers to how people tend to think that other people focus on the same contextual features and options as they themselves do. This principle is integrated into digital speed signs in traffic that flash when drivers are speeding: this has been measured to decrease the average speed of drivers by making them aware not only that they are speeding but also that other people may observe this. Finally, the use of pictures of eyes to create artificial social monitoring also belongs to this category. Thus, posters depicting eyes have been used in a range of interventions to induce people to act pro-socially by giving them the sense that what they are attending to is attended to by others as well.
Plan for inattention
When facing an attentional problem, it may also prove effective simply to plan for inattention. That is, it may often prove more effective to rely upon people not attending to the issue at hand – either because attention will always fail at some point or because it makes no sense for people to devote their scarce attention to the issue. Hence, examining what happens when attention fails as part of the analysis, and then planning and designing for inattention, is a central strategy in BI for dealing with attentional problems.
Defaults (by inattention): Perhaps the best-known and most effective behavioural insight when planning for inattention is changing the default. In the complex choice architectures of modern societies, we increasingly rely on defaults, or “pre-set choices”, to decide for us, when we do not have the time or capacity to attend to the vast array of choices available. We print our notes for the upcoming meeting relying on the printer to choose a readable format; we buy a phone relying on the producer to have suitable defaults balancing between protecting our privacy and providing us with personalised services; and we participate in default retirement plans, believing that someone took the proper time to construct the right configuration of investment choices. To some extent, being able not to attend to every possible choice but just rely on defaults is a necessary strategy for us to focus on what is really important in our lives.
A default is defined as an aspect of choice architecture where one particular choice option is chosen as the pre-set choice, such that people have to make an active decision to choose an alternative option. Said a bit differently, a default is the option that occurs when people do not make a choice (Johnson et al., 2002). A default effect is defined as the change in the likelihood that a particular alternative is chosen when designated as the default versus a control condition in which no default is designated (Brown and Krishna, 2004). This definition, however, does not point to the responsible mechanism. Consequently, a default effect may be described more precisely by referencing and distinguishing between the cognitive mediators or mechanisms that bring it about (i.e. as any behavioural effect caused by a default through a mediator) rather than in purely consequential terms. For policy purposes, it is important to distinguish between various types of default effects according to the cognitive mechanisms that bring them about, as well as the systems that condition them (Dinner et al., 2011). In the most basic type of default effect, people may end up with the default option simply because they do not notice at the decision point that they are being asked to make a choice (see Johnson et al., 2002). We refer to this as the “inattention-based default effect”.
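Restated symbolically for clarity (the notation below is introduced here for illustration and is not taken from Brown and Krishna), the consequential definition of a default effect reads:

```latex
% Default effect for a choice option A (illustrative notation):
\[
  \mathrm{DE}(A) \;=\; P(\text{A is chosen} \mid \text{A is the default})
                 \;-\; P(\text{A is chosen} \mid \text{no default is designated}).
\]
% A mediator-specific description additionally names the mechanism (e.g. inattention)
% that produces this difference.
```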
Despite the increasing importance of defaults in our lives, studies show that they are too often badly aligned – sometimes by intention, other times due to negligence – with individual and societal preferences. When this happens, the consequences are grave as defaults effectively materialise as soon as we are inattentive. As a consequence, getting the arrangement of defaults right and preventing misuse of defaults is a core principle of BI whenever people are simply inattentive. Two examples can be found in Box 2.8.
Box 2.8. Two cases of planning for inattention with default
1. The European Commission action to counter the misuse of defaults
In 2009 and 2011, the European Commission (EC) launched a series of steps to protect consumers against the emerging exploitation of default effects, especially by online businesses.
In 2009, Microsoft was charged with abusing its prolonged dominance of the market for PCs by tying its browser Internet Explorer to Windows as the default browser. As a consequence, Microsoft had gained close to a monopoly on the market for browsers (more than 90% of PCs on the market had Internet Explorer installed). In 2009, the EC required Microsoft to install a programme that prompted users to make an active choice between the 12 most popular browsers on the market (European Commission, 2009).
The new programme was highly successful: between March 2010 and November 2010, it led to 84 million browsers being downloaded. After that, Microsoft failed to comply with its commitment by not providing the browser choice screen with its Windows 7 Service Pack 1 from February 2011 until July 2012. This led to a historic EUR 561 million fine imposed on Microsoft by the European Commission (2013).
In 2011, the EC began an effort to protect consumers against the widespread exploitation of attention-based default effects carried out by the emerging online industry. Its first target was the widespread misuse of pre-ticked boxes on websites for charging inattentive consumers additional payments (for example, when buying plane tickets online). The EC thus decided to ban the use of pre-ticked boxes as part of marketing beginning in 2014 (European Commission, 2014).
2. Rutgers University’s paper-saving changes to printer defaults
A typical illustration of an attention-based default effect for a simple, non-dynamic decision task with only two alternatives is people’s tendency to stick with a printer’s default settings. They do this even when they have prior information about what the default is and hold preferences aligned with the often-recommended course of action, i.e. double-sided printing. This effect thus creates a generic behavioural problem causing vast overconsumption of printing paper.
To solve this problem, traditional public policymakers have at times resorted to suggesting an environmental tax on paper products. In 2012, for instance, the Swedish Nature Conservation Association (Naturskyddsförening) suggested a 10% tax on all paper products in Sweden. The projected effect of such a tax was a 2% reduction in paper consumption, equalling 12 km2 of saved forests and SEK 2 billion in taxes a year (Axelsson and Åström, 2012).
However, an intervention at Rutgers University (USA) in 2008 illustrates the simple and cheap alternative provided by BI: changing the default from single- to double-sided printing. Doing so on its 3 university campuses reduced paper consumption by 44%. Over the next 3 years, the university added further behavioural principles and estimated that it saved approximately 55 million sheets of printing paper, equal to saving around 4 650 trees (Sunstein and Reisch, 2013; Cho, 2013).
Sources: European Commission (2009), “Antitrust: Commission accepts Microsoft commitments to give users browser choice”, Press Release, http://europa.eu/rapid/press-release_IP-09-1941_en.htm (accessed on 7 November 2018); European Commission (2013), “Antitrust: Commission fines Microsoft for non-compliance with browser choice commitments”, Press Release, http://europa.eu/rapid/press-release_IP-13-196_en.htm (accessed on 7 November 2018); European Commission (2014), “Taking consumer rights into the digital age: over 507 million citizens will benefit as of today”, Press Release, http://europa.eu/rapid/press-release_IP-14-655_en.htm (accessed on 7 November 2018); Axelsson, S. and K. Åström (2012), Everyone Earns a Paper Fee, https://www.naturskyddsforeningen.se/nyheter/alla-tjanar-pa-en-pappersavgift (accessed on 7 November 2018); Sunstein, C. and L. Reisch (2013), “Green by default”, Kyklos, Vol. 66/3, pp. 398-402, http://dx.doi.org/10.1111/kykl.12028; Cho, R. (2013), Making Green Behavior Automatic, https://blogs.ei.columbia.edu/2013/05/23/making-green-behavior-automatic/ (accessed on 7 November 2018).
Safety mechanisms: In some instances, the consequences of inattention may be redirected by physical mechanisms quite similar to procedural choice-architectural defaults. This is, for instance, the case for safety lines when conducting dangerous work, dead man’s buttons, and electrical cords that automatically unplug or switch off when someone trips over them. Such physical arrangements are very similar to defaults, but are distinguished from them and referred to as safety mechanisms.
Conclusion: How to address (in)attention
How to work with the attentional aspects of behavioural problems is rarely at the centre of the development, design and delivery of public policies. Yet, as was seen above with regards to hand hygiene and testing for diabetes, applying BI to the attentional aspect of public policy implementation can make the difference between failure and success.
However, the attentional aspect of behaviour can also inform the very structure of the policy intervention pursued. This was illustrated by the timing of the intervention offering fertiliser to farmers in Kenya – there, timing (together with commitment to the offer) proved just as effective as a 50% subsidy.
Consequently, the attentional aspect of behaviour is not just something to think about at the very end of the policy cycle; it should be considered from the outset whenever attention is part of the problem as well as the solution. Thinking about how to make public policy interventions relevant, seizing the attention of those engaging in the target behaviour and making plans for how to deal with inattention is a cornerstone of applying BI to public policy.
Belief formation – Guide search, make inferences intuitive and support judgment
Analysing problems in belief formation and devising strategies relative to this aspect of behaviour comprise the second cornerstone of the ABCD model for identifying relevant strategies when applying BI to public policy. The following examples illustrate how practitioners may use strategies such as guiding search, making inferences intuitive and supporting judgment to mitigate issues in belief formation.
Guide search
While there is no such thing as too much information from a traditional public policy perspective, information overload has become a serious problem for people inhabiting the real world. Here, there is such a thing as too much information and complexity, and too little time to search and process it when looking for answers. For that reason, problems in belief formation usually go hand in hand with the vast amounts of information and possibilities on offer.
In this perspective, it is not surprising that some of the biggest companies today are built around information search engines and consumer comparison platforms. What is perhaps more surprising is that traditional public policy interventions addressing problems of belief formation have been slow to copy what these companies do well, and instead often approach such problems by offering even more information. Practitioners can help guide citizens more effectively through vast information sets by applying the following insights in policy design and delivery.
Searching by aspects (SBA): One such principle is searching by aspects (SBA). SBA is a development of a decision-making model, or heuristic, originally described by Amos Tversky (1972) as elimination by aspects (EBA). This model of decision-making applies when people face too many options to choose from. It works by first identifying the single attribute or feature, i.e. an aspect, that is deemed primary or most important. This aspect is then used to partition the set of options into those that possess the primary feature or attribute and those that do not – discarding or eliminating the latter from consideration. The process is then iterated by identifying a secondary or next-most important aspect and reducing the set of options further according to this, and so on, until the set of options to be considered is either manageable or consists of only one option.
While EBA is usually thought of as a decision-making heuristic, digitisation has led to it being applied just as much as a principle in information search incorporated into various search engine functions. Think, for instance, of sites where consumers may search for trips, hotels, dates or clothing. Here consumers may quickly and easily find their way, for example, to a manageable set of hotels to consider from amongst millions of options by first eliminating by the aspect of the city they are visiting, then adding the days they are interested in, then adding a price range, and so on. In this role, EBA is used to filter through large information sets rather than to help make a choice; this is why it may also be referred to as the principle of “searching by aspects”.
SBA has proven useful for guiding citizens through complex informational sets in public institutions. Thus, in many digitised countries, citizens efficiently search for anything from job openings and legal documents to public services such as medical clinics, doctors’ offices or dentists, in search systems based on SBA; a minimal sketch of the underlying logic is given below.
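The following minimal sketch illustrates the SBA logic described above: a large option set is filtered one aspect at a time, most important aspect first, until it becomes manageable. The option data and aspect names are invented for illustration only and are not taken from any of the services mentioned in the text.

```python
# Illustrative "searching by aspects": filter options by one aspect at a time.
hotels = [
    {"name": "Hotel A", "city": "Paris", "price": 95, "wifi": True},
    {"name": "Hotel B", "city": "Paris", "price": 180, "wifi": False},
    {"name": "Hotel C", "city": "Lyon", "price": 80, "wifi": True},
    {"name": "Hotel D", "city": "Paris", "price": 120, "wifi": True},
]

def search_by_aspects(options, aspects):
    """Apply each aspect (a predicate) in order of importance, discarding options that lack it."""
    for aspect in aspects:
        options = [option for option in options if aspect(option)]
    return options

# Most important aspect first: city, then price range, then a required attribute.
result = search_by_aspects(
    hotels,
    aspects=[
        lambda o: o["city"] == "Paris",
        lambda o: o["price"] <= 150,
        lambda o: o["wifi"],
    ],
)
print([o["name"] for o in result])  # ['Hotel A', 'Hotel D']
```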
Question trees: Another way to help people find their way around vast and complex sets of information is by applying decision trees to guide information searches. A decision tree is a decision-making tool that uses a branching structure to map sequences of decisions onto their possible consequences so as to allow for analysis. When used as an information-search tree, the same structure is used, but now as a Q&A-based tool to guide users to the right answer – hence the label “question trees”. The technique has been implemented extensively in call centres, where it provides a structured approach for front-line staff to effectively identify the problems that callers have.
One of its first technological implementations was as part of automated telephone systems (“press 1 for English”) guiding the caller to the right service section. More recently, it has also been applied to help guide citizens, on their own devices, to the kind of information or options needed when interacting with public bodies. In 2013, for instance, the Danish Business Authority, together with iNudgeyou, tested the efficiency of a question-tree procedure in getting newly started business owners to correctly identify the type of company (amongst some 140+ types) they needed to register to conform to existing rules and regulations. In a small, randomised controlled trial, it was found that a question tree reduced the number of business registrations with errors in them by 43% (from 35% to 20%). Likewise, the United Kingdom (UK) government has implemented digital question trees as a tool on a range of sites. For example, someone who wants to check whether they have the right to work in the UK can access the site https://www.gov.uk/legal-right-work-uk and press start. After answering a short series of easy questions, they are told whether they are entitled to work in the UK, what documents to bring or obtain and/or which authorities to contact. The one-minute experience of going through the questions leaves you baffled at the ease with which you are led through an incredibly complex set of laws and requirements to arrive at the exact information needed (UK Government, n.d.).
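A question tree is structurally very simple, as the following minimal sketch shows. The questions and outcomes below are invented placeholders for illustration; they do not reproduce the content of the Danish or UK services mentioned above.

```python
# Illustrative question tree: each node asks one yes/no question; leaves hold the answer.
question_tree = {
    "question": "Will the business have more than one owner?",
    "yes": {
        "question": "Do the owners want limited liability?",
        "yes": "Consider registering a limited company.",
        "no": "Consider registering a general partnership.",
    },
    "no": "Consider registering as a sole proprietorship.",
}

def traverse(node, answer_fn):
    """Walk the tree, asking one question at a time, until a leaf (an answer) is reached."""
    while isinstance(node, dict):
        answer = answer_fn(node["question"])          # e.g. input() in an interactive setting
        node = node["yes"] if answer else node["no"]
    return node

# Simulated answers: more than one owner (yes), limited liability wanted (yes).
answers = iter([True, True])
print(traverse(question_tree, lambda q: next(answers)))  # Consider registering a limited company.
```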
Make it intuitive
It is one thing to be guided to an answer by behavioural insights when navigating vast and complex amounts of information, such as when trying to register a business or find out what documents are needed to obtain a work permit. It is another thing to navigate the complex systems and technologies themselves that the modern world makes available. After all, humans co-evolved for millennia with nature, but the pace at which technological developments occur is too fast for human evolution to keep up with. This leaves humans struggling to understand and remember how the systems, environments and objects that surround them – at work, in interactions with public bodies and in the marketplace – function.
To this end, the traditional approach has relied heavily on information, instructions and training. In contrast, BI has from the outset explored areas such as human factors (Wickens, Gordon and Liu, 1998) and user-centric design (Norman, 1988) – though with a stronger emphasis on psychology and experimental testing than these disciplines usually exercise – in the search for principles to apply in the pursuit of better and more effective regulation. Perhaps in this area, more than any other, BI becomes an applied approach in the literal sense. To do this, the practitioner may want to use the following insights.
Intuitive coding: A broad concept referring to the idea of construing information, environments and objects so that people intuitively form appropriate beliefs using System-1 thinking. For example, a light switch may be designed in a way so that users intuitively form the correct idea of how to use it by, for example, flipping it up and down or turning a knob left or right. While the design of light switches is not particularly important for public policy, the idea of intuitive coding may be crucial for the construal of “user interfaces” in public policy.
At the most practical end of the spectrum, we find examples like Lake Shore Drive in Chicago, where a tight turn makes it one of the city’s most dangerous curves. Trying to limit accidents, in September 2006 the city painted a series of white lines perpendicular to travelling cars such that the lines get progressively narrower as drivers approach the sharpest point of the curve (see Figure 2.16). This creates the illusion of speeding up, which – by hypothesis – should make drivers lift their foot from the accelerator to compensate for possible illusions of control and overconfidence. The result: there were 36% fewer crashes in the 6 months after the lines were painted compared to the same 6-month period the year before (September 2006 to March 2007 versus September 2005 to March 2006) (Nudge blog, 2010).
In the United Kingdom, researchers tried to incorporate behavioural insights into the user-centred design of an inpatient prescription chart to study how changes in the content and design of prescription charts could influence prescribing behaviour and reduce prescribing errors. The changes included having doctors circle “microgram”, “mg”, “g” or other units, rather than writing them out, to avoid misreading (see Figure 2.17). In a simulated context, the chart significantly reduced the number of common prescribing errors, including dosing errors and illegibility, without education or support, suggesting that some common prescription-writing errors are potentially rectifiable simply through changes in the content and design of prescription charts (King et al., 2014).
Mental models: Mental models are psychological representations of real, hypothetical or imaginary situations. In particular, a mental model is a category, concept, identity, prototype, stereotype, causal narrative or larger worldview that helps people make sense of the world. Mental models capture broad ideas about how the world works and one’s place in it. They are thus structures that enable as well as constrain the ways people interpret their surroundings and understand themselves. In doing so, they cause people to ignore certain pieces of information and fill in missing information where needed. Mental models are automatically triggered by contextual cues – models of the mind that provide us with default assumptions about the people we interact with and the situations we face (World Bank, 2015).
Public policy itself depends upon mental models. A central claim of this report has been that the rational model of human agency has directed and constrained traditional public policy. An even more fundamental claim made here is that the rational model is not well adapted to inform public policy when it comes to behavioural problems. Instead, this report suggests an alternative mental model in the form of dual process theories to inform the development, design and delivery of public policy. The shift from the rational model to the dual process cognitive theory or model of human behaviour is but one example of the potential that may come from changing the mental models that people use to make sense of the world. Whether such changes succeed may depend on institutional changes, but the behavioural sciences have also shown that mental models may be changed by exposing people to alternative ways of thinking and to new role models, in real life as well as in fiction.
The World Bank (2015) describes how certain groups of disadvantaged people in Ethiopia have been observed to hold beliefs that they could not change their future, thereby constraining their ability to see the opportunities they might have. Researchers invited a randomly selected group of villagers to watch inspirational documentaries in which individuals from the region described how they had improved their socio-economic positions by setting goals. A survey conducted six months later found that viewing the documentaries had increased aspirations and brought about small changes in participants’ behaviour, such as increased savings and investing more resources in their children’s schooling (Tanguy et al., 2014).
While the case from Ethiopia describes the strategy of changing the mental model used by people, one may also use BI to change systems so that they conform to mental models. Citizens, for instance, usually spend more time on other sites and platforms than those provided by public bodies. Hence, adjusting information architecture and layout on public websites to the mental models that people have picked up more broadly may significantly improve the functionality and experience of the service.
Support judgment
People still need to make judgments. That is, they need to infer new beliefs from pre‑existing beliefs. In doing this, people rely on an array of simplifying heuristics that allow them to draw inferences that often but not always serve as reliable and cost-effective shortcuts for processing information. Tversky and Kahneman (1974) famously identified three heuristics – availability, representativeness and anchoring and adjustment – influencing human judgment. As the list of such heuristics is becoming increasingly long and varied due to the rapid progress of the behavioural sciences, the following exposition is limited to illustrating three out of several possible principles for applying BI to support people in making judgments.
Utilising heuristics: When it comes to behavioural insight strategies, the principle of utilising heuristics means that researchers and practitioners tap into heuristics so as to promote a particular belief being formed. Needless to say, one should think twice about using this principle on ethical grounds. Yet, considering which heuristics will play a part in forming beliefs in a specific context, and designing policy interventions to match rather than conflict with these, is usually appropriate. Here we use the messenger effect to illustrate the principle.
The messenger effect is a robust effect where people judge the truth or likelihood of a message according to the perceived credibility of the messenger. The UK launched the “Healthy Buddy” scheme, whereby older students received healthy living lessons from their school teachers and then acted as peer teachers to deliver these lessons to younger “buddies”. Compared with a control group, both the older and younger “buddies” enrolled in the “Healthy Buddy” scheme showed an increase in healthy living knowledge as well as in their behaviour and weight (The Behavioural Insights Team, 2010).
Adapting to heuristics: A persistent criticism of the literature on, and application of, biases and heuristics (i.e. mental shortcuts or intuitive judgments) relative to judgment is that it perceives biases and heuristics as fundamentally flawed reasoning (Gigerenzer, 1991). Proponents of this criticism have argued that biases and heuristics should rather be conceived of as adaptive forms of reasoning that, while not conforming to the rules of rationality or formal logic, present efficient heuristics in an uncertain world, as long as information is presented in a way that allows for their relevant application.
For instance, both lay people and professionals often have problems calculating the probability of an event occurring based on knowledge of a related event (known as Bayesian inference). This typically results in the person committing what is referred to as the base-rate fallacy, as noted above. Gigerenzer and colleagues have shown that presenting risky decisions in terms of natural frequencies helps people, even fourth graders, make Bayesian inferences correctly without help from instructors. Making sure that information is presented in forms, such as natural frequencies, that fit cognitive strategies or heuristics represents the principle of adapting to heuristics, so as to make use of their efficiency in solving problems.
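To see why natural frequencies help, consider the following worked example. The numbers are assumptions chosen purely for illustration and are not taken from the studies cited here.

```latex
% Illustrative screening example (assumed numbers): base rate 1%, true-positive rate 80%,
% false-positive rate 10%.
% Probability format, which invites the base-rate fallacy:
\[
P(D \mid +) = \frac{P(+ \mid D)\,P(D)}{P(+ \mid D)\,P(D) + P(+ \mid \neg D)\,P(\neg D)}
            = \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.1 \times 0.99} \approx 0.075
\]
% Natural-frequency format, which typically makes the same inference intuitive:
% "Of 1 000 people, 10 have the condition and 8 of them test positive; of the 990 without it,
%  99 test positive. So 8 of the 107 people who test positive actually have the condition."
\[
P(D \mid +) \approx \frac{8}{8 + 99} = \frac{8}{107} \approx 0.075
\]
```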
As an example of using the principle of adapting to heuristics, as well as of the approach to BI earlier labelled “boost”, Drexler et al. (2014) have shown the benefit of providing instruction, practice and training in financial decision-making skills. In their study, they provided micro-entrepreneurs in the Dominican Republic with simple financial and accounting heuristics, which led to significant and economically meaningful improvements in business practices and outcomes. In a different approach to adapting to heuristics, a switch from showing fuel efficiency in the context of purchasing a new car in terms of “miles per gallon” to showing “gallons per mile” has been shown to make the benefits of greater fuel efficiency more transparent (Larrick and Soll, 2008).
Social proof: In the present context, social proof is regarded as pertaining to belief formation and is thus defined separately from social norms and peer pressure. It is regarded as a social-psychological phenomenon in which people look to the behaviour of others in an attempt to make sense of the world. Social proof is triggered by uncertainty about the state of the world in social contexts and driven by the belief that other people possess knowledge about what is going on and about how aspects of their surroundings work. Social proof thus represents a class of heuristics for forming beliefs based on the behaviour of others and the assumption of an asymmetry in knowledge.
By highlighting or emphasising a positive behavioural norm, practitioners may support judgment by “de-biasing” an existing misperception or, potentially – though usually not ethically acceptably – encouraging the misperception that the positive behaviour is more prevalent than it actually is, which may result in people adopting the positive behaviour. This is in contrast to traditional public policy, which tends to emphasise negative or problematic behaviour, often leading people to exaggerate how widespread the problem is (Berkowitz and Perkins, 1987).
This principle of social proof has been applied to a series of behavioural problems over the last couple of decades. For instance, it has been used to emphasise actual behaviour in relation to alcohol consumption amongst youths, with the result of reducing misperceptions as well as actual consumption among the youths provided with the positive social proof (Balgvig and Holmberg, 2014). Likewise, emphasising the actual use of seatbelts amongst drivers has been shown not only to “de-bias” the misperception amongst drivers about other people’s behaviour, but also to lead more drivers to perceive the behaviour as positive (Linkenbach and Perkins, 2003). Thus, the principle of social proof offers a cheap and quite effective strategy to support people’s process of judgment: always highlight the actual positive behaviour, as people will take this into account when making sense of the world in an uncertain situation.
Choice – Make it attractive, frame prospects and make it social
When making a choice is difficult, people are likely to be influenced by biases and heuristics in their decision-making. ABCD suggests that practitioners look into making preferable choices more attractive, use framing of prospects and leverage social identities and norms.
Make it attractive
Attraction is the fundamental law of choice. In facing a set of choice options, people opt for what they find most attractive. But what makes a choice attractive, and how may practitioners use behavioural insights in this area to encourage people to make the best choices? This is an issue that may be treated at length, but here a few simple principles are considered: how to connect with motives, how to create perspectives and how to trigger emotions.
Consider motives: Every choice has a motive, and a choice can be either intrinsically or extrinsically motivated. Intrinsic motivation to perform an activity arises when one receives no apparent reward except the activity itself, whereas extrinsic motivation to perform an activity comes from external rewards, such as money, commands and promises of punishment. Considering how to connect with intrinsic motives, as well as determining how potential extrinsic incentives will interact with these motives, is a crucial exercise for practitioners.
Intrinsic motives are, by nature, cheaper and more meaningful to people than extrinsic motives. This is, for instance, well known from the voluntary work that millions of citizens perform around the world. Hence, practitioners should always consider what intrinsic motives might be identified and connected as drivers of the desired behaviour.
From a rational choice perspective, these types of motivation can be reconciled by offering extrinsic incentives, such as monetary rewards, to attract people to the desired choice. However, in a series of experiments and field trials, the behavioural sciences have revealed that extrinsic incentives are not always reconcilable with intrinsic motivation. Instead, motivational crowding theory suggests that providing extrinsic incentives for certain behaviours can undermine the intrinsic motivation for those behaviours. For instance, paying for a behaviour which previously has been voluntary, such as blood donation, might reduce the willingness to enact that behaviour (Titmuss, 1970). In another instance, monetary compensation offered for a nuclear waste repository in Switzerland lowered the willingness to accept the locally undesired project from 50.8% to 24.6%. About one-quarter of the respondents even seemed to reject the facility simply because of the financial compensation attached to it (Frey and Jegen, 2001).
Thus, considering how to connect intrinsic motives with potential extrinsic incentives is crucial for practitioners.
Create perspectives: The practitioner should distinguish between the primary and secondary motives of a choice, as this is a vital distinction. To illustrate why, think about buying bottled water. Conceived in isolation, most people do not care much about which bottle or brand of water to buy; all the options will satisfy the primary motive of quenching thirst. When this is the case, secondary motives may become of interest. A secondary motive is an additional motive introduced into the consideration of a choice that provides additional reasons for choosing one option over another. In the example of buying water, you may, for example, find that one brand donates some of its profits to charity. As a result, when facing two identical bottles of water, which equally quench your thirst (primary motive), the donation to charity (secondary motive) may act as a tiebreaker. Making such secondary motives salient creates what may be referred to as a perspective – highlighting an attribute that may provide a secondary motive for choosing an option – and is an effective way to influence choices in cases where people hold weak preferences over options. An illustration of this comes from Norway, where a study found that making the lifetime costs of domestic appliances salient to consumers encouraged them to buy appliances that were 4.9% more energy efficient (Kallbekken, Sælen and Hermansen, 2013).
Trigger emotions: The concept of emotion refers to a type of cognitive experience associated with intense mental activity, often resulting in an internal state falling somewhere between pleasure and displeasure. A range of stimuli, including sensory stimuli, memory and mental simulation (i.e. imagination), may trigger emotions.
At least since Plato, the tradition in public policy has been to contrast emotions with reason and to argue that the latter should be held in higher regard and be protected from the former in order to allow for rational decisions. Thus, while emotions may be treated as ends in themselves, the means should be evaluated independently from the emotions that rational considerations might trigger. To caricature the rational perspective a bit, emotions are to be treated as mere mental noise that one should aim to transcend so that reason may prevail with a cool perspective on things. Looking at communications from public bodies to citizens often reveals that this tradition is proudly maintained.
However, contrary to Plato, evolutionary psychology holds that basic emotions and social emotions alike have evolved to motivate behaviours that were adaptive in our ancestral environment. Thus, a more contemporary behavioural perspective is that without emotion there is no choice but apathy. Emotions are not noise when making choices and making decisions.
In particular, the act of experiencing emotion (affect) is a fundamental factor when navigating choices. To choose, we internally simulate the consequences of making one choice over another, and thus we automatically become emotionally stimulated. In some areas, emotions are stimulated in order to seriously challenge or even crowd out our more deliberative reasoning. However, there is no reason why emotion may not instead be used to make sensible, but bland, preferable choice options a bit more attractive. Yet, this strategy is still highly neglected in public communication.
Bertrand et al. (2010) conducted a field experiment in financial decision-making, which included experiments on advertisement content. In particular, the study found that a picture of an attractive, smiling female increased demand for the financial product by the same amount as a 25% decrease in the loan’s interest rate (see Bertrand et al., 2010; The Behavioural Insights Team, 2010). Needless to say, the findings of science are not always politically correct, and practitioners should, of course, take this into account in choosing how to apply behavioural insights.
Frame prospects
The framing and arrangement of prospects is perhaps the most famous, but also one of the more technical, areas of BI as applied to public policy. In facing a series of choice options, a person also faces a series of possible futures, i.e. prospects. While making it attractive provides reasons for choosing, the framing of prospects influences people to choose one or another option in subtle ways that are independent of the content of the options and the reasons for choosing them. That is, one option may be chosen over another simply due to the way the choices are presented – either as a matter of arrangement or as a matter of formulation.
Arranging choices: Although the influence of the mere arrangement of choices was not a topic of Tversky and Kahneman, it is considered here as part of the strategy of framing prospects. While a standard topic of marketing research, the principle of arranging choices offers some simple behavioural insights to BI practitioners in public policy that should always be considered, as any choice will always be arranged in one way or another.
To illustrate the potential effect of arranging choices, consider the simple arrangement in Figure 2.18 (Panel A) of two options of coffee with aligned attributes arranged horizontally as below:
Now, according to standard rational models, one should prefer either Option 1, the small coffee (EUR 2.50), or Option 2, the big coffee (EUR 3.50), or be indifferent. Consider next the arrangement (Panel B) of three options of coffee with aligned attributes arranged horizontally – which one do you prefer now?
In presenting people with choices like these in experiments and marketing research, researchers find that some people who prefer Option 1 in the first setting prefer Option 2 in the second setting. Such cases are problematic from a rational perspective since they imply that some people may reverse their preferences from preferring Option 1 over 2 in one setting to preferring 2 over 1 in an almost identical setting that only differs by offering an even bigger Option 3. While this makes no sense from the perspective of standard rationality, the behavioural sciences explain it as an instance of the compromise effect: consumers are more likely to choose the middle option of a selection set with aligned attributes, rather than the extreme options. That is, the mere arrangement of options influences choice in irrational ways.
However, the compromise effect is not the only arrangement effect that researchers and practitioners may consider. For instance, when attributes are not aligned, as is the case with, for example, holiday packages or laptop computers, researchers have found that people tend to choose extreme options as the number of options increases. This is the opposite of what happens with the compromise effect illustrated above. Examples like these reveal that the arrangement of options is a highly practical field where intuitions are more or less useless, which is why testing interventions in each context is required.
There are certain contexts of public policy that are worth considering through the lens of BI as applied to the arrangement of choice options. For instance, researchers have found that the arrangement of choice options significantly affects food choices. Thus, for instance, Hansen et al. (2016) found that the mere re-arrangement of something as trivial as a conference buffet with coffee, fruit and cake may decrease calorie consumption by 25% – much more than is likely to be achieved through taxation of sugar and fat. In a similar fashion, Miller and Krosnick (1998) found that the arrangement of choice options when casting a vote significantly influences choice – both with regard to candidates within parties and for parties themselves. Research on this “ballot order effect” has shown that candidates listed first on a ballot receive, on average, 2.5% more of the vote than those listed later. This has led states like Ohio (USA) to rotate the name order of candidates on their ballots.
Framing prospects: Having considered how the mere arrangement of choice options might affect choices, practitioners might also consider how to frame prospects so as to encourage preferable choices given that the right conditions obtain.
At its most simple, the framing of prospects refers to how the mere formulation of choice options may influence choices independent of their semantic content. For example, suppose you are presented with a choice between two cancer treatments: the first gives an 80% chance of survival, the second a 20% chance of death. Semantically the two are equivalent, but you are likely to choose the first treatment simply because survival sounds better than death.
While frames such as this one rely on the mere (positive or negative), yet inconsequential, difference in the formulation of choice options, Kahneman and Tversky identified a series of systematic insights into how people are influenced by the formulation of prospects and summarised these in their prospect theory (1979).
The value function is perhaps the most famous part of this theory (see Figure 2.19). It provides a model of choice summarising findings of how people decide between alternatives that involve risk and uncertainty. First and foremost, the model asserts that people think in terms of expected utility relative to a reference point rather than in absolute terms. Second, the model captures the insight that people are more influenced by the prospect of losses than the prospect of gains, popularly expressed as “losses loom larger than gains” (loss aversion). Finally, the model captures the insight obtained from experiments that people, due to diminishing sensitivity to gains as well as losses, are risk averse for prospects involving gains, while risk seeking when it comes to prospects involving the risk of losses.
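On a formal level, the value function is often written in the following piecewise form; the notation and the parameter values mentioned are the ones commonly cited in the literature and are given here only as an illustrative sketch, not as part of the figure itself:

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \geq 0 \\
-\lambda(-x)^{\beta} & \text{if } x < 0
\end{cases}
\]

where x is an outcome measured relative to the reference point, \(\alpha, \beta < 1\) capture the diminishing sensitivity to larger gains and losses (the curvature of the function), and \(\lambda > 1\) captures loss aversion. Estimates reported by Tversky and Kahneman put \(\lambda\) at roughly 2, i.e. a loss weighs about twice as heavily as a gain of the same size.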
While prospect theory may seem very abstract to policymakers, practitioners may use the theory when deciding how to formulate simple prospects such as those faced by citizens when making everyday decisions in their interaction with public bodies.
In 2016, the Danish Taxation Authority, working with iNudgeyou, increased the percentage of companies filing taxes on time from 65% to 74% (compared to 2015) by adding a reminder line to the original email formulated in terms of loss aversion, saying “Remember to report tax on time to avoid a tax surcharge of up to DKK 5 000”. This replaced the header reading “Remember to report tax before July 1st” (OECD, 2017). In 2017, Medway Council (United Kingdom) worked with UKBIT on increasing the rate at which council taxpayers signed up for direct debit. Testing two new messages – one which drew on loss aversion and one which drew on social norms – against a business-as-usual control of no message, they found that both new messages significantly increased sign-ups and that the loss aversion tactic worked slightly better, especially for houses in high tax bands (Sanders, Jackman and Sweeney, 2017).
Other similar experiments exist, where choices are formulated in terms of reference points, loss aversion and the risk evaluation predicted by prospect theory. Common to these is that public policies revolving around incentives, risk and uncertainty may be made more effective merely by considering how the choices are framed.
Make it social
Humans are, first and foremost, social beings. Yet, this is often ignored in public policy, where people are, first and foremost, treated as isolated citizens, consumers and individuals. Connecting with the social identities and norms informally co‑ordinating and regulating human groups and societies is an invaluable strategy in the pursuit of creating a change in behaviour. In this regard, practitioners can make policy social by considering the following insights.
Connect with social identities: Social identity is a complex phenomenon. The concept is usually taken to refer to how we identify ourselves in relation to others according to what we have in common. It is at the core of what provides us with a sense of self-esteem as well as shapes our way of socialising and what behaviour we engage in. Strong and intimate forces are at play when connecting with the social identity of people. However, by considering the social identity of people as well as the social meanings that choices are embedded within, practitioners may find ways of connecting the behaviour change sought by public policies to the deeper fabric of the societies they serve.
A fundamental mechanism involved in social identity is each individual’s comparison with their peers. This mechanism is what drives people’s sense of status, recognition and identification with a group. Thus, by making certain choices, people may gain status, recognition and identity within groups of peers even though these choices may result in little external reward. This was, for instance, famously shown in the campaign “Don’t mess with Texas”. In this, the Texas Department of Transportation (USA) sought to reduce littering on Texas roadways. They launched the “Don’t mess with Texas” campaign targeted at 18-35 year-old males, who were known to be most likely to litter, and created a slogan aimed at connecting with the social identity of the target group. The campaign has been credited with reducing litter on Texas highways by 72% between 1986 and 1990 (Texas Department of Transportation, n.d.; Texas Times, 2016).
Another example comes from Opower, a leading US provider of customer engagement and energy efficiency cloud services to utilities. Opower provides households with “Home Energy Reports” that consist of two parts: one containing suggestions on how to reduce energy use adapted to the household, and the other using social comparison to compare the household to the 100 nearest houses of similar size. In an analysis of 78 492 households separated into treatment (39 217 households) and control (39 275 households), those receiving the social comparison reduced electricity consumption by 2.0% on average. Opower estimates that this would result in a reduction of over 450 000 tonnes of CO2 emissions, equivalent to USD 75 million in energy savings, across the 15 million homes in the 6 countries it services (Allcott, 2011). Needless to say, researchers and practitioners should be careful when trying to connect a given behaviour change to people’s social identity. Misfires using this principle may seriously backfire on the trust put in public officials and institutions as well as cause damage to the social fabric. However, as social identities are fundamental to the functioning of any human society, it is not a matter of whether but of how public policies seek to connect with those identities and for what purposes.
Create a sense of community: The final insight to be considered as part of the dimension of choice is that of observing the role that a sense of community may play when people make choices.
Most people’s choices ultimately have a deeply ingrained social dimension to them. This includes instrumental choices that co‑ordinate people when interacting, such as when adhering to conventions like speaking a particular language, driving on the same side of the road or exchanging goods using a particular medium of economic exchange. It also includes preferring instrumental activities more when performing these in groups or other social contexts, such as when opting for a packed theatre or restaurant rather than an empty one, going to a gym or park that other people also go to, or preferring to watch a soccer match live on TV in the knowledge that everyone else is watching it at the same time. Finally, it includes options that are preferred because their social dimension is part of the purpose, such as when going to a particular bar, playing golf in company or singing in a choir rather than alone.
Observing the role that a sense of community may play in how people make choices, and creating a sense of community around certain activities, may hold the key to influencing and creating certain types of behaviour that might otherwise be difficult to get people to choose to pursue. This is evidenced by big marathon events, communal eating events and non-governmental organisations (NGOs) facilitating the co‑ordination of collective litter collection, searches for missing individuals and charity fundraising.
Determination – Make it easy, provide plans and feedback, and create commitments
Behavioural problems related to issues of determination share the characteristic of people not acting on their intentions – the so-called intention-action gap. Making a choice is sometimes easy, yet certain types of choices require repeated mobilisation of motivation in the face of challenges posed by competing goals and temptations. When a behavioural analysis reveals that a behavioural problem is fully or partially caused by issues related to determination, ABCD offers the following strategies for practitioners to integrate into policy design and implementation to help people stick to their plans.
Make it easy
Most people know that it is easy to form an intention of doing something. It is much harder to get it done. However, we do not always anticipate this and tend to systematically overestimate our own ability to take the small steps needed to accomplish our goals. Thus, choosing to do something is not the same as succeeding. The world is complex and, when any one person has to juggle multiple goals at once, even relatively small obstacles may become a reason for postponing action. As a result, people tend to procrastinate, leading to inertia and maintenance of the status quo.
In such cases, we usually put our faith in increasing motivation – our own, our employees’ or citizens’ at large. However, one thing that the behavioural literature has made clear is that it is often far more effective and cheaper to reduce or, if possible, even remove those small obstacles referred to as “friction costs”. “Make it easy” is thus a mantra, not only of Richard Thaler but of any researcher or practitioner working with BI.
On a theoretical level, the behavioural insight captured by the mantra “make it easy” may be illustrated as below (see Figure 2.20) by pitching motivation against the difficulty of performing an action (the blue dot) relative to an action threshold (curve A). When the action falls outside the threshold, inertia and procrastination result. The effect of the standard approach of increasing motivation is captured by curve B. The effect of making an action easier to perform is captured by curve C.
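A minimal way to formalise the figure – the notation here is an illustrative sketch rather than part of the original model – is to say that an action is performed only when motivation exceeds a threshold that rises with the difficulty of the action:

\[
\text{action occurs} \iff m \geq T(d), \qquad T'(d) > 0
\]

where m is the person’s current motivation, d the difficulty (friction) of the action, and T(d) the action threshold. Curve B corresponds to raising m until it clears T(d); curve C corresponds to lowering d so that T(d) falls below the motivation people already have, which is often the cheaper of the two moves.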
From this, it is also obvious that while making something easy may be a way to get people to get things done, there is also a shadow function of “making it easy”. As anyone who has been on a diet knows, “making something just a bit more difficult” may have a significant effect on inhibiting that action. Taken together, these two sides of the same behavioural insight make for the strategy of working with friction, which may be categorised, depending on details, as an instance of the policy approach called “nudging” or “curling” described above. The following insights serve as illustrations.
Default effect by cognitive avoidance: There are many ways to “make it easy”. Common to all of these is that they look easier and more straightforward in hindsight than they do when in the process of doing them.
Changing the default is the most basic way of working with friction. For example, if people are automatically subscribed to a programme, one removes all obstacles to signing up. Simultaneously, of course, one also makes it more difficult for people to get around to unsubscribe. Some of the most famous examples from BI are about changing the default in the domain of determination. This use of defaults is called the default effect by cognitive avoidance (DECA) and is different from the other behavioural insights concerning the default effects discussed earlier.
Perhaps the most famous use of DECA is in pension schemes. Automatically enrolling employees in such schemes has been found to be incredibly effective compared to when employees actively have to opt in (The Behavioural Insights Team, 2014). However, DECA applies to any policy problem requiring citizens to make a continued effort in information search or goal maintenance. Thus, in Germany, two natural experiments examined how default settings may affect consumer choice with regard to energy consumption – an area in which consumer behaviour is notoriously immobile because of suppliers’ use of subscriptions, the lack of urgency in revising subscriptions and the high effort it takes to get an overview of the market and change supplier. First, in Schönau in the Schwarzwald, approximately 2 500 citizens established the green electricity company Elektrizitätswerke Schönau in the wake of Chernobyl. Being part of this company was the default for all citizens. Recent reports note that opt-outs are marginally above 0% per year. Second, in Southern Germany, Energiedienst GmbH in 1999 replaced the former one-option model with a default system in which Option 1 was a green option that cost 23% more than the original model; Option 2 was the default intermediate green option that was 8% cheaper than the original model; and Option 3 was the least green option that was an additional 8% cheaper than Option 2. As a result, 94% of consumers chose Option 2, that is, the intermediate green default option, while only 4.3% chose the cheapest option, Option 3, and the remaining 2% either chose Option 1 or changed energy supplier (Pichert and Katsikopoulos, 2008).
Work with friction: Another principle for making something easier is to reduce or increase the hassle factor, or “friction”, so as to make it relatively easier to take up a preferable service or to perform an action.
Reducing the number of actions, clicks or questions that one needs to perform or answer to succeed with something has been shown again and again to be a simple way to “make it easy” and help people achieve their goals. Thus, a recommended first step is to go back to the flowcharts from the Behaviour stage and look for ways of simplifying the process it takes to succeed in the preferable behaviour. In fact, UKBIT has run several experiments showing the efficacy of this strategy. In one such experiment, run with the UK revenue and customs authority, tax collection rates improved from 19% to 23% by directing letter recipients straight to a specific form they were required to complete rather than to the web page that included the form. In another experiment, streamlining and automating parts of the process for under-represented low-income groups applying for financial assistance led to an eight percentage point increase in the university attendance of these groups (The Behavioural Insights Team, 2014).
Conversely, when Denmark introduced an online “direct divorce” solution in 2013, it made getting a divorce within minutes easy and the number of divorces increased significantly. However, the number of people regretting their divorce shot up as well, as the number of cases where people asked for an annulment of their divorce increased to more than one out of ten. Laws were subsequently passed to make divorce a bit more difficult again.
Thus, “make it easy” is not best understood in absolute terms but rather as a strategy of making the preferable course of action relatively easier when compared with non‑preferable choices. When people have trouble self-regulating their response, as might be the case for some filings for divorce, determination may be supported by making some choices a bit more difficult. In another example of this, deaths from paracetamol poisoning were observed to decrease by 43% after new legislation required larger quantities to be sold in blister packs. As a result, 765 fewer people died between 1998 and 2009 (The Behavioural Insights Team, 2014).
Provide plans and feedback
In other situations, it is not possible, or not sufficient, to make actions easier by changing the default or by reducing or increasing friction. In particular, some behaviour changes require that goal-directed behaviours are not just initiated or considered once or twice but are continuously maintained over time. Besides the recurring attentional problems posed by such behaviours, the mental taxation involved plus the balancing of competing goals may easily lead to failure. One may thus intend to stick to a diet or a health plan, or to take one’s medication, but, at some point, the continuous inner battles that need to be repeatedly won may make the temptation of skipping a day or two too great.
In such situations, we often put our faith in our strength of will, with only ourselves to blame in case of failure. However, the behavioural science literature suggests that continuously sticking to one’s plan to reach long-term goals may be just as much a matter of technique and external feedback as a matter of inner resources. Teaching people the fundamentals of these techniques (“boost”), such as how to set up the right kinds of plans, as well as arranging for suitable feedback, are thus behavioural insights that lend themselves to constructing potential strategies for successful behaviour change. The following insights exemplify this strategy.
Implementation intentions: A well-known way to succeed with a complex long-term goal is to break the complex goal down into simple, actionable steps. In the goal-setting literature, this is often referred to as “eating an elephant one bite at a time”. Still, even when doing this, plans often fall through. After all, even simple long-term goals may require continuous effort.
To alleviate this problem, findings in the behavioural sciences suggest that initiating as well as maintaining goal-directed behaviour can become much more likely by making concrete and specific action plans, stipulating not only the goal but a context-specific plan for accomplishing that goal of the form: “When C arises, I will perform response A”. This type of conditional planning is referred to within the BI literature as implementation intention plans. The “if-then” structure has been shown to result in a higher tendency to succeed in accomplishing one’s goals by predetermining a specific and desired goal-directed behaviour in response to a particular cue or future event (Gollwitzer and Brandstätter, 1997; Gollwitzer, 1999). Further, ensuing research has found that when implementation intentions are devised in advance to combat the potential obstacles challenging the pursuit of a long-term goal, they are even more effective in supporting behaviour change. There are multiple reasons why implementation intention plans are so effective. Most importantly, such plans are assumed to create mental representations of future situations such that, when these occur, the plan becomes automatically activated. This not only helps to remind one of one’s goals and plans but also makes following the plan automatic over time, so that it does not require conscious intent and deliberation. This can be used for a wide variety of public policy relevant interventions, from providing people with plans for voting, sticking to one’s diet or exercise programme, to getting people to perform self-examinations for health purposes.
Thus, in an experiment by Orbell, Hodgkins and Sheeran (1997), participants were first asked to indicate how strongly they intended to perform breast self-examination (BSE) during the next month. To create relevant implementation intentions, participants were then asked to write down where they would perform BSE in the next month and at what time of the day. Of the participants who had reported strong intentions to perform BSE during the next month, 100% did so when they had been induced to form additional implementation intentions. If no additional implementation intentions were formed, however, the strong goal intention alone only produced 53% of goal completion. Similar results based directly on administrative data, rather than self-report, have been found in relation to flu shots (Milkman et al., 2011) and colonoscopies (Milkman et al., 2012).
Providing feedback: The word feedback, which originated in 1920 in the field of electronics, has expanded its meaning widely to refer to almost any mechanism by which information about the effect of an activity or process is returned and thereby, in turn, can affect that activity or process in the future. A feedback intervention is defined as an action taken by an external agent to provide information regarding some aspect(s) of one’s task performance (Kluger and DeNisi, 1996). Historically, the two most influential conclusions in research on feedback interventions are that they improve learning as well as motivation, with the caveat that feedback may also decrease motivation if one is doing poorly and has hardly any effects when an individual is already performing at a high level.
There are different types of feedback, including natural feedback processes (e.g. homeostasis); task-generated feedback (e.g. a gardener seeing that they have flooded their plants); feedback on progression (e.g. how long you have run on the treadmill); feedback on results (e.g. how fast you ran five kilometres); relative feedback (e.g. your place in a race); social comparison feedback (e.g. how much you earn compared to your colleagues); and personal feedback (e.g. your wife telling you that you could do better in all aspects of life). When seeking to help people stick to a long-term goal, providing them with the suitable kind of feedback in the right situation may help them stay on track.
As an example of using feedback to change behaviour, in 2017 the Australian Department of Health identified 6 649 general practitioners (GPs) whose antibiotic prescribing rates were in the top 30% for their geographic region. Four different letters were prepared to test different behavioural insights, while a control group of 1 338 did not receive a letter. The trial found the biggest impact was on the 1 333 GPs whose letter from the chief medical officer contained a comparison with their peers as shown in a graphic depicting their scripts as a stack of red and white capsules. “I know that antimicrobial resistance is a complex issue that requires concerted efforts across general practice, hospitals, laboratories and animal health professionals”, the chief medical officer wrote. “However, there is clear evidence that reducing unnecessary prescribing can lower the incidence of antimicrobial resistance. The benefits of tackling this problem are relevant to every one of our patients”. The GPs who received that letter reduced their prescribing rate by 12.3% over the next 6 months (Australian Government Department of Health, 2018).
Create commitments (social expectations)
Sometimes, it takes more than working with friction or providing plans and feedback to help individuals achieve their long-term goals. The challenges and obstacles may just be too numerous or too hard to overcome. But even in such circumstances, the behavioural science literature has a trick up its sleeve. One reason that people procrastinate is that, in a long-term perspective, everything seems easier if postponed to the future. That is why people take up 12-month interest-free loans – for surely, in a year, we will be better at handling our finances than we are now. It is also why you systematically tend to set your alarm clock to 6 am, only to press the snooze button. Behavioural scientists refer to this pattern in behaviour as “present bias”. People like to enjoy themselves in the present, while “the future” is where all the difficult tasks we know we ought to do are placed.
On a theoretical level, the present bias refers to the tendency of people to give stronger weight to payoffs that are closer to the present time when considering trade-offs between two future moments (O’Donoghue and Rabin, 1999). Practitioners may integrate this tendency into policy interventions by planning so that the tasks necessary for accomplishing a long-term goal lie in the future at the moment the decision to pursue them is made, and by then putting in place a commitment device that is hard to ignore when facing temptations. This can be done by considering the following two classes of insights.
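The model referred to here is quasi-hyperbolic (“beta-delta”) discounting; as a compact reminder of that standard formulation (the notation below follows the common textbook presentation and adds nothing beyond it):

\[
U^{t}(u_t, u_{t+1}, \ldots, u_T) = u_t + \beta \sum_{k=t+1}^{T} \delta^{\,k-t}\, u_k, \qquad 0 < \beta \le 1,\; 0 < \delta \le 1
\]

where \(\delta\) is the standard exponential discount factor and \(\beta < 1\) captures present bias: every future payoff is discounted by the additional factor \(\beta\) relative to the present, so effortful tasks always look relatively cheaper when scheduled for “later”. With \(\beta = 1\), the model reduces to ordinary exponential discounting.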
Private commitment devices: Private commitments to a particular goal or action – that is, commitments that are not made public – are closely connected with the behavioural insight of connecting with social identities (see above). In making a private commitment, one introduces self-directed expectations and thereby constrains how one is allowed to think about oneself depending on failure or success in accomplishing the goal set. However, a private commitment is not just about making a pledge to oneself. It is about taking up a commitment device to realign reasons and incentives such that sticking to one’s plan becomes more attractive when challenged by temptations or when mentally taxed.
A particularly well-known form of private commitment device is the Ulysses contract, in which people pre-commit an amount of money that is returned to them only if they meet a previously agreed behaviour change goal. The idea is that Ulysses contracts help tackle present bias and utilise loss aversion (Oliver, 2017). To examine the effect of such contracts, Volpp et al. (2008) designed a study with three groups: subjects in Group 1 were assigned a weight-monitoring programme; subjects in Group 2 were assigned the weight loss programme plus a Ulysses contract; and subjects in Group 3 were assigned the weight loss programme plus a lottery incentive. After 16 weeks, the average weight losses were 3.9 lbs, 14 lbs and 13.1 lbs respectively. The proportions of those in each group achieving the weight loss target of 16 lbs were 10.5%, 47.4% and 52.6% respectively. Unfortunately, 7 months after the initiation of the study, the average losses across the 3 groups had narrowed to 4.4 lbs, 6.2 lbs and 9.2 lbs respectively – a statistically insignificant difference due to the small sample size. In general, there is mixed evidence of the efficacy of Ulysses contracts in public health, but they belong to a class of BI strategies that should be considered by practitioners, as ongoing digitisation will vastly increase the space for applying this strategy.
Public commitments: Public commitments are similarly connected with social identities and are a stronger type of commitment than private commitments. In making a public commitment, one creates both self-directed expectations and expectations in others about one’s behaviour, thereby leveraging social norms (see above). Taken together, these two aspects of public commitments intertwine so as to substitute for the usually material incentives introduced by private commitments – and, as the literature has it, they are far more effective at achieving their purpose.
In its most rudimentary form, a public commitment is nothing more than a pledge made publicly. The power of such commitments was documented in a 1972 field experiment by Thomas Moriarty (1975), who staged 56 thefts at Jones Beach, New York. In all of them, a portable radio was stolen from an unattended blanket. With the aid of two experimental confederates, the theft was staged in full view of each subject. In each case, the “confederate victim” placed his/her blanket (the victims were interchanged according to gender) within five feet of the subject and turned on his/her portable radio to a local rock station at a fairly high volume. After reclining for one to two minutes, the victim left his/her blanket and spoke briefly to the subject, either asking the subject to watch his/her things (Group 1) or asking for a light for his/her cigarette (Group 2). The confederate victim then strolled away out of sight. Two to three minutes later, a “confederate thief” came along and stole the radio. The results: in Group 2, only 20% of the subjects responded to the obvious theft, compared to 95% in Group 1 (ignoring the 16 subjects who self-reportedly did not see the theft, all of whom came from Group 2).
Similar results for the efficacy of public commitments have been obtained in a wide range of settings (Goldstein, Martin and Cialdini, 2015). For instance, in a field experiment in the UK, having patients repeat the date of their doctor’s appointment led to a 3.5% reduction in “do not attends” (DNAs), while further having them write it down led to a subsequent reduction in DNAs of 18% compared to the previous 6 months’ average (Martin, Bassi and Dunbar-Rees, 2012).
However, this effect was increased even further – to a 31.7% decrease in DNAs – when a poster was added stating that 9 out of 10 patients show up to their doctor’s appointments. As this situation is not one where people are uncertain about the most suitable or correct behaviour for lack of information, the effect is not one of social proof (see above). Rather, it is about leveraging social norms, one of the most powerful ways to influence people’s behaviour but also one that calls for cautiousness.
Leveraging social norms
The final insight to be considered as part of the aspect of determination is that of harnessing the power of social norms. At the most general level, social norms are the mutual expectations that govern the behaviour of members of groups and societies. Behaviours adhering to social norms can be puzzling: experiments show that people forgo immediate self-serving behaviour to respect fairness, and that norms may persist even when everyone in a group would prefer that the norm did not exist. Social norms provide strong expectations and constraints on what is acceptable behaviour as perceived by the group, and thus group members may go to great lengths to abide by existing norms, which may be incredibly difficult to change. However, in some situations, researchers and practitioners may turn to leveraging the power of social norms, especially when promoting pro-social behaviours.
Famously, UKBIT, working with HM Revenue and Customs (HMRC), changed a letter sent to people who had not paid their tax on time so that it stated that most people pay their tax on time and that the recipients belonged to the minority who had failed to do so. This intervention significantly increased payment rates, with a 5-percentage-point increase in payments, and led to GBP 1.2 million more being paid in the first month than in the control group (The Behavioural Insights Team, 2014). What makes this intervention different from the Opower experiment mentioned above is that, whereas the latter works by using social comparison to get people to compare their own actual performance relative to other people’s performance with their self-perception, the tax experiment explicitly uses fundamental in-group/out-group norms to pass a social judgment and signal normative expectations to those who do not comply.
Another BI experiment leveraging the power of social norms was aimed at passengers in minibuses in Kenya, with the aim of reducing traffic deaths. In the experiment, researchers used stickers in buses to remind passengers of their right to a safe ride on public transportation and to encourage them to “heckle and chide” reckless drivers. The intervention was a remarkable success. In the buses randomly assigned to the treatment group, insurance claims involving injury or death fell by half, from 10% to 5% of claims. This was reflected in a survey of drivers, suggesting that passenger heckling played a role in improving safety (Habyarimana and Jack, 2011).
Needless to say, leveraging social norms should be done with care. For one, when leveraging social norms, practitioners intervene in and make use of structures at the foundation of societal organisation and government. Second, those influenced by social norms may feel stigmatised, and may feel that their social fabric is being misused, if the purpose of leveraging social norms is not clearly acceptable.
Ethical guidelines for designing BI strategies for behaviour change (Strategies)
The stage of Strategies suggests a series of categories of behavioural insights to inform the design of potential public policies to match behavioural problems identified through the preceding Behavioural Analysis section. However, since some behavioural insights rely on mechanisms that are not fully accessible to consciousness or under people’s conscious control, the BI paradigm has continuously faced criticism and suspicion of serving governmental manipulation of people’s choices. The ethics of applying BI, therefore, quickly become a more complex matter. For one, it involves counter-intuitive and theoretical scientific insights for which our moral intuitions are not well adapted. For another, behavioural insights are not all alike and hence are difficult to evaluate as one.
Still, several distinctions and observations may be drawn that provide some guidance for what to consider when evaluating the ethics of behavioural insights for informing public policies.
Some misunderstandings to avoid
While we are always being behaviourally influenced, this does not exempt BI from ethical evaluation. It is sometimes claimed, with reference to Thaler and Sunstein’s book Nudge: Improving Decisions about Health, Wealth, and Happiness (2008), that since we cannot avoid behavioural influences, ethics is not an issue that needs to be considered. This is neither true nor what Thaler and Sunstein assert. More importantly, while it might be true that we are always being behaviourally influenced, when applying BI, researchers, practitioners and policymakers intentionally try to intervene to change the behaviour of citizens. With intentional intervention comes ethical responsibility that cannot be evaded by pointing to the fact that citizens otherwise would have been influenced by different factors.
Public acceptance of a behavioural intervention does not make it ethically permissible. In recent years a long series of survey studies have surfaced inquiring into the public acceptance of applying various kinds of behavioural insights to change people’s behaviour. While such empirical studies are interesting since they reveal the structure of the moral intuitions relative to BI, any kind of public acceptance of a behavioural intervention does not make that intervention ethically permissible. For one, such surveys do not easily reconcile with the theoretical underpinnings of BI. Second, one cannot deduce what ought to be acceptable from what is currently acceptable.
While people may avoid a behavioural intervention in principle, this does not mean that they can in practice. It is sometimes held that BI interventions neither force individuals to act in a certain way nor sanction them economically. Hence, it is said, applying BI cannot be morally objectionable. However, it should be noted that the freedom of choice held in this case is often one that only pertains to ideally rational individuals – and since one of the main propositions of BI is that real-world individuals are not ideally rational, it is incoherent to hold this position.
Two central distinctions
Transparent and non-transparent interventions. Not all aspects of applying behavioural insights are inaccessible to consciousness. While it is sometimes held that behavioural insights influence individual behaviour in ways that are inaccessible to consciousness, this is not the case for some types of influences. In particular, the use of insights such as salience, reminders, prompts, question trees, implementation intentions and the like is usually transparent to citizens. The application of such insights is referred to as transparent, while influences for which citizens cannot identify who is trying to influence them, by what means and for what purposes are referred to as non-transparent.
Avoidable and unavoidable interventions. Not all influences from applying behavioural insights are outside people’s control (i.e. automatic). It is sometimes held that behavioural insights influence people’s behaviour in ways that render it outside of their conscious control. However, while some insights mediate their effects in ways that people cannot avoid, many applications make conscious control possible or even depend on it. Influences that people cannot control are referred to as unavoidable, while influences that make conscious control possible or depend on it are referred to as avoidable.
These two distinctions can be combined to form four types of policy interventions (see Figure 2.21). When assessing the transparency and “avoidability” of these interventions, keep in mind the following considerations:
Prioritise transparency. Is your intervention clearly communicated, including being transparent about its purpose and nature?
Offer a way out. Can citizens avoid the intervention? Does the intervention offer easy pathways to objections and complaints?
Ensure the policy intervention serves the public interest. Is it in line with public sentiments? Does it prevent harm against others?
Ensure citizens are not being held responsible for consequences that they did not consciously select. In your context, are they able to fully understand the implications of their choices? Are they considered legally accountable for these?
Table 2.7. Ethical guidelines for Stage 3: Strategy
1. While we are always being behaviourally influenced, this does not exempt BI from ethical evaluation. While it might be true that we are always being behaviourally influenced, when applying BI, researchers, practitioners and policymakers intentionally try to intervene to change the behaviour of citizens. With intentional intervention comes ethical responsibility that cannot be evaded by pointing to the fact that citizens otherwise would have been influenced by different factors.
2. Devising strategies for behaviour change is not morally objectionable in and of itself. BI is sometimes criticised for seeking to intervene in the life of citizens in order to influence their behaviour. However, this is not an objection against applying BI in public policy but rather against public policy in general. After all, the raison d’être of public policy is intervening in individuals’ lives to regulate and influence citizens’ behaviour.
3. Public acceptance of a behavioural policy intervention does not make it ethically permissible. In recent years, a long series of survey studies have surfaced inquiring into the public acceptance of applying various kinds of behavioural insights to change human behaviour. While such empirical studies are interesting since they reveal the structure of the moral intuitions relative to BI, any kind of public acceptance of a behavioural intervention does not make that intervention ethically permissible. For one, such surveys do not easily reconcile with the theoretical underpinnings of BI. Second, one cannot deduce what ought to be acceptable from what is currently acceptable.
4. While people may avoid a behavioural policy intervention in principle, this does not mean that they can in practice. It is sometimes held that BI interventions neither force individuals to act in a certain way nor sanction them economically. Hence, it is said, applying BI cannot be morally objectionable. However, it should be noted that the freedom of choice held in this case is often one that only pertains to ideally rational individuals – and since one of the main propositions of BI is that real-world individuals are not ideally rational, it is incoherent to hold this position.
5. Not all aspects of applying behavioural insights are inaccessible to consciousness. While it is sometimes held that behavioural insights influence individual behaviour in ways that are inaccessible to consciousness, this is not the case for some types of influences. In particular, the use of insights such as salience, reminders, prompts, question trees, implementation intentions and the like is usually transparent to citizens. The application of such insights is referred to as transparent, while influences for which citizens cannot identify who is trying to influence them, by what means and for what purposes are referred to as non-transparent.
6. Not all aspects of applying behavioural insights are outside people’s control, i.e. automatic. It is sometimes held that behavioural insights influence people’s behaviour in ways that render it outside of their conscious control. However, while some insights mediate their effects in ways that people cannot avoid, many applications make conscious control possible or even depend on it. Influences that people cannot control are referred to as unavoidable, while influences that make conscious control possible or depend on it are referred to as avoidable.
7. Transparent avoidable policy interventions are usually regarded as ethically permissible when serving people’s interests. As potential policies based on such influences are transparent and under the conscious control of citizens, citizens are in a situation where they can decide to reject and thus avoid the policy intervention in question. Thus, such policies will usually be permissible as long as they are intended to serve the interest of citizens and thus qualify for public policy intervention.
8. Transparent unavoidable policy interventions are usually regarded as ethically permissible when serving people’s interests and routes to objections are made available. Being transparent, citizens will be aware of such interventions, but since they are not readily avoidable due to their automatic mediators, policymakers should always take care to make available routes for objecting and complaining about the potential intervention as part of its design; this includes easy routes to writing letters of complaint and making contact with public officials.
9. Non-transparent unavoidable policy interventions are usually not regarded as ethically permissible unless they serve people’s interests, are clearly communicated, routes to objections are made available and citizens are not held accountable. Some behavioural interventions are not readily transparent and may not be avoidable. Policies designed on such interventions may be ethically permissible if: i) their existence, purpose and their nature as a means is clearly communicated, thereby making them transparent in principle; ii) easy routes to objections and complaints are made available; iii) the intervention serves people’s interests; and iv) citizens are not held accountable for the consequences.
10. Non-transparent avoidable policy interventions are usually not regarded as ethically permissible even if serving people’s interests. When a policy intervention is non-transparent and avoidable, this means that it will usually be a matter of intentional manipulation by policy design, while at the same time people will usually be held accountable for their actions. In such cases, citizens are treated as a means, rather than an end. Even if such interventions are intended to serve the interests of citizens, they are usually not permissible unless they serve to prevent harm to others.
Annex: Approaches in behavioural public policy
As we move from the flexible and exploratory stage of Behavioural Analysis to the more pre-determined stages of Strategies and Intervention of a BI project, it is useful to take a step back and get an overview of the ways in which BI can be applied. The most famous is nudging, popularised by Thaler and Sunstein (2008) and often seen as the primary application of BI. However, it is just one of several approaches, which may be characterised relative to traditional public policy as follows (for a similar but alternative characterisation, see Oliver, 2017).
Traditional public policy analyses target behaviour as the outcome of rational deliberation and decision-making by agents with unbounded attention and willpower. It conceptualises behavioural problems as the result of lack of information, absence of attitudes or lack of sufficient incentives and motivation. As a result, it pursues behaviour change by providing rational reasons for action, such as information (informational campaigns), presenting and arguing the case (persuasive campaigns), providing incentives (reliefs, rebates, taxation, fees and fines) and legal regulation (formalised prescriptions and prohibitions sanctioned by law).
Pushing understands target behaviours as either outcomes of rational agency or results of laziness. Behavioural problems are thus seen as the result of cognitive miserliness and biases due to agents allocating insufficient priority to attention, information search, deliberation and following through on their intentions. While push politics thus does recognise a behavioural component in the analysis of behaviour, it pursues behaviour change by emphasising and strengthening aspects of choice architectures that provide rational reasons for action beyond what ought to be required from a purely traditional approach. The aim is to trump cognitive bias by having people make meta-decisions about prioritising targeted behaviours so that the problems are resolved through reflective thinking. Doubling cigarette prices, tripling prison sentences, quadrupling traffic fines, and the like, are examples of push politics.
Boosting analyses target behaviour as either an outcome of reflective thinking or a result of lack of competencies. Behavioural problems are analysed as the result of cognitive bias influencing people when they lack the information, skills and competencies to navigate a complex world (Hertwig, 2017). This approach aims to make it easier for people to exercise their own agency in making choices by “boosting” individuals’ own decision-making competencies. It ranges from strategies that require little time and effort on the individual’s part to strategies that require substantial amounts of training, effort and motivation. Providing people with statistical skills or presenting information to them in ways that make them less likely to be influenced by cognitive biases are instances of boost politics.
Curling analyses target behaviours in light of people’s limited motivation and lack of self-control. Behavioural problems are seen as the result of “friction” where people have difficulties following through on their intentions in demanding processes and choice architectures (e.g. as administrative frameworks) or hostile choice environments (e.g. supermarkets). Curling is a paradigm of protection that attempts to weaken, remove and/or counter the psychological mechanisms identified by BI by trying to remove friction in choice architectures or counter illicit “nudges” by, for example, banning certain choice architectural features, such as the EU’s ban of pre-ticked boxes on shopping websites to aid consumers (European Commission, 2014) or imposing mandatory cool down periods on payday loans.
Nudging analyses target behaviours as outcomes of limited capabilities for people to exert rational agency. Behavioural problems are seen as the result of cognitive limitations, biases and heuristics impeding ABCD from conforming to the rules of rationality, thus preventing people from achieving subjectively preferred outcomes in such problems. Nudging aims to influence behaviours by intentionally applying BI, not only in the analysis of behaviours but also as strategic means to achieve behaviour change. It does this by integrating particular “nudges” into aspects of the choice architectures within which decision points are embedded.
Box 2.9. Two definitions of nudge
The concept of a nudge was originally coined in the relevant sense by Richard Thaler and Cass Sunstein in the famous book Nudge: Improving Decisions about Health, Wealth and Happiness (2008). Various revisions have been provided in the academic literature in order to clarify conceptually as well as ethically relevant aspects of the definition, such as whether nudges are intentional interventions and how nudges involve the active use of non-rational psychological mechanisms.
Nudge as originally defined by Thaler and Sunstein
“A nudge, as we will use the term, is any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives. To count as a mere nudge, the intervention must be easy and cheap to avoid. Nudges are not mandates. Putting fruit at eye level counts as a nudge. Banning junk food does not” (Thaler and Sunstein, 2008).
Mechanistic definition
“A nudge is a function of any attempt at influencing people’s judgment, choice or behaviour in a predictable way (1) that is made possible because of cognitive boundaries, biases, routines and habits in individual and social decision-making posing barriers for people to perform rationally in their own declared self-interests and which (2) works by making use of those boundaries, biases, routines, and habits as integral parts of such attempts” (Hansen, 2016).
In addition, nudging may be regarded as the systematic development, testing and implementation of evidence-based nudges, where practitioners rely on psychological theories, such as dual and triple process theories, and make use of experimental methods for effect and policy evaluation.
Sources: Thaler, R. and C. Sunstein (2008), Nudge: Improving Decisions about Health, Wealth, and Happiness, Yale University Press; Hansen, P.G. (2016), “The definition of nudge and libertarian paternalism: Does the hand fit the glove?”, European Journal of Risk Regulation, Vol. 7(1), pp. 155-174.
A savvy behavioural practitioner will keep all approaches for using BI in mind as they move into the stage of Strategies. This also includes knowing when a policy problem is not behavioural at all and thus calls for more traditional public policy tools to address it.
Stage 4: Intervention – Testing BI strategies for informing public policies
Stage 4: Intervention
At this point, a Behavioural Analysis has been conducted and, using ABCD, relevant behavioural insight Strategies that may inform public policies aimed at creating behaviour change have been identified. The next stage, Stage 4: Intervention, aims to test whether these strategies may effectively inform the design and delivery of public policies. In BI, such tests are devised through interventions based on scientific standards of experimentation. Yet, the special purpose of testing strategies to inform actual policymaking, rather than scientific discovery, can make things quite complex. This chapter tries to strike the difficult balance of providing the basics of experimentation in an accessible way, while at the same time informing about some of the more complex possibilities, limitations and problems. It does so by:
1. Explaining some basic features and concepts of the experimental approach.
2. Exploring some fundamental issues that are often neglected in simplified accounts of the experimental approach.
3. Providing the basic steps for carrying out simple BI experiments.
When
It is attractive to think that Intervention is mainly a stage for academics or one that might be skipped by copy-pasting BI strategies that have already been tested with success in other places. However, as will be argued in this chapter, unless the BI team has reasonable evidence that the same mechanisms and boundary conditions are in place for a target behaviour as in past successful interventions and subsequent implementations, there is good reason to test the intervention relative to the target behaviour.
Milestone
The aim of Intervention is to test the effectiveness as well as potential side effects of behavioural insights strategies suggested for informing public policies relative to a target behaviour. If the test proves to be successful, the BI project may use the result as an evidential basis for informing public policies in the next stage: Change.
At the heart of the BI paradigm lies the ambition to evaluate the effectiveness of suggested behavioural insights for informing public policies according to the methodological standards of the behavioural sciences. This stands in marked contrast to many other innovative policymaking methods, which may employ piloting and testing but in a more design-led perspective that is not based on rigorous experimental methods. Thus, Stage 4 of BASIC focuses specifically on the experimental approach that is fundamental to BI, based on a systematic and iterative process of positing hypotheses about human nature, and then designing and evaluating behavioural insights strategies based on these hypotheses to arrive at the best possible strategies for changing the target behaviour.
Basic features and concepts of the experimental approach
To “experiment”, or to “carry out an experiment”, has penetrated everyday language in a sense where it means to “try out new things” or “do things differently than usual” to see if some change might have an effect on something else. Yet, in the sciences, testing through experimentation means something much more precise.
In the sciences, the point of an experiment is to demonstrate the causal relationship between an intervention and its outcome. Said differently, the reason you conduct an experiment is to find out whether making some intervention (i.e. the manipulation of an independent variable) will cause an effect (i.e. a measurable difference in one or more dependent variables). In addition, an experiment may also aim to determine through which mechanism (mediator) a cause produces its effect, under what conditions (boundary conditions), what may moderate it (moderators) and what kind of relationship between cause and effect is obtained (relationship).
An experiment does this by “cloning the world in two”, then simulating what happens in the cloned world (counterfactual) where the only difference is that the intervention occurs, and finally comparing the resulting state of the cloned world with the original one (status quo) to determine whether a difference is obtained. Insofar as the only difference between the two worlds is the prior occurrence of the intervention in the counterfactual world, any subsequent change in the state of that world may be asserted to result from the intervention. That is, the cause of the effect can be attributed to the intervention.
So how may a practitioner conduct experiments that actually teach us something about relationships between causes and effects in the real world and, specifically, about the effects of integrating suggested behavioural insights in public policies? And when can we trust findings to apply to people, contexts and times beyond those conditions within which an experiment is conducted? These are quite difficult questions to answer, but the most prominent answer in current BI is to conduct randomised controlled trials (RCTs) in the field.
Randomised controlled trials
RCTs have been at the core of the evidence-based movement in public policy over the last two decades. Many of its proponents consider RCTs the “gold standard” because they represent the best scientific method available for assessing whether an intervention is effective as well as, if designed ideally, for assessing the nature of the causal relationship, i.e. the mechanisms, involved. This attitude is especially prevalent within the BI community, where many hold RCTs to be the best way of determining whether a policy intervention is effective.
In its simplest form (Figure 2.22), an RCT randomly allocates participants to one of two groups: a group that receives the intervention (the treatment group, or counterfactual) and a group that does not (the control group, or status quo). The treatment is then applied and the difference between the groups is observed and measured in terms of the difference in the dependent variable between the two groups via a post-test. The random allocation is critical in ensuring that the two groups are statistically equivalent in known as well as unknown traits.
If the group of participants recruited for the experiment (the sample) consists of an equal number of men and women (a known trait), the random allocation of participants to the two experimental groups will result in each of these groups converging to the same distribution of men and women as in the sample, as the number of allocated participants grows larger. Likewise, for any unknown trait in the sample, such as an undiagnosed illness or a genetic disposition for type 2 diabetes, the random allocation will ensure that each of the two experimental groups converges to the same distribution as in the sample as the number of allocated participants grows larger. It is in this sense that random allocation to groups serves to create two groups that are equivalent in known as well as unknown traits.
Thus, provided that the participants are allocated randomly to each group, an RCT comes as close as possible to creating a counterfactual world to the status quo – the only difference between the groups involved in the experiment is the intervention received. If no other variable could have influenced the outcome, any subsequent difference between the groups can be attributed to the intervention as its causal effect.
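To make the logic of random allocation concrete, the following minimal sketch (in Python, assuming only the numpy library is available; the traits and proportions are purely hypothetical) simulates how the shares of a known trait and of an unobserved trait in the treatment and control groups converge as the sample grows:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

for n in (50, 500, 5000):
    # Hypothetical sample: one known trait (female: yes/no) and one trait the
    # practitioner never observes (a latent disposition).
    female = rng.random(n) < 0.5
    latent = rng.random(n) < 0.1

    # Random allocation to treatment (True) or control (False).
    treated = rng.permutation(np.repeat([True, False], [n // 2, n - n // 2]))

    print(f"n={n:5d} | female share  T={female[treated].mean():.2f} "
          f"C={female[~treated].mean():.2f} | latent share  "
          f"T={latent[treated].mean():.2f} C={latent[~treated].mean():.2f}")
```

With only 50 participants the two groups may still differ noticeably on either trait, while with several thousand participants the shares are nearly identical – which is exactly why small samples make equivalence between groups less certain, a point returned to below.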
An experimental design refers to the way an experiment is designed to document the potential effect of an intervention. This goes beyond how participants are allocated to experimental groups. In the following, the most basic experimental RCT designs are described.
Post-test only RCT: At a minimum, an RCT requires the random allocation of participants to groups, some intervention and the measurement of the potential effect tested for. The post-test only randomised controlled trial fits this minimal design. In this design, participants are randomly allocated to an experimental “intervention group” or an experimental “no intervention group”, referred to as the control group. After the treatment, a post-test is given to both groups to measure and compare the effect of the intervention in terms of differences in the dependent variable between the two groups.
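As an illustration of how such a design may be analysed, the following sketch (assuming numpy and scipy are available; the outcome “days until payment” and all figures are purely illustrative) simulates a post-test only RCT and compares the two groups with a two-sample t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_per_group = 400

# Hypothetical post-test outcome: days until payment after receiving a letter.
control = rng.normal(22.0, 6.0, n_per_group)    # status quo letter
treatment = rng.normal(20.5, 6.0, n_per_group)  # behaviourally informed letter

t_stat, p_value = stats.ttest_ind(treatment, control)
effect = treatment.mean() - control.mean()

print(f"Mean days, control:   {control.mean():.1f}")
print(f"Mean days, treatment: {treatment.mean():.1f}")
print(f"Estimated effect:     {effect:+.1f} days (p = {p_value:.3f})")
```

For a binary outcome, such as whether a letter was paid or not, a test of proportions would be the natural analogue to the t-test used here.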
Post-test only two treatment comparison trial: A variation of this design is the post-test only two-treatment comparison trial which, instead of having an “intervention group” and a “no intervention group”, has two intervention groups. It then compares the effect of the interventions in terms of the differences in the dependent variable between the two groups by applying a post-test.
It should be emphasised that the post-test only two-treatment comparison trial is quite problematic, since there is no way of knowing whether the two treatments tested are better or worse than the policy status quo. The reason it is mentioned here is that practitioners may easily become attracted to the idea of testing a behaviourally informed intervention against what may be referred to as a “false control”. This may happen when seeking to test a treatment, e.g. a reminder or letter, where no similar treatment (reminder or letter) has existed before, but finding it of too little interest to test against the status quo. This scenario has been observed to lead some practitioners to have public servants write up a “control letter”. Yet, this strategy should be avoided, since any differences could well result from this “false control” being quickly and poorly assembled; practitioners should always retain a control group in the experimental design representing the policy status quo.
However, if sample sizes are small, even the post-test only RCT experimental design just mentioned may become problematic as a low number of participants may allow for differences between groups to creep in, thereby undermining their equivalence. In such cases introducing a pre-test into the experimental design above may offset some of the uncertainty resulting from a small sample size. This provides us with the following experimental design:
The pre-test post-test RCT: In the pre-test post-test randomised controlled trial (see Figure 2.23) participants are measured pre-test on the dependent variable and then randomly allocated to a control group and an intervention group (independently of the result). The latter group then receives the intervention while the control group does not. Finally, participants of both groups are subjected to a post-test measurement. Results from the two groups are then compared relative to each group’s pre-test post-test changes.
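A minimal sketch of how such a design may be analysed on pre-test post-test change scores is given below (assuming numpy and scipy are available; an ANCOVA on the post-test adjusting for the pre-test is a common alternative, and all numbers are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
n = 60  # deliberately small: the pre-test helps most when samples are small

baseline = rng.normal(50, 10, 2 * n)                     # pre-test measurement
treated = rng.permutation(np.repeat([True, False], n))   # random allocation

post = baseline + rng.normal(0, 5, 2 * n)                # noise between measurements
post[treated] += 3.0                                     # hypothetical treatment effect

change = post - baseline
t_stat, p_value = stats.ttest_ind(change[treated], change[~treated])

print(f"Mean change, treatment: {change[treated].mean():.2f}")
print(f"Mean change, control:   {change[~treated].mean():.2f}")
print(f"Difference in changes:  "
      f"{change[treated].mean() - change[~treated].mean():.2f} (p = {p_value:.3f})")
```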
These experimental designs are aimed at testing a single-factor intervention at a time. This is completely in line with how some researchers think about doing BI experiments in the real world: only test one factor at a time so that the causal relationship may be truly isolated. Too many BI practitioners tend to repeat this mantra, taking it to mean that one can only test one factor in any given experiment. Fortunately, this is not quite so. Provided that one knows one’s way around experimental design, more than one factor may be tested in one and the same experiment “almost for free”. One way of doing this is by using factorial designs.
Factorial designs: A factorial design tests two (or more) independent variables and their potential interaction effect at the same time by combining the “levels” of one factor (including the binary case of the “absence” or “presence” of an intervention) with the levels of another factor. For instance, imagine you want to test “Intervention A: Salient deadline” and “Intervention B: Social proof”. Now, you may either do two RCTs or pursue a 2x2 factorial design like the one in Figure 2.24 (a minimal analysis sketch is given after the list below).
A 2x2 factorial design like this implies that four groups for covering all possible combinations of the two interventions are created and participants randomly allocated to these. One reason for working with a factorial design like this is that it provides more information than two separate RCTs because it allows you to study the effect of an intervention relative to:
the level (e.g. presence/absence) of a second principle (2 or 3 compared to 4)
the combined effect relative to the control (1 compared to 4)
any potential interaction effect of the 2 interventions tested (2 + 3 compared to 4).
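The analysis sketch referred to above is given here: a minimal illustration (assuming numpy, pandas and statsmodels are available, with purely hypothetical effect sizes) of how a 2x2 factorial design may be analysed in a single regression with two main effects and an interaction term:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=3)
n_per_cell = 250

rows = []
for deadline in (0, 1):          # Intervention A: salient deadline absent/present
    for social_proof in (0, 1):  # Intervention B: social proof absent/present
        # Hypothetical outcome with two main effects and a small interaction.
        y = (0.30 + 0.05 * deadline + 0.03 * social_proof
             + 0.02 * deadline * social_proof
             + rng.normal(0, 0.15, n_per_cell))
        rows.append(pd.DataFrame({"y": y,
                                  "deadline": deadline,
                                  "social_proof": social_proof}))

data = pd.concat(rows, ignore_index=True)
model = smf.ols("y ~ deadline * social_proof", data=data).fit()
print(model.summary().tables[1])  # main effects and the interaction term
```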
Although factorial designs allow for learning a lot from one experiment, they may quickly grow out of hand relative to the sample size available. For instance, if you have three insights you want to test (absent/present), say formulations integrating loss aversion, a salient deadline and social proof respectively, you will need eight groups. In this case, one may opt for a “fractional factorial design”. This design offers additional possibilities, but as we shall see one should also use it with some care.
Fractional factorial designs: If the number of combinations in a full factorial becomes too high to be feasible, a fractional factorial design may be used. In such a design only some of the possible combinations are tested.
A particular use of the fractional factorial design in BI may be illustrated using the 2x2 factorial design above. Assuming that budgets, available sample size, institutional cautiousness or some other constraint only allows for three rather than four groups, the fractional factorial design may omit one group, e.g. the group testing Combination 3 in the example above, such that only Combinations 1, 2 and 4 are tested. This allows for an experimental design sometimes informally referred to as a “multi-layered experiment”, because the test builds up by adding layer after layer of interventions for the experiment to test (see Figure 2.25).
While a fractional factorial design allows one to create layer-by-layer interventions, it also presents a standard pitfall when applying BI to real-world public policy. That is, the stakeholder ordering the experiment may see no reason to waste resources on Group 1, where the control is established, and rather focus on Groups 2 and 4. This can happen in the public policy space, where legal obligations of the stakeholder institution could apply or a moral imperative to provide the best public services could be argued.
Thus, imagine a team that considers applying a fractional factorial design like that above. By hypothesis, the team expects that behavioural insight a will have a larger effect than behavioural insight b. Simultaneously, the stakeholder institution in which the experiment is to be conducted wants the experiment to have the best overall effect possible using the least amount of resources. From this perspective, the stakeholder sees no reason to test the basic control letter and insists that the experiment should only involve Groups 2 and 4. Such insistence is not farfetched, as it may result from the legal obligations of the stakeholder institution. Next, imagine that since insight a is expected to be most effective, they ask for a to constitute Treatment A, adding the moral reason that as a public body they are expected to provide the best service possible to the public. Given this scenario, what should the team do and why?
The intuitive thing to do here might be to go with the wish of the stakeholder. However, this would be a mistake. If insight a is expected to have a large effect, and insight b only a minor effect if any, then there is a good chance that the difference between Groups 2 and 4 will be so minuscule that there will be no significant difference between the two results. Even worse, assuming that insight a had a marvellous effect, the team will not even be capable of showing this. As the control group was dropped, all they will have to point at is an insignificant difference between Groups 2 and 4. Thus, in this case, the team should, as also mentioned earlier, first and foremost insist on retaining a control group. If this is not possible, then the team should insist on testing the insight expected to be least effective, i.e. b, as Treatment A.
Quasi-experiments – When randomisation is not possible
When conducted correctly, an RCT has the potential of demonstrating actual causal relationships between interventions and outcomes in the real world. However, the real world does not always allow for the random allocation of people to experimental groups without seriously distorting the target behaviour, to the point that the experiment is no longer about the causal relationship intended. As a result, the quasi-experiment has become a widespread alternative to RCTs in social experimentation.
The quasi-experiment is a research design involving an experimental approach more or less identical to an RCT but where random allocation to treatment and control group has not been used (Campbell and Stanley, 1963). Consequently, the equivalence between groups cannot be guaranteed, resulting in a series of threats to the internal validity of the experiment. For this reason, quasi-experiments are often portrayed as a second-best choice to be considered when the behavioural intervention studied does not allow for the random allocation to groups. Examples of valuable quasi-experimental designs include:
Regression discontinuity (RD): where participants are assigned to treatment and control groups based on a cutoff point of an assignment variable. The discontinuity between the treatment and control trends is then measured.
Propensity score matching (PSM): where participants in the treatment group are paired with participants in the control group based on the similarity of their propensity scores (the estimated probability of receiving the treatment given observed characteristics), to account for selection bias.
Difference in differences: where the effect of a treatment or a policy is estimated by comparing the pre- and post-treatment differences in the outcome between the treatment and control groups (a minimal sketch of this estimator is given after this list).
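The following minimal sketch illustrates the difference-in-differences estimator mentioned above, first computed directly from group means and then as a regression with a group-by-period interaction (assuming numpy, pandas and statsmodels are available; all figures are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=4)
n = 500

df = pd.DataFrame({
    "treated": np.repeat([0, 1], n),                 # e.g. two municipalities
    "post": np.tile(np.repeat([0, 1], n // 2), 2),   # before/after the policy
})
# Hypothetical data: a common trend of +2, a level difference of +5,
# and a true treatment effect of +3.
df["y"] = (10 + 5 * df["treated"] + 2 * df["post"]
           + 3 * df["treated"] * df["post"] + rng.normal(0, 2, len(df)))

means = df.groupby(["treated", "post"])["y"].mean()
did = (means[1, 1] - means[1, 0]) - (means[0, 1] - means[0, 0])
print(f"Difference-in-differences estimate: {did:.2f}")

# The same estimate from a regression with an interaction term.
print(smf.ols("y ~ treated * post", data=df).fit().params["treated:post"])
```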
As some of the most relevant and interesting real-world behaviours, especially when it comes to public policy, do not allow for randomisation, quasi-experiments should perhaps be accepted as the realistic standard, rather than the alternative. If not, the very method of experimentation may end up biasing what is studied experimentally – a bias, which is already clearly detectable in BI, where experiments on conformity to messages sent by letters and similar behaviours conducive to randomisation are massively overrepresented.
In addition, as the pioneers of experimental design, Cook and Campbell (1979), argued, even randomised controlled trials should be planned so as to be interpretable as quasi-experiments, in case something goes wrong with the randomised design, as it often does in the real world. On the negative side, this means that a series of precautionary measures should always be taken relative to the design of the experiment as well as the analysis planned. On the positive side, it usually means that additional information is collected, for example about the background of participants, allowing for more interesting analysis.
Quasi-experimental designs may thus be regarded as a natural starting point for practitioners, challenging them to think creatively about how to approximate random allocation in the real world, rather than insisting on creating a randomised sample when this may introduce artificiality into the behaviour.
This conclusion is important for policymakers, as they are usually the ones deciding which BI interventions to fund and accept. The tendency amongst some researchers and practitioners to portray RCTs as the only way of working scientifically with BI distorts and limits how behavioural science actually works when it comes to testing interventions of actual relevance for public policy.
Learning “what works” from experiments
The only way researchers and practitioners can properly design an experiment is if they can specify in advance the variables to be included and the experimental protocols to be followed. For this to be possible, researchers and practitioners must already have a substantial conceptual grasp of the behaviour to which the BI intervention tested is to be applied. In following a diagnostic method like BASIC, part of this conceptual grasp should be in place. However, if experiments are developed without a conceptual grasp of the behaviour, achieved through some sort of diagnostic effort, experimentation might easily prove a risky strategy in which precious resources are wasted and citizens are used as participants in haphazard experiments.
More importantly in this connection, without such a conceptual grasp of the target behaviour, practitioners will end up learning nothing about what works from testing BI interventions through experimentation – they will only be able to say what worked in the particular experiment carried out. This is because an experiment in and by itself only shows that a particular intervention caused a certain outcome (causal description), but reveals nothing about the actual mechanisms by which it did so (causal explanation) – except insofar as the experiment is deliberately designed to control for possible alternative mechanisms (Robson and McCartan, 2016; Shadish, Cook and Campbell, 2002). If this is not the case, practitioners will be in the dark as to how to generalise their findings from “what worked” into those principles of “what works” that are to behaviourally inform public policy. This is not a point to be dismissed as merely of “theoretical and academic interest”. Carrying out experiments without a close eye to this issue undermines the whole point of experimentation as well as the very possibility of behaviourally informed public policy – to know “what works” means to know how it works, for whom and under what conditions.
What was written at the beginning of the chapter thus takes on a new nuance. The reason for conducting an experiment might be to find out whether making some intervention (i.e. the manipulation of an independent variable) will cause an effect (i.e. a measurable difference in one or more dependent variables). However, the reason why one chooses to experiment is to find out how that knowledge can be used to inform future decisions. For this to be possible, one also needs to determine through which mechanism a cause produces its effect (mediator), under what conditions (boundary conditions), what may moderate it (moderators) and what kind of relationship between cause and effect is obtained (relationship). This latter “addition” is crucial. It is what allows practitioners to generalise the findings of the experiment.
Generalising experimental findings
Another relevant aspect of generalising experimental findings is that of sample size. Non-researchers sometimes think that the sample recruited for an experiment needs to be representative of a population, and thus ideally comprise 1 000 or more participants selected randomly from the wider population. This is due to their familiarity with traditional methodologies such as representative surveys. However, while such vast and representative samples would obviously be nice in an experiment, they are not necessary.
As just discussed, the point of an experiment is to test the causal effect of an intervention. For this, it is not necessary to consider the sampled participants’ representativeness relative to a larger population. Given random allocation to groups as in an RCT, as well as strict control over the experimental setting such that the groups differ only in the intervention introduced to the treatment group, any subsequent difference between the two groups must be an effect of the intervention. From this, it follows that the necessary sample size for demonstrating such an effect ultimately depends on the size of the effect and can be calculated by a statistical method known as “power analysis” (see Box 2.10).
Yet, one should still pay attention to the composition of the sample featuring in an experiment. This is because experimentation is concerned with drawing lessons that may be generalised, and the composition of the sample constrains the conclusions that may be drawn from the measured effect of the intervention on the participants (what worked) to the likely effect of the intervention on other people in the real world (what works).
Importantly, similar points also hold true with regard to the context within which the experiment took place; in particular, the conditions under which the causal relation was produced, the mechanism that mediated cause and effect, and the potential moderators. This is what was alluded to in Stage 1 when writing that BI is about behavioural insights. One needs to identify the behavioural insights at work, as defined there, to provide behaviourally informed policy advice. While issues about generalisability, such as representativeness, are not fundamental when designing experiments for understanding “what worked”, they are fundamental when designing experiments to inform public policy about “what works”.
Box 2.10. Power analysis
The necessary sample size for demonstrating an effect ultimately depends on the size of the effect and can be calculated by a statistical method known as “power analysis”. Power analysis allows one to perform a backwards calculation from the size of the effect to the sample size needed to show this effect to be significant. It thereby allows researchers and practitioners to determine the sample size required to detect an effect of a given size with the required degree of confidence. This means that the number of participants needed for an experiment may be derived from knowledge of the effect size and the confidence level wanted. Of course, this creates something of a catch-22, as the eventual size of the effect can never be known until it has been shown. To some extent, the mapping of a behavioural pattern done as part of Behaviour, as well as piloting the experiment (see Step 7 below), may give some clues about what to expect. However, a better approach when applying BI to public policy may be to decide what effect size would be acceptable for developing a larger policy intervention, given the expected costs and benefits potentially resulting from the intervention, and then derive the number of participants needed to detect such an effect with an acceptable probability. As conducting power analyses can be quite technical, practitioners will often be well advised to seek out external expertise on this matter.
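As an illustration of the kind of calculation described in Box 2.10, the following sketch (assuming the statsmodels library is available; the effect sizes are illustrative assumptions, not recommendations) derives the required number of participants per group for a two-group comparison at conventional levels of significance and power:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for effect_size in (0.1, 0.2, 0.5):  # small to medium standardised effects (Cohen's d)
    n_per_group = analysis.solve_power(effect_size=effect_size,
                                       alpha=0.05, power=0.80,
                                       alternative="two-sided")
    print(f"d = {effect_size:.1f}  ->  about {n_per_group:.0f} participants per group")
```

The pattern to note is that halving the expected effect size roughly quadruples the required sample, which is why deciding on the smallest effect size worth detecting is so consequential.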
Ultimately, generalisability is a complex area in scientific research. Two core strategies for working with generalisability may, however, be mentioned here (Robson and McCartan, 2016):
1. Direct demonstration is a strategy where researchers and practitioners try to replicate an experiment, carry out further experiments with other types of participants or conduct the experiment in a different context.
2. Making a case is a strategy where researchers and practitioners try to argue that it is reasonable to expect the results will generalise due to the sample, setting or mechanisms studied in the experiment.
The ABCD framework is in a sense an example of making a case, by asserting that the systematic relationships between certain aspects of behavioural problems – Attention, Belief formation, Choice and Determination – and particular solutions are so robust that they may be generalised. Still, when it comes to informing the design and development of particular public policies, BASIC holds that making a case is mainly an approach for identifying potential strategies to integrate into a policy intervention, which often needs to be tested at several levels relative to the target behaviour in the target population in order to inform public policy in terms of a general behavioural policy principle.
Proving principle, practice and policy
Most of the experiments inspiring BI have traditionally taken place in laboratory settings. A laboratory is an artificial setting constructed with the sole purpose of allowing researchers and practitioners to control almost all factors. This allows them to test very precisely the effect of a cause, support claims about mediators and manipulate moderators in order to assess their impact on the effect. As such, a laboratory is the perfect place to test for the existence and nature of a behavioural insight. In this way, laboratories provide proofs of principle.
Proofs of principle. The necessary artificiality of laboratories challenges the generalisability of their findings in several ways (Aronson, Brewer and Carlsmith, 1985). First, laboratory experiments may lack experimental realism. This is the case if an experiment fails to put participants in a real situation, such that it does not engage the participants properly or has no real impact on them. Second, laboratory experiments may lack mundane realism. This is the case if the participants encounter events in the laboratory which are very unlikely to occur in the real world. Third, the findings discovered in a laboratory may easily be so fragile that they have no bearing in the noisy world outside the laboratory.
Besides this, laboratories also invite certain biases into their findings. The two most important of these are demand characteristics, which bias the behaviour observed in the lab because participants know they are part of an experiment, that they are being observed, and that the behaviour they exhibit will be the object of interpretation – consequently, the behaviour observed will not only be influenced by the intervention, but also by the participants’ interpretation of what effect the intervention is supposed to have on them; and expectancy effects, which bias results through the practitioner’s (usually unwitting) expectations about finding support for the experimental hypothesis.
Taken together, these problems inherent in laboratory experiments may in many instances be argued to undermine proofs of principle as direct sources for informing public policies. Instead, a more fitting role for them may be argued to be that of informing field experiments, which in turn may inform public policies by providing proofs of practice.
Proofs of practice. Policies are supposed to work in the real world – not the artificial world of the laboratory. In moving experiments out of the laboratory and into natural settings, one minimises certain issues pertaining to the generalisability of findings. If something works in a field experiment, it works in the real world – at least for the specific intervention tested. Likewise, some of the biases liable to affect laboratory studies are also avoided. For instance, in field experiments, participants will often not know that they are participating in an experiment. Hence demand characteristics are minimised as well. Finally, while laboratory experiments usually recruit participants amongst students, field experiments tend to observe participants from groups that usually engage in the target behaviour. As such, field experiments are the perfect setting to test the real-world effectiveness of a BI intervention. In this way, field experiments provide proofs of practice.
However, field experiments also present certain drawbacks. First and foremost, it will often be difficult to allocate participants randomly so as to establish equivalent experimental groups without essentially just moving the laboratory into the field. However, with increased digitisation more and more target behaviours are becoming conducive to random allocation in field experiments. If randomisation is not possible, quasi-experiments are a second-best option. Also, interactions between participants in a field experiment are not as rare as one would expect – when participants interact within experimental groups as well as across experimental groups, it vitiates their random assignment as well as violates their assumed independence. That said, perhaps the most important problem of field experiments is that the loss of control relative to the laboratory setting makes it difficult to get a sufficient conceptual grasp on details to allow for causal explanations to be tested (see discussion above). Consequently, field experiments are usually not sufficient by themselves for providing proofs of policy principles unless specifically designed for this.
Box 2.11. Real world situations conducive to randomised experiments
1. When lotteries are expected.
2. When demand outstrips supply.
3. When an innovation cannot be introduced to everyone simultaneously.
4. When participants are isolated from each other.
5. When a tie can be broken.
6. When people express no preference among alternatives.
Source: Adapted from Cook, T. and D. Campbell (1979), Quasi-experimentation: Design and Analysis Issues for Field Settings, https://www.scholars.northwestern.edu/en/publications/quasi-experimentation-design-and-analysis-issues-for-field-settin; Robson, C. and K. McCartan (2016), Real World Research, https://www.wiley.com/en-us/Real+World+Research%2C+4th+Edition-p-9781118745236 (accessed on 7 November 2018).
What then constitutes proof of policy, if neither laboratory experiments nor field experiments may do this in and by themselves? A possible answer may be gathered from Levitt and List (2005) who looked at the two approaches – lab and field – and concluded that: “the sharp dichotomy sometimes drawn between lab experiments and data generated in natural settings is a false one. Each approach has strengths and weaknesses, and a combination of the two is likely to provide deeper insights than either in isolation”.
If this is the case, one may argue that proofs of generalisable policy principles are not to be found in any single experiment, whether laboratory or field. A laboratory finding may fail to generalise into the field, and a field experiment may fail to generalise across contexts that may seem similar. Yet, by combining the two strategies, laboratory experiments may deliver insights into the causal relationships needed to generalise successfully across real-world settings (what works); and field experiments may deliver the generalisation from the laboratory to the field, showing whether what works in the laboratory also works in the real world. In particular, building up evidence through iterated experimentation may provide the behavioural insights that may ultimately be used to inform public policy.
The main steps for carrying out a BI experiment
1. Integrate strategies into a prototype policy intervention. Integrate the principles you identified as potential Strategies (Stage 3) for influencing the target behaviour into a prototype intervention that could realistically be implemented as part of public policy.
2. Collect feedback for improving your prototype intervention. Consider whom to involve and how, including people from the target group of the intervention, to get valuable input and feedback on the prototype intervention. When done, make revisions and iterate the process starting from (1) until you feel ready to proceed to (3).
3. Determine the variables of the experiment. Determine what variables potentially, realistically and ethically may be manipulated and measured, including background variables, independent, dependent and proxy variables.
4. Select experimental setting and design. Determine which kind of experiment (field or laboratory) and which kind of experimental design is feasible for testing the effect of the prototype intervention given the constraints set by the project, the involved institutions and the real world. In particular, against this background, also determine what sample size is necessary for detecting an effect size sufficiently large to justify running an experiment.
5. Develop experimental protocols for testing interventions. Develop an experimental protocol for testing the intervention, including procedures for sampling, data collection and data analysis, and share this with relevant people – researchers as well as BI and policy practitioners – to get feedback and input for making necessary revisions (a minimal allocation sketch is given after this list). When done, make revisions to the protocol and iterate this step until you feel ready to proceed to (6).
6. Obtain approval and pre-register your experiment. Consider pre-registering the study, whether approval from an ethical review board is necessary, and which legal resources attached to the institutions involved in the project to consult. Consider also whether to involve people from the target group of the intervention to get input and feedback on the ethical aspects of the intervention. In particular, define potential “ABORT” conditions.
7. Conduct a pilot-experiment. Conduct a pilot or pre-test of the prototype as well as important aspects of the protocol, so as to examine: i) whether institutional, technical and systemic aspects work out as expected; ii) what challenges to time schedules and other unforeseen factors might reveal themselves in the process; iii) potential indicators of what effect size of the intervention to expect; iv) the feasibility of the planned data analysis; and v) whether revisions to the prototype and the protocol are needed – thus returning the process to (5) – before continuing to (8).
8. Carry out the experiment. Use the advice located at the beginning of this section to determine your final experimental method and follow appropriate standards for rigorous experimental methods.
9. Analyse the result. Follow the planned analysis as described in the protocol and discuss any possible changes to this with relevant researchers, the ethical review board (if involved) as well as the project advisory board if this has been established.
10. Write up the experiment: procedure, results and perspective. Write up a report on the experiment independently of the result and register this in the relevant databases.
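As an example of the kind of reproducible procedure an experimental protocol (step 5) might specify, the following sketch (in Python with numpy; the participant identifiers, group labels and seed are hypothetical) allocates participants to groups in balanced blocks using a fixed random seed, so that the allocation can be audited and reproduced:

```python
import numpy as np

def allocate(participant_ids, groups=("control", "treatment"), seed=2018):
    """Return a dict mapping each participant id to a group, block-balanced."""
    rng = np.random.default_rng(seed)
    ids = list(participant_ids)
    allocation = {}
    block = len(groups)
    for start in range(0, len(ids), block):
        labels = rng.permutation(groups)  # each block contains every group once
        for pid, label in zip(ids[start:start + block], labels):
            allocation[pid] = label
    return allocation

if __name__ == "__main__":
    assignment = allocate(f"P{i:03d}" for i in range(10))
    for pid, group in assignment.items():
        print(pid, group)
```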
Ethical guidelines for testing behaviourally informed policies (Intervention)
The stage of Intervention is unavoidably one that intervenes in people’s lives by manipulating independent variables to observe how this systematically affects the behaviour of participants. In addition, experimentation invariably involves targeting groups of people differentially, often giving some groups of people a treatment that one has reason to believe will positively affect their lives, while withholding this treatment from at least one other group. Hence, it is not surprising that ethical issues need to be considered from the very beginning of designing an experiment, through its implementation and completion, to the reporting of its results.
Considering ethics relative to Intervention is usually done by consulting three sources (Shadish, Cook and Campbell, 2002):
Ethical codes of conduct.
Informed consent.
Institutional review boards (IRBs).
The behavioural literature and associations refer to experimental disciplines that for years have devised important resources for addressing these three points. However, it is important to notice that particular ethical codes of conduct, guidelines and procedures are not always uniformly applicable to all types of experimental research. They have mostly been developed to serve medical research, and other areas, such as BI, have different needs and requirements. Hence, researchers and practitioners need to orient themselves within standard ethical guidelines and codes as well as fit these to the special circumstances they are working under.
Of the three sources of ethics relative to the stage of Intervention, the latter two sources were already addressed by the ethical guidelines sketched at the end of Analysis (Stage 2). Thus, practitioners new to experimentation need to consult those guidelines carefully before embarking on running experiments.
The following guidelines relate to some key ethical and legal issues that one needs to consider when running experiments with BI applied to public policy. They can be summarised as:
Be aware that interventions unavoidably intervene in people’s lives. Experiments intentionally give one group a treatment that is believed to have a positive impact, while withholding this treatment intentionally from another group. You must orient yourself within the standard ethical guidelines and codes that fit into the special circumstances of the behavioural project.
Obtain appropriate legal consent and demonstrate the necessity of the experiment. You should consider if the laws in your country deem experimentation as legally permissible in public service. It may also be necessary to demonstrate that the intervention will improve a policy situation, reveal knowledge not currently known, provide necessary data, be used to inform policy and protect the rights of individuals.
Always consult experience. Make sure that experiments are conducted by people with experience in experimental design, intervention and reporting to ensure proper protocols are followed.
Ensure justice, fairness and distributional impacts are considered. You need to consider and address the potential ethical issues that arise from one group receiving a treatment and the other not. This may require deploying safety valves for discontinuing the experiment for ethical reasons or compensating/offsetting groups after the experiment.
Take all measures to protect data privacy and confidentiality, as well as ensure ethical data analysis. You should carefully consider using procedures and protocols that ensure the confidentiality of participants, for instance, by using randomised response methods or determining not to collect or connect any data about potential identifiers. Ethical data analysis can be strengthened by pre-registering studies, accounting for data outliers, truthfully reporting on attrition, and strictly following standards of statistics and their representation.
Table 2.8. Ethical guidelines for Stage 4: Intervention
1. Consider whether legal permissions for experimentation should be obtained. Even though the law in some countries views experimentation as a legitimate means of exploring public policy issues, practitioners should pay close attention to the legal issues prompted by experimentation. For instance, most countries embrace the principle of equality of treatment requiring that individuals who are similar in relevant ways should be treated similarly. Yet, experiments often require that individuals who are similar in relevant ways should be treated differently. Thus, experimentation in public policy requires researchers and practitioners to consider whether legal permission is needed and seek to acquire such permissions when necessary. |
2. Demonstrate the appropriateness and necessity of experimentation. Large differential effects between citizens cannot be justified by appeals to some larger benefit to those who might receive improved policy in the future as a result (Shadish, Cook and Campbell, 2002). Thus, before an experiment is conducted, it might be worth demonstrating that (Federal Judicial Center, 1981): • The current policy situation needs improvement. • The effect of the proposed intervention for improvement is not already known. • Only an experiment could provide the necessary data to clarify the question. • The result of the experiment will be used to inform existing practices or policies. • The right of individuals will be protected in the experiment. |
3. Always consult experience. Make sure that new experimenters always consult people with experience in experimental design, intervention and reporting to help generate suitable protocols for experimental designs, pre-tests and experimental tests, including protocols for informed consent, debriefing and reporting. This is especially important to do with regards to any deviation from customary practices for existing codes of conduct within the field in which the BI intervention is tested. |
4. Consider justice, fairness and other ethical aspects when sampling. Interventions often treat people differentially by withholding experimental treatments: treating some groups of people with a treatment that should positively affect their lives, while withholding this treatment from at least one other group. Before considering how to deal with this kind of differential treatment, just being part of the experimental sample means that one receives differential treatment relative to those people not part of the sample frame. Practitioners need to consider and address the potential ethical issues arising from this kind of differential treatment relative to sampling. |
5. Deploy compensatory experimental designs if possible. While interventions often treat participants differentially, certain features of experimental designs may compensate for or offset some of the ethical issues that arise. The procedure of randomisation may itself be regarded as such a feature, as participants by definition have equal chances of ending up in each of the experimental groups. However, other features of experimental designs may also compensate for or offset unequal treatment. For instance, where conditions are suitable, practitioners may opt for a crossover design, such that experimental groups switch places as control and treatment groups. Another strategy is to opt for within-group designs such as pre-test post-test designs, where the behavioural effect of a treatment on a group of participants is compared to the same group’s behaviour before the treatment was applied. |
6. Compensate or offset differential effects between groups. It is not always possible to deploy an experimental design that compensates or offsets potential differential effects between groups, which raises ethical issues. In such cases, practitioners may consider whether post-experimental measures for compensating or offsetting such effects are available. For instance, participants in a group subject to negative differential effect relative to other groups may receive an extended deadline or an additional reminder for complying with existing regulation. In other cases, participants may receive compensatory benefits, such as educational advice, special options or first treatments to offset such effects. What makes up compensatory or offsetting measures will depend on the specific purpose of an experiment. |
7. Deploy routes for discontinuing experiments for ethical reasons. Plan in advance for ongoing experiments to be halted if negative side effects unexpectedly occur or if one experimental group experiences dramatically better results than another. This also requires planning preliminary analyses at fixed intervals that allow for prematurely discontinuing the experiment for ethical reasons. While standard in medical research, this practice is just as important when devising experiments for behaviourally informed policies. |
8. Protect data privacy and confidentiality. As mentioned in Stage 1: Behaviour, BI projects often collect and connect data in ways not usually done in public policy development and design. In addition, testing BI interventions may involve further collection of data from participants who have agreed to be part of an experiment or study. It is important to observe that the confidentiality of research data is not necessarily protected by law – especially not when interventions are tested by public authorities themselves. However, this does not permit practitioners to plan experiments, including when obtaining consent, in which participants are not guaranteed the kind of confidentiality stated in the consent or expected by citizens who decide to participate. For this reason, even when employed within public organisations, practitioners should carefully consider using procedures and protocols that ensure the confidentiality of participants, for instance, by using randomised response methods (illustrated in the sketch following this table) or determining not to collect or connect any data about potential identifiers. |
9. Ensure ethical data analysis. Statistical analysis may easily be tweaked to misrepresent findings in ways that misdirect laymen, who tend to perceive numbers and statistics as objective facts. Practitioners are responsible for doing their best to avoid misrepresentations, especially in BI, where one cannot be excused by assuming that people ought to know better. This guideline not only concerns the representation of data but also its analysis. It is thus important that researchers and practitioners comply with principles for the ethical production and analysis of data in all aspects of handling it – from pre-registering studies, through accounting for data outliers and truthfully reporting on attrition, to strictly following standards of statistics and their representation. |
10. Prevent misrepresentation as best as possible. Even if all the guidelines above are followed, it is still part of the scientific social responsibility of practitioners to do their best to prevent misrepresentation and overstretching of results. BI has seen its fair share of misrepresentations and overstretches, simplifying mechanisms too much and overstretching lab findings to explain almost any real-world phenomenon. For this reason, practitioners should make clear not only results and conclusions but also their limits relative to the interpretation of real-world phenomena and what needs to be studied further before drawing appealing conclusions. |
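Guideline 8 above mentions randomised response methods as one way of protecting confidentiality. The following minimal sketch (assuming numpy is available; all figures are illustrative) simulates Warner’s classic randomised response design, in which the analyst only ever sees a randomised yes/no answer yet can still estimate the prevalence of a sensitive trait in the group:

```python
import numpy as np

rng = np.random.default_rng(seed=7)

true_prevalence = 0.20   # unknown in practice; set here only to simulate data
p = 0.75                 # probability of being asked the question directly
n = 2000

truth = rng.random(n) < true_prevalence
direct = rng.random(n) < p             # which question the randomising device selects
# "Yes" if asked directly and the true answer is yes, or if asked the negated
# question and the true answer is no. Only the yes/no answer is recorded.
answered_yes = np.where(direct, truth, ~truth)

lam = answered_yes.mean()
estimate = (lam + p - 1) / (2 * p - 1)  # Warner's unbiased estimator
print(f"Observed 'yes' share: {lam:.3f}")
print(f"Estimated prevalence: {estimate:.3f} (true value in this simulation: {true_prevalence})")
```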
Stage 5: Change – Implementing behaviourally informed policies
Stage 5: Change
When a BI project enters this stage, significant effort has been put into the Behavioural Analysis in seeking to identify a target behaviour and understand why people act as they do, identify effective and responsible behavioural insight Strategies that match the behavioural problem, and test a prototype policy Intervention. The BI project enters Stage 5: Change when tests have produced promising results indicating that a behavioural insight can be developed into a full policy intervention – or when repeated failure brings the project to an end, so that the community can learn from what did not work and the BI field can advance.
When
Obviously, without effective implementation of the successfully tested, behaviourally informed policy, there will be little if any effect of the work done. Yet, Change is also the stage where the temporary communion of mutual interests of all those involved in the BI project may dissolve with the potential result that nothing gets implemented, or what gets implemented is very different from what was intended. To prevent this from happening, Stage 5: Change includes a series of tools and considerations relative to the effective and successful implementation of behaviourally informed policy.
Milestone
The aim of Change is to inform public policies about the findings from the project and ensure that society gains the broadest possible value from the insights gained. BASIC suggests that this is done by reaching the final five-point milestone:
1. Revisiting the political context and project level.
2. Implementing and scaling behaviourally informed policies.
3. Setting up monitoring of long-term and potential side effects.
4. Maintaining the policy initiative.
5. Disseminating knowledge widely.
Revisiting the political context and project level
The first step is to come full circle and revisit the policy context or policy challenge that originally motivated the project, as well as the project brief that defined the approach and scope of the project. As public policy situations and interests change all the time, ensuring that interventions as well as the process of implementation are aligned with the current situation is key. Even though the priority filter in Behaviour tries to take precautionary measures against such changes, a series of factors are still often seen to change, with potential relevance for interventions and their implementation. These include:
Digitisation: Digital platforms and technologies develop at an ever-increasing pace. Practitioners will often find that the programme software and digital systems involved in a project may have changed and offer new constraints or possibilities that need to be taken into account when developing a plan for implementation. There are many examples of this problem in BI, where many original projects have delivered behavioural insights into letters sent from public bodies, only to find that those organisations transitioned into digitising their communication at the same time. The same is currently the case in consumer research, where projects about certain markets or consumer conditions are overtaken by the development of digital markets.
Policy interests: Political and policy interests sometimes change at an even faster pace than technological development. Factors external as well as internal to the project might have caused priorities to flip. New and pressing policy challenges may have crowded out interest in the current project, or the policy problem might have developed into a more pressing concern and called for more immediate action. Internally, the implementation of a BI project might suddenly be top of a minister’s agenda if, for instance, the results are very promising; or interest may have waned, for instance if the results were too meagre or too technical to promote a public agenda.
Regulatory context: Regulations might have been passed that have rendered the intervention superfluous or out of step with the rules. The former is represented by interventions designed for a behavioural problem which has since become subject to legislative push (e.g. the problem at stake has been regulated through traditional means) or legislative rollback (e.g. when a law is abrogated so that the intervention is no longer relevant). The latter is even more important, as changes in the legal landscape might call for revisions in the design of the intervention (e.g. when new data-protection rules require changes in a digital implementation).
Institutional structure: The period where BI has emerged has also been one where institutional reforms have been popular. Thus, it is crucial that practitioners take institutional reforms, changes to structure and dynamics into account before embarking on implementing a behaviourally informed policy intervention.
Public opinion: Last, but certainly not least, any plan for implementing a behaviourally informed policy intervention needs to take changes in public attitudes and sentiments into account. Cases with relevance to the policy challenge or policy problem addressed by the project may have received considerable public attention during the execution of the BI project, which means that the policy intervention suggested by the project needs to be implemented with an eye to this. Thus, the practitioner should consider consulting on the proposed intervention with citizens, businesses, non-profit organisations and other affected groups, both to get a view of how the policy intervention might be received and, equally important, perceived, and to gain further support up front from these stakeholders.
Besides looking into factors with potential relevance for the behaviourally informed intervention and its implementation, practitioners also need to revisit the ambitions and scope of the original project brief. Although all changes made to the original project brief during its execution might have been acknowledged by all relevant parties throughout the project, the implementation plan still needs to take the original brief into account to make clear how what is to be done next connects with the original idea behind the project. In particular, the implementation and next steps should revisit the ambitions that pertained to the project relative to its level (see above):
Institutional level projects aimed to apply BI to a wider institutionalised domain to provide an understanding of how this approach may help to transform public policy development and/or delivery. The ambition is thus to explore the “institutional fit” of BI, so to speak, by: i) providing knowledge about the institutional potential and relevant processes and methods involved when working with BI; ii) carrying out interventions that may serve as proof-of-concept; and iii) identifying the possible institutional obstacles that working with BI presents to the particular institution and its domain.
Strategic level projects aimed to apply BI to one or more issues from a defined list of existing policy problems that challenge a particular institutional domain or sector. The ambition is thus to deliver viable and effective policy insights and solutions which are cost-effective compared to alternative policy measures by: i) extending existing knowledge about BI and building capacity for this within the institution; ii) applying the lessons learned from former institutional projects to strategic level problems to test for their robustness; and iii) providing scalable long-term solutions to one or more existing policy issues.
Behavioural level projects aimed to apply BI directly to a specific behavioural problem in the institutional domain or sector. Policymakers, stakeholders and collaborators usually assume that the tools and methods for applying BI in public policy design and delivery are more or less fully developed. Thus, behavioural level projects are expected to fully integrate into the everyday decisions and processes of institutional work. The success criteria of projects at this level will usually be: i) smooth integration of process; ii) “problem solved”, not “lesson learned”; and iii) easily communicable results.
It is important that the stage of implementation begins by revisiting these aspects of the project, so as to ensure that the implementation of any ensuing behaviourally informed policy intervention is adapted to the current policy context as well as aimed at delivering on those ambitions that originated the project.
Implementing and scaling behaviourally informed policies
Having revisited the policy context and the project level, the next step of Change is to decide on plans for implementing, scaling and evaluating the behaviourally informed policy change suggested by the four initial stages of BASIC. Such plans are incredibly important. The first decade of behavioural public policy has revealed that many BI projects fail to go beyond proof-of-practice to truly inform public policy through their actual policy implementation as well as by feeding the resulting policy situation back into the beginning of the policy cycle for further improvement.
Consider good regulatory and policymaking practices. The intervention being developed could lead to a new programme or a change to a law, regulation or regulatory regime. For instance, the OECD worked with the Colombian Communications Regulator to re-design the consumer protection regime with the help of behavioural insights (OECD, 2016). In these situations, the policymaker and practitioner should consider good regulatory and policymaking practices, such as regulatory impact assessments (RIAs) and stakeholder engagement, as a means of embedding the BI-informed interventions into existing decision-making tools, further measuring the potential impact of the proposed intervention and offering citizens, businesses and other affected parties a chance to provide their inputs (OECD, 2014; 2018).
Actively use behavioural insights to inform implementation and scaling. In drafting plans for implementing, scaling and evaluating a policy solution, policymakers and practitioners should actively use behavioural insights to inform these plans. Considering strategies such as “make it relevant” or “devise plans and feedback” relative to this stage is important. Also, a behaviourally informed policy will always have been tested in a more specific or limited area than that to be covered by the policy. Thus, considering how the results might fail to generalise when scaled, for example through a “post-mortem”, and then devising plans to take the results into account would also be a good strategy when implementing behaviourally informed policies. These are but two of many ways that researchers and practitioners may consider applying BI to inform the stage of Change.
Implement experimentally and scale incrementally. Besides actively using BI to inform implementation, scaling and evaluation as part of Change, policymakers and practitioners should also devise plans in accordance with the methodological underpinnings of the approach. Traditionally, policies are rolled out across the board when implemented. But adopting a BI approach to Change means adopting an experimental approach to implementation and an incremental approach when scaling up behaviourally informed policies. This also requires keeping track of the dependent measures used for the experimental evaluation as part of Intervention, as well as adding additional measures made possible by the policy being scaled up. This allows keeping close track of various moderating variables as part of implementation. Thus, through the implementation and scaling up of a behaviourally informed policy, policymakers and practitioners may study whether certain groups are more or less affected than what was suggested by tests as part of Intervention. This, in turn, may lead to further iterations and tweaks in the design of the policy in question.
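One simple way of implementing experimentally and scaling incrementally is to assign implementation units to rollout waves at random, so that units not yet covered by the policy serve as a comparison group for units already covered. The sketch below (in Python with numpy; the unit names and wave dates are hypothetical) illustrates such a phased rollout schedule:

```python
import numpy as np

rng = np.random.default_rng(seed=11)

units = [f"office_{i:02d}" for i in range(12)]   # e.g. offices or municipalities
waves = ["2024-Q1", "2024-Q2", "2024-Q3"]        # hypothetical rollout schedule

shuffled = rng.permutation(units)
schedule = {wave: sorted(group.tolist())
            for wave, group in zip(waves, np.array_split(shuffled, len(waves)))}

for wave, assigned in schedule.items():
    print(wave, assigned)
# During each wave, outcomes in already-covered units can be compared with
# outcomes in not-yet-covered units, and moderating variables tracked.
```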
Avoid diluting behavioural policies by carefully monitoring implementation. A recurring problem in the stage of Change is that behavioural policies may become diluted. This often happens because of the counter-intuitive nature of behaviourally informed interventions. To third parties used to working from a rationality-based policy perspective, crucial contextual features and other aspects of a behaviourally informed policy might not seem important, or may even appear to conflict with a traditional approach to policymaking. In an illustrative case, a Danish distributor of public communication cancelled the use of pink paper for a letter because it seemed unimportant; yet when trialled in Singapore, pink paper was found to have a positive effect on how many people complied with the message. Another common situation is that public servants or staff decide not to follow the procedures devised as part of a behavioural intervention because they do not perceive them to be important (see, for example, Martin, Bassi and Dunbar-Rees, 2012). To avoid such situations, it is important to plan and follow the BI intervention all the way through the policy cycle.
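Returning to the point above about tracking moderating variables during experimental implementation and incremental scaling, the following is a minimal sketch, in Python, of what such a moderator check might look like. It assumes a hypothetical roll-out dataset with a binary outcome, a treatment indicator and a regional subgroup label; the file and column names are invented for illustration, and the linear probability model with an interaction term is just one simple way of examining whether effects vary across groups, not a method prescribed by the toolkit.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical roll-out data: one row per individual, with a binary outcome
# ("complied"), a treatment indicator ("treated", 0/1) and a subgroup label ("region").
df = pd.read_csv("rollout_wave1.csv")  # assumed file name, for illustration only

# Overall effect: difference in compliance rates between treated and control.
print(df.groupby("treated")["complied"].mean())

# Moderator check: a linear probability model with a treatment x region
# interaction, capturing whether the effect differs across subgroups.
model = smf.ols("complied ~ treated * C(region)", data=df).fit()
print(model.summary())

# Subgroup-level effects, useful for spotting groups that respond more weakly
# (or even negatively) than the original trial suggested.
by_group = df.groupby(["region", "treated"])["complied"].mean().unstack("treated")
by_group["effect"] = by_group[1] - by_group[0]
print(by_group)
```

Such a check can be repeated at each incremental roll-out wave, so that diverging subgroup effects are detected early enough to feed back into the design of the policy.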
Monitoring long-term and potential side effects
Experiments that test the potential effects of behaviourally informed policies will always be limited in time and scope. In particular, most experiments in the BI literature have been one-shot or very limited in timespan. This is unfortunate, and practitioners should aim to negotiate interventions where trials provide some confidence about effects over time and across relevant domains. Where this has not been possible, the long-term effects and potential side effects will be unknown when entering the stage of Change.
As mentioned above, implementing and scaling a behaviourally informed policy offers an opportunity for practitioners to keep close track of various moderating variables as part of implementation. However, when drafting plans for Change, practitioners should also place special emphasis on establishing measures for, and then monitoring, potential long-term effects.
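As a minimal illustration of what such monitoring might look like in practice, the following Python sketch compares the estimated effect of a hypothetical intervention across follow-up waves to check whether it decays over time. The dataset, file name and column names are assumptions made purely for this example, and the simple difference in means per wave stands in for whatever evaluation design was actually used in the original trial.

```python
import pandas as pd

# Hypothetical monitoring data: the same outcome measured repeatedly,
# at the original trial (wave 0) and at later follow-up waves (1, 2, ...).
df = pd.read_csv("monitoring_waves.csv")  # assumed columns: wave, treated, outcome

# Estimated effect per wave: difference in mean outcome between treated
# and control observations collected in that wave.
effects = df.groupby(["wave", "treated"])["outcome"].mean().unstack("treated")
effects["effect"] = effects[1] - effects[0]
print(effects[["effect"]])

# Simple decay check: flag follow-up waves where the effect has dropped
# below half of the effect estimated in the original trial (wave 0).
baseline = effects.loc[0, "effect"]
flagged = effects[effects["effect"] < 0.5 * baseline]
print("Waves where the effect has more than halved:")
print(flagged[["effect"]])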
Box 2.12. Examples of monitoring behaviourally informed policy solutions
UKBIT found that employees who had successfully been prompted into charitable giving the previous year had reverted to their original level of giving when receiving the same treatment the following year (The Behavioural Insights Team, 2015).
An experiment that nudged travellers in an airport to smoke only in designated smoking zones showed no decrease in effect for well-maintained interventions in a follow-up study conducted three years after the intervention was put in place (Schmidt, Schuldt-Jensen and Hansen, 2017).
Sources: The Behavioural Insights Team (2015), The Behavioural Insights Team: Update Report 2013-2015, http://38r8om2xjhhl25mw24492dir-wpengine.netdna-ssl.com/wp-content/uploads/2015/08/BIT_Update-Report-Final-2013-2015.pdf (accessed on 7 November 2018); Schmidt, K., J. Schuldt-Jensen and P. Hansen (2017), “Rygeadfærd i BASICperspektiv: En case fra Københavns Lufthavne om adfærdsdiagnosticering og langtidsvirkning af adfærdsinterventioner”, Økonomi og Politik, Vol. 90(4), pp. 54-65.
Likewise, it is important to monitor for unexpected side effects. This is illustrated by a letter trial conducted by UKBIT in the United States, in which “[name] you need to open this” was handwritten on the envelope. Return rates for failed deliveries were higher for the handwritten envelopes, although the difference was not large enough to determine whether it was statistically significant.
For these reasons, plans for implementing, scaling and evaluating the policies resulting from a BI project should always include specific plans for monitoring long-term effects as well as potential side effects. This may be done by integrating an ex post evaluation or review of a given policy as a required step of the policymaking process. In this way, evaluations or reviews will help ensure the quality of policy over time, as well as help to generate new data that can highlight deficiencies, which can then be addressed by new behaviourally informed policy initiatives.
Thus, when constructing the policy, researchers and practitioners should consider including provisions that require evaluations or reviews to take place. For example, a “programmed review” can impose a sunset requirement as a failsafe mechanism to ensure the policy remains fit for purpose over time, or a post-implementation review can require an evaluation after a given period. In the BI space, there may be an additional moral imperative for including such provisions: concerns about the contentious nature of using psychology in policymaking may be allayed by assurances that the given policy will be reviewed to mitigate potential negative long-term effects.
Maintaining the policy initiative
Different from efforts directed at changing public attitudes or cultural perceptions, but similar to traffic signs and data systems, behaviourally informed policies are usually only effective as long as the intervention is maintained. The study of behaviourally designed smoking zones mentioned above also showed that the behavioural effects declined for those zones that were not properly maintained (Schmidt, Schuldt-Jensen and Hansen, 2017). Such a lack of maintenance – whether physical or systemic – is common for BI interventions, for the same reasons that BI interventions are at risk of being diluted during implementation: maintenance may be neglected because features appear unimportant or seem to conflict with what a more rational perspective suggests is necessary.
To secure the continued maintenance of behaviourally informed policies and interventions, plans for implementing and scaling should therefore include instructions for the proper maintenance – physical or systemic – of the policy. An illustration of what happens when this is not done comes from a Norwegian intervention that successfully nudged consumers to buy more energy-efficient domestic appliances by displaying lifetime running costs next to the sale prices. In this case, the behavioural effects eventually returned to normal because new staff were not trained in the role that displaying lifetime costs was meant to play in the sales situation (Kallbekken, Sælen and Hermansen, 2013). To avoid problems with maintaining a policy initiative over time, practitioners should consider which audiences need to be involved in the maintenance and produce material and instructions that fit these audiences and the situations in which the material is to be used.
Disseminating knowledge widely
It has only been a decade since BI became popular in policymaking. It is thus not surprising that outlets and standards for reporting on BI projects have only recently begun to emerge. While disseminating results widely is expected in the behavioural sciences, it is still not a widespread practice in most public institutions – not even those where the idea of evidence-based policy has existed for a while. As a result, many early BI projects were not reported at all, or only for internal use. In particular, null results have not been widely publicised, leading to publication bias. The lack of standards has also led to non-transparent reporting; reporting without moderators; reporting only in local languages; overstatement of effects, savings and revenues; and understatement of true costs (see, for example, OECD, 2017 and Osman et al., 2018).
For this reason, it is crucial that researchers and practitioners participate in, support and systematically share and report their work through national as well as international networks of researchers and practitioners. Stage 5: Change should include allocating resources for writing up work and publishing it in academic journals or other approved outlets. Finally, practitioners working within BI should also make an effort to supply information and transparent data to the various ongoing efforts at providing publicly available databases of BI projects.
Relative to the policy side, it should also be remembered that BI is an evidence-generating approach that seeks to de-bias future decisions by policymakers. Thus, it is just as important to share results with the community of policymakers to facilitate peer learning and better decision-making throughout government. This also includes communicating upwards to the political leadership to gain support for future interventions or further capability building for BI in the public sector.
Ethical guidelines for implementing behaviourally informed policy (Change)
As with the other four stages of BASIC, researchers and practitioners should observe a series of ethical guidelines in the stage of Change. Some key ethical guidelines for Change are summarised below:
Adhere to principles of proper stakeholder engagement. Make sure that public bodies, staff, citizens, businesses and other affected parties are involved and properly consulted, and that the results of this consultation are clearly communicated.
Follow principles of transparency and accountability. Results of experiments and consultations should be shared with executive and legislative branches, as well as with broader society. This includes ensuring proper credit is given to the policymakers and government agencies who ran the experiments.
Report on what works, and what does not. This is an important part of research so that both academics and other policymakers can learn from their efforts. This includes reporting on null results and unexpected effects to avoid exposing citizens to interventions that have already been shown to fail.
Monitor long-term and side effects. In implementing behaviourally informed interventions, researchers and practitioners also have a responsibility to devise plans for monitoring effects in order to protect citizens from potential negative consequences.
Table 2.9. Ethical guidelines for Stage 5: Change
1. Involve stakeholders in Change. Good regulatory practice calls for active stakeholder engagement, where possible and suitable, when implementing and scaling behaviourally informed policies. Make sure to involve public bodies, staff, citizens, businesses and other parties affected by the proposed policy. Policies should always serve and respect citizens, and the trust they extend to government should never be assumed or taken for granted.
2. Adhere to principles of transparency and accountability. Transparency in BI is an important discussion in the behavioural community (see Hansen and Jespersen, 2013). Researchers and practitioners need to consider the appropriate procedures and requirements for transparency and accountability to the executive and legislative branches of government, as well as to broader society.
3. Give credit where credit is due. A lot of work in BI is commissioned work carried out or supported by smaller governmental agencies or non-governmental units. Accepting the ethos of behavioural science means that policymakers and governmental agencies should give credit where credit is due.
4. Always report on null results and unexpected effects. To learn, one not only needs to know what works and why, but also what did not work. While agreement about, and resources devoted to, publishing null results as well as unexpected effects should be secured already as part of Behaviour, it is at this point that those obligations need to be honoured. Thus, always report on null results and unexpected effects to avoid exposing citizens to interventions that have already been shown to fail.
5. Monitor for long-term and side effects. While monitoring for long-term and side effects is part of good practice in the stage of Change, it should also be done for ethical reasons. In implementing behaviourally informed interventions, researchers and practitioners have a responsibility to devise plans for monitoring long-term and side effects to protect citizens from their potential negative consequences.
6. Carefully examine individual and social moderators where feasible. BI has become famous for reporting significant behavioural effects caused by minor and seemingly insignificant changes to public policy. Less attention has been paid to the individual and social moderators causing variance in these effects. While an increase in a positive behaviour should always be welcomed, it is just as important to ensure that specific individuals and groups do not pay a negative price for the average improvement. Hence, researchers and practitioners should always carefully examine individual and social moderators as part of implementing and scaling behaviourally informed policy.
References
Adams, P., et al. (2018), “Time to act: A field experiment on overdraft alerts,” Financial Conduct Authority, Occasional Paper 40, https://www.fca.org.uk/publication/occasional-papers/occasional-paper-40.pdf
Allcott, H. (2011), “Social norms and energy conservation”, Journal of Public Economics, Vol. 95(9-10), pp. 1082-1095, http://dx.doi.org/10.1016/j.jpubeco.2011.03.003.
Aronson, E., M. Brewer and J. Carlsmith (1985), “Experimentation in social psychology”, in L. Gardner and E. Aronson (eds.), Handbook of Social Psychology, Random House, New York.
Australian Government Department of Health (2018), “Nudge vs Superbugs: A behavioural economics trial to reduce the overprescribing of antibiotics June 2018”, http://www.health.gov.au/internet/main/publishing.nsf/Content/Nudge-vs-Superbugs-behavioural-economics-trial-to-reduce-overprescribing-antibiotics-June-2018 (accessed on 7 November 2018).
Axelsson, S. and K. Åström (2012), Everyone Earns a Paper Fee, https://www.naturskyddsforeningen.se/nyheter/alla-tjanar-pa-en-pappersavgift (accessed on 7 November 2018).
Balgvig, F. and L. Holmberg (2014), “Flamingoeffekten: Sociale misforståelser og social pejling”, Djøf Forlag.
BEAR (2018), How Should Organizations Best Embed and Harness Behavioural Insights? A Playbook, http://www.rotman.utoronto.ca/-/media/files/programs-and-areas/bear/white-papers/bear_biinorgs.pdf?la=en (accessed on 6 November 2018).
Berkowitz, A. and H. Perkins (1987), “Current issues in effective alcohol education programming”, in J. Sherwood (ed.), Alcohol Policies and Practices on College and University Campuses, National Association of Student Personnel Administrators Monograph Series, Columbus, OH.
Bertrand, M. et al. (2010), “What’s advertising content worth? Evidence from a consumer credit marketing field experiment”, Quarterly Journal of Economics, Vol. 125(1), pp. 263-305, http://dx.doi.org/10.1162/qjec.2010.125.1.263.
BPS (2018), Code of Ethics and Conduct, The British Psychological Society, https://www.bps.org.uk/sites/bps.org.uk/files/policy/policy%20-%20files/bps%20code%20of%20ethics%20and%20conduct%20%28updated%20july%202018%29.pdf.
Brown, C.L. and A. Krishna (2004), “The skeptical shopper: A metacognitive account for the effects of default options on choice”, Journal of Consumer Research, Vol. 31(3), pp. 529-539.
Campbell, D. and J. Stanley (1963), Experimental and Quasi-experimental Design for Research, Rand McNally and Company, Chicago.
Chater, N. (2018), The Mind is Flat: The Illusion of Mental Depth and the Improvised Mind.
Cho, R. (2013), Making Green Behavior Automatic, Climate, General Earth Institute, Columbia University, https://blogs.ei.columbia.edu/2013/05/23/making-green-behavior-automatic/ (accessed on 7 November 2018).
Cook, T. and D. Campbell (1979), Quasi-experimentation: Design and Analysis Issues for Field Settings, Houghton Mifflin, https://www.scholars.northwestern.edu/en/publications/quasi-experimentation-design-and-analysis-issues-for-field-settin (accessed on 7 November 2018).
DellaVigna, S. (2009), “Psychology and economics: Evidence from the field”, Journal of Economic Literature, Vol. 47(2), pp. 315-72, https://pubs.aeaweb.org/doi/pdfplus/10.1257/jel.47.2.315.
Dinner, I. et al. (2011). “Partitioning default effects: Why people choose not to choose”, Journal of Experimental Psychology: Applied, Vol. 17(4), pp. 332-341, http://dx.doi.org/10.1037/a0024354.
Drexler, A., G. Fischer and A. Schoar (2014), “Keeping it simple: Financial literacy and rules of thumb”, American Economic Journal: Applied Economics, Vol. 6(2), pp. 1-31, http://dx.doi.org/10.1257/app.6.2.1.
Duflo, E., M. Kremer and J. Robinson (2011), “Nudging farmers to use fertilizer: Theory and experimental evidence from Kenya”, American Economic Review, Vol. 101(6), pp. 2350-2390, http://dx.doi.org/10.1257/aer.101.6.2350.
Ethics Committee of the British Psychological Society (2018), Code of Ethics and Conduct, British Psychological Society, Leicester, www.bps.org.uk/news-and-policy/bps-code-ethics-and-conduct.
European Commission (2014), “Taking consumer rights into the digital age: over 507 million citizens will benefit as of today”, Press Release, http://europa.eu/rapid/press-release_IP-14-655_en.htm (accessed on 7 November 2018).
European Commission (2013), “Antitrust: Commission fines Microsoft for non-compliance with browser choice commitments”, Press Release, http://europa.eu/rapid/press-release_IP-13-196_en.htm (accessed on 7 November 2018).
European Commission (2009), “Antitrust: Commission accepts Microsoft commitments to give users browser choice”, Press Release, http://europa.eu/rapid/press-release_IP-09-1941_en.htm (accessed on 7 November 2018).
Evans, J.S.B. (2008), “Dual-processing accounts of reasoning, judgment, and social cognition”, Annual Review Psychology, Vol. 59, pp. 255-278, www.annualreviews.org/doi/pdf/10.1146/annurev.psych.59.103006.093629.
Evans-Pritchard, B. (2013), Aiming to Reduce Cleaning Costs by Blake Evans-Pritchard (Works That Work Magazine), https://worksthatwork.com/1/urinal-fly (accessed on 7 November 2018).
Federal Judicial Center (1981), Experimentation in the Law: Report of the Federal Judicial Center Advisory Committee on Experimentation in the Law, Federal Judicial Center, Washington, DC, https://www.fjc.gov/sites/default/files/2012/ExperLaw.pdf (accessed on 7 November 2018).
Frey, B. and R. Jegen (2001), “Motivation crowding theory”, Journal of Economic Surveys, Vol. 15(5), pp. 589-611, http://dx.doi.org/10.1111/1467-6419.00150.
Gawronski, B., J.W. Sherman and Y. Trope (eds.) (2014), Dual-process Theories of the Social Mind, Guilford Publications, New York.
Gigerenzer, G. (1991), “From tools to theories: A heuristic of discovery in cognitive psychology”, Psychological Review, Vol. 98(2), pp. 254-267, https://www.mpib‑berlin.mpg.de/volltexte/institut/dok/full/gg/fromtool/fromtool.pdf (accessed on 7 November 2018).
Goldstein, N., S. Martin and R. Cialdini (2015), Yes!: 60 Secrets from the Science of Persuasion, Profile Books.
Gollwitzer, P. (1999), “Implementation intentions: Strong effects of simple plans”, American Psychologist, Vol. 54(7), pp. 493-503.
Gollwitzer, P. and V. Brandstätter (1997), “Implementation intentions and effective goal pursuit”, Journal of Personality and Social Psychology, Vol. 73(1), pp. 186-199.
Grüne-Yanoff, T. (2016), “Why behavioural policy needs mechanistic evidence”, Economics and Philosophy, Vol. 32(3), pp. 463-483, https://doi.org/10.1017/S0266267115000425.
Habyarimana, J. and W. Jack (2011), “Heckle and chide: Results of a randomized road safety intervention in Kenya”, Journal of Public Economics, Vol. 95(11-12), pp. 1438-1446, http://dx.doi.org/10.1016/J.JPUBECO.2011.06.008.
Halpern, D. (2015), Inside the Nudge Unit: How Small Changes Can Make a Big Difference, Penguin Random House UK, London.
Hansen, P.G. (2016), “The definition of nudge and libertarian paternalism: Does the hand fit the glove?”, European Journal of Risk Regulation, Vol. 7(1), pp. 155-174.
Hansen, P.G. and A.M. Jespersen (2013), “Nudge and the manipulation of choice: A framework for the responsible use of the nudge approach to behaviour change in public policy”, European Journal of Risk Regulation, Vol. 4(1), pp. 3-28.
Hansen, P. et al. (2016), “Apples versus brownies: A field experiment in rearranging conference snacking buffets to reduce short-term energy intake”, Journal of Foodservice Business Research, Vol. 19(1), pp. 122-130, http://dx.doi.org/10.1080/15378020.2016.1129227.
Hertwig, R. (2017). “When to consider boosting: Some rules for policy-makers”, Behavioural Public Policy, Vol. 1(2), pp. 143-161.
ideas42 (2017), Define, Diagnose, Design, Test, http://www.ideas42.org/blog/first-step-towards-solution-beta-project/ (accessed on 6 November 2018).
iNudgeyou (2015), Nudging Hospital Visitors’ Hand Hygiene Compliance, https://inudgeyou.com/en/nudging-hospital-visitors-hand-hygiene-compliance/ (accessed on 7 November 2018).
IRS (2017), “Behavioral Insights Toolkit”, US Internal Revenue Service, https://www.irs.gov/pub/irs-soi/17rpirsbehavioralinsights.pdf.
Johnson, E.J., S. Bellman, and G.L. Lohse (2002), “Defaults, Framing, and Privacy: Why Opting In-Opting Out,” Marketing Letters, Vol. 13 (1), pp. 5-15.
Kahneman, D. (2011), Thinking, Fast and Slow, Farrar, Straus and Giroux, New York.
Kahneman, D. and A. Tversky (1979), “Prospect theory: An analysis of decision under risk”, Econometrica, Vol. 47(2), p. 263, http://dx.doi.org/10.2307/1914185.
Kallbekken, S., H. Sælen and E. Hermansen (2013), “Bridging the energy efficiency gap: A field experiment on lifetime energy costs and household appliances”, Journal of Consumer Policy, Vol. 36(1), pp. 1-16, http://dx.doi.org/10.1007/s10603-012-9211-z.
Kluger, A.N. and A. DeNisi (1996), “The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory”, Psychological Bulletin, Vol. 119/2, pp. 254-284, https://psycnet.apa.org/buy/1996-02773-003.
King, D. et al. (2014), “Redesigning the ‘choice architecture’ of hospital prescription charts: A mixed methods study incorporating in situ simulation testing”, BMJ Open, Vol. 4(12), p. e005473, http://dx.doi.org/10.1136/bmjopen-2014-005473.
Larrick, R. and J. Soll (2008), “ECONOMICS: The MPG Illusion”, Science, Vol. 320(5883), pp. 1593‑1594, http://dx.doi.org/10.1126/science.1154983.
Lepenies, R. and M. Małecka (2019), “The ethics of behavioural public policy”, in A. Lever and A. Paoma, The Routledge Handbook of Ethics and Public Policy, Routledge, New York.
Levitt, S. and J. List (2005), “What do laboratory experiments tell us about the real world?”, Journal of Economic Perspectives, Vol. 21, https://www.researchgate.net/publication/248419698_What_Do_Laboratory_Experiments_Tell_Us_About_the_Real_World (accessed on 7 November 2018).
Linkenbach, J. and H.W. Perkins (2003), “Misperceptions of peer alcohol norms in a statewide survey of young adults”, in H.W. Perkins (ed.), The Social Norms Approach to Preventing School and College Age Substance Abuse, Jossey-Bass, San Francisco.
Loewenstein, G. (1996), “Out of control: Visceral influences on behavior”, Organizational Behavior and Human Decision Processes, Vol. 65(3), pp. 272-292, https://doi.org/10.1006/obhd.1996.0028.
Lunn, P. (2014), Regulatory Policy and Behavioural Economics, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264207851-en.
Marchionni, C. and S. Reijula (2019), “What is mechanistic evidence, and why do we need it for evidence-based policy?”, Studies in History and Philosophy of Science Part A, Vol. 73, pp. 54-63, https://www.sciencedirect.com/science/article/pii/S0039368118300311.
Martin, S., S. Bassi and R. Dunbar-Rees (2012), “Commitments, norms and custard creams – A social influence approach to reducing did not attend (DNAs)”, Journal of the Royal Society of Medicine, Vol. 105(3), pp. 101-104, http://dx.doi.org/10.1258/jrsm.2011.110250.
Milkman, K.L. et al. (2012), “Following through on good intentions: The power of planning prompts”, HKS Faculty Research Working Paper Series RWP12-024, John F. Kennedy School of Government, Harvard University, http://web.hks.harvard.edu/publications/workingpapers/citation.aspx?PubId=8410.
Milkman, K.L. et al. (2011), “Using implementation intentions prompts to enhance influenza vaccination rates”, Proceedings of the National Academy of Sciences, Vol. 108(26), pp. 10415-10420, https://doi.org/10.1073/pnas.1103170108.
Miller, J. and J. Krosnick (1998), The Impact of Candidate Name Order on Election Outcomes, Oxford University Press, American Association for Public Opinion Research, http://dx.doi.org/10.2307/2749662.
Mischel, W. (2014), The Marshmallow Test: Understanding Self-control and How to Master It, Random House, https://books.google.fr/books?id=pg2rawaaqbaj&dq=mischel+2014&lr=&source=gbs_navlinks_s (accessed on 7 November 2018).
Moriarty, T. (1975), “Crime, commitment, and the responsive bystander: Two field experiments”, Journal of Personality and Social Psychology, Vol. 31(2), pp. 370-376, http://dx.doi.org/10.1037/h0076288.
Mullainathan, S. and E. Shafir (2013), Scarcity: Why Having Too Little Means So Much, Times Book, New York, https://www.hks.harvard.edu/centers/cid/publications/books/scarcity-why-having-too-little-means-so-much (accessed on 6 November 2018).
Norman, D. (1988), The Psychology of Everyday Things, Basic Books.
Nudge blog (2010), Measuring the LSD Effect: 36 Percent Improvement, http://nudges.org/?s=lake+shore+drive (accessed on 7 November 2018).
O’Donoghue, T. and M. Rabin (1999), “Doing it now or later”, American Economic Review, Vol. 89(1), pp. 103-124, http://dx.doi.org/10.1257/aer.89.1.103.
OECD (2018), OECD Regulatory Policy Outlook 2018, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264303072-en.
OECD (2017), Behavioural Insights and Public Policy: Lessons from Around the World, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264270480-en.
OECD (2016), Protecting Consumers through Behavioural Insights: Regulating the Communications Market in Colombia, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264255463-en.
OECD (2014), The Governance of Regulators, OECD Best Practice Principles for Regulatory Policy, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264209015-en.
Oliver, A. (2017), The Origins of Behavioural Public Policy, Cambridge University Press, Cambridge, http://dx.doi.org/10.1017/9781108225120.
Orbell, S., S. Hodgkins and P. Sheeran (1997), “Implementation intentions and the theory of planned behavior”, Personality and Social Psychology Bulletin, Vol. 23(9), pp. 945-954, http://dx.doi.org/10.1177/0146167297239004.
Osman, M. et al. (2018), “Learning lessons: How to practice nudging around the world”, Journal of Risk Research, pp. 1-9, http://dx.doi.org/10.1080/13669877.2018.1517127.
Payne, J., J. Bettman and E. Johnson (1993), The Adaptive Decision Maker, https://books.google.fr/books?hl=en&lr=&id=QzXFqwrPLXkC&oi=fnd&pg=PR11&dq=Payne+et+al+1993&ots=12OJ4hEw9n&sig=IoPa2VNZ9Adb98YypU4t84bwXuM#v=onepage&q=Payne%20et%20al%201993&f=false (accessed on 7 November 2018).
Pichert, D. and K. Katsikopoulos (2008), “Green defaults: Information presentation and pro‑environmental behaviour”, Journal of Environmental Psychology, Vol. 28(1), pp. 63-73, http://dx.doi.org/10.1016/J.JENVP.2007.09.004.
Pink, D. (2018), When: The Scientific Secrets of Perfect Timing, Penguin Publishing Group.
Read, D., G. Loewenstein and S. Kalyanaraman (1999), “Mixing virtue and vice: Combining the immediacy effect and the diversification heuristic”, Journal of Behavioral Decision Making, Vol. 12(4), pp. 257-273, http://dx.doi.org/10.1002/(SICI)1099-0771(199912)12:4<257::AID-BDM327>3.0.CO;2-6.
Robson, A. (2001), “The biological basis of economic behavior”, Journal of Economic Literature, Vol. 39(1), pp. 11-33, http://dx.doi.org/10.1257/jel.39.1.11.
Robson, C. and K. McCartan (2016), Real World Research, Wiley, https://www.wiley.com/en-us/Real+World+Research%2C+4th+Edition-p-9781118745236 (accessed on 7 November 2018).
Sanders, M., M. Jackman and M. Sweeney (2017), Introducing Test+Build – A BI Venture, The Behavioural Insights Team, https://www.behaviouralinsights.co.uk/uncategorized/introducing-testbuild-a-bi-venture/ (accessed on 6 November 2018).
Sanders, M., V. Snijders and M. Hallsworth (2018), “Behavioural science and policy: Where are we now and where are we going?”, Behavioural Public Policy, Vol. 2(2), pp. 144-167, http://dx.doi.org/10.1017/bpp.2018.17.
Schmidt, K., J. Schuldt-Jensen and P. Hansen (2017), “Rygeadfærd i BASICperspektiv: En case fra Københavns Lufthavne om adfærdsdiagnosticering og langtidsvirkning af adfærdsinterventioner”, Økonomi og Politik, Vol. 90(4), pp. 54-65.
Shadish, W., T. Cook and D. Campbell (2002), Experimental and Quasi-experimental Designs for Generalized Causal Inference, Houghton, Mifflin and Company, http://psycnet.apa.org/record/2002-17373-000 (accessed on 7 November 2018).
Smets, K. (2018), “There is more to behavioural economics than biases and fallacies”, The Behavioural Scientist, Vol. 24, https://behavioralscientist.org/there-is-more-to-behavioral-science-than-biases-and-fallacies/.
Soman, D. (2015), The Last Mile: Creating Social and Economic Value from Behavioral Insights, University of Toronto Press, Toronto, https://books.google.fr/books?hl=en&lr=&id=dh1kcgaaqbaj&oi=fnd&pg=pp1&dq=dilip+soman+2015&ots=u4xy2vwq1s&sig=hxrjmn4sebsloanbirdi6ybmhp4#v=onepage&q=dilip%20soman%202015&f=false (accessed on 7 November 2018).
Stanovich, K.E. (2012), “On the distinction between rationality and intelligence: Implications for understanding individual differences in reasoning”, in The Oxford Handbook of Thinking and Reasoning, Oxford University Press, Oxford.
Stanovich, K.E. (2011), Rationality and the Reflective Mind, Oxford University Press, Oxford.
Stanovich, K.E. (2009), What Intelligence Tests Miss: The Psychology of Rational Thought, Yale University Press.
Stubbs, N. et al. (2012), “Methods to reduce outpatient non-attendance”, The American Journal of the Medical Sciences, Vol. 344(3), pp. 211-219, http://dx.doi.org/10.1097/MAJ.0b013e31824997c6.
Sunstein, C. and L. Reisch (2013), “Green by default”, Kyklos, Vol. 66(3), pp. 398-402, http://dx.doi.org/10.1111/kykl.12028.
Tanguy, B. et al. (2014), “The future in mind: Aspirations and forward-looking behaviour in rural Ethiopia”, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2526352 (accessed on 7 November 2018).
Texas Department of Transportation (n.d.), “Don’t mess with Texas”, https://www.txdot.gov/inside-txdot/media-center/psas/litter-pollution/dont-mess-with-texas.html (accessed on 7 November 2018).
Texas Times (2016), “Don’t mess with Texas”, http://www.cbclandman.com/uploads/images/pdfs/2016%20-%20don,t%20mess%20with%20texas,%20the%20real%20story.pdf (accessed on 7 November 2018).
Thaler, R. and C. Sunstein (2008), Nudge: Improving Decisions about Health, Wealth, and Happiness, Yale University Press.
The Behavioural Insights Team (2015), The Behavioural Insights Team: Update Report 2013-2015, The Behavioural Insights Team, London, http://38r8om2xjhhl25mw24492dir-wpengine.netdna-ssl.com/wp-content/uploads/2015/08/bit_update-report-final-2013-2015.pdf.
The Behavioural Insights Team (2014), EAST Four Simple Ways to Apply Behavioural Insights, The Behavioural Insights Team, https://38r8om2xjhhl25mw24492dir-wpengine.netdna-ssl.com/wp-content/uploads/2015/07/bit-publication-east_fa_web.pdf.
The Behavioural Insights Team (2013), Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials, The Behavioural Insights Team, https://38r8om2xjhhl25mw24492dir-wpengine.netdna-ssl.com/wp-content/uploads/2015/07/TLA-1906126.pdf.
The Behavioural Insights Team (2010), MINDSPACE, The Behavioural Insights Team, https://www.behaviouralinsights.co.uk/publications/mindspace/ (accessed on 6 November 2018).
The Danish Business Authority and Copenhagen Economics (2013), Nudging Business Policy: Making It Easy to Do the Right Thing, https://erhvervsstyrelsen.dk/sites/default/files/media/nudging-business-policy.pdf.
Titmuss, R. (1970), The Gift Relationship: From Human Blood to Social Policy, Allen and Unwin, London.
Tversky, A. (1972), “Elimination by aspects: A theory of choice”, Psychological Review, Vol. 79(4), pp. 281-299, http://dx.doi.org/10.1037/h0032955.
Tversky, A. and D. Kahneman (1974), “Judgment under uncertainty: Heuristics and biases”, Science, Vol. 185(4157), pp. 1124-1131, http://psiexp.ss.uci.edu/research/teaching/Tversky_Kahneman_1974.pdf.
UK Government (n.d.), Check if a Document Allows Someone to Work in the UK - GOV.UK, https://www.gov.uk/legal-right-work-uk (accessed on 7 November 2018).
Van der Pligt, J. (2001), “Decision making, psychology of”, International Encyclopedia of the Social & Behavioral Sciences, pp. 3309-3315, https://doi.org/10.1016/B0-08-043076-7/01750-2.
Van Kleef, E. and H.C. van Trijp (2018), “Methodological challenges of research in nudging”, in Methods in Consumer Research, Vol. 1, pp. 329-349, Woodhead Publishing.
Volpp, K. et al. (2008), “Financial incentive-based approaches for weight loss”, JAMA, Vol. 300(22), p. 2631, http://dx.doi.org/10.1001/jama.2008.804.
Wickens, C., S. Gordon and Y. Liu (1998), An Introduction to Human Factors Engineering, Longman.
World Bank (2015), The World Development Report 2015: Mind, Society and Behaviour, The World Bank, Washington, http://www.worldbank.org/content/dam/worldbank/publications/wdr/wdr%202015/wdr-2015-full-report.pdf.
Notes
← 1. Still, practitioners cannot just skip considering the stages of BASIC as a behaviourally informed approach to public policy development and delivery. A policymaker should at least consider the implications of each and every stage for the given piece of policy. Take, for instance, the situation where a practitioner considers just copying a behaviourally informed policy from another country or designing a policy on the basis of behavioural insights. Even in these situations, the practitioner should consider whether the original behavioural analysis and strategies would fit the new context, the extent to which an intervention needs to be tested under the new conditions and how to scale the change for the policy issue at hand.
← 2. Italics refer to behavioural strategies developed in Stage 3: Strategies.
← 3. The exact function of prompts has sometimes confused BI researchers and practitioners. Text message reminders may, for example, in some cases be interpreted as a prompt as you often cannot proceed on your phone without taking notice of the message. In such cases, the message works both as a prompt and reminder. Also, it has been discussed whether a prompt works as a nudge or is more like being coerced to do something. The short answer is that it depends on the details of the prompt. When you cannot bypass a prompt without making a decision, it forces you to make a choice and works like a “push”. When a prompt leaves one open to dismissing it, e.g. by shutting down a pop-up box, it forces you to pay attention to what is being asked for; or more precisely, it forces you to make a decision about making a decision (push) but only nudges you to make the latter decision. Yet, these conceptual matters are secondary to the question of whether prompts work.