Understanding and Applying the Precautionary Principle in the Energy Transition

3. The PP as a regulatory approach: risk, uncertainty and precaution

Abstract

This chapter describes the precautionary principle (PP) as a regulatory approach, including in the context of energy-related decision-making. It analyses precaution as a “continuous variable”, illustrating the importance of taking into account the (implicit) trade-offs involved in prioritising certain risks. Finally, it discusses the interplay between precautionary approaches, public perception and the wider socio-political context.

Key messages
The PP is a potentially useful tool in high-stakes situations of uncertain environmental or human health hazard. It is not a prescribed formula. The PP should be understood as a flexible principle which can help decision makers to:
ensure they are not ignoring problems of scientific uncertainty; and
avoid unintended disastrous impacts that can arise when actors may be inclined to reckless risk-taking behaviour.
Pinpointing “obvious” precautionary decisions is challenging from a regulatory standpoint. This challenge is ubiquitous in the energy transition context: not deploying certain technologies may itself entail important, even major, negative long-term effects.
In the face of uncertainty, precautionary decisions are shaped by socio-political and psychological elements as well as by the political economy. These factors are as important as scientific evidence, which was the traditional emphasis of academic research and debate on the PP. Empirical evidence suggests that the PP is applied with a high degree of discretion, often in heavily politicised decision-making.
Governments do not follow “national styles” when it comes to applying the PP. Even within single countries, patterns of systematic application are difficult to detect. Variations in application are due to specific risks and concerns. They are not a result of general differences.
Precautionary measures should aim at optimising trade-offs across interconnected risks as much as possible. Moreover, application of the PP should rely on robust and iterative assessment in order to reduce incrementally any uncertainty surrounding potential outcomes and their probabilities.
In the energy transition context, the critical question (and challenge) is identifying the truly precautionary regulatory choices in a context of impending catastrophic risks linked to climate change. The main question is not whether to apply the PP, but rather what the actual precautionary choice would be. The relevant question is thus whether there is a demonstrated or sufficiently credible claim that a technology can make a real positive contribution to fighting climate change and its impact – not the “classical” PP question of whether a given technology is sufficiently risky to warrant a ban or other severely restrictive action.
If a technology can credibly make a positive contribution to fighting climate change, the relevant regulatory choice should be between “normal” risk-based regulation and — if justified by its potential harm — restrictions, strict monitoring and gradual implementation.
The fact that some risks related to the use of hydrogen are unknown or insufficiently quantified does not justify the use of the PP in hydrogen risk regulation. Certain applications of hydrogen might warrant more precaution due to higher uncertainty and potential harm. However, this decision is not a yes-or-no question: it must rely on stepwise, scalable and experimental approaches.
Socio-political and psychological elements play an important role in shaping precautionary decision-making in the face of uncertainty.
The PP in the context of energy-related decision-making
When considering the “big picture” of energy sources, vectors and applications, the application of the PP is both very relevant in principle and highly challenging in practice. As discussed, it may bring about important benefits in terms of determining rules that protect from severe harm (environmental, safety etc.). It can, however, also bring about negative consequences, such as unduly stifling the use of new technologies.
In the context of the 2020s, “precaution” is a highly problematic and conflictual issue in the energy field. Arguably, the lack of precaution when it could have made a difference has led us to the current climate crisis: the roll-out of fossil fuels on a massive scale was seen by some as a potential source of global warming early on (Thompson, 2019[1]), and solid scientific findings of CO2-induced warming date back at least to the 1970s (NASA, 2022[2]).1 Even though the research produced internally by Exxon was suppressed for profit motives, warming became increasingly understood and noticeable over the 1980s and 1990s. However, it did not become a prevailing concern until several years later, and a vast number of energy-related decisions continued to be made without taking it into account.
Moreover, the core question for energy technologies is not only about “applying the PP or not”, but rather what the actual precautionary choice would be.
Hydropower initially looked like an environmentally friendly source of electricity, adding the benefit of flood control for valleys that had often suffered from disastrous flooding (with, of course, the major safety risk of dam failure to control for). However, longer-term negative impacts on many fish species, marshlands and coastal deltas (which are no longer replenished by alluvial deposits) have become a major concern. This is illustrated by the evolution over time of guidance on implementation of the Water Framework Directive and associated regulations in several jurisdictions with regard to the impact on good ecological status ratings of heavily modified waters, engineering works etc. – e.g. Scotland (Scottish Environment Protection Agency, 2022[3]). This realisation has led to a sharp decrease in large new dam construction projects as well as to the removal of a number of existing dams. Investment in wind and solar energy now far outpaces investment in hydropower (Leslie, 2018[4]), with associated challenges in variability of supply, as, among renewables, only hydropower can be considered a stable baseload source.
After an initial enthusiastic global rollout of civilian nuclear energy from the 1950s onwards, major incidents (Three Mile Island, Chernobyl and Fukushima) led to an increasingly strong backlash (Kiyar, D. & Wittneben, B., 2012[5]) (Beale, 2016[6]), which has resulted in a particularly safety-focused regulatory framework (NEA, 2022[7]), and sometimes in particularly strong precautionary postures. A 2016 article quoting data from the World Nuclear Association’s Reactor Database noted that, in the 32 years before Chernobyl, 409 reactors were opened, but only 194 had been connected in the three decades since (Beale, 2016[6]). However, other dimensions of precaution (e.g. energy independence) were among the motives why some countries strongly developed nuclear electricity production (e.g. France in the 1970s) (Le Gros, 2020[8]). Security of supply has again been highlighted as a key issue following the invasion of Ukraine, which suggests that a more holistic vision of precaution should also consider reliability in all its senses.
More deeply, however, the safety and environmental dimension of precaution in the case of nuclear energy is far less easy to judge in light of the global climate emergency. The early 1990s anti-nuclear, PP-based narrative fundamentally contested the use of an energy source carrying significant safety risks and unknowable longer-term risks around high-level nuclear waste (Weber, 1991[9]). This latter point, in particular, made the discussion more about precaution than only about risk management. However, opponents of nuclear energy were also arguing that the downside risks of an accident were so high that no benefits could justify them — i.e. that no risk-benefit balance could properly be found in this case. Considering what is known from data on effective harms and risks of nuclear energy compared to other energy sources (particularly baseload energy sources) and, most importantly, given what we now know about the climate crisis (a risk that dwarfs by its magnitude and potential impact all the safety concerns from any individual installation), it seems legitimate to ask whether the genuine PP-driven choice would still be to avoid or abandon nuclear energy — or rather the opposite.
If we take energy sources and vectors with less “longer-term, hard-to-predict” impact, such as hydrogen, there is an even stronger case to be made that the PP may not lead to avoiding the use of the new technology (which has safety risks, but not of the “unknown and unpredictable” kind), but rather to advocating its use (given the open-ended catastrophic risk of climate change). This discussion is further developed in subsequent sections (see Box 3.9).
Applying the PP in practice
If the PP is to be understood as more than a pure declaration of intent, there are two ways in which it can function as a regulatory approach.
The first relates to the implementation stage: i.e. an approach that operators (and regulators) should follow to maximise meaningful prevention and management of risks in conditions of uncertainty (Gemmell, J. Campbell; Scott, E. Marian, 2013[10]). In this respect, the PP is probably always relevant, as well as — unfortunately — insufficiently understood and implemented (too often being substituted by formalistic, process-focused “make believe” steps that have little real preventive effect). In this perspective, the PP is about the “how”, rather than the “what” or “whether”: it should guide operators (and, along with them, regulators) in caring about each potential risk driver and uncertainty in the processes or products they use. This includes paying attention to potentialities, early signs of unforeseen problems, etc.
The second way the PP can function is more problematic and relates to the rule-making stage. If the PP is to have practical consequences in legislative and rule-making activity, it needs to be a principle that can: a) actually help decision-making, b) serve as a useful heuristic instrument for decision-making, and c) give an indication of which direction the default decision should take in a context of high uncertainty and significant potential (but not quantified or confirmed) harm. The purpose of such a tool is to avoid unintended disastrous impact in situations where some actors may be too inclined to reckless risk-taking behaviour because rewards are concentrated and short-term, whereas harms are typically distant in time, uncertain and diffuse. The problem is that it is unclear how often “obvious” precautionary decisions exist, and this seems to be particularly rare in the energy transition context. This is because not using certain technologies also carries important, even major, downside risks and harms – again, not fully certain or quantifiable, but no less important than the risks of using them.
The PP has attracted the attention of policymakers, regulators, industry, and academics. It has been recognised as a potentially useful tool in situations where environmental or human health hazard is uncertain and the stakes are high (European Parliamentary Research Service, 2015[11]). This applies to decisions about products or activities that could be seriously harmful to public health or the environment (Vos and De Smedt, 2020[12]).
While the PP has certainly been contested, it has received renewed interest in recent years, driven by emerging threats and ongoing crises. This suggests that it can be useful to analyse its articulation around notions of risk and uncertainty. It is also crucially important to evaluate whether the application of the precautionary principle is subject to changes, and to assess the drivers of those changes (Tosun, 2013[13]).
As discussed in the previous chapter, the PP has been incorporated into national as well as international law (indeed, the European Commission’s 2000 Communication endorsed it as a guiding policy of the European Union in areas such as environmental, consumer and health protection). However, the application of the PP presents many challenges, especially regarding the articulation between uncertainty, hazard, risk and precaution. Empirical evidence suggests that differences in application are due to “the particular question of which risks to worry about and regulate most”, rather than to the general application of the PP as such (Wiener and Rogers, 2002[14]). Overall, governments do not seem to follow “national styles” when it comes to applying the PP. For instance, while it has been argued that — across the board — Europe is more precautionary (i.e., erring on the side of safety at the cost of opportunity) than the US, “no evidence is found to support this argument” (Shapiro and Glicksman, 2003[15]); (Tosun, 2013[13]); (Vogel, 2012[16]); (Wiener et al., 2011[17]). Moreover, “even within single countries such as the UK or the US, scholars find it difficult to trace patterns of the systematic application of the precautionary principle” ((Hood, Rothstein and Baldwin, 2001[18]); (Majone, 2016[19]), cited in (van der Heijden, 2019[20])). Box 3.1 presents additional examples of comparative country analysis of the PP’s application.
Box 3.1. Comparative country analysis of the PP’s application
Wiener’s EU-US comparative analysis of the use of precaution recommends further comparative analysis of regulation and a shift from simple principles of precaution toward a more holistic concept of “prudent precaution”.
Wiener finds that the degree of precaution often depends on the legal system and the context of the regulation (e.g. technology, location, politics, public perception…), rather than on some overarching national regulatory position. He notes that regulators face multiple risks and need to optimise the trade-offs across “interconnected risks” (Wiener and Rogers, 2002[14]). In a similar vein, Li’s comparative analysis of the US and China and the relative stringency of respective federal/central regulatory approaches to environmental risks concludes there is a more complex pattern of risk-specific policy selection in each country. The study emphasises that estimates used in regulatory production are dependent on a range of societal and environmental issues in each country. Li finds that factors such as crisis, international pressure and trade competition can lead to more stringent regulation.
Comparative analysis by Lofstedt on chemicals regulation in Europe concludes that “there is no clear consensus as to when risk or hazard considerations should be the basis for regulatory decision-making, with wide discrepancies between Member States (e.g. the UK is overall more risk based than Sweden) and between regulatory agencies within Member States” (Lofstedt, 2011[21]).
Vogel has attempted to explain why the US and Europe have often regulated a wide range of similar risks differently. He observes that, between 1960 and 1990, American health, safety and environmental regulations were “more stringent, risk averse, comprehensive, and innovative than those adopted in Europe”. One key explanatory factor, according to the author, is that “concerns over such risks — and pressure on political leaders to do something about them — have risen among the European public but declined among Americans”. Vogel also notes that “policymakers in Europe have grown supportive of more stringent regulations while those in the United States have become sharply polarised along partisan lines”; he adds that “as European policymakers have grown more willing to regulate risks on precautionary grounds, increasingly sceptical American policymakers have called for higher levels of scientific certainty before imposing additional regulatory controls on business” (Vogel, 2012[16]). Of importance to this trend were: 1) the embedding of risk/cost/benefit analysis in the US legal process from the late 1970s onwards; and 2) the need for the European Commission to secure stakeholder acceptance of the Single Market (from NGOs in particular).
Farrow and Hayakawa, in an analytical review of the PP, identify that a “real options approach” has in some cases been adopted for regulatory decisions that involve uncertain safety impacts, social costs, and differences in perception among the public. The “real options approach” entails an economic analysis that determines if it is optimal to invest in safety, even if the estimated costs significantly exceed the estimated benefits. This approach — in order to develop a quantitative appraisal of precaution — aims to calculate the uncertainty and the size of the potential irreversible costs should the risk materialise. If the result of this calculation is zero, then there is no uncertainty and thus no justification for applying the PP. The authors argue that the ‘real options approach’ provides an analytical and quantitatively feasible approach to the “descriptively conceivable but analytically weak PP”.
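The logic of the “real options approach” can be illustrated with a deliberately stylised calculation. The sketch below is our own illustration, not drawn from the source: all numbers are hypothetical, and the function names are invented for the example. It compares the expected value of permitting an activity immediately with the expected value of waiting until uncertainty about an irreversible harm is resolved; the difference is the value of keeping the option open.

```python
# Stylised "real options" comparison (hypothetical numbers throughout):
# act now vs. wait for information before allowing an activity with an
# uncertain, irreversible downside.

def expected_value_act_now(p_harm, benefit, irreversible_loss):
    """Expected net value of permitting the activity immediately."""
    return (1 - p_harm) * benefit + p_harm * (benefit - irreversible_loss)

def expected_value_wait(p_harm, benefit, info_cost):
    """Expected net value of deferring until uncertainty is resolved:
    the activity is only permitted in the 'no harm' state."""
    return (1 - p_harm) * benefit - info_cost

p_harm = 0.2               # probability the harm materialises
benefit = 100.0            # benefit of the activity
irreversible_loss = 600.0  # irreversible cost if harm occurs
info_cost = 10.0           # cost of waiting / further research

now = expected_value_act_now(p_harm, benefit, irreversible_loss)
wait = expected_value_wait(p_harm, benefit, info_cost)
option_value = wait - now  # value of keeping the option open

print(now, wait, option_value)
```

Note that if the irreversible loss were zero, the option value would reduce to the (negative) cost of waiting: consistent with the authors’ point that where no irreversible, uncertain cost exists, there is no justification for applying the PP.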
Since understanding the range of interpretations and applications of the PP requires linking it with notions of uncertainty and risk (and risk governance in particular), the next section begins by outlining these aspects succinctly. It then discusses several applications of the PP, as well as some of their determining factors.
The PP and risk governance
The PP matters from a risk governance perspective in that the latter must often confront uncertainty, and decision makers are expected to answer public policy questions that science is not in a sufficiently advanced state to answer.
Von Schomberg stresses that scientific uncertainty is key to understanding how and why the PP applies. He distinguishes between situations that can be dealt with by using “normal” risk management tools, and those that may justify precautionary approaches. Typically, precaution would apply when an activity or substance poses a plausible threat of harm, but there is insufficient scientific evidence or lack of agreement as to the nature or scale of the likely adverse effects; or, when the potential harms are known but the particular cause-effect relationships cannot be scientifically established (Von Schomberg, R., 2012[24]). Risk governance can be defined as the “totality of actors, rules, conventions, processes and mechanisms concerned with how relevant risk information is collected, analysed and communicated and management decisions are taken” (International Risk Governance Council, 2006[25]).2
According to the European Parliamentary Research Service (EPRS), most risk governance models encompass three main components: risk assessment, risk management and risk communication (including stakeholder engagement). Execution and evaluation are, however, also essential components that are sometimes overlooked in this context. Other, alternative risk governance models do exist; e.g. Stirling’s five-phase model (screening, appraisal, evaluation, management and communication), and the IRGC’s Framework (see Box 3.5 later in this section) (EPRS, 2015[26]) (Renn, O. (editor) et al, 2008[27]). The standard ISO 31000 provides an additional characterisation of risk governance: its principles (ISO, 2021[28]) offer a framework and process for managing risk, and it is among the most commonly adopted models worldwide (outside the areas of food safety and environmental protection).
There is a variety of viewpoints regarding the role of the PP within risk governance. A number of authors (Belvèze, 2003[29]) and key policy documents contend that the PP should be integrated into risk management frameworks, and that it is relevant for the decision-making stages of risk governance (European Risk Forum, 2011[30]) (International Risk Governance Council, 2006[25]). The European Commission’s Communication states that “application of the precautionary principle is part of risk management, where scientific uncertainty precludes a full assessment of the risk and when decision makers consider that the chosen level of environmental protection of human, animal and plant health may be in jeopardy” (European Commission, 2000[31]). Moreover, there have been attempts to devise a policy framework for the application of the PP; the PrecauPri project,3 for instance, did so in order to provide guidance to policymakers with respect to European and international risk governance (Renn, O. et al. (coord.), 2003[32]).
In contrast, Taleb et al. (2014[33]) – who notably limit application of the PP to cases of potential ruin – point out that the PP should not be conflated with risk management:
Risk management involves various strategies to make decisions based upon accounting for the effects of positive and negative outcomes and their probabilities, as well as seeking means to mitigate harm and offset losses. Risk management strategies are important for decision-making when ruin is not at stake. However, the only risk management strategy of importance in the case of the PP is ensuring that actions which can result in ruin are not taken, or equivalently, modifying potential choices of action so that ruin is not one of the possible outcomes (Taleb, N. N. et al, 2014[33]).
The regime model of risk governance posits that, in theory, any viable control system must contain three basic components: a goal setting component; an information gathering component to check that the goal is being reached; and a behaviour modification component to bring activities into line with the goal (Hood, Rothstein and Baldwin, 2001[18]). The PP may arguably be applicable for both goal setting (e.g. permit, ban or control a given substance that may or may not pose some kind of harm), and behaviour modification (e.g. whether child protection officers should leave a child in, or remove a child from, a family setting that may or may not be abusive).
From a risk regulation perspective, the PP also has implications in terms of its legal philosophy and its repercussions for international regulatory co-operation. These have been discussed by Van Calster among other authors (see Box 3.2).
Box 3.2. The PP and conflicts law perspectives
According to Van Calster, there is a need for supra- and transnational law to set up structures of accountability. This will allow risk-averse and risk-tolerant societies to work out their differences in a space that is legally defined through meta-rules acceptable to both (inasmuch as the risk-averse state claims to regulate on grounds of protection against danger).
In addition, the condition imposed by the PP for some minimally objective, empirical scientific support is a requirement for consistency that will be acceptable to a risk-averse state. It is also one that the law can verify. Moreover, it gives the regulating state “freedom to evaluate if the less corroborated empirical evidence creates sufficient concern to warrant regulatory action based on the nature of the suspected hazard.”
Conflicts law perspectives argue that European and transnational law derive legitimacy from their ability to set up legal structures that require the inclusion of the other in the assessment of conflict constellations. The PP respects the prescriptive premises of conflict law: that states take seriously the extraterritorial effects that they produce and reconsider them in light of the concern of the affected jurisdictions, but it tolerates diversity and limits judicial review to a marginal role.
Source: (Geert Van Calster, ed., 2014[34]).
The Precautionary Principle and uncertainty
As the precautionary principle is intended to enable more prudent decision-making under conditions of scientific uncertainty, a major point of contention relates to the level of uncertainty associated with a given phenomenon (Vos and De Smedt, 2020[12]). As stated by the European Parliamentary Research Service (EPRS):
A key variable of the different understandings of the precautionary principle is the degree of scientific uncertainty likely to lead to action from the authorities. However, other variables also feature prominently in the different interpretations of the precautionary principle, including the severity of the risks involved, the magnitude of the stakes and the potential costs of action or inaction. (EPRS, 2015[26])
It has been argued that a realistic analysis involves explicit consideration of the entire “spectrum of uncertainties”, including both irreducible uncertainties (due to intrinsically random and uncontrollable phenomena), and reducible or epistemic uncertainties (due to lack of knowledge) (Patelli, E. and Broggi, M., 2015[35]). Scientific uncertainty can have a variety of causes; e.g. it may stem from a lack of data or inadequate models of risk assessment, or it might exist in the form of indeterminacy, when not all the factors influencing the causal chains are known (European Commission, 2017[36]). Furthermore, scientific uncertainty might arise when there is ambiguity, contradicting data or in situations of ignorance, where certain risks are still unknown (European Commission, 2017[36]).
Available evidence suggests the potential existence of gaps and/or biases in the way uncertainty is often characterised — with social and cultural factors tending to receive little attention.
A 2000 meta-study from the European Environment Agency (EEA) discussed the relationship between modelling and scenario-building (i.e. an approach that seeks to characterise uncertainty and “explore the different outcomes associated with ‘what-if’ questions”) on the one hand, and the strategic and cultural approaches to regulation and precaution on the other. The study noted that the socio-cultural domain had not been explored in satisfying depth by any of the models. Indeed, when it came to analysis of issues related to sustainability, “the indicators chosen to represent this domain are still largely demographic or economic and only marginally correlated to the underlying issues” (EEA, 2000[37]).
The Intergovernmental Panel on Climate Change (IPCC) has produced a guidance note for the treatment of uncertainty. It attempts to provide a common approach and calibrated language which can be used broadly for developing expert judgments, and for evaluating and communicating the degree of certainty of findings from any assessment process. Although specifically intended for the authors of the IPCC's 5th Assessment Report, the note contains background information and suggestions that can be useful more broadly (see Box 3.3).
Box 3.3. IPCC Guidance Notes for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties and The concept of risk in the IPCC Sixth Assessment Report (summary of cross Working Group discussions)
The guidance note uses confidence levels to characterise uncertainty as expressed by expert judgments on the correctness of a model, analysis or statement. This confidence scale serves to express the chance of an assessed finding being correct, with confidence levels categorised as very high, high, medium, low and very low confidence. It synthesises the author teams’ judgments about the validity of findings as determined through evaluation of evidence and agreement.
The IPCC also relies on two metrics for communicating the degree of certainty in key findings:
Confidence in the validity of a finding. This is based on the type, amount, quality, consistency of evidence (e.g. mechanistic understanding, theory, data, models, expert judgment), and degree of agreement.
Quantified measures of uncertainty in a finding as expressed probabilistically (based either on statistical analysis of observations / model results, or expert judgment).
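The second metric rests on the IPCC’s calibrated likelihood scale, which attaches standard phrases to probability ranges (e.g. “virtually certain” for 99-100%, “very likely” for 90-100%, “likely” for 66-100%). A minimal sketch of that mapping is given below; the thresholds follow the published AR5 guidance note, while the function itself is our own illustration:

```python
# Mapping a probability estimate to the IPCC's calibrated likelihood
# language (thresholds per the AR5 uncertainty guidance note).
# The code is an illustrative sketch, not an IPCC tool.

LIKELIHOOD_SCALE = [
    (0.99, "virtually certain"),         # 99-100% probability
    (0.90, "very likely"),               # 90-100%
    (0.66, "likely"),                    # 66-100%
    (0.33, "about as likely as not"),    # 33-66%
    (0.10, "unlikely"),                  # 0-33%
    (0.01, "very unlikely"),             # 0-10%
    (0.00, "exceptionally unlikely"),    # 0-1%
]

def likelihood_term(p):
    """Return the calibrated likelihood phrase for a probability p in [0, 1]."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    # Scan from the most certain band downwards; return the first match.
    for threshold, term in LIKELIHOOD_SCALE:
        if p >= threshold:
            return term
    return "exceptionally unlikely"
```

Because the published bands overlap (e.g. “very likely” is a subset of “likely”), this sketch simply returns the most specific applicable phrase.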
Risk management relies on an ability by decision-makers to weigh up alternative courses of action, and to balance a range of potentially adverse consequences, since no action is entirely free of the potential for adverse consequences. Such balancing inevitably relies on individual or collective value judgements, including whether risks are viewed as manageable, intolerable or existential. A critical contribution from IPCC assessments to inform decision-making lies in a careful and transparent characterisation of risks, considering both the adverse consequence and its potential:
What is the magnitude, reversibility, distributional effects, etc. of the adverse consequence?
How confident are we in our understanding of those aspects?
How much do those consequences depend on socio-economic trends or other assumptions?
How well do we understand the potential for such events/outcomes to occur, and how much does this potential depend on climate change, policy design or socio-economic variables?
Can we quantify the probability of occurrence? If not, can we characterise the potential in some other way that helps stakeholders decide whether to take this potential seriously, and how it compares with potential adverse consequences from alternative courses of action?
These considerations apply not just to risks related to climate change impacts but equally to risks related to responses to climate change, including adaptation and mitigation technologies, investments, practices and behaviours, and policies.
Source: (IPCC, 2010[38]) (IPCC, 2020[39])
Furthermore, the EPRS describes three possible ways of interpreting the PP in situations of scientific uncertainty (European Parliamentary Research Service, 2015[11]).4 The first consists of classifying uncertain situations according to the sources of uncertainty, complexity, ambiguity and ignorance (see Figure 3.1). As also highlighted in the 2000 Communication, the PP is not intended to apply to hypothetical effects or imaginary risk and should be based on a scientific examination of the issue at hand. Nor will it apply when the desired level of protection is defined and the risk of harm can be quantified; where risks are established with certainty, the prevention principle as enshrined in the Treaty on the Functioning of the European Union is applicable instead (European Commission, 2000[31]).
The second approach discussed in the EPRS report is based on three schematic interpretations that depend on the degree of uncertainty, obligation and stringency:
First/minimal interpretation: uncertainty does not justify inaction and warrants legislation despite the absence of complete scientific evidence concerning a particular hazard.
Second/median interpretation: uncertainty justifies action and warrants legislation even if the link between cause and effect has not been fully established.
Third/maximal interpretation: uncertainty necessitates legislation until the absence of hazard has been proven.
This approach disregards certain variables, such as the severity of the risks involved and the stakes at hand. These variables may, however, prove decisive when uncertainty is substantial.
A third approach relies on a procedural interpretation that encompasses four elements: a) potential hazards are characterised by their serious, irreversible and uncertain consequences; b) dynamic decision-making processes should be both iterative and informative to allow learning over time; c) the burden of proof is shared between the regulator and the proponent; and d) no decision is prescribed a priori.
Precaution as a “continuous variable”
The following section analyses precaution as a continuous variable, including its interplay with different levels of uncertainty, when applied as a regulatory approach. It starts by attempting to dispel the notion that precaution is a matter of binary choices, illustrating the importance of taking into account the (implicit) trade-offs that exist when prioritising certain risks at the expense of others. The section then briefly discusses: tools to support decision-making in the presence of trade-offs; when and how to apply the PP; frameworks and tools to determine the appropriateness of precautionary measures; iterative approaches for enhanced adaptability; and regulatory design for scientific uncertainty.
Regulatory choices and trade-offs
A frequent shortcoming of precautionary measures has been the narrow focus on a single target risk.
Instead, precautionary measures should select which risks are top priority and consider their potential to affect multiple risks at the same time (Wiener, 2018[41]). Graham and Wiener call for “stepping out of the single-risk mind-set”; instead, they see a real world of multiple, interconnected risks, with regulators needing to optimise the trade-offs between them:
We should neither ignore uncertain risks, nor overreact; but rather seek ‘risk-superior’ ways of reducing those risks without worsening countervailing risks (Graham, J. and Wiener, J., 1995[42]).
The problem of trade-offs between different risks and choices (in situations where none of the choices is zero-risk or even low-risk) is particularly central to the energy transition context. Many of the energy-related choices currently facing public opinion, policymakers and regulators in the 2020s are risk-heavy, regardless of whether the decision is “ban” or “allow”, “do not use” or “use”.
Typically in such scenarios, one side of the alternative carries more “tangible”, immediate or somewhat measurable risk (primarily safety risk, secondarily environmental risk), e.g. when considering whether to allow the use of nuclear power, large-scale wind farms, hydrogen as a hydrocarbon substitute, etc. (rather than remaining with existing, carbon-intensive but well-known sources). By contrast, the alternative option (i.e. not using these technologies) has lower immediate and tangible risks, but makes it even more difficult to address the climate emergency. Indeed, not using these low-carbon technologies significantly increases the probability of catastrophic harm from climate disaster.
It is worth, here, unpacking the relative uncertainties on these two sides of the energy-source choice. There is some uncertainty as to both the temperature and climate pathway that will eventually result from different energy choices, and the eventual harm that they will cause (IPCC, 2022[43]). In particular, each atmospheric CO2 concentration leads to probabilistic scenarios in terms of temperature increase, which in turn lead to further probabilistic scenarios in terms of climate-system transformation – including a certain probability of a “vicious cycle” leading to “runaway climate change”, which can best be summarised as absolute, life-extinguishing disaster. Even though this particular pathway has a low overall probability, its potential impact dwarfs in magnitude most (if not all) harmful impacts considered in risk-based regulation or precautionary decisions. What this means in our case is that the uncertainty surrounding the negative impacts of low-carbon energy sources (e.g. long-term risk from nuclear energy or shorter-term risk from large-scale hydrogen use) should not be weighed against a zero-risk baseline, but against the practical certainty of massive harm from climate change on every time horizon – including a non-zero probability of total disaster. Seen in this perspective, well-understood precaution would, in most cases, be to accept (and, of course, seek to properly manage) the risks from low-carbon energy sources in order to reduce catastrophic climate risk.
The additional difficulty, in this situation, is that the choice between the two options is not just an issue of difficult quantification – but one of perceptions and the psychological mechanisms of decision-making. A range of additional decision-making biases tends to be at play whereby, once something has been banned, it is assumed even more strongly to be harmful (e.g. GMOs). Prospect theory, which contains the notion of loss aversion, can be useful to understand this process better. The hypothetical value function developed under prospect theory is defined by deviations from a reference point and is “normally concave for gains, commonly convex for losses, and is generally steeper for losses than for gains” (see Figure 3.2). In other words, losses generally seem larger than gains. In addition, decision-making being subject to a reference point (in this case, the ban) arguably excludes the notion of purely “neutral” decisions. Another implication is that people tend to overestimate small risks and underestimate large risks, and have difficulties properly estimating probabilities (tending to “over-weight” low probabilities in their decision-making) (Kahneman, D. and Tversky, A., 1979[44]).
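The hypothetical value function under prospect theory can be sketched in a few lines of code. The curvature and loss-aversion parameters below are median estimates reported in Tversky and Kahneman's later empirical work, used here purely as illustrative assumptions rather than figures from this report:

```python
# Sketch of the prospect-theory value function: concave for gains,
# convex and steeper for losses, defined relative to a reference point.
# Parameter values (alpha, beta, loss-aversion lam) are assumed
# illustrative estimates, not figures from this chapter.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha               # concave for gains
    return -lam * ((-x) ** beta)        # convex, and steeper, for losses

# A loss looms larger than a gain of the same magnitude:
gain, loss = value(100), value(-100)
```

On these assumed parameters, a loss of 100 is felt roughly twice as strongly as a gain of 100 – the loss-aversion asymmetry described above.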
In addition to loss aversion, related biases that potentially affect decision-making in this context include the status quo bias/sunk cost fallacy (which may help perpetuate initial bans), confirmation bias (involving a selective, one-sided treatment of available evidence), and a biased preference for avoiding scrutiny (by not changing the initial regulatory stance).
From this perspective, the central problem is that prevailing human heuristic biases can lead decision-making to privilege a “tangible”, immediate risk of harm over the distant, incomprehensible, and unimaginable risk of catastrophic climate change — something never before encountered in our experience. In such a context, there is a pattern of the PP being misunderstood and misapplied to push towards default decisions that refuse the use of low-carbon technologies perceived as “high risk”, even when proper consideration of both sides of the alternative may mean that the truly precautionary decision would be “use, with appropriate caution and safeguards”. This report will later explore what such “appropriate caution and safeguards” could look like, and how more “agile” regulation could combine regulatory design and delivery to implement precaution in a more flexible, graded and regularly revised way, rather than as a definitive “yes or no”.
Wiener and others argue that precaution should not be understood as a formal binary classification but as general posture: a continuum of precaution defined by varying degrees of earliness and stringency (Wiener, 2016[45]). Here, a regulation is more precautionary the earlier it takes effect and the more stringently it restricts the suspected source of the risk. As such, every regulatory choice involves uncertain future risks and hence a trade-off between two kinds of errors (commonly referred to as Type I and Type II errors respectively):
False positives (Type I error): an initial finding of (unacceptable) harm later turns out to have been incorrect. In this case, adopting precautionary regulations can incur the cost of false positives; e.g. economic and financial losses, restricted freedoms, and any foregone health and environmental benefits of restricted technologies (Wiener and Rogers, 2002[14]).
False negatives (Type II error): an initial finding of no harm (or acceptable harm) later turns out to have been incorrect. Insufficient ex ante regulation can incur the harm of neglecting false negatives; e.g. health and environmental damage.
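This trade-off can be illustrated with a toy numerical sketch: the stringency of a precautionary rule can be thought of as an evidence threshold above which regulators act, and moving that threshold trades Type I errors against Type II errors. All “evidence scores” below are hypothetical:

```python
# Hypothetical evidence scores (0 = clearly safe, 1 = clearly harmful)
# for technologies that are in fact acceptable vs in fact harmful.
harmless = [0.1, 0.2, 0.3, 0.4, 0.55]
harmful = [0.45, 0.6, 0.7, 0.8, 0.9]

def error_rates(threshold):
    """Share of Type I and Type II errors if regulators act at `threshold`."""
    false_pos = sum(s >= threshold for s in harmless) / len(harmless)  # Type I
    false_neg = sum(s < threshold for s in harmful) / len(harmful)     # Type II
    return false_pos, false_neg

precautionary = error_rates(0.4)   # act early: fewer false negatives
permissive = error_rates(0.75)     # act late: fewer false positives
```

In this sketch, lowering the threshold (the more precautionary stance) eliminates false negatives at the price of restricting some technologies that were in fact acceptable, mirroring the cost structure described above.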
As anecdotal evidence, the Hansen and Tickner study of 88 cases identified as potential false positives (i.e. where the authorities took precautions which later proved unnecessary) concluded that only four of those cases had led to “unnecessary measures”, and that the risk of false positives was, therefore, low (Hansen and Tickner, 2013[46]).
Lemons et al argue that scientists are more interested in avoiding false positives than false negatives (Lemons, J. et al., 1997[47]). Similarly, Underwood argues that “most ecological and environmental work is designed to keep the possibility of Type I error small”, whereas the PP “dictates that Type II errors are a serious problem for environmental management, much more so than Type I errors. Thus, not detecting impacts (Type II) is not precautionary” (Underwood, 1997[48]). This distinction between Type I and Type II may, however, be less of a clear-cut case in a climate emergency context where it is not so much a question of avoiding health and environmental damage as of estimating the balance of such damage between two pathways (using, or not using, the technology under consideration).
On a related note, and in the context of assessing technological impacts, Shrader-Frechette refers to Type I and Type II errors as “industry risk” and “public risk” respectively. She outlines several factors that could explain the prevailing preference for minimising industry risk, i.e. the risk of not developing or using a technology that is actually acceptable and safe. These factors include the following: the preference “appears more consistent with scientific practice”; consistency with the standards of proof required in criminal cases which stem from the State to protect its moral legitimacy (the “need to be sure beyond a reasonable doubt that the defendant is guilty before deciding against him”); the fact that “many risk assessments and impact analyses are done by those who are closely associated with the technology being evaluated and who are therefore sympathetic to it and to those who implement it” (thus underestimating risk probabilities); and the prevalent use of Bayesian decision rules5 based on expected utility and subjective probabilities, rather than the maximin principle (Rawls, 1971[49]). Again, this distinction and assessment may become less clear-cut in the energy transition context. First, because we have in fact “public risk” on both sides. Second, because the relative frequency of cases of “excessive” precaution or “insufficient” precaution may be less important than their impact, as a few technological choices one way or another may have massive importance to large-scale events.
Shrader-Frechette argues that there are grounds for minimising public rather than industry risk, as “neither science nor criminal law provides arguments for minimising industry, over public, risk”. She evokes two sets of different arguments in favour of contending that an assessor’s prima facie duty is to minimise the chance that an unsafe technology is implemented. The first set stresses that the dangers faced by the public “represent the kind of risk most deserving of reduction”; the second set focuses on “the public as the locus of decision-making regarding societal hazards, since laypeople typically argue for reducing public risk” (Shrader-Frechette, K. S., 1991[50]).
When considering the trade-offs at hand, Wiener stresses that reducing a target risk can by the same token increase a countervailing risk. In their seminal work, “Risk vs. Risk”, Graham and Wiener acknowledged “challenges in comparing risks with diverse attributes on which people may have different perspectives (and perceptions), including probability, severity, population, uncertainty, type of impact, timing, and distributional equity”, and called for full impact analysis methodologies including both countervailing harms and co-benefits (Graham, J. and Wiener, J., 1995[42]) (Wiener, 2020[51]). This warrants transparent and clear presentation of trade-offs, possible options, uncertainty, and what is (not) known — including the limitations of scientific evidence and advice in assessing risks and costs/benefits (Blanc, F. et al., 2015[52]).
As suggested by several of the authors referenced in this sub-section, the human factor therefore remains critical when it comes to understanding uncertainty, risk and precaution. Decision-maker knowledge, experience and approach to risk (including appetite for it) all impact upon how the notion of precaution may be applied to the system, subject or question at hand.
Some will understand better than others the system, science, uncertainties, state of knowledge, emerging trends, nature of causation and availability, and the effectiveness of different solutions. Others may, in turn, be inclined to accept risk if doing so appears profitable, or if the costs and other negative impacts fall upon others. In a similar vein, the power structures of organisations, agencies, departments and governments etc. play an important role in shaping attitudes towards risk and precaution, as well as the associated regulatory decisions themselves. Later in this chapter, the section on Precaution, the human factor and the socio-political context provides additional insights into these and related aspects.
Tools to support decision-making in the presence of trade-offs across policy objectives
Decision-makers have several tools at their disposal when it comes to dealing with the trade-offs that are inherent to regulatory choices.
Multi-criteria analysis (MCA) methods appraise or evaluate a given course of action by taking into account the various dimensions of interest and the interplay between multiple, often contrasting, objectives, and different decision criteria and metrics (Dean, 2022[53]). MCA can be effectively applied to the “areas and sectors where single criterion–based methodologies are found ineffective, and important social and environmental impacts cannot be expressed in terms of monetary values” (Nautiyal and Goel, 2021[54]). MCA assesses one or more regulatory/policy options against a number of different objectives for which criteria have been identified. The performance of an option against the various objectives and criteria (which can be assigned different weights) is identified by scores.
A multi-criteria method is formally defined by the set of rules establishing the nature of options, objectives, criteria, scores and weights. This includes how those objectives, criteria, scores and weights are used to assess, compare, screen in/out or rank options. (See Box 3.4 for an overview of MCA’s key elements).
Box 3.4. Key elements of multi-criteria analysis (MCA)
Multi-criteria analysis comprises various classes of methods, techniques and tools, with different degrees of complexity. MCA is also known as multiple-criteria decision-making (MCDM), multiple-criteria decision analysis (MCDA), multi-objective decision analysis (MODA), multiple-attribute decision-making (MADM), and multi-dimensional decision-making (MDDM).
A 1983 review identified more than 100 different MCA approaches. Despite this diversity, many of these methods share a number of common elements and exhibit a similar decision-support framework which includes the following key elements:
Option: an alternative course of action proposed to address a perceived problem and achieve an overarching end result.
Objective: an intended and specific aim against which any proposed option is being assessed. Objectives are usually clustered around different overarching appraisal and evaluation dimensions (e.g. sustainability policy problems generally include the economic, environmental and social dimensions). They can also be grouped according to their geographical scope (e.g. local, regional, national, supra-national objectives), temporal dimension, or the social groups for whom they are relevant.
Criterion: a specific measurable indicator of the performance of an option in relation to an objective that allows measuring the extent to which an option meets that objective.
Performance Score: a pure number (with no physical meaning), belonging to a given scale (e.g. a 0 to 1 scale, a 1 to 100 scale or a -5 to +5 scale), that identifies the performance of an option against a specific objective/criterion. High-performing options are ascribed high scores, whilst low-performing options score lower on the scale. Critical objectives and criteria may also be assigned constraints in the form of specific threshold values. These place restrictions on the worst acceptable performance of an option against specified criteria, and can derive from policy targets and legal instruments, ethical standards, or scientific criteria that identify limits to natural processes and systems.
Criterion Weight: a coefficient representing the level of importance of an objective and corresponding criterion relative to the other objectives and criteria under consideration (i.e. high-importance objectives and criteria are identified with high weights). The actual meaning of weights can change substantially according to the different MCA method employed.
Source: (Dean, 2022[53]).
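As a minimal illustration of how the elements in Box 3.4 combine in the simplest (weighted-sum) class of MCA methods, consider the following sketch; the options, criteria, scores and weights are all hypothetical:

```python
# Hypothetical criterion weights (summing to 1) and performance scores
# on a 0-to-1 scale, as described in Box 3.4.
weights = {"safety": 0.5, "cost": 0.2, "co2_reduction": 0.3}

options = {
    "option_A": {"safety": 0.9, "cost": 0.4, "co2_reduction": 0.6},
    "option_B": {"safety": 0.6, "cost": 0.8, "co2_reduction": 0.9},
}

def aggregate(scores):
    """Weighted-sum aggregation of an option's performance scores."""
    return sum(weights[c] * scores[c] for c in weights)

# Rank options from best to worst aggregate score.
ranking = sorted(options, key=lambda o: aggregate(options[o]), reverse=True)
```

Note that the ranking can flip if the weights change (e.g. giving safety a higher weight favours option_A here), which is precisely why the choice of weights is a substantive, and often political, decision rather than a technical detail.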
The Analytic Hierarchy Process (AHP) is one example of a frequently used MCA method. A key advantage is that it reduces multi-criteria decision-making problems to “a series of smaller, self-contained analyses based on the observation that the human mind is incapable of considering simultaneously too many factors when taking a decision” (Dean, 2022[53]) (see Figure 3.3 for an example).
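A hedged sketch of the pairwise-comparison step at the heart of AHP is shown below, using the common geometric-mean approximation of the principal eigenvector to derive criterion weights; the comparison matrix itself is hypothetical:

```python
import math

# Hypothetical reciprocal pairwise-comparison matrix for three criteria:
# entry m[i][j] states how many times more important criterion i is than
# criterion j, with m[j][i] == 1 / m[i][j].
m = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]

# The normalised geometric mean of each row approximates the matrix's
# principal eigenvector, i.e. the criterion weights.
geo_means = [math.prod(row) ** (1 / len(row)) for row in m]
weights = [g / sum(geo_means) for g in geo_means]
```

Each small pairwise judgement (“is safety more important than cost, and by how much?”) is all the method asks of the decision-maker at any one time, consistent with the observation quoted above that people cannot weigh many factors simultaneously.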
When (and how) to apply the PP: there is no silver bullet
The PP should not be understood as a prescribed formula but rather a “flexible principle that ensures that decision makers are not ignoring problems of scientific uncertainty” and have the means to address complex and uncertain problems in an ongoing and flexible fashion (Fisher, 2007[55]). According to Persson:
Extra precaution may be justified when dealing with important values (such as health and environmental protection), although these are systematically downplayed by more traditional decision methods; or when we suspect that the decision might lead to irreversible and severe consequences, and where the values at stake are also irreplaceable; or when it is more important to avoid false negatives than false positives. (Persson, 2016[56]) (European Commission, 2017[36])
The PP does not prescribe specific policy responses (Tosun, 2013[13]), nor does it prejudge the type of measures (e.g. bans) to be adopted, or the associated substantive requirements. Instead, the constraints it imposes aim at “identifying, characterising, and evaluating outcomes” (Kaebnick, G. E. et al, 2016[57]). Although it is generally agreed that precautions should be proportionate to the expected risk (Wiener and Rogers, 2002[14]) (European Commission, 2000[31]), the practical application of this premise is not straightforward. Garnett and Parsons formulate this precaution-uncertainty continuum as follows:
There is [thus] a range of uncertainty, between the lower bound of evidence required before the precautionary principle should be considered and the upper bound where the evidence reduces the uncertainty to the level where risk assessment is feasible and appropriate. Unfortunately, the positions of these bounds are unclear, and subject to variations in interpretation in practice (Garnett, K. and Parsons, D. J., 2017[58]).
A 2016 independent review of underground coal gasification (UCG) in Scotland illustrates the relationship between precautionary measures and expected risks. In this case, a few serious incidents – especially where there were clear management failures, serious environmental impacts and attempts to cover up or avoid monitoring and assessing those impacts – weighed heavily against the permissive policy environment that had been sought, making precaution even more likely:
[…] while the industry could be allowed to develop, it would be wise to consider an approach to this issue based upon a precautionary presumption whereby operation of UCG might be considered only were a series of tests applied and passed. These tests would be in relation to the practicality and safety of the full UCG life-cycle - the end-to-end planning, licensing, extraction, processing, use, closure and abandonment regime including provision for long term management, reinstatement and monitoring (Scottish Government, 2016[59]).
In a similar vein, authors have referred to an “epistemic threshold” or minimum level of necessary evidence to invoke the PP (Crawford-Brown D. and Crawford-Brown S., 2011[60]), thereby excluding wildly hypothetical theories where there is no scientifically conceivable link between the technology and the alleged potential harm. Such cases, like unfounded theories about vaccination harm, belong more to the realm of conspiracy theories than of the precautionary principle. It has been argued that extending the application of the precautionary principle from prevention of environmental damage to protection of human health and consumer safety has changed the nature of the hazards considered and the types of evidence available. In that sense, the cases reviewed by Garnett and Parsons revealed “a trend toward requiring less evidence of harm where there was a severe threat to human health” (Garnett, K. and Parsons, D. J., 2017[58]).
A common misconception is that the PP provides an “obvious” answer to difficult policy questions. As already suggested earlier in this report, this is seldom the case in practice. The fundamental guidance for the application of the PP is the consideration of the severity of potential harm. However, different cases have seen different standards of proof being used (see for example Table 3.3).
The specific issue in the climate change context is that the severity of potential harm due to climate disruption is “off the charts”. In this sense, it should be considered a game-changer in the application of the PP since, even in best-case scenarios, harm due to climate change will be of a much greater order of magnitude than the harm due to the use of most technologies under consideration. More specifically, the intensity of climate-driven harm is extremely high and the scope of the impact is global. By contrast, all the interventions considered under the energy transition (apart from geoengineering technologies)6 can generate — at worst — far more limited and localised harm. In the worst-case scenarios, runaway global warming could create feedback loops that lead to massive temperature increases, with a “Venus-type” climate trajectory being a remote-but-not-excludable possibility. If, at some point, there were serious indications that this was becoming a somewhat higher-likelihood scenario, it would automatically trump all potential harm from known and foreseeable energy and mitigation technologies. This is because — put simply — that level of global warming would mean total extinction of life on Earth.
In this context, the question that arises when considering the use of a given technology with serious potential to reduce CO2 emissions or concentration has changed. It is no longer — as in a more “classical” PP context — whether the potential harm from a technology is sufficiently severe and has sufficient credibility to warrant a ban or other severely restrictive action. Rather, the first question that should be asked is whether there is a demonstrated or sufficiently credible claim that the technology can make a real and positive contribution to decreasing or limiting the growth of carbon levels in the atmosphere, and/or if it can mitigate the harmful impacts of climate change.
If the answer is positive, then the incommensurability of climate change harm means that the next question cannot be (as it would be with “normal” risk-based regulation): “Is the harm from this new technology sufficiently severe to warrant a ban or severe restriction?” Instead, it can only be: “Is the harm sufficiently severe and well-demonstrated to justify restrictions, strict monitoring and gradual implementation?”
Therefore, some form of precaution could still be warranted for a technology demonstrating exceptionally high-potential harm, serious proof of this potential harm, and still little-known behaviour. However, given climate change considerations, this precaution would generally not be a total or even partial ban. Instead, it could take the form of allowing only an initial implementation, for instance through a “sandbox” regulatory approach, where the results of a pilot with limited geographical space and duration are used to inform a more permanent regulatory framework. As a result of the pilot, this framework would factor in more knowledge about the technology’s behaviour in practice, as well as the effectiveness of different mitigation measures. Indeed, the 2021 OECD Recommendation for Agile Regulatory Governance to Harness Innovation (OECD, 2021[61]) seeks to help governments and regulators realise the full potential of innovation in high-uncertainty contexts, thereby enhancing its benefits for societies while addressing any risks. One important pathway to developing more agile regulatory frameworks consists of facilitating regulatory experimentation by means such as regulatory sandboxes, trials, testbeds, innovation spaces and laboratories.
Table 3.1 presents a selection of technologies with high potential in terms of CO2 reduction/abatement and/or direct climate impact (geoengineering). It examines their respective levels of potential harm, unknown factors at play and potentially applicable regulatory approaches. Information presented in the table does not prejudge the technologies’ actual effectiveness, reliability, feasibility, etc. Instead, it constitutes an attempt to classify them based on their “profile” and the characteristics put forward by their proponents. The “regulatory response” column indicates the most logical approach from a PP/risk perspective, given existing knowledge and claims.
Table 3.1. Illustration of possible assessed risk profiles and regulatory responses for selected technologies with potential to enable the energy transition
Note: this table does not in any way represent an OECD recommendation on the actual classification of technologies in different risk categories, or on the correct regulatory response: the number of factors to consider is vast, many of them are country-specific, and a considerable body of research would have to be taken into account. Based on the research done in preparing this report, the table is instead an illustration of how such a classification can work, and of what regulatory response could be appropriate for a given risk and maturity profile.
| Assessed maturity of technology (illustrative) | Possible examples (illustrative) | Level of potential harm1 (illustrative) | Unknown factors/behaviour (illustrative) | Suggested regulatory response based on assessed parameters |
|---|---|---|---|---|
| Existing technologies |  | Low to serious | Limited to moderate | Risk-based regulation |
| Existing technologies, new/larger-scale applications, early-stage commercialisation |  | Moderate to serious | Moderate to serious | Risk-based regulation |
| Partly novel technologies (existing technology but substantially novel methods or use) / disputed technologies |  | Moderate to serious | Significant to high | Sandbox, pilot, learn lessons (and/or “reversible approach” for spent nuclear fuel, which is the current approach in FR for instance) |
| Novel / unproven / potentially very hazardous technologies |  | High to extremely high | High to extremely high | PP applies: start with limited or tightly controlled pilots for the technologies that lend themselves to it (CCS); initial bans for those that have unpredictable chain-reaction effects (geoengineering). Regular review warranted because of importance as “last defense” against runaway global warming. |
1. Potential harm includes any kind of environmental, health, safety, or other harm to humans and the environment.
2. Technological development has moved ahead of regulatory frameworks in the case of SMRs, which emphasises the need for a risk-based rethink of such regulations to help address climate change (NEA, 2023[62]).
3. Major concerns with CCS include the risk of CO2 leakage from the reservoirs into the surrounding air or water. Furthermore, increased seismic risk could result from the built-up pressure underground.
4. Geoengineering refers to the release of particles (such as sulphur) into the atmosphere which, by reflecting solar radiation, intervene in the Earth’s climate and potentially lead to a cooling of the planet. The workings of this technology however are not fully understood and are surrounded by high levels of uncertainty and potential danger, thus making the case for the use of the precautionary principle.
Source: Authors’ own elaboration.
A last point to note is that PP-based decisions should, to the extent possible, avoid total or quasi-total bans — at least when the technologies under consideration have demonstrated major or potential beneficial impact in terms of, for instance, CO2 reduction or climate resilience.
Indeed, bans have a major negative impact on public perceptions in that they tend to consolidate and spread the belief that the technology in question is dangerous, unreliable, and must be avoided. Heuristics and decision-making biases (including issues with inertia, procrastination, mental taxation and exhaustion, and cognitive dissonance) mean that, after a number of years of a technology being banned, it is less likely that it will ever be authorised. This can be the outcome even when scientific research and the experience of other countries demonstrate that the “potential harm” that led to the initial PP-based decision has not materialised and the technology is substantially safe. This is largely what has happened with bans (total or partial) on genetically modified organisms (GMOs), despite accumulating evidence that early worries about edited genes “crossing species” more easily, or other potential unpredictable environmental and health harms, have not been confirmed by experience. For example, a 2012 review of the previous ten years of genetically engineered (GE) crop safety research concluded that no significant hazards had been detected that were directly connected with the use of GE crops (Nicolia et al., 2013[63]). As GMOs constitute one of the important technologies for climate-change mitigation (in particular for farming), the negative impact of such a ban is bound to increase over time.
To avoid such a pitfall in the future, it seems preferable to use more targeted, scaled measures – and to communicate more cautiously and regularly about how a PP-driven regulatory framework does not prejudge the eventual conclusion regarding the technology but, on the contrary, seeks to provide a space to experiment with as few downside risks as possible, and learn lessons from such experimentation to, eventually, take a more informed regulatory decision.
Selected frameworks and tools to determine the appropriateness of precautionary measures
The EPRS has discussed a number of methods that, subject to sufficient available evidence, can be used to help determine whether it is appropriate to take precautionary measures (none of which is without shortcomings) (European Parliamentary Research Service, 2015[11]):
Cost-benefit analysis (CBA) incorporating Bayesian risk assessment (although critics deem CBA inappropriate if there is uncertainty about the hazards/costs). On a related note, Gollier and Treich have proposed an interpretation of the precautionary principle within the standard Bayesian framework. In this context, they conclude that “more scientific uncertainty as to the distribution of a future risk—that is, a larger variability of beliefs—should induce society to take stronger prevention measures today” (Gollier, C. and Treich, N., 2003[64]).
Risk trade-off analysis, sometimes used in administrative law in the United States (and criticised for overestimating the negative effects of regulation).
Cost-effectiveness analysis (to help pre-define an acceptable level of risk at the lowest cost).
Assessing the pros and cons of action/inaction, including non-quantifiable (e.g. ethical) aspects.
As pointed out by the EEA, “the costs of preventive actions are usually tangible, clearly allocated and often short term, whereas the costs of failing to act are less tangible, less clearly distributed and usually longer term, posing particular problems of governance” (European Environment Agency, 2011[65]). OECD analysis suggests that the costs of inaction for society are sometimes substantial and can impact economies negatively (OECD, 2008[66]) – this is particularly true in the energy transition and climate crisis context. The OECD has thus, for instance, emphasised the importance of “regulatory agility” rather than a rigid “yes/no” approach to technological innovation (OECD, 2021[61]).
In a similar vein, the World Health Organization (WHO) has identified two patterns regarding the appraisal of potential impacts in the context of precaution and risk-based regulation. First, the Bayesian-utilitarian approach entails choosing the course of action with the most favourable outcome for all involved; the outcome of an action is measured using a utility function. This approach tends to favour the option that maximises average utility and may therefore overlook distributional issues. Second, the Maximin approach is based on a rule according to which decision-making should pay attention to the worst outcome that could possibly occur. The Maximin approach has also been criticised, e.g. by arguing that it would lead to “absurd decisions” and “force us to discriminate against the legitimate human needs of all individuals enjoying good fortune in any way” (Harsanyi, 1975[67]).
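The contrast between the two appraisal patterns can be made concrete with a small numerical sketch. All payoffs and probabilities below are invented for illustration (they are not WHO figures): under the very same beliefs, an expected-utility (Bayesian-utilitarian) rule and a Maximin rule can recommend opposite courses of action.

```python
# Illustrative utilities of two hypothetical regulatory options across three
# possible states of the world (all numbers are invented for the example).
payoffs = {
    "deploy":   [10, 6, -50],  # large upside, but a catastrophic worst case
    "restrict": [2, 2, 1],     # modest but stable outcomes in every state
}
priors = [0.6, 0.3, 0.1]       # assumed subjective probabilities of the states

def bayesian_utilitarian(options, probs):
    """Choose the option with the highest expected (probability-weighted) utility."""
    return max(options, key=lambda o: sum(p * u for p, u in zip(probs, options[o])))

def maximin(options):
    """Choose the option whose worst-case outcome is least bad."""
    return max(options, key=lambda o: min(options[o]))

print(bayesian_utilitarian(payoffs, priors))  # "deploy":   0.6*10 + 0.3*6 - 0.1*50 = 2.8
print(maximin(payoffs))                        # "restrict": worst case 1 vs. -50
```

The sketch also illustrates Harsanyi's objection quoted above: Maximin ignores how unlikely the worst state is, and would still pick "restrict" even if the catastrophic state had a vanishingly small probability.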
The WHO state that, regardless of the chosen approach, the precautionary framework should include potential or suspected hazards (uncertainty concerns not only the magnitude of the risk, but also its very existence). Despite uncertainty, all efforts must be made to maximise the use of the available scientific information. Lastly, if the risk at hand can lead to involuntary exposures and can be viewed as inequitable, then the precautionary approach and the preservation of public health must be prioritised – which is linked to the essential issue of environmental justice. In this perspective, again, harm created by climate change is known to be particularly inequitable, affecting the poor far more than the rich (IPCC, 2022[43]), and should logically be treated as a priority.
The WHO note that examination of available evidence on the exposure, hazard, or risk must be done in an interdisciplinary manner. For instance, it must look at direct, indirect, cumulative, and interactive effects. They argue that a comprehensive risk assessment must examine the gaps and uncertainty in information and find ways to reduce these where appropriate. Moreover, the determination of an appropriate course of action must be based on the scientific evidence but also on an assessment of alternative approaches and public input (Martuzzi, 2004[68]).
The IRGC’s Framework for Risk Governance provides guidance for early identification and handling of risks, involving multiple stakeholders. It recommends an inclusive approach to frame, assess, evaluate, manage, and communicate important risk issues often marked by complexity, uncertainty, and ambiguity. The framework notably includes the notion of concern assessment, which “takes into account the values and socio-emotional issues that may be associated with the risks” and “explicitly recognises that people’s decisions about how to handle risks are influenced by their past experience, their perception as well as their perhaps more emotional and value-based concerns” (IRGC, 2017[69]). Further details on this framework are provided in Box 3.5.
Box 3.5. The IRGC’s Framework for Risk Governance
The IRGC’s Framework encompasses the following four interlinked elements, as well as three cross-cutting aspects:
1. Pre-assessment – Identification and framing
Leads to framing the risk, early warning, and preparations for handling it.
Involves relevant actors and stakeholder groups, so as to capture the various perspectives on the risk, its associated opportunities, and potential strategies for addressing it.
2. Appraisal – Assessing both the technical and perceived causes and consequences of the risk
Develops and synthesises the knowledge base for the decision on whether or not a risk should be taken and/or managed.
If so, identifies and selects what options may be available for preventing, mitigating, adapting to, or sharing the risk.
3. Characterisation and evaluation – Making a judgment about the risk and the need to manage it. This comprises:
Process of comparing the outcome of risk appraisal (risk and concern assessment) with specific criteria.
Determines the significance and acceptability of the risk.
Prepares decisions.
4. Management – Deciding on and implementing risk management options
Designs and implements the actions and remedies required to avoid, reduce (prevent, adapt, mitigate), transfer or retain the risks.
Cross-cutting aspects: Communicating, engaging with stakeholders, considering the context
Crucial role of open, transparent, and inclusive communication.
Importance of engaging stakeholders to both assess and manage risks.
The need to deal with risk in a way that fully accounts for the societal context of both the risk and the decision that will be taken.
Source: (IRGC, n.d.[70]).
Iterative approaches to risk and regulation for enhanced adaptability
It should be borne in mind that, to be truly meaningful, any risk/regulatory analysis in the context of precaution needs to be conducted iteratively; see, for example, the discussions by Gollier and Treich (2003), Farrow (2004) and Hansson (2016) of the dynamic aspects of decision-making and of the fact that learning improves scientific knowledge over time (Centre for Transport Studies, 2018[71]). Indeed, robust and iterative assessment can incrementally reduce the uncertainty surrounding potential outcomes and their probabilities (Kaebnick, G. E. et al., 2016[57]).
An iterative process flow is all the more useful since the relation between the level of scientific knowledge and the possible hazards is of central importance (Martuzzi, 2004[68]). Situations where the potential dangers are of low probability or highly uncertain should be treated differently in the analysis from situations where there is adequate scientific evidence (Centre for Transport Studies, 2018[71]). As stated in the European Commission’s communication on the PP, precautionary measures may have to be modified or abolished by a particular deadline in light of new evidence; however, this is linked not so much to a time factor as to the development of scientific knowledge (European Commission, 2000[31]). This iterative, evolving approach requires knowledge sharing as well as institutional development to improve transparency, apply new scientific tools and assess alternatives (Martuzzi, 2004[68]). It can also encourage more dynamic, reflective and critical relationships between policymakers, scientific advisers and wider stakeholders (Stirling, 2003).
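The way in which iterative assessment incrementally narrows uncertainty can be sketched with a standard Beta-Binomial updating scheme, a textbook Bayesian tool (the monitoring figures below are invented for illustration): each review round adds new observations, and the spread of the posterior belief about the hazard probability shrinks.

```python
# Beta(a, b) posterior over an uncertain hazard probability, updated as
# monitoring data arrive in successive review rounds (figures are illustrative).
def update(a, b, harms, no_harms):
    """Conjugate Beta-Binomial update: add observed harm / no-harm counts."""
    return a + harms, b + no_harms

def mean_and_sd(a, b):
    """Posterior mean and standard deviation of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

a, b = 1, 1  # flat prior: the hazard probability is initially unknown
for harms, no_harms in [(0, 10), (1, 24), (0, 40)]:  # three review rounds
    a, b = update(a, b, harms, no_harms)
    mean, sd = mean_and_sd(a, b)
    print(f"posterior mean = {mean:.3f}, sd = {sd:.3f}")  # sd shrinks each round
```

With each round the estimate of the hazard tightens, which is precisely what allows precautionary measures to be scaled back or strengthened at the next review rather than being fixed once and for all.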
The relevance of iterative approaches can be illustrated through the proposal for creating shared data repositories for the regulatory governance of robotics innovation (see Box 3.6). This proposal also raises the question of the potential role of AI-based and data mining solutions in reducing uncertainty. They may for example serve to refine assessments of hazard and risk, as well as to review and revise regulatory standards and practices accordingly. While promising, such approaches need to be assessed and applied carefully, since the nature of the underlying data and the applicable frameworks are essential parameters in that context.
Box 3.6. Regulatory governance model for robot technology innovation
Fosch-Villaronga and Heldeweg propose a future regulatory governance model for robot technology innovation.
This model builds upon a process that commences with a technological advancement, which is then passed through a precautionary and legal/ethical assessment and ends with a go/no-go decision (depending on the assessment). The process then moves to considering the possibility of modifying existing regulations and weighing regulatory impacts on future developments in robot technology. According to the authors, impact assessments in the legal domain are currently used merely as a way to show that a roboticist complies with the legal framework, while the law is not updated to reflect new advancements in technology.
Therefore, the authors propose the creation of Shared Data Repositories (SDRs): databases of robot impact assessments and related legislation collected over time and across projects of robot development. They argue that this mechanism of data collection for regulatory purposes can inform regulatory strategies and help “match” emerging technologies to regulation and vice versa.
Regulatory design for scientific uncertainty
Jones examines a variety of regulatory design approaches to scientific uncertainty by looking at various regulatory design tools. She concludes that these approaches are often incorporated into legislation in various combinations, rather than occurring in isolation. The seven approaches identified are: 1) acknowledgement of scientific uncertainty; 2) burden shifting approach; 3) “sound science” approach; 4) consequences approach; 5) consensus approach; 6) estimation approach, and 7) adaptive management approach (Jones, 2007[73]).
Jones suggests that the development of a single approach to addressing regulatory scientific uncertainty is unrealistic, which is why the PP ends up being understood and applied in a number of different ways. Due to the complexity of regulatory conditions and societal factors that environmental regulation seeks to address, a diverse range of approaches for managing scientific uncertainty in a regulatory context is needed instead. Rather than portraying precaution as a regulatory design solution for addressing scientific uncertainty, the author argues that precaution merely plays a role alongside a range of regulatory approaches or tools. Table 3.2 presents the advantages and disadvantages of each of the seven approaches identified in Jones’ work, as well as the various circumstances that favour the adoption of a particular regulatory approach.
Table 3.2. Seven regulatory-design strategies for scientific uncertainty
| Regulatory strategy | Advantages | Disadvantages | Suggested circumstances when useful |
|---|---|---|---|
| Acknowledgement of uncertainty | Invites reflection on unknowns. | Acknowledgement without any action maintains the status quo, thereby permitting the continuation of any currently harmful activity. | All contexts where scientific information is required for decisions. |
| Burden shifting | Protective action without delay. | Potentially costly errors if protective action is subsequently found to be unnecessary. | Potentially serious and especially irreversible harms. |
| Sound science | High levels of scientific information available for the decision maker. | Slow and expensive. Potential for interminable argument over what is “rigorous” science, risking paralysis of decision-making. | Big budget available for research. Long timeframe permits research. |
| Consequences | If sensitive to perceptions of catastrophic harm, may provide “early warning”. | If action is limited to the emergence of a “crisis” situation, then such action may be too late. | Useful where severe environmental consequences are considered likely. |
| Consensus | Potentially more likely to be voluntarily adopted and complied with by industry. | If a compromise is made, may not provide the most scientifically rigorous method. | High levels of disagreement about the choice of alternative scientific methodologies. |
| Estimation and avoidance | Potentially rapid, inexpensive, and broadly applicable. Low administrative burden. | Not sensitive to locality and specific circumstances. May inadvertently authorise negative impacts on localities. | Useful when many entities are to be regulated. Probably only feasible if consequences are not predicted to be severe and/or there are low levels of scientific uncertainty. |
| Adaptive management | Logical appeal permitting incremental progress. Potentially an efficient use of research resources. | Risk of impact from “experimental” approval. Administrative costs of ongoing regulatory oversight. | Useful for the trial of novel activities provided low environmental harm is likely. Research resources also need to be available. |
Source: (Jones, 2007[73]).
Analysis and categorisation of selected PP applications
Several authors have set out to examine the variety of interpretations adopted in the application of the PP in regulatory decisions and court judgements. This section presents a number of examples focusing primarily on the EU context and illustrates the difficulties that relevant institutions often face in applying the PP consistently.
A central criterion used in existing literature refers to the strength of application of the PP. Garnett and Parsons, for instance, establish three main strength levels (weak, moderate and strong) based on the following attributes: severity of potential harm prompting precautionary action; degree of epistemic uncertainty/quality of evidence prompting precautionary action; and nature of measures taken and provisions for review (Garnett, K. and Parsons, D. J., 2017[58]). The spectrum of weak to strong refers to the standard of scientific proof required to invoke the principle: weak requiring a higher standard than moderate, with strong requiring the lowest standard of proof.
According to the Court of Justice of the EU, “the precautionary principle can be defined as a general principle of Community law requiring the competent authorities to take appropriate measures to prevent specific potential risks to public health, safety and the environment, by giving precedence to the requirements related to the protection of those interests over economic interests.”7 According to the findings of the EU project REconciling sCience, Innovation and Precaution through the Engagement of Stakeholders (RECIPES) (Vos and De Smedt, 2020[12]), the criteria for performing a proper risk assessment or cost-benefit analysis are not consistently checked by the Court (see examples in Box 3.7). It is highlighted, however, that the Court is not bound by the guidance laid down in the Commission’s Communication on the precautionary principle.
Box 3.7. ECJ cases analysed in the RECIPES project
One of the most complex ECJ cases involving the use of the PP was the Pfizer case1 (and the linked Alpharma case2).
The case arose from a 1999 EU regulation banning antibiotic additives in animal feed, on the basis that bacterial resistance to antibiotics could be transferred to humans. This antibiotic “transfer link” was challenged by Pfizer Animal Health and Alpharma, who stated that the ban was based on a zero-risk approach, instead of a thorough risk assessment. In its conclusion, the Court upheld an interpretation of the PP that can be inconsistent with the Communication’s evidence-based approach. It invoked a strict reverse burden of proof in its ruling, stating that the company was unable to prove conclusively that there was no link between the use of an antibiotic as an additive in animal feed and the development of antibiotic resistance in humans. The application of the PP in this case proved useful in preventing irreversible harm on a significant scale.
In the Afton case,3 the Commission had not conducted a risk assessment to determine the negative impact of methylcyclopentadienyl manganese tricarbonyl (MMT) on pollution abatement techniques. The Court nevertheless considered the restrictions proportionate, finding that the Commission had struck a careful balance between the interests of consumers and those of traders.4
In the Bayer CropScience case,5 the Court accepted expert consultations as a sufficient risk assessment and upheld the restriction on the uses of clothianidin, imidacloprid and thiamethoxam.6
In the Paraquat7 and Gowan8 cases, the Court supported the use of the PP to ban substances in the absence of full scientific evidence.
1. Case T-13/99, Pfizer Animal Health v. Council of the European Union, (2002).
2. Case T-70/99, Alpharma Inc. v Council of the European Union (2002).
3. Case C-343/09 Afton Chemical Limited v Secretary of State for Transport (2010).
4. Vos E. and Smedt K., RECIPES, WP1 Report: Taking stock as a basis for the effect of the precautionary principle since 2000 (2020).
5. Case T-429/13 and T-451/13 Bayer CropScience AG and Others v European Commission.
6. Vos E. and Smedt K., RECIPES, WP1 Report: Taking stock as a basis for the effect of the precautionary principle since 2000 (2020).
7. Case T-229/04, Sweden v Commission (2007).
8. Case C-79/09, Gowan Comércio Internacional e Serviços Lda v. Ministero della Salute (2010).
Source: (Vos and De Smedt, 2020[12]).
Garnett and Parsons (Garnett, K. and Parsons, D. J., 2017[58]) have also investigated how the PP has been applied in EU regulatory decisions and court judgments. In most cases, their review pointed to a “weak” application of the PP, where a high standard of scientific proof needed to be established before invoking the principle. More precisely, in cases regarding food safety and public health, the Court required a high standard of proof for invoking the PP by setting out requirements for strong scientific evidence, a cost-benefit analysis, and some consideration of the effectiveness of the measures. The authors notably highlight the examples below:
In the case Commission v. Kingdom of Netherlands,8 the Commission challenged the interpretation of the precautionary principle and suggested that a high standard of proof was needed to impose restrictions on the sale of the vitamin-fortified products. The Commission suggested that such restrictions constituted unjustified obstacles to intra-Community trade and required credible evidence of the threat of serious harm. The Court ruled against the Dutch government, requiring high standards of proof.
In the case United Kingdom v. Commission,9 the Court proceeded to a moderate-to-strong application of the PP. The Commission had imposed stringent precautionary measures by banning the movement of animals, meat and derived products that had possibly been exposed to bovine spongiform encephalopathy (BSE). Despite the lack of definitive evidence, it was argued that the potential impact on human health warranted a high level of protection; protecting public health and maintaining public trust in European beef were prioritised over the effect on trade and U.K. agriculture.
This last case is insightful on several accounts (Blanc, F., Ottimofiore, G. and Macrae, D., 2015[74]): a) the initial decision could be described as a “generally sound” application of the PP (the potential risk was very severe even though there was uncertainty about whether it would materialise at its full potential); b) the intervention has contributed to a very risk-averse regulatory regime in relation to animal health that may be appropriate but may also not be completely proportionate; and c) there has been insufficient subsequent review and ex post evaluation of the situation and, as a result, no actual reassessment (this, despite the fact that the eventual harm from BSE was relatively limited: slightly above 200 cases worldwide reported as of 2014). Such a situation impedes adaptive learning and somewhat departs from the notion that PP application is a precautionary measure that should be reviewed regularly – not an irrevocable final decision.
In addition to the court cases above, the authors refer to European Commission Decision 1999/832/EC10 as an example of a weak-to-moderate application of the PP. This decision approved a proposal by the Dutch government to establish more restrictive regulations on the use of creosote. The Netherlands’ proposal was based on new scientific evidence and so the Commission approved the national provisions because a potential health risk was substantiated by credible evidence of harm.
Table 3.3 below displays the authors’ conclusions regarding the strength of application of the PP in selected EU Regulatory Decisions and Court Judgments.
Table 3.3. Strength of Application of the Precautionary Principle: Examples of EU Regulatory Decisions and Court Judgments
| Case | Subject(s) of protection | Severity of potential harm | Conditions for precautionary action: standard of proof | Nature of regulatory action | Strength of application |
|---|---|---|---|---|---|
| United Kingdom v. Commission: C-180/96 (BSE) | Human health | Severe | Relatively low-to-moderate | Product ban upheld; hazard considered “sufficiently severe” despite uncertainty about the causal link | Moderate-to-strong |
| Commission Decision 1999/832/EC (Netherlands, creosote) | Environment/human health | Severe-to-moderate | High | Product ban upheld; “credible evidence” of a threat of harm where local circumstances warrant precautionary action | Weak-to-moderate |
| Commission Decision 2003/653/EC (Austria, GMOs) | Environment/human health | Moderate-to-low | High | Product ban rejected; insufficient evidence around a “local or geographic-specific” risk of potentially “dangerous effects” | Weak |
| Council Decision 2009/121/EC (antimicrobials) | Human health/environment | Low | High | Product ban rejected; lack of sufficient evidence around “likelihood of occurrence and severity of consequences” | Weak |
| Commission v. Denmark: C-192/01 (fruit juice) | Human health | Low | High | Product ban rejected; insufficient scientific data to substantiate “real” threat to public health | Weak |
| Germany v. Commission: C-512/99 (mineral wool) | Human health/consumer safety | Low | High | Reclassification of carcinogenic potential of product rejected; lack of definitive scientific position on potential for harm | Weak |
| Commission v. Kingdom of the Netherlands: C-41/02 (breakfast cereal) | Human health | Low | High | Product ban rejected; insufficient scientific data to substantiate “real” threat to public health | Weak |
In contrast with the predominantly “weak” interpretations of the PP inventoried by Garnett and Parsons, in those judgments concerning the transfer of resistance to antibiotics from animals to humans and the authorisation of medicines for human use,11 the Court found that the competent public authorities could be obliged to actively adopt precautionary measures.
In a similar vein, the Court has often applied a broad interpretation of the PP to nature conservation. It considered, for example, that a project “may be granted authorisation only on the condition that the competent national authorities are convinced that it will not adversely affect the integrity of the site concerned”.12 Moreover, in a judgment concerning wastewater treatment,13 the Court considered that a degree of probable causality was sufficient to require Member States to adopt protection measures. More recently, in 2021, in Case C-499/18 P Bayer CropScience AG and Others v European Commission, the Court restated that, regarding the PP, there is a “low bar” on the initiation of a review and a “high bar” on challenging restrictive measures taken as a result of such a review.
Although the PP is not explicitly mentioned in the agreements of the World Trade Organization (WTO), its Appellate Body, which handles disputes between the WTO's Member States, has on several occasions reached decisions that could be construed as admitting recourse to that principle. In a case concerning EU measures prohibiting the import of meat treated with growth hormones,14 the Appellate Body pointed out that the PP did indeed find reflection in a specific provision of the WTO Agreement on the Application of Sanitary and Phytosanitary Measures, and that Members had the right “to establish their own appropriate level of sanitary protection, which may be higher than that implied in existing international standards, guidelines and recommendations”. In a case concerning US measures prohibiting the import of shrimp caught in nets that do not allow sea turtles to escape,15 the Appellate Body defined sea turtles as an “exhaustible natural resource” that warranted restrictive measures. In a dispute concerning a French ban on asbestos and products containing asbestos,16 the Appellate Body confirmed that a country may take measures to protect human health from serious risks on the basis of a divergent opinion coming from qualified and respected sources.
Precaution, the human factor and the socio-political context
Precautionary approaches have sometimes been characterised as part of public authorities’ quest for legitimacy and credibility (Lofstedt, 2004[75]). Many authors have expressed concerns that superfluous precautionary measures could be taken on the basis of unfounded public fears (European Parliamentary Research Service, 2015[11]), and some sociologists view the PP as a reaction to fears driven by situations of risk and uncertainty (G. Bronner and E. Géhin, 2010[76]). In addition, precautionary action may sometimes be the result of knee-jerk reactions in the wake of calamities. Conversely, misperceptions may cause decision makers to neglect risks associated with rare catastrophic events. In the context of precautionary approaches to technology and innovation (which is further discussed in the next chapter), it has been argued that, while “risk panics” might explain some precautionary policy positions (“grounded on emotion or fear of the unknown rather than reason”), the opposite emotional response to new technologies (an “innovation thrill”) should also be counteracted (Kaebnick,G. E. et al, 2016[57]).
More generally, while academic research and debate have traditionally emphasised the importance of scientific evidence for an appropriate application of the PP, the political economy and a number of socio-political and psychological elements arguably play an equally important role in shaping precautionary decision-making in the face of uncertainty. A traditional political science assumption is that politicians are risk averse in the face of electoral pressures. However, regulators may swing back and forth between risk taking and risk aversion depending on contingent events and the outcomes of past decisions. Moreover, the degree of risk aversion may depend on the distributional politics of particular issues across the population, as well as on the mobilisation and organisation of professional, public and private interest groups. Referring to the EU law context, Heyvaert has defined the PP’s main function as providing “a rationale and justification for administrative discretion”. According to the author, as a legal principle, it has not played a significant role in compelling decision-making by relevant EU institutions (Heyvaert, 2006[77]).
Indeed, academic research presented in the previous chapter, including by both Wiener and Li, seems to suggest that the PP is applied with a high degree of discretion. It has been pointed out that, in practice, the application of precautionary regulatory measures may be driven by interests and circumstantial issues rather than objectives that maximise social wellbeing (A. Dembe, Raffensperger, C., & Tickner, J., 2004[78]). Similarly, the decision-making context around the PP is often heavily politicised and not necessarily based on a careful assessment of the pros and cons of applying the principle:
Discussions on the precautionary principle quickly turn toxic. Critics argue that it gives policymakers and regulators a motive for far-reaching and intrusive regulatory interventions, in which the costs of regulation outweigh its benefits (Majone, 2016). Advocates argue that it is the most sensible approach to regulating possible harm in the absence of sound knowledge of the possible occurrence or impact of the risk (Taylor, 2018) (van der Heijden, 2019[20]).
The role of public perception and cognitive bias
Governments are created and run by humans, who can experience the same behavioural biases and barriers as individuals in society. A 2021 OECD research paper (Drummond, Shephard and Trnka, 2021[79]) maps the ways in which barriers and biases can affect the institutions, processes and tools of regulatory governance. The paper focuses on regulatory oversight bodies and regulatory management tools and provides a number of recommendations to better account for those factors in decision-making.
In the area of risk regulation specifically, Trappenburg has argued that, “in modern Western societies the societal emphasis on risk and the minimisation of it frequently leads to a call for increasing risk regulation”. She refers to this mechanism as “the vicious circle of risk regulation”, with overregulation and breaches of individual privacy as potential negative effects (Trappenburg and Schiffelers, 2012[80]). In a similar vein, Trappenburg and authors including Van Tol and Blanc et al. have referred to the notion of “risk regulation reflex” to characterise situations in which decisions may be adopted too fast, with little to no analysis or regard for alternatives, and underpinned by unrealistic expectations. Under such circumstances, excessive demands for absolute safety and protection may result in regulations that go beyond what is needed and reasonable (Blanc, F., Ottimofiore, G. and Macrae, D., 2015[74]), with increased risk aversion potentially leading to regulatory inflation.
The Dutch Risk and Responsibility programme has extensively investigated the risk regulation reflex. It has notably studied the attitudes of citizens towards safety risks. One important conclusion from their work is that:
When analysing citizens’ perceptions of risk it is more important to question whether a risk is morally acceptable rather than focusing on the exact size of the risk. Technocratic argumentation only strengthens the moral need to reduce risks, as it disconnects risks from the moral reasons why we perhaps ought to take them. And, only the latter contains the key to achieve risk acceptance by the public (Van Tol, 2016[81]).
To prevent the risk regulation reflex from resulting in suboptimal policy decisions, there is, according to the authors, a strong case for combining precaution and proportionality17 when dealing with uncertainty. It is also important to bear in mind that a track record of discounting risks under significant uncertainty, followed by significant harm, can lead to a loss of credibility and legitimacy, and thus lay the ground for risk regulation reflex-based decisions further down the line (a “snowball effect”).
Such issues have been observed in the context of vaccine regulation, including during the COVID-19 pandemic. Notably, some authors pointed to a potential “misapplication” of the PP because of blood clot fears associated with the use of COVID-19 vaccines. More precisely, it was argued that, rather than avoiding risk, the PP had instead moved countries away from one risk (blood clots) towards another (lower vaccine coverage) whose negative impact could be much larger:
Plans for COVID-19 vaccine safety monitoring until now have been based around rigorous scientific evaluation of safety signals, careful communications to ensure vaccine hesitancy is not increased, and ensuring that signals are investigated to examine if any risk requires regulatory action. Because potential safety signals arise often in vaccine and drug safety, with many being false signals, the precautionary principle doesn’t fit with such plans (Cox, 2021[82]).
Similar arguments or concerns have applied to other vaccines. For instance, claims of side effects have led to the suspension or delayed introduction of the human papillomavirus (HPV) vaccine. Due to reported side effects, Japan suspended its active recommendation of the HPV vaccine in June 2013. The active recommendation was resumed in April 2022 (Haruyama, R. et al., 2022[83]), based on emerging scientific estimates that the suspension, if not reversed, would likely result in almost 11,000 deaths from cervical cancer over the next 50 years (Reuters, 2020[84]).
In a similar vein, Coglianese and Carrigan identify psychological impulses and political pressures as factors that lead politicians to “rush to judgment” and neglect the multiple trade-offs that are inherent in regulatory decisions (Carrigan and Coglianese, 2012[85]). According to the authors, the complete elimination of all harms, including from low-probability catastrophic events, is not possible without stopping altogether the very activities that give rise to these harmful events. Thus, the very choice to regulate, instead of banning, a certain economic activity implicitly rejects the goal of eliminating a risk. For instance, authorities regulate restaurants to prevent them from operating in an unsanitary manner but do not shut them down. Likewise, authorities regulate to ensure that potentially high-risk industries follow safety protocols, but normally do not seek to shut them all down. Hence, when laws allow activities to go forward that carry a probability of catastrophic harm occurring, it is difficult to determine how unpredictable a given catastrophic event was. For instance, when an oil spill occurs, it may not be possible to ascertain whether it is due to the failure of the relevant regulator to provide sufficient oversight or, conversely, an inevitable consequence of trying to balance oil exploration with environmental concerns (Coglianese, C. (Ed.), 2012[86]).
Moreover, the authors argue that decisions may be driven by cognitive biases rather than well-grounded evidence. For example, the “availability heuristic” tends to focus attention on the worst-case outcomes and may make them appear to be more frequent or more likely than they actually are. Factors including media treatment of potential calamities may increase the pressure on politicians to act fast: “politicians decide on quick legislative action to respond to calamities as voters tend to focus less on the impact of the law than on the enactment of the law itself.” (Coglianese, C. (Ed.), 2012[86]) Consequently, solutions that are either unrelated to the cause of the disaster or highly inefficient/ineffective can end up being adopted.
The authors contend that regulation adopted by invoking the PP often originates from rushed judgements and a lack of complete trade-off analysis, and they conclude that “rigorous academic research needs to play a larger role in decision-making to avoid hasty reactions under political pressure.” The 2010 BP oil spill is presented as an example. This catastrophe led to changes in the underlying regulatory systems, including the creation of new agencies and the adoption of new laws in the US. The US Department of the Interior imposed a temporary moratorium on offshore drilling, closed its Minerals Management Service and transferred regulatory authority to a new Bureau of Ocean Energy Management, Regulation, and Enforcement. However, the authors find that, “although the sweeping reforms were predictable, they were neither necessary nor comprehensive, as they lacked better empirical grounding.”
Sunstein has developed extensive analytical work on the role of cognitive biases in the application of the PP. He argues that people are far more willing to tolerate familiar risks than unfamiliar ones, even when they are statistically equivalent. In this context, he adds that the PP often seems helpful because decision makers tend to focus on the “target” risk, rather than on the systemic effects of precautionary approaches or the risk-related consequences of risk reduction itself. Box 3.8 provides further detail, including examples.
Box 3.8. Behavioural insights into the application of the PP: the role of cognitive biases
Sunstein argues that a problem with the PP is that “it often offers little to no guidance on its application”, even though regulations invoking the PP in their risk assessments often define very specific benchmarks.
For instance, in the United States in 1993, the government was unsure which benchmark to use in regulating arsenic in drinking water. It proposed a limit of 50 parts per billion, which would cost around USD 200 million annually and prevent up to six of the hundred lives lost each year. The proposed limit was introduced by invoking the PP; however, the PP provided no actual guidance as to which arsenic limit would be the most effective.
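The trade-off implicit in this example can be made explicit by computing the cost per statistical life saved. The annual cost and the six-lives figure come from the text; treating the upper-bound six lives as the realised figure is a simplifying assumption for illustration:

```python
# Implied cost per statistical life saved in the arsenic example.
# The USD 200 million annual cost and the six-lives upper bound come from
# the text; using the upper bound as realised is a simplifying assumption.
annual_cost_usd = 200_000_000
lives_saved_per_year = 6  # upper-bound estimate

cost_per_life = annual_cost_usd / lives_saved_per_year
print(f"Implied cost per statistical life saved: USD {cost_per_life:,.0f}")
# → USD 33,333,333
```

A figure of this kind is exactly the benchmark the PP itself does not supply: whether roughly USD 33 million per statistical life is an acceptable price is a value judgement that falls outside the principle.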
Another example provided by Sunstein concerns global warming. He claims that even though “scientists have not reached a uniform accord on the dangers of global warming”, the Kyoto Protocol adopted in 1997 did require most industrialised nations to reduce greenhouse gas emissions to 92%-94% of 1990 levels.1 Furthermore, the perception of risks around nuclear energy, far higher than scientifically assessed risks (Slovic, 1987[87]) (Slovic and Peters, 2006[88]), often results in regulations making its use impossible or extremely difficult. However, according to the Intergovernmental Panel on Climate Change (IPCC) and the International Energy Agency (IEA), a nation that does not rely on nuclear power is likely to rely instead on fossil fuels such as coal-fired power plants. The impact of this shift is not accounted for in some of the assessments of measures claiming to apply the PP.
In a similar vein, according to the author, a highly precautionary approach to pharmaceutical regulation governing the introduction of new medicines onto the market can cause a “drug lag”. While precaution may help protect people from the harms associated with new, inadequately tested drugs, it may also prevent people from enjoying the potential benefits of those drugs. In line with this, Sunstein argues that those who invoke the PP to seek regulation against human cloning neglect the possibility that, without therapeutic cloning, many people may die. He also finds that banning the genetic modification of food might result in numerous deaths, as genetic modification holds the promise of producing food that is both cheaper and healthier.
Based on these examples, Sunstein also argues that regulating more stringently is not always the most precautionary approach.2
1. Sunstein made this argument in 2002. Its validity has arguably diminished since.
2. Research and debate on the genetic modification of food have developed considerably since; this argument is therefore also less applicable today.
Source: (Sunstein, 2002[89]).
Bellaby and Clark (2016[90]) acknowledge that the circumstances, beliefs, geopolitical situation, and attitudes of a population determine its awareness and acceptance of potential hazards. In their article on the role of public perception in risk management and the PP, they outline some of the uncertainties about the hazards of hydrogen as an energy carrier and examine qualitative evidence from deliberative Citizens’ Panels in England and Wales (see Box 3.9 for further details).
Box 3.9. Ambiguity, complexity and uncertainty surrounding the hazards of low-carbon hydrogen and public views of emergent risks
PP application to the use of low-carbon hydrogen energy
Increasing attention is being paid to the possibility of using green hydrogen as a partial replacement for fossil fuels. In the literature on the risks associated with hydrogen, certain authors assert that these risks are well known, whereas others emphasise that the properties of hydrogen pose special and, at this point, partly unpredictable risks in terms of certain specific safety aspects. The risks, hazards, and time horizons of hydrogen production may be additional elements of consideration. In terms of the International Risk Governance Council (IRGC) framework on risk governance, hydrogen energy is simultaneously an emergent and an uncertain risk.
Based on available evidence on hydrogen’s behaviour, it could however be argued that most use cases would not warrant the application of the PP and could be developed within a conventional risk-based regulatory environment. Indeed, there is no uncertainty about large-scale, longer-term environmental impacts of hydrogen, for instance – only specific, limited uncertainty about the probability of certain safety outcomes in particular situations and scenarios of use. This does not mean that the PP is applicable, but rather that these specific situations and scenarios need to be addressed through appropriate risk-management measures, with additional knowledge gradually built through piloting so that the regulatory framework can be adjusted and improved over time (OECD, 2021[61]).
The fact that some of the risks associated with the use of hydrogen energy are not yet perfectly known or quantified is not, in itself, sufficient to justify the use of the PP – no risks are ever perfectly known or quantified. As industry, research, and regulation develop, there will be opportunities to further refine the regulatory framework as more evidence becomes available. To be sure, a number of specific applications of hydrogen energy (e.g. in the home) do warrant more precaution due to higher uncertainty and potential harm. However, even in these cases, precaution should not revolve around a yes-or-no question but rather rely on stepwise, scalable experimental approaches.
Stakeholder perception of risks associated with hydrogen and the importance of public deliberation
The IRGC model suggests using the PP in risk assessment and management. However, regarding alternative scenarios for the possible development of hydrogen applications, there is also uncertainty reflecting different stakeholder interests and values, and about which regulatory regimes are appropriate. Hence, the IRGC model recommends deliberative methods and participatory discourse to address some of these issues. In line with the notion of concern assessment, the framework includes consultation, participation, and public engagement, which is deemed necessary to build more transparent and inclusive systems of risk governance. Indeed, without such public engagement to improve public knowledge and awareness of hydrogen technologies, there is a risk that projects will not obtain a social licence.
As part of a wider series of case studies about hydrogen in England and Wales, the authors carried out two all-day meetings with citizens’ panels in Teesside (Middlesbrough, in Northeast England) and Wales (Llanelli, South Wales) during 2008–2009. These two areas were selected because they already had some hydrogen production plants, as well as demonstration projects for hydrogen energy technologies. During the two meetings, members of the public were provided with basic information about hydrogen, including alternative scenarios created by a wider use of hydrogen. The authors identified several knowledge gaps about the nature and properties of hydrogen as an energy carrier, and about the regulatory codes and standards required to deal with anticipated risks. The authors found that “though the final deliberations by this panel suggested some positive interest in hydrogen use as energy carrier, this was conditional upon people receiving more detailed information and reassurance about measures to regulate safety in the entire system of production, storage, and distribution.” This experience illustrates that public perception of risks associated with hydrogen can depend on the knowledge and information made available.
The evidence outlined in the article shows the importance of public deliberation about the possible hazards and risk governance of future applications of hydrogen. While the authors acknowledge that conducting deliberative citizens’ panels does not solve the problem of achieving public acceptance of emergent risks, the IRGC framework differentiates important dimensions of risks, which may benefit from wider citizen involvement and debate. For instance, citizen involvement can help ensure more transparent and inclusive risk governance, which is recommended in situations where there is a high degree of uncertainty about framing the risk problem.
Source: Authors’ analysis, (Bellaby, P. and Clark, A., 2016[90]) (Robert Flynn, Miriam Ricci & Paul Bellaby, 2012[91]).
Available evidence points to the importance of inclusion and transparency in risk management and communication. Unconditional pro-transparency approaches may however prove counter-productive by undermining trust and leading to risk-regulation-reflex situations. For instance, Bouder and Löfstedt have highlighted the “ambivalent” relationship between transparency and risk communication; they distinguish between “fishbowl” transparency (full disclosure of information without explanation or contextualisation), and reasoned or managed transparency, which “is about keeping sight of the impact of openness and disclosure on the wider audience” (Löfstedt, R. and Bouder, F., 2014[92]). They advocate the latter, which implies taking the science of risk communication into account. In a similar vein, O’Neill has argued that current “enthusiasm for ever more complete openness and transparency has done little to build or restore public trust”, as “the very technologies that spread information so easily and efficiently are every bit as good at spreading misinformation and disinformation” (O’Neill, 2002[93]).
Complementary insights regarding the role of perception and sociological phenomena can be found in Wiener’s The Tragedy of the Uncommons (Wiener, 2016[94]) (Wiener, 2021[95]). This research explores the factors that can help society learn through experience and collective mobilisation. The tragedy stems from the mismanagement and misperception of rare catastrophic events, which the author attributes to psychological unavailability, mass numbing (the larger the number of fatalities, the fewer people will care about the lost lives), and under-deterrence (owing to weak legal mechanisms). He concludes that expert assessment is needed to overcome the public neglect of uncommon risks. While risk assessments often rely on policy learning from experience and experimentation, rare one-time occurrences offer no such opportunities for learning. According to the author, foresight, anticipation and precaution are thus particularly important under such circumstances, as is the careful analysis of potential risk-risk trade-offs (preventing one catastrophe might “invite” another).
In the specific context of climate change and the energy transition, there are a number of examples of debatable PP-driven regulatory decisions that have arguably failed to consider the above-mentioned risk-risk trade-offs. As discussed earlier in this chapter (see When (and how) to apply the PP: there is no silver bullet), nuclear energy and GMOs have strong potential, respectively, for limiting/reducing CO2 emissions/concentrations and enabling climate change adaptation in agriculture. Both these technologies have, however, been subject to particularly stringent regulatory approaches invoking the PP. These include outright abandonment in the case of nuclear energy, and a de facto EU moratorium on new GM crops from 1999 to 2004 that “steered the development of an extremely strict and expensive regulatory framework concerning the import and cultivation of GM crops” (Blancke et al., 2015[96]).
While there are a number of factors at play, perception and cognitive biases have arguably played an important role in the process leading up to such regulatory decisions. They have also contributed to preventing subsequent reassessment of the decisions based on newly available evidence and increased awareness of the countervailing risks (e.g. from a climate perspective). A 2015 study, for instance, explains the apparent discrepancy between public opinion and scientific evidence in terms of particular intuitions and emotions that make the mind highly susceptible to negative representations of GMOs. Psychological essentialism, an example of this, has been argued to play a role in public attitudes towards GMOs. Because of it, people are typically more opposed to GM applications that involve the transfer of DNA between two different species (“transgenic”) than within the same species (“cisgenic”) (Blancke et al., 2015[96]).
Regarding nuclear energy, a 2019 study examines the role of public perception as a constraint on the deployment of energy technologies. The study attempts to “disentangle public opposition due to the dread of nuclear power from opposition stemming from its actuarial risk”. Its results suggest that dread of nuclear power leads respondents to choose 40% less nuclear generation in 2050 than they would have chosen in the absence of this dread. Moreover, the authors indicate that “these methods could apply to other technologies, such as carbon storage, where there may be gaps between actuarial and perceived risks” (Abdulla et al., 2019[97]). This gap in risk perceptions has been evidenced many times by social scientists and affects many technologies beyond nuclear energy, though it is particularly salient in this case (Slovic, 1986[98]) (Slovic, 1987[87]) (Slovic and Peters, 2006[88]) (see Box 3.10).
Box 3.10. Quantitative representations of risk attitudes and perceptions
A common approach in social studies of perceived risk has been the psychometric paradigm, which aims to capture quantitative judgements about current and desired riskiness, as well as the corresponding desire for regulation. Results of psychometric studies point to several conclusions. First, the concept of risk differs between experts and laypeople. Overall, expert judgement tends to correlate with the actual number of fatalities a hazard presents, while non-experts also base their judgements on other characteristics (e.g. catastrophic potential or threat to future generations).
One of the most important factors in laypeople’s perception of risk is the so-called “dread risk”, associated with characteristics such as being uncontrollable, catastrophic, involuntary, or presenting a high risk for future generations (see Figure 3.4). Nuclear weapons and nuclear power score highest on the elements that constitute this factor. The higher a hazard’s score on the “dread” factor, the higher the perceived risk, and the greater the desire for risk reduction and regulation. In contrast, experts tend to assess risk as expected annual mortality. As a result, diverging assessments of the riskiness of a given action or technology, and of the related acceptability of risk, may emerge.
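The logic of the dread factor can be illustrated with a small, entirely hypothetical scoring exercise. The real psychometric paradigm derives factors from large survey datasets via factor analysis; the hazards, ratings (on a 1-7 scale) and simple averaging below are assumptions made purely for illustration:

```python
# Toy illustration of a psychometric "dread" score: the mean of hypothetical
# ratings (1 = low, 7 = high) on the characteristics listed in the text.
# All hazard names and ratings are invented for illustration only.
DREAD_CHARACTERISTICS = ("uncontrollable", "catastrophic",
                         "involuntary", "risk to future generations")

ratings = {
    "nuclear power": {"uncontrollable": 6, "catastrophic": 7,
                      "involuntary": 5, "risk to future generations": 7},
    "home appliances": {"uncontrollable": 2, "catastrophic": 1,
                        "involuntary": 2, "risk to future generations": 1},
}

def dread_score(hazard: str) -> float:
    """Mean rating across the dread characteristics (higher = more dread)."""
    r = ratings[hazard]
    return sum(r[c] for c in DREAD_CHARACTERISTICS) / len(DREAD_CHARACTERISTICS)

for hazard in sorted(ratings, key=dread_score, reverse=True):
    print(f"{hazard}: dread score {dread_score(hazard):.2f}")
```

In this toy setting, the hypothetical “nuclear power” hazard scores far higher on dread than the everyday hazard, mirroring the divergence the text describes between lay dread-based rankings and expert mortality-based rankings.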
The political economy of precaution
Given this section’s earlier discussion on the importance of motivations and the political economy surrounding decision-making, it may be useful to explore how the application of the PP can be affected by incentive structures and level of alignment between the interests of the different actors involved.
A 2008 report by the UK’s Risk and Regulation Advisory Council articulates a number of valuable concepts pertaining to the socio-political or political economy context surrounding risk and precaution:
Risk landscape: constituted by “risk actors” and the interactions between them. These actors are groups involved with public risk, “including Ministers, civil servants, parliamentarians, the judiciary, the insurance sector, the media, subject-matter experts, single issue lobby groups, standards setters, compliance officers and risk managers”. The risk landscape “influences individuals or organisations responsible for a public risk – usually business, public bodies or the public – to respond in a particular way”.
Riskmongers: “people or groups who conjure up or exaggerate risks inappropriately. Sometimes this will be in order to create some kind of advantage for themselves, such as financial gain, attention, power or even job security. Often it will be well intentioned but misguided.”
Risk alarms: “external influences that can produce a response on the part of a particular risk actor […] These risk alarms can prompt a response from a particular risk actor which, in turn, may influence the actions or perceptions of others.” Examples provided include: events that raise or expose risks (e.g. 9/11, the banking crisis, publication of WHO report highlighting health risks); emerging risk issues (e.g. security of energy supply, pandemic flu, domestic security); newsworthy stories that highlight risk (e.g. flooding, bird flu, child abduction); individuals or groups who stand to gain from highlighting concerns or raising anxiety (e.g. conservation groups, NIMBY campaigners); and issues of broad concern to the public (e.g. environmental, health, education, safety issues) (Risk and Regulation Advisory Council, 2008[100]).
Crucially, the interests and incentives of the various risk actors may be misaligned, leading to undesirable outcomes. Gollier and Treich, for example, note that politicians with strong career concerns may prefer to select the risk policy that “the public” believes is good, and conclude that this kind of political inefficiency may cause the regulator to depart from social welfare maximisation (European Commission, 2017[36]) (Gollier C., and Triech, N., 2003[64]). In addition, institutional constraints may affect incentive structures and alignment. These include the scope of regulatory mandates, the legal and financial liabilities of decision makers, and regulatory regime architectures. For example, fragmented architectures provide scope for inconsistent or incoherent practices to emerge across regimes.
On a related note, Hausken has used principal-agent problem analysis to describe how the PP is usually applied at a societal level. Principals represent society and are usually government departments or agencies which hire agents that may be publicly employed or recruited on the private market:
Principals and agents as players make decisions and play the game with each other. As Shapiro (2016) points out, agents may exercise discretion through self-interest at the expense of their principals, and principals may exercise discretion through choosing agents, taking advantage of agent expertise, and responding to uncertain contingent events. Both agents and principals may exploit each other, and an agent may accumulate knowledge and take over the role of the principal (Bhimani et al. 2010). Further problems arise because the players’ incentives generally don’t align. For example, agents may have information that the government as principal needs for making decisions, agents may withhold information or release disinformation, and agents may undermine government initiatives. (Hausken, 2019[101])
Hausken distinguishes four dimensions related to the PP and analyses the interactions between principals and agents in each of them:
Threat: threats may be beneficial for some agents (safety specialists who may thereby get contracts), but not others. For example, residents who may get their houses destroyed by flooding or wildfires are evidently not benefiting from threats.
Uncertainty: to the extent that principals are not unitary, interactions are possible between the various principals and sub-principals about suitable thresholds. To the extent that principals are not autonomous, games are possible between them and their superiors, stakeholders, pressure groups, audiences, and other actors that can somehow impact how the principals assess uncertainty.
Action: the agents may interact with external actors and each other — especially when their actions cannot be performed independently of each other, or the geographic space is so small that multiple agents cannot perform their actions simultaneously. The agents may also play games with the principals due to different interests; for example, if the agents possess hidden information or knowledge about costs or performance levels that the principals cannot readily access.
Command: the principals interact with each other and with many potential agents, with the aim of selecting a smaller number of agents to complete the task.
On a related note, Espluga has analysed “precautionary local politics” in the context of risks stemming from radiofrequency fields in Spain. The author identifies three main actors in any conflict relating to environmental risks:
Risk generators (agents): in this case, the cell phone operating companies who are interested in promoting and spreading the technology for company profit.
People affected: in this case, members of the public who consider themselves harmed by the installation of antennae, whether in economic, environmental or health terms.
People responsible for guaranteeing safety levels: in this case, public institutions such as national, regional or local government (principals).
Espluga notes that different interests and motives can often be distinguished within these groups. Cell phone operators were concerned by public alarm and suspicion and by the consequences these had for company development plans. When regulations were drawn up, they were also concerned about the impact on corporate results, especially as a new generation of telephones was being developed which would likely require the expansion of existing coverage. At the same time, local councils, owing to their proximity to the public, were the first government bodies to take precautionary measures, although there was little coordination between them. When the social conflict became more significant, government at a higher level passed specific legislation. However, because this relied on differing criteria, it failed to resolve the confusion. Espluga also points out that, “the media may be considered part of the system of interrelations, through its role in promoting, spreading, or mitigating environmental conflicts. The media make invisible risks visible and provide individuals with information that enables them to link risk factors to certain effects or damages.” (Espluga, 2005[102])
The Bovine Spongiform Encephalopathy (BSE) epidemic in the United Kingdom (see the section Analysis and categorisation of selected PP applications for further details) has also been cited as an example of the principal-agent problem. A study noted that “acceptable costs of precaution are equal to subsidies to be paid to the agent in order to elicit the higher effort under the most pessimistic assumptions of the principal on the possibility of catastrophic events”, and that, while the principal is expected to act based on the worst probability distribution, this was not sufficiently taken into consideration. Rendered products of cow and sheep carcasses were produced and labelled as feed for hogs, chickens and other farm animals, and continued to be exported to many countries. In 1996, when it became clear that this meal had been fed to cows and sheep, the UK banned animal protein in feedstock.
The issue (and pitfalls) of perceived neutrality
A problematic issue when it comes to applying risk-based regulation and precautionary approaches relates to the supposed “neutrality” and independence of certain risk actors (to use the terminology presented above). It concerns the real interests motivating these actors, their trustworthiness (or lack thereof), and how these factors help shape public opinion and risk perception. This issue is particularly acute in the case of the energy sector, although present in many other sectors.
For instance, it has been argued that NGOs are increasingly perceived to advance their own agendas,18 as opposed to being altruistic or impartial actors (Tortajada, 2016[103]). With regard to issues of independence, developments in recent years around links between the Russian authorities and environmental NGOs abroad can be rather telling.
For instance, the then NATO Secretary General Rasmussen accused Russia of meddling in European energy politics, for example by engaging with “so-called non-governmental organisations working against shale gas – to maintain European dependence on imported Russian gas”. Allegations of Russian funding to the environmentalist movement in the United States (Defence connect, 2022[104]) have also been raised by members of Congress (Johnson, 2017[105]).
Conversely, even though NGOs advocating for specific energy technology and policy choices are clearly not exempt from major conflicts of interest, it could be argued that the track record of several key industry players in the energy sector has generated such long-lasting suspicion and mistrust that, to the extent that regulatory decisions are strongly influenced by public opinion, this may have hindered the development of certain energy sources (e.g. nuclear) with potential for enhancing climate change mitigation.
Major reputational damage has resulted from the way some energy companies are perceived to have benefited from, and contributed to, global warming, and to continue doing so. Indeed, some are accused of wilfully concealing evidence of the phenomenon itself.19
While there is no downplaying the importance of the above-mentioned episodes (just as there is no case for criticising environmental organisations across the board), it may be useful to reflect upon the examples presented to question traditional notions of neutrality and independence, as well as their implications for regulatory decision-making and precautionary approaches in the energy sector and beyond.
Building on the review presented in this chapter, the next chapter explores the articulation between precaution and innovation — something that has no small relevance in the context of climate risks and the energy transition. In doing so, the next chapter pays special attention to existing attempts to overcome the apparent tensions between both principles.
References
[78] A. Dembe, Raffensperger, C., & Tickner, J. (2004), Protecting Public Health and the Environment: Implementing the Precautionary Principle, p. 219, https://doi.org/10.2307/3343463.
[97] Abdulla, A. et al. (2019), “Limits to deployment of nuclear power for decarbonization: Insights from public opinion”, Energy Policy, Vol. 129, pp. 1339-1346, https://doi.org/10.1016/j.enpol.2019.03.039.
[6] Beale, C. (2016), “Has the Chernobyl disaster affected the number of nuclear plants built?”, The Guardian, https://www.theguardian.com/environment/2016/apr/30/has-chernobyl-disaster-affected-number-of-nuclear-plants-built.
[90] Bellaby, P. and Clark, A. (2016), “Might More Harm Be Done Than Good When Scientists and Engineers Engage with the Public About New Technology Before it is Fully Developed? The Case of Hydrogen Energy”, International Journal of Science Education, Part B, Vol. 6, pp. 283-302.
[29] Belvèze, H. (2003), Le principe de précaution et ses implications juridiques dans le domaine de la sécurité sanitaire des aliments, pp. 387–396.
[52] Blanc, F. et al. (2015), Understanding and addressing the Risk Regulation Reflex. Prepared for the Dutch Risk and Responsibility Programme.
[74] Blanc, F., Ottimofiore, G. and Macrae, D. (2015), Understanding and addressing the Risk Regulation Reflex, https://www.researchgate.net/publication/276949964_Understanding_and_addressing_the_Risk_Regulation_Reflex.
[96] Blancke, S. et al. (2015), “Fatal attraction: the intuitive appeal of GMO opposition”, Trends in Plant Science, Vol. 20/7, pp. 414-418, https://doi.org/10.1016/j.tplants.2015.03.011.
[19] Burgess, A., A. Alemanno and J. Zinn (eds.) (2016), The evolution of the regulatory state: From the law and politics of antitrust to the politics of precaution, Routledge.
[85] Carrigan, C. and C. Coglianese (2012), Oversight in Hindsight: Assessing the U.S. Regulatory System In the Wake of Calamity, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2186529.
[71] Centre for Transport Studies (2018), The precautionary principle and regulatory impact assessment: on the need for initial screening of hazards in regulatory work with examples from transport. CTS Working Paper 2018:14.
[111] Climate & Capital media (2021), Is Germany’s Greenpeace Energy at peace selling natural gas?, https://www.climateandcapitalmedia.com/is-germanys-greenpeace-energy-at-peace-with-selling-mostly-natural-gas/.
[86] Coglianese, C. (Ed.) (2012), Regulatory Breakdown: The Crisis of Confidence in U.S. Regulation., http://www.jstor.org/stable/j.ctt3fhzfx.
[82] Cox, A. (2021), Blood clot fears: how misapplication of the precautionary principle may undermine public trust in vaccines, https://theconversation.com/blood-clot-fears-how-misapplication-of-the-precautionary-principle-may-undermine-public-trust-in-vaccines-157168.
[60] Crawford-Brown D. and Crawford-Brown S. (2011), The precautionary principle in environmental regulations for drinking water, pp. 379–387.
[53] Dean, M. (2022), A Practical Guide to Multi-Criteria Analysis, https://www.researchgate.net/publication/358131153.
[104] Defence connect (2022), Russia, green groups not-so-strange bedfellows, https://www.defenceconnect.com.au/key-enablers/10316-russia-green-groups-not-so-strange-bedfellows?utm_source=DefenceConnect&utm_campaign=11_07_22&utm_medium=email&utm_content=1&utm_emailID=155755c57d25d8ece93131a626d9220d1bf56b043f1eaa35f96ed9a5a3caecb5.
[79] Drummond, J., D. Shephard and D. Trnka (2021), “Behavioural insight and regulatory governance: Opportunities and challenges”, OECD Regulatory Policy Working Papers, No. 16, OECD Publishing, Paris, https://doi.org/10.1787/ee46b4af-en.
[37] EEA (2000), Cloudy crystal balls. An assessment of recent European and global scenario studies and models. Environmental issues series, no. 17, https://www.eea.europa.eu/publications/Environmental_issues_series_17.
[26] EPRS (2015), The Precautionary principle. Definitions, applications and governance., https://www.europarl.europa.eu/RegData/etudes/IDAN/2015/573876/EPRS_IDA(2015)573876_EN.pdf.
[102] Espluga, J. (2005), Precautionary local politics and coping with risks of radiofrequency fields in Spain.
[36] European Commission (2017), Future brief: The precautionary principle: decision-making under uncertainty.
[31] European Commission (2000), Communication from the Commission on the precautionary principle, Commission of the European Communities, COM(2000) 1 Final, https://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2000:0001:FIN:EN:PDF.
[65] European Environment Agency (2011), Late lessons from early warnings, https://www.eea.europa.eu/publications/environmental_issue_report_2001_22/Issue_Report_No_22.pdf/view.
[11] European Parliamentary Research Service (2015), The Precautionary Principle. Definitions, applications and governance, https://www.europarl.europa.eu/RegData/etudes/IDAN/2015/573876/EPRS_IDA(2015)573876_EN.pdf.
[30] European Risk Forum (2011), The Precautionary Principle Application and Way Forward.
[22] Farrow, S. and H. Hayakawa (2002), Investing in safety: An analytical precautionary principle, pp. 165-174.
[55] Fisher, E. (2007), Risk Regulation and Administrative Constitutionalism, Bloomsbury Publishing.
[72] Fosch-Villaronga, E. and M. Heldeweg (2018), “Regulation, I presume?” said the robot – Towards an iterative regulatory process for robot governance, pp. 1258-1277.
[76] Bronner, G. and E. Géhin (2010), L’inquiétant principe de précaution, PUF.
[58] Garnett, K. and D. Parsons (2017), Multi-Case Review of the Application of the Precautionary Principle in European Union Law and Case Law, https://onlinelibrary.wiley.com/doi/epdf/10.1111/risa.12633.
[34] Van Calster, G. (ed.) (2014), Research Handbook on Environment, Health and the WTO.
[10] Gemmell, J.C. and E.M. Scott (2013), Environmental regulation, sustainability and risk.
[64] Gollier, C. and N. Treich (2003), Decision-Making under Scientific Uncertainty: The Economics of the Precautionary Principle, pp. 77-103, https://econpapers.repec.org/RePEc:kap:jrisku:v:27:y:2003:i:1:p:77-103.
[42] Graham, J. and Wiener, J. (1995), Risk vs. Risk, Harvard University Press.
[109] Greenpeace (n.d.), Koch Industries: Secretly Funding the Climate Denial Machine, https://www.greenpeace.org/usa/fighting-climate-chaos/climate-deniers/koch-industries/.
[46] Hansen, S. and J. Tickner (2013), “The precautionary principle and false alarms — lessons learned”, in Late lessons from early warnings: science, precaution, innovation, European Environment Agency, http://www.eea.europa.eu/publications/late-lessons-2.
[67] Harsanyi, J. (1975), “Can the Maximin Principle Serve as a Basis for Morality? A Critique of John Rawls’s Theory”, American Political Science Review, Vol. 69/2, pp. 594-606, https://doi.org/10.2307/1959090.
[83] Haruyama, R. et al. (2022), Japan resumes active recommendations of HPV vaccine after 8·5 years of suspension, https://www.thelancet.com/journals/lanonc/article/PIIS1470-2045(22)00002-X/fulltext#seccestitle10.
[101] Hausken, K. (2019), Principal–Agent Theory, Game Theory, and the Precautionary Principle, pp. 105–127, https://doi.org/10.1287/deca.2018.0380.
[77] Heyvaert, V. (2006), “Facing the consequences of the precautionary principle in European Community law”, European Law Review.
[18] Hood, C., H. Rothstein and R. Baldwin (2001), The Government of Risk, Oxford University Press, https://doi.org/10.1093/0199243638.001.0001.
[25] International Risk Governance Council (2006), Risk Governance, Towards an Integrative Approach.
[39] IPCC (2020), The concept of risk in the IPCC Sixth Assessment Report: a summary of cross Working Group discussions, https://www.ipcc.ch/site/assets/uploads/2021/01/The-concept-of-risk-in-the-IPCC-Sixth-Assessment-Report.pdf.
[38] IPCC (2010), IPCC Guidance Notes for Lead Authors of the IPCC Fifth Assessment Report on Consistent Treatment of Uncertainties: IPCC Cross Working Group Meeting on Consistent Treatment of Uncertainties, https://www.ipcc.ch/site/assets/uploads/2018/05/uncertainty-guidance-note.pdf.
[43] IPCC (2022), Climate Change 2022: Mitigation of Climate Change. Contribution of Working Group III to the Sixth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, https://doi.org/10.1017/9781009157926.
[69] IRGC (2017), Introduction to the IRGC Risk Governance Framework. Revised version, https://doi.org/10.5075/epfl-irgc-233739.
[70] IRGC (n.d.), International Risk Governance Council, https://irgc.org/risk-governance/irgc-risk-governance-framework/ (accessed on 13 April 2022).
[28] ISO (2021), ISO 31000 – Risk management, https://www.iso.org/iso-31000-risk-management.html.
[108] Iverson, T. and C. Perrings (2012), “Precaution and proportionality in the management of global environmental change”, Global Environmental Change, Vol. 22/1, pp. 161-177, https://doi.org/10.1016/j.gloenvcha.2011.09.009.
[105] Johnson, D. (2017), “Intelligence: Putin Is Funding the Anti-Fracking Campaign”, Newsweek, https://www.newsweek.com/intelligence-putin-funding-anti-fracking-campaign-547873.
[73] Jones, J. (2007), Regulatory Design for Scientific Uncertainty: Acknowledging the Diversity of Approaches in Environmental Regulation and Public Administration, https://www.jstor.org/stable/44248615.
[57] Kaebnick, G.E. et al. (2016), “Precaution and governance”, Science, https://research.ncsu.edu/ges/files/2014/03/Kaebnick-et-al-2016-Precaution-and-governance-of-emerging-technologies.pdf.
[44] Kahneman, D. and A. Tversky (1979), “Prospect Theory: An Analysis of Decision under Risk”, Econometrica, Vol. 47/2, pp. 263-291, http://www.jstor.org/stable/1914185?origin=JSTOR-pdf.
[5] Kiyar, D. and B. Wittneben (2012), “Nuclear Energy in the European Union after Fukushima: Political and Economic Considerations”, CESifo DICE Report, Vol. 10, pp. 9-15, https://www.researchgate.net/publication/264541536_Nuclear_Energy_in_the_European_Union_after_Fukushima_Political_and_Economic_Considerations.
[8] Le Gros, G. (2020), “The beginning of nuclear energy in France: Messmer’s plan”, Revue Generale Nucleaire, No. 5, pp. 56-59, https://inis.iaea.org/search/searchsinglerecord.aspx?recordsFor=SingleRecord&RN=52071181.
[47] Lemons, J. et al. (1997), “The Precautionary Principle: Scientific Uncertainty and Type I and Type II Errors”, Foundations of Science, https://link.springer.com/article/10.1023/A:1009611419680.
[4] Leslie, J. (2018), “After a Long Boom, an Uncertain Future for Big Dam Projects”, Yale Environment 360, https://e360.yale.edu/features/after-a-long-boom-an-uncertain-future-for-big-dam-projects.
[23] Li, H., J. Xu and J. Wiener (2021), “Comparing Environmental Risk Regulations in China and the United States”, Risk Analysis, https://doi.org/10.1111/risa.13797.
[92] Löfstedt, R. and F. Bouder (2014), “New transparency policies: risk communication’s doom?”, in J. Arvai and L. Rivers (eds.), Effective Risk Communication, Routledge/Taylor & Francis Group, Oxon and New York, pp. 73-90 (Earthscan Risk in Society).
[21] Löfstedt, R. (2011), Risk versus Hazard – How to Regulate in the 21st Century. Symposium on Risk versus Hazard.
[75] Löfstedt, R. (2004), The Swing of the Regulatory Pendulum in Europe: From Precautionary Principle to (Regulatory) Impact Analysis.
[68] Martuzzi, M. (2004), Role of the Precautionary Principle in Decision Making in Environment and Health, p. S173, https://doi.org/10.1097/00001648-200407000-00460.
[2] NASA (2022), How Do We Know Climate Change Is Real?, https://climate.nasa.gov/evidence/#otp_references.
[54] Nautiyal, H. and V. Goel (2021), “Sustainability assessment: Metrics and methods”, Methods in Sustainability Science, pp. 27-46, https://doi.org/10.1016/B978-0-12-823987-2.00017-9.
[62] NEA (2023), The NEA Small Modular Reactor Dashboard, OECD Publishing, https://www.oecd-nea.org/jcms/pl_78743/the-nea-small-modular-reactor-dashboard?details=true.
[7] NEA (2022), Principles and Practice of International Nuclear Law, OECD Publishing, https://www.oecd-nea.org/jcms/pl_65159/principles-and-practice-of-international-nuclear-law.
[107] Negin, E. (2020), Minnesota Sues ExxonMobil, Koch Industries, and Top Oil and Gas Trade Association for Climate-Related Consumer Fraud, https://blog.ucsusa.org/elliott-negin/minnesota-sues-exxonmobil-koch-industries-and-top-oil-and-gas-trade-association-for-climate-related-consumer-fraud/.
[63] Nicolia, A. et al. (2013), “An overview of the last 10 years of genetically engineered crop safety research”, Critical Reviews in Biotechnology, Vol. 34/1, pp. 77-88, https://doi.org/10.3109/07388551.2013.823595.
[93] O’Neill, O. (2002), Onora O’Neill. The Lectures, https://www.immagic.com/eLibrary/ARCHIVES/GENERAL/BBC_UK/B020000O.pdf.
[61] OECD (2021), Recommendation of the Council for Agile Regulatory Governance to Harness Innovation. OECD/LEGAL/0464.
[110] OECD (2010), Risk and Regulatory Policy: Improving the Governance of Risk, OECD Reviews of Regulatory Reform, OECD Publishing, Paris, https://doi.org/10.1787/9789264082939-en.
[66] OECD (2008), Costs of Inaction on Environmental Policy Challenges: Summary Report, https://www.oecd.org/env/40501169.pdf.
[35] Patelli, E. and M. Broggi (2015), Uncertainty management and resilient design of safety critical systems, conference paper, NAFEMS World Congress 2015 incl. the 2nd International SPDM Conference, San Diego, CA, 21-24 June 2015.
[56] Persson, E. (2016), “What are the core ideas behind the Precautionary Principle?”, Science of The Total Environment, https://doi.org/10.1016/j.scitotenv.2016.03.034.
[49] Rawls, J. (1971), A Theory of Justice, Oxford University Press.
[27] Renn, O. et al. (eds.) (2008), Global Risk Governance: Concept and Practice Using the IRGC Framework, Springer.
[32] Renn, O. et al. (coord.) (2003), The Application of the Precautionary Principle in the European Union.
[84] Reuters (2020), Japan’s halt of regular HPV vaccine to cause thousands of cancer deaths: study, https://www.reuters.com/article/us-japan-hpv-vaccine-study-idUSKBN2050K9.
[100] Risk and Regulation Advisory Council (2008), The risk landscape: interactions that shape response to public risk, https://webarchive.nationalarchives.gov.uk/ukgwa/20100104183913/http:/www.berr.gov.uk/deliverypartners/list/rrac/index.html.
[91] Flynn, R., M. Ricci and P. Bellaby (2012), Ambiguity, complexity and uncertainty surrounding the hazards of hydrogen and public views of emergent risks, pp. 373-387, https://doi.org/10.1080/13669877.2011.634517.
[106] Sample, I. (2005), “The father of climate change”, The Guardian, https://www.theguardian.com/environment/2005/jun/30/climatechange.climatechangeenvironment2.
[3] Scottish Environment Protection Agency (2022), The Water Environment (Controlled Activities) (Scotland) Regulations 2011 (as amended). A Practical Guide, https://www.sepa.org.uk/media/34761/car_a_practical_guide.pdf.
[59] Scottish Government (2016), Independent Review of Underground Coal Gasification - Report, https://www.gov.scot/publications/independent-review-underground-coal-gasification-report/documents/.
[15] Shapiro, S. and R. Glicksman (2003), Risk Regulation at Risk: Restoring a Pragmatic Approach, Stanford University Press.
[50] Shrader-Frechette, K. S. (1991), Risk and rationality. Philosophical foundations for populist reforms, University of California Press.
[87] Slovic, P. (1987), “Perception of Risk”, Science, Vol. 236/4799, pp. 280-285, https://doi.org/10.1126/science.3563507.
[98] Slovic, P. (1986), “Informing and Educating the Public About Risk”, Risk Analysis, Vol. 6/4, pp. 403-415, https://doi.org/10.1111/j.1539-6924.1986.tb00953.x.
[88] Slovic, P. and E. Peters (2006), “Risk Perception and Affect”, Current Directions in Psychological Science, Vol. 15/6, pp. 322-325, https://doi.org/10.1111/j.1467-8721.2006.00461.x.
[99] Slovic, P. and E. Weber (2002), Perception of Risk Posed by Extreme Events.
[40] Stirling, A. (2007), “Risk, precaution and science: towards a more constructive policy debate.”, EMBO reports, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1852772/.
[89] Sunstein, C. (2002), Beyond the Precautionary Principle.
[33] Taleb, N.N. et al. (2014), The Precautionary Principle (with Application to the Genetic Modification of Organisms), https://arxiv.org/pdf/1410.5787.pdf.
[1] Thompson, C. (2019), How 19th Century Scientists Predicted Global Warming, https://daily.jstor.org/how-19th-century-scientists-predicted-global-warming/.
[103] Tortajada, C. (2016), “Nongovernmental Organizations and Influence on Global Public Policy”, Asia & the Pacific Policy Studies, Vol. 3/2, pp. 266-274, https://doi.org/10.1002/app5.134.
[13] Tosun, J. (2013), Risk Regulation in Europe, Springer New York, New York, NY, https://doi.org/10.1007/978-1-4614-1984-6.
[80] Trappenburg, M. and M. Schiffelers (2012), “How to Escape the Vicious Circle: The Challenges of the Risk Regulation Reflex”, European Journal of Risk Regulation, Vol. 3/3, pp. 283-291, https://doi.org/10.1017/s1867299x00002191.
[48] Underwood, A. (1997), “Environmental decision-making and the precautionary principle: what does this principle mean in environmental sampling practice?”, Landscape and Urban Planning, https://www.sciencedirect.com/science/article/pii/S016920469780000X.
[20] van der Heijden, J. (2019), Risk governance and risk-based regulation: A review of the international academic literature. State of the Art in Regulatory Governance Research Paper – 2019.02.
[81] Van Tol, J. (2016), “Dutch Risk and Responsibility programme. Some research into citizens’ views on a proportionate handling of risks and incidents”, Journal of Risk Research, 19:8 1014-1021, https://doi.org/10.1080/13669877.2014.910691.
[16] Vogel, D. (2012), The Politics of Precaution. Regulating Health, Safety, and Environmental Risks in Europe and the United States, Princeton.
[24] Von Schomberg, R. (2012), The precautionary principle: Its use within hard and soft law, pp. 147-156, https://warwick.ac.uk/fac/cross_fac/esrcdtc/advanced/governance/02_von_schomberg.pdf.
[12] Vos, E. and K. De Smedt (2020), RECIPES, WP1 Report: Taking stock as a basis for the effect of the precautionary principle since 2000, https://recipes-project.eu/sites/default/files/2021-01/Report_Taking%20stock%20as%20a%20basis%20for%20the%20effect%20of%20the%20precautionary%20principle%20since%202000_Final.pdf.
[9] Weber, M. (1991), Summary and Observations of the Conference for a Nuclear Free 1990s, April 26-27, 1991, https://atomicinsights.com/wp-content/uploads/Antinuclear-strategy-April-1991.pdf.
[95] Wiener, J. (2021), Disregard and Due Regard, pp. 437-469, http://www.nyuelj.org/wp-content/uploads/2021/10/Wiener-Final.pdf.
[51] Wiener, J. (2020), Learning to Manage the Multi-Risk World, https://doi.org/10.1111/risa.13629.
[41] Wiener, J. (2018), “Precautionary Principle”, Principles of Environmental Law (Ludwig Kramer and Emanuela Orlando, eds.) of the Encyclopedia of Environmental Law (Michael Faure, ed.), https://www.elgaronline.com/view/nlm-book/9781786436986/b-9781785365669-VI_13.xm.
[45] Wiener, J. (2016), Precaution and Climate Change, Oxford Univ. Press, https://global.oup.com/academic/product/the-oxford-handbook-of-international-climate-change-law-9780199684601.
[94] Wiener, J. (2016), “The Tragedy of the Uncommons: On the Politics of Apocalypse”, Global Policy, Vol. 7, pp. 67-80, https://doi.org/10.1111/1758-5899.12319.
[14] Wiener, J. and M. Rogers (2002), Comparing precaution in the United States and Europe, pp. 317-349.
[17] Wiener, J. et al. (eds.) (2011), The Reality of Precaution: Comparing Risk Regulation in the United States and Europe, Routledge.
Notes
← 1. As early as 1896, Swedish scientist Svante Arrhenius put forward the theory of the greenhouse effect and calculated that a doubling of carbon dioxide in the atmosphere would increase temperatures by 5°C to 6°C (Sample, 2005[106]).
← 2. Additional important aspects that also need to be considered are the execution, supervision, enforcement and evaluation of the measures derived from those decisions.
← 3. The project entitled “Regulatory strategies and Research needs to compose and specify a European policy on the application of the Precautionary Principle” (PrecauPri), is financed by the European Commission (STRATA programme) and aims to put forward a general framework for precautionary risk regulation in Europe.
← 4. Consideration of scientific uncertainty is key to applying the PP and requires care in practice. For instance, scientific uncertainty may attach to the calculation of environmental impacts over the long term. In some cases, the immediate environmental impact of a technology can be determined more or less accurately (e.g. coal mining and coal burning for electricity production), whereas very long-term events are inherently more difficult to predict; even as scientific knowledge accumulates, some space for “what if one day” considerations always remains. This creates a potential imbalance in risk management, whereby comparatively more harmful but more certain risks are regulated rather more lightly than probably less harmful but more uncertain ones. This is where the “refocused” approach to understanding and applying the precautionary principle that we outline is most essential, to ensure that proper risk balance and proportionality are retained.
← 5. Bayesian decision rules (or Bayesian Decision Theory, BDT) are a statistical approach to classification that quantifies the trade-offs between possible decisions based on probabilities and costs. The rules identify the most reasonable action to take given four quantities: the “prior probability”, the “likelihood”, the “evidence” and the “posterior”.
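As a purely illustrative sketch of such a rule (all states, actions, probabilities and cost figures below are hypothetical, not drawn from any actual regulatory assessment), the prior and likelihood are combined into a posterior via Bayes’ theorem, and the action with the lowest expected cost is then selected:

```python
# Illustrative Bayesian decision rule (hypothetical numbers throughout).
# States of the world: a hazard is either "harmful" or "safe".
# Actions available to the regulator: "restrict" or "allow".

def posterior(prior_harmful, p_evidence_given_harmful, p_evidence_given_safe):
    """Bayes' theorem: P(harmful | evidence) from prior, likelihood, evidence."""
    evidence = (p_evidence_given_harmful * prior_harmful
                + p_evidence_given_safe * (1 - prior_harmful))
    return p_evidence_given_harmful * prior_harmful / evidence

# Hypothetical cost of each action in each state of the world.
costs = {
    "restrict": {"harmful": 10, "safe": 10},   # foregone benefits either way
    "allow":    {"harmful": 100, "safe": 0},   # large loss if hazard is real
}

def best_action(p_harmful):
    """Pick the action with the lowest expected cost given P(harmful)."""
    expected = {
        action: c["harmful"] * p_harmful + c["safe"] * (1 - p_harmful)
        for action, c in costs.items()
    }
    return min(expected, key=expected.get)

p = posterior(prior_harmful=0.05,
              p_evidence_given_harmful=0.9,
              p_evidence_given_safe=0.2)
print(round(p, 3), best_action(p))  # prints: 0.191 restrict
```

Even with a low prior probability of harm (5%), the asymmetric costs make “restrict” the expected-cost-minimising choice once the evidence raises the posterior to about 19%, which is the sense in which BDT can formalise a precautionary trade-off.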
← 6. Geoengineering here refers to the release of particles (such as sulphur aerosols) into the atmosphere which, by reflecting solar radiation, intervene in the Earth’s climate and could potentially cool the planet. The workings of this technology, however, are not fully understood and are surrounded by high levels of uncertainty and potential danger, thus making the case for the use of the precautionary principle.
← 7. Judgment of 26 November 2002 in case T-74/00 (Artegodan & Others v. Commission, paragraph 184).
← 8. Judgment of 2 December 2004 in case C41/02 (Commission v. Kingdom of the Netherlands).
← 9. Judgment of 5 May 1998 in case C180/96 (United Kingdom of Great Britain and Northern Ireland v. Commission).
← 10. C. 1999/832/EC: Commission Decision of 26 October 1999 concerning the national provisions notified by the Kingdom of the Netherlands concerning the limitations of the marketing and use of creosote.
← 11. Judgments of 11 September 2002 in case T-13/99 (Pfizer, paragraph 444) and case T-70/99 (Alpharma, paragraph 355).
← 12. Judgment of 7 September 2004 in case C-127/02 (Waddenzee, paragraph 45).
← 13. Judgment of 23 September 2004 in case C-280/02 (Commission v. France, paragraph 34).
← 14. Appellate Body report of 16 January 1998 on Dispute DS26, paragraph 124.
← 15. Appellate Body report of 12 October 1998 on Dispute DS58, paragraph 129.
← 16. Appellate Body report of 12 March 2001 on Dispute DS135, paragraphs 167, 168 and 178.
← 17. It should be noted that the question of proportionality is a topic of debate in the context of the PP. As noted by Iverson and Perrings (2012[108]), proportionality is an “unresolved question”, and there is disagreement about how precautionary efforts should be balanced so that policy solutions (and related costs) are proportional to the level of protection attained. From a broader regulatory policy standpoint, however, the OECD has consistently emphasised the crucial importance of proportionality (OECD, 2010[110]; OECD, 2021[61]).
← 18. An example of potentially conflicting interests can be found in the energy cooperative set up by Greenpeace in Germany in the wake of power market deregulation in the late 1990s. While Greenpeace advocated specific energy policy decisions, notably the exit from nuclear power (which led to a strong increase in reliance on natural gas), it was also, through this cooperative, selling gas-generated electricity to consumers. The German power sector has continued to undergo important transformation, and the cooperative has come under public scrutiny regarding its sales of natural gas (Climate & Capital media, 2021[111]).
← 19. For example, in June 2020 the Minnesota Attorney General sued ExxonMobil, Koch Industries and the American Petroleum Institute (the leading oil and gas industry trade association in the United States) for allegedly having misled the public about the role fossil fuels play in causing the climate crisis. This lawsuit is similar to consumer fraud cases filed in New York and Massachusetts against ExxonMobil (Negin, 2020[107]). Koch Industries, a group that may have a vested interest in delaying climate action, has also come under heavy criticism for directly financing groups that have attacked climate change science and policy solutions; Greenpeace estimates that this financing amounted to USD 145 million between 1997 and 2018 (Greenpeace, n.d.[109]).