Cost-Benefit Analysis and the Environment
Chapter 17. Political economy of cost-benefit analysis
Abstract
Questions about why patterns of use and influence of cost-benefit analysis (CBA) take the form they do are bound up with political economy, necessitating a richer understanding of the policy formulation process. If, in the extreme, all decisions were made on the basis of CBA, decision makers would have no flexibility to respond to the various influences that are at work demanding one form of policy rather than another. In short, CBA, or for that matter any prescriptive calculus, compromises the flexibility that decision makers need in order to “act politically” or meet other policy objectives. Unsurprisingly, this constrains use or shapes the nature of use in particular ways. Political economy then seeks to explain why the economics of the textbook is rarely embodied in actual decision-making and, related to this, policy-formulation processes. But explaining the gap between actual and theoretical design is not to justify the gap. So while it is important to have a far better understanding of the pressures that affect actual decisions, the role of CBA remains one of explaining how a decision should look if the economic approach is adopted.
17.1. Introduction
The methodology of cost-benefit analysis (CBA) has been developed over a long period of time. It has also been subjected to many criticisms, as has its theoretical basis – welfare economics. Nonetheless, most (though certainly not all) economists continue to recommend the use of CBA as a “decision-informing” procedure. Chapter 16, furthermore, reported findings of substantial use of CBA across OECD countries, at least insofar as certain environment-related policy sectors are concerned. Yet such evidence on actual policy and investment decisions also reveals another story: appraisal processes often downplay the role of CBA, despite it commanding consensus among economists, and actual decisions (based perhaps on that appraisal) are often made in a manner that seems inconsistent with CBA. One reason for this disparity between theory and practice is fairly obvious: other factors which are important to making a decision often require other tools to be used in impact assessment more generally (see Chapter 18). In some cases these other factors may be deemed more important than information about monetary costs and benefits, and when this further evidence implies a different recommendation from that of CBA, it will be the latter that “loses out”. Nor can governments simply design policy measures without taking account of political and institutional realities. This, in turn, highlights a number of important considerations.
First, what economists may regard as an “optimal” instrument design tends to serve one overriding goal – economic efficiency. Actual decision-making demands that other goals are considered as well. Such goals are not necessarily consistent with each other, but they play a part in shaping practical policy formulation as well as how specific tools such as CBA are actually used.
Second, government is not simply a guardian of social well-being in the manner usually assumed in CBA textbooks. In fact, while “government” is a convenient umbrella term, it comprises a variety of different actors who are internal to the policy formulation process and who, in turn, are joined by others who are external to the process but who also have a stake in the outcome. The latter include pressure groups and lobbies which, in turn, can represent sets of conflicting interests and objectives.
Third, the above considerations indicate that the political and institutional context in which CBA takes place is complex, as is the ability of appraisal actors to negotiate this reality. That is, instead of decision makers being all-knowing and all-powerful, those involved in appraisal are better thought of as, to paraphrase Cairney (2016), limited in their ability to generate, as well as process, all of the information ideally needed to make “optimal” decisions. Put another way, these actors are rational (given their objectives) but this rationality is bounded, in interesting ways.
What all of this amounts to is that the “social welfare function” that underlies CBA is not the same as the social welfare function (or functions) that those involved in policy and investment formulation adopt. As a result, actual policy and “optimal” policy need not coincide. Evaluating exactly why this “gap” exists is very much a political economy approach to policy analysis and the policy process. This is the subject of this chapter, the remainder of which is structured as follows. It begins by continuing the discussion in Chapter 16 of use and influence of CBA in investment and policy decisions, although some of this relates more generally to its role in impact assessment processes rather than CBA per se. This discussion then moves on to examine possible explanations of such patterns of usage, including the political motives for using (or downplaying or not using at all) CBA. A more realistic view of the “how” and “why” of CBA use should not absolve decision makers from trying to do better, however. Indeed, a number of innovations that move practice in this direction of travel are also discussed.
17.2. CBA in reality: Use and influence revisited
Chapter 16 provided a range of responses by policy actors, in OECD countries, about their use of (environmental) CBA and its influence in policy formulation. This revealed a double-edged interpretation. On the one hand, CBA is used (sometimes extensively) and, on the face of it, those involved in this process perceive that it is influential, so these practical efforts are not in vain. On the other hand, this uptake is not as widespread as it might be, given progress both at the CBA frontier and in translating this progress into practical applications. Such findings broadly accord with those elsewhere in an emerging empirical literature, based on quantitative and qualitative data, which seeks to assess the extent of CBA use.
For example, evidence on the use of CBA in the World Bank was revealed in an assessment by the Independent Evaluation Group (IEG) (2011). The proportion of World Bank projects using CBA dropped significantly from 1970 to 2000. According to IEG (2011), one (proximate) explanation for this trend was a shift in the investment portfolio from policy sectors with a tradition of using CBA (e.g. energy, transport and urban development) to sectors without such a tradition (e.g. education, environment and health). Nonetheless, the IEG report still found a significant reduction in the use of CBA in traditional sectors in which the World Bank remains heavily committed to investing (e.g. physical infrastructure). Moreover, given the strides made in extending CBA thinking and practice to novel project contexts, a question inevitably arises as to why this progress has not been translated into actual appraisal in these new sectors.
In the United States, a review of 74 impact assessments issued by the US EPA from 1982 to 1999 found that while all of these regulations monetised at least some costs, only about half monetised some benefits (Hahn and Dudley, 2007). Fewer still (about a quarter on average) provided a full monetised range of estimates of benefits, although the number doing so increased notably over the sample period. This raises important points. Clearly, there is more to do to increase the use of CBA, not least to bring actual practice in line with official guidelines. However, nor is it the case that use of economic appraisal is entirely lacking; it is usually present but often only partially implemented.
A logical further question is whether, when applied, CBA was any good in terms of its quality. Some of the indicators assembled by Hahn and Dudley (2007), for the United States, identify a number of relevant issues. For example, even for those (US EPA) applications which estimated costs and/or benefits, it was relatively uncommon for these estimates to be complete (rather than monetising a small sub-set of impacts) and for point estimates to be accompanied by a range (that is, low and high estimates of the value of a given impact).
Moreover, the consideration of different options or alternatives, in cost-benefit terms, was also infrequent. More commonly, practice involved simply comparing some (presumably) favoured single option for a policy change with the status quo. A similar finding emerged from another recent study of EU appraisals of environmental projects for which financing was requested under regional assistance schemes (COWI, 2011). In other words, the question of what various options (Chapter 2) are under consideration may have been asked at the outset of the appraisal process. However, there is less tangible evidence that CBA was brought to bear on that question at that stage.
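To make concrete what comparing several options (rather than a single favoured option against the status quo) can involve, the following is a minimal sketch in Python. The options, discount rate and low/central/high benefit figures are purely hypothetical and chosen for illustration only; they are not drawn from any of the studies cited here.

```python
# Minimal, illustrative sketch: appraising several options against the status quo,
# reporting NPVs with low/central/high benefit estimates. All numbers are hypothetical.

def npv(flows, rate=0.035):
    """Net present value of a list of annual net flows (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Each option: capital cost in year 0, then annual benefits (low, central, high) for 10 years.
options = {
    "Option A (tighter standard)": {"capex": 100, "annual_benefit": (10, 18, 25)},
    "Option B (moderate standard)": {"capex": 60, "annual_benefit": (6, 11, 15)},
}

for name, data in options.items():
    results = {}
    for label, b in zip(("low", "central", "high"), data["annual_benefit"]):
        flows = [-data["capex"]] + [b] * 10  # net flows incremental to the status quo
        results[label] = npv(flows)
    print(name, {k: round(v, 1) for k, v in results.items()})
```

Reporting a range of this sort, rather than a single point estimate, is one of the quality indicators discussed above.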
There is also valuable information to be gleaned from studies, more broadly, of the impact assessment process. Turnpenny et al. (2015) present evidence of this use – for EU member states as well as the Commission itself – of impact assessments generally rather than focusing more narrowly on CBA. However, as Table 17.1 indicates, some form of CBA is one element of this via the use of “monetary assessment”. Specifically, the authors look at 325 policy cases involving impact appraisals across 8 political jurisdictions (Cyprus,1 Denmark, the European Commission, Finland, Greece, Ireland, Poland and the United Kingdom). In some cases, these assessments appear to be substantial documents, particularly in the case of the European Commission. In others, judging by the average length of each assessment report, either extremely concise writing or rudimentary analysis appears to be the case, at least at face value. Use of monetary assessment is similarly diverse: it ranges from 0% in Cyprus to 92% in the United Kingdom, as Table 17.1 indicates. Of course, this does not tell us how comprehensive that assessment was in terms of a full CBA. But it likely gives a first impression of the extent to which cost-benefit thinking is developed more formally in the appraisal process.
Table 17.1. Policy appraisal across selected European jurisdictions
| Country/Organisation (period covered) | Stated motivation for appraisal | No. of impact appraisals | Ave. length of report (pages) | Monetary assessment (%) |
|---|---|---|---|---|
| Cyprus (2009-11) | Better legislation, reduce administrative burden | 20 | 14 | 0 |
| Denmark | Better regulation; evidence-based policy-making | 50 | 2.5 | 56 |
| European Commission | Better and more efficient regulation; consultation and communication | 50 | 84 | 44 |
| Finland (2009) | Better regulation; participation and transparency; evidence-based policy-making | 50 | 2.5 | 18 |
| Greece (2010-11) | Better regulation; consultation, deliberation and participation and transparency; reduce administrative burden | 36 | 17 | 14 |
| Ireland (2004-10) | Reduce administrative burden; better regulation; evidence-based policy-making; consultation | 49 | 13 | 45 |
| Poland (2008-10) | Better regulation; evidence-based policy-making; reducing regulatory costs; transparency and consultation | 20 | 7 | 40 |
| United Kingdom (2007-10) | Reduce administrative burden; transparency and accountability; assess costs and benefits | 50 | 38 | 92 |
Source: Adapted from Turnpenny et al. (2015)
CBA was used extensively in a 2014 Canadian assessment of air quality management options (Canadian Department of the Environment and Department of Health, 2014).2 The values estimated included those associated with health improvements as well as a range of environmental values, such as impacts on agricultural productivity (through reductions in ground-level ozone exposure), reduced soiling of residential and commercial buildings (through reductions in ambient air pollution) and improved visibility. The analysis reports very high benefit-cost ratios – in the range of 15 to more than 30 – for regulations which tighten the environmental standards that (non-transportation) engines, boilers and heaters as well as cement production must meet. Assuming these values are roughly accurate, this indicates some clear economic merits to tightening these standards. An interesting feature of this analysis is that it is the culmination of a collaborative institutional process involving, amongst others, federal, provincial and territorial governments across Canada.3
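For reference, a benefit-cost ratio of this kind is simply the ratio of the present value of benefits to the present value of costs. The small sketch below illustrates the arithmetic with purely hypothetical numbers; it does not use the figures from the Canadian assessment.

```python
# Illustrative benefit-cost ratio (BCR) calculation with hypothetical figures.

def present_value(flows, rate=0.03):
    """Present value of a list of annual flows (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

benefits = [0] + [300] * 20   # annual benefits (e.g. health and environmental), years 1-20
costs = [150] + [10] * 20     # up-front compliance cost plus annual operating costs

bcr = present_value(benefits) / present_value(costs)
print(f"Benefit-cost ratio: {bcr:.1f}")   # a ratio well above 1 implies benefits exceed costs
```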
Howlett et al. (2015) conducted a survey of nearly 3 000 decision makers in prominent policy departments in Canada at both the federal and the provincial levels. This includes those working in sectors in addition to environment: education, finance, health, transport and welfare, among others. Their results indicate that technical analysis including CBA (but also risk analysis and financial impact analysis) is used as extensively in environment as in (most) other departments and that expertise and capacity for making decisions in that sector was comparable with that in other departments. However, environment is more of an outlier in terms of whether respondents judge that evidence actually informs decision-making and that there is adequate support and resources to undertake evidence-informed work. That is, respondents working in this policy sector were relatively dubious on these criteria compared with those working in other prominent policy sectors.
Further interesting insights emerge where studies have also tried to pinpoint the influence of CBA on decisions. For example, IEG (2011) finds relatively higher returns for World Bank projects for which ex ante CBA had been undertaken. Yet, disentangling the influence of appraisal on project outcomes from other confounding factors is a challenge, as the IEG report acknowledges. Hahn and Tetlock (2008) review evidence of the influence of economic appraisal on a number of health and safety regulations in the United States. This appears to indicate little effect in weeding out regulations which protect life and limb at inexplicably high cost. Moreover, where influence can be identified, CBA has tended to be used to formulate the specific details of an already chosen option. That is, it is more difficult to find examples where CBA has been used to help guide thinking about appropriate policy responses from the outset of the decision process. It appears, therefore, that at least in this context actual applications have not taken advantage of the strength of CBA (and similar technical methods) identified by Turnpenny et al. (2015): the possibility to assess options at the design stage of the policy formulation process. By contrast, at least some of the more prominent evidence that exists suggests instead that CBA has been used for fine-tuning design once a policy decision has been made.
That the quality of many CBA applications could fall short, and possibly far short, of good practice might lead to scepticism about whether there is a serious commitment to using economic appraisal to guide policy formulation. There is, however, a risk of concluding too gloomily. So while the point immediately above indicated an absence in some jurisdictions of the use of CBA at the outset of policy formulation, there appears to be significant use of CBA even earlier in the policy cycle, playing an agenda-setting role. In the United Kingdom, the Stern Review on the Economics of Climate Change (Stern, 2007) and the UK National Ecosystems Assessment (NEA, 2011) are examples of this. Other large-scale ecosystem assessments – such as the TEEB Review (TEEB, 2010) – use benefit assessment to provide important evidence and arguments about what has been lost when ecosystems are depleted and degraded. While not a substitute for policy (which will then require evaluation), this sort of knowledge is important for framing policy thinking and subsequent formulation.
In addition, studies of use and influence are taking stock of a moving target, given that practice – and its extent – is evolving more or less continually. There is certainly much more evidence and nuance to unearth as well. Companies in the water industry in England and Wales, for example, make use of social CBA as one element of the investment case that they put forward to the water services regulation authority, Ofwat, under the periodic pricing reviews that this sector is subject to. This has resulted in a huge grey literature on the practical implementation of stated preference methods – notably approaches based on choice modelling – within the water sector. Lessons about use can undoubtedly be found in these studies too; however, as these data are (in large part) both proprietary and unpublished, the extent to which these lessons can be learned easily is more questionable.
17.3. The politics of CBA
The fact that decisions are often inconsistent with, or downplay, CBA can be squared with the reality that, in practice, CBA is only one input to the decision and, in some circumstances, other considerations (as well as analytical tools) trump the thinking that is codified in that economic appraisal. What this means in practice needs exploring further; it is, at best, a “marker” indicating an urgent need for a more detailed and nuanced understanding of actual policy formulation and how CBA fits into these processes.
Indeed, this policy-making model is, in the words of Adelle et al. (2012, p. 402): “…a far more chaotic model of policy making, in which many actors pursue multiple goals” than is commonly assumed in CBA texts.
For example, the “many actors” referred to might consist of those who are “internal” to the appraisal process (such as serving officials and ministers) or those who are “external” to it (perhaps members of the legislature or external consultants, and so on) (Turnpenny et al., 2015). The “multiple goals” might reflect the various motives these actors have for utilising CBA (or, conversely, downplaying its role). For Dunlop et al. (2010) this helps explain their observations about what they term an “incomplete contract”: the mismatch between the codification of assessment requirements in official guidelines and the discretion that appears possible in practice. Much of this debate in the literature is usually conducted in terms of assessment tools and impact assessment more generally. However, it remains highly relevant for thinking about the issues that pertain to CBA, and thus use of CBA can be understood in this context.
As such, Dunlop et al. (2010) identify four motives underlying the usage of assessment tools.
The first is the one which will arguably be most familiar for cost-benefit practitioners. This is an “instrumental usage”, characterised notably by an objective to inform evidence-based policy-making. This fits a more rationalistic approach to using analytical tools for policy formulation.
Second, there is “political usage”. This could refer to situations where appraisal is used by some political entity to exercise control over the policy formulation process. This can take a variety of forms depending on political and institutional context (Turnpenny et al., 2015). But in the U.S. context, Posner (2001) argues that an interpretation of CBA use is that it has been a way in which politicians (e.g. elected political representatives) exercise power over the agencies that formulate policy. This, in turn, might simply be based on wanting to delay decisive (and possibly irreversible) action by the latter until sufficiently satisfied that these actions are consistent with political objectives (Radaelli, 2008).
Third, there is a “communicative usage” which refers to using an appraisal tool for consultation. Again, this can take a variety of forms from long-standing formal consultation processes to more substantive interactions between some authority and stakeholders, perhaps even involving deliberation. Tools within the policy formulation process, as well as the process itself, provide a medium for these interactions, presumably to a greater or lesser extent depending on the characteristics of that tool.4
Lastly, there is a “perfunctory usage” which encapsulates pragmatism, where appraisal is required but not implemented by institutional actors with any conviction. In this sense, appraisal in the policy formulation process exists but reflects perhaps what Radaelli (2008) calls political symbolism: that is, is the use of a particular policy formulation tool “merely” (or perhaps mostly) a “ritual” or simply a “box to tick”?
Table 17.2. Examples of motives underlying the usage of assessment tools
| | Political | Instrumental | Communicative | Perfunctory |
|---|---|---|---|---|
| Climate change I – assessment of options for addressing climate change in Europe post-2012 (EC) | X | | | |
| Groundwater protection – directive to improve protection of groundwater from pollution (EC) | X | | | |
| Air pollution – strategy on air pollution (EC) | X | X | | |
| Landfill – policy for implementation of the EU Landfill Directive (UK) | X | X | | |
| Climate change II – policy on linking Kyoto Protocol project credits to the European Emission Trading Scheme (ETS) (UK) | X | | | |
| Environment / health – plan for preventative action on environmental sources of health impacts (EC) | X | | X | |
Source: Adapted from Dunlop et al. (2010).
A key point here is that appraisal of a particular proposed action need not trace its genesis to only one of these motives for usage. In this respect, Table 17.2 describes the findings of Dunlop et al. (2010) in the context of impact assessment more generally (rather than CBA specifically), within the European Commission or the United Kingdom. The table includes those assessments relating to environmental proposals and summarises the motives for usage that the authors were able to ascribe, based on the four types of usage previously defined and on judgements made from a detailed inspection of relevant policy documents. The findings indicate that actual appraisal may reflect more than one of these possible usages, even for the handful of environmental proposals discussed here. Moreover, instrumental usage is not necessarily a motive for appraisal; indeed, on the basis of the table, it is a motive in 2 of the 6 cases illustrated and it is never the sole motive according to Dunlop et al.
While these results relate to impact assessment more generally, the findings possibly do throw light on the discussions about the quality of CBA considered in Chapter 16 and the previous section of the current chapter. That is, recognising a broader set of motives underlying the usage of analytical tools such as CBA provides an interpretation of findings about shortcomings in CBA uptake or its quality which have typically been identified in the handful of studies that have posed this question. It also might explain why in practice policy-makers resort to a range of analytical tools for appraisal that are themselves either incomplete or just as problematic as CBA (if not more so) (see Chapter 18).
As an illustration, recall that for Posner (2001), political usage of CBA might be motivated by a desire for control of the bureaucracy by politicians. In this case, politicians value CBA for reasons other than wishing for actual decisions to be literally bound by its recommendations. Indeed, Posner uses the example of the 1999 Senate Bill in the United States as an example of this flexibility: it mandated that, while CBA must be undertaken, the proposed action for which the appraisal is done need not itself be guided by it. Alternatively, political usage might shape the character of a CBA. For example, political agendas about public management may influence the implementation of CBA, perhaps by being content with a focus mostly on cost burdens, or on benefits from the narrow standpoint of a particular sector of society (e.g. small or medium enterprises) (Radaelli, 2008).
Put another way, advocacy of CBA – in policy processes, based on non-instrumental usages – does not necessarily require politicians to view its worth as a way of achieving the social goal exemplified in the standard cost-benefit criterion: economic efficiency. In turn, this may also provide an explanation of shortcomings in CBA quality, such as apparently inadequate quantification and valuation of impacts. Adelle et al. (2012) thus ask “quality for whom?” in relation to such judgements about shortcomings. In other words, while the evaluation of actual CBA by economists has been (logically enough), based on their own criteria, what is “good enough” from the perspective of those within the policy process, juggling an array of motives and priorities, might be quite different.
All this has practical importance too for making recommendations about how the appraisal process can do better. Typically, these proposals have focused on improving guidance and building capacity (i.e. investing in technical expertise). For example, in 2015, the Third Report of the UK Natural Capital Committee (NCC), in advocating better treatment of natural capital in UK public policy, recommends that: “The Government should revise its economic appraisal (Green Book), implementing our advice, and as a matter of urgency, apply the revised guidance to new projects.” (NCC, 2015, p. 6). Quinet et al. (2013) (for France) also make substantial recommendations about French guidelines in order to address new appraisal challenges. Such guidance documents are focal points and so are important starting points. Yet the argument in Adelle et al. (2012) is that there are higher-level considerations that may ultimately constrain better practice (or simply prevent it from living up to what it is currently intended to do). Relieving these constraints – which might otherwise lead to watered-down forms of CBA – is likely to be a considerable challenge, however, raising questions about political leadership, institutional context and bureaucratic culture.
Similarly, capacity and expertise may also constrain both acceptance and use of CBA, given that it requires an input of time and effort in order to understand the underlying rationale and some of the technical details. Hertin et al. (2009) note a trend, in countries such as the United Kingdom and Germany, for internal actors (e.g. serving officials) in appraisal processes to deal less frequently with policy matters in the substantive areas in which they had trained, or to have very little training in formal policy analysis. One distinguished economic advisor in the United Kingdom remarked, for example, on the distinction between:
“the theorists who seek to trap the inner secrets of the economy in their models and the practitioners who live in a world of action where time is precious, understanding is limited, nothing is certain and non-economic considerations are always important and often decisive” (Cairncross, 1985).
CBA, with its elaborate theoretical underpinnings and reasonably well-defined but extensive rules for valid implementation, may therefore be too complex for the busy civil servant wrestling with a complex array of policy motives. The situation will be worse where economic advice or expertise is regarded as an “appendage” to higher-level decision-making. There are two views of such situations: (a) that they reflect a poor understanding of the relevance of CBA, and economic techniques in general; or (b) that the decision-making structure itself reflects the distrust that is felt about economic evaluation techniques. The former view seems easier to fix than the latter, although the political literature on CBA (and impact assessment more generally) appears to suggest that it is these trickier issues that really matter insofar as they constrain use.
Howlett et al. (2015) emphasise an important grouping of external actors involved in the appraisal process that must have had some role in easing this constraint. These are analysts, including consultants outside government but working for it on policy analysis. In the context of environmental CBA, this might include undertaking environmental valuation (whether estimating primary or secondary monetary values) and preceding stages (such as estimation of the physical parameters to be valued) or subsequent steps in the CBA process. As Howlett et al. note, this work undertaken by well-trained external personnel might even supplant internal analysis. In this way, capacity and technical expertise are being outsourced, on the one hand relieving capacity constraints, on the other hand presumably raising interesting issues about the governance of this outsourcing process.
It is important to acknowledge that situating CBA in these wider considerations about the policy formulation process does not inevitably mean that it will fall short of the core mission that cost-benefit practitioners envisage for it. Adelle et al. (2012), for example, wonder whether political controversy can be lessened, and so more easily resolved, by transferring a contested issue into a technocratic context such as CBA. On the face of it, use of CBA might be a means to reduce the influence of special interest groups in the formulation process. Assuming those interest groups are not purely “honest brokers” in that process, this might be viewed as no bad thing (see, for example, Posner, 2001). Alternatively, CBA could be an avenue for interested parties, outside of government, to monitor an agency and its proposals, offering some additional tier of scrutiny (Radaelli, 2008).
A possible example of this in the United Kingdom is the appraisal of HM Government proposals for a proposed investment linking London with the Midlands and the North of England by a high-speed rail network (HS2). CBA formed part of the official case for government financial support, and there has been significant scrutiny of the official CBA of HS2 by those opposed to the scheme. Discussion has focused on costs which were left out of the appraisal, particularly the landscape changes and biodiversity losses that the new infrastructure may cause. Debate has also surrounded the estimation of time savings for business travellers that a faster train service provides. What is interesting here is the way in which cost-benefit arguments have contributed to shaping this debate and, moreover, that the economic content of this debate has not been the sole preserve of technical experts.
17.4. Incentives, behaviour and CBA
Another way in which CBA quality might be assessed is by asking: “how accurate is it?”. Testing this might involve, first of all, a mechanical exercise to compare the results of ex ante and ex post CBA studies of the same intervention. An ex ante CBA is essentially a forecast of the future: estimating likely net benefits in order to inform a decision to be made. Ex post CBA – i.e. conducting further analysis of the costs and benefits of a project at a later stage – can therefore be viewed as a “test” of that forecast. That is, what can be learned – e.g. for future, similar applications or for the accuracy with which CBA is undertaken generally – with the benefit of this hindsight? Actual use of ex post CBA is less common than use of economic appraisal ex ante. But there are some important exceptions. For example, Meunier (2010) documents extensive official use of ex post CBA for transport infrastructure investments in France going back a number of years.
Such assessments can provide useful additional insights which could improve the way ex ante CBA is done (and its findings interpreted) (Meunier, 2010; Quinet et al., 2013). Flyvbjerg et al. (2003) provide a meta-study of the ex ante and ex post costs of transport infrastructure investments in Europe, the United States and other countries (from the 1920s to the 1990s). The results are revealing: ex post cost escalation affected 90% of the projects that they examined. Nor are cost escalations a thing of the past according to these data. HM Treasury (2018), for example, provides guidance for incorporating such findings into actual appraisal through official premia on investment costs (and timetables to completion) in the case of physical infrastructure projects. However, the direction of bias is not uniform across policy contexts. The opposite can be found in the case of environmental policy regulations. For example, MacLeod et al. (2009) find evidence across the EU of lower regulatory costs ex post (than predicted ex ante), a finding they attribute to firms affected by these burdens finding more cost-effective ways of complying with policy. For the United States, however, Hahn and Tetlock (2008) find no systematic evidence of such bias for environmental regulations.
Addressing cost optimism in public investment projects (or, more generally, appraisal optimism) might start from at least two points. One is to “live with it”. This is the UK procedure, in that official guidance recommends building a “premium” into estimated capital and operating costs of, for example, public projects involving investment in infrastructure. A second response is to “overcome it”: that is, to see it as a technical result of poor analysis, and to seek to do better through more training for practitioners and so on. However, discussions about such matters clearly also need to consider the “political economy of CBA” and the behavioural incentives that actors in this process face. This is a point made by de Rus (2011) in the context of rail projects: demand forecasts always seem too high and cost forecasts always seem too low, all viewed from an ex post perspective. Forecasting is undoubtedly challenging and so may result in these technical errors being made. However, strategy and incentives possibly play their part as well.
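As a rough illustration of the “live with it” response, the sketch below applies an optimism-bias uplift to estimated capital costs before computing a project’s net present value. The uplift percentage and cash flows are purely hypothetical and are not taken from the HM Treasury guidance.

```python
# Illustrative sketch: applying an optimism-bias uplift to ex ante capital cost
# estimates before appraisal. All figures are hypothetical.

def npv(flows, rate=0.035):
    """Net present value of a list of annual net flows (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

capex_estimate = 200.0   # ex ante capital cost estimate
uplift = 0.40            # hypothetical 40% optimism-bias premium
adjusted_capex = capex_estimate * (1 + uplift)

annual_net_benefit = 30.0
flows_raw = [-capex_estimate] + [annual_net_benefit] * 15
flows_adjusted = [-adjusted_capex] + [annual_net_benefit] * 15

print(f"NPV without uplift: {npv(flows_raw):.1f}")
print(f"NPV with uplift:    {npv(flows_adjusted):.1f}")
```

A project that only passes the test before the uplift is applied is, on this approach, treated with due caution.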
For example, Florio and Sartori (2010) look at the issue of appraisal optimism in the context of the EU appraisal of the Cohesion and Structural Funds disbursed as part of its regional policy.5 An issue arises here because, in making its decision to approve financing for projects, the EU is reliant on the information (about costs and benefits) that it receives from those in eligible regions proposing investments (such as in transport or environmental infrastructure). This might be a regional or national authority which, in turn, could be using information provided by private agents (e.g. a contractor of some description).
A member country or regional jurisdiction (that is eligible for EU funds) proposes a project. To substantiate this request for assistance, the jurisdiction must first determine the net present value (NPV) of the project in social CBA terms. If the social NPV > 0, it is then required to carry out a financial analysis of the cash flows associated with the project. If the financial NPV > 0, the EU will not (co)finance the project, on the grounds that the project pays its own way. Only if the financial NPV < 0 will the EU consider financing part of the funding gap that exists.
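The following is a minimal sketch of this two-stage decision rule as just described; the function name and the threshold logic are illustrative only and do not reproduce the actual EU appraisal procedure in detail.

```python
# Illustrative two-stage screening rule for co-financing, as described above.
# Hypothetical helper; real appraisal involves far more detail.

def cofinancing_decision(social_npv: float, financial_npv: float) -> str:
    if social_npv <= 0:
        return "Reject: project fails the social cost-benefit test."
    if financial_npv > 0:
        return "No EU co-financing: project pays its own way financially."
    return "Eligible: EU may finance part of the funding gap."

print(cofinancing_decision(social_npv=120.0, financial_npv=-40.0))
print(cofinancing_decision(social_npv=80.0, financial_npv=15.0))
```

Seen this way, the incentive problem discussed next is clear: the proposer gains from making the social NPV look large and the financial NPV look small.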
COWI (2011) illustrates the incentive problem starkly here in the following quotation from an EU Member State representative, to the effect that appraisal is: “… a matter of making the financial analysis look as bad as possible in order to increase the funding need, and to make the economic analysis to look as positive as possible in order to justify the public funding” (p. x). There is an increasing suspicion that such incentives could explain a lot of what might have been previously thought to be simply an analytical shortcoming.
How might the suggested appraisal bias be an issue of the incentives that policy actors in the CBA process face? One problem is that, inevitably, the EU, as “principal” in this appraisal process, has limited ability to assess the veracity of the social and financial CBA presented as part of the case put by the jurisdiction as the “agent”. For example, imposing this sort of scrutiny is costly and, in any case, assessors inevitably have bounded rationality (limited time and ability, given other pressing priorities). To the extent that there is scope for (and willingness to) exaggerate financial costs and social benefits, this institutional context could provide the ingredients for this to happen.
Addressing this has to involve altering these incentives. Some of this has been introduced into the process already through “co-financing”: some of the burden of cost inefficiency now falls on those jurisdictions sharing the cost of paying for the project along with the EU. Florio and Sartori (2010) propose ex post accountability as an additional instrument. That is, if a jurisdiction knows there is a good prospect that its appraisal will be scrutinised ex post, and that this scrutiny is highly likely to result in any shortcomings being exposed and possibly “punished” in some way, then the incentives to do the ex ante assessment properly in the first place are heightened.
Of course, these are important “ifs” and “ands”. While punishment or reputational risk will presumably be a concern for the agent, whether the principal is really prepared to play the role of accuser, to this extent, is another matter. Put another way, doing so may be either unfair (because inaccuracy arose for reasons beyond the agent’s control) or politically difficult. More generally, whether ex post studies can be routinely undertaken is an open question. There may be little appetite amongst politicians for adding costly ex post studies to look at decisions which are literally history and a potential source of political embarrassment (Hahn and Tetlock, 2008). That said, serious consideration of the political economy of CBA, in this way, is to be welcomed as a way of improving the CBA process.
17.5. Improving the process of appraisal
Of course, explaining shortcomings in actual CBA relative to the ideal, while important, does not justify them; the role of CBA remains one of setting out how a decision should look if the economist’s conception of this approach is adopted. The question then is what implications these explanations have for shaping actual CBA more in the mould of the latter. An important notion here is the institutional infrastructure that might help this process. This must include ground rules for practical CBA applications – i.e. mandated use, guidelines, manuals, etc. – as well as technical capacity. But, as the discussion in previous sections indicated, this is unlikely to be enough in itself.
Equally, if not more, crucial is strengthening other aspects of the process by which CBA is done. This might include formal institutions to scrutinise (and rate) the quality of appraisals. For example, impact assessment in the EU is one prominent area of this, and itself reflects an ongoing process, with the most recent guidelines strengthening the potential role for CBA (European Commission, 2009a, b). These now require that the executive summaries of Impact Assessment (IA) reports “… provide a clear presentation of the benefits and costs (including appropriate quantification) of the various options …” (p. 1). This is supplemented, for more prominent CBAs in EU IAs, by more detailed guidance on assessing and valuing non-market impacts. But an interesting innovation in this architecture of economic appraisal is the addition of independent scrutiny of IA conclusions and appraisal via the Regulatory Scrutiny Board (RSB), formerly the Impact Assessment Board.
The RSB, in its original form, was established in 2007 with a role in evaluating formal impact assessments of policies (rather than projects). This is a substantive role, as a positive decision by this body is needed before a proposal which is the subject of the impact assessment can be presented to the European Commission. The RSB is able to demand improvements in the assessment evidence as well as require a resubmission of the evidence in the light of these revisions. A recent example of an IA subject to this scrutiny is European Commission (2013a), which sets out options for institutional rules to develop unconventional energy resources (e.g. shale gas) in Member States (including a potential new Directive if current legislation, particularly on environmental protection, is deemed insufficient). Important aspects of this appraisal on which the accompanying opinion document (European Commission, 2013b) focuses include asking for clearer identification of economic benefits (both in terms of assessing impacts on economic activity and fiscal revenues) and a greater consideration of the costs and benefits of options more generally (as well as specific queries about how compliance cost estimates were calculated for those data which were presented in the original IA).
Table 17.3 summarises the percentage of assessments which the RSB required to be resubmitted. Notably, the proportion of required resubmissions initially increased after the Board’s establishment and has not exhibited any noticeable decline in subsequent years, although the series here is clearly limited given the novelty of this institution. The number of IAs submitted is, however, noticeably lower in 2014 and 2015. Interestingly, the problems raised do not appear to have changed much in recent evaluations of these IAs (e.g. 2012-15) compared with earlier verdicts. Banable (2013) summarises some of the key issues which emerged from the scrutiny work that this body undertook in the period 2009 to 2012. Amongst the most prominent and frequent conclusions on the quality of IAs generally have been issues identified with the analysis of impacts, the definition of project objectives, baselines and options, as well as the assessment of economic impacts.
Table 17.3. Percentage of assessments which had to be resubmitted
| | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 |
|---|---|---|---|---|---|---|---|---|---|
| % Re-submission requested | 9% | 33% | 37% | 42% | 36% | 47% | 41% | 40% | 48% |
| No. of IA initially submitted | 102 | 135 | 79 | 66 | 104 | 97 | 97 | 25 | 29 |
Source: Regulatory Scrutiny Board (2015).
In the United Kingdom, the Regulatory Policy Committee (RPC) is a roughly analogous institution to the RSB. All of its information and reports are publicly available online, which provides some transparency for “outsiders” to view the committee’s work. A key element of this work, however, is that its remit focuses on the evidence for the business case, that is, the impact of a proposal on business interests (and charitable or voluntary bodies). Obviously, this is different from scrutinising the evidence for the social case, perhaps one based on standard CBA. Nevertheless, its recommendations are based on detailed scrutiny. For example, in its evaluation of the UK plastic bag charge (RPC, 2014), which would require retailers to charge customers for the use of (disposable) plastic bags, the RPC questioned the assumptions in the cost-benefit analysis conducted by Defra that revenues from the charge would be passed on to charities (rather than boost business profits) and that cost savings would be passed on to consumers.
On the face of it, the verdicts of the RPC have teeth. Ultimately it confirms or rejects the evidence put before it, given its terms of reference (judgements about the costs and benefits to business, the quality of the evidence, and so on). The RPC assessment of a Defra proposal on biodiversity offsetting (RPC, 2013) goes further in its criticisms, giving it a “red rating” as not fit for purpose. In particular, this verdict picked up on an apparent lack of provision for enforcement and monitoring, as well as the increased costs the proposal would impose on developers (given the policy was partly targeted on requiring property developers to offset the loss of greenspace and biodiversity resulting from their construction projects).
The UK and EC cases are not unique; other examples exist for other countries too, such as France (see, for example, Quinet et al., 2013). Indeed, a large number of OECD countries have some form of similar institutional structure, some of which are at arm’s length from government (see OECD, 2015). Further evidence of scrutiny at the EU level can also be seen in the institutions of the chemicals regime (i.e. REACH; see, for example, European Commission, 2007). Under this regime, the use of (new and existing) chemicals by industry is licensed, with these permissions only approved if an applicant can show that the net social benefits are positive.
The creation of these institutions might be viewed as a positive development. At the very least, it allows routine evidence to be collected about the quality of appraisals and, in both the cases above, made available to a potentially wide audience. And while the RSB’s reports make for sobering reading about recent IA quality, the existence of this institution provides a platform as well as incentives for doing better in the future. All of these measures could have an important influence on the quality of CBA from the outset (e.g. if poor quality or inadequately detailed appraisals become more likely to be rejected).
It is important to ask critical questions as well. The members of the RSB, while independent and full-time, appear to be former high-level officials in economic, social and environmental decision-making in the EU. A natural question to ask is to what extent members should be representative of the diverse actors in the appraisal process and what the composition should be between “internal” and “external” actors in that respect. One other issue is that any such body is reliant on the information provided to it, and proper scrutiny, as the EU Cohesion Funds example indicates, is both costly and difficult (see Florio and Sartori, 2010).
Another interesting question surrounds the underlying motive for these institutions: is it simply better practice for “instrumental usage” reasons or is it something else, such as to exercise political control and perhaps limit proposals? Hence, while this is pure speculation currently, one question might be whether a fall in the number of IAs being submitted (such as that in Table 17.3) is due to a possible “chilling effect” of this scrutiny and, moreover, whether that effect is an anticipated (deliberate) consequence of its design. In the case of the RPC, the terms of reference more overtly point (at least in some respects) to “political usage”, given its emphasis within an apparent deregulation agenda. The RPC itself appears aware of this, as well as of the dynamic effect this might have on the evidence it sees. An example of this recognition is a report on the RPC’s work by the (Parliamentary) Public Accounts Committee (PAC, 2016). This notes both an RPC finding that, in 2014, only one third of the cases it examined had a satisfactory assessment of social costs and social benefits, and the fact that this body has no power to influence this, for example by rejecting these assessments (given its remit to focus on regulatory (net) costs to business). Put this way, given these weak incentives it is not surprising that policy proposers provide incomplete or sub-par evidence on social benefits (despite this being a requirement and the subject of numerous guideline documents, starting with HM Treasury, 2018). Of course, the remit of the RPC – or of some other organisational body – could be broadened in this way to correct that imbalance.
Greater uptake of CBA, of course, could depend also on how practical and accessible the tool is for routine use. Renda et al. (2013) provide an assessment of the role and use of IA methods amongst EU Member States and beyond, and discuss critically how different approaches might be routinely used. That judgement is based on a range of criteria, including the burdens imposed by data requirements and whether applications can be done by generalists or only by those with access to specialist skills (in using economic models, and so on). Responding to policy needs in a timely way is an important attribute against which appraisal processes can be judged. In this respect, the growing breadth and depth of environmental valuation databases is a notable development. This includes the pioneering EVRI database (Environmental Valuation Reference Inventory) maintained by authorities in Canada (www.evri.ca) (see Chapter 6).
In the United Kingdom, the Environment Agency is using CBA to consider options for compliance with the EU’s Water Framework Directive. An interesting feature of these appraisals is that much of the detailed appraisal work is undertaken by dozens of environment officers – with little previous training in economic approaches – working in relatively local river management catchments. In this case, local knowledge of ecological conditions is combined with valuation data which has been collated more centrally. What this means is that if the data provision challenge can be surmounted, transforming this into meaningful appraisal need not be the preserve of the economic specialist.
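One way centrally collated valuation data can be combined with local information is through simple unit value transfer (see Chapter 6). The sketch below illustrates the arithmetic with hypothetical per-household willingness-to-pay values, an illustrative income adjustment and catchment population figures; it is not drawn from Environment Agency practice.

```python
# Illustrative unit value transfer: scaling a per-household willingness-to-pay (WTP)
# estimate from a study site to a local catchment. All numbers are hypothetical.

wtp_per_household = 12.50          # annual WTP (study-site estimate) for a water quality improvement
income_adjustment = 0.95           # hypothetical adjustment for income differences between sites
households_in_catchment = 40_000   # local estimate of beneficiary households

annual_benefit = wtp_per_household * income_adjustment * households_in_catchment
print(f"Transferred annual benefit estimate: GBP {annual_benefit:,.0f}")
```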
17.6. Conclusions
CBA works with a very precise notion of economic efficiency. A policy is efficient if it makes at least some people better off and no-one worse off, or, far more realistically, if it generates gains in well-being for some people in excess of the losses suffered by other people. In turn, well-being is defined by people’s preferences: well-being is increased by a policy if gainers prefer the policy more than losers “disprefer” it. Finally, preferences are measured by willingness-to-pay (accept) and this facilitates aggregation across the relevant population: the numeraire is money. The underlying social welfare function consists of the aggregate of individuals’ changes in well-being and would typically take a form such as the following:

ΔSW = Σ_i Σ_t ΔW_it

where Δ signifies “change in”, W is well-being and ΔW can be positive for some individuals and negative for others, i is the ith individual and t is time (discounting is ignored, for convenience). For a policy to pass a CBA test, ΔSW must be positive.
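As a purely illustrative numerical check of this criterion (with made-up figures, not drawn from the text), suppose two gainers and one loser in a single period:

```latex
% Hypothetical worked example of the CBA test, single period (t = 1):
% two gainers with well-being changes (measured by WTP) of +30 and +15,
% one loser with a change (measured by WTA) of -25.
\Delta SW = \sum_i \Delta W_i = 30 + 15 - 25 = 20 > 0
% Since \Delta SW > 0, the policy passes the CBA test, even though one person loses.
```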
Political economy suggests that actual decisions are not made on the basis of this social welfare function. While simplistic as it stands, this immediately explains why CBA may be rejected or its use (and character) falls short at the political level: it simply fails to capture the various pressures and motives for usage amongst governments in making decisions. The essential point is that the textbook recommendation is formulated in a context that is wholly different from the political context. CBA is, quite explicitly, a normative procedure. It is designed to prescribe what is good or bad in policy-making. But politics can be thought of as the art of compromise, of balancing the various public and specialised interests embodied in what might be termed as a “political welfare function”.
If, in the extreme, all decisions were to be made on the basis of CBA, decision makers would have no flexibility to respond to the various influences that are at work demanding one form of policy rather than another. In short, CBA, or, for that matter, any prescriptive calculus, can compromise the flexibility that decision makers need in order to “act politically”.6 Unsurprisingly, this constrains use or shapes the nature of use in particular ways, as discussed in this chapter. Political economy then seeks to explain why the economics of the textbook is rarely embodied in actual decision-making and, related to this, policy-formulation processes. But explaining the gap between actual and theoretical design is not to justify the gap. So while it is important to have a far better understanding of the pressures that affect actual decisions, the role of CBA remains one of explaining how a decision should look if the economist’s social welfare function approach is adopted.
References
Adelle, C., A. Jordan and J. Turnpenny (2012), “Proceeding in Parallel or Drifting Apart? A Systematic Review of Policy Appraisal Research Practices”, Environment and Planning C: Government and Policy, Vol. 30, pp. 401-415, http://dx.doi.org/10.1068/c11104.
Banable, S. (2013), “The European Commission Impact Assessment System”, Paper presented to the Conference on Theory and Practice of Regulatory Impact Assessments in Europe, Paris, June 2013.
Cairncross, A. (1985), “Economics in theory and practice”, American Economic Review, Vol. 75, pp. 1-14, www.jstor.org/stable/1805562.
Cairney, P. (2016), The Politics of Evidence-based Policy Making, Palgrave Macmillan, London.
COWI (2011), Report on Ex-post evaluation of Environmental Cohesion Fund Projects 2003-2008, European Commission, Directorate-General Regional Policy, Brussels.
de Rus, G. (2011), Introduction to Cost-Benefit Analysis, Looking for Reasonable Shortcuts, Edward Elgar, Cheltenham.
Dunlop, C.A. et al. (2012), “The Many Uses of Regulatory Impact Assessment: A Meta-Analysis of EU and UK Cases”, Regulation and Governance, Vol. 6, pp. 23-45, http://dx.doi.org/10.1111/j.1748-5991.2011.01123.x.
European Commission/DG ENV (2013a), An Initiative on an Environment, Climate and Energy Assessment Framework to Enable Safe and Secure Unconventional Hydrocarbon Extraction, European Commission, Brussels.
European Commission/Impact Assessment Board (2013b), Opinion on ’DG ENV – An Initiative on an Environment, Climate and Energy Assessment Framework to Enable Safe and Secure Unconventional Hydrocarbon Extraction’, European Commission, Brussels.
European Commission (2009a), Impact Assessment Guidelines, SEC(2009)92, European Commission, Brussels, http://ec.europa.eu/smart-regulation/impact/commission_guidelines/docs/iag_2009_en.pdf.
European Commission (2009b), Memo: Main Changes in the 2009 Impact Assessment Guidelines Compared with 2005 Guidelines, European Commission, Brussels, http://abrio.mee.government.bg/upload/docs/revised_ia_guidelines_memo_en.pdf.
European Commission (2007), Reach In Brief, European Commission, Brussels, http://ec.europa.eu/environment/chemicals/reach/pdf/publications/2007_02_reach_in_brief.pdf.
European Parliament (2018), Draft opinion of the Committee on the Environment, Public Health and Food Safety for the Committee on Legal Affairs and the Committee on Constitutional Affairs on the interpretation and implementation of the interinstitutional agreement on Better Law-Making, European Parliament, Brussels, www.europarl.europa.eu/sides/getDoc.do?type=COMPARL&reference=PE-615.308&format=PDF&language=EN&secondRef=01.
Florio, M. and D. Sartori (2010), “Getting incentives right, do we need ex post CBA?”, Working Paper No. 01/2010, Centre for Industrial Studies, Milan.
Flyvbjerg, B., M.K. Skamris Holm and S.L. Buhl (2003), “How common and how large are cost overruns in transport infrastructure projects?”, Transport Reviews, Vol. 23(1), pp. 71-88, http://dx.doi.org/10.1080/01441640309904.
Hahn, R.W. and P.M. Dudley (2007), “How well does the U.S. Government do benefit-cost analysis?”, Review of Environmental Economics and Policy, Vol. 1(2), pp. 192-211, https://doi.org/10.1093/reep/rem012.
Hahn, R.W. and R.C. Tetlock (2008), “Has economic analysis improved regulatory decisions?”, Journal of Economic Perspectives, Vol. 22(1), pp. 67-84, http://dx.doi.org/10.1257/jep.22.1.67.
Hertin, J. et al. (2009), “Rationalising the Policy Mess? Ex Ante Policy Assessment and the Utilisation of Knowledge in the Policy Process”, Environment and Planning A, Vol. 41, pp. 1185-1200, http://dx.doi.org/10.1068/a40266.
HM Treasury (2018), The Green Book: Central Government Guidance on Appraisal and Evaluation, HM Treasury, London, www.gov.uk/government/uploads/system/uploads/attachment_data/file/220541/green_book_complete.pdf.
Howlett, M. et al. (2015), “Policy Formulation, Policy Advice and Policy Appraisal: The Distribution of Analytical Tools”, in Jordan, A. and J. Turnpenny (eds.), The Tools of Policy Formulation: Actors, Capacities, Venues and Effects, Edward Elgar, Cheltenham.
Independent Evaluation Group (IEG) (2011), Cost-Benefit Analysis in World Bank Projects, World Bank, Washington, DC, https://ieg.worldbankgroup.org/Data/Evaluation/files/cba_full_report1.pdf.
Macleod, M. et al. (2009), Understanding the Costs of Environmental Regulation in Europe, Edward Elgar, Cheltenham.
Meunier, D. (2010), Ex post evaluation of transport infrastructure projects in France: Old and new concerns about assessment quality, Laboratoire Ville Mobilité Transports, Université Paris-Est, www.civil.ist.utl.pt/ContentPages/694954807.pdf.
NCC (Natural Capital Committee) (2015), State of Natural Capital, Natural Capital Committee, London.
OECD (2015), OECD Regulatory Policy Outlook 2015, OECD Publishing, Paris, http://dx.doi.org/10.1787/9789264238770-en.
PAC (House of Commons Committee of Public Accounts) (2016), Better Regulation: Eighteenth Report of Session 2016-17, House of Commons, London, www.publications.parliament.uk/pa/cm201617/cmselect/cmpubacc/487/487.pdf.
Posner, E.A. (2001), “Controlling Agencies with Cost-Benefit Analysis: A Positive Political Theory Perspective”, The University of Chicago Law Review, Vol. 68(4), pp. 1137-1199.
Quinet, É. et al. (2013), Cost-Benefit Analysis of Public Investments: Summary and Recommendations, Report of the Mission Chaired by Émile Quinet, Commissariat Général à la Stratégie et à la Prospective, www.strategie.gouv.fr/sites/strategie.gouv.fr/files/atoms/files/cgsp-calcul_socioeconomique_english4.pdf.
Radaelli, C.M. (2009), “Rationality, Power, Management and Symbols: Four Images of Regulatory Impact Assessment”, Scandinavian Political Studies, Vol. 33(2), pp. 164-188, http://dx.doi.org/10.1111/j.1467-9477.2009.00245.x.
RPC (Regulatory Policy Committee) (2014), “Impact Assessment Opinion: Plastic Carrier Bags Charge”, Regulatory Policy Committee, London, www.gov.uk/government/uploads/system/uploads/attachment_data/file/499221/2014-9-4-RPC14-DEFRA-2124_2_-Plastic_Carrier_Bags_Charge.pdf (Accessed 10/03/2017).
RPC (2013), “Impact Assessment Opinion: Biodiversity Offsetting”, Regulatory Policy Committee, London, www.gov.uk/government/uploads/system/uploads/attachment_data/file/260635/2013-10-03-RPC13-DEFRA-1840-Biodiversity-Offsetting.pdf (Accessed 10/03/2017).
Stern, N. (2007), The Stern Review on the Economics of Climate Change, Cambridge University Press, Cambridge, www.cambridge.org/catalogue/catalogue.asp?isbn=9780521700801.
TEEB (2010), The Economics of Ecosystems and Biodiversity, Mainstreaming the Economics of Nature. A Synthesis of the Approach, Conclusions and Recommendations of TEEB, Routledge, Oxford.
Turnpenny, J. et al. (2015), “The Use of Policy Formulation Tools in the Venue of Policy Appraisal: Patterns and Underlying Motivations”, in Jordan, A. and Turnpenny, J. (eds) The Tools of Policy Formulation: Actors, Capacities, Venues and Effects, Edward Elgar, Cheltenham.
UK National Ecosystem Assessment (2011), The UK National Ecosystem Assessment: Synthesis of the Key Findings, UNEP-WCMC, Cambridge.
Notes
← 1. Footnote by Turkey:
The information in this document with reference to “Cyprus” relates to the southern part of the Island. There is no single authority representing both Turkish and Greek Cypriot people on the Island. Turkey recognises the Turkish Republic of Northern Cyprus (TRNC). Until a lasting and equitable solution is found within the context of United Nations, Turkey shall preserve its position concerning the “Cyprus issue”.
Footnote by all the European Union Member States of the OECD and the European Union:
The Republic of Cyprus is recognised by all members of the United Nations with the exception of Turkey. The information in this document relates to the area under the effective control of the Government of the Republic of Cyprus.
← 2. See: www.gazette.gc.ca/rp-pr/p1/2014/2014-06-07/html/reg2-eng.html (accessed December 2017).
← 3. See: www.ccme.ca/en/resources/air/aqms.html (accessed December 2017).
← 4. The discussion in Chapter 16 of countries’ current practices regarding the publication of CBAs in different contexts is of relevance here.
← 5. The EU Structural & Cohesion Funds (SCF) disbursed more than EUR 300 billion over the period 2007-13. How parties applying to the SCF should carry out CBA is illustrated in a guidance document (European Commission, 2008).
← 6. European Parliament (2018) includes the following statement in a draft opinion on the interpretation and implementation of the interinstitutional agreement on Better Law-Making:
“The Committee on the Environment, Public Health and Food Safety calls on the Committee on Legal Affairs and the Committee on Constitutional Affairs, as the committees responsible, to incorporate the following suggestions into its motion for a resolution:
…
Impact assessments
Reiterates its call for the compulsory inclusion in all impact assessments of a balanced analysis of the medium- to long-term economic, social, environmental and health impacts;
Stresses that impact assessments should only serve as a guide for better law-making, and as an aid for making political decisions, and should in no event replace political decisions within the democratic decision-making process, nor should they hinder the role of politically accountable decision-makers;
Considers that impact assessments should not cause undue delays to legislative procedures, nor should they be utilised as procedural obstacles in an attempt to delay unwanted legislation;
...”.