Data-Driven, Information-Enabled Regulatory Delivery
4. Lessons learnt
Abstract
This chapter presents the lessons learnt and the key challenges on the path to an increasingly “first choice” approach to regulatory delivery. Problems related to integrating data into a single register, reliance on third parties to automate solutions, and updating risk classes at timely intervals have to be resolved. Legal and administrative hurdles also hinder the development of IT tools. In addition, IT systems need to serve the real goals of regulatory delivery, namely managing and reducing risk to ensure meaningful protection of key elements of public welfare, rather than merely weeding out non-compliant actors. The chapter ends with references to existing OECD toolkits and frameworks that can help resolve these issues.
Research is ongoing, and these pilots are showing that, given the increasing availability of computerised historical inspection records and the lower entry barriers to applying Machine Learning techniques (growing availability of skills and software, decreasing cost of computing power), using such an approach to improve risk-based targeting can now be considered a “first choice” approach. Except when historical records are unavailable (or recorded in a way that cannot be processed easily, e.g. pure text files, which require a more complex analytical process), conducting this type of research is clearly possible, not overly costly or complex, and can provide significantly better results than more “traditional” approaches based on heuristics and on the experience of inspectors and outside experts. The quantity of data that needs to be processed can, in any case, easily exceed what an individual can weigh when making decisions. That said, professional experience of course remains highly relevant and, in combination with Machine Learning, will continue to define the assumptions and questions that quantitative research puts to the test. In the future, as data becomes usable across functions and is governed more uniformly, risk identification and classification can be performed better still.
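To illustrate what such a “first choice” analysis can look like in practice, the sketch below trains a simple classifier on historical inspection records to score the likelihood of non-compliance. It is a minimal, illustrative example only: the file name, column names and model choice are assumptions, not the pilots’ actual setup.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical historical inspection records; "non_compliant" is the recorded
# outcome of past inspections (1 = non-compliance found, 0 = compliant).
records = pd.read_csv("inspection_history.csv")

# One-hot encode the categorical sector variable; other features are numeric.
features = pd.get_dummies(
    records[["sector", "company_size", "years_active", "past_violations"]],
    columns=["sector"],
)
target = records["non_compliant"]

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Before any operational use, compare the model's ranking power (AUC) against
# the existing heuristic or score-card targeting approach on held-out data.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```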
There are, however, several challenges and limitations to this computer-assisted approach. Regarding the RAC Engine and the algorithms applied within a single register, a main challenge is to integrate the data into a single repository so that the algorithms needed for the risk assessment can be applied. The process of exporting data into the single system can be addressed in different ways and depends on the software from which the data originates. If the export cannot be automated, manual work by an operator might be required; in the worst-case scenario, an export request to the software house that developed the source system might be needed. The frequency of risk-class updates can also be affected. While the data already available in the system allows risk classes and businesses’ risk-based ratings to be updated often, how frequently risk classes can be redefined depends on the availability of the data to be exported from other software. In these situations, a fixed frequency for redefining risk classes, for instance six-monthly or annual, can be chosen. Moreover, in some regions, legal constraints and judicial interpretations of privacy regulations and data protection norms create further barriers to data export and data sharing. Holistic data governance frameworks should be designed to ensure the proper management of data through its entire life cycle. A uniform framework would also help achieve transparency and privacy goals across the cross-functional regulatory purposes that a given set of data serves (OECD, 2019[1]).1
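By way of illustration, the sketch below shows the kind of consolidation step described above: periodic exports from separate source systems are merged into a single register on a common business identifier, and risk classes are then refreshed on whatever cycle the exports allow (for instance six-monthly or annually). The file layouts, field names and scoring rule are hypothetical assumptions, not the RAC Engine’s actual logic.

```python
import pandas as pd

# Hypothetical export files produced (manually or automatically) by the
# separate source systems feeding the single register.
SOURCE_EXPORTS = {
    "business_registry": "exports/business_registry.csv",
    "past_inspections": "exports/past_inspections.csv",
    "complaints": "exports/complaints.csv",
}

def build_single_register() -> pd.DataFrame:
    """Merge the latest exports on a common business identifier."""
    registry = pd.read_csv(SOURCE_EXPORTS["business_registry"])
    inspections = pd.read_csv(SOURCE_EXPORTS["past_inspections"])
    complaints = pd.read_csv(SOURCE_EXPORTS["complaints"])
    return (
        registry
        .merge(inspections, on="business_id", how="left")
        .merge(complaints, on="business_id", how="left")
    )

def refresh_risk_classes(register: pd.DataFrame) -> pd.DataFrame:
    """Assign risk classes from a simple illustrative score; rerun as often as
    the availability of source exports permits (e.g. six-monthly or annually)."""
    score = (
        register["past_violations"].fillna(0) * 2
        + register["complaints_last_year"].fillna(0)
    )
    register["risk_class"] = pd.cut(
        score, bins=[-1, 0, 3, float("inf")], labels=["low", "medium", "high"]
    )
    return register

updated_register = refresh_risk_classes(build_single_register())
```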
In the Machine Learning domain, the first challenge is the greater difficulty in assessing factors that predict the “impact” dimension of risk (how much damage non-compliance causes), because detailed data on this aspect is often missing or only indirect (registering the type of non-compliance, but not necessarily allowing it to be linked to actual damage). The second is that following only the predictors suggested by the algorithm would focus inspections exclusively on the cases where the likelihood of “catching” non-compliance is highest; this would not correspond to the actual goals of the regulatory system and must therefore be balanced by other elements to provide an adequate targeting system. We briefly develop these two points below.
The likely “impact” of non-compliance, a necessary data point for building a fully data-driven risk model, is often missing from inspection records. In many cases these records contain only the administrative decision taken, without details. In better situations, they also record the exact points of non-compliance, but there is very rarely any data that could directly point to harm (e.g. records of accidents), so the analysis of impact must rely on assumptions about the impact of different types of non-compliance, i.e. essentially on expert opinion, even if grounded in science. Future efforts should focus on the value such data creates for risk management, measured in absolute terms such as the number of lives saved or accidents avoided. Still, there is much that can be done using such data, and this will be the subject of further stages of this pilot research: to see whether good (improved) predictors can be found not only for “non-compliance” in general, but also for the severity of non-compliance. Such work will in any case remain more difficult, more tentative and more uncertain than simply predicting non-compliance.
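One way to work around the missing harm data, under the assumptions described above, is to combine the predicted likelihood of each type of non-compliance with expert-assigned impact weights. The sketch below illustrates this; the non-compliance categories and weights are purely illustrative and would in practice come from expert judgement grounded in science.

```python
# Expert-assigned impact weights per type of non-compliance (illustrative values).
EXPERT_IMPACT_WEIGHTS = {
    "missing_documentation": 1.0,    # low expected harm
    "inadequate_training": 2.5,
    "missing_fall_protection": 5.0,  # high expected harm
}

def expected_risk(probabilities_by_type: dict) -> float:
    """Expected-harm style score: sum over types of P(non-compliance) x impact."""
    return sum(
        p * EXPERT_IMPACT_WEIGHTS.get(nc_type, 1.0)
        for nc_type, p in probabilities_by_type.items()
    )

# A site with a moderate chance of a high-impact violation can outrank one with
# a near-certain but low-impact violation.
site_a = {"missing_fall_protection": 0.4}   # expected risk = 2.0
site_b = {"missing_documentation": 0.9}     # expected risk = 0.9
print(expected_risk(site_a), expected_risk(site_b))
```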
An additional challenge arises from the difference in goals between the model defined through Machine Learning (predicting the highest possible percentage of non-compliance, i.e. ideally targeting inspections so that 100% of them reveal non-compliance) and the regulatory system, which aims at managing and reducing risks, i.e. overall at reducing non-compliance, and must consider not only the “known” targets but also the “unknown” ones. Taking the example of OSH inspections on construction sites, there are always several companies (construction companies, safety supervisors) that are new to the market and thus “unknown”. In the score-card model, and in line with usual risk-based practice, the regional OSH services have assigned a certain level of risk to these “unknown” operators (rather than zero), to ensure that a certain number of them are targeted. A “pure” Machine Learning model would, by contrast, exclude them since they are (by definition) not known. A first point, therefore, is that it is essential to balance the data-driven model with a percentage of selections that covers uncertainty and unknowns. Experience also suggests that, because fraud, dissimulation or errors in data are all possible, a small percentage of inspections should also target objects that the model would classify as “low risk”, to ensure the system’s robustness and check that there are no significant “leaks” in the risk-assessment system. Finally, selection based on likelihood of non-compliance is only one component of a good risk-based, outcomes-focused regulatory delivery system. Taking a higher view, the system should aim at reducing non-compliance, and thus, in favourable circumstances and/or in regulatory areas or sectors where non-compliance is less frequent, planning based on non-compliance predictors may be less useful; in such contexts, potential impact may matter more.
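A simple way to express this balancing act is to fix, at the planning step, the shares of inspections drawn from the model’s top-risk ranking, from “unknown” operators and from a random sample of “low risk” objects. The sketch below illustrates such an allocation; the 70/20/10 split and the function signature are assumptions for illustration, not the shares used by the regional OSH services.

```python
import random

def plan_inspections(scored_sites, new_operators, low_risk_sites, capacity):
    """scored_sites: list of (site_id, predicted_risk) pairs for known operators.
    new_operators / low_risk_sites: lists of site identifiers."""
    n_model = int(capacity * 0.7)               # highest predicted risk
    n_unknown = int(capacity * 0.2)             # operators with no history
    n_random = capacity - n_model - n_unknown   # robustness checks on "low risk"

    by_risk = sorted(scored_sites, key=lambda s: s[1], reverse=True)
    plan = [site_id for site_id, _ in by_risk[:n_model]]
    plan += random.sample(new_operators, min(n_unknown, len(new_operators)))
    plan += random.sample(low_risk_sites, min(n_random, len(low_risk_sites)))
    return plan
```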
Inspection experience suggests that implementing effective inspection strategies requires addressing several complex challenges simultaneously. The OECD Regulatory Enforcement and Inspections Toolkit (OECD, 2018[2]) offers government officials, regulators, stakeholders and experts a set of criteria for assessing the inspection and enforcement system of a given public structure and for addressing many of these challenges. IT solutions such as the RAC Engine and Machine Learning tools could significantly support the implementation of the Toolkit’s recommendations, with greater results in regulatory delivery. Challenges related to staff training and digital capacity building are gradually being addressed. As technologies advance, “21st Century Skills” are required beyond digital literacy. Digital government user skills require public servants to recognise the potential for digital transformation, understand users (society) and their needs, collaborate openly for iterative delivery, use data trustworthily and support data-driven government (OECD, 2021[3]).
The Italian pilots have shown so far that the ranking of economic activities into specific risk classes and the prediction of non-compliance likelihood are more evidence-based than those produced without IT techniques, and bring several advantages. Correct predictions and accurate targeting of risks allow for a more proportionate and responsive approach. The use of an automated system grounded in transparent criteria supports clear and fair procedures. Leveraging technology when implementing and enhancing risk-based inspections raises the quality of regulatory delivery. Yet the use of IT in the enforcement process does not mean replacing inspectors and human judgement. Inspectors’ decisions lie at the very core of the inspection activity and the enforcement process, and it is only the inspector who can raise disputes in line with applicable rights and professional obligations. IT tools are used in this context to better direct inspectors’ actions towards risk. Future endeavours, building on current learnings, should also focus on training inspectors and empowering them to better apply IT tools, while simplifying the process through intuitive dashboards and interfaces so that those without extensive data experience can also benefit from improved technologies.
References
[3] OECD (2021), “The OECD Framework for digital talent and skills in the public sector”, OECD Working Papers on Public Governance, No. 45, OECD Publishing, Paris, https://dx.doi.org/10.1787/4e7c3f58-en.
[1] OECD (2019), The Path to Becoming a Data-Driven Public Sector, OECD Digital Government Studies, OECD Publishing, Paris, https://dx.doi.org/10.1787/059814a7-en.
[2] OECD (2018), OECD Regulatory Enforcement and Inspections Toolkit, OECD Publishing, Paris, https://dx.doi.org/10.1787/9789264303959-en.
Note
← 1. See specifically Chapter 2, https://www.oecd-ilibrary.org/sites/9cada708-en/index.html?itemId=/content/component/9cada708-en.