From mechanics to machine learning
In the early 1640s, the young mathematician Blaise Pascal watched his father, a tax commissioner in Normandy, struggle through long nights of calculating taxes and verifying records. With only pen, paper and an abacus, tax administration was slow and error-prone. Inspired by recent innovations in mechanics, Pascal set out to create a machine that could ease his father’s tedious work. After many prototypes, he unveiled the “Pascaline”, widely regarded as the first mechanical calculator. It could add and subtract numbers of up to six digits, a simple yet useful response to the calculation challenges of tax administration in that era.
The challenge Pascal faced in processing and making sense of data accurately and efficiently remains relevant, although its scope and complexity have increased dramatically in today’s interconnected and multi-faceted economy. In parallel, the tools available to tax administrations have also evolved radically, from mechanical devices through punch-card systems, electronic computing and the internet to, most recently, advanced machine learning, which forms part of the broader field of artificial intelligence (AI).
New tools and approaches for the digital economy
The past few decades have marked a significant shift in the environment within which tax administrations operate. As the global economy has grown more digitised, it has also, inevitably, become more deeply datafied. Economic activity that once left only paper traces now generates continuous streams of digital information. As a consequence, tax administrations have evolved into some of the most data-rich institutions in the public sector. In response, they have taken advantage of advances in data science and AI that enable them to use their data in far more intelligent ways. As part of their broader digital transformation, tax administrations are thus moving beyond tax calculations toward more predictive and proactive approaches, in line with the risk-based principles that guide modern Compliance Risk Management (CRM) as well as the OECD’s Tax Administration 3.0 vision of “seamless taxation.”
The use of artificial intelligence in tax administration today
So, how does the adoption of AI in tax administration look from a quantitative perspective? Data from the International Survey on Revenue Administration (ISORA) shows that in 2023 over 90% of the 50+ member countries of the OECD’s Forum on Tax Administration reported that they had either implemented AI solutions or were in the process of doing so. In 2018, that figure was just over 40%, which was already a high baseline by public sector standards. The fact that adoption more than doubled over five years reflects the long-term transformation of tax administrations into adaptable and mature data organisations, as well as their continued commitment to more proactive approaches. This development makes AI a natural technological extension of their operations.
How tax administrations are putting AI to use
What does AI use in tax administrations look like in practice? Recent data from the latest round of the Inventory of Tax Technology Initiatives (ITTI) survey provides a useful overview, drawing on responses from countries in the OECD’s Forum on Tax Administration. The results show that AI is currently used across a wide range of tax administration functions, with the strongest uptake in fraud and evasion detection. This is hardly surprising, given that advanced AI models are particularly effective at identifying subtle patterns, anomalies, and correlations across complex datasets that traditional methods often miss. Another notable share of AI adoption is concentrated in risk assessment processes, where administrations rely on predictive modelling, AI-supported segmentation methods, and other AI-based approaches to help identify non-compliant behaviours and emerging risk patterns. Beyond these cases, AI, and increasingly generative AI, is transforming how tax administrations interact with taxpayers. Virtual assistants and digital tools are enabling more personalised, real-time support, 24/7. At the same time, AI is supporting internal operations, helping administrations prioritise cases, allocate resources and manage workloads more effectively. Concrete examples of these use cases are available in the OECD’s Tax Administration Series Database.
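To give a flavour of the anomaly-detection techniques mentioned above, the sketch below flags outlying tax returns using a robust z-score based on the median absolute deviation. This is a deliberately minimal illustration with invented figures and an invented threshold; real systems use far richer features and models, but the underlying idea, scoring each record by how far it sits from the bulk of comparable records, is the same.

```python
# Hypothetical sketch: flagging unusual returns with a robust z-score
# (median absolute deviation). All figures and the 3.5 threshold are
# invented for illustration.
from statistics import median

def robust_z_scores(values):
    """Score each value by its distance from the median, scaled by the MAD."""
    med = median(values)
    mad = median(abs(v - med) for v in values) or 1.0
    # 0.6745 rescales the MAD to be comparable with a standard deviation
    return [0.6745 * (v - med) / mad for v in values]

# Declared deduction-to-revenue ratios for a batch of returns (synthetic)
ratios = [0.18, 0.21, 0.19, 0.20, 0.22, 0.17, 0.95, 0.20, 0.19, 0.21]

scores = robust_z_scores(ratios)
flagged = [i for i, z in enumerate(scores) if abs(z) > 3.5]
print(flagged)  # → [6], the return with the out-of-line 0.95 ratio
```

In practice such a score would be one input among many to a risk-assessment process, with flagged cases routed to human auditors rather than acted on automatically.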
New capabilities bring new responsibilities
AI enables more efficient and accurate operations in tax administration and can help reduce the compliance burden on taxpayers. But new AI capabilities also introduce new types of risks. These include risks related to fairness, transparency, potential bias, and the opaque “black-box” nature of certain types of AI models. Such risks place responsibilities on tax administrations: establishing mechanisms that make AI systems more explainable, ensuring that systemic biases in datasets are effectively mitigated, safeguarding taxpayer rights, and upholding the rule of law on which taxpayer trust and the social contract depend.
As tax administrations gradually move towards more predictive approaches to compliance and guidance, some risks become more pronounced because predictive systems draw inferences not only from verified past behaviour but also from projected future actions. This makes it even more important to ensure that predictive insights remain fair and proportionate.
Enhancing trustworthy AI in tax administration
As the use of AI expands, ensuring it is deployed in a trustworthy way has become a central priority for tax administrations. To support this, the OECD Forum on Tax Administration (FTA) has established a dedicated project group under its Tax Administration 3.0 Initiative. Bringing together senior experts from tax administrations across the world, as well as representatives from business and academia, the group provides a platform to explore the key dimensions of trustworthy AI from multiple perspectives.
An essential reference point in this work is the OECD’s Principles on Trustworthy AI (see box). These principles help clarify what trustworthy AI should achieve and what values should govern its implementation. Yet, as highlighted in the recent publication Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions, public administrations often need more practical guidance on how to interpret and operationalise the principles within their specific institutional contexts. The FTA project aims to bridge this gap by turning high-level AI principles into concrete frameworks and tools that reflect the realities of tax administrations and the risks related to specific use cases. At the same time, the Tax Administration 3.0 Initiative provides a collaborative platform for knowledge sharing on AI, with sessions addressing critical aspects of trustworthy AI.
A final note
Even Pascal, who helped lay the foundations of predictive reasoning, could not have foreseen the scale of today's AI-driven transformation. Tax administrations are currently using and implementing AI models with capabilities that few would have imagined only a generation ago. Their strong data maturity and adaptability have placed them in a good position to take advantage of these advances. Yet the long-term value of AI depends on how responsibly it is implemented and on taxpayers being able to trust the administrations that use it. As the field continues to evolve rapidly, co-operation and knowledge sharing between tax administrations play a key role in identifying emerging risks and developing effective mitigation strategies and best practices.
Interested in learning more?
- AI in Government
- Tax Administration Series Database | OECD
- Inventory of Tax Technology Initiatives | OECD
- OECD Principles for Trustworthy AI
| Principle | Description |
| --- | --- |
| Inclusive growth, sustainable development and well-being | AI should support social and economic progress and improve quality of life. |
| Human-centred values and fairness | AI must uphold human rights, the rule of law and democratic values, and should be non-discriminatory. |
| Transparency and explainability | AI systems must be understandable, with users (taxpayers) and auditors able to access meaningful information about their functioning. |
| Robustness, security and safety | Systems must be reliable, secure and resilient, with built-in mechanisms for monitoring integrity and mitigating risks. |
| Accountability | Datasets, processes and decisions must be traceable, and those developing and deploying AI must remain answerable for the systems’ behaviour and its implications. |