HMRC's AI Transparency: A Data Analyst's Reality Check
The UK's HM Revenue & Customs (HMRC) is wading into the AI arena, and a recent tribunal ruling is forcing it to be more transparent about it. The case, triggered by a Freedom of Information Act (FOIA) request, highlights the tension between leveraging AI's potential and the public's right to understand how it's being used in tax administration. Judge Christopher McNall openly disclosed in his judgment that he had used AI to help produce the decision, setting a benchmark for transparency. But is it enough?
The Numbers Game: AI vs. Human Oversight
The core issue isn't whether HMRC should use AI. They already are, and so are tax authorities worldwide. The OECD reports that 70% of global tax authorities are using AI. The real question is about oversight and accountability. HMRC's initial reluctance to disclose information about its AI usage in R&D tax credit compliance raises red flags.
Initially, HMRC refused to disclose the information under section 31(1)(d) of FOIA — namely, that doing so would be likely to prejudice “the assessment or collection of any tax or duty or of any imposition of a similar nature”. After the tax practitioner complained to the UK Information Commissioner (“ICO”), HMRC changed its position and, relying on section 31(3) of FOIA, refused to confirm or deny holding the requested information. According to HMRC, confirming or denying whether it held the information would provide “valuable insight” into the operation of its tax credit regime, thereby assisting individuals and companies that seek to defraud the system.
The First-tier Tribunal overturned the ICO’s decision, stating that the ICO had over-emphasised the “unsubstantiated and unevidenced” risks of fraudulent activity resulting from confirmation of HMRC’s use — or not — of LLMs and AI in respect of R&D Tax Credits and, moreover, had given inadequate weight to the societal benefits of transparency around such uses.
Think of it like this: AI is a powerful magnifying glass. It can find needles in haystacks of data, potentially identifying fraud or errors with greater efficiency. But if we don't know how the magnifying glass is calibrated, how can we trust the results? How can we be sure that the AI isn't amplifying existing biases or creating new ones? This isn't about Luddites versus progress; it's about responsible implementation.
The Human Element: Trust and Transparency
HMRC argues that disclosing AI usage would help fraudsters game the system. But the tribunal countered that this secrecy undermines public trust. And this is the part of the ruling that I find genuinely striking: the tribunal stated that HMRC’s failure to either confirm or deny its use of AI reinforces the belief that AI is being used by its case officers, perhaps in an unauthorised manner.

Trust is a critical component of any tax system. If taxpayers believe the system is opaque or unfair, compliance rates will plummet, and the cost of enforcing compliance will skyrocket. In 2023/24, the cost of collecting £1 of IHT was 0.67p, just 0.02p less than income tax (0.69p). HMRC has also announced plans to reinstate direct recovery of debts (DRD), a process that allows it to recover debts directly from bank accounts, as part of a “test and learn phase” targeting specific UK debtors.
The tribunal's point is valid: transparency isn't just a nice-to-have; it's essential for maintaining legitimacy. What are the specific criteria used for selecting the models? What measures are in place to ensure the privacy and security of taxpayer data? What are HMRC’s policies and procedures governing use of the AI models?
Data-Driven Skepticism: A Necessary Reform
The underlying issue is that UK tax administration operates under a statutory regime enacted in 1970. As such, the march of GenAI into tax practice, and HMRC’s stance in cases such as this, only increase the sense that reform is overdue. The law, written before the internet existed, let alone sophisticated AI, needs an update. The 2020 amendment stating that HMRC can use "any means (including but not limited to computers)" is too broad; it needs to be more specific about AI, data privacy, and human oversight.
Moreover, the fact that HMRC initially confirmed that it held the requested information, but subsequently relied on a neither confirm nor deny response, was “untenable”, “beyond uncomfortable”, and “like trying to force the genie back in its bottle”.
The government's Transformation Roadmap for HMRC envisions embedding GenAI in the tax authority’s operations. Fine. But without clear guidelines and transparent processes, it risks becoming a black box. The tribunal's ruling is a wake-up call. HMRC needs to show its workings.
Smoke and Mirrors, or Real Progress?
The ruling is a step in the right direction, but it's not a revolution. HMRC is "carefully reviewing the decision" and "considering [its] next steps." Watch what they do, not what they say.
