Questions about using AI tools to enhance tax performance are no longer confined to financial outcomes. They now intersect with cybersecurity, data governance, and technology risk. AI adoption has moved beyond efficiency gains to become a matter of organisational trust that tax leaders must address before regulators, audit committees, or clients seek answers.
This blog explains why the growing use of AI for in-house tax functions requires clear human oversight, where gaps are appearing in UK teams, and how leaders can put practical guardrails in place to protect both insight and confidence.
Why weak governance matters for UK tax teams
Our research shows that interest in AI among in-house tax professionals is growing quickly. In our most recent in-house taxation cybersecurity report, we found that while most UK tax practitioners now use AI for work purposes, they are also wary of a lack of formal readiness: only 22% say their organisation has provided proper training on safe AI use, and just 17% report having a formal AI policy. In many cases, then, adoption has moved faster than control.
I want to learn how threats to cybersecurity affect in-house tax teams.
Generative AI tools depend on the data they are given. When these tools are used without clear rules, the risks are significant:
Data security and confidentiality
Because in-house tax teams handle highly sensitive material, poor control over AI tools can lead to accidental disclosure and breaches of legal or contractual duties.
Regulatory and audit scrutiny
As AI becomes part of everyday tax work, regulators and auditors are more likely to ask how tools are controlled, reviewed and documented, particularly where outputs influence filings or formal advice. Without governance, teams may face questions around accountability, explainability and evidence.
Accuracy and human judgement
AI systems can produce confident but incorrect outputs. Without structured review, these errors can flow into reports and decisions, increasing compliance risk and undermining confidence in the tax function.
From pilots to policy: what leaders can do now
If AI is already part of your in-house tax technology stack, governance needs to move up the agenda. The following actions help shift teams from ad hoc use to controlled deployment that protects trust.
1. Clarify purpose and scope
Set out which AI uses are approved, which are being tested, and which are not allowed. Be clear about where AI can support analysis and where outputs must not feed directly into compliance documents.
See Tolley’s AI tools, developed for in-house professionals
2. Create an AI usage policy based on risk
Policies should define rules on data handling, approved tools and review steps. The UK government’s Code of Practice for the Cyber Security of AI offers useful guidance on managing risk across the AI lifecycle. While voluntary, it signals likely future expectations around secure deployment.
3. Keep people accountable
AI does not replace professional judgement. Assign named reviewers to check and approve outputs, ensuring decisions remain traceable and defensible.
4. Link AI use to the wider cyber approach
AI risk should be part of enterprise risk assessments. Include AI scenarios in security testing and involve cyber and privacy teams early, rather than after tools are already in use.
I want to understand how Tolley+ Guidance can help me apply tax legislation with confidence
5. Provide role-specific training
General awareness sessions are not enough. In-house tax teams need practical training based on real situations, including safe prompting, spotting risks in outputs and knowing when to escalate issues.
A strategic responsibility, not a tactical choice
Building resilience into AI use means ensuring governance, risk management and controls develop at the same pace as technology. For teams ready to use AI responsibly, secure, tax-focused platforms designed with UK risk expectations in mind can help turn a potential liability into a source of strength.
As one board-level leader at a large tax practice told us:
"While protocols may exist, their effectiveness depends on constant updating in the face of new vulnerabilities introduced by AI … Regular and proactive risk assessments, as well as increased team awareness, are essential to ensure security is adapted to current challenges."
To see how leading UK tax teams are addressing this balance, read the full Tolley report, Securing trust: Cybersecurity in the age of tax tech.