Artificial intelligence is no longer a technology on the horizon. It is already embedded in how organisations make decisions, manage risk, and govern themselves. For internal audit, this creates a double imperative: harness AI to work better, and develop the capability to provide credible assurance over environments where AI plays an increasingly decisive role.
These two imperatives are connected but must not be confused. Efficiency within the function is the entry point, not the destination. A function that has learned to draft better reports faster is not yet a function ready to provide meaningful assurance over an AI-enabled organisation. This article addresses both imperatives, making the case for why the second one matters more.
1. AI as a Productivity Tool: A Legitimate Starting Point
According to the 2024 report Harnessing Generative AI for Internal Audit Activities by the Internal Audit Foundation and Wolters Kluwer TeamMate, more than three in four internal auditors rate themselves as novices or beginners in GenAI proficiency, and 92% of organisations remain in the initial exploration or partial implementation phases. Despite this, practical use is growing fast.
Across the engagement lifecycle, the areas generating the most traction are those requiring synthesis and communication: turning meeting transcripts into walkthrough documents, summarising large-volume questionnaire responses, generating first-draft reports in minutes, standardising tone and language across audit teams, and automating follow-up tracking. These gains are real. They reduce low-value cognitive work and free practitioners to focus on judgment.
But productivity gains do not automatically translate into better assurance. The risk is that audit functions optimise their internal processes while the more important challenge of providing credible assurance over AI-influenced control environments goes unaddressed.
2. The Bigger Shift: AI Is Changing What Gets Audited
The deeper transformation is happening not inside internal audit, but in the control environments that internal audit is expected to examine.
Across operations, compliance, credit, fraud detection, and customer management, AI is increasingly influencing or determining consequential decisions. When a credit decision or a regulatory filing is generated or shaped by a model, the audit trail becomes more complex: the logic may be opaque, the outputs probabilistic, and the point of human involvement far removed from where the model's output became consequential.
The standard audit lens must evolve. The question is no longer simply whether the right process was followed. The question is whether the right conditions exist for AI-assisted decisions to be trusted, challenged, and corrected.
What the Audit Lens Must Now Include
- Mapping where AI influences decisions, not just in formally labelled AI projects
- Assessing whether review mechanisms over AI outputs are substantive or symbolic
- Testing whether accountability has been preserved or transferred to the model
- Evaluating escalation paths and incident response when AI outputs are wrong
- Identifying where AI has reduced the perceived need for human controls without adequate compensating governance
3. The 'Human in the Loop' Problem
No phrase is more frequently invoked, or more frequently misunderstood, than 'human in the loop.' The premise is appealing: keep a human decision-maker involved, and accountability is preserved. The reality is more complicated.
When AI outputs look confident and consistent with expectations, humans tend to trust them even when they should not. This is automation bias. A reviewer who lacks the time, technical understanding, or authority to meaningfully challenge an AI output is not providing a control; they are providing the appearance of one.
"Human in the loop is necessary. But it is not enough. Sometimes it is just a slogan."
Internal audit should test the substance of human oversight, not just its existence:
- Understanding: Does the reviewer know what the system is designed to do, and what it is not designed to do?
- Capacity: Do volumes and timelines allow for genuine assessment, or is approval effectively automatic?
- Authority: Can the reviewer override the AI output without penalty or disproportionate escalation?
- Evidence: Do overrides actually occur, and are they reviewed for systemic implications?
- Risk awareness: Is the reviewer actively treating hallucination as a live risk, not a theoretical one?
Where these tests reveal that the human element is nominal rather than substantive, the control environment is weaker than it appears. This is a governance finding, not a technical observation.
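Several of these tests are empirically checkable from system logs rather than from interviews alone. As a minimal sketch, assuming hypothetical review records with `review_seconds` (time spent per item) and `overridden` (whether the reviewer changed the AI's output) fields, an auditor could flag when approval is effectively automatic:

```python
# Sketch: test whether human review of AI outputs is substantive.
# The record fields (review_seconds, overridden) and thresholds are
# illustrative assumptions, not a standard log format.

def oversight_indicators(reviews, min_seconds=30):
    """Return simple red-flag indicators for nominal ('rubber-stamp') review."""
    n = len(reviews)
    override_rate = sum(r["overridden"] for r in reviews) / n
    rushed_rate = sum(r["review_seconds"] < min_seconds for r in reviews) / n
    return {
        "override_rate": override_rate,   # near 0.0 suggests automatic approval
        "rushed_rate": rushed_rate,       # near 1.0 suggests no genuine assessment
        "rubber_stamp_risk": override_rate < 0.01 and rushed_rate > 0.9,
    }

# Example: 200 items, all approved unchanged in under ten seconds each
sample = [{"review_seconds": 8, "overridden": False} for _ in range(200)]
print(oversight_indicators(sample))
```

The point of such a test is not the thresholds themselves but the evidential shift: it converts "a human reviews every output" from an asserted control into a measurable one.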
4. What Internal Audit Should Be Examining and Protecting
The audit focus should be on the organisational conditions that determine whether AI-enabled decisions are well governed, not on auditing the model line by line. This means governance, accountability, and control: who owns the system's outputs and is accountable for them; whether use boundaries are defined, communicated, and enforced; whether users understand the system's limitations in practice rather than just on paper; and whether monitoring, incident capture, and error correction are functioning.
The Institute of Internal Auditors' (IIA) two-domain framework is a useful organising lens: AI as an audit topic (governance, model risk, strategy, controls) and AI as an auditing tool (GenAI capabilities applied to audit tasks). Most functions are currently working only in the second domain. The first, where AI governance is itself the subject of assurance, is where the greater strategic opportunity lies and where board expectations are increasingly directed.
Protecting Auditor Judgment
There is a parallel risk inside the function itself. As AI tools become embedded in audit workflows, teams may begin to rely on AI-generated structure and reasoning without reinforcing the underlying judgment that makes those outputs useful. If an auditor cannot recognise when a risk assessment is incomplete or a control is poorly designed, no AI tool will compensate for that gap.
"AI should come after competence, not instead of it."
Intentional adoption means introducing AI tools and simultaneously reinforcing training, mentoring, and quality review. The goal is augmentation: AI working with well-trained auditors, not in place of them.
What Internal Audit Can Do That AI Cannot
- Apply professional scepticism and independent judgment to ambiguous situations
- Understand organisational context, culture, and unstated assumptions
- Make ethical assessments where rules alone do not determine the right answer
- Take accountability for conclusions and defend them under challenge
- Recognise when an AI output is plausible but wrong
5. A Leadership Opportunity and an Obligation
AI is moving fast, governance frameworks are still developing, and many organisations are making consequential decisions about AI adoption without adequate assurance support. Internal audit functions that build genuine capability, both in using AI effectively and in auditing AI-enabled environments, can occupy a position of real strategic value: part of the conversation before AI systems are deployed, not arriving after the fact to report on what went wrong.
The data is unambiguous: 80% of Chief Audit Executives believe AI upskilling is essential within two years, and 52% of boards expect internal audit to provide assurance over technology and data governance. The expectation is already there. The capability must follow.
Internal audit that treats AI primarily as a productivity tool is missing the larger obligation. The functions that invest now, in capability, in governance frameworks, in auditor knowledge, will hold a significant advantage over those that wait.
The time to act is now.
Key Resources for Internal Audit Leaders
Authoritative guidance, data, and frameworks for internal audit functions navigating AI governance and adoption.
| Source | What it offers |
|---|---|
| IIA: AI Auditing Framework (2024) | Structured framework for auditing AI governance, risk, and controls. Covers strategy, data governance, ethics, model risk, third-party controls, and monitoring. |
| IIA: AI Knowledge Centre | Continuously updated hub of articles, podcasts, webinars, and tools on AI in internal audit. Includes research from the Internal Audit Foundation. |
| Wolters Kluwer / IIA Foundation: Harnessing GenAI for Internal Audit (2024) | Global survey of 924 internal audit leaders on GenAI adoption, maturity, use cases, and governance gaps. Includes use case matrices for all four audit phases. |
| OECD Principles on AI (updated 2024) | The first intergovernmental standard on AI, adopted in 2019 and updated in 2024. Covers transparency, accountability, robustness, and responsible stewardship. A key reference for governance and ethics assessments. |
| NIST AI Risk Management Framework (AI RMF 1.0) | Practical framework for managing AI risks across the full AI lifecycle, covering Govern, Map, Measure, and Manage. Widely referenced in audit, compliance, and technology governance. |
| NIST AI RMF: GenAI Profile (AI 600-1) | Companion to the AI RMF addressing risks unique to generative AI: hallucination, data provenance, bias, and privacy. Directly applicable to audit of GenAI deployments. |
| EU AI Act: Regulatory Framework | The EU's risk-based framework for AI, classifying systems by risk level with corresponding obligations. Essential for auditors in or serving EU-regulated organisations. |
All links verified March 2026.
Strengthening AI assurance in your organisation?
Govern AI works with internal audit and risk functions to build the capability and frameworks needed to provide credible assurance over AI-enabled environments.