AI adoption is accelerating faster than governance maturity. Across OECD economies, the share of firms using AI rose from 8.7% in 2023 to 20.2% in 2025. In Swiss financial services, FINMA found that around 50% of surveyed institutions already use AI or have initial applications in development, while a further 25% intend to use it within three years.
The question is no longer whether organisations should slow down. It is whether they can build the governance backbone fast enough to support AI use responsibly. That is not only a technical issue. It is a question of accountability, ownership, risk appetite, oversight, and continuous capability-building. In practice, governance often lags behind adoption: inventories are incomplete, responsibilities are unclear, and visibility into externally sourced AI is limited.
A practical starting point matters: do not reinvent the wheel. If your organisation already has a strong enterprise risk management (ERM) framework, AI governance should be integrated into it rather than built as a disconnected parallel structure. That is more consistent with the logic of COSO ERM, which links governance and culture, strategy and objective-setting, performance, review and revision, and information, communication and reporting into one management approach.
Just as importantly, AI governance should not be reduced to a policy exercise. A sound approach identifies where the most significant risks sit, prioritises them proportionately, tracks whether controls are working, and improves the framework over time. Frameworks such as NIST AI RMF and ISO/IEC 42001 are helpful here because they treat AI governance as an ongoing management discipline rather than a one-off approval event.
What are you actually governing?
AI use cases often look similar on the surface, but they create different governance questions. In Switzerland, current enterprise use is especially visible in IT, finance, cybersecurity, customer service, and marketing, while financial institutions also report use cases such as chatbots, process optimisation, and text generation.
| AI domain | Typical current use cases | Why it matters | Main risks |
|---|---|---|---|
| Internal productivity and knowledge work | Enterprise search, drafting, summarisation, coding copilots, reporting support, meeting notes | Widely adopted through generative AI tools and productivity suites | Data leakage, hallucinations, over-reliance, unapproved tool use |
| Customer interaction and distribution | Chatbots, multilingual service support, outreach, personalisation, marketing content | Relevant for banks, insurers, asset managers, and customer-facing service models | Transparency, customer outcomes, privacy, escalation, reputational risk |
| Risk, compliance and control support | Fraud alerts, transaction review support, surveillance, case triage, compliance drafting, control testing support | Highly relevant in regulated environments with strong documentation and monitoring demands | False positives or negatives, explainability, drift, weak human challenge |
| Decision support | Credit assessment, underwriting support, hiring support, forecasting, prioritisation, anomaly detection | Relevant where AI influences material business or people decisions | Bias, explainability, human oversight, error tolerance, accountability |
| Technology and operations automation | IT operations, cyber monitoring, workflow routing, process optimisation, service management | Swiss organisations show relatively strong AI use in IT, finance, and cybersecurity functions | Hidden dependencies, resilience risk, automation errors, weak fallback design |
| Third-party embedded AI | Copilots, cloud AI services, CRM and ERP features, vendor analytics, external HR or compliance tools | Common because many organisations adopt AI through existing vendors rather than internal builds | Limited transparency, concentration risk, change risk, unclear accountability |
This table is not just a taxonomy. It is a diagnostic. If AI is already influencing one of these areas without clear ownership, controls, and review, the organisation is already carrying governance debt. It is also important to recognise that AI risk does not sit only in the model itself. It can arise across the whole value chain: upstream data, external models, cloud services, embedded features, human review steps, operating context, and downstream use.
Six practical steps to move from experimentation to structured governance
1. Establish AI Governance, Accountability and Culture
Effective AI governance starts with tone from the top. Boards and senior management should define how AI fits into the wider governance model, what the organisation is trying to achieve with it, and what level of uncertainty it is prepared to accept. In practice, this means translating AI governance into measurable objectives, clear governance bodies, defined decision rights, and practical oversight through key performance indicators and key risk indicators (KPIs and KRIs).
This is also where risk appetite and tolerances belong. AI-related tolerances should not sit outside enterprise risk appetite. They should be integrated into it, so the organisation is clear on what level of automation risk, model risk, third-party dependency, data risk, or control weakness it is willing to accept.
A second element is AI literacy. Governance is weakened when organisations deploy AI faster than they build the human capability to understand and challenge it. Boards, senior management, business owners, control functions, and general users need different levels of literacy, and that capability needs to be developed continuously.
What to do
- Define governance roles and decision rights. Clarify who owns AI use cases, who sets policy, who challenges risk decisions, and who provides independent assurance.
- Set AI-related risk appetite and tolerances. Translate them into practical thresholds for model unreliability, data sensitivity, third-party dependency, automation risk, and control weakness (a simple illustration follows this list).
- Create governance forums for material use cases. Significant AI deployments should be reviewed through an appropriate committee or decision body, not approved informally.
- Build role-based AI literacy. Boards, senior management, business owners, control functions, and users need different levels of understanding to oversee and use AI responsibly.
- Make training continuous, not one-off. Refresh capability regularly so governance keeps pace with evolving use cases, risks, providers, and regulatory expectations.
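To make tolerances operational rather than aspirational, it can help to express them as explicit thresholds that monitoring can test against. The sketch below is purely illustrative: the metric names and limit values are assumptions, and a real organisation would derive its own from its approved enterprise risk appetite.

```python
# Illustrative only: metric names and limits are hypothetical examples,
# not recommended values; each organisation sets its own within its ERM appetite.
AI_RISK_TOLERANCES = {
    "output_error_rate": 0.05,           # max share of sampled outputs found incorrect
    "human_override_rate": 0.15,         # max share of AI recommendations overridden by staff
    "single_provider_dependency": 0.40,  # max share of material use cases on one provider
    "days_since_last_review": 90,        # max days a material use case may run unreviewed
}

def tolerance_breaches(observed: dict) -> list:
    """Return the names of tolerances breached by the observed metric values."""
    return [name for name, limit in AI_RISK_TOLERANCES.items()
            if observed.get(name, 0) > limit]
```

Expressed this way, the same figures can feed both the approval discussion for a new use case and the KPI and KRI reporting described in step 6.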
2. Define the AI Landscape, Business Context and Ownership
You cannot govern what you cannot see. The first operational step is to build a reliable picture of where AI is used, why it is used, what data it touches, which process it affects, and who owns the outcome.
This should not be treated as a technical inventory only. It should connect to strategy and objective-setting: what business objective the use case supports, who is accountable for outcomes, which stakeholders are affected, what data and systems it depends on, and whether the use case is experimental, operational, or business-critical.
What to do
- Create an AI use-case inventory. Record where AI is used, for what purpose, in which process, and whether it is experimental, operational, or business-critical (a minimal record structure is sketched after this list).
- Assign a named owner to each material use case. Accountability should sit with a person, not with a tool or vendor.
- Map the business context. Identify which objectives the use case supports, which stakeholders are affected, and where outputs influence decisions, customer interactions, or control processes.
- Trace data and system dependencies. Understand what data the use case accesses, where outputs go, and which upstream or downstream systems it affects.
- Introduce approval and change governance. New AI use cases and material changes should be reviewed before they enter production, not after issues arise.
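As a concrete illustration of what an inventory entry might capture, the sketch below defines a minimal record with a named owner, business context, lifecycle stage, and data and system dependencies. The field names and the example entry are assumptions chosen for illustration, not a standard schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AIUseCase:
    """Minimal illustrative inventory record for one AI use case (field names are assumptions)."""
    name: str                      # e.g. "Customer email drafting assistant"
    business_objective: str        # objective the use case supports
    owner: str                     # named accountable person, not a team or vendor
    process: str                   # business process the output feeds into
    lifecycle_stage: str           # "experimental", "operational", or "business-critical"
    provider: str                  # internal build or external vendor/service
    data_classes: list = field(default_factory=list)        # data the use case accesses
    downstream_systems: list = field(default_factory=list)  # where outputs flow
    approved: bool = False         # passed approval and change governance?
    last_review: Optional[str] = None  # date of the most recent review

example = AIUseCase(
    name="Customer email drafting assistant",
    business_objective="Reduce response times in customer service",
    owner="Head of Customer Operations",
    process="Customer correspondence",
    lifecycle_stage="operational",
    provider="Embedded vendor copilot",
    data_classes=["customer contact data"],
    downstream_systems=["CRM"],
    approved=True,
    last_review="2025-06-30",
)
```

Even a simple structure like this makes gaps visible: a use case without a named owner or a recent review date is itself a governance finding.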
3. Assess AI Risk Exposure and Set Response Priorities
Not every AI use case needs the same governance response. The right question is not 'Are we using AI?' but 'Which AI uses matter most, and why?' A mature organisation assesses AI within the broader enterprise risk portfolio, not as a detached novelty risk. That means linking prioritisation to business objectives, materiality, and risk appetite.
This assessment should be broader than compliance alone. It should cover legal and regulatory exposure, operational and conduct risk, data protection implications, control weakness and model risk, reputational impact, and third-party dependency.
What to do
- Classify use cases by materiality and risk. Distinguish low-impact productivity uses from customer-facing or decision-support uses that require stronger governance (a simple tiering sketch follows this list).
- Assess exposure across multiple dimensions. Look beyond compliance to operational risk, conduct risk, data protection, reputational impact, model risk, and dependency risk.
- Link AI risk to the existing ERM framework. Use existing risk categories, appetite logic, and reporting structures where they already exist.
- Prioritise governance effort where it matters most. Focus first on use cases with higher impact on customers, employees, financial decisions, regulated processes, or sensitive data.
- Define the required response. Decide where stronger controls, human review, restrictions, enhanced monitoring, or escalation are needed.
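One simple way to turn a multi-dimensional assessment into prioritisation is to score each use case across a small set of risk dimensions and map the result to a governance tier. The dimensions, scores, and cut-offs below are illustrative assumptions; in practice they should mirror the organisation's existing ERM categories and appetite.

```python
# Illustrative tiering sketch: dimensions, scores, and cut-offs are assumptions.
RISK_DIMENSIONS = [
    "legal_regulatory", "operational", "conduct",
    "data_protection", "model_reliability", "third_party_dependency",
]

def governance_tier(scores: dict) -> str:
    """Map 1-5 scores per dimension to an illustrative governance tier."""
    highest = max(scores.get(d, 1) for d in RISK_DIMENSIONS)
    total = sum(scores.get(d, 1) for d in RISK_DIMENSIONS)
    if highest >= 4 or total >= 20:
        return "enhanced"   # committee approval, human review, close monitoring
    if highest == 3 or total >= 14:
        return "standard"   # documented controls and periodic review
    return "baseline"       # acceptable-use rules and an inventory entry

# Example: a customer-facing decision-support use case
print(governance_tier({
    "legal_regulatory": 4, "operational": 3, "conduct": 3,
    "data_protection": 4, "model_reliability": 3, "third_party_dependency": 2,
}))  # -> "enhanced"
```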
4. Govern AI Across the Lifecycle
AI governance is not a one-time approval gate. NIST AI RMF makes clear that AI risk management should run across the lifecycle through Govern, Map, Measure, and Manage. That matters because models, prompts, vendors, data, and operating contexts change. A system that looked acceptable at launch can drift, degrade, or create new harms later. Governance therefore has to be continuous.
What to do
- Test before deployment. Validate whether the use case performs as intended, whether outputs are reliable enough, and whether controls are in place before release.
- Monitor performance continuously. Track drift, error patterns, incidents, unusual outputs, and emerging weaknesses over time rather than relying on launch approval.
- Set re-evaluation triggers. Define when material changes in data, prompts, models, vendors, or business use should trigger reassessment or reapproval (an illustrative trigger check follows this list).
- Maintain documentation and traceability. Keep sufficient records on purpose, assumptions, controls, changes, incidents, and review decisions.
- Build fallback and human-override procedures. Staff should know what happens when outputs are unreliable, harmful, or difficult to explain.
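Re-evaluation triggers only work if they are written down precisely enough to be checked. The sketch below shows one possible way to encode them; the event names and threshold values are assumptions, and the point is simply that defined changes should force reassessment rather than rely on someone remembering.

```python
# Illustrative triggers only: event names and values are assumptions.
REEVALUATION_TRIGGERS = {
    "model_version_change": True,    # provider retrains or swaps the underlying model
    "prompt_or_config_change": True, # material change to prompts or settings
    "new_data_source": True,         # use case starts consuming new data
    "error_rate_increase": 0.02,     # absolute rise in sampled error rate since approval
    "incident_count": 3,             # incidents logged since the last review
}

def needs_reassessment(events: dict) -> bool:
    """Return True if any observed event crosses its defined trigger."""
    for name, trigger in REEVALUATION_TRIGGERS.items():
        observed = events.get(name, False)
        if isinstance(trigger, bool):
            if trigger and bool(observed):
                return True
        elif float(observed) >= trigger:
            return True
    return False
```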
5. Strengthen Third-Party AI Oversight
For many organisations, the most common AI use cases are not internally built models. They come through productivity copilots, foundation model providers, cloud AI services, embedded enterprise software features, and external HR, compliance, fraud, analytics, or customer-service solutions. That makes third-party oversight a stand-alone governance pillar, not a sub-point of reporting.
What to do
- Evaluate model transparency. Can the provider explain how the model works, what it is designed to do, what limitations apply, and what level of transparency exists around training data and updates?
- Define contractual accountability. Clarify responsibilities, service expectations, change notification, incident handling, and liability where the AI produces harmful or incorrect outputs.
- Monitor vendor performance continuously. Annual reviews are not enough for tools that are used daily in important business processes or control activities.
- Assess concentration and dependency risk. Heavy reliance on one provider, one model family, or one platform can create strategic and operational vulnerability (a simple concentration check is sketched after this list).
- Require advance notification of model or service changes. Unannounced updates to underlying models, thresholds, or features are one of the most common and underappreciated AI risks.
- Look beyond the direct vendor. Consider indirect dependencies behind your direct vendor, cloud hosting arrangements, embedded subcontractors, and exit feasibility if service quality deteriorates.
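Concentration risk becomes easier to discuss once it can be measured against the inventory from step 2. The sketch below reuses the hypothetical AIUseCase record and simply counts how many material use cases depend on each provider; the 40% limit is an illustrative appetite figure, not a recommendation.

```python
from collections import Counter

def provider_concentration(use_cases: list) -> dict:
    """Share of non-experimental use cases depending on each provider."""
    material = [u for u in use_cases if u.lifecycle_stage != "experimental"]
    counts = Counter(u.provider for u in material)
    total = len(material) or 1
    return {provider: n / total for provider, n in counts.items()}

def concentration_breaches(use_cases: list, limit: float = 0.40) -> list:
    """Providers whose share of material use cases exceeds the appetite limit."""
    return [p for p, share in provider_concentration(use_cases).items()
            if share > limit]
```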
6. Embed AI Risk Reporting, Escalation and Continuous Improvement
Governance needs a closing loop. It should be possible to tell senior management and the Board which AI uses are material, which ones sit outside appetite, where incidents or performance deterioration are emerging, where external dependency is creating concentration or resilience risk, and whether controls are working in practice.
This is where reporting, escalation, assurance, and continuous improvement come together. Strong Board oversight should be supported not only by reporting, but by regular assurance on whether governance, controls, and risk responses are operating effectively in practice.
What to do
- Report against approved AI risk appetite and tolerances. Management reporting should show whether material AI use cases remain within agreed limits.
- Use KPIs and KRIs for ongoing oversight. Track performance, incidents, override rates, control exceptions, user behaviour, and external dependency indicators where relevant.
- Set clear escalation thresholds. Define what triggers escalation to management, control functions, committees, or the Board (an illustrative threshold check follows this list).
- Provide regular assurance to the Board. Periodic assurance, alongside routine reporting, should confirm that governance, controls, and risk responses are working in practice rather than only on paper.
- Use incidents and near misses to improve the framework. Refresh controls, tolerances, governance arrangements, and training content based on what monitoring and assurance reveal.
- Refresh organisational capability over time. Continuous improvement should include AI literacy, targeted retraining, and updates for new use cases, new controls, and new third-party dependencies.
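Escalation works best when thresholds and routes are defined before an issue arises. The sketch below pairs hypothetical KRIs with illustrative limits and escalation routes; the names, figures, and committee labels are assumptions and would mirror the approved appetite and the organisation's own governance bodies.

```python
# Illustrative only: KRI names, limits, and escalation routes are assumptions.
ESCALATION_RULES = [
    # (KRI name, threshold, escalate to)
    ("output_error_rate",   0.05, "AI governance committee"),
    ("human_override_rate", 0.15, "Business owner and risk function"),
    ("open_incidents",      3,    "Senior management"),
    ("tolerance_breaches",  1,    "Board risk committee"),
]

def escalations(kris: dict) -> list:
    """Return (KRI, escalation route) pairs for every threshold that is crossed."""
    return [(name, route) for name, limit, route in ESCALATION_RULES
            if kris.get(name, 0) >= limit]

# Example monthly snapshot
for kri, route in escalations({"output_error_rate": 0.07, "open_incidents": 1}):
    print(f"Escalate {kri} to {route}")
```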
The bottom line
AI governance is not a constraint on innovation. It is what makes innovation sustainable. The organisations that handle this best will not simply be the ones running the most pilots. They will be the ones that know where AI sits, who owns it, which risks matter most, how performance is monitored over time, how external dependencies are controlled, and how issues are escalated before they become control breakdowns, regulatory concerns, or reputational damage.
That is the real shift from experimentation to structured governance. Experimentation tests what AI can do. Structured governance determines how AI should be used, under what conditions, with what level of risk, and with what evidence that the framework is working.
AI governance should not begin with a separate framework if a strong ERM structure already exists. The better approach is usually to integrate AI governance into existing governance, risk, control, and assurance processes, while addressing the genuinely new features of AI: opacity, autonomy, rapid change, provider dependency, and decision impact.
Strong AI governance does not start with tools. It starts with tone from the top, clear accountability, defined risk appetite and tolerances, practical oversight through KPIs and KRIs, regular assurance to the Board, and sufficient AI literacy across the organisation. Organisations that make that transition successfully will be in a far stronger position to scale AI with confidence rather than complexity.
Interested in strengthening AI governance?
Structured governance enables organisations to scale AI confidently, reduce operational incidents and demonstrate regulatory readiness.