Why AI Governance Matters

Artificial Intelligence is transforming how organisations operate, automate decisions, and create value. Yet innovation is advancing faster than governance, creating regulatory, operational, and ethical exposure when AI is deployed without structured oversight.

In Europe, the EU AI Act introduces a risk-based regulatory framework that classifies AI systems according to their level of risk. It distinguishes between prohibited practices, high-risk systems, limited-risk applications subject mainly to transparency obligations, and minimal-risk uses. Higher-risk use cases, particularly those affecting individuals' rights or access to services, require stronger governance, oversight, and control mechanisms, while lower-risk uses face lighter regulatory expectations.

While Switzerland does not yet have a standalone AI law, its legislative approach is expected to progressively align with the EU AI Act. In the financial sector, FINMA already expects firms to integrate AI within existing governance, risk management, and internal control frameworks. Supervisory observations show that AI adoption across Swiss institutions is accelerating, while governance maturity is still evolving.

The regulatory landscape also has a cross-border dimension. The EU AI Act applies to organisations outside the EU whose AI systems or outputs are used within the European market. Swiss firms operating internationally must therefore align with EU requirements alongside Swiss supervisory expectations.

International frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework provide structured guidance to operationalize AI governance. They support implementation and auditability but remain voluntary best-practice standards rather than binding legal requirements.

Key AI risk domains include:

  • Inaccurate model outcomes impacting decisions
  • Data privacy and governance breaches
  • Bias and discriminatory impacts
  • Lack of explainability and transparency
  • Absence of human oversight and accountability
  • Weak robustness and cybersecurity safeguards
  • Regulatory non-compliance
  • Third-party and vendor risk
  • Reputational damage and loss of trust
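To make these domains concrete, a minimal risk-register entry could be sketched as follows. The domain names, fields, and 1–5 scales are illustrative assumptions for this sketch, not terms prescribed by the EU AI Act or any standard:

```python
from dataclasses import dataclass

# Illustrative shorthand for the risk domains listed above (assumed names).
RISK_DOMAINS = {
    "accuracy", "data_privacy", "bias", "explainability",
    "human_oversight", "robustness", "compliance",
    "third_party", "reputation",
}

@dataclass
class AIRiskEntry:
    """One entry in a hypothetical AI risk register."""
    use_case: str      # business use case the AI system supports
    domain: str        # one of RISK_DOMAINS
    description: str   # plain-language risk statement
    owner: str         # accountable risk owner
    likelihood: int    # 1 (rare) .. 5 (almost certain) — illustrative scale
    impact: int        # 1 (minor) .. 5 (severe) — illustrative scale

    def __post_init__(self) -> None:
        # Keep the register consistent with the agreed taxonomy.
        if self.domain not in RISK_DOMAINS:
            raise ValueError(f"unknown risk domain: {self.domain}")

entry = AIRiskEntry(
    use_case="credit scoring",
    domain="bias",
    description="Model may disadvantage protected groups",
    owner="Head of Credit Risk",
    likelihood=3,
    impact=5,
)
```

Even a structure this simple forces each risk to have a named owner and a defined domain, which is the precondition for the consolidated register discussed later.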

AI governance is therefore a Board-level priority, enabling organisations to scale innovation responsibly while maintaining compliance, transparency, and sustainable business value.

Comprehensive AI Governance Solutions

End-to-end advisory services for organisations navigating responsible AI, risk, compliance, and governance.

An integrated engagement supporting organisations in preparing for the EU AI Act while anticipating evolving regulatory and supervisory expectations in Switzerland.

This service translates regulatory requirements into concrete governance, control and documentation mechanisms. It is designed to move organisations from high-level principles to audit-ready implementation, aligned with the EU AI Act's risk-based approach and complementary frameworks such as OECD AI Principles, NIST AI RMF and ISO/IEC 42001.

The engagement covers the full AI lifecycle, from use case identification and classification to conformity readiness, operational monitoring and ongoing compliance.

Key outputs

Governance & Risk Foundation

  • AI system inventory and risk-based classification (EU AI Act aligned)
  • AI governance and accountability model (including roles across the three lines of defence)
  • Consolidated AI risk register aligned with regulatory risk categories

Compliance & Conformity Readiness

  • EU AI Act gap assessment and applicability analysis
  • High-risk AI system identification and obligation mapping
  • Technical documentation framework aligned with Article 11 requirements
  • Conformity assessment and CE-marking readiness support (where applicable)

Lifecycle Controls & Documentation

  • End-to-end lifecycle control framework (design, validation, deployment, monitoring)
  • Model documentation standards (model cards, data documentation, assumptions and limitations)
  • Data governance and dataset quality controls (bias, representativeness, lineage)

Transparency & Human Oversight

  • AI transparency and user disclosure framework (AI interaction, limitations, intended use)
  • Human oversight model including escalation paths, override mechanisms and accountability
  • Explainability requirements aligned with risk level and use case criticality

Monitoring, Logging & Incident Management

  • Logging and traceability framework aligned with EU AI Act requirements
  • Post-market monitoring and performance tracking model
  • AI incident and breach reporting process (internal and regulatory escalation)

Third-Party & Vendor Governance

  • AI vendor risk assessment and due diligence framework
  • Contractual and control expectations for third-party AI providers
  • Ongoing vendor monitoring and dependency risk assessment

Remediation & Audit Readiness

  • Prioritised remediation roadmap based on risk and regulatory exposure
  • Audit-ready evidence pack (policies, controls, documentation, traceability)
  • Board-level readiness pack with risk exposure, decisions and oversight model

We design and embed a structured AI governance framework that enables organisations to control, oversee and scale AI use in a consistent, accountable and risk-aware manner.

As AI adoption expands across business functions, governance challenges quickly move beyond technology. Organisations face fragmented ownership of AI systems, unclear accountability for AI-driven decisions, inconsistent application of controls, and limited visibility at senior management and board level. These gaps increase exposure to regulatory, operational and reputational risks.

This service establishes a clear governance operating model that defines how AI is overseen, how decisions are made, and how risks are managed in practice. It translates high-level principles such as fairness, transparency, robustness, privacy and human oversight into concrete roles, decision structures, policies and enforceable rules.

The result is a governance framework that is not only aligned with regulatory expectations (EU AI Act) and recognised best practices (including ISO/IEC 42001 and NIST AI RMF), but also embedded into day-to-day operations and designed to be auditable.

Key outputs

Governance Structure & Operating Model

  • AI governance framework including roles, committees and decision-making bodies
  • Definition of decision rights, approval processes and escalation paths
  • Integration with existing governance structures (risk, compliance, IT, internal audit)
  • Alignment with the three lines of defence model

Accountability & Lifecycle Ownership

  • AI accountability model across the full AI lifecycle (design, deployment, monitoring)
  • Clear definition of roles such as AI owner, model owner, risk owner and control owner
  • RACI matrix for key AI governance and risk processes
  • Definition of accountability for AI-assisted and AI-driven decisions
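A RACI matrix for AI governance processes can be expressed as a simple mapping. The activities, role names, and assignments below are illustrative assumptions for this sketch, not a prescribed allocation:

```python
# Hypothetical RACI matrix: activity -> role -> letter
# R = Responsible, A = Accountable, C = Consulted, I = Informed
RACI = {
    "use_case_approval": {
        "AI owner": "R", "Risk owner": "A",
        "Compliance": "C", "Internal audit": "I",
    },
    "model_validation": {
        "Model owner": "R", "Risk owner": "A",
        "AI owner": "C", "Internal audit": "I",
    },
    "incident_escalation": {
        "Control owner": "R", "Risk owner": "A",
        "AI owner": "C", "Senior management": "I",
    },
}

def accountable_for(activity: str) -> str:
    """Return the single role marked Accountable for an activity."""
    roles = [r for r, code in RACI[activity].items() if code == "A"]
    # A well-formed RACI has exactly one Accountable role per activity.
    assert len(roles) == 1, "each activity needs exactly one Accountable role"
    return roles[0]
```

Encoding the matrix this way makes the "exactly one Accountable role" rule checkable rather than implicit, which is useful when the matrix grows across the full lifecycle.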

Policy & Internal Standards Framework

  • AI policy framework covering responsible AI principles and governance requirements
  • Acceptable use policy for AI tools (including generative AI)
  • Definition of internal standards for AI development, validation and use
  • Control requirements embedded into policies to ensure enforceability and auditability

Risk Governance & Risk Appetite

  • Definition of key AI risk categories (e.g. bias, explainability, data quality, robustness, security)
  • AI risk appetite statement and tolerance thresholds
  • Alignment with enterprise risk management framework and existing risk taxonomy
  • Guidance on acceptable vs non-acceptable AI use cases

Oversight, Reporting & Escalation

  • Governance reporting framework for senior management and board
  • Definition of key risk indicators (KRIs) and escalation triggers
  • Structured escalation process for AI-related incidents and control failures
  • Board- and senior-management-ready governance documentation

Control Integration & Operational Embedding

  • Integration of governance requirements into business processes and workflows
  • Definition of minimum control requirements for AI systems based on risk level
  • Linkage between governance framework and lifecycle controls (development, validation, monitoring)
  • Alignment with recognised control frameworks (including ISO/IEC 42001 and NIST AI RMF)

We provide a structured approach to identifying, assessing and managing AI-related risks across use cases, aligned with Enterprise Risk Management and Operational Risk practices.

As organisations adopt AI at scale, risk exposure becomes more complex and less visible. AI introduces new risk dimensions such as bias and discrimination, lack of transparency in decision-making, data quality and privacy issues, model instability, cybersecurity vulnerabilities and dependencies on third-party providers. These risks often cut across functions and are not fully captured by traditional risk frameworks.

This service establishes a consistent and auditable methodology to assess AI risks at use case level, while consolidating them into an enterprise-wide view. It ensures that risks are clearly defined, owned, measured and monitored, and that mitigation actions are prioritised based on business impact and regulatory exposure.

The approach is aligned with recognised frameworks (including ISO/IEC 42001 and NIST AI RMF) and supports regulatory expectations under the EU AI Act, particularly for high-risk AI systems.

Key outputs

AI Risk Taxonomy & Framework

  • AI risk taxonomy aligned with Enterprise Risk Management and Operational Risk frameworks
  • Definition of key AI risk categories (e.g. bias, explainability, data quality, robustness, security, third-party risk)
  • Alignment with recognised standards (including ISO/IEC 42001 and NIST AI RMF)
  • Mapping of AI risks to existing enterprise risk categories

Use Case-Level Risk Assessment

  • Structured AI risk assessments for individual AI use cases
  • Evaluation of likelihood, impact and risk severity based on defined criteria
  • Identification of inherent and residual risks
  • Linkage between risk level and required control intensity (including high-risk AI considerations under the EU AI Act)
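The likelihood/impact evaluation and the linkage between risk level and control intensity described above can be sketched as a small scoring function. The 5×5 scale, thresholds, and intensity tiers are illustrative assumptions, not values taken from the EU AI Act:

```python
def risk_severity(likelihood: int, impact: int) -> int:
    """Inherent risk score on an assumed 5x5 scale."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def control_intensity(severity: int) -> str:
    """Map a severity score to a required control intensity.

    Thresholds are assumptions for this sketch; in practice the
    top tier would also trigger high-risk considerations under
    the organisation's EU AI Act classification.
    """
    if severity >= 15:
        return "enhanced"   # e.g. independent validation, human oversight
    if severity >= 8:
        return "standard"   # e.g. documented testing, periodic review
    return "baseline"       # e.g. inventory entry, basic monitoring
```

The point of the sketch is the linkage itself: once severity is computed on defined criteria, the minimum control set follows deterministically rather than by case-by-case negotiation.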

Risk Ownership & Accountability

  • Assignment of risk ownership and accountability for each identified risk
  • Integration with existing risk governance structures and three lines of defence
  • Definition of responsibilities for risk acceptance, mitigation and monitoring
  • RACI alignment for key risk management activities

Centralised Risk Register & Aggregation

  • Centralised AI risk register consolidating risks across all AI use cases
  • Aggregation of risks into an enterprise-wide view
  • Identification of systemic risk concentrations and cross-cutting themes
  • Integration with existing risk reporting and tooling

Mitigation, Controls & Monitoring

  • Definition of risk mitigation measures and control strategies
  • Linkage between identified risks and existing or required controls
  • Definition of Key Risk Indicators (KRIs) and monitoring thresholds
  • Ongoing risk monitoring framework aligned with lifecycle management

Reporting & Decision Support

  • Board- and senior-management-ready AI risk reporting
  • Risk dashboards highlighting exposure, trends and critical issues
  • Support for risk-based decision-making and prioritisation
  • Alignment with regulatory and audit expectations

We define governance-driven control and evidence expectations across the full AI lifecycle, from use-case approval and design through deployment, monitoring, change and retirement.

As AI systems evolve, are retrained or rely on third-party components, organisations often struggle to demonstrate how decisions were made, what data and models were used, and who was accountable at each stage. In practice, this leads to weak traceability, inconsistent control application, insufficient documentation for high-risk use cases and gaps in evidence during audits, incidents or regulatory reviews.

This service establishes a clear and auditable control framework that defines what "good control" and "sufficient evidence" look like across the AI lifecycle. It enables organisations to ensure traceability, accountability and reproducibility of AI-driven outcomes, while aligning control expectations with recognised frameworks (including ISO/IEC 42001 and NIST AI RMF) and regulatory requirements such as the EU AI Act.

The focus is not on implementing tools, but on defining control principles, minimum requirements and evidence standards that can be embedded into existing processes and systems.

Key outputs

Lifecycle Control Framework

  • End-to-end lifecycle control expectations covering design, development, validation, deployment, monitoring, change and retirement
  • Definition of minimum control requirements based on AI risk level and use case criticality
  • Alignment of lifecycle controls with governance decisions, approvals and risk assessments
  • Integration with recognised frameworks (including ISO/IEC 42001 and NIST AI RMF)

Logging, Traceability & Reproducibility

  • Logging and traceability requirements proportionate to AI risk and criticality
  • Definition of traceability across data, models, versions and decisions
  • Reproducibility expectations for model outputs and decision pathways
  • Alignment with EU AI Act expectations on logging and record-keeping
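One way to think about traceability across data, models, versions and decisions is a structured log entry per decision. The fields below are a sketch under assumed names, not an EU AI Act record-keeping schema; hashing the input supports reproducibility checks without storing raw personal data in the log:

```python
import hashlib
import json
from datetime import datetime, timezone
from typing import Optional

def decision_log_entry(model_id: str, model_version: str,
                       input_record: dict, output: dict,
                       human_reviewer: Optional[str] = None) -> dict:
    """Build one traceable decision record (illustrative fields)."""
    # Canonical serialisation so the same input always hashes identically.
    payload = json.dumps(input_record, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "input_hash": hashlib.sha256(payload).hexdigest(),
        "output": output,
        "human_reviewer": human_reviewer,  # None = fully automated decision
    }

entry = decision_log_entry(
    model_id="credit-scoring",
    model_version="2.3.1",
    input_record={"applicant_id": "A-123", "income": 80000},
    output={"decision": "approve", "score": 0.91},
)
```

Recording the model version alongside the input hash is what makes a decision reconstructable after the model has been retrained or replaced.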

Explainability & Transparency Controls

  • Explainability requirements tailored to use case sensitivity and impact
  • Transparency expectations for AI-assisted and AI-driven decisions
  • Definition of minimum documentation for assumptions, limitations and model behaviour
  • Alignment with regulatory expectations for high-risk AI systems

Evidence & Documentation Standards

  • Definition of evidence requirements for audits, investigations and regulatory dialogue
  • Structured documentation standards (e.g. model documentation, data documentation, decision logs)
  • Audit trail expectations across the AI lifecycle
  • Preparation of audit-ready evidence aligned with EU AI Act and recognised standards

Change Management & Model Governance

  • Control expectations for model updates, retraining and versioning
  • Change approval workflows and documentation requirements
  • Rollback and fallback control expectations
  • Linkage between change management and risk reassessment

Incident Management & Operational Resilience

  • Control expectations for AI-related incident detection, escalation and resolution
  • Definition of incident classification and response procedures
  • Fallback mechanisms and human intervention triggers
  • Alignment with operational resilience expectations and business continuity principles

Control Integration & Internal Control Alignment

  • Integration of AI controls into existing internal control frameworks
  • Mapping of AI controls to existing control libraries and processes
  • Guidance on control ownership, execution and testing
  • Alignment with internal audit expectations and control testing approaches

We support governance-driven selection, due diligence and ongoing oversight of AI vendors, ensuring that third-party AI risks are identified, controlled and aligned with enterprise risk and outsourcing frameworks.

As organisations increasingly rely on external AI solutions, risk exposure extends beyond internal systems. Third-party AI introduces challenges related to transparency, explainability and accountability, but also creates structural dependencies on external providers, models and services. These dependencies can be difficult to assess, monitor and manage over time, particularly when critical processes rely on opaque or non-substitutable AI components.

This service establishes a structured approach to AI vendor risk management, covering the full lifecycle from selection and onboarding to ongoing monitoring and exit. It ensures that AI-related third-party risks are fully integrated into existing outsourcing, operational resilience and third-party risk management frameworks, while aligning with recognised standards (including ISO/IEC 42001) and regulatory expectations such as the EU AI Act.

Key outputs

Vendor Selection & Assessment Framework

  • AI vendor assessment and selection framework aligned with procurement and risk requirements
  • Definition of minimum control and transparency expectations for AI vendors
  • Risk-based vendor classification based on criticality and AI use case impact
  • Alignment with existing third-party risk management and outsourcing frameworks

Due Diligence & Risk Analysis

  • Structured AI vendor due diligence covering governance, controls and risk exposure
  • Assessment of model transparency, explainability and data usage practices
  • Evaluation of vendor control environment and documentation maturity
  • Identification of inherent risks including bias, security, data protection and model limitations

Dependency, Concentration & Substitutability Risk

  • Assessment of dependency on external AI providers and critical services
  • Analysis of concentration risk across vendors and technologies
  • Evaluation of substitutability and portability of AI solutions
  • Identification of lock-in risks and mitigation strategies

Contractual & Control Requirements

  • Definition of AI-specific contractual clauses (e.g. transparency, audit rights, data usage, performance)
  • Minimum control expectations to be embedded in vendor agreements
  • Alignment with internal governance, compliance and risk requirements
  • Support for integration into procurement and legal processes

Ongoing Monitoring & Vendor Oversight

  • Framework for continuous monitoring of AI vendor performance and risk exposure
  • Definition of KPIs/KRIs and reporting requirements for vendors
  • Periodic reassessment of vendor risk and control effectiveness
  • Integration with existing vendor monitoring and review cycles

Operational Resilience & Exit Strategy

  • Alignment of AI vendors with operational resilience objectives
  • Assessment of impact of vendor failure on critical processes
  • Definition of fallback solutions and contingency measures
  • Exit and transition strategy including data, model and service continuity

Documentation & Audit Readiness

  • Documentation package supporting procurement, risk and audit review
  • Evidence of vendor due diligence, risk assessment and monitoring activities
  • Alignment with internal audit expectations and regulatory scrutiny
  • Support for supervisory interactions and third-party risk reviews

We assess governance, control and operational risks arising from AI agents and semi-autonomous systems that optimise, coordinate or execute business processes.

As AI systems evolve from supporting decisions to executing them, risk exposure fundamentally changes. AI agents can initiate actions, trigger workflows and interact with multiple systems with limited human intervention. This creates new challenges around accountability, control effectiveness and auditability, particularly when decisions are distributed across systems, external services or vendor components.

In this context, traditional control frameworks are often insufficient. Organisations face risks such as unclear decision ownership, unintended bypass of controls, breakdown of segregation of duties, reduced transparency in automated workflows and increased dependency on agent availability and external infrastructure.

This service defines how autonomy is governed in practice by establishing clear decision boundaries, human oversight mechanisms and control expectations. It ensures that AI agents operate within controlled, auditable and resilient environments, aligned with recognised frameworks (including ISO/IEC 42001 and NIST AI RMF) and consistent with regulatory expectations.

Key outputs

Agent Autonomy & Decision Boundaries

  • Assessment of agent autonomy levels (decision support vs decision execution)
  • Definition of decision boundaries and permitted actions for AI agents
  • Identification of high-impact or sensitive decisions requiring human involvement
  • Definition of escalation logic and approval thresholds
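Decision boundaries and escalation thresholds for agents can be made explicit as policy checks evaluated before any action executes. The action names and the monetary threshold below are illustrative assumptions, not a reference policy:

```python
from typing import NamedTuple

class Decision(NamedTuple):
    allowed: bool       # may the agent perform this action at all?
    needs_human: bool   # must a human approve before execution?
    reason: str

# Hypothetical policy: actions the agent may take autonomously, and
# thresholds above which a human must approve (escalation logic).
PERMITTED_ACTIONS = {"draft_report", "route_ticket", "execute_payment"}
HUMAN_APPROVAL_THRESHOLDS = {"execute_payment": 10_000}  # currency units

def check_agent_action(action: str, amount: float = 0.0) -> Decision:
    """Evaluate a proposed agent action against its decision boundary."""
    if action not in PERMITTED_ACTIONS:
        return Decision(False, True, "action outside permitted boundary")
    threshold = HUMAN_APPROVAL_THRESHOLDS.get(action)
    if threshold is not None and amount >= threshold:
        return Decision(True, True, "amount above human-approval threshold")
    return Decision(True, False, "within autonomous boundary")
```

Expressing the boundary as code rather than policy prose means every agent action leaves a checkable allow/escalate decision, which feeds directly into the monitoring and logging controls below.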

Control Impact & Segregation of Duties

  • Impact analysis of AI agents on existing internal controls
  • Identification of potential control bypass scenarios
  • Assessment of segregation of duties (SoD) risks introduced by agent behaviour
  • Definition of control safeguards to preserve control effectiveness

Oversight & Accountability Model

  • Governance model for AI agents including roles and responsibilities
  • Clear accountability for agent-driven decisions and outcomes
  • Definition of human-in-the-loop / human-on-the-loop oversight mechanisms
  • Integration with existing governance and risk frameworks

Dependency & Operational Resilience Risk

  • Identification of dependencies on external systems, APIs and vendors
  • Assessment of resilience risks linked to agent availability and performance
  • Evaluation of failure scenarios and cascading effects across processes
  • Alignment with operational resilience and business continuity expectations

Monitoring, Intervention & Fallback Controls

  • Definition of monitoring requirements for agent behaviour and performance
  • Triggers for human intervention and override mechanisms
  • Operational fallback procedures in case of agent failure or unexpected behaviour
  • Alignment with incident management and escalation processes

Evidence, Traceability & Auditability

  • Definition of logging and traceability requirements for agent decisions and actions
  • Documentation standards for agent logic, workflows and interactions
  • Evidence expectations for audits, investigations and regulatory reviews
  • Audit-ready documentation aligned with recognised standards (including ISO/IEC 42001)

We provide independent, risk-based assurance over AI governance, risk management and control frameworks, supporting organisations in demonstrating that AI systems are designed and operated in a controlled, transparent and compliant manner.

As AI adoption increases, organisations must be able to evidence that governance structures, controls and monitoring mechanisms are not only defined, but also effective in practice. This includes demonstrating that trustworthiness risks such as bias, lack of transparency, data and privacy issues, model instability, cybersecurity vulnerabilities and operational resilience are adequately managed.

This service delivers structured and auditable assurance aligned with internal audit methodologies and recognised frameworks (including ISO/IEC 42001 and NIST AI RMF). It supports organisations during internal audits, regulatory reviews and external assessments, providing an independent view on control effectiveness, documentation quality and overall AI risk management maturity.

Key outputs

Assurance Scope & Methodology

  • Definition of AI assurance scope, objectives and assessment criteria
  • Risk-based scoping aligned with AI use cases and risk classification
  • Alignment with internal audit methodologies and recognised standards (including ISO/IEC 42001 and NIST AI RMF)
  • Definition of audit approach (governance, risk, controls, lifecycle)

Control Testing & Effectiveness Assessment

  • Testing of AI governance and control design and operating effectiveness
  • Assessment of control implementation across the AI lifecycle
  • Identification of control gaps, weaknesses and inconsistencies
  • Evaluation of alignment between defined controls and actual practices

Evidence Review & Documentation Assessment

  • Assessment of evidence quality, completeness and traceability
  • Review of documentation (policies, model documentation, decision logs, monitoring records)
  • Verification of audit trail and reproducibility of AI-driven outcomes
  • Evaluation of readiness for audit and regulatory scrutiny

Model Evaluation & Validation Governance

  • Review of model evaluation, validation and benchmarking practices
  • Assessment of governance over testing methodologies and performance metrics
  • Evaluation of explainability, robustness and fairness validation processes
  • Alignment with regulatory expectations for high-risk AI systems

Audit & Regulatory Support

  • Direct support during internal audits, compliance reviews and regulatory inspections
  • Preparation of documentation and evidence for audit processes
  • Interaction support with internal audit, compliance and supervisory authorities
  • Alignment with EU AI Act expectations and audit requirements

Reporting, Findings & Remediation

  • AI assurance reports tailored for management and Audit Committees
  • Clear articulation of findings, root causes and risk implications
  • Prioritised remediation recommendations and action plans
  • Tracking of remediation actions and follow-up reviews

Principal Advisor

Massimo Barison

Based in Zurich and Lugano
Languages: German, English, Italian, French, Spanish
Certified Internal Auditor (CIA)
Certificate in Risk Management Assurance (CRMA)
Preparing: AI Business Specialist Federal Diploma
Preparing: ISO 42001 Lead Auditor

I am a senior advisor specialising in AI governance, risk management and assurance, with a background in internal audit, operational risk and highly regulated environments.

Over the years, I have worked closely with organisations operating under strict governance and supervisory expectations, where accountability, transparency and defensible decision-making are critical.

My experience sits at the intersection of governance, regulation and real-world operations. I support organisations in translating high-level principles and emerging regulatory expectations into practical governance structures, risk frameworks and controls that can be clearly explained to Boards, tested by auditors and defended under scrutiny.

My focus is not on technology implementation or product development, but on ensuring that AI use is governed in a way that is consistent, auditable and aligned with international standards and supervisory expectations.

How I work

I work as an embedded, hands-on advisor within the organisation.

I typically collaborate closely with senior management, risk management, compliance and internal audit teams, adapting my approach to the organisation's context, maturity and risk profile. Engagements are pragmatic and proportionate, designed to strengthen existing governance, risk and control frameworks rather than creating parallel structures or unnecessary bureaucracy.

Legal interpretation and technical implementation remain with the appropriate specialists. My role is to ensure that AI governance, risk management and assurance are coherent across the organisation, clearly owned, and capable of standing up to external and internal audits.

Contact

Let's talk

Get in touch to discuss your context and priorities.
