Why AI Governance Matters

Artificial intelligence is transforming how organisations operate, automate decisions, and create value. Yet innovation is advancing faster than governance, creating regulatory, operational, and ethical exposure when AI is deployed without structured oversight.

In Europe, the EU AI Act introduces a risk-based regulatory framework that classifies AI systems according to their level of risk. It distinguishes between prohibited practices, high-risk systems, limited-risk applications subject mainly to transparency obligations, and minimal-risk uses. The higher the risk, particularly where a system affects individuals' rights or access to services, the stronger the governance, oversight, and control mechanisms required.

While Switzerland does not yet have a standalone AI law, its legislative approach is expected to progressively align with the EU AI Act. In the financial sector, FINMA already expects firms to integrate AI within existing governance, risk management, and internal control frameworks. Supervisory observations show that AI adoption across Swiss institutions is accelerating, while governance maturity is still evolving.

The regulatory landscape also has a cross-border dimension. The EU AI Act applies to organizations outside the EU whose AI systems or outputs are used within the European market. Swiss firms operating internationally must therefore align with EU requirements alongside Swiss supervisory expectations.

AI governance is therefore a Board-level priority, enabling organisations to scale innovation responsibly while ensuring compliance, transparency, and sustainable business value.

Key AI risk domains include:

  • Inaccurate model outcomes impacting decisions
  • Data privacy and governance breaches
  • Bias and discriminatory impacts
  • Lack of explainability and transparency
  • Absence of human oversight and accountability
  • Weak robustness and cybersecurity safeguards
  • Regulatory non-compliance
  • Third-party and vendor risk
  • Reputational damage and loss of trust

Comprehensive AI Governance Solutions

End-to-end advisory services for organisations navigating responsible AI, risk, compliance, and governance.

We design and embed a complete AI governance framework, combining governance structures, policies and internal standards into a single, coherent foundation that promotes responsible AI use across the organisation.

As AI adoption grows, organisations face fundamental risks that are rarely technical: unclear accountability for AI-driven decisions, inconsistent oversight across teams, uncontrolled or inappropriate use of AI tools, lack of transparency when AI influences outcomes, and weak escalation when risks materialise. These risks are explicitly highlighted in the OECD AI Principles, the EU AI Act and the NIST AI Risk Management Framework.

This service defines who is responsible for AI, how decisions are made, which risks are acceptable, and how AI use is governed in daily operations. High-level principles such as fairness, transparency, privacy, safety, robustness and human oversight are translated into enforceable and auditable rules that ensure responsible AI deployment.
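
By way of illustration, the sketch below shows how decision rights and escalation paths can be captured as structured, auditable data rather than left in prose. It is a minimal Python sketch under stated assumptions: the risk tiers, roles and review cycles are hypothetical placeholders, not a recommended operating model.

    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        # Hypothetical internal tiers; an organisation would define its own,
        # typically mapped to the EU AI Act's risk-based classification.
        LOW = "low"
        MEDIUM = "medium"
        HIGH = "high"

    @dataclass(frozen=True)
    class DecisionRight:
        approver: str              # role accountable for approving the use case
        escalation_path: tuple     # ordered roles to escalate to when risk appetite is exceeded
        review_cycle_months: int   # how often the approval must be revisited

    # Illustrative decision-rights matrix: each tier gets an explicit,
    # auditable owner and escalation route instead of implicit conventions.
    DECISION_RIGHTS = {
        RiskTier.LOW: DecisionRight("Business Line Head", ("CRO",), 12),
        RiskTier.MEDIUM: DecisionRight("AI Governance Committee", ("CRO", "Executive Board"), 6),
        RiskTier.HIGH: DecisionRight("Executive Board", ("Board of Directors",), 3),
    }

    def approver_for(tier: RiskTier) -> str:
        """Return the role accountable for approving a use case at this tier."""
        return DECISION_RIGHTS[tier].approver

    print(approver_for(RiskTier.HIGH))  # -> Executive Board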

Key outputs

  • AI governance framework (roles, committees, decision rights, escalation paths)
  • AI accountability model across the AI lifecycle
  • AI acceptable-use and generative AI policy
  • AI risk appetite statement and tolerance principles
  • Definition of key AI risk categories
  • Board- and senior-management-ready governance documentation

We provide structured identification, assessment and documentation of AI risks, aligned with Enterprise Risk Management and Operational Risk practices, so that AI risks are managed with the same discipline as any other enterprise risk.

This service addresses core AI risks such as unfair or biased outcomes, privacy and data-protection impacts, lack of explainability in sensitive decisions, unreliable behaviour in edge cases, cybersecurity vulnerabilities and operational resilience concerns. Risks are assessed consistently across use cases and consolidated into a single enterprise view.
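
As a minimal sketch, assuming illustrative field names and categories, a single risk-register entry could be structured as below, so that every risk carries a named owner, an inherent and residual rating, and a review date; a real register would follow the organisation's own ERM taxonomy.

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class RiskCategory(Enum):
        # Illustrative categories echoing the risk domains listed earlier.
        BIAS = "bias and discriminatory impacts"
        PRIVACY = "data privacy and governance"
        EXPLAINABILITY = "lack of explainability"
        ROBUSTNESS = "robustness and cybersecurity"

    @dataclass
    class RiskRegisterEntry:
        use_case: str
        category: RiskCategory
        description: str
        owner: str                 # a named accountable owner, not a team
        inherent_rating: str       # severity before mitigation, e.g. "high"
        residual_rating: str       # severity after controls are applied
        mitigations: list[str]
        next_review: date

    # A single consolidated register provides the enterprise view described above.
    register = [
        RiskRegisterEntry(
            use_case="credit pre-screening model",
            category=RiskCategory.BIAS,
            description="Potential disparate impact on protected groups",
            owner="Head of Retail Credit",
            inherent_rating="high",
            residual_rating="medium",
            mitigations=["fairness testing before release", "quarterly outcome monitoring"],
            next_review=date(2026, 6, 30),
        ),
    ]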

Key outputs

  • AI risk taxonomy aligned with ERM and Operational Risk
  • AI risk assessments per use case
  • Centralised AI risk register with ownership
  • Mitigation and monitoring structure
  • Board-ready AI risk reporting

We deliver an integrated, end-to-end engagement that prepares organisations for the EU AI Act while anticipating the evolution of AI regulation and supervisory expectations in Switzerland.

This package brings together governance, use-case definition, risk assessment, controls, vendor oversight and assurance into a single, coherent readiness programme. It is built around the EU AI Act's risk-based approach and the trustworthiness risks identified by OECD, NIST and ISO.
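
For illustration, a deliberately simplified triage of an AI inventory against the Act's risk tiers might look like the sketch below. The attributes and rules are hypothetical assumptions for inventory purposes only and are no substitute for legal classification.

    from enum import Enum

    class ActTier(Enum):
        PROHIBITED = "prohibited practice"
        HIGH_RISK = "high-risk system"
        TRANSPARENCY = "transparency obligations"
        MINIMAL = "minimal risk"

    def triage(system: dict) -> ActTier:
        """Coarse first-pass classification for the AI inventory;
        real EU AI Act classification requires legal analysis."""
        if system.get("prohibited_practice"):
            return ActTier.PROHIBITED
        if system.get("affects_rights_or_access"):   # e.g. credit, hiring, essential services
            return ActTier.HIGH_RISK
        if system.get("interacts_with_humans"):      # e.g. chatbots, synthetic content
            return ActTier.TRANSPARENCY
        return ActTier.MINIMAL

    inventory = [
        {"name": "CV screening assistant", "affects_rights_or_access": True},
        {"name": "internal drafting chatbot", "interacts_with_humans": True},
    ]
    for system in inventory:
        print(system["name"], "->", triage(system).value)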

Key outputs

  • AI system inventory and risk-based classification
  • AI governance and accountability model
  • Consolidated AI risk register
  • Lifecycle control and documentation framework
  • Prioritised remediation roadmap
  • Vendor and third-party alignment
  • Board-level readiness pack

We define governance-driven control and evidence expectations across the full AI lifecycle, from use-case approval and design through deployment, monitoring, change and retirement.

As AI systems evolve, are retrained or are sourced from external providers, organisations often struggle to demonstrate what happened, why it happened and who was accountable. Typical weaknesses include insufficient logging and traceability, weak change control, limited explainability for sensitive decisions and gaps in evidence during incidents.

This service specifies what "good control" and "sufficient evidence" look like, aligned with OECD principles, EU AI Act expectations, NIST AI RMF trustworthiness characteristics and ISO/IEC 42001 lifecycle discipline, without implementing technology.
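
A minimal sketch, assuming a simple JSON log format, of what a traceable evidence record could look like; the event types and field names are illustrative, not a prescribed standard. The point is that each record answers the three questions above: what happened, why, and who was accountable.

    import json
    from datetime import datetime, timezone

    def evidence_record(system: str, event: str, actor: str, detail: dict) -> str:
        """Build one append-only evidence record for the AI audit trail."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system": system,
            "event": event,    # e.g. "model_retrained", "threshold_changed"
            "actor": actor,    # accountable person or service identity
            "detail": detail,  # rationale, approval reference, before/after values
        }
        return json.dumps(record, sort_keys=True)

    print(evidence_record(
        system="fraud-scoring-v3",
        event="threshold_changed",
        actor="model-risk-officer",
        detail={"rationale": "Q3 drift review", "approval_ref": "CHG-0421",
                "old_threshold": 0.72, "new_threshold": 0.68},
    ))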

Key outputs

  • Lifecycle control expectations covering design, deployment, monitoring, change and retirement
  • Logging and traceability requirements proportionate to AI risk and criticality
  • Explainability and transparency expectations for sensitive or high-impact use cases
  • Evidence standards for audits, investigations and regulatory dialogue
  • Resilience-oriented expectations (incident handling, fallback, change and rollback)
  • Integration of AI controls into existing internal control frameworks

We support governance-driven selection, due diligence and ongoing oversight of AI vendors.

Third-party AI solutions often introduce not only transparency and accountability risks, but also structural dependency and operational resilience risks. Organisations may become reliant on opaque models, external decision logic or critical AI services that are difficult to substitute, monitor or control over time.

This service ensures AI vendor relationships are governed with a clear view on dependency, concentration risk, substitutability and resilience, and that AI-related third-party risks are integrated into existing outsourcing and third-party risk management frameworks.
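
As an illustrative sketch, dependency and substitutability can be recorded in a structured vendor assessment such as the one below; the fields, ratings and escalation rule are hypothetical assumptions, not a due-diligence standard.

    from dataclasses import dataclass

    @dataclass
    class VendorAssessment:
        vendor: str
        service: str
        criticality: str        # business impact if the service fails
        substitutability: str   # how easily the vendor could be replaced
        transparency: str       # visibility into model behaviour and changes
        exit_plan: bool         # documented, tested exit or fallback strategy

        def needs_escalation(self) -> bool:
            """Flag the combination that signals structural dependency risk."""
            return (self.criticality == "high"
                    and self.substitutability == "low"
                    and not self.exit_plan)

    assessment = VendorAssessment(
        vendor="ExampleAI Ltd", service="document classification API",
        criticality="high", substitutability="low",
        transparency="limited", exit_plan=False,
    )
    print(assessment.needs_escalation())  # -> True: concentration risk without an exit plan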

Key outputs

  • AI vendor assessment and selection framework
  • Vendor due diligence and risk analysis
  • Assessment of dependency, concentration and substitutability risks
  • Alignment of AI vendors with operational resilience objectives
  • Documentation for procurement, risk and audit review

We assess governance and control risks arising from AI agents and semi-autonomous systems that optimise, coordinate or execute business processes.

As autonomy increases, risks shift from decision support to decision execution. AI agents can create accountability gaps, unintended control bypasses, reduced auditability and operational resilience risks, particularly when critical workflows depend on agent availability, external services or vendor components.

This service focuses on preserving human oversight, control effectiveness and resilience by clearly defining decision boundaries, escalation mechanisms and fallback conditions.
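
The sketch below illustrates one way such boundaries could be made explicit and testable: the agent executes only within its mandate, and everything else routes to a defined fallback. The policy fields, threshold and actions are hypothetical assumptions.

    from dataclasses import dataclass

    @dataclass
    class AutonomyPolicy:
        # Hypothetical boundaries; in practice these derive from the
        # organisation's risk appetite and control framework.
        max_transaction_value: float   # above this, a human must decide
        allowed_actions: frozenset     # the agent's explicit mandate
        fallback_action: str = "queue_for_human_review"

    def decide(policy: AutonomyPolicy, action: str, value: float) -> str:
        """Enforce decision boundaries before an agent acts."""
        if action in policy.allowed_actions and value <= policy.max_transaction_value:
            return f"execute:{action}"
        return policy.fallback_action  # escalate instead of acting

    policy = AutonomyPolicy(
        max_transaction_value=10_000.0,
        allowed_actions=frozenset({"reorder_stock", "issue_refund"}),
    )
    print(decide(policy, "issue_refund", 250.0))     # within mandate -> executed
    print(decide(policy, "issue_refund", 50_000.0))  # breaches boundary -> escalated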

Key outputs

  • Assessment of agent autonomy, decision boundaries and escalation logic
  • Impact analysis on internal controls and segregation of duties
  • Identification of dependency and resilience risks
  • Oversight and accountability model for AI agents
  • Operational fallback and evidence expectations
  • Audit-ready documentation

We provide independent, risk-based assurance over AI governance and controls, including direct support during external audits, internal audits and compliance reviews.

The focus is on whether AI governance and controls effectively address trustworthiness risks such as bias and unfair outcomes, insufficient transparency and explainability, privacy impacts, unreliable performance, cybersecurity vulnerabilities and operational resilience.

Key outputs

  • AI assurance scope and assessment criteria
  • Support during external audits, internal audits and compliance reviews
  • Control testing and evidence assessment
  • Review of evaluation, validation and benchmarking governance
  • AI assurance reports for management and Audit Committees
  • Findings and remediation recommendations

Principal Advisor

Massimo Barison

Based in Zurich and Lugano
Languages: German, English, Italian, French, Spanish
Connect on LinkedIn
Certified Internal Auditor (CIA)
Certificate in Risk Management Assurance (CRMA)
Preparing: AI Business Specialist Federal Diploma
Preparing: ISACA Advanced in AI Audit (AAIA)

I am a senior advisor specialising in AI governance, risk management and assurance, with a background in internal audit, operational risk and highly regulated environments.

Over the years, I have worked closely with organisations operating under strict governance and supervisory expectations, where accountability, transparency and defensible decision-making are critical.

My experience sits at the intersection of governance, regulation and real-world operations. I support organisations in translating high-level principles and emerging regulatory expectations into practical governance structures, risk frameworks and controls that can be clearly explained to Boards, tested by auditors and defended under scrutiny.

My focus is not on technology implementation or product development, but on ensuring that AI use is governed in a way that is consistent, auditable and aligned with international standards and supervisory expectations.

How I work

I work as an embedded, hands-on advisor within the organisation.

I typically collaborate closely with senior management, risk management, compliance and internal audit teams, adapting my approach to the organisation's context, maturity and risk profile. Engagements are pragmatic and proportionate, designed to strengthen existing governance, risk and control frameworks rather than creating parallel structures or unnecessary bureaucracy.

Legal interpretation and technical implementation remain with the appropriate specialists. My role is to ensure that AI governance, risk management and assurance are coherent across the organisation, clearly owned, and capable of standing up to external and internal audits.

Contact

Let's talk

Get in touch to discuss your context and priorities.
