AI Governance, Risk & Assurance
We help organisations implement responsible AI governance in a practical, proportionate and defensible way, ensuring AI use can be clearly explained, effectively controlled and confidently defended.
Artificial Intelligence is transforming how organisations operate, automate decisions and create value. Yet innovation is advancing faster than governance, creating regulatory, operational and ethical exposure when AI is deployed without structured oversight.
In Europe, the EU AI Act introduces a risk-based regulatory framework that classifies AI systems according to their level of risk. It distinguishes between prohibited practices, high-risk systems, and lower-risk applications subject mainly to transparency obligations. Higher-risk use cases, particularly those affecting individuals' rights or access to services, require stronger governance, oversight, and control mechanisms, while lower-risk uses face lighter regulatory expectations.
While Switzerland does not yet have a standalone AI law, its legislative approach is expected to progressively align with the EU AI Act. In the financial sector, FINMA already expects firms to integrate AI within existing governance, risk management, and internal control frameworks. Supervisory observations show that AI adoption across Swiss institutions is accelerating, while governance maturity is still evolving.
The regulatory landscape also has a cross-border dimension. The EU AI Act applies to organisations outside the EU whose AI systems or outputs are used within the European market. Swiss firms operating internationally must therefore align with EU requirements alongside Swiss supervisory expectations.
International frameworks such as ISO/IEC 42001 and the NIST AI Risk Management Framework provide structured guidance to operationalise AI governance. They support implementation and auditability but remain voluntary best-practice standards rather than binding legal requirements.
AI governance is therefore a Board-level priority, enabling organisations to scale innovation responsibly while ensuring compliance, transparency and sustainable business value.
Services
End-to-end advisory services for organisations navigating responsible AI, risk, compliance, and governance.
An integrated, end-to-end engagement supporting organisations in preparing for the EU AI Act while anticipating evolving regulatory and supervisory expectations in Switzerland.
This service translates regulatory requirements into concrete governance, control and documentation mechanisms. It is designed to move organisations from high-level principles to audit-ready implementation, aligned with the EU AI Act's risk-based approach and complementary frameworks such as the OECD AI Principles, the NIST AI RMF and ISO/IEC 42001.
The engagement covers the full AI lifecycle, from use-case identification and classification to conformity readiness, operational monitoring and ongoing compliance; a deliberately simplified classification sketch follows the list below.
→ Governance & Risk Foundation
→ Compliance & Conformity Readiness
→ Lifecycle Controls & Documentation
→ Transparency & Human Oversight
→ Monitoring, Logging & Incident Management
→ Third-Party & Vendor Governance
→ Remediation & Audit Readiness
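For illustration, the sketch below shows how a use-case inventory might record the Act's risk tiers during triage. The tier names mirror the Act's structure, but the triggering conditions and field names are simplified, hypothetical placeholders rather than a legal classification rule.

```python
# Illustrative sketch only: a use-case inventory that records the EU AI Act's
# risk tiers. Tier names mirror the Act's structure; the triggering conditions
# below are simplified placeholders, not a legal classification rule.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk system"
    TRANSPARENCY = "transparency obligations"
    MINIMAL = "minimal risk"


@dataclass
class UseCase:
    name: str
    uses_prohibited_practice: bool    # e.g. flagged in legal review
    affects_rights_or_access: bool    # e.g. credit scoring, hiring
    interacts_with_individuals: bool  # e.g. chatbot, generated content


def classify(uc: UseCase) -> RiskTier:
    """Map a use case to an indicative tier for triage and inventory purposes."""
    if uc.uses_prohibited_practice:
        return RiskTier.PROHIBITED
    if uc.affects_rights_or_access:
        return RiskTier.HIGH_RISK
    if uc.interacts_with_individuals:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL


print(classify(UseCase("CV screening", False, True, False)))  # RiskTier.HIGH_RISK
```

In practice, classification outcomes of this kind feed the governance and conformity workstreams above; the code only illustrates the shape of the decision, not its legal substance.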
We design and embed a structured AI governance framework that enables organisations to control, oversee and scale AI use in a consistent, accountable and risk-aware manner.
As AI adoption expands across business functions, governance challenges quickly move beyond technology. Organisations face fragmented ownership of AI systems, unclear accountability for AI-driven decisions, inconsistent application of controls, and limited visibility at senior management and Board level. These gaps increase exposure to regulatory, operational and reputational risks.
This service establishes a clear governance operating model that defines how AI is overseen, how decisions are made, and how risks are managed in practice. It translates high-level principles such as fairness, transparency, robustness, privacy and human oversight into concrete roles, decision structures, policies and enforceable rules.
The result is a governance framework that is not only aligned with regulatory expectations (EU AI Act) and recognised best practices (including ISO/IEC 42001 and NIST AI RMF), but also embedded into day-to-day operations and designed to be auditable.
→ Governance Structure & Operating Model
→ Accountability & Lifecycle Ownership
→ Policy & Internal Standards Framework
→ Risk Governance & Risk Appetite
→ Oversight, Reporting & Escalation
→ Control Integration & Operational Embedding
We provide a structured approach to identifying, assessing and managing AI-related risks across use cases, aligned with Enterprise Risk Management and Operational Risk practices.
As organisations adopt AI at scale, risk exposure becomes more complex and less visible. AI introduces new risk dimensions such as bias and discrimination, lack of transparency in decision-making, data quality and privacy issues, model instability, cybersecurity vulnerabilities and dependencies on third-party providers. These risks often cut across functions and are not fully captured by traditional risk frameworks.
This service establishes a consistent and auditable methodology to assess AI risks at use-case level, while consolidating them into an enterprise-wide view. It ensures that risks are clearly defined, owned, measured and monitored, and that mitigation actions are prioritised based on business impact and regulatory exposure; a minimal illustration of such a register follows the list below.
The approach is aligned with recognised frameworks (including ISO/IEC 42001 and NIST AI RMF) and supports regulatory expectations under the EU AI Act, particularly for high-risk AI systems.
→ AI Risk Taxonomy & Framework
→ Use-Case-Level Risk Assessment
→ Risk Ownership & Accountability
→ Centralised Risk Register & Aggregation
→ Mitigation, Controls & Monitoring
→ Reporting & Decision Support
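As a minimal sketch, assuming a simple likelihood-times-impact scoring scale, the example below shows how use-case-level assessments could roll up into a centralised register. Field names, the 1-5 scale and the escalation threshold are illustrative assumptions rather than a prescribed methodology.

```python
# Minimal sketch of a use-case-level risk record rolling up into a central
# register. The likelihood x impact scoring scale (1-5) and the threshold are
# illustrative assumptions, not a prescribed methodology.
from dataclasses import dataclass


@dataclass
class RiskEntry:
    use_case: str
    risk: str            # drawn from the agreed AI risk taxonomy
    owner: str           # a named accountable owner, not a team alias
    likelihood: int      # 1 (rare) .. 5 (almost certain)
    impact: int          # 1 (negligible) .. 5 (severe)
    mitigation: str = "none defined"

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


def enterprise_view(register: list[RiskEntry], threshold: int = 12) -> list[RiskEntry]:
    """Consolidate the register into the entries needing senior attention."""
    return sorted(
        (e for e in register if e.score >= threshold),
        key=lambda e: e.score,
        reverse=True,
    )


register = [
    RiskEntry("credit scoring", "bias and discrimination", "Head of Retail Risk",
              3, 5, "quarterly fairness testing"),
    RiskEntry("chatbot", "inaccurate output", "Head of Client Services", 4, 2),
]
for entry in enterprise_view(register):
    print(entry.use_case, entry.risk, entry.score)
```

The design point is one accountable owner and one comparable score per risk, so aggregation and Board reporting remain consistent across use cases.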
We define governance-driven control and evidence expectations across the full AI lifecycle, from use-case approval and design through deployment, monitoring, change and retirement.
As AI systems evolve, are retrained or rely on third-party components, organisations often struggle to demonstrate how decisions were made, what data and models were used, and who was accountable at each stage. In practice, this leads to weak traceability, inconsistent control application, insufficient documentation for high-risk use cases and gaps in evidence during audits, incidents or regulatory reviews.
This service establishes a clear and auditable control framework that defines what "good control" and "sufficient evidence" look like across the AI lifecycle. It enables organisations to ensure traceability, accountability and reproducibility of AI-driven outcomes, while aligning control expectations with recognised frameworks (including ISO/IEC 42001 and NIST AI RMF) and regulatory requirements such as the EU AI Act; a minimal logging sketch follows the list below.
The focus is not on implementing tools, but on defining control principles, minimum requirements and evidence standards that can be embedded into existing processes and systems.
→ Lifecycle Control Framework
→ Logging, Traceability & Reproducibility
→ Explainability & Transparency Controls
→ Evidence & Documentation Standards
→ Change Management & Model Governance
→ Incident Management & Operational Resilience
→ Control Integration & Internal Control Alignment
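As a minimal sketch, the example below shows the kind of evidence a decision record could capture at the moment an AI-driven decision is made. The field set is an illustrative assumption, not a mandated schema; the point is that model version, input data reference and human involvement are recorded at decision time rather than reconstructed afterwards.

```python
# Minimal sketch of an AI decision record supporting traceability and
# reproducibility. The field set is an illustrative assumption: model version,
# input data reference and human involvement are captured at decision time.
import hashlib
import json
from datetime import datetime, timezone


def log_decision(model_id: str, model_version: str,
                 inputs: dict, output: str, reviewer: str | None) -> dict:
    """Build an append-only style record of a single AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,          # exact version, incl. retrains
        "input_hash": hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()
        ).hexdigest(),                           # evidences what the model saw
        "output": output,
        "human_reviewer": reviewer,              # None when fully automated
    }
    print(json.dumps(record))                    # stand-in for an audit log sink
    return record


log_decision("credit-model", "2.3.1",
             {"applicant_id": "A-1042", "income": 58000},
             "declined", reviewer="j.smith")
```

Hashing the inputs rather than storing them in the log is one possible way to evidence exactly what the model received without duplicating sensitive data.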
We support governance-driven selection, due diligence and ongoing oversight of AI vendors, ensuring that third-party AI risks are identified, controlled and aligned with enterprise risk and outsourcing frameworks.
As organisations increasingly rely on external AI solutions, risk exposure extends beyond internal systems. Third-party AI introduces challenges related to transparency, explainability and accountability, but also creates structural dependencies on external providers, models and services. These dependencies can be difficult to assess, monitor and manage over time, particularly when critical processes rely on opaque or non-substitutable AI components.
This service establishes a structured approach to AI vendor risk management, covering the full lifecycle from selection and onboarding to ongoing monitoring and exit. It ensures that AI-related third-party risks are fully integrated into existing outsourcing, operational resilience and third-party risk management frameworks, while aligning with recognised standards (including ISO/IEC 42001) and regulatory expectations such as the EU AI Act; an illustrative dependency check follows the list below.
→ Vendor Selection & Assessment Framework
→ Due Diligence & Risk Analysis
→ Dependency, Concentration & Substitutability Risk
→ Contractual & Control Requirements
→ Ongoing Monitoring & Vendor Oversight
→ Operational Resilience & Exit Strategy
→ Documentation & Audit Readiness
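As a minimal sketch, the example below flags vendors that combine criticality with weak substitutability for exit-strategy work. The fields and the flagging rule are illustrative assumptions; a real assessment would weigh many more dimensions.

```python
# Minimal sketch of a vendor dependency check: each AI vendor is scored on
# criticality and substitutability, and non-substitutable vendors behind
# critical processes are flagged for exit-strategy work. Field names and the
# flagging rule are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AIVendor:
    name: str
    supports_critical_process: bool
    substitutable: bool    # a realistic alternative provider or in-house path
    last_review: str       # ISO date of the last due-diligence refresh


def flag_for_exit_planning(vendors: list[AIVendor]) -> list[str]:
    """Return vendors combining criticality with weak substitutability."""
    return [v.name for v in vendors
            if v.supports_critical_process and not v.substitutable]


vendors = [
    AIVendor("fraud-scoring SaaS", True, False, "2024-11-02"),
    AIVendor("document OCR API", True, True, "2025-01-15"),
]
print(flag_for_exit_planning(vendors))  # ['fraud-scoring SaaS']
```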
We assess governance, control and operational risks arising from AI agents and semi-autonomous systems that optimise, coordinate or execute business processes.
As AI systems evolve from supporting decisions to executing them, risk exposure fundamentally changes. AI agents can initiate actions, trigger workflows and interact with multiple systems with limited human intervention. This creates new challenges around accountability, control effectiveness and auditability, particularly when decisions are distributed across systems, external services or vendor components.
In this context, traditional control frameworks are often insufficient. Organisations face risks such as unclear decision ownership, unintended bypass of controls, breakdown of segregation of duties, reduced transparency in automated workflows and increased dependency on agent availability and external infrastructure.
This service defines how autonomy is governed in practice by establishing clear decision boundaries, human oversight mechanisms and control expectations. It ensures that AI agents operate within controlled, auditable and resilient environments, aligned with recognised frameworks (including ISO/IEC 42001 and NIST AI RMF) and consistent with regulatory expectations; an illustrative pre-execution gate is sketched after the list below.
→ Agent Autonomy & Decision Boundaries
→ Control Impact & Segregation of Duties
→ Oversight & Accountability Model
→ Dependency & Operational Resilience Risk
→ Monitoring, Intervention & Fallback Controls
→ Evidence, Traceability & Auditability
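As a minimal sketch, the gate below checks every proposed agent action against an explicit decision boundary before anything executes, escalating to a human approver above an agreed threshold. The action names, allow-list and threshold are illustrative assumptions.

```python
# Minimal sketch of a pre-execution gate for an AI agent: actions outside the
# agreed decision boundary are blocked or routed to a human approver before
# anything executes. Boundary values and action names are illustrative.
from dataclasses import dataclass


@dataclass
class Boundary:
    allowed_actions: frozenset[str]   # explicit allow-list, not a deny-list
    max_amount: float                 # above this, a human must approve


def gate(action: str, amount: float, boundary: Boundary) -> str:
    """Decide whether an agent action executes, escalates, or is blocked."""
    if action not in boundary.allowed_actions:
        return "blocked: outside decision boundary"
    if amount > boundary.max_amount:
        return "escalated: human approval required"
    return "executed: within boundary, logged for audit"


payments = Boundary(frozenset({"issue_refund", "send_notice"}), max_amount=500.0)
print(gate("issue_refund", 180.0, payments))   # executed
print(gate("issue_refund", 2500.0, payments))  # escalated
print(gate("close_account", 0.0, payments))    # blocked
```

An explicit allow-list rather than a deny-list is the conservative default: anything not deliberately granted to the agent stays with a human.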
We provide independent, risk-based assurance over AI governance, risk management and control frameworks, supporting organisations in demonstrating that AI systems are designed and operated in a controlled, transparent and compliant manner.
As AI adoption increases, organisations must be able to evidence that governance structures, controls and monitoring mechanisms are not only defined, but also effective in practice. This includes demonstrating that trustworthiness risks such as bias, lack of transparency, data and privacy issues, model instability, cybersecurity vulnerabilities and operational resilience are adequately managed.
This service delivers structured and auditable assurance aligned with internal audit methodologies and recognised frameworks (including ISO/IEC 42001 and NIST AI RMF). It supports organisations during internal audits, regulatory reviews and external assessments, providing an independent view on control effectiveness, documentation quality and overall AI risk management maturity; an illustrative control test is sketched after the list below.
→ Assurance Scope & Methodology
→ Control Testing & Effectiveness Assessment
→ Evidence Review & Documentation Assessment
→ Model Evaluation & Validation Governance
→ Audit & Regulatory Support
→ Reporting, Findings & Remediation
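As a minimal sketch, the test below samples decision records and checks whether a "human review of high-risk decisions" control actually operated. The record fields and the 95% tolerance are illustrative assumptions, loosely following the logging sketch above.

```python
# Minimal sketch of a control-effectiveness test: sample decision records and
# check whether the "human review of high-risk decisions" control operated.
# The record fields and the 95% tolerance are illustrative assumptions.
def test_human_oversight(records: list[dict], tolerance: float = 0.95) -> str:
    high_risk = [r for r in records if r["risk_tier"] == "high"]
    if not high_risk:
        return "not testable: no high-risk decisions in sample"
    reviewed = sum(1 for r in high_risk if r["human_reviewer"] is not None)
    rate = reviewed / len(high_risk)
    return f"{'effective' if rate >= tolerance else 'finding'}: {rate:.0%} reviewed"


sample = [
    {"risk_tier": "high", "human_reviewer": "j.smith"},
    {"risk_tier": "high", "human_reviewer": None},
    {"risk_tier": "low", "human_reviewer": None},
]
print(test_human_oversight(sample))  # finding: 50% reviewed
```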
About
Massimo Barison
I am a senior advisor specialising in AI governance, risk management and assurance, with a background in internal audit, operational risk and highly regulated environments.
Over the years, I have worked closely with organisations operating under strict governance and supervisory expectations, where accountability, transparency and defensible decision-making are critical.
My experience sits at the intersection of governance, regulation and real-world operations. I support organisations in translating high-level principles and emerging regulatory expectations into practical governance structures, risk frameworks and controls that can be clearly explained to Boards, tested by auditors and defended under scrutiny.
My focus is not on technology implementation or product development, but on ensuring that AI use is governed in a way that is consistent, auditable and aligned with international standards and supervisory expectations.
I work as an embedded, hands-on advisor within the organisation.
I typically collaborate closely with senior management, risk management, compliance and internal audit teams, adapting my approach to the organisation's context, maturity and risk profile. Engagements are pragmatic and proportionate, designed to strengthen existing governance, risk and control frameworks rather than creating parallel structures or unnecessary bureaucracy.
Legal interpretation and technical implementation remain with the appropriate specialists. My role is to ensure that AI governance, risk management and assurance are coherent across the organisation, clearly owned, and capable of standing up to external and internal audits.
Contact
Get in touch to discuss your context and priorities.
Get in touch