AI Governance, Risk & Assurance
We help organisations implement responsible AI governance in a practical, proportionate and defensible way, ensuring AI use can be clearly explained, effectively controlled and confidently justified under scrutiny.
Get in Touch
Context
Artificial Intelligence is transforming how organisations operate, automate decisions and create value. Yet innovation is advancing faster than governance, creating regulatory, operational and ethical exposure when AI is deployed without structured oversight.
In Europe, the EU AI Act introduces a risk-based regulatory framework that classifies AI systems according to their level of risk. It distinguishes between prohibited practices, high-risk systems, and lower-risk applications subject mainly to transparency obligations. Higher-risk use cases, particularly those affecting individuals' rights or access to services, require stronger governance, oversight, and control mechanisms, while lower-risk uses face lighter regulatory expectations.
While Switzerland does not yet have a standalone AI law, its legislative approach is expected to progressively align with the EU AI Act. In the financial sector, FINMA already expects firms to integrate AI within existing governance, risk management, and internal control frameworks. Supervisory observations show that AI adoption across Swiss institutions is accelerating, while governance maturity is still evolving.
The regulatory landscape also has a cross-border dimension. The EU AI Act applies to organisations outside the EU whose AI systems or outputs are used within the European market. Swiss firms operating internationally must therefore align with EU requirements alongside Swiss supervisory expectations.
AI governance is therefore a Board-level priority, enabling organisations to scale innovation responsibly while ensuring compliance, transparency, and sustainable business value.
Services
End-to-end advisory services for organisations navigating responsible AI, risk, compliance, and governance.
We design and embed a complete AI governance framework, combining governance structures, policies and internal standards into a single, coherent foundation that promotes responsible AI use across the organisation.
As AI adoption grows, organisations face fundamental risks that are rarely technical: unclear accountability for AI-driven decisions, inconsistent oversight across teams, uncontrolled or inappropriate use of AI tools, lack of transparency when AI influences outcomes, and weak escalation when risks materialise. These risks are explicitly highlighted in the OECD AI Principles, the EU AI Act and the NIST AI Risk Management Framework.
This service defines who is responsible for AI, how decisions are made, which risks are acceptable, and how AI use is governed in daily operations. High-level principles such as fairness, transparency, privacy, safety, robustness and human oversight are translated into enforceable and auditable rules that ensure responsible AI deployment.
We provide structured identification, assessment and documentation of AI risks aligned with Enterprise Risk Management and Operational Risk practices, ensuring responsible AI outcomes.
This service addresses core AI risks such as unfair or biased outcomes, privacy and data-protection impacts, lack of explainability in sensitive decisions, unreliable behaviour in edge cases, cybersecurity vulnerabilities and operational resilience concerns. Risks are assessed consistently across use cases and consolidated into a single enterprise view.
An integrated, end-to-end engagement supporting organisations in preparing for the EU AI Act while also anticipating the evolution of AI regulation and supervisory expectations in Switzerland.
This package brings together governance, use-case definition, risk assessment, controls, vendor oversight and assurance into a single, coherent readiness programme. It is built around the EU AI Act's risk-based approach and the trustworthiness risks identified by OECD, NIST and ISO.
We define governance-driven control and evidence expectations across the full AI lifecycle, from use-case approval and design through deployment, monitoring, change and retirement.
As AI systems evolve, are retrained or are sourced from external providers, organisations often struggle to demonstrate what happened, why it happened and who was accountable. Typical weaknesses include insufficient logging and traceability, weak change control, limited explainability for sensitive decisions and gaps in evidence during incidents.
This service specifies what "good control" and "sufficient evidence" look like, aligned with OECD principles, EU AI Act expectations, NIST AI RMF trustworthiness characteristics and ISO/IEC 42001 lifecycle discipline, without implementing technology.
We support governance-driven selection, due diligence and ongoing oversight of AI vendors.
Third-party AI solutions often introduce not only transparency and accountability risks, but also structural dependency and operational resilience risks. Organisations may become reliant on opaque models, external decision logic or critical AI services that are difficult to substitute, monitor or control over time.
This service ensures AI vendor relationships are governed with a clear view on dependency, concentration risk, substitutability and resilience, and that AI-related third-party risks are integrated into existing outsourcing and third-party risk management frameworks.
We assess governance and control risks arising from AI agents and semi-autonomous systems that optimise, coordinate or execute business processes.
As autonomy increases, risks shift from decision support to decision execution. AI agents can create accountability gaps, unintended control bypasses, reduced auditability and operational resilience risks, particularly when critical workflows depend on agent availability, external services or vendor components.
This service focuses on preserving human oversight, control effectiveness and resilience by clearly defining decision boundaries, escalation mechanisms and fallback conditions.
We provide independent, risk-based assurance over AI governance and controls, including direct support during external audits, internal audits and compliance reviews.
The focus is on whether AI governance and controls effectively address trustworthiness risks such as bias and unfair outcomes, insufficient transparency and explainability, privacy impacts, unreliable performance, cybersecurity vulnerabilities and operational resilience.
About
Massimo Barison
I am a senior advisor specialising in AI governance, risk management and assurance, with a background in internal audit, operational risk and highly regulated environments.
Over the years, I have worked closely with organisations operating under strict governance and supervisory expectations, where accountability, transparency and defensible decision-making are critical.
My experience sits at the intersection of governance, regulation and real-world operations. I support organisations in translating high-level principles and emerging regulatory expectations into practical governance structures, risk frameworks and controls that can be clearly explained to Boards, tested by auditors and defended under scrutiny.
My focus is not on technology implementation or product development, but on ensuring that AI use is governed in a way that is consistent, auditable and aligned with international standards and supervisory expectations.
I work as an embedded, hands-on advisor within the organisation.
I typically collaborate closely with senior management, risk management, compliance and internal audit teams, adapting my approach to the organisation's context, maturity and risk profile. Engagements are pragmatic and proportionate, designed to strengthen existing governance, risk and control frameworks rather than creating parallel structures or unnecessary bureaucracy.
Legal interpretation and technical implementation remain with the appropriate specialists. My role is to ensure that AI governance, risk management and assurance are coherent across the organisation, clearly owned, and capable of standing up to external and internal audits.
Contact
Get in touch to discuss your context and priorities.