Navigating AI in Finance: Insights from FIFAI II
Introduction to FIFAI II
The Office of the Superintendent of Financial Institutions (OSFI), in collaboration with the Global Risk Institute (GRI), has convened the second Financial Industry Forum on Artificial Intelligence (FIFAI II). The gathering unites leaders from the financial sector, government, and academia to examine the risks, opportunities, and oversight considerations that artificial intelligence (AI) brings to Canada's financial landscape. Building on prior work, notably the AI and Quantum Questionnaire from December 2023 and the joint AI Risk Report with the Financial Consumer Agency of Canada (FCAC) released in September 2024, FIFAI II aims to deepen understanding and strengthen governance in an increasingly AI-driven economy.
Foundation of the Forum: Regulatory Milestones
The FIFAI II forum is a continuation of significant regulatory and policy advancements. Notably, previous recommendations from OSFI and FCAC regarding sound AI risk management laid the groundwork for more structured regulatory guidance. These earlier milestones, including the initial joint risk report, signal a collective effort to pave the way for effective AI integration into the financial services sector.
Thematic Workshops: Exploring Key Areas
FIFAI II is structured around four thematic workshops, each addressing a critical area of focus: security and cybersecurity, financial crime, consumer protection, and financial stability. Each workshop will produce an interim report, culminating in a comprehensive final report that consolidates the key findings and recommendations.
Gathering of Experts
The first workshop of FIFAI II, held on May 28, 2025, attracted 56 AI professionals and experts from Canada and around the globe. This diverse group included representatives from major banks, insurance firms, asset management companies, fintechs, and regulatory bodies. The event was co-sponsored by OSFI, the Department of Finance Canada, and GRI, highlighting the collaborative nature of the initiative.
Growing AI Adoption in Financial Institutions
The uptake of AI technologies among federally regulated financial institutions (FRFIs) is accelerating. In 2019, nearly 30% of FRFIs reported using AI in their operations; by 2023 that figure had risen to about 50%, and projections suggest it will exceed 70% by 2026. Institutions increasingly employ AI across a range of applications, including fraud detection, customer service platforms, document automation, underwriting, trading, and claims management.
The EDGE Framework for Responsible AI Use
Central to the discussions at FIFAI II is the introduction of a proposed framework for responsible AI use, encapsulated in four guiding principles referred to as "EDGE": Explainability, Data, Governance, and Ethics. These principles form the backbone of a robust approach to AI, focusing on:
- Explainability: Ensuring AI decisions are transparent and understandable to all stakeholders.
- Data: Utilizing reliable and well-governed data to construct trustworthy models.
- Governance: Establishing strong frameworks for overseeing AI implementation across organizations.
- Ethics: Prioritizing ethical considerations, including transparency, privacy, consent, and algorithmic bias mitigation.
Identifying Internal and External Risks
The forum highlighted several internal risks tied to AI deployment. These include challenges in data governance, opaque and complex models, potential legal and reputational risks, excessive dependency on third-party AI vendors, cybersecurity threats, and exposure to market and credit risks through automated decision-making.
Conversely, external risks involve the rising complexity of cyber threats amplified by generative AI technologies, which introduce issues like deepfakes and phishing. Moreover, competitive pressures may accelerate AI adoption, sometimes neglecting essential safeguards.
Cybersecurity Threats: A Growing Concern
Speakers at FIFAI II stressed the urgency of AI-enabled cyber threats, with some estimates suggesting these incidents could lead to economic losses of between 1% and 10% of global GDP. Alarmingly, attacks using deepfake technology have increased twenty-fold over the past three years. The forum's discussions on cybersecurity extended beyond purely digital risks, addressing broader implications for physical infrastructure, personnel safety, technology, and national security.
Participants’ Insights on Security Risks
A participant survey revealed the top hurdles in managing AI security risks: 60% of respondents cited the rapid pace of AI innovation as a critical challenge, 56% expressed concerns about vetting third-party vendors, and 49% pointed to uncertainty around governance structures as another significant barrier.
The Need for Adaptable Governance Structures
To tackle these complex challenges, participants underscored the need for robust and adaptable governance frameworks. Existing risk management guidance, such as OSFI's draft Guideline E-23, should be extended to encompass AI-specific considerations. Institutions must adopt a lifecycle-based methodology for AI risk management, emphasizing thorough model validation, human oversight, and clear communication with consumers regarding data use and decision-making processes.
Proactive Measures for Financial Institutions
Key strategies discussed emphasized the importance of monitoring third-party vendors and investing in employee training on AI ethics and data literacy. Such measures are critical for building a solid foundation for responsible AI deployment in financial systems.
OSFI’s Regulatory Evolution
Looking ahead, OSFI is progressing toward a more formalized regulatory framework for AI. The revised draft Guideline E-23, which explicitly addresses AI and machine learning risks, is anticipated by September 11, 2025. OSFI is working with other federal bodies, including the FCAC and Innovation, Science and Economic Development Canada (ISED), to align financial sector oversight with broader federal initiatives, such as the Artificial Intelligence and Data Act (AIDA).
The Call for Caution and Accountability
As the financial sector increasingly integrates AI technologies, FIFAI II underscores the critical need for institutions to approach AI deployment with caution and accountability. The EDGE principles provide a structured framework to help institutions manage risks effectively while unlocking AI's transformative potential.
A Stimulus for Proactive AI Strategies
OSFI urges all regulated entities to take a proactive stance, ensuring that their AI strategies balance innovation with responsible governance. The FIFAI II discussions make clear that the supervisory focus on AI is both a continuation and an acceleration of oversight in this arena.
Monitoring Ongoing Developments
The regulatory landscape surrounding AI is in constant evolution, and we will continue to monitor and report on these developments as they unfold in anticipation of key milestones in Guideline E-23. Financial institutions must remain vigilant and responsive to any changes, ensuring they are well-prepared for the implications of AI in financial governance.
Conclusion: The Future of AI in Finance
As AI becomes increasingly embedded in the financial sector, the insights gathered from FIFAI II emphasize that organizations must remain vigilant, implementing robust frameworks to manage the associated risks. The EDGE principles not only help institutions navigate the complexities of AI but also allow them to harness its transformative potential safely. Going forward, the emphasis on responsible AI governance will be pivotal in shaping the future landscape of Canada's financial sector.