The Financial Conduct Authority is stepping up its work on AI in financial services. The results of its recent AI Sprint show how important it is for financial firms to protect consumers as they integrate AI into their business. The FCA has also announced that it will launch a service for regulated firms to test AI models.
AI Sprint discussions
The FCA has published the results of its AI Sprint held earlier in the year. Four common themes from participant discussions were:
- Regulatory clarity: Areas where the FCA could explain how existing rules apply to AI.
- Trust and risk awareness: Consumer trust is “vital” to realise the full benefits of AI.
- Collaboration and coordination: Public and private partnership is needed.
- Safe AI innovation through sandboxing: A safe space to test responsible innovation.
Discussions on the regulatory framework for financial services covered, among other things:
- Governance: Accountability for AI within firms, including under the Senior Managers and Certification Regime (SMCR).
- Explainability and transparency: Telling customers about how AI models are used.
- Fairness: Managing risk of bias and impact on vulnerable customers.
- Market risks: Including how they are covered by principles and outcomes-based regulation.
- Outsourcing and critical third parties: How existing regulation can manage third party risks.
The AI Sprint will feed into the FCA’s future work on AI in financial services. The topics listed above hint at the areas the FCA will engage with firms on in its supervisory work. Firms should be particularly mindful of ensuring ongoing compliance with the Consumer Duty as they roll out AI that could affect outcomes for retail consumers.
Automation in the customer journey presents risk if it removes the nuance that interaction with human customer support can offer in circumstances where this would be beneficial. This is a particular risk for vulnerable customers, where the FCA has identified real benefits from more tailored responses. Adding AI increases these risks, given the potential to code in bias or assumptions that disproportionately and negatively impact a particular subset of your target market.
That said, there are also real benefits to consumers from using AI. The FCA's recent feedback on firms' treatment of vulnerable customers highlighted as good practice the use of an AI tool to review call recordings and check whether customers mentioned potential characteristics of vulnerability. Firms should think carefully about the intended (and unintended) consequences of any AI tool they are considering introducing into their business.
AI Live Testing
Insights from the AI Sprint have led to the FCA developing a testing programme as part of its AI Lab. From September 2025, a live testing service will allow firms to work with the FCA while they check that their new AI tools are ready to be used. The service is aimed at firms which are ready to deploy consumer or market-facing AI models.
The FCA has launched an Engagement Paper on its proposed approach to AI Live Testing. The FCA wants to hear about the primary blockers to live market deployment of AI models and whether AI Live Testing would address these challenges. It invites feedback by 10 June 2025.
AI Live Testing is initially expected to run for around 12 months. A report in spring 2026 will evaluate its impact. If successful, it may become a permanent service.