
UK regulators sum up industry views on AI in financial services

The Bank of England, FCA and PRA have issued a feedback statement in response to their joint discussion paper on artificial intelligence and machine learning. The latest update does not include policy proposals but instead summarises the responses to the discussion paper, which offer valuable insights into where the financial services industry would appreciate further guidance on AI adoption.

Last year the regulators published a discussion paper (DP5/22) inviting comments on the best approach to defining AI, the benefits, risks and harms of AI, and how regulation can support its safe and responsible adoption. Now, in a feedback statement (FS2/23), the regulators share the themes that emerged from the responses they received.

Regulatory definition and objectives

  • No need for a regulatory definition: Respondents agreed with the regulators that a strict regulatory definition of AI is unnecessary and unhelpful, not least because of the pace of technological development. Instead, a technology-neutral, principles-based or risk-based approach that focuses on the specific characteristics or risks of AI could better support its safe and responsible adoption in financial services.
  • Dynamic guidance for dynamic AI: Regulators should consider creating and maintaining ‘live’ guidance, including periodically updated best practices, which can adapt to the rapidly changing AI landscape.
  • Ongoing industry engagement: Respondents praised initiatives like the AI Public Private Forum and recommended using it as a template for future public-private collaboration.

Potential risks and benefits

  • Consumer outcomes: Respondents emphasised that regulation and supervision should prioritise consumer protection, particularly in terms of fairness and other ethical dimensions. Though AI could benefit consumers, it also creates risks such as bias, discrimination and a lack of explainability and transparency, which could lead to the exploitation of consumers. This is especially relevant given the regulators’ current focus on firms’ implementation of the Consumer Duty.
  • Governance structures: Respondents suggested that the most salient risk for firms is insufficient oversight. They felt that current firm governance structures, and regulatory frameworks such as the SMCR, could be adequate for addressing AI risks, but sought actionable guidance on how to interpret the ‘reasonable steps’ element of the SMCR in an AI context.
  • Third party risks: According to the feedback, third-party providers of AI software and tools do not always provide sufficient information to allow for effective oversight of their services, and third-party exposure could also increase systemic risk. Given this, and the increasing complexity of models, respondents asked for further guidance and noted the relevance of the incoming critical third party regime.
  • Model risk management: Respondents deemed the principles proposed in PRA SS1/23 sufficient for AI model risk but suggested certain areas could be strengthened or clarified to address AI-specific issues.
  • Joined-up approach: Respondents suggested firms adopt a joined-up approach across business units and functions, including closer collaboration between data management and model risk management teams.

Regulatory barriers and concerns 

  • Complex landscape: Respondents expressed concerns about the complexity and fragmentation of AI regulation, calling for better coordination and alignment between regulators, both domestic and international. Regulatory barriers around data protection and privacy could hinder the adoption of AI, and respondents argued for an industry-wide data quality standard to be developed. They also asked for more practical and actionable guidance, through illustrative case studies, on the more complex and confusing areas of AI regulation.
  • Data regulation concerns: The majority of respondents highlighted the fragmented nature of data regulation and emphasised the need for regulatory alignment to address data risks associated with fairness, bias and the management of protected characteristics. Further guidance was sought on interpreting the Equality Act 2010 and the FCA Consumer Duty vis-à-vis fairness in the context of AI models.
