November 6, 2023

What do the FCA, PRA & the Bank of England think about AI?

We provide an analysis of the new Bank of England, PRA and FCA Feedback Statement on the topic of AI and Machine Learning.

In a recent development, the Bank of England, along with the PRA and FCA, has published a Feedback Statement on the topic of Artificial Intelligence (AI) and Machine Learning. This statement summarises responses to their earlier Discussion Paper from October 2022, providing insights into the potential future direction of AI-related financial services regulation in the UK. It focuses on key themes, including the regulatory approach, potential benefits and risks of AI, and the need for improved coordination and alignment in the regulatory landscape.

For firms looking to deploy AI, this regulatory development is highly relevant. It underscores the growing importance of AI in financial services and the need for effective management and oversight of AI-powered processes. The shift towards a technology-neutral, outcomes-based approach highlights the importance of proper evaluation frameworks and vetting of technologies to ensure they remain compliant with evolving regulatory standards. The regulations neither prohibit nor mandate any specific technologies, giving firms considerable flexibility to choose whatever technology they believe could benefit their organisation and their customers. The approach in the Discussion Paper is noticeably more hands-off than other regulatory efforts, such as the proposed EU AI Act, which includes a definition of "Artificial Intelligence"; the UK, by contrast, avoids drawing such a clear-cut distinction.

Still, there is clearly a strong emphasis on consumer protection and mitigating potential AI-related risks, such as bias and discrimination. This is a positive step forward that aligns with the rigorous testing and evaluation processes we put all our models through at CourtCorrect. These risks are notably higher for consumers with protected characteristics, opening up a possible intersection with the Consumer Duty. The Discussion Paper identified the root of these concerns as primarily stemming from inadequate or biased data, underscoring the need for representative, diverse, and unbiased data sets in AI development.

When it comes to assessing the advantages and pitfalls of AI in the financial sector, opinions vary, revealing no clear consensus on which metrics are most indicative of AI's impact. Nevertheless, two critical types of metrics emerged as widely acknowledged: those evaluating consumer outcomes and those assessing data integrity and model performance. These insights are shaping the conversation on how best to navigate AI's integration into financial services while safeguarding consumer interests. As AI regulations evolve, CourtCorrect is well-positioned to adapt and continue delivering top-notch AI-driven solutions for complaints management in a compliant and ethical manner.

At CourtCorrect, we have built rigorous evaluation frameworks into all our models before deploying them to customers. All frameworks follow the CourtCorrect Ethical AI Use Standard. Notably, every single model that is live on the CourtCorrect platform has been evaluated manually by subject matter experts on hundreds of example cases and automatically on hundreds of thousands of training documents. Not only does this framework boost the accuracy of our models; it also ensures edge cases are covered and responses are reliable and safe.

We will continue to monitor this space as it evolves.