RegTech Intelligence


UK AI forum weighs governance frameworks, regulators consider future engagement

By Rachel Wolcott, Regulatory Intelligence

The Artificial Intelligence Public-Private Forum (AIPPF), convened by the Bank of England and the Financial Conduct Authority (FCA), this month discussed potential accountability and governance frameworks that could form future guidance for the use of AI in financial services.

Senior management accountability and the creation of a chief AI officer role were contemplated as oversight options, according to minutes published last week. Dave Ramsden, the Bank’s deputy governor for markets and banking and one of the forum’s co-chairs, observed a shortage of technology expertise on most firms’ boards, both individually and collectively.

Artificial intelligence might “fall into a category of new issues where governance is a key regulatory principle”, said Jessica Rusu, the FCA’s chief data, information and intelligence officer and the forum’s other co-chair.

It is important for regulators to learn from industry practice and existing AI governance approaches, and to jointly explore how governance can contribute to the ethical, safe, robust and resilient use of AI in financial services, she said.

Broader regulation likely

Broader AI regulation is on the UK government’s agenda. Its new National AI Strategy emphasises governance as one of three pillars for nurturing the use of the technology in the UK. The government’s Office for Artificial Intelligence will develop a national position on governing and regulating AI. A white paper to be published in early 2022 will set out the government’s position on the potential risks and harms posed by AI technologies, and its proposals to address them.

The AIPPF was established last year to explore areas where principles, guidance or good practice examples could support safe AI adoption. The Bank and the FCA, however, have yet to commit to anything apart from a final report on the forum’s findings, according to its terms of reference.

“The Bank and FCA will be thinking about what future engagement with the financial industry more broadly could look like in light of the lessons learned through the AIPPF. This includes how to take forward the numerous findings and recommendations that have come out of the AIPPF and will be included in the final report,” the most recent minutes said.

International discussion underway

UK efforts come as international regulatory and legal discussion about AI and its use in financial services has advanced. The European Union published its draft AI Act in April. U.S. regulators ran a consultation on the use of AI in financial services between March and July, seeking comments to help them gain a better understanding of the use of AI, including machine learning, by financial institutions; devise appropriate governance, risk management and controls over AI; and address challenges in developing, adopting and managing AI. The Cyberspace Administration of China has also drafted regulations governing how companies use algorithms to provide services to consumers.

“This is a very busy policy arena, with regulators in Europe, the United States, China and elsewhere also defining their AI guidelines. Our analysis of the industry’s global infrastructure control framework is showing there are many interdependencies and overlaps between AI and other initiatives like data privacy, operational resilience, cyber and, of course, accountability rules like SM&CR. There will be massive implications across the supply chain and big legal bills from this next generation of RegTech controls,” said PJ Di Giammarino, chief executive at JWG, a regulatory think tank.

UK’s regulatory approach to AI so far

The Bank and the FCA published a high-level research paper on machine learning in UK financial services in 2019. The UK Information Commissioner’s Office (ICO) published its AI guidance in 2019. A consultation by the UK government’s Department for Digital, Culture, Media and Sport on the UK General Data Protection Regulation (UK GDPR) also contemplates AI data privacy issues that will shape the financial services sector’s use of AI.


This article was published by Thomson Reuters Accelus Regulatory Intelligence on 20-Oct-2021

For more analysis of global AI regulatory efforts in financial services, see JWG’s analysis here


Additional resources:

  • Access the JWG Digital Integrity LinkedIn group here or the Surveillance LinkedIn group here
  • To create your own JWG RegTech Intelligence Hub, sign up here
  • To register for JWG’s 16/17 November 2021 conference, see here

To promote global dialogue on how to deliver regulatory change, JWG posts hundreds of focused articles a year to thousands of subscribers. Get involved and join the mailing list.
