RegTech Intelligence

New AI rules: forcing board-level risk debate

In response to mounting global concerns about generative Artificial Intelligence (AI), legislators and stakeholders have been listening hard to technologists while finalizing tough new rules for digital non-financial risk.

Will AI be a wake-up call for firms to define ‘what good looks like’ for infrastructure standards before massive fines start to land?

To avoid a frustrating and costly three years, boards should connect the dots between regulatory requirements, technology and the banks’ supply chain.

FS Non-financial risk in a nutshell

As Forbes reported here in Q4 2022, a comprehensive risk management framework that crosses back-office silos is needed for this fast-growing class of regulation on ‘the how.’

In 2022 our RegRadar picked up well over 10,000 documents detailing Operational Resilience, Cyber and new technology risks, which we covered in our managing digital infrastructure risk report.

We concluded that by 2025, unprecedented transparency and assurance from third-party technology providers will be the norm.

This demands a fundamental rethink of what good FS infrastructure looks like, who delivers it, where it is based and how to control the risks. Boards will be forced to ask many more difficult questions about their cloud providers, the use of AI by their applications and how safe the supply chain is from cyberattacks.

Divergent approaches to AI, Cloud and PET will require individual risk ‘tribes’ to be joined up. While the EU has the most obligations and so is seemingly leading the charge, the UK remains close behind and collaboration with the US is highly likely.

New accountability regimes will be used to impose painful sanctions, which could include losing the right to work in the industry and jail time.

EU AI Act update

As previously highlighted in RegRadar, new AI statutes are already on their way to the rulebooks.

Key committees have overwhelmingly agreed the first draft negotiating mandate on the Artificial Intelligence Act (AI Act), and work on EU AI standards has already begun. More detailed guidance is expected by the end of 2023 or early 2024.

These rules look to define and restrict AI risks according to 21 high-risk AI system business purposes, including, in Financial Services: the evaluation of creditworthiness, employee monitoring and recruiting.

Critically, the Act imposes detailed obligations covering risk management, data quality, technical documentation, human oversight, transparency, robustness, accuracy and security.

Providers of high-risk AI systems must complete a conformity assessment before the AI systems are deployed. Providers must then monitor their systems and complete a detailed registration in a public EU database on high-risk AI systems, which the Commission will establish.
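As an illustration only, the provider duties described above can be read as a pre-deployment checklist. The sketch below models that reading in Python; all class, field and function names are hypothetical (the Act defines legal obligations, not an API), and the obligation list is taken verbatim from the article’s summary rather than the statute’s text.

```python
# Illustrative sketch: the AI Act's high-risk provider obligations as a
# pre-deployment checklist. Names are hypothetical, not from the Act.
from dataclasses import dataclass, field

# Obligations as summarized in the article: risk management, data quality,
# technical documentation, human oversight, transparency, robustness,
# accuracy and security.
OBLIGATIONS = [
    "risk_management",
    "data_quality",
    "technical_documentation",
    "human_oversight",
    "transparency",
    "robustness",
    "accuracy",
    "security",
]

@dataclass
class HighRiskAISystem:
    name: str
    purpose: str  # e.g. "evaluation of creditworthiness"
    evidence: set = field(default_factory=set)   # obligations with evidence
    conformity_assessed: bool = False
    registered_in_eu_database: bool = False

    def missing_obligations(self) -> list:
        """Obligations for which no evidence has been recorded yet."""
        return [o for o in OBLIGATIONS if o not in self.evidence]

    def ready_to_deploy(self) -> bool:
        # Per the article: conformity assessment and registration in the
        # public EU database must precede deployment.
        return (not self.missing_obligations()
                and self.conformity_assessed
                and self.registered_in_eu_database)

credit_scorer = HighRiskAISystem("credit-scorer",
                                 "evaluation of creditworthiness")
credit_scorer.evidence.update(OBLIGATIONS)
credit_scorer.conformity_assessed = True
credit_scorer.registered_in_eu_database = True
print(credit_scorer.ready_to_deploy())  # True
```

The point of the sketch is the ordering: evidence for every obligation, then conformity assessment and registration, and only then deployment — ongoing monitoring continues after that.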


US momentum

The US Senate Committee on the Judiciary held a 16 May hearing on Oversight of A.I.: Rules for Artificial Intelligence.

The hearing brought renewed interest in the future of AI regulation. In his testimony, OpenAI CEO Sam Altman emphasized that without proper regulation, AI could pose severe and often irreversible risks to society, including job displacement, privacy breaches and societal biases.

He advocated a global AI regulatory agency, safety feature labeling and an agreed audit framework.

The January 2023 NIST AI Risk Management Framework identifies six risk-mitigating factors, including validity, resilience and the management of harmful bias. The framework aims to apply the same cohesive approach NIST frameworks take in other technology areas, with well-defined risk management obligations for processes across the AI lifecycle. Time will tell whether NIST’s framework becomes a gold standard for AI implementation.

Italian action

The 2023 OpenAI data breach, in which payment-related information of 1.2% of ChatGPT Plus subscribers was exposed during a specific nine-hour window, prompted a swift ChatGPT ban in Italy. According to the Italian regulator, the breach served as a stark reminder of the risks posed by unregulated AI and the dire consequences that can result without proper safeguards in place.

On 11th April 2023, the Italian data protection agency issued an order imposing nine conditions on OpenAI, the company behind ChatGPT. These include ensuring human oversight of the AI system, implementing robust security protocols, disclosing relevant information about the AI system to users, and establishing clear channels for user feedback and redress. Compliance with these conditions is essential for OpenAI to continue offering the ChatGPT service to Italian users.

UK talking points

According to the UK Artificial Intelligence Regulation Impact Assessment, there are at least 18 key legal frameworks that indirectly control the development and use of AI in the UK (e.g., consumer rights law, data protection law and product safety law).

Recognizing the need to address the lack of clarity and establish a cohesive approach, while keen not to stifle innovation, the UK government has embarked on several initiatives to foster understanding and shape effective regulation. Key among these are TechSprints, PolicySprints and Sandboxes, which Jessica Rusu, Chief Data, Information and Intelligence Officer at the Financial Conduct Authority (FCA), emphasized in a recent speech.

To tackle the complexities surrounding AI regulation, the government is actively seeking insights from stakeholders through an ongoing consultation. This pro-innovation consultation closes on 21st June, serving as a crucial opportunity to gather, and “bolt together”, the five cross-sectoral principles that, according to the white paper, will underpin a non-statutory framework for AI regulation.


Safety in the technology age will require new standards and infrastructures faster than they could emerge from competitive markets.

Both the industry and the regulators will need to learn, fast and together, the best way to balance technology competition with the need for collective action.

As regulators take a hard look at the supply chain, a holistic and collaborative approach between firms and suppliers is required.

Risk communities should be uniting behind these initiatives. Compliance, operational risk, data and technology tribes often appear to be working in silos, and though some best practices have arisen, there is no body or unified approach to holistic controls today. Overall, this is a recipe for a very complex, frustrating and costly three years ahead.

Now more than ever, the board will need to spend time understanding the interdependencies between business models, regulatory requirements, technology and the banks’ supply chain.

The paper, along with a companion IT guide to Operational Resilience, is available free of charge to JWG registrants.

If you do not have a JWG account, register here.

To promote global dialogue on how to deliver regulatory change, JWG posts hundreds of focused articles a year to thousands of subscribers. Get involved and join the mailing list.
