Press Article - Initially published on AGEFI

Banking in Luxembourg and AI: A Digital Revolution

  • July 15, 2024

Setting the scene

On 12 July 2024, the Artificial Intelligence (AI) Act was published in the Official Journal of the European Union (EU). It enters into force on the twentieth day following publication, on 1 August 2024, marking a historic step in artificial intelligence governance.

This landmark legislation introduces a 'risk-based' approach, imposing stricter rules on AI systems that pose greater potential harm to society. By aiming to harmonise AI regulations across Europe, the Act could set a new global standard. Its chief goal is to promote the development and use of safe, reliable AI within the EU’s single market, ensuring these technologies respect fundamental rights while boosting investment and innovation.

Generative AI tools are being hailed as a major technological advancement that promises to revolutionise the financial sector. However, their impact will largely depend on how data is managed and how models are built and used by financial institutions. If AI becomes widespread in finance and is controlled by a few providers, it could increase operational risks, such as cyber-attacks, and worsen market concentration and 'too-big-to-fail' problems. AI might also encourage herd behaviour and market correlation. If current regulations cannot cope with these challenges, specific measures may be needed.

Challenges and opportunities

Luxembourg has a robust digital infrastructure, a clear vision for applied AI research, and continuous investment in AI-related projects. These factors give Luxembourg a good opportunity to benefit from the AI Act. Having said that, the Act was developed with almost no consideration of the banking business model; rather, it was designed to apply to a wide range of different types of businesses. This means that many banking tools, models and processes may fall under the “high risk” category, making them subject to more stringent rules and standards (see the analysis in one of our previous blogs: Artificial Intelligence Act: Oops. EU did it again).

The AI Act uses a risk-based approach that enacts different rules for different risk levels

As one of Europe's top financial hubs, Luxembourg has a great opportunity to use AI not only to streamline banking processes and strengthen risk management, but also to improve financial results. However, to adopt AI responsibly, financial institutions will have to navigate a complicated legal and regulatory environment, parts of which we discuss in this article.

A recent study found that 64% of businesses expect AI to enhance their efficiency, though 40% of business owners worry about relying too heavily on technology. Globally, generative AI could add between USD 2.6 trillion and USD 4.4 trillion in annual economic value across various industries. The banking sector is poised to benefit significantly, with an estimated annual boost of USD 200 billion to USD 340 billion, largely from increased productivity, which could account for 9-15% of operating profits.

One of the key challenges for banks is to ensure that their AI and ML (Machine Learning) systems are aligned with their business models and objectives and that they do not compromise their core values and principles.

The AI Act categorises AI systems used for credit scoring and creditworthiness assessment as high risk. This means that Luxembourg financial institutions using these systems will have to meet stringent requirements on data quality, human oversight, transparency and risk management.

Another key challenge for banks is to ensure that their AI and ML systems are ethical, responsible and trustworthy, and that they respect the human rights and dignity of the people affected by them. This requires banks to adopt ethical principles and frameworks to guide AI and ML development and deployment, and to involve diverse and inclusive teams and perspectives in the process. It also requires banks to monitor and mitigate the biases, errors or harms that their AI and ML systems may cause or amplify, such as excluding, discriminating against or disadvantaging certain groups or individuals. For example, AI and ML systems used for lending must not replicate or worsen existing biases and disparities in access to credit and financial inclusion, especially for marginalised communities; they must be fair, explainable and accountable, and must not violate the privacy or autonomy of customers.
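
As a purely illustrative sketch of what such bias monitoring could look like in practice, a bank might track a simple disparity measure, such as the gap in approval rates between applicant groups, and escalate to human review when it exceeds a threshold. The group labels, the threshold and the choice of metric below are assumptions made for illustration; they are not requirements of the AI Act or of any specific framework.

# Minimal sketch: monitoring the approval-rate gap in a lending model's decisions.
# The metric (demographic parity difference), group labels and the 10% threshold
# are illustrative assumptions; real monitoring would use the bank's own fairness
# criteria and additional metrics (e.g. equalised odds, calibration by group).
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def approval_rate_gap(decisions):
    """Largest difference in approval rates between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy example: decisions tagged with a hypothetical applicant group.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

gap = approval_rate_gap(sample)
print(f"Approval-rate gap between groups: {gap:.2f}")
if gap > 0.10:  # illustrative alert threshold
    print("Disparity exceeds threshold: flag for human review.")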

The Basel Committee will soon release a more comprehensive report on how finance is becoming digital and what that means for regulation and supervision.

AI Governance

AI Governance is the set of processes, policies, and tools that bring together diverse stakeholders across data science, engineering, compliance, legal, and business teams to ensure that AI systems are built, deployed, used, and managed to maximise benefits and prevent harm.

AI can bring many benefits if used responsibly, and a good governance framework helps AI applications and systems achieve their best results. In the past, governance functions were designed for static processes, but a key feature of AI systems is that they change and adjust over time, so AI governance needs to do the same. To implement such a system, organisations need a coherent strategy. An AI strategy is a plan for how AI can help an organisation achieve its wider goals. It also acts as a guide for the technology infrastructure, making sure the business has the necessary hardware, software and other resources.

Governance is mainly about following regulatory requirements and company principles. But for AI, it is more than that; it is the essential function that allows a company to create AI solutions that are ethical, and that customers and employees can rely on.

Selected topics that should be considered when implementing AI/ML tools

Privacy and data protection

The AI Act states in Recital 10 that it does not intend to interfere with the application of the EU General Data Protection Regulation (GDPR) and the ePrivacy Directive, including the roles and responsibilities of the relevant authorities that supervise and enforce those laws.

Any processing of personal data, such as storing, transferring or copying it, requires a legal basis under Article 6 GDPR. The most prominent legal basis is consent (Article 6(1)(a)), under which processing is lawful only if the data subject has given consent to the processing of his or her personal data for one or more specific purposes. Companies that do not have an establishment in the EU must still comply with the GDPR if they offer their services in the EU, which is the case for many major LLM (Large Language Model) products. To complicate matters further, far more personal data than one might expect may be specially protected as sensitive data under Article 9 GDPR. Additionally, if an LLM itself is deemed to be personal data, data subjects could, in theory, exercise their right to erasure under Article 17 GDPR. This right, also known as the 'right to be forgotten', enables individuals to ask for the removal of their personal data under specific conditions.

An additional challenge concerns data privacy, notably whether publicly available systems respect the privacy of user input data (which may, for instance, include confidential firm-specific information) and whether there is a risk of data leakage.

Article 27 of the AI Act mandates a fundamental rights impact assessment (FRIA) for high-risk AI deployed by public bodies or by private entities that provide public services, such as insurance companies and banks. Where these deployers have already carried out a data protection impact assessment (DPIA) under the GDPR, the FRIA will complement it.

DORA

The Digital Operational Resilience Act (DORA) and the Artificial Intelligence (AI) Act are both relevant to the use of new technologies in the financial sector. DORA, which applies exclusively to the financial sector, does not explicitly refer to the AI Act (and vice versa) or to AI systems, but it deliberately keeps a broad definition of what should be considered an “Information and Communication Technology (ICT) asset” or an “ICT service”, and leaves no room for taking AI systems out of its scope. DORA sets out governance rules for the use of such technologies and related services, from ICT risk management to the management of ICT third-party service providers. The AI Act adds a further set of rules that should not be seen as replacing DORA.

On the contrary, whenever an actor under the AI Act (such as a “provider” or a “deployer” of an AI system) is a financial institution, the AI Act makes it clear that sectoral rules also apply from a governance point of view to the topics common to both texts (e.g., certain documentation-keeping obligations or the monitoring of the system). The AI Act therefore takes a different approach from DORA, without overriding it, and adds a new layer of rules on top of DORA when financial entities use AI systems.

Liability

Liability is a legal concept that determines who has to compensate those harmed by a wrongful action or a failure to act, and in what situations. Liability can stem from contractual duties, tort law, product liability law or other specific rules. 33% of firms [1] view “liability for damage” as the top external obstacle to AI adoption, especially for LLMs, rivalled only by the “need for new laws”, cited by 29% of companies.

The AI Act does not address liability for harm caused by AI; it only addresses breaches of its own rules on the safety of the products and services provided, with national authorities imposing administrative sanctions. To ensure that victims are compensated [2] and that prevention costs are reduced, a new and effective liability regime may be needed. Two recent EU regulatory proposals on AI liability could have an impact on LLMs: one revises the current Product Liability Directive (PLD) for defective products, and the other establishes procedures for fault-based liability for AI-related damage through the Artificial Intelligence Liability Directive (AILD). The AILD, however, is currently stalled in the legislative process.

One advantage is that Luxembourg civil law principles are flexible enough to deal with varied and complicated situations. In addition, contract and tort law can provide compensation and redress for people who suffer harm or loss from AI systems.

ESG

Existing AI regulations, both in the EU and elsewhere, seek to promote AI that is trustworthy (e.g., the AI Act) and accountable (e.g., the AI Liability proposal). However, what is lacking is a strong regulatory discussion and plan to ensure that AI, and ICT in general, is environmentally [3] friendly. The environmental impact of large language models (LLMs) is growing from month to month. Assuming static usage of 100 million weekly active users (excluding OpenAI's ChatGPT, which has 180 million active users worldwide) and just five queries per user per week, the total energy consumption for operating an LLM like GPT-3.5 is staggering: around 44,200 MWh per year. To put this in perspective, with an emission intensity of 0.4 kg of CO2 per kWh, this level of energy consumption emits as much CO2 as making 56,000 round trips in a petrol-powered car from Luxembourg to Cannes, France (a road distance of 965 km one way). ICT produces up to 3.9% of global greenhouse gas (GHG) emissions, while global air travel accounts for about 2.5%. Another example of the impact of technology on the environment is Google's [4] PaLM, which uses an enormous amount of computing power: if a drop of water were used for every floating-point operation (FLOP) performed during training, it would fill the Pacific Ocean.
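
The back-of-the-envelope arithmetic behind this comparison can be checked in a few lines of Python. All input values below are the figures quoted in the paragraph above; the per-query energy use and the implied per-kilometre car emission factor are derived from them and are shown purely for illustration, not as measured values.

# Rough check of the LLM energy/CO2 comparison above.
# Inputs are the figures quoted in the text; derived values are illustrative only.
WEEKLY_USERS = 100_000_000            # assumed weekly active users
QUERIES_PER_USER_PER_WEEK = 5
ANNUAL_ENERGY_MWH = 44_200            # stated annual energy use, GPT-3.5-class model
EMISSION_INTENSITY = 0.4              # kg CO2 per kWh
ONE_WAY_KM = 965                      # Luxembourg to Cannes by road
ROUND_TRIPS = 56_000                  # stated equivalent number of car round trips

queries_per_year = WEEKLY_USERS * QUERIES_PER_USER_PER_WEEK * 52
energy_per_query_wh = ANNUAL_ENERGY_MWH * 1_000_000 / queries_per_year

total_co2_kg = ANNUAL_ENERGY_MWH * 1_000 * EMISSION_INTENSITY   # MWh -> kWh -> kg CO2
co2_per_trip_kg = total_co2_kg / ROUND_TRIPS
implied_car_g_per_km = co2_per_trip_kg * 1_000 / (2 * ONE_WAY_KM)

print(f"Total CO2: {total_co2_kg / 1_000:,.0f} tonnes per year")       # ~17,680 t
print(f"Energy per query: {energy_per_query_wh:.1f} Wh")               # ~1.7 Wh
print(f"Implied car emissions: {implied_car_g_per_km:.0f} g CO2/km")   # ~164 g/km

The implied factor of roughly 165 g of CO2 per kilometre is in line with a typical petrol car, which is what makes the comparison above internally consistent.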

We believe that as AI's environmental impact grows, the EU should revise its current environmental rules to better address these new technologies. At present, EU environmental law does not cover the GHG emissions of AI and the broader ICT infrastructure. Reporting GHG emissions is one of the requirements of the Corporate Sustainability Reporting Directive (CSRD) for in-scope banks, and ICT emissions should be included so that reported figures reflect actual GHG emissions, not only those from air travel and transportation, electricity production and so on.

Notes:

  1. European Commission, Directorate-General for Communications Networks, Content and Technology, European enterprise survey on the use of technologies based on artificial intelligence: final report, Publications Office, 2020, pp. 71-72. The survey refers to the broader category of natural language processing models.
  2. Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato and Luciano Floridi, "Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity", working paper (version of 14 January 2024), p. 2.
  3. Philipp Hacker, "Sustainable AI Regulation", working paper (version of 8 September 2023), p. 1.
  4. Mustafa Suleyman with Michael Bhaskar, The Coming Wave, New York, 2023, p. 66.

Summary

Luxembourg aims to be a centre for AI innovation, especially in the financial sector. The country needs to make sure its legal framework does more than merely oversee technological progress; it must also support it. AI is too important not to regulate, and too important not to regulate well. The AI Act, a regulation that applies directly in all EU member states, aims to establish and align rules on AI. Unlike the EU General Data Protection Regulation, which was designed to safeguard people's privacy and data protection rights, the first proposal for an AI Act came from the perspective of product safety, concentrating on ensuring that AI products and services placed on the EU market are safe.

As part of their usual oversight, banks should be prepared for the potential risks of applying AI and ML in their activities. Banks need to demonstrate that their AI and ML systems are transparent, accurate and fair, and that they do not introduce or amplify biases, errors or harms. New partnerships and new skills will be required to implement and manage AI initiatives; moreover, the involvement of senior management is essential. Under the AI Act, high-risk AI systems will need strong governance, risk management and internal controls. This could challenge Luxembourg's current AI governance frameworks, such as the CSSF's recommendations on AI.

Contact us

Ryan Davis

Advisory Partner, Risk & Compliance Advisory Services, PwC Luxembourg

Tel: +352 621 333 580

Tomasz Wolowski

Senior Manager, Regulatory & Compliance Advisory Services, PwC Luxembourg

Tel: +352 621 332 243
