How the EU Artificial Intelligence Act Impacts the Financial Market

By Bruna Veiga, Legal and Compliance Officer at Contea Capital


The new EU Artificial Intelligence Act has been published and entered into force on 1 August 2024, with its first obligations applying from February 2025. Additionally, the European Commission will publish guidelines by February 2026, according to Article 97. Consequently, all providers that place AI systems on the market in the European Union (EU), whether they are established in the EU or in a third country, must comply with these rules in a timely manner. The Act also applies to users of AI systems who are located within the EU.

The Act’s primary concern is to ensure that AI systems do not jeopardize users’ safety, security, or fundamental rights. This is why Articles 7 and 27 require providers to analyze a system’s purpose and evaluate how its use may affect fundamental rights over the long term, thereby creating a governance standard.

Specifically regarding the impact of this Act on the financial market, all companies in this sector must be aware of two key points: (1) whether the AI system they engage with is located within the EU, and (2) how its provider will comply with the Act. Failing to check these points may mean overlooking risks that the financial company is required to monitor.

In a hypothetical scenario, a financial services company may be exposed to risks if it relies on an AI system that provides inaccurate or misleading responses to queries, potentially swaying financial decisions, or that uses outdated templates failing to reflect current market conditions or regulatory requirements. The latter scenario is more common for policies and contracts that the back office may eventually require.

Furthermore, one of the worst-case scenarios would involve the user inputting personal data into the system, which could then be leaked to third parties. This situation would not comply with the General Data Protection Regulation (GDPR) either.

Regarding data protection, the system must be monitored under a post-market monitoring plan: pursuant to the Act, the AI system must be designed to comply with regulatory requirements once it is made available to the public, ensuring that its implementation aligns with the stipulated standards.

This plan, which forms part of the technical documentation, is the main safeguard against such occurrences; it must be provided in accordance with Articles 10, 72, and 98. Additionally, Article 19 establishes that financial companies subject to EU financial services regulation must maintain the logs generated automatically by their high-risk AI systems.

In other words, the Act takes a risk-based approach to AI systems, defining four tiers: unacceptable, high, limited, and minimal risk, with supervisory measures proportionate to the risks involved. Although the high-risk category is permitted, it is subject to stringent obligations and standards.
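The four-tier, proportionality-driven structure can be summarised in a short sketch; the wording of each obligation below is a paraphrase for illustration, not the Act's text.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers envisioned by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices (Article 5)
    HIGH = "high"                  # permitted, under stringent obligations
    LIMITED = "limited"            # lighter transparency duties
    MINIMAL = "minimal"            # essentially unregulated


# Illustrative, non-exhaustive mapping from tier to supervisory measures,
# reflecting the Act's proportionality principle.
SUPERVISORY_MEASURES = {
    RiskTier.UNACCEPTABLE: "banned from the EU market",
    RiskTier.HIGH: "conformity assessment, risk management, logging, human oversight",
    RiskTier.LIMITED: "transparency obligations towards users",
    RiskTier.MINIMAL: "voluntary codes of conduct",
}
```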

Under the Act, high-risk classification turns on a system’s intended purpose rather than on its technical complexity or performance: Annex III lists the relevant areas, which, for the financial sector, notably include the creditworthiness assessment and credit scoring of natural persons. Providers of such systems face obligations on data quality, robustness, and transparency that affect both the provider and the user.

The provider of a high-risk system must carry out testing to (1) ensure that the system does not replace human evaluation, as the human-oversight requirement of Article 14 aims to guarantee, and (2) identify all measures necessary to ensure consistent performance for the intended purpose. Risk management must follow the requirements outlined in Articles 9 and 60.

Moreover, the provider shall keep the documents listed in Article 18 available to the competent national authorities for a period of ten years after the high-risk AI system has been placed on the market. Beyond high-risk systems, providers of general-purpose AI models must also put in place a policy to comply with EU copyright law, pursuant to Article 4 of Directive (EU) 2019/790 and Article 53 of this Act.
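The ten-year retention window of Article 18 is simple enough to compute mechanically. A minimal sketch follows; the function names are mine, not the Act's.

```python
from datetime import date

RETENTION_YEARS = 10  # Article 18: documents kept for 10 years


def retention_deadline(placed_on_market: date) -> date:
    """Last date on which the Article 18 documents must still be
    available to the competent national authorities."""
    try:
        return placed_on_market.replace(year=placed_on_market.year + RETENTION_YEARS)
    except ValueError:
        # placed on market on 29 February, and the target year is not a leap year
        return placed_on_market.replace(
            year=placed_on_market.year + RETENTION_YEARS, day=28
        )


def must_retain(placed_on_market: date, today: date) -> bool:
    """Whether the documentation duty is still running on `today`."""
    return today <= retention_deadline(placed_on_market)
```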

Finally, it is important to note that the Act’s penalties range from restricting access to the market to administrative fines. Under Articles 5 and 99, these fines can reach 35 million euros or 7% of global annual turnover for prohibited practices, and 7.5 million euros or 1% of turnover for supplying incorrect information to authorities. The assessment will take into account the company’s size and the nature of the infringement.
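For undertakings, Article 99 caps the fine for a prohibited practice at the higher of the fixed amount and the percentage of total worldwide annual turnover, which makes the arithmetic easy to sketch (the special whichever-is-lower rule for SMEs is deliberately omitted here).

```python
def fine_cap(worldwide_turnover_eur: float) -> float:
    """Maximum administrative fine for a prohibited practice
    (Article 5 violation) under Article 99: EUR 35 million or 7% of
    total worldwide annual turnover, whichever is higher
    (the rule for undertakings that are not SMEs)."""
    return max(35_000_000, 0.07 * worldwide_turnover_eur)
```

For example, `fine_cap(1_000_000_000)` evaluates to 70 million euros, since 7% of a one-billion-euro turnover exceeds the fixed amount, whereas a 100-million-euro turnover stays at the 35-million-euro floor.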

On the one hand, it is important that AI systems comply with the Act through proper management of the service’s quality, following a strategy that covers all conformity assessment procedures and procedures for change management. The provider must also put in place processes for the examination, testing, and validation of all required procedures, to be carried out before, during, and after the development of the system, in accordance with Article 17.

According to Article 17, the provider is also required to manage all data involved in the system, and a policy shall establish how frequently these examinations must be carried out. The article covers “data acquisition, data collection, data analysis, data labelling, data storage, data filtration, data mining, data aggregation, data retention and any other operation”[1].
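A provider can check its own policy against the operations enumerated in Article 17. A minimal gap-check sketch follows; representing the policy as a plain set of strings is my assumption for illustration.

```python
# Data operations that the quality-management policy must cover,
# as enumerated in Article 17 of the Act.
ARTICLE_17_OPERATIONS = {
    "data acquisition", "data collection", "data analysis",
    "data labelling", "data storage", "data filtration",
    "data mining", "data aggregation", "data retention",
}


def policy_gaps(covered: set) -> set:
    """Return the Article 17 operations the policy fails to address."""
    return ARTICLE_17_OPERATIONS - covered
```

An empty result means the policy at least names every enumerated operation; anything returned is a gap to close before the next compliance review.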

This concern also relates to the General Data Protection Regulation (GDPR).

On the other hand, it is important that financial companies implement AI systems that are trustworthy and that comply with these governance standards. The company should follow up on reports and news related to these providers, avoiding risks such as those mentioned at the beginning of this paper. It is also highly recommended to establish a policy, and training for all employees, on how to use these systems safely.


[1] European Union. “Artificial Intelligence Act”. 2024. Available from https://artificialintelligenceact.eu/ai-act-explorer/. [Accessed 10/17/2024]

References

BSI. “Discover essential insights into the EU AI Act with our expertly crafted whitepaper”. 2024. Available from https://page.bsigroup.com/eu-ai-act-whitepaper?utm_source=google&utm_medium=cpc&utm_campaign=gl-rs-cross-lg-nss-ot-mpd-mp-euaiactwhitepaperpromotion-0924&gad_source=1&gclid=CjwKCAjw68K4BhAuEiwAylp3ksU6Bff9kt3VEAHnfatfacbKGl_tFBnsBm0R7mTOtx5jpTNfGDy2fxoCIXcQAvD_BwE. [Accessed 10/17/2024]

European Union. “Artificial Intelligence Act”. 2024. Available from https://artificialintelligenceact.eu/ai-act-explorer/. [Accessed 10/17/2024]

GOODMAN, Ellen P.; TRÉHU, Julia. “AI Audit-Washing and Accountability”. 2022. Available from https://www.gmfus.org/news/ai-audit-washing-and-accountability. [Accessed 10/17/2024]

KPMG. “Decoding the EU AI Act”. 2024. Available from https://assets.kpmg.com/content/dam/kpmgsites/xx/pdf/2024/02/decoding-the-eu-ai-act.pdf. [Accessed 10/17/2024]

MEZZANOTTE, Félix E. “EU Sustainable Finance: Complex Rules and Compliance Problems”. 2024. Available from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4344100. [Accessed 10/23/2024]