At the end of last year, the rapid development of generative artificial intelligence shook an industry that has long relied on AI tools: banking. In search of greater cost efficiency and higher investment returns, financial institutions have been experimenting with machine learning algorithms for over a decade. Algorithmic trading systems, high-frequency trading, and innovative robo-advisors have become an integral part of financial reality.
The recent boom in generative AI has sparked heated discussions and put artificial intelligence at the forefront of board strategies across the financial sector. Introducing AI into investment services for retail clients could be a game-changer. Imagine an app that automatically adjusts your investment portfolio by analysing market data in real time, or a chatbot that provides personalised investment advice based on your goals and transaction history. This could make investment management faster, easier, and potentially more profitable. It sounds promising, but it requires a thorough analysis of the risks that AI systems can generate, both for individual clients and for the stability of the entire financial system.
What Do Financial Supervisors Say?
Aware of the risks associated with retail investments, the European Securities and Markets Authority (ESMA) published a key statement on 30 May 2024, providing guidelines for investment firms intending to implement AI-based systems in retail customer service.
ESMA recognises the wide possibilities of AI application in the financial services sector, while noting that each of these applications carries specific risks that firms must consider. In its statement, ESMA points to several main areas of AI application:
- Customer service and support: AI-powered chatbots and virtual assistants have the potential to revolutionise customer interactions, offering quick access to sought-after information and efficient query management.
- Investment advice and portfolio management: Known as robo-advisors, these systems gained popularity in 2017-2018. ESMA emphasises AI’s ability to process huge amounts of data and provide personalised recommendations tailored to individual client needs and preferences.
- Back-office operations: ESMA notes that AI systems can significantly support compliance teams by automating the analysis of financial regulations and preparing compliance reports. AI can also support risk management by effectively assessing and monitoring risks associated with client assets and portfolios. Moreover, AI has enormous potential in fraud detection, identifying unusual patterns in transactions and communications that human staff might miss.
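The fraud-detection use case ESMA mentions often starts with something as simple as statistical anomaly detection: flagging transactions that deviate sharply from a client's usual behaviour. The sketch below is a deliberately minimal illustration of that idea using a z-score rule; real systems use far richer features and models, and the function name, data, and threshold here are invented for illustration.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=3.0):
    """Flag transaction amounts whose z-score (distance from the mean
    in standard deviations) exceeds `threshold`."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    if sigma == 0:
        return []  # all amounts identical: nothing stands out
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# A stream of routine payments with one conspicuous outlier
transactions = [42.0, 51.5, 48.0, 39.9, 55.2, 47.1, 5000.0, 50.3]
print(flag_anomalies(transactions, threshold=2.0))  # → [5000.0]
```

A flagged transaction would then be routed to a human analyst rather than blocked automatically, which is consistent with the human-oversight expectations that run through both the ESMA statement and the AI Act.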
AI Act – The World’s First Regulations. Are They Comprehensive?
The Artificial Intelligence Act (AI Act) is an ambitious and groundbreaking European Union initiative aimed at regulating the dynamically developing field of artificial intelligence. Proposed in April 2021 by the European Commission, the AI Act is the first legal framework of its kind, intended to ensure that AI serves people rather than threatening them. Its core mechanism divides AI systems into risk tiers: riskier technologies are subject to stricter requirements, while minimal-risk systems are assessed more flexibly. Systems deemed to pose unacceptable risks, such as behavioural manipulation tools or real-time biometric identification systems, will be banned.
In the financial sector, the AI Act is particularly significant because artificial intelligence technologies are widely used for risk management, fraud detection, and personalisation of financial services. For example, AI systems used for credit scoring will have to meet specific transparency and non-discrimination standards to ensure fairness and consumer protection. Moreover, the AI Act requires that AI systems used in the financial sector be regularly monitored and assessed for risk, which may affect how financial institutions implement and manage AI technologies.
Robo-advice, or automated financial advice, is another example of AI application in finance that will be subject to AI Act regulations. Robo-advisors, which use algorithms to create personalised investment plans, will have to make the operation of those algorithms transparent and ensure that they do not mislead clients. Regular audits and risk assessments of such systems may become standard to guarantee that they operate in the best interest of users.
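The transparency demands above are easiest to picture against a toy allocation rule. The sketch below is not how any production robo-advisor works; the rule (a variant of the well-known "100 minus age" heuristic) and its parameters are invented for illustration. Its point is that a fully deterministic, rule-based recommendation is trivially explainable and auditable, which is exactly the property regulators want preserved as models grow more complex.

```python
def allocate_portfolio(age, risk_tolerance):
    """Toy rule-based allocation: equity share starts at (100 - age)
    and is scaled down for lower risk tolerance. Deterministic, so
    every recommendation can be reproduced and explained on audit."""
    scale = {"low": 0.5, "medium": 0.75, "high": 1.0}[risk_tolerance]
    equity = max(0, min(100, round((100 - age) * scale)))
    return {"equities_pct": equity, "bonds_pct": 100 - equity}

print(allocate_portfolio(age=35, risk_tolerance="medium"))
# → {'equities_pct': 49, 'bonds_pct': 51}
```

An auditor can re-run the same inputs and verify the same output, and the firm can state in plain language why the recommendation was made; opaque learned models would need additional explanation tooling to meet the same bar.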
The developing blockchain and distributed ledger technologies (DLT) may also be covered by AI Act regulations, especially if they use AI to automate financial processes such as cryptocurrency trading or digital asset management. The AI Act may require firms using AI in conjunction with blockchain to ensure security, transparency, and compliance with regulations, which may include the obligation to report and monitor transactions and protect user data.
Source: Prighter, Navigating the Timeline of the EU Artificial Intelligence Act, 21.02.2024, https://prighter.com/resources/eu-artificial-intelligence-act-timeline-implications/
“Too AI to Fail”?
The European Commission has also initiated consultations aimed at gathering opinions on the role of artificial intelligence in the financial sector. These "targeted consultations on AI in the financial sector" are part of a wide-ranging strategy to ensure that AI development is in line with European Union values and regulations. All this is in preparation for the implementation of the Artificial Intelligence Act (AI Act), which comes into force in 2024.
AI, which has long been an integral part of financial services, is now in the spotlight thanks to these consultations. The aim is to identify the most important applications, benefits, barriers, and risks associated with AI in this sector. The European Commission encourages all interested parties to share their experiences and perspectives to help shape the future of AI in finance.
In the context of financial firms' concerns about potential regulatory overlap, the Commission aims to create a harmonious policy aligned with existing regulations such as MiFID II/MiFIR and the market abuse rules. The goal is to avoid bureaucratic turbulence that could hamper innovation.
The consultations are open until 13 September 2024, and responses can be submitted through an online questionnaire on the European Commission's website. After the consultations close, the Commission plans to publish a report summarising the collected responses, which will be crucial in shaping future EU regulation of AI in the financial sector.