The Opening of Pandora's Jar? Wolfsberg's 5 Principles for Using AI/ML in Financial Crime Compliance
Pandora's Jar, an Introduction:
There is no more fitting metaphor than Pandora's Jar[i] for describing the entry of AI into our society. Pandora's "box" is a historical mistranslation[ii], an inaccuracy; the true description of the container is "jar". The application of AI/ML in financial crime compliance according to Wolfsberg's 5 principles demands legitimacy of purpose and integrity of data outputs. It is therefore apt that Pandora's container of doom be accurately referred to.
Furthermore, the moral of the story is not that Pandora's Jar contained the ingredients of doom and therefore should not have been opened, but rather the purpose for which it was opened. Pandora opening the jar out of curiosity, rather than for the good of humanity, led to the end of the Golden Age of humanity[iii]. Essentially, Pandora's risk was not taken for a legitimate purpose.
Pandora's reason for her risk-taking played an integral part in her demise and that of the human race. Wolfsberg's 5 principles define the reason for opening the modern-day Pandora's Jar that is AI:
“FIs’ programmes to combat financial crimes are anchored in regulatory requirements, and a commitment to help safeguard the integrity of the financial system, while reaching fair and effective outcomes”.
Wolfsberg's 5 principles[iv].
A Brief Overview
Principle number one: Legitimate Purpose
Wolfsberg's first principle refers to legitimate purpose and is presumably a direct reference to Art. 5(1)(b) of the GDPR, which reads: "Personal data shall be: … collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes …".
Wolfsberg states: "FIs should implement a programme that validates the use and configuration of AI/ML regularly, which will help ensure that the use of data is proportionate to the legitimate, and intended, financial crimes compliance purpose." Wolfsberg then advises that FIs should follow a risk-based approach to the development and use of AI/ML solutions for financial crimes compliance.
Principle number two: Proportionate Use
Wolfsberg advises that FIs should balance "the benefits of use with appropriate management of the risks that may arise" from the use of AI/ML solutions in FCC. Wolfsberg's proportionate use principle further states that FIs must weigh the "severity of potential financial crimes risk" against "any AI/ML solutions' margin for error. FIs should implement a programme that validates the use and configuration of AI/ML regularly".
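To make the proportionality test concrete, the weighing of risk severity against a model's margin for error might be sketched as follows. This is purely illustrative and not from the Wolfsberg guidance: the severity tiers, example use cases and tolerable error rates are invented assumptions.

```python
# Illustrative sketch only: one way an FI *might* operationalise
# "severity of potential financial crimes risk" vs. "margin for error".
# All tiers and thresholds below are hypothetical, not Wolfsberg's.

# Maximum tolerable validated model error rate per risk severity tier:
MAX_ERROR_BY_SEVERITY = {
    "low": 0.10,     # e.g. low-stakes internal triage (assumed example)
    "medium": 0.05,  # e.g. transaction monitoring alerts (assumed example)
    "high": 0.01,    # e.g. sanctions screening (assumed example)
}

def use_is_proportionate(severity: str, validated_error_rate: float) -> bool:
    """Return True if the model's measured margin for error is acceptable
    for the severity of the financial-crime risk it addresses."""
    return validated_error_rate <= MAX_ERROR_BY_SEVERITY[severity]
```

On these assumed numbers, a model with a 3% validated error rate would pass for a medium-severity use but fail for a high-severity one; the regular validation programme Wolfsberg describes would re-run this check as the model and its data drift.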
Principle number three: Design and Technical Expertise
Wolfsberg recommends that FIs obtain and possess the necessary expertise to control and thoroughly understand the "implications, limitations, and consequences" of using AI/ML solutions for financial crimes compliance. Furthermore, the design of AI/ML systems should proceed from a "clear definition of the intended outcomes and ensure that results can be adequately explained or proven given the data inputs". In short, Wolfsberg recommends a thorough understanding of the AI/ML technology, accompanied by a well-defined reason for its use. FIs must also be able to prove and explain that the data generated was in line with the intended outcomes.
Under this principle, Wolfsberg focuses on expertise, control, understanding and constant monitoring of the technology, to ensure the tech is used for its intended purpose and outcomes.
Principle number four: Accountability and Oversight
Wolfsberg emphasises FIs' accountability for the use of the technology, regardless of whether the AI/ML systems were developed in-house or sourced externally. Further emphasis is placed on staff training to ensure the appropriate use of AI/ML and to enable oversight of system design, with technical teams bearing specific responsibility for the ethical use of data in AI/ML through existing risk or data management frameworks. Processes that challenge technical teams and "probe the use of data within their organisations" should be developed.
Principle number five: Openness and Transparency
Wolfsberg recommends openness and transparency "about their use of AI/ML, consistent with legal and regulatory requirements" without facilitating the "evasion of the industry's financial crime capabilities, or breach reporting confidentiality requirements and/or other data protection obligations inadvertently". Engagement with all relevant role-players is recommended to achieve this principle.
Risk-based assessment
Wolfsberg recommends a risk-based assessment for the use of AI/ML:
The Principles should be operationalised by each FI according to a risk-based approach dependent on the prevailing and evolving regulatory landscape, as well as on its use of AI/ML against financial crime, and governed accordingly.
Risk assessment is therefore the primary concern.
The RegTech Jar
The Financial Action Task Force (FATF)[v] refers to RegTech (Regulatory Technology) as the umbrella under which AI/ML technology falls. RegTech is a subset of FinTech and focuses on technologies that facilitate the delivery of regulatory requirements more efficiently than existing procedures and legacy technology[vi]. Global bodies are clearly pushing AI/ML as the new go-to technology, and it is here to stay. The application of AI/ML in the AML space is all about risk versus reward. The most prudent question that first needs to be answered is therefore: what are the risks?
Wolfsberg's third principle, "Design and Technical Expertise", specifically addresses the capacity of FIs to thoroughly understand the implications, limitations, and consequences of using AI/ML solutions. In other words, its risk.
I would argue that understanding the implications, limitations, and consequences of using AI/ML solutions in AML is at the core of its legal application. There is no other way to effectively gauge and implement the risk/reward equation.
The risks of AI/ML
Effectively gauging the risks of AI/ML is a team effort. A deep and thorough understanding of the technologies, information technology law and regulatory requirements is needed for effective assessment. It is this combination of skillsets that poses the greatest risk and challenge to the industry in getting it right. It is common sense that a lawyer cannot give scientific advice and a scientist cannot give legal advice. The legal threats posed by AI/ML can only be effectively gauged if the team has a thorough understanding of the tech and the applicable law. Wolfsberg's third principle reads:
Teams involved in the creation, monitoring, and control of AI/ML should be composed of staff with the appropriate skills and diverse experiences needed to identify bias in the results.
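As one concrete example of what "identify bias in the results" could look like in practice, a mixed-skills team might compare a model's false-positive rates across customer segments. This sketch is illustrative only; the segments and review outcomes are invented data, not a prescribed Wolfsberg method.

```python
# Illustrative sketch with invented data: a simple disparity check on an
# AI/ML alerting model's outcomes, of the kind a diverse team might run
# to surface potential bias in results.

from collections import defaultdict

# (segment, model_flagged, actually_suspicious) — hypothetical review outcomes
reviewed_alerts = [
    ("segment_a", True, False), ("segment_a", True, True),
    ("segment_a", False, False), ("segment_a", True, False),
    ("segment_b", True, True), ("segment_b", False, False),
    ("segment_b", True, False), ("segment_b", False, False),
]

def false_positive_rates(records):
    """False-positive rate per segment: cases flagged by the model but found
    non-suspicious on review, out of all non-suspicious cases in the segment."""
    fp = defaultdict(int)
    negatives = defaultdict(int)
    for segment, flagged, suspicious in records:
        if not suspicious:
            negatives[segment] += 1
            if flagged:
                fp[segment] += 1
    return {segment: fp[segment] / negatives[segment] for segment in negatives}

rates = false_positive_rates(reviewed_alerts)
# A large gap between segments would warrant investigation by the team.
```

A persistent gap between segments would not prove bias by itself, but it is exactly the kind of result the principle expects a suitably skilled team to notice and interrogate.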
Understanding AI/ML
FIs need a basic understanding of the tech in the first instance. Science fiction and hype need to be safely tucked away to ensure the legal and realistic application of AI/ML technologies.
One of the core realities of machine learning is that the software operates on its own cognisance, without direct human input and, potentially, even without human oversight.
Autonomy can be said to be the ultimate goal of AI, although it is not what defines it. The difference lies in the fact that direct human input (of information) can result in AI output while lacking autonomy. Similarly, artificial intelligence and machine learning are not the same thing.
"… artificial intelligence and machine learning are not the same, but they are closely related. Machine learning is the method to train a computer to learn from its inputs but without explicit programming for every circumstance. Machine learning helps a computer to achieve artificial intelligence."[vii]
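The distinction in the quote above can be made concrete with a toy sketch: the same flagging task solved once by explicit programming and once by "learning" from examples. Everything here is a deliberately simplified, invented illustration; real FCC models are far more complex.

```python
# Illustrative sketch (invented data): explicit programming vs. a rule
# learned from labelled examples, in miniature.

def rule_based_flag(amount: float) -> bool:
    """Explicit programming: a human writes the rule for every circumstance."""
    return amount > 10_000  # hard-coded threshold chosen by an analyst

def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    """Machine learning, in miniature: infer the decision threshold from
    labelled examples instead of hard-coding it."""
    candidates = sorted(amount for amount, _ in examples)
    best_t, best_errors = 0.0, len(examples) + 1
    for t in candidates:
        errors = sum((amount > t) != flagged for amount, flagged in examples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Hypothetical labelled history: (transaction amount, was it suspicious?)
history = [(500.0, False), (2_000.0, False), (9_000.0, False),
           (12_000.0, True), (25_000.0, True)]
threshold = learn_threshold(history)

def learned_flag(amount: float) -> bool:
    """The learned rule: its behaviour comes from the data, not the coder."""
    return amount > threshold
```

The point of the sketch is the risk Wolfsberg's principles address: `learned_flag`'s behaviour is determined by the training data rather than written down by a human, so its outputs must be validated, explained and monitored rather than simply read off the code.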
Utilising AI means depending on, and trusting, the outputs of a computer program not under the direct control of a human. That is what you call a risk. FIs therefore need to understand this concept well.
Conclusion:
In short, Wolfsberg's first principle of "Legitimate Purpose" should be the reason for opening the AI/ML jar, in conjunction with a deep and thorough understanding of the tech and its possible outcomes.
FIs should take note of Wolfsberg's principles and understand that their criminal and civil liability depends on the legality of the output of artificial intelligence software. A Pandora's jar.
[i] Gill, N.S. "Understanding the Significance of Pandora's Box." ThoughtCo, Aug. 27, 2020, thoughtco.com/what-was-pandoras-box-118577.
[ii] In a later story the jar contained blessings that would have preserved the golden age of humanity rather than destroying it.
"Pandora". Encyclopedia Britannica, 5 Dec. 2022, https://www.britannica.com/topic/Pandora-Greek-mythology. Accessed 7 May 2023 “Pandora’s jar became a box in the 16th century, when the Renaissance humanist Erasmus either mistranslated the Greek or confused the vessel with the box in the story of Cupid and Psyche.”
[iii] Britannica, The Editors of Encyclopaedia. "Pandora". Encyclopedia Britannica, 5 Dec. 2022, https://www.britannica.com/topic/Pandora-Greek-mythology. Accessed 7 May 2023.
[iv] Wolfsberg Principles for Using Artificial Intelligence and Machine Learning in Financial Crime Compliance, https://www.wolfsberg-principles.com/sites/default/files/wb/Wolfsberg%20Principles%20for%20Using%20Artificial%20Intelligence%20and%20Machine%20Learning%20in%20Financial%20Crime%20Compliance.pdf
[v] “The Financial Action Task Force (FATF) is the global money laundering and terrorist financing watchdog. The inter-governmental body sets international standards that aim to prevent these illegal activities and the harm they cause to society. As a policy-making body, the FATF works to generate the necessary political will to bring about national legislative and regulatory reforms in these areas.” https://www.fatf-gafi.org/en/the-fatf/who-we-are.html
[vi] FATF (2021), Opportunities and Challenges of New Technologies for AML/CFT, FATF, Paris, France, at para 27, https://www.fatf-gafi.org/publications/fatfrecommendations/documents/opportunities-challenges-new-technologies-aml-cft.html
[vii] Copeland, B.J. "artificial intelligence". Encyclopedia Britannica, 24 Aug. 2022, https://www.britannica.com/technology/artificial-intelligence. Accessed 18 October 2022.
