Cybersecurity is hardly an exception to the growing importance of AI in our daily lives, and that holds for both the “cat” and the “mouse” sides of the chase. A Ponemon Institute study conducted in collaboration with Accenture found that cybercrime cost financial services companies an average of $18.5 million per company in 2018.
More broadly, the Cybersecurity and Infrastructure Security Agency (CISA) estimates that cybercrime costs the US $242 billion annually. AI, however, can be put to work in enterprise defensive operations, helping firms avoid penalties and reduce their risk exposure in almost any conflict involving rules-based systems.
In this article, we will examine some of the most important terminology at the nexus of AI and cybersecurity, with an emphasis on three significant AI trends affecting financial services companies today:
Privacy-driven regulatory compliance: AI capabilities are well suited to helping financial institutions (FIs) meet the regulatory obligations, grounded in various regional standards, that protect the privacy of user data.
Threat detection and the AI arms race in cyberattacks: Financial services firms and cybercriminals alike are deploying AI in a veritable arms race.
“Front-door” verification: The growing importance of confirming system users at their first engagement with a system in order to streamline cybersecurity and compliance procedures.
We’ll start by providing definitions for the relevant terms and laws.
Working Definitions Of Cybercrime And Cybersecurity
According to various international standards, cybercrime encompasses two linked types of illegal activity:
Cyber-dependent crimes are those that can only be committed using an ICT (information and communications technology) device; most hacking falls into this category.
Cyber-enabled crimes are traditional crimes whose scale or reach is increased by the use of an ICT device.
Though the term is sometimes limited to attacks on government and public sector organizations online, we take a broader view of cybercrime here: the deliberate and coordinated attempt by individuals to use personal data outside of common international and industry norms and regulatory regimes.
In this article, we’ll use the term “cybersecurity” to refer both to initiatives that deal with the fallout of cybercrime, so defined, and to preventative steps that stop cybercrime from happening in the first place.
Finally, we’ll define cyberattacks as coordinated waves of hacking techniques used to carry out large-scale cybercrime, turning financial institutions’ tech stacks and legacy systems against their customers.
Trend 1: Regulatory Compliance Driven By Privacy
Use cases for AI are common in the domains of fraud detection and communications surveillance. AI capabilities in machine learning and data analytics, in particular, carry out the fundamental monitoring in these enterprise domains, whether of chat rooms, messaging apps, eCommerce environments, or individual credit card transactions, to ensure that illicit activity is not occurring.
AI-driven cybersecurity solutions use similar methods, applying data analytics and machine learning to identify questionable activity across devices and networks in online-only settings.
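To make the approach concrete, here is a minimal sketch of the kind of unsupervised anomaly detection such solutions build on, using scikit-learn's IsolationForest. The feature set and data are synthetic, illustrative assumptions, not any vendor's actual model:

```python
# A minimal sketch of ML-driven transaction surveillance; feature names
# and data are hypothetical stand-ins for a real transaction feed.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic historical transactions:
# columns = [amount_usd, hour_of_day, merchant_risk_score]
normal = np.column_stack([
    rng.lognormal(3.5, 0.8, 5000),   # typical purchase amounts
    rng.integers(7, 23, 5000),       # daytime activity
    rng.uniform(0.0, 0.3, 5000),     # low-risk merchants
])

# Learn what "normal" looks like without labeled fraud examples.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new activity: a large 3 a.m. purchase at a high-risk merchant.
suspicious = np.array([[9500.0, 3, 0.9]])
print(model.predict(suspicious))  # -1 => flagged as anomalous
print(model.predict(normal[:3]))  # mostly 1 => consistent with history
```

In production, flagged transactions would typically feed a human review queue rather than trigger automatic blocks, since unsupervised scores are noisy.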
These solutions frequently concentrate on offering AI-regimented adherence to current laws and other rules-based frameworks tailored to the particulars of cybercrime. The various laws pertaining to cybercrime address three issues:
How points of sale, both online and in person, handle customer data and other sensitive financial information.
International business and governmental standards for consumer data security and privacy.
Anywhere else that personal information is exposed in order to verify a financial transaction.
Each of these regulatory regimes carries fines or penalties for violations, and compliance with them is standard practice for financial institutions that operate internationally. Those potential costs generate considerable interest and investment in proven AI solutions that shield businesses from noncompliance fines.
Compliance Solutions Associated With BSA
The most significant market for AI compliance solutions lies in detecting fraud that falls between anti-money laundering (AML) and know your customer (KYC) compliance, both rooted in the Bank Secrecy Act (BSA) of 1970, one of the first US laws to directly combat money laundering. AML and KYC compliance work hand in hand, since identifying money laundering activity basically comes down to how well a financial institution knows its customers.
Knowing your customer can take on equally global proportions, especially given the challenges financial institutions currently face when conducting transactions worldwide. Thomas Mangine, the Bank of Montreal’s director of AML and risk reliance, recently spoke on Emerj’s AI in Business podcast about how international conflict and sanctions can elevate AML and KYC compliance to new levels:
And you will see discussions of tougher penalties against Russia for the war in eastern Europe in the West practically every day of the week, whether from the US, Canada, Australia, or the EU. As a result, you must do more and more. And one of the problems is figuring out where to find the crucial data that gives us the indicators. Or, as the industry refers to them, the “red flags” that indicate that something strange or unlawful may be happening and that you should investigate further.
Thomas Mangine, Director of AML and Risk Reliance at the Bank of Montreal
One of the most exciting use cases in BSA-related compliance is the potential for AI technologies to significantly lower false positives in AML/KYC threat detection. Emerging AI technologies are also identifying deeper patterns of behavior, expanding enterprise KYC compliance capabilities to KYCC (“knowing your customer’s customer”) levels, and regulatory requirements will likely follow in raising the bar.
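As a rough illustration of the false-positive problem, the sketch below shows how raising an alert threshold trades alert volume and false positives against recall. The scores and labels are synthetic stand-ins, not output from any real AML system:

```python
# A hedged sketch of threshold tuning for AML alerts; all data is synthetic.
import numpy as np

rng = np.random.default_rng(seed=7)
n = 10_000
labels = rng.random(n) < 0.02  # ~2% of cases are truly suspicious

# Hypothetical model scores: suspicious cases tend to score higher.
scores = np.where(labels, rng.beta(5, 2, n), rng.beta(2, 5, n))

def alert_stats(threshold):
    """Count alerts, false positives, and recall at a given threshold."""
    alerts = scores >= threshold
    true_pos = np.sum(alerts & labels)
    false_pos = np.sum(alerts & ~labels)
    return alerts.sum(), false_pos, true_pos / labels.sum()

for t in (0.5, 0.7, 0.8):
    total, fp, recall = alert_stats(t)
    print(f"threshold={t:.1f}: {total} alerts, "
          f"{fp} false positives, recall={recall:.0%}")
```

The business question is where to set the threshold: every false positive consumes analyst time, while every missed case is a compliance risk.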
Complete Compliance Packages
Numerous AI systems on the market today claim to offer FIs enterprise-level AML/KYC compliance. However, for the remaining requirements mentioned above, such as international privacy compliance or PCI DSS certification, there are no standalone AI solutions comparable to those for AML/KYC compliance and fraud detection.
As Thomas Mangine explains in his appearance on the AI in Business podcast, the main reason for this split is that AML/KYC compliance is, at its core, an information-gathering exercise, and the related solutions focus on highlighting the “vital information” that raises concerns for human compliance professionals.
Since there is always more data to collect in a KYC environment, concentrating on such an open-ended process requires the strength of a fully digital platform.
A second reason for the split in AI cybersecurity solutions is that, for digital platforms, complying with nearly all other cybersecurity standards is frequently straightforward enough that specialized features would add little. Instead, adherence to these standards is typically built into broader systems that offer businesses all-in-one automated compliance tailored to their industry and the relevant legislation.
AI solutions for banking and financial services typically cover the cybersecurity standards mentioned above alongside HIPAA and other requirements. For instance, Nightfall.AI’s solution addresses both PCI and HIPAA compliance, while other systems, such as UpGuard, automate compliance with all of the aforementioned regulations.
Given the variety of choices, business executives considering AI technologies for cybersecurity compliance should approach it from two angles:
One solution for their AML/KYC and other BSA-related compliance
One adaptable solution outside of AML/KYC, suited to their unique regulatory needs
Because the applicability of individual regulations varies from industry to industry, and often from business to business, AI leaders should consider starting there when weighing possible in-house solutions and even adopting early AI projects, depending on the size of their enterprise and their market circumstances.
Trend 2: AI Arms Race In Cyberattacks And Threat Detection
It is well established that machine learning can identify hacker behavior, even absent strict regulatory compliance requirements. Machine learning-enhanced threat detection, especially in the context of transaction surveillance, is the first line of defense in financial institutions’ cybersecurity.
However, cybercriminals have AI tools of their own with which to weaken financial institutions, and machine learning is a key component of their arsenal, particularly in launching massive cyberattacks against multinational corporations and sprawling legacy systems.
Here is a quick 8-minute interview with Mark Gazit, CEO of ThetaRay, explaining how cybercriminals are using artificial intelligence to carry out increasingly complex cyberattacks:
Notably, Mark points out that hackers using AI prefer to target larger institutions because it is easier there to conceal the minor cybercrimes that can grow into more serious kinds of fraud and theft.
Gazit compares this to the bank robberies portrayed in popular culture, then notes:
These days, it’s far more practical to set up a service outside of the US and have an AI-based program run automatically on the server that will break into bank accounts and take half a dollar. No one will object, particularly if you call it an iTunes transaction or a stock transaction, and then you run it automatically 20 or 30 million times in a row. That means 20 to 30 million dollars go to someone else’s bank account in a single month, after which the program simply drops the connection and vanishes, and a bank may find out about it a year later.
Mark Gazit, CEO of ThetaRay
Even though that interview was conducted in 2018, Mark’s April 2020 appearance on the AI in Business podcast revealed a financial services environment that hasn’t changed much.
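A defensive counterpart to the attack Gazit describes can start with something as simple as an aggregation rule. The sketch below flags counterparties that quietly accumulate large volumes of sub-dollar debits; the field names and thresholds are illustrative assumptions, not a production rule set:

```python
# A minimal sketch of an aggregation rule against micro-transaction
# skimming; the transaction feed, names, and thresholds are hypothetical.
from collections import defaultdict

transactions = [
    # (counterparty, amount_usd) -- stand-in for a real transaction feed
    ("itunes-lookalike-llc", 0.50),
    ("grocery-store", 42.10),
    ("itunes-lookalike-llc", 0.50),
    # ...millions more rows in practice
]

MICRO_LIMIT = 1.00    # what counts as a "sub-dollar" debit
COUNT_THRESHOLD = 2   # absurdly low for the demo; think millions in production

micro_counts = defaultdict(int)
micro_totals = defaultdict(float)
for counterparty, amount in transactions:
    if amount < MICRO_LIMIT:
        micro_counts[counterparty] += 1
        micro_totals[counterparty] += amount

for counterparty, count in micro_counts.items():
    if count >= COUNT_THRESHOLD:
        print(f"flag {counterparty}: {count} micro-debits, "
              f"${micro_totals[counterparty]:.2f} total")
```

Rules like this are cheap to run but easy to evade, which is why they are typically layered with the machine learning approaches described above.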
Unfortunately, given the arms-race dynamics of the issue, banking and financial institutions have few options beyond maintaining “eternal vigilance” to stay ahead of cybercriminals’ AI tools. The endeavor will inevitably require ongoing education and strategy across the entire firm, from the top down, or comprehensive AI vendor solutions that keep the institution ahead of criminal AI technology.
At least for the time being, there are new use cases in the financial services industry that hold promise for giving organizations a clear edge in the basic dynamics of the arms race.
Scott Nowson, the AI Leader for PwC Middle East, recently spoke on Emerj’s AI in Financial Services podcast about spearheading initiatives to deploy technologies that analyze fraud patterns and adapt to “new normals” as they emerge in data collection:
In theory, systems could become available that analyze cybersecurity reporting, both in internal data and in external media, to find new ways that criminals are using AI to conduct cyberattacks. These prospective solutions could then inform the appropriate institutional leaders about how to modify their procedures accordingly.
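One building block of such adaptation is detecting when a “new normal” has actually emerged in the data. The sketch below uses the population stability index (PSI), a common distribution-drift measure, on synthetic transaction amounts; it illustrates the general technique and is not a description of PwC’s methods:

```python
# A hedged sketch of drift detection with the population stability index;
# all data here is synthetic.
import numpy as np

def psi(baseline, current, bins=10):
    """Population stability index between a baseline and a current sample."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range values
    b_frac = np.histogram(baseline, edges)[0] / len(baseline)
    c_frac = np.histogram(current, edges)[0] / len(current)
    b_frac = np.clip(b_frac, 1e-6, None)   # avoid log(0) and division by zero
    c_frac = np.clip(c_frac, 1e-6, None)
    return np.sum((c_frac - b_frac) * np.log(c_frac / b_frac))

rng = np.random.default_rng(seed=1)
baseline = rng.lognormal(3.5, 0.8, 50_000)  # historical amounts
shifted = rng.lognormal(3.9, 1.0, 50_000)   # a fraud-driven shift

print(f"stable month:  PSI = {psi(baseline, baseline[:25_000]):.3f}")
print(f"shifted month: PSI = {psi(baseline, shifted):.3f}")
# A common rule of thumb treats PSI > 0.25 as a shift worth investigating.
```

When the index crosses its alert level, the finding would go to analysts who decide whether the shift reflects fraud, seasonality, or a product change.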
Trend 3: Simplifying Cybersecurity Workflows With “Front Door” Verification
In the defensive organizational mindset that characterizes banking leadership, security specialists place special emphasis on the entry points, or “front doors,” of systems, such as call centers, HR onboarding, or data access.
Because departments and data are siloed in large organizations, the security significance of these “front doors” is typically undervalued; instead, checkpoints are scattered throughout the system to confirm the identity and intent of participants.
Verifying participants (or “knowing your customer” in a strict banking or financial context) at the system’s “front door” is becoming an increasingly valuable organizational capability across financial services, from banking to real estate.
Because AI capabilities like machine learning and data analytics tend to break down silos between people and data, these “front doors” are becoming ever more important to organizational leaders, particularly in cybersecurity. Consolidated data sourcing better equips compliance and security professionals to quickly access pertinent information and complete customer or employee profiles.
To use a nightclub analogy: if security personnel can vet everyone by checking IDs at the door, there is considerably less need for bouncers to “work the floor” (or, in the case of banking, spend so much time re-verifying clients and system participants).
The approach goes far beyond consumer interactions and is broadly applicable to any process that functions as a funnel and involves external parties integrating on the basis of trust. Verifying prospective hires at the top of a recruitment funnel is one HR-related example. A minimal sketch of the pattern appears below.
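The sketch verifies identity once at the “front door” and issues a short-lived signed token that downstream checkpoints can trust instead of re-verifying. The verify_identity stub and every name here are hypothetical placeholders for real KYC and document checks:

```python
# A hedged sketch of front-door verification with a signed session token;
# verify_identity is a stub for real KYC, document, or biometric checks.
import hashlib
import hmac
import json
import time

SECRET = b"rotate-me-in-production"  # shared signing key (assumption)

def verify_identity(credentials: dict) -> bool:
    """Stand-in for document checks, biometrics, or KYC lookups."""
    return credentials.get("id_document") == "valid"

def issue_token(user_id: str, ttl_seconds: int = 900) -> str:
    """Sign a short-lived token once the front-door check passes."""
    payload = json.dumps({"user": user_id, "exp": time.time() + ttl_seconds})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def check_token(token: str) -> bool:
    """Downstream checkpoints trust the signature instead of re-verifying."""
    payload, _, sig = token.rpartition("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)
            and json.loads(payload)["exp"] > time.time())

# Front door: verify once; every internal checkpoint just checks the token.
if verify_identity({"id_document": "valid"}):
    token = issue_token("customer-42")
    print(check_token(token))                # True: trusted downstream
    print(check_token(token + "-tampered"))  # False: rejected
```

The design point is that the expensive verification happens exactly once, at the entry point; everything inside the system then relies on a cheap cryptographic check.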
Whether in banking, to avoid penalties and reduce false positives in AML compliance, or in insurance, as in claims and underwriting, many verification procedures in the financial services industry are essentially data-dependent. As a result, they are equally vulnerable to fraud and other illegal activity that occurs online.
In a recent appearance on Emerj’s AI in Business podcast, Christian van Leeuwen, Chief Strategy Officer of FRISS, discussed an insurance process his company calls “trust automation,” which also relies on a front-door verification mindset. Below, he discusses the model’s significance in optimizing workflows:
These, too, are rules-based systems that could be streamlined by similar “front door” verification, which can only be achieved by an organizational culture that embraces AI’s transformative potential.
Front-door verification, a policy supported by AI-enhanced procedures that establish trust in a system’s participants, is gradually emerging as the key to guaranteeing security while reducing human workloads in many financial services contexts, cybersecurity not least among them, particularly where online portals are involved.
Given the demonstrated outcomes, financial services executives would be wise to approach cybersecurity funnels and workflows from the “front door,” whether they are seeking AI solutions on the market or building their own internal procedures and early cybersecurity AI projects.