AI transparency is a significant challenge for companies, especially in the financial services sector. Generative AI (GenAI) models, particularly non-deterministic ones, are frequently perceived as “black boxes,” making their underlying decision-making processes difficult to understand.
This black-box risk exposes banks to a variety of AI mishaps, including algorithmic bias, data bias, and system errors. Following such incidents, banks and other financial institutions have suffered an average short-term cumulative abnormal return (CAR) loss of -21.04%.
This raises serious issues for financial services companies, particularly around risk management, regulatory compliance, and client trust. Data bias compounds these problems because it can cause real harm to the people a model’s decisions affect.
Financial services companies must be able to explain the outputs of their AI systems. Failure to do so can lead to class-action lawsuits, regulatory fines, and reputational harm. According to an article in Information Fusion, a model needs to be reliable for industries and end users to accept it.
The article below covers two main takeaways from the discussion:
Using a “TIE” framework to drive GenAI in businesses: focusing on explainability, interpretability, and transparency in order to use GenAI to enhance research and decision-making in fields such as fraud detection.
Recognizing how crucial data control is for financial executives: managing data governance to satisfy legal standards and lessen bias in AI models.
Utilizing A “TIE” Framework To Advance GenAI In Businesses
When questioned about the specific advantages of first-generation, deterministic AI models compared with newer, more sophisticated second-generation models, and how those advances, especially in GenAI, are helping contemporary enterprises, Yohannes offered his insight.
He agrees that most of the current buzz centers on GenAI and believes the value of AI has been meaningfully demonstrated over the past 20 years. Over that time, he explains, AI initiatives have seen significant consumer uptake, strong adoption rates, and corporate experimentation, particularly at major organizations. He stresses that the experimentation stage is now well behind us.
He talks about how GenAI is proving to be very helpful in enhancing problem-solving in a variety of industries, such as:
Financial services: With intriguing use cases emerging, especially in banking, insurance, and fintech.
The scientific and medical fields: To enhance research and healthcare service decision-making.
Personalized services: In banking, AI is used to originate loans, open and onboard accounts, and detect anomalies that help prevent fraud.
While big financial institutions are incorporating AI into their operations, most of their AI initiatives remain designated for internal use, according to Yohannes, who believes much of the experimentation is now moving to production. He cites JPMorgan Chase, which first made its proprietary LLM suite available to 50,000 analysts and later expanded access to 145,000 employees.
He continues by naming the crucial attributes that businesses should prioritize when developing solutions, and that financial services clients should seek out. He calls these attributes the TIE principle, Transparency, Interpretability, and Explainability, which ultimately enables banks to provide their clients with reliable solutions:
Solutions cannot be a “black box”; they must be transparent.
They must be able to explain how they reach their outputs.
And organizations must be able to stand behind and defend those outputs.
Recognizing Data Control’s Significance For Financial Leaders
When asked for his thoughts on the state of hybrid strategies today and what financial executives should know, especially about endpoint storage alternatives and how to apply them across different AI use cases, Yohannes goes into detail.
Yohannes describes how AI is fueling the need for more sophisticated infrastructure, specifically endpoint storage. He lists several essential elements for successfully implementing AI solutions:
GPU-powered computing: Required for LLM training and operation.
Data requirements: Obtaining access to confidential information, especially for the financial services sector.
Agentic infrastructure: Retrieval-augmented generation (RAG) helps make AI answers more specific and better targeted.
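As a rough illustration of the RAG idea above, the sketch below retrieves the documents most relevant to a query and folds them into a prompt. The word-overlap scoring, document snippets, and prompt template are all simplified assumptions, not any specific vendor’s pipeline; a production system would use embedding-based retrieval and a real LLM call.

```python
# Minimal RAG sketch: retrieve relevant context, then ground the prompt in it.
# Scoring, documents, and prompt wording are illustrative assumptions.

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of lowercase words shared with the query."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff the retrieved context into the prompt so answers stay grounded."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Wire transfers over $10,000 require enhanced review.",
    "Card disputes must be filed within 60 days.",
    "Branch hours are 9am to 5pm on weekdays.",
]
prompt = build_prompt("When must card disputes be filed?", docs)
print(prompt)
```

The key design point is that the model is asked to answer from retrieved, curated context rather than from whatever it memorized in training, which is what makes RAG answers more specific and targeted.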
He notes the increasing shift toward cloud-based storage options for their efficiency, speed, and scalability. Yohannes also stresses the necessity of efficient data management, including making data governance a top priority to ensure data is kept safe and appropriately controlled under robust privacy protections.
Yohannes asserts that conventional AI and ML tools play a growing role in endpoint telemetry analysis and anomaly detection. Maintaining security requires unified endpoint management, especially under zero-trust frameworks. Object-storage endpoints are crucial in AI systems because they offer rapid access to large datasets.
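A minimal sketch of the kind of conventional-ML anomaly detection on endpoint telemetry described above, using a simple z-score threshold. The metric (outbound bytes per minute), the sample values, and the threshold are illustrative assumptions; real deployments use far richer models.

```python
import statistics

# Flag telemetry samples that sit unusually far from the baseline.
# Metric name, data, and threshold are hypothetical examples.

def flag_anomalies(samples: list[float], threshold: float = 2.5) -> list[int]:
    """Return indices of samples more than `threshold` population standard
    deviations from the mean; empty when there is no variation."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]

# Outbound bytes per minute from one endpoint; the spike at index 5
# stands out against the steady baseline.
traffic = [120.0, 115.0, 130.0, 118.0, 122.0, 900.0, 119.0, 121.0]
print(flag_anomalies(traffic))  # prints [5]
```

Under a zero-trust posture, a flagged index like this would feed the unified endpoint management layer for follow-up, rather than being trusted or ignored on its own.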
Yohannes claims that fintech companies and financial services organizations are putting more effort into creating their own LLMs. He goes on to discuss the problems these groups face in more detail:
AI output can contain hallucinations.
Because the LLMs were trained on public data that overlapped with proprietary data, there is a risk of class-action litigation.
He also highlights data control from a banking standpoint, arguing that banks should prioritize internal data management in fully regulated settings. The ability to regulate and control the data matters more than data sovereignty, localization, or physical location, which depend on the regulatory region.
Yohannes highlighted several crucial tactics in response to a question about how executives can adapt to ongoing changes in cloud technology and endpoint storage options:
Using synthetic data: Yohannes suggested using some synthetic data to lessen or even eliminate the influence of bias and produce more accurate, balanced models.
Data sovereignty and control: Financial leaders should oversee the complete data lifecycle, from creation to eventual disposal, he said, emphasizing the need for precise control measures. Combined with policies, procedures, and processes, these controls help companies maintain data ownership and meet regulatory and compliance obligations.
Retraining LLMs: Yohannes suggests that instead of merely fine-tuning LLMs, financial organizations should consider retraining them. Modifying a model’s fundamental characteristics to better suit an organization’s unique requirements allows a more specialized and controlled use of the technology.
Establishing new executive positions: Yohannes pointed to the emergence of new executive roles as part of this trend, underscoring the growing significance of data in AI strategies. At JPMorgan Chase, the Chief Data and Analytics Officer has direct visibility to the board. This move demonstrates how data is becoming increasingly central to shaping AI strategy.
Resolving data bias: Financial executives must proactively detect and mitigate biases in the data their AI models are trained on. Yohannes underlined that proprietary data carries the same risk of bias. When biases go unaddressed, AI results suffer, creating reputational and legal risks.
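To make the synthetic-data tactic above concrete, here is a minimal sketch that oversamples an under-represented group by adding jittered copies of existing records until groups are balanced. The field names, jitter range, and loan records are all hypothetical, and real programs would use more principled generators (SMOTE-style or model-based synthesis) plus fairness audits.

```python
import random

# Toy synthetic-data balancing: pad minority groups with jittered copies
# of real records. All field names and values are hypothetical.

def balance(records: list[dict], group_key: str, rng: random.Random) -> list[dict]:
    """Add jittered copies of minority-group records until every group
    is the same size as the largest group."""
    groups: dict[str, list[dict]] = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r)
    target = max(len(g) for g in groups.values())
    out = list(records)
    for members in groups.values():
        for _ in range(target - len(members)):
            base = rng.choice(members)
            synthetic = dict(base)  # copy, then perturb a numeric field
            synthetic["income"] = round(base["income"] * rng.uniform(0.95, 1.05), 2)
            out.append(synthetic)
    return out

rng = random.Random(0)  # seeded for reproducibility
loans = (
    [{"region": "urban", "income": 60_000} for _ in range(4)]
    + [{"region": "rural", "income": 45_000}]
)
balanced = balance(loans, "region", rng)
counts = {r: sum(1 for x in balanced if x["region"] == r) for r in ("urban", "rural")}
print(counts)  # prints {'urban': 4, 'rural': 4}
```

The point of the sketch is the governance angle Yohannes raises: a model trained on the original records would see rural applicants a fifth as often as urban ones, and balancing the training set is one simple lever for reducing that source of bias.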