
Using ChatGPT In Financial Services Sensibly

Category: Workflow Automation

Responsible AI management has long been a concrete compliance issue as much as a topic of theoretical ethics discussion. To protect customers and the economy as a whole, the financial sector is heavily regulated and closely watched, and many of those regulations effectively require responsible management practices, with penalties for institutions that fall short.

Yet text-generating systems like ChatGPT require substantial infrastructure and resources to run without causing damage to users and their data. Because its decision-making process is often opaque and difficult to interpret, ChatGPT falls short when it comes to producing trustworthy decisions in scenarios where it interacts with customers.

This lack of transparency can lead to biased or erroneous decisions, with serious repercussions for both financial institutions and their clients. More than ever, adopting responsible AI management practices is about more than compliance: it also means shielding business operations from the fallout of poorly managed AI-enhanced technologies.

In the discussion that follows, we look at two important observations:

Limitations of ChatGPT in financial services and other industries: Although there are many benefits to using ChatGPT for research, there are drawbacks in terms of controls and ethical AI when making judgments that affect customers.

Combining AI and human workflows: ChatGPT may offer convincing ways to support human decision-making, but to get the best outcomes, AI must be made to work with humans, not the other way around.

The entire episode can be heard below:

Expertise: Unstructured data analytics, unsupervised machine learning, explainable and ethical AI, cybersecurity, fraud analytics, and utility analytics

Brief Recognition: With more than 25 years of expertise in artificial intelligence, machine learning, and advanced analytics, Scott Zoldi is a seasoned technology executive who spearheads the creation of cutting-edge analytics solutions for FICO’s clientele. He has authored more than 120 patents, of which 80 have been granted and 47 are pending. He is a member of the Forbes Technology Council and holds a Ph.D. in computer science from Duke University.

The Limitations Of ChatGPT In Financial Services And Other Industries

To correct misconceptions, Scott notes that while ChatGPT is a great tool for retrieving certain kinds of information, much remains unknown about how the tool arrives at its judgments. He has doubts about whether the data it draws on has been fact-checked, and about the possibility of bias in its answers.

Although he acknowledges the purpose and great importance of ChatGPT in the financial services industry, he warns that current versions are still far too rudimentary for direct, customer-facing operations. ChatGPT-like tools driven by large language models have potential, but there is still a long way to go before they can be trusted with important decision-making.

Emerj Senior Editor Matthew DeMello questions Zoldi about his comparison of ChatGPT’s technological capabilities to the highly publicized benefits of Ozempic, the recently launched diabetes and weight-loss medication hailed as a miracle drug.

Zoldi is quick to explain that the comparison’s main point is that explainable systems are accountable systems. “We need controls on top because what usually happens with AI is that we get to a point where we’re callous because the technology told us so, and we can’t figure it out,” Zoldi tells Emerj. That, he says, is problematic.

To put it another way, Scott characterizes responsible AI practices as those that keep business decisions “auditable, safe, and ethical.” Such practices follow from transparency and other principles built into systems so that administrators and users understand how decisions are made.

Zoldi draws a broader comparison between AI in general and transformer models, which are adept at collecting information and converting it into a format that humans can comprehend and absorb. His caution stems from the fact that creating technology with the potential to affect as many people as LLMs requires time and effort to make it maximally safe for public use.

He concedes that AI offers benefits, such as improving human comprehension and productivity, but he also emphasizes the need to weigh use cases carefully.

AI may be safe to employ for research purposes, for instance, as opposed to jobs like interacting with customers or composing term papers. He warns that it would be problematic to let the cutting-edge technology communicate with a client who is dissatisfied with a decision that another AI model made about them.

Automated responses sent to clients, he argues, must be thoroughly vetted and approved. He cites recent remarks by OpenAI CEO Sam Altman, who warns against getting too excited about LLMs and other AI capabilities because of reliability concerns.

Combining Human And AI Work

Later on, Scott discusses the drawbacks and potential risks of machine learning and artificial intelligence. He warns against putting blind faith in AI, particularly in broad or customer-facing activities where errors could have a ripple effect on society.

Although there is a lot of promise for large language models (LLMs) to improve customer experience pipelines, Scott stresses the significance of additional use cases in compliance.

He emphasizes how LLM-driven interfaces can augment machine learning and other AI capabilities to spot anomalies in transaction records and other financial databases:

He likens the process to detective work: compiling all the information is like playing Sherlock Holmes. “Then you put all those pieces of information that ChatGPT surfaces for you into your conclusion,” he tells Emerj.
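
To make that “detective” workflow concrete, here is a minimal sketch of the kind of anomaly detection Zoldi describes, assuming a simple tabular transactions dataset; the column names, values, and contamination rate are illustrative, not drawn from the interview:

```python
# Minimal sketch: surface unusual transactions for a human analyst to review.
# The dataset and thresholds here are invented for illustration.
import pandas as pd
from sklearn.ensemble import IsolationForest

transactions = pd.DataFrame({
    "amount":        [25.0, 40.0, 32.5, 18.0, 9500.0, 27.0],
    "hour_of_day":   [12, 14, 9, 16, 3, 11],
    "merchant_risk": [0.1, 0.2, 0.1, 0.3, 0.9, 0.2],
})

# Isolation Forest isolates outliers without needing labeled fraud examples.
model = IsolationForest(contamination=0.2, random_state=0)
transactions["flag"] = model.fit_predict(transactions)  # -1 marks an anomaly

# The model only surfaces candidates; the human investigator draws the conclusion.
print(transactions[transactions["flag"] == -1])
```

The design choice mirrors Zoldi’s point: the system compiles and ranks the evidence, while the judgment about what it means stays with a person.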

Because of these factors, Scott maintains that in many financial services use cases, ChatGPT is better understood as offering options to human coders and sales scriptwriters to spark their creativity, rather than as writing code autonomously without human oversight.

One reason the technology is kept out of direct consumer encounters without human supervision is that LLMs remain susceptible to “hallucinations,” misrepresenting context because they work by guessing the word that will follow in a sentence.

Because of ChatGPT’s well-documented ability to spread false information, Zoldi advises corporate leaders to treat ChatGPT’s output in these domains as Mad Libs-style templates that require human intervention to fill in the blanks with real, verified data.
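
As a rough illustration of that “Mad Libs” pattern, the sketch below treats model output as a template whose blanks may only be filled from a human-verified source; the draft text, field names, and function are hypothetical:

```python
# Hypothetical sketch of the "Mad Libs" pattern: LLM text is only a template,
# and every blank must be filled from human-verified data, never by the model.
LLM_DRAFT = "Dear {customer_name}, your dispute for {amount} was {decision}."

verified_facts = {  # populated and approved by a human reviewer
    "customer_name": "A. Rivera",
    "amount": "$142.50",
    "decision": "approved",
}

def fill_template(draft: str, facts: dict) -> str:
    try:
        return draft.format(**facts)
    except KeyError as missing:
        # Refuse to send anything containing an unverified blank.
        raise ValueError(f"No verified value for {missing}; hold for human review.")

print(fill_template(LLM_DRAFT, verified_facts))
```

The point of the pattern is that the generative model never supplies the facts themselves; it only supplies the scaffolding around them.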

Scott points out that human agents in customer experience and other pipelines will need to be especially selective (or “jaded”) about what information ChatGPT and other LLM-powered technologies convey to them and their clients as LLMs become increasingly integrated throughout financial services workflows.

Compounding the issue, larger language models and more training hours do not automatically improve a model’s grasp of user intent. Because these models do not reliably align with what users mean, human supervisors play a crucial role in fine-tuning them to match user intent.

In a separate Emerj interview, AI21 Labs co-founder and co-CEO Ori Goshen clarified the issue by pointing out the following about large language models:

These language models encode some knowledge when they are used, but not in a way that is controllable. It’s still extremely limited, and extracting specific knowledge requires a lot of effort. Since it’s ultimately a statistical model, there is most likely no reliability as well.

Scott and Ori both stress that a language model’s goal should not be confused with comprehending and conveying complicated concepts; rather, it is to predict the next word token in a prompt based on which token is statistically most likely to follow. Making this distinction is essential to understanding why models frequently mishandle data, produce biased text, and fail to follow user directions consistently.
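
To illustrate the distinction, here is a toy next-token calculation: the model ranks candidate tokens purely by probability given the context, with no notion of whether the continuation is true. The vocabulary and logit values are invented for the example:

```python
# Toy illustration: a language model scores candidate next tokens by
# probability given the context; truth never enters the calculation.
# The vocabulary and logit values below are invented for this example.
import math

context = "The capital of Australia is"
logits = {"Sydney": 3.1, "Canberra": 2.7, "Melbourne": 1.4}  # raw model scores

# Softmax turns raw scores into a probability distribution over tokens.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding picks the most likely token -- here a plausible but wrong
# answer, which is the statistical root of what gets called "hallucination."
print(max(probs, key=probs.get), probs)
```

In this made-up distribution, “Sydney” outranks the correct “Canberra” simply because it is the statistically likelier continuation, which is exactly the failure mode both interviewees describe.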

At the end of the discussion, Scott reiterates his conviction that the financial services industry will eventually come to accept that not all generative AI tools are suitable for autonomous creative activities. At least with current ChatGPT iterations, humans and the technology must ultimately share responsibility in workflows when it comes to fact-checking and refining anomaly detection efforts.