Developing Enterprise AI Systems With Trust And Guardrails


Two years after generative AI redefined “hallucinations” for the global economy, enterprise executives are prioritizing explainability, reliability, and transparency when implementing it.

According to a recent Salesforce survey, almost two-thirds of CEOs link sales growth to trust in AI. Yet trust is eroding in the legal sector: a Stanford HAI (Human-Centered Artificial Intelligence) study found that AI responses to roughly 75% of court-related queries contained hallucinations.

 

Steve Jones, Executive Vice President of Data-Driven Business & Trusted AI at Capgemini, emphasized at this year’s VentureBeat Transform event that scaling AI requires addressing problems like inaccurate data and inefficient digital models. To reduce risk, he stressed defining clear boundaries for AI, embedding AI into business operations, and encouraging human-AI collaboration.

Steve discusses how the field of AI development is changing and why ethical guidelines and trust are essential when implementing AI systems in businesses.

 

Steve offers his thoughts on the current state of AI and the problems that lie ahead, drawing on many of the same observations he shared with VentureBeat.

The essay that follows sums up Steve’s perspective as articulated in the podcast, with particular focus on the following areas for leaders across industries:

 

Shifting the emphasis from data quality to data accuracy for efficient AI operations: Because AI systems increasingly depend on correct, real-time data to perform well in dynamic contexts, maintaining data accuracy should take precedence over merely guaranteeing data quality.

Redefining trust in AI across all business domains: Define trust in AI precisely, taking into account the particular needs of each business function. As AI adoption grows, trust becomes an organizational challenge that calls for cooperation between human expertise and AI capabilities.

 

Creating an AI Resources Department to handle systemic compliance risks: Establish an AI Resources Department focused on controlling the systemic risks of AI technologies and ensuring adherence to evolving industry frameworks and laws, such as the EU AI Act.
You can listen to the entire episode below:

 

For Efficient AI Operations, Reorient Your Attention From Data Quality To Data Accuracy

Steve begins by discussing how the emphasis has shifted from guaranteeing the quality of individual AI solutions to managing the accuracy of data across multiple AI applications within a company. In the past, he explains, building confidence in individual AI models was the main goal; today, the challenge is managing multiple AI solutions and guaranteeing data veracity for practical use cases.

 

He notes that while the phrase “data is the new oil” acknowledges the importance of data, it also implies that raw data is frequently useless without significant processing. The issue is that AI is now expected to work at the data source, “the wellhead,” making decisions in real-time settings like procurement and call centers.

 

To “drink” data in these situations, Steve says, AI must work with data as it is created, not after it has been cleaned up. Using GPS routing as an example, he highlights the futility of a routing algorithm built on six-hour-old traffic data and argues that AI must work from current, accurate data:

 

However, the data governance of most enterprises today is configured to ensure that the data is good enough for routing based on traffic from six hours ago. Those who develop applications, like me in my previous role, must begin to care about data accuracy if we are serious about implementing AI. Not quality, because cleaning up bad data is what we do for quality. Accuracy means doing it right the first time.

If we want to be able to use AI in operations and manufacturing, we need a fundamental change in mentality.

Steve Jones, Executive Vice President of Data-Driven Business & Trusted AI, Capgemini
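
That mindset change can be made concrete with a simple freshness gate placed in front of any AI consumer. The sketch below is illustrative only and not something described in the podcast: the captured_at field, the five-minute threshold, and the ai_router.suggest_route interface are all hypothetical, but together they show the shift from asking “is this data clean?” to asking “is this data current enough to act on right now?”

```python
from datetime import datetime, timedelta, timezone

# Hypothetical freshness threshold: traffic data older than this is treated
# as inaccurate for real-time routing, echoing the GPS analogy above.
MAX_DATA_AGE = timedelta(minutes=5)


def is_accurate_enough(record: dict) -> bool:
    """Return True if the record is fresh enough for a real-time AI decision."""
    # record["captured_at"] is assumed to be a timezone-aware datetime.
    age = datetime.now(timezone.utc) - record["captured_at"]
    return age <= MAX_DATA_AGE


def route_with_ai(traffic_record: dict, ai_router) -> str:
    """Only hand data to the AI at the 'wellhead'; refuse stale inputs outright."""
    if not is_accurate_enough(traffic_record):
        raise ValueError(
            "Traffic data is too old for real-time routing; "
            "refresh the feed instead of routing on stale inputs."
        )
    return ai_router.suggest_route(traffic_record)
```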

 

Rethink AI Trust In All Business Domains

According to Steve, as AI use grows more pervasive, companies must reinterpret what “trust” means in different business scenarios. He cites procurement as an example of how trusted AI has emerged as a significant factor in unexpected domains: businesses must now assess which AI models to use, where they came from, and whether the models have been tampered with. “Trusted AI” is a new consideration for procurement, a function that has historically prioritized product quality and supplier stability.

 

He contends that current AI governance frameworks, which permit standards to be applied subjectively, will not hold up once AI is applied widely. Instead, trust must be defined clearly according to the particular needs of each business area. A customer care chatbot and an AI managing safety-critical systems, for instance, require very different levels of trust. And as AI begins to fill roles historically handled by humans, trust in AI becomes an organizational issue rather than just a technological one.

 

Steve stresses that because of this change, every department in the company, including marketing, logistics, and procurement, needs to understand what trusted AI means to them. They must retain ownership rather than ceding it to outside contractors or other departments such as IT. These layers of control mean tailoring AI trust frameworks to specific business needs and making sure stakeholders understand, and are comfortable with, AI’s role in their operations.
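
One way to make “trust means different things in different domains” concrete is a per-function trust profile that each department owns. The sketch below is a hypothetical illustration, not a framework Steve describes: the field names, thresholds, and use cases are invented, but they show how a customer care chatbot and a safety-critical system can sit under explicitly different requirements.

```python
from dataclasses import dataclass


@dataclass
class TrustProfile:
    """Hypothetical per-function trust requirements; all fields are illustrative."""
    owner: str                  # business function that owns the use case
    human_review: bool          # must a person approve AI output before it is acted on?
    min_accuracy: float         # minimum acceptable accuracy for this use case
    provenance_required: bool   # must model origin and training data be documented?


# Each department defines trust on its own terms instead of inheriting
# a single corporate-wide standard or deferring wholesale to IT.
TRUST_PROFILES = {
    "customer_care_chatbot": TrustProfile(
        owner="customer service", human_review=False,
        min_accuracy=0.90, provenance_required=False,
    ),
    "safety_critical_control": TrustProfile(
        owner="operations", human_review=True,
        min_accuracy=0.999, provenance_required=True,
    ),
    "supplier_scoring": TrustProfile(
        owner="procurement", human_review=True,
        min_accuracy=0.95, provenance_required=True,
    ),
}
```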

 

Additionally, he discusses the idea of “hybrid intelligence,” which he thinks is essential for companies looking to successfully use AI. This phrase describes the blending of AI capabilities with human expertise. In order to improve decision-making and promote collaboration, organizations will require people who are not only knowledgeable in their specialized domains, such as marketing and procurement, but also have a solid understanding of AI technologies.

 

He also notes that many companies already have employees who write SQL and create reports on their own, pointing to a trend of non-technical staff doing their own data analytics. This trend is likely to accelerate with generative AI, with more workers using AI tools and building their own dashboards. He emphasizes, however, that each department’s objectives and needs, like those of marketing and procurement, are unique and need to be handled separately.

 

Creating A Department Of AI Resources To Handle Systemic Compliance Risks

Steve discusses the importance of putting suitable guardrails in place in AI solutions, distinguishing between two kinds:

Enterprise guardrails: Broad, organization-level controls that stop problematic interactions from ever reaching the AI system by blocking harmful or off-purpose inputs. He recommends, for example, blocking prompts that do not fit corporate goals, such as “Write me a haiku.”

 

Solution-oriented guardrails: These focus on the particular elements that make up a solution. Rather than trying to protect a single, intricate system, Steve advocates breaking AI down into smaller, more manageable components. By decomposing a solution into discrete functions, each with a specific purpose, organizations can set boundaries that define acceptable and unacceptable behavior for every component. This approach helps keep the AI system from producing undesirable results while also making maintenance easier, as the sketch below illustrates.
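
To show how the two layers relate, here is a minimal sketch of both guardrail types in front of a single, narrow component. The blocked-topic list, the discount range, and the model.propose_discount call are hypothetical, chosen only to illustrate the pattern of an organization-wide gate followed by a component-specific check; it is not Capgemini’s implementation.

```python
BLOCKED_TOPICS = ("write me a haiku", "poem", "joke")  # off-purpose requests


def enterprise_guardrail(prompt: str) -> None:
    """Organization-level gate: stop off-purpose prompts before any model sees them."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        raise PermissionError("Prompt blocked: outside approved business use.")


def validate_discount(output: dict) -> dict:
    """Solution-level gate: one small component, one narrow rule it must obey."""
    if not 0 <= output.get("discount_percent", 0) <= 20:
        raise ValueError("Component proposed a discount outside the allowed 0-20% range.")
    return output


def answer_pricing_query(prompt: str, model) -> dict:
    enterprise_guardrail(prompt)               # layer 1: enterprise guardrail
    proposal = model.propose_discount(prompt)  # layer 2a: small, single-purpose component
    return validate_discount(proposal)         # layer 2b: solution-oriented guardrail
```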

 

Steve concludes by highlighting the changing landscape of AI compliance, which he sees as resting on two main pillars: industry compliance and legal frameworks like the EU AI Act. He points out that different regions are actively drafting AI legislation, underscoring the need for businesses to adapt their compliance frameworks to the complexity that AI technologies bring.

 

Steve also foresees the creation of an “AI Resources Department,” similar to HR, tasked with managing AI’s systemic risks rather than just solution-level risks. This department would be responsible for ensuring that AI operations properly adhere to industry frameworks and laws, such as the EU AI Act and NIST recommendations.
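
As a rough illustration of what such a department might maintain, the sketch below models an AI system inventory with risk tiers loosely borrowed from the EU AI Act’s categories. The record fields, tier names, and entries are invented for the example, not drawn from the podcast.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """Hypothetical inventory entry for an AI Resources Department."""
    name: str
    owner_department: str
    risk_tier: str                                  # e.g. "minimal", "limited", "high"
    frameworks: list = field(default_factory=list)  # e.g. ["EU AI Act", "NIST AI RMF"]
    last_reviewed: str = ""                         # ISO date of last systemic-risk review


REGISTRY = [
    AISystemRecord("supplier-scoring-model", "Procurement", "high",
                   ["EU AI Act", "NIST AI RMF"], "2024-05-01"),
    AISystemRecord("marketing-copy-assistant", "Marketing", "limited",
                   ["EU AI Act"], "2024-04-12"),
]


def overdue_reviews(registry: list, cutoff: str) -> list:
    """Flag high-risk systems whose last systemic-risk review predates the cutoff date."""
    return [r for r in registry if r.risk_tier == "high" and r.last_reviewed < cutoff]
```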

 

He emphasizes that this transition is a significant departure from traditional technology management, where cybersecurity dominated risk conversations. Managing systemic risk becomes crucial as businesses rely more and more on nonlinear AI systems to make complex compliance decisions.

 

According to Steve, this refocused approach to risk management will be a crucial development for companies over the next 18 months. It calls for a clear distinction between systemic risks, which relate to AI governance, and solution-oriented risks, which relate to specific AI implementations.