Because generative and predictive algorithms are already influencing important public sector decisions, government and business leaders widely agree that governance is now fundamental to AI, not optional.
The Colorado Office of Information Technology’s generative AI guidelines demonstrate why: adoption can outpace governance, with 16% of firms reporting cybersecurity risks and nearly a quarter reporting faulty outputs.
According to recent OECD research, government AI frequently remains stuck in pilot programs because of fragmented data, outdated systems, and inadequate impact monitoring. The report goes on to argue that governance requires accountability and measurement to be defined early.
NLP Logix defines AI governance in terms of testing, ethics, and policy. In practice, these standards mean running standardized bias and robustness testing before and after deployment, documenting models, and requiring human approval in sensitive workflows. From this viewpoint, governance is both a safeguard against risk and an enabler of reliable, scalable AI.
In a special series sponsored by NLP Logix, Emerj Editorial Director Matthew DeMello interviewed Russell Dixon, Strategic Advisor at NLP Logix, Matt Berseth, Co-Founder and CIO at NLP Logix, and Naveen Kumar, Head of Insider Risk, Analytics, and Detection at TD Bank, to discuss how businesses can use AI tools efficiently, strike a balance between innovation and governance, and gauge actual business impact.
Their conversations highlight how AI projects fail when controls, training, and measurement are neglected. This article examines three key concepts for effective AI adoption, spanning strong governance, measurable business results, and strategic deployment:
- Treat AI governance as an integrated control layer: Enforce role-based access, stringent data classification, phased rollouts, and mandatory human oversight for safer implementation.
- Plan, govern, train, and measure AI: Deploy AI tools with a clear strategy, defined use cases, upfront governance, user training, and measurable adoption to secure successful results and return on investment.
- Enforce strategic planning and metrics for AI success: Plan AI deployments with precise objectives, metrics, and usage monitoring to avoid tool creep and produce measurable value.
AI Governance As An Integrated Control System
Episode: Managing AI at Scale for Automation, Fraud, and Compliance with TD Bank’s Naveen Kumar
Guest: Naveen Kumar, Head of Insider Risk, Analytics, and Detection at TD Bank
Expertise: Fraud and Threat Detection, Regulatory Compliance
Brief Recognition: Kumar has over 16 years of experience in fraud, insider risk, AML, and sanctions. He previously worked at Stellaris Health Network and PwC. He holds a Master of Science in data modeling from the Rochester Institute of Technology.
In the interview, Kumar makes the case that AI governance begins with traceability: knowing what data is used, who can access it, and how AI interacts with it.
Role-based AI is like a courteous bouncer, in my opinion. It only offers information according to role; finance is unaware of any ongoing insider investigations. If you ask the AI about them, nothing should come back. Guardrails are simply an unseen force: regardless of the instruction, the AI cannot violate these restrictions. That prevents people from asking a lot of questions and extracting information an attacker shouldn’t know.
Naveen Kumar, TD Bank’s Head of Insider Risk, Analytics, and Detection
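To make the “courteous bouncer” concrete, below is a minimal sketch of role-based guardrails for an internal assistant, assuming hypothetical role names, topic labels, and a stubbed `generate_answer` call; it illustrates the pattern Kumar describes, not TD Bank’s actual implementation.

```python
# Minimal sketch of role-based guardrails for an internal AI assistant.
# The roles, topic labels, and generate_answer() below are hypothetical
# illustrations, not a description of any bank's actual controls.

ROLE_ALLOWED_TOPICS = {
    "finance": {"budgets", "forecasts"},
    "insider_risk": {"budgets", "forecasts", "insider_investigations"},
}

def classify_topic(question: str) -> str:
    """Toy topic classifier; a real system would use a trained model."""
    if "investigation" in question.lower():
        return "insider_investigations"
    return "budgets"

def generate_answer(question: str) -> str:
    """Stand-in for the actual model call."""
    return f"[model response to: {question}]"

def answer(question: str, role: str) -> str:
    topic = classify_topic(question)
    # Enforce the guardrail before the model sees the request, so no
    # prompt phrasing can talk the assistant past the restriction.
    if topic not in ROLE_ALLOWED_TOPICS.get(role, set()):
        return "No results."  # the "courteous bouncer": deny without detail
    return generate_answer(question)

print(answer("Any open investigations on trader X?", role="finance"))
# No results.
```

The design choice that matters here is that the check runs before the model is invoked and the refusal carries no detail, so a finance user probing for an investigation learns nothing, not even that one exists.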
He describes balancing innovation, customer responsibilities, and security and regulatory compliance as a painstaking process of weighing trade-offs. He suggests a staged implementation that begins with limited data access and narrow use cases, then expands rights and data sources only once controls are proven effective.
Classification is also crucial: Kumar advises leaders to categorize data as critical, sensitive, or safe and to keep the most critical information out of initial iterations. This methodical, systematic approach, he believes, is what helps firms resolve the tension between risk and utility.
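One way to encode that staged, tiered approach is an explicit allowlist of data tiers per rollout phase. The sketch below borrows Kumar’s critical/sensitive/safe framing; the phase schedule itself is an assumption for illustration.

```python
# Sketch of gating which data tiers an AI system may touch at each rollout
# phase. Tier names follow the critical/sensitive/safe framing; the phase
# schedule itself is an illustrative assumption.

PHASE_ALLOWED_TIERS = {
    1: {"safe"},                           # initial pilot: safe data only
    2: {"safe", "sensitive"},              # expand once controls are proven
    3: {"safe", "sensitive", "critical"},  # full scope, if ever permitted
}

def may_ingest(data_tier: str, rollout_phase: int) -> bool:
    """Return True only if the tier is allowed at this rollout phase."""
    return data_tier in PHASE_ALLOWED_TIERS.get(rollout_phase, set())

assert may_ingest("safe", 1)
assert not may_ingest("critical", 1)  # critical data stays out of early iterations
```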
He draws a clear contrast between compliance and retail use cases, emphasizing that the domain heavily shapes how AI is deployed. In retail, where client acquisition is the main objective, a more aggressive use of AI makes sense. Compliance demands the opposite approach; firms must be far more cautious.
Kumar illustrates the point with Suspicious Activity Reports: AI can assist the process, but it should not be permitted to operate entirely without human oversight.
The difficulty is striking a balance between automation and oversight. Kumar advises managing it by weighing speed against accuracy: automate low-risk notifications and assign higher-risk cases to human reviewers. Ultimately, he explains, the domain and the use case determine the right balance. AI should not always be viewed as a fully autonomous, end-to-end solution, but rather as an efficiency layer or a first draft.
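As a sketch of that speed-versus-accuracy framing, the routing rule below automates low-risk notifications and sends cases above an assumed risk threshold to human reviewers; the 0.7 cutoff and the risk-score source are illustrative, not figures from the interview.

```python
# Sketch of routing alerts by risk so automation handles volume and humans
# handle judgment. The 0.7 threshold and the risk-score source are
# illustrative assumptions.

def route_alert(risk_score: float, threshold: float = 0.7) -> str:
    """Send higher-risk cases to human reviewers, automate the rest."""
    if risk_score >= threshold:
        return "human_review"
    return "auto_notify"

scores = [0.12, 0.55, 0.91]
print([route_alert(s) for s in scores])
# ['auto_notify', 'auto_notify', 'human_review']
```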
He then outlines a series of practical steps for advancing AI in a regulated manner: involve compliance early, build a thorough inventory of internal and vendor AI, and start in safe sandboxes. The aim is visibility into which models are in use, what data they touch, and how they are governed.
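A minimal way to begin such an inventory is one structured record per system, internal or vendor-supplied. The fields below are a plausible, hypothetical baseline for the visibility Kumar describes, not a regulatory schema.

```python
# Sketch of a per-system record for an AI inventory covering internal and
# vendor models. Fields are a plausible baseline, not a regulatory schema.
from dataclasses import dataclass, field

@dataclass
class AIInventoryEntry:
    name: str
    owner: str                     # accountable team or vendor contact
    vendor: str | None             # None for internally built systems
    data_tiers: set = field(default_factory=set)  # e.g. {"safe", "sensitive"}
    environment: str = "sandbox"   # sandbox -> limited pilot -> production
    compliance_reviewed: bool = False

registry = [
    AIInventoryEntry("doc-summarizer", owner="ops-ai", vendor=None,
                     data_tiers={"safe"}, compliance_reviewed=True),
    AIInventoryEntry("kyc-screening", owner="compliance", vendor="AcmeAI",
                     data_tiers={"sensitive"}),
]
```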
AI for ROI: Plan, Govern, Train, and Measure
Episode: Using ChatGPT Enterprise and Microsoft Copilot to Your Advantage with Matt Berseth and Russell Dixon of NLP Logix
Guest: Russell Dixon, Strategic Advisor at NLP Logix
Expertise: Information Technology, Business Transformation, and Technology Innovation
Brief Recognition: Dixon is a Strategic Advisor at NLP Logix focused on business transformation and international operations. With more than 20 years in information technology, he counsels businesses on cloud computing and artificial intelligence. He specializes in business automation and enterprise sales, with an emphasis on identifying high-value use cases that increase return on investment.
In his podcast appearance, Dixon makes the case that although products like Microsoft Copilot and ChatGPT are applicable almost universally, adopting them pays off only when training, safeguards, and productivity and adoption metrics come with them.
He cautions that without that framework, merely introducing AI tools into the company will not produce outcomes or return on investment. Instead, users are likely to lose patience, look for alternatives, or worse, conclude that the tools are useless.
Governance therefore needs to be established before AI tools are implemented. He contends that organizations won’t achieve the desired outcomes if they lack a practical use case, a deployment plan, and a user training strategy.
This also has a governance component. How will I utilize this technology, and what safeguards will I erect around it to ensure the security of my client and internal data? Last but not least, you must consider how you will gauge productivity. Will you rely on user feedback to gauge uptake as the project is implemented, or will you implement more formal measurement tools and procedures along the way?
Russell Dixon, Strategic Advisor at NLP Logix
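On the “more formal measurement tools” side of Dixon’s question, a first step can be as simple as logging structured usage events as they happen. The event fields and JSON-lines format below are assumptions for illustration.

```python
# Sketch of logging structured usage events so uptake can be measured
# formally rather than inferred from anecdote. The field names and the
# JSON-lines file are illustrative assumptions.
import json
import time

def log_usage(user: str, team: str, tool: str, task: str,
              path: str = "usage.jsonl") -> None:
    event = {"ts": time.time(), "user": user, "team": team,
             "tool": tool, "task": task}
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")

log_usage("adixon", "sales", "copilot", "draft_email")
```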
Dixon directly links the success of AI programs to the quality of the use-case definition. The more generic the use case, the higher the likelihood of success. For instance, expectations can be quite high when using tools like Copilot or ChatGPT to promote general workplace efficiency, particularly when the aim is a broad productivity boost across typical office tasks.
Conversely, he says, highly specialized or narrow use cases are riskier: the narrower the solution, the greater the chance it won’t produce the intended results. He agrees with colleague and podcast guest Matt Berseth, Co-Founder and CIO of NLP Logix, that aiming for an 80% success rate is appropriate and that, if firms are pushing innovation, some degree of failure is expected, if not required.
Dixon does stress the importance of early signals, though. If adoption and outcomes aren’t materializing right away, organizations should pause and reassess. In his view, the technology itself is capable; when initiatives fail, the underlying cause is more likely user behavior or a mismatch between the tool and the use case than the limits of AI.
Implement Metrics And Strategic Planning For AI Success
Episode: Using ChatGPT Enterprise and Microsoft Copilot to Your Advantage with Matt Berseth and Russell Dixon of NLP Logix
Guest: Matt Berseth, Co-Founder and CIO of NLP Logix
Expertise: AI, Data Science, and Software Engineering
Brief Recognition: Berseth is the Co-Founder and CIO of NLP Logix, where he oversees the delivery of cutting-edge machine learning solutions for sectors including banking, logistics, and healthcare. He has over 20 years of technical leadership experience, with engineering and architecture roles at CEVA Logistics and Microsoft. He holds a master’s degree in software engineering from North Dakota State University and serves as an adjunct professor.
A good AI deployment, according to Berseth, is not merely rolled out and counted but measured, thoroughly understood, and consistently reinforced. He distinguishes adoption from value, arguing that usage trends matter more than license counts, and suggests pairing user feedback with data about who uses the tools, how frequently, and for which tasks.
Adoption, he says, is simply “bodies in the tool.” What matters most are usage patterns: the high-leverage ways in which individuals, teams, and departments are actually using AI.
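Building on usage events like the ones sketched earlier, a small aggregation can make the adoption-versus-usage distinction measurable. The two metrics below, license utilization and tasks per active user, are illustrative assumptions rather than Berseth’s formulas.

```python
# Sketch distinguishing "bodies in the tool" (license holders who showed up)
# from usage depth (distinct tasks per active user). Metric definitions are
# illustrative assumptions, building on usage events like those logged above.
from collections import defaultdict

def usage_summary(events: list, licensed_users: set) -> dict:
    tasks_by_user = defaultdict(set)
    for e in events:
        tasks_by_user[e["user"]].add(e["task"])
    active = set(tasks_by_user)
    return {
        "license_utilization": len(active) / max(len(licensed_users), 1),
        "avg_tasks_per_active_user":
            sum(len(t) for t in tasks_by_user.values()) / max(len(active), 1),
    }

events = [{"user": "a", "task": "draft_email"},
          {"user": "a", "task": "summarize"},
          {"user": "b", "task": "draft_email"}]
print(usage_summary(events, licensed_users={"a", "b", "c"}))
# license_utilization ~0.67, avg_tasks_per_active_user 1.5
```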
He also warns of a growing tendency known in development circles as “tool creep” and explains what can go wrong without adequate planning:
“I believe that tool creep is currently taking place. These tools end up being simply one more item we are unable to use. Despite purchasing licenses, we don’t perceive any benefit. I prefer ChatGPT’s interface when I’m at home because I don’t want to learn a new, constantly changing program at work. The true problem is that you need to consider these technologies as a strategic component of your business AI plan. You require a strategy, specific objectives, measurements, and a means of promoting adoption within the company. If you do that, you will accomplish your objectives. If not, you’ll be back in three or six months attempting to correct a rollout that got off to a bad start.”
Matt Berseth, CIO and co-founder of NLP Logix
On the other hand, he stresses that some failure is necessary to spur innovation. That roughly 80% of his team’s AI proofs of concept make it to production and remain there for a year demonstrates the technology’s potential. When projects struggle, the cause is typically poor use-case selection, not the tools. Accessible AI like ChatGPT has made creating value easier than ever; businesses just need to choose the right problems to tackle.
According to Berseth, an organization boasting a 100% proof-of-concept success rate may be avoiding risk rather than promoting innovation. When teams explore new ideas, some failure is expected and even encouraged.
On governance, Berseth reiterates that successful AI implementation requires structured planning, precise objectives, and well-defined KPIs. To ensure the responsible, efficient, and measurable use of AI tools such as Microsoft Copilot and ChatGPT, he stresses, enterprises must choose use cases carefully, monitor uptake, and track results at both the tool and business level.

