Enhancing AI Governance

AI is moving fast. Faster than most companies can keep up. While organizations are eager to unlock its potential, many are just beginning to ask the right questions about how to manage it responsibly.  

Many companies recognize the need to strengthen their AI governance – but first, what exactly is AI governance?

Gartner defines AI governance as the management and oversight of the legal, ethical, and operational performance of AI systems. Put more simply, AI governance involves three key aspects: 

  • Reliability and effectiveness 
  • Ethical standards 
  • Compliance 

Why are these aspects important?

The importance of AI governance  

Whatever one’s views on AI risks, compliance is clearly essential, particularly when it carries real penalties. When failure to comply can lead to significant financial or even criminal consequences, companies naturally take it seriously. 

Reputation is another key driver.  A damaged reputation can undermine customer trust, discourage potential employees from working for your company, and even impact your stock price.  

Closely related to reputation is trust, which is generally defined as “a feeling that somebody or something can be relied upon.” Public relations theorist James Grunig defines it as the “willingness to open oneself to risk by engaging in a relationship with another party.” To maintain trust, organizations need to ensure that the public trusts their use of AI, at least to the extent that such use is visible to the public. 

To maintain trust, AI must reflect principles such as: 

  • Transparency 
  • Explainability 
  • Fairness 
  • Privacy 
  • Accountability 
  • Rectifiability 

You’ll notice that these principles of trust underlie the ethical standards for AI, and sometimes the compliance and effectiveness aspects as well. We’ll go through these principles in more detail below. 

So having introduced what AI governance is and why it’s important, let’s go back and take a closer look at the three pillars of AI governance.

Reliability and effectiveness 

The first aspect of AI governance deals with how well the AI application performs the tasks for which it was intended. Does it give legitimate answers or recommendations? 

Legitimate answers are not necessarily perfect answers, but they are as good as or better than those given by the humans whose efforts the application is trying to automate. Note that these efforts may not be those of the best human expert, but those of the people who actually do the job. And of course, it may be sufficient that answers of similar quality are delivered much more quickly, or at times of day when humans are not readily available. 

A key input to reliability and effectiveness is clean data for training the model. Without clean data, AI stands for artificial idiocy, not artificial intelligence. One aspect of clean training data is that it must cover the range of situations and subjects on which the AI system is intended to provide answers. Thus, an AI facial recognition application with insufficiently diverse training data may misidentify individuals or misinterpret their ethnic backgrounds or features. 
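As a minimal illustration of such a coverage check, the sketch below (in Python, with a hypothetical dataset file, group column, and threshold) flags groups that are underrepresented in the training data:

```python
import pandas as pd

# Hypothetical training set with a demographic or category column.
df = pd.read_csv("training_data.csv")  # assumed file and schema

# Share of each group in the training data.
coverage = df["group"].value_counts(normalize=True)

# Flag any group that falls below a minimum representation threshold.
MIN_SHARE = 0.05  # assumed policy threshold, not a standard
underrepresented = coverage[coverage < MIN_SHARE]
if not underrepresented.empty:
    print("Underrepresented groups:")
    print(underrepresented)
```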

Another aspect of effectiveness is return on investment (ROI). AI is at its heart an automation tool.  Automation takes place to cut costs and sometimes to provide higher levels of service (e.g., providing service at times when live people would generally not be available). Thus, even if AI gives reliable answers, it is not effective if its cost is higher than the value provided. 

To measure the effectiveness of AI, it’s important to establish baseline costs, such as personnel time and hourly pay, before AI implementation and then track the same measures afterward. Over time, this makes clear whether the AI implementation is not just reliable but also cost-effective. 
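The core arithmetic is simple; the discipline lies in gathering honest numbers. A minimal sketch, with all figures hypothetical:

```python
# Hypothetical monthly figures for one automated workflow.
hours_before = 1200.0   # staff hours before AI implementation
hours_after = 400.0     # staff hours after AI implementation
hourly_cost = 55.0      # fully loaded hourly pay

ai_monthly_cost = 20000.0  # licensing, hosting, and support

labor_savings = (hours_before - hours_after) * hourly_cost
net_benefit = labor_savings - ai_monthly_cost
roi = net_benefit / ai_monthly_cost

print(f"Monthly labor savings: ${labor_savings:,.0f}")   # $44,000
print(f"Net monthly benefit:   ${net_benefit:,.0f}")     # $24,000
print(f"ROI: {roi:.0%}")                                 # 120%
```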

One other aspect to be considered is the real-world value of labor savings. An ROI for an AI application may look good on paper, but if the employee time savings is not being repurposed to higher-value work or if positions are not eliminated over time, then the cost savings of AI are illusory. Measuring and managing this redeployment of labor savings should be a key aspect of the AI change management plan. 

Ethical standards

As we’ve shown, ethical standards for AI are important not only in their own right, but also for an organization’s reputation and the trust it engenders among customers, employees, the market, and the public. Furthermore, many of the ethical standards are also important for purposes of AI effectiveness and compliance. So let’s look at some of the key ethical principles that should underlie AI. 

Fairness – Fairness is driven by the quality of training data (i.e., it must not be incomplete or faulty), by correct algorithmic choice, and by avoiding the cognitive biases of developers. When AI output is unfair, it can frequently hurt people. For example, if a loan approval AI application makes it difficult for home loans to be approved in areas of the city where a particular racial or ethnic group predominantly lives, the application is most likely unfair (and illegal). Similarly, if an AI-based medical diagnosis for a woman is incorrect because most of the data on which the model was trained came from men, that too is unfair. 
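One common first check here is demographic parity: comparing approval rates across groups. The sketch below assumes a hypothetical decision log and column names; the 0.8 threshold mirrors the familiar four-fifths rule of thumb:

```python
import pandas as pd

# Hypothetical log of loan decisions with a protected-attribute column.
decisions = pd.read_csv("loan_decisions.csv")  # assumed file and schema

# Approval rate per group (the "approved" column is 0/1).
rates = decisions.groupby("group")["approved"].mean()

# Disparate-impact ratio: worst-off group vs. best-off group.
ratio = rates.min() / rates.max()
print(rates)
if ratio < 0.8:  # four-fifths rule of thumb
    print(f"Disparate-impact ratio {ratio:.2f} warrants review")
```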

Transparency and Explainability – The AI application must be open. People need to be able to understand broadly how the AI system works, how it comes to decisions, and on what data it has been trained. The opposite of transparency is the black box. With the black box paradigm, we don’t know how the AI application came up with an answer or recommendation; sometimes even the data scientists and AI engineers can’t explain it.

The AI application must be able to provide relatively simple, understandable explanations of the reasons behind its answers or predictions.  Gartner states that transparency “fosters trust, accountability and informed decision making” and that AI recommendations should be as measurable as possible.  For more information about AI transparency and explainability, see my previous blog on Explainable AI – A Key to the Practical Success of AI and ML.
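As one illustration of surfacing the reasons behind a model’s predictions, a permutation-importance ranking shows which inputs most drive its answers. The sketch below uses scikit-learn on a toy dataset; the feature names are hypothetical stand-ins:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy stand-in for a real model and dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["income", "debt_ratio", "tenure", "age", "region"]  # hypothetical

model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature degrade accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```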

Remediation and rectifiability – It is important that AI applications allow errors to be fixed. Errors may be due to faulty individual training records or to more general model drift over time. When remediation is not allowed or is impractical, not only do erroneous results persist, but the AI environment loses credibility. 
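Model drift is often caught by comparing the distribution of live inputs against the training baseline. Below is a minimal sketch of a population stability index (PSI) check; the data and thresholds are illustrative only:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population stability index between a baseline and a live sample."""
    cuts = np.percentile(expected, np.linspace(0, 100, bins + 1))
    cuts[0] = min(cuts[0], actual.min()) - 1e-9   # widen edges to cover
    cuts[-1] = max(cuts[-1], actual.max()) + 1e-9  # all live values
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(actual, cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

# Toy baseline vs. drifted live data.
baseline = np.random.normal(0.0, 1.0, 10_000)
live = np.random.normal(0.4, 1.0, 10_000)

# Common rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 investigate.
print(f"PSI = {psi(baseline, live):.3f}")
```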

Privacy – AI is frequently trained using very large amounts of data. It is easy for these data sets to include personally identifiable information (PII) for which no informed consent has been obtained. This is an obvious ethical and sometimes legal issue. It becomes particularly serious when that PII may surface in an LLM response.
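Even a crude scan of training text for obvious PII patterns can catch problems early. The sketch below covers only U.S.-style Social Security numbers and email addresses; a real pipeline would use a dedicated detection tool and human review:

```python
import re

# Rough patterns for illustration only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def scan_for_pii(text: str) -> dict:
    """Return counts of candidate PII matches per pattern."""
    return {name: len(p.findall(text)) for name, p in PATTERNS.items()}

sample = "Contact jane.doe@example.com, SSN 123-45-6789."
print(scan_for_pii(sample))  # {'ssn': 1, 'email': 1}
```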

Intellectual property rights – Large language models and generative AI are frequently based on enormous collections of data for which intellectual property concerns may or may not have been taken into account. This is a difficult area, as legal issues remain unsettled and the means of acting ethically are not always clear. As a Forbes article asserted, however, “Effective leaders facilitate users’ ability to respect … copyrights, sources, and use information responsibly.”

Security and resilience – As AI becomes much more integral to areas impacting people, it is crucial that it not endanger human life or well-being.  In particular, AI must not pose a serious risk to national security, economic security, or public health or safety.

Accountability – TechTarget defines accountability as “an assurance that an individual or organization is evaluated on its performance or behavior related to something for which it is responsible.” With AI applications, it is easy to blame the system rather than putting accountability where it belongs: with engineers or leaders. We all make mistakes, but the importance of AI is such that personal and leadership accountability must be maintained. This requires ongoing monitoring. 

Socially beneficial – Many ethicists would also include the concept that AI should be socially beneficial. Although this is a strong moral imperative, in practice it poses many difficult questions about balancing competing positives. For example, how do we balance the very high energy usage of GenAI against the positive economic benefits of AI? How do we balance the ability to automate more mundane job activities against concerns over unemployment and job displacement? None of this is to suggest that the objective of being socially beneficial be jettisoned, but simply that companies will need to be constrained by what is practically within their control. 

AI regulatory compliance

The final key aspect of AI governance is regulatory compliance. 

This is a complex and fast-evolving area, with many mandates in the U.S. still in their infancy. For this reason, it is important to maintain compliance not just today, but next month, next year, and so on. To handle this, it’s necessary to look not just at what compliance mandates currently exist, but at the direction in which mandates are headed. This can frequently be surmised from national debates, leading-edge state mandates, and similar factors.

Furthermore, while some of these are true mandates with significant penalties, many are really frameworks for demonstrating best practices, or something in between.

As a complete discussion of the various compliance mandates and frameworks is beyond the reasonable scope of this blog, I have simply listed some of the more important ones, together with a link and a few comments. 

Of particular note for U.S. banks is SR 11-7: Guidance on Model Risk Management, which was issued by the U.S. Federal Reserve and the Office of the Comptroller of the Currency.  This covers a wide range of models including credit risk assessments, market risk management, and regulatory compliance.  Penalties for non-compliance can be substantial.  

Evaluating your current AI governance 

So how do we enhance AI governance?  The first step is to understand how well your AI projects are currently being governed. 

Based on an overall framework made up of the types of criteria discussed in this blog, it is crucial both to conduct periodic audits of your AI governance and to incorporate the framework into your AI project approval and management processes on an ongoing basis. As part of your audit, you’ll want to identify issues and develop recommendations and a roadmap for improvement. Although the formalized nature of an audit is crucial for compliance purposes, it can also be helpful in evaluating AI ethics issues and effectiveness. 

Recommendations should obviously not be simply a laundry list of every desirable activity. They need to be prioritized based on their importance, particularly the payback of the activity and the opportunity cost of using the resources for some other activity. Prioritization should use measures that examine both the specificity of the compliance mandate and the (legal and reputational) penalties for non-compliance, as sketched below.
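As a simple illustration of such prioritization, a weighted scoring model can rank recommendations; the weights, scores, and recommendation names below are entirely hypothetical:

```python
# Hypothetical recommendations scored 1-5 on each criterion.
recommendations = [
    # (name, mandate specificity, penalty severity, payback, effort)
    ("Document model inventory", 5, 4, 3, 2),
    ("Add bias testing to release process", 3, 3, 4, 3),
    ("Formalize PII review of training data", 4, 5, 3, 4),
]

WEIGHTS = {"specificity": 0.3, "penalty": 0.3, "payback": 0.4}  # assumed

def priority(spec, penalty, payback, effort):
    """Weighted benefit divided by effort, a rough opportunity-cost proxy."""
    benefit = (WEIGHTS["specificity"] * spec
               + WEIGHTS["penalty"] * penalty
               + WEIGHTS["payback"] * payback)
    return benefit / effort

for name, *scores in sorted(recommendations,
                            key=lambda r: -priority(*r[1:])):
    print(f"{priority(*scores):.2f}  {name}")
```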

One way to deal with the reality of limited resources for AI governance is to set aside a reasonable, market-informed percentage of AI spend for governance. For example, IBM found that in 2024, the average company spent 4.6% of its AI spend on AI ethics, and it expects this percentage to grow to 5.4% in 2025. Similarly, AI compliance spending may be redirected from current compliance spending. Finally, AI effectiveness spending should be captured as part of the initial business plans for any AI platform or application spending. 

AI governance platforms can help, but no platform today can eliminate the basic steps of an AI governance audit or the necessity of ongoing monitoring. Whether an actual tool is used or not, it is critical to put together an overall framework for guiding your AI governance.

How Celsior can help

We help organizations tackle the core of AI governance—where ethics, performance, and compliance intersect.

  • Data & AI governance workshop – We guide your team in setting governance policies, defining roles, assessing training data quality, and implementing tools. 
  • Bias elimination – Using predictive modeling, we flag and mitigate potential bias—both in model training and real-world usage. 
  • Explainable, transparent AI – Our platform translates complex model decisions into plain language explanations, with numeric thresholds and confidence scoring. It works standalone or as a plug-in to your existing AI platform.

Conclusion 

No matter where you are in your AI journey, governance is essential. It protects your organization, strengthens your AI’s value, and builds the trust needed for long-term success. 

By developing a strong governance framework, conducting regular audits, and embedding governance into your AI lifecycle, you can make sure your AI does what it’s meant to do — effectively, ethically, and responsibly. 




 
