Building AI Trust

Apr 9, 2025 | White Papers

In the context of AI making decisions akin to the trolley problem, how important is it that the AI’s decision-making process is transparent and explainable to the people affected by it? Can transparency reduce ethical concerns, or does it complicate the issue further?

 

Do the needs of the one outweigh the needs of the many? This question has been with us since the inception of democracy in Ancient Greece, and it is very difficult to answer, especially in today's society where we value individual liberties and rights. That acceptance of individual liberty creates an open society where creativity and free communication prevail.

With this openness, we expect transparency from our business and government leaders when they make decisions that affect us. People want to understand what the decision means, how it was determined, and what information was used to support it. Today, many businesses and governments use an AI-derived decision-making process to make better decisions given all the different inputs that leaders must consider.

Two key attributes, transparency and explainability, can build trust in those AI-derived decisions. The ability of people to understand how the decisions were made, often referred to as explainable AI or XAI, is as critical as transparency. The importance of transparency and explainability increases as the decision becomes more critical, such as in healthcare devices [1], financial services [2], and other critical services that we depend on as individuals.

There are two categories of decision-making or decision-recommending AI models: White Box and Black Box.

White Box models generally have a few simple rules and are trained on a limited data set. They typically use linear regression or decision tree algorithms, which are more easily understood by the AI model developer, the decision maker, and the individual potentially impacted by the decision.

Black Box models are more complicated; they use opaque algorithms such as random forests or neural networks to evaluate far more input parameters than a White Box model. [2] A Black Box model will require more insight before people trust its output and the AI-derived decision.
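To make the distinction concrete, here is a minimal sketch, assuming scikit-learn and its bundled breast-cancer dataset purely for illustration: a shallow decision tree (White Box) whose rules can be printed and read, next to a random forest (Black Box) whose reasoning is not self-evident from its output.

```python
# Minimal White Box vs. Black Box sketch; the dataset and model settings are
# illustrative assumptions, not taken from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
feature_names = list(load_breast_cancer().feature_names)

# White Box: a shallow decision tree whose handful of rules can be printed and read.
white_box = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(white_box, feature_names=feature_names))

# Black Box: a random forest of hundreds of trees; there is no single readable rule set.
black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(black_box.predict(X[:5]))  # the prediction arrives without an obvious "why"
```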

Transparency is a critical expectation for people to trust any AI-derived decision. This means that the AI model developer should provide insights into how the model was built and evaluated. In the decision-making process, people want to know if the information is correct, unbiased, and appropriate for an AI model.

The AI model developer must document and make available the answers to the following questions (a minimal documentation sketch follows the list):

  • What data is used as input into the model?
  • Where did the entity get the input data?
  • How was that data processed?
  • What does the output say?
  • How does that output contribute to the decision?
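As one way to capture these answers in a single artifact, here is a hypothetical "model card" sketch in Python; the field names and example values are illustrative assumptions, not a standard schema.

```python
# Hypothetical model-card record mirroring the transparency questions above;
# field names and sample values are illustrative assumptions only.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    input_data: str                 # What data is used as input into the model?
    data_source: str                # Where did the entity get the input data?
    processing_steps: list = field(default_factory=list)  # How was that data processed?
    output_description: str = ""    # What does the output say?
    decision_role: str = ""         # How does that output contribute to the decision?

card = ModelCard(
    input_data="De-identified patient vitals and lab results",
    data_source="Hospital EHR export, 2020-2024",
    processing_steps=["drop records with missing labs", "normalize units"],
    output_description="Probability of readmission within 30 days",
    decision_role="Flags cases for clinician review; does not decide alone",
)
print(card)
```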

 

Being able to answer these questions provides transparency, but not necessarily comprehension of how the AI model works. That leads us to the explainable AI (XAI) discussion next.

The ability to explain how an AI model arrived at its decisions and/or recommendations is a critical component of building trust in the AI-derived decision-making process. White Box models, with fewer rules and parameters, are simpler to understand and are more likely to be trusted by the individual. Black Box models are much more difficult to explain because they may be processing millions or billions of input parameters with much more complex algorithms.

Cognitive Load Theory suggests that humans can understand models with up to about seven rules. Black Box models exceed this considerably, given the number of input parameters they process. If you, as the AI model developer, can deploy a White Box solution, that is the recommended approach.

The AI model developer should follow the KISS principle: Keep It Simple, Stupid. Black Box models will require much better documentation, data flow diagrams, and definitions. Black Box explainability also benefits from using recognized best practices and frameworks to explain the model.
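As one example of such a practice, the sketch below uses permutation importance, a widely used post-hoc explanation technique, to summarize which inputs a Black Box model leans on; the scikit-learn dataset and model are illustrative assumptions, and this is not the specific framework prescribed by any of the cited sources.

```python
# Post-hoc explanation sketch using permutation importance; dataset and model
# are illustrative assumptions, not drawn from the article or its references.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
feature_names = load_breast_cancer().feature_names

black_box = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much accuracy drops; a large
# drop means the model relies heavily on that feature.
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```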

Medical devices, as outlined in a Med Device Online article, have a set of best practices built on the ISO 14971 risk-management framework to address explainability. [1] This is important in potentially life-impacting devices. NIST developed an explainability framework built on four principles: explanation, meaningful, explanation accuracy, and knowledge limits. [3]

The ethical concerns of AI-derived decision making are addressed by giving the decision maker transparency into, and comprehension of, the AI model. The decision maker will understand the appropriate use of the AI model, its limitations, and its inherent biases, and can then course-correct when the AI model's output does not make sense or is inappropriate. This leads to better decision making and the ability to avoid the ethical challenges that can otherwise result.

In conclusion, transparency and explainability of AI models will build trust in AI applications and in the decisions we make using them. They will also empower our leaders to avoid the ethical concerns that can arise when AI applications are used in the decision-making process.

 

REFERENCES:

1. Zhao, Yu. "Enhancing Trust, Safety, and Performance Through Explainability in AI-Enabled Medical Devices." Med Device Online, January 30, 2025.
2. Candelon, Francois. "AI Can Be Both Accurate and Transparent." Harvard Business Review, May 12, 2023.
3. Phillips, P. Jonathon. "Four Principles of Explainable Artificial Intelligence." National Institute of Standards and Technology, January 30, 2025.


A condensed version of this article is published on our Blog under the title Trust in AI Models.
