The Secret To Enterprise AI Success: Make It Understandable And Trustworthy

The promise of artificial intelligence is finally coming to life. Be it healthcare or fintech, companies across sectors are racing to implement LLMs and other machine learning systems to augment their workflows and free up time for more pressing, higher-value tasks. But it’s all moving so fast that many may be ignoring one key question: How do we know the machines making these decisions aren’t hallucinating?

In the field of healthcare, for instance, AI has the potential to predict clinical outcomes or discover new drugs. If a model veers off-track in such scenarios, it could produce results that end up harming a person, or worse. Nobody would want that.

This is where the concept of AI interpretability comes in. It is the process of understanding the reasoning behind decisions or predictions made by machine learning systems and making that information comprehensible to decision-makers and other relevant parties with the autonomy to make changes.

When done right, it can help teams detect unexpected behaviors, allowing them to get rid of the issues before they cause real damage.

But that’s far from being a piece of cake.

First, let’s understand why AI interpretability is a must

As critical sectors like healthcare continue to deploy models with minimal human supervision, AI interpretability has become important for ensuring transparency and accountability in the systems being used.

Transparency ensures that human operators can understand the underlying rationale of the ML system and audit it for biases, accuracy, fairness and adherence to ethical guidelines. Meanwhile, accountability ensures that the gaps identified are addressed on time. The latter is particularly essential in high-stakes domains such as automated credit scoring, medical diagnoses and autonomous driving, where an AI’s decision can have far-reaching consequences.

Beyond this, AI interpretability also helps establish trust and acceptance of AI systems. Essentially, when individuals can understand and validate the reasoning behind decisions made by machines, they are more likely to trust their predictions and answers, resulting in widespread acceptance and adoption. More importantly, when there are explanations available, it is easier to address ethical and legal compliance questions, be it over discrimination or data usage.

AI interpretability is no easy task

While there are obvious benefits of AI interpretability, the complexity and opacity of modern machine learning models make it one hell of a challenge.

Most high-end AI applications today use deep neural networks (DNNs), which employ multiple hidden layers to learn reusable, modular functions and to use their parameters more efficiently when modeling the relationship between input and output. Given the same number of parameters and the same amount of data, DNNs readily outperform shallow neural networks, which are often reserved for simpler tasks such as linear regression or feature extraction.

However, this architecture of multiple layers and thousands or even millions of parameters renders DNNs highly opaque, making it difficult to understand how specific inputs contribute to a model’s decision. In contrast, shallow networks, with their simple architecture, are highly interpretable.

The structure of a deep neural network (DNN) (Image by author)
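
To make that contrast concrete, here is a minimal sketch assuming PyTorch as an illustrative framework (nothing in this article prescribes it): a shallow network with one wide hidden layer next to a deeper network of roughly the same size, with widths chosen only to keep the parameter counts comparable.

```python
# A minimal sketch of the shallow-vs-deep contrast, assuming PyTorch purely
# for illustration; the layer widths and depths are arbitrary choices.
import torch.nn as nn

def count_params(model: nn.Module) -> int:
    """Total number of parameters in a model."""
    return sum(p.numel() for p in model.parameters())

n_features, n_classes = 20, 2

# Shallow network: one wide hidden layer, easy to reason about.
shallow = nn.Sequential(
    nn.Linear(n_features, 256),
    nn.ReLU(),
    nn.Linear(256, n_classes),
)

# Deep network of roughly similar size: several narrower hidden layers.
# Tracing how any single input reaches the output is much harder here.
deep = nn.Sequential(
    nn.Linear(n_features, 48),
    nn.ReLU(),
    nn.Linear(48, 48),
    nn.ReLU(),
    nn.Linear(48, 48),
    nn.ReLU(),
    nn.Linear(48, n_classes),
)

print(f"shallow params: {count_params(shallow):,}")  # ~5.9K
print(f"deep params:    {count_params(deep):,}")     # ~5.8K
```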

To sum up, there’s often a trade-off between interpretability and predictive performance. Go for a high-performing model like a DNN and the system may not deliver transparency; go for something simpler and more interpretable, like a shallow network, and the accuracy of its results may fall short.

Striking a balance between the two continues to be a challenge for researchers and practitioners worldwide, especially given the lack of a standardized interpretability technique.

What can be done?

To find some middle ground, researchers are developing rule-based and interpretable models, such as decision trees and linear models, that prioritize transparency. These models offer explicit rules and understandable representations, allowing human operators to interpret their decision-making process. However, they still lack the complexity and expressiveness of more advanced models. 
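
As a quick illustration of what that transparency looks like, here is a minimal sketch assuming scikit-learn and a stock dataset (both placeholder choices): a depth-limited decision tree whose learned rules can be exported and read directly.

```python
# A minimal sketch (scikit-learn assumed) of a rule-based, interpretable model:
# a shallow decision tree whose learned rules can be printed and audited.
# The dataset and depth limit are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# export_text turns the fitted tree into human-readable if/else rules.
print(export_text(tree, feature_names=list(X.columns)))
```

The printed if/else rules are the model itself; there is nothing hidden left to explain after the fact.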

As an alternative, post-hoc interpretability, where one applies tools to explain the decisions of models once they have been trained, can come in handy. Currently, methods like LIME (local interpretable model-agnostic explanations) and SHAP (SHapley Additive exPlanations) can provide insights into model behavior by approximating feature importance or generating local explanations. They have the potential to bridge the gap between complex models and interpretability.
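
For a taste of what post-hoc explanation looks like in code, here is a minimal SHAP sketch; the shap package, the gradient-boosted model and the dataset are all illustrative assumptions rather than recommendations.

```python
# A minimal post-hoc explanation sketch using SHAP; the model and dataset
# are illustrative stand-ins for whatever black box is already in place.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# An opaque model, explained only after training (i.e. post hoc).
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP values attribute each prediction to additive per-feature contributions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])

# Global summary: features with the largest mean absolute contribution.
mean_abs = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(X.columns, mean_abs), key=lambda t: -t[1])[:5]:
    print(f"{name}: {value:.3f}")
```

LIME follows a similar pattern but explains one prediction at a time by fitting a simple local model around it.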

Researchers can also opt for hybrid approaches that combine the strengths of interpretable models and black-box models, achieving a balance between interpretability and predictive performance. These approaches leverage model-agnostic methods, such as LIME and surrogate models, to provide explanations without compromising the accuracy of the underlying complex model.
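
One common hybrid pattern is the global surrogate: train an interpretable model to imitate the black box’s predictions rather than the ground-truth labels, then check how faithfully it does so. A minimal sketch, again assuming scikit-learn and placeholder model choices:

```python
# A minimal global-surrogate sketch: a shallow decision tree is trained to
# mimic a black-box model's predictions, and its fidelity is reported.
# Models, dataset and depth are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box" whose behavior we want to approximate.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# The surrogate learns the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate fidelity to black box: {fidelity:.2%}")
```

If fidelity is high, the surrogate’s rules give a reasonable account of how the black box behaves; if it is low, its explanations shouldn’t be trusted.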

AI interpretability: The big possibilities

Moving ahead, AI interpretability will continue to evolve and play a pivotal role in shaping a responsible and trustworthy AI ecosystem.

The key to this evolution lies in the widespread adoption of model-agnostic explainability techniques, which can be applied to any machine learning model regardless of its underlying architecture, and in the automation of the training and interpretability process. These advancements will empower users to understand and trust high-performing AI algorithms without requiring extensive technical expertise. At the same time, it will be critical to balance the benefits of automation with ethical considerations and human oversight.
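
Permutation importance is one such model-agnostic technique already in common use; the sketch below (scikit-learn and a stock dataset, both assumptions) needs nothing from the model beyond predictions and a score, so it works with any architecture.

```python
# A minimal model-agnostic sketch: permutation importance works on any fitted
# estimator, regardless of architecture. The model and dataset are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much the score drops;
# large drops indicate features the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```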

Finally, as model training and interpretability become more automated, the role of machine learning experts may shift to other areas, like selecting the right models, implementing sound feature engineering and making informed decisions based on interpretability insights.

They’d still be around, just not for training or interpreting the models.

Shashank Agarwal is manager, decision science at CVS Health.
