AI can explain anything – except why no one listens to me

Paul Yang - August 27th, 2021

AI models don’t make decisions – people make decisions. And so, any organization that previously did not act on basic analytics will continue not to act on cutting-edge ML. Adding sophistication to an analysis might increase predictive accuracy, but it will also increase coordination complexity. We cannot simply acquire new tools or capabilities and hope for the best. Instead, we need to reimagine the final step of presenting analysis, and rethink how AI/ML outputs get turned into decisions.

For data scientists, the final recommendation or model is the product of a long journey – distilling stakeholder coordination, context setting, data wrangling, and careful analysis. But when that recommendation is delivered, stakeholders are asked to respond relatively quickly to a lot of new information, and so rely on three things to take action confidently.

  1. Understand the recommendation: The stakeholder must understand the expected outcome, and the levers that need to be pulled to achieve it. An effective analysis owner takes responsibility for the stakeholder’s understanding, communicating both the specific predictions and the supporting evidence in a consumable way.
  2. Trust the source: A credible voice needs to deliver the results. That credibility can come from the technical owner’s track record of quality delivery, or from the decision maker’s buy-in to the analysis approach. Other times, trustworthiness is just branding: it’s why Deep Blue plays chess and Watson plays Jeopardy in their free time. Ultimately, this becomes a feedback loop, as good outcomes on executed changes build the credibility to recommend further changes.
  3. Own the capacity to implement the recommendations: Even high-quality ML models are outmatched by real-world operational complexity. Both the human bandwidth and the technical capacity to implement any particular recommendation need to be in place.

As AI/ML systems have rolled out further into organizations, they have:

  1. Increased the complexity of recommendations: Fundamentally, it is hard to explain what AI algorithms are doing. Explaining ML results is analogous to informed consent in medicine: doctors, with decades of medical training, must explain a procedure to a patient who learned about their disease for the first time ten minutes ago. What does it mean for that patient to be an “informed” decision maker? The more complex the procedure, the harder it is to understand all of its risks and implications. Likewise, in the face of more advanced data science techniques, it has become harder for decision makers to be “informed.”
  2. Tested the limits of the existing working model: Changing the underlying tools without changing how the data science team interacts with decision makers does not suddenly create new ways to build trust or understanding. In the face of more complex processes and less comprehensible outputs, implementing recommendations teeters toward reliance on pure relationship capital.
  3. Increased the operational difficulty of implementing recommendations: A retailer might consider customizing pricing. With new data management and analytic tools, an analyst could build a model that tailors pricing to each customer individually, predicting elasticity from all of their tracked web behavior. Or, for a fraction of the complexity, the retailer could just raise prices in New York City by $2 (the sketch after this list makes the contrast concrete).
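
To make that contrast concrete, here is a minimal sketch of the two routes. Everything in it is an illustrative assumption: the tiny dataset, the column names, and the log-log regression standing in for whatever per-customer elasticity model an analyst would actually build.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical data: one row per customer, with tracked web behavior.
customers = pd.DataFrame({
    "pages_viewed":    [3, 12, 7, 25, 5],
    "visits_last_30d": [1, 6, 2, 9, 1],
    "city":            ["NYC", "Austin", "NYC", "Boston", "NYC"],
    "observed_price":  [20.0, 18.0, 21.0, 17.5, 19.0],
    "units_bought":    [2, 5, 1, 7, 3],
})

# Complex route: fit an individualized demand model from behavior.
# (A log-log regression is one common elasticity specification; it
# stands in here for the real model, not a prescription.)
X = np.column_stack([
    np.log(customers["observed_price"]),
    customers["pages_viewed"],
    customers["visits_last_30d"],
])
y = np.log(customers["units_bought"])
elasticity_model = LinearRegression().fit(X, y)

# Simple route: one rule any stakeholder can verify at a glance.
customers["new_price"] = customers["observed_price"] + np.where(
    customers["city"] == "NYC", 2.0, 0.0
)
```

The point is not that the simple rule wins; it is that every stakeholder can verify it in one line, while the model demands a shared understanding of its features and specification, plus the operational machinery to serve a different price to every customer.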

So, while ML has made predictions more accurate and easier to create, the process of executing changes has actually become bumpier. As more advanced tools are applied to thornier problems, organizations must actively prevent communication complexity from scaling at the same rate. Even a perfectly predictive model requires buy-in, so it does not make sense to develop the capability to predict while neglecting the ability to win buy-in.

The simplest way to take on a challenging task is to spread it out over a longer period of time. Business stakeholders need to become more involved while the answer is being built. If we start building an ML model, non-technical stakeholders should not sit idly by, imagining the ball is in someone else’s court. Rather, they should be roped in: reviewing context-setting descriptive outputs, contributing contextual knowledge, and leaning in with feedback on the right targeting variables. Expert input can only improve the model; meanwhile, participating critically builds intuition for the model itself.

When this type of collaboration becomes routine, it distills into a process of rapid prototyping before productionization. A first-pass walkthrough touches on every point of the analysis, and becomes a framework for mutually exchanging information between the data science and business line teams. There is no need to fine-tune models or waste time; creating the end-to-end MVP should be painless and directly solve the challenges raised above:

  1. Starting with simple answers reduces complexity: Answers become less monolithic when stakeholders can see the end product built up brick by brick. Whether through rapid prototyping or some other process, exposing digestible sub-answers makes the final answer easier to digest. It is not necessary to understand the mechanics of deep learning to read a descriptive scatterplot relating two variables (see the sketch after this list).
  2. Decomposing questions into smaller chunks removes uncertainty about the overall answer: As stakeholders start to understand the approach and see the dataset, uncertainty or confusion can be worked through live. If we wait until the entire ML pipeline is built, confusion lands non-specifically on the entire product. By resolving doubts at each step of the way, the end product arrives already trusted.
  3. Working through variables removes late discovery of infeasibility: Redesigning the stakeholder-analyst interaction model does not create new operational capabilities, but it can save a whole lot of time by avoiding the wrong recommendations. Asking the business line to prototype actively alongside the analysts lets the team learn that a particular targeting approach is infeasible in the first 15 minutes of analysis – not after days of data wrangling and model iteration, and definitely not at a meeting with the COO.
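
As a minimal illustration of that first point, a prototype walkthrough might open with nothing more than a descriptive scatterplot. The data and column names below are invented placeholders, not a prescribed starting point:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data: the kind of two-variable relationship a
# stakeholder can evaluate long before any model exists.
df = pd.DataFrame({
    "visits_last_30d": [1, 2, 3, 5, 8, 13],
    "monthly_spend":   [12.0, 18.0, 25.0, 44.0, 80.0, 130.0],
})

# A plain scatterplot: the first brick in the brick-by-brick build-up.
df.plot.scatter(x="visits_last_30d", y="monthly_spend")
plt.title("Do frequent visitors spend more?")
plt.show()
```

Each such sub-answer invites live questions from the business line, so doubts get resolved against a plot everyone can read rather than against an opaque pipeline.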

Machine learning has the ability to harness vast amounts of data and create better answers to the questions facing organizational leadership. But as analytical approaches grow more complex, it is imperative that data science does not become harder to understand. Committing to deeper collaboration pathways leads to better end outcomes: better upfront context setting, fewer roadblocks and pitfalls for the analysis team, and a clearer understanding of the output for the decision team.
