The C-Suite has Trust Issues with AI
Despite rising investments in artificial intelligence (AI) by today's enterprises, trust in the insights delivered by AI can be hit or miss with the C-suite. Are executives simply resisting a new, unknown, and still unproven technology, or is their hesitancy rooted in something deeper? Executives have long resisted data analytics for higher-level decision-making, preferring instead to rely on gut-level judgment honed by field experience rather than AI-assisted decisions.
There is also a school of thought that AI will replace the human workforce. If not addressed properly, that fear can drive down morale among the very people expected to use the technology. But AI, like any new technology, is not about replacing humans; it is about augmenting them. In some cases, it may be necessary to redeploy the workforce differently to operate more efficiently.
When you combine that fear with the fact that AI decisions themselves may not be trustworthy (for reasons that Joe and I discussed in this article in Harvard Business Review), things can get ugly very quickly. This is one of the reasons executives remain a little wary of making decisions based on AI-delivered insights.
I have also written on similar topics before, in “8 Tips for Building An Effective AI System Infrastructure” and “It is Our Responsibility to Make Sure Our AI is Ethical and Moral,” both of which may be worth reading as well.
The full article “The C-Suite has Trust Issues with AI” can be read in HBR here.
This post was originally published in Harvard Business Review.
I am a VP/Principal Analyst with Constellation Research, covering AI and ML topic areas.