Is Your AI Eroding Your Customers’ Trust?

At AWS re:Invent 2023, Dr. Swami Sivasubramanian, Vice President of Data and AI at AWS, likened AI's relationship with human-supplied data to a mutually beneficial partnership, akin to the symbiotic relationships found in nature. While this metaphor captivates our imagination and underscores the potential synergy between humans and AI, it can also be misleading. It is important to remember that AI is not a living organism acting in its own interest, or in humans' interest for that matter.

Symbiotic relationship of a whale shark and feeder fish

From your customer's perspective, your AI is an extension of your company; the experience and relationship your customer has with the AI directly shapes how they think of you. In the insurance industry, where trust is paramount but often lacking (due to the complexity of policies, lack of transparency, and press coverage, among other things), the adoption of AI presents both an opportunity and a challenge. Without thoughtful implementation, AI can exacerbate existing trust issues.

Here are four key factors that can erode trust in AI:

1. False Identity

Failure to clearly distinguish between AI and human interactions can lead to confusion, frustration, and mistrust. Imagine a claims situation where a customer pours their heart out to a voice AI, only to find out after the fact that it wasn't human. Whether through chat, voice, or video, users should always know when they are engaging with AI rather than a human representative. Transparency fosters trust and ensures that users know what to expect from each interaction.

2. Overpromising AI Capabilities  

Misrepresenting AI's capabilities sets unrealistic expectations and ultimately disappoints users. If your AI is trained on a few key workflows, don't let it field open-ended requests it can't assist with. In the best case, your user gets frustrated because they can't get the information or action they need. In the worst case, your AI makes up something wildly inaccurate (hallucinates) that could harm your business or reputation. It's essential that your AI communicates clearly about how it can assist users and when human intervention may be necessary, as in the sketch below.
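As a rough illustration, here is one way to keep an assistant inside the workflows it actually supports. The workflow names, the keyword classifier, and the response wording are all invented for this sketch; they are not any particular vendor's API.

```python
# Minimal sketch: constrain an assistant to known workflows and decline
# everything else. All names and phrases here are illustrative assumptions.

SUPPORTED_WORKFLOWS = {
    "file_claim": ["file a claim", "new claim", "report an accident"],
    "claim_status": ["claim status", "where is my claim"],
}

# The prompt a hosted model would receive; shown here for completeness.
SYSTEM_PROMPT = (
    "You are a claims assistant for an insurance company. You can ONLY "
    "help with filing a claim or checking claim status. For anything else, "
    "say so plainly and offer a human representative. Never guess or "
    "invent policy details."
)

def classify_intent(message: str) -> str | None:
    """Naive keyword matcher standing in for a real intent classifier."""
    text = message.lower()
    for intent, phrases in SUPPORTED_WORKFLOWS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return None

def handle_message(message: str) -> str:
    intent = classify_intent(message)
    if intent is None:
        # Out of scope: admit it rather than risk a hallucinated answer.
        return ("I can't help with that, but I can connect you with a "
                "representative who can. Would you like me to?")
    return f"Okay, let's start the '{intent}' workflow."

print(handle_message("What stocks should I invest in?"))
```

The key design choice is the explicit fallback: when the AI doesn't recognize a request, it says so and offers a person, rather than improvising an answer.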

3. Forcing AI 

It is important to recognize that customers occupy different stages of the AI adoption curve. As highlighted in Insurity's recent survey (link here), consumers are mixed in their perception of AI use in the insurance industry. Because some users prefer human interaction over AI-driven interfaces, it's essential to offer flexible engagement channels that accommodate diverse preferences. Prompt users to start with AI to onboard them to the new experience, but offer an easily accessible option to speak with a human representative (see the sketch below). This shows you understand the variability in users' comfort levels and ensures that all customers feel supported and valued.
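One simple pattern is to check for an escalation request before any AI handling runs, so the human option is never buried behind the bot. This is a sketch under assumed names; the trigger phrases and handler stubs are hypothetical.

```python
# Sketch: an always-available human escape hatch, checked before the AI
# ever responds. Trigger phrases and both handlers are illustrative.

ESCALATION_PHRASES = ["human", "agent", "representative", "real person"]

def wants_human(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in ESCALATION_PHRASES)

def ai_handle(message: str) -> str:
    return "AI assistant: I can help with claims questions."  # placeholder

def route_message(message: str) -> str:
    # Honor a request for a person before any AI logic runs.
    if wants_human(message):
        return "Of course. Connecting you with a representative now."
    return ai_handle(message)

print(route_message("Can I talk to a real person?"))
```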

4. Lack of Transparency

Users deserve insight into the data-driven decisions made by AI. Providing explanations for decisions, along with avenues for reporting errors, both empowers users to understand the AI's decisions and helps you correct potential biases or inaccuracies in your AI. Building AI systems that can evaluate and communicate their own decision-making process is simply good practice in a highly regulated industry like insurance. Admittedly, this isn't easy: it means the AI must be introspective and transparent enough to tell you why it made the decision it made. But doing so reinforces accountability and builds both internal and external confidence in your AI.
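As a minimal sketch of what "explanations plus an audit trail" can mean in practice, the record below pairs every automated decision with a plain-language rationale and a model identifier. All field names, the model version, and the example values are invented for illustration.

```python
# Sketch: pair every automated decision with a recorded, user-visible
# rationale. Field names, the model identifier, and the storage step
# (a print here) are illustrative assumptions.

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    claim_id: str
    decision: str       # e.g., "approved", "needs_review"
    rationale: str      # plain-language explanation shown to the user
    model_version: str  # which model or ruleset produced the decision
    timestamp: str

def record_decision(claim_id: str, decision: str, rationale: str) -> DecisionRecord:
    record = DecisionRecord(
        claim_id=claim_id,
        decision=decision,
        rationale=rationale,
        model_version="claims-triage-v3",  # hypothetical identifier
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In a real system this would be persisted for audits and for users
    # who dispute a decision or report an error.
    print(json.dumps(asdict(record)))
    return record

record_decision(
    "CLM-1042",
    "needs_review",
    "Repair estimate exceeds the automatic-approval threshold for this policy tier.",
)
```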

AI Employee Persona 

Human-centered AI best practices can help you steer clear of these risks and foster trust. One useful human-centered AI method is to conceptualize AI as a virtual employee, complete with a defined role, goals, needs, capabilities, and personality traits. Creating an AI persona that articulates these attributes helps ensure that your AI has the right data, directives, and personality to engage with your customers, and reduces risk when it faces situations it isn't suited for. An AI persona can also help place your AI in the ecosystem of your users' workflows and experiences, ensuring a positive interaction and experience for your users.
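To make this concrete, a persona can be written down as a small, reviewable definition that feeds directly into prompts and guardrails. The structure and every field value below are invented examples, not a standard schema.

```python
# Sketch: an "AI employee" persona as a reviewable data structure.
# The fields and example values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class AIPersona:
    role: str
    goals: list[str]
    capabilities: list[str]        # what it is trained and allowed to do
    limitations: list[str]         # where it must defer to a human
    personality_traits: list[str]

claims_assistant = AIPersona(
    role="First-line claims intake assistant",
    goals=["Capture accurate claim details", "Set clear expectations"],
    capabilities=["File new claims", "Answer claim-status questions"],
    limitations=["Cannot approve or deny claims", "Escalates distressed callers"],
    personality_traits=["Plainspoken", "Patient", "Candid about being an AI"],
)

print(claims_assistant.role)
```

Writing the persona down this way also makes it something legal, compliance, and customer-experience teams can review together before the AI ever talks to a customer.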

As you consider your AI roadmap, make cultivating trust a priority, especially in an industry like insurance that is both highly regulated and already struggling with customer trust. By thinking of AI as a representative of your company and creating a persona for it, you can be better prepared to ensure it communicates candidly, transparently, and within the constraints of its abilities. With care and human-centered AI design practices, insurance companies can leverage AI to enhance user experiences while mitigating trust issues.
