Trust is crucial as it impacts people’s behavior in all kinds of relationships—personal and social relationships, buyer and seller relationships, and relationships with technology such as AI. Trust is also fragile—the CEO of SAP put it best when he said, “Trust is the ultimate human currency, it’s earned in drops and lost in buckets.”

If customers don’t trust a bank’s virtual financial assistant, they won’t unlock value from it—and if they get no value, the assistant is just another unused feature within their banking app. In this scenario, the bank loses an opportunity to engage with its customers and increase customer loyalty and lifetime value.

We’ve identified six trust-eroding behaviors banks should teach their virtual assistants to avoid:

1. Being deceptive about being a bot

First and foremost, your virtual assistant should immediately inform the user that it’s a bot. This is good practice and sets expectations with the user up front. It’s also the law in some places: California recently passed a law prohibiting bots from pretending to be human.

2. Being unclear or unreliable

Your virtual financial assistant should be able to communicate naturally with the user and do what is expected of it. If the user and the bot can’t communicate or successfully perform tasks together, trust will be lost. According to a recent study, a virtual assistant must be able to understand and sustain conversation context, engage in small talk, and indicate when it fails to perform a task. These are fundamental requirements for maintaining trust.
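
To make this concrete, here is a minimal, hypothetical sketch of two of these behaviors: carrying conversation context across turns and admitting failure instead of guessing. The intent names, confidence threshold, and responses are illustrative assumptions, not any particular vendor’s API.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    # Minimal per-user context, e.g. the last account the user asked about.
    context: dict = field(default_factory=dict)

    def handle(self, intent: str, confidence: float, slots: dict) -> str:
        # Carry slots forward so follow-ups like "and my savings?" still resolve.
        self.context.update(slots)
        if confidence < 0.6:
            # Low understanding confidence: admit it rather than do the wrong task.
            return "Sorry, I'm not sure I understood that. Could you rephrase?"
        if intent == "check_balance":
            account = slots.get("account") or self.context.get("account", "checking")
            return f"Here's the balance of your {account} account."
        if intent == "small_talk":
            return "I'm doing well, thanks! How can I help with your banking today?"
        # Unknown request: say so explicitly instead of failing silently.
        return "I can't do that yet, but I can check balances or move money between accounts."
```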

3. Doing something unexpected

If a virtual financial assistant takes an action on the user’s behalf that they didn’t want or expect, trust will be damaged. For example, imagine a virtual assistant that proactively used money a customer was saving for a new car to pay off the remainder of their student loan. The assistant made what may have been a sound financial decision based on the information available to it, but the decision lacked context and was both unexpected and unwanted.
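
A common safeguard against surprises like this is to require explicit confirmation before the assistant moves money on the user’s behalf. The sketch below is illustrative only; the confirm and execute_transfer callbacks are assumed stand-ins for the bank’s real dialogue and core-banking integrations.

```python
from typing import Callable

def propose_transfer(
    amount: float,
    source: str,
    destination: str,
    reason: str,
    confirm: Callable[[str], bool],                  # asks the user, returns True/False
    execute_transfer: Callable[[float, str, str], None],
) -> str:
    # Surface the recommendation and its reasoning instead of acting silently.
    prompt = (
        f"You could {reason} by moving ${amount:,.2f} "
        f"from '{source}' to '{destination}'. Should I go ahead?"
    )
    if confirm(prompt):
        execute_transfer(amount, source, destination)
        return f"Done. I've moved ${amount:,.2f} from {source} to {destination}."
    # Respect the "no" and leave earmarked money alone.
    return "No problem, I won't touch that money."
```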

4. Using unexpected information

People are more sensitive than ever about data privacy, and rightly so. While users would expect a virtual financial assistant to know a certain amount of information about them, they would be mistrustful of surprise insights. Teach your AI what information it should retain and re-use so it delivers the right information at the right time.

5. Storing information for too long

Bringing up old information is as damaging as using information without permission. Most information is only relevant for a short period of time. If your colleague made you tea yesterday, it would be acceptable for them to remember that you like milk and no sugar today. However, if you bump into an old colleague who you haven’t seen for a decade, it would be strange if that person remembered your exact lunch order from ten years ago.

Artificial intelligence remembers everything unless you teach it not to. If your virtual assistant brings up an old transaction or behavior in the wrong context, the user will be at best confused and at worst wary of ulterior motives.
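
One way to “teach it not to” is to attach a retention period to each category of remembered information, so stale facts expire instead of resurfacing. The categories and retention periods below are illustrative assumptions; a real deployment would take them from the bank’s data-retention policy.

```python
from datetime import datetime, timedelta

# Illustrative retention periods; real values would come from the bank's data policy.
RETENTION = {
    "conversation_context": timedelta(hours=1),    # follow-up questions only
    "stated_preference":    timedelta(days=365),   # e.g. "contact me by email"
    "one_off_transaction":  timedelta(days=90),    # relevant for roughly a statement cycle
}

def remember(store: dict, category: str, key: str, value) -> None:
    store[key] = {"value": value, "expires": datetime.utcnow() + RETENTION[category]}

def recall(store: dict, key: str):
    item = store.get(key)
    if item is None or item["expires"] < datetime.utcnow():
        store.pop(key, None)   # forget expired facts instead of surfacing them
        return None
    return item["value"]
```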

6. Putting the best interests of the bank ahead of the users’

According to a study by Ernst & Young, 60% of consumers think banks should help them achieve life goals, but only 26% trust that banks will provide unbiased advice. Users understand that the virtual financial assistant is an agent of the bank, but they also expect it to give advice that benefits them financially (and not the bank).

Basic tenets of trust

There are five key questions that customers ask themselves (consciously or subconsciously) when deciding if a virtual financial assistant is trustworthy:

Is it competent? If you tell the bot to move $50 from checking, the bot does this correctly.

Is it well intentioned? The bot is not sneaky. It is working for you, and only you.

Does it know me? The bot understands your unique needs and only recommends actions, products, or services that will benefit you financially.

Is it reliable? The bot is available whenever you need it. It’s never offline or out of service.

Is it discreet? The bot only uses the information you’ve shared for the purposes for which it was shared. It will not use this information in the future, for example, to prevent you from being approved for a loan.

Banks should keep these questions in mind as they implement their virtual assistants and put controls in place to avoid trust-eroding behaviors.

Read the Celent study, Raising the Customer Experience Bar: How to Close the Trust Gap in Retail Banking, for more.
Kevin Jaako
Kevin Jaako is VP UX and Conversation Design at Finn AI. He has 10+ years of experience leading design teams on four different continents. As the former UX Design Lead at Commonwealth Bank of Australia (CBA), he led many high-profile projects, including the launch of the CommBank Property App (awarded the 2016 Australian Business Award for Mobile Innovation).
Dan Jacobsen
Daniel Jacobsen is a Product Manager at Finn AI, where he oversees the Customer Acquisition use case. His skills span project management, company operations, and software engineering, with a specialization in understanding consumer markets to develop valuable and in-demand products. Prior to Finn AI, Dan was Co-Founder at Gaslamp Games, an independent game studio where he was a key contributor to business development, design, and AI development.