This post originally appeared on the Finn AI blog, which is now part of Glia.

Trust is crucial as it impacts people’s behavior in all kinds of relationships—personal and social relationships, buyer and seller relationships, and relationships with technology such as AI and chatbots. Trust is also fragile—the CEO of SAP put it best when he said, “Trust is the ultimate human currency, it’s earned in drops and lost in buckets.”

If customers don’t trust a bank’s financial chatbot, they won’t unlock value from it—and if they get no value, the chatbot is just another unused feature within their banking app. In this scenario, the bank loses an opportunity to engage with its customers and increase customer loyalty and lifetime value.

We’ve identified six trust-eroding behaviors banks should teach their banking chatbots to avoid:

1. Being deceptive about being a bot

First and foremost, your banking chatbot should immediately tell the user that it's a chatbot. This is good practice and sets expectations up front. It's also the law in some places: California's B.O.T. Act, for example, makes it unlawful for a bot to mislead people about its artificial identity in certain commercial interactions.
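In code, disclosure can be as simple as making it the first message of every session. Here is a minimal sketch in Python; the names and the wording of the message are hypothetical, not taken from any particular framework:

```python
# A minimal sketch of an up-front bot disclosure. All names are
# hypothetical; the point is that the very first message identifies
# the assistant as software, before anything else happens.

BOT_DISCLOSURE = (
    "Hi, I'm the virtual assistant for Example Bank. I'm a bot, not a "
    "person. I can help with balances, transfers, and common questions, "
    "or connect you to a human agent at any time."
)

def start_conversation(send_message) -> None:
    """Open every session with the disclosure, never mid-conversation."""
    send_message(BOT_DISCLOSURE)

# Usage: start_conversation(print)
```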

2. Being unclear or unreliable

Your banking chatbot should be able to communicate naturally with the user and do what is expected of it. If the user and the chatbot can't communicate or successfully complete tasks together, trust will be lost. According to a recent study, a chatbot must be able to understand and sustain conversational context, engage in small talk, and indicate when it fails to perform a task. These are fundamental requirements for maintaining trust.
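The last of those requirements, admitting failure, is worth making concrete. Below is a rough sketch of a fallback path in Python: when the bot can't confidently map a message to a known task, it says so instead of guessing. The classifier, the handler registry, and the threshold value are hypothetical stand-ins, not any particular vendor's API.

```python
CONFIDENCE_THRESHOLD = 0.75  # assumed tuning value, not a standard

def respond(user_message: str, classify_intent, handlers: dict) -> str:
    """Route a message to a task handler, or admit failure clearly.

    `classify_intent` (returns an intent label and a confidence score)
    and `handlers` (maps intent labels to functions) are hypothetical
    stand-ins for a real chatbot's NLU model and task registry.
    """
    intent, confidence = classify_intent(user_message)
    if confidence < CONFIDENCE_THRESHOLD or intent not in handlers:
        # A clear admission of failure preserves trust better than a
        # confident-sounding wrong answer.
        return ("Sorry, I didn't understand that. I can help with "
                "balances, transfers, and card questions, or connect "
                "you with a human agent.")
    return handlers[intent](user_message)
```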

3. Doing something unexpected

If a banking chatbot takes an action on the user's behalf that they didn't want or expect, trust will be damaged. For example, imagine a virtual assistant that proactively used money a customer was saving for a new car to pay off the remainder of their student loan. The assistant may have made a sound financial decision based on the information available to it, but the decision lacked context and was both unexpected and unwanted.
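A common guardrail against this failure mode is to require an explicit, per-action confirmation before the assistant moves any money. A minimal sketch of that pattern follows; the types and callbacks are illustrative, not a real core-banking API.

```python
from dataclasses import dataclass

@dataclass
class ProposedTransfer:
    from_account: str
    to_account: str
    amount: float

def execute_transfer(transfer: ProposedTransfer, ask_user, do_transfer) -> str:
    """Propose first; act only on an explicit yes.

    `ask_user` and `do_transfer` are hypothetical callbacks standing in
    for the conversation layer and the bank's transfer API.
    """
    prompt = (f"I can move ${transfer.amount:.2f} from "
              f"{transfer.from_account} to {transfer.to_account}. "
              "Should I go ahead? (yes/no)")
    if ask_user(prompt).strip().lower() == "yes":
        do_transfer(transfer)
        return "Done. The transfer is complete."
    # No surprise actions: anything short of a clear yes cancels.
    return "Okay, I won't make that transfer."
```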

4. Using unexpected information

People are more sensitive than ever about data privacy, and rightly so. While users would expect a banking chatbot to know a certain amount of information about them, they would be mistrustful of surprise insights. Teach your chatbot what information it should retain and reuse so that it delivers the right information at the right time.

5. Storing information for too long

Bringing up old information is as damaging as using information without permission. Most information is only relevant for a short period of time. If your colleague made you tea yesterday, it would be acceptable for them to remember that you like milk and no sugar today. However, if you bump into an old colleague who you haven’t seen for a decade, it would be strange if that person remembered your exact lunch order from ten years ago.

Artificial intelligence remembers everything unless you teach it not to. If your banking chatbot brings up an old transaction or behavior in the wrong context, the user will be confused at best and, at worst, wary of ulterior motives.
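In practice, teaching a chatbot to forget usually means attaching a timestamp to every remembered fact and filtering out stale entries on read. Here is a minimal sketch of that idea, assuming a hypothetical 90-day retention window; a real policy would vary by data type and regulation.

```python
import time

# Assumed 90-day window for illustration only; real retention rules
# depend on the data type and on applicable regulation.
RETENTION_SECONDS = 90 * 24 * 3600

class ConversationMemory:
    """Tiny in-memory store where every fact carries a timestamp."""

    def __init__(self):
        self._facts: dict[str, tuple[str, float]] = {}

    def remember(self, key: str, value: str) -> None:
        self._facts[key] = (value, time.time())

    def recall(self, key: str) -> str | None:
        """Return a fact only while it's fresh; drop it once it expires."""
        entry = self._facts.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.time() - stored_at > RETENTION_SECONDS:
            del self._facts[key]  # expired facts are forgotten, not surfaced
            return None
        return value
```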

6. Putting the best interests of the bank ahead of the users’

According to a study by Ernst & Young, 60% of consumers think banks should help them achieve life goals, but only 26% trust that banks will provide unbiased advice. Users understand that a banking chatbot is an agent of the bank, but they also expect it to give advice that benefits them financially, not the bank.

Basic tenets of trust

There are five key questions that customers ask themselves (consciously or subconsciously) when deciding if a banking chatbot is trustworthy:

Is it competent? If you tell the chatbot to move $50 from checking, it does exactly that.

Is it well intentioned? The chatbot is not sneaky. It is working for you, and only you.

Does it know me? The chatbot understands your unique needs and only recommends actions, products, or services that will benefit you financially.

Is it reliable? The chatbot is available whenever you need it. It’s never offline or out of service.

Is it discreet? The chatbot uses the information you've shared only for the purposes for which it was shared. It will not use that information against you in the future, for example, to deny you approval for a loan.

Banks should keep these questions in mind as they implement their banking chatbots and put controls in place to avoid trust-eroding behaviors.