This post originally appeared on the Finn AI blog. Finn AI is now a part of Glia.

As the development of intelligent conversational AI systems becomes more common and mainstream, stories of biased data sets and failed models are coming to light. We’ve heard about Amazon scrapping its AI recruitment tool because it was biased against women. And Microsoft’s social AI chatbot Tay infamously began tweeting a slew of racist, misogynistic, and offensive comments within 24 hours of its debut.

AI technology is undeniably a powerful tool, and incidents like these highlight the importance of being thoughtful about how you develop and deploy conversational AI. The good news is that regulations and policies that embed ethical principles into AI are already underway.

Finn AI’s co-founder and COO, Natalie Cartwright, recently attended the G7 Multi-stakeholder Conference on Artificial Intelligence. The event, held in Montreal, gathered a global group of visionaries, ministers, policymakers, and researchers to discuss how to reinforce the development of responsible, human-centric AI.

Furthermore, the Montreal Declaration for Responsible Development of Artificial Intelligence was officially launched on December 4, 2018. The declaration includes guiding principles and recommendations to promote the development of AI in the best interest of society.

The increasing conversations and regulations around the ethical use of AI are particularly important and beneficial for the financial services industry. Conversational AI banking offers huge potential to help people make better sense of their money, automate actions that promote financial wellbeing, and help rebuild trust between banks and customers. However, there is also the risk of developing biased data sets, which can skew application decisions or recommendations, and when it comes to handling personal finances, there is no room for mistakes.

So when your bank decides to deploy an AI-powered chatbot, it is imperative that you adopt ethical principles to guide its development for the good of the public. Here is a set of standards your bank should consider:

1. Protect consumer data and privacy

As a bank, you hold a large store of data about your customers, particularly if your bot offers authenticated experiences. Everyone has a right to the protection of their data, so it is your responsibility to uphold standards that respect user privacy.

In Europe, we’ve already seen data privacy laws such as the GDPR come into effect to govern and protect consumer data. The banking regulatory landscape in North America is expected to shift and align closer to Europe’s, where consent is required by law before personal information can be collected.

“78% are happy to share personal data with their bank but 66% demand faster, easier services in return.”

– 2017 Global Distribution & Marketing Consumer Study, Accenture

Also consider how you handle the personal data you have already collected. Ensure that you collect only the information you need, and that you treat it properly by limiting how it is used and who can access it.
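For illustration, the sketch below shows one way a chatbot’s event logging could apply data minimization: keep only an allow-list of fields and mask account-like numbers before anything is stored. The field names, allow-list, and regex are assumptions for the example, not the schema of any particular banking platform.

```python
# A minimal sketch of data minimization for chatbot event logging.
# The field names, allow-list, and regex are illustrative assumptions.
import re

ALLOWED_FIELDS = {"session_id", "intent", "timestamp", "utterance"}  # collect only what the bot needs
ACCOUNT_PATTERN = re.compile(r"\b\d{8,17}\b")  # rough mask for account-like digit strings


def minimize_event(raw_event: dict) -> dict:
    """Keep only allow-listed fields and mask account-like numbers in free text."""
    event = {k: v for k, v in raw_event.items() if k in ALLOWED_FIELDS}
    if "utterance" in event:
        event["utterance"] = ACCOUNT_PATTERN.sub("[REDACTED]", event["utterance"])
    return event


if __name__ == "__main__":
    raw = {
        "session_id": "abc-123",
        "intent": "check_balance",
        "timestamp": "2019-01-15T10:00:00Z",
        "utterance": "What's the balance on 123456789012?",
        "email": "customer@example.com",  # not needed for analytics, so it is dropped
    }
    print(minimize_event(raw))
    # utterance is stored as "What's the balance on [REDACTED]?"
```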

Learn more about data and privacy considerations when adopting AI in banking.

2. Be transparent about the use of AI

In 2018, the United States saw a 20-point drop on the Edelman Trust Barometer when it came to consumer trust in financial services. Needless to say, banks must work on regaining this trust.

While AI technology provides immense opportunity to help build trust through personalized advice and coaching, as well as confidential human-like conversations, these intelligent systems are still emerging and unfamiliar to the general public. In fact, in a 2017 survey by HubSpot, 63% of respondents were already using AI tools without realizing it.

Consider how your bank can be more transparent about its use of AI technology. Users are more likely to trust an organization that is forthcoming about the purpose and limitations of the technology. While you want your banking chatbot to have ‘personality’ and natural language capabilities, you must also balance mimicking human interaction with making it apparent to the user that they are talking to conversational AI.
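As a rough sketch of what that balance can look like in practice, the example below discloses the bot’s nature on the first turn and escalates to a human agent when its confidence is low. The wording, the 0.6 threshold, and the function names are illustrative assumptions, not any specific product’s behaviour.

```python
# A minimal sketch of "transparency by design" for a banking chatbot:
# disclose up front that the user is talking to an AI assistant, and hand
# off to a human when the bot's intent confidence is low.

DISCLOSURE = (
    "Hi, I'm an automated assistant. I can help with balances, transactions, "
    "and branch information. For anything else, I'll connect you with a human agent."
)

CONFIDENCE_THRESHOLD = 0.6  # below this, don't guess; escalate to a person instead


def respond(intent: str, confidence: float, first_turn: bool = False) -> str:
    """Return the bot's reply, always disclosing its nature on the first turn."""
    if first_turn:
        return DISCLOSURE
    if confidence < CONFIDENCE_THRESHOLD:
        return "I'm not sure I understood that, so I'm connecting you with a human agent."
    return f"Sure, I can help with {intent.replace('_', ' ')}."


print(respond(intent="", confidence=0.0, first_turn=True))
print(respond(intent="check_balance", confidence=0.92))
print(respond(intent="mortgage_advice", confidence=0.41))
```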

3. Harness diversity and inclusivity

By far, one of the top concerns about AI is its potential to perpetuate existing societal biases, or even create new ones. Bias in machines often occurs when there is a lack of sufficient or representative data for training the models, and the root of the issue traces back to the humans who developed and trained the bot.

AI reflects the subconscious biases of its creators, which is why it is important to be thoughtful when building your team. Gathering a group of people with varying skills, backgrounds, and experiences will help ensure that your models are trained on a diverse set of data that does not encode human biases or prejudices.

Bias detection tools, such as Google’s What-If Tool, are beginning to surface in the industry to help ensure the fairness of machine learning models. As the field evolves, we will continue to see more tools that can test, measure, and mitigate bias in AI.
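To make “measuring bias” concrete, here is a minimal sketch of one common check, the demographic parity gap: the difference in positive-outcome rates between groups. The sample data is fabricated for illustration, and real audits combine several metrics and tools; this only shows the idea.

```python
# A minimal sketch of the demographic parity gap: the difference in
# positive-outcome rates between groups. Sample data is made up.
from collections import defaultdict


def positive_rate_by_group(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}


def demographic_parity_gap(decisions):
    rates = positive_rate_by_group(decisions)
    return max(rates.values()) - min(rates.values()), rates


sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
          + [("group_b", True)] * 55 + [("group_b", False)] * 45)
gap, rates = demographic_parity_gap(sample)
print(rates)               # {'group_a': 0.8, 'group_b': 0.55}
print(f"gap = {gap:.2f}")  # 0.25 -- a large gap flags the model for closer review
```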

“In 2019 we expect a significant move forward with frameworks and standards for measuring and testing bias in AI. We will see an increase in need for human judgement and, consequently, an increase in these types of jobs, standards, and protocols.”

– Jake Tyler, Co-Founder & CEO, Finn AI
Forbes 120 AI Predictions For 2019

The Challenge is Big, But the Potential is Bigger

Navigating the ethics of AI chatbots in banking and figuring out how to responsibly develop unbiased machine learning algorithms are considerations all organizations and financial institutions need to plan for and discuss from the start. Banking chatbots offer immense potential to reduce front- and back-office costs, improve customer loyalty and satisfaction, and increase profitability, but without a thoughtful and ethical approach the consequences can be catastrophic.

When implementing conversational AI for your bank, consider partnering with a full-service banking chatbot provider. Look for a diverse team of experts with knowledge and experience not only in developing conversational AI, but in publicly deploying it as well.