This post originally appeared on the Finn AI blog, which is now part of Glia.

AI is evolving faster than the regulations that govern it. This was one of the big agenda items at the World Economic Forum earlier this year. Given the proliferation of conversational AI banking applications, the industry can expect regulators to start asking more questions about how banks are using the technology.

By being transparent and engaging regulators proactively from the outset, banks can not only prepare for the rules that will inevitably come but also help shape them.

“What is the best government? That which teaches us to govern ourselves.”

– Johann Wolfgang von Goethe

The power of self-governance

When the regulators come knocking, banks must be able to demonstrate that their AI technology isn’t harming customers or creating undue risk to the banking system as a whole. Savvy banks can actually help regulators set the benchmark for AI best practices—practices that are ethical but that still enhance the business and deliver value.

In this scenario, transparency is key. Open the lines of communication with regulators and work with them as you answer important questions about your business, including:

  • What personal data do your AI applications use, and does that usage comply with privacy
    standards?
  • What are you doing to prevent bias and discriminatory outcomes in your AI applications?
  • To what extent is AI relied upon for mission-critical tasks?
  • How well does your management and board understand your application of AI?

Tips to set the benchmark for AI regulatory standards

1. Follow the Reduce, Redact, Review principles

AI in banking relies heavily on data. To be useful, conversational AI models must consume large volumes of data; the more they consume, the better they become at spotting patterns, making decisions, and, in frontline services, delivering a better user experience. It is this data consumption that raises questions around data governance, particularly in relation to the General Data Protection Regulation (GDPR) in Europe.

Follow these principles and ensure your banking chatbot technology partners do the same:

  • Reduce: Request or store only the personally identifiable information (PII) that is
    absolutely needed, and evaluate all PII handling as part of your regular software
    development lifecycle.
  • Redact: Where PII must be stored, take all due care to quickly redact and anonymize the
    elements that identify the end user (see the sketch after this list).
  • Review: Continually review stored data to confirm that your reduction and redaction
    policies are working.
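
To make the Redact step concrete, here is a minimal sketch of transcript redaction before storage. It is an illustration only: the patterns, placeholder tokens, and `redact` function are hypothetical, and production systems typically rely on dedicated PII-detection services rather than a handful of regular expressions.

```python
import re

# Hypothetical redaction rules: each pattern maps to a placeholder token.
# A real deployment would cover many more identifiers (names, addresses,
# card numbers, etc.) and use a dedicated PII-detection service.
PII_PATTERNS = {
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"): "[EMAIL]",
    re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"): "[PHONE]",
    re.compile(r"\b\d{9,18}\b"): "[ACCOUNT_NUMBER]",
}

def redact(utterance: str) -> str:
    """Replace recognizable PII in a chat utterance with placeholder tokens."""
    for pattern, token in PII_PATTERNS.items():
        utterance = pattern.sub(token, utterance)
    return utterance

# Example: redact a transcript line before it is written to storage.
print(redact("My email is jane.doe@example.com and my account is 123456789."))
# -> "My email is [EMAIL] and my account is [ACCOUNT_NUMBER]."
```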

By putting measures in place to secure data, banks will satisfy the regulators and help build customer trust.

2. Be prepared to talk about AI and security

When it comes to shaping the regulatory landscape, you can’t just walk the walk; you have to talk the talk too. Ensure you (and your management and board) can articulate how you’re handling PII; how you’re scrubbing, deleting, or anonymizing data; and how you’re approaching privacy and adhering to regulations such as GDPR.

3. Define your trust strategy and roadmap

According to Georgian Partners, when your business relies on data, you need customer relationships that are founded on trust. Know what trust means to your customers and define your trust philosophy. To really show your customers and the regulators that you are serious about building trust through ethical AI applications, hire a Chief Trust Officer.

4. Build systems that are fair and unbiased

Like humans, AI technology is known to make mistakes. For example, early AI applications have displayed unfair bias against people of color and women. These examples are top of mind for regulators when it comes to banking: it would be extremely harmful if conversational AI in banking disadvantaged people at scale.

To prevent this, make sure you understand the quality of the data that goes into your AI chatbot models. Calculate the risk of putting your machines into production (the risk of error in a customer response, the risk of bias, and so on), and show regulators that you have a process in place to test for bias and ensure fairness.
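
As one deliberately simplified example of such a process, the sketch below computes per-group approval rates from hypothetical decision records and flags the demographic parity gap between them; real bias testing would use richer fairness metrics and production data.

```python
from collections import defaultdict

# Hypothetical audit records: (group, approved) pairs for a chatbot-driven
# decision, e.g. whether a pre-qualification flow offered a product.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Compute the approval rate for each demographic group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
# Demographic parity gap: difference between the highest and lowest rate.
# A gap above a chosen threshold should trigger a review before release.
parity_gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {parity_gap:.2f}")
```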

5. Document everything

You must maintain an audit trail of how AI is used and how it makes decisions, one that can be thoroughly explained to regulators. Examine this audit trail to ensure your AI is producing understandable outcomes and that your team members (and your fintech partners) have the expertise to analyze those outcomes and make adjustments when necessary.
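
What one such audit record might look like is sketched below, assuming a simple JSON-lines log; the field names are hypothetical, and a production system would add access controls, retention policies, and tamper-evident storage.

```python
import json
import time
import uuid

def log_ai_decision(model_version: str, user_input: str, response: str,
                    confidence: float, path: str = "ai_audit_log.jsonl"):
    """Append one structured, timestamped record per chatbot decision."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "user_input": user_input,   # redact PII before logging (see tip 1)
        "response": response,
        "confidence": confidence,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: record how an intent classifier routed a request.
log_ai_decision("intent-classifier-1.4",
                "[EMAIL] wants a card limit increase",
                "route_to_cards_team", 0.92)
```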

6. Recruit technology partners, not vendors

Carefully scrutinize your AI vendors, and choose a banking chatbot partner that takes security and compliance as seriously as you do. Besides helping you satisfy regulators and maintain a trustworthy reputation, partners that take regulatory concerns into account upfront are more cost-effective: fixing a system after regulations are introduced (or after it has been compromised) can be disruptive and costly.

AI regulation is coming down the track; banks can either get crushed by the train or help drive it. Smart banks will choose the latter, building best practices and working with regulators to shape the regulatory landscape in a way that protects banking customers, adds value, and minimizes burden.