While Software-as-a-Service (SaaS) has been around for a while, it is still uncommon to see Conversational AI delivered in a SaaS model. There are several reasons for this. Firstly, Conversational AI is costly to build, test, and deploy, even when it's in the cloud. These costs are stifling for many fintech startups that struggle to secure enough funding to prove out their product-market fit and gain a critical mass of banking customers to drive down costs.

This is common across all emerging technologies. Think about how much it cost to build the first Tesla car. The original Tesla Roadster was unveiled in 2006 with a price tag of $109,000. Today, the Tesla Model 3 retails at just $35,000. And although the Model 3 actually cost around $38,000 to manufacture at launch, Tesla was confident that improvements in production efficiency would lead to profitability within six months.

While it took a little longer than six months, Finn AI is pioneering the ‘SaaSification’ of Conversational AI for banks which, in turn, is bringing down the cost to build and deploy effective Conversational AI. This is particularly important for de novo and smaller community banks where, until now, the cost to build and deploy Conversational AI was prohibitive.

Affordable Conversational AI

Recent cost efficiencies in Conversational AI are driven, in part, by the relative affordability of massive computing power, thanks to the earlier commoditization of cloud computing and the dawn of the SaaS era. Led by the likes of Salesforce.com, the SaaS era has turned software licensing models on their head, reduced the cost of application delivery, and created opportunities for innovative fintech companies to create exciting new solutions for banks.

In the past, banks had to spend millions of dollars just to get some very basic Conversational AI features to work effectively. Thanks to learnings and product iterations over the past 12 months, Finn AI has been able to lower the cost of entry for banks. 

As a result, instead of focusing on Conversational AI development, testing, delivery, maintenance, backup, and security, customers can concentrate on what they do best – banking.

Creating efficiencies in deployment

Finn AI is continuously iterating to increase efficiencies in the delivery of Conversational AI. During the first iteration, the team invested time building different AI models for different customers around the world to gain a deep understanding of their businesses—and the similarities and differences between them. They then consolidated all this learning into one codebase and made the customizations standard and controllable by the customers themselves.

Today, there is a standard way to build the software so everyone gets an out-of-the-box experience that’s easily customizable for each bank. This removes the cost of maintaining different versions of the product. Code deployment is also fully automated. 
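One way to picture a single codebase with customer-controlled customization is a shared set of product defaults that each bank overrides through configuration rather than forked code. The keys and values below are invented for illustration, not Finn AI's actual schema:

```python
# Hypothetical sketch: defaults live in the shared product; each bank
# overrides only what differs, so there is one codebase to maintain.
DEFAULTS = {
    "greeting": "Hi! How can I help you today?",
    "currency": "USD",
    "features": ["balance", "transactions", "transfers"],
}

def build_config(bank_overrides):
    """Merge a bank's overrides onto the shared defaults."""
    config = dict(DEFAULTS)
    config.update(bank_overrides)
    return config

# A community bank customizes the greeting and disables transfers;
# everything it does not touch is inherited from the defaults.
cfg = build_config({
    "greeting": "Welcome to First Example Bank!",
    "features": ["balance", "transactions"],
})
print(cfg["greeting"])   # Welcome to First Example Bank!
print(cfg["currency"])   # USD (inherited from the shared default)
```

Because every deployment is the same code plus a config, upgrades and automated deployments apply to all customers at once.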

By consolidating and fine-tuning the infrastructure and standardizing the deployments, Finn AI has created efficiencies that reduce the per-unit cost. Because deployments are automated, the resources required on the customer side are also drastically reduced.

Creating efficiencies in Conversational AI models

When it comes to Conversational AI, Finn AI has gathered shared learnings from multiple banks around the world about what banks want to do and how they want to do it, allowing new customers to reap the rewards of this shared knowledge.

As a result, the team has been able to standardize the way they solve problems, categorizing problems as either user goals or feature-based use cases. They’ve also decoupled how they label things, removing the one-to-one relationship between labels and goals so there are fewer restrictions on what the AI can do. This is unique to Finn AI and improves the performance of the AI significantly. 
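The idea of removing the one-to-one relationship between labels and goals can be sketched as a many-to-many mapping: an utterance carries several labels, and any goal whose required labels are all present can be resolved. The label and goal names below are illustrative assumptions, not Finn AI's actual taxonomy:

```python
# Hypothetical sketch of decoupled labeling: instead of each label
# mapping to exactly one goal, a goal is resolved from the combination
# of labels present, and one label can contribute to several goals.
GOAL_RULES = {
    "check_balance": {"account", "balance"},
    "transfer_funds": {"account", "transfer"},
    "report_lost_card": {"card", "lost"},
}

def resolve_goals(labels):
    """Return every goal whose required labels are all present."""
    labels = set(labels)
    return sorted(g for g, required in GOAL_RULES.items() if required <= labels)

print(resolve_goals(["account", "balance"]))             # ['check_balance']
print(resolve_goals(["account", "balance", "transfer"]))  # both goals match
```

The "account" label contributes to two different goals here, which a strict one-label-per-goal scheme could not express.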

This same shared learning model is reflected in the way Finn AI aggregates data from every customer deployment, allowing new customers to hit the ground running with access to a significant store of banking data from day one.

Additional efficiencies are provided with Finn AI.Q, a suite of capabilities that enables the team to gather, train, and label data; deploy models in the wild; see how people are using the AI; discover new patterns; and feed all these learnings back into the system.

The Finn AI.Q Model


These continuous research and iteration cycles have allowed Finn AI to learn the language and the workings of banks. The team now understands local and regional nuances and how the core model can be adapted to suit them.

To gain this deep expertise from scratch would cost a bank millions of dollars. Partnering with Finn AI helps banks deliver high-performance Conversational AI at a fraction of the cost.

Building Conversational AI from scratch takes time

Siri, Cortana, Alexa, and Google Assistant are examples of very broad Conversational AI: they can tell you a little bit about a lot of things. While this works well in the consumer space, where the volume of people talking to the systems helps the models improve over time, it is harder to achieve high performance at the vertical level.

Many banks try to build bots in-house or use third-party toolkits, and these projects often end in failure. In 2018, Nordnet, a Swedish online bank, fired its AI employee, Amelia, because of its underwhelming performance. Amelia is not an isolated case: Gartner predicts that by 2020, 40% of the bot applications launched in 2018 will have been abandoned.

That’s because many of these bots are not using true natural language processing. The banks are starting their Conversational AI journey without the shared knowledge from the many successful deployments that Finn AI has already delivered.
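The gap between keyword-style bots and natural language processing can be illustrated with a toy comparison: a rule that fires only on an exact keyword misses paraphrases, while even a minimal statistical classifier generalizes from example utterances. The training phrases and intent names below are invented for this sketch:

```python
from collections import Counter
import math

# Tiny labeled training set: example utterances mapped to intents.
TRAIN = [
    ("what is my balance", "check_balance"),
    ("how much money do i have", "check_balance"),
    ("send money to my landlord", "transfer_funds"),
    ("pay 50 dollars to alice", "transfer_funds"),
]

def keyword_bot(text):
    # Rule-based: only fires on the literal keyword "balance".
    return "check_balance" if "balance" in text else None

def vectorize(text):
    # Bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def nlp_bot(text):
    # Nearest training utterance by bag-of-words cosine similarity.
    v = vectorize(text)
    return max(TRAIN, key=lambda ex: cosine(v, vectorize(ex[0])))[1]

query = "how much money do i have"
print(keyword_bot(query))  # None - the keyword rule misses the paraphrase
print(nlp_bot(query))      # check_balance
```

Production systems use far richer models, but the contrast holds: bots without real language understanding break on any phrasing their rules did not anticipate.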

To sum up, it takes a lot of time to build and train Conversational AI. Specialized partners like Finn AI have worked with many banks around the world, so they understand the language and nuances of your industry. The products and processes may differ, but the basics are the same and can be anonymized and replicated to create significant efficiencies. 

Learn more about the most common pitfalls and best practices when deploying Conversational AI assistants for your bank. Watch the on-demand webinar: Overpromised & Underdelivered – Common Misconceptions About Conversational AI for Banking.
Kenneth Conroy
Dr. Kenneth Conroy is the Vice-President of Data Science at Finn AI. He leads the development of our proprietary NLP system and leverages machine learning to enable intelligent communication through turn-based, conversational flow. When he is not busy leading the team of data scientists, Ken enjoys speaking about the application of AI at events and taking his newborn Boston Terrier for long walks on the beach.
Steve Zhu
Steven Zhu brings over two decades of hands-on experience in software engineering, enterprise architecture and management. Just prior to Finn AI, he was Director of Software Engineering at CA Technologies where he managed multiple technical teams to develop and deliver optimal models and solutions at scale. His vast portfolio of enterprise solutions experience also includes large organizations such as Boeing and Ritchie Bros.