One of the most common and potentially valuable use cases for generative AI is in customer support. Companies have long sought to automate call handling in call centers, but despite many proclamations of technological advances over the years, most of us still desperately press the 0 key or yell “agent” when we find ourselves connected to an automated system. Generative AI is clearly good at conversation, and so it seems a natural technology to apply to customer support functions. The primary task seems to be training it on an organization’s specific products, services, and typical support needs.
That’s what the Canadian software company Ada (after Ms. Lovelace, of course) is already doing for its customers in multiple industries. To find out more about what they’re up to in this regard, I spoke with CEO Mike Murchison and a couple of Ada’s customers. One of them, the personalized makeup supplier IPSY (perhaps needless to say, I am not a customer), has not yet converted to the generative AI version of Ada, but plans to do so in order to modernize its customer experience. The other, Wealthsimple—one of the most trusted financial institutions in Canada—is using generative AI from Ada, and is quite happy with it.
A Brief History of Ada
Mike Murchison has a good story about customer service and AI. He was an undergraduate student studying cognitive science at the University of Toronto, home of deep learning godfather Geoffrey Hinton. He became involved in software startups shortly after graduating, but they had big problems with customer service. Instead of learning from customers, he realized that companies were just trying to keep them at bay—what call centers refer to as “diversion and deflection.”
Because customer feedback makes software much better, Murchison became very curious about why his companies wanted to talk to customers less instead of more. He researched the issue by contacting many VPs of customer service, none of whom wanted their agents to talk to customers more. Intrigued by customer service, Murchison actually acted as a customer service agent for seven different teams over several years. He learned that 30% of customer inquiries are repetitive and mundane, and painful for agents to answer. He also learned that customer service experiences need to be great for everyone—the customer, the company, the agent, etc. Given his AI training, he was naturally interested in applying sophisticated natural language processing to customer support tasks.
Murchison co-founded Ada in 2016. At that time generative AI was not yet a thing, so Ada started with AI for customer intent classification. They initially used some third-party models, and then built their own, trained on the conversations between Ada’s corporate customers and those companies’ own customers. In 2019 they began to explore the use of large language models (LLMs)—not as a standalone capability, but to augment the efforts of “bot managers.” These are humans—typically former agents—who decide on the scripts that Ada’s software will use for non-generative (also known as “declarative”) AI customers like IPSY, or on the training conversations and prompts that generative AI Ada customers use. Anna Skidmore, the VP of Customer Care for IPSY, described the role of the bot manager:
Our bot manager refines the workflows, and can spend hours optimizing them. She maintains the knowledge base and the macros we input into Ada, and the curated responses generated back to our members. In the future, with generative AI, it will be able to crawl our website and knowledge base providing more curated responses, and the bot manager will evolve to being the AI coach when we make this shift.
The generative models for Ada sit as a layer on top of customer service transactional systems like Salesforce or Zendesk. Murchison said that many customers are now automatically resolving customer service inquiries with the language models in real time. Customers can also integrate their phone systems with Ada and the LLMs can answer any question about their company.
Ada works with a variety of LLMs, including those from OpenAI, Anthropic, and the Toronto-based Cohere. “We are agnostic to the particular foundation model,” Murchison said. What they do need to do, however, is to fine-tune the model with a company’s own content—knowledge base articles, agent scripts, corporate data, etc. It is a process of teaching the AI employee what to say, how to behave, and how to get access to the systems it needs to help customers.
The models are trained by prompts in the form of questions and answers, which are then converted into vector embeddings and stored in a vector database. The vector embeddings condense the data but preserve contextual relationships in the content. When a user enters a prompt into the system, a similarity algorithm determines which vectors should be submitted to the LLM in order to elicit the most relevant answers. The resulting models yield minimal hallucination, and get smarter over time with prompts. Ada has built a lot of safety filters into the process so that proprietary customer content doesn’t get incorporated into public LLMs.
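The retrieval step described above can be sketched in a few lines of Python. This is a minimal illustration, not Ada’s actual implementation: the `embed` function here is a toy bag-of-words stand-in for a real embedding model, and the in-memory list stands in for a vector database.

```python
# Sketch of retrieval over embedded Q&A pairs: embed the stored content,
# then rank it by similarity to a user prompt so the best matches can be
# passed to an LLM as grounding context.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts with punctuation stripped. A real system
    # would call an embedding model and store dense vectors in a vector DB.
    return Counter(w.strip("?,.'\"") for w in text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-in "vector database": Q&A pairs with precomputed embeddings.
knowledge_base = [
    ("How do I reset my password?",
     "Use the 'Forgot password' link on the login page."),
    ("How do I transfer money?",
     "Go to Transfers, choose your accounts, and confirm."),
    ("What are your support hours?",
     "Chat support is available 24/7."),
]
index = [(embed(q), q, a) for q, a in knowledge_base]

def retrieve(user_prompt: str, k: int = 2):
    # Rank stored Q&A pairs by similarity to the user's prompt; the top-k
    # pairs would be sent to the LLM along with the prompt.
    query = embed(user_prompt)
    ranked = sorted(index, key=lambda item: cosine(query, item[0]),
                    reverse=True)
    return [(q, a) for _, q, a in ranked[:k]]

context = retrieve("I forgot my password, how can I reset it?")
print(context[0][0])  # the password-reset Q&A ranks highest
```

Because only the most similar stored content reaches the model, the LLM answers from curated company material rather than from its general training data, which is what keeps hallucination low.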
Murchison described the customization process with Ada:
Underneath the hood, our customers’ generative models are customized many times per day as their Bot Managers use Ada to teach their AI to improve and their upwards of millions of customers interact with their AI. There are three broad buckets of customization: knowledge, actions and guidance. Knowledge refers to what content your AI is an expert in. Actions refers to what business systems your AI is integrated to so it can do things on behalf of customers like processing a payment or issuing a refund. And guidance refers to performance feedback given to your AI that mirrors the feedback you’d give a direct report.
This is all in service of an automated resolution that is relevant, safe, and accurate. The success of the resolution can be measured; today Ada customers automatically resolve an average of 40% of customer inquiries without human intervention, but some customers who manage their AI agents particularly aggressively and well can approach 70% resolution.
Ada at Wealthsimple
Wealthsimple, which offers various financial services like managed investing, do-it-yourself trading, and regulated crypto trading to more than 3 million Canadians, uses Ada’s generative product for its customer support chat channel. Paul Teshima, the Chief Customer Experience Officer there, says that it works very well:
Clients come in and ask questions on the chat. The generative agent responds, answers questions, and offers general financial advice. Ada built an interface to examine chats, ratings, and scores, so we can audit the experience for our clients with their metrics. There was an initial setup period where Ada got access to our content and learned from it, and that continues to be refined. But from my perspective it went much faster than we expected and has had better results.
Wealthsimple’s management team is enthusiastic about LLMs overall, and has even built their own generative model with OpenAI. It has a “proxy” between the user and the LLM to mitigate and manage data sharing and prevent Wealthsimple or client data from being used to train OpenAI models. Wealthsimple is exploring the idea of using an LLM to generate text-based financial plans.
The Future of Human Support Agents
Teshima says that Wealthsimple has no plans to replace human agents with Ada; “We want to be the most human financial services company in the world,” he said. Ada answers standard questions and can perform simple transactions like transferring money, updating addresses, and providing statements. Humans, Teshima said, are left to do the more sophisticated and difficult tasks like building trust and helping clients make smart decisions about investing. Ada handles about 70% of client inquiries, but clients can reach a human agent at any time. At some point in the future, Ada will also be used to coach and train human agents.
Skidmore at IPSY also said that there are absolutely no plans to reduce the number of human support agents at the company as a result of their work with AI. She did say that the fear of job loss for people is always there, but the goal is a different role for human agents, not job loss. Skidmore tells agents that AI will elevate their roles to performing more strategic functions and fewer tactical ones, enabling better customer experiences. IPSY is ready to take agents beyond service and incorporate them into the entire customer journey, for example by using proactive chat to promote offerings and shopping events. Skidmore expects that generative AI will be a great partner to agents when it comes to personalization and driving more seamless experiences.
It’s great that these two companies don’t plan to replace human agents with generative AI tools. However, both companies are growing and seem to have plenty of other tasks for human agents to perform. In companies that are attempting to lower their costs, it seems likely that human call center agents will shrink in number as generative AI is implemented. I’m not sure that anyone should encourage their children or grandchildren to become customer support agents in the age of generative AI. Instead, considering a role in AI development or management is likely to be a safer career bet.