Remo Peduzzi, CTO, PensionDynamics.
As the world becomes increasingly digital, companies are constantly looking for innovative ways to improve customer experience. Recent developments in AI have provided some groundbreaking opportunities.
While deterministic approaches have been the norm for decades, they have delivered limited results. Such chatbots use rule-based decision trees to respond, with predefined inputs and outputs. If a question doesn’t match existing rules, the chatbot isn’t able to answer, causing customer frustration and negative brand perception. The first well-known deterministic chatbot was ELIZA, developed by Joseph Weizenbaum at MIT in the mid-1960s. ELIZA used pattern-matching techniques to simulate conversation.
Generative AI models like GPT-3.5 have shown promising results. GPT-3.5 is an advanced AI language model that generates human-like answers to any question entered by the user. So far, it appears to be best in class across various natural language processing tasks, with promising results in text summarization, translation, question-answering and creative content generation.
I have been working with this technology for the last three months, focusing on generating text and source code. Currently, I am developing a proof of concept for a pension fund support chatbot using OpenAI's API-based services. In this article, I will share some of my experiences using GPT-3.5, what it does differently than deterministic chatbots and how companies can begin their journey with this emerging technology.
The Limitations Of Deterministic Approaches
Deterministic chatbots use pre-programmed, rule-based algorithms to respond to customer inquiries. These chatbots use decision trees, knowledge bases or predefined questions and answers. In my experience, though, these systems have caused a few issues that generative AI could help to overcome.
1. Inflexibility: Deterministic systems are built on fixed sets of rules and pathways. If customers stray from these paths, they likely won't get satisfying answers.
2. Scalability: The costs of updating and maintaining rule-based systems scale poorly: the effort grows linearly with the size of the rule set, and each new rule must be defined and integrated into the solution carefully. As a consequence, customer support will likely always be a step behind the business.
3. Lack Of Personalization: Most deterministic systems cannot tailor responses to the individual customer; for example, they typically cannot answer support questions in multiple languages.
The AI Revolution In Customer Support
GPT-3.5 is an advanced AI model developed by OpenAI. Its text-generation abilities have the potential to revolutionize the field of customer support.
Based on my experience over the last few months, GPT-3.5 can understand and respond to customer inquiries with remarkable accuracy and nuance by leveraging machine learning and natural language processing. Here are a few ways that companies can look to it to improve on the classic approach of communicating with customers through AI.
1. Adaptability: Text-generating AI doesn’t have a fixed set of rules or pathways. Instead, it leverages its extensive training data to generate contextually appropriate responses to customer questions.
2. Scalability: Companies no longer need to manually maintain rules for their customer support chatbots. The bot can be improved continuously with the help of new questions.
3. Personalization: These AI-based customer support systems can answer questions in more than fifty languages.
4. Cost-Effectiveness: Implementing AI-based customer support systems can lead to cost savings, as fewer humans need to be involved in the process.
Getting Started With Generative AI
The benefits of such customer support use cases aren't just theoretical. Numerous companies—including Meta, Canva and Shopify—are already using ChatGPT-based technology for customer support.
My company has already created a minimum viable product (MVP) to try out these new capabilities. Our project involves using existing PDF documents from pension funds—such as regulations, fact sheets, etc.—as a source for the chatbot. We extract context by selecting text passages from the PDFs that are similar to the user's question.
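The context-selection step described above can be sketched in a few lines. This is a deliberately simplified stand-in—it scores passages with a plain bag-of-words cosine similarity, whereas a production system would more likely use embedding vectors—and the function names are my own, not part of any library:

```python
# Hypothetical sketch: rank extracted PDF passages by similarity to a question.
# A real system would use embeddings; bag-of-words keeps the idea self-contained.
import math
import re
from collections import Counter

def _vectorize(text: str) -> Counter:
    # Lowercase word counts as a crude term-frequency vector.
    return Counter(re.findall(r"[a-zäöü]+", text.lower()))

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def top_passages(question: str, passages: list, k: int = 2) -> list:
    # Keep the k passages most similar to the question as chatbot context.
    q = _vectorize(question)
    ranked = sorted(passages, key=lambda p: _cosine(q, _vectorize(p)), reverse=True)
    return ranked[:k]
```

The selected passages are then handed to the language model as context alongside the user's question.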
In my experience, the MVP performs well for questions explicitly explained in the documents but struggles when information isn’t clearly stated in the underlying sources, leading to inaccurate answers. Our next step will be to train the system to improve its understanding and response quality for such cases.
Companies looking to use GPT-3.5, in particular, for customer service should start with OpenAI's documentation and examples, focusing on Python due to its widespread presence in data science. Most libraries and resources for this technology target Python, so firms should prefer it over Java or C# for this purpose.
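As a starting point, a support-bot call against OpenAI's chat API can look like the sketch below. It assumes the `openai` Python package (the pre-1.0 interface with `openai.ChatCompletion.create`) and an API key in the `OPENAI_API_KEY` environment variable; the helper names and the system prompt are illustrative, not from any official example:

```python
# Minimal sketch of a GPT-3.5 support-bot call (assumes the `openai` package,
# pre-1.0 interface, and an API key in the OPENAI_API_KEY environment variable).
import os

def build_messages(question: str, context: str) -> list:
    """Combine retrieved document context and the user's question into a chat prompt."""
    system = (
        "You are a pension fund support assistant. Answer only from the "
        "provided context; say so if the answer is not in the context."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

def ask_support_bot(question: str, context: str) -> str:
    import openai  # deferred so the prompt helper works without the package installed
    openai.api_key = os.environ["OPENAI_API_KEY"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_messages(question, context),
        temperature=0,  # favor consistent answers for support use cases
    )
    return response["choices"][0]["message"]["content"]
```

Grounding the prompt in retrieved context, as above, is what keeps the model's answers tied to the company's own documents rather than its general training data.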
Companies should also anticipate challenges in adapting GPT to specific use cases and prepare to invest time in refining and training the model for improved performance.
Conclusion
While deterministic customer support systems have been the norm for decades, I have found them to now be limited in ways—such as inflexibility, limited scalability and lack of personalization—that can lead to unsatisfied customers. AI-based approaches are promising because of their ability to solve these challenges and for their cost advantage.
That said, when getting started with generative AI, I would suggest building your own MVPs and ensuring that you are able to overcome existing challenges like refining and training the model.
Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives.