Everyone involved in designing CX processes knows about ChatGPT. This generative artificial intelligence system released at the end of 2022 by OpenAI has caught the public imagination so much that sometimes it feels as if AI has only been with us for a year.
ChatGPT was a breakthrough. It was the moment AI became appreciated by the general public rather than just computer scientists. Technical teams have been improving AI for decades, but nothing they developed became so popular so quickly. Search Google for ‘ChatGPT’ today and it will return over a billion results.
Many companies designing CX systems have started flying the ChatGPT flag. They are boasting about how ChatGPT can improve customer chatbots and how it can be used inside the contact centre to manage administrative tasks.
This is all true, but there is a problem. ChatGPT is built on a Large Language Model (LLM) that is absolutely enormous. The LLM encodes the body of knowledge used to train the system: GPT-4 is reported to have about one trillion parameters, compared to around 175 billion in GPT-3.5. Each parameter is a number that, combined with the model’s algorithms, captures a tiny piece of the relationships between words.
To simplify this, it’s a bit like training a chatbot on all the knowledge in Wikipedia, every digital book available online, and every website. It’s vast.
This works well for general questions. If you ask the chatbot to summarize a document in the style of the Financial Times, it knows what you mean. If you ask about the history of Thor and Odin before the Marvel movies, you will learn all about Norse mythology. If you ask why London was founded on the River Thames, it will know.
But when your new TV does not connect to the internet, and you need advice on how to set it up correctly for your chosen internet provider, then it is highly likely that ChatGPT was never trained on this specific product-focused information.
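To make the contrast concrete, here is a minimal sketch of the kind of narrow, product-focused knowledge lookup that a specialized customer service system is built around. The product topics and answers are hypothetical examples for illustration only, not any vendor's actual implementation.

```python
# Hypothetical sketch: a tiny product knowledge base with keyword-overlap
# matching - the narrow, domain-specific lookup a Specialized LLM builds on.

def best_match(question: str, knowledge_base: dict[str, str]) -> str:
    """Return the stored answer whose topic keywords overlap most with the question."""
    question_words = set(question.lower().split())
    scored = [
        (len(question_words & set(topic.lower().split())), answer)
        for topic, answer in knowledge_base.items()
    ]
    score, answer = max(scored)
    return answer if score > 0 else "Escalate to a human agent."

# Hypothetical product FAQ entries, keyed by topic keywords.
kb = {
    "tv wifi setup internet connection": "Open Settings > Network, select your provider's router, then enter the Wi-Fi password.",
    "tv remote pairing": "Hold the pairing button for five seconds until the LED blinks.",
}

print(best_match("My new TV will not connect to the internet", kb))
```

A general-purpose model has no access to this product-specific material unless it is deliberately supplied; a specialized system starts from it.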
So I believe we need to adopt a more focused approach - a Specialized LLM - when training the AI systems we use to interact with customers. The key benefits of using one for customer service are privacy, security, speed, accuracy, cost savings, and an improved customer experience.
The real problem is that we have created some fantastic tools, but using a general-purpose LLM for a customer support chatbot is a bit like using a sledgehammer to crack a walnut. We have over-engineered the solution: it will be incredibly expensive, and it will not even work as well as a more focused alternative.
So, if you are thinking about deploying a customer service solution that uses AI, then think carefully about a few questions, such as:

- How much will it cost to train and run the model?
- What is the environmental impact of operating a model this large?
- Does the model have detailed knowledge of your industry and products?
- What will your customers actually be asking it?
When you think about your AI solution like this, it becomes much more obvious that a Specialized LLM will be preferable. It will be cheaper, it will have less of an impact on the environment, and it will have detailed knowledge about your industry domain and products. After all, customers are probably going to be asking this AI about how to get help with your products - not just having a general conversation about world history.
General-purpose LLMs are usually too broad and too expensive for customer service solutions. Don’t fall into the trap of believing the media hype that ChatGPT can answer any question. If you are tempted to believe it is that powerful, try asking it some detailed support questions about your own products today.
Then, ask a CX specialist how much better a Specialized LLM would work.
Jonas Berggren joined Transcom in 2020 as Head of Business Development Northern Europe. Prior to this, Jonas was co-founder and partner of Feedback Lab by Differ. Earlier in his career, Jonas held the position of CEO at Teleperformance Nordic.