Four Safe Applications Of Generative AI In The Contact Center
The following post was originally published Sep 17, 2024, 9:45 am ET by Forbes Technology Council.
When generative AI (GenAI) burst onto the scene with the arrival of ChatGPT, it became immediately apparent that this technology would have immense applications in the contact center. After all, ChatGPT was the best chatbot the world had ever seen. One could easily connect the dots and believe it would be applied to enterprise use cases, replacing existing chatbots and removing the need for human agents to handle customer interactions.
In reality, we haven’t seen this happen yet at a large scale. The main reason is hallucination: the tendency of these models to make up answers not supported by their training data. This puts brands at risk of harming customer experiences and can even expose them to financial liability. The recent case in which Air Canada was held liable for a discount policy its chatbot invented has created some hesitation about going all-in on this technology.
While the industry works on techniques to improve accuracy, including retrieval-augmented generation (RAG), there are many other uses of GenAI in the contact center that are safe to deploy today. Safety can be achieved through any of the following techniques:
- Adding human oversight to the results of AI.
- Limiting the usage of AI output to company employees and not end customers.
- Post-processing AI output and mapping it into one of a set of classes, so that hallucinations get turned into classification errors.
- In combination with #3 above, aggregating results across many interactions, so that individual errors show up only as a small margin of error in the aggregate statistics.
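Technique #3 can be sketched in a few lines. This is a minimal illustration, not a production implementation: the raw model output would come from a real GenAI API, and the sentiment labels here are assumptions chosen for the example. The key point is that any output not matching an allowed class degrades into a classification error rather than reaching anyone as hallucinated text.

```python
# Post-processing free-text model output into a fixed set of classes.
ALLOWED_SENTIMENTS = {"satisfied", "neutral", "dissatisfied"}

def to_class(raw_output: str, allowed: set, fallback: str = "unknown") -> str:
    """Map raw model text onto an allowed class; anything else becomes
    the fallback label, so a hallucination is just a classification error."""
    cleaned = raw_output.strip().lower().rstrip(".")
    return cleaned if cleaned in allowed else fallback

print(to_class("Satisfied.", ALLOWED_SENTIMENTS))                           # satisfied
print(to_class("The customer seemed happy, I think!", ALLOWED_SENTIMENTS))  # unknown
```

In practice the fallback class would be tracked and reported, so teams can see how often the model fails to produce a usable label.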
With these tools in hand, here are four safe applications of GenAI in the contact center:
1. Live Call Transcription For Agents
This feature provides agents with a live transcript of voice calls with customers as they are talking. These transcripts are only ever shown to the agent, never the customer. Agents use them to speed up the conversation, avoiding the need to ask customers to repeat themselves. They also help agents for whom English is not their first language by supplementing what they hear with text, increasing comprehension.
This application is safe because it uses two of the techniques above — it adds human oversight and limits the usage of output to company employees.
2. Post-Call Summarization
Arguably the "killer app" for GenAI in the contact center, this feature has seen rapid adoption. Traditionally, after a call is complete, agents are tasked with writing notes about the call to add to a case management system. These notes serve many purposes.
One of them is helping the next agent should a customer call back. If these notes were complete, accurate and brief, the next agent to handle the customer wouldn’t need to make that customer repeat their whole story again. Unfortunately, it often takes too long (and thus is too expensive) for agents to produce notes like this. The result — the experience consumers have today — is that they do need to repeat themselves to the next agent.
Instead of agents writing these notes from scratch, GenAI can take the transcript of the call and create a concise summary. This takes just a few seconds. The summary is shown to the agent, who then reviews it, makes any edits to correct errors or hallucinations, and uploads it to the case management system after the call.
Like live transcripts, this AI application is safe because it adds human oversight and limits the usage of output to company employees.
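The summarize-then-review flow described above can be sketched as follows. This is a hypothetical outline: `call_model` stands in for a real GenAI API, the prompt wording is an assumption, and the agent's review step is reduced to a callback.

```python
# Sketch of the draft-summary / agent-review workflow.
SUMMARY_PROMPT = (
    "Summarize this customer service call in three sentences or fewer, "
    "covering the customer's issue, what was done, and any follow-up:\n{transcript}"
)

def draft_summary(transcript: str, call_model) -> str:
    """Ask the model for a draft summary of the call transcript."""
    return call_model(SUMMARY_PROMPT.format(transcript=transcript))

def finalize(draft: str, agent_review) -> str:
    """Human oversight: the agent reviews (and may correct) the draft
    before it is uploaded to the case management system."""
    return agent_review(draft)

# Usage with a stubbed model and an agent who approves the draft as-is:
stub_model = lambda prompt: "Customer reported a billing error; agent issued a credit."
approved = finalize(draft_summary("(...call transcript...)", stub_model), lambda d: d)
print(approved)
```

The safety property lives in `finalize`: nothing reaches the system of record without an agent in the loop.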
3. AI Analytics
Another key application for GenAI is analytics. GenAI can analyze conversations between consumers and brands and extract key insights, including the topics that were discussed, whether the customer was satisfied with the interaction, whether the conversation resolved the customer issue and so on.
The results of such analysis can be aggregated across many calls. This gives brands answers to key questions like, "What percentage of my calls are resulting in satisfied customers?"
This provides insights that brands can use to improve their products and services. It achieves safety by using three of the techniques above — limiting usage of outputs to employees, classification of outputs and aggregation.
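Aggregation can be illustrated with a short sketch. Assuming per-call sentiment labels have already been produced by classification (including an "unknown" class for unusable outputs), the aggregate answer carries an explicit margin of error, so occasional per-call mistakes wash out rather than mislead.

```python
import math

def satisfaction_rate(labels):
    """Aggregate per-call sentiment classes into a satisfied fraction,
    with a rough 95% margin of error (normal approximation)."""
    n = len(labels)
    p = sum(1 for label in labels if label == "satisfied") / n
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, moe

# Example: 100 calls, some of which the classifier could not label.
labels = ["satisfied"] * 72 + ["dissatisfied"] * 18 + ["unknown"] * 10
rate, moe = satisfaction_rate(labels)
print(f"{rate:.0%} satisfied (±{moe:.1%})")  # 72% satisfied (±8.8%)
```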
4. GenAI Intent Detection
GenAI can be used for self-service use cases without giving the AI complete control of the conversation. A voice or chatbot can take something spoken or typed by the customer and determine their intent using a GenAI model. In this case, the intent is one of a finite set of intents that are supported by the chatbot.
Once the intent is detected, the response from the bot for each intent is crafted by a human. The process of intent detection in bots has historically relied on pre-GenAI models that needed to be trained for each specific use case, which increased both costs and deployment time. GenAI eliminates the need for model training, reducing costs and speeding up deployment.
This use case is safe because it involves post-processing AI output by mapping it into one of a set of classes so that hallucinations get turned into classification errors.
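The intent-detection pattern can be sketched as below. This is an assumption-laden example: `call_model` stands in for a real GenAI API, and the intent names are invented for illustration. Any reply outside the supported set is mapped to a safe human-crafted path instead of being trusted.

```python
# Constraining a GenAI model to a finite set of supported intents.
INTENTS = ["check_balance", "reset_password", "cancel_order", "talk_to_agent"]

def build_prompt(utterance: str) -> str:
    """Ask the model to pick exactly one supported intent."""
    return (
        "Classify the customer's request into exactly one of these intents: "
        + ", ".join(INTENTS)
        + f". Reply with the intent name only.\nCustomer: {utterance}"
    )

def detect_intent(utterance: str, call_model) -> str:
    reply = call_model(build_prompt(utterance)).strip().lower()
    # Validate: a hallucinated reply becomes a fallback, not a new behavior.
    return reply if reply in INTENTS else "talk_to_agent"

# Usage with a stubbed model:
print(detect_intent("I forgot my login", lambda p: "reset_password"))  # reset_password
print(detect_intent("???", lambda p: "grant a full refund now"))       # talk_to_agent
```

Because the bot's response to each intent is authored by a human, the model never generates customer-facing text directly.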
To summarize: even if, as an enterprise technology manager, you are wary of deploying GenAI chat and voice bots directly to your customers, that doesn’t mean you should do nothing at all. The four use cases outlined here should all be on your radar as available, safe, and cost-effective strategies today.