
Tailoring Generative AI: Making AI Work for Your CX 

Jonathan Rosenberg, Chief Technology Officer

Jonathan Rosenberg is the Chief Technology Officer and head of AI at Five9. Jonathan has dedicated his career to transforming the telecommunications industry and joins Five9 from Cisco where he was CTO for the Collaboration Technology Group (CTG). Jonathan is also well known for his authorship of the SIP protocol, which is the foundation for modern IP-based telecommunications. Prior to Cisco, Rosenberg was the Chief Technology Strategist at Skype, where he guided the company’s technology strategy.

The Five9 Genius AI process has four steps: 1. Listen, 2. Analyze, 3. Tailor, and 4. Apply. Each of these steps is accomplished through a combination of technologies in the Five9 Intelligent CX Platform, and our experts help customers use these technologies to accomplish their goals. The part of this process that I am most excited about is Tailor. This isn’t about hemming a pair of slacks, of course – it is something completely new that is only possible with generative AI.

The Traditional Approach to Training AI Models

AI has been around long enough that everyone has become familiar with the idea of training AI models. The purpose of training a model is to use data to get the model to perform a particular task well. Restated in business terms, the purpose of training a model is to harness data to deploy AI for use cases specific to your business – which are different from those of other businesses. For example, say you want to build a self-service bot to streamline the creation and tracking of shipments. To do this, you need an AI model that can recognize speech related to these use cases (e.g., tracking numbers, city names, and so on) and understand customer intents (check status, report a lost package, ask for a new shipment). Training is the technical process that adjusts the AI model to achieve these business-specific goals.
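To make this concrete, here is a minimal sketch of what that offline training step might look like, assuming scikit-learn and a handful of hand-labeled utterances for the shipping example. The utterances, intent labels, and library choice are illustrative assumptions, not Five9's actual training pipeline.

```python
# A minimal sketch of the traditional, offline approach: train a small
# intent classifier on hand-labeled example utterances. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
import joblib

# Hand-collected training data specific to the shipping use case.
utterances = [
    "where is my package",
    "track order 1Z999AA10123456784",
    "my shipment never arrived",
    "I think my package is lost",
    "I need to send a new package",
    "create a shipment to Chicago",
]
intents = [
    "check_status",
    "check_status",
    "report_lost_package",
    "report_lost_package",
    "new_shipment",
    "new_shipment",
]

# Offline training step: fit the model ahead of deployment.
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(utterances, intents)

# Persist the trained ("locked in") model for real-time use later.
joblib.dump(pipeline, "intent_model.joblib")
```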

Historically, training required a lot of data specific to those use cases. You needed recorded utterances of people speaking about tracking numbers. You needed example transcripts of people reporting lost packages. And this process was offline, meaning that it was done ahead of implementation and, once deployed, the model was “locked in.” You could train it again in the future, but you certainly did not retrain it every day, let alone for every call. Once trained, you could use that model in real time to take a customer utterance, something like “I need help figuring out where my package is right now”, and then produce an output. In our example, the output would be an indicator that this is a request to track shipping status.

This process is shown in the figure below. 

[Figure: the traditional AI model training process – offline training on use-case data, then real-time use]
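Below is the matching sketch of the real-time half of that traditional flow: the frozen model produced by the training sketch above is loaded and applied to each incoming utterance. The saved file name comes from the previous sketch; the predicted label in the comment is what we would expect, not a guarantee.

```python
# A minimal sketch of real-time use of a model trained offline.
# Assumes "intent_model.joblib" was produced by the training sketch above.
import joblib

pipeline = joblib.load("intent_model.joblib")  # the "locked in" model

utterance = "I need help figuring out where my package is right now"
intent = pipeline.predict([utterance])[0]
print(intent)  # expected to be "check_status" for this utterance
```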

Generative AI has changed all of this.  

Game-Changing Impact of GenAI on AI Model Training

Businesses still have the same underlying need. They need to take a generic model like GPT-3.5 or Llama 3 and make it work in a way that is unique to their business. With GenAI, the way we do this has changed. We have a new capability we didn’t have before – Large Language Model (LLM) prompting. A prompt is an input into the model, which the LLM then evaluates in real time to generate a response. These prompts can be huge, containing massive amounts of data. This means we can now perform our customization, which we call Tailoring, by putting data into the prompt and combining it with customer input. This new process is shown in the figure below.

[Figure: tailoring with generative AI – contextual data is combined with the customer input in the prompt]
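To make prompting concrete, here is a rough sketch that puts contextual business data and the customer's utterance into one prompt and lets a generic LLM evaluate it in real time. The OpenAI client, model name, and order records are illustrative assumptions for the sketch, not the Five9 implementation.

```python
# A minimal sketch of tailoring through prompting: contextual data goes
# straight into the prompt instead of into a training run. Illustrative
# only; assumes the openai package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Contextual data that, pre-GenAI, would have required training a model.
contextual_data = (
    "Known intents: check_status, report_lost_package, new_shipment.\n"
    "Customer's recent orders:\n"
    "- Order 48213, tracking 1Z999AA10123456784, status: in transit, ETA Friday.\n"
)

customer_utterance = "I need help figuring out where my package is right now"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "system",
            "content": "You are a shipping self-service agent. "
                       "Answer using only the data below.\n" + contextual_data,
        },
        {"role": "user", "content": customer_utterance},
    ],
)
print(response.choices[0].message.content)
```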

In this approach, when an utterance is spoken by the customer, the Five9 engine performs an operation called prompt construction. It fetches the right contextual data on demand and dynamically constructs the prompt by combining the input utterance with the selected contextual data. The AI model evaluates this prompt and supplies an output. How does the prompt-construction engine know how to create the prompt? Through a new offline process called prompt design, where the sources of contextual data are specified, along with static prompt elements like instructions and guardrails. This is what GenAI Studio does.
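Here is a hypothetical sketch of that split. Prompt design happens once, offline, and captures the static instructions, guardrails, and the names of the contextual data sources; prompt construction runs at call time, fetching that context and combining it with the utterance. The PromptDesign structure, fetch_context helper, and canned CRM records are invented for illustration and are not GenAI Studio's actual interfaces.

```python
# A minimal sketch of offline prompt design vs. real-time prompt construction.
# All names and data here are hypothetical illustrations of the pattern.
from dataclasses import dataclass, field

@dataclass
class PromptDesign:
    """Offline artifact: static instructions, guardrails, and the names of
    the contextual data sources to pull in at call time."""
    instructions: str
    guardrails: str
    context_sources: list = field(default_factory=list)

# --- Offline: prompt design, done once ahead of deployment ---
design = PromptDesign(
    instructions="You are a shipping self-service agent. Resolve the caller's request.",
    guardrails="Only discuss shipments. Never reveal other customers' data.",
    context_sources=["crm_profile", "recent_orders"],
)

# Stand-in for real data connectors; returns canned records here.
def fetch_context(source: str, customer_id: str) -> str:
    fake_store = {
        "crm_profile": f"Customer {customer_id}: Pat Lee, premium tier.",
        "recent_orders": "Order 48213: in transit, ETA Friday.",
    }
    return fake_store.get(source, "")

# --- Real time: prompt construction for each customer utterance ---
def construct_prompt(design: PromptDesign, customer_id: str, utterance: str) -> str:
    context = "\n".join(fetch_context(s, customer_id) for s in design.context_sources)
    return (
        f"{design.instructions}\n"
        f"Guardrails: {design.guardrails}\n"
        f"Context:\n{context}\n"
        f"Customer says: {utterance}"
    )

print(construct_prompt(design, "C-1001", "Where is my package right now?"))
```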

We have eliminated the up-front training process. The only offline process is prompt design, and it is much simpler and faster. More importantly, we are no longer using training data to train the model ahead of usage. Instead, we provide contextual data that is processed in real time. Best of all, because the contextual data is pulled in on demand, we can personalize it to the specific consumer and use case. This is something we could not do before generative AI.

In summary, the ability to tailor GenAI models to specific use cases is critical to applying them in CX. Five9 GenAI Studio lets customers do that by leveraging the contextual data present across the Five9 Platform.

