Dialogue Understanding
Dialogue understanding is a key component of CALM: it enables the assistant to understand the user's inputs and how the user wants to progress the conversation. In this chapter, we will dive deeper into the dialogue understanding component, how it works, and how you can configure it.
Dialogue Understanding in CALM
Dialogue Understanding works very differently from NLU-based systems. If you have built assistants with NLU-based systems, you are familiar with the approach of processing the user's inputs and representing them as predicted intents and entities. While the NLU-based approach simplifies the task of understanding user inputs, it makes it very difficult to scale assistants built on top of it. First, it requires large amounts of training data to train the classification models and dialogue management components. On top of that, it becomes increasingly difficult to introduce new intents because they end up too similar to existing ones.
Dialogue Understanding in CALM uses a different, intentless, approach. Instead of interpreting each message in isolation, it takes a generative approach to the problem. By taking the whole context of the conversation and business logic into account, it generates a set of commands the assistant should run to progress the conversation.
The CommandGenerator component
The component responsible for Dialogue Understanding in CALM is the CommandGenerator. It aims to represent how the user wants to progress the conversation to achieve their goal. At the moment, there are two command generators available:
- LLMCommandGenerator
- NLUCommandAdapter
Overview of the LLMCommandGenerator
LLMCommandGenerator leverages Large Language Models (LLMs) to interpret the user's inputs. For that reason, this command generator needs access to an LLM API of your choice. You can use any OpenAI model that supports the /chat endpoint, such as gpt-3.5-turbo or gpt-4. You can configure LLMCommandGenerator in your config.yml file as follows:
```yaml
pipeline:
  # - ...
  - name: LLMCommandGenerator
  # - ...
```
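If you want to pin a specific model rather than rely on the default, you can pass an llm configuration to the component. The exact keys depend on your Rasa version and LLM provider; the sketch below assumes OpenAI-style model_name and temperature parameters:

```yaml
pipeline:
  # - ...
  - name: LLMCommandGenerator
    llm:
      model_name: gpt-4   # model used to generate the commands (assumed OpenAI-style key)
      temperature: 0.0    # deterministic output is usually preferable for command generation
  # - ...
```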
LLMCommandGenerator takes the user input and produces a set of commands that represent how the user wants to progress the conversation. In some cases, the right command might be starting a specific flow:
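For illustration, assuming a hypothetical transfer_money flow is defined, a message that clearly maps to one flow yields a single StartFlow command (the notation below is illustrative):

```
User: "I'd like to send some money to a friend."
Commands: StartFlow(transfer_money)
```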
In other cases, it might be setting a specific slot:
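For instance, if the assistant has just asked how much to transfer (assuming a hypothetical amount slot):

```
User: "50 dollars"
Commands: SetSlot(amount, 50)
```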
Or, it might be a set of commands. For example, setting a slot and starting a flow:
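Using the same hypothetical flow and slots, a single message can carry both pieces of information at once:

```
User: "Send 50 dollars to John"
Commands: StartFlow(transfer_money), SetSlot(amount, 50), SetSlot(recipient, John)
```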
LLMCommandGenerator is really powerful when it comes to handling ambiguous user inputs. For example, if the user starts the conversation with the single word "card", an NLU-based system would struggle to classify this input correctly. For an LLM-based system, it is easy to ask for a clarification and then proceed with fulfilling the user's request:
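In command terms, the generator can emit a Clarify command listing the flows the input could plausibly refer to (the flow names here are hypothetical):

```
User: "card"
Commands: Clarify(block_card, order_new_card)
Bot: "Would you like to block your card or order a new one?"
```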
Under the hood, LLMCommandGenerator renders a prompt that is then sent to the LLM to produce the output with the commands. The prompt is built from a template consisting of a static component and a dynamic component, which is filled in with the following details:
- Current state of the conversation - This part of the template captures the ongoing dialogue.
- Defined flows and slots - This part of the template provides the context and structure for the conversation. It outlines the overarching theme, guiding the model's understanding of the conversation's purpose.
- Active flow and slot - Active elements within the conversation that require the model's attention.
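To make this concrete, here is a minimal sketch of what such a template could look like. The variable names are hypothetical placeholders; the actual template shipped with Rasa differs and can be inspected via the documentation:

```jinja2
Your task is to analyze the conversation and generate commands to advance it.

These are the flows you can start, with their descriptions:
{{ available_flows }}

This is the conversation so far:
{{ current_conversation }}

The active flow is "{{ current_flow }}", currently collecting the slot "{{ current_slot }}".

Generate the commands for the latest user message.
```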
Defined flows, especially their descriptions, play a vital role in the performance of your assistant. The Dialogue Understanding component uses the flow descriptions to decide when to start a specific flow. If you notice that a specific flow is triggered when it shouldn't be, go back to the descriptions of the flows you defined and tweak them by making them more descriptive and concise.
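For example, a flow with a clear, specific description (a sketch using the hypothetical transfer_money flow from above):

```yaml
flows:
  transfer_money:
    description: Send money from the user's account to a friend or family member.
    steps:
      - collect: recipient
      - collect: amount
```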
You can also customize the template itself by defining which variables should be used to incorporate dynamic information. For more details on that, see the documentation.
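For example, you can point the component at your own template file. The sketch below assumes a Jinja2 template stored at prompts/command-generator.jinja2 in your project:

```yaml
pipeline:
  - name: LLMCommandGenerator
    prompt: prompts/command-generator.jinja2
```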
Overview of the NLUCommandAdapter
Another CommandGenerator you can consider using for building your assistant is the NLUCommandAdapter. It relies on the traditional NLU-based approach and requires an intent classifier component configured in your pipeline:

```yaml
pipeline:
  # - ...
  - name: NLUCommandAdapter
  # - ...
```
NLUCommandAdapter relies on traditional classification to classify intents and extract entities. It then uses the predicted intents and extracted entities to find a flow with a corresponding NLU trigger defined.
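A flow opts into this mechanism through its nlu_trigger property. The sketch below reuses the hypothetical transfer_money flow and assumes a matching intent; check the documentation for the exact schema in your Rasa version:

```yaml
flows:
  transfer_money:
    description: Send money from the user's account to a friend or family member.
    nlu_trigger:
      - intent:
          name: transfer_money
          confidence_threshold: 0.8
    steps:
      - collect: recipient
      - collect: amount
```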
When to use the NLUCommandAdapter?
You can consider using the NLUCommandAdapter in two scenarios:
- You want to migrate an existing bot that relies on NLU and stories/rules to the CALM paradigm. Using the NLUCommandAdapter, you can convert your stories to flows seamlessly: provided you already have a solid intent classifier in place, it ensures the desired flow still starts.
- You want to minimize costs. Unlike the LLMCommandGenerator, the NLUCommandAdapter makes no API calls to an LLM, which saves costs on every message. Make sure you have a solid intent classifier in place when using the NLUCommandAdapter; otherwise, incorrect flows will be triggered.
You can also use multiple CommandGenerator components in your configuration. For example, if you want to use both NLUCommandAdapter and LLMCommandGenerator for your assistant, you can do that as follows:
```yaml
pipeline:
  # - ...
  - name: NLUCommandAdapter
  - name: LLMCommandGenerator
  # - ...
```
Keep in mind that the components are executed one after another. So if, for example, NLUCommandAdapter successfully predicts the start of a specific flow, LLMCommandGenerator will be skipped.