
How do you improve the Virtual Agent from failed utterances?

May 15, 2020 · article

In this article, I'll explain where you can find users' utterances and how to use them to improve the Virtual Agent experience. As the Virtual Agent admin, you'll want to review these utterances, as they serve as a barometer for the quality of the current NLU model.

Before reading ahead, if you haven't read the Virtual Agent Topic Discovery article, it's highly recommended that you do so first, so you understand the three scenarios that can occur when the Virtual Agent interprets a user's utterance. One of those scenarios is the Fallback route, which occurs when the Virtual Agent cannot find a topic that matches the user's request. This article focuses on that scenario.

All user utterances are stored with their NLU prediction results in the Open NLU Predict Log table. You can access this table by typing open_nlu_predict_log.list into the filter navigator.
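If you'd rather pull these records programmatically, the ServiceNow Table API exposes the same table over REST. The sketch below only builds the request URL; the instance hostname is a hypothetical placeholder, and authentication is omitted.

```python
import urllib.parse

# Hypothetical placeholder for your instance hostname.
INSTANCE = "https://your-instance.service-now.com"

def predict_log_url(query: str, fields: str = "utterance,message,response") -> str:
    """Build a Table API URL for the open_nlu_predict_log table."""
    params = urllib.parse.urlencode({
        "sysparm_query": query,       # an encoded query, e.g. messageLIKE0 intents
        "sysparm_fields": fields,     # limit the payload to the fields you need
        "sysparm_limit": 100,
    })
    return f"{INSTANCE}/api/now/table/open_nlu_predict_log?{params}"

url = predict_log_url("messageLIKE0 intents")
print(url)
```

You would then issue a GET against this URL with your instance credentials to retrieve the records as JSON.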


This table, Open NLU Predict Log, contains the logs of all Virtual Agent predictions. It is labeled "Open NLU" because it is not limited to ServiceNow's own NLU engine; it can also contain the logs for the IBM Watson and Microsoft LUIS NLU integrations. Here are the most important fields to pay attention to:

Utterance: The user's utterance that was sent to the NLU model for intent and entity prediction
Message: Summary of the response; it contains the counts of the prediction results
Request: JSON request that was sent to the NLU engine for prediction
Response: JSON response of the predictions; it contains the prediction results, such as the intents and entities that were predicted above the confidence threshold
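To see how the Response field can be put to work, here is a minimal sketch that extracts the intents predicted above the threshold. The payload shown is a simplified, hypothetical example; the real schema varies by NLU provider and ServiceNow release.

```python
import json

# Simplified, hypothetical example of a Response payload.
raw_response = json.dumps({
    "intents": [
        {"intent": "Reset Password", "confidence": 0.91},
        {"intent": "Unlock Account", "confidence": 0.42},
    ],
    "threshold": 0.6,
})

def intents_above_threshold(response_json: str) -> list:
    """Return the intent names whose confidence clears the model threshold."""
    data = json.loads(response_json)
    threshold = data["threshold"]
    return [i["intent"] for i in data["intents"] if i["confidence"] >= threshold]

print(intents_above_threshold(raw_response))  # ['Reset Password']
```

An empty list here corresponds to the "0 intents" case discussed below: nothing cleared the threshold, so the Virtual Agent falls back.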

One of the key ways to improve the current Virtual Agent and NLU model is to review the failed utterances. When an utterance fails, it is for one of two reasons:

  1. There is no topic that can handle the utterance. For example, suppose the user asks, "Where can I find the closest supermarket?" but you have limited the Virtual Agent to IT- or HR-specific topics only.
  2. The user asks for "Help with leave"; a discoverable topic exists, but the NLU prediction didn't find it, for reasons that can be fixed by re-tuning the model. To do this, you first need to locate the utterances that resulted in failed predictions.

To find all the utterances that have no predictions (0 intents) in the Open NLU Predict Log table, set a condition on the Message column equal to Sync Predict Results: 0 intents.


Tracking utterances that return 0 intents, or even multiple intents, is a first step in determining which utterances make the most sense to add to the model for tuning. You can surface these utterances by building reports on the Open NLU Predict Log table.
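As a sketch of that tracking step, the snippet below takes rows exported from the table (the sample rows are hypothetical) and counts how often each failed utterance recurs, which is a quick way to rank candidates for tuning.

```python
from collections import Counter

# Hypothetical rows exported from open_nlu_predict_log (e.g. via a list export).
rows = [
    {"utterance": "help with leave", "message": "Sync Predict Results: 0 intents"},
    {"utterance": "Help with leave", "message": "Sync Predict Results: 0 intents"},
    {"utterance": "reset my password", "message": "Sync Predict Results: 1 intents"},
    {"utterance": "closest supermarket", "message": "Sync Predict Results: 0 intents"},
]

# Keep only failed predictions and count recurring utterances,
# normalizing case so near-duplicates group together.
failed = Counter(
    r["utterance"].strip().lower()
    for r in rows
    if r["message"] == "Sync Predict Results: 0 intents"
)

for utterance, count in failed.most_common():
    print(f"{count}x  {utterance}")
```

Utterances that appear repeatedly (here, "help with leave") are the strongest signals: many users want something the model isn't catching.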

In the first scenario, it is recommended to evaluate the quality and frequency of the utterances that return 0 intents. Users may be requesting something you do not yet have an intent for, which could be an opportunity to expand the number of available topics. Alternatively, they may be typing utterances you will likely choose not to support, as in the supermarket example above.

The problem in the second scenario can be rectified by re-tuning the NLU model. You may notice trends in your data where several users are requesting something that does not result in a clear, decisive topic discovery. Such utterances are good candidates to include in the model or to run model accuracy tests against.

This is a prime opportunity to use your data for continuous improvement, and you'll also want to track any progress with topic discovery using the open_nlu_predict_log table. It is a good practice to establish a baseline metric and gauge the impact of your model tuning and accuracy results against mapped topics.
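One simple baseline metric to track, sketched below with hypothetical data, is the fallback rate: the share of predictions that returned 0 intents. Comparing it before and after re-tuning gives a rough gauge of improvement.

```python
def fallback_rate(messages: list) -> float:
    """Share of predictions that returned 0 intents (the Fallback route)."""
    if not messages:
        return 0.0
    failed = sum(1 for m in messages if "0 intents" in m)
    return failed / len(messages)

# Hypothetical Message summaries before and after model re-tuning.
before = (["Sync Predict Results: 0 intents"] * 30
          + ["Sync Predict Results: 1 intents"] * 70)
after = (["Sync Predict Results: 0 intents"] * 12
         + ["Sync Predict Results: 1 intents"] * 88)

print(f"baseline: {fallback_rate(before):.0%}, after tuning: {fallback_rate(after):.0%}")
```

A falling fallback rate alone isn't proof of quality, so pair it with accuracy tests against mapped topics, as noted above.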

Original source: https://www.servicenow.com/community/virtual-agent-nlu-articles/how-do-you-improve-the-virtual-agent-from-failed-utterances/ta-p/2306326