Now on Now: Improving the agent experience with AI Search
(upbeat ambient music) - Welcome to our session, and thank you for taking the time to attend. We will be sharing the experience we've had improving the agent experience with AI Search for some of our internal use cases. Hopefully this will give you a good understanding of what we did, so you can identify and apply the same approach to your own use cases as you interact with many traditional solutions. Go to the next slide. Before we jump in, let's get the safe harbor statement out of the way: anything forward-looking that we indicate is covered under the standard disclaimer, so please watch for actual details in the release notes of the product line. My name is Ravi Penikala, and I'm the head of software engineering in GCS Cloud Operations. With me is my counterpart Ritesh, the subject matter expert in this context, who has lived and delivered this solution. GCS Cloud Operations, as you heard CJ call out in the Knowledge keynote session, is a foundational element: the regional cloud is a foundational element of the platform, and we are part of that organization, making sure we keep the cloud infrastructure up and aligned with industry best practices. Pull up the next slide. Now, as you attend Knowledge and take in a lot of context, GenAI is a real game changer. We want to call attention to this: learn as much as possible in this session, which serves as a foundation, but also attend other sessions where you'll get a lot of GenAI insights. GenAI can be deployed without much significant technical knowledge and with very little organizational complexity; it's part of the platform deployment, and you benefit. We realized this internally at ServiceNow: in our early deployment of GenAI we saw about $10 million plus in tangible benefit and the equivalent of 50 FTEs. Since we benefited, we thought we'd extend that to you and encourage you to engage with GenAI more actively at this Knowledge too. Now, AI Search is somewhat foundational for GenAI, and we wanted to find out how many of you are actively using it. During our live session at Knowledge we ran a poll, and the audience responded: about 30% of participants were using the out-of-the-box AI Search solution; about 20 to 30% were customizing something for their needs, which means they are advanced users; and about 30 to 40% wanted to learn about AI Search and were not yet aware of it. That's what we found in our live poll during the session. And one thing I want to call out: in this session we'll cover AI Search at a high level, what the concept is, but we won't drill deep into AI Search itself. If you're interested in it, we provide links in our presentation where you can find a more detailed discussion of the AI Search topic. What we're going to focus on is the applicability of AI Search: one of the use cases where we found some glitches and a need for optimization, how we applied AI Search, what we gained from it, and what the outcome was. That's what we'll be focusing on in this session. Okay.
And here is our agenda: information overload, what it means and the impact it has on our organizations; the evolution of search, what that is and how search has evolved; what that means for our use cases and what we have observed from our own internal use cases in implementing AI Search and optimizing our solution; as well as the learnings and key takeaways. Ritesh will share the practical experience of what we have practiced within ServiceNow. The world of work is very challenging. If you look at it, the amount of information the workforce is handling is ever increasing, as Bill calls out in his keynote. We heard about the difficulty of creating knowledge articles; we provided solutions, and customers produced about 15-plus million articles in 2023. What does this mean? The agent workforce has an ever-growing amount of information they need to scan, process, and use for their own benefit. And the published industry studies call attention to the impact of this information overload. What does it mean for users? 9% of time is lost to reorienting when agents switch between tasks while working with information. As Bill calls out, large organizations have about 15-plus applications, and in any given use case the workforce may be switching between two or three applications at a time, which is costing them roughly 9 to 10% in efficiency. So without doing anything, this is the time you're losing from your workforce. 20% of time is lost searching for the information needed to complete a task itself; that's what McKinsey calls out. Next, Gartner increasingly reports that 47% of employees struggle to find information; forget about doing the job, half the time is lost just finding information. What this highlights is the importance of finding a solution that helps our workforce manage the information overload, and of optimizing for it. We realized this problem is going to increase, not decrease, as time goes by, and we are calling attention to it. Each of you, I'm sure, is responsible for a few business use cases and scenarios that you use our solution for; watch out, learn from what we observed and optimized in this session, and get the best out of it. From this experience, we call attention to two critical business aspects: efficiency and improving the user experience. These are the two mission-critical aspects we as business leaders need to pay attention to. What we have done is apply the available technology and solution to optimize our agent experience without sacrificing the user experience, in fact improving the user experience, so that users come back to the solution because they really see the benefits of it. That is the key. Sometimes we find solutions that improve efficiency, but if you make them difficult for the agent workforce to use and do not extend the best experience, agents struggle, they may not come back, and the real benefits to the business use case will not be realized. And that's what we draw attention to.
And we call out to all of you business leaders: pay attention to this aspect of the use case we're going to share, learn from it, and see how you can optimize for your own experiences. Having said that, let me hand over to Ritesh, our subject matter expert in deploying this solution, monitoring it, maturing it, and seeing the benefits of improving the agent experience. Ritesh, please do share. - Yeah, thank you Ravi. Before I continue: when we were presenting this session live at Knowledge 2024, I asked the members of the audience a poll question, how many of them were experiencing information overload because of all the data growth happening around us, either professionally, personally, or maybe even both? And an overwhelming majority of the audience raised their hands, saying they were experiencing information overload of some sort or the other, right. Tagging onto that and building on top of it, in this context I want to say it's personally true for me as well, and it's definitely true for us here at ServiceNow: as we grow, our customer base grows, and that transcends and applies to our agent workforce as well. When I say agent workforce in this context, I'm referring to personas such as site reliability engineers (SREs), DevOps, technical support engineers, technical support architects, and the like. When they are working on customer cases, their primary intent is to resolve the case as quickly and as efficiently as possible, provide a quick turnaround time for the customer, and give a good experience, right. Now, in order to do that, when they were trying to resolve a case previously, oftentimes they were looking for complementing information from other similar cases, maybe for a different customer: was there a case created with a similar issue? Or they wanted complementing information from a knowledge article or a problem they could leverage to solve the case they were working on. And as you can see here, this experience for our agent workforce used to be disconnected and disjointed. By disconnected and disjointed, I mean they had to go to different places: search for information first, extract it, connect the dots, and come up with a good resolution. We'll look at examples in the coming slides. Now, the same agent workforce, when they take their professional hats off and put their personal-experience hats on, and this applies to everybody like us and everybody in the industry in general, right, is experiencing a certain other kind of search experience. If I take just the example of Google: Google is pivoting from providing not only contextual search results but also an AI-generated summary related to the search topic, which gives an overview of, or insight into, the search text the user has entered. And in general, as an industry, we are pivoting or transcending toward that trend, leveraging GenAI capabilities if you will, right.
Now, if you take a step back and look at it through our agent workforce's lens, this is the disconnect they were experiencing: professionally they had a certain kind of search experience, and personally they had another kind. This was the gap we identified, and we wanted to bridge it for our agent workforce, to make their professional experience so much better that they can be so much more efficient at their job and resolve issues for our customers as quickly and as efficiently as possible. So how did we do that? We leveraged the AI Search framework from the platform to bridge that gap. For those who are familiar, this will be a refresher; for folks wanting to know more about the AI Search framework, let me give a quick overview. AI Search is a capability available natively on the platform. Think of it as our cognitive search framework, providing machine-learning-driven search results and a consumer-grade search experience. So how do you go about leveraging or implementing AI Search, right? At a high, basic level, you start by identifying the data you want to include in your search results, the data that will finally surface as responses when a user of the system searches for something. This is the data you want to index, and by indexing I'm referring to making that data able to surface as part of search responses. This data can be either internal or external, right? Selfishly speaking, from a ServiceNow standpoint we would love for all of this data to be internal, but practically we know there will be use cases where external content needs to be indexed and ingested. That capability is also provided by the framework: you can leverage Integration Hub, and SharePoint and Confluence are examples of external content that can be ingested. Internal data is basically your own data, your system of record or truth; examples are tables such as case tables, problems, and knowledge base article tables, which you can configure to include as part of your search application (a small sketch of this idea follows at the end of this overview). So that's the data. Behind the scenes, like I was saying, AI Search as a framework leverages machine learning to constantly index and keep refining search results based on the relevant search terms. It also has natural language understanding, which is basically the ability to take natural language text a user has entered and surface the most relevant contextual results. And it has an analytics dashboard that comes out of the box as part of the AI Search framework. Here are some of the avenues where AI Search can be leveraged, for example Global Search, portals, and Virtual Agent; I'm not going to go through all of them. In our case we have leveraged it on our internal portal, and we will look at some examples in the coming slides as to how we've leveraged it.
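To make that source-identification step concrete, here is a minimal sketch in TypeScript of what declaring search sources could look like conceptually. In the actual product this is configured through the platform UI rather than written as code, so treat everything below as an assumption for illustration: the `SearchSource` and `SearchProfile` types are invented, and the table names simply follow common platform naming conventions.

```typescript
// Illustrative only: models the "what should be indexed" decision described
// above; it is not the AI Search framework's real configuration API.

// A source is either an internal table or an external system ingested
// via an integration (e.g., SharePoint or Confluence through Integration Hub).
type SearchSource =
  | { kind: "internal"; table: string; condition?: string }
  | { kind: "external"; system: "sharepoint" | "confluence"; connection: string };

// A search profile groups the sources whose records should surface
// as responses for a given search application.
interface SearchProfile {
  name: string;
  sources: SearchSource[];
}

// The use case in this session indexed cases, knowledge articles,
// and (after partner feedback) problems.
const agentPortalProfile: SearchProfile = {
  name: "agent-portal-search",
  sources: [
    { kind: "internal", table: "sn_customerservice_case" },
    { kind: "internal", table: "kb_knowledge", condition: "workflow_state=published" },
    { kind: "internal", table: "problem" },
    // External content can be ingested too, for example:
    // { kind: "external", system: "confluence", connection: "engineering-wiki" },
  ],
};
```

The design point is that indexing is declarative: you pick the tables or external connections once, and the framework handles crawling, indexing, and relevance behind the scenes.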
Now, after implementing it, how did we go about enabling AI Search for our internal teams? What we have done is take a crawl, walk, and run approach. The crawl phase is what we've labeled the enablement phase. To give you some context around it, we started with a small proof of concept, engaging the identified stakeholder partners in this initiative, then developing and configuring the application and taking it through the general SDLC: development, testing, and release to production. One piece of feedback we received during this initial exercise was about coverage: initially we were indexing tables such as cases and knowledge articles, and based on that feedback we also included tables such as problems. By and large, that was our enablement or crawl phase, where through the regular software development life cycle we took it to production and opened it up to everybody, to our partner teams. As adoption increased and we started seeing data flow into our backend systems, we moved into what we call the walk phase, or the optimization phase. With the system having seen enough data and enough adoption, we were able to fine-tune the framework behind the scenes by adding suggestions, typo handling, and synonyms. Just to give some examples: Wi-Fi would generally translate to, let's say, wireless connection or internet. And I'm positive every organization has certain acronyms that are used internally; we had some in our case as well, and as part of this optimization phase, once we saw the data coming in, we mapped the relevant search terms that were acronyms to their fully spelled-out text, so that AI Search would take that into consideration and show relevant results. Similarly, we started with stop words, which were basically the out-of-the-box English stop words to begin with, right. That, by and large, is the optimization phase (a small sketch of the synonym and stop-word idea follows below). The third phase is the run, or self-learning, phase, where the system, having seen enough data, now comes back with a certain level of confidence and shows not only the search results but, in cases where it is applicable, genius results. We'll see what genius results mean in further slides, but to give a high-level overview, think of a genius result as the capability to take actions from the search response itself. For example, if the most relevant result is a knowledge article, the ability to view that knowledge article right there, or if the most relevant result for the search term is a service catalog item, the ability to interact with the catalog from the search response, is what gets categorized as a genius result. The system is learning and evolving on a continual basis behind the scenes without any human interaction. And the other nuance is that the implementing team or organization does not necessarily need any data science or machine learning expertise in-house; the system takes care of it behind the scenes.
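To illustrate what that optimization-phase tuning does to a query, here is a minimal conceptual sketch of synonym expansion and stop-word removal. The real processing happens inside the AI Search framework; the `preprocessQuery` function, the dictionary entries, and the word lists below are invented for illustration.

```typescript
// Conceptual sketch of the walk-phase tuning described above; the actual
// work happens inside the AI Search framework, not in custom code.

// Synonym dictionary: acronyms and shorthand map to expanded forms so
// both spellings match the same documents. Entries are examples only.
const synonyms: Record<string, string[]> = {
  wifi: ["wireless connection", "internet"],
  vpn: ["virtual private network"],
};

// Out-of-the-box English stop words were the starting point; a few shown here.
const stopWords = new Set(["the", "a", "an", "is", "of", "to"]);

function preprocessQuery(raw: string): string[] {
  // Drop noise words, then expand each surviving term with its synonyms
  // so the index matches either form.
  const terms = raw.toLowerCase().split(/\s+/).filter((t) => t && !stopWords.has(t));
  return terms.flatMap((t) => [t, ...(synonyms[t] ?? [])]);
}

// "the wifi is down" -> ["wifi", "wireless connection", "internet", "down"]
console.log(preprocessQuery("the wifi is down"));
```

The point is that this tuning is pure configuration: you supply the word lists based on the data you see, and the framework applies them when matching queries to documents.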
And this is a continual loop, where we try to optimize on an ongoing basis, but largely speaking, net-net, this is the approach we took, the crawl, walk, and run approach, to get this enabled for our partner teams. Let's look at an example of how the experience changed for our agent workforce from before implementing AI Search, and how it drastically improved after. In this example, an agent has received a case in their queue, and the issue is related to a router where a firmware upgrade has happened. Now, in order to resolve this case, the agent will probably want to see whether there are any related cases for a similar issue that they can leverage, and whether any resolution has been proposed that might apply to the case they're working on. So here they come to the case list and do a keyword search, trying to find relevant information. Similarly, they might want to see whether any knowledge articles have been published that are associated with the issue they're trying to resolve, so they come to the knowledge portal, again doing a free-text search to look for relevant knowledge articles, for example. If you notice, the agent is hopping from one portal to another. And here is an example where they're searching for any related problems associated with the issue, and whether a workaround is mentioned, right. So as you can see, the agent had to hop around multiple portals, not only searching for the information but also picking out what they thought was relevant, connecting the dots between these three sources of information, the similar cases, the knowledge articles, and the problems, and then coming up with a solution. And like Ravi mentioned in the earlier slides, when there is context switching happening and you're processing that much information, there is a chance that information is lost, information which might be critical to providing a resolution. So how did we streamline and make this experience better? Basically by leveraging AI Search, where all of the contextual information associated with a case that might be relevant to solving it is now presented in a single unified contextual view. This is an example of one of our internal portals where we've implemented AI Search for our agent workforce. It's the same example: the agent is now searching for an issue related to a router that has crashed after a firmware upgrade. As you can see, all the data is presented to them in one single unified contextual view; the tab hopping they had to do earlier is avoided, and within the same large contextual view they can switch between the relevant tabs and pick and choose the information they need in order to resolve the case. So the data is presented to them in different tabs. The search results, like I mentioned earlier, are stack-ranked in order of relevancy, with the most relevant on top. And this is constantly evolving: as a certain record is clicked often, it starts taking precedence and showing up at the top.
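Here is a toy sketch of that self-learning ranking behavior: results carry a base text-match relevance, and accumulated click feedback gradually boosts the records agents actually use. The scoring formula, the `clickWeight` blend, and the record IDs are invented for illustration; the actual AI Search ranking model is internal to the framework.

```typescript
// A toy model of the behavior described above: results are stack-ranked by a
// base relevance score, and records that get clicked often gradually rise.

interface Result {
  id: string;
  baseRelevance: number; // text-match relevance from the index
  clicks: number;        // accumulated click feedback for this query
}

function score(r: Result, w: number): number {
  // Diminishing returns on clicks, so feedback refines rather than
  // dominates the ranking.
  return r.baseRelevance + w * Math.log1p(r.clicks);
}

function rank(results: Result[], clickWeight = 0.2): Result[] {
  return [...results].sort((a, b) => score(b, clickWeight) - score(a, clickWeight));
}

const results: Result[] = [
  { id: "KB0010042", baseRelevance: 0.91, clicks: 3 },
  { id: "CS0048211", baseRelevance: 0.89, clicks: 40 }, // heavily clicked case
  { id: "PRB004711", baseRelevance: 0.75, clicks: 0 },
];

// The frequently clicked case overtakes the slightly better text match.
console.log(rank(results).map((r) => r.id)); // ["CS0048211", "KB0010042", "PRB004711"]
```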
We have the capability for dynamic filtering, for example if somebody wants to drill further down into a specific category; in this case it's a category in the context of knowledge articles. Again, these are all configurable, and it's called dynamic filtering because the search filters differ depending on the tab, all dynamically, without having to explicitly configure each of them (a small sketch of this idea appears near the end of this section). And like I mentioned, in this case the genius result, the most relevant result, is a knowledge article, and the agent can view that knowledge article right from here without having to go anywhere else. So all in all, the experience before implementing AI Search was one where an agent had to go to different places, search for the information, and pick out what was relevant; by leveraging AI Search, we transitioned that to this single unified view that has all the relevant information. This is how we've transitioned and enabled AI Search for our agent workforce. Now, obviously, after implementing this we wanted to gather some feedback from our partners. This is just one example, where users of the system said it has helped improve their day-to-day job: where earlier, like I've been saying, they were going to different places, now all of that relevant information is available to them in a single contextual view. Putting that feedback into perspective from a metrics point of view, what we've seen and realized is that we were able to bring in an 8% efficiency gain by leveraging AI Search for the agent workforce. And keep in mind, this number is based on just one quarter's worth of data; we have already seen an improvement, and as adoption increases and the system gets better trained on the data it sees behind the scenes, we expect this number to go up even further. But at a high level, we've seen an 8% efficiency gain by implementing AI Search for our agents, in contrast to when they had to go to different places to search for information. Now, some key takeaways we'd like to share from our journey of evaluating, implementing, and finally getting AI Search out for our partners. We identified a gap in the experience our partners had and how they were working, and we optimized, bridged that gap, and made that experience so much better. We continuously collect feedback from various systems to improve relevancy and accuracy, by adding rules, synonyms, stop words, and whatnot, depending on the data we see and the feedback we get from our partners. And finally, apart from that, there are various other metrics we use for further enhancement, and we take appropriate actions as we see fit. So net-net, these are the high-level key takeaways we'd like to share with you. I'll also touch on one point Ravi mentioned earlier: when you're evaluating AI Search or thinking about it, also look at the next phase of it, the conversational aspect, and look forward to that coming from the platform as a capability; yeah, that will come.
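One more sketch before the questions, to make the dynamic filtering idea from above concrete: the facets can be derived from whatever fields the current tab's results actually carry, which is why nothing has to be configured filter by filter. The tab names, field names, and `buildFacets` helper below are illustrative assumptions, not the framework's actual implementation.

```typescript
// Sketch of why the filters are "dynamic": facets are computed from the
// current tab's results, so each tab gets its own filters automatically.

type ResultRecord = Record<string, string>;

// Which fields are facetable per tab (e.g., category for knowledge articles).
const facetFields: Record<string, string[]> = {
  knowledge: ["category", "author"],
  cases: ["priority", "product"],
  problems: ["state"],
};

function buildFacets(tab: string, results: ResultRecord[]): Map<string, Map<string, number>> {
  const facets = new Map<string, Map<string, number>>();
  for (const field of facetFields[tab] ?? []) {
    const counts = new Map<string, number>();
    for (const r of results) {
      if (r[field]) counts.set(r[field], (counts.get(r[field]) ?? 0) + 1);
    }
    facets.set(field, counts);
  }
  return facets;
}

// On the knowledge tab, the category filter appears with live counts:
const kbResults: ResultRecord[] = [
  { category: "Network", author: "pat" },
  { category: "Network", author: "sam" },
  { category: "Hardware", author: "pat" },
];
console.log(buildFacets("knowledge", kbResults));
// category -> { Network: 2, Hardware: 1 }, author -> { pat: 2, sam: 1 }
```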
When we presented this session live, there were some questions asked; since this is a recorded session, I'll just highlight a couple of them. One question was around the enablement, or availability, of AI Search in regulated environments. To answer that question: yes, AI Search is available in the GCC regulated environment. And if you have further questions, please do reach out to your support account managers for further information.
https://players.brightcove.net/5703385908001/zKNjJ2k2DM_default/index.html?videoId=ref:SES1395-K24
Ravi Penikala
Ritesh Taunk