
NJP

How Machine Learning is becoming pervasive on the Service Desk to deliver a modern employee experience

Import · Apr 04, 2024 · video

Hello, and today I'd like to talk to you about how machine learning is becoming pervasive on the service desk. The service desk is very much a cornerstone of IT service management: it provides that single point of contact for users and ultimately delivers a range of services. But people's expectations today have changed from when the service desk originated. People expect an always-on service; they expect it to be tailored, so it understands them and their needs; and in this age of social media they expect immediate gratification, results delivered quickly and efficiently so they can get on with what they really want to be doing. Delivering against these expectations is a challenge. You can't just keep staffing up; you have to work in a different way and meet the users where they are. One way of doing that is to use machine learning, and today we're going to use the incident management lifecycle to illustrate what that means, from the initial interaction with the user all the way through to resolution, and how machine learning and those technologies can underpin the service desk and drive those efficiencies.

The incident management lifecycle breaks down into four areas. First is the initial interaction, with a range of different channels available to users; it's no longer just a telephone call to an agent. Then there's identification and classification: once demand has reached the service desk, what is it, and where should it go? Then there's the response to that demand, and, sitting outside that cycle, the analysis itself. Meeting users wherever they are really matters today: 24/7, on a range of different devices and methods. Efficient classification and identification matter because you want to get that demand to the most appropriate resource that can deal with it, and that resource no longer has to be human; it could be a machine, as we'll see later on. Response is where the bulk of the time goes, so you want it to be as efficient and effective as possible. Finally, your analysis needs to look at all the data you're collecting through those incidents and drive to a root cause, so they don't recur.

So let's start picking the lifecycle apart. The initial interaction is about meeting users where they are. In the traditional view you might have a messaging channel, a portal, email, or a telephone call, all routed to a live agent. You don't want your live agents dealing with all of that demand: a lot of it can be quite simple, and users themselves don't always want to talk to an agent. A very typical step is to introduce some sort of chatbot, in this case a virtual agent, which provides a messaging interface between the user and the live agent, drawing on the information you know about that user and presenting it back in as natural a way as possible. Many companies go through that step, but chatbots can still require a significant amount of work to identify topics, build out the flows, and ultimately deliver an experience that is good for users.

One way of really driving this forward is to move away from traditional chatbots to something that uses a large language model for that work. You still provide an interactive experience on a messaging channel, but it's a much more natural, two-way conversation with the AI bot, potentially combined with things like AI Search pulling information from a range of different sources and presenting it back to the user. This gives self-service, hopefully with that immediate gratification: answers to common support questions, delivered in a personalized way, because it knows the user's equipment, profile, and location. That's using the advancements in conversational interfaces, from the original chatbots all the way up to large language models, and what you're ultimately trying to do is minimize the number of transfers from the messaging channel to the live agent.

If you do have to route to a live agent, you can make that routing as efficient as possible, and there are a number of methods for it. These methods are not strictly machine learning, but they use the data that has accumulated to make the operation more efficient. Capacity-based routing asks whether agents are able to handle the work, looking not just at what they're managing through channels but also at the incidents already assigned to them; if everybody's busy, who could that work go to as a backup? It's always good to handle spikes in demand gracefully rather than just leaving people in a queue. Skills-based routing asks who has the appropriate skills to deal with it, so the incident gets to the right person quickly rather than causing an escalation to a group that does have those skills. Agent affinity asks whether this is something the agent has already been dealing with: a customer or user they have worked with historically, so there's an established understanding, and again that speeds up resolution.
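As a rough illustration of how those three routing signals could combine, here is a minimal Python sketch. The agent fields, weights, and scoring function are all invented for illustration; no particular product implements routing this way:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    skills: set                                 # skill tags the agent holds
    open_incidents: int                         # current assigned workload
    capacity: int                               # most incidents the agent can handle
    history: set = field(default_factory=set)   # users this agent has worked with before

def route(incident_skill, user, agents):
    """Pick an agent by combining capacity, skills, and affinity signals.
    The weights below are arbitrary illustrative values."""
    def score(a):
        if a.open_incidents >= a.capacity:
            return float("-inf")                # capacity-based: skip saturated agents
        s = a.capacity - a.open_incidents       # prefer agents with spare capacity
        if incident_skill in a.skills:
            s += 10                             # skills-based: strongly prefer a match
        if user in a.history:
            s += 5                              # agent affinity: prior work with this user
        return s
    best = max(agents, key=score, default=None)
    if best is None or score(best) == float("-inf"):
        return None                             # everyone saturated: queue or backup group
    return best
```

The point of the sketch is that the signals are additive: a skills match outweighs spare capacity, and affinity acts as a tie-breaker, with saturation excluding an agent entirely.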
Once you've got the incident to an agent, it's important that identification and classification are done correctly, because that's what really directs the work and sources the information to ultimately resolve it. Classically this was done by the agent: looking at the range of information coming in, populating fields, and on that basis classifying the incident to help narrow down what can be done to resolve it. Get this wrong and incorrect identification and classification can result in delayed resolution, decreased user satisfaction, and increased costs from further incident handling. One thing that really matters here is that doing this manually relies on the experience and training of the agent. Service desks traditionally have quite a high turnover, and if you've got high turnover of staff at level one, your error rate can be exacerbated. This is clearly a step where, if you can take the human out of the loop and make it a data-centric approach, you're going to drive efficiencies for the organization. One way to do that is to use something like Task Intelligence, which takes the wealth of data you've got and performs the same action the agent would, but based on the data accumulated in the organization. The human doesn't necessarily have to be totally out of the loop, because Task Intelligence supports a range of actions: it can simply autofill those fields and away you go; it can recommend values, which the agent can then decide to accept or not; or it can run predictions in the background, which provides additional data for understanding the confidence of the machine learning and ultimately lets you tune and improve the model, so there is less and less manual handling. So let's move on to the response.
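To make the recommend-versus-autofill idea concrete, here's a toy classifier in Python. The categories, keyword lists, and the 0.6 confidence threshold are invented for illustration; a real capability like Task Intelligence learns from the organization's historical incident data rather than a hand-written keyword table:

```python
# Hypothetical keyword evidence per category, purely illustrative.
CATEGORY_KEYWORDS = {
    "network":  {"vpn", "wifi", "dns", "latency"},
    "hardware": {"laptop", "screen", "keyboard", "battery"},
    "access":   {"password", "login", "locked", "mfa"},
}

def classify(short_description, threshold=0.6):
    """Return (category, confidence, action): autofill when confident enough,
    otherwise surface a recommendation for the agent to accept or reject."""
    words = set(short_description.lower().split())
    hits = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    total = sum(hits.values())
    if total == 0:
        return None, 0.0, "manual"          # no evidence: leave it to the agent
    cat = max(hits, key=hits.get)
    confidence = hits[cat] / total          # share of evidence backing the winner
    action = "autofill" if confidence >= threshold else "recommend"
    return cat, confidence, action
```

The useful pattern here is the graduated response: high confidence fills the field automatically, middling confidence becomes a suggestion, and no evidence falls back to manual handling, which mirrors the autofill/recommend/predict modes described above.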
We've worked out what the incident is and sent it to the right person; now it's down to somebody to work on the response. If we call the part we've just been through the triage, we're now into diagnosis and resolution. The agent works on the incident using a range of different sources to help with the diagnosis: knowledge bases, similar incidents, the details supplied, and other colleagues, using that collective information to determine the right steps to resolve. While resolving, they're probably doing other things too: keeping the user informed of what's happening, so the user is confident something is being done, and perhaps creating a knowledge article for reuse, building collective understanding so that any recurrence of a similar incident is handled faster in the future.

There are various methods we can use to break this down and speed it up. Firstly, the whole search can be made more efficient through the use of AI Search. AI Search indexes data from a range of sources and uses that data to improve the search experience, not just for the agent but also for users themselves, so the channels the user is using are made more efficient, presented across a range of different applications. That makes the diagnosis step more efficient; resolution still looks much the same. We can step diagnosis up again with the use of similarity. We spoke about agents searching for similar incidents, asking what has looked like this before; similarity, as part of Predictive Intelligence, can do that for the agent.
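A sketch of the underlying idea: ranking past incidents by textual similarity to a new one. Real similarity models use learned embeddings; this toy version uses bag-of-words cosine similarity, and the incident data is made up for illustration:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two texts using raw word counts.
    Production similarity models use learned embeddings, not counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def similar_incidents(new_description, past, top_n=3):
    """Return ids of past incidents most similar to the new description.
    `past` maps incident id -> short description."""
    ranked = sorted(past, key=lambda i: cosine(new_description, past[i]),
                    reverse=True)
    return ranked[:top_n]
```

Presenting the top few matches, as sketched here, is the "here are three others that look pretty much the same" behaviour described next; clustering many high-similarity incidents together is what lets a model flag a possible major incident.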
It can present the agent with similar incidents and say: here are three others that look pretty much the same, so the agent can look at those resolution steps. It can also go a step further and say there are actually quite a lot of incidents that all look similar, so maybe that's a major incident. Individual agents never see that spread of incidents across the desk, but the models behind Predictive Intelligence can, and they help make some of those calls for the agent because they're looking at a wider range of data. Ultimately what you're trying to do is reduce your MTTR, your mean time to resolution, through the use of Predictive Intelligence.

The resolution step is still the same, but we can step that up with Now Assist. Now Assist, using large language models, can do a lot of the resolution work and really accelerate it for the agent. It can summarize the incident: say you've got a follow-the-sun model and you're taking on an incident somebody else has worked, with several notes you'd have to sift through and get your head around before you could do any work; incident summarization condenses those into something much more succinct. There's also resolution note generation, which takes whatever steps you've done and presents them in a way that can go out to the user, and those resolution notes can form the basis of a knowledge article. And there's chat summarization: chats can be short, snappy interactions, so bringing all those little bites together into something summarized makes better sense of them. Now Assist can help with all of these.
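Now Assist does this with large language models, which can't be reproduced in a few lines. Purely to illustrate the shape of the summarization task, here's a naive extractive summarizer that keeps the most representative work notes; everything about it, including the scoring, is a simplified stand-in:

```python
from collections import Counter

def summarize(notes, top_n=2):
    """Naive extractive summary: score each note by the average frequency of
    its words across all notes, keep the top scorers in original order.
    Real incident summarization uses an LLM, not frequency counts."""
    freq = Counter(w for note in notes for w in note.lower().split())
    def score(note):
        words = note.lower().split()
        return sum(freq[w] for w in words) / len(words)
    top = sorted(notes, key=score, reverse=True)[:top_n]
    return sorted(top, key=notes.index)   # restore chronological order
```

Even this toy version shows the value proposition: an agent inheriting a follow-the-sun handover reads two representative notes instead of the whole history.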
So we've reached the point of the incident being resolved, but what's going to stop those incidents recurring? Let's have a look at what's available on the analysis side: things like clustering, which looks across the range of different incidents and can, for example, alert service owners to major incidents as they develop, and things like process mining. Let's dig into each of those.

Clustering divides data into groups that can then be used to identify patterns. After the clusters are made, a tree map plot appears on the cluster visualization, and you can review it to identify candidate use cases for things like automation; recommended solutions can also be identified for those use cases. Reducing the manual effort in workflows benefits both the resolving team, through reduced workload, and the supported end users' productivity, and this combination helps drive your MTTR down and your MTBF up.

Process mining provides powerful insights into your business processes. It looks across all that data and helps pinpoint opportunities for improvement through efficiency, automation, and more. It helps you identify areas such as: where is the business spending, or wasting, the most time? What's the root cause behind slow-performing processes? Where should the business apply automation next? Where is rework occurring? What's the estimated cost, and the potential saving if you were to apply these improvements? And how much deviation is the business seeing in its processes? Ultimately this leads to improved visibility, increased efficiency, and a reduction in costs.

That's quite a lot of capabilities, so let's summarize where they fit within the cycle. For interaction, you're trying to make the experience contextually aware by using the resources you have available when interacting with the user. For identification and classification, you're using the wealth of data available to you to make good, informed decisions around routing and categorization. For response, you're using technologies such as Now Assist to summarize, create knowledge articles,
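To illustrate what clustering means mechanically, here is a plain k-means sketch over 2-D points. A real pipeline would cluster learned embeddings of incident text and then visualize the groups; the points, dimensions, and parameters here are simplified stand-ins:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over 2-D points; returns a cluster index per point."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)       # initialize from the data itself
    assign = []
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        assign = [min(range(k), key=lambda c: math.dist(p, centroids[c]))
                  for p in points]
        # update step: move each centroid to the mean of its members
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = tuple(sum(d) / len(members)
                                     for d in zip(*members))
    return assign
```

Once incidents fall into groups like this, reviewing each group is what surfaces automation candidates: a large, tight cluster of near-identical incidents is exactly the kind of pattern worth automating or treating as a major incident.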
drive search, and ultimately make response a more efficient and effective part of the process. You can bring identification and classification together with response through things like Service Operations Workspace, which provides a single pane of glass for first-line agents. And for analysis, you're using technologies such as clustering and process mining to take that data and look for efficiency gains and process improvements, and you can use capabilities such as Continual Improvement Management to drive those improvement activities within the organization.

We said at the start that all of this has to be measured. Typical outcomes people look for are things like reducing the incident workload and escalations and increasing resolution efficiency. There are a number of metrics appropriate for these, available through success dashboards, and you can also compare yourself anonymously against your peer group by using benchmarks, so you can measure not only your own performance but how you're performing in the wider context.

There are three takeaways from this. Firstly, identify your outcomes: this is important, so identify them and get people on board with them. Secondly, deliver in phases: you don't have to do everything immediately, so work on the most important things first and deliver against those. Finally, measure the results: show that what you're doing is effective and demonstrable, and communicate what's happening so people buy in and you can keep moving forwards.

How can you do this? There's a wealth of information on Now Create, readily available for all the capabilities we've talked about today, along with methodology for delivery, so I'd recommend having a look there. Thank you for your time.
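The headline metric mentioned throughout, mean time to resolution, can be computed directly from incident timestamps. A minimal sketch, with illustrative field names; in practice you would pull opened/resolved times from your ITSM records:

```python
from datetime import datetime, timedelta

def mttr(incidents):
    """Mean time to resolution over (opened_at, resolved_at) pairs."""
    durations = [resolved - opened for opened, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)
```

Tracking this value per phase of a rollout is one concrete way to "measure the results" of the improvements described above, alongside escalation counts and first-contact resolution rates.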

View original source

https://www.youtube.com/watch?v=X1l7M3DZJHE