Process Mining Use Cases

Apr 22, 2024 · video

Well, welcome everyone to today's ServiceNow Process Mining Jumpstart series session. This is the fourth and final session in this series, targeted at customers who are entitled to our in-platform process mining solution but haven't really started to take full advantage of it yet. In our first session we introduced the topic of process mining: what it is and why it's important. In the second session we looked at the building blocks of a process mining project and walked through the creation of our first couple of projects together. In the last session we went through all the different aspects of the Analyst Workbench and the capabilities that help you explore your process data and identify improvement opportunities. And finally, today, we're going to discuss some of the common use cases, or initial types of analysis, that we see customers using process mining for.

The purpose of these sessions is to help us all get a little better educated on ServiceNow's process mining capabilities. As for how the sessions work: we have the scheduled topic, which usually covers the first 30 to 40 minutes or so. As I go through the content, just post your questions in the Q&A and we'll do our best to address them, either during the session or at the end. If we can't answer them today, we usually post the answers to the Community site.

This is the slide that says anything we say and do here today can't be held against us in a court of law. It's also the slide that says if we happen to make any forward-looking statements about things that may be coming in future releases, you should take them as exactly that: forward-looking statements. Make no purchasing decisions based on them, because things are always subject to change. And I'm assuming at this point you've all read the fine print.

Who am I? My name is Dan Grady. I'm part of the product team here at ServiceNow that is focused on our process mining solution. Here's what we'll do today: we'll do a quick process mining 101 refresh for those who might be joining us for the first time, then we'll run through the different use cases in slides to get the juices flowing and give you some ideas, and then we'll jump into a demo and walk through some of those use cases in action. Unfortunately I don't have demo data to support all of them, but we'll do the ones we can, just so you can get a feel for what it might look like in your own instances. We'll get your questions answered, and we'll wrap up with some other resources available to you above and beyond what we cover here today.

I always like to start these sessions with this quote: "Do the best you can until you know better. Then when you know better, do better." I like the quote because it aligns to the reason we've all gathered: to learn a little more about the ServiceNow platform, take that knowledge back to our own organizations, and get more value out of our investment with ServiceNow. It also aligns to the solution we're talking about, process mining, which is designed to automate that process of x-raying our workflows and showing us where and how we could be doing better for everyone involved with them.

As always, we'll start with a quick process mining 101 for the newcomers. With everything we do, there's a designed and desired path in our minds for how it should work and how things should play out, whether that's planning an event like this webinar series, or a business process or workflow. When we design things, we design for both efficiency and completeness, to provide the best experience possible for as many people as possible. Unfortunately, what we design isn't always what ends up happening in reality. The reality is that not all the work is going to flow through the optimal path, and that has a negative impact both on the people trying to request service and on the people trying to deliver it.

Identifying what's actually happening within our processes, and then improving them, isn't always easy or obvious. What process mining allows us to do is use the audit log data generated as records move through a given workflow to create a visual representation of what's actually happening within a given process. This new level of visibility accelerates our ability to identify inefficiencies, non-conformant process activities, and ultimately improvement opportunities.

Process mining gives us the ability to answer process questions that have historically been challenging to answer. Where traditional analytics lets us answer a lot of the "what" style questions about our processes, process mining helps us answer some of the "whys": Where is the process getting stuck? Where is unnecessary rework happening? Where do incidents, cases, or tickets ping-pong between different groups? And where aren't we conforming to the process we designed? These have historically been difficult questions to answer and act upon, and now the answers are available in a couple of clicks. We like to say that process mining will help you get to the "why" behind all of your KPIs, and in doing so, empower everyone involved in the process to make data-driven decisions about which improvements to make.

All right, let's get to the use cases. In prior sessions we covered how to create projects, and then we went through the different capabilities of the Analyst Workbench. In this session we put those together so we can look at specific ways to do certain types of analysis.
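As an aside for the technically curious: the core of what's described above, turning audit-log rows into a map of state transitions with counts and durations, can be sketched in a few lines. Everything below (the log rows, field layout, and function name) is invented for illustration; it is not the actual ServiceNow schema or API.

```python
from collections import Counter, defaultdict
from datetime import datetime

# Hypothetical audit-log rows: (case_id, state, timestamp). In the platform
# this history comes from the audit trail; these rows are made up.
audit_log = [
    ("INC001", "New",         datetime(2019, 1, 1, 9, 0)),
    ("INC001", "In Progress", datetime(2019, 1, 3, 9, 0)),
    ("INC001", "Resolved",    datetime(2019, 1, 3, 11, 0)),
    ("INC002", "New",         datetime(2019, 1, 2, 9, 0)),
    ("INC002", "On Hold",     datetime(2019, 1, 2, 10, 0)),
    ("INC002", "In Progress", datetime(2019, 1, 4, 10, 0)),
]

def mine_transitions(log):
    """Count each state-to-state transition and total the hours spent in it."""
    by_case = defaultdict(list)
    for case_id, state, ts in sorted(log, key=lambda r: (r[0], r[2])):
        by_case[case_id].append((state, ts))
    counts, durations = Counter(), defaultdict(float)
    for events in by_case.values():
        for (s1, t1), (s2, t2) in zip(events, events[1:]):
            counts[(s1, s2)] += 1
            durations[(s1, s2)] += (t2 - t1).total_seconds() / 3600  # hours
    return counts, durations

counts, durations = mine_transitions(audit_log)
```

Rendering `counts` as nodes and edges is essentially the process map; `durations` is where the bottleneck coloring comes from.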
For example: which types of cases or incidents go to an on-hold state more often than others, or for longer periods of time? Which types of cases are consistently going through multiple hops between teams? Or perhaps we've used process mining to identify a process improvement opportunity, we've acted on it, and now we want to see the impact, and whether there are any adverse effects from that process change. For each of the use cases we cover here today, there's a full article, post, and recording on the Community site in the Process Mining Use Case series. So after today, if you want to go back or share what you've learned with others, go to the Community site and you'll find the Process Mining Use Case series there, with individual recordings for each of the use cases we cover.

All right, first one: on-hold and on-hold reason analysis is probably the most popular use case we see customers start with. From a service delivery perspective, putting a piece of work into an on-hold state can have both positive and negative impacts. On the positive side, it allows the support team to temporarily prioritize and address more urgent issues, potentially improving overall response times for critical matters. However, it can also lead to negative consequences, as customers may perceive it as a delay in resolution, causing frustration and hurting your CSAT or ESAT scores. Effective communication with the customer about the reason for placing the case on hold, and providing realistic expectations for resolution, is crucial to mitigate these negative effects and maintain good customer relations. Process mining can help organizations get visibility into, and an understanding of, the bottlenecks so they can reduce the frequency and duration of work in the on-hold state.

To do this type of analysis, you create a project with state as your activity definition. Then you can use the histogram to identify situations where work goes into on hold more than once, or a transition filter to isolate work that sits in the on-hold state for longer periods of time. Alternatively, and this is one of my favorite use cases, we can create a project that includes the on-hold reason as an activity definition in addition to the state activity definition. This allows us to focus in on specific on-hold reasons. A little later today we'll not only use a project that has on-hold reason, we'll create that project in the demo portion as well.

Now, my favorite use case. I love the on-hold one, and that seems to be customers' favorite, but mine happens to be this multi-hop analysis example. I like to call it the finger-pointing report. When we design a workflow, we design for both completeness and efficiency, to ensure that everyone involved has an optimal experience. As part of that design, we'll likely build in the ability to reassign work to the appropriate team to ensure tasks get completed successfully. But even if reassignments are part of the design of a process, they can be time-consuming and impact overall productivity, so anything we can do to reduce the number of reassignments, or improve how efficiently they're handled, is a plus from an organizational perspective. Process mining can help us get a better understanding of how often certain reassignments happen, which handoffs take longer than others, and where we have those dreaded ping-pong situations where things go back and forth between different teams.

To do this type of analysis, we create a project with both assignment group and state as activity definitions, which ensures we get the full duration values we want for the analysis. Then we can use the variation analysis to isolate those multi-hop situations.
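For those who like to see the idea in code, here's a hedged sketch of the finger-pointing pattern: given made-up reassignment histories, it flags any group that work leaves and later bounces back to. The ticket IDs and group names are invented for illustration; the workbench's variation analysis surfaces this without any code.

```python
# Hypothetical reassignment histories per ticket: the ordered list of
# assignment groups each one passed through.
histories = {
    "INC100": ["Service Desk", "Network", "Service Desk", "Network"],
    "INC101": ["Service Desk", "Hardware"],
    "INC102": ["Service Desk", "Network", "Hardware", "Network"],
}

def find_ping_pongs(groups):
    """Return groups the work left and later returned to (the ping-pong)."""
    bounced = set()
    for i, g in enumerate(groups):
        # seen before, and not just a repeated consecutive entry
        if g in groups[:i] and groups[i - 1] != g:
            bounced.add(g)
    return bounced

ping_pongs = {tid: find_ping_pongs(h)
              for tid, h in histories.items() if find_ping_pongs(h)}
```

A ticket like INC100 that oscillates between two groups shows up immediately, which is exactly the back-and-forth the report is meant to expose.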
Just so you're aware, if this is your first time joining us: we created this very specific project as part of the second session. To refresh your memory, you can always go check out the recording; it's on the Community in the first Process Mining Academy session, and at the end of this session I'll point everyone to where those recordings are posted.

One of the many things that makes process mining different from more traditional reporting and analytics is the ability to use the steps, or transitions, within the life cycle of a piece of work to identify opportunities. For example, one interesting way to look for self-service opportunities, or potential routing or staffing issues, is to focus on pieces of work that take a long time to get to the In Progress state but then a relatively short time to get to the Resolved or Closed state. To do this type of analysis, you create a project with state as your activity definition, then use the transition filter capability to look for work that, say, takes longer than two days to move from New to In Progress but then less than two hours to move from In Progress to Resolved. There have got to be some self-service opportunities in that set of work. It's a really powerful way to look for opportunities, and obviously you can adjust those time thresholds based on your organization and the data you're looking at.

Now, approval bottleneck analysis. I love this one. Approvals are an important part of certain business processes, but they can also disrupt the efficient execution of processes and extend task completion times. To mitigate these issues, organizations should focus on optimizing approval processes within the platform: set clear SLAs for approvals, automate routine decisions, and improve communication and transparency among requesters and approvers. Process mining can provide visibility into situations where the approval step of the process is causing significant delays, so we know where we might want to make adjustments to that approval process. This is a very common example; pretty much every customer I work with wants to start looking at the approval steps in their processes.

We could do the approval analysis simply by setting up a project on the requested item table, but to really do it well, you'll want to set up a multi-dimensional map: one table being the requested item table, with both state and approval as your activity definitions, and then a second table of catalog tasks (sc_task) as the child of the requested items, with state as the activity definition in that sc_task table setup. You'll also add both short description and item as your breakdowns in that sc_task table configuration. What this allows us to do is identify where in the process we're losing time: is it the approval step, or is it within the fulfillment tasks themselves? Unfortunately I don't have great demo data to show you this one live today, but it's a very strong use case for customers, and there's a recording in the Process Mining Use Case series on the Community site that walks through this entire process.

All right, SLA breach analysis. SLAs are used across the organization, in departments like HR, facilities, and IT. The intention of an SLA is to provide the customer with an expectation of service within a known time scale, and the ability to monitor when service levels are not being met. Most organizations have some form of KPIs or metrics that help them understand their SLA performance, like what percentage of work is breaching SLAs, and which categories or types of tickets miss their SLAs most often. Where organizations tend to struggle is getting a better understanding of why this work is missing the targeted SLA. Process mining can be used to help analyze the work that breached SLAs and help you isolate inefficiencies in the process.
You can approach this analysis in two ways. You can use the condition builder when you create your project to isolate only the breached work. Or, and this is probably the better way to do it, you can use the conditions option on the Analyst Workbench itself to isolate the data. The reason you'd want that second way is that you mine all the data and then apply conditions on the workbench; that way you can save the result as a filter set and do some comparison analysis between the work that breached SLA and the work that didn't. The trick with this use case is that you'll likely need to use the related list option on the condition builder, whether at project setup or on the workbench itself, to isolate the data you need.

Now, the next one is twofold: we're talking about reprioritized work, but it also works for categorization. If you think about it, wrongly prioritizing an IT incident can have negative consequences for an organization. When incidents are not correctly prioritized, it can lead to a whole bunch of problems affecting business operations, customer satisfaction, and overall productivity. To mitigate the impact of wrongly prioritized IT tickets, companies should establish clear and well-defined incident prioritization processes; regular review and improvement of those processes ultimately contributes to better incident handling and overall organizational resilience. Process mining can be used to analyze work that is being reprioritized. To do this type of analysis, you create a project with priority as your activity definition, and then use the variation analysis capability in the workbench to look for situations where work is being wrongly prioritized.

You can also do this with categories: instead of using priority as your activity definition, you use category, and look at how things are being recategorized inside the organization. I have a number of customers that have done this, because the category of the incident or ticket dictates where things get sent, and they want to clean up their categories and make sure that, from an intake perspective, things are getting categorized appropriately right up front. It's just a different way to look at the data. Typically we start out with those state and assignment group projects, but you certainly can apply activities like priority and category to better understand how work is moving through the organization, or what's going on within the tickets themselves.

Channel analysis is something you can do with traditional reporting, but process mining takes it to another level. Self-service channels like the portal allow quick customer and employee service. But unlike, say, a phone call, a walk-up, or a virtual agent chat, it's most likely one-way communication, which makes intake more challenging, and not everyone knows how to describe their problem or what they need accurately or completely. Consequently, work created via self-service often has higher reassignments, or more frequently requires going back to the requester for additional information. These situations where additional information is required are pretty frustrating for the customer or employee, and they have a negative impact on satisfaction scores as well as service team productivity.

To do this type of analysis, you create a project with state as your activity definition, and then use channel or contact type as one of your breakdowns (customers name the intake channel differently in their instances, but channel and contact type are the two most common fields I see). That gives you an understanding of volume and velocity across the different channels. Then you can use the histogram to isolate work that goes into On Hold or Awaiting Caller Info multiple times, and see the impact across the different channels you're accepting work from. This is useful for identifying opportunities to improve those intake experiences and eliminate some of that back and forth.

All right, reopened work analysis. Most workflows give an employee, customer, citizen, business partner, whoever it might be, the opportunity to reopen a piece of work after it's been resolved, if they're not satisfied with the response or if further action is required. Regardless of the reason the work is reopened, it's probably not the optimal experience for the given stakeholder, which will impact our satisfaction metrics; and the fact that we didn't get it right the first time will also have a productivity impact. All of this reopened work is valuable feedback and should be used to identify coaching opportunities, process improvements, or content quality updates, whether to our standard operating procedures, our runbooks, or our knowledge articles. To do this analysis, you can create a project with state as your activity definition again, then use the transition filter to focus on work that reaches the Resolved state and is not followed directly by the Closed state, meaning it goes someplace else after it's resolved. Then you can use the histogram again to isolate work that's being reopened more than once.

Now, last but not least. I was actually just showing this to a customer yesterday, and the response was, "this is exactly what we need." Process mining is designed to help us get visibility into opportunities to improve our processes, but once we make those improvements, how do we know whether they're having the expected impact, and whether those process changes are having any adverse effects on other areas of the process?
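The reopened-work rule just described, Resolved not directly followed by Closed, is simple enough to sketch in code. The state sequences below are invented for illustration; in the product, the transition filter and histogram do this for you.

```python
# Hypothetical state sequences per incident. "Reopened" work is anything
# where a Resolved step is not immediately followed by Closed.
sequences = {
    "INC200": ["New", "In Progress", "Resolved", "Closed"],
    "INC201": ["New", "In Progress", "Resolved", "In Progress",
               "Resolved", "Closed"],
    "INC202": ["New", "In Progress", "Resolved"],  # resolved, never closed
}

def reopen_count(states):
    """Count Resolved steps that are followed by something other than Closed."""
    n = 0
    for i, s in enumerate(states):
        if s == "Resolved" and i + 1 < len(states) and states[i + 1] != "Closed":
            n += 1
    return n

reopened = {tid: reopen_count(seq) for tid, seq in sequences.items()}
```

Note that a case like INC202, which ends in Resolved with no Closed step at all, is a separate pattern worth its own filter, as we'll see in the demo.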
You can use KPIs to track the impact of a process change on things like closure times and satisfaction scores, but getting visibility into adverse effects on other areas of the process is going to require process mining. We can use the condition filters to isolate work that closed prior to the process change and save that as a filter set, then use the condition builder to isolate work that closed after the process change and save that as a filter set, and then use the comparison capability to look at the two side by side. This comparison feature is really powerful. Using it to analyze the impact of a process change is one thing, but you can also use it to get metrics around, say, work that went into the on-hold state versus work that didn't, and the impact of moving to on hold. We'll look at that in the demo portion today.

All right, so those are the most common use cases we see customers starting out with. We shared them in slides to get your juices flowing about how you could start applying process mining inside your organization. Now I'll jump over into an instance and we'll look at some of them live.

I'm going to open up this Incident State Analysis project. This is a project we created in the prior session, in which we set up state as our activity definition, so we're looking at how things move from state to state in the process, and where there might be bottlenecks in those state transitions. The first thing we'll do is take a quick look at how that channel analysis might work, or where you might start your channel analysis journey. On the Summary and Insights page, I'm going to come down and look at some of the improvement opportunities generated by the content packs and the findings we've configured, and I can see I've got a bunch of incidents going into the Awaiting Caller Info state. So I'll jump in and look at those roughly 8,400 incidents that went into Awaiting Caller Info at least once, and narrow the process map to just those. Once I have my process map, showing the incidents that touched Awaiting Caller Info at least once, I may want to use my breakdowns to analyze them further, specifically the channel breakdown (this could also be called contact type in your instance). What we start to see is that the majority of these incidents came in via self-service; second on the list is portal. But then look at this email channel and its average duration: I've got a whole additional week of time packed into incidents that came in via email and went into Awaiting Caller Info. So I might want to start digging into that email channel to understand anything we could do to improve that experience and reclaim some of that time.

Now I can bring this back to the start. Here's another way to do this analysis (I'm using Awaiting Caller Info, but you can do the exact same thing on the On Hold state): maybe we only want to focus on the situations where things go into Awaiting Caller Info multiple times. We can click on that node, then use the histogram to focus only on the incidents that go into Awaiting Caller Info more than two times, so two through nine, and apply that filter. That narrows it down to the situations where an incident goes into Awaiting Caller Info multiple times.
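Conceptually, the histogram used in this step just buckets cases by how many times they enter a given state. A minimal sketch, with invented state sequences:

```python
from collections import Counter

# Hypothetical state sequences; the histogram buckets cases by how many
# times they enter a given state (here "Awaiting Caller Info").
sequences = {
    "INC300": ["New", "Awaiting Caller Info", "In Progress", "Resolved"],
    "INC301": ["New", "Awaiting Caller Info", "In Progress",
               "Awaiting Caller Info", "In Progress", "Resolved"],
    "INC302": ["New", "In Progress", "Resolved"],
}

def visit_histogram(seqs, state):
    """Map visit-count -> number of cases that entered `state` that many times."""
    return Counter(seq.count(state) for seq in seqs.values())

hist = visit_histogram(sequences, "Awaiting Caller Info")

# Applying the "more than once" filter is then a simple selection:
repeat_offenders = [tid for tid, seq in sequences.items()
                    if seq.count("Awaiting Caller Info") > 1]
```

Crossing `repeat_offenders` with a channel field is the code equivalent of the histogram-plus-breakdown combination shown in the demo.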
Then I can start to see which channels that's happening for most often. Self-service seems like a situation I might want to dig into a little deeper: things coming in via self-service and going into Awaiting Caller Info multiple times, that's a problem area. Or maybe I want to dig into the situations where things come in via the portal and go into Awaiting Caller Info multiple times; now I can narrow it down to those 242. And of course, at any point I can always go get the underlying records to continue the conversation or the analysis. So that's a couple of different ways to do intake channel analysis: a combination of the histogram on the Awaiting Caller Info or On Hold state, along with a breakdown on the left-hand side of the screen.

Now let's take a look at reopened work: how would we do a reopened work analysis? One way to start is with the transition filters down in the lower left. As I mentioned during the presentation portion, one of the things that makes process mining different is the ability to use steps within the process to filter our data. So I can come in here and say: show me all the incidents where the state was Resolved and then was not directly followed by the state Closed, meaning it went somewhere else. Instead of going directly from Resolved to Closed, this work went someplace else after it moved into the Resolved state. That narrowed it down to 3,500 incidents that went into Resolved and did not go next to Closed. Then I can do a couple of different things. One is to look at this by category.
Which incidents did this happen to most often, the ones that got reopened after they were resolved? I can see the majority of them were categorized as IT Business Service, so maybe I want to focus on that specific category of incidents being reopened after they go to Resolved. I might also want to use my bottleneck analysis to look at which paths things are taking. We can come in here and enter "Resolved", and here's a little trick: if you follow it with a space and a dash, it gets all the paths where Resolved was the first step of the transition. And I see a couple of interesting things. Of those 1,700 incidents categorized as IT Business Service that got reopened, 1,300 of them don't even have a Closed step at all. Is that the way we designed the process? If not, we might want to look into those. I can see some going from Resolved to Closed, and then I've got different ways things can be reopened: it looks like 215 of these go from Resolved back to Assigned, and another 208 go from Resolved back to In Progress. Why do we have two different paths when things get reopened? So this bottleneck analysis can be really helpful in understanding how the process is actually behaving versus how we designed it up front, looking for opportunities to improve it, and maybe driving more conformance from a process perspective inside the organization. Those are a couple of different ways you can start looking at your reopened work.

Now let's look at that use case around work that takes longer to route than to resolve. Again, you'll notice I use the transition filtering a lot here. We can come in and say: show me all the situations where an incident comes in and gets set to the state New, and we want to make sure it's the first time it got set to New, so we use the occurrence option to focus on things that were New for the first time. Then we say they're eventually followed by the state In Progress, and then eventually followed by the state Resolved. So I've got this flow: things that go from New to In Progress and are eventually followed by Resolved. But I also want to add in some time constraints. In this case, I want to focus on the things that take longer than two days to go from New to In Progress, but then, once they reach In Progress, take less than two hours for us to actually do the work and resolve. We're losing a chunk of time up front waiting for the work to start, but once we start the work, it's really easy work to do; these totally should be self-service opportunities for us.

In this case we narrow it down to the 299 specific situations where that happened, and then maybe I want to look at the intake channel for these. I can see that more than half of these things that took longer than two days to get started, but then less than two hours to resolve, came in via self-service. How do we have a self-service option that's taking that long for us to start the work? Maybe we want to narrow it down to just those, and then, as always, we can drill down to the detailed records behind them to see if there are any opportunities to improve the situation. Those time buckets of two hours and two days are completely configurable and obviously up to you, but they're very useful if you want to start looking for self-service opportunities and reclaiming large chunks of time from an organizational perspective.

All right, the last one in this state example: let's look at that before-and-after process change example.
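The "longer to route than resolve" filter can be sketched in code too: keep only the cases that were slow to reach In Progress but quick to resolve afterward. The timestamps and thresholds below are invented; in the product you'd express this with the transition filter's time constraints.

```python
from datetime import datetime, timedelta

# Hypothetical timestamped state changes per incident.
cases = {
    "INC400": [("New", datetime(2019, 10, 1, 9)),
               ("In Progress", datetime(2019, 10, 4, 9)),   # 3 days to start
               ("Resolved", datetime(2019, 10, 4, 10))],    # 1 hour to fix
    "INC401": [("New", datetime(2019, 10, 1, 9)),
               ("In Progress", datetime(2019, 10, 1, 10)),  # 1 hour to start
               ("Resolved", datetime(2019, 10, 1, 12))],
}

def first_transition(events, src, dst):
    """Elapsed time from the first `src` step to the next `dst`, or None."""
    start = None
    for state, ts in events:
        if state == src and start is None:
            start = ts
        elif state == dst and start is not None:
            return ts - start
    return None

def self_service_candidates(cases, slow=timedelta(days=2),
                            quick=timedelta(hours=2)):
    """Slow to reach In Progress, quick to resolve: self-service candidates."""
    out = []
    for tid, events in cases.items():
        to_start = first_transition(events, "New", "In Progress")
        to_fix = first_transition(events, "In Progress", "Resolved")
        if to_start and to_fix and to_start > slow and to_fix < quick:
            out.append(tid)
    return out

candidates = self_service_candidates(cases)
```

The `slow` and `quick` parameters correspond to the two-day and two-hour buckets in the demo, and as noted, you'd tune them to your own data.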
been using the transition filters up until now to isolate different steps or paths that work has taken in the process but you can also use conditions just like on the reporting engine to isolate the data now let's say we have a situation here in which we've we made a process change on on Halloween of uh 2019 and we want to understand how the process was be forming before Halloween and then look at how the process was performing after that process change on Halloween so what we can do is we can use this filter criteria here to say show me all the work or get me all the work that closed before a certain date and we'll come here and I know my demo data is 2019 and we'll say uh October 31st great and we'll apply this and now that we've applied this this is a pretty important step right is we've we've isolated our data to these 177,000 incidents that close prior to the process change and what we want to do is we want to come in here we want to save as a new filter set so we can use this as part of our comparison and we'll call this jump start prior to process change and we'll save that that and we'll clear and now let's isolate the data to be after the process change so we'll come in here to conditions we'll say closed after and I'll go back to my 2019 and we'll go to October 31st and we'll apply this and this will get me the 4,300 incidents that closed after that process change and again we'll save our filter set and we'll say jump start after process change filter sets are super handy when you want to start doing this comparative analysis and now that we've got both those filter sets what we can do is we can say clear all this I'm going to apply my one filter set over here prior to process change on the left hand side I'm going to hit my compare button up here I'm going to clear this filter set and then I'm going to apply the after process change so now I got sidebyside view of before and after my process change that I made and I can use my comparison statistics to 
It looks like after the process change our average durations are down about five hours, so a success, if you will. That side-by-side comparison is super useful. We did it here with before and after using the condition builder, but it could be for any filter set you want to set up; maybe in a little bit we'll do one around isolating things that go into the On Hold state. All right, now let's take a look at the assignment group analysis. There are two different things here, and my favorite is the finger-pointing report. In this project, instead of state as our activity definition, we used assignment group as our activity definition, and we'll look at two different use cases. First, we'll use the variation analysis to help us understand our long-running, multi-hop situations. What we have here, again, instead of state, is the volume and velocity of tickets moving from group to group. If we click on this variation analysis, it looks at all of the different routes the work is taking to get to closure, and you can use the filter criteria on the variation analysis to filter those routes themselves. So you can say: find me all the situations in which steps are greater than X (in my demo data that's going to be four), which will get me the multi-hop scenarios, and also find me the routes with a little bit of meat on the bone, so records greater than (in my demo data) 10, and hit apply. This will show me the routes that are taking multiple hops and have a little bit of volume to them. I can see here I've got this one scenario, 26 incidents, that went from IT Support Americas to Support IT and back to IT Support Americas, and their average duration was one month.
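The variation analysis described here, grouping records by the exact route they took and then keeping only multi-hop routes with some volume, can be sketched like this; the groups, routes, and durations are invented demo values, not real data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical event log: incident -> ordered list of assignment groups it visited
routes = {
    "INC100": ["Service Desk", "IT Support Americas", "Support IT", "IT Support Americas"],
    "INC101": ["Service Desk", "IT Support Americas", "Support IT", "IT Support Americas"],
    "INC102": ["Service Desk", "Support IT"],
}
duration_days = {"INC100": 30, "INC101": 32, "INC102": 3}

def multihop_routes(routes, durations, min_steps=4, min_records=2):
    """Group records by the exact route taken, then keep routes with at least
    min_steps hops and at least min_records records on them."""
    by_route = defaultdict(list)
    for number, path in routes.items():
        by_route[tuple(path)].append(number)
    return [
        (path, len(numbers), mean(durations[n] for n in numbers))
        for path, numbers in by_route.items()
        if len(path) >= min_steps and len(numbers) >= min_records
    ]

for path, count, avg in multihop_routes(routes, duration_days):
    print(" -> ".join(path), count, avg)
```

The surviving route here is the bounce from IT Support Americas to Support IT and back, which is exactly the kind of ping-pong pattern the finger-pointing report surfaces.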
All right, so a lot of time there across all those handoffs, but where is the time actually being spent? What you can do is click apply, and it will narrow things down to just that route. You can see that the tickets come in and go to IT Support Americas, who hang on to them for about a day before transferring them to Support IT; then Support IT hangs on to them for about four weeks before they go back to IT Support Americas. We now know who's holding on to the tickets the longest in this scenario, and we can start having a conversation in the organization: do we even need to go to that first-level team before we get these types of tickets over to Support IT? Why does Support IT have to pass them back? Maybe we can train up that first team to handle these types of cases. Lots of different conversations can be had, but it's a very good way to use process mining to help you identify opportunities to streamline processes and reduce the number of transfers of a piece of work. And of course, you can always get to the records. Another way to use this data (let's bring it back to something a little more manageable) is to use the assignment group analysis to look for the long-running handoffs, the handoffs that are taking a longer amount of time. Here you can use the transition filters again to isolate the data: show me all the situations where assignment group is anything, followed by assignment group is anything, and then use that time constraint again to show all the situations in which it's taking longer than a day for a piece of work to move from one team to the other. Again, you can choose your own time periods, but this narrows it down to the situations in which work is taking longer than a day to go from one team to another. Then I can use my variation analysis again to look at the most-records, high-volume situations where things are going from one team to another.
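The "any group to any group, longer than a day" transition filter can be approximated outside the platform like so; the handoff timestamps are hypothetical (roughly, when the source group reassigned the ticket and when the receiving group first acted on it):

```python
from datetime import datetime, timedelta

# Hypothetical handoff events: (incident, from_group, to_group, reassigned_at, picked_up_at)
handoffs = [
    ("INC200", "IT Support Desk L1", "Support Americas", datetime(2019, 10, 1), datetime(2019, 10, 5)),
    ("INC201", "IT Support Desk L1", "Support Americas", datetime(2019, 10, 2), datetime(2019, 10, 2, 6)),
]

def slow_handoffs(rows, threshold=timedelta(days=1)):
    """Any group to any group, where the transition took longer than threshold."""
    return [
        (number, src, dst)
        for number, src, dst, reassigned_at, picked_up_at in rows
        if (picked_up_at - reassigned_at) > threshold
    ]

print(slow_handoffs(handoffs))  # [('INC200', 'IT Support Desk L1', 'Support Americas')]
```

Feeding the surviving handoffs back into a route grouping like the earlier sketch gives the same "222 records, four days on average" style of finding.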
Let's clear the filters on that. What I can see here is that I've got these 222 going from IT Support Desk Level One to Support Americas that are taking longer than a day for that handoff to take place; in fact, on average these 222 are taking four days for the handoff. We can drill into those records and continue the analysis, but it gives you a starting point for finding where there might be opportunities to improve. So, a couple of different ways to use assignment groups as your activity definition and analyze how work is moving from team to team. All right, now let's transition over to the on hold reason analysis, and we'll look at two different flavors of this. First we'll do on hold, and I'll tell you right up front that I don't have a lot of data for on hold, so I created some of my own: 11 incidents put into the On Hold state, just so you can see how you might do this type of analysis in your instance. I'll just expand this out. You're going to have a flow like this where things go from New to In Progress, but some of that work goes into the On Hold state. The most common thing to do is to take a look at the things that go into the On Hold state and isolate them. Now that I've got the work that goes into the On Hold state, maybe we want to create a new filter set called 'jump start on hold state' and save it. So now we've got a filter set for the things that go into the On Hold state, and we can say: okay, I want to understand the impact that going into the On Hold state has on my overall closure time. You can come over here, use your compare again, and do a side-by-side analysis of the entire process versus things going into on hold, and you can see the impact, from an average duration perspective, that the things going into on hold might have.
Again, once you've got the things in the On Hold state, you can start using your breakdowns to better understand the channel or the categories of incidents, tickets, or any type of work that go into the On Hold state more than others. Let's clear this and close the comparison. The other thing you can do is use the histogram to focus in on things that go into on hold more than once. So we use the histogram to focus on the things that go into on hold more than one time, apply the filter, and now we've narrowed it down to those things that go into the On Hold state more than once. Just two very simple ways to start getting a better understanding of the work that goes into an On Hold state, and then to look for opportunities to reduce the number of times that happens or shorten the amount of time it sits in the On Hold state. Another way to look at this, and one of my favorite ways to use process mining, is to look at the on hold reasons themselves, and you can configure a project to do that. I love this use case because on hold reason is a transient field. Those of you who are familiar know that when you set a piece of work to on hold, a UI action pops up the on hold reason field for somebody to populate. The on hold reason value is there while the incident or piece of work is in the On Hold state, but as soon as somebody moves it out of on hold, that value goes away, which makes it very difficult to report on your on hold reasons. But because we're using the audit log data in the incidents and the cases, that data is available to us for our analysis. So you can configure a project that allows you to start looking at your different on hold reasons, and you can do the same analysis that we were just doing before.
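The isolate-on-hold filter and the more-than-once histogram both reduce to counting On Hold entries per record in its state history. A small illustrative sketch, with made-up histories:

```python
# Hypothetical ordered state history per incident, reconstructed from the audit log
histories = {
    "INC300": ["New", "In Progress", "On Hold", "In Progress", "On Hold", "In Progress", "Resolved"],
    "INC301": ["New", "In Progress", "On Hold", "In Progress", "Resolved"],
    "INC302": ["New", "In Progress", "Resolved"],
}

def on_hold_entries(history):
    """How many times the record entered the On Hold state."""
    return sum(1 for state in history if state == "On Hold")

went_on_hold = [n for n, h in histories.items() if on_hold_entries(h) >= 1]
repeat_on_hold = [n for n, h in histories.items() if on_hold_entries(h) > 1]
print(went_on_hold)    # ['INC300', 'INC301']
print(repeat_on_hold)  # ['INC300']
```

The first list corresponds to the "jump start on hold state" filter set; the second to the histogram filtered to more than one On Hold entry.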
For example, maybe I only care about things that are going into on hold with a reason of Awaiting Caller, and we can narrow it down to just those situations. Or maybe we want to look for things that are Awaiting Vendor, but build a transition that says: focus in on situations in which the on hold reason is Awaiting Vendor, directly followed by state is anything, but only those situations in which we're awaiting the vendor for more than, let's say, a week, seven days, and you can isolate it to just those. Being able to use the on hold reason in your analysis is really, really powerful, and not something that's easy to do with the traditional reporting you might be familiar with. Now, setting this type of project up: we haven't done a recording about that yet as part of this jump start series, so what I wanted to take the time to do now, really quickly, is walk you through how you might set something like this up. We'll create a new project and call it 'jump start on hold reason analysis.' We can add a dashboard (we'll just use this one) and hit save. Then we'll configure the table that we want to mine, which is going to be our incident table, so we'll pick our table, Incident. Of course, we're going to scope the data using our condition builder. We'll say active is false and hit preview, and that gets us 40,000, but I've created a subset for this on hold example: short description contains, let's say, 'process mining.' Hit preview, and that gets me to the 11 records we've been using, and we'll hit save. Now we're going to add two activity definitions. Of course, we're going to add state as our activity definition.
But one of the things we'll want to do to keep the map clean and make our analysis a little easier is to include only certain states. Because we're going to be using the on hold reason as an activity, we don't necessarily need the On Hold node on the map itself. What you can do is use 'choose activity values' to select the specific activities you want to include on the map, and I'm not going to include On Hold, because that would just make it muddier. So we'll include these states but not the On Hold state, and we're going to replace that On Hold state with the on hold reason; we'll say okay and submit it. Then we'll come in here, say new, and add our on hold reason field. Again, we'll choose values, because we don't really want the 'none' value; we only want to include the situations in which on hold has a value, so we'll include those. Great, submit it. Maybe we'll also add a breakdown. I think most everyone here, if you watched the former recordings or attended the prior sessions, knows how to add a breakdown, but let's slice and dice this by assignment group and hit submit. At this point we can be done: we come into our project definition and hit the Mine button (we're not going to include improvement opportunities today). That's it. I wanted to run through that one because it is a unique way to create a project, using those two activity definitions of state and on hold reason, and then filtering out certain values from a state and on hold reason perspective. If I open this up, we'll now have our on hold reason analysis, and you can start looking at how things go from In Progress to that On Hold state, but using the specific reasons, instead of the On Hold box itself, to get another level of granularity that you might not have been using before.
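The reason this project works at all is that the audit trail keeps every value the transient on hold reason field ever held. A sketch of that reconstruction over hypothetical sys_audit-style rows (the field and value names here are illustrative, not an exact schema):

```python
# Hypothetical audit rows in chronological order: (incident, field, new_value)
audit_rows = [
    ("INC400", "state", "On Hold"),
    ("INC400", "hold_reason", "Awaiting Caller"),
    ("INC400", "state", "In Progress"),
    ("INC400", "hold_reason", ""),            # field is blanked when work resumes
    ("INC400", "state", "On Hold"),
    ("INC400", "hold_reason", "Awaiting Vendor"),
    ("INC400", "state", "Resolved"),
]

def hold_reason_activities(rows):
    """Recover each populated on-hold reason as its own activity, even though
    the live field is cleared whenever the record leaves the On Hold state."""
    return [value for _, field, value in rows if field == "hold_reason" and value]

print(hold_reason_activities(audit_rows))  # ['Awaiting Caller', 'Awaiting Vendor']
```

A live report against the incident table would only ever see the current (usually empty) value; mining the audit rows is what makes each reason available as a node on the map.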
All right, we ran through a bunch of different stuff; hopefully that got your juices flowing in terms of different ways you might want to start using process mining and applying it to the use cases you have in your organization. Just to wrap up with a few things (and I'll get to the questions; if you have questions and haven't posted them in the Q&A section yet, put them in the Q&A and I'll get to them): a common question is, okay, this looks great Dan, which workflows can we apply process mining to? Process mining is available for ITSM, so things like incident, problem, and change request. If you're using our customer or industry workflows, for example customer case, process mining is totally available for that. If you're using us for HR, a lifecycle event or an HR case, you can use process mining there. The same goes for Field Service Management and Strategic Portfolio Management, so things like ideas and demands, and if you've created your own applications with Creator or App Engine, you can apply process mining to those. And as many of you know, as of the Vancouver release we've opened process mining up to external process data as well, via an Automation Engine entitlement. If you're interested in any of those other workflows: you'll notice that I used largely incident data today to drive my demonstrations, because it's the best demo data I have, but there are recorded Academy sessions out there for each of the other workflows if you want more contextual information. So what do you do next? If you haven't already, you want to go out and install the process mining plugin from the store. The instances you install it on should be at the Pro level or above; we use some of the Pro-level capabilities inside the process mining solution itself. Once you have that core plugin installed, there are also content packs for the individual workflows, so depending on the workflow you're interested in, you're going to want to install the relevant content pack as well.
Some additional resources above and beyond what we covered here today: there is a Now Learning course out there called Process Mining Essentials; it's about 90 minutes and self-paced if you're interested in taking it. There's a really good white paper on the community site focused on our own ServiceNow process mining journey, some of the initial findings we had, and the use cases we focused on. There's a process mining community forum: if you go out to community.servicenow.com there's a specific forum where we post a ton of information. One of the things on that forum is our Process Mining Academy; these are monthly sessions that do deep dives into different areas of the solution. We host them once a month and they're all recorded, so there are twenty-plus sessions out there if you want to go deeper into different areas. There's a use case series: we just ran through the majority of the use cases in that series, but in addition to this session, if you want a little five-minute demo recording of each of the individual use cases we covered, they're packaged up in that use case series. And of course, if you want to show others in your organization or get them exposed to process mining, there is the online demo available on the community site; if you just search up the Why and What of In-Platform Process Mining, the overview presentation and demo are there. I mentioned this is the fourth session; if you happened to miss any of the prior sessions, the relevant or comparable content is posted on the community site. For the first session, the Why and What of process mining, search up that demo I just mentioned on the community site, and that'll be equivalent to the first session we did here.
The second session, when we built our first couple of projects: we've posted that recording as Process Mining Academy session one, so if you search up 'how does in-platform process mining work' and 'creating your first model,' you'll find it. And the last session, session number three, using the analyst workbench to identify inefficiencies: that recording has been posted as Process Mining Academy session number two, so just search up 'using the analyst workbench to identify inefficiencies' and you'll find the recording of the session right before this one. All right, let's get to your questions. If you've had questions throughout, I hope you put them in the Q&A. There's nothing there yet, so I'll give you a few minutes to put some questions in if you haven't already. Not seeing any questions come in, so we'll just take the time to say thank you very much for your time here today. I appreciate you attending these jump start sessions, and I look forward to seeing you on the ServiceNow Community in the process mining section. Appreciate your time, everyone, and happy mining.

View original source

https://www.youtube.com/watch?v=GDTcKf7XXL4