How to use the Process Mining Analyst Workbench to identify process improvement opportunities
Well, welcome everyone to today's ServiceNow Process Mining JumpStart series session. This is the third session in a series of four targeted at customers who are entitled to our in-platform process mining solution but haven't really started to dig in and take full advantage of it yet. In our first session in the series, we introduced the topic of process mining: what it is and why it's important. In the second, we looked at the building blocks of a process mining project, and we walked through creating our first couple of projects together. Today we'll focus on the different aspects of the Analyst Workbench and the capabilities in there that help you explore your process data and identify improvement opportunities.

The purpose of these sessions is to help us all get better educated on ServiceNow's process mining capabilities and how they work. We have a scheduled topic that we'll usually cover for the first 30 minutes or so, and as we go through it, just post your questions in the Q&A and I'll do my best to address them. I've carved out some time before and after the demonstration piece to address those questions, so get them in there. If I don't answer them right away, just know that I'm going to get to them eventually; please post them as they come up and I'll sort through them when the time comes.

Now, this is the slide that says that anything I say and do here today can't be held against me in a court of law. It's also the slide that says that if, as part of this conversation, we make any forward-looking statements about things that might be coming in future releases, you should take them as forward-looking statements and make no decisions based on them whatsoever, because things are always subject to change. I'm sure you've read the fine print.

For those of you I've not worked with before, my name is Dan Grady, and I'm part of the process mining product team here at ServiceNow. As for what we'll do today: first, we'll
do that quick little Process Mining 101 refresher. Then we'll run through the different capabilities in the Analyst Workbench, like bottleneck and variation analysis; we'll run through them in slides first and then demo a bunch of them. We'll also talk about the platform intelligence capabilities that have been baked into the solution to help you do your digging, and mining I should say. Then we'll jump into a demo and show you all these capabilities in action, get your questions answered, and wrap up with some other resources available to you above and beyond today.

As most of you know, I love to start these sessions with this quote: "Do the best you can until you know better. Then when you know better, do better." I like the quote for two reasons. One, it usually aligns to the reason we've all gathered: to learn a little bit more about the ServiceNow platform, then take that knowledge, go back, apply it, and get more value out of our investment with ServiceNow. It also aligns to process mining, which is designed to help us x-ray the workflows we have running on the ServiceNow platform and then start showing us where and how we could be doing better for everyone involved with them.

Now, as always, we'll start with that quick Process Mining 101 for the newcomers. With everything we do, there's a designed and desired path in our minds for how it should work and how things should play out, whether that's planning an event like this session here or running a business process. When we design things, we design for both efficiency and completeness, to provide the best experience possible for as many people as possible. Unfortunately, what we design isn't always what ends up happening. The reality is that not all work is going to flow through the process along the optimal path, and that's going to have a negative impact on the experience of both the people requesting service and the people trying to deliver that service. Now,
identifying what's actually happening within our business processes, and then improving them, isn't always easy. What process mining allows us to do is use the audit-log data that's generated as records move through a given workflow on the platform. We can take that audit-log data and use it to create a visual representation of what's actually happening within the process. This new level of visibility helps us accelerate our ability to identify inefficiencies, non-conformant process activities, and improvement opportunities.

Process mining gives us the ability to answer process questions that have historically been a little bit challenging to answer. Where traditional analytics helps us answer a lot of the "what" questions about our processes, like how many tickets or incidents or cases we worked and how long it took to close them, process mining helps us answer the whys: where is our process getting stuck, where is unnecessary rework happening, where do we have incidents or cases ping-ponging between different groups or teams, where aren't we conforming to the process we've designed? These are all things that have been historically difficult to answer and act upon, and now they're available to us in just a couple of clicks. You learned how to create these projects, and those clicks, last time; this time we'll look at how a few more clicks can help us get to the answers to these questions. We always like to say that process mining gets us to the why behind all of your KPIs, and in doing so empowers everyone involved in the process to make more data-driven decisions about which improvements to make.

All right, let's get to those building blocks. Last time we covered all the foundational concepts you needed to create a project and then visualize your process. In this session we'll cover all the different ways to work with that project and the visualized process map.
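To make that mechanic concrete — replaying audit-log state changes into routes and arcs — here is a minimal, stdlib-only Python sketch. It is purely illustrative: the record IDs and states are invented, and the event list is a simplified stand-in for the platform's real audit data, which also carries field names, old values, and timestamps.

```python
from collections import Counter

# Hypothetical audit events, oldest first: (record_id, state_entered).
# Real audit data carries much more detail; this keeps only what the map needs.
events = [
    ("INC001", "New"), ("INC001", "In Progress"), ("INC001", "Resolved"),
    ("INC002", "New"), ("INC002", "In Progress"),
    ("INC002", "Awaiting Caller Info"), ("INC002", "In Progress"), ("INC002", "Resolved"),
]

# Replay each record's events into an ordered route (the variant it followed).
routes = {}
for rec, state in events:
    routes.setdefault(rec, []).append(state)

# Count distinct routes and state-to-state transitions (the arcs on the map).
variant_counts = Counter(tuple(r) for r in routes.values())
arc_counts = Counter((r[i], r[i + 1]) for r in routes.values() for i in range(len(r) - 1))

print(len(variant_counts))                      # number of distinct routes to closure
print(arc_counts[("In Progress", "Resolved")])  # volume on one arc of the map
```

Every capability covered in this session — breakdowns, bottleneck analysis, variation analysis — is, conceptually, a different way of slicing these same routes and arcs.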
We'll identify inefficiencies and improvement opportunities as we go. It looks like a lot when it's in list form, but we'll get through it pretty quickly.

All right. Once we configure and mine a project, we're brought to the Process Mining workspace to review and dig into our results. There will be either two or three tabs in the workspace for each project, depending on how we configure things. The first is the Summary and Insights page, which gives us high-level information about the data we mined: volume of records, velocity to get through the process, and the number of different routes or variants taken to get to closure. It will also surface all of the improvement opportunities based on the finding definitions we have configured for that table. These improvement opportunities are grouped and categorized, and you can use them as a starting point for any analysis you want to do. The other thing to be aware of is that the Summary and Insights page has a preconfigured dashboard that comes with the content packs for each workflow, like ITSM or CSM or HR, and that contains some out-of-the-box Performance Analytics KPIs. Don't worry if you're not using Performance Analytics or you've got your own KPIs; these are completely configurable, you just might need to adjust them to match your environment. The other thing I'll call out here is that in the Washington release, which comes out imminently (tomorrow, I believe), we've added an auto-generated dashboard based specifically on the data that you mined, so for those of you who aren't using Performance Analytics, you'll now have something to put in that spot.

The second tab is the visualized process map, and this is where you'll spend most of your time, and where we'll be spending most of our time here today, using things like breakdowns and other capabilities like bottleneck, root cause, and variation analysis to start having a conversation with your process
data: you ask a question, you get an answer, and based on that answer you follow up with another question, based on what the data is telling you. I always like to say mining is a verb; you have to do it. You have to interact with the data to get to the results you want and the process improvements you're going to make.

Now, on the Analyst Workbench, the process map screen, one of the primary things you'll use to better understand your process data is breakdowns. Breakdowns allow you to filter the data on the map, but they also give you statistical insights into the data to help guide the questions you're asking. Breakdowns can be things like categories, locations, priorities, or the HR services you have inside the organization. You're allowed to configure up to 10 breakdowns per project, and these can be either choice or reference fields, as you learned last time.

The bottleneck analysis capability helps us isolate which process transitions are causing delays. A transition can be a record moving from one state to another, for example a ticket or case moving from In Progress to On Hold, or it could be a case moving from one assignment group to another; maybe we're looking to see how long it takes to triage a ticket and move it to a second-level group. This is all information found in the Analyst Workbench, and we can configure it to help us dive in a little deeper. Just so you know, bottleneck analysis is surfaced on the right-hand side of the screen when you get into the Analyst Workbench: you click the top icon to expand the right-hand panel, and the bottleneck analysis button is there. In the Xanadu release, we're going to be resurfacing bottleneck analysis on the Summary and Insights page itself. So, bottleneck analysis focuses on specific transitions.
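The sorting that bottleneck analysis performs can be sketched in plain Python. This is a conceptual stand-in with invented records and timestamps, not the workbench's actual implementation: it measures time between consecutive state entries and ranks the transitions by average duration.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical state-change history per record: (state, entered_at).
histories = {
    "INC001": [("New", "2024-01-01 09:00"), ("In Progress", "2024-01-01 10:00"),
               ("Resolved", "2024-01-03 10:00")],
    "INC002": [("New", "2024-01-02 09:00"), ("In Progress", "2024-01-02 09:30"),
               ("Resolved", "2024-01-02 17:30")],
}

fmt = "%Y-%m-%d %H:%M"
durations = defaultdict(list)  # (from_state, to_state) -> hours spent
for steps in histories.values():
    for (s1, t1), (s2, t2) in zip(steps, steps[1:]):
        hours = (datetime.strptime(t2, fmt) - datetime.strptime(t1, fmt)).total_seconds() / 3600
        durations[(s1, s2)].append(hours)

# Rank transitions by average duration, longest first: the likely bottlenecks.
ranked = sorted(durations.items(), key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True)
for (s1, s2), hrs in ranked:
    print(f"{s1} -> {s2}: avg {sum(hrs) / len(hrs):.1f}h over {len(hrs)} occurrence(s)")
```

Sorting by standard deviation or unique occurrences, as the workbench also offers, would just mean swapping the key function.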
Process variation analysis, on the other hand, focuses on the complete paths or routes a record takes. It will show you each and every different route, and when you see the reality of your processes, there are going to be way more routes, or variants, than you anticipated. This is one of the ways we can see how well we're conforming to our designed processes: which tickets and teams are skipping steps, and which routes are long-running with too many hops involved. Or maybe we start looking at a map of priorities and categorizations, and we get to see which things are being reprioritized or recategorized more often than others. Using variation analysis in combination with a project that has assignment group set as the activity is a great way to identify unnecessary group transfers, or multi-hop and ping-pong situations. Work that touches three or more teams represents a great opportunity to streamline a process, reclaim some time for the organization, and drive more productivity. This assignment group analysis with variation analysis is one of the most popular scenarios.

With both bottleneck and variation analysis, you have the ability to drill down into a map that focuses on a given route or bottleneck. But sometimes you're going to want to isolate a very specific set of transitions. Let's say incidents or cases that go from group A to group B and then back to group A, and that maybe contain the word "zoom" in the short description. Or perhaps you'll want to look at tickets that take longer than a day to assign and then less than an hour to resolve; that's a great feeding ground for self-service opportunities. You have advanced transition filtering capabilities to ask and answer these types of questions. These in-process types of queries just aren't possible with reporting, like using the different steps in the process or the history of the path that a ticket took.
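The ping-pong and multi-hop patterns described above boil down to simple sequence checks over a record's route. A hedged sketch, with invented group names and a simplified definition of "bounce":

```python
def find_pingpong(route, min_bounces=1):
    """True if the route contains an A -> B -> A bounce, e.g. a ticket
    handed to another group and sent straight back."""
    bounces = sum(
        1 for i in range(len(route) - 2)
        if route[i] == route[i + 2] and route[i] != route[i + 1]
    )
    return bounces >= min_bounces

# Assignment-group routes (hypothetical group names).
routes = {
    "INC001": ["Service Desk", "Network", "Service Desk", "Network"],  # ping-pong
    "INC002": ["Service Desk", "Network"],                             # clean hand-off
    "INC003": ["Service Desk", "Hardware", "Network"],                 # multi-hop, no bounce
}

flagged = [rec for rec, r in routes.items() if find_pingpong(r)]
multihop = [rec for rec, r in routes.items() if len(set(r)) >= 3]
print(flagged)   # records bouncing between groups
print(multihop)  # records touching three or more teams
```

The "group A to group B and back to group A" transition filter mentioned above is exactly this kind of check, expressed as a query over the mined data instead of code.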
Whether it's an incident or a case, you just don't have that ability with reporting, so it's super handy to have it within process mining, using the audit-log data. Just keep in mind: the transition filters we have access to use the audit-log data to narrow down, or scope, the data set we're looking at on the map. The condition filters act on the current state of records, just like operational reporting on the platform does. So there are two ways to filter: the current-state, reporting way, and using the audit-log data with our transition filtering.

Now, there's a histogram available on any node or arc you click on in the map, and this provides us distribution analysis to help pinpoint specific sets of work that are either long-runners or bouncing in and out of a specific state or assignment group. Whenever we see these types of things, they're typically improvement opportunities, right? The long-runners, and the things bouncing in and out of states or teams more than once. The histogram is also very helpful in validating some of the average duration values you see in the workbench. You might see a high average duration, but if you look at the median, it might be a little smaller. What the histogram does is help us see where the majority of the records fall: are there a handful of long-runners that are potentially skewing the average? This type of information is really useful, and you can then use the histogram itself to isolate the map based on its data bins. Maybe you just want to focus on the outliers: you have the ability to click on them in the histogram and drill down into just those.

Now, I added this slide, and it might seem obvious, but I did want to make sure I pointed out that as you're having this conversation with your process data, at any point you can always drill down.
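That average-versus-median distinction is easy to see with a toy set of durations. The numbers below are invented, but the skew pattern is exactly what the workbench histogram helps you spot:

```python
from statistics import mean, median

# Resolution times in hours for one node on the map (hypothetical values):
# most records close quickly, but a couple of long-runners skew the average.
durations = [2, 3, 3, 4, 4, 5, 5, 6, 200, 240]

print(f"average: {mean(durations):.1f}h")    # pulled up by the two outliers
print(f"median:  {median(durations):.1f}h")  # where the majority actually fall

# Simple fixed-width bins, like a histogram view: the outliers land in their own bin.
bin_width = 50
bins = {}
for d in durations:
    b = (d // bin_width) * bin_width
    bins[b] = bins.get(b, 0) + 1
outliers = [d for d in durations if d >= 100]
print(bins, outliers)
```

Here the average is roughly ten times the median, which is precisely the situation where drilling into the outlier bin, rather than chasing the average, finds the real improvement opportunity.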
You can get all the way down to the detailed records, if you have the rights to do so, of course. Now, this is a big difference between us and some other process mining solutions out there, and the reality is that this is where a lot of the actionable information lives. One thing to note here is that the View Records option is on the node and arc popups when you're on the primary map itself, but when you swap over to do variation analysis, it's not going to be available on the popup. You need to go into the map itself: on the canvas you'll see a little drop-down that says "variance options," and that's where you'll get your Show Records from. I've seen a few customers get a little confused, like, "Wait a minute, in one place I can get to the records, but then I go over to this view and I'm not seeing the View Records option." Just look for that little variance options box.

Now, as you start using all these different ways to filter and isolate your data, you're going to start seeing some interesting things, and you may want to compare a filtered map to the original, or something like that. When you find these unique scenarios, you're going to want to come back to them later, in conversations or with others in the organization, and you might want to do some comparison analysis, as I was just saying. So just know that you have the ability to save the scenario as a filter set; each time you re-mine the project, this saved filter set will be available to you. You might, like the slide says, focus in on incidents that go into the state of Awaiting Caller Info more than once and also come in via the email channel. That might be something you want to come back to over and over again, or show somebody at a later time. Saving a filter set just makes that combination of things you've clicked on readily available to you. And when you do save those filter sets,
they're very handy when you want to do a side-by-side comparison of the total data set against just the filtered data you have. Or you can do this process model comparison to look at different regions or locations and how the process is performed in different parts of the organization. Or maybe you're outsourcing similar work to two different vendors and you want to see how one vendor is performing versus the other. This side-by-side comparison is also very useful for situations where you want to look at the impact of a process change you may have made: you put the process as it was working prior to the change on one side, and how it was performing after the change on the other, to see if there were any adverse effects from the changes you made. So you have this ability to do side-by-side comparison, and it's very, very useful.

As we find all these inefficiencies and opportunities, you're going to want to share and collaborate with others involved in the process. You can take snapshots of your findings and invite others to look at the map. Just so you know, there's a pre-built connection with Continual Improvement Management. If you're not familiar with the Continual Improvement Management application, it's designed to help you capture, align, prioritize, and track all of the improvement initiatives you have in your organization. So as a process owner or a business analyst is exploring the process map in the workbench and finds an opportunity, they can immediately create a new initiative or link to an existing initiative so the findings don't get lost in the shuffle. This just helps ensure that insights get acted upon and that value gets realized. That's one of those things I've always mentioned: when you start doing process mining, there's going to be no shortage of opportunities to improve, but you can't act on them all at once. Creating these
initiatives in Continual Improvement Management and linking a process mining project to them just ensures that things get followed up on and holds people in the organization accountable for acting on these insights, so we realize some value. And from a Continual Improvement record, you can create all the additional types of records your organization's process calls for.

Process mining also has a pre-built connection with Automation Center. Automation Center is meant to give you complete visibility into all your automation activities and the value they're providing, as well as future automation opportunities and requests. So let's say a process owner is analyzing a map and they see a virtual agent conversation opportunity: right from the Analyst Workbench they can create an automation request, and that's going to be tracked and followed up on inside Automation Center.

All right, that was already a lot, but like one of those TV infomercials: wait, there's more. There are a number of built-in capabilities that take advantage of the platform intelligence components, so let's look at those real quickly and then we'll get into the instance. There are automated findings, cluster and root cause analysis, and the option to include an Automation Discovery report in your project as an additional tab. Let's run through each of these before we get into the live demo piece.

First and foremost, we have finding definitions, and these get surfaced as improvement opportunities on the Summary and Insights page. Finding definitions are preset rules that you can apply to the data you're mining. They basically allow you to bucket and highlight inefficient or non-conforming activity; things like multi-hop issues or processes that are bypassing steps can be highlighted right up front, before anyone even gets into the visualized process map. We provide some of these pre-built finding
definitions via those content packs I mentioned earlier, just to help you get jump-started, but of course you can create your own finding definitions to test and prove out the hypotheses you have about inefficiencies within your own process.

Now, for most organizations there's no shortage of process improvement opportunities, but identifying all of them and then prioritizing which ones to focus on first can be a pretty daunting task. The combination of those rule-based findings with the automated findings we added in the Vancouver release really helps streamline that process. With automated findings, you can simply point process mining at a set of data and we'll automatically start to surface work that has some form of inefficiency. That might be rework, things going from, let's say, A to B to C and then back to A in the process; you can configure a detector to look for all those rework situations. We can also configure a detector that surfaces all the ping-pong scenarios, where work is bouncing back and forth between steps or groups or individuals, going A-B, A-B, A-B. There's also an extra-step detector that highlights situations where a variant of the process has one additional, or extra, step that could be slowing us down. Those three, the rework, ping-pong, and extra-step detectors, were available to customers in the Vancouver release, and in Washington we're adding four more: a pattern repetition detector, an extreme duration detector, an extreme repetition detector, and a slow transition detector. Will everything these detectors flag be something you need to act upon? Probably not. But by automatically surfacing them and then allowing someone to flag the ones worth following up on, they significantly accelerate your ability to identify improvement opportunities.
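Two of those detectors — rework and extra step — are essentially pattern checks over a route. The sketch below is a conceptual illustration of those patterns, not the product's actual detection logic; the baseline route and variants are invented:

```python
def has_rework(route):
    """Rework: a step that is left and later re-entered (A ... B ... A)."""
    seen = set()
    for prev, cur in zip(route, route[1:]):
        seen.add(prev)
        if cur in seen and cur != prev:
            return True
    return False

def extra_step(variant, baseline):
    """Extra step: the variant matches the baseline with exactly one step inserted."""
    if len(variant) != len(baseline) + 1:
        return False
    return any(variant[:i] + variant[i + 1:] == baseline for i in range(len(variant)))

baseline = ["New", "In Progress", "Resolved", "Closed"]
v1 = ["New", "In Progress", "Awaiting Caller Info", "Resolved", "Closed"]
v2 = ["New", "In Progress", "Resolved", "Closed"]

print(has_rework(["New", "In Progress", "Resolved", "In Progress", "Closed"]))  # rework
print(extra_step(v1, baseline))  # one inserted step
print(extra_step(v2, baseline))  # identical to baseline, so no
```

The ping-pong detector is the adjacent-bounce special case of the same idea, and the newer detectors (extreme duration, slow transition, and so on) layer duration thresholds on top of these sequence checks.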
To put it another way: the rule-based findings are great for situations where you have a specific inefficiency you want to better understand, or where you want to capture data to help you prioritize acting upon it; the automated ones are great for those situations where you don't know what you don't know.

Cluster analysis takes advantage of the in-platform machine learning capabilities. You may have used the Predictive Intelligence clustering framework in the past to identify patterns in the unstructured data that lives in, say, incident or case short descriptions and descriptions. We've taken that capability and built it right into the process mining solution. Once you've narrowed down a set of records via, say, bottleneck, variance, breakdown filtering, and findings, you're likely going to want more information from the records themselves. Clustering helps us get that information and determine how best to address the inefficiency.

Root cause analysis helps us find connections and influencers in the data based on record attributes like category, channel, or assignment group, and we can then use those to further filter down the records. The unique thing about root cause analysis is that it allows us to use the data as it was at the creation of the record. If you think about the way you do reporting today, when you use operational reporting on the platform you're always looking at the current state of the record and the data points that live in that current state. Root cause analysis allows us to use the initial values set on the record to do the analysis. Perhaps the assignment group has changed over time, or the category has changed over time; when you run a report, you're only going to see that last category. Root cause will tell us: hey, 22% of the incidents that went into this state or hit this inefficiency started out in a particular assignment group, or were initially categorized a certain way.
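That initial-versus-current distinction is easy to illustrate with audit-style data. This is a hypothetical sketch (invented records and categories), showing why a current-state report and root cause analysis can tell two different stories about the same tickets:

```python
# Audit entries for one field, oldest first: (record, old_value, new_value).
# The first entry's old_value is the value the record started with; a report
# on current state only ever sees the last new_value.
audit = [
    ("INC001", "Hardware", "Network"),
    ("INC001", "Network", "Software"),
    ("INC002", "Hardware", "Software"),
]

initial, current = {}, {}
for rec, old, new in audit:
    initial.setdefault(rec, old)  # first recorded value wins
    current[rec] = new            # last recorded value wins

print(initial)  # what the records started as
print(current)  # what a current-state report would show
```

In this toy data, every record was initially categorized "Hardware," yet a standard report would show them all as "Software" — exactly the kind of signal root cause analysis surfaces and reporting misses.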
It's just another lens on the data. And then finally, we have Automation Discovery. This solution uses a library of 180 common IT issues and automation opportunities, and we map these to the records we're mining via a machine learning technique. The dashboard then shows you where your biggest potential impact would be from acting on any of the recommended automation opportunities, and these are primarily virtual agent conversations that come with the out-of-the-box content packs. Just know that there's an IT taxonomy you can use, and we recently added an HR taxonomy that you might want to configure if you're mining your HR cases.

All right, let's jump into the demo portion of the exercise, and like I said, if you've got questions, please put them in the Q&A and I'll address them right after we complete this demonstration. What I'm going to do here is open up this incident state analysis project, and we'll run through what we've got on the screen. First thing you'll notice across the top: we've got our 22,000 incidents that we mined in this case, and our 1,500 different routes to get to closure. You've got your pre-built dashboard that comes as part of the content packs; this one would be the dashboard aligned to the incident table, but of course you're going to create your own. Then, as we move down the screen, we get to those improvement opportunities based on the finding definitions, both rule-based and automated. You'll notice this row here that's basically categorizing the different improvement opportunities captured as we mined the data. One thing I will call out: if you've not mapped your finding definitions to key
performance indicators, you're going to find that this widget here says "no data." To rectify that, you need to make sure you're mapping all your finding definitions and improvement opportunities to impacted KPIs; I think we covered that in the last session as well.

Once we get down to the bottom of the screen, we start to surface all our different types of improvement opportunities. This first one here is an example of a rule-based one, where we specifically said: go find us all the incidents that at some point in their life touched the state of Awaiting Caller Info. So we mined the data and it flags 846 of them, or 38%, that touched the state of Awaiting Caller Info. We see the impacted KPIs, and then we get the most important number, our total inefficiency: something like 43 years' worth of time baked into things going in and out of the state of Awaiting Caller Info. Right from here we can jump into the workbench for those records, run root cause analysis, run clustering analysis, add a note, or create a Continual Improvement initiative or an Automation Center request.

The next one on the list is an example of an automated finding. You'll see we have rework on the In Progress state. What it's telling us is that 274 incidents, or 9%, went into the state of In Progress, left it, and at some point in their lifecycle came back to In Progress; in fact, one incident touched the state of In Progress 11 times. Then again, probably the most important number: the 33 years of total inefficiency packaged up in these incidents that move in and out of In Progress more than once. Same trick here, we can drill down into it. So this first screen is very powerful in helping us understand where potential opportunities might be, so we can continue our journey and ask follow-up questions of the process data, and it's also very good at giving us
some areas to focus on in terms of prioritizing which opportunities to dig into first, using this total inefficiency data, the total bucket of time we could potentially reclaim a piece of. With something like 43 years, even if we just got 1% of it back, it would be a win for us.

And like I said, right from here we can jump into the workbench itself for those incidents, and it narrows things down. If I come over to the right-hand side of the screen, up to my model options, you'll see that we've narrowed it down; we just came into the map for the 846. One thing to call out when you're working with the Analyst Workbench: the numbers across the top stay the same and represent the entire data set you've mined. If you ever want to see the records, routes, and average duration for the data you've filtered, you come over to the model options piece to see the filtered data activity. What I'll do now is hit this Clear All to bring us back to the entire data set, and you'll see we went from 846 back to 22,000.

One thing I do like to call out: you see that we have 1,500 routes, but on the map itself it doesn't look like we're working with 1,500 different routes. That's because by default we just show you the top 20%. What we can do is expand this slider, and that's going to start showing us all the different routes. I've had a few customers who, when they turned process mining on for the first time, said, "I'm not really seeing all that much," and it's because they didn't know they needed to expand this map out to show all the different connections that could be available on the map. If I just zoom in real quick, one of the things you'll notice in this case is that we've chosen state as the activity, so we're looking at the volume and the velocity of the tickets going from state to state in the process.
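That default "top slice" behavior works because variant volume is usually very concentrated: a handful of routes account for most of the records. Conceptually it's a cut-off over the sorted variant counts, something like the sketch below (invented counts, and the product's exact slider rule may well differ):

```python
from collections import Counter

# Hypothetical variant counts for a mined project: volume concentrates in a
# few routes, which is why a map showing only the top slice still looks complete.
variants = Counter({
    ("New", "In Progress", "Resolved"): 800,
    ("New", "In Progress", "Awaiting Caller Info", "In Progress", "Resolved"): 150,
    ("New", "Resolved"): 40,
    ("New", "In Progress", "On Hold", "In Progress", "Resolved"): 10,
})

# Take the most common routes until they cover 80% of record volume.
total = sum(variants.values())
shown, coverage = [], 0
for route, n in variants.most_common():
    shown.append(route)
    coverage += n
    if coverage / total >= 0.80:
        break
print(len(shown), f"{coverage / total:.0%}")
```

Here a single route already covers 80% of the volume, so expanding the slider is what reveals the long tail of rare routes, which is exactly where the customers quoted above were surprised to find the rest of their process.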
Here you've got your volume and velocity. If you want to change the metrics you're looking at on the arcs themselves, you have the ability to adjust from the default, unique occurrences, to, say, the max duration or the average duration; in this case you might want to look at the total duration, or maybe turn the secondary metric off. You can adjust those here to dig into the arcs and look for opportunities.

You may choose to use this bottleneck analysis piece here. What bottleneck analysis allows us to do is look at all the different transitions, or arcs, on the map and start sorting them: say, show me the arcs with the longest average duration, or the greatest standard deviation, or the ones happening most often by unique occurrences. And we can do things like come in here and say, show me all the transitions that involve the state of Awaiting Caller Info. I can see here that Awaiting Caller Info to In Progress has 19 years' worth of time packaged up in it, and the first half of that round trip, In Progress to Awaiting Caller Info, has 22 years' worth. So there's where a lot of our time is being spent: things moving in and out of the state of Awaiting Caller Info.

At this point we can do some different things. We can come over and use the map: if I really want to dive into that Awaiting Caller Info issue, maybe I just click on the node itself, apply a transition filter, and narrow it down to just Awaiting Caller Info. If we bring back our transition over here, look at this: we're right back to that same place, the 846, that improvement opportunity at the top of the list on the Summary and Insights page. We could have just jumped straight here, and we did earlier, but I wanted to show you some of the other capabilities inside the map for narrowing things down. Once we
Once we've done that, what do we do next? Well, we've got our breakdowns right on the left-hand side. Our breakdowns allow us to slice and dice the map by things like assignment group, priority, category; any way that you've structured your data, you can use as a breakdown. One thing I will caution you on is to try to keep your breakdown elements in the hundreds; you don't want to have millions of different values on the left-hand side of the screen here, it's just going to take forever to paint the screen. So something like CI is not a great thing to use as a breakdown unless you filter it ahead of time. Very often, when we're looking at things going into the state of Awaiting Caller Info, maybe we want to do channel analysis to see if we can improve an intake experience and eliminate some of that back and forth that might be slowing us down. What you can see here is that by default these channels, self-service, portal, email, are stacked or ranked by the volume, the number of records. We can change the way that we're ranking these, maybe to look at the channels that have the longest average duration, and we can also filter these breakdowns if we wanted to; we'll do that a little bit later on in the demo. I can change the way that we're visualizing this to get a sense of the scale of each of these channels. But one of the most interesting things to always look at is that you've got your volume, and then you also have your velocity, that average duration. What we can see here is that things that come in via email on average take one week longer than the first two channels. So it gives you visibility into both the volume and the velocity in one view, which is super helpful when you're trying to make decisions about where to focus your attention. And now we can just drill down to just those things that came in via email and that went into Awaiting Caller Info.
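That "volume and velocity in one view" idea is just two aggregates over the same grouping. A toy sketch, with invented ticket data rather than real ServiceNow records:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical tickets: (channel, days_to_close).
tickets = [
    ("self-service", 3), ("self-service", 4), ("self-service", 5),
    ("portal", 4), ("portal", 5),
    ("email", 10), ("email", 12),
]

by_channel = defaultdict(list)
for channel, days in tickets:
    by_channel[channel].append(days)

# One row per channel: volume (record count) and velocity (avg duration),
# ranked by volume like the default breakdown view.
for channel, durations in sorted(by_channel.items(),
                                 key=lambda kv: len(kv[1]), reverse=True):
    print(f"{channel}: volume={len(durations)}, avg={mean(durations):.1f} days")
```

Note that in this toy data, email is the slowest channel even though it is not the biggest, which is exactly the kind of signal that seeing both measures side by side surfaces.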
Then you can do some additional things. We can come over here and start to use these options on the popup; because you're in the platform, you can always get down to the detailed records themselves, and that's a huge advantage. Of course, ACLs are going to apply here: as you drill into this, if you don't have rights to see these records, you're not going to see them. The other thing that I always like to point out here is that you've got a couple of options on this list view. One is that you could launch process mining; if you just wanted to create a process mining project for just these 1,300 records, you could do that right from the list. I also like to call out this hidden feature of the platform called interactive analysis. You'll remember me saying, multiple times probably, that we're having a conversation with the data. One of the cool things about the platform is that as you drill into the records, it gives you more capabilities to continue that conversation, to continue asking questions about what you're seeing, and in a single click we can turn this into an interactive dashboard to keep that conversation going. We may want to use the root cause analysis capability that we talked about earlier to start seeing what some of the leading influencers or contributors to things going into the state of Awaiting Caller Info were. Or, right from here, we can run cluster analysis and start looking at the unstructured data inside of the tickets, the short descriptions and the descriptions. We can see here that we have a cluster of tickets that came in via email, went into Awaiting Caller Info, and have people talking about changing their email address, and maybe we want to focus in on that; or changing their cost center or their manager, and maybe we want to focus in on that opportunity. And then right from any one of these we can drill down to those records and continue to dig into them, but the clustering will help organize them a little bit.
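The platform's cluster analysis does real text mining on those short descriptions; as a loose illustration of the underlying idea (grouping tickets whose descriptions share a topic), here is a deliberately naive keyword-matching sketch, with invented descriptions and a hand-picked topic list standing in for learned clusters:

```python
from collections import defaultdict

# Toy short descriptions; all made up for illustration.
tickets = [
    "need to change my email address",
    "please update email address",
    "change my cost center",
    "cost center is wrong",
    "update my manager",
]

# Hypothetical keyword list standing in for discovered cluster topics.
topics = ["email", "cost center", "manager"]

clusters = defaultdict(list)
for text in tickets:
    for topic in topics:
        if topic in text:
            clusters[topic].append(text)
            break  # assign each ticket to the first matching topic

for topic, members in clusters.items():
    print(f"{topic}: {len(members)} tickets")
```

Real clustering doesn't need a predefined topic list, of course; the point is just that the output, groups of related tickets you can drill into, is what guides the next question you ask.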
The clustering also helps guide us to the next question we're going to ask. Now, as we do these things, once we find an opportunity to improve, there are two ways in which we can capture it. One is that we can come here, add a note, and say something like, hey @Abel, take a look at the intake experience for email updates, attach a snapshot, and post it. That's going to send a notification to Abel; Abel is going to click on the link, it's going to bring him right into this project, he's going to hit preview, and that's going to take him right to the same place in the analysis that I was doing here. Or, as I mentioned, there's that connection with Continual Improvement Management, where we can either create a new Continual Improvement Management initiative or link to an existing one, and that connection with Automation Center, where we can create a new Automation Center request or link to an existing one. It's just going to link the project, so in those two interfaces somebody has the ability to come back in here and start looking at the data that we saw and why we thought we might want to make this improvement, or to justify the improvement initiative itself. All right, we've covered a lot; we've looked at a lot of different things. There are a couple more things that I want to show you in here, so I'm just going to clear all and close this. One of my favorite capabilities is this transition filter. Know that you have two options down here. One is the condition filter; this works just like standard reporting, so you can filter these incidents by a category, or maybe use the related-list condition to only show incidents that have breached an SLA: you can come in here, link to that, say I only care about incidents where "has breached" is true, and hit apply, and this is going to filter it down to only the incidents that have breached an SLA. Or, my favorite one, is using the transition filters to look at the audit log data and use that for some filtering, so
doing things like saying, hey, show me, in this case, the incidents that went from the state of New to In Progress, and then, let's make this "eventually followed by," so after they went to the state of In Progress they were eventually followed by the state of Resolved. But you know what, I only care about ones that took longer than two days before we started the work, and then once we started the work, I only want to focus in on things that took less than two hours to resolve. So again: we lost a lot of time waiting to start the work, but once we started the work, it was super easy work for us to do. Those potentially are a feeding ground for automation opportunities and an area in which we can reclaim a ton of time from a productivity perspective. We can hit apply here, and this will narrow it down to just those tickets that took less than two hours to go from In Progress to Resolved. And then what I can do is actually start to use the histogram here. With the histogram we can look at repetition, so maybe we only focus on the things that went through this transition of In Progress to Resolved two times, or the ones that went through it once. Or, from an analysis perspective, this is breaking it down by the time it took for these things to move from In Progress to Resolved; maybe I want to narrow this down even further to the things that took less than 15 minutes to move from In Progress to Resolved. I can use the histogram to apply that filter set here. And then I can come in here and show the records again if I wanted to look at the records from this arc, I can run the cluster analysis, or maybe I want to save this as a filter set and just call it something like "super easy things to solve, maybe automate," and save this filter set. Now I'll have this available to me next time I come back in; any time we re-mine, it will isolate the data based on these conditions.
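Under the hood, a transition filter like that is just a predicate evaluated against each record's audit trail. A minimal sketch with fabricated audit data (the state names mirror the demo; the cases and timestamps are invented):

```python
from datetime import datetime, timedelta

# Toy audit trails: case -> ordered (state, timestamp) events.
# Hypothetical data standing in for the incident state audit log.
t0 = datetime(2024, 1, 1)
trails = {
    "INC1": [("New", t0), ("In Progress", t0 + timedelta(days=3)),
             ("Resolved", t0 + timedelta(days=3, hours=1))],
    "INC2": [("New", t0), ("In Progress", t0 + timedelta(days=1)),
             ("Resolved", t0 + timedelta(days=2))],
    "INC3": [("New", t0), ("In Progress", t0 + timedelta(days=4)),
             ("Awaiting Caller Info", t0 + timedelta(days=5)),
             ("Resolved", t0 + timedelta(days=5, hours=1))],
}

def matches(trail):
    """New -> In Progress took more than 2 days, and In Progress was
    eventually followed by Resolved in under 2 hours."""
    times = {}  # first time each state was reached
    for state, ts in trail:
        times.setdefault(state, ts)
    if not {"New", "In Progress", "Resolved"} <= times.keys():
        return False
    waited = times["In Progress"] - times["New"]
    worked = times["Resolved"] - times["In Progress"]
    return waited > timedelta(days=2) and worked < timedelta(hours=2)

hits = [case for case, trail in trails.items() if matches(trail)]
print(hits)  # -> ['INC1']
```

INC2 started work quickly, and INC3 bounced through Awaiting Caller Info on the way to Resolved, so only INC1 fits the "slow to start, fast to finish" profile that makes a good automation candidate.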
Once you have these saved filter sets, they're super handy for doing side-by-side comparisons. One of my favorite use cases is to create a filter set that says, hey, prior to this date, go get me all the data, so prior to a process change; then I can come here, hit this comparison, and use the filter set for things that were bucketed after the process change. You can also do this side-by-side comparison using breakdowns, so you can look at region A versus region B, or vendor A versus vendor B. And once you've got these side by side, you can use this comparison statistics option to start giving you some data about the difference: hey, after the process change, look what we've done, we've reduced our closure times, the average duration of these tickets, by five hours. So it's a win, and it also gives you visibility into any potential adverse effects that might be caused by a change that you made. The last thing that I want to touch on here, and we're going to use a different project for this: we've just been looking at how things are moving from state to state in a process, but as I've mentioned, and as we did last time when we built a project that used assignment group as an activity, one of the most popular use cases is to use assignment group on the map and look for those multi-hop situations where we're losing time in team transfers, tickets being transferred between different teams in the organization. So what we can do here is expand this out to look at how things are moving from team to team. Just like we had on the other map, we're looking at the groups and the volume and velocity of the tickets moving between groups. But a great way to start narrowing down this data is to use our variation analysis. The variation analysis
allows us to look at all of the different routes that things took to get to closure. We can start sorting these by the long-running routes, the routes that were taken most often, or the ones that have the most steps, which is always an interesting thing to do. But we can also start to filter those routes and say, you know what, I only care about the routes that take greater than X number of steps, multi-hop situations. And you want a little bit of meat on the bone from a records perspective; you don't want the onesies and twosies, you want to look for things that have happened more than once, so let's just use 10 here and hit apply. This allows us to narrow it down to those specific routes that went through multiple steps and have a fair number of records included in them; it's not just a one-time thing. And I can see here I have these 26 that on average took one month and 17 hours to go from IT Support Americas to Support IT and back to IT Support Americas. Then you can filter to just show that route, and we can actually start to see where the time is being spent: it comes in to IT Support Americas, they hang on to it for a day, they pass it to Support IT, they hang on to it for four weeks, and then they pass it back to IT Support Americas. So you can see now where the time is. This is useful for asking, hey, do we even need to transfer to that second team, can we train up the first team, or maybe we bypass the first team altogether, get it to that second team, and reclaim some of this time. And like I was mentioning during the slides, just know that if you want to get to the records, run root cause, or run clustering on these, you need to use this option up here; unlike the prior example, where you had all those options on the popup, it's just a little bit different view when you start using the variation analysis.
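The route filtering we just walked through (more than X steps, at least 10 records, ranked by duration) is also easy to picture as plain grouping and filtering over case data. Again, everything below is invented for illustration, not pulled from a real instance:

```python
from collections import defaultdict
from statistics import mean

# Toy cases: case -> (route of assignment groups, total days to close).
cases = {
    f"INC{i}": (("IT Support Americas", "Support IT", "IT Support Americas"),
                30 + i)
    for i in range(12)
}
cases["INC90"] = (("IT Support Americas",), 2)
cases["INC91"] = (("IT Support Americas", "Support IT"), 9)

# Group case durations by route (the sequence of groups touched).
routes = defaultdict(list)
for _case, (route, days) in cases.items():
    routes[route].append(days)

# Keep multi-hop routes (more than 2 steps) with some volume (>= 10 cases),
# ranked by average duration, mirroring the route filters in the demo.
interesting = sorted(
    ((route, durations) for route, durations in routes.items()
     if len(route) > 2 and len(durations) >= 10),
    key=lambda kv: mean(kv[1]), reverse=True,
)
for route, durations in interesting:
    print(" -> ".join(route), f"n={len(durations)}", f"avg={mean(durations):.1f}d")
```

Only the bounce-back route survives the filter here, which is the same "go find the expensive multi-hop patterns" outcome the variation analysis gives you on the map.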
And I think that covers all the different things from the slides; we looked at each of those in this demonstration piece, so let's jump back over to the slides. Now, at this point many folks say, this is great, Dan, but you used Incident for a lot of your examples. That happens to be where my best demo data is, my richest set of data to show you the capabilities, but you can apply process mining and the things that we just saw to most workflows running on the ServiceNow platform. There's an offering for ITSM, so things like incident, problem, and change. If you happen to be using us for customer or any of the industry workflows we have, process mining is available for that. If you're using us for HR cases and lifecycle events, you can apply process mining there. Field Service Management and Strategic Portfolio Management are all options. If you've built your own applications on the platform with creator workflows and App Engine, you can apply process mining to those. And then, as of the Vancouver release, we've opened our process mining solution up to external process data. You might be running a process in SAP or Oracle, or, for example, we use SmartRecruiters here at ServiceNow; if you can extract audit-log-level data from those systems, you can import it into ServiceNow and run process mining on top of it to look for bottlenecks in those workflows. If you wanted to learn more or spend some time specifically on any of those individual offerings, there's a blog post on the community site that links to recorded sessions for each of those different areas. So if you wanted to dive deeper into that external data offering, there's a recorded Academy session that covers just that; if you wanted to dive deeper into Field Service Management, there's a recorded Academy session just for that. So you can learn a little bit more about those individual workflows if you want. And then, what do you
do next? Well, if you've not already turned on the plugin, you can go out and turn the plugin on. You should be turning this core process mining plugin on in an instance that is Pro level or above; that's the prerequisite. And then, in addition to the core plugin, for each of the workflows, as I've mentioned throughout the presentation, we have content packs that get you those finding definitions, or improvement opportunities, for each of the workflows that you're interested in. Just make sure that you install the content pack for the workflows that you're going to be mining, in addition to that core process mining content pack. Beyond these sessions that we're running, there are also other resources out there. There's on-demand training on the Now Learning site called Process Mining Essentials if you wanted to check that out. We use process mining here at ServiceNow; we've applied it to over 40 different processes, and we have a white paper out there that looks at how we've approached process mining from an organizational perspective, and it also covers five or six of the initial findings we had when we started standing up process mining. There is a community forum page, called a Product Hub, on community.servicenow.com specifically for process mining. Feel free to go there and post questions; we answer them on a daily basis. We also post tons and tons of content there, whether that be blog posts that answer specific questions or recorded sessions like our Process Mining Academy series. There are about 20-plus Academy sessions out there now that do deep dives into different areas; if you really wanted to spend 45 minutes on the histogram, there's a 45-minute session about the histogram visualization and how to use it best in that library of Academy sessions. We also have a use case series; these are five-minute videos that walk through using process mining to do things like SLA breach analysis, channel analysis, longer-to-route-than-resolve type analysis, and that before-and-after a process change scenario. So a lot of the stuff that we covered here today is actually packaged up in that use case series, and we'll be diving deeper into five or six of those use cases in the next session. And then lastly, if you have folks in your organization that you want to see process mining in action, or get a better understanding of it, on that community page there is a "why and what of in-platform process mining" post; there's a 10-minute overview recording as well as a six-minute demo available on that page if you want to share it with others in your organization. All right, hopefully you've been posting some questions in the Q&A section, but if you haven't and you've got questions about anything that we covered here today, please post those in the Q&A or the chat now and I'll do my best to answer them. I'll give you a minute now to post any questions you might have about anything that we covered here today, or any process mining topic; get those questions in there. Well, if nobody has any questions, not seeing any yet, just a reminder about the program and
kind of the next dates. This is session three, and it's the second running of session three, in which we focused on how to use the Analyst Workbench to find improvement opportunities. In the next session, next month, we'll dive into five common use cases, like how to use process mining to do SLA breach analysis, or that before-and-after a process change scenario; we'll walk through five common use cases, and at that point you should have what you need to get going and start being successful with process mining. If you've missed any of the prior sessions, that "why and what of process mining" demo is available via the blog post that I just spoke about. We also took session number two, how to create your first process mining project, and posted that video as part of the very first Process Mining Academy session available on the community site. So if you missed the last session and want to go back and learn the content, feel free to check out Process Mining Academy session one on the community site, and you can get things answered there or learn a little bit more. All right, last chance for romance: anybody have any questions or anything they want addressed before we turn into a pumpkin today? If not, I'll just thank you for your time, say happy mining, and we'll see you next time.
https://www.youtube.com/watch?v=e0YL5nf882w