TechTalk - Using ServiceNow Predictive Intelligence to build a business case for automation

Import · Sep 14, 2020 · video

Hello everyone, thank you for joining our previously recorded webinar on data mirroring: using Predictive Intelligence to analyze consumption and build a use case for automation on the ServiceNow platform. As always, we ask that you do not make purchasing decisions based on the content presented during this webinar. Thanks, and enjoy the webinar.

If you're an existing ServiceNow customer, you might be experiencing challenges with some of your processes as they stand today. A lot of organizations are struggling with delivering better customer care, reducing their security risk, and addressing long mean time to resolution (MTTR) or remediation for incidents and requests. On top of this, with the shift to a remote workforce this year, we're all asking: how can I deliver better services to my employees to encourage workplace satisfaction and allow employees to be as productive as they can be? One of the ways we're doing this here at ServiceNow is by using automation.

Using automation, it's possible to achieve the outcomes your organization is looking for. ServiceNow customers can use the power of Predictive Intelligence, a module with a machine learning engine in ServiceNow, to derive key insights into how you're consuming the platform today. By running what we call the clustering solution in Predictive Intelligence, you can analyze large datasets in ServiceNow and start to benchmark how you're consuming the platform. Some of the insights you can derive: Are you automating enough? If you're part of the incident management team, what does the MTTR look like for incidents? How about for requests, or for calls coming into your help desk? And what does your employee experience look like: are employees able to use self-service capabilities to log requests? If you're part of the security team, you may be concerned with resolving security incidents in a timely fashion in order to reduce your cyber
risk exposure. Later on, we'll talk about how you can frame a conversation with your C-suite stakeholders internally to identify the key outcomes and priorities your C-level executives care about, and, if security is top of mind, whether your organization should be adopting SOAR capabilities to reduce risk. If you pivot over to the network operations center, maybe you have too many events coming through, so you may care about correlating those events, using solutions like AIOps and ServiceNow's acquisition of Loom Systems, to reduce MTTR and avoid service outages.

How does this process look? Let's talk about it in a tactical fashion, because this is a real capability you can use today, not something conceptual. Predictive Intelligence is a module, as I mentioned; you can turn the plug-in on in your pre-prod environment if you're on at least the New York release of ServiceNow. Step one is to take your production environment and clone it to your pre-prod instance in order to run this analysis. You turn the Predictive Intelligence plug-in on in pre-prod and use it to set up clusters against multiple tables; examples could be an analysis of how you're consuming ITSM, ITOM, or Security Incident Response. You then analyze the cluster output from Predictive Intelligence to identify areas for improvement and translate those into recommendations. Automation capabilities like security incident orchestration, AIOps for event management, and Virtual Agent for chatbot capabilities can be mapped as remediation recommendations based on the output of this exercise. The clustering output will include a readout of incident volume and MTTR, and we'll show you an example of what that looks like. Add those together to identify how much total time is spent on resolution, whether it's an IT incident, a request, a call, or a security incident. The next step
is to calculate and benchmark what your cost looks like today: what is my cost per hour to deliver these various services to the business? For example, if your level 1 incident analysts spend 5,000 hours a year resetting passwords manually at $30 an hour, what does that mean in terms of the current cost to the organization of resetting passwords? That's where you start to benchmark your implementation and consumption of the platform today, and work on ways to implement automation tomorrow for cost savings. Lastly, we'll go through how to build a business case based on real dollar savings, from a simple business value analysis up to some more complex financial models.

Let's run through a sample of how this was conducted as a customer case study; note that the metrics you see today and the customer information have both been redacted from our output. First we'll go through the findings from a sample cluster output. This example is for incident management, and we want to understand the major clusters of incident types we're seeing in the organization. These could be anything: password resets, as I mentioned, but also onboarding requests or backup failures, just as examples. The first thing you'll want to do is run Predictive Intelligence on a table in ServiceNow, as long as you're looking at a large enough dataset. Ideally we're targeting between 30,000 and 100,000 records for this analysis, and it should be time-bound, say over the course of the last 12 months. And again, to turn on the Predictive Intelligence plug-in, you need to be on at least the New York release. In this sample view, we ran clustering on an incident table within the ITSM suite. We analyzed over 40,000 records in a pre-prod environment, and the analysis was done on the Short Description field. In
this case, the output was 134 clusters. If you know anything about machine learning, clustering is a technique that groups similar data points; here it produced 134 clusters based on similar Short Description fields. It's telling me the top incident types I see, based on like descriptions within those incidents. The engine also outputs a cluster quality score, which reflects the similarity of the incident Short Description fields within each cluster. With cluster quality, I can say with confidence that I'm looking at the top cluster types whose quality is above the threshold I've defined internally, so I can build a business case on this output.

When we drill into a single cluster summary within the Predictive Intelligence module, we're able to see the actual incident records considered for that cluster. Even though we're abstracted, we can still drill down directly into an incident and confirm that the short description meets the requirements of what we're trying to analyze. Short description is just one example of a field that can be used to group like incidents together; you understand your dataset best, so you may want to expand to include multiple attributes, for example the category of the incident, the description field, or the assignment field. In this example, notice how "password" is misspelled but the record is still picked up: the machine learning engine groups based on relative similarity, giving you accurate results and insight when you run this analysis.

When we zoom out of the cluster analytics output, we're able to look at the top clusters we've identified. I mentioned we had 134 clusters; we're going to look at the top ones. You can slice this and say, for instance, that you'll look at the top 10 percent of the
clusters produced and analyze those further to gain insight. Here's an example of the dominant clusters that were uncovered; the cluster size and duration reflect the total hours spent per incident. For the top one, a SuccessFactors password reset, it takes 82 hours on average to resolve, and we see over 70,000 instances of a password reset request happening in our environment annually. What do we do with this information? Now we can propose remediation recommendations. In this case, if we're spending a large amount of time resetting passwords, how do we reset them faster and scale as volume increases? An interesting observation we've made at ServiceNow: with folks moving toward a remote workforce, we've actually seen an increase in password reset requests. I think people are more productive than ever; they're resetting their passwords and trying to log in to new applications being spun up to deliver services as folks work remotely.

So now we're going to zone in on how to actually automate those password reset requests. Once we benchmark where we are today and what our consumption looks like, we can come up with recommendations for how ServiceNow capabilities can reduce our mean time to resolution or remediation. Going back to the password reset example, we can position Virtual Agent to intake the password reset request. Virtual Agent is our self-service chatbot capability; we use it internally at ServiceNow, and it helps me when I'm looking for a KB article or logging a request, because I'm just chatting with a chatbot. We can also use a solution like Integration Hub and Flow Designer to hook into Active Directory, an identity and access management solution, or even SuccessFactors itself to actually reset the password. So now what we're
proposing is this: let's have a chatbot intake the password reset request, and then let's use ServiceNow's Integration Hub and Flow Designer capability to hook into a tool that actually carries out the password reset, without an analyst having to pick up a ticket and manually fulfill it.

Once we've derived the insights from the clustering tool, we focus on translating them into a business case for our internal stakeholders. We've gotten the technical part out of the way, and now we're going to build a business case. If you're part of the ServiceNow platform team for your organization, this is where you can prepare for C-level discussions, to understand how ServiceNow will align with the outcomes your C-suite cares about. In the current COVID climate, and I know we're sick of hearing about it, but unfortunately this is our new reality, there's been a major shift in the priorities of your C-suite. They care about continued customer delivery and support, about reducing costs across the board, and about supporting a remote employee workforce, and they want to re-evaluate how to continue business in the new norm; operational and business resiliency is really top of mind. Prepare for these conversations with your C-suite: ask them what their initiatives and priorities are, and reframe your ServiceNow roadmap for delivering services to power those priorities. This is an outcome-based discussion, and I want to emphasize that. Based on real customer conversations we've had at ServiceNow, we've captured the following anecdotes from CIOs: they've shared that they're not prepared for the Health 4.0 initiative, that they need to improve or automate antiquated processes to reduce costs, and that they don't know how to address compliance and regulatory concerns while still staying agile. So you start to get a sense of the outcomes the C-suite
is really focused on, and now you can start to build your case for automation. Here we've got a list of example KPIs, key performance indicators, that are critical for measuring your current state, the before state, and then the after state, in order to validate that your organization is making progress by implementing automation technology. An example of automation impact is the reduction in average duration of incidents and requests. Tying this back to our prior example, how do we reduce MTTR? We recommend solutions like Virtual Agent to shorten intake time, machine learning to auto-assign requests, and Integration Hub to orchestrate actions in other systems. Lastly, we use ServiceNow's Performance Analytics capabilities to report, benchmark, and continuously measure our KPIs and identify whether we're meeting the goals we set for automation.

So far in our business case, we've identified our C-level priorities, focused on the outcome conversation, positioned automation technology, and defined KPIs to measure value. Now we want to start building a financial model to project cost savings. A light business value analysis is easy. As I mentioned earlier, you can take what something costs you today: the cluster analytics output will show you your MTTR, and your cost is your hours worked times your cost per hour. Again, say your level 1 analyst has a cost of $30 per hour to your organization; you multiply that out to identify your current annual cost of handling a particular incident or security incident, or of responding to an event with your network operations team. For a rough BVA, you can assume a 33% or 66% reduction in time. What that means is, if I'm spending 10 hours today on a password reset, I can assume I might save one third of that time using automation, or, if I implement the full suite, which is Virtual Agent and
Integration Hub, then I'll save two thirds of the time spent. You can translate that into a rough estimate of hours saved, which in turn translates to cost saved over a span of time. That's the basic outline for a BVA. You can also get into a more complex financial model, where you start to calculate your return on investment, your NPV (net present value), and your payback period in order to scale this out over a longer time horizon. You'll see here an example of a potential business case for automation where we're saying we'll save between 125,000 and 368,000 hours annually based on the implementation of automation technology, which translates to a cost savings of between $5 million and $10 million over five years. This is how you benchmark where you are today and where you need to be in order to do more with less.

We've had a number of customers run this exercise, and here's what they had to say. What I found particularly interesting was that one of our ServiceNow sales executives thought this was the most customer-centric tool we've ever produced, because it allows customers to mirror their data and build a real business case to identify areas of improvement where automation is key.

What we're going to do now is transition into a demo, where we'll show you what that after state would look like for your organization: once you've implemented automation technology, what does the experience look like for the end user? So with that, I'm going to turn it over to Pranav.

Thank you, Cyra. I'm going to share my screen and take you through an example of the automation capabilities of the Now portal using Virtual Agent. For this, I'm going to take a scenario in which we have Susie. Susie is a sales manager, and she wants to request a group on Microsoft Teams for her team members as a space
for collaboration. The usual flow of this request would have her go through the catalog of items and capabilities displayed here on the Service Portal, which is available to internal stakeholders. That's a fairly time-consuming process, often involving a series of emails or messages on Slack. So now, using Virtual Agent, I'm going to take you through a demonstration of Susie requesting a Microsoft Teams group. Upon clicking the chatbot, she's initially shown a message asking what her issue or request is. Since the Service Portal, as you saw before, is very form-based, it requires the user to find the particular form they're looking for, sometimes with difficulty; Virtual Agent empowers Susie to find what she needs entirely through the chat. She can either type what she wants or, in this case, click the Microsoft Teams request. She's then shown a series of messages requesting certain information from her, very similar to what she'd find in the Service Portal. I'll quickly enter a team name, and the agent is intelligent enough to suggest certain members it thinks will be part of Susie's team, so she can pick and choose who she wants in her new sales team, and finally add an initial message to the team. As we can now see, she's completed her service request, and the entire process of requesting a Microsoft Teams group has been done completely through the Virtual Agent, which is very user-friendly and saves her a lot of time. She can further check on the status of the Microsoft Teams request, and, great, she's delighted to see it's already been approved and completed. She can also go in to check the status and the variables of the request, such as the team
members or the initial message, in case she wants to change them. To quickly go through the technical workflow of this particular request, I'm going to pass it over to Patrick.

Thanks, Pranav. Yeah, that was certainly easy. Before I get into the technical specifics, I'd like to share with you the why. This is a real use case from one of our customers, American University. Just like many of our organizations, they were forced to transform very quickly due to COVID: all of a sudden, rather than working in the office, they were working virtually, like many of us are today, and their tool of choice for collaborating virtually was Microsoft Teams. Within the first month of building this automation, they were able to fully automate 150 requests, and the creation time for a Microsoft Teams group dropped from 29 hours to 8 minutes. Their end users were waiting 29 hours for something that can be very quickly automated. Maybe it needs an approval, but what was happening was that the approval would go out via email, and then they'd have to wait to make sure it came back; it was an unnecessary process with a lot of unnecessary overhead. So we got that team creation time down to eight minutes, and this freed up 37 and a half hours, almost a full-time engineer's work week, by adding this one automation. You can read the quote from the VP of Human Resources as well; there were other side effects too, such as a much better employee experience. They used Integration Hub, our Azure Active Directory spokes, Flow Designer, and Action Designer, all out-of-the-box functionality, and I'm going to share some of those technical specifics with you now. If we go into Flow Designer, you can see we have a pretty basic flow: it has a trigger, a set of actions, and some subflows
here as well. The trigger is that service catalog item; remember, the demonstration Pranav showed was just a Virtual Agent conversation front-ending a catalog item. We're really here to meet your customers where they're at: do they want to use chat, the service catalog, or mobile? The Now Platform can help you meet your customers where they are and how they want to interact with ServiceNow and your fulfillment teams. The first thing the flow does is pull in the catalog variables, which all become reusable data pills, so there's no scripting; all of this is drag and drop. Then there's an if statement: if the user isn't a manager, go ahead and ask for an approval. Susie happened to be a manager, so we quickly bypassed that approval. Then we have our "create MS team" subflow. The reason we create these subflows is that when you're building your automations in ServiceNow, you're going to need them for another project. For example, I had a customer who leverages ServiceNow Project Management on the platform, and they said, wouldn't it be great if, when a project got to the execution phase, we could create a team for that project so we could all collaborate? They were going to create a manual process for that, and I said, wait, we can just automate it: we can take the reusable subflow and use it for this use case as well. So it's going to become best practice when building out these automations in your environment: try to break reusable blocks of work out into subflows. With that, I'll hand it back to Pranav, and then I'll close out with one more demo.

Thanks, Patrick. Similar to the previous demo, in which Susie was requesting a Microsoft Teams group, I'm going to show you a demo in which she now requires a cloud environment for her team to work in. Again, the
digitalization of this entire process runs through the Now portal. She can launch a stack from the catalog; in this case she wants to request a custom WordPress stack with RDS. The VA automatically fills in certain data based on her persona, but she can still add her team name and group before submitting the request. What we're essentially doing is requesting cloud resources for the organization without the long review process: wrapping a digital workflow around these cloud resources and reducing the resolution time from a few days or a week to just a few hours. I'm going to kick it back to Patrick to show you the technical details.

Thanks. When you provision your resources through ServiceNow, we automatically enforce some tag data within those resources, and enforcing that tag data allows us to dynamically map your application in near real time. As soon as the resources get provisioned out in the cloud, we're pushing updates into our CMDB along with that tag information, and with it we can dynamically map the application: we group all the components together and provide business context. The next step in that process is logically to bring in your monitoring data. We integrate with a number of monitoring tools, we have an open API, and we can integrate with pretty much anything. I'm actually going to simulate a CloudWatch alarm in this environment; CloudWatch is AWS's monitoring system for modern cloud resources. We're going to watch the Operator Workspace, the graphical view of all the services in your environment, light up. Traditionally, in some of our less mature NOCs, we're swivel-chairing between different monitoring tools and copying and pasting data into ServiceNow, so we're missing a huge opportunity for automation. You'll
notice that we've lit up the AHA Creative Labs WordPress service; if we look at the map, it's lit up as well, and it's showing us root cause, the thing in the environment that's having an issue. Now we can route engineers to it, or maybe we take this process full circle and put an automation in place to resolve the issue. That's the end of the technical portion of the demo. I'm going to transition one more time into a couple of slides, and then we'll do some Q&A.

I'll share one more use case around that AIOps story. Accenture, one of our partners and one of our customers, used ServiceNow's AIOps integrated with ITSM Pro service management and was able to reduce their mean time to resolve by 40%. They also deduplicated and correlated a lot of their tickets, achieving 90% deduplication and correlation rates and 50% configuration item accuracy. It's very important that we bind a specific issue to a specific configuration item in the CMDB, so we can show the business impact, like I just showed you on the map. The last slide is an engagement we ran with a customer; the customer information has been removed, but this is what they used to roll up to their C-suite and leadership to show the operating-cost reduction around IT process automation. I know it's a bit of an eye chart: things like JVM restarts, adding users to Active Directory, Microsoft Exchange disk cleanup alerts, et cetera. Sixty IT processes, which accounted for 65,000 requests a year, and these are all things where they said, yeah, we can definitely automate that; our engineers have a script for that anyway. Okay, so let's put that script in ServiceNow and put a front end on it with Virtual Agent or with the service catalog. They were able to eliminate $2.2
million in operating costs. I think anyone would like to come to their CIO with that level of analysis. Lastly, to everyone on the phone today, and to our presenters Cyra and Pranav: thank you so much. We'll end with a call to action. You can start using Predictive Intelligence today in your lower environment to create a business case for automation at your organization, and you can be the individual who gets to report a $2.2 million savings to the CIO.
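For readers who want a concrete feel for what the clustering step does, here is a minimal, illustrative sketch in Python of grouping incident short descriptions by text similarity. This is not ServiceNow's Predictive Intelligence engine (which is built into the platform and requires no code); it is a toy stand-in using character-trigram Jaccard similarity with a greedy single pass, and the incident strings and threshold are invented for the example. Note how the misspelled record still lands in the password cluster, mirroring the behavior described in the webinar.

```python
def trigrams(text):
    """Character trigrams of a normalized short description."""
    t = " " + text.lower() + " "
    return {t[i:i + 3] for i in range(len(t) - 2)}

def similarity(a, b):
    """Jaccard similarity over character trigrams (0..1)."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb)

def cluster(descriptions, threshold=0.25):
    """Greedy clustering: each record joins the first existing cluster
    whose representative is similar enough, else starts a new cluster."""
    clusters = []  # list of (representative, [members])
    for d in descriptions:
        for rep, members in clusters:
            if similarity(rep, d) >= threshold:
                members.append(d)
                break
        else:
            clusters.append((d, [d]))
    return clusters

# Hypothetical incident short descriptions, not real customer data.
incidents = [
    "Password reset for SuccessFactors",
    "pasword reset successfactors",      # misspelled, still groups
    "Reset my SuccessFactors password",
    "VPN connection drops intermittently",
]
for rep, members in cluster(incidents):
    print(f"{rep!r}: {len(members)} record(s)")
```

The real engine clusters at much larger scale (the webinar targets 30,000 to 100,000 records) and exposes a quality score per cluster; this sketch only shows the "group by relative similarity" idea.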
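The light business value analysis described above (annual hours times cost per hour, then an assumed 33% or 66% time reduction) is simple enough to sketch directly. The figures below reuse the webinar's example numbers (5,000 hours a year at $30/hour); the function names are our own.

```python
def annual_cost(hours_per_year, cost_per_hour):
    """What the process costs today, per year."""
    return hours_per_year * cost_per_hour

def bva_savings(hours_per_year, cost_per_hour, reduction):
    """Projected annual savings if automation removes `reduction`
    (e.g. 0.33 or 0.66) of the time spent on the process."""
    return annual_cost(hours_per_year, cost_per_hour) * reduction

# Webinar example: L1 analysts spend 5,000 hours/year on password
# resets at $30/hour.
today = annual_cost(5_000, 30)          # current annual cost
partial = bva_savings(5_000, 30, 0.33)  # e.g. Virtual Agent intake only
full = bva_savings(5_000, 30, 0.66)     # e.g. Virtual Agent + Integration Hub
print(today, partial, full)
```

The 33%/66% split is the rough rule of thumb stated in the webinar, not a measured benchmark; replace it with your own cluster-output MTTR data when you run the exercise.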
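For the more complex financial model the webinar mentions, ROI, net present value, and payback period, here is a small sketch. The cash flows and the 8% discount rate are hypothetical placeholders for illustration, not figures from the webinar.

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows; cashflows[0] is year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows):
    """First year in which cumulative cash flow turns non-negative."""
    total = 0
    for t, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return t
    return None  # never pays back within the horizon

# Hypothetical 5-year automation case: $1.5M up-front implementation
# cost, then $1M/year in operating-cost savings.
flows = [-1_500_000] + [1_000_000] * 5
print(round(npv(0.08, flows)))   # value today, discounted at 8%
print(payback_period(flows))     # years until break-even
```

A positive NPV and a short payback period are the headline numbers for the C-suite conversation; the 125,000–368,000 hours and $5M–$10M figures in the webinar came from this kind of roll-up.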

View original source

https://www.youtube.com/watch?v=wEssRRJ0SGM