Intro to Predictive Intelligence - Dec. 27, 2020 - Performance Analytics & Reporting Office Hours
Okay, I see everybody's just about ready, so let me welcome you to this office hours session. This session is around Performance Analytics, Reporting, and Predictive Intelligence, or a combination of the three. Good morning, good afternoon, and good evening to you all; we're glad you're able to join us. We do this session every two weeks, so for those of you who are new, welcome, and for those of you who have joined us before, welcome back.

Before we get started, I'd like to run you through a few logistical items for today's session. This session is for you: it's your opportunity to get some fresh ideas, gain a better understanding, and get some practical advice from some of the world's experts in the field. I already saw Rahul and Leonard jumping on the session, so get your questions ready to fire. The typical format for these sessions is that we kick off with a presentation, followed by Q&A. There may be cases where we can't solve a technical issue or get into the details of a question or an instance, so we may ask you to open a case on HI, or we may need to follow up offline so we can cover as many questions as possible during this session. Please ask questions via the Zoom Q&A panel if you can; it allows us to manage the incoming questions a little better than the normal chat, and you can open it up through the interface. If you really can't find it, just use the general chat. We may ask you some follow-up questions to clarify, so that we can ensure we're answering appropriately. We do have quite a few people on the line, so please keep yourself on mute unless you're asking a question.

My name is David van Housten and I will be your host today. I work for the Now Intelligence product management team, and with me today is Adam Stout, who will be giving us an in-depth overview of Predictive Intelligence. Following this we'll open up the floor for your questions, and as I said, we have two experts from the field directly with us, one in product and one in our pre-sales organization: Rahul and Leonard. So get your questions ready to fire; we've got these great experts helping us out today.

Please note this session is being recorded, so if you or your organization are not comfortable with that, please disconnect now. The sessions are posted to YouTube, and you can find links to the playlist on the page you registered on, as well as on the community home for Analytics, Intelligence and Reporting. Please note that the K20 labs in Now Learning are still available and still free, so take full advantage of these free labs to learn and help your organization. There's also a great article in the community on writing good questions; the link will be posted with the PDF after the session, so use the link to go to the article, or search for it. You'll get a free little lesson in how to better ask about something you're troubled with or can't figure out. It will help you give a better description and increase your chances of a good response. All right, that covers the basics. Over to Adam. Good morning, Adam.

Good afternoon, David, and thank you for the introduction. I'm going to take over sharing and we'll get started. Okay, here we go. Today we are talking about an overview of Predictive Intelligence. We're not going to go super in depth on how we implement everything, but we want to take the opportunity to share with you what Predictive Intelligence, what ServiceNow, can do to complement what you're doing with Performance Analytics and Reporting. We generally
focus on Performance Analytics and Reporting, but there is a whole world out there that I think we can take advantage of, and we want to make sure that you know what's there, so that you're not getting stuck trying to solve something with Performance Analytics when there's really a better option already available on the platform with Predictive Intelligence.

So, Predictive Intelligence is machine learning: it's the ServiceNow implementation of machine learning. One of the things we want to stress is that, just as Performance Analytics has analytics targeted to the end user, the process owner, and your executives, machine learning is for everybody in the business. For the end user, it helps us understand what they're looking for and get the right information to them. For the agents, it's very tactical relief: helping us get the right priority and the right group set as quickly as possible, and identifying things so they don't have to search manually. It's a huge help for them. And for service owners, just as we would with Performance Analytics, it helps us understand the overall trends, look at the data (the knowledge articles, the incidents) and take immediate action based on what Predictive Intelligence is recommending. So again, it is for everybody, just as Performance Analytics is used all throughout the business.

Now, the frameworks that we operate in. Whereas Performance Analytics counts or sums or averages whatever it is we want to see in the system, Predictive Intelligence uses a few machine learning frameworks. We'll talk through examples of each, but we have a classification framework, a similarity framework, a clustering framework, and a regression framework.
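To make two of these concrete before walking through the use cases: classification predicts a label from text, and regression predicts a number from text. Below is a minimal plain-Python sketch of the shape of those two problems. The incident data, the bag-of-words scoring, and the nearest-neighbor logic are all invented for illustration and have nothing to do with how ServiceNow actually implements these frameworks.

```python
# Toy sketch only: the real frameworks use far more sophisticated NLP,
# but the *shape* of each problem looks like this.
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector: lowercase token counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Invented history: short description -> (assignment group, hours to close).
history = [
    ("cannot send or receive email in outlook", ("Email", 4.0)),
    ("email bounces with smtp error",           ("Email", 6.0)),
    ("oracle database out of space",            ("Database", 10.0)),
    ("database connection refused",             ("Database", 8.0)),
    ("laptop screen flickering",                ("Hardware", 24.0)),
]

def classify(text):
    """Classification: predict a label (here, an assignment group) from text."""
    best = max(history, key=lambda h: cosine(bow(text), bow(h[0])))
    return best[1][0]

def regress(text, k=2):
    """Regression: predict a number (here, hours to close) from text,
    by averaging the k most similar historical incidents."""
    ranked = sorted(history, key=lambda h: cosine(bow(text), bow(h[0])), reverse=True)
    return sum(h[1][1] for h in ranked[:k]) / k

print(classify("my email is not working"))         # -> Email
print(regress("oracle database is out of space"))  # -> 9.0 (average of the two database incidents)
```

The point of the sketch is only the contrast: same input (a short description), but classification returns a category while regression returns an estimate.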
As we go through these, understand that it's very important to know what you're trying to do and which framework is the right one to use. If you try to solve the right problem with the wrong framework, you're not going to get the answer you're looking for. So let's talk through a couple of examples of how we use these.

Before we get there (and I may be preaching to the choir), just as Performance Analytics gives us the power of analytics in the platform, Predictive Intelligence gives us those same advantages for machine learning in the platform. It's a product, not a project: you can turn it on, use the frameworks we provide, and you're not starting from scratch every time in open space. Just as we have the content packs, this will help you apply machine learning in a very tactical way for your business, based on the best practices that we've found, rather than generic tools where you'd have to start from scratch. Also, just as Performance Analytics can reduce the dependency on your business intelligence team, here we can reduce the reliance on data scientists. That's not to say we're going to completely exclude them or never ask them questions, but the idea is that we can give you the power to answer your customers' needs: all the tools you need to meet the business need without having to get experts to do every little thing. And again, quick time to value: we want to get you up and running, with this applied in the right place, as soon as possible, without having to create a six-month project to implement machine learning in some unknown way. We have a lot of ways to get you up and running quickly.

Now here's something I want to stress on this next slide. In Performance Analytics you really have two options. We have content packs, which are great and help you get going, and you can configure them; but you can also build things from scratch. You can just start and say "I want to create this" and do whatever you want. In Predictive Intelligence you have those same two options, but to really get the most value, the quick time to value, what we want to look at is using the out-of-the-box applications. These are equivalent to content packs: plugins that we turn on, components and features of the applications that come out of the box. What I want to make sure you understand is that for Predictive Intelligence you really, really want to start with one of these scenarios. There is more to it than putting something on a dashboard the way we do in Performance Analytics. I know most of the people listening to this can build Performance Analytics by themselves from scratch: create your indicator source, create your breakdown sources and your indicators, run your jobs. There's more to it with Predictive Intelligence. Not only do we have multiple frameworks rather than the single paradigm PA has (so you have to know which framework to apply), but you also have to implement it. Instead of just putting it on a dashboard and looking at what happened, Predictive Intelligence has the added piece of implementation: whether you're putting it into the UI to show similar HR cases, or having it do the auto-categorization and auto-assignment, there is an implementation part to actually make these things happen on the fly. Don't wait; push that information forward to the agents, to the process owners, and to the end users.

There's lots and lots of content that applies this, and I think the most important thing we can take from this is: start from one of these. For the most part, if you implement one of these things, you're going to get a lot of value in your area. If you are going to modify what's there, I recommend that you start with one of these. If you are going to create something for a custom application or something outside this space (say I wanted to identify similar widgets, the way we look at similar cases or similar incidents), then certainly look at what's there before you build something else. But for my first project, or my first few projects, I'm going to want to make sure I'm looking at what's available that uses Predictive Intelligence and try to implement one of those things, because again, a lot of thought has already been put in there.

So what do we get? We're going to walk through some of these use cases. Optimize resources and reduce costs: with Performance Analytics we're monitoring the process; with Predictive Intelligence we're going to be able to actually influence the process as it goes, to get work to the right people. Not just see that we have a reassignment problem, but be able to do something about it and get work assigned to the right people. We're going to increase productivity: categorization, or assignment, is one of my favorites. Instead of getting a ticket, having somebody look at it, and then figuring out where it needs to go, Predictive Intelligence can just route it for us. Maybe I'm saving a minute where somebody would otherwise have to pull up an incident and figure out what's going on; instead, we can just have it sent to the right team. If I'm saving a minute per incident and I have 10,000 incidents per month, that's about 167 hours a month, a pretty significant savings. And we can improve business efficiency all along the way. Assignment is the first level, but all along the way we can look for ways to increase efficiency: where do I have duplicate cases and a problem coming up? Where do I have knowledge articles that are really similar and should be combined? Predictive Intelligence helps us find that without all the manual scrutinizing. And at the end of the day we're going to get improved customer satisfaction. I've never had a customer complain because we resolved their incident too quickly, assuming we did it right. So we want to make sure this helps us drive customer satisfaction, efficiency, and productivity, and reduce our costs.

As an example of this, the University of Maryland was able to decrease incidents per month because they could optimize their knowledge base and keep a good catalog: a very significant reduction in incidents per month by implementing Predictive Intelligence. Correct routing and assignments: nothing is perfect, humans aren't perfect, but by quickly and automatically getting work to the right team most of the time, we can reduce those reassignments. And we can close incidents faster; again, I've never had a customer complain because we closed their incident too fast. Time is money. Getting those incidents closed isn't just less work for me: my customers are happier and the business runs better. Very significant savings and effects from implementing Predictive Intelligence.

So let's walk through a couple of these frameworks: classification, similarity, clustering, and regression. When we do get into it, we'll see this, and I believe this is in Paris, where
we'll start to see the Predictive Intelligence Workbench. For those of us who are working on releases before Paris, which is certainly a good number of us today, Predictive Intelligence is a newer technology, and you're going to see a lot of improvements that make it easier to configure. So as you go in today, if you're running Orlando, there's more to configure, and that's where you really do want to stay with the out-of-the-box solutions that we can adjust. Just as with Performance Analytics, we might adjust and tweak to get exactly what we want, but they're a great starting place. As we move into Paris, and then on to Quebec, you'll see more tools to help you configure things, like the Predictive Intelligence Workbench, which should guide us through setting these things up and allow us to tune them and test them. It's not just "flip a switch and it works," because we all have slightly different businesses. We do want to make sure we're starting with the templates, logically checking whether the configuration is correct and doing what we expect, then testing to verify it does, and then implementing it to automatically help us out.

So, for classification: what's a real use case? As an incident comes in, I have my short description, and we can classify based off that short description. Looking at this text, we understand that this is about email, it's important, and the user needs it done by today. By looking at those words and classifying based on what's happening in our system, and the words our organization uses, we're able to route it to the right team. With auto-assignment using the classification framework, we can reduce the error rates, because the incident isn't getting bounced around all the time, and we can make sure the right priority and the right urgency get set, so we understand what needs to happen. All somebody has to do is tell me "I'm not getting my email, I need a fix today," and from that, just as a human can, the system gets the priority and what this is about. Again, nothing is perfect, but it will help us route things appropriately so people don't have to go through setting every field.

Let's talk about similarity. Similarity is one of the most interesting ones to me. We're looking at similar incidents, similar cases, similar alerts, and I think there are a couple of questions in the Q&A about this as well. One use case is proposing major incidents: looking at all the incidents that are coming in and suggesting that maybe we need to create a major incident, because I'm seeing a lot of things that look the same. Hopefully they're in the same category, but there's more to it than just the category: similar incidents actually looks at the text and says "these look similar to me." That's really hard to see one incident at a time. By applying Predictive Intelligence, we can take advantage of the computer's time to help point these things out to us; otherwise it would be difficult to see, or we'd have to have a process owner or a service owner reviewing all these incidents to say "hey, these really are all the same." This is a huge time savings and a huge benefit to the company: finding the similarities sooner.

Now on to clustering. With clustering, which like similarity looks a lot at the text, we're able to group similar records together to optimize efficiency. One of the great use cases for this, in one of the pre-built solutions, is prioritizing KB article creation: where am I missing articles?
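Conceptually, the similarity and clustering frameworks rest on the same idea: score how alike two pieces of text are, then either surface the closest matches (similarity) or group records whose scores pass a threshold (clustering). Here's a deliberately naive plain-Python sketch of that idea. The bag-of-words scoring, the 0.4 threshold, and the incident texts are all invented for illustration and are nothing like ServiceNow's actual algorithms.

```python
# Toy sketch only; assumes nothing about ServiceNow's real implementation.
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector: lowercase token counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

incidents = [
    "cannot access outlook email",
    "outlook email not opening",
    "need a new email account created",
    "vpn connection drops every few minutes",
    "vpn connection drops constantly",
]

def similar_to(text, threshold=0.4):
    """Similarity framework: surface the records most like this one."""
    v = bow(text)
    return [i for i in incidents if i != text and cosine(v, bow(i)) >= threshold]

def cluster(records, threshold=0.4):
    """Clustering framework: greedily group records whose text looks alike."""
    clusters = []
    for rec in records:
        for c in clusters:
            if cosine(bow(rec), bow(c[0])) >= threshold:
                c.append(rec)
                break
        else:
            clusters.append([rec])
    return clusters

print(similar_to("cannot access outlook email"))  # -> ['outlook email not opening']
print(cluster(incidents))  # -> 3 groups: the outlook issues, the account request, the vpn drops
```

Note how "need a new email account created" does not group with the Outlook issues even though it shares the word "email": looking at the whole text gets deeper than a shared category or keyword, which is exactly the point of these frameworks.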
Look at the incidents: where do my incidents come from, what do I have in my knowledge base, and where do I have gaps? It's really amazing. It takes a lot of work for a human to do; a human can't read the 10,000 incidents that come in and the 5,000 KB articles, but the machine can, and it can start saying "we're getting a lot of incidents here, and there's really nothing in the knowledge base that matches," and drive KB creation. So we really get to expand beyond just looking at what is and what's going on, to what we can do and what we need to do. A great, powerful use case, and again, this comes with one of the pre-built solutions, so don't try to build it yourself. Look at the list of everything that's out there and see where your organization can take advantage of it. And following on from that, there's the demand creation side: how do I get increased deflections, and what are those articles? We can use Predictive Intelligence and machine learning to help us understand this vast amount of data and make things directly better for our users.

And then regression, the last framework that we have, which I believe was also introduced in Paris (Rahul can correct me). So you may not see this one; the other three I think you'll see today, but if you're not on Paris yet, you won't see regression. ("That's correct, Adam.") Thank you. With regression, similarly, we have the description, the text that's coming in, and we're able to predict or estimate a number. The common use case for this would be: how long is it going to take to close my incident? Now, we can do something similar with Performance Analytics, asking on average how long it took, but then we're restricted to the category and the priority, two levels of breakdown. That's useful as well, and it's an interesting comparison: what makes more sense for you, and what gives you a better number? "This group takes this long; this group in this category takes this long." Regression, though, allows us to go straight to the text and understand: "my Oracle database is out of space; how long should it take to solve that?" So regression gives us the framework to predict a number.

And Predictive Intelligence is integrated throughout the stack. It's not just a point solution where I get a number; the advantage of using Predictive Intelligence is that it's part of ServiceNow and can be used in ServiceNow. In the example here, in Agent Workspace, we have the related search results: we can recommend the related cases (similarity for cases) and similar knowledge articles, embedded into the experience of the end user, the process owner, the fulfiller. Predictive Intelligence is not about going someplace else; it's about taking this information to the user directly so they can take advantage of it. They don't even need to know they're taking advantage of it; it just happens. It's auto-routed for them, auto-assigned for them, auto-prioritized for them. Similar cases isn't somebody manually tagging "this seems like the same thing," or things simply sharing a category; by looking at the text, the content, it says "we think these are similar, do you want to take a look at these as well?" Super powerful to have right at your fingertips.

And a follow-up example to that, again in Agent Workspace, is proposing a major case. There are a lot of these coming in, and if they're assigned around the world to different people, I
may not see that these are similar. Just because I get 20 cases for email in one category doesn't mean they're the same, but if we start looking at the text and analyzing it, we can start to see that these really are the same. "I can't access Outlook" is the same issue, as opposed to "I need a new email account" or "I need you to change this email alias." We can get deeper: Predictive Intelligence allows us to get deeper than just categories and trends. And this isn't about daily trends or monthly trends either. Predictive Intelligence is not there to help me understand whether I have a big process problem, or whether I need to staff differently. Tactically, it's helping me understand "we need to create this major incident, we have a problem right now; get this to the right person right away, set the right priority." So it's a great balance to what Performance Analytics has given us historically: Predictive Intelligence allows us to get more tactical and implement what the machine is telling us, what we can learn, to solve our problems.

And skills prediction. Another one, as we look through this, is: what do I actually need to solve this? Generally we're looking at categories, or somebody has to read it, and when somebody reads it they go "oh, I think Susan can help us with this, she knows what's going on." With Predictive Intelligence we can push that down and actually have the system recommend "here are the skills we need," and if we're curating who has which skills, we can start routing work that way. Take advantage of the data we have; that's a general theme with ServiceNow: make sure we have the data, and take advantage of that data.

And this does connect into Performance Analytics. Performance Analytics is giving us those big trends and an understanding of what's going on, and that doesn't go away. We're getting more tools with Predictive Intelligence, not replacement tools, and we're still going to use Performance Analytics to monitor how good our predictions are. No matter what happens, if we're predicting, we're not going to be at 100 percent, so we want to monitor: are our predictions getting better? Do we have our data stored the right way? If not, we want to go look at how to tune the model to give us better predictions. So it's still important to use Performance Analytics to sit on top of and complement Predictive Intelligence, so that we keep our eyes on the big picture: making sure the cases really are similar, that we're routing to the right team, and that people aren't having to change the priority.

All right, so we've spent some time going over what's in Predictive Intelligence, and hopefully this has been helpful for understanding the tools that are out there; we're trying to open your eyes. But then the next question is: how do we do it? This looks great, what do I do about it? It's too bad we're not in a classroom, because I would throw out some candy. What's the one thing I'm going to say about where we get some training? We go to Now Learning. Now Learning has great free on-demand content, a lot of it. There is a Predictive Intelligence Fundamentals class, similar to Performance Analytics Fundamentals, which I think a lot of us have taken. This one's not in person; it's three hours, and I believe the Paris version of it is coming out shortly, with some great additions from Rahul. But it's there now if you want to get started, and shortly there'll be a Paris version, which you can wait for, or take the current one and go through it again. There's also a Predictive Intelligence Implementer course if somebody's really trying to go from scratch. Fundamentals and Implementer are part of that path, as is ServiceNow Fundamentals; it's a bit more expanded, certainly getting into implementing on ServiceNow. These are two examples of what we have in Now Learning.

There are also quite a few labs from Knowledge, a lot of material about Predictive Intelligence and how you apply it in specific situations. So it's not just generic Predictive Intelligence, which is what this session covers; you'll see "how do I apply it for Customer Service Management," "how do I apply it with ITSM." So take a look at what's in Now Learning: again, some great content and great labs from Knowledge to give you hands-on experience. In some of this we learn theory, and when we get to those labs, they give you hands-on experience to get things running.

A question came in specifically about this: is the Fundamentals learning course fee-based? The in-person classes are fee-based, but I don't think we have any live classes for this today, and everything that's self-paced doesn't cost any money. I believe all, or at least most, of the classes for Predictive Intelligence are free and self-paced; take them when you want to take them. All the Knowledge labs, as David said earlier, are free as well. I don't think there is a class you have to pay for in Predictive Intelligence. Rahul or Leonard can correct me, but I believe everything you're going to want is ready to go right now; you can hop onto Now Learning and take it, along with the documentation. ("Nothing to contradict you on that, Adam; I think that's right.") Okay. The exception, or not really an exception, would be ServiceNow Fundamentals, which is key to understanding anything with ServiceNow. I recommend everybody take ServiceNow Fundamentals if you haven't taken it. There is an instructor-led version of that, which costs money, but earlier this year a self-paced version was also introduced. I like instructor-led, but if you want to take it self-paced, especially in today's world, you can. So I believe everything you need for the Predictive Intelligence Implementer path is free.

Again, documentation: there is great documentation in there, and you don't want to skip it. There are a lot of concepts and things you can drill into, and it'll explain what's going on, so if you're looking at this, go look at the documentation. An important point about the documentation for Predictive Intelligence is that there is a lot of development going on, so if you are running Orlando, I'd recommend you look at both the Orlando documentation and the Paris documentation. Our documentation does get better every release, as we make improvements, rewrite things, and refine our answers as you ask questions, so I generally like to look at the latest and greatest documentation. However, there are features, like regression or the Workbench, that are in Paris and not in Orlando, so if you are running an older version, you do want to go back and compare. You can sit in the Orlando documentation if you'd like, but there might be a better explanation in the newer one, so it's a bit of a challenge. Being on the latest and greatest does make life easier for you, and I expect that to be true for the foreseeable future because of the massive amount of resources and effort we're putting into improving Predictive Intelligence for you.

And the last one that I will always plug is the community. If you have a Predictive Intelligence question, the same community you've been using is the right one, the same forum: it all goes into the Analytics, Intelligence and Reporting
forum so if you have performance analytics questions you have reporting questions you have dashboard questions you have predictive intelligence questions go ahead and ask them there and we'll we'll get you we'll do everything we can to get you the answers um there is a no if you have virtual agent questions and nlu questions uh there is a separate form for virtual agent um if you ask them here you might get the answer but you also might get redirected to someplace else uh so nlu we normally answer in the virtual agent forum um but the predictive intelligence content that we talked about today this is the right form to ask those questions there's there's not a separate one all right so let's get to some questions i know some of them have already been come in and answered um and leonard you already got the question leonard you wanna you wanna talk about the question that sure is coming in yeah uh thanks adam yeah jose asked some really great questions here so uh does predictive analytics also leverage archive data for a larger data set you know it it might be easier and adam is it okay to show my screen real quick i grab the screen please thank you all right let me get into um all right so you can access predictive intelligence and jose this might answer your question when we just sort of look at it but if you type in predictive intelligence and this is paris you'll see the different algorithms that adam just went through classification similarity clustering regression so these are the different frameworks that adam went through the solution definitions are the configuration of the machine learning model okay so when we say solution definition that's what it is and then the solutions are the trained model so with clustering we use that to identify uh patterns so for example let me see here if i can find one like this is an example of a cluster run against uh the i believe this one's incident so this is against the short description and incident and so this is showing 
you clusters of different assignment groups: network, hardware, software, inquiry. Then you can break it down and look at, say, network: what are the clusters of issues? Here, within the network cluster, we see an RSA token issue, and when we drill into that, we can see the different incidents underneath it.

So going back to Jose's question on clustering: the way this works is really simple. We mentioned earlier that this is meant to put the power of machine learning in the hands of platform users, so you don't have to know data science tools like Python, TensorFlow, or R, which are some of the most popular ones out there. You just have to fill out this form. If I wanted to do clustering to identify patterns, I'd name it something like "incident short description cluster." You select a word corpus, which is necessary for us to convert those short descriptions into numeric vectors so we can run them through the algorithm. Adam mentioned some of the courses that cover this in much more detail, but the word corpus is essentially what's needed to convert these short descriptions, which are text, into numeric vectors that we can plug into our machine learning algorithm.

Essentially, what we train against is any table in the ServiceNow platform. For incidents, for example, I would choose the incident table, which means I'm going to train against that table. So Jose, going back to your question about archived data: I don't believe we can hit archive data; I don't think it's accessible as a table like this. But let me know, and feel free to come off mute if you've got more questions on that.

So, I'm creating a cluster on incident short descriptions.
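Leonard's point about the word corpus (text has to become numeric vectors before an algorithm can use it) can be pictured with a small sketch. This is an illustrative scikit-learn example with made-up descriptions, not ServiceNow's internal implementation:

```python
# Illustrative only: the role a word corpus plays, converting short
# descriptions (text) into numeric vectors a clustering algorithm can use.
# This uses scikit-learn's TF-IDF; the platform's internals may differ.
from sklearn.feature_extraction.text import TfidfVectorizer

short_descriptions = [
    "RSA token not working",
    "cannot connect to VPN with RSA token",
    "email not syncing on laptop",
]

vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(short_descriptions)

# Each row is now a numeric vector: one row per incident,
# one column per distinct term in the corpus.
print(vectors.shape)
```

Once the text is in this numeric form, any standard clustering or classification algorithm can consume it.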
I've got my table, and then I select the fields I want to cluster on. I could cluster on short description, on assignment group, on any of these items, but in this case I just want to cluster on short description. Then I select the data I want to look at; for example, I could filter on created-on for a certain time frame. Most of the time your incident or event table is going to have millions of records in it, and with machine learning it's more important that you get the right data than a ton of data; quantity is not as important as the right data. So we'll say I'm going to grab the data from the last three months. I don't have that much data in this particular instance, but that's how you would grab it.

There was a question about processing languages: these are the ones we support right now in Paris, and the list will continue to grow in Quebec and beyond. Then you have stop words, which are words like "the," contractions, and prepositions. We already have a default stop-words list selected here so we can filter out the noise; we don't want to cluster on "the" or "and," so it eliminates those. Then there's an update frequency and a training frequency, which control how often the solution updates and retrains. At a very high level, all the machine learning model configuration is typically two to three steps, and then you just hit submit and train. I won't do that here, because I've already got a ton of these clusters.

When you're talking about what kind of data it goes after: it's any table or database view that you can point to. You pick the fields from that database view or
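The two-to-three-step configuration just described (pick a field, filter to a recent window, apply stop words, then train) maps loosely onto this scikit-learn sketch. The field names, dates, sample records, and cluster count are illustrative assumptions, not product behavior:

```python
# A rough analogue (in scikit-learn, not ServiceNow) of the clustering
# solution definition described above: pick a field, filter to a recent
# window, drop stop words, vectorize, and cluster.
from datetime import datetime, timedelta
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

incidents = [
    {"short_description": "rsa token expired", "created_on": datetime(2020, 12, 1)},
    {"short_description": "rsa token not accepted", "created_on": datetime(2020, 12, 5)},
    {"short_description": "outlook will not open", "created_on": datetime(2020, 11, 20)},
    {"short_description": "outlook crashes on start", "created_on": datetime(2020, 12, 10)},
    {"short_description": "ancient ticket", "created_on": datetime(2018, 1, 1)},
]

# Filter condition: only records created in the last three months.
cutoff = datetime(2020, 12, 27) - timedelta(days=90)
recent = [i["short_description"] for i in incidents if i["created_on"] >= cutoff]

# Stop words remove noise like "the" and "and" before vectorizing.
vectors = TfidfVectorizer(stop_words="english").fit_transform(recent)

# Train: here, two clusters (token issues vs. Outlook issues).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)
```

The old "ancient ticket" record is filtered out by the date condition, which echoes the point made later in the session about older data being less valuable.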
that table, move them over, and use them, in this example, to cluster.

Go ahead, Adam. So, on archive tables: archive tables are tables, and you could technically do this on them, but I think generally we wouldn't want to, for two reasons, maybe three. One, this is all about action. If we're looking at incident and using the short description, we want to learn how we route incidents today. With your archive, you run into data that's a year or two old; that's why we archived it, so it's not as relevant. Technically your archive table will probably show up here, but when we archive data I don't know whether we flatten out the references or not, so while it would look the same, I don't think you'd be able to use that model anywhere. So I probably wouldn't do it even if an archive table does show up here. That's the old-data part.

The other part is the same issue, or the same potential issue, we have with performance analytics: bad data leads to bad decisions. Just like with a performance analytics implementation, the first thing we have to do is data cleanup. Do we have the right categories? What's going on? You really need to look at that for performance analytics too; anybody else, feel free to chime in. If I just have garbage categories, then I'm going to predict garbage categories; it is quite literally garbage in, garbage out. If I go through my assignment groups and I never actually assign tickets to the right assignment groups, never reassign them to the correct group, then the machine can't learn what "correct" is. So it's good to look through these things, but if you have bad assignment groups and bad data hygiene, particularly
historically bad data hygiene, then training your model on that bad history is going to give you bad results going forward. If I do a cleanup exercise to get the right assignment groups, and I really put some effort in to get good data hygiene, then just as my performance analytics information will be better, the same will be true for machine learning; training on bad data just continues the same results.

Adam, you bring up a great point: performance analytics and predictive intelligence work hand in hand. We typically use performance analytics up front to explore, as Adam was saying, the data and the quality of the data. For example, for the classification algorithm, which is used to drive intelligent routing (route me down the most efficient path to resolution), we'd want to run performance analytics and understand which assignment groups are being hit the most, which categories are being hit the most, and whether I have high reassignment counts, and maybe do some text analytics so I can see the makeup of what's in my short descriptions. We're always using these two things in conjunction: performance analytics gives us a great way of figuring out where we need to focus the machine learning to find a pattern that isn't obvious to us through analytics.

Let me see if there are any other questions. A quick question to clarify: do we support clustering on journal fields, specifically notes fields like work notes and customer comments? Is that supported today? No, not in Paris, unless Rahul and David know if it's coming beyond Paris, but journal fields, work notes, those kinds of things aren't supported right now, not to my knowledge.
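The kind of up-front exploration described here (top assignment groups, reassignment counts) can be sketched in plain Python. The field names and sample records are hypothetical, standing in for what a performance analytics indicator would surface:

```python
# Sketch of pre-ML data exploration: which assignment groups are hit
# most, and how often tickets get reassigned. Illustrative data only.
from collections import Counter

incidents = [
    {"assignment_group": "Network", "reassignment_count": 0},
    {"assignment_group": "Network", "reassignment_count": 3},
    {"assignment_group": "Hardware", "reassignment_count": 1},
    {"assignment_group": "Network", "reassignment_count": 0},
]

# Top assignment groups: where is the routing volume going?
top_groups = Counter(i["assignment_group"] for i in incidents)

# High average reassignment counts suggest the historical routing
# labels are noisy, i.e. poor training data for classification.
avg_reassign = sum(i["reassignment_count"] for i in incidents) / len(incidents)

print(top_groups.most_common(2))
print(avg_reassign)
```

If a check like this shows heavy reassignment, the cleanup exercise described above should happen before any model is trained.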
call knows it's happening, then I'm going to go with "it's not happening right now."

With that, it does come back to data hygiene. If the initial ticket information is not good, if the description people type in is not good, one of the things I might look at is adding a restated summary field on the form, if that's a problem I have. I probably wouldn't rewrite what the customer wrote; I'd make sure to keep that. But internally, if we know it's no good, I might have my agent put in the real description: what was the real problem? Just as we have resolution notes, maybe I'd have a summary. Ideally I don't want to have to do that, but it might be something to look at: the default would just copy those fields together, but it allows the agent to restate the issue more clearly than the original user put it. There's certainly a balancing point in there. And journal fields just accumulate a lot of data; one of the issues we might have with journal fields is that you'd end up with a lot of data that's the same, the overall incident would get a little skewed, and it would be hard to see the signal for the noise.

We have a few more minutes, and another question came in about clustering. Go ahead. Yep, I see it, from John Adams. I've got the configuration up here. John, there is a job that runs; you can see here on the configuration that you set the update frequency as well as the training frequency. There's also an API, so you could run this from a script if you wanted more complex scheduling, but yes, it's based on this update frequency and this training frequency. Hopefully that gives you some insight. And on how often: why don't I train it every day? Is
there a way to determine what an appropriate training frequency is? Yeah, another good question. We run performance analytics reports to look at the effectiveness of the machine learning model: we leverage performance analytics to tell us the precision and the coverage of the different machine learning models. You could train it every day, but it's not necessarily necessary. If the model is predicting at high precision, you probably want to stick with that model, because it's working for you. What's nice is that we provide out-of-the-box prediction results dashboards, and if the model's precision starts dipping, then we may want to retrain it, because your process might have changed, or maybe the model has encountered data it wasn't trained on. The majority of the models out of the box, the ones that do things like routing a case or an incident, or finding similar incidents that might help you solve the current open incident you're working on, are what we call supervised machine learning models, meaning they learn from data from the past. As long as that data from the past is an accurate representation of what you have in production, your model should be fine. But if you start seeing the precision or the recall dip, some of these metrics, that's a good indicator you should retrain.

Great. So it's very much tied to my process: if my process didn't change and my predictions are good, I don't need to change anything. Something from two years ago might not be that good, but something from yesterday is generally going to be fine.
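The retrain-when-precision-dips idea can be expressed as a tiny heuristic. The 0.80 baseline and seven-day window here are illustrative assumptions, not defaults from the prediction results dashboards:

```python
# Hedged sketch: decide whether to retrain based on a dip in daily
# prediction precision, the signal the out-of-the-box dashboards surface.
# The threshold and window are illustrative choices, not product defaults.
def should_retrain(daily_precision, baseline=0.80, window=7):
    """Retrain if average precision over the last `window` days
    falls below the baseline the model was accepted at."""
    recent = daily_precision[-window:]
    return sum(recent) / len(recent) < baseline

# A declining precision history: the recent average has dipped below 0.80.
history = [0.85, 0.84, 0.86, 0.83, 0.78, 0.74, 0.71, 0.70, 0.69, 0.68]
print(should_retrain(history))  # True
```

As the session notes, a steady dip like this usually means the process changed or new kinds of data arrived that the model never saw in training.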
I think this relates to Jose's question about historical archiving as well. If my process hasn't changed in a year, then in theory, whether I train on a whole year's worth of data, on four months, on last month's data, or on a month from six months ago, I get the same results. To me this is very similar to performance analytics, where we look at the data month by month, week by week, day by day; we don't look at cumulative numbers. There are very few to no indicators that are all-time, or even annual; I don't see many annual indicators other than something a CIO wants to see, because I don't want to treat all data as equal. I don't think all data is equal: the older it is, the less valuable it is to me.

So if my process has not changed, then as long as I have enough data to train on (I don't know if we've answered the question about how much data you need, so let's leave "enough" aside for now), having more doesn't add any more value. If I get the same results every time, it doesn't matter whether I have 10,000 or 10 million records; the model comes out the same, unless there are 10 million variations. The other part is: if my process has changed and gotten better, which we hope it has (we hope we assign to better assignment groups now than we did at this time last year), then if you train on all the old data, the predictions will actually not get any better; they'll actually be worse. So it's important to look at your process, get the good model, and go forward. And if your model was good six months ago and is worse now, you might actually want to train on what you did six months ago; you could do that. That's a little odd to me, but I could see it, I guess. Again, you need the right amount of data; I don't want to train on just a day's worth of data. But
again, assuming I did better last month than I did a year ago, you don't want to look at a year ago; you want to train on what's going on now.

Okay. Can anybody talk about synonyms, I think in the context of clustering? Sure, I can do that. My understanding today is that synonyms are NLU functionality and not something we use within clustering. Just as you suggest, you could resolve "PC," "laptop," and "computer" all to "computer," but if you're clustering and actually looking at technical issues, Mac and PC should probably not be clustered together; you might want to see two separate groups. So I think there's a fine line between when to use synonyms and when not to. Longer term, being able to use NLU in a predictive intelligence pipeline might actually make sense for generating a model, but that's not something that's done today. Great, thank you; hopefully that answers your question, John.

And did we go over what the right amount of data is, or at least the minimums? I did answer it in the Q&A, but I can talk about it as well. The real answer is that the minimum number of records you need is really going to vary depending on the solution type, but a good rule of thumb is 10,000 to 30,000 records for optimal results. With less, you may be able to create a functional model, but you're also going to need more tuning and more advanced configuration to see value from it. I know it's not an exact answer, but hopefully it addresses the question of how much data to use.

And there are properties: I believe the property is 10,000 for most of the PI solutions. So 10 records is not enough; it's just not going to be enough. 10,000 is pretty reasonable, and if you had 9,950 it would probably be okay; the property will stop you, but that can be changed.
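Picking up the synonyms point from a moment ago, here is a sketch of resolving "PC," "laptop," and "computer" to one token before clustering. The mapping is hypothetical, and as the discussion notes, applying it blindly can merge issues you actually want to see as separate clusters:

```python
# Sketch of synonym resolution as discussed: collapsing pc/laptop/computer
# to a single token before clustering. In the platform this is NLU
# functionality, not part of clustering today; this only illustrates
# the trade-off. The mapping itself is a made-up example.
SYNONYMS = {"pc": "computer", "laptop": "computer"}

def normalize(text):
    """Lowercase the text and replace known synonyms with one canonical word."""
    return " ".join(SYNONYMS.get(w, w) for w in text.lower().split())

print(normalize("My PC will not boot"))
print(normalize("laptop screen flickers"))
```

Both inputs now share the token "computer," so they would tend to cluster together, which is exactly the behavior you may or may not want depending on whether the distinction matters for your technical issues.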
And I think the upper limit, depending on the solution, is 100,000 or 300,000 records; I believe that limit is due to memory, not performance. Right. And the thing is, if you have 100,000 incidents, or a million incidents, there are hopefully not a million different assignment groups; there aren't even a hundred thousand different assignment groups, so you end up getting the same answer.

Actually, this brings up a great point. Even if I have, say, 100,000 records, if 90,000 of them are from one class and I have 30 classes I'm trying to categorize into, that's not going to be a good data set. There's a really important distinction between enough data and enough good data, so I'll just leave that as a thought. And "classes" here would be whatever you're predicting: if I'm routing, that's the assignment group; if I'm categorizing, that's the category. Correct, correct.

Can you bring up an example, and maybe Leonard if you have other ones too, of data we would often exclude? We picked dates, but what other types of data would we look to exclude from our prediction? Yep, sure. If you're trying to use machine learning, for example, to identify automation opportunities for human-generated events ("I need help resetting my password," "I can't open Microsoft Outlook"), then when you're doing the configuration (let me share my screen again) and defining your solution, you'll want to use this filter to exclude machine-generated events, because those aren't things you'd try to automate through a self-service portal or a virtual agent. Now, I know there were a lot of questions on ITOM.
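The "enough data versus enough good data" point can be sketched as a quick class-balance check. The 50% threshold here is an arbitrary illustrative choice, not a product rule:

```python
# Sketch of the point above: even 100,000 records are a poor training
# set if one class dominates. The threshold is an illustrative choice.
from collections import Counter

def is_balanced_enough(labels, max_share=0.5):
    """Flag a data set where a single class dominates the training data."""
    counts = Counter(labels)
    top_share = counts.most_common(1)[0][1] / len(labels)
    return top_share <= max_share

# 90,000 of 100,000 records in one class: plenty of data, but skewed.
skewed = ["service_desk"] * 90_000 + ["network"] * 10_000
print(is_balanced_enough(skewed))  # False
```

Here "labels" are the classes being predicted, the assignment group for routing or the category for categorization, as clarified in the exchange above.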
If you're looking for ITOM kinds of events and trying to find patterns there, you want to filter out all the human events and just focus on the event categories, so you can do event grouping or event aggregation. So you need to figure out what you're trying to do with the machine learning.

I saw a question on Loom. I'd have to get back to you on the Loom integration, unless the team knows. I have worked with the Loom team; they were using the machine learning Adam went through at the beginning to help identify areas where Loom and ITOM would work together. I don't know if there's anything in Paris; I think a lot of that work is happening for Quebec. Part of every acquisition we've done is that when the technology comes in, we re-platform it. ServiceNow has acquired a good number of companies, especially in this space, and the idea is that you shouldn't have to learn anything else: when you see it, it'll just be part of ServiceNow. It actually becomes really difficult, even for us as users, to know what was Loom and what was not Loom, or what was DXC; it doesn't matter at the end of the day. When companies get acquired, we merge the technology in so you don't have to learn something new; you just get additional capabilities. Sometimes that appears as a new checkbox somewhere, because we do it all behind the scenes; sometimes it's new functionality, a new module, or a new framework behind the scenes. It's not that we don't want to talk about it; it's that we've put such a focus on merging and blending it in that I think we're successful when you can't tell: it's just there. And even if it specialized in ITOM originally, we're going to try to reuse that technology every place we can. Thanks, Adam; I think that's a great assessment of that case.

Now, jumping to a question about custom
applications, and whether there's benefit in training on a weekly basis. Let me restate what I understand the question to mean: if I'm going to define my own custom solution, when can I benefit from retraining it, and when do I train it to begin with? Earlier we talked about the minimum data you need, so based on your use case, let's say you have 30,000 rows of data: you can train your classification model. How frequently should you retrain it? One way of knowing, which Adam talked about, is if you see prediction precision fall over time; Adam or Leonard showed a screen with that graph. Yes, right here: the daily prediction precision. If you see that fall, that's probably a good sign you should retrain. Another way of thinking about when to retrain is when you set up auto-retraining: if you know how many incidents you're getting in on a weekly or monthly basis, you can say "my old training data set is 50% replaced on a monthly basis," and then maybe you want to retrain every month. That can be a good indicator of a preemptive time to retrain. Hopefully that clarifies the question; I'll stop there.

All right, we're nearing the end of the session. I kind of lost track while answering questions, but we did answer the folks on the synonyms, so I think we're good. Let me just wrap up; there is one heat-mapping question that Leonard was going to answer live, and then David will wrap up. All right, on the heat mapping: Jose, I know you saw that. Again, that's going against journal fields, so we'd have to let you know in Quebec; in Paris, as we discussed with the input types, we don't support journal fields or workflow variable fields yet.
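The 50%-replacement heuristic for preemptive retraining can be sketched as a small calculation; the training-set size, arrival rate, and threshold are illustrative numbers, with only the 50% rule of thumb coming from the session:

```python
# Sketch of the preemptive-retraining heuristic described above: retrain
# once roughly half of the original training set has been replaced by
# newer records. The 50% threshold is the session's rule of thumb;
# the concrete numbers below are made up.
def months_until_retrain(training_set_size, new_records_per_month,
                         replacement_threshold=0.5):
    """Count the months until new records replace the given share
    of the original training data."""
    months = 0
    replaced = 0
    while replaced < training_set_size * replacement_threshold:
        replaced += new_records_per_month
        months += 1
    return months

# 30,000-row training set, 7,500 new incidents a month:
print(months_until_retrain(30_000, 7_500))  # 2
```

Used alongside the precision dashboards, a schedule like this lets you retrain on a cadence that tracks how fast your data actually turns over, rather than on an arbitrary weekly timer.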
So we would have to have a roadmap discussion and update you there. Okay. Thank you very much, Rahul, and thank you very much, Leonard, for your valuable answers to the audience. Let me just make sure we cover the wrap-up. All these office hours sessions are recorded, as I mentioned earlier, and you can find them here, along with what's upcoming over the next couple of weeks. The next session is about data collection best practices, so that's more performance analytics related, and we'll have someone diving into the best practices for collecting the data. I already saw one question, or at least a reference to an open problem; maybe that one can be answered during that session. Thomas, we'll take a note; thank you for asking that question, and we'll address it in two weeks. Sounds like a good one.

We still need your help: as always, we're searching for topics to cover, so chime in. There's an article out there where you can comment on what questions you'd like to see answered or what topics you'd like to see covered. So until next time: don't be shy, use the community for questions, make sure you catch up on any of the previous office hours topics, go submit and vote on ideas in the Idea Portal, and there's still free training from all of the Knowledge 2020 labs, so while they're out there, make sure you get yourself trained. Thank you so much. Thank you, Adam; thank you, Leonard; and thank you, Rahul.
https://www.youtube.com/watch?v=n04U2GSXLs0