
Modern Change Launch & Learn Session 3 - Dynamic Risk

Import · Aug 05, 2024 · video

Welcome, everyone, to our third session of the Modern Change Launch and Learn series. Today we're covering dynamic risk. We have Daniel back again, and we have some more folks, who he'll introduce in a little bit, to do some demos. So let's jump in.

Real quick before we go: we have the Safe Harbor again. As we've said before, there may be some forward-looking statements in this presentation. They're based on our best understanding at this point in time, so please don't take anything as set in stone, as it could change.

Some familiar slides for those who've been on the webinars: we're looking at how change can transform, and why change needs to transform. So why now? People have been talking about change and change management for a long time. I've been working in the field for about 15 years, and there's been lots of talk about how change management works and about making it better. But what we're seeing now is a drive from organizations to make change management happen faster, to handle a higher volume of change, with more emphasis than ever on the stability of the organization through change management, and on the ability to control governance and compliance (things like security patching) by making changes faster and more efficiently. There are new ways of working coming along, things like DevOps, which mean that changes are normally smaller and need to happen more rapidly, and that means more changes: again, velocity and volume. If we don't make these changes at the velocity these teams require, changes get stacked up, and we can actually end up affecting stability negatively because we can't move fast enough. So the old paradigm, which was valid at the time, that changes need to be slow moving so we can do a lot of assurance around them because they're quite big objects we're putting in, has changed. We need to be able to automate a lot of types of change, but we also need to be able to work with our more legacy estate, so that we can continue to do normal, standard ITIL changes while having a way of working with these new, federated change processes: site reliability engineering and DevOps being a couple of examples, but also things like cloud infrastructure.

So a model-based approach is what we're looking at for change management. We've covered that in the previous webinars; it's a way of pushing the envelope on velocity, volume, stability and compliance, pushing in all directions of that envelope at once. We covered models and approval policies in the previous webinars, and we did a webinar right at the start where we talked about the overall concepts we'd cover throughout this series. But the topic of today is risk management, which is a big part of change.

So, one view of the change adoption journey. We talked before about how we replace our legacy workflows with Flow, reduce our technical debt, start defining change models, and use state transitions; we've covered those in the previous sessions. We've talked about governance as well: make sure the governance is right for the context you're using it in, and make sure that when you have shift-left processes, the governance is fairly automated and keeps guardrails in place. Where you don't have that, where you're making traditional, bigger changes, the governance is embedded in the change and you follow a more traditional process, but you need to make sure policies are adhered to throughout.

We also want to be more data-driven, and we'll talk about that today. When we're building our approval policies, creating our change models, or doing risk evaluations, we have both the opportunity and the benefit of being more data-driven. We have more data than ever: as teams adopt integrations to DevOps systems, there's a wealth of stories, planning items, security scans, vulnerability scans and code sniffs in there, so we can start pulling that data in. As our CMDB matures, we can use not just the individual CIs in the CMDB, which is great, but also the relationships, to look at things like impact. So as we get more data into the systems, we can look at that data, drive automation, and take the subjectivity out of the change process where it's appropriate. Really it's about driving traceability and velocity in the change process, and it's part of the maturity curve: moving, where appropriate, towards full change automation, where we only interact with changes when there's a need to. That might be every change a certain team is doing, or, as in the DevOps example, it might be a minority of changes at certain times. Maybe there's a P1 in production right now and we need to look at that change; maybe we get the feeling a team isn't adhering to the guardrails because things are failing; or maybe there's something specific about the change they're making, which we pick up from the structured data, that means we should think about intervening and slowing them down a little, just on that occasion. So as we mature in the change process, we're looking to improve velocity, volume, stability and compliance.

So today's focus is dynamic risk; I'll explain what we mean by that in a minute. The actual layout of the webinar is a little
different from the last ones. This topic is, I think, a little less conceptual and a little more practical than what we've covered in the last two webinars, so we want to dive in and show you the platform more: there are more demos in this webinar and fewer slides. That's one thing to bear in mind; we're going to see if we can get through quite a lot of demo material. We've got Andrew and Lee on the call with us, the SMEs from the engineering team, who are the best-placed people to do those demos. Unfortunately for you, I'm going to do a couple of them as well.

So, the overall change management adoption journey. We've covered this before, but we see these as a logical set of steps, and they're not necessarily something your whole organization needs to do at the same time. You can have different teams, different areas, different bits of your business or organization moving at different speeds. It may be that your organization works on some of the balloons on this line a little earlier or later; it's not something you need to follow prescriptively. But when we've talked to customers, when we've talked as an engineering team, and when we've talked to our implementation teams and partners, this feels like a good, logical path through adoption. We talked in the first webinar about change models, we talked in the second Launch and Learn about approval policies, and now we're talking about dynamic risk.

If we drill into that a little more, what does the maturity journey look like for risk? Each one of those balloons has its own maturity curve. We're looking at taking you from where you probably are now, where you have a risk score on a change that might just be high, medium or low, set manually: a change manager or some kind of change assessor goes in and says the change is high, medium or low, and maybe writes a description or work notes. Or you may be using risk assessments, a questionnaire-based approach. What we want to show you is the maturity curve from there: starting to use embedded data like success scores; starting to use things like risk conditions, which are more structured; success scores then allow you to start looking at the overall picture around the change; and then predictive intelligence lets you use machine learning and AI techniques to assess change risk from a number of factors where there's maybe just too much data for humans to assess. We've actually found this to be a really good use case for machine learning, and we've had some good results, but more about that later.

OK, so I'm going to dive in now with the first section, which is change success scores. In a way, change success scores are not really a risk component: they're there to calculate and indicate how well a team is doing with a particular model of change or a particular standard change template. We'll have a quick demo after this, just a couple of minutes, that I'll do. What we're going to show you is how to use success scores. This is a Pro feature, by the way; I'll call that out right at the start. I think most of our customers are now on Pro or above, but it is a Pro feature. What it does is this: a job runs, collects data, and applies multipliers to certain numbers within that data, producing a score for a team or for a model. So if you look at, say, an assignment group, it will have a success score against it for change, and so
what we're doing is looking at the past history, the last 30 days of changes, and seeing how well the team has performed over those 30 days. We apply some weightings to the factors in that performance and produce a score from it. This allows us to do a couple of things, one directly related to risk, the other indirectly. It allows us to assess team performance (that's not really what we're here to do today): to see how well the change process is being applied by teams and how well they're doing with changes. What we want to do today is use it as part of risk. So I'll cover it briefly as background, and then we'll show how it's used within the risk calculations for change management.

What we'll cover in the demo is: what are success scores, how do you configure them, and how do you use them? I'll be very brief. Just going to share my screen.

Diving straight in, we have a dashboard. (In an upcoming release this dashboard is going to be replaced by something elsewhere in ServiceNow; documentation on that to follow, as we're not quite ready to show it yet, but for the moment this is where it lives.) This is our change success score dashboard. You can see we've got overall change success scores and successful changes, but down here we've also got the change success score for different assignment groups within the business. We're scoring this with a raw number, and we can also see the trend over time, so we can see how each group is performing. It's difficult to mock this in a demo environment; it relies on a lot of data. You'll all have a lot of data in your production environments, and we're not using a real production environment here, which is why you can't see scores going all the way down. Also, the changes in this data set were generated; we had to do that to create the volume of changes needed to show things like machine learning and success scores working, so don't rely on the wording within the changes making any sense.

If I go in now and have a quick look at change model success (that was team success): if you're using change models, which we covered previously, one of the things they allow you to do is break success down a little more. You can see how successful changes with a given purpose are. You may start making models for, say, SAP changes or DevOps changes, and as you use those models you can see how successful each one is, which again allows you to intervene with those teams.

I said this would be a brief demo, and it will be. So how is it configured? You can see here we have some thresholds, which give us a rating for the scores. We've got a score start and a score end, and the ranges carry on from each other. From there we give a rating; at the moment, out of the box, there are four ratings: low, medium, high and excellent. That gives us a view of how good a particular team's success score is. One more thing I was going to show is the configuration of the metrics; I can't see it in here, so I'll skip over that, and someone can come back to it later. On the slide I showed before, that's where you can configure the multipliers for your success scores.

As we've seen before, this job within Platform Analytics runs on a daily basis to calculate the scores. It looks at closed changes and calculates the success score based on the closed changes from the last 30 days. Those time windows (the daily job and the 30-day period) are not configurable at the moment, but they give us a good, recent slice of time to calculate the success scores over. What the job does is look at closed changes: whether there are any P1 incidents attached to those changes, whether any outages were caused by the changes, whether they were successful or unsuccessful, and how many of each of those things there were within the time period for that team or model. It then applies the multipliers to give an overall indication of how successful the team is. The default score for change success is 500: a team that hasn't done any changes sits at 500, decreases from there if they're particularly unsuccessful, and increases if they're particularly successful.

Moving on, a quick recap to make sure I covered everything. We can use success scores as an input for change risk, and that happens out of the box. We can also use success scores in approval policies: we can check them directly from approval policies if we want to, as we discussed in the approval policies Launch and Learn. And we'll talk a little more about when to use approval policies, when to use risk, and how they interact with each other right at the end of this Launch and Learn.

OK, moving on: dynamic risk evaluators. We have four ways to calculate your risk score in ServiceNow, and most of you will have come across the first of these, which is risk assessments.
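First, though, the success score mechanics just described can be sketched in plain JavaScript. This is an illustrative sketch, not the actual Platform Analytics job: the multiplier values, field names, and rating bands below are assumptions for illustration; only the 500 default and the low/medium/high/excellent ratings come from the session above.

```javascript
// Illustrative sketch of the success score mechanics only. The real
// score comes from the daily Platform Analytics job; the multiplier
// values, field names and rating bands here are assumptions.
const MULTIPLIERS = {
  successful: 5,      // each successful closed change nudges the score up
  unsuccessful: -15,  // unsuccessful changes pull it down harder
  p1Incident: -25,    // P1 incidents attached to a change weigh heavily
  outage: -40         // outages caused by a change weigh the most
};

function changeSuccessScore(closedChangesLast30Days) {
  // A team with no changes sits at the neutral default of 500.
  let score = 500;
  for (const chg of closedChangesLast30Days) {
    score += chg.successful ? MULTIPLIERS.successful : MULTIPLIERS.unsuccessful;
    score += chg.p1Incidents * MULTIPLIERS.p1Incident;
    score += chg.outages * MULTIPLIERS.outage;
  }
  return score;
}

// Threshold records then map the raw score onto the four
// out-of-the-box ratings; the band boundaries here are placeholders.
function rating(score) {
  if (score >= 700) return "excellent";
  if (score >= 500) return "high";
  if (score >= 300) return "medium";
  return "low";
}
```

Under these placeholder numbers, a team with one successful change and one failed change that caused a P1 would score 500 + 5 - 15 - 25 = 465 and rate as medium.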
We'll cover that first and do a demo of it. The next one is risk conditions; I think quite a few people are using risk conditions, and they're a more structured way to capture data. Then we have the calculated risk score, which looks at the success scores, so it's really looking at probability and impact. And then we have risk intelligence, which is our machine learning, predictive intelligence based solution. The calculated risk score and risk intelligence require at least an ITSM Pro license.

So, risk assessments. This is an established way of working; we've been doing it for a very long time, and the assessments in ServiceNow are built for change. There's a Surveys and Assessments module within ServiceNow, and we extend it for change management: we build a special one for change management. To set it up, you go in and create some questionnaires: you bring questions into the assessment, and then you weight those questions according to how important they are, so some will have a higher weighting than others. We then combine all the answers from the questionnaire and provide an overall risk score. You can do multiple risk assessments for a change. You might just want one, but you can do multiple, and people can refill the same risk assessment, or possibly even trigger multiple risk assessments of different types for a change. We record all of that against the change. So although the answers in those questionnaires are fairly subjective, and there's a possibility they could be gamed (I think people have seen that: people filling in multiple questionnaires until they get the answer they want), it's all recorded. We have an audit trail of who filled it in, when, and what score they got each time. So although it's subjective, it's auditable and recorded, which makes it a good technique in that sense.

It can be a little slow, though. There are no real automation opportunities here beyond automating the creation of the risk assessment; you rely on a human to make time to fill it out. We can use it where we have little, no, or poor structured data around the change. If you have a low-maturity CMDB, or that area of the business has a particularly low-maturity area of the CMDB (maybe it's a vendor and you don't have visibility of their CMDB), then a risk assessment is an appropriate way of working. But where we have structured data, many of the questions we ask in the assessment could be answered from that data. For instance, "are you making a change to a critical service?": if you had a good view of which services are impacted by a change, you could check whether a critical service is affected, and we'll touch on how you do that later. It can also be a real blocker on change velocity, so in some areas of the business it's a non-starter. You don't want to be sending out risk assessments to DevOps teams, because first of all they probably won't fill them in, and second, the time that process takes will considerably slow down a high-velocity change process. We want to introduce more automation there, so it's not really relevant to shift-left processes like DevOps.

OK, we'll jump to a demo in a second; I'll just introduce what we're covering. Andrew, are we good to do the demo? Always good to do the demo. You just want me to focus on change risk assessments at this stage, and you're going to go through the three questions we need to answer? Yes: what are change risk assessments, how do I configure them, and how do I use them in the change process? Good, I'll focus on answering those three questions. OK, thanks Andrew.
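As background for the demo, the weighted scoring an assessment performs (the sum of each answer's value multiplied by its question weight, mapped onto thresholds that must be surpassed) can be sketched in plain JavaScript. This is an illustration, not platform code; the question shown and the exact band values are taken loosely from the demo that follows.

```javascript
// Illustrative sketch of the questionnaire scoring, not the platform
// implementation. The formula is the sum of each answer's actual
// value multiplied by its question weight.
function assessmentScore(answers) {
  return answers.reduce((sum, a) => sum + a.value * a.weight, 0);
}

// "Thresholds" have to be surpassed: the first band whose start the
// score reaches wins, so order them descending and always include a
// zero band, since negative values are not allowed.
const THRESHOLDS = [
  { start: 11, risk: "high" },
  { start: 7, risk: "moderate" },
  { start: 0, risk: "low" }
];

function riskFromScore(score) {
  return THRESHOLDS.find(t => score >= t.start).risk;
}
```

A yes/no backout question worth 2 with a weight of 10 contributes 20 on its own, which already lands in the high band under these values.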
We'll just do a short demo, a couple of minutes; it's quite a quick one. We'll go into more depth on the things that are probably newer to you, like predictive intelligence and risk conditions (and derived risk... sorry, struggling a bit, my brain's full). Right, Andrew, over to you.

Of course. I'm muted; hopefully you can hear me and see my screen. I'm going to shuffle Zoom around, because Zoom is always in the wrong place. Cool. I'll start by showing a risk assessment actually happening. I'm sure most of you are already doing this, but I'll do it anyway. (I have a cold too, so I'll be coughing; I should have had a glass of water before this.)

I'm going to create a normal change, logged in as an ITIL user. In the instance we'll see the risk evaluation here at the bottom, and I'm going to hit Assess Risk. Here we can see the questions, such as "does the change affect a critical CI?", and I'm going to answer all of them in a way that I know will give me a moderate risk. I submit that, and the risk evaluation call to action has finished. On the right-hand side we now see that the risk is moderate. I can click here to view the risk assessment, which shows the last completed risk assessment in a read-only format: it evaluated to moderate, and for that reason the risk is moderate. I'll stop there on that aspect, because I'm sure all of you have seen it running and working.

I'm going to stop impersonation and go back into the instance to show where you find change risk assessments and how to configure them. We start by typing "risk assessment", and under Change Administration, Risk Assessments, we'll see what we have. How many will you have out of the box? It depends how long ago you started, because originally we had a legacy risk assessment, which I won't be demoing, but I'll touch on it: we used to ship two, a software change risk assessment and a hardware change risk assessment, and these were amalgamated into a single risk assessment. I'll click into that to show what we ship out of the box, and I'll mention how we built it, because we didn't build it from scratch. We want to reuse as much as we can in ServiceNow so there's familiarity across products, and in this case we're building on the Surveys and Assessments version 2 architecture (the legacy one was built on the version 1 architecture).

What I'm showing here is that when you create a new risk assessment (I'm just opening an existing one), you say which types or change models it needs to match, and that's where it activates from. Dan has probably touched on this at some stage, but for standard changes we'd argue you probably don't want a risk assessment: the idea of a standard change is that it really is low risk out of the box. That's not to say it always is, but the risk assessment is really geared towards changes of type normal or emergency, or change models of model type normal or emergency, and it only matches in certain states, like New, Assess or Authorize.

Now, a lot of people get a little frustrated (and I understand why) when they create a definition or a condition, because there's this thing called an assessment category. As I say, it's built on Surveys and Assessments technology. We recommend you use only one category, and in that category you'll have a filter. And I want to repeat: this is a filter, whereas the previous one was a condition, and we simply say make the two identical. We're building on existing technology, and we really want you to have one category and one definition; the category has all the questions.

So here, for example, are the questions, and we also have weightings. Why do we have weightings? Our formula is the sum of each answer's actual value multiplied by its weight. I'll go into one of these weightings if I can remember where it is. Here we have the values, for example 4, 2 and 1, but the question itself has a weight; here's the weight, so let's give this a weight of 10. The idea is that some questions matter more. "Is there a backout policy, yes or no?" may only have a value of 1 or 2, but with a weight of 10 it actually contributes 20, because that's more serious when you're trying to evaluate risk.

It's also partly an attempt to reduce gaming, because you can choose answers to get the result you want: "I want this to come out as a low-risk change, so I'll answer accordingly." We don't tend to focus on that veracity aspect too much, but every completed assessment is kept as a list of assessment instances against the change request. It becomes obvious, if one change request has 20 filled-out questionnaires against it, that someone is probably gaming it, though once somebody has gamed it, I guess they already know what the answers will be for the next change request. So we never focus too much on the veracity aspect, but that's partly what the weightings are for: partly to discourage gaming, partly to identify it.

So, the sum of actual values multiplied by weights comes to a number, and then we've got the thresholds. We call them thresholds because they have to be surpassed. I'll order the list by that value. You always want one threshold that starts at zero, because we don't allow negative values; nor does Surveys and Assessments allow negative values. If you ever see a negative value, it's a defect. It was possible in the past to mess around with that; we try to avoid it now. So if your answers add up to a value from 0 to 6, it's a low risk (you just get an output number); when it surpasses 7 it's moderate; and above 11 it's a high risk. That's basically it.

So when you're configuring it, you're creating a change risk assessment, you're creating your condition and your filter (sadly you have to do both; it's a bit annoying, I get it), and you match a change request record in certain states. From that point onwards it's enabled, so you need to fill in a risk assessment. We don't enforce that, but it can be enforced afterwards. And then of course you also have the assessment instances created against this definition; you can go to the change request itself and see the assessment instances of that definition against the actual change request.

One final thing I'll touch on, having completed one: change risk details. Often there's a question: how do I know if someone's completed a risk assessment? Should I go and use the risk assessments API? How do I know a risk assessment, or what I'd call a risk evaluation, has been completed? We look at the change risk details record. If there's a change risk details record, a risk evaluation has been done, and that covers all of the evaluators, of which risk assessment is one. I've forgotten the three questions, Dan, but does that cover it?

That's perfect. I'm going to pop back to my screen (thank you, Andrew) and just recap the demo quickly. Risk assessments are an extension of Surveys and Assessments. They're configured, and triggered, using conditions, and they're weighted, so you can apply different weightings to different questions in the assessment. You can run one many times for a single change, and people often do, but the latest risk assessment is the one used for scoring, and the history of risk assessment answers, and the risk assessments themselves, are stored and audited against that change, so you can see them for compliance.

OK, moving on to risk conditions. Risk conditions are a way of capturing more structured data around the change, and they're more automated: you define property- and condition-based risk determination. You say things like "are there any critical services affected by this change?" or "is this change during a blackout window?"; you can evaluate these things and set the risk accordingly on the change. The first condition that evaluates to true sets the risk, so you'd want your highest-rated risk conditions first in the list; we'll cover that in a second. From a build point of view it's pretty low code, or sometimes no code. If you're gathering a lot of data on the change and you want to go and inspect a list of
things like the list of affected Services iterate through them and pull back if there's any critical Services there are ways of doing it we'll talk about that at the end but that's a code-based approach but it's very low code based approach um uh sometimes they're just that the conditions themselves can just be set using a condition Builder um it takes advantage of things like CNB if you've made investment the CNB risk conditions are a great way to go because they leverage that investment um and also they drive change velocity because they're automated they just happen no one needs to fill them in you don't need to wait for anyone to fill them in they they should fire in a few seconds and then you'll have a risk evaluation done on the change um they do require good referential data so they do require you to have either a go cdb or good data on the change itself depending on on what you're looking at they take a little bit of thinking in terms of your process to implement so you may have to do a tiny bit of rethinking about if you've got compliance documents around risk assessments and you need to cover how risk conditions fire that may be something you need to cover um and they require a little bit of engineering effort they require a little bit of going in as a service now admin and you know maybe scripting a condition there are lots of examples out there on community we provide some out the box for you um hopefully we'll be looking to provide more um but uh they are a great way of speak the change process they're a great way of using structured data they're a great way of taking subjectivity out the change process where you can also they can be used for particular model so if you have areas where you know particular model like devops has a degree of um of of good structured data by Nature you can use them there and you can choose to use assessments where maybe that data doesn't exist so you can mix and match these approaches uh and the actual risk evaluator which 
we'll cover in a few more slides, is a way of bringing all these things together in one place. So we're going to go to another demo now. Andrew, over to you: we just want to see what risk conditions are, how you configure and build them, and how you use them in the change process. Cool, thank you very much. I'm going to start sharing my screen; the Zoom controls are always in the wrong place, every single time. Once again, hopefully you can see my screen. This time we're going to focus on risk conditions, and I'll start by saying that to find them, you type "risk conditions". I'll admit I've never loved the term "risk conditions"; having worked on these for 11 or 12 years, I've always been an advocate of calling them known risk issues. Out of the box there are three, the last of which is "there are no expected risk issues found", and I'm going to touch on Daniel's favorite, "insufficient lead time", because it's actually the simplest one to look at. It's a bit of a joke, probably not a really good and accurate one, but it gives a great demo; whether it's a good predictor of risk is questionable. To start with, it simply says: if you're creating a change request and scheduling it to start this afternoon, my goodness, that sounds rushed. Surely that's high risk; surely you need at least three days for people to look at it, analyze it, approve it, and so on. So if you're scheduling a change request within less than three days, it's high risk. Here's the example: "insufficient lead time", and I've set the risk to High. It also has an Impact field, which unfortunately I'll have to touch on a little; sorry, Dan. I also need to talk about the order, the advanced conditions, the script conditions, and the condition builder. But as Dan said, it can be very low code. It can say: OK, I want to set the risk
to High, I don't want to affect the Impact, and I want the order to be 100. It's first to match: the first risk condition that matches is the one that will be applied. And if no risk conditions match — well, I'll focus on "no expected risk issues found" here, because otherwise nothing will ever bring the risk down. Sometimes you may say, "I want the highest of all the risk evaluators," but unless you have something that actually brings it down, it will never go down; the only way to bring it down is to have what is in effect a default risk condition that lowers it. Quickly touching on this one: it's got a description, and the condition is that the planned start date is not empty, and the planned start date is a relative date on or before three days from now. Nice and simple. And we only want it to apply in New, Assess, or Authorize. Why is that? Think about what happens afterwards. Imagine the change is in Implement, and you continued to run this condition there. When you do an update, this will as likely as not match. So right as you're implementing, saying "yes, I've implemented this successfully, it was low risk," you update the record to successful, the record is updated, and it matches "insufficient lead time: high risk." What? So you want to be careful with this one: you want it to only match in certain states. That's probably why it's not an ideal out-of-the-box condition, but it gives a good example of what a low-level risk condition can be. I'll have a look at a couple of more complex ones. Let's start with one that uses advanced conditions: instead of having a condition builder, and instead of looking at the change request record itself, you want to query, say, the CMDB service table.
Please beware: don't just query the whole cmdb_ci_service table, it's big. We query where the service has an association to this change, and if a service being changed happens to be critical, the answer is true. So here we have some code, which means there's a slight barrier to entry: you do need to write code to get the value. But all you're returning here is true or false, and if the answer is true, it will set the risk to High. That gives you a scripted advanced condition instead of a condition builder. And what is the risk set to? It's still a drop-down list, so you don't need to do any code for that part. By the way, I'll point out the Impact field again: "risk conditions" should really have been called "risk and impact conditions," because Impact is set here too, and impact is almost an input to the risk output. It was built a long time ago, and it does those two things. Then I'll focus on the most advanced form, where it's code everywhere: the advanced condition plus the script field, although you could still have a condition builder and only script the output. You can make it fully scriptable: a script runs to see if the condition matches and answers true or false, and if it does match, you set values on current, which at this stage is always a change request (or an extension thereof), manually: impact one, risk two, whatever. You can set it to anything. But please never, ever put current.
update() in here. I know it happens. This runs effectively as a before business rule: calling update splits the transaction, and it's something we really, really never do; it will break everything in the risk evaluation. So I'll repeat it: never put current.update() in either the advanced conditions or the script values. Please don't do it. Those are the three risk conditions we provide out of the box, but I'll show this last one, which illustrates the default. Nothing brings the risk down once it's been set to, say, Moderate; it'll stay Moderate. The only way we can bring it down is with this default one: if nothing else matches, we set the risk to Low and the impact to Low. That's why its order is 400. I'm going to stop talking unless, Dan, you want something else shown. No, we're good, thank you again. Do you want to quickly see it in action? Let me just show you it happening. While you find that, just a note on lead time: it's been shown a lot these days that lead time is not a good indicator of change success, and that's why we're not advocating using it as a risk condition. There are places where it would certainly make sense, we know that, but for things like DevOps changes, low lead time normally leads to a greater chance of success. It's just very easy to demo, which is why we use it here. OK, we've got the risk as Moderate. Oh, I do apologize: I'm going to use a standard change, because I don't want a risk assessment to run, just the risk conditions; I was being silly. It's pre-approved, I'll take one of these and save it, and the risk is Moderate. Now I'll Calculate Risk, which executes the risk conditions, and it says the risk has been set to Low and the impact remains unchanged.
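To make the ordering and state rules concrete, here is a minimal sketch in plain JavaScript of the first-match behaviour described above. The condition names, order values, and record shape are illustrative; real risk conditions are records evaluated inside the ServiceNow platform (the scripted ones through the Glide API), not standalone code like this.

```javascript
// Sketch of first-match risk condition evaluation (illustrative only;
// real risk conditions are records evaluated by the ServiceNow platform).
const DAY_MS = 24 * 60 * 60 * 1000;

const riskConditions = [
  {
    name: 'Insufficient lead time',
    order: 100,
    // Only fire in early states, so a late update doesn't re-flag the change.
    states: ['new', 'assess', 'authorize'],
    matches: (chg) =>
      chg.plannedStart != null &&
      chg.plannedStart.getTime() - Date.now() < 3 * DAY_MS,
    risk: 'high',
    impact: null, // leave impact untouched
  },
  {
    name: 'Critical service affected',
    order: 200,
    states: ['new', 'assess', 'authorize'],
    // Scripted-style condition: true if any affected service is critical.
    matches: (chg) => chg.affectedServices.some((s) => s.critical),
    risk: 'high',
    impact: 'high',
  },
  {
    name: 'No expected risk issues found',
    order: 400, // last: the default that brings risk back down
    states: ['new', 'assess', 'authorize'],
    matches: () => true,
    risk: 'low',
    impact: 'low',
  },
];

// The first condition (by order) that matches wins; later ones are ignored.
function evaluateRiskConditions(change) {
  const applicable = riskConditions
    .filter((c) => c.states.includes(change.state))
    .sort((a, b) => a.order - b.order);
  for (const c of applicable) {
    if (c.matches(change)) {
      return { rule: c.name, risk: c.risk, impact: c.impact ?? change.impact };
    }
  }
  return null; // no conditions apply in this state
}
```

A change scheduled within three days matches the order-100 rule; one with plenty of lead time and no critical services falls through to the order-400 default, which is the only thing that brings the risk back down; and a change already in Implement matches nothing, because none of the conditions apply in that state.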
it was already Low; the rule that fired was "no expected risk issues found." Now I'll schedule it for today, starting right now and finishing tomorrow, and save it. Actually, I could have hit Calculate Risk directly, because it will save for you. Calculate Risk, and now it's a high risk. Why? Insufficient lead time. Yes, I know it's not a good predictor of risk, but it demos very easily, so I'm going to stop there. Thanks, Andrew. I'm just going to go back and share my screen and do a quick recap on the demo. Can everyone see my screen? So, risk conditions automate the process of assessing risk. They're triggered according to the data in your change, so certain risk conditions fire based on certain attributes of the change. They allow you to ensure your organizational compliance is met: if you have an organizational policy that any change to a critical service needs certain types of approval, you can drive that by making the change high risk and sending it to CAB, as an example. They provide an audit trail, so you can see which condition was met, why it fired, and therefore why the change was risked in a certain way. And again, they take the subjectivity out of risk decisions and allow the use of structured data, making good on the investment you've made in ServiceNow, the CMDB, and the surrounding data, relationships, and so on, while also allowing the change to flow faster because of it. OK: the calculated risk score. We're getting a little more complex as we go along, probably into areas where people are less familiar. I think almost everyone was familiar with risk assessments, and fewer people with risk conditions. The calculated risk score is a combination of success probability and impact, and the
success probability is calculated from the success scores we saw earlier, while the impact is determined from things like risk conditions. It can be tailored to individual models as well. We won't show that today, because we probably don't have time, but if you get into that space there are articles and blogs out there, and our docs site shows you how to do it. An impact matrix, combining those two scores together to give you a risk, is an established best practice within the industry. Whether you use it or not is up to you, but it isn't something weird and wacky; it's been done for years. And it's a no-code configuration. On the cons: the derived risk can be slightly confusing precisely because it's derived; you're not seeing "A therefore B," you're seeing a little bit in between. You need to build that understanding into the change team, into the people who need to understand change risk in detail. There are also more components, so it's a bit more complex to understand. We talked before about this, but it requires good practice in change review and closure, because you're using success scores. And it can sometimes be misleading if teams are low-volume change users and their change success score sits at 500 simply because they haven't done any changes. But those are all things you can take into account, and as we'll show later in the demos, this isn't the only way of assessing a change: we look at different aspects of change risk and use the most risky of them to assess the change overall. So in the demo we're going to see what the calculated risk score is, how to configure it, and how it's used in the change
process. Can I hand over to you, Andrew? Yes, sorry, I was trying to answer questions. Absolutely, I'll show calculated risk. OK, cool. Calculated risk is, as we said, the easiest one to show, so I'll start by showing it, and we'll see it in two places. I'll show it here in Classic first: you've got this calculated impact plus change success score, and it gives, in this case, a value of High. That doesn't give you a great deal of information, and in fact we get a little bit more elsewhere; we're working on giving it a lot more background and information. So I'm going to go into the workspace to show you a little more of that, and try to remember which change request it is; six, for example. OK, and here we actually see a little bit more. On this risk card we see, for example, the change success score, which is really a team success score, as a numeric value: 769, for example. And we have a model success score of 83%. Since we're focusing on the change success score, I'm going to lean on my colleague Mr Lee for how to configure this one. I'll also point out that in Classic, to get more information about the team success score (the change success score), you go to the actual assignment group and click this little side button here, and this shows where the number has come from. For example, this team has a total of 170 changes, so you get a breakdown of how many they've done and how they went, because obviously you could have a great success score and yet a count of one out of one. So this gives you more of a breakdown. But again, similar to the subjectivity point, we know there's a veracity question,
but it really affects only one aspect of the calculated risk score, and that is the probability. Let me point that out. Remember we've got this 769. I don't remember the tables, so let me find my cheat sheet of tables... ah, I remembered one. We're going to go a little bit backwards. First, here's the matrix: how do we get to, say, Moderate or High risk? We've got this pairing of impact and probability. Remember from earlier that risk conditions also set impact, and that matters if you want to use the calculated risk score. I know we focus very much on the change success score, but the change success score feeds only the probability, this block here. So whatever your change success score is, it's part of the probability, and it certainly does affect the risk. Of course, given the veracity aspect, it shouldn't be completely relied upon; it can be gamed. What you can do in that case is adjust your matrix: if you don't trust the success score, you can say that Moderate and High probability both lead to High risk, so you give more gravitas to, lean more on, the impact. And how do you do the impact? The impact is more subjective: it's based on the change risk conditions. Is this a critical service? Are there more than 10,000 people using it, or 100,000 people? What is the actual impact? So you've got this matrix for working out the risk, and as I say, the probability is only one axis of it. But let's go back one step, because we've got this number; how do we get from this
number to, say, High, Moderate, or Low? I think it's "calculated lookup" in my history... cool. Here we've got the change model success score, the change success score, and the calculated success probability, and I'm going to group these by definition. Out of the box we ship DevOps and Cloud Infrastructure definitions, but the definition that matches everything generally is Site Reliability Engineering. And here we've got another matrix; it's always matrices of matrices. We take the change model success score and compare it against the team success score (the change success score), and from that we set the probability. So remember, even though the final step is impact versus probability, first we work out the probability, which is itself a matrix involving the model success score, which I touched on. I'll show the model success score here: for example (ah, clicked on it, that'll teach me), you can see the success rate of most of these standard changes is 100%, but the success rate of this particular standard change is 67%. And if I go to models, emergency changes are at 82% and 83%. So you've got that coming in as one input, and your team score as the other. But you're still going to ask: hold on, where does this number 769 come from? I want to know where this particular column comes from. Off the top of my head I'm going to type "success score," and Mr Lee is going to save me and remind me where to go. I had a cheat sheet... I'm thinking it's this one, but it could be the one above. Let me click into this; I think this is the same one. I'm looking for the actual scores... and here, finally:
this table holds the change probability success scores. (I really should learn these table names. Absolutely not; I don't want to learn these tables.) Here we've got the thresholds: for the change model scores, which are percentages, 0, 60, and 80; and for the change success score, 0, 430, and 850. So if I get a score of, whatever it was, 769, we have a moderate degree of success, and where it's over 850, say 900, you have a very high probability of success, and vice versa. So when you actually get your score, all this does is give you a calculated success probability of Low, Moderate, or High, which is then compared against the change model success (low or high) to give the overall probability, which is finally compared against the impact, and that gives you the risk. So while, yes, the success score is based on the veracity of your teams' closures, it's really a very small piece of the puzzle. I'm going to stop and ask: is that good enough? Yes, that's what we needed to see. So as you can see, there's a set of tables in there, and this is all documented. The reason there's a set of tables is that we want to derive a score: first we work out what's more important, the model success or the team success, and produce a rating based on that; then we configure how that rating is compared to the impact of the change; and comparing the two together gives us a risk score. So it's a couple of jumps away from the change record itself, all recorded, and all part of the initial configuration. Once it's configured, it just runs for changes, and we get a derived score to use. I'm going to share my screen and go back onto the slides. So: it takes into account the past performance of the team and of the model being used for the change; the probability of success is then combined with the impact of the change, and a risk score is produced.
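The two matrix lookups Andrew walks through can be sketched as a plain JavaScript lookup. The band thresholds (0/430/850 for the change success score, 0/60/80 for the model percentage) come straight from the demo above; the cell values inside the two matrices below are illustrative assumptions, not the shipped ServiceNow configuration.

```javascript
// Two-stage lookup behind the calculated risk score. Threshold values are
// from the demo; the matrix cells below are illustrative assumptions.

// Change (team) success score: below 430 low, 430-849 moderate, 850+ high.
function teamBand(score) {
  return score >= 850 ? 'high' : score >= 430 ? 'moderate' : 'low';
}

// Change model success score is a percentage: below 60 low, 60-79 moderate,
// 80+ high.
function modelBand(pct) {
  return pct >= 80 ? 'high' : pct >= 60 ? 'moderate' : 'low';
}

// Stage 1: model band x team band -> success probability (assumed cells).
const PROBABILITY_MATRIX = {
  low:      { low: 'low',      moderate: 'low',      high: 'moderate' },
  moderate: { low: 'low',      moderate: 'moderate', high: 'high' },
  high:     { low: 'moderate', moderate: 'high',     high: 'high' },
};

// Stage 2: impact x probability -> risk. A lower probability of success
// means a higher risk, so the matrix inverts the probability axis.
const RISK_MATRIX = {
  low:      { low: 'moderate', moderate: 'low',      high: 'low' },
  moderate: { low: 'high',     moderate: 'moderate', high: 'low' },
  high:     { low: 'high',     moderate: 'high',     high: 'moderate' },
};

function calculatedRisk(teamScore, modelPct, impact) {
  const probability =
    PROBABILITY_MATRIX[modelBand(modelPct)][teamBand(teamScore)];
  return { probability, risk: RISK_MATRIX[impact][probability] };
}
```

With the demo's numbers (team score 769, model score 83%), the team lands in the moderate band and the model in the high band; the rest of the result depends entirely on how the two matrices are configured, which is the point Andrew makes about tuning them when you don't fully trust the success score.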
Like I said before, it's a little bit to get your head around, all those different tables to jump across, but once you look into it, there are just a couple of tables and a couple of matrices to configure; once they're done, you're up and running. OK, so now we're going to talk about dynamic risk evaluation using risk intelligence. If you're an ITSM Pro customer, you have the ability to use this. It's a transformational way of working, meaning it's doing things differently from how you probably do them now. We have customers who are using this in anger in their instances; we have large customers using it; and it works with large volumes of data. There are two ways of configuring it: via classification or via similarity. You don't need to worry about that too much; they're explained on our docs site. But to use classification, which is the more accurate of the two engines, you need more than 10,000 changes in the system, or within the set it's looking at, to create the model. The good thing, though, is that you can use this in information-only mode: if you have it, you can switch it on and run it to see what it would have said the risk of the change was, using machine learning. We have found, in testing and with customers, that it's very accurate. It's a really good use case for ML, and we're quite proud of it, quite proud of how well it works; we'd like more people to use it. And when we look at the next topic, which is how we combine all these things together, it certainly isn't the only thing we'll ever look at. We always take a paranoid approach to risk: if any of the methods you use to create a risk score for your change comes back with it being a high-risk change, we use that; if any of them comes back
medium while the others rate it low, we use the medium risk rating. We're always taking the more paranoid approach to these things. It's technically a low point of entry: you don't need to understand machine learning, and you don't need to be able to write code; out of the box the solution works really well. You can tweak and train it, and that again is quite easy to understand; you need to get your head around what effective training data looks like, and until you get to that point we suggest using it out of the box, as I'll show in a second. It takes into account the wider change landscape: when we look at the risk of a change, we're not just looking at this change and what's happening right now; we're looking at past change history to give us an idea of what the risk of this change is, and that's effective with a large number of changes. Some cons: it may take a bit of organizational shift to get your compliance people around the idea that you no longer have a human in the loop for one of your risk processes (the others still do), and that may require a bit of conversation. It requires a relatively high volume of change to be effective. And it's only as good as the data you provide it with; it's not magic that will fix your data for you, but if you have reasonably good data, it will provide good insight. OK, I'm doing the demo on this one. We're going to show what the predictive intelligence solution is, how to configure it, have a little look at how you might train and improve the model, and see how to use predictive intelligence in the change process, which is very straightforward. Let me share my screen.
Rather than talking over ourselves, let's just get into it. We're very near the Q&A, by the way; we're not going to run over, we're on time, so we'll have plenty of time for questions after this. OK. If I go in here and type, you can see there's a whole section for Predictive Intelligence, and if I go to the Predictive Intelligence homepage, we can see there are some solutions loaded. This is a plugin you need to enable; how to do it is on the docs site. Once the plugin is enabled, you'll see there's a Change Risk Classification and a Change Risk Similarity machine learning solution. We suggest that if you have enough records for classification, that's the approach you should take. We don't need to go into the detail of why, but classification only works with over 10,000 records, so if you have over 10,000 change records, go into this one. Once we're in the classification record, we define what the machine learning training will take into account when it runs. First of all we're configuring the data set. We don't want standard changes in the set, because they would probably give us a false readout, so we exclude standard changes; this is how it's configured out of the box, by the way. And we only want changes that are closed, because we want to look at past history of closed changes; that makes sense as well. The output field is Risk; again, that should be obvious, since we're looking to set the risk with this thing. And the input fields are the short description and the implementation plan: the only fields on the change it will consider when building its risk model are those two. You may want to look at other fields, but to start with, keep it simple. Though it may seem compelling to put more fields in and look at more data, machine
learning solutions work better when they have large data sets and can draw analysis across a lot of similarity between records; the more fields you put in, the more the data spreads out for the algorithms at work. So run it first as it is out of the box and see what you get. You don't have to use it in anger in the change process; you can just run it and see how it works. OK, so we then train the solution, and this solution has been trained. Once it's trained, it will rerun periodically to update the data in the model. (In the model, thank you, Lee. Andrew, please feel free to step in if I start garbling my words on this one.) You can see here there are only a handful of records matching the condition; you'd want a few more than that in a live instance. OK, so when this has run, it will produce an output into Risk for you. So when we look at a change... I'm struggling here to get the window up. Oh, it's not asking me to run the risk calculation; why is that? We can look at that in a minute. When we look at the risk, one aspect of it will be set by machine learning. It will just come back with a value for us, High, Medium, or Low; it won't really tell us how it got there, because it's machine learning, so imagine it does it by magic. (It's not magic, but kind of imagine that.) What we find is that the best way of assuring yourself it's working is by looking at how it's applied to changes: not using it in anger, but running it as a kind of pilot alongside, and that can be set so it isn't actually used for the risk evaluation. You can then look at how well it compares against how the changes actually closed, how good a job it made of determining the risk. Again, we found it to be very accurate.
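As a concrete picture of the training-set definition just described, here is a sketch in plain JavaScript. The field names mirror typical change request columns, but the shape is illustrative; in the product this filtering is declared on the Predictive Intelligence solution record, not written as code.

```javascript
// Sketch of the out-of-the-box training-set filter for change risk
// classification (illustrative; the real filter is configured on the
// Predictive Intelligence solution definition, not coded by hand).
function trainingSet(changes) {
  return changes
    .filter((c) => c.type !== 'standard') // standard changes skew the model
    .filter((c) => c.state === 'closed')  // train only on finished history
    .map((c) => ({
      // Input features: the only two fields considered out of the box.
      short_description: c.short_description,
      implementation_plan: c.implementation_plan,
      // Output field the classifier learns to predict.
      risk: c.risk,
    }));
}
```

Anything else on the record (assignment group, CI, dates) is deliberately left out of the feature set at first, which matches the advice above to start simple and only widen the inputs once the baseline model has been evaluated.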
The next stage is to start using it as part of the whole set of evaluators that make up our risk evaluator, and that's what I'm going to show now. Dan, before you do, I've given you a link showing which of the changes actually has the risk ML result on it. Thank you very much, that's brilliant; amazing. So, presuming you can see this one: OK, in here we can see that risk intelligence has rated this change as high risk, and it actually matched the calculated risk score and the risk condition rules. You'll find when you look at your data that they will often agree; that's what we've found from customers talking to us and from our own testing. So the risk of the change has been set to High. We'll go into how the risk evaluator works, and you'll see that when risk intelligence fires, it's only taken into account if it thinks the risk is higher than your other methods concluded. So it's a good way of increasing your ability to detect risk within the system: risk intelligence will never lower the risk. OK, I'll go back to my slides for a quick recap. It's an out-of-the-box solution provided with ITSM Pro; you need a Pro license to use it. In most cases it works with very little configuration: you can just switch it on, train it, and start running. It may have to be configured a little to work with your data and processes: if you have a highly configured change process, for instance if you're not using the short description field because you've renamed it and put the data somewhere else, it won't pull much from a blank field you're not using. So that may be something you need to configure to work with your own implementation. You can try it without live use, and it's both accurate and useful. OK, so: the risk evaluation process. Andrew, you can jump in on this one as well, because this is kind of
Andrew's slide. When we do the risk evaluation framework, we look at the set of things we've just shown you, move through them, and only ever use the highest risk from the evaluators in the framework. So we look at the risk conditions, the risk assessment, the risk intelligence, and the calculated risk (impact and probability), and we take the highest of those. We display all those different aspects of change risk on the change and store them, so you can see them for the change, and we highlight the highest one; that is our derived risk value. So it's quite simple, in a way: you can use one or more of these types of risk assessment, all four of them if you want, and when the risk evaluation is done, the risk is set to the highest of those values, giving a derived risk score. OK, we've seen that already; the demo I was going to show was really just the box showing that the several methods of evaluation have taken place, and we picked this one because it was the highest. That's what was used for the change, and that's why the risk of the change was set to High. Of course, change managers, or other people depending on your implementation, can always override the risk on a change, but the system is telling you why that risk was calculated by our risk evaluation process. Is there anything I should add about the risk evaluation process, Andrew? A couple of things, I suppose. Only the risk intelligence evaluator can be, so to speak, run but turned off: if the other ones match, the framework takes the highest risk of whatever matches, except for risk intelligence, which can be turned off for the reasons you've already described. Yes, so just to recap that again: you could have risk intelligence running in the background, and it will not be used by the risk evaluation framework until you tell it to be used in anger.
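The "highest risk wins" rule, with risk intelligence able to run in information-only mode, can be sketched like this (the evaluator names and record shape are illustrative, not a product API):

```javascript
// The framework's paranoid rule: the derived risk is the highest value any
// counted evaluator produced. Evaluator names here are illustrative.
const RANK = { low: 1, moderate: 2, high: 3 };

function deriveRisk(evaluations, { useRiskIntelligence = true } = {}) {
  let derived = null;
  for (const e of evaluations) {
    // Risk intelligence can run in the background without being counted.
    if (e.evaluator === 'risk_intelligence' && !useRiskIntelligence) continue;
    if (e.risk == null) continue; // this evaluator didn't run or didn't match
    if (derived === null || RANK[e.risk] > RANK[derived.risk]) derived = e;
  }
  return derived; // highest risk wins; every evaluation is still stored
}
```

So with risk conditions saying Low, the calculated score saying Moderate, and risk intelligence saying High, the derived risk is High; flip risk intelligence to information-only mode and the same change derives Moderate, while the High result remains visible alongside the others.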
That allows you to experiment and gain some confidence that it's producing good scores for you. Andrew, shall I quickly share again for this one? I think we're good; I think we've shown how it works. I was just going to point out the change risk details table, which shows the previous risk evaluation and the latest risk evaluation, with all the values you actually had, and you can use that. There's often the case where people say, "I don't want to move forward until risk conditions have run, until the risk assessment has run, until risk is evaluated": drive that from your change risk details table, because it records whether each evaluator has run or not, individually, and, holistically, what the risk was. So for the process aspect, whether to allow moving forward, knowing what has run and what hasn't, please use change risk details as your single source of truth. That's what we mean by the framework: all of these feed into that table. That's where you would create, say, business rules, or, in the future, use state transition conditions. In fact, we've just come off a call about how we can make this easier for you and provide some things out of the box, because we want you to be able to say: don't move from this state to that state until a risk evaluation has run; move automatically from this state to that state when a risk evaluation has run; those kinds of things. In the near future we're looking to make that a lot easier to do. OK, one last thing, which is really just a quick discussion: the difference between risk and approval policies. This is a slide you have seen before. Use approval policies to drive approval, because ultimately the risk on a change is only so useful:
Saying high, medium, or low is one thing, but it should drive what you do with the change, and what you do with the change is drive an approval process. Now there are two ways you can apply what we've got here. You can just put the risk into the approval policies and say, if the change is high risk, then route it to the CAB. That's valid; we're not arguing with that at all. What you can also do is look directly at some of the things we've shown here from the approval policy too. So you can say, if it's high risk, send it to the CAB, but also, if this change affects more than three critical services, we want it to go to the CAB regardless of the risk of the change. Without wanting to confuse everyone too much, just to highlight how this is implemented: the derived risk can be used in approval policies directly, but there's also the ability to look at some of the aspects you've identified as important to the change in the approval policy directly as well. That way, not only are they used in the risk assessment, they are also audited as being part of your approval process. So it's belt and braces: you can use it to drive your approval policy, which is then auditable and again helps you meet compliance.

What we have at the moment is a rich API in change that allows you to make calls: you can make calls to go and get the success score, and you can make calls to see whether services impacted by the change are critical. One of the things we're looking to do in the near future is make sure that API is complete and robust, so that your developers can just make one-line code calls. They can make that call from a risk condition, asking "does this affect a critical service?", or they can make it from an approval policy, asking the same thing, and because it's the same one-line call in both places, you don't end up with confusing or conflicting data. So what we're trying to do is drive the approval process through risk, which is good and valid and best practice, while also allowing you to take the things used in your risk process and call them directly from the approval process if necessary. That's a higher-maturity thing, because a lot of compliance documents say things like "this change needs a service manager approval if it's affecting their critical service".

Okay, so that's what we wanted to cover. We're going to go to Q&A now, but we have one last slide, which is just a recap of the framework: risk conditions, risk assessment, risk intelligence, and risk impact and probability all drive a derived risk score. We take the highest risk that comes out of them to get the derived risk, and that is the risk on the change, which you can then use in places like approval policies. Okay, we are good for Q&A.

Okay, let's see what we've got. Aaron, do you want to come off mute and ask your question? I've just enabled you to talk. Yeah, sure, this is Aaron Vil. It was mentioned earlier on the call that there was a new feature coming called "incidents caused by change ML". Is that coming with Xanadu, and is there any information related to that feature? It's something key for us to work on in 2025. We are currently working hard to make sure that we understand better the incidents caused by change. The feature we had to do that did not produce very good results, as it did not link incidents with changes very well, and it's something where we now have new opportunities, because we have new products and technology
with ML. It's certainly something that is a priority for us to build. We don't have any particular version targeted for when that happens, but as soon as we have something that we know is robust and we know works, we will be making it available. That sounds great. We'd love to do it, and it's all about that thing we talked about before, which is, first of all, understanding how good you are at closing out change (was it successful or not?), and then attaching incidents to a change. But there's also the wider picture around a change. I might have a change that was successful because the change itself was done perfectly, but another team didn't know the change was coming, they didn't update their API, and their system failed. That's still a successful change, but it had some impact. We don't want hand-wringing about whether that change was unsuccessful or not; we just want to be able to show you that there was some impact, and maybe incorporate that into our risk management ideas.

Okay, questions. We have nothing else in the Q&A, so if folks have any questions please just raise your hand and we can talk about it live. Okay, Jason, I'll allow you to talk. Can you hear me now? Yes. Hi guys. So the question is about the quality of data in the CMDB. Is it possible, in the risk framework, to write rules that reflect the quality of data in the CMDB? I'd say possible, but extremely hard; I've not really thought about it. What we have thought about doing instead is improving change managers' ability to analyse across lots of changes, rather than one specific change, to work out whether there are areas of the business where there is risk, and maybe have some way of informing change managers about why that risk exists. I think there's more opportunity to be able to say "this particular area of the business seems to be having a lot of failed changes", or "we have a discrepancy between what they think the risk or impact of a change is going to be when they assess it and what actually happens afterwards". So a before-and-after comparison of risk and impact, to work out where those aren't lining up: where people say it's a low-impact change, it goes wrong, and it turns out to be high impact, or where they're failing a lot on changes they said were low risk. We want to provide more insight in those areas, and that's something we're working on with both the ML teams and the GenAI teams, to provide human-understandable output from the AI saying, for example, "we think this area needs to work on their CMDB relationships because they have poor data quality". We're going to do some follow-ups on that when we have more concrete information about what we're doing there. Yeah, because obviously the calculations are only as good as the data. Yeah, the key is knowing whether the data is good; if you have teams where you know their data is good, then the result is more reliable. The working idea, and I'm not saying it will literally be this, but in my head it's what I'm thinking, is that we want to be able to give a confidence score: to say risk and impact, and also our confidence in that assessment. So if we have low confidence that something is low risk, that inherently makes it higher risk. Yeah, cool, thanks.

Anyone else? This is Aaron again. I'm always looking for good evidence for a crawl-walk-run approach to implementing these features that come from ServiceNow, and for what the exit criteria might be before moving to enabling risk intelligence. In my current environment I don't think our data quality is high enough, to a point that was made earlier, to enable risk intelligence. I want to enable it toggled on but not incorporated into the risk score, as has been detailed here, but I'm trying to get a sense of how you would gain confidence that it's actually going to deliver something of value. Well, first of all, with each implementation you're going to have your own specific issues, and also, because ML isn't using human thought processes to produce what it produces, you may be surprised at what it brings out from the data that you have. That's the first thing I'd say; we've seen that before. As you've pointed out, don't use it in anger until you're confident that it's working well for you, but by all means switch it on and ensure that it's not running as part of the risk assessment process. What we're showing here is the maturity curve for how you would use these things. First of all you've got risk assessments and risk conditions, which are fairly static risk scores that just look at that one change. You've then got success scores, which look at the wider picture of change, and they don't rely on the CMDB; they just rely on people filling out the change success information correctly. Risk predictive intelligence is probably something that most people would consider further along the line, because they've got to do some organisational work to adopt it, but, as you just pointed out, you can adopt it earlier in a non-live sense, see how good the data is, watch it improve as you improve your data landscape, and use it at the point that you think is appropriate. Just to come back to Jason's point, we have the failed change feedback loop and generative AI insights for risk on our roadmap, which is what he was talking about just then.

Okay, who was that? I just want to add to what was already said. In terms of data volume, the classification solution is what we recommend out of the box, but it requires a minimum of 10,000 change request records to be trained on. So if a customer has fewer than 10K records, then the similarity solution can be tried instead; I think its minimum requirement is 1,000. So from a user perspective, one option is to use similarity, and another is to try out different field combinations instead of the out-of-the-box short description and implementation plan, and see what kind of results that generates and whether it produces some value for your instance. I've seen a lot of cases where the change is filled in what we'd consider incompletely: maybe there's no CI on the change, and quite often people are typing that into the short description box or the description box instead, because they don't have that CI, or they can't be bothered searching for it, whatever it is. So it can be quite surprising how good the ML solution is at assessing risk when you don't think you've necessarily got good surrounding data. I'll add to that: in certain hackathons we also wanted to see how we could game the machine learning, and of course we know what algorithm it's using, so we were easily able to work out how to get a risk to come out moderate, high, or low. The thing is that, unlike a risk assessment, which you can game in a few minutes, with the ML solution we had to feed it a lot of data to game it, and that means feeding in a lot of change requests which are essentially bogus, so it's much harder to game. If you're having veracity issues, with things like risk assessments being played, then the ML gets around a lot of that gaming, unless of course you're allowing your users to enter a lot of, shall we say, bogus change requests.
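The data-volume guidance above (the classification solution needs roughly 10,000 historical change request records to train on, the similarity solution roughly 1,000) can be expressed as a small helper. The thresholds are the ones quoted in the session; the function itself is just an illustrative sketch, not a ServiceNow API:

```python
def suggest_ml_solution(change_record_count: int) -> str:
    """Suggest which ML risk solution to try, based on how much
    historical change data is available (thresholds as quoted above)."""
    if change_record_count >= 10_000:
        return "classification"    # recommended out of the box
    if change_record_count >= 1_000:
        return "similarity"        # viable fallback for smaller datasets
    return "insufficient data"     # below the minimum for either solution

print(suggest_ml_solution(25_000))  # classification
print(suggest_ml_solution(4_000))   # similarity
print(suggest_ml_solution(300))     # insufficient data
```

As the speakers suggest, even on the similarity path it can be worth experimenting with different field combinations beyond the out-of-the-box short description and implementation plan.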
How do you counter the case where, let's say, they plan a change but the plan itself didn't depict what actually happened, and so it ends up ultimately failing? How do you correct for that in what the machine learning is modelling on? If there's a group that's just submitting terribly planned changes, and they're not described in a way that's meaningful, isn't that muddying up your model? I think, again, that's about analysis and understanding of failed change as well. Jason, have you got a question, or is your hand just up? Just to finish off on that one: we want to drive two ways with this. With better use of ML, we want to extend the solutions we have beyond the raw risk capability here, into understanding what the impact of a change was and making the actual change process better, and also into understanding patterns around where teams are maybe not filling things in properly, or possibly not filling things in truthfully. We have opportunities around that with text processing and generative AI; there's lots happening right now, but we're in the early days. To give you an idea of what we're thinking: failed change analysis, understanding the detail behind why certain areas of the business are able to record failed change properly and some aren't; then, for those failed changes, understanding which ones had the right impact assessment and which didn't; and then, further on, understanding, when you assess risk on changes for certain areas and certain teams, how confident we can be in that risk based on looking at failed changes. That's really important too, and because it's large sets of data it feeds well into generative AI and into machine learning. I'm just going to extend that a little further. Say you've got one team that's feeding in, shall we say, bad data: they would have to feed in a lot. If that particular team is doing the majority of changes and pushing them through, then yes, you'll muddy your model, but generally that's not the case. If there's one team and it's, let's say, 10% of the entire volume, it's not going to muddy it. We found that you really have to put in a lot of bad data to skew the results, if that's what you want to do. Of course it can be done if there is a team that's determined to muddy your model, but I guess you would then look and say, goodness, why is this team doing 100,000 changes a week when the others are doing, like, 100? It really has to be quite a large difference in scale to sway the model. Great, I think we're at time. Thanks, everyone who's still on. Please complete the survey afterwards; we read all those responses and we really want to make these sessions better, so it's really helpful if you do. And that's it. We'll see you in, I think, two weeks for our DevOps session. Thank you very much, everyone. Thanks, everyone.

View original source

https://www.youtube.com/watch?v=lN-CRC17qYU