

TechTalk - Infrastructure as Code - ServiceNow DevOps and CMP

Nov 13, 2020 · video

John: Welcome, everybody. We're pleased to have you on board to answer a simple question: how can we manage, from a DevOps point of view, the fact that on one side we have software to deploy, and on the other side infrastructure to deploy, with a governance layer on top of all of it? I'm John Wagner, a solution consultant from France, and I have with me today Dennis Karpers from Switzerland. Dennis, can you give us a bit of your background, please?

Dennis: Yes, of course. I'm Dennis Karpers, based in Switzerland, originally Dutch. I've worked for two years at ServiceNow; before that I worked 12 years at Credit Suisse as an IT operations architect, specialized in monitoring, event management, and automation, basically everything that has to do with IT operations management. Before Credit Suisse I worked for companies like Veritas, Symantec, Precise, and so on. So, a lot of experience in the monitoring space, but also in the automation space.

John: All right, cool, thank you. And I'm also joined by Dimitri today.

Dimitri: I'm Dimitri Finas. I joined ServiceNow as part of the Sweagle acquisition, where I was acting as a technical architect, and I'm based in Paris, as you can see behind me.

John: Thank you. All right, Dennis, shall we give the stage to you?

Dennis: All right, let's start. What we're going to show today is an integration between what we call the pipeline and the ServiceNow platform; basically, managing the pipeline from a governance perspective. In this example I'll show you how this can work as infrastructure as code with what we call automated governance. Today we'll have different technologies involved in this demo: we'll see Jenkins, we'll see Terraform, and of course the ServiceNow platform in its full extent. We'll see some coding; we'll see a lot of things today, so it's going to be a very exciting end-to-end demo. The focus will be on the pipeline, but
I'll also show you how that can be integrated with things like ITOM: Event Management, Discovery, and so on. We'll have a lot of components in this demo, and this is what we're going to do. The use case is that we have a pipeline in which an application is built through Jenkins; Jenkins then creates the artifact, and the artifact is deployed on new cloud infrastructure. If you look at how this is done in a lot of companies, they'll probably go straight into Terraform or Ansible to deploy that artifact on the new cloud infrastructure. Of course you can do that, but there's one question: how many controls do you have in place to make sure that things are traceable, auditable, and, from a security point of view, secure enough to be deployed? Those are the things that are missing in a lot of cases. So what we'll show you today is how we bring in those controls, to make sure that if we do a deployment, our deployments are secure enough, have all the necessary controls, have been checked against different validation engines, get an automated change approval, and of course that everything stays within budgets and quotas. Today we'll have Sweagle, DevOps, CMP (Cloud Management), GRC, and finally ITOM Visibility and Health as part of it. That's the demo, and we'll show you how it works. The reason we're doing this now is that, as I mentioned, there are a lot of deployments going on, but how many of those deployments have the controls in place to make sure they're secure enough? All right, having said that, we can go into the demo. I'll start with the toolchain; I will
start in the upper-left corner, as a developer. The purpose of this whole concept is that developers work in their own environment; they should not be bothered with too much governance, because, first of all, they don't like it, and second, it takes a lot of time to fill in all that paperwork. They want to focus on what they like doing, and that's developing. So the developer works only in his environment; all the other interfaces to the governance part of this demo are done through APIs. There are no portals involved; everything is connected through APIs. Okay, let's start as a developer. I'll change some code, and of course the most popular IDEs are Eclipse and IntelliJ. Here I have my corporate website project with all my code, and I can now make a code change: I go to the source, open my index.jsp, and make one very simple change to the text on the page. As you can see, in my version control there are four files that have been touched by my changes, and that's something I'd like to commit to my repository, which in this case is GitLab. A good practice is to always commit with a message; in my case I'll add the work item to the message. The reason is that I want to associate my story with the commit, so that I can trace back what I've deployed in production. That means that if I deploy my code to production, I know exactly which stories are associated with it, so
that's why I always commit with a work item, and that also makes it possible for ServiceNow to correlate the commit with the work item. Before I commit, let me explain how it works: the commit is pushed to GitLab, and GitLab pushes it on to ServiceNow through a webhook. That means every commit I make as a developer is automatically pushed not only to the repository but also to ServiceNow. So let's do that: commit and push, then confirm the push. When I switch screens to my ServiceNow instance, you can see the commit has arrived. Under Commits, which is part of the DevOps application, I can see the commit I just made; today is the third of November, and this is the commit, with the work item I'm working on. The work item can come from ITBM Agile or from any other planning tool; in this case I'm working with Jira. Just as with my repository, in Jira we also have a webhook, so as soon as I create a story, that story is automatically posted to the DevOps module (not to the ITBM Agile module), and it arrives in the work items. Here I have all the work items I receive from the planning tool. When I receive the commit, a process runs in the background that automatically associates the work item with the commit, and that's something we'll come back to later today. So what I have here is basically my first integration: my commit and my work item are associated. If this gets deployed to any stage, a UAT stage or a production stage, I'm always able to see what I have deployed, and that's a very
important thing for traceability.

John: Quick question: what role should have that view? I'm guessing it's not the developer who will go into ServiceNow to look at the commit, right?

Dennis: Good question. Of course, it's not the developer who will go into ServiceNow and check this. This is basically metadata that we're going to use for change management later on. It can be used by the change owner or the change manager, because it gives more insight into what kinds of changes have gone through the process, and that's the traceability I was talking about. So here we're already in a kind of change management world, but the developer of course is not done yet. The next step, once he has pushed and committed his code, is to build his application, and for that I'll switch over to Jenkins, where I have my pipeline. In this case I'm using Jenkins, but of course we also support Azure DevOps and GitLab. How does this work? When I commit in GitLab, Jenkins can be automatically triggered to start a new build. I'll trigger it manually here, because this pipeline is not only building code, as you can see, but also creating new infrastructure, and that's where infrastructure as code comes into play. It's common practice nowadays to build new infrastructure with each deployment, and that's something we support with our Cloud Management module, where catalog items are available to developers so they can automatically create their new infrastructure through a catalog item using APIs. In the background we have Terraform connected to the Cloud Management module, which does the build of the new infrastructure in the
Azure cloud. So that's basically the pipeline. A very important step is check config; this is the integration with Sweagle. Before I do any deployment, I want to make sure that I have the right configuration files and that they have been validated, for different reasons; my colleague Dimitri will explain all about that later on. This is the integration I'm using to make sure my configuration files are correct and validated. Of course I also have my test stage, and between the deployment of my new cloud infrastructure and the deployment of my application, a change request is raised automatically. So how does this information get from Jenkins into ServiceNow? This example is for Jenkins, but for Azure DevOps and GitLab it's more or less the same idea. When I look at my scripted pipeline, you can see different functions from the plugin you need to install in Jenkins for ServiceNow. Here you can see the first function, which sends the information from this stage, the preparation of the config files, to ServiceNow; as soon as this stage is executed, its information is also sent to ServiceNow. When I scroll down a little to the check config stage, you can see how it integrates with Sweagle. Sweagle has its own plugin as well, and this is the Sweagle integration: we still see the ServiceNow step, but here I'm calling Sweagle to do the validation of my configuration file. Scrolling further down to the deployment of my application, you can see where the change request kicks in, exactly here. So the plugin gives you functions to instrument your pipeline and to make sure the information from the pipeline is sent over to ServiceNow. Good.
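To make the check-config idea concrete, here is a minimal Python sketch of the kinds of rules such a validation step applies before deployment. The rule names (`no-http`, `password-checker`) mirror the ones that show up later in the demo, but the logic, the `ENC(...)` encryption marker, and the sample keys are purely illustrative assumptions, not Sweagle's actual rule engine or API.

```python
import re

def check_no_http(config: dict) -> list:
    """Flag configuration values that use http:// instead of https://."""
    return [k for k, v in config.items()
            if isinstance(v, str) and v.startswith("http://")]

def check_no_plaintext_passwords(config: dict) -> list:
    """Flag password-like keys whose values are not encrypted.
    Assumption: encrypted values carry a hypothetical 'ENC(...)' wrapper."""
    return [k for k, v in config.items()
            if "password" in k.lower()
            and not re.fullmatch(r"ENC\(.+\)", str(v))]

def validate(config: dict) -> dict:
    """Return a per-rule report of findings; an empty list means the rule passed."""
    return {
        "no-http": check_no_http(config),
        "password-checker": check_no_plaintext_passwords(config),
    }

report = validate({
    "service.url": "http://corpsite.example.com",  # should be https: flagged
    "db.password": "ENC(k8Qb1xyz)",                # encrypted: passes
})
```

In a pipeline, a non-empty finding list for any rule would either fail the check-config stage or, as in this demo, be attached to the change request as evidence.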
So everything is in place to trigger this build, so let's do that. As I mentioned, there are different stages, and every stage is sent to ServiceNow; that is something we can use in our change request, which I'm going to explain. This build will take a while, because the build of my infrastructure, which I'll show you later, takes approximately five minutes, so I think this is a great time to switch over to Sweagle, as soon as we reach the check config phase, to learn a little more about it. Dimitri?

Dimitri: Yes, Dennis, thank you very much. I'll share some slides so we can define what Sweagle is and what the value of Sweagle is inside this pipeline. First, let's define our terms: with Sweagle we're talking about configuration data management, so we should define what config data is and what we do with it. For us, config data is any parameter or value that is used to personalize your application or your IT infrastructure. It could be an API key, a URL, usernames, passwords, feature toggles; anything like this. When you deploy a virtual machine, most of the time you will use 20 to 50 parameters to define that virtual machine, but in the end, if you look at the VM itself, it will contain hundreds or several thousand parameters: the patch level of every installed library, the list of administrators, and so on. Those are all parameters we are able to manage with ServiceNow Sweagle, whether they are application, release, environment, or infrastructure parameters. What's important about the current situation is that these parameters are spread everywhere: the parameters tied to the application are mostly in Git systems, some parameters are in your CMDB, and some are in other referentials. So we try to collect all of this data that
is important, to validate its consistency and validate that it is adequate before deployment. So how do we do this? At a high level, we manage all this configuration: we collect it, centralize it, and check it. We also secure it, because when you're in a regulated market, or when you want to protect against attacks from hackers, or even errors from people on the inside, you want to secure what is sensitive: encrypt some values, and keep role-based access control based on everybody's duties. And in the end you want to validate compliance. The problem is that you want to do all this with speed, because you want control but you don't want to slow down your developers, your business, or your application deployments. That's why we integrate into pipelines: the purpose is to do all these tasks inside the pipelines, in an automated way, in a few seconds. The purpose is speed with control, and that's what we do with Sweagle. As I said, we can do it for any kind of parameter, and across the full life cycle of your parameters, meaning we take into account the future parameters that are in Git for the next release, but we can also discover all the values that are currently deployed. In that use case, the purpose is to manage deviation: ensure that nobody has deployed something incorrect compared to your enterprise standards, ensure that DR sites are correctly configured, your primary site versus your backup site, and ensure that everything new deployed through your pipeline is correctly validated before deployment. What are the benefits of this? The best way to show the benefits is a customer example, and here you have a customer moving to the cloud, to Microsoft Azure to be precise. They were searching for something that ensures their enterprise
compliance, but also secures all their sensitive values, and is agnostic of any cloud provider and any cloud solution. Why? Because for the moment they are going to Microsoft Azure, but they are sure that in a few months they will split their infrastructure between many clouds, meaning a hybrid and multi-cloud target, so they need an agnostic solution. That's why they use Sweagle as a vault for their sensitive data, and they also use the compliance and validation rules as a control tower in all their deployment pipelines. In the end they are building with Sweagle a source of truth for all the variables and values deployed on their cloud, and because we have capabilities to audit and track every change, it also builds an evidence repository: in case of an audit, in case a security officer wants to check something, or just in case an operational team wants to compare what has been deployed over the last two weeks because they have an issue in production. All of this has a business benefit, and that benefit is "first time right": if you avoid rework and errors, you reduce your cost and you get to production faster. Let's take a look at how it is displayed and how you can control it. This is the GUI mode, a web interface where you can check everything that is happening; but in the end, what we will show you is that everything is integrated. The purpose is not that you go into this GUI; the purpose is that you get the error directly in your pipeline, directly in ServiceNow, and that incidents or change requests are created automatically. Nobody has to go here except the Sweagle administrator; most users will find the information directly in their own user interface. In Sweagle, what they would see is a data model, and here is the data model for a typical application, with the application and its different environments, and for each environment
you've got the infrastructure that the environment is using. For example, here I can see that this environment for this application is using two virtual machines, and for those two virtual machines, the infrastructure shown here is a shortcut to my infrastructure model. This model is built automatically when you import or collect information from your pipeline, so it is a dynamic CMDB: not a CMDB populated manually by people, but one that collects data directly from your inputs and your pipeline, rebuilding itself again and again based on the latest information. What you can see about this virtual machine in particular is that it is shared between a lot of different applications and environments. That means that if somebody tries to update a value here, through an upload or through the GUI, say I change my HTTP setting, the compliance rules will automatically check the configuration for this virtual machine, but they will also check the configuration for all those applications and environments, to see whether it is still consistent and whether enabling the HTTP firewall is okay for web portal 2, 3, 4 and all the other environments. In the end, the results of the rules are available through a dashboard like this one, where you can see everything that has gone right or wrong. Here is the evolution of my configuration, with each point being a picture, a snapshot of your configuration, meaning an unalterable picture for traceability and auditability. You also see the results of the rules assigned to this configuration: for example, a comparator rule between environments, to be sure that you don't reproduce in your pre-production environment the same settings as in qualification (and here there is an error, because one value, a URL, is the same); but also security rules, like this DB compliance check, which checks that you have enabled security for
this DB connection. So all these kinds of rules can be evaluated on the fly when you deploy; you get the result in this dashboard, for the Sweagle admin and for traceability, but you also get the result directly in your pipeline, as Dennis will show you.

Dennis: Yes. I really depend on Sweagle, because it's very important to deploy the right configuration files, and not only from an error point of view; we know that a lot of outages are caused by wrong configuration files, wrong passwords, wrong user IDs, maybe the wrong environment, that kind of stuff. So it's very important to make this part of your pipeline. What we have seen so far, from a control point of view, is the Sweagle configuration part. As you can see, my pipeline has finished, and everything is green except for the config part, which I'll explain right now. When I go to my pipeline details, I have my Sweagle validation report here, which tells me that I'm using HTTP instead of HTTPS. Those are the things Sweagle can tell you when something is not properly configured. I can now use this report in ServiceNow, in my change request, which we'll show you later. So this is the first compliance check we're doing: the Sweagle configuration part. Let's go back to the pipeline. The pipeline has completed successfully and everything is green, so that looks good. After the config check, some testing is done; the pipeline has built the new infrastructure and deployed the application, and in between it automated the whole change request with auto-approvals. That's something we're going to show you in a bit more detail: how that works.
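As a rough sketch of how such an automated change-approval decision could be expressed, the following Python function combines the signals Dennis describes later in the demo: work-item traceability, green pipeline stages, recent outages, change size, and the number of CIs touched. The field names and thresholds here are hypothetical illustrations, not the actual ServiceNow DevOps change policy implementation.

```python
def change_decision(ctx: dict) -> str:
    """Return 'auto-approve', 'review', or 'reject' for a change request."""
    # Traceability rule: every commit must reference a work item.
    if not ctx["all_commits_have_work_item"]:
        return "reject"
    # All pipeline stages (build, tests, config validation) must be green.
    if not ctx["all_stages_green"]:
        return "review"
    # Risk signals: recent outages mean a human should review first.
    if ctx["outages_last_month"] > 0:
        return "review"
    # Size of the change: large code changes or many CIs raise the risk.
    if ctx["lines_changed"] > 1000 or ctx["cis_deployed"] > 15:
        return "review"
    return "auto-approve"

decision = change_decision({
    "all_commits_have_work_item": True,
    "all_stages_green": True,
    "outages_last_month": 0,
    "lines_changed": 5,
    "cis_deployed": 1,
})
# A small, fully traceable, fully green change is auto-approved.
```

The point of structuring the policy this way is that every input is data already on the platform (toolchain commits, test results, the outage table, the CMDB), so the decision needs no manual paperwork.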
The first thing I'd like to show you is the information from the pipeline in ServiceNow. When I go back to my ServiceNow instance and open Pipeline Executions, I can see that my pipeline execution has run: I can see the number of steps, and I can see that the test pass rate is 88 percent, because we take into account the JUnit test results. When I go into the pipeline, I can see its details in ServiceNow as well, the same view as in Jenkins but a bit more enriched. First of all, you can see all the different unit tests: the unit tests of my Kafka client, the unit tests of my corporate site, and the results of my check config stage, which is at 50 percent. We continue the pipeline for demo purposes, but of course this could stop the pipeline, saying: your configuration is not right, fix it first and run it again. Then there's the UAT testing, which is at 100 percent. So all my JUnit tests have passed; my check config part, which is the Sweagle part, is at 50 percent, because, as I showed you in the pipeline, HTTP is configured instead of HTTPS. Why does this Sweagle result show up as JUnit? Because we have an integration that surfaces Sweagle results as JUnit output in ServiceNow; of course, once Sweagle is onboarded on the platform, this will look a little different. Good. When I scroll down a little to the right, you can see the whole pipeline, with every stage mapped to my pipeline stream, and here you can see the change: it sits between the deployment of my infrastructure and the deployment of my application, which has been successful. When I look into the change, you
will see a normal change, of course, an out-of-the-box normal change request, but as soon as I scroll down you'll see a bit more detail about the change itself. Here we have the typical change request fields; some of them have been filled in automatically, and that's something you can configure. Further down, I can see everything that comes from the pipeline: for instance, all the test results of my JUnit runs, and here my "JUnit" tests from the Sweagle side. I can see that the no-http check did not pass, because it's using HTTP instead of HTTPS, and that the password checker passed, which means there are no passwords in plain text; all the passwords in the configuration files are encrypted. I also see some other validations for my corporate website, which all passed. All of this is already included in the change request. As a change management process owner, this is great, because it gives me good insight into the change and whether it fulfills the different change policies; most change policies are related to tests, which should at least be successful. Then, of course, I have included all the commits that are part of the build, so here I can see which code changes are part of this change request, and that's great not only from an audit point of view but maybe also later on from a troubleshooting point of view. Here I have my change policy, which tells me what decision was taken for this change request; in this case it is an auto-approval, because I have all the policies
met. So that's great. There's more: there's of course the work item, so the justification of this change is in here as well. If I want to see what kind of change it is, I only have to click on this link, and I have all the details of that particular change. Then, of course, the normal stuff: the affected CIs, the impacted services, and so on. As you can see, there's a lot of information in the change request, so if I have to prove what has been deployed, all I have to do is go to this change, and I have all the information. From a traceability and a business point of view, this is really great. Okay, so as I said, this is the change request between the deployment of the infrastructure and the deployment of the application. Now, as I mentioned, the pipeline also took care of the deployment of the infrastructure, which is basically an API call to our catalog item in the CMP module. When I go to the cloud portal, you can see that my new infrastructure has been built; this is the stack that was created, as you can see, 18 minutes ago. When I click on the details, I can see all the details of this stack. In the dependency view I see, first of all, all the parts of this stack, what kinds of components have been created. More importantly, as you can see, the stack has been built on the Azure cloud infrastructure, and at the same time the information has been sent back to our instance, so our CMDB is automatically populated with the new stack that has been deployed. That means I don't have to update my CMDB manually; it's all automated, it's integrated, and I have full information about this new stack. As you can see, it created the virtual machine, and it also created a new application service, which my new application has been deployed
to. So if I go to my Azure environment, go to my resource groups, and refresh, you can see that the new resource group has been created. If I go in there, I can see all the different components that have been created. Now, to check whether my change from the beginning worked: this is the change I made. Let me find it; this is the application service, and the application service has a URL, this one here, so let's click on it and see what the website looks like. In this case, Terraform built the new infrastructure, and the deployment was done through Jenkins, but it could also be done with a playbook in Ansible, for instance. As you can see, my text change shows up on the live website. Perfect, so that has worked: my new application is there, and it runs on the new infrastructure, which, as you can see here, is the Azure application service; it also created the VM where my Kafka client runs. Everything is there, my build completed successfully, everything is green, so I'm happy. Now, for feedback into the toolchain, I've also integrated with Jira, so I can notify the developer: when I go to the story involved in this demo, I can tell him that his deployment succeeded. So feedback into the toolchain is supported from here as well. Of course, when something fails, I can also send information back to the developer in his environment, his tool of choice, saying: your deployment has failed, this is an incident ticket that has been assigned to you, here are your
details from Jenkins, please solve the problem. That's something we can send back because we have the Jira spoke, which can be triggered the moment one of the stages fails; I can show you that later. Okay, so as I said, the application has been deployed successfully, we've seen that, but how does it work from the change management point of view? Because, as you can see, it has been auto-approved, so how does that work? To show that, I go to the change policies, a capability of the change management module where I can define my change policies. In this case I have several, and the one I used in the demo is the DevOps change policy quickstart. In this policy I can bring in inputs, and those inputs are basically the data that's available on the platform. With our DevOps integration we can pull in the data from the toolchain, we can pull in the data from Sweagle, but we can also leverage data from modules like GRC or any other module we'd like to use when deciding whether to apply automated governance. In this case you can see a few different things. The top one is the number of outages from the last month: if I want to deploy an application and I know there were many outages in the last month, then maybe I don't want to do an auto-approval; maybe I want to do a review first. That information comes, of course, from the outage table, which is ITSM. Then there's a typical example of leveraging data from the toolchain: the number of lines changed. If I have only five lines of code changed, the risk of something going wrong is not that high, so I can do an auto-approval; but if we're talking about a thousand lines of code changed, that's a different story, and in that case maybe I want to
do a review or i do want to do a cat meeting first and this is the one that is kind of devops practice where i don't allow any deployments where the commit doesn't have a work item here's where the traceability comes into place i want to know what i'm deploying not only from a planning tool point of view but i like to know also what i really deploy in production and that's something that is very important so any commit without a work item i should not accept and then of course the different stages will be ground between which is basically normal so the whole pipeline should be green um including the testing and here here's where i gonna bring in the the um the grc module so i gonna check from vulnerabilities um because so basically i'm guessing if that policy is made to auto approve based on certain criteria a change i'm guessing we could also include uh criteria such as change maintenance periods or back out windows stuff like that right that's that's automatically already included because it's part of the uh part of the of the flow so if the flow runs it will also take into account those kind of things so the black windows conflicts uh stuff and so on so yes all right and we can make it even complicated right so let's say that uh if i could deploy over 15 ci's which is you know kind of a lot of cis at the same time which of course that's a higher risk so in that case also maybe you don't want to do an out deployment but say hey you know let's do a review first because deployment of one ci okay what can be the impact right now so as you can see of these examples we bring every data that's available from the platform in this decision model right so we can make it as complicated as we want we can bring any policy from any type of accord right and this is the corner from the security where i have some controls in place um to make sure that my application is secure enough so as you can see i do some informability scanning with a varicose for instance so i want to make 
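The "no commit without a work item" rule described here boils down to a simple traceability check over commit messages. A minimal sketch, assuming Jira-style work item keys and made-up commit messages (this is not the actual DevOps spoke logic):

```python
import re

# Assumed Jira-style work item key, e.g. "PROJ-123"
WORK_ITEM_PATTERN = re.compile(r"\b[A-Z][A-Z0-9]+-\d+\b")

def commits_without_work_item(commit_messages):
    """Return the commit messages that carry no traceable work item key."""
    return [msg for msg in commit_messages if not WORK_ITEM_PATTERN.search(msg)]

commits = [
    "PROJ-101 add Kafka client retry logic",
    "fix typo",                       # no work item -> should block the deployment
    "PROJ-102 bump Tomcat version",
]
untraced = commits_without_work_item(commits)
```

In the change policy above, a non-empty result like `untraced` would feed the "commits without work item" input and prevent an auto approval.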
sure that when I do a deployment, my code has first of all been scanned and no vulnerabilities were found, and that the other controls are in place. So all those controls that basically need some different process, I can bring in as input to this change management. Here is where the GRC module can play an important role in the deployment: if the controls are not being met, if they are not compliant, then I can stop this deployment, because, you know, let's fix the compliance first. So this is the integration with the GRC module, as you can see.

Good, let's go back to the policy. Again, we can bring in any type of data that's available on the platform, and of course the CMDB is playing a big part, because the CMDB gives me all the relations between these CIs, so I can do impact analysis and so on.

When all these policies have been set, then of course I can work on the decision model: if all my policies are met, I can give an auto approval. This is something you can filter, and this is just an example, but it looks like this: test results should be more than 70, the number of changed lines of code should be no more than 25, the number of commits without a work item should be zero, and the number of outages from the last month should be no more than two. If those policies are met, then I can go for an auto approval. If I have not fulfilled my compliance, then maybe I want a CAB approval instead. So here you can play around with your policy.

So this is kind of the second, or rather the third, line of defense. We have seen Sweagle from a configuration point of view, and we have seen the deployment of the infrastructure, which goes through the CMP, where we can check against quotas, infrastructure policies, budgets and so on; we want to make sure that is also covered. And this is basically the third line of defense, where we check against the decision policy from the change management function. Actually, the fourth one we also show is the security one, because that's what these controls are doing for us: we're bringing the controls from security into change management. So we now have four lines of defense to make sure that our deployments are successful.

Okay, that's cool. Now, how does it work from a flow point of view? As you can see here, this is the flow that is responsible for managing the change request. It triggers when the category is DevOps, and then it follows this path here. As you can see, it picks up the policy data, and the policy data, as I mentioned, comes from the platform. Some of the policy data can easily be queried against a table; some of it you may need to process in JavaScript, and that's where the action items come into place. When the policy data has been picked up from the platform, we can look at the decisions and make the right decision based on this data.

So as you can see, this whole process is fully automated. Me as a developer, I'm still working in tools like Jira, Jenkins, IntelliJ or Eclipse; I don't have to go to ServiceNow to fulfill my work, because, let's say, the boring part is taken care of, but it will definitely do all the checking for me. Normally I would have to go to an infrastructure team, or to the CAB meeting, to discuss those things, but now everything is automated, because I have the data on the same platform, which I can leverage to make the decision.

As I mentioned, I have four lines of defense: I have Sweagle for configuration management, I have the infrastructure CMP for quotas, budgets and infrastructure policies, I have change management, where I can leverage the whole platform's data to make the right decision, and of course there's GRC, which I can
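The decision model just described, auto-approve only when every policy threshold is met, otherwise route to a CAB review, can be sketched as a plain function. The field names are made up, and the thresholds are the example values from the talk (tests above 70, at most 25 changed lines, zero untraced commits, at most two outages, all stages green); the real implementation is a change policy plus decision table on the platform:

```python
def change_decision(policy_data):
    """Return 'auto_approve' when every policy threshold is met, else 'cab_review'."""
    ok = (
        policy_data["test_pass_pct"] > 70
        and policy_data["lines_changed"] <= 25
        and policy_data["commits_without_work_item"] == 0
        and policy_data["outages_last_month"] <= 2
        and all(policy_data["stage_results"])  # every pipeline stage green
    )
    return "auto_approve" if ok else "cab_review"

good = {"test_pass_pct": 95, "lines_changed": 5,
        "commits_without_work_item": 0, "outages_last_month": 1,
        "stage_results": [True, True, True]}
risky = dict(good, lines_changed=1000)  # big change set -> review first
```

The point of keeping this as data-driven policy rather than hard-coded logic is exactly what the talk stresses: any record on the platform (outage tables, GRC controls, CMDB relations) can become another input without rewriting the flow.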
bring in for the security part. So all those things are automated, and again, the developer doesn't have to go to ServiceNow to copy-paste data and fill out some nasty forms.

Cool. So this is the flow, and of course at the end it feeds back to Jenkins, in this case, to say: hey, your change has been approved, go ahead with the deployment. Now, I can imagine that during the pipeline something fails, the infrastructure deployment is not successful, whatever; then of course this pipeline will be stopped. If the change request is not approved, the pipeline is stopped as well, and the platform says: hey, you're not fulfilling all my validation rules, so the pipeline will stop.

Now, how do we feed that back into the pipeline again? Because we'd like to do that automatically as well. For that case I created a flow that basically checks the stages of the pipeline. If one of the stages, as we have seen here, is not successful or green, that flow gets triggered. So for anything that occurs here, that particular pipeline flow will be triggered. Let's have a look at what it's doing.

When I go to the flow here, as you can see, this is the condition on which this flow gets triggered, and that's the case when the stage is not in the successful state. The first thing it does is create a task; in this case it's an incident ticket. Then it goes to Jenkins and gets the console output, and it updates the created incident ticket with the output from the Jenkins console, so there's some information for troubleshooting in there. So me as a developer, I already have some information about the issue in the incident. But what I can do now, because we have the Jira stories and the commits on the platform, is sort out who's responsible for it. So what I do next is look up the particular Jira ID that is part of that build, with the story ID, and
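The failure-feedback flow described here, incident with the Jenkins console output, then a comment on the Jira story behind the failed build, can be outlined as below. All record shapes and helper names are hypothetical stand-ins for the Jenkins and Jira spokes; the real thing is a Flow Designer flow, not Python:

```python
def feed_back_failure(stage, get_console_output, create_incident, comment_on_jira):
    """On a non-successful stage: open an incident carrying the Jenkins console
    output and mirror it onto the Jira story associated with the build."""
    if stage["state"] == "success":
        return None  # flow only triggers on non-successful stages
    log = get_console_output(stage["build_id"])
    incident = create_incident(
        short_description=f"Pipeline stage '{stage['name']}' failed",
        work_notes=log,
    )
    comment_on_jira(stage["story_id"],
                    f"{incident}: stage failed, console output:\n{log}")
    return incident

# Minimal in-memory stubs standing in for the Jenkins and Jira spokes
incidents, jira_comments = [], []

def get_log(build_id):
    return f"console output of build {build_id}"

def make_incident(short_description, work_notes):
    incidents.append((short_description, work_notes))
    return f"INC000{len(incidents)}"

def jira_comment(story_id, text):
    jira_comments.append((story_id, text))

inc = feed_back_failure(
    {"state": "failed", "name": "Deploy", "build_id": 42, "story_id": "PROJ-101"},
    get_log, make_incident, jira_comment,
)
```

Passing the integrations in as callables mirrors the spoke idea: the orchestration logic stays the same whichever planning or CI tool sits behind it.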
service map. And as you can see, after the deployment there's an alert coming in, and that alert tells me that my CPU usage exceeded 89%. When this happens, and I'm responsible for this application, the first thing is to try to understand where it's coming from. As you can see, I have different buttons here that can help me troubleshoot. One of the things I can look at is the changes in this environment. As I mentioned, after a deployment I automatically trigger a discovery process; part of that discovery process looks for files that have been changed, and the second part builds up this graph again. As you can see here, it noticed that a new file, a new JAR file, has been deployed, and that's basically the application I was deploying during my deployment.

Now, as you can see, the time of the deployment matches the time of the alert, more or less; as you can see, it's about 20 minutes later. So I know there was a change, and now I'd like to troubleshoot. What I can do is go to the changes and open this one, because this is the change associated with the file that has been changed; then go in here, open the change, and we have all the details about this change. So now I can even go back to the developer and say: hey, I can see a correlation between high CPU on my application and the change you have made, so please start troubleshooting. The only thing the developer has to do is go to his commits, look at the particular part of his code that has been changed, and see if that caused the alert that was sent from the monitoring tools.

So as you can see, there's a lot of correlation between what I have deployed, which is this Java file, my Kafka client. Remember, I have two applications that I'm deploying: the client, which runs on the VM, and the website, which runs on the Tomcat server. In this case this is the Kafka client, which connects to the Kafka broker, but the Kafka client is spinning on the CPU, maybe because of the code changes the developer has done. So you can now associate the alerts with the changes, and all the information is basically here: based on this timeline, you can make the correlation between the file being modified, the change being created, and the alerts coming in. And of course, if an incident was created automatically, I can also see the incident ticket. This will help you speed up the troubleshooting: first, I can deploy very fast, but I can also react very fast if I have issues.

Now let's wrap up what we have seen in this demo. This is the full picture, and as I mentioned, I brought a lot of ServiceNow modules into this demo. We have seen ITOM Health and Visibility, we have seen GRC with its controls, we have seen ITSM with its change automation, and we have seen Sweagle with its configuration management, validating configuration against policies. So there are different modules working together to make sure that the deployments we want to do automatically can actually be done automatically. Because normally, when people talk about pipelines, they talk only about this part, but they forget those things that are very important to minimize the risk and speed up the process. If we don't have this in place, people still have to raise their change requests manually, maybe fulfill some controls before the deployment, and so on. Now we have everything in one tool chain: we have the dev tool chain, and what I call the governance tool chain, in place to make sure that everything works together in an automated way. And again, the developer doesn't have to go
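The timeline correlation walked through here, a file changed by the deployment, then an alert roughly 20 minutes later, is essentially a time-window join between change records and alerts. A rough sketch with made-up records (the platform does this correlation for you; this only illustrates the idea):

```python
from datetime import datetime, timedelta

def correlate(alert_time, changes, window_minutes=60):
    """Return changes whose close time falls within `window_minutes`
    before the alert -- candidate root causes worth troubleshooting first."""
    window = timedelta(minutes=window_minutes)
    return [c for c in changes
            if timedelta(0) <= alert_time - c["closed_at"] <= window]

changes = [
    {"number": "CHG001", "closed_at": datetime(2020, 11, 13, 10, 0)},  # 20 min before alert
    {"number": "CHG002", "closed_at": datetime(2020, 11, 13, 6, 0)},   # hours earlier
]
suspects = correlate(datetime(2020, 11, 13, 10, 20), changes)
```

The window size is the judgment call: too narrow and you miss slow-burning regressions, too wide and every alert implicates every recent change.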
to this side here; he can still work in his own environment. We have seen the change manager, and we have seen application support, all working in their tools of choice, so that everybody can see when something happens in the system, or something has changed in the system or the infrastructure, and they can trace it back to the original starting point.

Before I close: having all this data on the platform, we can also summarize it in nice dashboards, which are available with the DevOps module, where we bring all this data together and get a good overview of the performance of our deployments, commits and so on. And we can associate it with things like change acceleration, system health, stability and so on, so we can bring this data together as well.

Thank you very much, Dennis, and thank you, Dimitri. We hope this demo shed some light on how we manage end-to-end infrastructure as code with ServiceNow. Have a nice day. Thank you. Thank you, bye.

View original source

https://www.youtube.com/watch?v=GuLqos6kFlc