Best Practices guide for ATF Test Generator & Cloud Runner: ATF Walkthrough
Hi everyone, I'm Inini, and I have here with me Nick. This is a continuation of our earlier series, the ATF walkthrough for the Washington DC release. This time around we'll be taking you through some of the best practices for ATF Test Generator and Cloud Runner. I've got Nick, who is a software engineer on the team. Nick, give us a bit of an intro. Hi, I'm Nick. I work on Test Generator and Cloud Runner, so hopefully I can help with best practices.

Awesome. As usual, we might be talking about some things on our roadmap, so we're showing you this safe harbor notice just so you're aware that some of the things we discuss may change. This is our agenda for today: we're going to go through the general usage guide for Test Generator and Cloud Runner, talk about some best practices, and after that Nick will do a demo and show us some things around Test Gen and Cloud Runner.

I guess the first thing to talk about: we had a couple of customers mention that it takes too long for test generation to complete. The resolutions we have from our engineering team are, first, limit the test count per table; second, set an overall test cap so that you limit the total number of tests that are generated; and third, use the query-based filters. If you've seen our latest update, we have presets that are based on users, tables, and service catalog items. Those are the things we advise when your test generation is taking too long to execute.

The next guideline is pre-modification suite execution: as the explanation says, it's best to execute the entire suite before implementing any changes, and then, once you've made the changes, you execute it again. The next one is to understand that ATF is a regression testing tool, not a general-purpose testing tool. Even though it can do some of those things, its biggest strength is in regression testing your customizations. Then we'll come over to the demo later once Nick is ready.

Some more best practices: always regularly update your store apps. We push out updates to the store on a quarterly basis, and since Test Generator and Cloud Runner are available on the store, you can count on getting an update quarterly. Also, catalog item management: one of the best practices we share with customers is that using it can significantly reduce the noise in your tests. We also advise that you optimize the tests per table. Like I said earlier, there are some guidelines we usually give customers here, and one of them is: while you're testing your tables, prioritize quality over quantity (sorry, quality over quantity, not the other way around). You want to ensure that you're testing the right things, not generating as many tests as possible. Another one is an acceptable limit: it's generally acceptable to limit the number of tests per table to two or three. The whole point is to ensure you have comprehensive coverage when generating your tests. Additional ones relate to the test suite maintenance approach, which Nick will talk about once we get to the demo. I think this is the right time to ask Nick to show some of the things we've been talking about.

All right, sure. I might be retreading some of the same ground that was already covered, but the most important best practice is: keep your store app up to date. If you notice an issue, something you think is happening that shouldn't be, the first thing you should check is whether there is a new store app version, and if there is, update it and see if the problem persists. We quite frequently have cases where people encounter errors that we spend time debugging, which end up just being bugs that were already fixed in a newer store app version, so you can save yourself a lot of headache just by keeping up to date.

After that, the next best practice is: if you notice your tests aren't running or your test generation isn't generating, check the cloud user page and make sure that this value here is still present. If your user becomes unable to log in for any reason, which can be a script that requires the password to be reset or any number of other reasons, we clear that user from the property, and this will be empty. That's how you can tell that your user is no longer valid. You'll need to ensure that test generation and the Cloud Runner both have a valid user the whole time they're in operation; otherwise they won't be able to log into your instance and do anything.

Right. Nick, I have a question on this: what if a customer upgrades the instance, is this still retained? If you upgrade your instance in the middle of a generation or a test run, that will cause things to pause; you can't run tests or generate tests during an instance upgrade. And the store app isn't upgraded with the instance upgrade; you'll have to upgrade it on its own, separately, through the store app page. Oh, I see, makes sense.

So the next thing I want to get to: I want to show off presets, which are a little more complicated but are a pretty cool tool. As you mentioned, if you have too many tests and don't really know how to handle that number, the easy thing to do is just drop down the number of tests per catalog item and tests per table. This is because the way test generation approaches tests is essentially the same for every user; the differences arise between the different users' roles and how they interact with business rules, ACLs, and so on. Having a handful of users will be useful for capturing different roles and role groups, but you get diminishing returns: if you have 10 or 20 or 50 users, you're not really getting much more from the last two-thirds of them as you are from the first third. So if you are encountering too many tests in your test suite and want to get a handle on it, this is an easy way to do so without having to make too many decisions.

The next one is that you can limit the overall max count: if you're getting anywhere near this number, you can drop it down to a thousand, and when we reach a thousand successful tests it will stop generating. The more complicated but more powerful tool is the filter conditions here on the bottom. You can filter different use cases here: you go to this filter section, where you have dot-walkable fields, so you can do, for example, all applications whose name starts with the letter A, and then, boom, you've filtered all tables down to those that are part of applications starting with an A. This has now dropped the number of tables from every table on the instance down to 41, and it gives you a much more powerful way to control what is targeted by test gen.

So, another question: now that we have 41 tables here, and initially I think we said we wanted three tests per table, does that mean the total number of tests generated would be 3 * 41, that's 123 or so? So, what tests per table means is this: the filter condition on the bottom controls what is even considered. Not everything that you filter in here will necessarily appear; it's what we will try to make tests for, and we will not succeed on a good number of those. Some tables are completely inaccessible, so we can't even open them, because all the fields are read-only or something like that.
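The query-based filtering demonstrated above can be sketched conceptually as a predicate applied to the candidate table list. This is only an illustration: the table and application names below are invented, and in the product the filter is built through the condition builder UI, not in code.

```javascript
// Hypothetical illustration of query-based filtering in test generation:
// narrow the candidate tables down to those whose owning application's
// name starts with "A". All table/app names here are invented.
const tables = [
  { name: "incident", app: "Incident Management" },
  { name: "asset", app: "Asset Management" },
  { name: "alm_hardware", app: "Asset Management" },
  { name: "change_request", app: "Change Management" },
];

// Apply an arbitrary filter predicate to the candidate list.
function filterTables(list, predicate) {
  return list.filter(predicate);
}

// "all tables belonging to applications that start with A"
const candidates = filterTables(tables, (t) => t.app.startsWith("A"));
console.log(candidates.map((t) => t.name)); // prints [ 'asset', 'alm_hardware' ]
```

The real condition builder supports richer operators (starts-with, contains, dot-walked fields), but the effect is the same: shrinking the set of tables that test generation will even consider.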
And there are any number of reasons a test might not be generated. What this controls is that we will attempt to create this number of successful tests. So for agent assist recommendation here, we'll keep going: we could have two failing tests and then three successful, and the three successful end up in your suite, or we could just get three successful right off the bat. This is the number that caps the amount of tests we will try for each individual table, and this one caps the number we'll do for each individual catalog item. Oh, interesting, that makes it clear, thanks.

Yeah. And all these configurations you've done, you can now save: if I wanted to save this as "the A application tables" I can do that, then you hit submit, and now we have this preset here. If I refresh this page and come back, you will have your preset here, and if I go to the tables you'll see it still has that query I added; you'll also see it saved the number of test counts I had and the maximum test count that had been modified. Basically, everything in this form that you can modify will be saved as part of these presets, so you can create quick configurations if you have generations you would like to run frequently. I'm not actually going to hit start generation because it would take a while, but you can hit start generation here; it'll send off the request and start doing generation.

The next thing I want to show off is the browser orchestration queue. This is how you should track the progress of, and manage, any job you are running. It will have all types of jobs: test runs and test generations are mostly what you're concerned with, while test users are login requests for when you are setting the cloud user (we also do one before every test generation). You can go here and see the estimated progress the job has made, and if the browsers have disconnected and had to reconnect, you can see that too.
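The per-table cap described above (keep attempting until the target number of successful tests is reached, or give up) can be sketched as a small loop. The `generateAttempt` callback, the table name, and the numbers are stand-ins invented for this example, not the store app's real internals.

```javascript
// Sketch of the "N successful tests per table" behavior: keep attempting
// tests for a table until we have the target number of successes or we
// exhaust the attempt budget. generateAttempt stands in for real generation.
function generateForTable(tableName, targetSuccesses, maxAttempts, generateAttempt) {
  const successes = [];
  let attempts = 0;
  while (successes.length < targetSuccesses && attempts < maxAttempts) {
    attempts++;
    const result = generateAttempt(tableName, attempts);
    if (result.ok) successes.push(result);
  }
  return { successes, attempts };
}

// Simulated generator: attempts 1 and 2 fail, the rest succeed.
const fakeAttempt = (table, n) => ({ ok: n > 2, table, n });
const out = generateForTable("agent_assist_recommendation", 3, 10, fakeAttempt);
console.log(out.attempts, out.successes.length); // prints 5 3
```

With three successes required and the first two attempts failing, the loop runs five attempts, which mirrors the "two failing tests and then three successful" scenario described above.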
You can see the retry count here, and the last time a browser gave a heartbeat and showed that it was connected to the instance. You also have this info log, which logs as the job moves through different states; lots of the different types of errors that can occur on a job get logged here. This message right here, "job was ended at cloud", indicates that the browsers were torn down in our cloud infrastructure. That's not necessarily a bug; it just means they were torn down. You can see here that this job completed, and when the job completed the browsers were torn down, so that is normal. If you see this message in the middle of a test generation or something, it might indicate that some sort of error was hit; but if the retry count increases and the test generation progress continues, it's generally not an issue.

The other thing I'll point out is this cancel job button. If you have a test gen that's taking too long and you want to do something else instead, you can hit cancel job, and that will stop the test gen, or whatever else is in the table, from running. And when you click on the cancel job, does it also generate the "job was ended at cloud" message? Yeah, that message actually comes from the cloud infrastructure, so it appears when the browsers are actually torn down. If you hit that button and there were some bug where the browsers never actually got torn down, you would never see that message; it's essentially a confirmation that the job has ended, and it should be fairly robust. You can also see that the last check-in time should stop updating if browsers aren't connected. Right.

And the other thing I'm going to show now is... oh, I forgot to show it when I was on the test gen page, but let me just mention: catalog items tend to have a lot of use cases, and catalog item use cases can be a big part of people's instances.
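The health signals mentioned here (retry count and last check-in time) lend themselves to a simple staleness check. This is only a sketch of the idea; the five-minute threshold, the field names, and the job object are invented for illustration, not the store app's actual logic.

```javascript
// Sketch: decide whether a job's browsers look disconnected by checking
// whether the last heartbeat (check-in) is older than a threshold.
// Threshold and job shape are invented for this example.
function isHeartbeatStale(lastCheckInMs, nowMs, thresholdMs) {
  return nowMs - lastCheckInMs > thresholdMs;
}

const now = Date.now();
const job = { name: "test generation", lastCheckIn: now - 10 * 60 * 1000 }; // 10 min ago
const stale = isHeartbeatStale(job.lastCheckIn, now, 5 * 60 * 1000);
console.log(stale); // prints true (no heartbeat in over five minutes)
```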
They're very useful to have tests on, but there can also be a lot of them. So if, even after dropping the counts, you're overwhelmed and have to start making cuts, a good place to start is catalog items, just because of how many use cases they sometimes have. This isn't the same on every instance; it depends on the data present in the instance where these tests are created. So it's a good place to start, but it's not a rule of thumb where you can always just cut down catalog items. Oh, awesome.

So now I'm going to show off a test suite that I've already generated, and I'm going to show you how I suggest you approach debugging these test suites. I created this test suite earlier on incident. The idea behind these tests is that they are regression tests: they capture how the instance should be behaving. The way we do that is we open a form and assert all the states and values present on the form; we make a change and assert all the states and values again; and step by step, using data we found on your instance, we keep doing that. Then we submit, open the form, check the states, set some more values, and submit again. This allows you to implicitly walk through a lot of different ACLs and business rules, because we're attempting to exercise the form just how a user would, and by using real data on your instance we're hoping to capture real user usage patterns.

So now, let's say I am ready to do regression testing on incident and I would like to run this test suite I generated earlier. To run it, I hit run in cloud. This will start up in the background, and these tests will now send out a request; if we actually checked the browser orchestration queue right now, you'd be able to see this job queued there, when the request was sent, and so on. We just wait for the browsers to connect and run the tests; it'll just take a second. Good.

I think one thing we should also mention: just the way the client test runner lets you manage all the tests you're running on your client device, the browser orchestration queue does the same for the cloud executions. They're similar, but they focus on different execution methods: the client test runner focuses on the client device, while the orchestration queue covers all the jobs and processes being run on Cloud Runner. Exactly.

And so I'm just going to cancel the rest, because it's already... oh no, there's an error. So now I would like to go debug this error I've encountered. The first thing I suggest people do (because realistically, if you're debugging failures on a generated suite, there'll be a lot more than four tests) is group by output and put similar outputs next to each other. By sorting the list by output here, you can see it's grouped these two errors next to each other, side by side. I have two errors that both occurred on the same table, incident, and they have very similar errors and stack traces, so they're very likely to be related. Now the first thing you would do is go investigate customizations you have on that table and see if anything has changed. In this case, all of a sudden there's this business rule I don't recognize, which is very surprising. You could go and investigate, and you see: oh, somebody created a business rule that throws an error, with a piece of code that doesn't work. So now you've triaged this use case and you know it's a real failure. What I would suggest is that, in all likelihood, these two errors are almost certainly from the exact same cause: they're both on the same table, and they both have almost the exact same stack trace. So you can just filter out those errors and move on to the next one, and step by step walk through the suite: grouping errors that seem similar, analyzing one of them, putting it in the "I'm going to fix it later" pile if it's a problem, and ignoring it if it's not. Step by step you make your way through the suite like that, and that's how we recommend getting through the generated suites.

Yeah, but when you filter out and go back to run the test, doesn't it generate the same error? Yeah, filtering out here is just for my benefit, to put it to the side so I don't see it anymore. It is possible to allow errors on any given test: if you go to a test result (oh, this one was canceled), if you go to a failing test result, you have the "add all errors to warning" and "add all errors to ignored" options, and you could ignore an error if you don't care about it. But in this case the error was not intended to be thrown, so it's something we'd want to deal with, because it stops form submission. The filter-out is just meant to help you parse through the list without repeatedly looking at the same thing. Sometimes the errors won't be exactly the same, and the stack traces will be slightly different; if that's the case, you can do something more like a "contains". Let's say we don't want to look at any client errors: we could do "step contains: root cause was" such-and-such, and then filter like that when the errors aren't an exact, perfect string match, or when it's not an error but something like "failed to set field values on" some form because of some ACL. Yeah, that makes sense.

And also, it was mentioned in best practices, but I don't have it recreated here: flappers sometimes occur in large generated suites.
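The triage workflow walked through above (sort failures by output so near-identical errors sit together, then set aside groups you have already analyzed with an exact or "contains" match) might look like the following sketch. The failure records and error messages are made up for illustration.

```javascript
// Sketch of grouping test failures by their output message, plus a
// "contains" filter for setting aside errors you have already triaged.
// All test names and messages below are invented.
const failures = [
  { test: "Incident form test 1", output: "Business rule threw: foo is not defined" },
  { test: "Asset form test",      output: "Field state mismatch on model" },
  { test: "Incident form test 2", output: "Business rule threw: foo is not defined" },
];

// Sorting by output puts likely-related failures next to each other.
const sorted = [...failures].sort((a, b) => a.output.localeCompare(b.output));

// Set aside everything whose output contains a string you've already triaged.
function setAside(list, substring) {
  return list.filter((f) => !f.output.includes(substring));
}

const remaining = setAside(sorted, "Business rule threw");
console.log(remaining.length); // prints 1 (only the unrelated failure is left)
```

The "contains" match is what lets you group failures whose stack traces differ slightly but share a root cause, as described above.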
When you generate a large number of tests and create a very big suite, sometimes there are field values that will differ depending on the day the form was submitted, and sometimes there are things with some sort of hash implemented, so the hash changes depending on your session. These things are very hard to detect at generation time; we can't always tell that a given test will not always pass. So it's a good idea, a day or two after you've generated the tests, to run them: anything that fails right away, without your having made any changes to your instance, is probably not going to be a very helpful test, and you should probably either delete those tests or just remove them from the suite.

Oh, and one more thing: since test generation generally produces a large number of tests, a lot more than four, it's not very feasible for a human to go through all these tests and "fix" them. So when you run your tests and notice an issue you want to look at, make the change, fix whatever the issue was, and get your instance into the state you want it to be, but then don't update the tests. You shouldn't be maintaining 4,000 tests created by the test generator; instead, just kick off a new generation and let that be the new representation of your instance.

The other thing is that sometimes, over time, the behavior of the instance just changes a little bit. This can be because example records we were relying on no longer exist: a user we impersonated is no longer in sys_user, or a specific record that was inserted and that we reference is no longer there, or something like that. We pull all our example data from real usage records, so if someone sets a reference value in a reference field, we just do the same, and if the value we're referencing isn't there anymore, the test will fail.
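One way to apply the flapper advice above is to rerun the suite on an unchanged instance and flag whatever fails: with no real change, those failures are likely date- or session-dependent tests rather than regressions. The result records below are invented for illustration.

```javascript
// Sketch of flapper detection: run the suite again without changing the
// instance, then flag tests that fail on the rerun. With no real change,
// those failures are likely flappers (date- or session-dependent).
function findFlappers(runResults) {
  return runResults.filter((r) => r.status === "failed").map((r) => r.test);
}

// Simulated rerun on an unchanged instance (invented data).
const secondRun = [
  { test: "Incident date field test", status: "failed" }, // asserts today's date
  { test: "Incident priority test",   status: "passed" },
  { test: "Session hash test",        status: "failed" }, // session-dependent hash
];

const flappers = findFlappers(secondRun);
console.log(flappers); // the candidates to delete or remove from the suite
```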
When that value is removed and the test fails, that's not necessarily indicative of a bug; it's just that a value has been removed. But it does mean that, over time, very old test suites can become inaccurate representations of the instance's behavior. So if you have a test suite that's been sitting in the background for six months and you haven't touched it, it might be a good idea to regenerate a new suite before you go and do your regression testing. Generally, the closer you generate your suite to when you actually make the change, the better: you do the generation, make the change, and then run the suite, and that will give you the best results from test generation.

Yeah, that's a very key one, because I remember my manager calls these generated tests disposable tests: once you're done running them, you can always dispose of them and generate them again as much as you want. Yeah, and using presets you can save things like "I want to do regression testing on a specific application" or "I want to target specific high-volume tables". If you do not provide any criteria under Advanced, we just do as many tests as possible, because we assume you want to cover the widest possible number of tables on your instance; but people have a lot of tables, so sometimes that's too many tests to deal with. If that's the case, we've given a couple of ways you can try to get to a more manageable number. Yeah, I personally believe the presets are very good when it comes to test gen, because you can always generate tests again, and once you've disposed of them you can go back to your preset and regenerate, like Nick said. So it's awesome that we have that in Cloud Runner right now. Yep, and I think I have run through everything I wanted to demo today, so that is the end of the best practices demo.
https://www.youtube.com/watch?v=3qmCtiqPRH0