Teaching ServiceNow to read using AI-powered Document Intelligence

Unknown source · May 12, 2024 · video

- Hello and welcome, everyone, to this lab about teaching ServiceNow how to read. We'll be using our AI-powered Document Intelligence to process unstructured information and get it back into the ServiceNow instance. My name is Daniel Draes. I'm a principal product success manager at ServiceNow in the area of creator workflows. We cover pretty much anything about App Engine, Automation Engine, and the ERP products that we have; part of that is Document Intelligence. I have been with the company for more than 10 years by now, in various roles and teams. But lately, as a product success manager, I help customers start using the products they actually buy from us and get to their ROI as fast as we can, hopefully. I am supported today by my colleague, Dale. Dale, please introduce yourself.

- Hi, yeah, sure, Daniel. My name is Dale Dunkerley. I'm also a principal product success manager; I work with Daniel in the same team as a product success manager for ERP workflows. My focus is a little bit the same as Daniel's, but I focus a bit more on the App Engine side, and I have quite a heavy involvement on the Doc Intel and RPA side as well. I've been doing ServiceNow work for a very long time, almost as long as Daniel, but I left ServiceNow initially, was a partner and a customer, and have recently rejoined ServiceNow, so I've got a full range of experience. Anyway, we're not here to do introductions. Daniel, do you want to crack on with the lab?

- Yeah, of course. So in this lab today, we have actually prepared three exercises for you to learn how to set up Document Intelligence and give it some basic training with just a couple of documents, so you can learn how it works, how to set it up, and how to train it. Then we will connect this trained use case into an application that we pre-cooked for this exercise and get data from these unstructured documents into a real application.
And lastly, we'll wrap it up by showing you what automated versus manual processing does and how that actually works. We'll start with a quick overview of what the exercise you'll go through for roughly the next hour will look like. So the scenario is, starting in the middle of that green box, the ServiceNow instance: we have a custom expense app. We are working for a fictitious client that wants to use expense claims management within ServiceNow. Their employees are used to submitting their expense claims in a paper-based format, PDF files essentially. So we'll take these PDF files and run them through Document Intelligence, which uses use cases, fields, and tasks; you'll learn what that means in just a bit as we go through the lab. Document Intelligence will use machine learning, which you see there in that bluish box on the right-hand side, which is actually off instance. So that will be processed outside the ServiceNow instance, and the results will be brought back into your instance. If that sounds confusing, never mind; we'll get there as we go through the exercises. We just wanted to let you know what the overall setup is. All you need to know is: we have a custom app, we have some PDFs, and we want to get data from those PDFs into the custom app's tables. With that, let's go to Exercise 1. Dale, won't you lead us through that?

- Sure. So let's talk about our exercises as you've just described them. The first exercise is going to be setting up that use case and doing some basic training, where we'll become quite familiar with Document Intelligence: its module, its overview, and the different features that it has. We'll then set up a new use case and start extracting values into individual fields, and then start to look at list fields too, because it's not just about initial form data; a lot of the information we'll want Document Intelligence to look at is rows and tables of data.
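To make the idea of fields and list fields concrete, here is a purely illustrative sketch (this is not the actual ServiceNow Document Intelligence API; the field names and result shape are made up) of what an extraction result for one expense-claim PDF might look like: each target field comes back with a predicted value and a confidence score from the machine learning service, and table fields return one entry per extracted row.

```python
# Hypothetical shape of a Document Intelligence extraction result.
# Every name here is illustrative, not the real product API.
extraction_result = {
    "document": "expense_claim_0001.pdf",
    # Header fields: one value plus a confidence per field.
    "fields": {
        "employee_name": {"value": "Jane Doe", "confidence": 0.95},
        "claim_date":    {"value": "2024-03-18", "confidence": 0.88},
        "total_amount":  {"value": "142.50", "confidence": 0.91},
    },
    # List (table) fields: one entry per extracted row.
    "line_items": [
        {"description": "Taxi",  "amount": "42.50",  "confidence": 0.84},
        {"description": "Hotel", "amount": "100.00", "confidence": 0.90},
    ],
}

def low_confidence_fields(result: dict, threshold: float = 0.90) -> list[str]:
    """Return the header fields a human reviewer should double-check."""
    return [name for name, field in result["fields"].items()
            if field["confidence"] < threshold]

print(low_confidence_fields(extraction_result))  # ['claim_date']
```

With a structure like this, the manual-review step in the lab amounts to looking only at the fields the model was unsure about.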
Then we'll show you just how fast Document Intelligence can really pick things up, and the confidence it has in getting the correct values from the structured documents we'll be providing. But anyway, not everything always goes according to plan. So, Daniel, do you have any tips for the first exercise?

- Yeah, just a couple of callouts here as you go into that first exercise. Machine learning, as we showed on the previous slide, is a shared infrastructure. That means everything you run through the machine learning process will be offloaded, or pushed out, from your instance to another environment. In real life this is in the same data center, so you don't need to worry about data shipping across regions or anything like that, but it's still off your instance. So there is some natural delay as we push it to the machine learning environment and get the data back. Typically, in this lab environment, it shouldn't take more than 30 to 60 seconds to get your responses, but yes, there are some delays. The lab guide will tell you where that is, and if everything goes wrong, we have pre-canned task records, or document tasks, with the training data already loaded, so you can carry on with your lab exercise in that case. Just keep that in mind. Good luck, and have fun with the first exercise.

- Welcome back, everybody. Hopefully you've managed to get through the setup and understand how Document Intelligence really works, and the overall picture of what you're looking at. In this particular exercise, we're going to be talking about connecting Document Intelligence to an app. In this one, we'll experience just how easily Document Intelligence integrates into the ServiceNow platform, and specifically into apps that you've already built, where it can feed in the data it has already learned and extracted from those particular PDFs.
We'll also see how we can streamline those integrations into automatically created flows and really start to generate the data and the information that is required from Document Intelligence, just basic reading. And then we'll enhance those prebuilt flows to tailor them to any specific requirements, like lists and other elements that were probably already part of the PDF. Anyway, Daniel, as we're getting into this harder type of scenario, are there any other callouts or anything you'd like us to know?

- Sure, absolutely. So when we create these flows, that's basically driven by Flow Designer, our workflow engine. These flows are pre-generated for you, or you will be generating them as you go through the lab exercise. But these flows will be inactive by default. The lab guide will tell you to activate them, so don't forget that, otherwise your exercise will not work. Just keep that in mind: you need to activate these flows as they are generated. We don't want to impact a productive customer environment with a new flow that nobody has ever tested; that's why we do it this way. And a quick reminder, yes: what goes to machine learning and comes back, you have heard this before, it's a shared infrastructure. You will see some delays as you go through that, same as before. In this case, we did not pre-cook any of the training data for you, simply because you have to generate the flows and the data around them; that is more complicated than the first exercise. So keep that in mind, and enjoy the second exercise.

- Hopefully you can clearly see now just how machine learning and Document Intelligence integrate into apps we already use within the system, and you can start to see the data populating into them. However, as you can tell, it's a very manual process at this particular point. So how do we move from that manual process to an automated one?
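Conceptually, the mapping step such a generated flow performs is simple: copy each extracted value into the matching column of the custom expense table. Here is a minimal, hedged sketch of that idea; the column names (`u_employee`, `u_amount`, and so on) and the dictionary shapes are invented for illustration and are not the real generated flow.

```python
# Hypothetical field-to-column mapping, like what a generated flow applies.
# Column names are made up for this sketch.
FIELD_MAP = {
    "employee_name": "u_employee",
    "claim_date":    "u_date",
    "total_amount":  "u_amount",
}

def to_expense_record(extracted: dict) -> dict:
    """Build a row for the custom expense table from extraction output.

    Fields the model did not extract are simply left off the record,
    mirroring a flow that only sets columns it has values for.
    """
    return {column: extracted[field]["value"]
            for field, column in FIELD_MAP.items()
            if field in extracted}

row = to_expense_record({
    "employee_name": {"value": "Jane Doe", "confidence": 0.95},
    "total_amount":  {"value": "142.50", "confidence": 0.91},
})
print(row)  # {'u_employee': 'Jane Doe', 'u_amount': '142.50'}
```

Enhancing the prebuilt flow for list fields, as the exercise does, amounts to applying the same kind of mapping once per extracted table row.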
In this particular exercise, we'll get to look at Recommendation Mode and Autofill Mode. We'll be setting some confidence levels: if the confidence is at a certain level in certain areas, how do I make sure the system populates the value, saying, "I'm really confident with it, I'm going to continue on"? We'll also get to those thresholds where I can look at straight-through processing: I've got a basic level of confidence, but I want it to process as fast as it can if it's above a threshold. However, as we are using thresholds and confidences, and we've only got a newly trained lab, I'm sure there's something we probably need to think about or watch out for. So maybe Daniel can give us a bit more of a heads-up on that.

- Absolutely, Dale. So as mentioned, there are thresholds, right? We're talking a bit here about AI and machine learning, which can be a bit fussy at times. If you put the exact same task twice into the exact same AI, it might not come back with the exact same result. You might be used to that from ChatGPT: you can ask it the same question and get different responses every time. It's a little bit similar with our AI, not as inconsistent or as wrong as ChatGPT might be, but sometimes it comes back with a confidence of 80%, and the next time it comes back with 85% on the same document. So for the thresholds in the exercises, we tried to tailor them to what we saw as we tested it. If it doesn't work in your case, try lowering the threshold and just run it again. Getting to the point where it actually picks up the autofill and the straight-through processing is a bit of trial and error. And also, purposely for this lab, we have put in lower thresholds than we would recommend anyone use in a production setup.
If you want automatic processing of these kinds of documents, we would definitely recommend that the confidence level of the AI be well above 90%, so you can be confident that it really is good data being extracted. Now, in this lab with only three, four, five documents, some of the values will reach that 90% and some of them will be just under it; obviously, that's why we try with lower numbers. Just keep that in mind if and when you go for this in your real-world deployment. So go out, do the exercise, and we'll see you back for the wrap-up in just a few moments.

- Hopefully you got through the entire exercise, even with a couple of those callouts from Daniel and ourselves. Essentially, what we'd like you to understand is just how powerful the Document Intelligence module can really be. Once you've trained it up, and you've got the right level of data and the right level of machine learning and confidence, it can process information faster than any human and actually put it into the system. But there are three takeaways we really want you to take away from today. First, there's the admin experience: simple administration with a unified UI, which ultimately means you can see what is happening with Document Intelligence and how to set up the use cases, the fields, the matching, and all of the confidence levels. Second, you can see the AI-powered extraction: how quickly it can learn and what the confidence level of the information is as you process it. You can clearly see that the more structured the data, the better and faster it will probably learn; the more unstructured the data, the slower it might learn, but it will learn, and it learns very quickly. Third, hopefully you've seen that the integrations are streamlined: you can just point and click into things like ServiceNow apps and the custom tables within them.
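The threshold behavior discussed in the last exercise, where manual review, Recommendation Mode, and Autofill with straight-through processing are gated by confidence, can be sketched as a small decision function. The mode names follow the transcript; the specific cutoff values are illustrative (the lab deliberately uses lower autofill thresholds than the 90%-plus recommended for production), and this is not the actual product configuration API.

```python
# Illustrative confidence gating, not the real Document Intelligence config.
AUTOFILL_THRESHOLD = 0.90   # production guidance: well above 90%
RECOMMEND_THRESHOLD = 0.70  # below this, fall back to manual entry

def processing_mode(confidence: float) -> str:
    """Decide how one extracted value should be handled."""
    if confidence >= AUTOFILL_THRESHOLD:
        return "autofill"    # straight-through processing, no human touch
    if confidence >= RECOMMEND_THRESHOLD:
        return "recommend"   # value suggested, human confirms it
    return "manual"          # human enters the value from the document

print(processing_mode(0.93))  # autofill
print(processing_mode(0.82))  # recommend
print(processing_mode(0.55))  # manual
```

This also shows why the lab suggests lowering a threshold and retrying: with the model returning 80% on one run and 85% on the next for the same document, a value can land on either side of a fixed cutoff.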
Daniel, is there anything I'm missing that you would like to say as part of wrapping up and thanking everybody in the lab?

- Well, I think all I need to say is thank you very much for joining us today. Go through that lab. We hope you really learned how Document Intelligence works, gained some experience with it, and had as much fun as we had preparing the content for you. Thank you very much.

View original source

https://players.brightcove.net/5703385908001/zKNjJ2k2DM_default/index.html?videoId=ref:CCL1167-K24