UX Research and AI with Jessa Anderson
"…and this is what human-centered AI is all about: really helping to build that collaboration between the technology and the human. Their human expertise plays a very critical role, and we need them to put human eyes on it and review it. We need to help guide them on how they can best get started and interact with AI, to guide those behaviors."

Welcome to another episode of Tech Bytes. I'm Bobby Brill, your host for this episode, but more importantly, that is Jessa Anderson, manager of AI, machine learning, and UX research here at ServiceNow. Jessa has spent over a decade researching human behavior and motivation, the backbone of user experience, and in this episode of the podcast we'll talk about how UX research and user experience tackle just how AI is being used.

So Jessa, UX research is a giant amount of exciting, important, groundbreaking thought, and now you add AI, which is giant and exciting and groundbreaking. How do you meld those two exciting things together in a giant, groundbreaking way?

No, we just take it one step at a time. But it is a really exciting area to be in right now. What we can do is rely on a lot of the existing UX research tools and skills we already have and apply them to this newer area. You want to really understand what that user experience is and what their needs, unmet needs, pain points, and blockers are, and even forecast future needs. UX researchers are often very inquisitive and empathetic, so we'll be curious and ask questions. One of the big things we like to do when coming up with a generative AI experience is to start with the why: are we solving a real problem for real people? Is this something that's actually needed?

Right, and that's the bigger question, because UX, and correct me if I'm wrong, is how we do something, how we utilize something. It's everything from how a button works on a washing machine to an interface, correct?

Yeah, exactly. So any type of experience, how you're using it and interacting with
it. We want to make sure it's tasks and things that will actually benefit humans.

How does that question get asked deeper? You were talking about being curious and learning about gen AI.

So I think there are two parts to that. One is the learning about generative AI: we really encourage the UX researchers who are coming on to learn about the technology and the capabilities. AI has been around for a long time, many decades; it's not net new in that sense. Generative AI is a different capability, and we've seen it hit a tipping point when ChatGPT exploded onto the scene: suddenly everyone was talking about gen AI. But the reality is, when we talk to most of our users today, especially the end users of the technology, they don't differentiate between these different AI capabilities. They just want it to solve a real problem for them, or help them work better, or answer their question faster, whatever their use case is. They just care that it's getting done better and faster.

Well, that makes sense, because a lot of this user experience is the end user, not the technical person, not the engineer, for lack of a better term, who's making this giant machine work. It's the person at the other end, in HR or customer service, who has the button that says AI on it: how is this going to make the job better for me, how am I going to serve my customer better now? Correct?

Yeah, and that's why personas are so important in user experience research, always. At ServiceNow we're in a unique position, because yes, there are those folks you talked about, the data scientists, machine learning engineers, etc., who have deep expertise in this space. However, we also have these more technical personas who have to implement the AI solutions we give them. That might be, at our customers' companies, an admin role. They don't have that deep AI expertise and knowledge; they have great ServiceNow knowledge or
technical skills, but they're not machine learning engineers. That is a persona where, when we come to them, we have to help them be enabled and empowered to implement, set up, configure, deploy, monitor, and maintain these AI solutions in a simple, empowering way where they're really put at the center. And then there's the one you brought up, the person who's consuming the generative AI in their day-to-day.

Well, that makes sense, because you had mentioned that we have the expertise in ServiceNow, and our customers have a very heavy-duty knowledge of their own customers; they understand that even better. And we're building these tools to leapfrog and build this even better. Would that be the right way of looking at it?

So basically, our customers are managing and maintaining their ServiceNow platform. They're the ones who then implement different solutions, and one of those would be generative AI: ServiceNow's Now Assist. Let's say they wanted to bring the case summarization feature to their agents. We deliver it, and we then give it to their admin, who has to set it up and enable it. So there are certain considerations we bring in for that user who's setting up and configuring the AI solution: a level of transparency and explainability where we help them peer into the black box a little bit, so they know what they're doing, feel in control, and have appropriate guardrails set up. But those are very different considerations than what we would bring to, let's say, the agent who's using summarization. They don't necessarily need to know the same amount of information or understand what was happening on the back end to the degree that admins may; admins need a little more visibility into that. But the agent definitely has their own needs. For example, they would need to know that content came from AI, not another human, because they're going to look at it potentially in a different way. They need to be able to give it
explicit feedback, beyond just a thumbs up or thumbs down: can they edit it or correct it if it was wrong? They may also need certain guardrails in place around when it's a good time to use it and when it isn't. Things like this are things we can bring into the experience, but the way we bring them in is going to vary based on who that persona is and what their need is. Another persona we could talk about is the requester. Let's say it's you or me as an employee who goes into a portal: oh, I need a new laptop, I spilled coffee all over mine.

Much more common than you think it is.

So much more common than you think. And so that person coming in can use AI to get that pretty much automated and completed for them. What we would bring in for them is, say they search "new laptop": AI would be behind the scenes, already knowing what their current laptop is, and could prompt them with "hey, what happened?" and maybe give them a few options. So there are a lot of different ways they could interact with AI, too, that are different from the agent who's using it in their day-to-day workflow for long periods of time.

That makes a lot more sense, where it's time-consuming and AI has already asked the right questions and essentially solved this; it's just ticking the boxes, so to speak.

Exactly. These are all nuances we bring in, but ultimately, what we do across all of them is promote and elevate the human and the human experience.

Okay, well, I want to ask you more about personas, because that's always the secret sauce, I think, when it comes to UX. What are some questions that you think are important, that matter, when building out personas?

Absolutely. So the first is to help differentiate between those different personas. Is my user ultimately responsible for setting up or configuring an AI solution, or is my user expected to use these AI-generated outputs in their daily work? When they're in their daily work
we have to think about it in a different way, versus: is this someone who will come very infrequently to interact with AI-generated output?

And that sounds like a big one. If they're going to use it every day, they've learned the nuance, and if they're not using it every day, it's like, do I care that it's AI or not?

Yes, we have to help them. If it's in their day-to-day work, let's say it's an agent working on cases, and let's say it applies for 80% of the cases they work on, they could use AI to help improve their productivity, and that would add value and save time for them. So they have to learn how to best use it, when to best use it, and also be able to ensure that it's giving them value and that they trust it. It's really important that we get that right, because they have to keep using it. If you think about us as consumers: if there's an app that's using AI and you don't like what it's doing, you can exit that app, or not use it, or use it for five minutes a day. Versus, if you're using it for seven or eight hours of your day, that experience is really going to become critical to get right, to empower them and make sure they're getting value from it. I'm not saying it's not important to get it right for requesters who use it less frequently; that can also be a really critical one to get right, because it's a bit more make-or-break. You've got to get it right the first time, or they may not try it. Maybe they come to a portal and this new AI thing pops up and says, "hey, let me try to help you," and they think, all right, I was going to come here and call a human agent, but I'll try this because it came up. Getting it right that time can be really important, too, because if we don't, they won't try it again.

That's almost like an inherent marketing problem, but really it's usability as well.

It absolutely is usability, and making sure that we are helping people really have a good
understanding of the capabilities, and the competencies, of the AI, so they know it'll help them solve a problem, when to use it, and build that positive experience.

That kind of brings up one of those underlying questions, and I know you have a different way of looking at this from the UX perspective: what happens when AI is wrong? Is that good or is that bad? I know we can go deep on this, but I know you've got a different spin on it.

Yes, this is my spiel. One of the other big questions you want to ask, no matter who your persona is: what are the risks or consequences if the AI is wrong? And it can be wrong in a variety of ways. The reason it's so important to think about this, before I go into those nuances, is that there may be industry-specific cases, in healthcare, let's say, where the risk if something is wrong is much higher. We're seeing that AI is assisting in interpreting x-rays, for example. That may be a use case where, if it's wrong, if it misses a tumor or cancer, the risk and consequence is much, much greater than in, let's say, our agent case example, where it maybe just missed one part of a case note.

But is being wrong helpful? I don't know if you can answer it in terms of helpful or unhelpful.

What people need is to have realistic expectations of how AI performs in its current state. The reality is that generative AI is at a very early stage, and it can be inaccurate. More broadly, AI as a whole, the umbrella term, looking back over decades of classic AI and machine learning as well, is not always correct; it's probabilistic. But with generative AI, which still falls in that probabilistic bucket, the concept of what's accurate or inaccurate gets a lot harder. That's because humans often like to think, hey, it's just right or wrong, and we were just having that conversation, what happens if AI is wrong, but in reality, when we're thinking about
generated content, there's a huge range of ways it can be wrong. At one extreme are hallucinations, where it can literally just make up information or responses that may not accurately represent reality. There are some real risks with that, because it can often present it in a really confident way. We've probably seen news and headlines about this.

I know, that was a big topic of discussion, the one about that lawyer who went and presented, right?

Right, and the precedents he was citing were made up. He had asked ChatGPT to help him with his case, and it made up the prior cases.

It sounds good; if it sounds like legalese, it's legalese, and therefore it's legal, right?

Right, so there's a real risk of that, and again, where those risks and consequences lie matters, too, for your persona, industry, use case, etc. However, it can also just have missing information. If we're talking about summarizing, let's say we had a long typed-out chat and we wanted it to summarize everything we talked about: it may not include information that's important; it may just leave out information that we needed. It could also misinterpret information. An example of that is an HR use case: let's say someone's putting in for parental leave, and the AI summarizes what they put in, but it misinterpreted it and marked them as the non-birthing parent when they were actually the birthing parent, and there are a lot of different leave options and benefits tied to that. It misinterpreted. That's why it's so critical to have what's often referred to as a human in the loop: making sure there's always a human there to review the output, and that they know they need to look for things that could be inaccurate, imperfect, or missing in some way.

From a UX perspective, what are things that researchers can contribute to make these experiences better?

Yeah, great question. A lot of things that we
already know from years of work with AI and human-centered AI we can look to bring in, like some of the factors and terms I brought up: transparency, feedback, contestability, explainability, guardrails, making sure there's human oversight and a human in the loop. What these exactly look like can vary based on use case, persona, etc., and you can dive really deep if you want to, because transparency alone can be broken out into numerous different things. But the point is, it's always about building that trust and elevating the human. Some of the things we can do when we think about that: right now, at least, we know that setting expectations is critical. Most people are coming in not understanding that the AI can be imperfect, so we need to help them know that it can be imperfect, and that, in fact, their human expertise plays a very critical role, and that we need them to put human eyes on it and review it. We need to help guide them on how they can best get started and interact with AI, to guide those behaviors and let them know: this is when you should use it, these are the good times to use it, this is a task the machine does really well to help you, versus this is something that you'll be better at as a human. Because the reality is, and this is what human-centered AI is all about too, it's really about building that collaboration between the technology and the human. Another thing to do is help them get value from it. Maybe when they first get started they're not seeing as much value, but over time, as they learn how to interact with the AI, and as the AI and the technology improve, they may get more value from it as well.

Right, that's the one I think we've all seen: everyone's dabbled with ChatGPT and some of those other ones, and it's like, oh, it's not as scary, or, oh, I see where I actually need to prepare a little bit differently, and I need
to actually think about what I'm doing, and not just bang on the keyboard and hope for amazing answers. I actually have to put a little forethought and thinking into this.

Yeah, I think that's a really excellent point, the exact one. I think sometimes when people come to, let's say, ChatGPT, they bring in their existing mental model, which might be from more traditional searches or traditional chats, so it's keywords, and short. But what we're finding is that putting in a well-thought-out prompt will actually deliver better results. There are some awesome studies done by the Nielsen Norman Group and others looking at the differences in prompting and how people learn to do it, because some people, for example, prefer to start with a smaller prompt and build off of it, but ultimately that takes them more time versus just having a well-thought-out prompt. You'll see websites dedicated to helping build prompts and providing prompts as well. But that's a really good example of how we have to help set those expectations and guide people to know how to interact with AI to get value from it.

So that's really a key thing for usability research, and this might be something we talk about later on a whole other episode: how do you get people to be better at their own usability, or their own thinking? Is that the right way of describing it, or is that a real question?

It's definitely a real question. I think of it as how we build their new mental model. This is a different way of interacting with AI, with machines, with technology, than they're used to. People come in with an existing mental model that may not best align. They might be used to a traditional Google search, where you don't want to put in a long sentence, because it's just not going to give you the results you need; giving a few words might be better. So if you wanted, let's say,
to know the score of the Lakers' most recent game, and who the best player was: if you go to Google, you can't type that in. You have to just type in something like "Lakers game score," and then probably go click to find a breakdown of the players. That's not very efficient. But if you use generative AI, you could give your full prompt, like "for the most recent Lakers game, I want to see the score and who the top-scoring player was from both teams," and you can get that result in a single sentence. So it can really help you save time, but you need to learn that that's how you should be asking the question, and not come in and just say "Lakers last score."

Going forward, transparency seems to be the key. What is your feeling and your thinking when it comes to transparency with AI?

It again falls into the broader human-centered AI principles: how we elevate the human and build trust. Transparency across the board, regardless of who your persona is, is absolutely critical. At a minimum, our approach is that users need to at least know they're interacting with AI and not a human; that already starts to set expectations for them. We need to make sure, especially in this early stage, that we are encouraging and enabling feedback from users across the experience, because it's going to make the technology better and ensure we're getting that human oversight and transparency. In terms of usability, we have to balance, of course, when that fits in and how to do it, but at least give the option for that feedback. Contestability is another part. And when you work with your teams, because all of this happens as a team (we would never just be doing research alone, without an amazing cross-functional team), if you see a risk, you need to speak up. I think people in UX need to speak up and be the voice of the user, and show those different things that other folks may not be thinking about or may not have put that lens on yet, because we
can all be so in the weeds that we forget who our end user may be, and that they may not be aware of certain things.

So there you have it: a great starting point, or a primer, for getting the best experience when using AI. If you've enjoyed this episode, please hit subscribe on whatever podcast platform you're listening on so you never miss an episode. And if you have any questions on working with the ServiceNow platform, head over to docs.servicenow.com, or head over to YouTube and check out the ServiceNow Community channel. Thanks for listening.
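Editor's note: the episode contrasts search-style keyword queries with a full, self-contained prompt (the Lakers example). A minimal sketch of that idea in Python; `build_prompt` is a hypothetical illustrative helper, not a real ServiceNow or LLM API:

```python
# Sketch of the "well-thought-out prompt" idea from the episode.
# build_prompt is a made-up helper: it just bundles the task, any
# context, and the desired output format into one request string.

def build_prompt(task: str, context: str = "", output_format: str = "") -> str:
    """Assemble a single self-contained prompt from its parts."""
    parts = [task]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Answer format: {output_format}")
    return "\n".join(parts)

# Search-style habit: terse keywords that force follow-up clicks.
keyword_query = "Lakers game score"

# Generative-AI habit: the whole question, answerable in one shot.
full_prompt = build_prompt(
    task=("For the most recent Lakers game, give me the final score "
          "and the top-scoring player from both teams."),
    output_format="One sentence.",
)

print(full_prompt)
```

The point of the sketch is the mental-model shift Jessa describes: the prompt carries the complete question and the expected answer shape up front, rather than keywords that rely on the user to click through results.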
https://www.youtube.com/watch?v=cthY7jau9Bg