
Controlled Friction and AI - with Di Le and Hayley Mortin

Import · May 22, 2024 · video

Di: Does anybody remember the Choose Your Own Adventure books as a kid? Even though it was a very structured environment, and it was designed for you to feel as if you were actually choosing your adventure, each point of the story where you had to make a pivotal decision, depending on whether you go down the volcano or you enter the dragon's cave, that is a point of controlled friction. Intentional friction.

Bobby: Welcome to another episode of TechBytes. I'm your host for this episode, Bobby Brill, but more importantly, that was Di, who serves as an AI ethicist here at ServiceNow. Also joining us on the episode is Hayley Mortin, a UX researcher at ServiceNow as well. Both of them have been our guests before, but because they have such unique perspectives on where AI is going, we wanted to have them back on to talk more in depth about the concept of controlled friction: what it means, and how it pertains to creating good AI. So Di, please reintroduce yourself to us and give us a bit more about your background.

Di: Hello, world. My name is Di Le, and I am a human-centered and ethical AI design strategist at ServiceNow. I work in the trust and governance lab, as well as the experience organization on the product side, to really help put forth responsible AI development and design at ServiceNow. A lot of my expertise in human-centered AI is around building out the frameworks and tools that promote responsible AI development practices. I'm also a technology creator, I'm an ambassador for Google Women Techmakers, and I'm heavily involved with CES as an Innovation Awards judge, but mainly I just want to encourage the dialogue and the discourse around responsible AI: what impact does it have on society and people, and just how is the world going to be changing? To your point, I am part of the group of people that are out there with flags and pitchforks and everything, saying, hey, let's make technology good for people, and let's make sure to be critical about the technology
we're making, and that we're looking at it with a magnifying lens to ensure that the outcomes are what we actually want as a society.

Bobby: No small feat.

Di: Fortunately, there are many of us, including my co-podcast person here today, one of my favorite people, Hayley. I hate to say it, because it's a tough act for me to follow.

Bobby: So please, Hayley, introduce yourself too, because what you're bringing to this conversation is really unique and special as well. Hayley, please tell us a little bit about yourself.

Hayley: My name is Hayley Mortin. I am a UX researcher here at ServiceNow, primarily working on the AI/ML platform team. What that means is that I get to go out and talk to the end users, the folks that are actually going to be consuming and using the AI products that we're building. I get to do a bit of that vibe checking that Di is also doing, to make sure that we're going in the right direction, and I'm really lucky that our team is so focused on advocating for that more human-centered AI design experience. I get to go out and actually watch people interact with our tools and our technology, and come back and say, hey, they think it's scary, maybe we should rethink this.

Di: I like to say that Hayley and her team, the UX research team, are the voice of reason behind products. They are the people trying to champion and be the voice of reason, to make the rest of us do better, be better, and provide better.

Bobby: That's why I wanted the two of you on today, because this voice of reason, this controlling of how AI works, is really important. Especially since, Di, you mentioned human-centered AI, and Hayley, UX is all about usability: how the human does the thing and interacts with the thing. There are humans creating AI, there are humans using AI, and there is a friction point there.

Di: Yes. I want to give an analogy that Hayley and I have both been using when describing generative AI. Intentionally, we've been comparing it to a system that's not very smart in terms of AI capability, but smart as
in a deeply rich, research-backed, heavily industry-founded algorithmic capability that also has a ton of financing around it: slot machines.

Bobby: The interaction of pure humanity and robots.

Di: Yes, the interaction of pure humanity and robots. Perfectly said, because there are few realms in technology that exude that weird allure of unpredictability as captivatingly as generative AI has since ChatGPT democratized this access, and slot machines. There's a lot of talk, a lot of industry discussion, in every vertical. You have creative people that are concerned about what this means for content generation, for the writers of the world, for the artists of the world. You have moviegoers asking, how can you discern between AI-generated movie clips and video clips and audio clips versus human-generated ones? And the one thread of consensus around all of this, even in the technical industry, when we start looking at software generation, UX work, and any type of work that is generated by AI, the common thread in all of the discourse, is that there is still an inherent desire, by a majority of humanity and by people who work in a specific industry, to delineate whether something was created by artificial intelligence or by humanity. Earlier, when you and I were getting to know each other, we had a conversation about vinyl and how analog is still alive. There is something inherently valuable in what people create; that creativity and that slow pace of creation has a very deep value to people in general. Now, as I merge that topic with UX design and modern-day corporate culture, businesses, and startups, it's banging on the point that generative AI brings in unpredictability. There was no perfect system even before generative AI, in classical versions of AI, but there's this point where that unpredictability can actually enhance, in addition to the challenges it brings to UX. There's something that tension and friction, like
working out and doing a hundred sit-ups, or the One Punch Man workout, do: those things make us as humans thrive in a different way. So I think that in guiding the development of software and services in these two worlds, we can find ways to harmonize them. Maybe not in a perfect balance, but at least in a way where one can serve the other in a positive way. One way is embracing that unpredictability for creativity. Generative AI has this potential to introduce novel and creative elements that, sometimes in its weird output, force us to be a little more creative. And because of that wanting to discern between AI and human, I think it's making more people look at things from a user-centered approach and apply a design process, even if they weren't part of the UX world and didn't understand what a design process is or what it means to be human-centered.

Bobby: It's interesting, you brought up several points, and I know, Hayley, you're going to expand on some of these too. "The journey is the destination" really only works when we have things to trip over, bridges to cross, and rivers to traverse. So Hayley, explain that to us. Explain what that concept is, to make AI and UX work the way we want them to.

Hayley: I love this question. I think I have to answer it with a very short UX history lesson. When you talk about good UX, a lot of people love to cite things like Steve Jobs and the beginning of the iPod: oh my gosh, they were solving problems we didn't even know we had, so easy to use, so simple, so intuitive. Those are the things people continuously compliment Apple on when thinking about Apple products, and it's all very intentional; it comes through years and years of research and development and design. We can thank design thinkers like Don Norman, who championed this idea of making things as easy to use as possible. It's this whole idea of: if a user has a task they want to complete, whether that's listening to music, sending an
email, whatever it is, we as UX designers want that to be a simple, frictionless, easy-to-use experience, where our end user isn't taking too many steps to get to that destination. But with the rise of generative AI, all of this starts to go out the window. Things don't behave in a predictable way anymore, the way that systems of early-2000s technology did, where it was: I have a task I want to complete, and here's how I complete it. Now we're looking at a realm of: here's something that I think might happen, but I'm also not 100% sure what I'm going to get. I'm not 100% certain of what the system will generate or do. We've seen that with things like Midjourney and ChatGPT, where it's really a black-box experience. You put your prompt in, and you're hinged on this moment of what's going to come out on the other side. So we have to design very differently for that experience, compared to how we would design things in the past twenty-ish years of technology.

Bobby: I think for all of us here in this conversation, who are using AI a lot professionally and personally, that fear of "I don't know what's going to come out" isn't a fear. It's exciting. And almost a little too exciting, I think, when people push out generative AI without any guardrails.

Hayley: Correct, exactly. It's super exciting when it's just me at my laptop on Midjourney, trying to come up with the most absurd images ever. I mean, when DALL·E Mini came out, there were all these awesome things floating around on Twitter, like cucumber Connect 4. But then we also have to remember that people are seeing this as a massive business opportunity. They're like, how can we plug this technology into our work? How can we use it to optimize lengthy business processes? What if it creates something that I don't like, or maybe it denies somebody a loan they were applying for that they should have gotten? The stakes are a lot higher than me just playing around on Midjourney and being like, ah, it's so funny.

Di: Yeah, and Hayley and I talk about this often, but
speaking of Midjourney, or going back to the slot machine analogy, I don't know if you've seen, but on TikTok and Instagram there are entire channels dedicated to, for example, a very popular rap song but in the tone of Frank Sinatra, or "give me the birthday song but in the voice of Ariana Grande," or Snoop Dogg reading you the Bible.

Bobby: Exactly.

Di: And initially it seems like people aren't selling it, they're not monetizing their channels, it doesn't seem nefarious. But when you start asking the critical questions that we do in thinking about responsible AI, it really starts bringing up questions like: do we own our own identity? Do we own the likeness of ourselves? Say, for some reason, you give the keys to someone else and say, yes, you can use my likeness to create an AI version of it, an audiovisual version of it. Do they own it in perpetuity? How long do they own it for? So when people ask me what my biggest concern is, especially when it comes to generative AI and its immense potential, is it privacy, is it societal impact? I say: it's actually human autonomy. It's really understanding how we navigate the identity we've curated around ourselves and who we are, especially when we start thinking about artists, and their identity and their work that could now be repurposed. On one hand, it's the ingenuity and the creativity of hearing Frank Sinatra rap to a Snoop Dogg song, but on the other hand, would Frank Sinatra have wanted to?

Bobby: And Hayley, I want you to button that back up, because you mentioned doing this at a corporate level. If you can mimic tone and voice and corporate speak, this email reads like a corporate company with a lot of prestige and respect sending out emails that could be phishing or whatever. That's something we have to prevent, and that's really where these ideas of friction and stopgaps and guardrails come in, correct?

Hayley: Absolutely. So in my role, I work a lot with customer service agents and some of the
generative AI tools that we roll out to them. We do a lot of concept testing, so we put a lot of things that don't exist yet, as prototypes, in front of customer service agents and say, hey, what would you think of this? What would your perspective be on using a tool like this at work? That might come in the shape of helping them generate responses to emails, or helping them summarize the steps that have already been taken to resolve a case on their desk. Something really interesting that also comes up in those conversations is the ways agents are already using generative AI in their work. So there's a little bit of dipping their toes into it, but there's also still a ton of hesitation about what this thing is that we're inviting into our work, and what kind of repercussions it could have. One of the biggest risks that continues to come up in almost all of the conversations I have with service desk agents about generative AI is this concept of authorship, and I think it ties really tightly into what Di's been saying about whether someone would have wanted that song to be written in their voice. The corporate version of that is: if I am writing a draft of an email, and AI is making suggestions on how I can adjust my tone for the customer I'm responding to, who's writing that email? Is it me? Is it the AI? Where does that authorship and ownership begin and end? And customer service agents are super concerned about who's to blame if something goes totally wrong. Say it gets a detail wrong. A case we sometimes talk about is: maybe someone's trying to submit a ticket to go on parental leave, and there are two policies, one for the birthing parent and one for the non-birthing parent. Let's just say that the AI gets it wrong and says, oh, you only have a couple of weeks off, or whatever the company policy is. This is going to have tremendous effects on the actual person who's trying to get their time off, while the agent is left resolving the
errors and the mess that the AI made on their behalf. So there's this really fine line of: is it my fault, because I didn't check the AI? Is it the AI's fault, because it wasn't trained on the right data? They're really nervous about what it means to actually have it implemented into their work like that. This is where we can put in some of those guardrails, some of those stopgaps, to make sure that the experience isn't pushing AI into places where the end user doesn't necessarily want it to belong, and it gives them a lot more agency to pause and think about what's actually happening.

Bobby: How does that work, though? Explain this concept, that that is controlled friction. How do you break a machine?

Hayley: It's not breaking it.

Di: It's not breaking it.

Hayley: Controlled friction is sort of the antithesis of that notion we were talking about earlier, the frictionless, intuitive experience, the things we typically associate with quote-unquote good UX. But sometimes you want there to be friction. Sometimes it's what the user actually needs to complete their task. Let me anchor us in an example. There's a feature built into most email platforms, Gmail, Outlook, whatever, where if you're sending an email to someone and you say, "please find the PDF attached," and then you try to send that email but there's no attachment, a popup comes up and says, hey, were you trying to send a PDF? That's an example of controlled friction. It's preventing you from sending the email, but it's there to give you a moment of reprieve, to think: hey, I actually should probably go back and attach that invoice, or whatever it was I was trying to send.

Di: I like to make the gaming analogy too, for my fellow RPG nerds out there. Or actually, taking it back, let's step out of software entirely. Does anybody remember the Choose Your Own Adventure books as a kid?

Bobby: The greatest books ever.

Hayley: Yes, of course, the best books.

Di: Because even though it was a very structured
environment, and it was designed for you to feel as if you were actually choosing your adventure, each point of the story where you had to make a pivotal decision, depending on whether you go down the volcano or you enter the dragon's cave, that is a point of controlled friction. Intentional friction. It was intentionally designed by the author to give you a moment to make a critical decision, to understand and weigh the potential outcomes, what could benefit you, and what optimized your desirable outcome. And that's a version of it.

Bobby: You had corrected me, and I was too loud to let you say this, but I would love both of you to explain this. I said "break the machine," and of course that's incorrect, because this is helping it. How do we convince the people we work with, who want AI here now, yesterday, ready to go, let's roll it out, to implement a lot of these stopgaps and this friction? How do we do that?

Hayley: Going back to intuitive designs and how we design for them: along came our metrics for success as UX practitioners, to mimic that. As a UX researcher, we have to measure things like time on task, or number of clicks, and try to minimize them. We want the least amount of time to complete the most amount of things; that's usually how UX manifests itself in a corporate way. But we're at a really interesting point with generative AI, where we really have to rethink some of those metrics. Do we want people rushing through a task if it's introducing all of this uncertainty? I think we actually need to be incorporating more barriers and more moments of pause, so the end user can actually take a look at and critically evaluate the types of responses the AI is giving them. So it's an interesting point for us: helping our PMs and designers, who are so "make it as easy to use as possible, make it more intuitive, make it this, make it that," to take a second and rethink. Hey, maybe we're actually mitigating a lot of risk down the line by
introducing these stopgaps now. It might not seem like it, and maybe users are even taking longer to do things that were normally faster, but look at all of the risk we're mitigating by introducing these moments of pause.

Di: I would like to reflect the mirror back to you, in the sense that individuals like you, who have been curating spaces for Hayley and me to have these conversations where it doesn't feel like we're screaming into an empty void, actually feel very aspirational and hopeful to us. People want a more discerning look at AI, and people are interested in hearing this aspect, because the responsible and ethical parts aren't always the flashiest or fanciest. It's very encouraging to have this space to talk about these types of things.

Bobby: And that seems to be an answer to the idea of friction: that we want this. This is good. This is a great thing.

Di: Earlier you mentioned breaking the system, and then said, no, it's making it better. I really wanted to echo that point, because you nailed it in the latter half. When we talk about AI, just like regular software, it is impossible to create a perfect system. I want to put that out there. With technology, with everything in this world, there's nothing that is perfect. Except ice cream. Ice cream might be perfect.

Bobby: Ben and Jerry did nail Chunky Monkey, I have to admit.

Di: But another version of controlled friction is the collaboration between human and AI. We often talk about human-in-the-loop, or human-AI collaboration, or human-robot collaboration, and varying themes of that. Allowing the human to oversee various points of AI output and provide input, hey, does this look right or not, is a critical point, and a way of making the AI better at what it does. For example, if you have an AI that is summarizing a specific thing for work, at one point of summarization you can show a human and say, hey, which of these summarizations is better? Which one
resonated with you more? Which one was easier to understand? That simple interaction can start telling us a lot about how to improve the system: the type of language we use, the length people are willing to read, how to space things, and so much more.

Bobby: I'm going to ask you both a final question on this, because we could spend more time talking about it, and we will, for sure. To wrap up this idea of controlled friction, a question came up to me that sounds slightly pandering, because we all work for a corporation, but I mean it in the best way possible: will corporate gen AI be the type of AI that makes AI better?

Di: Oh, I've got to think about this one.

Hayley: I know, I know, I just wrote this down. I know the answer to this.

Bobby: The reason I ask is, a lot of what you're saying, and again, correct me if I'm wrong, because I'm not the expert here, is that to make AI better we need stopgaps, controlled friction, control. And who better than somebody dealing with mountains of regulations and teams of people going, all right, break out the no-no book, and let's go through this and figure out how we make this work? Is that a good assumption?

Di: Yes, and I'm going to say this, and oh, I hope our audience is not rolling their eyes. I promise, stick with me here. I'm going to say yes and no, for two reasons. Yes, because, again, in that November when ChatGPT was released, this fourth wall to the public was broken, where a technology that was previously only privy to researchers, practitioners, and the tech sector became available to nurses, doctors, florists, teachers, students, everyone, equally to some degree. We do have to start thinking about internet and technology access, but that's an entirely separate, worthy discussion. What generative AI did differently is it put AI in a relatable lens. I would say that most of the public, and most of the people we talk to, don't discern between this before-ChatGPT and after-ChatGPT era. They see it
as an evolution of capabilities; AI is AI, it's just better, and in some ways more humanlike now than it was before. And I think that new capability forced people, and expedited the types of conversations you just brought up, around regulation and so forth. We noticed, with GDPR in the European Union and the EU AI Act, there was a lot of consideration on the European side of how to handle privacy and ethics and AI. Singapore is a big thought leader in that space as well. But ChatGPT forced America, which has always been laissez-faire, let the people do as they will first, not in the way you mentioned but in a different way, to improve and be more critical in the way we're safeguarding AI. The way I say no is: I don't think generative AI is the end-all, be-all, the only type of AI that we need and use. There's still a lot of value in first asking the question: do we need AI for this specific use case? Do we even need a powerhouse system to do this particular thing? There are a lot of companies out there still using the very effective algorithms around classification models, similarity models, and so forth, that have been around for the past many decades. I really want people to ask: why do you need AI for this? Do you need AI for your use case, and what do you want to do with it? Those are the questions we're trying to get people to think about, before assuming they need this generative AI model in order to stay ahead of the curve.

Bobby: Hayley, what do you think? Do you think it's going to be the corporations? And I don't mean that in a Fight Club, dystopian way. I mean it in the very good way of: we're all smart people trying to make this work, because we all see the potential of it and the benefits of it. But you're in the trenches, so to speak, of making this really go. What are your thoughts on that?

Hayley: Oh, I
love this question. It's interesting, because I think a lot of end users actually believe that because their enterprise, their organization, is deploying AI, that inherently makes it more trustworthy. They think, oh, my organization is using this thing, it must be great. I've heard that a lot in user interviews. I've been doing UX research on AI for the past three years, so I've seen it pre-ChatGPT and post-ChatGPT, if we want to use that terminology. One thing, sort of anecdotally, that I've noticed in the past little while is a lot fewer people bringing up Terminator analogies, less Skynet terminology, which I think is really interesting, because there's more of a proliferation of it. But because of that, people are like, I use ChatGPT, I can totally imagine using some other form of AI at work. So I would be wary of people putting too much trust in AI and over-relying on it, blindly saying, oh, I think that no matter what AI my organization adopts, it's going to be 100% perfect. My concern as a UX researcher is making sure that our end users have realistic expectations, that AI is not a perfect solution all the time, because it does have the ability to generate really convincing-sounding information, and if their organization helped implement it, they might think, oh, it must be true. So for us as UX researchers, I think it's about making sure we're informing our users, empowering them with the knowledge to make critical decisions, giving them little peeks into that black box, saying, hey, it's not going to be perfect. That's on UX design and UX research, to make sure we're doing our due diligence there and reminding folks that it's never going to be perfect.

Bobby: With that being said, what are some best practices? What can we all do to ensure that the product we're putting out is good, and I mean good with a capital G? Ready? Go.

Hayley: Involve UX teams early, and involve them often. More feedback is
what we want to aim for here, especially in this time where we're sort of redefining the game for UX design and research, and a lot of rules are being rewritten in front of us. What I think is really exciting is that we can decide how those rules look, and we can do that by incorporating UX research consistently throughout the product development life cycle, and making sure that things get put in front of users before they get put in front of developers.

Bobby: Fabulous. Di, any takeaways? Any things we should all be thinking about?

Di: Yeah. And I promise I'm not pandering with this, because in this case, for a lot of reasons, I was the single person that met another single person, that met another single person, and then we formed a responsible AI gang. I think when I first meet people and tell them about this, and they learn about our work, there's this sense of a daunting task ahead, and it doesn't have to be that way. Whether you are an individual or a team that just happens to be thinking about AI in a specific way, it is very possible for you to influence the outcomes, the frameworks, the positioning, the way people talk about technology, and how it ends up getting released to you and your team. For example, someone reached out to me by chance, an individual from a mobile QA team, and they ended up contributing a lot of feedback to some of the writings we were putting out around AI, and I used their feedback directly. So if you provide that guidance and that feedback, even if you are not what you think of as the portrait of an AI practitioner, I promise you, the people working on this and thinking about it will be very thankful for your feedback and contribution, because many times it's hard to find people who are willing to have a conversation about it, contribute, and say something.

Bobby: So there you have it: a better understanding of what controlled friction is, and how it's used to create not only good AI, but a good experience. And of
course, for even more answers on working with the ServiceNow platform, head over to docs.servicenow.com. If you've enjoyed this episode, please hit subscribe on whatever platform you're on, so you never miss an episode. Thanks for listening.

View original source

https://www.youtube.com/watch?v=B0Lsbyo3goU