ServiceNow Federal Forum 2024: Securing Intelligent Transformation with Gen AI

Import · Jun 21, 2024 · video

Moderator (Gain Nazareth): Are you folks enjoying Fed Forum? That's it, that's more like it. Based on that reaction, it sounds like you folks are having a good time, but not necessarily a great time. We are going to change that; this panel is going to change that. It comprises some of the most insightful and innovative thought leaders across the federal space, and this is the last panel discussion of the day; they saved the best for last, obviously. This panel is also what stands between you and all the fun stuff: the happy hours, the networking hour, the dinners, all that good stuff. So the five of us are going to do our best to make this as intriguing and thought-provoking as possible. With that said, my name is Gain Nazareth. I lead the Solution Consulting team that supports our DoD business here at ServiceNow. I'd love to hand it over to our esteemed and distinguished panel to introduce themselves.

Alexis Bonnell: Sure, I'll go first. My name is Alexis Bonnell. I am lucky enough to be the CIO of the Air Force Research Laboratory, serving both the Air Force and the Space Force. I'm also the head of our Digital Capabilities Directorate, I'm our AI liaison, and I'm also our AO, so the terrifying part of this job is that I really don't get to point to anyone else if we can't pull something off.

David Larrimore: I'm Dave Larrimore, and I'm looking at the screen in front of me here, which is really cool and high-tech, by the way; that's a lot of words. I'm the Chief Technology Officer for the Department of Homeland Security. I am officially one of the Responsible AI Officials for the Department, a privilege I share with the Office of Policy at DHS, and I predominantly support Eric Hysen, the Chief Information Officer as well as the first Chief AI Officer for the Department of Homeland Security.

John Lao: My name is John Lao. I am the Senior Strategic IT Operations Advisor for the USPTO. I support the OCIO, our CIO Jamie Holcombe, and our Deputy CIO, Deborah Stephens, and I am
in charge of the enterprise roadmap for the USPTO. We did a lot of work in years past bringing us into the cloud and using more capabilities across the board, and now we're looking to operationalize artificial intelligence and get more capabilities in front of our staff and our customers. Very excited to be here.

Katherine Manfrey: My name is Katherine Manfrey. I'm the Office of Personnel Management's Chief Transformation Officer, and I'm excited to be at a conference and on a panel that has the word "transformation" in it. I'm broadly responsible for OPM's overall transformation, which among other things includes customer experience, employee experience, and our agency data strategy. Excited to be here.

Moderator: Awesome, thank you all for the introductions. My first question is for you, Alexis, and it's on everyone's mind, so I thought I'd start off with it and hopefully set the stage for the rest of the discussion. Alexis, have you been to the Taylor Swift concert?

Alexis Bonnell: I have to admit, getting close to my fifties, I was not brave enough to brave it in person, but I have caught it online: feet up, popcorn, and able to fast-forward.

Moderator: Hopefully that will change; hopefully you'll make it out there sometime. Now, switching to a more serious note and talking about Gen AI: experts say that in the future about 40% of working hours are going to be impacted by Gen AI. Do you see that applying across the federal government, particularly in the DoD? And how does that affect job satisfaction, the impact on the mission, and national security in general?

Alexis Bonnell: Sure, absolutely. When I think about AI, first of all, I wish it had been called augmented intelligence instead of artificial. I think people would then have really thought about it as the tool that it is, there to supplement and complement our human knowledge base. When I think about AI, I really look at it as simply a new opportunity to have a relationship with knowledge at scale and speed. Part of why I think that matters right now is that a lot of us have lived through the last three years, but we didn't really understand what shifted. What I mean is that in the last three years we went, at least on a DoD horizon, from roughly a five-to-fifteen-year average change horizon to six months to 1.2 years. Just take into account that compression, how much more quickly we're expected to change. If you then take that to a human level, you're really talking about the average person having to make up to five times more decisions in the same period of time. Then add the fact that about 90% of the world's data became accessible. So all of a sudden, as a public leader, I've got to change faster than ever, I've got to make more decisions than ever, and gosh darn it, they've got to be better than ever, because I have all this information. So what I really do is sit back with the team and ask: what's the relationship with knowledge we want to have? I actually would say it might be more than 40%, because previously our relationship with knowledge in government was about structure and control; everything had to be perfectly structured, cleaned, and controlled. The problem is that's a fractional amount of the data we have. What AI allows us to do is have a relationship with unstructured information, which quite frankly is a lot of our treasure: it's our after-actions, it's our research, it's the things we actually invest our treasure into. So to me, not only is AI going to play a role, but we should embrace it by thinking about what that relationship with knowledge is. The last thing that is really powerful for me about this tool, at a personal level, is this: I don't know about other folks here, but I'm tired. It feels like there's more toil in the work, not only of leadership but of everyone at every level. If there's one thing I've learned, and I'll end on this, it's that toil eats purpose faster than mission can replace it. So when I think about what AI might allow us to do, to have a relationship with knowledge at that speed and scale, but even more importantly to reduce the toil, the things someone doesn't find fulfilling, I think there's going to be a real trickle-down impact on retention and a greater sense of purpose, if we approach this tool in the right way.

Moderator: I like that: augmented versus artificial intelligence. You should copyright that. John, where's Patent and Trademark? Oh, right there, there you go. So, Katherine, what's your take? Do you see it applying on the civilian side of the government as well? And I know your team has rolled out the Workforce of the Future Playbook; how does that tie into Gen AI?

Katherine Manfrey: This may be my bias, because our agency has "personnel" in its name, but I think Gen AI is really all about the people, which is a little bit of what Alexis was mentioning. By that I mean two things. First, as we think about applications of Gen AI, it's as important to think about the pain points our customers, our employees, or our leaders are feeling, and how this can be a tool to fix some of those pain points. The other piece is that I'm broadly responsible for making change in my agency; that is what transformation is all about, and in order to make any kind of transformation happen, you have to think about the impacts on the people, the change management, the processes that need to change, and making sure
that you have a workforce that's prepared for any and all of the above. That's the other piece I mean when I say Gen AI is really all about the people. And to your question, Gain, on the Workforce of the Future Playbook: OPM has broadly been a thought leader on what the federal workforce needs to look like in the future, and how we make sure it is agile, it is engaged, and we have the right skills to bring the federal government into the future. As we were putting together this playbook, which includes a series of what we've called "plays," different things agencies can do to get talent in, to get talent to stay, to upskill the workforce, and to think about data, we included a play about the integration of AI technology, in recognition of how big an impact it will have not only on HR processes but on the broader mission delivery of agencies, and of the need to make sure we have the skills we need across our collective agencies.

Moderator: That's great, and along those lines, let's stick with talent. David, DHS has gone on a hiring spree; you're hiring tons of AI specialists across the board. Where are you with that initiative, and what are the opportunities these specialists are going to be focusing on?

David Larrimore: Absolutely. For some context, we're talking about the DHS AI Corps. The Secretary announced on February 4th, in Mountain View, California, that we as a department, as a unified voice, are committed to bringing AI talent into the department. What we are really doing differently is this: instead of groups saying "we're going to put out some postings on USAJOBS and just see what hits," with everybody essentially poaching from one another, we said, let's stop that craziness; this is too important to the success of the department. So we're going out as one voice, to industry, to the public sector, to the rest of the federal government, and we are going to work together to bring in experts. Once those experts are inside the department, the really fun, amazing work starts, where we get to deploy them to components to solve a lot of the really high-priority mission issues: cybersecurity, border security, countering fentanyl, all of those types of things. In one month, we received over 2,000 applications to the DHS AI Corps. I am personally having to interview everyone, and that was by design. This week we actually sent out our first tentative job offer for the DHS AI Corps, and had our first tentative job offer accepted.

Moderator: Do we get to piggyback on this? Whatever you don't take, you kind of throw over?

David Larrimore: Listen, you've got to talk to OPM here, because we know someone from OPM.

Katherine Manfrey: Sharing is caring, right?

David Larrimore: Absolutely. So we designed a new position description, called AI Technologist, in the 2210 series at GS-15, and we are trying to standardize it. What we have found is that we've been able to take that original AI position description and scale it up and down, so that after we're done hiring the general AI Corps, we can use it to fill a lot of other AI needs across the department.

Alexis Bonnell: Quickly, though, in all seriousness: we were having a conversation in the back earlier, and one of the things that's been really interesting as we started looking at our AI talent, especially with generative AI, is that yes, there's the role of creation, where the data scientist or the engineer has a real role. But interestingly, as we've been running some basic A/B tests on prompt engineering as a skill, what we're actually finding is that the engineer or the data scientist is not, out of the gate, a better prompt engineer than the PR person or the lawyer. So as you think about AI, are you thinking, when we get into NLP, about
people who are actually good with words, or with stories, and the importance they may have in your AI cadre? Sorry, that's putting you on the spot; this is totally not scripted.

David Larrimore: This subject could be its own panel. I don't call myself an innovator; I say I find patterns and repeat them. The US Digital Service has an incredibly effective pattern: they are not just bringing folks in for their technical acumen; there are also the softer skills, project management, program management, product management, etc. So our position description is actually very broad, in the sense that we recognize there is more than just computer vision, machine learning, robotics, generative AI, prompt engineering, RAG; it's a very broad skill set. What brings them together is that sense of product management, of being able to bring that mission capability or mission gap together with technology.

Moderator: Makes sense, absolutely. John, what's your take on that? The USPTO is the protector of innovation and intellectual property, so obviously, with Gen AI in the mix, there are going to be a lot of gray areas. How do you envision that panning out?

John Lao: Thanks for that. It's interesting: what we call a center of innovation and a protector of innovation changes over time. Last year it was one thing; this year it's Gen AI. As we move forward in that realm, it's important to focus back on the people and the mission, and on demystifying AI, demystifying what Gen AI is. Gen AI is not there to replace people; it is not there to replace jobs. And we see that in the uptick of patent submissions, we see it in the uptick of design patents; there is a lot of interest in improving everyday American citizens' lives across the board. So part of how USPTO is approaching this is, one, we've released draft and initial guidance on what leveraging AI can mean for patent submitters, design firms, and the end user, so that people don't have to keep guessing on the innovation front. On the workforce front, we're very focused on increasing the overall understanding of Gen AI and AI as a whole. We have set aside an internal AI lab where our employees can subscribe and test some of their day-to-day work within a confined space, inside our USPTO boundaries, which enables them to essentially demystify artificial intelligence, the name. At the end of the day, Gen AI is a collection of very many different things. And as we do this, we want the employees, the people, to showcase what they've discovered from using these tools, what's worked for them, and what's helped them. Ultimately, Gen AI and AI as a whole are going to transform how business is done, and the employees and the people have a big say in how they do that.

Moderator: You mentioned labs. I'd love to discuss that with you, even if it's not today, and understand what metrics come out of some of those labs, because obviously that will drive continual improvement and things like that. But anyway, let's switch gears a little and talk about the executive order, the one from October 2023; it's all about safe, secure, and trustworthy AI. Agencies are looking to fulfill the goals of the EO while awaiting final guidance from OMB. In the meantime, what steps has your agency put in place?

John Lao: One, we have provided initial guidance to employees, as well as to patent submitters, on what AI can help with for people who are submitting patents. That guidance is on the USPTO web page that people can
look at, and there will be additional opportunities for the public to submit feedback on it. For USPTO employees, we focus on pillars: on data, on needs, and particularly on use cases. We really want to help people understand that it's ultimately the use cases that drive the adoption of Gen AI. At the end of the day, it's a tool, just like cloud was yesterday, and not always the most appropriate tool. One of the areas the EO particularly talked about was trustworthy AI, and one of the pillars we've been engaging and working with industry on is how we establish and increase that trust in AI capabilities and their algorithms. How do you know that the algorithms and the data being used produce the answers that staff are looking for, or that would otherwise pass the human test? Sometimes that gets lost in what I almost call the hysteria: you can very easily go in one direction, trying to find the best model, the best algorithms, and the curated data. On the other hand, it's also important to identify use cases that are high impact but low risk in terms of failure, so that if something fails, it's not going to take down your critical systems. That way people have an opportunity to test out these use cases without having to answer the large questions right off the bat, and to let the guidance do its work as well. The way forward is to work with OMB and others as they produce this guidance; we want to make sure we are in alignment with the EO and the OMB guidance, while at the same time demystifying AI for employees.

Moderator: That's a good point: it doesn't have to be perfect for you to get benefits out of the LLMs.

Alexis Bonnell: I think one thing that's really important is for us to always keep a little bit of historical context. Anytime humanity is navigating a new technology, there is a period of time where we have this figuring-out stage. If you go back historically to the automobile and look at the news, for 20 years there were in essence two camps: one held that it was a dangerous tool of the rich that was killing people, dirty, and going to take away everything; the other held that it was going to fundamentally change human existence. The first thing I point out is that that was a 20-year debate. So it's not unusual, as AI becomes more a part of our daily life, to find ourselves in this conversation, or looking at policy. But what I want to highlight is that from the time the car was invented to the time we established seat belts as a regular occurrence was more than 60 years. Every now and then, when it comes to these types of things, we have to lean in and give ourselves a little bit of credit for how short a time period has actually passed, and for how intentional, how quick, and how specific we are in trying to get it right. Look specifically at something which is still a problem: bias. It's everyone's first example, and what I think is interesting about bias is, number one, that bias is actually how we exercise our expertise and experience, but in the AI context it's always taken negatively. The second is that AI is a learning tool. When I've seen teams be really effective, and the patent office is a great example, they went in intending to learn. It wasn't set-it-and-forget-it; it was "what is the relationship with knowledge we want to have with this? What does that look like?" And as we embrace that, we're also quick to respond. Again, in the example of bias, when that started being a real issue and people realized there was a problem, not only did the algorithms start to change, and we became more intentional about what we trained them with, but, and I don't know how many of you noted this, the camera lenses in your phones changed. That wasn't what people were talking about when they were concerned about bias; they were concerned about the algorithms or the wrong training data. But actually one of the biggest differences came when the camera lenses in all of our iPhones and Androids changed to be able to show a greater variety of skin tones, so that the training data could be good. I say that only because I sometimes worry a little bit about the doom loop we get into with this. Sometimes it's worth stepping back and saying: wow, where are we? Where can we at least recognize that we're being intentional, that we're trying to get it right, but do that in a way that doesn't stop our relationship with this technology from evolving?

Katherine Manfrey: If I could just build on that: from a transformation perspective, one of the things I'm always thinking about is how much change our agency can take on at once. At OPM we have some big ambitions about how far we want to go with our agency beyond Gen AI, and one of the things we're constantly asking is whether this is too much change to take on at any one point in time. With Gen AI there is a similar lens that I think organizations should be applying broadly. There may be some applications that you can very quickly scale enterprise-wide, for example the ChatGPTs of the world or something similar, where people are
already, I think, mostly playing around with it in their free time, at least; I know people in my household are. And then there might be other places, like the identified use cases, where you actually can't just use Gen AI as the solution; you have to think about the processes, the way the business operates, the way customers interact with your agency. So I do think it's important for agencies, in addition to some of the great lenses you mentioned, to ask: can we actually accept all this change at once, and how do we make sure we're prepared for it, so that we can still deliver on our mission as we are being innovative and using this new tool?

Moderator: Absolutely, we don't have to boil the ocean. Great consultant phrase; love it.

Katherine Manfrey: As a recovering consultant, I appreciate the usage of "boiling the ocean."

Moderator: I used to do delivery at one point, so working with customers it was like: okay, come on, let's just get in there, what's your value? Just boil the puddle, not the ocean. Anyway, let's talk about responsible AI, and I would like both your takes, David as well as Alexis. Alexis, on your end, as the only DoD representative on stage, what does responsible AI mean to you? And I want to see how it all fits in: you think about the DoD, you've got your unclass networks, your classified networks, your disconnected systems, and then you've got your LLM. How is all that going to fit together? And David, I want to get to you in a second on the responsible use group. Okay, go ahead, Alexis.

Alexis Bonnell: So again, this is part of the challenge: AI is so many things. One of the big problems is that it's a suitcase term. The reality is that AI is an off-the-shelf, ready-to-go translation tool, as much as it is a predict-what-the-adversary-will-do tool, as much as it is a wingman to some of our pilots, and other things. So the first thing is to step back and say, again: what is the relationship with this knowledge, or with this capability, that we intend to have? I use the word "intend" really purposefully, because AI is not magic. It is math, and it in essence does what we tell it to do, with the information we tell it to do it with. That means all of the normal morals, ethics, and intentionality that we have to exercise in any of our mission areas naturally extend to any tool that we use. So right now it's really, again, about that intentional relationship with knowledge. The other thing that's really important is that when we make technology decisions, a lot of times, collectively as government, we look at them as "I'm buying a capacity, or a functionality, or a capability." I actually think we're entering an era where we have to be very careful about the commoditization of these things. What I mean is that we spend most of our day interacting with and actually manifesting our mission in technology, and so one of the things we're really trying to pivot toward is the recognition that our technology choices are as much a manifestation of our values and culture as they are any particular capability or capacity. At the end of the day, what drives us every day is: will this make someone's life better, will it make it easier, is it appropriate? But I want to highlight that as we navigate a new tool, it's very normal to come out with new policy and new things like that, but there isn't any new morality required here; there isn't any new ethics. I already signed an ethics pledge. Every day I already try to make good choices
according to our mission, and so I think we have to be a little bit careful when we take a tool and start saying it needs a new set of values or a new set of ethics, because I'd like to be held to that same standard no matter what tool I'm using, across our mission.

Moderator: Absolutely. I like that: technology is a manifestation of our culture and our values. That's something else we need to copyright; let's take note of that one. David, I promised you: let's go into the responsible use group. It's about mitigating risks, assessments, all that good stuff. Expand upon that.

David Larrimore: I'll start by expanding on what Alexis was saying. The Department of Homeland Security specifically, and I can't speak for other agencies, has been using AI for over a decade now. It's not a new thing. We have our Authority to Operate process, we have FISMA, FITARA, we have privacy laws and regulations and privacy impact assessments; that stuff has been around for a long time. I think the real difference is that we're being intentional about it now for AI, which is a really good thing. And that's where we start talking about the responsible use group at DHS. In April of last year, the Secretary of Homeland Security signed the memo creating the AI Task Force. As part of that, the Officer for Civil Rights and Civil Liberties, Shoba Sivaprasad Wadhia, was named vice chair of the AI Task Force, and she instituted what we call the responsible use group. In the beginning, that group was a lot of forming, storming, and norming around the civil rights and civil liberties communities: circulating draft policies, and bringing in external stakeholders from academia and other organizations to talk about how they're tackling civil rights and civil liberties issues. Now, with the announcement of the AI pilots at DHS, we've got three generative AI pilots that are mission focused, and we're actually making that RUG, the responsible use group (we've got to pick a better acronym; don't sweep it under the RUG), actionable. The group consists of more than just the civil rights officers from the components; it also includes technologists and some folks from the mission who are interested in civil rights and civil liberties. We actually use that group to define pilot advisory groups: every single pilot we have announced recently has a pilot advisory group, with members from multiple components, all providing that third-party view for that individual use case. Working with that group, along with the implementation of these pilots, whatever the model, deployment pattern, etc., the idea is that this group is responsible not just for identifying, in real time, the CRCL concerns, mitigations, and strategies for an individual use case, but for looking at that use case in the big picture of DHS. For example, one of those use cases is around training: using generative AI to mimic or imitate a trainer, so that a trainee can go through a real live chatbot-style exercise, answer questions, and get graded on it. That's a pretty cool use case that's not specific to an individual component. That training use case is applicable to more than just USCIS and the asylum community: there's a law enforcement impact, there's an emergency management impact as well. There are multiple parts of DHS that can take that same core concept of leveraging generative AI to provide this capability, and actually figure out whether it's appropriate to implement in other places in the mission. It's just getting set up and formulated. We've identified all the individuals, and the US Digital Service is helping us make sure we get this done right. The idea is that by the end of this, we'll also be informed about the future of the responsible use group: how it becomes bigger and ingrained in our culture at DHS, so that as we start looking at new use cases, for example language translation and transcription technologies, we know what ethical, CRCL, and bias controls we need to think about.

Moderator: That's phenomenal; that initiative is phenomenal, and I'm sure it will evolve over time. It obviously touches the mission, which is most important. Sticking with that theme of risk, and this could be Katherine, John, or both: what are the major risks of Gen AI you see in your agencies right now, and how do we plan on addressing and mitigating them in the future?

Katherine Manfrey: I'll start off. Obviously there are a lot of risks that many people in the audience and on the panel are well aware of, around cybersecurity, data integrity, and the technology itself, but I'll talk about one that is relevant to OPM. OPM has a government-wide role in supporting the federal workforce, and one of the risks we are working to help mitigate is, again, making sure that we get the right talent into government, at the right places, at the right time, to be able not just to be hands-on-keyboard developing the tech, but then to be using it. One thing everyone on the panel has mentioned is that there are people working on all of the things we've been talking about; the machines have not taken over yet, so there is a need for people to be involved. That's where OPM has been a strategic partner with agencies, in creating policies that can help agencies more easily get that talent into government, and helping to create guidance around
flexibilities that allow for easier hiring pathways. There are also other innovations around pooled hiring. You were joking earlier, Alexis, about how OPM can help use some of DHS's certifications; there are actually mechanisms in place that allow agencies to share those opportunities if it's done at the beginning, and we're working on ways like that. Another of our big initiatives is getting more early-career talent into government. Obviously there's not a one-to-one between early-career talent and gen AI, but having people who come in earlier in their careers is one of the many things we can do to mitigate this risk. So to me, one of the big risks we're trying to work on is all around that talent. And I would also say it's not just getting the talent in, but making sure the talent that's already in is upskilled appropriately, and that our leaders are equipped with the right decision-making frameworks and enough of an understanding of the power of this technology that they can ask the right questions and make the strategic decisions necessary about how they want to prioritize the different use cases they're rolling out.

Makes sense. John?

Yeah, I'll add to that, piggybacking off what Katherine mentioned about focusing on people. Two aspects I'd like to delve into a little bit: AI for technologists, and AI for everyone. When we're talking about AI for technologists, it is some of those hands-on-keyboard people, some of those keyboard clickers, and how to implement AI as part of the new generation of tech applications and products being built. When we're looking at that space, especially with more and more systems moving toward the SaaS model and toward cloud, AI is bleeding into everyday work whether employees and staff are ready for it or not. More and more of the tools being used have AI built in: when you start typing an email, AI starts helping you generate words for you to choose to finish your sentence. So it goes back to increasing that general understanding of AI and gen AI, that it's not there to take over people's jobs, it's there to help; focusing on how these tools can help, answering those questions, and creating a forum for it, because at the end of the day these tools need to help the employees and the staff do their work.

That goes into AI for everyone. As you start using your everyday tools to do your work, whether it's email or Teams or whatever chat tools you're using, AI is going to be part of that conversation. So it's really a multifaceted approach, going back to how we upskill staff, recruit new talent, and retain talent. One thing related to interest: there are more and more people joining the workforce who have specific interests in furthering AI opportunities, so as an agency we also have a responsibility to enable, attract, and retain those resources, and to share the wealth with David and Katherine and everyone else to build out this AI core, because at the end of the day we're not just competing amongst ourselves, we're competing with the private sector. Bringing up that general level of understanding and interest makes it a portable job skill for employees and moves the mission further forward.

One thing I'd love to follow up on is Katherine's earlier point on transformation. I find that a lot of times in government, what we tend to do is just think that telling people to do differently is enough. When I look at
the best things government has done, we take people on a journey: not just "do this differently," but "here's how you might want to think differently," and more importantly, "here's how you have an opportunity to feel differently." I think part of what's really interesting about AI, to John's point, is the socialization. We have to recognize and accept that people have identity tied up with the tools they use, and a lot of them are quite invested. I was talking with someone earlier about how there are three predictable phases of technology. There's "ta-da," which I think a lot of us felt when ChatGPT came out; there's interesting wonder in that, and emotion in that. The second one is "uh-oh," and right now we're in this interesting ta-da, uh-oh stage: okay, we want to be intentional, we want to think it through, and we kind of slow down. And then there's "aha." The aha comes when you start to have an emotional connection, to understand that this thing has a place in your identity and your values, that it's not a threat, and that it's an augmentation, for example. So as leaders who want to see the very best of government, part of what we've got to do is embrace the ta-da, the uh-oh, and the aha, and recognize it's more critical for us to take people on that path than to say "here's the technology, you'll use it, it'll be great." And actually, on that, much kudos to DHS; I don't know if you want to call it out, but DHS deserves lots of props for what David is going to share.

To wrap that in: first off, dead on. The difference between generative AI and the technology we already have in our pockets and on our screens is 10%; it's a 10% difference. It's still hardware and software. There's privacy risk across the board. Cybersecurity: have good cyber hygiene, come on, we've been worrying about that for decades now. It didn't take AI for us to figure out that cyber hygiene was important. At the Department of Homeland Security we have a multi-pronged approach to generative AI. The first prong was commercial generative AI. We knew it was going to take a very long time for us to figure out how to embed generative AI into mission systems, how to trust it, what the civil rights and civil liberties impacts are, bias, all of those types of things. So we said: we have this really core exploratory opportunity we've never had before; let's use it. We literally brought up ChatGPT and said, "Draft me a policy on how to use ChatGPT," and then we said, "Draft me a training on the ChatGPT policy." Over several months we worked with civil rights and civil liberties, privacy, the Office of Science and Technology, and cybersecurity, and we formalized it and finalized it. DHS now has a policy, we have training, we have supervisor guidance, and we have worked out government-friendly terms and conditions with multiple companies, all of which is now available on dhs.gov. If you're a federal department or agency, you're more than welcome to copy it, steal it, whatever you want; just scratch out my name and put your name on it, totally fine. ("I've already done that.") Right, no, that's fine; that was the point.

And that's the precursor to the second prong, which is about the cultural side of things. We have trained over 3,000 DHS employees against this policy: compliance, your rules of the road, what to do and what not to do. And I will tell you that it is a cultural thing right now, and we get questions on both sides. I get DHS employees asking: is it going to take our jobs? How are you
checking for bias? How are you doing this? And we have other people saying: hey, can you get this tool as well? This is awesome, I love this, I want more of this. It is all over the place. I think the biggest threat is not addressing the cultural impact of generative AI, and making sure that you are being supportive of your employees and finding a path for them to be safe with the technology even if they're not ready to use it yet. What is that relationship between a supervisor and an employee? How do you communicate that? What's the training involved? How do you provide awareness without shoving it down their throats and making them use the tool? We found that to be very effective.

Alexis, you were telling me backstage that you're an AO, an authorizing official, as well as a CIO. I know we're running short on time, but I'm going to call an audible and ask you this question. I think it's for the benefit of everyone. We've got ATOs in the federal space, and we've got the RMF process in the DoD, and just from my experience with the DoD, we've finally come around to RMF-ing and authorizing SaaS solutions and cloud solutions. Now you add gen AI into the mix, and think about it: most enterprise solutions will have their LLM separate from their technology stack. How is that ATO going to work out? How do people go live with this?

Oh jeez. I think the moral of the story is that I expect to be very busy in the next couple of years. But I do think it's one of the great opportunities. The reason I called out Dave and the incredible work of DHS is that we have to be in learning mode; we have to have reciprocity. So one of the first things is that myself and a lot of other AOs, and our Air Force AO leadership, are talking about what that looks like. What does reciprocity look like? How do we shrink the mountain of death (I'm not even going to call it the valley of death) between being able to hand off amongst us? This is going to be an area where I hope people also have empathy for us, because everything is figureoutable, but we're going to have to figure this out the same way that you all did, and we are exposed to quite a lot of risk as we do that. I don't think there's any clear answer. What I would say we're trying to embrace, in short order, is this: right now, in this discovery phase, a lot of people have that kind of aha ("I want to use it, I want to do this") and become very committed to a tool, and then a lot of times the RMF process can feel a little bit like a punch in the face, or a punch in the gut. Our hope, in that spirit of learning and that spirit of discovery, is something we're exploring right now, security as a service, if you will: how do we take what is an end-state conversation around security and, maybe to the points that were raised on stage, bring it up into the discovery level, so that people are able to make better choices earlier on? So stay tuned; AI will have a role in making that happen and making it easier.

All I got from that is that we're building the plane as we're flying it. Correct. Which is typical government, right? Always. So yes, nothing new there, I'm sure. All right, we've run short on time, but I'm going to do a quick question and hopefully hit all of you. Gartner predicts that by 2026, two years from now, 80% of enterprises are going to use gen AI in some form, whether it's through LLMs, through APIs to some sort of gen AI capability, or through AI-enabled applications. At the moment it's less than 5% usage, and they think it's going to hit 80% by 2026. Do you think that holds true in government, and do you have any predictions for us?

It has to hold true, because again, we are in the
business of knowledge. If we're going to put all the knowledge on the table and, as responsible public servants, make data- and knowledge-informed decisions, then I actually hope it exceeds 80%, because the more knowledge we have, the better we can do our jobs.

Thanks. Any predictions before we wrap up? Predictions on anything; it doesn't have to be just that.

Well, dang it, I'd just love to use my prediction. This is being recorded, right? So I'm going to come back; we're going to meet up in five years and see how it holds up. I predict that 80% of the software we buy is going to have gen AI in it, and whether they're ready or not, it's going to be available. That's not a problem, but it is definitely forcing that cultural issue I spoke about before.

Yeah, absolutely. John?

Yeah, definitely very high percentages. With the shift toward SaaS, to David's point, it's without a doubt that we're going to inherit gen AI just from that aspect. How long it will take for us to build our own developed applications might be a little longer, but as we increase the general awareness and knowledge of AI, I think the velocity is going to increase over time.

Increase over time. Right, absolutely. Katherine?

I'm going to rely on the technology experts on the panel for their predictions about what the technology might look like. Maybe it's not a prediction as much as a path. We've talked a lot about workforce and people and culture and change management, and I do think there is a path where the technology has kept up, or outpaced us, but the organizations themselves are not able to fully use the capabilities that are possible within the tech, because we haven't kept up with the skills, made strategic choices, and (Alexis, I think you said it very well) brought people along on the journey, so that we actually have the transformation we're trying to see, and not just really great tech without the actual uptake. So it's more of a path than a prediction, but I do think there's a lot of work agencies need to be thinking about, outside of the technology, to make sure we can actually see some of these numbers come into reality.

Awesome. Thank you all for your time today; it's been an absolute pleasure to share the stage with you. Let's enjoy the rest of Fed Forum.

View original source

https://www.youtube.com/watch?v=156zaZ0x9-A