All right. We're going to go ahead and get started. We're going to do a tentative roll call here just so LSA has an idea who's here. And then when we're done with that, I'm going to have everybody go around and introduce themselves and why they're here, and then we'll make some opening comments and get started. So with that, would you like to call the roll? All right. And then you mark them off as you get to them. Okay. All right. So we'll do this. I will go around, we'll start the introductions with Adam. We'll just come around the top and then around the bottom, and just say who you are and kind of your role here. And then I'm going to have opening comments, and I'll turn it back around for anybody else that wants to comment before we start taking some detailed information. So, Adam, we'll start with you. Adam Brown with the Legislative Services Agency. I'm the director of the Office of Technology Services, just here to advise the membership of the IGA however we can in technology endeavors. Thanks. Zach Mayer, IT director for the Indiana State Senate. Co-chair, Senator Liz Brown, District 15. Michael Mullins, I'm the LSA counsel for the committee. Co-chair, Matt Lehman, District 79. Matt Pierce, District 61, which is most of the city of Bloomington. Bill Barrett, I'm an attorney with Williams Barrett and Wilkowski in Greenwood. Doug Hutchinson, with Indiana State Police, representing the law enforcement section. Tracy Barnes, chief information officer for the state of Indiana. Ted Cotterill, chief privacy officer for the state of Indiana. Josh Jackson, appointed by the Speaker, with expertise in AI and cybersecurity with law enforcement, particularly the Department of Homeland Security. Kerry Sheehan. I teach ethics and professional responsibility, research in AI and ethics and their different components, and I'm also an ethics attorney. All right. Thank you, everybody. So I was already asked this question twice walking in today: why are we here? So let's talk a little about that. We're here basically because of Senate Bill 150. It's a little bit of a continuation; we met last year talking about some AI matters. The summer study charge that we have in front of us is fairly narrow. It deals more with what the state is doing, the state departments, and what's the impact on Hoosiers. But all of that is going to involve a lot of players, a lot of entities. I also think we'll have at least two meetings, maybe a third, to really bring in some of the outside perspective as to what is happening in that world around us. So to prepare for this today, I started thinking, what are the pros and cons of AI? We've had this discussion before; I just put some thoughts down here. The pros are always economic growth. We look around Indiana, some of the economic productivity, increased productivity. Healthcare improvements: we've heard from some in the healthcare world regarding enhanced diagnostics and operational efficiencies. In education, personalized learning and accessible education. Public safety, some crime prevention, and maybe enhancing some emergency responses. Environmental benefits, energy efficiency, some climate monitoring. There's cons: job displacement, we hear a lot about that, taking the workforce with automation.
Creating a skills gap for some. There's privacy concerns: data privacy, surveillance risks. Ethical issues: is it biased, is it fair, the decision-making process. Dependency and reliability: we become over-reliant, or we have technical failures. Regulations and controls, lack of standards. And then there's also always the big competition world. So some recommendations that I jotted down: promote responsible AI development, invest in our workforce, enhance privacy protections, foster innovation with regulatory balance, and encourage public dialogue by having meetings like this. Now, for the sake of full disclosure, yesterday I went on to ChatGPT and I asked this question. I said, I want to know the pros and cons of AI for a presentation to legislative leaders. And in 22 seconds, that's what printed out. It is wonderful and it is scary. I think it's both of those. Now, the rest of the comments I have here are mine. These come out of my mind, and you'll maybe see why they wander more. You know, we defined AI in Senate Bill 150, and there was something that jumped out at me when I was reading it. It says that it means computing technology capable of simulating human learning, reasoning, and deduction through processes, and there's a list of things. If I recall, last year one of the presenters was talking about AI, and they said, we have to understand, we've had artificial intelligence for decades. For decades we've had it; only it's never been able to communicate back to us. We've always communicated to it. And by its ability now to communicate back to us, it now has intuition. So I had to look up what intuition means. I know the word, but what's its full meaning? Intuition is defined as the ability to understand something immediately without the need of conscious reasoning. And that concerned me, when I hear the term "without the need for conscious reasoning." Come, let us reason together; that's why we're a legislative body. So there's a lot of conflict out there. So we have to find that balance between good results that make our economies, make our jobs easier and better, and at the same time not create more fear and doubt. I was talking to someone here today and I said, what's the biggest thing you fear on AI? And they said, losing trust. And that is: is what I'm seeing real? Are the pictures I see real? Is the data I'm seeing real? Are we starting to cloud or blur that line between truth and fiction? So I think we're just starting, really, with this discussion of AI, hinging from last year to this year. But I think now, focusing on the government, we're going to have a real big issue moving forward: how does all this play into the real world around us? So with that, that's just my thoughts, and I think where we're going to go. I'll open this up. Does anyone else have any other remarks they'd like to make, opening remarks, before we get started? I'll open the floor. And we do have one more introduction. I'm going to come down here to you, Cody. Yes, sir. Thank you. My apologies again. Cody Rivers here, director at Reveal Risk, happy to be here. Okay. Thank you, Cody. All right. Any other comments? Anybody want to say anything? Matt, go ahead. Thank you, Mr. Chairman. So as a legislative member of the committee, I just think that we will need to rely upon the experts on the committee to help us really understand what AI can and can't do. Like most new technologies, there's a lot of hype about what it can or can't do.
And, you know, some people even saying that it's going to take over the world and that's a real danger. So the question is, is that really a thing? I saw something on the Internet which I thought was kind of interesting. It said in order to combat the AI threat, we should just hire one of the smartest IT people possible, pay them a huge salary, and have them sit next to the AI computer, and if it started taking over the world, they would pull the plug. So you think it's maybe like HAL or whatever. So I'd be interested just to know, where is the hype? Several years ago we were told we'd have self-driving cars in two years, and then it turned out that it was a lot harder to make that technology work than we thought. So, you know, how accurate can AI be? We hear about hallucinations and things, and if state government begins to use these things, how do we make sure that we're actually getting accurate information out of it? Just all kinds of challenges. I think it's pretty much proven, if you look at the history of technology, that you can't put a technology back in the bottle and say we're just not going to do that. So it's here, it's going to get used. And the question is, what role, if any, does the legislature have in creating policies, at least for state government, about how the technology might get used, to make sure that we limit the potential negative impacts of the technology and get as much as we can of the positive out of it. All right, thank you, Representative Pierce. Anyone else, any opening thoughts? All right, I guess everybody wants to get going so we can all go home, right? All right. So what we're going to do today, what we're charged with in the resolution from the Legislative Council, based on Senate Bill 150, is to look at artificial intelligence technology that has been used, developed, or considered for use by state agencies, and then recommendations issued by other states, et cetera. We're going to focus on that today. I think moving forward we'll maybe bring in education and some other venues or some other factors that might bring a little broader spectrum to some of this, but today we're going to focus on that. So today we have two groups that we'll be asking questions of and that will be providing us data: the Indiana Office of Technology and then the Management Performance Hub. And I believe, let's see, I think, who's... you said you're representing the... Okay, Tracy, you want to start? Again, Tracy Barnes, Indiana Office of Technology. I have just a few slides, just to kind of set some groundwork and give some perspective of what we've currently been working on, some of the things that have been in place, some recent and some for a long time. The big thing that you talked about and you guys hit on earlier is just the reality that artificial intelligence is not new. It has been around a long time in a number of various flavors and variations and different names and titles, items from robotic process automation, machine learning, neural networks, things of that nature. But we're coming into a new landscape that brings a lot of fear, and hope as well for efficiency and effectiveness, when you look at the autonomous ability, where it can start to make decisions and take actions for you.
And while that sounds extremely exciting, it is also very scary when you think about the potential ways folks could use it when they think about letting the machine do my job for me. So with that, I'll just kind of go through a few slides here of information. The big thing I wanted to make sure we started out with is just a quick separation of the two worlds of generative AI and what we refer to as traditional AI. Traditional AI is what I would say has been around a long time. Both have really been ingesting large volumes of data and information and starting to put together thoughts and activities and options and opportunities that can be taken advantage of to identify areas of efficiency, when you're looking at supply chain data analytics, data research, things of that nature. But when you slide into the world of generative AI, it is starting to actually take some of those actions and make recommendations and do things on behalf of the individuals. That's where, again, most of our concerns start to come into play. You look at some of the examples like ChatGPT and Google Gemini, versus things that have been around a long time, like Microsoft Word's spell check, which is essentially a derivative of AI, and the type-ahead features that you get in search bars. Currently, there are about three main products in use across the state of Indiana that are doing some level of AI activity. The most forward-facing, I'd say, is the Department of Workforce Development's workforce recommendation engine. They have been ingesting a large volume of their workforce data. It's a private large language model that is not public facing and does not have outside data coming into it; it is all based on data that the Department of Workforce Development oversees and manages. And they're using that to make recommendations to individuals that are filing for unemployment, to determine here's training that's available, here are jobs in certain areas that fit the qualifications that you have. That system went into production back in November of 2023. Our office has two AI tools that we've been working on, one for a very long time and one more recent, the most recent being our IN.gov web platform chatbot, which went into a beta test in June of '24 this year. Essentially, we put a chatbot out there and used Microsoft's OpenAI tooling to ingest all of the public-facing content of our website. So it's already public data; it's already data that the agencies have deemed available and intended for the public to consume. And the hope is to start looking at the chatbot and let it answer those questions and make navigation and finding of resources and available information a lot quicker, instead of folks having to try and navigate and understand some of the intricacies of which agency does what. So you interact with the chatbot, ask questions, and it will start to give you direction: here's the information that we're finding, here's what agency it belongs to. And we're actually capturing metrics on that as well, so we can identify when it's misidentifying information. We're still in a learning mode; it is still in a beta stage. Yes, ma'am. Senator Brown: thank you, I have a question about that. With respect to the chatbot, are you looking across all of IN.gov? Correct. So if I have a question, DNR, how can I find a lodge that's open or get my fishing license? Or if I go to the BMV? It's across all the agencies. That's correct. Okay.
And so it allows the ability to not have to say, from DNR, how do I get a hunting license, if I don't know what DNR is, especially if I'm a visitor from another state. But, how do I get a hunting license in Indiana, or a fishing license to fish in Indiana? And it may bring you the information from DNR, from the office of tourism on different lakes and properties that we have available, or potentially lodging opportunities as well. Okay. So it's at the very front, so I don't even have to find DNR to know that that's where we get that. That's the goal. So then I have a question. Do we have a way, let's say I really meant a marriage license, not a fishing license. You know, who knows where my head was. And so I go the wrong way, right? And so who or what, how do we audit that, to make sure, you know? Or I just give up in frustration because it didn't answer my question, right? Because sometimes I hate using chat boxes. So how do we monitor that? That's the big question, right? So we're capturing that information of what questions are being asked. We're looking at the results that are being returned. We're asking the individuals, did this get you the information that you're looking for? If not, we'll direct them back to the main website and start going down a path. And we have our folks on the other side analyzing that data to determine, do we have bad links or bad information that we're pointing people to improperly, or is it learning improperly? So we're doing analysis on both sides. And a lot of it comes back to asking that constituent, did you get the information that you were looking for, so that we can make sure we guide you down a better path. Okay. So I'm going to stop you there, because I think this is where people are like, hmm. So it's the same with DWD. So we want the public to know: I've asked all these questions about a hunting license, a fishing license. Maybe I've even gotten very specific, because I'm taking a vacation, I'm going to stay up at Pokagon, yada, yada, yada. And something gets back to me and says, did you find everything? So how do we assure the public, who don't know, that I am going to be gone from my home on blah, blah, blah, and all these things? Because that's what everyone worries about. So how do they know that you have all that information, not you, but the state has all that information about me, but that's not really about me, right? Because that's the concern, right, about our personal privacy? Well, that's the scary part, is we can't control what folks put into a question like that. So if they, instead of coming and saying, I'm looking at a trip to Indiana, I like to go fishing, what are some great options, if they divulge the level of information that I will be in Indiana from March 14 to March 18 looking to do hunting or fishing, and they start putting that information into the various models that are out there, that's the concern that starts to raise, as to what information starts being made available. The models that we've put in place are specifically and strictly accessible to Indiana state employees, from our technical side, our tenants, and our infrastructure and our footprint only. So our data is not being fed into public models or public ChatGPT or public LLMs in any capacity. And that's the big piece that we're working and focusing on right now: governance and protection, to make sure that any data that we ingest does not get made available to anything on the public side, on the outside.
And so that's really, you know, I think Representative Lehman said it at the beginning as well: starting to manage and make sure we continue that level of trust. That's how we continue to make sure we put the information into disclaimers on our website, to identify and notify folks: this is information that we're gathering for tracking purposes, the metric of the question, the results, the response. We're not sharing this information with outside individuals. We're not capturing it and tracking it for details about specific requests that they're asking for. But the fear is whether the public and the average constituent are diligent enough to make sure they don't overshare and put information into these chatbots and questions that isn't necessary. Right. So the difference between me and a chatbot, say, with a store, say it's a fishing store, and I give all this information out there: they know a lot about me, and that's now kind of part of their data system. That's right. You're saying with the government, if I'm asking all these questions about getting a fishing license, going to Pokagon, that's wholly kept within the state, that's wholly within our system. We're not sharing that. You're not selling that to the fishing store so now I start getting all these requests. That's correct. But the other part of it is when you come back and ask me, did I get what I want, how do we know? Well, two things. So if I overshare to the state, okay, let's do the conspiracy thing. Maybe I'm a person who adds too much information, like, can I get reciprocal carry privileges hunting, because I plan to do something awful, horrific, illegal, right? And I put all that in. How do we backstop that in case we have a security issue, and at the same time it's anonymized, so that when I'm asking, you're just checking, you don't know it's Liz Brown in Fort Wayne, Indiana, yada yada, asking. You just know this request came in, did you get what you want? Right, so you're not tracking me, I guess. That's correct. Unless. Is there an unless? Unless I do something crazy, ask something crazy. There's not an unless on that. Okay. So purely on the public-facing side, you're not logging in, we're not asking for personal details or authentication or ID and password or anything of that nature. So we don't know. We do have the ability, just because of analytics, of web monitoring, to know IP addresses and different locations and ranges that folks are connecting and asking questions from, because it's always good to know: if we see a large volume of individuals from Michigan asking about fishing in Indiana, do we target more campaigns for fishing activities and exercises to that jurisdiction? But we're not getting into any personal, specific details of the individuals and the questions that they're asking. We're focused more on, and our system is set up more to focus on, what content is being asked for. Are they getting served that information directly? And they're being asked right then and there, at that request: is this the information you're looking for? It's not a matter of coming back a day or an hour or a week later, because we don't have the ability to do that. Okay, perfect. Thank you. You're welcome. Very good questions. So that's the IN.gov chatbot; it's been in a beta test for a few months now, we've gathered some good data and information, and there's lots of learning still to go.
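To make the pattern Barnes describes concrete, here is a minimal sketch of a site-grounded chatbot: retrieve already-public pages, answer only from them, and log anonymized feedback for quality review. This is an illustration, not the state's implementation; the page data, file name, and keyword-overlap retrieval are hypothetical stand-ins (a production system would use a vector index and a hosted model such as Azure OpenAI).

```python
# Sketch of a chatbot grounded only in already-public site content, with
# anonymized feedback capture. All names and data here are illustrative.
from dataclasses import dataclass
import datetime, json

@dataclass
class Page:
    url: str
    agency: str
    text: str

# Stand-in for an index built from public IN.gov content only.
PAGES = [
    Page("https://www.in.gov/dnr/fish-and-wildlife/licenses/", "DNR",
         "How to purchase Indiana hunting and fishing licenses online."),
    Page("https://www.in.gov/bmv/licenses-permits-ids/", "BMV",
         "Driver's licenses, permits, and identification cards."),
]

def retrieve(question: str, k: int = 2) -> list[Page]:
    """Naive keyword-overlap retrieval; production would use a vector index."""
    words = set(question.lower().split())
    ranked = sorted(PAGES, key=lambda p: -len(words & set(p.text.lower().split())))
    return ranked[:k]

def build_prompt(question: str, pages: list[Page]) -> str:
    """Constrain the model to answer only from the retrieved public pages."""
    context = "\n".join(f"[{p.agency}] {p.url}\n{p.text}" for p in pages)
    return ("Answer ONLY from the public pages below. If the answer is not "
            "there, say so and point the user to IN.gov.\n\n"
            f"{context}\n\nQuestion: {question}")

def log_feedback(question_topic: str, helpful: bool) -> None:
    """Capture aggregate quality metrics only: no identity, no free text."""
    record = {"ts": datetime.datetime.utcnow().isoformat(),
              "topic": question_topic, "helpful": helpful}
    with open("chatbot_feedback.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    q = "How do I get a fishing license in Indiana?"
    print(build_prompt(q, retrieve(q)))
    log_feedback("fishing-license", helpful=True)
```

The point of log_feedback is the one made in the exchange above: record whether the answer helped, not who asked.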
Another tool I mentioned here, you see the production date back in the summer of 2021, again just to make the point that, in various forms, AI has been around for a long time. This is a tool we use called Elastic, which does a lot of our log monitoring. As individuals within our state operations are logging into our systems, we're able to see where they're logging in from, so that we can spot anomalies: all of a sudden a user ID and password is being logged in from overseas, or from a location or jurisdiction they're not typically logging in from, or at a time we're not used to seeing them log in. We're able to get alerts and notifications to make sure that we can identify, before a situation gets worse, if an account has been breached or credentials have been stolen or harvested. So this is very specifically focused around our cybersecurity footprint and protection of the data and the systems at the state. The other thing I'll share with you all is some of the potential uses that have come to us, and this is where, I'm sure Ted will get into it a bit here from the MPH side, our offices are working very closely together, ensuring that AI, as well as any other technology, is being managed and procured and utilized efficiently and effectively. Some of the use cases that we've been asked for from agencies: you look at user productivity, email summarization, calendar review. The Microsoft Copilot tool is the tool that's been talked about a lot, so we're starting to have conversations with firms like Microsoft and understanding the impact of incorporating that into our platform and our environments. Video camera footage monitoring and notifications: being able to look at camera footage without a person per se and seeing, again, anomalies in the footage. If our security cameras happen to see a large crowd shifting at a very rapid pace, a notification gets generated. You have situations, not to speak for the law enforcement world, but with the difficulties with staffing and recruiting and retention on the law enforcement side, being able to supplement some of that footprint with camera capabilities that can give alerts to individuals, so that you don't have to have folks and bodies always sitting there at the desk watching the cameras to identify when a situation has occurred. Conversational response, our contact center footprint: that's another one that's being looked at pretty significantly, very similar to the web chatbot. A conversational interface where folks can use natural language and communicate with an AI-generated voice that can guide them down the right path to identify the data and information that they're trying to find, whether it's within an agency or within the call center side of the footprint. Document review, interrogation, notification, creating cases: once you've determined, hey, I'm looking for a hunting or fishing license and I found it, now how do I apply for it, or can you get me to apply for it? The potential to create that application and start going down that path for you. And then the other big one, content and document review and translation. Translation continues to be a big topic when you think about accessibility, both multilingual and ensuring ADA compliance and things of that nature.
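The log-monitoring pattern Barnes described earlier with Elastic can be illustrated with a small sketch: build a per-user baseline of usual countries and login hours, then alert on deviations. The fields, thresholds, and data here are invented for illustration; the state's actual Elastic rules are not described in this testimony.

```python
# Sketch of login-anomaly detection: compare each login against a per-user
# baseline of usual countries and hours, and return alert reasons on
# deviation. Baselines and thresholds are illustrative only.
from collections import defaultdict
from datetime import datetime

baseline: dict[str, dict] = defaultdict(lambda: {"countries": set(), "hours": set()})

def observe(user: str, country: str, ts: datetime) -> None:
    """Fold a known-good login into the user's baseline."""
    baseline[user]["countries"].add(country)
    baseline[user]["hours"].add(ts.hour)

def check(user: str, country: str, ts: datetime) -> list[str]:
    """Return alert reasons; an empty list means the login looks normal."""
    alerts = []
    b = baseline[user]
    if b["countries"] and country not in b["countries"]:
        alerts.append(f"new country: {country}")
    if b["hours"] and ts.hour not in b["hours"]:
        alerts.append(f"unusual hour: {ts.hour}:00 UTC")
    return alerts

# Build a baseline from historical logins, then screen a new one.
observe("jdoe", "US", datetime(2024, 6, 3, 14))
observe("jdoe", "US", datetime(2024, 6, 4, 15))
print(check("jdoe", "RO", datetime(2024, 6, 5, 3)))  # both checks fire
```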
There's a lot of activity and opportunity for the state's current websites and content and data and systems to be able to provide data in various additional flavors and formats. Yes, ma'am. Thank you. Okay, so with respect to the email summarization, for example. My sense is, let's say I get a whole series of emails about an FSSA or Medicaid issue, enrollment, et cetera, et cetera. So Microsoft Copilot, in this case, that potential platform, reads through the whole thing and summarizes it: they don't know, they are this and this and this, and are they eligible? It summarizes it, right? Correct, that part. So how are we, I assume our chats, if you will, this email system, because as legislators, if we are the recipients of some of these emails and it goes back to different state agencies, our documents are not public, our emails are not public. So how does that work with this? And I assume we have our own lake in the data storage system where all this is separate from others. So Microsoft Copilot, my point is, is not integrating our emails to create a bigger chat system, if you will. Right. So that's the key that we're still trying to understand and figure out, and those are legitimate concerns. When you think about it, agency to agency may be one thing; within an agency may be another. Agency to legislator, agency to judicial, agency to public: it's a very different flavor and variation. And those are some of the test cases that we need to start working on, which is why we've not vetted and validated and made this available for actual consumption at this point. You're getting to where, again, the hairs start to raise up on our necks with concern. You look at the initial use case that folks ask for: without getting into the details of, let's say, an FSSA case, I've just been going back and forth with my manager, and they've given me three or four tasks and I don't remember the due dates, and it's going to give you a summary: you have these four action items and these dates that they're due. So that's the simple use case. But the reality is, you look at AI, it doesn't know to only look for emails from your boss to summarize; it's trying to look at the entire footprint to make your life easier. Those are a lot of valid questions and concerns, which is, how do we put the right guardrails and governance in place? And that's where MPH and our teams come together to work hard to get that figured out and nailed down, to make sure that we're analyzing the data that we can and should, and more importantly, that we're not utilizing data that we shouldn't. Okay. And along that line, when you're talking about the last bullet there, with content and document review, that's the other concern, right? I've had a conversation with our chief technology officer over here for the Senate about that. So if I allow, as a legislator, one of these systems to create a document, if you will, and I put it out there, somebody still has to check that, because now this has become official, right? And that's the problem. So can you just, sort of high level: you can't physically, sort of like what Representative Pierce said, have someone sitting next to it ready to pull the plug or say that's wrong every single time. So what is the process, before it goes live, as you're reviewing it, to feel good that this is actually okay?
It's not going to envelop too much, but also, are we able to separate what is going to be private under our laws and what should be allowed to be public or a FOIA request, right? Sure. Were you finished? Yeah. Okay. I don't know, that's the real answer, right? There's guardrails. The big key, from the IOT perspective, and I'll see if Ted wants to add as well, is how do I make sure that the right pieces and right inputs are going into that analysis and summary. From our perspective and opinion, I don't think any of us are ready to let AI make decisions and take action on its own. So human review and human validation of anything: even if it's generating a recommendation for what could be an action to be taken, that should still go through a human review and validation process that allows an individual, whether it's a caseworker, a parole officer, someone in leadership, to look at it and say, okay, based on all the data that it has crunched for me, because I don't have the capacity or bandwidth to do that, here's the decision. The humans should still be involved, and leadership, making sure that our policies are being enacted properly, our guidelines and processes are being adhered to properly, and that we're not getting poor hallucinations and biased data coming out. These models take a long time to learn, and so to think that you can turn on any one of these and that they're live and effective immediately is a very poor expectation, and definitely not something, from a tech perspective, that we would recommend and put in front of anyone. Which is why, as I mentioned, our chatbot on the IN.gov website is still in a beta capacity more than 60 days later, and will be for a bit, as we're still trying to see how it learns, what it takes to learn, and whether it is learning the right way. That's the big piece for us: long validation and review of the technology, identifying everything it's inputting and ingesting and pulling data from, and human review and validation. To your point, it's not about having a person sit there and watch it process and act. But before a final decision and action is taken, human interaction and human review has to be maintained.
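A minimal sketch of the human-in-the-loop gate Barnes is describing: the model can only submit recommendations to a queue, and nothing executes until a named human approves it, which also leaves an audit trail. The queue, roles, and case details are hypothetical.

```python
# Human-in-the-loop gate: models may only *recommend*; nothing executes
# until a named reviewer approves, and the approval is recorded.
from dataclasses import dataclass

@dataclass
class Recommendation:
    case_id: str
    action: str                 # what the model proposes
    rationale: str              # model's supporting summary, for the reviewer
    approved_by: str | None = None

class ReviewQueue:
    def __init__(self) -> None:
        self._pending: list[Recommendation] = []

    def submit(self, rec: Recommendation) -> None:
        """Models call this; they can never call execute() directly."""
        self._pending.append(rec)

    def approve(self, case_id: str, reviewer: str) -> Recommendation:
        rec = next(r for r in self._pending if r.case_id == case_id)
        rec.approved_by = reviewer   # audit trail: who signed off
        self._pending.remove(rec)
        return rec

def execute(rec: Recommendation) -> None:
    """Refuse to act on anything that hasn't been human-reviewed."""
    assert rec.approved_by, "unreviewed recommendation"
    print(f"{rec.action} (case {rec.case_id}, approved by {rec.approved_by})")

queue = ReviewQueue()
queue.submit(Recommendation("A-1", "schedule eligibility interview",
                            "claimant matches criteria X and Y"))
execute(queue.approve("A-1", reviewer="caseworker.smith"))
```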
Ted, do you have any more to add to that? I could speak a little bit about the process that we're putting in place and how we're working with IOT on that. So, Ted Cotterill, with the Management Performance Hub. For those of you who don't know MPH, as we call it, it really began in 2017 with this idea of enterprise data analytics. The state has all these historical data silos; how can we better leverage data across those silos to enhance decision-making, from a policy perspective or for the more boots-on-the-ground activities that executive branch state agencies are undertaking? A part of that creation in 2017 was also this idea that we should have some enterprise governance around data. So, building on what IOT has been doing since its formation in its current iteration in 2005, the legislature told us to create data quality policy for state agencies and to create data privacy policy for state agencies that builds on existing statutory frameworks. And the state CDO, the chief data officer that heads our agency, has some individual authorities around data transparency and analytics master planning. So through all of that, and working as an OMB agency, kind of an extension of the governor's office, and where we're placed in law, we've begun over the last few years, and particularly when ChatGPT came into the public consciousness early last year, immediately we were having discussions with IOT around how do we enable this. The state is never going to be at the tip of the spear, right, as a governmental entity, but I think we consider ourselves to be a leader in innovation. How do we enable the use of these technologies in a way that improves the Hoosier condition, but does so responsibly, so that we respect individuals and groups, in a bias context, and we're not getting sideways with a data owner as it relates to maybe intellectual property. So what we've done from a process perspective is to put out a policy for executive branch state agencies that adopts the NIST AI risk management framework. The National Institute of Standards and Technology, in the US Department of Commerce, puts out all kinds of standards that govern a lot of IT processes; the NIST cybersecurity framework is one, for instance, that we subscribe to. NIST's AI risk management framework sets forth a large number of, I wouldn't say they're controls, they're just general points of consideration as entities, really any type of organization, adopt an AI-enabled system. So all we did as a first effort is to say we're adopting the NIST risk management framework. State agencies, if you want to innovate with AI, that's great, but let's do a risk assessment in front of deployment so that we've answered some basic questions about that system before it's interacting with Hoosiers, making decisions about them, whatever it may be. I can certainly talk more about that, but thought I'd offer that high level. All right, thank you. Before we go on to the potential liabilities: on the issue of current usage and potential usage, were there some questions? I think Josh had a question. Thanks. Josh Jackson, lay member here. I noticed Microsoft, Google, some of the common names in those public clouds. Are they restricted to Indiana or the United States in how the data is being used? Or is it, like Copilot, in any region that Microsoft deems appropriate? The majority of our cloud operations footprint actually resides in government cloud or FedRAMP-compliant cloud, so that is restricting that footprint to US-based operations. Okay. So my follow-up, I guess, would be for DWD: the engine that you built, was that internally built? That is on-prem. On-prem? On-prem. That is not cloud. Got it. Okay, thank you. Yeah, it was a custom-built on-prem engine. All right. Representative Pierce? Yeah. So I guess this report essentially says we've got three places where AI is currently being used, and we've got, I guess, four or five others that are potential. So is this the whole universe, or just within IOT? Are we sure we know what all these agencies are doing as far as AI? So, respectfully, we never know what everybody's doing, and we also never know what every vendor is out there selling. So the reality is a couple things. One I'll say is, even with all the tools and technology we've had in place, some for over 10, 15 years, AI is being included and incorporated into base products and being added as features and functions every single day. So there's a lot of footprint that's around there.
Most of what I've captured here are some of the new specific use cases that have been targeted and identified because of AI, to give a perspective of what we're seeing and what we're hearing and what folks are asking for. Yes, sir. Then one other follow-up. In terms of, we don't know what every vendor out there is doing, right: the Department of Homeland Security and CISA, the Cybersecurity and Infrastructure Security Agency, put out Secure by Design. Have you put that into practice, on vendors adopting or signing that pledge? We have not. That's actually a great question and a great topic, and we are extremely supportive of what Director Easterly and CISA are doing with Secure by Design. There's got to be that practical balance, though, between secure by design and productivity, as we're still trying to make sure we keep our operations moving forward and not limiting the solutions and services provided to citizens and constituents. Secure by Design has a long way to go to get caught up. It'd be very difficult for us to limit ourselves and put that as a requirement in state contracts at this point. I think it's definitely something we can start to shift towards, and maybe even legislatively identify a path forward there. But it's very hard to get there when especially the large organizations, the large entities, aren't there yet. Good. Cody. Cody, lay member. So this, I see, is the sanctioned usage. To your point earlier, as you're looking at the unsanctioned usage of AI across the organization or the enterprise, are there any efforts around education or awareness to educate the average user on hygiene? Because you have these personal devices, and there's a numerous amount of apps you can download each day with AI. So is there any effort or future plans on education for the workforce to teach that hygiene? So the state has existing regular training through the state personnel department, and IOT has cybersecurity training that I'm sure Tracy can share more about. We're working with IOT right now. We've done all of this sort of policy formulation, and on the back end of the policy formulation is a lot of, okay, how do we operationalize this? How do we build something into the procurement process so there's a check there? That's a big piece for us that we're focused on right now: how do we put boilerplate terms in place in contracts to sort of start to bring parity to the way that vendors approach us, because that's the starting point for state negotiations. To the point of state employees: we've got 30-something-odd thousand state employees, and the individuals are always the weakest link in the process, right? We have a draft sort of do's and don'ts guidelines document that's going through review right now that we intend to put out. You know, we have policy, standard, procedure, guidance, right? It's a guidance document that sits alongside some of these more formal things that we're putting out. Senator Brown: with respect to that and the bring-your-own-device issue. If state employees go to certain sites, there may be screens; on a state system it's blocked.
But if they bring their own device, I mean, like I can get my Senate email account on my own cell phone. If they use their own device, is there currently a way to detect, in the same way that you can detect they've gone to a site they're not supposed to on a state system, is there a way for you to see, the watermarking idea, that someone has used AI? Or is that not possible yet? Does that make sense? A screening type of tool, to an extent? There's a couple different pieces that go along with that. Anything that's connected to the state's central backbone, from a network standpoint, we do have visibility and monitoring on where folks are going and what sites they're visiting. We're able to block certain sites and things of that nature on state devices, and even state cell phones; we have a similar ability to manage, monitor, block, and control what sites and tools and apps are incorporated. For the BYOD and personal devices, if they're incorporating state application and state system usage, we do have a tool, we've done mobile device management, that gives us more control. At least it protects and encapsulates the state-specific data and systems and activities within, let's say, a section of their phone, not the entire footprint of their phone. This is an area that's getting more and more mature as time progresses. But still, a lot of folks are crafty, and there are always ways to find avenues around it and loopholes through it. If someone wants to move data from one side to the other through different avenues, there's absolutely going to be a way to do that. But we're doing a pretty good job of putting as much in place as we can to try to isolate and separate and segregate that data and those systems from the public access side of their personal devices. To follow up on that: let's say I'm a state agency person and I ask an employee of mine to give me some information and data on our current maternal health records or whatever it is, and then I get a fabulous report back. But it's on their personal device, so it's not coming through your systems. Is there a way for me to know, this watermarking idea, that they have used something? They feel good about it, but maybe they didn't actually check it, and now I think I have good data and a good report. So you're talking about relying on that, that validation of authenticity. Within the state system you're protecting me, but outside of this, is there a way for me to know? No. So, in the long term: right now we're turning the ship, right, we're in this big period of transition. All of these web-based tools come with their own click-through terms that the state in most cases cannot agree to, but they're out there and they're easy to use. And government, and probably most large organizations at this point, haven't set up fully functional internal versions of those things that are safe and trusted. In the long term, that's probably where we're headed: we have sort of a pressure-relief valve, so that folks don't feel the need to go out to a web-based ChatGPT, because they have a tool right on their phone that they can use that's appropriate. I don't want to scare. I think your point is, you're going to provide the tools so that they can be more efficient using these AI tools, and not going off to, like, the cheap version, so to speak, that's quick and easy but doesn't have very good data in it. Right.
And so you're going to provide the tools they need to make our employees efficient but also secure at the same time. I think that's the vision, yeah, absolutely. Okay, other questions? Yes, go ahead. A sort of follow-up on Senator Brown's comments. I'd like to pose a hypothetical so that we have some concrete examples; that would be helpful to me, at any event. So the state has existing protocols, by statute, by reg, by administrative policy, and by technology, that say what's private and what can be public. I think that sort of defines the universe of the walls. So let's put inside those walls a tax return, a search warrant, and a search warrant that's been generated in a revenge porn investigation. So there are images that are private. When AI becomes fully functional in the state, how do those protocols that I generally defined before, that keep things in the box, continue to keep them in the box? Go ahead, Ted. So to your point right at the top there, the state has existing sort of administrative protections, law, policy. APRA lists a lot of records as confidential; the Fair Information Practices Act, essentially our internal government privacy act, further says that all personal information shall be kept secure by state agencies. So I know the rules wouldn't change; that's not what I'm asking. I'm asking what effect AI has on the operation of the rules, or the effectiveness of the rules. I think it makes it more difficult, because we have to put technology in place to implement actual technical controls against the movement of data in certain instances. So in an AI development context, the context of use is so important, right? We have to understand that AI over here in DNR to do a thing may be completely different than an implementation in DCS, right? So we're going to ask a different set of questions and, as a result, impose a different set of probably technical controls, working with our partners at IOT, to ensure that the right people are seeing the right information for the right purposes, and then, following what the state archives tells us, that we're destroying it when there's no longer a statutory purpose to maintain it. So does that require, or might that require, a further subdivision of the rules as they exist now, or as they're applied? Let me just go back to my example of the search warrant in the revenge porn case. There are two people out there who are involved: the putative defendant against whom the warrant's been issued, and the victim. The victim's interest is in not having that image spread publicly under any circumstance, save perhaps in court, at trial. The state's interest is in not having the search warrant disclosed to the bad guy, because means and methods of operation of law enforcement are dangerous to disclose. So that's why I asked: does the type of protection, the type of barriers or barricades that you put up, get down to that level of granularity, such that in order to put AI in a fully operable mode, you really have to start from zero? In a product development context around privacy, when you create an IT tool and you have privacy counsel involved in that, often privacy counsel will ask for a data flow diagram: what data elements are being created or collected in this transaction, and where do they go? And then from there we can talk to our architecture people and understand who has access to those various locations, right?
That's all a part of the development process. So what we've done, through the policy that we put in place last year, was to say all of this AI implementation work is a team sport. MPH has a lot of competency around data science and data engineering. IOT owns enterprise architecture and has a firm grasp on cybersecurity, right? And then we have business owners, business users, CTOs in each agency, and some executive sponsor somewhere that has this idea of what they're trying to accomplish. It's all of those folks in a room working against a standard that is, one, just generally respected, that's why we've looked to NIST, and two, applied uniformly across all of these implementations, and then we adjust for context. Let me add to that as well, just to give a little more perspective. Keep in mind state operations: from the executive branch side, we're operating with 100-plus different agencies, boards, commissions. And like in your two examples, between Department of Revenue data and law enforcement data, they're also bound to the federal rules and guidelines of how that data is managed as well. So from a revenue standpoint you have FTI, federal tax information, and from law enforcement you have criminal justice rules and regulations. Those footprints are already segregated and separated as it comes down to the state. So in order for there to be concern cross-agency, that has to be explicitly and intentionally decided upon by the folks that are engaged in whatever activity is going on there. Again, this is all encapsulated within a state footprint that is not publicly accessible. So as you start to refine the requirements, from federal rules, state rules, and then specific agency administrative codes and policies, it gets more granular. And it has to, in order for the governance to make sure that, whether it's an AI tool, a dashboard, or a report or analytics that's being requested, the right information doesn't get into any of the wrong hands, no matter what mechanism it's available or being requested from, through a report or through an AI activity. Thank you. Further questions? I think it leads right into your next slide. Sure. Potential liabilities. I don't plan to consume all the time here today, and we've talked about a lot of this, but the potential liabilities that we see from our side: the loss of protected, sensitive data. That's our biggest concern. I know our folks and the MPH folks are very closely looking at and thinking about those issues. Inclusion of unnecessary data causing bias, whether it's intentional or unintentional. The big piece, as you start looking at the efficiency and effectiveness offerings and sales pitches you're hearing: are our agencies ready for a tool that's AI compliant or AI enabled? Is the data clean, accurate, current? Is it updated? Do we have good data hygiene? Are there different aspects that are making sure the data is being manicured and maintained regularly? And then, back to Senator Brown's point, the validating of the accuracy of that work product, especially if it happened to be produced from an external vendor or partner. And then lastly, just an update: part of Senate Bill 150 requests an inventory. We have started to put the framework in place for that. There are fields that we will be adding to our current inventory store to capture that information from agencies, and we expect to have that ready for capturing that data at the end of September. So we should be able to start capturing and pulling a lot more of that AI inventory together soon.
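The specific inventory fields were on a slide and are not enumerated in the testimony, so the record below is only a hypothetical illustration of what one entry in a Senate Bill 150-style AI inventory might capture; every field name here is an assumption.

```python
# Hypothetical illustration of an AI-system inventory entry of the kind
# the SEA 150 reporting requirement calls for. The state's actual fields
# were not enumerated at the hearing; these are assumptions.
from dataclasses import dataclass, asdict
import json

@dataclass
class AISystemRecord:
    system_name: str
    owning_agency: str
    vendor: str | None           # None for systems built in-house
    purpose: str
    data_sources: list[str]
    public_facing: bool
    production_date: str         # ISO date, or "pilot"/"beta"
    human_review_required: bool

record = AISystemRecord(
    system_name="IN.gov chatbot (beta)",
    owning_agency="Indiana Office of Technology",
    vendor="Microsoft (OpenAI tooling)",
    purpose="Answer navigation questions from public IN.gov content",
    data_sources=["public IN.gov pages"],
    public_facing=True,
    production_date="beta",
    human_review_required=True,
)
print(json.dumps(asdict(record), indent=2))
```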
Okay. Questions? Senator Brown? All right, so I was going to ask: under the potential liabilities, there's also the cost. So what is the cost of these systems? I mean, are these the ones you already had put up on the previous slide? Are we testing them out on a potential-buy basis, or have we already purchased some of these systems? And what is that going to look like? Because the efficiency is great, but we've seen companies spend literally millions and billions, and they're starting to pull back because they haven't gotten their ROI. So how is that going to look for us as a state? Or is it too early to say? It's a very good question. I think it's too early to say, definitely, from the ROI standpoint. We're truly looking at pure costs right now. Everything that we're talking about has had some level of cost implication, whether it's product licensing, consulting resources, contracting resources, or equipment for crunching the data if it's an on-prem footprint and activity. But we're nowhere near, I think, Ted, as part of your evaluation, doing ROI analysis. So part of the risk framework review does ask for an ROI analysis, but we're nowhere near a point where we can actually give any indication beyond saying it's looking expensive. Go ahead. So we were invited by Gartner to attend a presentation that they did, primarily to chief financial officers, around AI, just to try to understand what the CFO role is and what the costs could be. Looking back at some of those notes here, it's not fresh, so forgive me, but this was from June. Cost overruns were one of the sort of enterprise-level, what they termed, AI stalls. The cost profile of AI is unique: implementations have unknown costs, but the estimates appear to be off, from their perspective, anywhere between 500 and 1,000%, with ongoing sort of maintenance and operation costs as the big unknown. So right now, did you say a cost savings by using it, or 500% to 1,000% over? Well, they said cost estimates are off by 500% to 1,000%, and the big focus was ongoing maintenance and operations. So we've already in this conversation mentioned procurement a few times, and the way the state buys things. Certainly we have a lot of good processes in place around buying technology, because technology is inherently different than buying these desks and chairs, right? But as Tracy mentioned, existing vendors are saying, oh, we have a new AI thing that we're rolling out, and now you're in our cloud; so through their change management process, they've just told us the AI is now enabled. So from a procurement, vendor management, third-party risk management perspective, we have to, over time, get a better handle on that. It's both of our agencies, it's IDOA, the attorney general's office, working to put good contract terms in place, but also real teeth in our change management process for vendors, in the way that they have to notify us, and then we have to have a conversation or do an assessment or something before that goes into place. I'm a little bit familiar with the contract issues because that was a little bit a part of Senate Bill 150.
So I guess that's what I'm thinking of with respect to it. For example, years ago, many cities instituted the red-light camera thing, right, to catch speeders and people running red lights. And then a lot of them turned them off because they were so expensive, because of the storage issue. So thinking of just one of your examples from a previous slide, the DOC thing. That would be great, particularly because some of our DOC facilities have issues with hiring up. And so if you could, for example, have a monitoring system in place that was AI generated, so if something was amiss in the lunch area, blah, blah, blah, it would alert, so you don't have to keep multiple people on that, right? But the flip side is you have to preserve all of that. So as opposed to, you know, Sergeant Smith sitting there every day and watching, now, with this AI Sergeant Smith, you're going to have to preserve that. That means cloud data storage, and that gets expensive, right? Correct. I mean, that's part of this issue. And then you add in, maybe we never access it, so it just sits out there, but we store it, and then we have to put in the whole risk management point, and how long do we have to keep it, et cetera. That was the other thing. You know, I use these simple analogies like the red light, but isn't this the same thing? It's going to be expensive. The retention policies that go along with that, and the volumes of data that usually need to be kept available in order for the engine to learn and look through the anomalies and look back through history, those volumes are getting large. And so while that's really the brainpower behind the engine, it is an exorbitant cost as well. That is a really good point. So say, for example, we started this, because this is going to be completely ours, so we're not integrating other state systems or out in the public: we wanted to institute a crowd control, call it an AI system, that is being sold to us, and we're going to use it in our correctional facilities. The first day of deployment, it has no data, so it is learning, and someone is sitting next to it and saying, no, those two individuals are not doing anything illegal or inappropriate, there's no violence there, blah, blah, blah, so it's okay. And over time it will learn: that's an interaction that should be flagged. But it takes a lot of data. So even if it's all benign data, if you will, nothing is happening, you have to continually feed this beast until it eventually understands what is the good versus the bad. So that's part of the problem, right? That's the expense and the storage issue. But this is another instance where the state has existing policy and procedure around how long we keep things. Just last session, I remember a conversation with Representative Pierce around this: the state archives issues records retention requirements for all state agencies and all units of local government in Indiana. Those become increasingly important as we go forward, because a state agency doesn't maintain a statutory purpose to actually continue to maintain the data after that retention period has expired under law. So we should be destroying it if there's no articulable use for it anymore. As we implement more AI systems, that should be part of the process too. Although it's kind of interesting, because you can destroy everything that's stored, but it's really not destroyed, because it's been integrated into this AI machine or this AI, right? Kind of ironic.
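The retention point above can be sketched mechanically: each record class carries a retention period, and anything past its period with no remaining statutory purpose is queued for destruction. The record classes and periods shown are invented for illustration, not Indiana's actual retention schedules.

```python
# Sketch of a records-retention sweep: flag records whose retention
# period has expired. Periods here are illustrative, not the state
# archives' real schedules.
from datetime import date, timedelta

RETENTION = {                    # record class -> retention period
    "camera_footage": timedelta(days=30),
    "chat_transcripts": timedelta(days=365),
}

def expired(record_class: str, created: date, today: date) -> bool:
    """True once the record has outlived its retention period."""
    return today - created > RETENTION[record_class]

inventory = [
    ("camera_footage", date(2024, 5, 1)),
    ("chat_transcripts", date(2024, 7, 1)),
]
today = date(2024, 8, 1)
to_destroy = [(c, d) for c, d in inventory if expired(c, d, today)]
print(to_destroy)   # only the 92-day-old camera footage is flagged
```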
Tracy mentioned DWD's workforce recommendation engine, or Pivot, as they've rebranded it. That's an instance where we leveraged MPH, working with the folks at DWD. They wanted to train this model on education data, K-12 and higher ed data, along with the data that DWD owns in the unemployment insurance context. DWD didn't maintain a clear statutory authorization to see the other data directly at the level needed to train this model. So we did the training: MPH has statutory authority to access all of it. We did the training in an IOT-licensed environment that MPH sort of has control of, then separated the actual model from the personal information and handed the model to DWD to run and productionize. So from there, DWD's environment now doesn't need that big bulk of training data that it initially was injected with. That's interesting. So different agencies have different rules on data retention, and at the same time, what level of permission, assuming the feds are out of the issue, HIPAA, whatever it is... right, what level of permission are we willing to give to break down the walls so that the data can be linked? So you're looking at maternal health, and now you want to add in education, and then you want to add in workforce development. And so all those silos have to come down. But when you integrate it, then are you mucking things up? Right? Well, you said you pulled it back out again. Well, an important distinction there, in my mind, is a transactional system versus an analytical system like what we have at MPH. So in a transactional context, DOE is collecting the daily student record, right, and that system has to be on to receive that data from all of the schools across Indiana. DWD is interacting with unemployment insurance claimants all the time. To do a thing like this, what we need is for the MPH system to pull out only the data that we need to train a model. We have a copy of it in this MPH bucket, we conduct the activities that we need, and then we pull the model out, and the actual separation goes through. We have a policy documented online; we refer to it as our data review team and our HIPAA privacy board that we've set up, and the data review team reviews that data product, ensuring that what's going out is only what was intended to go out, and then tying that downstream use back to the statutory purpose. So where other states probably still struggle with this, where they have a health and human services agency that does those things and they have an education agency that does that, connecting that data is very hard because of FERPA and HIPAA and 42 CFR Part 2; they can't get past those barriers. What the legislature did for Indiana in 2017 in creating MPH was sort of create a data Switzerland where we can come in. Again, we don't operate in that transactional context; it's only to do these kinds of special projects. So we were prescient. And I remember I was newly elected when we passed that, which I did not think at the time, but I would have to say so. So are you telling me that other states don't have something like the Management Performance Hub? So they don't have the Switzerland that can make these AI models and pull the data out without interfering legally with any of those barriers? All other states do not. Wow. Yeah.
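What Cotterill describes, training inside a controlled enclave on joined data the receiving agency cannot see directly and then handing over only the fitted model, can be sketched as below. The data, features, and model choice are synthetic stand-ins, not Pivot's actual design; the sketch assumes scikit-learn is available.

```python
# Sketch of the enclave separation: fit a model on joined person-level
# data inside the analytics environment, then export only the model.
from sklearn.linear_model import LogisticRegression

# --- inside the MPH training environment --------------------------------
training_rows = [            # joined education + UI data (synthetic)
    {"credential": 1, "weeks_unemployed": 2,  "reemployed": 1},
    {"credential": 0, "weeks_unemployed": 20, "reemployed": 0},
    {"credential": 1, "weeks_unemployed": 8,  "reemployed": 1},
    {"credential": 0, "weeks_unemployed": 30, "reemployed": 0},
]
X = [[r["credential"], r["weeks_unemployed"]] for r in training_rows]
y = [r["reemployed"] for r in training_rows]
model = LogisticRegression().fit(X, y)

# Review gate: only the fitted model crosses the boundary; the joined
# person-level records stay in the enclave (and are later destroyed).
del training_rows, X, y

# --- inside the agency's production environment --------------------------
print(model.predict_proba([[1, 4]]))  # score a new claimant, no PII needed
```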
So from a management standpoint, we've had some great questions here about AI and the risk and everything. We know there's limited budget, people, resources, time, and just ability to change. From a design standpoint, can you speak a little bit about the process of how you're gathering these AI risks specifically across the organization and prioritizing them? And is there a line of demarcation, like, we have the means to tackle the criticals, we can't do high, medium, and low? Is there a process in place to track that and then share alignment on what our focus is?

So the process that's in place right now: the first gate, if an agency wants to do AI. We've designated an agency privacy officer in each of the 100-plus business units, and that APO, as we call them, is tasked with submitting a readiness assessment to us. It asks what we see as very basic questions about the proposal. Do you have sufficient staffing resources to go through the implementation process, but then, importantly, maintain and operate it for the long term, so that hopefully we're not on the hook with a vendor to do that? Do you have the requisite IT infrastructure? Have you had a conversation with IOT? They don't need to be blindsided as agencies try to do these things; they want to be involved in the process so they can help inform and bring to bear the benefit of enterprise infrastructure and everything they do. What does the financial situation look like to maintain this? MPH is an OMB agency, so we want to understand that an agency has sufficient resources to do this. But then we get into the much more substantive line of inquiry: what does the regulatory frame look like in the space where this AI implementation is going to live, the data you expect to ingest, the outputs it will create, the types of individuals you're interacting with. Because again, in my realm, in privacy, context matters, right?

So we're asking some basic questions at the outset, and that's going through a review in our office. As I talked about in the larger NIST context, we designate a project team: many individuals with different competencies, sort of following the HIPAA model, that look at a data product to determine if it can go out. An AI implementation is similar. Many people with different competencies, the same way this committee is representative, right? We have IT, we have legal, we have operations, we have policy. Everybody's looking at it through their own lens, and then they're putting questions back to that state agency. That's just the first gate. Ideally that doesn't take more than a few days. We're at the very beginning of this; operationalizing it is hard.

Then we either determine that we can grant an exception to the NIST portion of our policy, in all or part. The IOT chatbot is a great example: it's a locally hosted model, trained only on already publicly available IN.gov data, and IOT worked with us. An important component of our policy, I think, is that Hoosiers need to receive notice when they're interacting with an AI-enabled thing. They need to understand that black box. If you click on the beta instance of the IN.gov chatbot, a notice pops up and gives you a very easily readable set of guidelines: hey, this is how this works, this is what's happening.
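A rough sketch of what that first-gate triage could look like as a structured form, purely illustrative: the field names mirror the kinds of questions Ted lists (staffing, infrastructure, IOT consultation, funding, regulatory context), but the actual readiness assessment's fields and routing rules are not public.

```python
from dataclasses import dataclass

@dataclass
class ReadinessAssessment:
    # Field names are illustrative stand-ins for the first-gate questions.
    agency: str
    has_staffing_for_operations: bool
    has_it_infrastructure: bool
    consulted_iot: bool
    has_funding: bool
    regulatory_context: str  # e.g., "HIPAA", "FERPA", "none identified"
    data_to_ingest: str

def first_gate(a: ReadinessAssessment) -> str:
    """Rough triage: return to agency, exception review, or full assessment."""
    gaps = [name for name, ok in [
        ("staffing", a.has_staffing_for_operations),
        ("infrastructure", a.has_it_infrastructure),
        ("IOT consultation", a.consulted_iot),
        ("funding", a.has_funding),
    ] if not ok]
    if gaps:
        return "return to agency: unresolved " + ", ".join(gaps)
    if a.regulatory_context == "none identified" and "public" in a.data_to_ingest:
        return "candidate for policy exception (e.g., a public-data chatbot)"
    return "proceed to full NIST-based assessment"

print(first_gate(ReadinessAssessment("IOT", True, True, True, True,
                                     "none identified", "public IN.gov pages")))
```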
So from there, if the implementation doesn't get the exception, that's where we go to the full NIST assessment. Our policy is still very young; we've not done that yet. One of the things MPH is trying to figure out is how we ensure we have the right resources from a data governance perspective around assessments and evaluations. We're already doing data maturity assessments in state agencies; our CDO has asked our team to do that. We have a privacy maturity assessment and a privacy impact assessment program that's still pretty light. Functionally, agencies want to do these AI assessments; vendors are selling AI-enabled stuff. So if we're saying you have to go through these gates first, then, being housed in MPH, where the whole goal is to enable innovation with data, responsibly, how do we make that easy for the agencies? We're still working through that last piece.

Great. Thank you. Any additional questions?

So I have one. I would say, as a representative, one of the things I hear more than about anything else is complaints from constituents on getting access to data. You know, I'm checking on my unemployment and I'm on the phone for 45 minutes, then I get sent to another line, then I get cut off and have to call back, and then they're mad, so they call me. Are we moving toward that information being available to the chatbot, to the chatbot conversation? If you're directing them to workforce development, can they ask it, what's the status of my unemployment, and get to that point a lot quicker than 45 minutes of hold time? Across the spectrum of all of these interactions, is it at that level of getting people that kind of information?

Yeah, I would say most of what's going on now is really bridging a gap between what's already publicly available and what most receivers of services can get to through an online system. The state houses and operates over 2,000 different applications across all of our state agencies. So user self-service access is available in almost every agency, at almost every level of credentialing, permitting, and licensure, any of the footprints that are out there. And a number of agencies are continuing to modernize and make those more easily accessible and available as well. The question starts to become: for folks who aren't technically savvy and don't have the same level of access, is there a way we can give them more conversational, readable, simple access to that data without them having to remember usernames and passwords and things of that nature? That's the world we're starting to look at and understand, and that really is going to require significant investment and time from the agencies themselves to make sure they're meeting their mission goals and modernizing some of the technology behind it.

So it kind of goes to what we talked a lot about, you know, fishing licenses, where you don't hold any data other than how many people inquired and where they inquired from. But if we get into these issues, like, I want my unemployment status, how much unemployment has paid me so far, that's going to require you to ask me a lot more detailed questions about who I am. Correct. Although the Department of Workforce Development already has the system that tracks your unemployment.
And as an unemployment recipient, you can log in and see that information directly. Okay. So that's why I think there's a little bit of a disconnect with people, because I tell them, you can. One particular question was, where's my unemployment? You can go on, create an account, and track it yourself, correct? Well, I just called them and they didn't answer their phones, or whatever. And I think people are moved by what they hear. I just got a call today from somebody who said, I saw on the Internet I can get this, this, or this; is this a good thing or not? I said, if you got it off the Internet, the answer is no. It's obviously trying to sell you something.

But I think people are looking for the fastest, safest, and easiest way to access data, kind of without that login process. I don't want to have 42 different logins and passwords; I want to be able to just get online and get that information almost immediately. But again, you have to find a way to know who I am. Otherwise I could ask to see what Matt Pierce has been getting in unemployment, et cetera, and that opens up that whole can of worms. That's exactly right. Yeah.

So what's on the horizon for that path to easy, accessible data? And I don't want to go down this discussion today, but you hear a lot about facial recognition and things like that, and I think that's another issue that plays into this whole AI thing moving forward. So that may be for our next session, or two or three; I'll defer it.

I mean, we're making a lot of progress. The technology has come a long way. Again, it requires investment, both time and resources and financial investment as well. But we've got a pretty good cadre of available activities, actions, and filings, rental assistance and the like. We stood up a number of online systems through the pandemic because of the inability to get into buildings to get things done. So I can't think of any transactional service that the state's providing in person that it's not also providing online. The access is there. The simple approach to getting to it, that's the question, and that's the challenge: user interface, user engagement, customer access.

We still have, and I think there's one statute, I don't know the code offhand, that requires us to still accept applications on paper. So there's still that balance between how do I move forward in technology while still not leaving anyone behind. And so we're stuck in that. You know, I've had folks ask me all the time, don't you want your website to look and act and work like Amazon and Google? And I said, I can't afford that.

Well, and I would say this, from my perspective: 30 percent of my county is Amish, which means if they want data, it has to come through a written request, with paper sent to them. So in one sense, how do we keep moving forward with all this technology without cutting off a segment of our population that has either rejected technology, as they have, or just hasn't adopted it? I mean, could Bruce Borders still get this on his flip phone? He's probably one of the only reps I know who still has a flip phone, and that's just it: it does what he wants it to do. He can call; that's all he wants to do. Right? That's right. So are we forcing people to move toward something they don't want to move toward?
And then there's those who, just because of their belief, have rejected it. Yeah. I think we can't let those people fall away completely or be left at the curb.

Other questions? Adam?

Partly a statement, partly a question. If you've noticed, Tracy and Ted have both said numerous times as they're talking about this: we already have to do this. We already have to keep this data private. We already have to look at the business value. We already have to judge the cost. In your last statement, though, Ted, you talked about working on an AI assessment. Explain for the group what is different. Because if I had a really good, really smart group of people who would scrub the data that's been proposed by the agency, and they could do it as fast as AI, that's what we're talking about, right? Intelligence. It's artificial, but still intelligence. So a group of people does it; you'd look at the business value, whether it's private data or public data, all of that. Tell us, and Tracy, you could talk about this too, the differences or the factors AI presents that differ from every other technology decision you've made throughout your career. Because I don't think it's as much as we think. As a group we're asking all these questions. Mister Barrett, I respect your questions, but those data are private. They've been private since I went to law school, and they'll be private long after I'm dead. That doesn't change today, right? We're not releasing AI to just go scrub all that. So I guess, Ted, if you could talk about the assessment you're building, and maybe Tracy can add to it, I think it would help give context to the real business problem we're talking about.

I think what's different here for us is that AI presents this ability of a computer to be humanlike. With a traditional IT system, we're used to that interaction; the dynamic is changing, and it feels much more like we're interacting with a person. From a purely contractual, state government bureaucracy perspective, my mind goes immediately to what we were talking about before: if a state agency needs a statutory purpose to process, to collect, to maintain, to use, and eventually to destroy state data, data that is passing through these halls, certainly our third-party vendors need to be held to at least that standard. Right now we do that fairly informally through the state boilerplate contract that the Department of Administration maintains. What we don't yet fully have in place, and where I think we need more resources, is refining our existing processes and making them more scalable.

So, to talk about this in the context of this assessment: it's recognizing that these types of IT systems pose new risks. When we're building a thing, and there's a foundational model from Google Cloud, and then we have a local IT vendor that's going to build some customization on top of it: what does that intellectual property situation look like? Who owns the inputs? Who owns the outputs? Who trained this, and on what data? We want to understand that, if it's going to be making decisions about or interacting with our population. So from a contract perspective, we can put in place all sorts of administrative controls. Vendor, you attest to us that you're doing this, you promise to do this, and here's a penalty if you don't.
But we talk a lot about trust but verify, and the "but verify" in this instance is something we want to do before the fact, and that's this assessment. To do the assessment, we need either to farm it out to third-party assessing organizations, groups that aren't in the data analytics or IT-building space, that do these assessments professionally, or, probably more likely, to build the capability ourselves. When I talk about vendors' help: the state does a lot of process stuff really well, and third parties help us innovate, which is a good thing. But the long-term maintenance and operation of those systems, speaking just as Ted for a second, probably shouldn't be farmed out in most instances, though there are cases where it's appropriate. We probably need this assessment competency in house to evaluate these tools under our existing framework.

Now, I'll step back and end by also recognizing that the current framework, and I hope I made this point at the beginning, is our first effort. This came into the public consciousness, and everybody wants to do AI now. The agency heads and their CIOs and CTOs want to improve the condition of their constituents. That's a good thing. We want to enable that, but we want to do it responsibly. So we immediately looked to NIST and built a maturity assessment on top of it. There's probably going to be, over time, a better way to do it that's more efficient, and we'll pivot to that when we feel it's right.

Just to add in: Adam's got an extremely valid point, and I'm glad he brought it up. The reality is that all these questions we're talking about here are the same questions we have in basic, general technology decisions; we're just adding an extra layer of complexity and cost when you do it from an AI perspective. It's the evaluation of how we keep making sure that our agencies have missions they need to accomplish, goals and services they're trying to provide, and what our actions and activities are to enable that as effectively and as efficiently as possible. And that's not always a technology tool. There have been scenarios where it's, hey, Tracy, can you guys build a program that goes out and identifies and contacts these 2,000 people to determine whether they're valid or not? And it's like, yeah, but how quickly can I get ten people in here on the phones making those calls and have it done, compared to getting a programmer up to speed on what you're trying to do? Those are the evaluations we go through on a regular basis. This adds another level of complexity when you look at letting a tool do it that has unknown levels of programming, coding, and development, and maybe not secure-by-design concepts behind it. These are all standard actions and activities that we take on regularly, and this is just the next iteration. And not the final one; we know there's another one with quantum around the corner. This is the iteration and evolution of technology in our daily lives.

Bill? I think my question has been answered well by the last two comments, but to be clear, my question wasn't how do you make data private; the state's done that for years. The question, and it's the heart of what we have to work on, is how do you keep data private as you transition into the use of AI?
And because it is intelligence, the question is more pointed than for a device that simply answers rote questions. This is terra incognita for the whole world, and we're all trying to learn it. That's why the question remains relevant. And a specific question that comes from that is: do you require vendors to disclose their algorithms, so you can look inside what they're doing and know that something is going to be treated the way you expect it to be?

So we've not gone through our full NIST assessment yet; we're in conversation around one of those. I think that would be the intention. NIST has seventy-odd, I think it's 72, subcategories where it asks very specific lines of inquiry. To effectively answer those and determine a maturity on each, we would need to see under the hood.

Senator Brown? Well, I just want to echo and appreciate your comments, Ted and Tracy. First of all, going back to what you said, Tracy: that there's not a transactional service across the 100-plus agencies that is not automated in some way already. I think that's actually quite amazing, because most of us don't interact with the state that much. Maybe we do our BMV registrations online, but unless we're engaged with FSSA and checking our Medicaid account, or checking unemployment, those are all one-offs. But the fact that it's all automated is nice, because it puts us ahead of the curve. From the little bit I am starting to read about this, a lot of other states, because the feds aren't doing anything, are trying to get efficient: how can I use this tool to make our agencies more efficient? And that's really not what we're needing right now, because our agencies are efficient; we have moved that workflow so that people can see it, and that's what our constituents care about. Now, you've got a high Amish population, and perhaps they can't go online, although some can, to see that workflow: where I am in the process, my permit. Take the professional licensing agency. They've started to up their game; they were one of the last, quite frankly. Now I can see where I am in the process. It may still take a while, but at least I know they got my documents, or wherever it stands in the process. I think that's really important.

And so I guess I would say my comment is: that actually puts us in a really good spot. We can choose, or not, to go down the path, maybe with one agency or lots of agencies, to look at these models and these products, if you will. But we don't need to because we're not efficient enough; it's more, how can it enhance the product we're offering? I think that's what's nice about the chatbot. I don't know what I don't know. So when I'm starting to ask the state, I'm thinking of being a landscape architect, getting licensed, and so on, where would I even go? Do I need to? I can ask those really broad questions, right? That's a good place for it, because we don't want someone, you know, sitting by the phone answering those.
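Ted's point about answering a maturity level on each NIST subcategory suggests a simple rollup. Here is a minimal sketch with hypothetical scores and made-up subcategory IDs; only the four function names below are taken from the NIST AI Risk Management Framework, and the real subcategory text lives at nist.gov.

```python
from statistics import mean

# Hypothetical maturity scores (0-5) keyed by NIST AI RMF function and a
# made-up subcategory ID; the framework's real subcategories are at nist.gov.
scores = {
    "GOVERN":  {"1.1": 3, "1.2": 2, "2.1": 4},
    "MAP":     {"1.1": 2, "3.4": 1},
    "MEASURE": {"2.5": 0, "2.7": 3},
    "MANAGE":  {"1.3": 2},
}

def rollup(by_function):
    """Average the subcategory maturity scores within each RMF function."""
    return {fn: round(mean(sub.values()), 2) for fn, sub in by_function.items()}

print(rollup(scores))  # e.g., {'GOVERN': 3.0, 'MAP': 1.5, 'MEASURE': 1.5, 'MANAGE': 2}
```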
But I think it's also your point that if ten people could have been hired on contract to do the phones, which, you know, was the whole unemployment mess through Covid, et cetera, you have those people out there, and in a sense, when their job is done, they're gone, instead of buying this big system that we really don't need all the time. That's an important piece. But so is the fact that we're looking at the places where there are gaps right now, so to speak, to go back to the chatbot, and not because our agencies themselves are not doing their jobs; they have already integrated technology. I think that's really important as a takeaway.

So even though, as I said when I visited the cybersecurity, what's it called, the council, yes: government's always going to be behind. In this case, it's okay, because we have pushed our agencies to get out there, be more efficient in their technologies, and adopt what is appropriate to serve constituents. But we're not going to be looking to adopt these really expensive, bright, shiny objects just because they're cool and fun tools to play with. There are so many unknowns. People don't even know how to audit these yet, to even see that they're actually doing their jobs at the end of the day. So I applaud you two for having brought this to us in a way that's honest about what you don't know. That's why we don't have to rush to judgment on some of these things. That's really reassuring. Thank you.

And maybe to build on that: I do think we're a bit of an outlier, maybe ahead of the game on some of this. But what have you seen in other states? I mean, we're all part of some organization, NCSL and NCOIL and everything else, where we go and chat and hear, my state's doing this, here's what your state's doing. Are you part of any of those organizations, and what are you seeing out there? Either you come back and go, man, I'm glad we're not doing that, or you come back and say, you know what, we need to develop that a little better, a little faster, a little stronger.

So my office, my team, we're part of the National Association of State CIOs, and at the CIO level, my counterparts from the 50 states and the four or five territories meet regularly. I was just in a meeting in Milwaukee about three weeks ago, and this is a hot topic. But what are they doing? We're all cautiously trying to figure out what to do. There's one use case that I think our friends out in North Carolina have done, and you may get a kick out of this: actually interpreting legislative code and bills to understand what agencies need to be focused on and prepared for, to testify or to work on as their sessions are going on. That's been the biggest item we've seen and talked about that has actually made it into a production state. I think the agency said their name was mentioned in 40 different bills and they didn't have the bandwidth to digest them all, so they were looking to an AI tool to make sense of it, summarize it, and prioritize their time and attention. So it's very cursory use cases like that. It's content summarization. Productivity seems to be the thing folks are dying for and talking about nonstop.
And maybe that's because of some of the hype the vendors are putting into their sales pitches as they talk to different jurisdictions across the nation. But we're all in a cautiously-moving-forward mode, looking to see what we're missing, what we're doing, and how we make sure we're doing this the right way and protecting people, while increasing collaboration and the effective output of that value.

I'd add, from our perspective in the privacy space, we're lucky enough to be able to glom on to the CIOs through NASCIO; the chief information security officers and the privacy officers also have a group that comes together through them. There's also the International Association of Privacy Professionals, the biggest privacy association in the world, just shy of 100,000 members worldwide, I think. They recently released the first, I'd say, commercially accepted AI governance certification. Another team member and I, my colleague Jen Cooper, attended their first AI governance conference, where we heard from people worldwide, particularly in the European Union, about how they're tackling this. They often take the hard governance approach, and ours tends to be a little softer, but we have a lot we can learn from that, too.

All right. Other questions? Comments? So maybe a couple of things as we wrap this up. One is, you said moving cautiously, and I wonder what factors go into that moving cautiously. Because I want to go back to what I think is the biggest issue I have: data versus human, that loss of personal touch. How many times have we all been in a situation where we see something and say, this doesn't seem right? It's that intuition we have as humans, to say, I don't think this is good; it's going to have a negative impact on something I didn't anticipate. Whereas data is cold. Data may say, yeah, you're going to negatively impact this X group, but we don't care, because we're data, right? We're giving you the facts, not the feelings. So as you start to move through these things, we never want to lose the human touch of government to the public. I think there's already a disconnect in a lot of that; people already think government's disconnected from them. But the more we go into this: they want faster, they want cleaner, they want smoother, but at the same time they love to hear a voice, they love to touch a hand. So how does all that balance together?

I'm going to answer one piece, and then I'm going to put Ted on the spot to answer more specifically to that. From our world, there are a few gates we've been trying to get through in order even to say we're ready to do something with AI. There have been legal reviews of new contract terms and conditions that have come from some of the large providers. So we've been evaluating and reviewing those, understanding what their models are, what their data sharing practices are, what they're looking for, and our ability to install and utilize that within our own frameworks and our own tenants, versus them trying to pull it into a public tenant. The next step is starting to look at whether we can put what we call a playground or sandbox in place, so we can have our own internal footprint to start and see: how does this actually work? How are agencies trying to use this? What are you looking at?
So we're a ways away from even looking at any output, or determining whether a human has needed to be, or has been, involved in reviewing and ascertaining the validity of that data and output. We are still working through the framework: what are the right governance protocols within the technology? Does the technology even fit within the framework and footprint of what we're doing with our overall enterprise architecture standards? Do we need to introduce foreign technology that may or may not already exist in state operations, versus the tools that, like I said, we've been using that are starting to infuse AI into their footprint? So from our side, that's what I mean by cautiously moving forward: making progress by acknowledging and assessing the tools we already have in place, or that are top of mind, before actually getting data ingested, processed, analyzed, and output that's decision ready. Ted?

Yeah, thanks, Tracy. As I listened to your answer, that's why I think Indiana is blessed with having IOT and, in a smaller and narrower role, the Management Performance Hub, with enterprise architecture, and then, between the two of us, a degree of enterprise governance in IT. We talked about it a little earlier. You get agency heads and their chief technology officers, their chiefs of staff, who are charged with doing certain things in their domain, and often it's strategy and improvement for that agency's mission, which today involves information technology. So if they're in a position where, generally speaking, they want to go do this thing, having that enterprise viewpoint, the 50,000-foot view, as a check in the process, is still pretty important in a government context. We're not a startup. We do want to be innovative, but we're not going to be right at the tip of the spear.

So, specifically to your question: it goes back to other things we've talked about. AI is still very much a copilot. When I've talked about the NIST risk management framework: if you look at it, it talks a lot about human centricity. Certainly the context of use matters, but in all decisions relating to the implementation, we should be thinking about the impact on all of the user groups. In a state government context, that could be external users, it might be policymakers, and the information you're going to glean from the use of this system. If we keep those principles in mind as we go through these processes, the real struggle just becomes, okay, how do we operationalize governance? How do we actually put a new team in place to do this assessment we were talking about, or something else?

So, in a way, is it fair to say it's kind of that separation between data and decision? Because if I asked the Department of Workforce Development, give me unemployment data over the last 20 years, broken down by age and demographics of location, that would take an individual hours and hours a week, multiple people working on that task, whereas AI might be able to generate that report for me in hours. Now I have the data I want. The question then becomes: AI should not decide who should get the benefit. I think that's the difference, and that's where we have to make sure we're putting our parameters.
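The data-versus-decision separation described here is essentially a human-in-the-loop gate: the model may assemble and score, but nothing executes without a recorded human sign-off. A minimal sketch, with hypothetical field names and case IDs:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    subject: str       # hypothetical case identifier
    action: str        # what the model suggests, e.g., "flag for follow-up"
    confidence: float  # model score; informative, never decisive

def decide(rec: Recommendation, reviewer_approves: Optional[bool]) -> str:
    """The model supplies data; only a recorded human sign-off acts on it."""
    if reviewer_approves is None:
        return "HOLD: " + rec.action + " for " + rec.subject + " awaits human review"
    return ("EXECUTE: " if reviewer_approves else "REJECT: ") + rec.action

rec = Recommendation("case-1042", "flag for follow-up", 0.91)
print(decide(rec, None))  # nothing executes without a reviewer
print(decide(rec, True))  # the human, not the model, makes the call
```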
I think AI has a wonderful potential to cut down on the time it takes to get just pure data, because there are times you want a long track: give me this history, years going back. And God bless LSA, but they'll say, yeah, we'll get you that. How many bills were filed in the last 20 years on X issue? We can find that. If they could do that through AI, that's great. But if I then say AI gets to decide how we're going to write the bill, I kind of want Mike looking at that with his own eye. So I think that data-versus-decision line is where, as policymakers, we need to make sure we're putting parameters. We want speed to data; we just don't want it making the decisions. Is that fair to say? Completely agree with that.

Senator Brown? Thank you. You know, following up with respect to that, and referencing LSA: I think that's the difference right now between us and other states, the states I've looked at that have already legislated in this area. We don't know what we don't know, and they're already saying, this is what it should look like and this is what the outcome is. And that product and that whole process could change even before the ink is dry on the bill and it's been implemented. That idea of separating the data and the decision is so important, because some of the legislation that has been passed is already deciding what the outcomes of these processes are supposed to be. And you can't really do that; then you're almost gaming the system. That's what's so interesting about taking the time to look and see whether these processes and products are even appropriate or necessary, and where we can fit them in. Because if we're going to decide in advance what the result should be when purchasing an algorithm, an AI product, and using it, then we're either setting ourselves up for failure or, in a sense, automatically requiring bad data to be put in.

I think that's interesting, because I can't imagine, and frankly, going back to your comment, Tracy, about how far along we are in servicing our constituents in transactions, that other states are ready to implement these products, put them out there, and decide how they should be ethically assessed and audited, when they cannot possibly have tested them thoroughly enough within their state systems to know how they should be governed. It's really quite remarkable. There are a handful of states I can recall that have already gone down that path, which is kind of amazing when I think about it. I never envisioned that we would be doing any legislation in this space anytime soon, but this just reinforces it, because we could basically be spending a lot of money and, because of our own legislation, causing a bad product, really, because of what we're asking the outcomes to be. That's kind of an interesting way to look at it, I think.

And then I would just say, and we haven't talked about this, but I'll say as the author of the underlying bill: we looked at this on the cybersecurity side with our local government.
But it should make us all a little concerned if any local government or any government entity in Indiana is already putting these products in and using them. You have a much greater bandwidth to test them, of course, since you're doing it across a lot more agencies, probably. But it would be a little disconcerting to me if my local units were using these, because they're expensive, and even with the expertise you have, it's going to take a lot of not just manpower but figuring out how to make sure they're doing the right thing, that they've been implemented properly. I was concerned initially on the cybersecurity side, in terms of us being connected to these local units of government and their vendors, and then somehow malware, et cetera, gets into our system. But on this side, I hadn't thought about it: think about all the data we take in from local units of government, the county health departments, maybe our jail populations, voter registrations. If they are implementing products and processes in these areas because they've been told it's efficient and gets better results, and we take that data in, that could really be a problem for us in the end. So, that was a very long-winded comment, but how do we then tell the locals, slow your roll, let's have a conversation about this, when we don't have control over them yet?

Well, for one, my office has been engaging with our local government IT leaders for the last two and a half, three years, so we've at least started a conversation. Despite the fact that we're the state and they're locals, whether it's counties, cities, libraries, water and wastewater, or school districts, we're all IT people trying to fight mostly the same problems and issues and dealing with all the same vendors. So we see it as a collaboration opportunity, and definitely an opportunity for us to provide some thought leadership. We just had our last cybersecurity conference in June, with over 300 folks from the state and local government footprint, just to have open-door communication and the ability to share with each other: these are things we're thinking about; hopefully you're thinking about them, too. This is what we're hearing from vendors. Here's how we're trying to tackle this. There's been a lot of interest in the creation of this task force and in seeing its output. So the opportunity is there, and I think we have folks who are interested in working with us along a similar accord, because they see it as the right thing to do and good value, and because the state is starting to be a bit more vocal about technology from that perspective. There's not been a push or desire or ask for control, or legislation, or standards, or requirements on how to do specific things; we never want to get into tools and specific activities and actions. But the reality that they see us wrestling with it, and see us being methodical, definitely helps them reinforce their messaging to their leadership and their elected officials: that we're working together, following the same pattern and model, keeping each other honest and on the right track.
To that point, we've had a real good run in building a coalition between our state, local government, and education partners, and this is just another step in that iteration. As we continue to see, folks often don't understand where city services stop and state services start, or vice versa. We hear that all the time; we get yelled at about something that has absolutely nothing to do with us, and our folks in the local areas call us saying, I'm getting yelled at about something that doesn't have to do with us. Citizens don't know the difference. So how can we better provide services overall? That's the goal when you talk about citizen engagement and digital services. Our constituents are the same six and a half, seven million residents who interact at every level of government within the state. How do we simplify and make that more seamless, handling that division of responsibilities between us on the technical side, rather than making it the citizen's burden to figure out where to go to get these services? We've been working on the website side with local governments; the security space has been a huge target for us for a while, and I think we're going to continue bridging that gap.

So the one entity we've really not talked about is the federal government. What happens if it says, we want to get involved in all this? Are you hearing rumblings of that? You keep talking about these state groups you meet with, which is fantastic, and working with counties and cities in the state. Is there any movement you're seeing at the federal level that's going to say, thanks for doing all that, but here's where we're coming in and we're going to set the rules? And I know, Josh, you worked on some federal stuff, so maybe you can chime in, but I'm curious what rumblings you're hearing of any federal regulatory oversight of all of this.

With the White House's executive order on AI last fall, there were many directives to many different federal agencies saying, over the next 6, 8, 12, 18 months, depending on the context, you need to study this, determine what your parameters are going to be for the use of AI, and then report. I actually have it bookmarked; there's an ai.gov at the federal level, and each federal agency was required to compile a list of AI use cases. NIST, for example, interestingly, doesn't have any, but the National Archives has a whole long list of them, some of which are actually pretty compelling to our team, too. What I think we should be aware of, as it relates to that executive order, is this: not saying it's going to happen, but if Health and Human Services says any of our grantees doing AI have to do these things, and we get out in front of that and build an AI-enabled IT system right now that turns out not to comply with something that comes from them ten months from now, is rework going to be required? What's that going to cost? We've already had some conversations internally within the administration about it, so we're aware that's a possibility; it's part of our check and our initial process right now. But I'm not sure how aware each individual agency is. Matt? Yeah, there's just a recent news update on that.
So the Congress and the executive branch out in DC have definitely been looking at this and trying to figure out what, if anything, they need to do. And I thought it was kind of interesting: California is also moving ahead on a bill, and I think they're getting close to passage, and Nancy Pelosi actually wrote a letter to her legislature saying, you guys need to back off, because you're getting ahead of yourselves, and we've got all these Californians out here in Congress working on this. So I think that's interesting. But it's a tug of war, because sometimes you've got to get something done. Privacy is the classic example: Congress has been gridlocked and not been able to do a single thing, so the states have had to fill the breach. So we'll see. But I just thought that was an interesting, very recent development. Josh, you want to weigh in on that? I know you've worked in DC on some stuff.

Yeah, I think they're a long way away in DC. The states are definitely leading the charge; there are, whatever, 13 different states that have privacy bills, along with different AI task forces. I think there's more guidance coming from NIST, and more guidance coming from CISA and DHS saying, hey, here are some threats and we're actively looking at those. But they're not law enforcing, they're not regulatory enforcing; it's more of a we-have-to-work-on-a-public-private-partnership standpoint, which would be different than here in the state of Indiana.

To continue down that line: what I have seen is this productivity-versus-risk question, like, where does it fit in the quadrant, right? One thing I'm curious about, as we talk in terms of data: it's all centrally located, because I think you said centrally located, right? A Switzerland of sorts, is that correct? So our analytics environment, which I refer to as a data Switzerland, is housed on-prem, in a sort of protected data zone at IOT. Got it. So, segmentation. I think a lot of people want to know what that segmentation looks like, because if it gets broken into, what happens? Is all citizens' data getting leaked from one central source? That is probably the most concerning thing that everybody is looking at. We see this with hospitals, right, or banks: all your money is in one bank, it goes down, what are you doing? So those are specific use cases. I think the federal government is looking at areas like banking and at who we hold accountable, like the SEC does. So, long-winded answer.

I think that's good. And when you mentioned California, it made me think: why wouldn't the message to California be, what are you doing that we could use, rather than, stop doing that? I think that's the issue I have with that communication, or lack of communication, between the feds and the states. If we're getting it right, then ask us what we're doing. Ask us how we're doing it. Use our data. Don't force something on us just because we got ahead of you. Right? Well, if our being ahead of them is better.
So, I don't know, does that mean we need to have some engagement with our delegation in DC, the Indiana delegation, to say: always remember this, if you're going to do something around data privacy and AI technology, talk to us before you buy into something that you're going to pass out to all 50 states and the territories as mandatory standards?

Well, I think certainly the administration, through our federal lobbyist, engages in that context. In the privacy space, a lot of what they're debating, we as a governmental entity are in large part exempted from. As I hear this part of the conversation, I know we're talking about federal potentially flowing down to state, but my mind goes back to the state flowing down to local that we were talking about before. And, you know, Tracy having IOT be that rising tide that helps the locals in cyber and with websites and self-service models and all of that: I would just encourage everybody to think about the other touch points that state government has with local governments. From a policy perspective, I go back to the State Archives and Records Administration, which actually has very broad policymaking authority around records creation and maintenance for local governments, for prosecutors, for clerks, for recorders. There's a lot that can be done there to help steer in a certain direction. And one other example: when we receive federal monies, we've got flow-downs and obligations that come to us as a condition of that receipt. If we need the locals to improve, or to adopt an assessment framework to do an IT thing better, to follow the state best practice, maybe those regulated entities are encouraged by the relevant state agency to do that, too. So, pulling one out of the hat: the Department of Education talking to school systems, hey, when you're going to do these things, particularly in the AI context as it relates to children. To me, that matters. We like DWD's Pivot example because it was adults and it's an opt-in system; it was a captive audience, so it was probably going to have high use from the outset, but adults, and opt-in. To what degree can state agencies, through existing touchpoints, flow down some of these obligations that we want our state agencies to have? Sorry, I twisted your question there a little bit at the end. You used an AI answer to my question.

One of the difficulties I would add, though: we work very well with our federal partners, but there's not an owner of AI at the federal level looking broadly across it. Unlike cyber, where we now have the CISA agency that we work closely with on cybersecurity standards and things of that nature, and where you're starting to hear more rumblings of policy because of the ability to set those as more general standards, we get data privacy regulations from almost every flavor of federal agency, and they're all different. They've not been able, and don't seem very interested, to standardize their expectations even at their own level, let alone pass something down to us. So I'd be hesitant to say we should wait for the feds, or be too concerned about getting out in front of them.
It's more, to Ted's earlier statement: maybe we proceed with the awareness that there could be rework or adjustments once they finally decide at what level they want to engage, set that expectation, and put that information out there. But it's very hard engaging with and regularly interpreting the various footprints of the regulations, the numerous acronyms, from CJIS to FERPA to HIPAA to FISMA to Pub 1075 to FTI, across the footprint that we have, and then trying to separate, silo, and apply them to the different pieces of data held by our different agencies at the state level.

Senator Brown? Thank you, chairman. I do think that's interesting, though. I was thinking of FERPA and HIPAA; I couldn't think of all those other things you mentioned, Tracy. But it is interesting: what if the feds, who you said have had a light touch, like us with the local units, ever said, oh, by the way, all that data you have, we're not actually going to ask you for it, but we're going to ask you to share it with our model so that it continues to grow? And your point about children versus the captive audience on the DWD site, I think that's really interesting. That is what would, or should, make the hair on every state's neck stand up and say no, and heck no. Right? Because that's where it's a soft touch: I'm not really going to make you do these things, but I'm going to gather this. And then, again, we don't really know how you're going to use the data that you're now forcing us to give you for this model. And obviously it's usually by the purse. But that's an interesting thought, because on the cybersecurity side it's different. There we're saying, if you're a vendor with us, you'd better have the protections we're going to require before we have a contract with you. And maybe we look at how we can help our local units of government, who, as I say, are in the cloud attached to us, in a sense, too, but that's a different thing.

And that's actually kind of interesting, because data is everything, right? It was quite clear, when we passed the consumer data privacy bill a few years ago, that there were some disgruntled individuals and associations in the hallway, because their access was going to be cut off, or at least made a little bit harder. But on the government side, if we're not careful and don't always stay ahead on that point, we won't be able to control what the outcome is. And we've seen the feds have not been really good at this; I don't want to get too far out there, but in this whole border crisis, keeping track of just the human bodies coming over is obviously a huge issue. Just like this: we don't want to give them information if they're not going to keep good track of it, and then in turn they put it into a system that's going to skew. And then how do you argue with the computer that your outcome is wrong, when we haven't been able, in a sense, to watch the algorithm and assess it? That's really interesting.

I just want to go back, though, real quick, before we finish. Ted, when I was at this conference, they were saying we don't have something like Underwriters Laboratories, in other words, that Good Housekeeping, if you will, seal of approval for electrical devices and things.
So are you saying that there is now an AI governance seal-of-approval certificate? That's not what you meant? No. It's AIGP, I believe. If you look up the International Association of Privacy Professionals and AIGP, that's a certification for individuals to be recognized as governance professionals. Frankly, in my own mind, here's where I think this might be headed: I think it someday follows the cybersecurity model. We have a FedRAMP program at the federal level, where cloud service providers that want to do business with a federal agency have to have a FedRAMP authorization at a certain level to maintain that federal data in the cloud, and to get it, those providers go through a very robust assessment process and third-party review of their cybersecurity posture. AI models present interesting questions. Certainly there are cybersecurity concerns; that's one of the various areas the NIST framework addresses. But AI models, and the way they pivot over time, the way they can decide to move in some other direction, present unique challenges. I feel like we might get to a point where there's almost a FedRAMP-style trusted assessment for foundational models, and then some way to validate the models that local vendors are often building on top of them. We're not there yet.

The fear would still be the data being put into the model. And that's the thing to keep in mind: even if an entity comes out and says, we are going to be a model-certifying agency, if you put bad or improper data into it, do not expect valuable output at the end.

All right, we're at 4:01. I don't want to cut anybody off, but is there anything else, Ted or Tracy, you want to comment on, anything to wrap this up? I think what I was hoping to accomplish today, we did: where is Indiana at, and what are we doing? I think we're hearing some positive things, and there are possibly still some challenging things ahead of us. So, anything you want to wrap up with?

One thing, completely non-substantive. I'm reminded of a note I went back to from that AI conference I talked about. Trevor Hughes, who heads the IAPP, the privacy association, was using an analogy of Karl Benz inventing the car in the late 1800s. His point was that trust and safety allow innovation to move more quickly. Benz invented the car, but it had no brakes, and the initial regulatory response was to have a flagman, a man with a red flag, walk in front of the car so everybody knew a car was coming. It was his wife, Bertha Benz, who said, maybe you should make the car able to stop. That idea of pumping the brakes a bit is what allowed all of us to use these things. When I heard him say that at this conference, I thought it was worth noting for the work we're doing. I like that.

All right. As for the next meeting: we've had several people reach out who want to come and present, secondary education, some people in that space, some other technology spaces, who want to come and opine and give their information. So we'll look at having another meeting, probably sometime that second or third week of September. And I don't know, I'm just thinking; I've got some dates here, and I know I'm going to be very selfish.
Basically, since I'm driving from northeast Indiana, I'm going to be down here on the 9th, the 16th, the 18th, and the 24th in the evenings. So if any of those days would work, we'll reach out to your LAs and your staff, et cetera, and we'll talk with LSA about availability to make one of those days possibly work, so I don't have to drive so far. Yes?

Can I just get on the record one topic I'd like to look into for some future meeting? And that's the issue of using facial recognition to identify potential perpetrators of crime. There have been some problems up in Detroit, at least, where AI was used to basically go out and compare surveillance footage of someone who's committed a crime. They've got a facial recognition profile of the face, and they're either going through all the governmental records of all the photos that exist of people, looking for things that appear to match, or in some cases they're going out and scraping, like, all the social media. And they might come up with, say, 145 photos that come close to the person visualized committing the crime, and they figure out which one they think, just based on that data, is most likely the perpetrator, and then they throw it into a lineup. They've basically had situations where a couple of clearly innocent people were actually charged and brought in. So that raised for me the question of whether any of that type of technique is being used in Indiana, and if so, are there any guardrails on it, and is that anything we would need to address?

I put down AI in the educational space, and then AI overall in the economic space and the societal impact space. I would say this is more in that societal impact space, because, and I'll say this, it still troubles me, I can't get it out of my head: last year, when the gentleman showed the picture of a group of people and said, not a single person in this picture is real. They were all digitally created; those people don't exist. And it led to the issue of, what do you do with things like child pornography, if I can generate digital imagery where my defense would be, who did I harm? That's dangerous, and I'll turn to law enforcement: you already have trouble enough trying to make sure the data connects to the evidence you have. We are going to have some societal issues with AI where we have to ask, is that still a crime? In my opinion, yes, because it's where you go next with that data. Going to the facial recognition point: I like the idea of saying, hey, I can tell who did that because I've got their face, but I don't want to put somebody innocent in jail because my data was wrong. If you go to the airport now, at some airports, like the Fort Wayne airport, you just show your ID and they send you through. At others you have to stand in front of the little camera while they look at your picture, and I think: is there no one else in the world who looks remotely close to me? Because they just go, yeah, okay, go ahead. And I'm like, okay, there are no bad guys who look like me; it's good that there aren't, let's put it that way. So I'll put that in the societal impact bucket, but I'd like to have some input from some people in that space. We'll go to Kerry, then we'll go to you, on the societal impact.
And to what Representative Pierce was saying, that also has a bias impact. Those facial recognition systems have been shown to recognize certain races and ethnicities better than others, and they produce wrong identifications. So you have an ethical issue. And one of the things, I know we went over privacy, but the Elastic log monitoring for anomalies and abnormalities, I know it's been in production since summer 2021. How are you combating biases within that program, too? Because AI is only as good as the person who trains it. And as I teach my ethics students, if you have a brain, you have a bias, because we all do, even unconsciously. So how do you really train it? What is an anomaly? What is an abnormality? And how do you fight the human biases of the people training these AI programs, especially when, say, you have a large group of people of one race and a small group of another? Maybe the system thinks it's an anomaly to see the one but not the other. So who is training the programs, and the facial recognition that goes into them? There are a lot of ethical issues that come with that. Well, thank you. I think that has been shown, and it goes back to whether it's good data or bad data, but also what your baseline is. Right? I believe it was Amazon, and I apologize if I have the wrong tech company, that used AI internally on résumés to try to improve their own retention and hiring. They fed in their entire current workforce, and of course, ironically, it came out recommending what they already had. The bigger question, though, to piggyback on what Representative Pierce was saying, is what happens at a local unit of government. We know we're doing an inventory of what the state is doing across law enforcement, every agency; we've already had that. I think the concern is not that we're preventing local government from using these tools, but the ramifications when they do. And I think we do have the resources here, with this task force and with the MPH and IOT offices and agencies, to be able to give guidance, and that's the best we can do. We may put our finger on the scale, and I go back to the cybersecurity answer, when we're linked with them on an issue. But I do think it is important. I don't know if any local units of government are using it, but I know what you're talking about, Representative Pierce, and that is disturbing, because it took it one step further. Tracy and Ted talked, and Rep. Lehman did too, about having an individual look at it before the decision is made. Even if you're using a product or process to create the data, they didn't do that, as I understand it. The decision was made, this AI product decided who the bad guy was, and the decision was wrong. That's what we're not doing in the state of Indiana, and that's a good thing. But I do think, and I know your office has done a ton of outreach, Tracy, the IOT has, we can't emphasize enough how we can help give as many tools, as much information and advice as possible to the locals, to say: there are a lot of good products out there and there are a lot of bad ones, and they all sound good at the outset and promise great results, but there are really bad ramifications when they go off the rails.
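The training-data point here can be shown with almost no machinery. A minimal sketch follows, with invented groups "A" and "B" standing in for any over- and under-represented populations; nothing in it reflects the actual Elastic deployment. A naive detector that equates rarity with anomalousness will flag the undersampled group, which is a property of the data collection, not of the group.

    from collections import Counter

    # Hypothetical training log: group "A" is heavily overrepresented,
    # group "B" is undersampled.
    training_events = ["A"] * 950 + ["B"] * 50

    counts = Counter(training_events)
    total = len(training_events)

    def anomaly_score(group: str) -> float:
        """Naive detector: rarity in the training data = 'anomalousness'."""
        return 1.0 - counts[group] / total

    for group in ("A", "B"):
        print(group, f"{anomaly_score(group):.2f}")
    # A 0.05
    # B 0.95  <- flagged almost purely because it was undersampled

Real anomaly detectors are far more sophisticated, but the failure shape is the same: whatever the training data makes rare, the model learns to treat as abnormal, which is why who assembles and audits the training data matters as much as the algorithm.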
And I think that's another important part of this task force, maybe not written into our original governing role, but I think we need to be aware of it. So if we know of any local units of government, or if anybody wants to come forward and say, yes, we're using a tool, I'd love to hear it, especially if any are using one in a good way and can show us how. But yeah, I'm with you, Representative Pierce: if we're not ready to deploy at the state level, with the resources and the teams we have in place, I don't know how a local unit of government would be ready to deploy some of the things you mentioned. So thanks for bringing that up. So are you asking that maybe we need to request that inventory from local governments, not just state government? We'll put that on the list. The other thing I would say is this, and I want to make it clear, because I know there are people watching and people preparing to come and give their two cents: we need to stay relatively close to our charge in the resolution and in Senate Bill 150, which is the effects of state agencies' use of AI on Indiana residents, including constitutional privacy interests, employment, and economic welfare. So the tie has to be to the state and governmental entities. If you're watching this out there and you think, hey, here's my chance to come and pitch my product, that's not what we're here to do. We're here to have you come and say, this is going to be the impact on our public universities, our public schools, our public welfare, through the agencies that are monitoring and governing that. So we've got to stay close to our charge. And with that, we'll send out some information on dates and times. Any other comments or questions from the committee? Then we'll stand adjourned, and we'll see you in a couple of weeks.