AI Governance and the Role of People with Peter Gregory #inch360'24

[00:00:00] Welcome to the CyberTraps Podcast. I am Jethro Jones, your host. You can find me on all the social networks at Jethro Jones. The CyberTraps Podcast is a proud member of the Be Podcast Network. You can see all of our shows at bepodcast.network. And today on the show we have a special interview from the INCH360 conference.

That's the Inland Northwest Cybersecurity Hub. They put on a conference each year, and I have the great fortune of being able to go to that conference and interview a bunch of people. So that's what you're going to hear on this episode. I hope you enjoy it. And if you want to learn more about INCH360, go to inch360.org.

Welcome to the CyberTraps Podcast. We're here at the INCH360 conference with, uh, the keynote speaker, Peter Gregory. Peter, thanks so much for being here and for supporting INCH360.

What did you talk about today, and what, uh, what did you want people to get out of your, your conversation?

Thanks, [00:01:00] Jethro, and, uh, good to see you again. My talk was on AI governance, which is really the overarching management structure that governs how an organization decides to approach the adoption of AI in whatever kinds of systems or processes they want to use in order to improve their business.

And so, AI has been around for a while, but it's seen a really sharp increase over the past couple of years. What are some of the things that you want people to pay attention to as they're implementing this in their businesses?

What I want people to understand is that for an AI system to succeed for the organization, they are likely going to need to train that system with some of their own data.

Before they do that, they need to have a very clear idea of what [00:02:00] the objectives are of the AI system that they want to implement, and then have knowledgeable people who understand how to identify and acquire a tool and a data set to train the AI system. This is all new. There are no people in the world with five years of experience, because it's a new technology.

So we're a world of out-of-control beginners who are just trying to figure out how to make these things work. And I'm a career IT person, and I know the mindset: you get a new system or a new program and you tinker and you tinker and you tinker, and once it works, you just leave it alone. And in the IT world, that system may be working, but it might be barely [00:03:00] working, and it may also be highly insecure.

But then again, you know, the mindset of IT is once it works, we just don't want to touch it ever again, because we're under pressure to do this again with other systems and, you know, everyone is behind. So this is the caution I give to anyone who wants to implement AI: the unintended consequences are really significant.

I cited a few examples in my talk today, and there are certainly many others. But understanding what the objectives need to be, and then having knowledgeable people ensure that the organization can get there safely, legally, fairly, and so on, so that they can succeed like they intend.

So, this idea of being able to train it [00:04:00] on some particular data and then just leaving it alone, I don't think that really works with AI, because it's constantly adjusting, and you're not going to get the same response from the same prompt every single time, because it doesn't work that way.

It doesn't work in a predictable pattern, at least not in my experience. Am I misunderstanding that, or is that accurate? And how do we manage that when we want to just set it and forget it?

It definitely can be accurate, depending on the design of the AI system. If, for example, continuous use of an AI system means that it is learning new things as it continues to process new information, then the AI system could actually drift away from its objective.

That can happen if that new data is different in any way from the original data it was trained with. So there are a lot of ways in which an AI system can go right, but it can go badly as [00:05:00] well if the people managing, designing, and testing it don't know how to design and implement it correctly, or if they don't know what to watch for to ensure that it continues to operate the way it should.
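Peter's point about drift can be made concrete with a small sketch: keep a frozen evaluation set drawn from the original training data, periodically re-score the live model against it, and flag the system when accuracy falls past a tolerance. Everything here — the function names, the tolerance, the toy "model" — is an illustrative assumption, not something from the conversation.

```python
# Sketch: detect model drift by re-scoring a frozen evaluation set.
# All names and the tolerance value are illustrative assumptions.

def accuracy(predict, eval_set):
    """Fraction of held-out examples the model still gets right."""
    correct = sum(1 for x, expected in eval_set if predict(x) == expected)
    return correct / len(eval_set)

def check_for_drift(predict, eval_set, baseline, tolerance=0.05):
    """Compare live accuracy to the baseline; return (drifted, current)."""
    current = accuracy(predict, eval_set)
    return baseline - current > tolerance, current

# Toy example: a "model" that now answers "a" for every input,
# even though half the held-out examples expect "b".
eval_set = [(1, "a"), (2, "b"), (3, "a"), (4, "b")]
drifted, current = check_for_drift(lambda x: "a", eval_set, baseline=1.0)
print(drifted, current)  # the drop from 1.0 to 0.5 exceeds the tolerance
```

In practice the evaluation set and baseline would be fixed at deployment time, which is what makes a later drop measurable at all.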

So what are some of those things they should be watching for to get it to operate the way that they want it to? And if you have any examples, that would be great to illustrate. I've got a couple of specific questions I could ask, but answer that question first. What are some things we need to keep in mind as we're training it?

What we need to keep in mind is that we need to understand the relevance of the training data, the completeness of the training data, and whether we're giving it more information than we need to. For instance, in a customer service chatbot, which is a really popular use of AI, chances are they don't need to train [00:06:00] it with the PII of their customers.

That's not needed, because you're not going to ask the chatbot, you know, as a customer, "Give me the names of the five other customers who live within a mile of me." I mean, that's, you know, way out of bounds, right?

So, the other thing that we have to think about is, we're not going to share personally identifiable information with the AI, but then what do you train it on? And how do you make sure that you're not sharing too much with it, so that it doesn't do, uh, inappropriate things, but then also how do you make sure that it has enough knowledge? And where's that line?

Well, it all comes back to what are the stated objectives of the AI system in the first place. Another example that comes to mind is a legal department in a company that does business with a lot of other companies. You could feed an AI system the content of all of your active contracts with your customers or [00:07:00] vendors or whoever they are, you know, all of the other parties.

And get some good insight into all of the tailoring that we all know happens in legal contracts. I mean, you know, when you're a services company, for instance, and you have your master services agreement and statements of work, you're always going to have some customers who want to tweak the language a little bit.

It's very challenging for companies to keep track of all of that contract tailoring. So it could be that, in an AI system that a legal department would use, if they want to be able to identify companies by name when they have anomalous terms in their agreements, for instance, then they would have to.

Train it fully like that. But if they had a different set of objectives, where they wanted to know more about, you know, the bespoke terms and other kinds of things about their contracts, but they don't need to know who they are [00:08:00] or who they're with, then the training data could still be the content of the contracts, but somehow anonymized or pseudonymized.

So it comes back to the objectives: what does this system need to do today? And a different mistake that some companies make is that they'll build their initial AI system and then they'll think, oh, this is working really well.
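The anonymize-or-pseudonymize step Peter describes could look something like this minimal sketch, which swaps known party names for stable placeholders before contract text goes into a training set. The party names, the placeholder scheme, and the function name are all hypothetical, not from the conversation.

```python
# Sketch: pseudonymize party names in contract text before using it
# as training data. Names and placeholder format are illustrative.

def pseudonymize(text, parties):
    """Replace each known party name with a stable placeholder.

    Returns the scrubbed text plus the name-to-placeholder mapping,
    which should be stored separately and kept OUT of the training set.
    """
    mapping = {}
    for i, name in enumerate(sorted(parties), start=1):
        placeholder = f"PARTY_{i}"
        mapping[name] = placeholder
        text = text.replace(name, placeholder)
    return text, mapping

clause = "Acme Corp shall indemnify Globex Inc for all claims."
scrubbed, mapping = pseudonymize(clause, {"Acme Corp", "Globex Inc"})
print(scrubbed)  # party names replaced, clause structure preserved
```

A real system would need far more robust entity detection than exact string matching, but the design point stands: the model sees the contract language while the identity mapping stays outside it.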

Now we want it to do this other thing too. And they might put more training data into it, or they might just go with what it's already been trained with. So now they run the risk of the AI system not performing in the way that they expect. One of the problems with an AI system of the kind that we see today is that too many companies are tempted to take the human out of the loop.

And that can be really risky, [00:09:00] especially for a newer system. In another example, let's say we're a bank, and we want an AI system to examine loan applications and let the AI advise us on the parameters in the application, and whether we should loan money, and if so, at what cost, you know, on what terms and at what rates.

If you take the humans out of the loop, then you're probably going to end up with some discrimination lawsuits, because sooner or later the AI system is going to say, no, that person is not creditworthy, when maybe they really are. So the human in the loop is essential. Depending on the kind of AI system that is being implemented, if the long-term objective is that it's going to make decisions without a human in the loop, they still should work with that system very closely and have the human in the loop at least early on or [00:10:00] for their higher-risk use cases.
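A human-in-the-loop policy like the one Peter describes for loan decisions can be sketched as a simple routing rule: the model decides only when it is confident and the stakes are low, and everything else is escalated to a human reviewer. The thresholds, amounts, and parameter names here are illustrative assumptions, not anything Peter specified.

```python
# Sketch: route loan decisions, escalating uncertain or high-risk
# cases to a human reviewer. All thresholds are illustrative.

def route_decision(ai_score, amount, confidence,
                   approve_above=0.7, min_confidence=0.9,
                   high_risk_amount=50_000):
    """Return 'approve', 'deny', or 'human_review'."""
    # Escalate anything the model is unsure about, and any amount
    # large enough to be a high-risk use case.
    if confidence < min_confidence or amount >= high_risk_amount:
        return "human_review"
    return "approve" if ai_score >= approve_above else "deny"

print(route_decision(ai_score=0.8, amount=10_000, confidence=0.95))  # routine case, model decides
print(route_decision(ai_score=0.8, amount=80_000, confidence=0.95))  # high stakes, escalated
print(route_decision(ai_score=0.4, amount=10_000, confidence=0.60))  # low confidence, escalated
```

Tightening `min_confidence` early on and relaxing it as the system proves itself matches the "unclip the leash" progression Peter describes.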

Let the AI system prove its quality and accuracy for a while before you unclip the leash and let it run away.

Well, and that aspect of keeping the human involved where it matters is something that, as I talk about AI, that's one of the things I talk about also.

Most of my use cases are in the educational world, and if you take the teacher out of it and you try to get the AI to teach the students, that's a problem, because so much of learning is based on relationships, and those still matter significantly. So, uh, this is definitely a fascinating conversation.

There's a lot to think about with adopting AI, and I love what you said about going back and making sure that your objectives are clear, and that you really know what you're trying to get it to do. Adopting any technology, but especially AI, for the sake of that technology is never a good idea. In [00:11:00] closing, how would you like people to get in touch with you if they have more questions or want to learn more from you?

Where can people find you?

People can find me, uh, on LinkedIn, with my full name, Peter H. Gregory. They can also find me on my website at www.peterhgregory.com. Either of those ways will get them to me. Uh, both avenues have kind of a contact-me feature, uh, so that people can get in touch if they have questions or things they need to know.

Thank you so much for your time, Peter. This is great. Appreciate you.

Creators and Guests

INCH360
Guest
A regional industry group focused on connecting cybersecurity and compliance professionals of all levels. The group promotes education, collaboration, and communication about resources, regional companies, and jobs.