INCH360 2025: Cyber Risk as a Business Imperative

Welcome to the Cyber Traps podcast.

This is Jethro Jones.

I am on location for this episode at the INCH360 conference, and these are panels from that conference that I think are just really interesting, and I hope you enjoy them.

For more information about this organization, go to inch360.org.

Christina Richmond: I'm Christina Richmond.

I am a new person to Spokane.

Moved here three years ago.

Love the community.

Just so thrilled to be involved with Inch 360.

I'm on the board, one of many organizers and just so appreciate all of the efforts of our board and volunteers.

Today I am here to introduce our first speaker. Kane McGladrey is our keynote speaker.

He's a senior IEEE member.

He's an author.

He will tell you more about himself.

He didn't want me to spoil all of his intro, but I will tell you he's an expert in governance, risk and compliance, so GRC and he has three decades in cyber.

So we have a very strong expert about to speak to us.

Cyber risk.

So Kane, please join us.

Kane McGladrey: Okay.

Hello and welcome to my presentation.

Two things before we get started here.

First, my counsel says I'm supposed to say that none of what you're about to hear is legal advice, investment advice, or any specific statement about any organization that I'm currently affiliated with or ever will be affiliated with.

This is just observations based on best practices and my personal experiences.

Second just by a show of hands here, how many people here went to college?

Oh, wow.

Okay, cool.

How many of you went to university?

Wow.

Cool.

How many of you are senior cyber leaders?

I got a, got a few.

Alright, cool.

Well, I, I dropped outta college.

Now you might've heard about imposter syndrome.

So guess how I usually feel about five minutes before I'm on stage?

I was in the theater program first year and I thought at age 18 I would never be presenting in front of an audience.

Now, thankfully, that's how I got into security.

Does the clicker work?

Yeah, there we are.

Cool.

So my name is Kane McGladrey.

I'm a professional speaker.

I'm an author, I'm a CISO in residence at Hyperproof.

I'm a senior IEEE member, and I'm the number one thought leader on cybersecurity and risk management worldwide.

And when I used to come to events like this, I would try to take all the notes.

It's a big part of being self-taught.

So I'd sit in the front row and I try and write everything down.

So later today I'm gonna flash everyone a different QR code to download these slides and some bonus materials as a PDF.

And if you wanna follow me on LinkedIn, that's the QR code we're gonna go through.

We're gonna be speed running a lot today, honestly, and if you miss something, that's okay, because it's going to be in the download at the end.

I also wanna thank my friend Christina who asked if I could come say a few words today.

The first time I was in Spokane, I made the mistake of going hiking or trying to go hiking on Mount Spokane because it was a thing to do, and I remember very clearly being chased down that mountain by a bird called a grouse.

Now, some of you might be thinking that's probably not a real story, right?

So I took this photo, it's a little dark, of a grouse, while being chased down a mountain, which is not the best idea.

Also, if there's a grouse in the audience, I'll remind everyone that the exits are at the back of the hall.

So the reason that you're here today is you're trying to navigate cyber risk in today's business landscape.

And you're probably struggling to best address cyber threats as a core business issue.

And in order to be effective, you need to be able to communicate the business impacts both the positive and the negative of cybersecurity in your organization, while at the same time avoiding financial losses from cyber incidents.

So today we're gonna focus on understanding and then translating and then prioritizing cyber risk.

So I first wanna talk about what these so-called cyber risks are.

'Cause I'm a CISO.

I've been a CISO twice, and I've done executive advisory work on three continents for Global 1000 companies.

And I'll tell you right now, I don't think cyber risks exist.

I think they're a fiction.

So just by a show of hands here, folks, how many people think that storing unencrypted credit card data is a risk? Okay.

All right.

That's kind of a majority of the audience.

All right, so lemme tell you a story from 2006.

And this is a story about credit card readers, from before we had tap-to-pay or chip-and-PIN cards.

And it was about a company that I was advising at the time.

My apologies, cell phone photos in 2006 were trash.

So let's talk about how card readers worked back then.

When a card was swiped, the reader would get the account number, the expiration date, and other data from the magnetic stripe.

And that information was typically sent in plain text to payment processors through, I don't know, dial up modems or dedicated phone lines, early broadband connections.

It was then processed in plain text and was often stored in plain text.

So the Target breach of 2013, I think many of us have heard about that, and similar incidents happened because we thought this was normal.

We thought this was fine.

So here's a list of some of the bad things about storing and transmitting unencrypted credit card data that we've all learned since then, and that led to a standard: PCI DSS.

So I'm just gonna paraphrase PCI DSS 1.1.

Just scroll down a bit here for you.

And here's the exact language from PCI DSS 1.1.

I've highlighted the word consider.

'Cause if you go read PCI DSS 1.1, you'll notice that it says that maintaining unencrypted cardholder data poses a risk to the data and requires additional risk mitigation.

So the document treats encryption as a risk mitigation measure, and it implies that the lack of encryption creates a risk.

But here's the thing, it's not explicit.

And if you take a literal reading of it, and I'll tell you, most attorneys, most compliance experts are literalists.

It's not a risk.

Which means my client did not see it as a problem for them to solve because it wasn't a risk to their business.

So the client starts getting some pressure from outside forces.

They're told they're gonna be fined a million dollars a day if they don't comply with PCI.

Now, this is a company that had net sales of $89.3 billion in 2006 with net income of $3.6 billion, which meant that a $1 million a day fine could have been an accounting error, less than half a percentage point in fact.

So nothing changes for about a week, and that external pressure ramps up this time with fines of up to $10 million a day.

I think we all learned how to do math to carry the zero.

It's about 4% of their net sales.

It's annoying, but it's not really something to act on, I don't think.

This might be a slight headwind to our revenue before the lawyers go sort it all out.

So this leads to this thread about a week later.

Yeah.

So the next day the client punched a hole through the wall of their data center to build an enclave of dedicated systems to just process card holder data.

Now, I couldn't take a photo of this because of, well, honestly, a lot of reasons, but this is what an AI thought it would've looked like.

So I want you to start thinking now though, about what the risk was.

'cause clearly it wasn't storing or transmitting cardholder data.

And now some of you're also probably thinking, that's an old story, Kane. Things have probably improved, right?

Lemme ask you another question then.

Show of hands.

How many here think that a failure to patch a known vulnerability is a risk?

I saw some skepticism when people were putting up their hands.

I like this audience.

You're fun.

So, lemme tell you a story from 2017.

The Equifax data breach was first exposed in July of that year, and the breach was caused by attackers exploiting a known vulnerability in the Apache Struts framework, which Equifax had failed to patch, in spite of there being a patch available.

And at the time, this was considered to be one of the largest data breaches in history.

So here's a list of some of the bad things that we know will happen when companies don't patch their critical systems.

But Equifax, they ran into trouble with the Gramm-Leach-Bliley Act.

So I'm just gonna paraphrase the safeguard clause for you.

Again, just scroll down a smidge here.

Here's the closest GLBA comes to addressing patching in 2017.

Again, this is the closest thing.

It says... well, GLBA does not explicitly mention patching, software updates, or software maintenance anywhere in the text, right?

You could potentially consider unpatched systems under the broad categories of maybe software design, or maybe something that could lead to a system failure.

The regulation does not literally or explicitly identify a failure to patch as a risk.

And remember, our compliance officers and our attorneys, they tend to be pretty literal.

And they don't understand how patching is or isn't a risk.

So here's a list of some of the bad things that can happen when a company is found to be in violation of GLBA.

And here's a list of some of the bad things that can happen and did happen to Equifax because they were found to have violated GLBA as well as some FTC rules and so on.

But here's a real bad thing.

The legal issues and the regulatory failures at Equifax led to a landmark settlement that required the company to spend over a billion dollars on data security technology and make some pretty comprehensive security reforms.

So again, I want you to start thinking here, what was the risk if it wasn't patching systems?

But again, you say, Kane, that's a really old story.

That's a story from 2017.

We must have collectively learned from this, right?

Right.

Lemme ask you, is a lack of multifactor authentication a risk? Hands?

Oh, I'm seeing some definite skepticism.

Oh, this whole table's like, probably not.

So lemme tell you a story from 2024, just from last year.

So the Change Healthcare breach, which occurred early last year.

It disrupted healthcare services across the United States.

And based on public information, the alleged initial attack vector was a lack of multi-factor authentication.

So here's a list of some of the bad things that can happen when a business doesn't use MFA.

But Change Healthcare, they're in trouble with HIPAA.

I'm just gonna paraphrase the HIPAA Security Rule, the Privacy Rule, and the disclosure rule here for you. It's a slightly longer document out of the Federal Register, but if you go read HIPAA, you'll notice that nowhere in there does it say that a lack of MFA is a risk.

And our executives, our compliance officers, our attorneys, they don't see a lack of MFA as a risk.

So here's a list of some of the bad things that can happen when a company has potentially or allegedly violated HIPAA.

And here's a list of some of the bad things Change Healthcare is experiencing from their data breach.

It affected 190 million Americans.

There's also multi-district litigation, and there are several state attorneys general leading investigations.

So there's not some consolidated number that I can put up on screen saying, how much is this gonna cost them in the end?

And I can predict what a few of you are probably thinking: okay, how does AI play into risk management?

I promise you, we will get there in a bit.

But if none of those are cyber risks, even though a lot of our audience thought those sure felt like risks, how do we start to define risks?

Most CISOs now realize we're just managing business risks.

A technical risk without a business impact is not a risk.

Well, if that's true, what do we need to do to manage business risks?

I'm just gonna paraphrase NIST's Risk Management Framework here for some of the things that we're gonna need if you suffer from insomnia.

As I scroll down a bit here, this is a great document.

Now, there is a quick overview of the process I'm about to describe here.

This is gonna be in the downloadable materials that you're gonna receive at the end as well.

First thing we're gonna need is we're gonna need to define a whole bunch of words.

We need to define words like high, and we're gonna need to define words like low.

And we're gonna use those words to start to define the impact of a risk.
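To make that concrete, here's one way those definitions could be captured. The level names and dollar bands below are hypothetical, invented for illustration rather than taken from the talk or from any standard verbatim:

```python
# Hypothetical qualitative-to-quantitative impact bands, in the spirit of a
# NIST SP 800-30 style scale. The dollar ranges are invented for illustration.
IMPACT_LEVELS = {
    "low": (0, 100_000),              # tolerable operational cost
    "moderate": (100_000, 1_000_000),
    "high": (1_000_000, 10_000_000),
    "critical": (10_000_000, None),   # open-ended: potentially existential
}

def classify_impact(loss_usd):
    """Map an estimated dollar loss onto a defined impact word."""
    for level, (lo, hi) in IMPACT_LEVELS.items():
        if loss_usd >= lo and (hi is None or loss_usd < hi):
            return level
    return "low"

print(classify_impact(2_000_000))  # "high"
```

The point of writing the words down is that "high" stops being an opinion once the organization has agreed on the bands.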

Now, here's an example of an impact statement.

You might wonder how this risk might happen, because the initial impact statement doesn't say that.

It just says what would happen.

This is one way we can write those factors down.

And this is based on NIST SP 800-30, but what's important here is we've heard from the business about what's bad, and then we worked out one way it could happen.

There are probably more ways for this to occur, so we'd write those down too.

We also need to do some math, and you might have heard about residual risk, which is the amount of risk left over after you've applied your controls.

We're gonna come back to this one because we haven't yet defined how to measure our control effectiveness.

And before we get there, we also need to define the odds of a risk happening before we go and apply our controls.

So here's one way we could write a probability statement based on the information we have at hand to try and gauge the odds of that high impact risk happening.

And if you've been following the space for any amount of time, you've probably seen this formula that you can multiply impact by probability to calculate money.

Here's the thing though, if the risk happens and becomes a material event or a breach, your math can be the best math, but nobody's gonna care.

We don't manage risks because of math.

We manage risks because we help businesses achieve resiliency.

So they can keep doing business in spite of risks that honestly they might or might not occur.

This is why I recommend that you prioritize your risks by impact first, not by probability because we don't have residual risks calculated yet, and we also haven't talked about how to decide on which risks we even need to do something with.
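As a rough sketch of that prioritization rule, with risk entries invented purely for illustration:

```python
# Sketch: record the classic expected-loss product, but triage by impact first.
risks = [
    {"id": "R1", "impact": 5_000_000, "probability": 0.05},
    {"id": "R2", "impact": 200_000, "probability": 0.90},
    {"id": "R3", "impact": 5_000_000, "probability": 0.40},
]

# The impact * probability formula is still worth writing down per risk...
for r in risks:
    r["expected_loss"] = r["impact"] * r["probability"]

# ...but the sort leads with impact, because a materialized high-impact
# event hurts the business regardless of how good the probability math was.
by_impact = sorted(risks, key=lambda r: (-r["impact"], -r["probability"]))
print([r["id"] for r in by_impact])  # high-impact R3 and R1 come before R2
```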

Something else we're gonna need is a list of systems, and these are systems in business terms, the collections of components and stuff that do things for the business.

It might seem a little abstract.

So for example, every business has got accounts payable.

That's how they go and pay their suppliers.

Most businesses negotiate net 90 terms to pay their suppliers.

So if there's a risk that affects accounts payable, you've got about three months to go fix whatever bad thing happened there.

Every business has also got accounts receivable.

And unlike accounts payable, most businesses have got a typical cycle of about 14 days for accounts receivable.

That's the money that comes into the business.

So a risk that materializes here has got about a two week window to fix it because once cash stops coming in, you've got problems.

Kinda like we saw with Change Healthcare, my first example, with that company not being able to process payments worldwide anymore.

We're also gonna need a risk assessment process.

And the good news here: you don't need to go figure this out on your own. NIST's Risk Management Framework and ISO's standards, they've all got defined ways of collecting and processing risk data.

So we're gonna follow that process to ask people about risks.

'cause here's the thing, if I went to our sales leader, Mike and I said, Hey Mike, if our sales system were to go down for about a day, we'd lose about a million dollars.

Mike would laugh me out of his office.

'Cause here's the thing: as a CISO, I dunno how much money comes in on an average given day.

And even if I did, I don't know what controls are in place.

And even if I did, I don't know how well those controls are being applied.

And I sure don't have the staff to go apply additional controls either, because I'm the CISO, I'm not the sales VP.

I don't own that system.

CISOs don't own many systems.

We can help our executives measure risks, but we don't own those risks.

So we need to go talk to people like the VPs and the system owners and the managers and the control owners when we follow our process.

And we're gonna follow that process to ask the people about the visible risks, the controls that are being applied to those risks and the effectiveness of those controls.

Just their best estimate, really.

And we're gonna ask multiple people the same question, and then we're gonna follow the process to figure out how to best estimate that risk's residual probability and impact.

The other thing we're gonna need to know is a bunch of data, like key performance indicators or key risk indicators and business objectives, because those are gonna be used later to prioritize our risks.

And most organizations, honestly, they understand their core business processes better than they understand security.

But those existing business metrics, that cash flow, that supply chain efficiency, that production uptime, all those are really great when it comes to understanding risk.

So consider an industrial manufacturer that I worked with: they made cardboard boxes.

It sounds boring, right?

They made cardboard boxes for shipping things and their operations team noticed they had slowdowns at a plant.

One of the lines was just running slower than usual.

Well, that was a metric that they tracked.

Meanwhile, the FBI was sending the security team intel about targeted attacks on similar companies.

But the two teams, they never talked.

So days later, inevitably the ransomware hits.

Y'all know where this joke's going.

It shut down that plant. Then it shut down more plants. Actually, it shut down most of the plants in that region.

Cost 'em about a million dollars a day, which, unlike that first company, was more than they could comfortably afford. They had the data to see something was wrong before the risk materialized, but they didn't have a process to connect the dots.

Businesses already collect and track key performance indicators.

Security teams, we need to tap into that data to anticipate and prevent those disruptions.

'cause risk management, it's not just about reacting better, it's about using the intelligence and data we already have to build organizational resilience.

The other thing that we need to define is our business' risk tolerance.

For every category of risk: is a million-dollar-a-day loss fine, or is it the end of the company?

This helps us to start to prioritize our risks.

So here's a couple examples of risk tolerance statements.

You're gonna notice they're pretty brief, they're pretty concise, and that's intentional.

It's deliberate.

And here's what you wanna do.

You wanna write those down.

You wanna get your executives to sign off on those risk tolerance statements.

Better yet, get your board of directors to write it down, because then we have got very clear guidance on what's a tolerable risk and what's a risk that we just can't accept as a business anymore.

And all those data, those go into a thing called a risk register, and that contains your risks and your impact statements and probabilities calculated after you've finished
the process where you ask people about the risks, the controls, the effectiveness of those controls, and the data showing that those controls are actually operating.

And then a series of risk tolerance statements that were approved.

Now here's an example of what a risk register looks like as a table.

Now, the underlying data, this is not even vaguely a table, and it certainly is not a spreadsheet because we're gonna ingest a lot of data and we're
gonna manage the provenance of those data so that this is going to be verifiable and so that it can be externally audited and stand up to scrutiny.
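As a sketch of what one of those underlying records might look like, structured data with provenance rather than a spreadsheet row. Every field name and value here is my own invention, not from the talk:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    """Provenance for a single datum feeding the register."""
    source: str        # who or what system supplied it
    collected_on: str  # ISO date, so an auditor can trace it
    detail: str

@dataclass
class RiskEntry:
    risk_id: str
    system: str            # business system, e.g. "accounts receivable"
    impact_usd: float
    probability: float     # pre-control estimate, 0..1
    owner: str             # the business owner, not the CISO
    tolerance_usd: float   # the approved tolerance for this category
    evidence: List[Evidence] = field(default_factory=list)

    def above_tolerance(self) -> bool:
        # Simple expected-loss comparison against the approved tolerance.
        return self.impact_usd * self.probability > self.tolerance_usd

entry = RiskEntry("H3", "accounts receivable", 2_000_000, 0.3,
                  owner="VP Sales", tolerance_usd=250_000)
entry.evidence.append(Evidence("internal audit", "2025-04-01",
                               "interviewed 3 control owners"))
print(entry.above_tolerance())  # expected loss ~600k > 250k tolerance
```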

But again, some of you're probably looking at this and thinking, cool, that looks like something I could use an AI for.

I promise you we will get there.

So we got a lot of data and we put it in our risk register.

So how do we actually get our executives to make a decision?

Your first goal here: it should be easy.

We should always frame risks in terms of systems that they care about.

Y'all remember systems like accounts receivable and accounts payable.

You remember this thing?

The risk register with those systems where we listed AP and AR, right?

Yeah.

Don't show 'em the risk register.

A lot of CISOs, a lot of new CISOs, make that mistake exactly once.

There's too much data and it's not been prioritized and it's not been sorted.

So here are those data as a heat map, but executives, again, they don't have context of what to look at.

We've just thrown up all the risks on screen for them.

And new CISOs, again, they make this mistake exactly once and that's ignoring what happens if you've got colorblind individuals who are looking at your heat map.

Remember this word tolerance.

You need to apply that to your chart.

Risks above that tolerance line.

Those need decisions.

Any risks below the line, the ones the organization can tolerate, those are acceptable for now, because until they're above the line, there's not something to do from a business perspective.

And if you cannot express a risk as being above a tolerance, then very little's gonna happen.
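The tolerance line boils down to a simple partition. The tolerance figure and risk numbers below are made up for illustration:

```python
# Sketch: split risks into "needs a decision" vs "acceptable for now"
# using an approved tolerance (here, a hypothetical $500k annual exposure).
TOLERANCE_USD = 500_000

risks = {"H3": 1_200_000, "M1": 450_000, "L2": 40_000}  # est. annual exposure

needs_decision = {k: v for k, v in risks.items() if v > TOLERANCE_USD}
acceptable_for_now = {k: v for k, v in risks.items() if v <= TOLERANCE_USD}

print(sorted(needs_decision))      # only above-tolerance risks need decisions
print(sorted(acceptable_for_now))  # the rest are tolerable, for now
```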

Think about Equifax, right?

Knowing that there was a patch available and nobody did anything, why?

Well, probably part of that was because the risk didn't exceed their tolerance.

Something else we need to do, we need to define the owners of those risks.

The business owners.

Remember my example earlier about how I couldn't go tell our sales VP how much risk to the sales system costs, because I'm the CISO and I don't own the risks to the sales system?

Yeah, every one of those risks needs to have a business owner.

And every one of those business owners, they need to make a decision about what to do with those risks that they now own and they might previously not have known they owned.

So the first thing they can do is they can say, eh, it's fine.

All those risks below the risk tolerance line, those are gonna be accepted.

Any risk where you don't make a decision, that's functionally accepted. And any risk where the business owner says it's fine, it's not a big deal, in spite of the risk tolerance, that's accepted too.

Something else that they can do is they can transfer risk.

Often this involves cyber insurance.

I think we've got Marsh and McLennan here somewhere.

They do cyber insurance.

And you say we're gonna use the company's insurance to just magically absorb the fallout.

Remember Equifax: they had a $115 million settlement. Insurance might have covered some of that, but that billion dollars in additional controls? No, your insurer is not gonna give you a billion dollars.

Final thing that a company can do is they can try to mitigate a risk, which means to choose controls to reduce either the probability or the impact of the risk, or both.

Now, I prefer those that reduce impact, because those improve business resilience, and most modern standards and attestations and contractual obligations, like the EU's DORA, require resilience.

But we can't mitigate all our risks.

We have to be choosy.

And here's what you wanna do.

Again, write those decisions down in the risk register.

Who made a decision?

What was the decision?

When was the decision?

And when's the next time you're gonna go look at that risk?

And some of you, the senior leaders here, might be thinking, well, we've tried this: you presented all the risks to the executives, and they just accepted the risks anyways.

So you've got this weird revolving door situation where the same risks get accepted yearly and nothing really changes.

Here's what you wanna do.

Update your risk acceptance procedures to force the business leaders to buy short-term insurance to cover that risk materializing: the breach costs, the investigation costs, the legal fees, the customer notification fees, and so on.

Suddenly, when they go accept the risk, it hits their profit and loss statement for their business division.

This is where we can start to have a conversation now about how mitigating the risk is often cheaper than buying insurance.

Another common problem that we see, an objection, is they don't wanna make a decision, or they don't wanna write down their decision, 'cause it seems scary.

So key performance indicators, that's how executives get measured and it's often how they get bonuses.

Executives care a lot about KPIs, but what's not great about them is they're a lagging indicator.

You can tell if you've hit a goal only after you hit the goal, and it's hard to predict it until you've actually cleared your goal.

But key risk indicators, though, that's data that we can show to indicate if a KPI is more or less likely, they're a leading indicator and they're often based on security data and business data.

And we can use those KRIs, we can tie those KRIs to our KPIs, to make our executives more interested in investing in business resilience and in risk reduction. If that seems abstract,

I'm gonna give you an example.

I worked with an HR leader once.

She had a goal of employee happiness. Her KPI, if she hit it, would pay her enough to put in a brand new swimming pool at her house.

So she really wanted people to be happy.

Here's the thing though: published research says when people are getting ready to quit, they tend to take copies of what they think is their data. You know, their list of customers, their manufacturing plans, their business plans, their source code, pretty much any of the company's intellectual property that they somehow think is theirs, and they upload it to the public cloud or they email it to themselves or whatnot.

Well, odds are an employee considering leaving is probably an unhappy employee.

So our KRI is data exfiltration.

If it goes up, we can predict that people are probably more unhappy or maybe they're just violating your company's AI policy, which is a whole secondary risk that we could also tie a control to.

But if the people are unhappy, odds are that that KPI employee happiness is gonna go down.

And we're also not saying here that data exfiltration is causing a lack of employee happiness either.

We're just able to predict it now.
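That leading-indicator idea can be sketched as a simple threshold check on the KRI. The numbers here are synthetic, and a two-sigma rule is just one illustrative choice:

```python
# Sketch: data exfiltration volume as a leading key risk indicator (KRI)
# for the lagging "employee happiness" KPI. All numbers are synthetic.
from statistics import mean, stdev

weekly_exfil_gb = [1.2, 1.4, 1.1, 1.3, 1.2, 4.8]  # last value is this week

baseline = weekly_exfil_gb[:-1]
threshold = mean(baseline) + 2 * stdev(baseline)  # simple 2-sigma rule

current = weekly_exfil_gb[-1]
kri_breached = current > threshold
print(kri_breached)  # flags the spike for follow-up, not as proof of cause
```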

Well, this is where we can suggest controls, maybe like a data loss prevention system that could discourage or prevent data exfiltration.

That's gonna improve our KRI.

And it's also gonna protect against other risks that I haven't even mentioned yet.

And we can have that HR leader and other leaders fund that project for us, so we can apply a technology control that they might previously not have known about, or they might have ignored, provided it ties back to what they care about: their KPI.

But again, write it down.

Get them to write down their decision, like using DLP to measure the risk of a decline in employee happiness or a loss of intellectual property or source code.

So you've got evidence that the CISO didn't just come up with this one weird idea, 'cause we can't apply that control ourselves; after all, the business owners need to apply that control to their systems.

And this is where we get to residual risks.

We've got risks, we've got controls, we've made decisions, and now we need to figure out if any of this stuff works.

So let's start by proposing some controls here.

Like maybe we add some people, maybe we add a vulnerability management program, maybe we add some advanced monitoring.

Now all those have got some guesses at costs and we've got some guesses at likelihood and impact reduction estimates.

Cool.

How do we know if any of this stuff actually works?

This is where you wanna be friends with your internal audit committee and there's two reasons for this.

First up, it's cheaper than hiring red teamers or pen testers or offensive security operators when you're doing regular risk assessments.

And second of all, this is the actual job of internal audit.

Anyhow, they can measure control effectiveness for us, and we're already paying them to do that.

But for them to do that, they need to know what controls they're measuring and we'll need to have associated those controls with the risks that they're meant to mitigate.

So here we've taken our top risk, H3, whatever that is, and we've applied those three proposed controls to it, with some estimates about effectiveness.

And those controls, remember, they can help mitigate multiple risks, though those aren't shown here.
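One common way to sketch residual risk, assuming each control independently reduces the risk's probability (that independence is my assumption, not the talk's exact method), is to treat control effectiveness as multiplicative reductions:

```python
# Sketch: residual probability after applying controls, assuming each
# control independently reduces the odds of the risk materializing.
def residual_probability(inherent_p, control_effectiveness):
    """control_effectiveness: list of 0..1 reduction factors."""
    p = inherent_p
    for eff in control_effectiveness:
        p *= (1 - eff)
    return p

# Hypothetical top risk "H3": 40% inherent probability, three controls.
inherent = 0.40
controls = [0.50, 0.30, 0.20]  # estimated effectiveness of each control
residual = residual_probability(inherent, controls)
print(round(residual, 3))  # 0.4 * 0.5 * 0.7 * 0.8 = 0.112
```

Note the dependency the talk calls out: those effectiveness numbers only count once internal audit has evidence the controls actually operate.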

And what the internal audit team wants is evidence that a control is being operated.

Now, you can automate the collection of that evidence, because it's often just a screenshot or a PDF.

And honestly, your SecOps team, they don't have the time or the motivation to go collect this.

But if you haven't got evidence that a control is working, you can't reduce the impact or probability, because you don't have any proof, any evidence, that it's actually working.

And at some point, you need to go talk to your board of directors to justify all those controls.

And all that time you just spent doing risk assessment and planning.

So when you're presenting risks to your board, you wanna lead with how much risk the organization managed, right?

The organization managed this much risk, over this period of time, for this much cost in controls.

Now, you of course wanna have some backup slides showing the KRIs and their association to KPIs, but that's kind of it at a high level.

Now you'll see there's some details there as well about when a risk went above an organizational level and how the organization got it back within tolerances.

So again, you need a few more slides in this, but you'll be able to lead with something like this.

And if you're waiting to learn about AI, we're nearly there.

So we've been through a few slides here though, and some of you're probably still wondering like, how do we actually take action by conducting a risk assessment or maybe building a risk management process?

Let's talk about that, including deciding if this is something you need to do anything about right now. This chart is gonna be available via my QR code as well as in my book.

It helps organizations evaluate how effectively they've integrated cybersecurity risk with broader business risk management.

It helps identify specific improvement opportunities, and you should use it when you're doing strategic planning, or prior to a major security initiative, or as part of an annual security program assessment.

Now, there is a scoring guide too, but for it to be really effective, I'd say you should have your security and your business leaders complete the assessment independently and then compare your results.

That way you can focus your improvement efforts on the dimensions with the largest gaps between your current and your desired states.

And so if you think it's time to start talking about business risks, not just security risks, this is an outline for a half day workshop that you can conduct at your organization.

Again, the full versions and the downloadable materials from the upcoming QR code.

I do have a text version of the link too, including my notes on how to prepare for this workshop.

But again, that's based on the outcomes of that prior scoring.

If your organization isn't ready, don't start; go gain some executive sponsors first.

So I wanna move on to our third and final topic, which is whether or not an AI could do all of this.

And also whether it should, and this is the reason we've waited to talk about AI.

If your current risk management process is broken, automating it with AI might make it faster, but it's not going to make it better.

And one other disclaimer: again, none of this is legal advice, folks.

Here's my first attempt at having an AI create a corporate risk register.

One of my favorite parts here is it's gonna have a disclaimer about not sharing confidential data.

And we'll come back to that in just a minute, but for now, it's creating risks.

It's estimating impact and probability.

And this is an example because I didn't tell the AI anything about the company that I'm at.

So you might have heard this phrase, right?

It's a fair criticism of the prompt I used.

It's actually kind of a terrible prompt.

So next we're gonna look at seven deliberate, fairly robust prompts that I ran through 11 different large language models.

And you're gonna get a copy of all those prompts, again, as a download at the end in case you wanna reuse these.

It's also worth mentioning I ran these with a RAG system.

RAG, retrieval-augmented generation, is a way of giving an AI a series of source documents for reference so it doesn't hallucinate.

And in this example, I'm gonna give a description of a company and 30 of their SOC 2 controls.

Now this is synthetic data, and I'm gonna show you how to create it in a moment.

But RAG is useful because it helps the AI focus on a specific context.

And if you're a visual learner, here's the process.

Basically what we're gonna do is load the RAG system with some prerequisites and you could use your own applicable standards.

And then we're gonna run each prompt and then the output of that prompt is gonna go back into our RAG system so that it can refer to each output.
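That load-prompt-feed-back loop can be sketched as code. This is a minimal stand-in, not a real RAG library: `RagStore` and `run_prompt` are hypothetical placeholders for whatever vector store and LLM client you'd actually use.

```python
# Minimal sketch of the pipeline described above. RagStore and run_prompt
# are hypothetical stand-ins, not a real library or API.
class RagStore:
    def __init__(self, seed_docs):
        self.docs = list(seed_docs)   # prerequisites, e.g. your standards

    def add(self, doc):
        self.docs.append(doc)         # each output becomes future context

def run_prompt(prompt, context):
    # Stand-in for an LLM call grounded on the retrieved context.
    return f"[output of: {prompt} | grounded on {len(context)} docs]"

def run_pipeline(prompts, seed_docs):
    store = RagStore(seed_docs)
    outputs = []
    for prompt in prompts:
        out = run_prompt(prompt, store.docs)
        store.add(out)                # feed the output back into the RAG store
        outputs.append(out)
    return outputs

results = run_pipeline(
    ["generate company description", "generate SOC 2 controls"],
    ["NIST RMF excerpt"],
)
```

The design choice that matters here is the feedback step: because each prompt's output is added to the store, prompt two can ground itself on prompt one's company description, prompt three on both, and so on.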

So our first prompt is gonna generate synthetic data for our fictional company.

You can configure the company size, the geography, and the industry by editing what's between the curly braces.

And that prompt is gonna generate a company description.

That's cool, right?

It's gonna give some details that the RAG-backed LLM is gonna use with the various models.

I won't say which model this is for output, but I do have a summary coming up.

Our second prompt is gonna generate the SOC 2 control descriptions for our fictional company, and it's gonna generate this set of 30 initial control descriptions and statements for that fictional company.

Now I chose 30 because it's a representative set, but obviously that's not comprehensive.

Our third prompt is gonna generate the initial risk tolerance and acceptance statements for our fictional company, and it's gonna give us this initial set of 10 risk tolerance and acceptance statements that the LLM is gonna be able to use for testing.

Our fourth prompt is gonna create the risk impact assessment scale that I'm gonna be using to assess the business impact of a risk, and that's gonna give us this resulting five-level risk impact scale to start to assess our business risks.

Our fifth prompt is gonna generate some probable business risks for our fictional business using the company's description, and that's gonna give us an initial set of 10 business risks and every one of them is gonna have a business impact.

Our sixth prompt is gonna add some additional details for those business risks and structure them as NIST RMF statements, using that structured pattern that I showed and described earlier.

And that's gonna give us our very formal risk statements, the way NIST RMF wants you to define them.

Our seventh and final prompt is going to conduct an initial control assessment of the potential reduction in impact and probability of those risks, assuming our SOC 2 controls are operating at full capacity at our fictional company.

And that's gonna give us this final table view of how well those controls could potentially reduce the impact and probability of the risks.
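One row of a table like that can be modeled as a small data structure. This is a hypothetical sketch of the idea, not the actual register format from the talk: the risk name, the five-level scores, and the subtraction-style residual calculation are all illustrative assumptions.

```python
# Hypothetical risk-register row: inherent impact/probability on a
# five-level scale, plus the levels the controls are estimated to reduce.
# Names and numbers are illustrative, not from the actual register.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk: str
    inherent_impact: int        # 1 (minimal) .. 5 (severe)
    inherent_probability: int   # 1 (rare) .. 5 (almost certain)
    impact_reduction: int       # levels reduced by operating controls
    probability_reduction: int

    @property
    def residual_impact(self) -> int:
        # Residual can't drop below the bottom of the scale.
        return max(1, self.inherent_impact - self.impact_reduction)

    @property
    def residual_probability(self) -> int:
        return max(1, self.inherent_probability - self.probability_reduction)

entry = RiskEntry("Ransomware disrupts order processing", 5, 4, 2, 1)
print(entry.residual_impact, entry.residual_probability)  # 3 3
```

Whatever the exact arithmetic, the point of the seventh prompt is the same: it pairs each risk's inherent scores with an estimate of how far the SOC 2 controls move them.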

Sure.

Looks like a risk register to me.

That was admittedly a lot to look at.

Now this is a summary of how each of the models performed.

A lot of the models, honestly, failed by creating only three full risk statements.

I think o4-mini only processed three of the risks during the control assessment, and then left instructions for how a person could complete the rest of it, because it just gave up.

Which leaves us with these two characters, Grok 3 and Claude 4.

They did everything they were asked using the prompts that were provided, and the output looks reasonable enough to show on stage, at least.

That seems pretty great.

Right?

Right.

I mean, I just showed you a demo of how to automate a process that normally takes an organization weeks or months, and it's not always consistent.

Now sure.

You'd have to tailor those prompts to your organization and you'd have to give it your controls, but that seems pretty great.

Right, right.

Seeing some head nods.

Cool.

So that leaves us with something to consider.

Most of these risks have to be approved or signed off on by somebody. So which of these outputs would that somebody sign off on, even if you've tailored them to your specific organization with your specific controls?

And if you thought none of them, you're right.

What I just showed you is a very good demo and absolutely a potentially bad idea.

And in case anybody here is an attorney, just remember I'm not, and this is not legal advice.

So let me just start by speedrunning some common laws and regulations that seem to me pretty clear that C-level executives and the board need oversight and involvement in risk management.

We start with Sarbanes-Oxley, otherwise known as SOX.

It says a lot about risks and controls.

And if you were following the RR Donnelley case with the SEC in 2023, or the SolarWinds settlement of 2025, those are accounting controls they're talking about, not cybersecurity controls, but under SOX, executives face financial penalties due to misconduct.

Speaking of our friends at the SEC, they were pretty clear in their 2023 rules update about how the board, C-level executives, and management need to have oversight of business risks and risk management processes, and they're required to disclose their cybersecurity risk management practices.

Now, hipaa, it's a bit higher level.

They set the responsibility at the organizational level, not at the individual role level, but they're clear enough and I've seen
enough settlements with the OCR for material deficiencies and risk assessment processes in the past five years to feel pretty good.

Sharing this example.

Now I could keep going, but what if you're not in a highly regulated industry, and what if you're not at a public company?

I'm gonna paraphrase COSO's ERM here, because a lot of companies say they're following COSO for risk management.

Having an AI come up with your risk management program? I don't think that counts.

Maybe you've heard of SOC 2.

A lot of companies have these, or they need to get them in order to sell services to other companies.

SOC 2's pretty clear that management has direct responsibility for conducting risk assessments and implementing controls to address risks that could prevent achieving service commitments and system requirements.

And they go on to say it's not just an IT problem.

Risks are everybody's responsibility. But even if those don't apply, right, you're assuming you won't get caught, that nobody will find out an AI created your whole risk management program.

This is where I look at the DOJ's Evaluation of Corporate Compliance Programs.

They start out by saying that executive involvement ensures the compliance programs have got the necessary authority and resources and organizational support, but then they go on with this gem.

They tell prosecutors to understand why the company set up the compliance program the way they did, and that extends to risk management.

Imagine telling a prosecutor under risk of perjury that an AI told you to do it this way.

And that's where I turn to the United States Sentencing Commission's most recent comments on how to set criminal penalties.

Seems like they expect, weird, right, that business leaders would have oversight of risk management, and they consider ignorance of the program to be an issue in sentencing.

I recognize that might be a bit of a letdown.

We just went through how to run a risk management program manually, showed you then how to automate the whole thing, and then I showed you some of the potential negative legal and regulatory outcomes.

What if you just FOMO'd anyway and decided to run your program primarily out of AI, with minimal oversight from executives and from your board?

Well, this is what Sam Altman said about legal discovery in AI, and this isn't just ChatGPT.

If you're using a public model that's using your prompts for training or whatever, or maybe retaining your outputs, what's your confidence level that a prosecutor can't get the LLM provider to retrieve those as a part of discovery?

You might have seen this chart from earlier.

I've added two new columns to indicate if the inputs or outputs of that public LLM are normally subject to attorney-client privilege, or if they'd at least be protected from discovery.

But creating a risk management program, it's a normal business function.

None of these would be protected from legal discovery.

So right now, I can't personally recommend using any of these to create a risk program.

Said another way.

Companies can outsource their business functions, but if things go poorly, they're still gonna be responsible under the laws and regulations and contracts that companies follow.

And yet some organizations might still decide in spite of all these risks.

It's fine.

It's fine.

We won't get caught.

I hope everybody saw this in the news.

In August of 2025, just August of this year, security researchers found that OpenAI's helpful sharing feature was being indexed by the Wayback Machine, archive.org.

OpenAI's turned off that feature, but that's not really the point.

We don't know how these AI services might or might not be sharing or indexing our private chats.

And this is an excerpt from a chat that a user shared intentionally that was indexed by archive.org and where the attorney goes on to say that indigenous people don't know the monetary value of land and they have no idea how the market works.

So beyond the massive PR disaster that that is for that law firm, this is also discoverable evidence from a legal perspective and it's a persistent risk.

Now, what if that was your company's risk management program being described and archived on systems you don't control?

And honestly, where you might not even have visibility?

I wanna talk about First American Financial's nearly half-million-dollar settlement with the SEC for an example of what happens.

So I'm just gonna skim it here.

It's kind of short, but their risk management failures contributed to the exposure of over 800 million title and escrow document images, including sensitive personal information.

And the breach also led to a violation of a couple Exchange Act rules, which mandate maintaining procedures to ensure accurate and timely reporting of material risks.

I don't think this would've been any better for First American if the SEC had found out that an AI told them how to maintain their risk register.

But that's not AI, you might be saying.

Fine.

Let's talk about Two Sigma.

They were a hedge fund that was using algorithmic investment models, and they settled with the SEC in 2025 because of how they were managing their risks.

Again, we'll just skim it here for a second, but the SEC found that Two Sigma was in violation of multiple sections of the Investment Advisers Act due to their deficient risk management practices.

This was a firm using AI for financial trading.

At least they didn't have an AI write their risk management program.

Now, that's about all I had time for, but I wanna give you some more.

I mentioned the download at the end, so I'm just gonna quickly preview that for you.

It's got all the best slides, as well as the workshops and process documentation for you to get started, plus all those prompts.

So before we wrap up here, just by a show of hands, how many folks found this talk to be valuable?

Cool, thank you.

So I want to give you an opportunity to share that feedback with the event organizers who took a risk of putting me on stage with over 150 slides for a 40 minute slot.

And it'll take you less than two minutes.

At the end of the survey, you'll also get a copy of the best slides out of the deck, like the workshops, the questionnaires, the sample risk materials, the sample risk register, process definitions of how to get started, and those prompts and the evaluation and that workshop.

They're from a book that I'm currently working to get published and they're not available anywhere else right now.

So I'll give everyone about a minute, excuse me.

Also a chance to take a sip of water check to make sure there's no grouse in the audience.

And then we'll start to wrap up.

And if you don't like QR codes, talk ac slash cane and the code is risks.

Almost broke the mic stand here too.

And Christina, wherever you are, I'll be sending you all of this live feedback.

Hello.

I'll be sending you all this live feedback after, well, later today.

Yeah.

So at the start of today's presentation, I asked you what you thought, like, what are cyber risks?

And we started to talk about what it takes to define risks to the business and how to talk to executives about those risks.

And then we talked about the risks of using AI for your risk management program.

So here's what I wanna leave you with at the end.

First up, follow established standards for risk management.

Please don't make this stuff up on your own.

Use manual processes for oversight, because the law does not appear to allow AI to replace humans there, and automate the bits that make sense, like evidence collection.

But whatever you do, don't use AI to manage your risk management program.

With that, Christina, do we have any?

I saw you got 'em.

Do we have any time for questions?

Any questions?

And I will be around at the break.

I'll be around.

I, I, for those of you who haven't seen,

Christina Richmond: come on, don't be shy.

You've had enough coffee.

Kane McGladrey: So if you have questions later, just find me at the break.

Or at lunch.

I'll be around.

Christina Richmond: Going once, going twice.

Thank you, Kane.

Kane McGladrey: Cool.

Thanks everyone.
