On Demand

October Office Hours: How Good Vulnerability Management Secures AI

Transcript

00:07
Wonderful. Thanks, Chris. Hi, everyone. Happy... what's today? I think it's Wednesday. Thanks for joining office hours this month. For those of you who don't know me, my name is Greg Anderson, and I'm the creator and now CEO of DefectDojo. If you've attended our past sessions, this is a different setup from where I normally am. I'm in California for Dojo right now, and Chris and I didn't want to reschedule on you, so I'm doing it from here. So hopefully the presentation, the audio, and

00:37
all those good things are coming through okay. For this topic, when we talk about vulnerability management and how we justify AI usage, what I want to do is start with the opportunities we see for leveraging AI in security, and then we'll talk about how we justify these technologies, how we protect them, and so on.

01:02
So, starting with opportunities for AI usage in security: first, why even use AI in security at all? I'm a big fan of being lazy when possible. When I was doing security testing, I hated the meticulousness that was necessary to produce a good report. I would prefer to just do the fun things in security

01:29
rather than the things management would ask me to do. So, at the end of the day, the selfish reason I want AI to do better in security is to do as little work as possible and still accomplish our goals. I also think burnout in security is incredibly high. Security professionals are incredibly overworked. So any technology we can leverage to make our jobs a little easier,

01:58
provided that it is secure, I'm essentially all for and advocating for. The last thing I want to mention is that I'm really terrible at doing presentations, so I've started using a lot of AI-generated images so people don't have to look at my stick figures. On one hand, I think it's maybe a little disrespectful to your audience to just plug something in and generate an image, but I assure you, what I could create on my own is a lot worse.

02:29
With that said, when we look at innovations in the security market, I typically think about anything in security as three different bubbles, if you will. I think this is true for AppSec: we have the scanning phase, then we have the vulnerability management phase, which is where DefectDojo typically sits, and then we have the action phase. I think this is true for SOC alerts too. You have

02:58
the discovery of the alerts, then you have your SIEM, and then you have what you do about it, whether with SOAR or by hand. When we look at AI innovation in security specifically, I think it kind of looks like this. And I think there's a reason so little work is being done in discovery: there may be problems with determinism in AI that make it hard for scanning to have a real

03:27
AI uptick. There's some innovation in our space, and we have some interesting things going on there too, but the big thing I see is in the action phase. When we think about action, or addressing things in security, I think the big topic is auto-remediation and how we can use it safely. That breaks down into two things: where do these fixes come from? But also,

03:57
how confident can we be in the quality of the fix that is being submitted? Starting with the second half of this, how confident can we be that what's being submitted is good? I think that comes down to things that are typically outside of security, like whether you have great functional tests. The primary challenge I keep hearing for why people don't want to use auto-remediation

04:26
is that it creates more work for developers to review, that stresses developers out, and then the program falls apart. So I think some people are looking to just completely automate these things. But the cultural challenge that makes AI usage, and auto-remediation specifically, really difficult for people to implement, beyond the security implications of using it, is that

04:56
there are a lot of programs that try to arbitrage who is responsible for review. At some point security professionals are unfairly asked: which part of addressing things in code does security own, and which part do developers own? From seeing a lot of programs, I think developers have to own the actual review of code, and I think security has to own

05:25
the commentary on that quality. The thing I typically push back on when I hear it from developers is that they don't have bandwidth to address something. The reason I don't think that's a great argument is that, on one hand, it's basically just saying security is not a priority. I think that's what it really means at the end of the day. But the reason

05:53
I don't think it's good to accept this is that risk doesn't change based on the bandwidth of a team. Risk is absolute. I've generated this image here of a wall that has two holes in it with only one of them repaired. If you don't address all your high-risk items, at the end of the day you still have high risk, and the net outcome for the business is still the same.

06:22
I also think where we potentially go wrong in security in this process, and how these things break down, is that we have to own the quality of what we're submitting to developers for action. Sometimes we get stuck in these blame games, which can really hurt adoption of the technology. It comes down to this: if you can advocate for having the authority to jump in and fix something,

06:51
then that justification has to be incredibly solid to go with that power. I think the other thing that's really working against security professionals in this space is that there are two kinds of security programs. The reality is that some security programs exist primarily for theater: a checkbox exercise so that people can sell product. And

07:18
I think you have to recognize what type of program you're in. In full transparency, I've been in some of those programs, and it can be a relaxing break from doing things that are more impactful. If, at the end of the day, you're in a compliance-based program, there's probably not a lot you can do to advocate for the improvement of security with any technology. But there are programs

07:48
where the end goal is actually security. If you're in the compliance-checkbox kind, you may just be doomed from the start. It may not matter, even if you have good arguments for implementing the technology, whether it's AI or otherwise, or if you have the right data, the right arguments, and the right justifications for who should fix what to be effective, and where security's responsibility ends and dev's responsibility starts.

08:18
They may not be willing to listen, and I think it's important to recognize that. The final thing before we start talking about justifications and helping to secure AI is that there are a couple of really interesting papers vendors are putting out on AI determinism, and where we should care in security and where we should not. The idea is that unlike an algorithm, when you ask an AI something,

08:47
you typically won't get the same answer twice unless it's a fairly narrow problem. For auto-remediation specifically, I don't think we care whether the fix is the same or not, as long as it has the same quality and it passes the tests. It's also true that when you fix these things, you're more likely to arrive at the same outcome regardless.
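As a minimal sketch of gating on quality rather than sameness — the repo layout, patch file, and test command here are assumptions for illustration, not any specific product's workflow:

```python
import subprocess

def accept_ai_fix(patch_file: str) -> bool:
    """Accept an AI-proposed fix only if it applies cleanly and the existing
    functional test suite still passes. The exact diff may differ run to run;
    what we gate on is quality, not determinism."""
    # Dry-run the patch first so we never leave the tree half-applied.
    if subprocess.run(["git", "apply", "--check", patch_file]).returncode != 0:
        return False
    subprocess.run(["git", "apply", patch_file], check=True)
    # Run the functional tests; any failure rejects and rolls back the fix.
    if subprocess.run(["pytest", "-q"]).returncode != 0:
        subprocess.run(["git", "apply", "-R", patch_file], check=True)
        return False
    return True
```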

09:15
Intelligence and insights I'm kind of on the fence about. When we talk about risk-based vulnerability management, you likely want a vulnerability to always have the same risk outcome, essentially the same evaluation, whether that's "high risk" or "needs action." However you're doing that calculation, you want the result to always be the same in a vacuum, because otherwise findings can be hard to action and your prioritization can change out from under you.
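As a sketch of what "the same finding always gets the same evaluation" means in practice — the fields, weights, and thresholds below are hypothetical, not DefectDojo's actual model:

```python
def risk_bucket(finding: dict) -> str:
    """Deterministic evaluation: the same finding always maps to the same
    bucket, so prioritization doesn't shift between runs. Fields and
    thresholds are illustrative only."""
    score = {"low": 1, "medium": 3, "high": 6, "critical": 9}[finding["severity"]]
    if finding.get("exploit_available"):
        score += 3
    if finding.get("internet_facing"):
        score += 2
    return "needs action" if score >= 8 else "monitor"

# The same input always yields the same answer, unlike a free-form LLM prompt.
assert risk_bucket({"severity": "high", "exploit_available": True}) == "needs action"
```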

09:45
And then I think scanning determinism is way, way more important, because when we talk about comprehensive security testing, you want to be able to say from one week to another, one day to another, that the scanning you perform is consistent. Some compliance frameworks require that when you're going through the checkboxes of what your security testing does or doesn't entail. So finally,

10:14
That brings us to...

10:18
specifically how we justify using some of these new technologies in security. When I think about the security of the AI itself, one constraint that's really unique to our industry is that we have a hyper-fixation on the data security elements of AI, because the data we handle is so incredibly sensitive. And so I think

10:45
people always like to think that they themselves have the best security, and sometimes that's not the case. Sometimes the data truly is safer somewhere else. So whether you are developing your own internal models or using a third party, if the third party's security is better than your own and you're confident in that, then you can make that journey. But the safer approach for our industry

11:11
is just to run all of these things in a self-hosted manner, which is semi-difficult in AI today, but it's getting easier and easier. If you can remove the third-party data risk, what we've generally seen is that the resistance becomes much, much smaller.
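As a rough sketch of what the self-hosted route can look like: the client points at a model server inside your own network, so finding data never leaves it. The URL, model name, and payload shape below are assumptions for illustration, not any particular vendor's API.

```python
import requests

# A model server hosted inside your own network (hypothetical address).
INTERNAL_LLM = "http://llm.internal.example:8080/v1/completions"

def summarize_finding(description: str) -> str:
    """Send vulnerability text to a self-hosted model instead of a public
    AI service, removing the third-party data risk from the discussion."""
    resp = requests.post(
        INTERNAL_LLM,
        json={"model": "local-model",
              "prompt": f"Summarize this finding for a developer:\n{description}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("text", "")
```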

11:36
The second portion is the attack surface area, because there are all these unique vulnerabilities specific to AI that we still don't have great testing tools for, whether it's prompt injection or poisoning or any of the other new vulnerability classifications that are hyper-specific to LLMs. If we can limit the surface area to where people are comfortable, that's the second part of the equation that we've seen be extremely successful. And so,

12:06
when you put something on the public internet, there are so many additional security considerations that we then have to justify, right? The easy answer is: if it's not connected to the internet, and it's in a closet somewhere, and you put a security code on the door, what's the real risk? I think there are good ways, and more and more technologies in this space,

12:30
that let us bypass the risk of taking the shortest path of using OpenAI or Anthropic or one of these public services where all the data is intermingled, and where you then have to trust that somehow all these vulnerability classifications are being patched, or that their data security is perfect. The other way that I like to slice this is highlighting all the benefits

13:00
that people can get with AI in security, to try to make the justification to the people who make those decisions. So we have what we want to use it for, then we have some of the objections to its general usage as it relates to security, and then, finally, the benefits to make the whole argument. When I think about vulnerability management, these are typically the three pillars that I think about for a good program.

13:28
It used to be that aggregation was good enough, but now enrichment is so important, with the flood of vulnerabilities that really started in late 2020. My co-founder, Matt, makes this joke that software is like milk: it spoils over time, it doesn't get better. So enrichment and prioritization have become so, so important for vulnerability management.

13:56
The metrics that will see improvement if we succeed in justifying AI usage in a security program are, first, mean time to remediation, relying on AI to get these things done rather than humans; then the number of total fixes, the velocity of those fixes, the open-to-closed trend, and overall velocity.
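As a rough sketch of how those metrics fall straight out of the finding data you already track — the field names are illustrative, not any specific tool's schema:

```python
from datetime import datetime
from statistics import mean

def remediation_metrics(findings: list[dict]) -> dict:
    """Mean time to remediation, total fixes, and open-vs-closed counts from
    a list of findings with ISO-formatted opened_at/closed_at timestamps."""
    closed = [f for f in findings if f.get("closed_at")]
    still_open = [f for f in findings if not f.get("closed_at")]
    mttr_days = mean(
        (datetime.fromisoformat(f["closed_at"])
         - datetime.fromisoformat(f["opened_at"])).days
        for f in closed
    ) if closed else None
    return {
        "mean_time_to_remediation_days": mttr_days,
        "total_fixes": len(closed),
        "open_count": len(still_open),
        "closed_count": len(closed),
    }
```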

14:23
What I'm hopeful for is just to make security people's jobs easier: if we walk security executives through these different arguments, they're more likely to adopt these things. The other thing that I see painfully often is that usage of AI is really about risk strategy, and risk strategy and vulnerability management tend to be disconnected more often than not.

14:50
Risk strategy is defined by compliance frameworks like ISO 27001, or it's baked in from somewhere else, and the people responsible for finding vulnerabilities are stuck in this vacuum of "that's what I was told the standard was, and thus that's what I have to do." There aren't feedback loops. I really think we have to look at what we're actually seeing and then continuously update the risk strategy as it relates to

15:21
exposures that are specific to your organization. Basically: what are you ultimately comfortable sharing with AI or not, as it relates to risk strategy in your unique company? Because data sensitivity just has different levels; pharmaceutical is very different from, say, defense or banking. And then,

15:49
finally, moving to a couple of other things. I should have mentioned this at the beginning, but if you're new to these sessions, we always like to talk about some of the things that are coming on the commercial side of Dojo, and then open source updates as well, just to tie these things together. On the commercial side of the house, we do have a really big announcement and demo scheduled for AppSec USA that we're calling Sensei. I don't want to give too much away,

16:19
because Matt will be there and Matt will be doing that announcement. We think it solves and sidesteps many of the concerns people have around AI usage in security. The other big update is to DefectDojo's risk-based vulnerability management scoring. A key piece of feedback is that we have this really complicated formula to try to make sure that

16:47
prioritization scores are always accurate, taking into account things like exploitability, the number of exposures, risk, and how easy something is to exploit. We've come up with a calculator that is sort of a simple override, if you will. Our formula was too complex, I think, to let people manipulate it directly. So the base scoring still works off our original formula to keep its quality, but

17:15
something we've heard frequently is that people want to come up with their own prioritization scores based on how their leadership sees risk-based vulnerability management. What is today, Wednesday? Yeah, so that went live as of two days ago. We also added an Anchore connector, a new technical integration that automatically pulls data into Dojo so you don't have to ship data in.
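To make the override idea concrete: a minimal sketch of a base score with simple, leadership-defined override multipliers layered on top. The factors, weights, and field names are hypothetical, not DefectDojo's actual formula.

```python
def priority_score(finding: dict, overrides: dict | None = None) -> float:
    """A base prioritization score from a few risk factors, with optional
    per-organization override multipliers applied on top."""
    base = (
        3.0 * finding.get("exploitability", 0)    # how easy it is to exploit, 0-10
        + 2.0 * finding.get("exposure_count", 0)  # how many assets are affected
        + 4.0 * finding.get("severity", 0)        # analyst/scanner severity, 0-10
    )
    for factor, multiplier in (overrides or {}).items():
        # e.g. {"internet_facing": 1.5} to weight externally reachable assets up
        if finding.get(factor):
            base *= multiplier
    return base

# Leadership cares most about anything reachable from the internet:
score = priority_score(
    {"exploitability": 7, "exposure_count": 3, "severity": 8, "internet_facing": True},
    overrides={"internet_facing": 1.5},
)
```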

17:44
And then finally, this one is not out yet, but there have been some changes to build better hierarchies of data: better modeling of companies in Dojo, to associate vulnerabilities more accurately and to build whatever custom structures you want in terms of how you actually see your company, rather than making you stick to a single data model.

18:11
On the open source side of the house, in full transparency, this is probably the quietest month in terms of updates for open source. We've done some polishing on our docs, there's been some parser polish as well, we fixed certain dedupe edge cases that the open source community brought to our attention, and there are also some minor performance improvements.

18:39
We've talked about open source v3 quite a bit and we are making good progress; this particular month just wasn't too exciting. We're still aiming, I think, for the v3 alpha cutover at the end of December, potentially early January. Now that we've set the timeline, we're trying very hard to stick to it for the sake of the open source community, given that we communicated a date.

19:08
And I think that is everything we have for this month, Chris. Happy to take any questions.