May 22, 2024

Why High Quality Security Results Matter

 

Transcript

00:07
start with Misha. Yep. Hey everyone. I'm currently a security engineering manager at Semgrep, helping to lead the supply chain security research team. I've been in security for almost a decade now and have worked in different areas, including detection and response, incident response, infrastructure security, and cloud security.

00:35
Before Semgrep, I was at Sentry, helping to build out the detection and response function there. And before that, I was at Cloudflare, working to help protect the production environment, the environment that runs Cloudflare's network. Very excited to be here. Hey, everyone. Happy Wednesday. Thanks for taking the time to be here today.

01:01
My name is Greg Anderson. I'm the founder and CEO at DefectDojo, and what we're up to at DefectDojo is making security scalable and working to consolidate results into one actionable list. I'm excited to be here with Misha, and today we're going to talk about why high quality security results matter in our programs. We're going to kick this off with a few questions.

01:31
To get going, Greg, do you want to kick off with the first question?

01:38
Sure, yeah. So the first question is: what makes high quality security results, and why are they hard to accomplish? Yeah, I can start this one. So what makes results high quality, and why is it hard to accomplish? I would say it starts with the quality of the data that you're using to build security alerts on.

02:08
If the quality of the data is low, then I think that will lead to low quality results or alerts. There are a lot of tools out there that are very noisy, and as a security engineer, I really want results that are actionable, that help me focus on the riskiest areas in the environment, whatever those may be. It's a very tailored approach, depending on what kind of data you have and what your exposure is.

02:38
I've also seen a lot of work and effort being put into contextualizing alerts, which I think can make a big difference in how actionable the results are.

02:53
And Greg, what are your thoughts on this? For me, when I think about what makes results high quality, I think about results that are easy to action and results that don't repeat themselves. So, for example, reporting an issue once rather than a hundred times. It's also important to have all the information you need to act. If you're just providing an alert without the means to mitigate that alert,

03:21
I don't think you're going to get the outcome that you're looking for. What makes this really difficult is the breadth that we have to cover as security professionals. It's easy to be an expert in one area, but when we look at the breadth, which I think is very unique to security, it can be extremely challenging to get high quality results, not just in one area of security,

03:49
but across the entire board, to make sure that you're protecting your organization properly and providing ample coverage in all areas. Sorry, Misha, go ahead. Yeah, I was going to add that we both gave perspectives as security professionals or engineers, but we're also both at security vendors. So I wanted to build off of that and get your perspective on what

04:19
some of the challenges are that vendors actually face. I think one of the big challenges is that security is seen as a cost center compared to, say, a revenue generating center. So unfortunately, both vendors and the people using tools from vendors have to justify their existence, which can be

04:46
difficult compared to revenue generating activities. At the end of the day, I think if you can focus on the risk and what you're protecting, and even put a quantifiable amount on what a product stands to lose, even if it's just a guess, that really helps to justify the work and get the traction that we need. The other thing that's really holding back our industry is a lack of innovation in the technology. So when we look at

05:15
even some of the tools that are popular today, which were popular when I was pentesting ten years ago, there hasn't been a ton of innovation in how security tools detect things. For the most part, they're still parsing responses rather than truly testing the application in a more meaningful way, say through direct interaction with the JavaScript,

05:44
rather than just looking at request and response. Yeah, I feel like vendors are starting to play with the idea of contextualizing alerts across different log sources, if you're looking at detection and response tools, for example. But the data has been around for a long time; some vendors have had access to it for a long time. I'm glad it's going in that direction, but I agree with you, there's a lack of innovation there to do something different.

06:14
One of the challenges that we face as a vendor is balance. We're building rules for supply chain security, and it's a challenge to catch instances of a vulnerability and be specific enough that we're not generating a lot of noise, but also broad enough that we're able to catch different instances of it across a wide set of customers.

06:43
So it's balancing being generic enough, but also not too generic, in our approach.
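As a rough illustration of that tension (a minimal sketch in Python, not how Semgrep's supply chain rules are actually implemented; the package name and vulnerable range are made up), a rule keyed on a dependency's version range stays specific to the vulnerable versions while still matching however the manifest happens to pin them:

```python
from packaging.requirements import Requirement
from packaging.specifiers import SpecifierSet
from packaging.version import Version

# Hypothetical advisory: "examplelib" is vulnerable from 2.0.0 up to (but not
# including) 2.3.5. Expressing it as a specifier set keeps the rule specific.
VULNERABLE = {"examplelib": SpecifierSet(">=2.0.0,<2.3.5")}

def check_requirement(line: str) -> str | None:
    """Return a finding for a single requirements.txt line, or None."""
    req = Requirement(line)
    vulnerable_range = VULNERABLE.get(req.name.lower())
    if vulnerable_range is None:
        return None
    # Only flag exact pins we can evaluate; looser specifiers need lockfile data.
    pins = [Version(s.version) for s in req.specifier if s.operator == "=="]
    if any(pin in vulnerable_range for pin in pins):
        return f"{req.name} {pins[0]} is in the vulnerable range {vulnerable_range}"
    return None

print(check_requirement("examplelib==2.1.0"))   # flagged
print(check_requirement("examplelib==2.3.5"))   # outside the range, prints None
print(check_requirement("requests==2.31.0"))    # unrelated package, prints None
```

Too narrow a range misses real instances; too wide a range drowns customers in noise, which is the balance being described.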

06:54
And on to our next topic. Greg, we'll throw this to you first. What impact do you think low quality results have on dev teams, looking at our colleagues and not just the security professional realm? Yeah, where to start? At the end of the day, the unfortunate reality is that

07:25
if work can be shifted, it will be, because everyone is under a lot of pressure these days. And so if you aren't delivering results that are actually actionable, that is a great excuse to essentially ignore them. When it comes to building trust within an organization to actually get security issues resolved, delivering things that aren't actionable or aren't helpful is the quickest way to ruin a reputation.

07:54
And once that's damaged, it can be really hard to restore. The other thing I think about is that when you aren't producing great results, or you aren't scanning everything, you can't truly know what the risks are to your company. If you don't have that complete perspective, you can't prioritize properly.

08:17
I think those are really the keys. Number one, above anything else, it's a reputational risk, which in turn becomes an execution risk. But you also have to be able to scan everything in a meaningful way to truly know what your risks are.

08:37
Yeah, I agree. Trust and reputation are everything in security, and that doesn't change if your customers are internal instead of external. You don't want to burn out your internal partners either. One thing that really worked for me in the past is embedding your alerts pipeline into whatever your dev teams are already using. If they're responding to and triaging their alerts

09:06
in Jira or in Slack, and I don't necessarily like Slack for triaging alerts, but if that's where they are, meet them where they're at and make it easy for them to look at something and quickly make a determination. We've also used the same monitoring tools. For example, at Cloudflare, we had infrastructure alerts, and

09:36
SRE already had Grafana and a couple of other monitoring tools in place to check the impact on servers. So we used their existing tools to monitor how our detections were performing, and that helped build trust between the two teams. When we had alerts for our infrastructure, it was much easier to get them to respond and take it seriously. Agreed. Yeah.
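As one concrete way of meeting dev teams where they already work, like the Jira triage Misha mentions (a minimal sketch, assuming a Jira Cloud instance at a placeholder URL with a made-up project key and API token; it is not any specific product's integration), a finding can be filed straight into the team's existing backlog through Jira's REST API:

```python
import requests

# All of these values are placeholders for illustration.
JIRA_BASE = "https://example.atlassian.net"
AUTH = ("security-bot@example.com", "api-token-goes-here")

def file_finding(summary: str, description: str, project_key: str = "APP") -> str:
    """Create a Jira issue for a security finding and return its issue key."""
    payload = {
        "fields": {
            "project": {"key": project_key},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
            "labels": ["security"],
        }
    }
    resp = requests.post(
        f"{JIRA_BASE}/rest/api/2/issue",  # v2 endpoint accepts plain-text descriptions
        json=payload,
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["key"]

issue_key = file_finding(
    "SQL injection in /api/search",
    "Parameter 'q' is concatenated into a query. See the scanner finding for details.",
)
print(f"Filed {issue_key} in the team's existing backlog")
```

The point is the delivery channel, not the payload: the same finding lands where developers already triage work instead of in a standalone report.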

10:06
Super interesting point. In my experience, in the days when I was still generating pentest reports, I would be lucky if I could get someone to look at those findings once, and typically it required begging. When you push things to where developers actually work, their natural backlogs, Jira, Slack, GitHub, etc., our data says they interact with them 12x more. So it's a major shift in dynamic, and where you ship results

10:36
may even be more important than the results themselves. Would you say that security tools are actually built for the dev teams, then, and not necessarily for the security engineers? Oh, that's a really interesting question. We've seen an interesting shift in the industry where security tools are starting to be marketed toward developers more.

11:05
I think sometimes that goes a little too far, because it depends on how organizations are structured and what their priorities are. But ultimately it's very, very difficult to be successful in pushing security responsibilities onto developers, at least in my experience.

11:27
or not.

11:29
Okay, so if there's no more on that topic, we'll move on.  This one's for you, Misha, first. What are the costs to a business of not having the right tools and platforms in place?

11:44
Yeah, I mean, security, like anything else, is an investment, and there's a perception that it's a cost center. We're not necessarily building features that get more customers onboarded to the platform or using the tool. But security teams play a very important

12:07
role in the reputation of the company. But it can take multiple years to see a return on security efforts. So in order to get support across the business for all of those years, you have to be very judicious and intentional about where you're investing and where you're asking the business to invest. And so at that decision point,

12:33
when you're picking which tools to use, it can be very detrimental to the reputation of the security team if you don't go with the right tool, one that makes sense given the skill set of the team and what you're actually trying to identify in your environment. More tools does not mean better security. Sometimes it can be very tempting to build something instead of buying it, but there are many different approaches.

13:05
But I would say there are a lot of OPEX costs associated with it, and it takes time to see the returns.

13:18
Right.

13:22
Where to start? I think when you pick the right tools, you can prevent breaches, without question. When we think about programs holistically, it's important to have as much information about your organization as you can. I can't think of a job where I was the security engineer and the requirements didn't change, shall we say, about a year in, give or take. So when you have low quality results,

13:51
you run an internal reputational risk to the effectiveness of your program. Whereas when you don't have the right coverage, or the right tools to meet that coverage, you create an external risk, specifically with your attack surface area, specifically with getting breached. The reality is that good security tools are expensive, but they're still roughly 20x less than what a single breach would cost. And that is how

14:21
I have argued for better budgets in the past, because the reality is no business wants to spend money it doesn't have to, so they must be compelled. You have to put together a convincing argument for one tool versus another, and I think that comes down to coverage, utility, and value. The other thing that's really hard to quantify is the things that didn't happen; it's difficult to prove a negative.

14:49
So I would go back to trying to quantify the key areas of the business and what a breach would cost there when we look at the cost of tools. Typically, I think you'll find the program costs around one twentieth, with respect to the tools, of what a breach would cost for key pieces of the organization. I agree that it's hard to quantify what you caught. And yeah, it's kind of a thankless

15:19
role in that way, but you get more attention when something does go wrong. I'm curious if there was anything that worked for you in the past to show the effectiveness of making the right decision, picking the right tool or platform. Did you have to build reports that showed everything you caught, and the risks if you hadn't caught them, things like that?

15:48
I think it's a really tough balance, right? Because if you're spending time doing that, which is hyper important, you're not spending time protecting the organization. And if your resources are already maxed, a mistake that's very easy to make is to say, I'm going to focus all of my time on protecting and none of my time on reporting. Before founding DefectDojo, I had about six or seven security engineering roles, give or take.

16:16
And in two of those, we were successful in proving the value through reporting. What it came down to was monitoring attack surface logs. When we saw a specific type of attack come in, we would go in and say, look, we fixed issues around that a couple of weeks ago. At one organization, we even went as far as tying app data into our SIEM, which gave us incredible alerting potential to know

16:45
when someone should get woken up in the middle of the night and when it was okay to sleep. So if you had a Microsoft exploit attempt against a Microsoft server that we knew was vulnerable from our app and infrastructure testing, that was something that needed to get escalated. But if you saw an attack targeting a Windows-based system when the system was actually Linux, then you probably shouldn't get woken up. But

17:14
most of the time, the SIEMs and all the systems around them don't have that context, unless you give them that context, unless you create it and make it available for consumption.
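A minimal sketch of that kind of context join (illustrative only; the asset inventory, finding store, and field names are assumptions, and a real SIEM enrichment would pull from your CMDB and scanner results rather than in-memory dictionaries):

```python
# Hypothetical asset and vulnerability context, e.g. exported from a CMDB
# and from app/infrastructure scan results.
ASSETS = {
    "10.0.4.12": {"os": "windows", "services": ["iis", "smb"]},
    "10.0.7.33": {"os": "linux", "services": ["nginx"]},
}
OPEN_FINDINGS = {
    ("10.0.4.12", "smb"): ["CVE-2020-0796"],  # known vulnerable from scanning
}

def should_page(alert: dict) -> bool:
    """Escalate only when the attack plausibly matches the target's context."""
    asset = ASSETS.get(alert["dest_ip"])
    if asset is None:
        return True  # unknown asset: fail toward waking someone up
    if alert["target_os"] != asset["os"]:
        return False  # e.g. a Windows exploit thrown at a Linux box
    # Page when the targeted service has an open, confirmed vulnerability.
    return bool(OPEN_FINDINGS.get((alert["dest_ip"], alert["target_service"])))

# Microsoft exploit against a known-vulnerable Microsoft server: escalate.
print(should_page({"dest_ip": "10.0.4.12", "target_os": "windows", "target_service": "smb"}))
# Windows exploit against a Linux box: let people sleep.
print(should_page({"dest_ip": "10.0.7.33", "target_os": "windows", "target_service": "smb"}))
```

The logic itself is trivial; the work is creating and maintaining the context the SIEM doesn't have on its own.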

17:34
Okay, moving on to the next one. Misha, this will go to you first. What does the end state look like when you actually have the right results and the right program in place?

17:48
Yeah, the end state.

17:54
I am biased towards tools that help me be proactive versus reactive. In the end state, I would look at how many incidents we're dealing with versus how many we were able to prevent, things we were able to catch before they turned into a full-on incident. Or, in the case of vulnerabilities, were we able to effectively work with our

18:24
dev teams or product engineering teams to fix a vulnerability that's high or critical, versus the vulnerability getting caught a different way. So that's something I'd be biased towards. But I also think the end state with the right results and program depends on what kind of environment you're working with and what data you have to protect.

18:54
Does your company have a lot of servers and hardware that you have to think about? That's going to look very different from a company that's built in GCP or AWS. One size does not fit all, in my experience in security.

19:19
When we look at what reporting gets to leadership, there are often KPIs tied to security. So we're implementing things, whether it's rolling out a new tool or using a new technique, to achieve higher levels of automation, which I think is another key piece of any security program's success. The other thing that I want to achieve,

19:48
beyond KPIs, is a better working environment for the people doing the testing, the individual contributors who are ultimately responsible for making that program happen. Without automation, you can have extremely long days and a job that is near impossible. My hope, in terms of the tools you choose to leverage and the processes you choose to roll out, is that you get to a better working environment,

20:18
one that is judged by how much leisure time you have. At Rackspace, when we started to embrace automation, we had an internal KPI of how much foosball time we got to play. So although the metrics to leadership are important, there are also key things that I want to achieve for the team, in terms of trying to lighten the burden that is a security program.

20:48
Okay, and when you're thinking about the end state, I mean, you answered this a little bit already, but there's the end state for the business, the end state for the team, and maybe ultimately the end state for the external customers of the organization. Do you think the best programs address all three of those, or is one more important than the others?

21:17
Misha, do you want to answer that first? Or do you want me to take a stab? No, you can go for it. I think they're all very important, and I think they're all harmonious to a certain extent. Sometimes you have to make incredible efforts to achieve those KPIs, but I hope that, to a certain extent, each one facilitates the others, for the longevity of the program and the people.

21:50
Yeah, KPIs don't always tell the whole story, and it's important to be intentional about which metrics you choose, because you're setting up incentive structures for your organization and you want to make sure you're encouraging the right behavior. For example, for detection and response teams, to measure the health of the team,

22:19
it can seem like an easy pick to just go with the number of detections you have, but that doesn't pay attention to the quality of those detections. It does take longer to write a higher quality detection that's tied across different sources and to add automation for it. So I think that's a pitfall for KPIs and metrics.

22:47
It's super interesting too, because it can change with maturity. One really interesting question to ask security leaders, to Misha's point, is: are more alerts good or bad? Ideally, you would see alerts trend down if you're doing things correctly. But when you bring in a new tool, you may want to see alerts trend up to know that the tool is actually working and providing value.

23:15
Yeah, the data is really interesting in terms of how you choose to look at it. Two different leaders can look at the same KPI and interpret it differently. That's very true. Something that feels unique to security, or fairly unique at least.

23:33
Okay. Our last topic is one we could probably spend an entire hour on, but we're going to try to address it in relation to results. Going beyond results, how do we address the whole idea of AI and security? For example, how do we enhance data further with threat intelligence or AI? Is AI the answer to all of our

24:03
security woes? And what are the near term and long term impacts of AI?

24:11
Greg, you're kicking this one off. Oh, that's fine. Okay. I want to separate threat intelligence and AI first. I do think there is a race going on right now in security for who can produce the best threat intelligence feed and make it actually applicable to security testing.

24:36
When we look at the different frameworks, I think EPSS is a standout. If you're not familiar with EPSS, it's a scoring system based on real world expectation, so it gives you data on the likelihood of exploitation drawn directly from the real world. And while I think that's incredible and they're a standout, the one hole

25:02
in that is that just because something isn't being exploited yet doesn't mean it won't be exploited next week. It's a landscape that can change more rapidly than a security team can react to. So I do think you have to look at multiple factors, even just using classic things like CVSS scoring alongside EPSS.
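As a rough sketch of what combining the two can look like (assuming the public FIRST EPSS API at api.first.org; the CVE list, CVSS scores, and the blended "priority" formula are made-up illustrations, not a recommended policy):

```python
import requests

def epss_score(cve_id: str) -> float:
    """Look up the current EPSS exploitation probability for a CVE."""
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": cve_id},
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    return float(data[0]["epss"]) if data else 0.0

# Hypothetical findings with CVSS base scores pulled from your scanner.
findings = [
    {"cve": "CVE-2021-44228", "cvss": 10.0},
    {"cve": "CVE-2019-0708", "cvss": 9.8},
]

for f in findings:
    f["epss"] = epss_score(f["cve"])
    # Simple blended view: severity if exploited, weighted by likelihood.
    f["priority"] = f["cvss"] * f["epss"]

for f in sorted(findings, key=lambda f: f["priority"], reverse=True):
    print(f"{f['cve']}: CVSS {f['cvss']}, EPSS {f['epss']:.2f}, priority {f['priority']:.1f}")
```

The caveat from the discussion still applies: EPSS reflects what is being exploited today, so a low score is not a promise about next week.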

25:32
And then on the AI front, we are leaning more into the machine learning space rather than going to the full-blown answer of generative AI. I think when it comes to testing security things, that will always be difficult for AI, just due to the breadth you need, and also because, when you train models and look at how we think about security results, 90% isn't good enough. You have to be hyper accurate and hyper sure across a number of

25:59
different topics that have totally different inputs, outputs, and behaviors. Take even very simple vulnerabilities, SQL injection versus cross-site scripting: you have to treat them totally separately from a data and modeling perspective. I think AI is hyper interesting in the remediation space. Post-testing, it's a lot easier to get the results you'd hope for out of

26:28
models that remove more and more of human input.

26:35
Yeah, I have a follow-up to one of the points you made. Do you think red teaming and pen testing are areas with strong AI use cases? Because if you use ChatGPT and give it some code and say, hey, find me the vulnerability in this code,

27:03
has that been a bad experience? I know we're early on as well, so I'm curious to hear what you think that could evolve into in a few years. I think there is a use case, but I don't know if it will get to the point where it's as accurate as we need it to be on the testing side. I've seen really exciting things in the remediation space and the insight space. So,

27:32
after finding the issue, what information is relevant, how exploitable is it, how do you fix it? Enhancing the core findings after testing is where I expect there will be the most success. Detecting is more challenging. There could be value on that side, but I don't know if we'll get all the way to the point where it's the only answer, or something we can

28:02
rely on exclusively for the protection of organizations and teams.

28:12
Yeah, I think for detection specifically, ML seems to be a better fit than generative AI. It's almost like statistics: you're analyzing data, trying to find anomalies. Detection teams in the past, before ML became mainstream, were

28:36
picking one behavior, or matching on one field for a specific type of log. That sounds very different to me from having something that can help with building a model, analyzing data historically, and helping you understand what the anomalies in behavior are. So I think machine learning could be very exciting, at least in the area of detection and response.

29:06
But each area of security has its own potential with AI and ML.
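As a toy illustration of the statistical flavor Misha describes (a minimal sketch, not any particular vendor's approach; the login counts and the 3-sigma threshold are made up), even a simple z-score over historical event counts already goes beyond matching a single field in a single log:

```python
import statistics

# Hypothetical history: logins per hour for one user over recent days.
history = [3, 5, 4, 6, 5, 4, 3, 5, 4, 6, 5, 4, 3, 5]
observed = 42  # logins in the current hour

mean = statistics.mean(history)
stdev = statistics.stdev(history)

# Flag anything more than 3 standard deviations above the historical mean.
z = (observed - mean) / stdev
if z > 3:
    print(f"anomalous login volume: {observed} vs mean {mean:.1f} (z={z:.1f})")
```

A real detection pipeline would learn baselines per entity and per data source, but the principle, modeling historical behavior instead of matching one value, is the same.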

29:15
And for each of your companies, how do you see short or long term AI impacting your product, if at all?

29:29
Do you want to go first? Sure. Yeah, we are continuing to look at where it makes sense to leverage AI, and machine learning specifically, post discovery, which, who would have thought, aligns with the view I just gave, right? So we look at things like deduplication, where we already respect human input. Our deduplication

29:57
takes human input into consideration and changes how it behaves over time, but we aren't going all the way to saying, oh, the machine will tell you the best algorithm. I think the challenge in training models for AI that works across organizations is maybe the barrier, because whether a specific detection is a false positive is highly dependent on the technology, the tool,

30:26
the company, and their specific implementation around those technologies. So maybe the barrier that will be hard to break through is creating AI that works across companies and gives you the results you're looking for. But when you can do internal model training for a specific company, we've seen that produce better results, especially in the long term for customers.
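A minimal sketch of deduplication that respects human input (illustrative only, not DefectDojo's actual algorithm; the hash fields and suppression logic are assumptions): repeat findings collapse onto a stable key, and anything a human has already marked as a false positive stays suppressed:

```python
import hashlib

def dedupe_key(finding: dict) -> str:
    """Stable key so the same issue from the same tool collapses to one finding."""
    raw = "|".join([
        finding["tool"],
        finding["rule_id"],
        finding["file_path"],
        finding["title"].strip().lower(),
    ])
    return hashlib.sha256(raw.encode()).hexdigest()

known: dict[str, dict] = {}        # key -> canonical finding
false_positives: set[str] = set()  # keys a human marked as false positive

def ingest(finding: dict) -> str:
    key = dedupe_key(finding)
    if key in false_positives:
        return "suppressed (human said false positive)"
    if key in known:
        known[key]["occurrences"] += 1
        return "duplicate of existing finding"
    known[key] = {**finding, "occurrences": 1}
    return "new finding"

f = {"tool": "scanner-x", "rule_id": "sqli-001",
     "file_path": "app/search.py", "title": "SQL Injection"}
print(ingest(f))                     # new finding
print(ingest(dict(f)))               # duplicate of existing finding
false_positives.add(dedupe_key(f))   # analyst triages it as a false positive
print(ingest(dict(f)))               # suppressed (human said false positive)
```

The human triage decisions become training data of a sort; the open question raised here is whether such signals generalize across companies or only within one.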

30:58
Yeah, we actually just spun up an AI task force internally to dive deeply into how we can incorporate AI into our platform. We're also focusing on the post-finding, post-alert stages, to help customers review, triage, and remediate a Semgrep finding. So

31:24
it's fairly early, it's in the experimental stages. Sometimes customers actually aren't willing to turn on the AI capabilities because they don't want exposure in that way, so I'm not sure how that's going to play out long-term. But we're seeing a lot of positive signals from customers being able to use AI in their triage and review stages.

31:55
If there's a code vulnerability, it can give you a suggestion on the fix. It can even assess whether it's actually a false positive and make that suggestion for you. I haven't seen AI in security tools that's mature enough that you don't have to have someone double-check what it's suggesting, though. I don't think we're there yet.

32:28
Okay, I think that's our last official question before we have Greg and Misha summarize. Are there any questions from the audience?

32:46
It can be about what we talked about, about the products, or any other topics you're interested in while you have these experts here.

33:00
Okay, well, I guess you guys did a really good job and covered everything perfectly, so we don't have questions. We can wrap it up, then, with each of you giving a little summary of what you want people to take away from this discussion. As they go back and put their security practitioner hats on, what should they be thinking about when they're looking at results and selecting tools?

33:30
I can go first. My key takeaway would be: don't be afraid to push the boundaries of what it means to have high quality results. It's very tempting to build the easiest kind of detection, or to throw the problem to your dev teams and just let them figure it out. But I would say

33:59
that pushing yourself to build high quality alerts, rules, detections, whatever it is, will pay off in the long term. And honestly, I would rather find 10 very real security issues than just feel good about identifying 20 maybe-not-so-great findings. So if it's a pure numbers game, I would be more thoughtful about that.

34:31
So you don't put the informational findings in your reports? Yes, tune out the info alerts. That's very fair. Great advice, right? Key takeaway. It takes me back to the days when I was honestly padding my metrics. Misha's very right, I shouldn't have padded my metrics, but I did. It's a very different game as a pentester. Yeah.

35:00
For me, tools all come down to utility. Think about what coverage a tool gives you for your organization. When I'm building a program and rolling it out, I'm thinking about 100% coverage: how do I get to 100% coverage? Because if you're not scanning everything, then you can't know where your risks are. And for me, that comes down to

35:25
utility. Then, once you scan everything, you have to get the results to a place where they're actionable, where they're high quality. And beyond that, get the findings into developers' hands in the way they actually like to work, rather than more traditional reporting. Those are the key takeaways and the key challenges that I think about with building programs today,

35:52
and ultimately getting to a place where you have a mature program and high quality results  to work with.

36:02
Great. Well, Greg and Misha, thank you very much, and thank you everyone for joining. We will have the recording available within the next day or so if you want to share it with your teams or colleagues who couldn't make it. And we will see you on the next one. Thank you so much.