Transcript
00:00 Webinar Overview
01:30 Why Scanners Disagree
02:54 Security in Depth Mindset
04:43 Budget Reality and Aggregation
05:30 Vendor Trust and Lock In
07:08 Parser Nightmare CSV Export
10:54 Actionable Takeaways
12:23 MCP Server New Resources
13:23 OWASP Top 10 Report Demo
14:21 Agentic AI Top 10 Reporting
15:31 EU CRA Readiness Assessment
17:04 PSIRT Advisory Engine Preview
20:20 PSIRT Workflow to DefectDojo
22:41 Wrap Up Q&A
Hello, everybody. All right, we're gonna have a brief webinar today; I think this whole thing will run about 30 minutes or so, if you're trying to time things. Why can't we trust any security tool?
So we're gonna cover a couple of different topics today: security layers, security vendors. I'm gonna try not to go into a rant here, but there's a reason why I asked, can I do a webinar on this? And here we are. Then I'm gonna highlight some recent updates for DefectDojo Pro, specifically our MCP server.
We have some new resources to help with the latest OWASP Top 10, as well as their Agentic AI Top 10, which was released in January, I believe. There's also the EU CRA: the ability to run a report that gives you a gap analysis of where you may stand. And then, a special treat, the new DefectDojo PSIRT advisory engine.
What? Yeah, we'll do a very quick teaser preview of what this is and what's coming in future months. So let's get started. All right. I may say some things here that might sound like I'm mansplaining, security-splaining, to a lot of people who are very familiar with security.
I'm also anticipating that there may be some security vendors watching or listening to this, 'cause this might get a little bit of attention as I continue my rant. I've worked 20 years in internal IT: consulting, operations, software development. My entire career has been in internal IT, and a lot of it has also dealt with security, high-security environments, in a lot of different places.
And I've even sold security products; we were selling a scanner in the past. I've seen a lot of bakeoffs, and I've done my own bakeoffs between different scanners: which ones are more accurate, which ones are faster, all these criteria. The number one thing that I have learned is that no two scanners are alike. The majority of the time, you do not get the same results from two different scanners. You get a majority of the same things, and then there's always this single-digit-percentage fringe of findings, where this scanner found some things that that one didn't, those kinds of things.
So yeah: no two scanners produce the same results, and you always have some differences between those scanners. Scanners can scan things in a lot of different ways: the way that they identify versions of files, whether they're actually looking at the internals of a file or a binary versus just looking at the stamp that says what version it is, and things like that.
There's a lot of different approaches to the way a scanner can try to identify if your stack has a vulnerability. So this shouldn't be new information, but for some, I think it maybe is. And this is my quote. I've even heard somebody say, "Nope, this environment is secure," and that is your first red flag: you don't understand that no environment is 100% secure.
In fact, this is why we do security in depth. No security layer is impenetrable. Every layer has a weakness. That's why we use layers. And this applies to scanners: you can have different scanners that are doing different things, and sometimes you want to have different scanners scanning the same security layer.
We work with a customer that has a very high-security environment, and they run five scanners on every single container. Then they compare and contrast those results, because it is that important to them to make sure they're getting comprehensive coverage. If one scanner can find something that the others don't, they need that information.
It also builds confidence: when 90%, or 95%, or 98% of the findings are showing up the same, that increases your confidence. And this is the other principle, and it goes back to "this is a secure environment": you should always be assuming that you're already compromised. The grind of security is that you're constantly trying to prove you're not. You have to assume that you are compromised, and you're constantly trying to prove: no, our network packets don't include anything extra, nothing is in these layers that shouldn't be, nothing is encrypted that shouldn't be. Just all the little checks that you have to do and all the details.
So again, it all boils down to: you cannot trust any single security layer, and you cannot trust any security scanner or tool. You want to use layers, and you want to do as much cross-checking and layering as you can. Now, that all sounds really good, but the reality is budgets, time, all the overhead. Even adding a second scanner is, oh my gosh, now I gotta go change all these pipelines, or whatever it is, right?
There's a lot of overhead to that. And if you're not using a tool like DefectDojo that aggregates and makes that easier, you're doing all of this by hand too. So it really penalizes you from a work standpoint and an effort standpoint. And if you're trying to use AI tools to do this, they're still trying to do the same things that DefectDojo does: normalizing that data, de-duplicating that data, transforming that data.
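To make the normalize, de-duplicate, and transform idea concrete, here's a minimal sketch of cross-scanner aggregation. All field names, scanner names, and the dedup key are hypothetical; this is not DefectDojo's actual algorithm, just an illustration of the idea.

```python
import hashlib

def normalize(finding, scanner):
    """Map a scanner-specific finding dict onto a common shape.
    Field names here are hypothetical -- real scanners each use
    their own keys, which is exactly the problem being described."""
    return {
        "cve": (finding.get("cve") or finding.get("vuln_id") or "").upper(),
        "component": (finding.get("component") or finding.get("package") or "").lower(),
        "source": scanner,
    }

def dedup_key(n):
    """Treat two findings as 'the same' if CVE + component match."""
    raw = f'{n["cve"]}|{n["component"]}'
    return hashlib.sha256(raw.encode()).hexdigest()

def aggregate(results_by_scanner):
    """Merge findings from several scanners, tracking which
    scanners agreed on each deduplicated finding."""
    merged = {}
    for scanner, findings in results_by_scanner.items():
        for f in findings:
            n = normalize(f, scanner)
            entry = merged.setdefault(dedup_key(n), {**n, "seen_by": set()})
            entry["seen_by"].add(scanner)
    return list(merged.values())

results = {
    "scanner_a": [{"cve": "CVE-2021-44228", "package": "log4j-core"}],
    "scanner_b": [{"vuln_id": "cve-2021-44228", "component": "Log4j-Core"},
                  {"vuln_id": "CVE-2023-0001", "component": "openssl"}],
}
merged = aggregate(results)
print(len(merged))  # 2: the log4j finding deduplicates, openssl is the fringe
```

The `seen_by` set is what gives you the cross-checking confidence described above: findings every scanner agrees on versus the fringe only one tool caught.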
No two scanners have the same format; we're gonna get more into that as well. Now, there may be vendors out there going, oh, of course, Tracy, you're arguing that all the vendors should just be working together, because that's what you guys do. You guys don't exist unless you're importing our data from our scanners.
Yeah, you're right. We have this unofficial vision statement, if you will, at DefectDojo: we help every environment improve security, whether it's through the open source version or through the paid Pro version. Any environment can benefit from universal vulnerability management: normalization, deduplication, all the things that we do.
And yeah, you guys don't sell a scanner, 'cause these other vendors, they have a scanner, and they're telling us it's the best scanner. I can tell you that I've really annoyed some salespeople in the past. I've had a situation where a salesperson said, oh, our scanner is the most trustworthy.
"You can trust our scanner." And immediately I knew that undermined our credibility, because in security you cannot trust any scanner. What you should be saying to any prospect is: we have a great scanner, it's very accurate, and it catches as many things as we can possibly put into it.
But you cannot trust any scanner, so you should be using multiple scanners, and you should definitely be including ours in that pack. That's really the message you should be hearing from vendors: that they understand security principles, that they're putting your security first and not their own aspirations.
So why am I ranting? Why is this coming up? Why did I ask, can I do a webinar on this? Because I was working on a parser. DefectDojo supports parsers for the community version as well as the Pro version. The parsers give us the ability to, in great detail, import data, normalize that data, transform that data.
In many cases, we have to break fields apart so that you can use the data elements separately. Sometimes we need to combine data elements. Every single scanning tool produces different formats, different strings, all of that kind of thing. So I was building a parser for one of those scanners, and I was having a real issue trying to get it to work.
It would import, but then I was missing fields, and I was trying to figure out why. Then I was getting column errors: it couldn't parse the CSV. So I tried to open it in Excel, and Excel couldn't open it. So then I'm thinking maybe there's something wrong with the file.
Because if Excel can't open this, and the Python CSV parser, which has been around for how many years, can't parse it, it's gotta be a problem with the file. Maybe the prospect who had provided the file, when they edited it or cleaned it up or obfuscated details, introduced the problem.
So they sent me another one, and I had the exact same problem. So it wasn't them; it was the file, the way that the file was being prepared. With typical CSV, the actual code that you need to parse it has been around forever, and it's three or four lines. That was not the case with this one.
This one had several deviations. I won't go into all of the technical details here, but the result, to get it to work and be able to parse it, was over 150 lines of code: handling null fields, handling when some of the delimiters aren't there versus when they are.
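For contrast, here's what the normal few-lines case looks like in Python, next to a sketch of the kind of defensive handling a non-conforming export forces on you. The "weird" format below is invented for illustration; it is not the actual vendor's file.

```python
import csv
import io

# The "three or four lines" that normally suffice for a
# well-formed CSV export:
sample = 'id,severity,title\n1,High,"SQL injection, login form"\n2,Low,Weak cipher\n'
rows = list(csv.DictReader(io.StringIO(sample)))
print(rows[0]["title"])  # SQL injection, login form

# A non-conforming export (hypothetical) might use a different
# delimiter and drop trailing fields entirely, forcing per-row
# defensive handling instead of a standard reader:
weird = "id;severity;title\n1;High\n2;;Weak cipher\n"
cols = ["id", "severity", "title"]
fixed = []
for line in weird.splitlines()[1:]:
    parts = line.split(";")
    parts += [""] * (len(cols) - len(parts))  # pad missing (null) fields
    fixed.append(dict(zip(cols, parts)))
print(fixed[0])  # {'id': '1', 'severity': 'High', 'title': ''}
```

Multiply that padding logic by quoting quirks, inconsistent delimiters, and embedded newlines, and 150 lines arrives quickly.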
And what I came to realize, because this was not part of the standard format, is: this feels intentional. It felt like this company was intentionally saying, we support CSV output, but good luck if you try to use it. They support it, they export it, but if you try to import it somewhere else or open it in a spreadsheet, you can't. Why would you do that?
Personally, and this is just my personal opinion, I can't think of a reason why you would do it other than intentionally. I have heard, with my own ears, a vendor say, we work with a lot of vendors. In fact, a lot of vendors even come to DefectDojo saying, our customers are saying they want to integrate so they can compare our results with other tools. And they get it.
They're in security; some of these vendors know that. We have some other vendors who do not. And when I heard that attitude, the very first thing I thought was: you do not understand what you're talking about. That's not your data. That's your user's data, your user's security data, and they're trying to make that data better.
They're trying to cross-compare with other tools so they can have some confidence, 'cause they're never gonna trust you, but at least they have some confidence that they're getting some accuracy. DefectDojo is trying to improve security by giving companies that capability. When a vendor deprioritizes things that serve you and your security, things like standard CVE IDs, participating in open standards, using standardized formats, not tweaking things to make them challenging, that tells you something. Lock-in is something that a lot of vendors try to do, and intend to do, because, again, they want their logo in the middle, and you can't just pull this out 'cause it pulls everything with it.
The old Oracle database approach. That's not healthy for you; that's not a good thing. So if you're only using one scanner, let me just say: try to add another one. Try to add another level of security, another layer of security. Do some comparisons. There's a lot of open source scanners out there you can use.
And if it becomes a bit of an overhead, then sure, you've got open source DefectDojo, and if you want us to help, of course there's the Pro version. If the vendor you're working with treats integrations as a threat, or they seem to drag their feet, or they promise they're gonna do it but they don't,
I think you should look for another vendor, straight up. Vulnerability scanner vendor lock-in: yeah, if you feel like you're locked in and you can't possibly extract yourself from that, you can also use open source DefectDojo to help break that vendor lock-in, because it will give you the ability to see across your tools.
I think of it as: all of a sudden, my scanners just become different sensors, right? I'm gonna aggregate all of that data, and I can de-duplicate between tools if you're in the Pro version. That gives me the ability to use more scanners, but I see it as if it were just one super-scanner that is trying different approaches.
So that's the big takeaway. That's why I said, I gotta do a little webinar on this, do a little pitch on this.
Any questions or comments? Chris? Otherwise, we'll move on. But that's the pitch: you cannot trust any scanner, so you should be using multiple scanners. It's not necessarily a pitch to use DefectDojo. It's a pitch to improve your security and to understand which vendors have your best interests at heart.
Alright, let's talk about some new features. These are in the Pro MCP server. Again, MCP, if you're late to the game, is a way to abstract APIs for AI large language models (LLMs). We did a webinar this past fall talking about our MCP server. MCP has things like resources.
A resource is a way to define something that you want the LLM to use, say, to do a cross-reference, like against the OWASP Top 10. So maybe I want to create a report based off of the OWASP Top 10 that does a mapping. This is part of the prompt where I'm giving it little tips and tricks. I don't really write my own prompts anymore.
I use Claude a lot, but you can use any LLM. Ask the LLM to write your prompt: tell it what you need and some of the detail, and have it write an even more detailed prompt. This is not the entire prompt that I used to create an OWASP Top 10 report against the recently released OWASP Top 10, but it's part of the one that I used.
And Claude is down at this very moment, because of course I'm doing a demo, so I have a backup. This is the OWASP Top 10 Alignment report that it produced from that prompt. This is against a demo environment with really terrible data, and so it basically tells me my data is bad.
It tells me I'm missing a lot of CWEs out of the active findings and all this, but it does give me a way to show this coverage heat map of different OWASP categories and where maybe I've got some gaps. I've got a lot of gaps with CWE zero, 'cause it's not very good demo data. But that's also what it told me.
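A coverage heat map like that boils down to bucketing findings by the OWASP category their CWE maps to. Here's a minimal sketch using a small hand-picked slice of the published OWASP Top 10 (2021) CWE mapping; the real mapping is much larger, and this is not the actual report logic.

```python
from collections import Counter

# Partial, illustrative CWE -> OWASP Top 10 (2021) mapping; the
# full mapping is published by OWASP and covers far more CWEs.
CWE_TO_OWASP = {
    22: "A01:2021 Broken Access Control",
    79: "A03:2021 Injection",
    89: "A03:2021 Injection",
    287: "A07:2021 Identification and Authentication Failures",
    502: "A08:2021 Software and Data Integrity Failures",
}

def heat_map(findings):
    """Count findings per OWASP category. Findings with no
    usable CWE (like the CWE-zero demo data) land in an
    'unmapped' bucket -- exactly the gap the report called out."""
    counts = Counter()
    for f in findings:
        counts[CWE_TO_OWASP.get(f.get("cwe"), "unmapped (no usable CWE)")] += 1
    return counts

findings = [{"cwe": 89}, {"cwe": 79}, {"cwe": 0}, {"cwe": None}]
print(heat_map(findings))
```

The "unmapped" bucket is why the report says better CWE data would produce a better report: anything without a CWE can't be placed in a category at all.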
It immediately points me to: hey, if you had this, I could do an even better report for you. So that's an example of using the OWASP Top 10 resource. An additional resource we've added is the Agentic AI Applications Top 10, which was just released, I believe in January, by OWASP.
And again, this is the main portion of the data collection steps; I also have prompts in there for what I want the report to look like. I add the DefectDojo logo, which is now supported. Previously you couldn't just say "and add this logo to it," but now you can.
So here is the AI-generated report. Each one of these reports took about 10 to 15 minutes, because it's churning through a ton of data. In fact, a couple of them even required me to continue the conversation. So: Agentic AI Applications Top 10, total findings, products, et cetera.
Terrible demo data, but we'll fix that in the future. A readiness scorecard, a lot of things. So if you've got teams that are starting to use these AI tools, this might give you an easy way to create a report to say, hey, where's our security at around agentic AI? And then the third is for our friends in the EU: the Cyber Resilience Act. This will help you create a readiness report. A really interesting thing about tracking vulnerabilities: remediation of vulnerabilities usually flows through the same software development lifecycle as your feature requests and your new enhancements.
You gotta have people who are writing code, building code, changing code, et cetera. So you actually get a lot of visibility into your SDLC just from what you're seeing from defect remediation; it goes through the same pathways, right? So, the EU Cyber Resilience Act: a fairly lengthy prompt there. We can provide these in the future if you're really interested.
We may even add them to the DefectDojo MCP page, 'cause we do have some example prompts there. Nonetheless, let's take a look at the EU Cyber Resilience Act compliance readiness assessment. I'm partially ready. You can see you've got some finding age, some of the things that the CRA is looking for, as far as priority actions: your KEV and ransomware findings, the things the CRA is looking for. So it gives you a scorecard of all the requirements: which things you're compliant on, non-compliant on, things that were not assessed. And it gives you a way to map out what we need to do and what our priorities are, so that you're ready to be compliant with the CRA.
All right. Now, with nine minutes to spare, I think I can fit in something new. This is a little sneak peek. Everything you see here is in beta, so everything here could change. This is the new Product Security Incident Response Team (PSIRT) advisory engine.
If you're not familiar with PSIRT, as it's known, PSIRT is essentially all the advisories that you don't necessarily see from scanners, right? You're used to seeing CVEs or EUVDs, from the likes of CISA and NIST. We're really familiar with those types of vulnerability advisories.
But there are a lot of other advisories that come from OS providers or database providers, or just product vendors, network devices, all these different manufacturers. They often have their own feeds of things that they have found. Sometimes those things refer back to CVEs and things like that.
But for the most part, these may come in through RSS feeds. The websites that house this information will also often have a base, high-level description of what the issue is, but then there's another page that has all of the details, right?
So it can be difficult to even bring that stuff in, and then try to figure out what's in your environment that's affected, and then how do I communicate to the people who need to fix that thing, right? It's: I need a scanner that can do all of these things without scanning the actual artifacts.
I need a way to identify what a group in my company may be using, on their Windows machines or whatnot, and be able to identify advisories that might apply to them, even if I can't run a scan on their local environment. That is really the challenge that we're trying to solve with the PSIRT advisory engine.
We wanna be able to ingest from lots of different sources, like all these RSS feeds, and match them against things that I know I have, possibly against an SBOM. The majority of the time, though, I'm not gonna have an SBOM for a lot of the things that I'm trying to secure and make sure we're not vulnerable in.
So it needs to work both with an SBOM and in environments where I don't have an SBOM and I just need to represent these kinds of assets. Then I need to prioritize them so that I can figure out how to group them, publish them to the right places, and then track the SLAs on remediating these things, or on making sure that we're not gonna be impacted by them.
So that's quite a bit, and that's what we're building. These are the current feed sources that we have; I do expect we're gonna add a number more. A lot of these, like you see down here, are RSS enhancers. Again, that's for when you have a single page that has a summary and you gotta click on each one to get to the details.
That's what these enhancers do: they go grab that extra data and detail. Then there are the feed matching rules, and matching those to the asset matching rules, regardless of whether that asset has an SBOM or not. So I can say, ah, I think I need to let some people know about this thing, and then be able to send it to them and export it into DefectDojo Pro.
So then you can track the remediation of it, just like you're tracking remediation for anything else a normal scanner might provide. What does that look like? Again, this is all a little bit beta, but you're seeing a real, live demo; I'm running this locally. You can see we've got all these feed sources.
I can go to these feed sources and see detail if I need to, but really, the feed sources come in and we do the asset matching. I can have ways of matching keywords or regex, so that I can impact that asset by adding points or starring it. Feed filters do the same thing on the feed side.
I want to be able to star things, raise the points, tag things, elevate the visibility, because I've got thousands of these things and I need to sift out all that noise, right? Once we have those feeds coming in, you can see here we've got lots of different feeds that come in from various sources.
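The keyword/regex points-and-starring idea can be sketched roughly like this. The rule shapes and field names are hypothetical, not the engine's actual schema; it just illustrates how scoring lets you sift signal from thousands of feed items.

```python
import re

# Hypothetical rule shapes: each rule is a keyword or a regex,
# adding points (and optionally a star) when it matches an item.
ASSET_RULES = [
    {"keyword": "openssl", "points": 50},
    {"regex": r"windows\s+server", "points": 30, "star": True},
]

def score_item(item, rules):
    """Score one feed item against the matching rules.
    Higher points = more relevant; starred = flagged for review."""
    text = f'{item["title"]} {item["summary"]}'.lower()
    points, starred = 0, False
    for rule in rules:
        if "keyword" in rule:
            hit = rule["keyword"] in text
        else:
            hit = re.search(rule["regex"], text) is not None
        if hit:
            points += rule.get("points", 0)
            starred = starred or rule.get("star", False)
    return points, starred

item = {"title": "Privilege escalation in Windows Server SMB",
        "summary": "Patch available from the vendor."}
print(score_item(item, ASSET_RULES))  # (30, True)
```

Sorting items by points, stars first, is then enough to push the plausible matches to the top and let the noise sink.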
Once I have those feeds in, I can begin to sort, and I can begin to see how many matches I have for various products. I can confirm or false-positive those matches, so that maybe I can also ignore those things. Then I can add these things to a case. A case is where I start to organize these things into different groups, by things like maybe CVEs.
I may be sorting these things by components or by group; there are all different ways I may want to organize them. When I organize those findings, I can add them to an advisory, and that advisory is what I'm going to send to the various groups within my organization. I can have an approval status, and once I publish it, the advisory gets locked.
So we can have advisories that include all this information. And then once I publish to DefectDojo, you can see right here that the PSIRT advisory comes in. You can see I've got a couple of different tests here; the test is essentially like the case, and then the findings, the feed items, turn into findings in DefectDojo.
Then this goes to that group, or the group can use DefectDojo to track these. You get things like the mitigation and policy references; all the detail is in there, so you can immediately get everything you need to make sure that you're not vulnerable to that particular issue. So that's what's coming with PSIRT, and that's a really quick, fast beta demo.
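The advisory-to-findings export step could look roughly like the sketch below. DefectDojo's v2 REST API does have an import-scan endpoint and a "Generic Findings Import" scan type, but the advisory structure, URL, engagement id, and helper function here are all made up for illustration; this is not the PSIRT engine's actual code.

```python
import json

# Hypothetical published advisory, with feed items as findings.
advisory = {
    "title": "PSIRT-2025-0042: Vendor appliance RCE",
    "findings": [
        {"title": "RCE in management interface", "severity": "Critical",
         "description": "See vendor advisory.", "mitigation": "Upgrade to 4.2.1"},
    ],
}

def to_generic_import(adv):
    """Convert advisory feed items into the generic-findings JSON
    shape, one DefectDojo finding per feed item."""
    return {"findings": [
        {"title": f["title"], "severity": f["severity"],
         "description": f["description"], "mitigation": f.get("mitigation", "")}
        for f in adv["findings"]]}

payload = to_generic_import(advisory)
print(json.dumps(payload, indent=2))

# Posting would then look roughly like this (not run here; it
# needs a live DefectDojo instance and an API key):
# requests.post("https://dojo.example.com/api/v2/import-scan/",
#               headers={"Authorization": "Token <key>"},
#               data={"scan_type": "Generic Findings Import",
#                     "engagement": 1},
#               files={"file": ("advisory.json", json.dumps(payload))})
```

From there the imported test plays the role of the case, and each feed item becomes a trackable finding with the usual SLA and remediation workflow.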
Questions, comments, concerns.