Aug 20, 202

Kaizen For Your AppSec Program: Turning Big Problems into Small Steps

Transcript

00:12
I should be sharing my screen now. Okay, yes, I'll be talking about Kaizen for your AppSec program. This talk was first delivered at a Global AppSec conference in Barcelona earlier this year, and this is an adapted version that we'll go through here today. But before I get into that, I have a question. I'm not sure if I can see the audience here, but I wonder: do you have a spare room in your house? And I imagine...

00:41
When I ask this in a big room full of people, let's say 60% of people raise their hands. And then the next question is: think about your spare room. Does it look like room number A or does it look like room number B? I have a spare room in my house and it definitely looks more like room number A than room number B. What's going on in room number A is that you have it on your to-do list to clean up that room at some point. You will at some point.

01:11
you know, tidy it up, organize everything and make it more livable. But as you don't have any guests coming right now, it's not for this week, it's not for today. And you might as well put another box of stuff there that you need to get out of the way. And like that, it just keeps on piling up. And as it keeps on piling up, the problem with the room becomes bigger and bigger, and therefore the likelihood of you wanting to take it on becomes smaller and smaller, until maybe your mother-in-law comes over

01:40
for a few days' visit, and then you will have to.

01:46
Another thing that is somewhat similar to this is what I would call surface creep. I don't know if close to your door in your house you have kind of like the top of a cupboard or a kind of table where you keep your car keys and maybe gloves and shawls and glasses and more and more and more and more things. And actually all surfaces in the house have a tendency to get occupied by things if you don't keep them.

02:15
organized and clean. You may be wondering where I'm going with all of this. Well, this was my personal journey of how I got onto Kaizen, but before I get to that, I'd like to introduce myself. My name is Dag Flachet. I'm originally from Belgium, but I've lived in Barcelona for quite a while now. I'm a co-founder of Codific, the company behind the SAMMY tool. And I have a doctorate degree in business administration, specialized in organizational behavior.

02:44
So I'm more of a psychologist by background, but I got into AppSec about 10 years ago, let's say. I also teach sometimes for the Geneva Business School, where I'm also on the board of directors. And I'm active in OWASP SAMM and the OWASP Barcelona chapter. And I'm a messy person. If left to myself, or left without any guiding systems, my house will become very, very messy in no time.

03:13
And so do all of my other things. Which is why I was really interested in some of the principles of Kaizen. I came across them when I was preparing for a course. I was asked to provide an introductory course at a business school on lean methodologies. So I was browsing around and looking at different lean methodologies, and I came across Kaizen, and more specifically the 5S, which I'll talk about a little bit later. And I was really fascinated.

03:42
In Kaizen, like in many of the Japanese lean methodologies, it's not just a methodology, but a crossover between methodology, philosophy and habits, or actually I should say habits, methodology and philosophy. By developing certain habits, you change the way you think about things, and that becomes the incremental change in your

04:11
approach to a problem. So there's a crossover between methodology, habits and philosophy. What does Kaizen really mean? Well, it's Japanese. I don't speak Japanese, and I also don't know if these are actually the right kanji, but somebody who speaks Japanese told me they are. It's the combination of the words change and good. So it is good change, or change for good, literally.

04:39
Like most quality control systems, it's typically circular. It is not a linear improvement process, but something that you keep on repeating. In Kaizen, the six steps are identifying the problem, analyzing the current process, creating the solution, testing the solution, measuring the results, and then standardizing the solution. And then you're back at the beginning. You're back at the start.

05:08
But the thing that really fascinated me was the 5S. The 5S are five activities that you designate a special time slot for. So in my case, I would do this every morning for 15 minutes. Actually, I did it this morning for 15 minutes as well. And that means typically before you start working, you're not allowed to actually sit down and do any work. But during this...

05:37
15 minutes or half an hour (it can be whatever time slot you want), you are only allowed to do one of these five activities. And this is with your workplace in mind. So I started with my home office, with my desk, with the shelves around my desk. And instead of trying to solve everything all at once, like that big room you have there, you identify one box, one shelf, or one thing that you're going to really tackle.

06:07
and resolve. The first step is sort: throw away all the stuff you don't need. Clutter accumulates, so you need to get rid of a lot of things. Then set things in order: assign a logical location for the items. What is on your desktop? Do you really need those things on your desktop? What is on the first shelf? Do you really need those items there? And then shine,

06:34
which is a bit surprising here, because how much can you really shine your desktop or your workplace? But the idea is that the shining is also a meditative, contemplative thing: you don't necessarily know what to do next, but you polish a little bit and think about how you're working. Does the way you're set up actually make sense, or is there any way you can make things more efficient?

07:05
And then you standardize the process and you sustain it: you develop a culture of continuous improvement. So you try to have some systems in place and you do this every day, or every week, or whatever the timeframe is. In my case, it's 15 minutes every day. So that was my solution to the home chaos, and it's still chaotic in some places, but a lot of it is amazingly organized now. I have systems

07:34
for cables, I have systems for all kinds of items, and I actually know where items are now. In the previous house I was living in, I didn't know where anything was.

07:47
Now, how many rooms are there to clean up in our AppSec program? As we work with AppSec programs, my organization is typically the first point of contact when somebody starts working with OWASP SAMM and needs some tooling to scale their SAMM implementation, and I very often get this argument: if I had the right resources to resolve all these problems, there's a lot of things I would do.

08:15
But currently I have to pick my battles, because I have limited resources, and therefore there are certain rooms that I don't enter. I cannot deal with those things right now because there are too many other urgent things I have to deal with. So come to think of it, I see the parallels between the Kaizen methodology and developing a continuous improvement methodology for your AppSec program.

08:44
Another problem that arises is that we're putting out fires all the time. Yes, you would like to iteratively improve your application security program and you would like to do all of these things, but there's always something that's on fire. Something came out that some process is not up to par. Some people didn't do their training. Some threat models weren't done correctly. And you're running after all kinds of activities that you need to do. You solve one thing,

09:13
You go to the next thing, and meanwhile other things start to degrade, and you lose the sense that everything is well under control. So that's where the value of a framework such as OWASP SAMM or OWASP DSOMM really comes in. SAMM stands for Software Assurance Maturity Model, and it is a structured inventory of all the best practices you should be doing, structured as a maturity model

09:41
for each of these 15 practices. There are three levels of maturity, and there are also two streams per practice, so there are 30 streams in total. Basically, it's the catalog of all the things you should be doing, but not just as a binary, did it or didn't do it: it goes from you didn't do it at all, to you have it a little bit, to you have it really well, to you have it deployed at world-class quality. And it has the quality criteria for everything. So with that,

10:11
it allows you to keep track of everything you're doing, and it allows you to identify and quantify those little shelves, boxes or things that you can address one at a time, while keeping an eye on the other things so they don't drift.
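As a rough illustration of how such scores can be tracked, here is a sketch in which each practice has two streams scored from 0 to 3. The data and the simple averaging are made up for the example; they are not the official SAMM scoring rules.

```python
from statistics import mean

def practice_score(stream_a: float, stream_b: float) -> float:
    """Score one practice as the average of its two stream scores (0.0-3.0)."""
    return (stream_a + stream_b) / 2

def overall_score(practices: dict[str, tuple[float, float]]) -> float:
    """Overall maturity as the mean of all practice scores."""
    return mean(practice_score(a, b) for a, b in practices.values())

# Hypothetical assessment of three of the fifteen practices
assessment = {
    "Threat Assessment": (1.0, 2.0),
    "Security Requirements": (1.5, 0.5),
    "Security Testing": (2.0, 1.0),
}
print(round(overall_score(assessment), 2))  # 1.33
```

Tracking per-stream numbers like this is what makes the "little shelves" visible: each stream is one small unit you can improve independently.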

10:28
So what are the advantages of a maturity model? Well, a lot of ISMSs may be built around compliance frameworks like ISO 27001. But if you get compliant with ISO, or you get compliant with SOC 2, then what? Where do you go from there? Whereas with a maturity model, you can really build roadmaps with different stages, depending on the level of maturity and also depending on the risk context of the application and the

10:56
business environment that you are in. So you can really fine-tune the objectives and the investments to the environment that you are in. The other thing which is really good is that this is finite: it looks quite big, but in the end it's not that big. It gives a common vocabulary and a common taxonomy to the people in the organization, a map of where we are, what activities we

11:26
are supposed to be doing, and what the quality criteria of having done something properly are. That way, if different teams are self-assessing how well they're doing on application security, they come out with comparable answers, instead of a subjective narrative saying we're doing very well because we did a lot of threat models or we have security champions in place and therefore things are good.

11:57
And as I mentioned before, we can break things down into small steps and that makes it really good for something like an iterative approach like in Kaizen.

12:09
And it is the map of the situation, the map of everything that's going on, where we are and where we are going. Another advantage here is that all the teams have visibility on the whole map. They don't need to know everything on the map, but you can show them the scoring and how the scoring of the team, business unit or whatever the scope is, is improving over time. And this creates more buy-in for the different requirements being given to them at different times.

12:39
If not, if they have no visibility on that, and you're the security person coming to the team and saying, ah, you need to do A, B and C in order to have good security, they hopefully will do A, B and C. And then next quarter you come back: now you need to do D, E and F. But you just told me I have to do A, B and C, and I did A, B and C. Now we have to do D, E and F. Like, how many points are there? Are we just going to keep making stuff up? With a map like OWASP SAMM, you have a good picture of everything we wish

13:09
we could be doing, should be doing, and where we are on that map and where we're going. So another nice thing you can do with it is iterate at the stream level. A stream is basically one activity on this map. There are 30 streams in OWASP SAMM, but you could just as well be using OWASP DSOMM, which is a little bit more granular for the SDLC. Basically what you do is...

13:37
Each one of these streams gets its own lifecycle, and each one gets its own improvement roadmap. You can see this flow chart also has a circular flow, just like the Kaizen flow chart: you first evaluate, then you validate, where a second person checks whether the self-evaluation was correct (that step is optional). And then you decide whether or not this is good enough. If it's not, you pick an improvement track,

14:06
implement the improvement track, and when it is complete you come back to the beginning and re-evaluate the stream. It's important that you don't have to go through the whole framework in order to do the iterations; the iterations can live by themselves on the different streams. That makes it much more lightweight to improve your posture and make little steps forward without having to go ask the team 90 questions again.
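The per-stream loop described here (evaluate, validate, decide, improve, re-evaluate) can be sketched as a small function. The names and the improvement callback are illustrative only.

```python
def stream_iteration(current: float, target: float, improve) -> float:
    """One Kaizen-style iteration on a single stream.

    current: validated score for this stream (0.0-3.0)
    target:  target posture for this stream
    improve: callable that applies one improvement track and returns
             the re-evaluated score
    """
    if current >= target:       # decide: good enough, leave the stream alone
        return current
    return improve(current)     # implement one improvement track, re-evaluate

score = 1.0
score = stream_iteration(score, 2.0, lambda s: min(s + 0.5, 3.0))
print(score)  # 1.5
```

Because each stream carries its own loop, one stream can iterate many times while others stay untouched, which is exactly what keeps the process lightweight.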

14:37
So what does that look like in terms of the 5S? Well, the sorting is basically the SAMM assessment. You get rid of all the unnecessary fluff and you have an objective set of items and quality criteria that we care about. If it's not in the SAMM model, we don't care about it. If it should be in the SAMM model and it's not, then you should let us know, because there's a version three coming out soon.

15:05
And we're currently collecting information, opinions and insights in order to make the model more complete where things are missing. But it's pretty exhaustive. The nice thing is that it simplifies things and quantifies things, so you're not excessively spending time and energy on fluff. And then one thing you can do in SAMM

15:35
or DSOMM is set target postures, set objectives. And the objectives are not an overall score, but a posture with the different levels, and typically companies will have a target posture for different types of business units or teams. Let's say a low-risk customer-facing web app, a high-risk customer-facing web app, internal IT systems, embedded devices,

16:01
etc. Each one of these has a library of target postures that are the goals the team should be aspiring to. And then whatever you score on a certain stream can also expire. That means that instead of having to do the whole assessment every year or every two years, you only have to reassess if something expired, or if

16:30
you went through an improvement process and have to re-evaluate. This means that the burden on the teams is much lighter. And then the biggest return on investment we see is where companies implement roadmaps and do this recurringly. So just like I spend 15 minutes cleaning up my house or my desk or whatever I'm cleaning up in those 15 minutes, the same way you want to allocate a certain

17:00
amount of resources per month per quarter in order to gain some points in the overall map.
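The expiring-score idea can be sketched as a simple check over the streams, assuming each validated score carries an expiry date. The field names and dates are invented for the example.

```python
from datetime import date

def needs_reassessment(stream: dict, today: date) -> bool:
    """A stream must be re-assessed if its score expired or it was improved."""
    return stream["expires"] <= today or stream.get("improvement_done", False)

streams = [
    {"name": "SR-1", "expires": date(2026, 1, 1)},
    {"name": "ST-2", "expires": date(2024, 1, 1)},                           # expired
    {"name": "TA-1", "expires": date(2026, 6, 1), "improvement_done": True}, # improved
]
due = [s["name"] for s in streams if needs_reassessment(s, date(2025, 8, 20))]
print(due)  # ['ST-2', 'TA-1']
```

Only the streams in `due` need fresh answers from the team; everything else keeps its validated score, which is what makes the recurring cadence affordable.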

17:13
And finally, because everybody has more visibility and more insight into what we're doing and where we're going, this creates bottom-up engagement and creates more buy-in and initiative from teams. Very often we see that after the first assessment, after seeing their posture, teams are like, oh, but I see these activities here and those quality criteria, and actually

17:40
that is something we can do. We have the tools and the know-how in house to actually improve those processes ourselves. We don't necessarily have to wait for somebody from security or somebody from the top to tell us to do this. We can take control of that.

17:57
So this is how it looks in the Kaizen circle, then. You identify the goals, the business goals. Then you do the SAMM assessment, you set the roadmap and the target postures, you implement the improvements, and then you need to measure whether things are working. I'll come back to that a little bit afterwards, but you want to do the SAMM assessment and you also want some more empirical data, not as abstract

18:27
as the SAMM assessment. And then you update the target posture and the documentation, and you start over again in an iteration. So side effects may include greater visibility on the situation and goals. This is bottom-up, but also top-down. Typically the executive board, or whoever the highest person in application security is, the director of application security or whatever the role is,

18:57
needs to report about the state of application security to a person, committee or board that is maybe less well versed in application security and does not want to see overly complex things. They want to see clear metrics of where we are and where we're going, and have visibility on that. As I mentioned, it triggers bottom-up initiatives and it can trigger greater engagement

19:26
and foster participatory leadership. Working like this encourages the different stakeholders to work together in order to decide together what the next thing we can do is, rather than me forcing it down your throat that you need to do A, B and C. We can sit together and see: look, this is the target posture, this is where we eventually need to get to. What are the first steps we can take? Where do you think you can

19:56
quickly improve and make some big strides towards this objective? And on a more philosophical level, for the people involved this gives more agency and more purposeful work. There's nothing worse than having work pushed to you without seeing the big picture and without understanding the why of the work; it's much nicer if you understand the why, you understand where we're going and how you are engaged here.

20:26
But then the big question is: does it really improve security? This is a question we've often seen asked at the end of the track. Let's say we start with the SAMM assessment and look at the average score. The first assessment tends to be around 1.2, and the overall industry benchmark is around 1.4. This is out of three, by the way. So if I'm at 1.2, or 1.3, or 1.5, is that good? Is that bad? And...

20:53
Let's say I go to 1.6 or 1.7. That looks nice, it's gamified a bit, but does it really improve security? Did I really solve anything? And this is where the bottom-up and the top-down approaches need to meet each other. OWASP SAMM is a very abstract, process-oriented approach to everything you're doing, but it doesn't measure well; it's usually interview-based. You

21:22
talk about the processes and the way the processes are implemented, but we don't have good metrics on actual risk and the risk mitigation being created there. This is where bottom-up tools like DefectDojo come in. And I'm going to hand over to Tracy to talk about that. I'm going to stop presenting. Thank you, Dag. Fantastic. We will pick up right there.

21:52
Hi everybody, Tracy Walker. I'm a principal engineer with DefectDojo, and it's exactly what Dag was saying: when you have lots of different tools producing all this data, you're trying to find metrics that tell you how your security program is doing, as well as how your software development lifecycle is doing and how quickly these things are being fixed. That is what DefectDojo is built to do. Most of our users are suffering

22:21
before they start using DefectDojo; there's a lot of pain. Usually you've got lots of different tools performing scans throughout a development lifecycle, and you're probably trying to track all of those things in a spreadsheet or various tools. There's no normalization across all of that data. Different tools have different formats; even severities can differ between different tools. So that is exactly what DefectDojo addresses. Now,

22:50
I'll break this down into a couple of steps and explain DefectDojo as we go. But the most important thing, I think our secret sauce, is handling the challenge of all of these different scans that can happen in different places. We're not just defect-tracking focused, we're vulnerability-management focused, because these are security risks. Those are things we need to have extra attention on, and we also need extra reporting on, because they concern security. So.

23:19
All of the scans that can happen throughout a software development lifecycle, manual pen tests that happen after releases, all of the different ways that we try to find vulnerabilities: that is what DefectDojo covers. We don't just aggregate all of this data together; we actually normalize this data and de-duplicate this data. We've got a couple of different ways of importing all that data, but...

23:46
Primarily the flexibility comes from the ability to import directly from these tools' standard output files. This is true both for the open-source DefectDojo and for DefectDojo Pro, our licensed version. This is not all the tools; this is just all the logos that I could fit on the screen. But the magic happens with our parsers. The parsers are part of the open source as well as the Pro version.

24:11
The parsers allow you to get all of the data from these different tools, regardless of the format. The parsers are tailored to each tool and its standard output, whether that's JSON, CSV or XML; there's a variety of different formats. So we can import those findings, we de-duplicate those findings, and we normalize all the data so that you're seeing the same vulnerability from different tools as a single vulnerability. So that's the number one way

24:40
DefectDojo really helps: it gives you literally a single pane of glass. Yes, I know all the vendors talk about a single pane of glass, but that is exactly what DefectDojo does. It isn't running the scans. It's aggregating all the results across all the different environments, all the different tools and timelines. And yes, it's a single source of truth.
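The normalization and deduplication described here can be sketched roughly as hashing a few normalized key fields per finding. The field choice below is illustrative, not DefectDojo's exact hash-code configuration.

```python
import hashlib

def finding_hash(f: dict) -> str:
    """Stable dedup key from normalized fields (illustrative field choice)."""
    key = "|".join([
        f.get("title", "").strip().lower(),
        str(f.get("cwe", "")),
        f.get("file_path", ""),
    ])
    return hashlib.sha256(key.encode()).hexdigest()

def deduplicate(findings: list[dict]) -> list[dict]:
    """Keep the first finding for each dedup key, drop the rest."""
    seen, unique = set(), []
    for f in findings:
        h = finding_hash(f)
        if h not in seen:
            seen.add(h)
            unique.append(f)
    return unique

raw = [
    {"title": "SQL Injection", "cwe": 89, "file_path": "app/db.py"},
    {"title": "sql injection ", "cwe": 89, "file_path": "app/db.py"},  # same issue, other tool
    {"title": "XSS", "cwe": 79, "file_path": "app/views.py"},
]
print(len(deduplicate(raw)))  # 2
```

Normalizing before hashing (case, whitespace) is what lets two tools reporting the same flaw in slightly different words collapse into one finding.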

25:07
But it also gives you a way to compare tools, and a way to really prioritize and see everything in one place. Organizing that data is key, right? Because that's how your reporting is going to work. And in the case of DefectDojo, all of these automation capabilities, like the deduplication, back that single pane of glass, so users only see the data that they're supposed to see.

25:34
You can actually separate data for different teams and things like that. We have issue-tracking integrations for JIRA and GitLab; I'll demonstrate a little bit of JIRA here. We calculate all the service level agreements automatically, because we can see when a finding was found and when the finding has been fixed (we don't see it in the scans anymore). So DefectDojo can detect and perform auto-triage of whether it has been fixed or not, and calculate SLA

26:03
completion automatically. You will see that here in just a minute. So there's lots of automation, as well as the way we're organizing all of that data so that it's flexible and can work in any environment. Every environment is definitely unique. I just mentioned the integrations: the open source does have an integration with Jira, and we've just added three new integrations for Azure, GitLab and GitHub,

26:31
and more integrations are coming because again, you want to continue using probably the issue tracking system that you're using, but you also still need to have a place where you can see just the security vulnerabilities, things that are not just regular bug fixes and things like that.
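The automatic SLA calculation mentioned above can be sketched as a date difference against a per-severity policy. The day limits below are invented example policy values, not DefectDojo's defaults.

```python
from datetime import date

# Hypothetical remediation policy: maximum days allowed per severity
SLA_DAYS = {"Critical": 7, "High": 30, "Medium": 90, "Low": 180}

def within_sla(severity: str, found: date, mitigated: date) -> bool:
    """Was the finding mitigated within its severity's SLA window?"""
    return (mitigated - found).days <= SLA_DAYS[severity]

print(within_sla("Critical", date(2025, 8, 1), date(2025, 8, 5)))  # True
print(within_sla("High", date(2025, 6, 1), date(2025, 8, 5)))      # False
```

The point of the automation is that both dates come from the scan stream itself: the first scan that reports the finding sets `found`, and the first scan where it no longer appears sets `mitigated`.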

26:48
So let's talk a little bit about Defect Dojo. So I'll just kind of give you a quick tour. You're looking at the executive dashboard, or our main dashboard here. All the tiles up here at the top, you can customize. And again, this is RBAC controlled. So the thinking here is that someone in the security organization could customize these tiles to what your organization needs to focus on.

27:14
everybody who logs into DefectDojo would then see just their data on these tiles. We have a fresh new UI in the new version, and we have lots of different metrics pages: remediation insights; program insights, which highlight all the automation and all of the coverage that your security program has; efficiency increases, for example from the import automation. We want to automate all of this stuff. We want to get rid of all of that manual

27:43
swivel-chair work of having to create findings by hand and things like that. Of special interest is our priority insights. The priority insights allow us to calculate prioritization based on your environment. It's really a blend of information that comes from the findings. Do I have that slide? I do. Here's how we're calculating those scores. Every finding gets a base score.

28:09
And it also gets an EPSS score; DefectDojo Pro updates EPSS. That's your predictive context of whether this vulnerability is being used in attacks: EPSS gives you a predictive score, and if it's being used in attacks out there, it probably is going to hit you as well. Then KEV, the Known Exploited Vulnerabilities catalog: whether it has been exploited or not.

28:36
Then endpoints. So all of these come from your scans (the blue dots), along with the severity. And then you can tailor your business context: your criticality, whether user records are involved, revenue. You're able to tailor these components, and they all contribute to that score. They'll amplify that risk, and that gives you a way to look at your risk and severity, so

29:03
That's what comes up as the urgent risk. So this is tailored and customized to your environment, which is one of the key elements of SAMM: being able to prioritize things based on the business context or the exploitability of those findings. And you can see the prioritization down here, now calculated. These are the ones we want to focus on first. So that's the prioritization there.
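As a toy illustration of blending those signals (base severity, EPSS, KEV, business context) into one number: the weights and formula here are invented for the sketch and are not DefectDojo Pro's actual calculation.

```python
def priority(base: float, epss: float, kev: bool, criticality: float) -> float:
    """Toy priority score.

    base:        0-10 severity score
    epss:        0-1 exploit prediction probability
    kev:         True if listed in the Known Exploited Vulnerabilities catalog
    criticality: business-context multiplier (>= 1)
    """
    score = base * (1 + epss)      # amplify by exploit prediction
    if kev:
        score *= 1.5               # known-exploited gets a further boost
    return round(min(score * criticality, 100.0), 1)

# A critical finding that is actively exploited on a high-value app
print(priority(base=9.8, epss=0.97, kev=True, criticality=2.0))  # 57.9
```

The shape matters more than the numbers: scan-derived signals set the baseline, and business context multiplies it, so the same CVE can rank very differently on two applications.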

29:32
As I mentioned, getting data into Defect Dojo is really one of the keys because there's so many different formats and tools and automation, all that. So we make that really easy. We've got a universal importer which runs on the command line and you can push findings. We have API connectors which allow you to connect directly to tools and pull the findings. So there's just lots of ways. We have a universal parser which allows you to customize.

30:02
Let me give you an example of this. Maybe I don't like one of the parsers, or maybe, of the 200-plus parsers that we have, we don't have my tool's parser. So you can create a custom parser for any CSV, XML or JSON file, and do a custom mapping as well. So I can map my title,

30:29
severity, I'll just grab the first three here, and maybe description, maybe I grab some extra fields, and then it'll show you how it would parse that file. So again, any CSV, JSON or XML: I can save that and then use it for importing. So there are lots of different ways of getting the data into DefectDojo. Let's take a look at what that looks like in real time. So I have prepared a little demonstration.
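The custom mapping idea can be sketched for a CSV input like this; the column names and the target fields are hypothetical.

```python
import csv
import io

# Hypothetical mapping: my tool's column names -> normalized finding fields
FIELD_MAP = {"Issue": "title", "Level": "severity", "Details": "description"}

def parse_custom_csv(text: str) -> list[dict]:
    """Turn each CSV row into a finding dict using the field mapping."""
    reader = csv.DictReader(io.StringIO(text))
    return [{dst: row[src] for src, dst in FIELD_MAP.items()} for row in reader]

sample = "Issue,Level,Details\nSQL Injection,High,user input reaches query\n"
print(parse_custom_csv(sample))
# [{'title': 'SQL Injection', 'severity': 'High', 'description': 'user input reaches query'}]
```

Once every tool's output is mapped onto the same field names, the downstream deduplication and SLA logic can treat all findings uniformly.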

30:58
You can see here we are in an engagement, and I have a number of different tools that I'm using to scan. We don't trust any single scanning tool; no scanning tool is perfect, and we do recommend using multiple scanning tools. A lot of government agencies, like the United States Department of Defense, use three different scanners to scan every container, because they want to see what different scanners say about the same container.

31:27
So that's the beauty of DefectDojo. I can run lots of different scans, they get normalized and deduplicated, and then you're only looking at the findings that you actually need to fix. I just did this import this morning. You can see we did a single import where six findings were created. And you'll also notice we've already deduplicated: we already have one finding that has been found in another part of this product.

31:55
What I'm going to do is kind of explain how this workflow would happen automatically. Let's say I'm in security and let's say I have a number of engineering teams working on various development products and none of them like me. Every time I come into a meeting, I add time to their schedule or I'm asking them to fix something that they hadn't planned for. So they don't answer my emails very often. They don't give me pizza for the two pizza parties. So what I've done is I've agreed with these teams.

32:24
I'm going to do all the work. I'm going to assign you a vulnerability in JIRA. All I need you to do is fix it. You don't have to update the JIRA ticket, you don't have to send me an email, because DefectDojo is going to automatically detect when these things are fixed and will update the JIRA tickets for me. So let's take a look at what that looks like. I'm going to assign this critical. You see this Apache Log4j finding; I'm just going to assign it.

32:52
You can see I've verified it. We can also create all the tickets automatically, if you want to flood your engineers with tickets. I'm going to push this one to JIRA. It has been pushed. I don't see the JIRA issue yet; let me do a refresh. And there's my JIRA ticket. So this has been sent to JIRA.

33:14
It's gone to that team's JIRA instance and you can see it's in their backlog, with all the data that I need to provide to them: where to find it in DefectDojo, all the details from the CVE, references, all of these things. So my work here is done. I've assigned that finding, and all I need to do is wait until those engineers fix it. So let me find my

33:42
handy-dandy scan tool here. I'm going to run a new scan. Let's say I go to lunch, I come back from lunch, and I get a notification that I have received a new scan update. I'm actually getting this in my Slack, so you can see it right here. I click on that and we have a reimport. A reimport means that DefectDojo is going to differentiate between the last scan results and this scan's results. And you can see

34:10
five findings with no changes, but one has been closed. So if I scroll down, my critical is now inactive. It's been mitigated. Somebody fixed it: they checked it in, ran a scan against that container, I got the scan results, and it has been verified as fixed, which is what I would normally do anyway. I used to do that manually. So now the JIRA issue has also been closed.
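A simplified sketch of what a reimport computes: diffing the previous findings against the new scan by dedup key to classify closed, new, reactivated and unchanged findings. The classification logic is illustrative.

```python
def diff_scan(previous: dict[str, bool], new_keys: set[str]) -> dict[str, list[str]]:
    """previous maps finding key -> active flag; new_keys are keys seen in the new scan."""
    result = {"new": [], "closed": [], "reactivated": [], "unchanged": []}
    for key in new_keys:
        if key not in previous:
            result["new"].append(key)
        elif previous[key]:
            result["unchanged"].append(key)
        else:
            result["reactivated"].append(key)   # was mitigated, has come back
    # Active findings that vanished from the new scan are considered closed
    result["closed"] = [k for k, active in previous.items()
                        if active and k not in new_keys]
    return result

prev = {"log4j-rce": False, "xss-form": True, "weak-tls": True}
out = diff_scan(prev, {"log4j-rce", "xss-form"})
print(out["reactivated"], out["closed"])  # ['log4j-rce'] ['weak-tls']
```

This diff is what drives the downstream automation: closed findings can close their tickets, and reactivated ones can reopen them, without anyone updating status by hand.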

34:37
So that was done by DefectDojo. That entire workflow that I kicked off by just assigning the ticket has been automated all the way through. So I'm feeling pretty good about myself. Let's say I come back to work the next day. I log in and I see that I have a new notification. Very timely. I click on that notification and I have another scan. Okay, cool. All good.

35:06
They've closed two more findings. So we've detected that, but one has been reactivated. Uh-oh. I come back down. My critical is now active again. So perhaps somebody used the wrong base image when they were fixing these other issues, or maybe they used an older library. Regardless, they have reintroduced that finding. Now you can see here we've mitigated this medium and we've mitigated this high. And also notice they're

35:34
mitigated here, so the SLA on those has been set. Also notice the SLA on my critical is not set anymore. It was set, but that's because this one has been reintroduced, so we're going to need to reopen that ticket, because it needs to be fixed again. So if I go back to that JIRA ticket... there we go, it's already been reopened. So even if something gets reintroduced, DefectDojo can automate reopening that finding.

36:03
Let's do one more import. Again, these imports keep coming; scans are happening every day, every week. So every time a scan happens, cue the scan, I click over to DefectDojo. You can see we did another reimport. Two more were closed. I scroll down. My critical has been mitigated again, which means we should have closed that ticket, which we have done again. And we can go around and around on that, right?

36:33
That JIRA integration, and all of these integrations, can be bi-directional, which means that if somebody closed the JIRA ticket and then we discovered that it was not really fixed, we would just reopen that ticket. And you can see here the SLA and the mitigation. So you have all of that critical data that you need to run metrics and to populate all of these things, like remediation insights.

37:00
How quickly are we remediating things? The mitigated within SLA calculation. So all of these dashboards, all of that data is coming from this automation for lots of scans, lots of products across everything. Let's see, I think I had one more thing to kind of introduce that we're really excited about because yes, there's custom reporting, there's dashboards, but sometimes you really need a deep insight.
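To make the "mitigated within SLA" idea concrete, here is a small illustrative sketch of those two metrics. The `found_on`/`mitigated_on` field names are invented for the example and are not Defect Dojo's exact schema.

```python
# Illustrative sketch of the two dashboard metrics mentioned above:
# mean time to remediation, and the share of findings mitigated
# within SLA.
from datetime import date

def mttr_days(findings):
    """Mean time to remediation, in days, over mitigated findings."""
    closed = [f for f in findings if f.get("mitigated_on")]
    if not closed:
        return None
    total = sum((f["mitigated_on"] - f["found_on"]).days for f in closed)
    return total / len(closed)

def within_sla_rate(findings, sla_days=30):
    """Fraction of mitigated findings closed within the SLA window."""
    closed = [f for f in findings if f.get("mitigated_on")]
    if not closed:
        return None
    on_time = sum(
        1 for f in closed
        if (f["mitigated_on"] - f["found_on"]).days <= sla_days
    )
    return on_time / len(closed)
```

In practice these numbers would be computed over all products and scans, which is exactly why the consistent mitigation and SLA data from the automated reimports matters.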

37:27
and we have just recently announced support for an MCP server for Defect Dojo. If you're not familiar with MCP, you're probably familiar with all the AI tools like ChatGPT, Claude, all the LLMs, DeepSeek, these large language models. So you kind of have two choices. You can build AI into the product, or you can have an MCP server and expose something like Defect Dojo to your private LLM.

37:55
And what that means is that the LLM can actually see the data in Defect Dojo and do analysis on it. Now, why would you want to do this? Why not just connect all of your scanning tools to your ChatGPT or DeepSeek or Claude? Well, you're asking that large language model to then do the normalization, to do the analysis across a lot of data that does not match. So kind of think of it as dirty data. Duplicate findings,

38:25
inconsistent formats, missing context, all of these issues. So the LLM is going to have to deal with all those issues, and it's going to have to use a ton of compute power to do it. And your results may vary greatly. But if you're using the Defect Dojo MCP server, that's clean data that's already been normalized, already been deduplicated. We can augment that data with EPSS, KEV,

38:53
different sources. You can update that data using the API, or automatically, as we do for EPSS and KEV. And that means that you're really seeing, you know, to use the phrase apples to apples, everything's an apple. Now I can give my LLM clean data to analyze, and it can give me some very interesting analysis. For example, this is just one example prompt that we were using in testing.
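The enrichment step described here can be pictured as a simple join of deduplicated findings with EPSS scores keyed by CVE id. This sketch uses invented field names, not the actual Defect Dojo data model.

```python
# Hypothetical sketch of augmenting normalized findings with EPSS
# exploit-probability scores, so the LLM sees consistent, enriched
# "apples to apples" data.

def enrich_with_epss(findings, epss_scores):
    """Attach an EPSS score to each finding by its CVE id."""
    for f in findings:
        # Default to 0.0 when no EPSS score is published for the CVE.
        f["epss"] = epss_scores.get(f.get("cve"), 0.0)
    return findings
```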

39:21
You can see here, my CISO wants to evaluate. So this is a Claude prompt. My CISO wants to evaluate the effectiveness of our current tools. I want to see false positive rates and mean time to remediation. I want to see vulnerability patterns, developer team performance comparison, recommendations for tools that I should use, training gaps. What things should I be training my teams on? How about a cost analysis of the tools that I'm using today?

39:50
This is just one prompt that I'm giving Claude and here is the result that that thing gave me. So here's the actual report that that produced. Now this is on fake data, which is interesting. One of our environments has a lot of Mario Kart and Donkey Kong and a lot of video game data. And it gave us specific recommendations on tools to use for gaming environments because it was gaming data. It just interpreted that. So my executive summary.

40:20
All my false positives, my current state analysis. This is all one response from that. My critical findings, things to work on. Cost-benefit analysis of using different tools. Maybe you want to consider using some of the open source free tools. Cost savings if you reduce some of those tools. An implementation roadmap. Success metrics, recommended tool stack.

40:47
training recommendations for your developers. And at the very bottom, I think, yes, we got some risk mitigation, a conclusion, and then your expected outcome. That's from one prompt. So obviously, it's pretty exciting what you're going to be able to do with AI and LLMs, but it's even more exciting when you're able to prepare that data by doing all the things that Defect Dojo is already doing.

41:15
So this just empowers all the work that you're doing to aggregate all those results into one place and to show those results off with additional tools.

41:27
I think that is just about everything I was hoping to show today.

41:34
So I'm maybe turning this back over to Dak. Any questions so far?

41:43
Alright, Tracy, thank you very much. I love how everything comes together in one place, gets deduplicated and aggregated, so we can keep track of everything. That brings to mind...

42:02
That got me remembering another methodology that the Japanese like to use, one that originated at Toyota, which is to ask why five times. This started at Toyota's production facilities, where they were looking at defects, and they don't want to have defects. Or if there are defects, they want to catch them as early as possible, which is kind of the same as the shift-left approach we are doing here. So if there's a dent in the car or there's a misalignment of panels, why did it happen? Well, because the machine misaligned it.

42:31
Why did the machine misalign it? Is it a problem with the machine? Is it a problem with calibration? Is it a problem with physical spacing, with training, whatever it may be? So if you ask why five times, then you finally get to the root cause of the problem. So we could do the same thing here, and it's a little bit what your LLM was getting at. It's like, okay, which are the really fundamental things we could change here in order to prevent these things from happening? Okay. So you could ask, why did I have a critical vulnerability?

43:01
It was a Log4j vulnerability that was in the Apache server. Why am I getting that? Is it because I don't have a good inventory of all the components I have? Is it because I don't have good SCA scanners? Is it because I don't have an approved set of components that developers are allowed to use? And so if you keep asking the questions, eventually you'll come back up to

43:31
the sound practices and the things you need to invest in. A little bit like your LLM was also suggesting: what are the underlying trainings or the underlying process changes, tool changes, that may actually lead to significant improvement here? But the nice thing is that it also goes the other direction. If you did invest in new tooling and you did invest in new training and so on, you could measure how

44:00
many fewer critical vulnerabilities, how many fewer risk-exposing incidents have happened, proportionally to whatever is relevant. And thereby you can demonstrate ROI on the investment you're making at a process level. This is something that we've seen happen in very large corporations. There are some internal metrics for those corporations, but I'm not allowed to share them.
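One way to sketch that ROI measurement: normalize critical findings by some activity measure (deployments, releases) before and after a process change, and report the proportional reduction. The function and field names here are purely illustrative.

```python
# Illustrative sketch of a process-level ROI signal: fewer critical
# findings per deployment after an investment in tooling or training.

def critical_rate(findings, deployments):
    """Critical findings per deployment, normalizing by activity."""
    if deployments == 0:
        return 0.0
    criticals = sum(1 for f in findings if f["severity"] == "Critical")
    return criticals / deployments

def proportional_reduction(before_rate, after_rate):
    """Fractional drop in the critical rate after a process change."""
    if before_rate == 0:
        return 0.0
    return (before_rate - after_rate) / before_rate
```

The point is only that the comparison is proportional: raw counts alone would penalize a team that simply ships more often.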

44:30
But if anybody in the audience would like to do a kind of academic exercise around this and demonstrate how different changes in processes actually affect the vulnerabilities that show up down the line, then we'd be very happy to do that. Please reach out to me. So to come back to the conclusion: with the Kaizen techniques, we turn big problems into small steps and we make an iterative improvement process

45:00
in order to grow forward. We can systematically iterate towards better security, and we rely on vulnerability measurements to see impact and to demonstrate return on investment of those improvements in processes. That was our talk for today. Thank you very much for joining. I saw there were some questions that went into the chat. I don't know if, Chris, you want to...

45:29
Go over them. I've read them actually, so I could jump into them if you like.