Jul 17, 2024

July Office Hours: Understanding Import and Re-Import

Transcript

00:09
So yes, we are understanding import and re-import, the two major ways of getting data into DefectDojo.

00:20
So let's start with the big picture. The big picture is we want to get you to the place where DefectDojo is sort of the single source of truth for your AppSec program. You're funneling a bunch of your tools into there. You're then taking those results, after being normalized and deduped and otherwise made better in DefectDojo, and shipping them to downstream systems such as reporting, or to JIRA to get those issues remediated. So in this particular talk, what we're really going to do is

00:50
Shift left, wink, wink. We're gonna talk about the left side of this chart. So we're gonna talk about how we get data into DefectDojo, what your choices are, what some of the options are, all those things that hopefully make your life slightly better.

01:06
And so quickly, I just want to talk a little bit about the data model to sort of level set for this conversation. So product types have products, products have engagements, and then under engagements are tests. And you may say, what's this engagement thing here? I really just care about scanning, and that's a test. Well, the reason we have an engagement here is it allows you to collect a bunch of tests together. There's two different types: there's interactive and CI/CD.

01:34
There's just different metadata stored for both of those. But so why engagement? Well, I think the big thing for engagement is it allows you to take multiple scanner outputs and stick them together, particularly for reporting purposes. So if I want to run the 12 tools that I have and have one report come out of that, I can do that, as opposed to running 12 tools and having 12 reports come out of DefectDojo. So that's why

02:02
engagements exist in the data model.

02:07
And then why import and reimport, right? Why do we have these two things that sound like they're the same, but they're not? So import was what we originally created in DefectDojo in the very, very early versions of it. And the idea is that an import is a point-in-time assessment. When we first did DefectDojo back at Rackspace, these were generally manual assessments that we were augmenting with some tool.

02:31
And so in an import, I can have one engagement with one or more tests. Like I said, in the early days, this was a pen test plus several other tools to sort of augment whatever was found in that manual pen test. And it gives you sort of a picture in time, or a snapshot, if you will, of what was found when that engagement happened. Reimport was something that came along a little bit later in DefectDojo's history. And the idea here is that I'm doing recurring scans.

02:58
And the scope of those scans isn't changing. So every week I'm scanning my entire network infrastructure, or every month, or every release I'm doing a scan, a SAST scan of my code base, or whatever it is that I'm doing. I'm doing a container scan on every deploy. Doesn't really matter from a DefectDojo perspective what tool it is, but the real big important thing for reimport is the scope doesn't change. So I'm doing the same thing over and over. That lets me put results into DefectDojo,

03:28
usually from things like CI/CD runs or calendar-based, like I said, recurring scans. You have one engagement and the same test used over and over for reimport. That's kind of the key difference: when you reimport, you're literally importing again into the same test. That's where DefectDojo comes in and does the diffs. But for reporting purposes, which is kind of why I took this tangent on engagements, import or reimport doesn't matter, right? Whatever

03:57
you choose, those will be summed up at the engagement level and you can do reporting there, or at the product level or any of the other levels that DefectDojo does reporting at. So in terms of reporting, import versus reimport isn't really that important. It's more about the type of scans and how you want to use that data going forward.

04:18
So let's talk about some other considerations around getting data into DefectDojo. So deduplication: this works the same for reimports and imports, doesn't matter, pick whichever one. It does need to be enabled in DefectDojo, so obviously if you don't have deduplication turned on, DefectDojo will dutifully not dedupe your findings for you, so you likely want that turned on. And it doesn't matter for reimport and import between same-tool and cross-tool deduplication.

04:48
Those are both the same. Then diffing of scans, which is only for reimport. So the idea with reimport is I do one scan at one point in time, then I do the same scoped scan at a second point in time. DefectDojo will take a diff of those two scans and only leave active those findings that exist in both. Like I mentioned, your scope can't change here. And what this allows you to do is have a continually updated view of what those vulnerabilities are.
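[Editor's note: a rough conceptual sketch of that diff idea, not DefectDojo's actual implementation. It treats each finding as an identity key (DefectDojo uses mechanisms like hash codes for matching); the exact matching logic in the real code base is more involved.]

```python
# Conceptual sketch of the re-import diff: findings missing from the new
# scan get closed, new findings get opened, matches stay active.

def diff_scans(previous, current):
    """previous/current: sets of finding identity keys (e.g. hash codes)."""
    return {
        "still_active": previous & current,  # present in both scans
        "closed": previous - current,        # gone from the new scan -> mitigated
        "new": current - previous,           # newly introduced -> opened as active
    }

old_scan = {"sqli-login", "xss-search", "weak-tls"}
new_scan = {"xss-search", "weak-tls", "open-redirect"}

result = diff_scans(old_scan, new_scan)
# "sqli-login" is closed, "open-redirect" is opened, the other two stay active
```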

05:17
That's really the key difference between reimport and deduplication: deduplication works across anything, while reimport gives you that updated view of, right now, as of the last time we looked at this thing, these are the active open issues with it. And then false positive history, one of the last considerations. Another thing you have to enable in DefectDojo, but

05:45
Just like deduplication, it works for both imports and reimports, it doesn't matter which. So you don't have to give up any of these smart features if you choose import or reimport.

05:58
Okay, let's talk about data ingestion, getting data into DefectDojo.

06:04
So the first, the traditional way, the historical way, the initial way was to do... Matt, I'm sorry, I'm gonna just interrupt. There was a question, there was a hand up. Yeah. So before we move on, Alex, I'm gonna let you talk. There you go.

06:25
If you just unmute now you have to unmute yourself.

06:39
I can't hear you.

06:49
Hmm. I don't know, he still looks muted. Oh, well, we'll just continue and maybe you can put it in the chat.

06:58
Yeah, I'm happy to take him to chat or the Q and A or whatever, whatever works for you.

07:07
Okay, so just keep going and I'll, if I get something, I'll let you know. Okay. Yeah, no worries. I will keep trucking. So we're going to talk about data ingestion. So like I was mentioning earlier, a minute ago, file uploads or parsers, right? Things that take a file from a scanner and parse them into DefectDojo findings. This was what we started with back in the original days of DefectDojo. We currently have around 180.

07:32
We keep adding them all the time. So this number is subject to change. That's probably incorrect because we have more than last time I counted. But this was an easy ramp. This is an easy ramp: if you do want to contribute to DefectDojo, the parsers are fairly straightforward. They're an isolated piece of code. There's a couple of things you have to do when you write a parser, but it's not too bad, which is probably one of the reasons why we've got so many parsers in DefectDojo. And we include...

07:58
unit tests in those parsers to help detect drift, if there is future drift in terms of changes to the formats that tools output. Because of course they're vendor tools, and some of them are open source tools, but tools over time will change the output that they produce. So we have to deal with that, and this is the way that we deal with it: a unit test to at least catch those changes as quickly as possible.

08:23
And then there's the API, right? Instead of doing the normal log into a thing, click with your mouse and upload a file, you can also use the API. There are endpoints for import and re-import, and both of those just use the parsers under the covers, because you're basically using the API to ship a file to DefectDojo. There's a whole bunch of API beyond just import and re-import, but for the scope of this talk, I'm really just talking about those two endpoints,

08:49
but any kind of action we have in DefectDojo, you can do through the API. So we're very automation friendly. And then if for some reason you're using DefectDojo and you're interacting with the API, this is just a pro tip: if you don't have access to the logs, there is a system setting, API expose error details, that instead of putting those details into log files, which was the historical way we did this,

09:15
You can also have those returned to your client that's calling the API to help debug issues. Otherwise, some of the errors would literally say, go look at the logs. And if you can't see the logs, you can't look at the errors. So we added that functionality to give people who don't happen to be able to look at the logs, the capacity to see what's going wrong if they're sort of learning the API. Because inevitably, you'll make several requests before you kind of figure out exactly how that API works.
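[Editor's note: a hedged sketch of calling those two endpoints from Python. The endpoint paths and field names match the v2 API as generally documented, but check your instance's API docs for the authoritative parameter list; the URL, token, IDs, and scan type below are placeholders.]

```python
# Sketch: uploading a scanner report to DefectDojo's import/re-import API.
import requests

DOJO_URL = "https://defectdojo.example.com"  # placeholder instance URL
API_TOKEN = "your-api-token"                 # placeholder API token

def build_import_request(scan_type, engagement_id, reimport=False, test_id=None):
    """Return (endpoint, form data) for an import or re-import call."""
    endpoint = f"{DOJO_URL}/api/v2/{'reimport-scan' if reimport else 'import-scan'}/"
    data = {"scan_type": scan_type, "active": True, "verified": False}
    if reimport and test_id is not None:
        data["test"] = test_id        # re-import targets the same existing test
    else:
        data["engagement"] = engagement_id
    return endpoint, data

def upload_scan(report_path, scan_type, engagement_id, **kwargs):
    endpoint, data = build_import_request(scan_type, engagement_id, **kwargs)
    with open(report_path, "rb") as report:
        resp = requests.post(
            endpoint,
            headers={"Authorization": f"Token {API_TOKEN}"},
            data=data,
            files={"file": report},
        )
    resp.raise_for_status()
    return resp.json()
```

The first upload of a recurring scan would use the import endpoint; every run after that re-imports into the test ID returned by the first call.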

09:43
And then CI/CD and pushing data. And get your bingo chart out: I've got the infinity symbol for DevOps here, so it'll be all the way. And I found this really hokey one; I thought it was a great example of some of the silly icons we use for CI/CD and DevOps. But anyway, with CI/CD, generally when people are doing CI/CD, they're pushing data to DefectDojo. And this is again through the API, right? You have a job that runs in CI/CD. It produces results.

10:10
Those results are then gathered up and shipped to DefectDojo via the API. Mostly with CI/CD, you're scanning the same body of code, because it is continuous integration, continuous deployment, right? And therefore, we usually scope to just a single repo. So these really, really should use reimport, because they're recurring. They're going to happen repeatedly, and the scope won't change. That is the definition of a reimport.

10:36
This is a great way to augment other testing efforts and expand coverage, right? Push data to DefectDojo. For example, in the past I worked at a place where a different business unit had a license, for whatever reason, for Qualys WAS, their web application scanner. It wasn't, quote unquote, our team's tool. We talked to them, we got the blessing to integrate with that, and now we had yet another data feed into DefectDojo that gave us even more visibility

11:05
into the state of things for tools that our team didn't even run, which was pretty cool. And I already mentioned this, that final bullet point: if you are just starting a security program. I've had several people ask me like, hey, I just got appointed the AppSec person at my company, where do I begin? I like using tools that only need access to a repo, for several reasons. One,

11:33
Usually getting read-only permission to a repo, even if there is some political intrigue happening wherever you're working, is not too hard. And secondly, you don't need a lot of direct developer interaction at first. So it's a great way to start getting results into DefectDojo, showing some value for your program without taking a lot of developer time upfront. So I kind of like those as an easy way to start with an AppSec program: get in,

12:02
get some results, show some value, and then move on to doing more and interesting things. I wanted to make sure I pointed that out while I was on this particular slide.

12:12
And then AppSec pipelines and the API. So this is when you start getting kind of ninja and you start doing automation at a higher level. So AppSec pipeline: the whole idea of this, I came up with this term a number of years ago; Aaron and I started the OWASP AppSec Pipeline project. The idea is, at the time CI/CD was a very new thing and it was really helping drive some value for the development side of the house. And I thought, well, shoot.

12:40
CI/CD is there to create build artifacts and ideally deploy them. Why can't I do the same thing with some automation: run some tools, and instead of producing build artifacts, produce findings in DefectDojo? So that's kind of the impetus for what this AppSec pipeline idea is that I came up with several years ago. And as I've done this in several places, there's sort of two generations of this in terms of maturity. What I call first-gen AppSec pipelines,

13:09
It's just automating the running of tools. I may, as a human, kick off that run, but it is an automated run. And this allows you to do things like run more tools for more important or more critical apps and fewer tools for less critical apps. And then what I call the second generation of AppSec pipelines is, after I had done that for several places I worked, I now wanted to start capturing events. Like, is there a change that I want to react to? Like in many cases,

13:39
say a merge to the main branch in a repo, where I'm about to do a deploy, or those kinds of things that I can hook into those events and then run the automated tools. And for an example of this, I want to also remind myself, which is why I have that final bullet, there was when I worked at Duo Security, I set up an AppSec pipeline and we were able to do static code analysis as well as a couple of other linting type tools.

14:06
across 46 Python repos in like three minutes and 20-some-odd seconds. I don't remember the exact number, but it was very fast. We thought we were gonna have to do this on like a weekly, maybe if we were lucky daily, cadence. We got it going so fast that we actually changed this to commit-based. So anytime there was a commit, we ran tooling, produced results, shipped them to DefectDojo. So, I'm not saying you have to start being ninjas, but you can get to ninja status with a little bit of work and some time.

14:37
So here's an example, the diagram that Aaron and I created around what an AppSec pipeline is. The idea being, you start here, where you have some kind of intake. Triage is basically deciding how much tooling you need to run against that thing, generally based on risk or criticality of that app. You run some tools, the findings get pushed to a vulnerability repository, AKA DefectDojo. And then from there, you can talk to downstream systems. So for the purpose of this talk, we're talking about everything sort of

15:06
left of DefectDojo.

15:10
And here's a diagram of that one I mentioned that I did at Duo Security, where I had an event source, in this case a merge into the main branch. The AppSec pipeline got that event, fired off a whole bunch of containers. Those containers ran a whole bunch of tools against repos. Those tools returned results, and those results got shoved into DefectDojo. And so that's the idea of an AppSec pipeline. It's another way to get results into DefectDojo.
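[Editor's note: the two generations described above can be sketched in a few lines. Everything here is hypothetical: the tool names, the criticality tiers, and the event shape are illustrative, not from the talk's actual pipeline.]

```python
# Sketch of an event-driven AppSec pipeline dispatcher.
# First-gen idea: pick more tools for more critical apps.
# Second-gen idea: react to an event (a merge to main) and run them.

TOOLS_BY_CRITICALITY = {
    "high": ["semgrep", "bandit", "trivy"],
    "medium": ["semgrep", "bandit"],
    "low": ["semgrep"],
}

def tools_for(criticality):
    """More tools for more critical apps, fewer for less critical ones."""
    return TOOLS_BY_CRITICALITY.get(criticality, TOOLS_BY_CRITICALITY["low"])

def handle_merge_event(event):
    """Turn a merge-to-main event into a list of tool commands to run."""
    if event.get("ref") != "refs/heads/main":
        return []  # only react to merges into the main branch
    return [
        [tool, event["repo_path"]]
        for tool in tools_for(event.get("criticality", "low"))
    ]

# In a real pipeline, each command would run (e.g. in a container) and its
# report would then be re-imported into DefectDojo via the API.
commands = handle_merge_event(
    {"ref": "refs/heads/main", "repo_path": "/src/app", "criticality": "medium"}
)
```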

15:38
And then finally, connectors, or API imports. So for DefectDojo Pro, we have created several of what we call connectors that allow you to provide us with the details of the vendor API that you happen to have access to. So let's say, I don't know, I'll use Semgrep, because that's kind of in the middle of my screen. If I have an account with Semgrep, I can put whatever I need to authenticate to the API, plus that URL,

16:04
into DefectDojo, set up a schedule, and then on a daily basis, it will reach out to the API, do, it's actually a re-import under the covers, and keep those vulnerabilities up to date on a continuing, rolling basis. So it automatically fetches those findings from the upstream vendor API and pushes them into DefectDojo. We've got six of them now, and there's two more coming: Tenable and SonarQube slash SonarCloud are next.

16:36
And then I want to talk through a couple special cases of getting data into DefectDojo. So in this case, pen testing, manual pen testing. I'm going to talk about third-party pen testing in a couple of slides, so if you're subject to that, pause for a minute, I will get there, I promise. But manual pen testing. So one of the things, DefectDojo has the ability, as you are testing, to manually enter the findings that you find as a manual pen tester into DefectDojo directly. And then...

17:05
use those to create reports. We did this frequently at Rackspace, where one of the team members would be doing a manual pen test. I might run some additional tools just to get a little more info about the state of whatever I was assessing. I would combine those into one engagement. But one of the really nice things, if you do have to do manual pen testing, or you are someone who does manual pen testing, is that DefectDojo has this idea of finding templates, where I can put in the boilerplate that, you know, cross-site scripting occurs when blah, blah, blah.

17:35
You know, those traditional sort of things that you define in every finding for a pen test, you can put those into a template and then do create finding from template. In this case, maybe I found, say, Ebola: create finding from template, Ebola, bam, I've got all that boilerplate already entered for me. I put in a couple of details about the specifics of this instance of me finding Ebola, and I'm off to the races. So that is a fantastic thing if you are doing manual pen testing.

18:03
Kind of a niche use of DefectDojo, I guess, but if it is something you have team members doing, we specifically created it to be pretty nice for this, because that's what we were doing a lot of the time at Rackspace. I see a hand raised.

18:18
Yes, Mara Shell, if you can unmute yourself and ask your question, please. Yeah, thank you. Can you hear me? Yes. Okay. Yeah, I'm sorry, I should have asked earlier, I joined late. But do you have a way to categorize vulnerabilities that are for OS patches, versus vulnerabilities on infrastructure servers that are embedded software

18:47
like need to be remediated by the owners.

18:52
So it depends on the tool. But one thing you can do, if you are using a tool for infrastructure and you have a particular way you wanna categorize those, one option you have when you're importing or re-importing, bringing those into DefectDojo, is you can assign tags at the point that you're doing the re-import. So if you wanted to tag those so that you could later sort them, you could easily do that. That's probably the best way to do that going in. Does that make sense?
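[Editor's note: tagging at import time is just another field on the same import call. A minimal sketch of the form data; the `tags` field name follows the v2 import endpoint as generally documented, and the scan type, IDs, and tag values are illustrative.]

```python
# Sketch: assigning tags at import/re-import time so findings can be
# filtered later (e.g. infrastructure team vs. app team).
def import_form_data(scan_type, engagement_id, tags):
    return {
        "scan_type": scan_type,
        "engagement": engagement_id,
        "tags": tags,      # e.g. ["infrastructure"] vs ["app-team"]
        "active": True,
    }

infra_scan = import_form_data("Tenable Scan", 12, ["infrastructure", "os-patch"])
app_scan = import_form_data("Tenable Scan", 12, ["app-team", "embedded"])
```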

19:17
Yeah, we're using one. It's basically infrastructure vulnerabilities. But we wanted to separate it into, like, OS patches, those solely need to be remediated by the infrastructure team, versus the embedded ones on servers. The scanning tool is still Tenable, or Nessus. But we call them application because those are embedded software

19:45
that need to be remediated by application teams, not like Checkmarx scan results. So it's really a tag, right? Really a tag and rules on how we can differentiate those two. Yeah, okay, I didn't quite get your situation, but I do now. So two things: the today answer and the shortly answer. The today answer, the best thing to do and what I've seen several of our customers do, is they'll bring those into a

20:13
single engagement or test or whatever you want to do. They will have those triaged by a team; there's usually a team that goes in and does the triage. You can do the tagging there, right? Cause I can go through and look at the long list of, you know, I'm gonna make the numbers easy, 10 findings from Tenable. Three of them are server-based; the rest of them are the app team's problems. You can tag them appropriately and then use the tag to filter the ones that belong to each team. Does that make sense?

20:43
That's the best way to do that today. The future answer is: we are starting work on a thing we're calling the rules engine that will allow you to define rules. So you could write a rule such that if I get a result from, say, Tenable and the result has this in the title, automatically tag it with infrastructure. But if it has this other thing in the title, automatically tag it with your app team or whatever the other category was. So there'll be a way for you to programmatically set up rules

21:12
that will happen at import time to do that tagging for you. But that's probably not gonna land until Q4 of this year, so later on this fall, early winter, depending, I guess, on where you are on the globe. Does that help? Yes, because we do have a lot. And if we use tagging, that will be manual, right? We have to manually tag findings, correct?

21:38
Correct. Now you can do some interesting things. This depends on, well, I haven't looked deeply into the Tenable results recently, but a lot of times you can do keyword searching or keyword filtering of the findings and then categorize them in bulk: if they have this particular string in them, mark them with this tag, if you get what I'm saying. So you can make some of that

22:04
quicker as a human activity; you don't have to read each finding and tag them one by one. You can kind of do them in bulk. Like if I see a whole bunch of cross-site scripting, we're going to presume those are app problems; I will tag all those with the app team and move on. Do you have auto-parsing? Because that sounds like a manual task still. It is a manual task today. That's why we're writing the rules engine, because then it will be you writing a rule that says, hey, computer, go off and do this looking for me and tagging for me. Does that make sense?

22:34
Yeah, that really makes sense. Thank you. Yeah, sure. No, great question. Also, do you have connection to Archer? We do not have an outbound connection to Archer currently. I think Archer has an API, if I remember correctly. I had Archer at one of the places I worked at in the past. And so you could, in theory, write code to pull results out of DefectDojo's API and push them into the API of Archer today.

23:01
But that would be code you'd have to write, or we could help you if you're a customer, write that code. Thank you. Sure, yeah, absolutely. No, I love questions. It's better than talking to myself in a hotel room in Dallas, so much, much better.

23:17
All right. The other special case was manual tests. I mentioned pen tests, but this doesn't have to be pen tests. This can be any number of things. So I've seen people do threat models and take the results of a threat model, maybe some control shortfalls or whatever, and put them as findings into DefectDojo. There's another customer of ours that does basically a desktop exercise where they find out what security controls are baked into each application.

23:46
And they have policies around the different criticalities of applications needing which controls. And so if there's a controls gap, that's a finding in their world, and they put it into DefectDojo. I've seen people use DefectDojo's manual entry of findings for internal audits. So Dojo doesn't really care where a finding comes from, as long as it can sort of fit the nature of a finding, like a thing that isn't the way we want it to be from a security perspective.

24:15
It can go into Dojo and Dojo is perfectly fine.

24:20
Oh, and yes, I had to do this. I love this. This is me being a little snarky about AI. I asked an AI-driven thing to do a threat model, and besides the, like, fashion model answers I got, I got this crazy little hacker-changing-login thingy. I thought it was ridiculous. So I also did one just writing on a whiteboard, which to me is what's right. Well, it is, sorry, I forgot I put that in there, but I made that.

24:48
But oh my goodness, yeah, I was like, threat model. No, not models like on a runway with fashion, this is modeling like threat modeling. Anyway, then the final special case. And this is where, if you are doing those manual tests, you don't have to necessarily type them in. DefectDojo has this thing called a generic importer that allows you to take a CSV or JSON formatted file, whatever you want, and upload it into DefectDojo. So if you happen to have a tool that we don't have a parser for,

25:16
We've had cases where customers had internally created tools, and obviously we don't have a parser for those. Or you do a third-party pen test; this is where I said I would mention it from that previous slide. No problem: contractually, what I would suggest, and I've suggested to many of our customers, put in your contract with the third-party pen test firm that they will provide all of the findings in either CSV or JSON. Give them the example CSV and JSON that we have for DefectDojo.

25:43
They produce a report for you. They also produce a CSV or JSON. You upload that into DefectDojo and bam, you're tracking that third-party pen test, no problem. Same for those internal security controls or whatever: put them into an easy-for-you-to-create format, whether that's CSV or JSON, and then you can just directly import them into DefectDojo, no big deal.
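[Editor's note: a minimal example of what that generic JSON could look like. The `findings` list with `title`, `severity`, and `description` fields follows the Generic Findings Import format as generally documented; check the sample files in the DefectDojo docs for the authoritative schema. The finding content is invented for illustration.]

```python
# Sketch: writing a pen test result in the generic importer's JSON shape.
import json

report = {
    "findings": [
        {
            "title": "Missing rate limiting on login",
            "severity": "Medium",  # Info / Low / Medium / High / Critical
            "description": "Found during the Q3 third-party pen test.",
            "mitigation": "Add per-account and per-IP rate limits.",
        }
    ]
}

# This file can then be uploaded to DefectDojo as a Generic Findings Import.
with open("pentest-findings.json", "w") as fh:
    json.dump(report, fh, indent=2)
```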

26:04
We have two questions. Oh, let's. You ready? Yeah, I'm ready. Is there a way to change the SLA for a finding instead of set at the product? That's number one. At a specific finding level right now, no; SLAs are applied at a product level.

26:29
I don't think there's really a great way to do it for just an individual finding currently, because it is managed at the product level. Trying to think. You could maybe overuse risk acceptance to do something like that, but that would be kind of kludgy. There really isn't a great way to do just a single finding, short of maybe copying into a different product that has a different SLA attached to it, but that seems kind of clunky, to be quite honest.

26:59
Okay. And then, same person: how do you group findings and push them to a team other than the product team? Ah, okay. Inside of DefectDojo, when you're viewing the list of findings, there's a bulk select option: you can do the bulk select, select multiple findings, and turn them into what's called a finding group in DefectDojo. That's a way to bundle a bunch of findings into one unit.

27:28
And then frequently what people use that for is either reporting or, with our JIRA integration for example, you can push that finding group over to JIRA as one issue in JIRA, even though it might be seven findings in DefectDojo. And usually customers do that because the mitigation of those, say, seven findings is one action by whoever is responsible. So instead of saying seven times in JIRA, fix this thing by doing whatever. Like, a great example is TLS.

27:58
You know, your TLS has weak ciphers, your TLS allows TLS 1.0, your TLS has all these seven different issues. Instead of seven issues in JIRA, you can just have one that says, harden your TLS, which would then have those seven individual things all in the description of a JIRA issue. So that's how you can kind of group findings and push them to downstream systems. Okay. And then they...

28:25
Yeah, they say, I think the grouping works for us, separating apps versus infrastructure teams for remediation. Oh, oh, interesting. I hadn't thought about it, but you certainly could do that grouping for that as well. Yeah, oh, I hadn't, I had not even thought about that, but you certainly could. And even better, if you happen to have two different JIRA projects, you could push those to different JIRA projects by finding group, because you can take a finding group and ship it to a place in JIRA. And so you could, in essence, group

28:55
all of your infra and push them to one Jira place and push all of your app and push them to a second place. Oh, that's awesome.

29:10
Are those all the questions? Yeah, that's it for now, carry on. Okay, so we're gonna have a re-import versus import grudge match, with mooses, because, I don't know, I like this image. So: we have a point-in-time assessment we're gonna do. We need to retain those assessments. We need proof of that assessment work for future audits. What should we use? Well, for this one,

29:40
import, right, it's a point in time. We're only gonna do this once. We have to have future proof for auditors. So you can just do an engagement with one or more tests in it, and then close that engagement, and it's sitting there waiting for an auditor whenever they come back and ask for it.

29:57
Compliance. Oops, oh man, I messed up, I was supposed to have this one come in. I messed up the animation of the image, my bad. Well, so compliance, very similar to that. You need to do a PCI assessment, say, every quarter. How do you do that? I would also do that as an import, right? Although you're probably scanning the same thing and you could do that as a reimport, a lot of times auditors like to see proof of what happened in that quarter.

30:26
So you can just do one per quarter. It's only four per year, it's not a lot of engagements. And you can have thousands of engagements in DefectDojo if you want; it won't hurt anything. There's no real reason not to have engagements.

30:41
So: you've acquired another company and you've been asked to assess the security state of that purchase, right? You need to provide some kind of overview of the state of that acquisition. What would you use in this case? I have actually lived this very thing. What I used in that case was an import, cause once again, this is really a point in time. I'm going to go back and say, when we bought this company, this is what their software looked like. Hopefully in a year I can do another one and say, look, it's even better now that we've had our hands on it, or whatever.

31:10
But I would say this is a point in time as well.

31:14
So it seems like I'm leaving reimport out, and reimport is a comparison between two things, kind of like this guy looking at X-rays. But I didn't do that on purpose. The easiest examples I could think of were all import ones, because reimport is such a generalized case. It's any kind of recurring scan. So CI/CD, automated testing, all the AppSec pipeline stuff generally tends to be handled very well

31:42
by re-import, because it's repetitive and it's the same scope. Recurring scans, right? And I crossed out scan in the title and made it test, because in Dojo speak, a scan is really a test in terms of the data model. One thing we get a lot of questions about with re-import is, engagements do have a start and a stop date, and some people get a little confused about that. Like, what do I have if I'm running this every week and I don't plan on stopping? What is my end date? Well,

32:11
One thing: Dojo doesn't care about the end date. That's just for sort of calendaring, quite honestly. You don't need to have an end date. What I have seen teams do is, if they're doing a recurring scan of whatever, this could be CI/CD, this could be infrastructure, it could be containers, it doesn't matter, pick a timeframe that makes sense. So they'll do it quarterly, and then stop and start a new Q2 engagement, let's say. That way they sort of have what the quarter looked like summed up,

32:40
when they stop that Q1 re-import and start a new Q2 re-import. I've seen customers do the same thing but yearly, and we have customers that do forever: they just never close, they just keep pushing into DefectDojo. And Dojo doesn't care. It's not problematic; all of the smart features still work. So it's really your choice. I mean, one of the biggest design goals behind Dojo was to not force you to change how you do work just because Dojo wants you to.

33:09
It's a plus and a minus. It can be confusing for people because, like in this case, there's an end date that doesn't make sense in some of the use cases for engagements. But you just don't have to put it in; if you never close an engagement, Dojo doesn't mind. So I wanted to drop that in, because we've had that question a couple of times.

33:29
Ah, and then some advanced tips and tricks. This is one thing that kind of came out of this idea of DefectDojo that is sort of implicit in its design, but it's not necessarily very obvious. One of the nicest things about having something like DefectDojo, particularly with DefectDojo's smart features, is that there is a piece of software, and potentially humans, in between the running of those security tools and the downstream systems. So if I can't tune

33:57
a tool to get rid of false positives, or findings that just don't matter, or findings that are, say, critical but, because of your compensating controls, are really mediums, right? If I can't tune that out at the tool level, at this level here, I've got DefectDojo, and I have a second place to mark false positives, change that critical to a medium because we have a compensating control, and do whatever kind of massaging I need to do with that data before I ship it to downstream systems.

34:26
Because the minute you ship it to, say, a JIRA integration and put it in somebody's backlog, you're saying this is actionable and real, and I need you to go do something about it. But if you have the ability in DefectDojo to tweak that and adjust it to fit your context, then you can get much more accurate issues downstream that are actionable, and a much better relationship with your dev teams, because you're not pushing stuff that you then have to go

34:51
re-argue. "Oh yeah, I did push that as a high. I'm sorry, it's really a medium because we have this compensating control," right? You can just avoid that altogether. So, I've seen a lot of people do automation with security tooling where they automatically pump results directly into an issue tracker. It sounds like a good idea, but unless you have great faith in that tool never producing false positives or incorrectly stating, say, the criticality of something, I would put something in between that tool

35:19
and the people doing the work; in this case, DefectDojo.
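To make the "massage the data before it goes downstream" idea concrete, here is a hedged sketch. The `false_p`, `active`, and `severity` fields are finding fields in DefectDojo's v2 API (PATCH `/api/v2/findings/{id}/`), but the downgrade policy itself is an invented example of a compensating-control rule, not anything DefectDojo ships.

```python
# Sketch: produce a PATCH body for /api/v2/findings/{id}/ that reflects a
# triage decision before the finding is pushed to a downstream tracker.
# The severity downgrade map is a hypothetical compensating-control policy.

DOWNGRADES = {"Critical": "High", "High": "Medium", "Medium": "Low"}

def triage_update(finding, false_positive=False, compensating_control=False):
    """Return a PATCH body reflecting the triage decision for one finding."""
    update = {}
    if false_positive:
        update["false_p"] = True
        update["active"] = False   # don't ship false positives downstream
    elif compensating_control:
        # Drop severity one notch because a compensating control exists.
        update["severity"] = DOWNGRADES.get(finding["severity"], finding["severity"])
    return update

print(triage_update({"severity": "Critical"}, compensating_control=True))
# -> {'severity': 'High'}
```

The point is simply that the adjustment happens in DefectDojo, before the JIRA push, so the dev team only ever sees the already-corrected severity.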

35:25
And then a neat trick that I've seen some customers use before: they will do recurring re-imports into an engagement over and over and over again, but they need to prove to an auditor (this is like the once-a-quarter thing or the once-a-year thing) that they did testing at these particular calendar intervals. So what they would do is continually re-import into the same engagement.

35:53
And then at the end of that quarter, or end of that year, or whatever the time period was, you can make a copy of an engagement. So they would copy the engagement, which would save those findings at that point in time. And then you could go back and keep re-importing into that same existing test. So this is just a nice little hack to get you a snapshot of a point in time while continuing to do re-import as your normal sort of daily operation. That way you don't have to have

36:20
a separate process to do a single import to take that snapshot. You can literally just copy out of your ever-moving-forward, re-imported engagement.

36:32
Okay, and then finally, some other choices you have.

36:38
So one thing I didn't mention about dedupe is there are two levels at which you can place it, right? You can place it at the product level or at the engagement level. Product is what you usually want; usually you're running tools across multiple engagements for the same product and you want to dedupe within that scope. I have seen, though, instances where customers actually need to do it down a level, at the engagement level instead.

37:04
How does this work, or why would this happen? One example of it was a customer that had a product, and they had main branch and feature branch scans as different engagements in DefectDojo. But they wanted to be able to tell whether the same issue existed in two feature branches and the main branch, or only in feature branches. Well, if you have dedupe at the product level, it dedupes

37:33
between those branches, and that's not what they wanted. So we simply had them configure it to move the dedupe down to the engagement level. And so these are some choices you have to make with re-import and how you've decided to lay out your products. But you have two choices, so it is flexible.
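For the API-minded, the scope switch described here corresponds to a boolean on the engagement. The `deduplication_on_engagement` flag is a field on DefectDojo's Engagement model (settable via PATCH `/api/v2/engagements/{id}/`); treat this as a sketch and the engagement IDs as placeholders, and verify against your instance.

```python
# Sketch: a PATCH body that moves dedupe scope down to the engagement level,
# as in the main-branch / feature-branch example above.

def dedupe_scope_patch(engagement_level):
    """Body for PATCH /api/v2/engagements/{id}/.

    engagement_level=True  -> findings dedupe only within this engagement
    engagement_level=False -> findings dedupe across the whole product (default)
    """
    return {"deduplication_on_engagement": bool(engagement_level)}

# One patch per branch engagement, so each branch keeps its own copy of a
# finding instead of having it deduped away against another branch:
for engagement_id in (101, 102, 103):   # hypothetical main + two feature branches
    print(engagement_id, dedupe_scope_patch(True))
```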

37:52
Oh, here's another, if you like a graphical representation: I turned this into a data model diagram, which I kind of like; I'm a very visual person. So here are the sort of two levels. And then if you look in the documentation, we have this chart that is another visual representation. So there are several ways to slice this, but the idea is you need to understand the scope of where that dedupe happens. At a product level, it's going to cross engagements. At an engagement level, it's only going to cross tests, basically.

38:23
And then one last thing. You'll find that DefectDojo is very flexible, but a couple of times we've run into issues with customers with what I would call the customization last mile. They can't quite get the system they need natively in DefectDojo features, or at least in the primary features of DefectDojo, and your bridge across that chasm is tags. And so tags are fantastic, and they're all over the place in DefectDojo, throughout the entire data model.

38:50
And a lot of times when we're working with customers, that's what ends up being that sort of final thing to get that little bit of filtering you need, or adjusting the data, or monitoring the data so you can do the reporting you need: tags. For example, we had a customer that had several different ways they wanted to track risk-accepted findings. They had three types. Well, fine: make a tag per type. And we even did a little bit of a cheat on the tag. Each tag started with a two-letter prefix and then a dash,

39:19
and then the type. So if it was a false positive, it would be two letters, dash, FP for false positive, for example. Well, you can search for those two letters and the dash and get all of the risk-accepted findings, or I can search for the tag that's two letters dash FP and only get the false positives. And this allowed them to report specific subcategories of risk acceptance, or all risk acceptances, all by how they did the tags. So if you have these little corner cases you can't quite catch,

39:48
Tags are usually the way to get you there.
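The prefix trick above can be sketched in a few lines. The tag names here (`ra-fp` and friends) are hypothetical; the point is just that a shared prefix lets one search cover all subtypes, while the full tag narrows to one subtype.

```python
# Sketch of the two-letter-prefix tag convention: "ra-" is a hypothetical
# prefix for all risk acceptances, and the suffix names the subtype.

def findings_with_tag_prefix(findings, prefix):
    """All findings carrying any tag that starts with `prefix`."""
    return [f for f in findings if any(t.startswith(prefix) for t in f["tags"])]

findings = [
    {"id": 1, "tags": ["ra-fp"]},        # risk accepted: false positive
    {"id": 2, "tags": ["ra-cc"]},        # risk accepted: compensating control
    {"id": 3, "tags": ["sla-breach"]},   # unrelated tag
]

print([f["id"] for f in findings_with_tag_prefix(findings, "ra-")])    # -> [1, 2]
print([f["id"] for f in findings_with_tag_prefix(findings, "ra-fp")])  # -> [1]
```

In DefectDojo itself you would do the same thing with the tag filters on the findings list or the reporting filters, rather than in code.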

39:52
Key takeaways. The beautiful thing, I think, about DefectDojo, although it can sometimes be a little bit of a dauntingly complex product, is that you don't have to adapt your work to how DefectDojo wants you to work; DefectDojo adapts to how you work. That's a big win. Import is good for one-time assessments. If you haven't caught that, I'm going to say it again.

40:15
And re-import is really good for recurring scans where the scope doesn't change. And that "scope doesn't change" is really important. If you change the scope on a re-import, you're going to get inaccurate results, or certainly odd results. It's sort of like taking a scan of one repo and a scan of another repo and comparing them; it really doesn't work because they're two different repos. And then if you combine DefectDojo's smart features with this re-import and import, you can save tons of time. And really, more importantly,

40:45
remove paper cuts and the drudgery. Honestly, comparing scans, I used to do that manually. It is awful. It is gross. That's half the reason why we put it into DefectDojo: because it's just not fun work. And if you can get that drudgery out of your life, then you can focus on the fun stuff, which is the other reason we wrote DefectDojo to begin with. We were doing the security work, and I'd much rather do the fun things than the drudgery things. And guess what, computers are great at the drudgery things. I think that is it. So I will happily answer any questions

41:15
that may or may not exist still in our audience. Okay, we have one. If I use different engagements in the same product, same tools, can't that mess up the dashboard and inflate the number of vulnerabilities present?

41:33
Ooh, that's a great question. It kind of depends on how you do the dashboard and where you're pulling those numbers from, because DefectDojo can do reporting at all those different places in the... actually, let me go up a bunch. I can point at this guy. So you can do reporting at any of these levels. So if you did that at an engagement level and you're worried about it muddying up the product, it could, but that means what I would do is filter the reports by engagement, which you can do. You can do reporting and filter by engagement.

42:03
And then, like the example I used with the branches: say this is the product, and this is the main branch, and this is a feature branch. I could filter by product and feature branch and only get those results for the feature branch. So some of the numbers may be inflated if I look at the product level, that is true, because you might have something double-counted if it existed in the main and the feature branch. But if you filter to the engagement,

42:32
which is what the customer who was doing this branching model would need to do to get that level of accuracy, yes, you can avoid that sort of inflated-value issue. Hopefully that helps.

42:43
Okay, great. We have another one: is there a means to provide permissions at the vulnerability level, or based on tagging? Oh, interesting. The permissions right now stop at product and product type. So I can do RBAC and give someone access to everything within a product type, or just a specific product. We've seen cases where you want someone to go in and...

43:10
I want to go into this particular finding and say, hey, this is a false positive, or this should be, like I mentioned earlier, a medium instead of a critical because of XYZ. What we've seen customers do is: if you give somebody read-only RBAC permissions into, say, this product, they can go to this finding, and in the UI there is a way to request a peer review.

43:35
And a peer review is basically a way for a read-only user or other users also have this ability to tap on basically a security person's shoulder and say, hey, I don't think this finding is right because of these things. I think it should be this. It's audit logged in the finding. People with proper permissions can then go in and see the findings that have those peer reviews requested and triage them however they need to. Review the finding, agree with the person, disagree with the person, whatever. But it's a way to give people access

44:05
to ask for changes at a finding level without being able to change everything else within that product.

44:18
Okay, great. Hopefully that was helpful. Oh wait, there's a follow-up on that: need to limit view of other test engagements within a product while sharing specific findings. Oh, interesting. We need to...

44:36
Ooh, that gets really interesting. So.

44:42
At that point, I guess it depends on whether this needs to be interactive. If you're literally just sharing the findings, I would use our report generator to generate a report of the findings filtered for only those user groups, and then hand them the findings report; that's probably the best way. You can also export that as a CSV if you want to just give them a CSV or an Excel of the data. But that's probably the best way to filter that down to just a subset of findings

45:11
without giving broader permissions and not using the peer review thing I mentioned earlier. Okay. So yeah, the follow-up is pen test data is not shared.

45:22
Interactive use desired, to use the HOS ticketing system. Ah, got it. So you could then do a filter to get all of the test types, excluding pen tests, into either a CSV or a printed PDF report, and then share that with the teams that way. That's probably the easiest way to do that in terms of providing limited access to a subset of findings.
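As a rough illustration of that filtered export, here is a sketch done outside DefectDojo. The test-type label "Pen Test" and the field names are hypothetical stand-ins for whatever your findings export actually contains; in DefectDojo you would apply the same exclusion with the report generator's filters instead.

```python
# Sketch: share a subset of findings by excluding one test type and writing
# the remainder to CSV. Field names and the "Pen Test" label are hypothetical.

import csv
import io

def export_excluding(findings, excluded_test_type):
    """CSV text containing every finding except those from the excluded test type."""
    rows = [f for f in findings if f["test_type"] != excluded_test_type]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["id", "title", "test_type"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

findings = [
    {"id": 1, "title": "SQLi", "test_type": "Pen Test"},
    {"id": 2, "title": "Outdated lib", "test_type": "Dependency Check"},
]
print(export_excluding(findings, "Pen Test"))
```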

45:49
Okay, we don't have another follow-up there, so we should be good. And then I have one more. So if someone wanted to work with Dojo to get this laid out for their organization, like the re-import and import, what they should specifically be doing, is that something

46:19
that they can get from us, or where can they get that support? Yeah, so for all of our customers that have a pro engagement with us, SaaS or on-prem for that matter, we do that by default. That is part of our normal onboarding and continuing support. And we've done that; the example of the subtypes of risk acceptance was literally a support ticket that came in

46:49
from one of our customers who said: I have this need, in their case, to report on these types in an easy fashion; how do I do that? And we worked them through the process of doing that. So that's something we do day in and day out for our customers, so that's no big deal. If you're an open source user, there's a very active, honestly, Slack channel on the OWASP Slack where those kinds of questions can get answered, and you have a broad audience of people that are experienced with DefectDojo as well.

47:21
Okay, great. Thank you. Are there any comments? So if we have any other questions, please ask them now.

47:37
Okay, well thank you everybody for attending today. This was great, Matt, thank you so much. And just to let you know, for everybody who's on either our newsletter or our email channel: two weeks from today, Matt will be doing an intro to Dojo Pro. So if you're curious about that at all, you can see what

48:04
the differences are, what the benefits are of doing that, and he'll be going through that in our July webinar. So you'll see an invite in your mailbox; please sign up. And if you haven't already, sign up for our newsletter, just so you can stay aware of all of our virtual and in-person events, and also product updates. So thanks everybody, and we will talk to you soon. And thank you, Matt. Yep, take care everybody.