Transcript
00:07
There we go. Hi everyone, happy Wednesday. It's great to be with you all today. My name is Greg Anderson and I'm the creator and now CEO at DefectDojo. What I've prepared today for our office hours this month is a walkthrough, kind of a high-level look at the platform, and then a deep dive into deduplication specifically. I just think the context of the platform and the other smart functions that are available
00:35
are important to weigh in terms of what will work best for your organization. DefectDojo is made to be hyper flexible, and so there are a lot of choices when it comes to changing how these things behave. So why are we here today? I think the big premise is that vulnerability management is ultra, ultra painful. This is something that I experienced before going on to the vendor side; without a tool like DefectDojo, I find
01:05
that the work we ask our security teams to do is near impossible, and also that about 60% of that work is things the team doesn't even like doing or want to do. What we're aiming to build with DefectDojo, this is sort of our vision when it comes to security automation: after tuning, when using smart functions in the right way, we want people to be able to get to end-to-end security automation
01:33
without human intervention. Now that's a pretty tall order, and many organizations aren't ready to get all the way to that sort of security promised land, if you will. But likewise, we want the tool to be able to meet people wherever they are in their security journey. So if you have no automation, that's okay. You can still get value out of the platform, either open source or commercial. Maybe you have some automation,
02:00
or you're looking to go to those really advanced use cases where we're constantly tuning and adjusting these things to potentially remove the human element.
02:12
So when it comes to smart functions, the primary goal of any smart function in DefectDojo is to enhance and enrich your security tools at a number of layers. The primary goal is just to reduce the noise, consolidate results, because frankly, tools aren't great at this in general. They're noisy, they're verbose, they're riddled with false positives, generally speaking. The other thing that's really important to us when we talk about DefectDojo smart functions,
02:42
is all of them are designed to respect the input that humans give us, meaning they will change over time based on how you mark data. And that applies to both reimport and deduplication, which are the primary functions that I want to talk about today. So when it comes to reimport,
03:08
The goal of reimport is to compare two different scans over time. This is really great for the CI/CD use case of recording what has changed from one scan to the next, whereas deduplication is focused more at a general level, although there is some intersection between the two functions. The idea of deduplication is to achieve similar results; it just works at a different level based on scope.
03:38
And then false positive history I don't want to talk about too much today. Compared with where deduplication is now, false positive history was more of an experiment. I generally recommend using deduplication over false positive history, because deduplication can be used to achieve very similar results without having to rely so much on the learning aspects of the algorithms. So,
04:04
Diving in more to reimport and where that fits really well is when you're repeatedly scanning the same set of assets every day, every week, every month, every quarter, whatever your cadence is. And this is sort of a visualization of where reimport works the best.
04:28
Oops, sorry, we have some late joiners that I have to let in quickly, Don. So picking up, this is sort of a visualization of where we believe reimport works the best. And then where reimport doesn't work well is when your scope of what you're scanning is changing or expanding. And the reason for this is that reimport is going to automatically look at those results to...
04:56
determine what has changed and close findings appropriately. And so if a finding isn't present in your new scope, reimport is going to assume that it was closed.
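To make that behavior concrete, here's a minimal sketch of the compare-and-close idea. This is not DefectDojo's actual implementation, just the concept described above: findings are matched between the previous and the new scan, and anything missing from the new scan is assumed to be fixed.

```python
# Illustrative sketch only: reimport's compare-and-close idea,
# not DefectDojo's real code. Findings are reduced to hashable
# keys (think hash codes) and diffed between two scans.

def reimport_diff(previous, current):
    """Return new, unchanged, and closed finding keys."""
    prev, curr = set(previous), set(current)
    return {
        "new": curr - prev,        # appeared in the latest scan
        "unchanged": curr & prev,  # still present in both
        "closed": prev - curr,     # absent now -> assumed mitigated
    }

diff = reimport_diff(
    {"sqli-login", "xss-search"},   # last week's scan
    {"xss-search", "ssrf-api"},     # this week's scan
)
# "sqli-login" is absent from the new scan, so it gets closed.
```

This is also why a shrinking scan scope is dangerous with reimport: a finding that simply fell out of scope is indistinguishable from one that was actually fixed.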
05:12
Moving on to deduplication, which is where I want to deep dive from a smart function perspective. Deduplication works really well when your scope is evolving, you don't know your scope, or you can't guarantee that you're going to be scanning the same thing over and over again. How I typically like to talk to people about it is: it's very nice when you know your scope, but if you work in a hectic security environment,
05:41
it may be the case that you just need to throw data at the tool and let the tool figure it out. And so deduplication is better suited for that use case. The other thing that you can achieve with deduplication is tuning across tools, which I'm going to touch on a little bit in terms of configuration, but going across tools is work.
06:10
Essentially, I would say this is not what you want to start with out the gate. I would typically say how we engage with people is getting everything working well with regard to a single scan first and then expanding into the use case of working across tool. The other thing that I recommend just when we start to look at how these things are configured is that it's highly recommended
06:40
that, if you're a customer, you work with support ahead of time, because there are not a lot of protections when it comes to changing these settings. It is possible to make mistakes, and those mistakes can be difficult to walk back.
07:01
So when we look at our options for deduplication, we make four algorithms available. Out of the box, we have settings that we think work very well from a general perspective, but you can change which algorithm DefectDojo is using if you want to customize. Most tools today, it really varies, are typically using option four, which is the most advanced option, where we use
07:30
a unique ID from the tool itself, and then also fields from the scan data. But to walk you through each one, legacy deduplication was our first run at this, which just essentially compares finding details in a non-hashed way. And then from there, we evolved it to use unique IDs from the tools. This is really great for working within a single tool with regard to deduplication.
07:59
But when you want to go across tools, the unique IDs aren't going to match. And so from there, we went to using hash codes, which is similar to legacy deduplication, but frankly just more advanced. And then finally, we started combining these different methods to get something that would be both precise for same-tool comparison and able to go across tools. So just one example:
08:26
if you're using ZAP and recurringly scanning with ZAP, unique ID sort of solves that, and then the hash code comparison is better for going across tools, essentially.
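For reference, in the open-source edition the algorithm choice is a per-parser Django setting. The sketch below is modeled on DefectDojo's settings.dist.py; the constant and parser names are from memory, not verified documentation, so check them against the version you run.

```python
# Sketch of per-parser dedupe algorithm selection, modeled on
# DefectDojo's settings.dist.py. Names are assumptions from memory;
# verify against your installed version before copying.

DEDUPE_ALGO_LEGACY = "legacy"
DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL = "unique_id_from_tool"
DEDUPE_ALGO_HASH_CODE = "hash_code"
DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL_OR_HASH_CODE = "unique_id_from_tool_or_hash_code"

DEDUPLICATION_ALGORITHM_PER_PARSER = {
    # Hash codes enable cross-tool matching on finding fields.
    "ZAP Scan": DEDUPE_ALGO_HASH_CODE,
    # "Option four" prefers the tool's own unique ID and falls
    # back to the hash code, covering both use cases.
    "SonarQube Scan": DEDUPE_ALGO_UNIQUE_ID_FROM_TOOL_OR_HASH_CODE,
}
```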
08:42
When we look at the fields that are ideal for using the hash code on, we see that there are certain objects that work really well with SAST and others that work better with DAST. So with regards to SAST, it's primarily the location fields that work the best. We want to, at a minimum, be ensuring that a finding is in the same place, because on the SAST side of the house,
09:08
you can have vulnerabilities that are repeated in other areas. So location data is really, really key with regard to evaluating duplicates, specifically in SAST. And on the DAST side, typically we like to look at the endpoints. You can walk this back if you don't want to use location on DAST, but then you run into issues with one vulnerability that may encompass another.
09:37
And so typically it's recommended. And then there are also a couple of fields that work well across both SAST and DAST, and generally that's CVE and CWE.
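In open source, those field choices are also settings. Here's a hedged sketch modeled on the HASHCODE_FIELDS_PER_SCANNER setting in settings.dist.py; the field lists are chosen to illustrate the SAST/DAST guidance above rather than to be copied verbatim, so confirm the supported field names for your version.

```python
# Sketch of per-scanner hash-code fields, modeled on DefectDojo's
# HASHCODE_FIELDS_PER_SCANNER setting. Field names are illustrative
# assumptions; confirm what your version supports.

HASHCODE_FIELDS_PER_SCANNER = {
    # SAST: location fields matter most, since the same weakness can
    # legitimately recur in different files and lines.
    "Semgrep JSON Report": ["title", "cwe", "file_path", "line"],
    # DAST: endpoints play the role that file/line plays for SAST.
    "ZAP Scan": ["title", "cwe", "severity", "endpoints"],
}
```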
09:55
And then finally, how do you do the tuning that's available in open source? Tuning in open source is essentially a three-step process. I tried to fit this as well as I could on a slide, but I realize that it is a big slide. On the left side, some of the settings that are available include the algorithm that is being used for the tool,
10:25
and on the right side are the fields that the algorithm is taking into account. With regard to open source, to change the algorithm or the behavior of the algorithm, both of these need to be considered and adjusted. And then once you've settled on the type of algorithm you want to use and the fields that you want to apply, again, DefectDojo ships
10:53
with the settings we think are best for generic use; this is only for when you start to look at, I would say, hyper-specific sets of adjustments. The other thing you need to do: we provide management commands for validating that the settings you have entered will work, at least from a syntax perspective. And so...
11:21
If you're a super user of DefectDojo, you'll be familiar with the management console. I debated adding more information here, but if it's an area you're not familiar with on the open source side, I was hesitant to build out these examples further because management commands don't have the same level of validation that sort of regular app usage does. And so I purposely
11:48
didn't include those details, because I wanted to try and avoid people getting into trouble making some of these adjustments. But it's all in the repo if you go look, or if you email in, I will tell you where to find it. I'm just trying to provide a little bit of protection. That's one of the big challenges with open source, actually: giving people the right guardrails in terms of making something that is configurable
12:16
versus making something that has a set happy path where people can't get into trouble. So this is one of the key management commands, validate dedupe config, when it comes to making sure that the changes that have been made are effective. And then option two, the second thing that you have to run, is the actual dedupe management command. What this is going to do is recalculate all
12:45
of the deduplication hashes. So this isn't rerunning the actual process of changing findings, but it's recalculating all of the hash codes such that when new findings come in, the duplicates will be correctly triaged. This is just so you don't have a gap between findings that were calculated with one deduplication algorithm and ones that were calculated
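As a sketch, the two management steps could be wrapped like this. The command name (`dedupe`) and its flags are my recollection of the repo's management commands, not verified documentation, so treat them as assumptions and check `dojo/management/commands` in your checkout before using them.

```python
import subprocess

# Hedged sketch of the two-step open-source flow after editing the
# dedupe settings. Command and flag names are assumptions; verify
# against dojo/management/commands in your DefectDojo checkout.

def dedupe_maintenance_cmds(manage_py="./manage.py"):
    """Return the commands to run, in order."""
    return [
        # Step 1 (hypothetical flag): validate the config syntax only.
        [manage_py, "dedupe", "--dry-run"],
        # Step 2: recompute hash codes so old and new findings are
        # compared under the same algorithm (no findings are changed).
        [manage_py, "dedupe", "--hash_code_only"],
    ]

def run_dedupe_maintenance(manage_py="./manage.py"):
    for cmd in dedupe_maintenance_cmds(manage_py):
        subprocess.run(cmd, check=True)  # stop on the first failure
```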
13:14
with a different algorithm. Okay, we do have a question, Greg: what is a good way to correlate an SCA finding with a running container finding? With a running container finding? When you say a running container finding, is that a dynamic finding in your...? You can come off mute, Ruben, and maybe talk directly to Greg. Oh.
13:41
I mean, you can call it dynamic, but it's still an SCA finding, right? So in the pipeline, you find all these issues associated with your open source. Then in your container findings, you will still find the same SCA issues based on your libraries that are running, right? So I'm just wondering how we can...
14:10
deduplicate those. Can you name the tools specifically? I just wanna make sure. Oh, sure. So let's say Snyk is one of the SCA tools, right? And then you use a Wiz or an Aqua or anything that actually gives you the running container. Because when I say running container, what I mean is, I'm more
14:40
worried about not just the static image, but the container that is actually running instead of just sitting idle, right? Got it. So, my understanding on both Snyk SCA and Aqua, with regard to the vulnerabilities that they're reporting specifically with containers, because both Aqua and Snyk have tools that are beyond SAST, and so I just wanted to clarify.
15:10
One of the things I should have also mentioned is, when we talk about comparisons in Dojo, currently all the comparisons are SAST-to-SAST or DAST-to-DAST, for accuracy purposes. We aren't currently crossing SAST to DAST, although it's something I'm thinking about, to be honest with you. Am I right in the way of thinking? I mean, do I make sense? Because when I think about that, like, you know, making sure that
15:38
we are focusing on the issues that are in your execution. Provided they're the same type, I would say yes. Specifically with static scanners, they tend to be more verbose, I would say, than dynamic overall. There's a lot more overlap when you use multiple static tools than dynamic.
16:07
And so for those two specifically, generally, the SAST recommendations here will be true. The other areas that those will tie into, specifically with SCA, are component and component version; there's one more element at the component level. But the thing to look at is how these different tools align from a data perspective to see where they are the same.
16:35
So those comparisons can be made. But I think this slide is a pretty good high level for looking at the fields that will work best for SAST and DAST, and everything you're saying I think aligns with the SAST side. Does that answer your question? It does, thank you. Of course. Other questions before we continue on with implementation and what changing these things looks like?
17:10
I'll take that as a no. So, we talked a little bit about how these algorithms can be adjusted in open source, the changes that are necessary, and then, moving on to validation, how these things are validated. When we look at the Pro side, we've built this out in an automated way in Pro. Looking at the Pro interface, at the very bottom there is this Pro settings tab.
17:39
And in that Pro settings tab is a deduplication tuner. Pro will also automatically recognize the tools that you're using, so you don't have to stare at a big wall of text for all the tools that we support. And likewise with regard to validation and rerunning all the hash codes: those are all aligned, and that's all handled automatically for you in Pro.
18:09
And then the last place that I want to leave it is to sort of just open it up to any and all topics. DefectDojo is about getting people's security programs to the best state possible, so that people have to do as little manual work as possible, and hopefully making your goals in security actually achievable: for the purposes of protecting your org, making sure that all of your products and your endpoints are actually tested,
18:38
but also for every security engineer's sanity. So yeah, with that, I'll open it up to any questions that anyone has.
19:09
I know you said it in the beginning, but there's still some question of when you use each: what are the use cases for deduplication versus reimport, just to really clarify that. Yeah, I think flipping back, give me just one second. The crux of the decision-making process is just what sort of data you have going in
19:38
about how you scan. So if you are scanning the same product repeatedly, if you have everything defined, then reimport is going to work well for that use case because of the comparison. But if you have data and scanning that looks like this, where you're walking into potentially a greenfield security program, where you're just being told to scan things, you may not know how
20:08
to do all-encompassing scans, and so option B is just to throw data at DefectDojo and let it figure it out. So if you do have CI/CD scanning and options available to you, reimport works really, really well with CI/CD, because you can make comparisons all the way up to a code commit in terms of what's changing from a security perspective. If you're a program that...
20:36
doesn't have CI/CD security automation, and you are essentially just charged with scanning your organization, deduplication is likely a better fit for you, because you don't have to worry about scope or about what changes. Deduplication's goal is just to get you to an ultra clean list of findings. So, similar in nature, but sort of two different challenges
21:03
in two different use cases for security teams, depending on your security team's mandate.
21:10
Okay, great, thank you. And then I have another one. Can you use deduplication across different tools?
21:23
You can, you can. It just requires tuning, analysis, and several steps. The algorithm has to be changed, the validation has to be done, and the functional change of running through all your old findings and recalculating the hash codes has to be completed. Those are the three steps that, essentially, we've put guardrails around in Pro, where they can likewise happen automatically.
21:50
OK, I do have another one too. I'm going to read this one; it came in the chat. I have a question on tool config versus scoped access in the source tool. Example: in DefectDojo, we can configure a single SonarQube API configuration, but this would allow teams (groups in DefectDojo) to essentially import scans from other projects into their product. Are there any plans for more fine-grained tool configs?
22:21
Oh, interesting. Um, let me think about this one for a second. Okay, I think he doesn't have a mic. So yeah, I see it in the chat: configure a single... essentially import scans from other projects in their... Oh, interesting.
22:46
So yeah, in that situation, who does the importing? Is it the security team or is it individual developers?
23:00
The teams, I think.
23:03
You want to have the teams configure their own products. I did not write the access control, to be honest with you; I wrote some of these other functions originally, so I know the ins and outs of them better. I don't know our RBAC system well enough to confidently answer. I believe there's a different scoping level at product-type read/write versus import read/write. The original thinking,
23:33
as I recall, but again, not totally my area of specialization, was that we had cases where we wanted people to be able to ship scans, but not necessarily change the data, to ensure integrity, if you will. Not that anyone would ever just close findings or make things look better, but potentially. And so I believe there's a difference in terms of whether you allow write on product or product type
24:02
versus the import-scan permission that may solve what you're looking for, but I can't say with absolute certainty. And, Rem, I can give your question to the correct person and send you an email back with an answer.
24:25
Okay, great.
24:29
Um, okay. So, can you see this in the chat from Donald?
24:36
I think that the future vision for deduplication would involve not only AppSec scanner overlap, but also, one day, container or infrastructure vuln overlap, if Dojo is the eventual vuln orchestration solution, and some findings come from runtime environments and code execution, that's containers. Um, Don, can you elaborate maybe a little more? So are we talking about, or are you talking about crossing the
25:05
dynamic-to-static barrier, or just other? Well, because dynamic is the piece where not only are you building the project, but it's, you know, where it's built, whether it's an OpenShift container or it's running in a VM, whatever, right? Some scanners' findings are going to overlap when you hit it with your Qualys infrastructure scanner and whatever you're using for DAST, right? Just because
25:31
you're going to get not just code analysis, right? Like, you left something open-ended, you know, you're not doing input validation on this particular piece of code. But as it builds the project, you're going to see that, I don't know, this Java library that's built into the OS is just as vulnerable as if you're using it in the code, right? And some of that's going to overlap, possibly. That's what I mean by eventual
26:01
deduplication maturity: because you're scanning not only just the code, but the entire runtime and build environment. And because in big organizations you typically use different scanners for different aspects of the build process, you know, your infrastructure scanners and your container scanners are different than your AppSec scanners, right? But the findings might still be similar in nature, right? So making it more application-centric, from
26:30
cradle to grave, so to speak, and having orchestration of all of that eventually, is where I would see the future of deduplication, right? Because that's where your value is. You don't want a dev to fix it six times because four different scanners, no matter which aspect of vuln management you're looking at it from, are, you know, reporting it, right? Like, you don't want one thing going, oh yeah, okay, well, that was for
26:57
my RHEL 8, and now, yes, you're right, I'm using the same Java library in my code to build the project, right? I guess that was my point: hopefully, you know, over time we get not only just out of the AppSec space, which is true and I see the value here, but, you know, the whole convergence in terms of the dynamic piece taking into account all the environment pieces as well, right? So that's kind of where I was going with that comment.
27:26
We can do that, I think, today for the most part with tuning, as long as the tool is dynamic in nature. So Matt and I actually talked about this. If you don't know Matt, he's my co-founder. And so, when we say dynamic, we truly mean dynamic, meaning anything that is being run and being tested. So that includes overlap with infrastructure dynamic tools, whereas DAST is,
27:51
I think, from a terminology perspective, purely application focused, and there's overlap between DAST tools and dynamic infrastructure tools. You see that with regard to encryption, which is one that comes up so frequently: TLS, key management, et cetera. It's one key area of overlap between infrastructure and application. And both of those findings today in DefectDojo are considered dynamic in nature.
28:21
And so that tuning can occur today. We're looking at the future roadmap; there are a lot of other things that we want to do, and potentially where and how it would start to make sense to cross those barriers, going from, like you mentioned, a component library that's creating dynamic findings and tying those all together. But we're not confident in bridging SAST and DAST today, because that alignment
28:51
is difficult but possible, especially when you start to look at it in the context of going from, you know, like five tools when you're working with someone, to 160, from a configuration perspective. Does that answer your question, Don? Or... Yeah, no, it makes sense. You know, I was just kind of saying that hopefully that's the vision, but it sounds like you guys are at least looking at it, so I appreciate it. I think, specifically on the dynamic side, we could get folks there today,
29:20
or I'd expect that we can when engaging with someone. When we start to talk about crossing the component barrier, that's where I think there's an opportunity for improvement.
29:37
Okay, we have another question. Oh, I think Matt answered in the chat, but let's just say it out loud: is there a way to set multiple dedupe algorithms for a specific scan type? Matt, do you wanna answer that out loud for the group? I can also elaborate on the specific use case I was thinking about. So there's a Nosey Parker importer in DefectDojo, which is a secret scanning tool, and I found out that
30:07
DefectDojo has the option to do either a full history scan or a current branch scan. So if you're deduplicating on the line number, the secret, and the file path, that would change based on whether you're doing a full history scan or a current branch scan. Don't know if that made sense.
30:35
It does to me, this is Matt. Okay. Oh, hi Matt. Hello. It does to me. That would kind of depend on how you broke up ingesting those findings into DefectDojo, because if you did them in different engagements, you can dedupe at the engagement level, but you would still have the issue of what fields to pick; you only get one set of fields currently, right?
31:03
But you could separate the scopes of those by doing them in two different engagements, one for the master or main branch and one for your feature branch. Yeah, whatever it is. Yeah, full, there you go. Thank you. Yes. So you can separate how you ingest those and then use dedupe-within-engagement to handle that variance.
31:21
Okay. I see. Thank you. Yeah, sure. Yeah. Thank you for explaining the use case, by the way, because I was like, oh, this is really interesting. But now I understand. Right, because if you're doing a full history scan, you don't really care about the line number, right? Because that can change throughout the project.
31:43
Totally, totally understand that. Yeah, that makes perfect sense. Okay. Wait, so just to clarify, you're saying create separate engagements and then use the same dedupe algorithm? You could do that. The other thing that just occurred to me too is, since the Git history is going to be, I mean, static isn't the right term, but
32:08
repeat findings in the Git history, because they've already happened, or are gonna re-happen if you continue to scan: you could use reimport for the full Git history and use import with dedupe for the branches. That would get you the diffs you want from reimporting the full Git history, and then dedupe for just the branch using the dedupe algorithm. Okay, okay.
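Matt's split could be sketched like this: the full Git history goes to one engagement via the reimport endpoint, while branch scans go to another via plain import, with dedupe scoped to the engagement. The endpoint and field names follow the v2 API as I recall it and should be verified against your server; the scan-type label is a hypothetical placeholder.

```python
# Hedged sketch of the full-history vs branch split discussed above.
# /api/v2/import-scan/ and /api/v2/reimport-scan/ are the v2 API
# endpoints as I recall them; verify on your DefectDojo instance.

def scan_request(product, engagement, branch_scan):
    """Pick endpoint + form fields for one secret-scan upload."""
    endpoint = (
        "/api/v2/import-scan/" if branch_scan  # dedupe handles variance
        else "/api/v2/reimport-scan/"          # diffs the full history
    )
    data = {
        "product_name": product,
        "engagement_name": engagement,
        "scan_type": "Nosey Parker Scan",  # hypothetical scan-type label
    }
    return endpoint, data

history = scan_request("my-app", "secrets-full-history", branch_scan=False)
branch = scan_request("my-app", "secrets-branches", branch_scan=True)
```

Keeping the two scopes in separate engagements is what lets one product use two effectively different matching behaviors without needing two field sets.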
32:38
Thank you. Yeah, sure. No, I love these questions, they're great. Secret scanning is particularly interesting as well, because those tools don't supply a unique ID; primarily, unique IDs come from dynamic tools. And so for that side of the house, I think you only have one option. You do have the algorithm option of hash code or unique ID from tool, but unfortunately, in secret scanning,
33:07
I'm not aware of any secret scanners that provide a unique ID you could leverage with an algorithm that looks at both. So unfortunately, what Matt said is spot on and perfectly correct. And that's why I love Matt. Thanks, Matt.
33:28
Okay, anybody else? I'm going to jump in. Greg and Matt, that's like gold for your question. Hello. Is a reimport scan possible even if we didn't import something before? We kind of integrated DefectDojo into a pipeline
33:57
that's run on every repository, even new ones. And I was wondering whether we can reimport something that has never been imported. Yes. Because we have kind of a step that creates the product if it does not exist, and my point was more like, if there was no scan before, does it break anything or something like that? It does not. The only important thing to note is that it won't do the comparison on the first scan.
34:25
We assume that if you're submitting findings, those findings are present, but afterwards it will do all the same reimport things. The other thing you can do with reimport, to lower the lift even more in terms of setup, because what I sort of hear in your question is that the less you do to configure, the more value you get, right? Like creating products, creating engagements,
34:53
doing import once and then switching to reimport. Reimport also has a setting in the API called auto-create context, and when that is enabled, you don't even have to specify a product or an engagement. DefectDojo will auto-populate those things based on what it sees in the scan. So it's just another good way, if you have a ton in CI/CD, if you're not tracking things with regard to,
35:23
like, a CMDB and tying all those things together: you can just turn auto-create context on, essentially, and just use reimport and just throw data. And that works really, really well for CI/CD. And so you're going to... So I can take it very step by step. That's a nice idea. Okay.
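As a sketch, a reimport call with auto-create context might be built like this. The form-field names (`auto_create_context`, `product_name`, `engagement_name`, `scan_type`) follow the v2 `reimport-scan` API as I recall it; confirm them against your instance's API documentation.

```python
# Hedged sketch: form fields for a reimport with auto-create context,
# so DefectDojo creates the product/engagement if they don't exist.
# Field names follow my recollection of the v2 API; verify locally.

def reimport_payload(product, engagement, scan_type):
    return {
        "auto_create_context": "true",  # create missing product/engagement
        "product_name": product,
        "engagement_name": engagement,
        "scan_type": scan_type,
        "active": "true",
        "verified": "false",
    }

payload = reimport_payload("my-app", "ci-cd", "Trivy Scan")
# POST this as multipart/form-data, with the report file attached,
# to /api/v2/reimport-scan/ using any HTTP client.
```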
35:52
Okay, we have another one. Is there an easy way to get more details on errors from a scan, especially with Nexus IQ CycloneDX in XML format via the import-scan API? We get a simple "internal server error" response, but no details. Alerts give no results as well; we have to dive into server logs to get more data.
36:15
Interesting. We are evaluating, as part of v3, which errors make more sense to highlight versus not. There is always the opportunity to enable debugging settings, but then you're getting, you know, entire stack traces. That's not intended for production use, because it gives away more data than is safe with debug enabled. I know we're also looking at the alerts.
36:45
Matt, do you have some thoughts on that? You may know that one better than I do. Yeah, we are looking at doing some updates to the alerts to make them a little more verbose in circumstances like this. Historically, Dojo was somewhat of a power tool, and it kind of presumed you could look at the logs. At least when we wrote it at Rackspace for ourselves, that was certainly what we were thinking, because we were running the thing we were writing. And that's an area where we definitely could use some improvement. There's some...
37:14
decent logging in the alerts, but that is one area where we're looking to improve not only from a verbosity perspective but also, how do you want to say it, a configurability perspective, to where you can use those alerts to do more things. Because right now it's kind of just a very static thing. So we're looking at potentially adding the ability to have actions follow on those alerts, particularly since there's a Pro feature we're working on called
37:44
rules engine, where you could potentially write rules that occur after an event. And so that's one of the areas we're looking at implementing, and one of the things we wanted to do is be able to have an alert trigger a rule. So "on import of this type of scan, do this thing" would be sort of the natural outgrowth of that. But that's still early days of design; we don't have anything coded yet, so I don't wanna make promises about stuff that doesn't exist.
38:16
Okay, thank you. Anybody else? Last call for questions. Do you hear me? Yes, Danilo. Yes, nice to meet you. Excuse me, I had some miscalculation with the time zone; I'm in Switzerland. My name is Danilo, I'm calling from the Epilepsy.ch application. I'm using your tool now on a large scale. And I
38:47
have a question about how you intended the entities product and engagement to be used. I'm currently scanning my Git repositories and projects with Bitbucket, and I'm using the product as the product, and the repositories I'm importing as engagements. Now, if I'm repeating this on a weekly basis,
39:16
the line diagram, for example, doesn't match my expectation. Would you mind explaining how you're thinking it should be used, the reimport and import API? Generally, I would say repos map best to products. Would you disagree with that, Matt? No, that generally works.
39:46
I have seen some people break up repos into engagements, but that's usually when they're dealing with the more unusual situation of a monorepo, where multiple teams own pieces of a repo. And that adds its own level of complication that hopefully you're not facing. That's mostly just because some tools don't understand monorepos and can't break up their findings based on, say, a file path. That's a great point. Typically, products should tie to owners in general. And so if your
40:15
repo is owned by a single group or a single individual, if a single party is responsible for fixing it, then generally speaking, that should be tied to the product level in DefectDojo. Okay, and then the engagement, is it a scan type, for example a Trivy or a Snyk, or do I open up a new engagement for each scan, like with a timestamp as the name? If you're doing CI/CD,
40:45
I generally think one engagement is best. Engagements are just meant to be holders for scans. So for people that are doing CI/CD, it often doesn't make sense to create a new engagement; you just want to know what is found in the context of CI/CD.
41:06
But we also didn't want to exclude the possibility that people need to do manual testing, because I think that's the reality for most orgs that are doing security. Very few are at 100% CI/CD adoption and usage. And so in those cases, I do think it makes sense to split out a separate engagement, so you have sort of a bucket in time, if you will, of everything that was tested. That also helps you to validate that
41:36
your team did all the testing that they were supposed to, because you'll have a test type for every single tool in that case. And likewise, if you're doing any sort of auditing, PCI, FedRAMP, etc., you can drop that in a single engagement for audit and reporting purposes.
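The layout described above can be sketched in a few lines. This is an illustrative data structure only, not a DefectDojo API: one product per repo, one long-lived "CI/CD" engagement for automated scans, plus a separate engagement per audit (the repo and audit names are hypothetical).

```python
def plan_hierarchy(repos, audits=()):
    """Map each repo to a DefectDojo product with its engagements.

    Every repo becomes one product (owned by a single party, as discussed),
    holding a single "CI/CD" engagement for automated scans and one
    engagement per audit for point-in-time reporting.
    """
    layout = {}
    for repo in repos:
        layout[repo] = {
            "product": repo,  # repo maps one-to-one to a product
            "engagements": ["CI/CD"] + [f"Audit: {a}" for a in audits],
        }
    return layout

layout = plan_hierarchy(["billing-service", "web-frontend"], audits=["PCI"])
```

In this layout, weekly CI/CD scans keep landing in the same engagement, while each audit gets its own bucket in time for reporting.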
41:57
Does that answer your question? Yes, that answered my question. Just one last one: on the start screen we're having on the community edition, there's this severity cycle and the line diagram with severity by month. Now, if I'm importing, let's say, my 20 Git repositories, which have a
42:26
My expectation was to see, okay, they fixed three, so the line will go down a bit. But the scan last weekend only showed, well, we found three new findings. So is the reimport API function
42:53
suitable, so that this line diagram has the data it expects for visualizing everything? Or is it just something I misconfigured, for example? It could be two things. So I know that we cache a number of views for performance reasons; I'd have to see the exact chart to say with certainty. I know we also just did a sweep of our charts last
43:23
week, Matt, if I recall correctly. I'm not sure, I'd have to see which one specifically, but all the ones on the dashboard, to my knowledge, are being cached. Is that your recollection as well, Matt? Or am I off there? No, no, you're correct. The one other thing to take into account for reimport is the way that it handles the second occurrence of a finding. Because if it is closing the second occurrence of a finding because it already exists.
43:49
It's not found this week; it's found in, say, last week. So you can have what looks like wrong data but is actually correct, because you may have re-found a finding that was found before. So would there be a possibility that I configured the dashboard like that? That it says, okay, the line stays at 20 findings, even if it's
44:15
been at 20 findings for three months? Yeah, let me look at something right now. There's a metric that I think is better for you at the product level: that's day-to-day by severity. But that's at the product level, not at the dashboard level. And that'll tell you the day-by-day changes of the various findings
44:44
of a repo over time, and that would likely give you more accurate information. I haven't tested it against reimport, to be totally honest, but knowing how reimport works, I think that's more of what you're looking for. Yes, I've chosen reimport exactly because I needed to use the deduplication feature, but I'll try to implement it with one repo as one
45:13
product, and then I'll find out how it behaves. Thank you. Yeah, sure.
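For readers following along, a weekly reimport like the one discussed here would go through DefectDojo's `/api/v2/reimport-scan/` endpoint. The sketch below only assembles the form fields; field names are based on recent DefectDojo versions and the host, token, and file names are hypothetical, so check the API docs for your release before relying on it.

```python
def build_reimport_payload(product_name, engagement_name, scan_type):
    """Assemble form fields for DefectDojo's /api/v2/reimport-scan/ endpoint.

    Parameters have changed across releases; verify against your
    instance's API documentation.
    """
    return {
        "product_name": product_name,
        "engagement_name": engagement_name,
        "scan_type": scan_type,  # must match a parser name, e.g. "Trivy Scan"
        "active": True,
        "verified": False,
        # Close findings absent from this upload, so fixed issues trend the
        # charts downward instead of only newly found issues being counted.
        "close_old_findings": True,
    }

payload = build_reimport_payload("my-repo", "CI/CD", "Trivy Scan")

# The actual upload would look roughly like this (hypothetical host/token):
# import requests
# requests.post(
#     "https://dojo.example.com/api/v2/reimport-scan/",
#     headers={"Authorization": "Token <your-api-key>"},
#     data=payload,
#     files={"file": open("trivy-report.json", "rb")},
# )
```

Reimporting into the same test each week is what lets deduplication and `close_old_findings` keep the trend lines honest, as described above.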
45:21
I just want to make sure, Tandy, the information that Matt put in, is that helpful for you, or do you want more? Yeah, that's perfect. Thank you. Okay, great. Yep. Any more questions? These are great questions. Really appreciate the interaction.
45:42
No? Okay, well then I would say thank you for coming. If you got an email and signed up for this, then you probably are already receiving our emails, but I would encourage you to sign up for our monthly newsletter, which you can find on the community page on our website. That'll give you a list of all the month's activities, let you sign up for them, and tell you where we'll be in real life.
46:09
And two weeks from today, we will have Matt presenting a webinar on taking your DevSecOps to 11. That's really about all the best practices for building your program and how DefectDojo can help you, so I encourage you to register for that as well. And Greg, do you have any final thoughts? No, no, thank you all so much for taking the time to come and speak with Matt and me today and to check out our presentation.
46:39
And we hope to see you at Matt's next webinar, which is focused more at the program level and on getting end to end, so actually building out that pipeline we showed at the beginning.
46:52
All right, well thank you everybody and have a great rest of your day.