Aug 27, 2025

August Office Hours: Staying Compliant with the Cyber Resilience Act

Transcript

00:07
Hi everyone, happy Wednesday. Thanks for joining our office hours. So before hopping into the CRA requirements, I just wanted to quickly show you our agenda for today. So we'll talk about this new regulation in Europe and what it means and the implications and what we believe is going to happen as these things come into effect. And then we'll talk a little bit about a new feature in Pro and then we'll also share some new features and updates that are coming to open source.

00:36
And so with that said, we'll get started. So if you haven't heard about the CRA, it is this new regulation in Europe. And as we all know, everyone tends to be pretty sensational in security. And we wanted to do a webinar on this because I think it is one of the few things in security that is maybe not sensational.

01:01
The reason I think that this is going to be a big deal is because of the fines and implications associated with the CRA. Essentially, GDPR was so successful in terms of getting companies to change behavior that Europe has replicated the framework, the fines, et cetera, in the CRA. And so when I look at things that matter in security,

01:29
I think this is actually one of them compared to sort of, you know, the standard noise that we hear in our industry. And so, these regulations don't come into effect until 2027. And so with everyone being so busy in our industry, maybe this is a little early. I think for anyone that does have extra cycles, it's always better to get these things out of the way earlier rather than

01:59
being in a mass panic to get it done. Where I was when GDPR came into effect, we were, in full transparency, a little late to the party. And so it was like a mad dash to the finish line. It was a very painful couple of months right before GDPR came into effect. And so if you have the bandwidth to get ahead of this, I think this is probably one of the number one things that's going to cause pain

02:26
as the regulations come closer to being in effect. So breaking down the requirements, I think I'm getting a little ahead of myself, but some of the things that I've heard from our European customers and communities are kind of interesting, because some of the expectations maybe don't align with the regulation. But first, just looking at the high-level requirements, there is expected to be some element of security by design.

02:56
And I think the other thing that's really important about the CRA requirements is companies essentially need the mechanisms and means to prove that they're doing these things. So, you know, security by design sounds kind of obvious, but I think the question is, well, how do you prove security by design? Like what sort of paperwork, diagrams, et cetera, are necessary to meet these regulations' requirements? I think this is one of

03:26
the more nebulous ones and seems like, yes, obvious, of course, but how do you actually prove these things? And then two, vulnerability management. I mean, certainly more tangible than security by design. When we look at the vulnerability management requirements that the CRA is looking for, there are three key elements. There's the regular testing element. So some sort

03:53
of logging collections, reporting, et cetera, snapshots in time that prove that you are actually doing security testing and vulnerability management. And then another part of that element is providing security patches and updates in some sort of manner that is also free of charge. And then finally establishing a software bill of materials, which, you know, thankfully is pretty clear-cut and dried here, standard sort of

04:23
elements for compliance and regulation and whatnot. So this one doesn't concern me as much, but there's work that has to be done here. And then next is just doing a complete assessment. So I don't even believe that there's timelines around this one currently, but the net is you have to demonstrate in some fashion that you are doing a complete and regular security assessment

04:49
as part of the software and product development lifecycle. And then,

04:58
The other thing is you have to set end-of-life periods. And so you have to be transparent on when those are, when they're coming, so that people have the opportunity to update and plan their transitions around software as it relates specifically to security. And then,

05:23
Finally, again, just talking about the timelines that these things come into effect. Like there is some time still, but for large corporations, 2027 isn't that far away. I think we'll blink and it'll be here in some cases.
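To make the SBOM requirement concrete: a minimal software bill of materials in the CycloneDX JSON format can be sketched as below. The component name and version are made up purely for illustration; a real SBOM would enumerate every dependency.

```python
import json

# A minimal CycloneDX-style SBOM. The component below is a
# hypothetical dependency, invented for illustration only.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "example-http-client",  # hypothetical package
            "version": "2.4.1",
            "purl": "pkg:pypi/example-http-client@2.4.1",
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

In practice you would generate this with an SBOM tool rather than by hand, but the artifact you hand to auditors is roughly this shape.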

05:39
And so when we talk about how we're reacting to this as a company, in addition to preparing our own compliance with these elements of the CRA, we have added a feature to the platform. And so that is KEV support. I think the thing that is interesting about KEV, in full transparency, is that it's not my favorite metric for exploitability.

06:06
But in talking with members of the European community, customers, community members, prospects, et cetera, people really, really want KEV specifically. And so our team looked at this regulation, like we read the actual letter of the specific requirements around documenting exploitability and whatnot. People's belief seems to be that KEV is the requirement specifically.

06:35
And we can't find that in the text, to be transparent. But it's one of those situations where we're not going to pretend to be smarter than our users. A lot of people have asked for this. So who am I to question that? And so we did end up deciding to ship this. And so if you're not familiar with KEV, KEV is the Known Exploited

06:59
Vulnerabilities catalog, and it's provided by an entity in the U.S. government, specifically CISA. And so the goal of KEV is to just report on CVEs and when those CVEs are known to be exploited. It's very similar to EPSS, but it doesn't have the scoring. It doesn't provide the context of how exploitable something is or isn't. It just is a yes or no value. And so when we talk about

07:29
calculating priority and risk and how risk and priority should be looked at in the context of, like, real actual threats and timelines to fix them, KEV doesn't provide that level of information. And the other thing is that there's so much going on in the US around budget cuts, even around maintaining CVE as a standard and a database. So,

07:58
I'm just a little hesitant in this space. It's not my favorite metric for exploitability, but again, you know, enough people have asked for it that we did it. And so, just a little more on why I don't love KEV: I think that there are better things out there, but we did it. And so in Dojo, you'll now see this new field called known exploitable. And then there's also a KEV date that is now displayed.
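For reference, CISA publishes the KEV catalog as a public JSON feed, and checking whether a CVE appears in it is just a membership test. Here's a rough sketch against hard-coded records in the feed's shape; the entries are illustrative, not real catalog data:

```python
from datetime import date

# Records shaped like CISA's KEV JSON feed ("vulnerabilities" list
# with cveID and dateAdded). These entries are made up for the sketch.
kev_feed = {
    "vulnerabilities": [
        {"cveID": "CVE-2024-0001", "dateAdded": "2024-03-15"},
        {"cveID": "CVE-2024-0002", "dateAdded": "2024-06-01"},
    ]
}

def kev_lookup(cve_id: str):
    """Return (known_exploitable, kev_date) for a CVE: a plain yes/no
    plus the date KEV recorded the CVE, with no exploitability score."""
    for entry in kev_feed["vulnerabilities"]:
        if entry["cveID"] == cve_id:
            return True, date.fromisoformat(entry["dateAdded"])
    return False, None

print(kev_lookup("CVE-2024-0001"))  # (True, datetime.date(2024, 3, 15))
print(kev_lookup("CVE-2024-9999"))  # (False, None)
```

Note the binary answer: unlike EPSS, there is no probability attached, which is exactly the limitation discussed here.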

08:27
And so the goal of this is to both tell people and users what KEV thinks about a given finding's CVE and also share the date that KEV believes it became exploited, so that that can be factored into risk and prioritization scoring. And then we also have some open source updates, not related to KEV, not related to the CRA, but just talking about

08:57
some new things that are coming in open source that are potentially really exciting. With regard to the open source side of the house, there are, I think, three new features in here that I wanted to highlight for the community. So first is new finding groups. This was a contribution made by someone in the community that we worked with to get it accepted. And so when it comes to

09:25
distilling findings, DefectDojo has essentially three strategies. There's an auto-triage strategy through the reimport function, and that's designed to compare scans over time and determine what's changed. We have same-tool deduplication, which aims to consolidate findings that originate from the same tool. And then we also have cross-tool deduplication, which aims to identify

09:54
findings that are the same from different scanners that have overlap. And then we also have this group-by notion. So it's a way of combining findings for ticketing or groups, just to provide another level of distillation. And so the grouping is what this user, this member of our community, was enhancing. And so how this works is you now get a global view for looking at

10:23
grouped findings en masse. And so I think this is like a hyper-distillation use case, if you will, a way to look at vulnerabilities by category and how you're remediating against those categories rather than talking about individual threats. With the number of findings people now have to deal with, and with how tool sprawl has continued to increase, I think

10:51
we're definitely sympathetic to looking for even additional ways to distill findings, group findings, prioritize, et cetera. And so this is essentially what this new feature looks like. There's a new tab in DefectDojo that allows you to, in this case, the example is grouping by some vulnerability ID, and then the total counts, SLAs

11:18
associated. And I believe there's a view somewhere that lets you know what percentage of the findings have been mitigated within your given group and strategy, and I believe there are three or four strategies that are implemented. And so we were excited to, you know, get this one across the finish line, working with that community member to give everyone in the Dojo community

11:45
a new and enhanced way to group findings. And so the other big change that's going on is you'll see this, I believe, in the next two releases, maybe let's say three releases to be on the safe side, to always try and be transparent on dates. But we're also making some changes to the core DefectDojo model. So product types will be shifted to something called organizations.
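As a rough sketch of the percent-mitigated metric mentioned a moment ago for finding groups: grouping findings by vulnerability ID and computing the mitigated share is a small aggregation. The field names and data here are assumptions for illustration, not DefectDojo's actual schema:

```python
from collections import defaultdict

# Findings flattened to a vulnerability ID plus a mitigated flag.
# Field names and values are hypothetical, not DefectDojo's schema.
findings = [
    {"vulnerability_id": "CVE-2023-1111", "is_mitigated": True},
    {"vulnerability_id": "CVE-2023-1111", "is_mitigated": False},
    {"vulnerability_id": "CVE-2023-2222", "is_mitigated": True},
    {"vulnerability_id": "CVE-2023-2222", "is_mitigated": True},
]

def percent_mitigated_by_group(findings):
    """Group findings by vulnerability ID and report what share of
    each group has been mitigated, as a percentage."""
    totals = defaultdict(lambda: [0, 0])  # group -> [mitigated, total]
    for f in findings:
        counts = totals[f["vulnerability_id"]]
        counts[1] += 1
        if f["is_mitigated"]:
            counts[0] += 1
    return {g: 100.0 * m / t for g, (m, t) in totals.items()}

print(percent_mitigated_by_group(findings))
# {'CVE-2023-1111': 50.0, 'CVE-2023-2222': 100.0}
```

This is the "remediation by category" view in miniature: one number per group instead of one row per finding.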

12:15
And products are being shifted to something called assets. And finally, endpoints are being shifted to something called locations. And so this was kind of the original open source data model here with regard to the prior product hierarchy, and this is how things will look once those changes are made. We are also introducing toggles. So

12:42
if you want to stay on the classic nomenclature, you certainly can. And we also won't be breaking the APIs associated with the older model. And the reason that we're making these changes to the data model is first just to better align with current industry standards. So

13:06
many people, for repo management, have centered on GitHub or GitLab. And we wanted to more closely align with their naming conventions to let people import data more easily, to make the model kind of easier to grok for those that are maybe newer to security, but also to provide more flexibility and granularity in terms of how we report on things. And

13:31
the other element of kind of this grand vision with redoing certain parts of the model is the ability to create additional nested hierarchies in Pro. So I think companies are much more complex today than they were, say, five years ago or even two years ago. And so with regard to these kinds of collections here, these assets will be able to be grouped.

13:58
And in Pro, we expect that we'll be able to create hierarchies around these to create really complex relationships and dependencies, et cetera, with how orgs actually deploy applications, ship code, et cetera.
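A minimal sketch of the renamed hierarchy, assuming the announced mapping (product type to organization, product to asset, endpoint to location). The classes themselves are illustrative, not DefectDojo code:

```python
from dataclasses import dataclass, field

# Illustrative stand-ins for the renamed model: an organization
# contains assets, and each asset exposes locations. The optional
# children list hints at the nested-asset hierarchies planned for Pro.
@dataclass
class Location:
    url: str

@dataclass
class Asset:
    name: str
    locations: list = field(default_factory=list)
    children: list = field(default_factory=list)  # nested assets

@dataclass
class Organization:
    name: str
    assets: list = field(default_factory=list)

org = Organization("acme")  # hypothetical org name
api = Asset("payments-api", locations=[Location("https://pay.example.com")])
org.assets.append(api)
```

The point of the nesting is that an asset can own sub-assets, which the old flat product-type/product/endpoint model couldn't express.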

14:17
And then finally, so we've been talking about it for so long. It's taken such a long time with regard to this next one, but we're fast approaching a date for people actually seeing the V3 API. And so I think that will also be within the next three releases, maybe two. It may also pop up tomorrow, to be honest with you. It's very, very close. I'm just trying to buy our team

14:45
a little bit of time rather than putting the pressure there. Our V2 API is expected to remain for the foreseeable future. That's because we are extremely sensitive to breaking users or causing a bad experience. And so if you've built around V2 as literally tens of thousands of people have, we don't want to mess up your stuff. And so

15:15
V2 will be here. We have no date, essentially, on end of life for V2. I don't know if we'll ever get to end-of-life V2. It might be five years. It might be a decade. I don't know. We have no plans of discontinuing V2, but at the same time, the V3 alpha is coming. And when you see those changes in open source, they aren't final. And so

15:45
while we want people to be able to see what we're doing early, at the same time, those APIs will be revised essentially. And so it is possible that if you start using v3 immediately when it comes out, that the final version will be different. It may have breaking changes, et cetera, before it is finalized. But just to kind of restate again, the goal of v3 is to address key pieces of feedback from the community

16:14
on where the platform could be improved that were either too difficult to do without funding or would cause breaking changes without the financial support we now have from investors. And so just flipping back a little bit on the data model. This is something we had wanted to do for a while and talked about at, you know, conferences, meetups, et cetera, with the community.

16:41
But the level of effort to get this done, and to make sure that things aren't breaking, is fairly monolithic. And so I know our engineering team is at the point where this is passing successfully without breaking unit tests, which is very exciting. But this has been a really big effort and investment for us from the open source standpoint to make sure that open source continues to be

17:08
a great experience while also unlocking, I think, kind of a new level of potential for the platform and the community. And so, oh, that's my last slide. So with that, I'm happy to take any questions that anyone has about the CRA, the commercial edition, or the changes in open source. I truly appreciate all of you attending. Thank you very much.

17:35
Thank you for your support and time today and yeah, I'm happy to answer any questions.