Mar 27, 2025

The Future of Secure Coding: Insights from Industry Experts Jim Manico and Greg Anderson

Transcript

00:08
Hi everyone and welcome to today's Fireside Chat. My name is Greg Anderson. I'm the founder and CEO of DefectDojo and today I am joined by an industry legend, Jim Manico. I consider Jim to be the security trainer. His training is truly second to none. It's a transcendent experience if you haven't had the opportunity to see it personally, and I'm sure our viewers will get a glimpse of that today.

00:37
Jim is founder and CEO of Manicode Security. With that, Jim, would you like to share a little bit on your background? Greg, that was incredibly kind of you to introduce me like that. I'm very honored to be here with you and very honored to be part of what you're doing at DefectDojo. Greg, I've been writing code since I was 14 years old, so I've been writing software professionally for almost 30 years, and about

01:05
20 years ago we began to shift into the modern security era. And I realized that there weren't a lot of books and literature and information on how to do security engineering. So as I began to pick up techniques, I saved those techniques in my favorite way, in a PowerPoint presentation. And now I have about 10,000 slides on secure coding and I'm a professional instructor

01:33
teaching developers how to write secure code. One more bit, right? I'm a Sicilian American. My great grandfather, my grandfather, and my father are all college professors. So I'm from a long lineage of professors in my family, and I'm really honored that I have a chance to do what I do. It is so much fun and a lot of hard work. Well, Jim, it's a pleasure to have you here today. The first thing I wanted to start out by asking you, as I'm sure you're aware,

02:01
Burnout is incredibly high in security. I've seen some reports that even suggest it is the highest in tech. As someone who has been at the top in your area for so long, how are you able to do it? How are you able to maintain your passion in a very high pressure environment? I just spent a week in Japan for work, and the secret is I walked about 13,000 steps a day

02:29
while I'm running. And so I just keep my body moving as much as possible. In the past, I was very obese. I got sick from it and I learned how to eat better and lose weight. It's always a struggle. So I just do my best to eat well and I drink a lot of water. I don't drink any alcohol; it gives me a short-term gain and a long-term decrease. I drink water and coffee and exercise. And I do a lot of yoga and try to make sure that

02:58
I support my job, my brain, my family, and my body all with equal measure so I can continue to grind away. And Greg, you know it's hard. Running a startup and running a company, there's no end to the amount of work you need to do. These are challenges I think everybody faces, but the more you work and the less you take care of yourself, the more burnout is going to affect you. And been there, done that too many times.

03:28
Greg, I have a question for you. I know you introduced me and it's all about me. That's great. I love me, but I want to hear about you. Would you introduce yourself and tell us all who you are and where you came from? I'd love to hear more about your background and what you're up to today. Oh yeah, of course. I guess that's why Jim's the professional, right? Things I should have done in the beginning. But yeah, a little bit about myself and

03:56
how we got to where we are today: security was something that I was always passionate about when there wasn't exactly a track for security or an educational path. And so it was something that I just started dabbling in, if you will. There weren't a lot of good ways to learn, so sometimes you learned in a live environment. But I met Matt Tesauro, who you also know and who is

04:24
an AppSec legend. He and I met when he was hiring at Rackspace, back when Rackspace competed with Google, Amazon, and sort of the other titans of cloud, when that was being fought out about 10 years ago. And Matt hired me as an intern, and watching Matt attempt to protect an entire cloud was just absolute insanity. The security team was

04:49
I mean, being worked to death. We were there overnight more than any other team trying to get all of our testing done. And just watching Matt, and the rest of the Rackspace team, go through this was total insanity. And so me being the very lazy person that I am, I was like, we have to write something to make all of this go away. The way that you're doing security today is going to kill us, in all seriousness. And so

05:17
That was how DefectDojo started. It wasn't my very first project, though; I worked for the University of Texas before that in kind of a security research role. I was fortunate to have access to some of the supercomputers like, what was it, Ranger, and then Stampede, which was a joint supercomputer with the CIA. But after moving on from that and working for Matt, that was my first real

05:46
professional programming project and experience, and what we developed took on a life of its own. Matt and Jim Friedman were so happy with the results that we produced in a month from what would become DefectDojo back at Rackspace. It was a project called Test Track then, but they were so happy that we shipped it live and started using it in production at Rackspace after a month. And then

06:15
Rackspace didn't have a ton of interest in the tool, so they decided to open source it. And that's when we kind of transformed it, put the open source edition in OWASP, and developed this massive community. And so it's been my life's work, 10 years in the making. We always knew that security automation would be a thing and we felt that it was inevitable for the industry. But 12 years ago,

06:40
automation in security was a dirty word. It meant you were just scanning and throwing results over the wall. Manual testing was a badge of honor. And so it took the industry a very long time to change, but Matt and I were certain that that change would eventually come. And so today, DefectDojo has 38 million plus downloads, is used across the world, and has 400 plus contributors.

07:07
Our telemetry suggests that there's a minimum of 10K organizations using the platform; it could be much, much higher on the open source side. And so I'm just incredibly fortunate, and it still feels like a dream, to be honest with you. It feels like just yesterday we were at Rackspace, and then we met you and we developed the commercial version, and things just took off from there. It's been a really incredible journey.

07:35
That's fantastic. I'm really glad to hear about your journey. That's amazing, to take these early ideas and turn them into such a huge commercial company. Thanks. It's easy when you live it, you know. As people that really lived and experienced the problem, it's pretty easy to forge what you need and then just listen to the community. Matt and I don't pretend to be smarter than we are. I would say, you know,

08:02
90-ish percent of our roadmap is just directly driven by feedback and 10% is where we think the industry is going to head. And so I'm very grateful to the community. I'm grateful to Matt. I'm grateful to people like yourself. I'm grateful to everyone on the team that's joined us on this journey. It's been really incredible. That's fantastic. But Jim, you know, being the expert on developers, I think

08:29
One thing that's really interesting, we tend to speak in security in terms of things on like a yearly basis. Like rarely do we talk about a decade because unfortunately most security professionals aren't here after a decade. The stat is that they typically burn out in 1.9 months or sorry, 1.9 years, excuse me. So one of the things that I think people feel is that sometimes there is this at odds relationship between

08:59
developers and security. You know, I think developers are pushed to go really, really fast due to agile methodologies. And, you know, the reality is things like QA and security slow developers down. Do you think that other business units do a good job of taking security seriously, or how do you see that relationship today and how can we make it better? So the question is the relationship between developers and security teams today. You want me to answer?

09:29
What I see more and more is DevOps not becoming something special, but becoming the norm, right? When I look at a team and I see that they're not doing DevOps-based development, I think they're radically behind the times, because what DevOps enables is putting security more into the hands of developers. And now security professionals are focusing on building and maintaining these security pipelines or

09:58
doing threat modeling with developers and developers now are engaged with security scanning on a day-to-day basis. So it's putting security problems more into the hands of developers and forces them to like handle these problems before they can merge to trunk and similar. So I think the best interaction with security teams is when security teams enable these DevOps pipelines to put security in the hands of developers.

10:27
I don't see DevOps as adding friction. DevOps and products like DefectDojo radically reduce the friction that we saw traditionally between developers and security teams. The only time I see friction is when people are not doing security scanning in a DevOps fashion, or are going back to older techniques that require a lot of human engagement in the security process. And the more we can get rid of that, the happier developers are.

10:58
Like one of my favorite things is to say, you know, once I have a good fine-tuned code base and a good DevOps process, and there's not a lot of vulnerability debt, then I'm like, hey, guess what? You can't merge to trunk unless you scan clean. And developers hate that at first. And then it becomes the norm of how they do business. At first I get a pile of complaints, and then those complaints go away because they now know what their job is. They know how to deal with these problems. And it's just the norm. So

11:28
DevOps reduces friction between teams. And if you're not doing it, you're really behind the times these days. And do you think it comes down to policy, Jim, to getting that buy-in, or how do you get that buy-in to have that reduction in complaints if we're going to say, you know, block pull requests based on security issues? Well, usually in a big company, there are a lot of

11:54
different levels of how people are deploying DevOps in a product like DefectDojo, right? So there's like a large number of teams that are still working on it. There's a modest number of teams that have good maturity, but are not blocking merge to trunk. And then there's a couple of teams that are really on the bleeding edge that block merge to trunk. And they're usually some of the more top teams. So what I do is instead of me telling the story as like a security consultant or an educator,

12:23
I go and find the teams that are doing the best practice and get them to tell the story to their other teams. And then it becomes almost like a competition between teams. Hey, team X is doing this and they're rocking at full speed. Why would you not wanna do it? And then little by little, we see other teams begin to join the process. So in my world, it's about finding one of the many different teams at a big company,

12:48
getting them to do the best practice, getting them to love it, and having the story be told internally to spread that out. And then the thing is, not every team is gonna want to block merge to trunk, ever. Some teams will never do that because they have different concerns, right? Or they have legacy and they can't address their legacy debt for whatever reason. Some teams are just slow to adopt it. So I aim for the wins where I can have those wins as best as I can.
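
A minimal sketch of the "scan clean before you merge" gate Jim describes, for readers who want to see its shape: assuming your scanners emit SARIF reports and you choose to block on error-level results, a CI job can run a small script like this before allowing a merge. The file name, severity mapping, and messages are illustrative assumptions, not anything Jim or DefectDojo prescribes.

```typescript
// merge-gate.ts - illustrative CI step: fail the build when a SARIF report
// from any scanner contains error-level results, which blocks the merge.
import { readFileSync } from "fs";

interface SarifResult {
  ruleId?: string;
  level?: string; // "error" | "warning" | "note"
  message?: { text?: string };
}

interface SarifLog {
  runs: { tool: { driver: { name: string } }; results?: SarifResult[] }[];
}

const reportPath = process.argv[2] ?? "scan-results.sarif"; // assumed report path
const log: SarifLog = JSON.parse(readFileSync(reportPath, "utf-8"));

// Collect every error-level finding across all tool runs in the report.
const blockers = log.runs.flatMap((run) =>
  (run.results ?? [])
    .filter((r) => r.level === "error")
    .map((r) => `${run.tool.driver.name}: ${r.ruleId ?? "unknown rule"}: ${r.message?.text ?? ""}`)
);

if (blockers.length > 0) {
  console.error(`Merge blocked: ${blockers.length} finding(s) at the blocking severity:`);
  blockers.forEach((b) => console.error(`  - ${b}`));
  process.exit(1); // non-zero exit fails the pipeline, which blocks the merge
}

console.log("Scan is clean at the blocking severity; merge allowed.");
```

The same shape supports the incremental rollout discussed below: start by blocking only on the highest severities, then tighten the threshold as the vulnerability debt gets paid down.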

13:18
And in some companies, an executive or a CISO has bought into this and they just mandate it across teams, right? Or they give them a window saying, you gotta be doing this within three months. And then we do analysis and see which teams have too much legacy debt to make that window, and see if we can renegotiate what functionality delivery they need to do so they can catch up on the debt, so they can get to the magical place of

13:48
you're not merging until you scan clean. Do you think that you need that executive buy-in to make it happen, or how high up the chain do you think you have to go to make it a reality? It depends. If the company's primary duty is software development, if they are a software development shop, period, then I need C-level buy-in. But if the company is like a manufacturer, or software development is not their main job, or, I'll say politely,

14:17
they don't think software development is their main job, then I can usually get buy-in lower down the stack. But a lot of my customers are just big software shops and I really need to get CTO and CISO level buy-in to set strong policies like that. And legacy debt often makes that a challenge. So maybe we do it incrementally, but I have gotten them to say, look, can we at least say you can't merge to trunk unless you scan clean

14:46
from critical vulnerabilities? Okay, we'll buy into that. I'm like, okay, look, can we block merge to trunk only if there's a security vulnerability at the API level, and we'll let the client-side code go for now? Okay. So I try to institute that merge-to-trunk rule set little by little and get as much win as I can there. That's usually successful. That makes perfect sense. So you want them to take a little step essentially,

15:16
if you will, just getting them to take any improvement to ultimately get to that ideal security landscape in terms of how they're blocking things. I share that perspective very much. I think one of the hardest things before I became a vendor, when I was in the role of a security engineer, was getting the buy-in itself. I think when, you know, dev switched from

15:45
waterfall to agile processes, everything became about velocity, and, you know, people act where their interests are. Our philosophy as a company is that I want everyone's interests to be aligned top down. When we think about giving out stock options at the company, that's something that we lean very far into, because I want employees to be aligned with the interests of the company and investors. And so how this relates to developers

16:13
is that if their KPIs are just about velocity, what they're going to care about is velocity. But if instead you incentivize people with different KPIs, and it could be anything from infrastructure savings to how many things you block as it relates to security, I just think if you don't align the incentives, it's very hard for people on the front lines to get to where we want them to go. And how I see it is, it's all about getting that buy-in

16:42
from the correct leaders to help institute those things. Do you agree with that, Jim? Or what are your thoughts? That's wise and professional. I've had to adjust my methods over the years. I've been an entrepreneur for too long. I've been my own boss for too long. And I've had to learn, as I get older and wiser and creakier, to tone it down and build consensus

17:08
and act more professionally in order to help a company more effectively, right? And so getting buy-in, working with the team, making slower adjustments, moving the boat little by little, making sure that I explain the benefits in detail and understand the business objectives. These soft skills are becoming hard skills in my world for accomplishing good security engineering. They're not nice to have, they're fundamental

17:36
to help an organization do better security. Am I in the right direction with answering your question, Greg? Oh, absolutely. You're the expert. I defer to you. I would consider you to be quite the expert yourself. A lot of this is debatable as well. There's no one way to do this. There's a lot of different experience and methodologies that lead to successful security, I dare say.

18:04
I have to tell you, Jim, in my intro at SnowFROC, there are so many wonderful people around the company these days, I tell them what I do is push paper and talk to investors these days. That's unfortunately my realm these days. So yeah, Jim, when we talk about getting buy-in, I think that that ties into culture. Can you share a little bit with us in terms of how you've seen

18:33
culture shift in security, and if you think those shifts are good or bad, or where culture can continue to evolve between developers and security professionals for the betterment of both. I can't always influence culture. I'm a teacher, but I can observe it and maybe move it in small directions. But I'll tell you this, one of my customers, my favorite customer,

19:03
They brought me in 10 years ago when they were a small startup, and the CISO deeply believed in developer security training and application security, against the grain of his company. And he decided to invest in security training aggressively, frequently, extremely early on in his company. And he made AppSec and code security his priority as a CISO because they were a software engineering company.

19:33
He himself basically forced his culture on the whole company: this is how we're going to do it. And guess how that affected them seven years later in the market. I got to watch them turn from a small startup into a giant, multi-billion dollar company, because he forced this culture on the entire development staff and they went along with it. It was their job. They didn't know any different. They showed up to an extremely

20:01
aggressive AppSec culture where it was fundamental, one of the top priorities in the company. And I watched them grow radically because of this, because as they went out into the market and sold their service, they didn't just have a security story like many companies like to weave. They had a security reality when it came to their software. They had artifacts and evidence, and they showed a piece of paper in sales:

20:31
Here's our training schedule, how we train our developers, for the last six years. And that's rare. And that culture and that story helped them grow to a multi-billion dollar company and helped them dominate their market. And I don't want to leak the company, out of respect for them and lots of legal reasons. But in companies that do not have that security culture, where they're bringing me in to do checkbox training and I do the best I can, guess what we're doing? We're doing

21:00
the same old training, the same old topics, the same old problems every year. And they're having the same kind of events and expenses around security and incidents, and developers come and go. And it's a much looser, less rigorous engineering culture in general. And they suffer because of it, right? But I'll tell you this though: it's not just security culture that I think is the most important thing. I think the most important thing

21:30
is just engineering culture, right? As a software engineer, if I'm doing disciplined software engineering, then adding in security culture is pretty easy and also pretty natural, right? But if I have like a half-ass, loosey goosey, sloppy engineering, let's whip out some code kind of culture, then security is always gonna suffer and bringing in

22:00
additional security culture is always going to be difficult. So the main thing that I think is critical in any company's culture is how disciplined they are at software engineering. Are they actually engineering with strong processes? Then security is going to be easy to add. Something like DevOps and DefectDojo is easy to add to their world. Are they sloppy, random, cowboy coding without disciplined architecture? Then it's always going to be hard to add good security to that kind of team.

22:29
And it's so easy to whip out code, and now we've got vibe coding: I'm using AI to build giant apps with no security overnight. So that kind of coding is interesting. I can innovate quickly, but completely at the expense of security. So I think it's all about engineering discipline as a high level principle. Thanks, Jim. Right on.

22:55
Going back to culture for one second, Jim, before we hop to AI, do you think that there's any excuses at this point, or do you think it's just so foundational that it has to be incorporated? Is it a must at this point, or do you think that there's any excuses left? Such a hard question. Where do you want me to go with that? What's the goal, right? What's your goal? If I'm a government agency

23:23
and I'm trying to protect constituents, if I'm trying to protect the population, there's no excuse, right? If I'm like a medical group trying to protect people's medical safety, I don't think there's any excuse. If I'm a vibe coder whipping out a new app to make a quick buck, hey, go vibe code. It really depends on what I'm trying to build.

23:50
So it really depends on what kind of organization you are. Think of the top tier, high risk organizations: there's no excuse today. But think of like the startup culture where I'm building the next AI-driven logo generator. I don't think it's nearly as important in the world of startup vibe coding. Not at all. But people who are

24:17
doing the vibe coding or whipping prototypes out, they're rarely bringing me in to do security training. I'm very biased, because by the time someone wants to invest their entire development team into doing security education, they're already a couple steps down the road of maturity. So I have this rose-colored-glasses view of the world, because by the time I get involved and I'm teaching developers,

24:44
They already have some measure of commitment to application security. And I realized that doesn't represent the entire world. So there's a long answer to a short question. Yeah, I think it's really interesting. When I see teams at odds, I think the fundamental disagreement is that they don't realize at the end of the day, they have the same goal, which is to ship the best piece of software possible. And so some people see

25:12
you know, velocity as the key to that. But the reality is, you know, security just wants them to ship the best product that they can for the sake of customers. And I think, unfortunately, sometimes that's what gets lost. I'm hoping that we'll see a shift in this, though. There's a couple indications that I see in the market that I think are really interesting as it relates to security's place in

25:41
technology. So, you know, first, Google buying Wiz. I mean, that is an investment to differentiate their product with security, which is something I haven't seen in a ton of business models, to be honest with you. And let me add something to that. We have Shane over in the chat. Good to see you again, Shane. Shane says,

26:07
It's helpful to utilize tools and enable the platform to automate the parts that should be easy: things like don't let secrets be pushed to branches, enable automatic OSS dependency checks, and have code quality checks running. I'd add running SAST to that. I think Shane makes a really good point. We have so much modern tooling in the DevOps world. There are a lot of things I can flip on that are not invasive that will give me some very easy quick wins.
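
One minimal sketch of the kind of easy, non-invasive check being described here: a pre-commit or CI script that rejects a change when the staged diff matches a few common secret patterns. The patterns and file name are illustrative assumptions, and dedicated secret scanners cover far more cases than this.

```typescript
// block-secrets.ts - illustrative pre-commit / CI check that rejects a change
// when newly added lines match a few common secret patterns.
import { execSync } from "child_process";

// A small, illustrative set of well-known secret shapes.
const secretPatterns: { name: string; pattern: RegExp }[] = [
  { name: "AWS access key ID", pattern: /AKIA[0-9A-Z]{16}/ },
  { name: "Private key block", pattern: /-----BEGIN (RSA |EC |OPENSSH )?PRIVATE KEY-----/ },
  { name: "Hard-coded password literal", pattern: /password\s*=\s*["'][^"']{8,}["']/i },
];

// Look only at lines being added in the staged diff.
const stagedDiff = execSync("git diff --cached --unified=0", { encoding: "utf-8" });
const addedLines = stagedDiff
  .split("\n")
  .filter((line) => line.startsWith("+") && !line.startsWith("+++"));

const hits = addedLines.flatMap((line) =>
  secretPatterns
    .filter((p) => p.pattern.test(line))
    .map((p) => `${p.name}: ${line.trim()}`)
);

if (hits.length > 0) {
  console.error("Commit blocked: possible secrets in staged changes:");
  hits.forEach((h) => console.error(`  - ${h}`));
  process.exit(1); // non-zero exit aborts the commit when run as a pre-commit hook
}
```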

26:37
And I think it's negligent not to do some of that. I think that's a good point. Because again, running these tools 20 years ago, we were at the bleeding edge. But now, SAST and third party library scanning and similar, these are very well known tools throughout the entire software engineering industry. And I would dare say if you're not doing third party scanning, if you're not using some kind of

27:03
secrets protection to not check in secrets to branches, if you're not doing some kind of static analysis these days, Greg, it's pretty close to negligence. And I think that's what you're aiming for, and I do sincerely agree with you and what Shane is saying in chat here. The other thing that I think about is, you know, with the rise of vibe coding, I guess what I'm hoping for in all of this, as it relates to culture and where the industry is heading in terms of

27:31
how we develop code, is that security will get propelled in its importance so that everyone is thinking about, you know, Manicode and the training that Manicode provides, rather than it being a secondary influence. You see all these publications: we take security seriously, we take security seriously. And it's like, if you did, you probably wouldn't have to say it, right? It would just be inherent and obvious. And so my hope is,

28:00
with the rise of AI and with Google leading the charge in some regards, we'll see security take a higher priority than maybe some organizations have considered it in the past, to your point about it being no longer optional for key enterprises, et cetera.

28:21
Tell me more. Give me a question with that. I think everything you said, I agree with. Tell me more, Greg. Let's go to AI, right? So we've talked about AI a little bit. I mean, you can't have a conversation about tech without talking about AI. So, what's your stance on AI as it relates to security and security programs? Where do you think it works well? How do you see the ultimate application of AI shaping up in security? And when do you think we'll see it?

28:51
I'm all in with AI, we're seeing it right now. And I think it's going to change the world faster than any technology we've seen in the last 20 years. A lot of the advances we've had in the last 20 years have been linear in their advancement. And AI is exponential growth. It's going to change everything when it comes to security. So pick a spot. What do you want to talk about? We'll talk about firewalls, static analysis. You want to talk about software engineering. What part of the world of AI do you want to focus on? And I have something to say about

29:21
Well, which one is most important to you, Jim? Where do you see the most exciting applications? Software engineering. I think that the way that we write software now is Neanderthal if you're not using AI. One of my quotes recently is, if you're not using AI as part of software engineering, then you're Neanderthal-banging-rocks. I'm just trying to be sensational. I'm not trying to offend anyone.

29:47
The problem though with AI, and we're talking top tier engines, I can even give you a demo if you want to see it. When I ask the top tier engines, the big ones, Cursor, OpenAI's recent mini-high model, Claude, Gemini, all the recent models today, to write even basic user interface pages and similar, they've built better and better code in the last couple of years,

30:15
but they're still radically bad when it comes to security, because they're pulling from insecure open source code bases, right? Most code is insecure in the open source world, right? And so, I've been studying security engineering for the last 20 years and I have 10,000 slides on the topic. So what I've been doing is taking my courseware and converting it into extremely verbose and detailed rules that are specific

30:44
to actual coding rules. I'm not talking about things like ASVS. ASVS is more high level principles, and I love it. I've been working on it for three years with Elar Lang and Josh Grossman and Daniel Cuthbert and many other volunteers. I love ASVS, but it's less valuable to developers, because developers want a real specific answer. Like, if you're gonna use dangerouslySetInnerHTML on untrusted content,

31:12
then sanitize that data with DOMPurify. If you're gonna take data from a user and render it in a user interface page and you wanna display it just like the user typed it in, then put that in your mustaches in the JSX template language. So I've gone through all the frameworks I work on, all the architectures like OAuth 2. I've taken guides like RFC 9700 on OAuth 2 and similar and converted all of that

31:42
into very specific prompt engineering rules for different AI engines. And my rules are very verbose, and AI engines don't like that. I then use meta-prompts, prompt engineering instructions individual to each engine, to take my security rules and convert them into prompt engineering instructions for each major AI engine. And I apply these rules and then ask it to write code. And the difference is night and day.
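
To make the two rules just quoted concrete, here is a minimal React sketch in TypeScript, assuming the dompurify package; the component and prop names are illustrative. Untrusted text rendered inside JSX's curly braces is escaped by React, and when HTML rendering is genuinely needed, the content is sanitized with DOMPurify before being handed to dangerouslySetInnerHTML.

```tsx
// CommentView.tsx - illustrative component showing both rules.
import DOMPurify from "dompurify";

interface CommentProps {
  authorName: string; // untrusted user input, displayed exactly as typed
  bodyHtml: string;   // untrusted HTML, e.g. from a rich-text editor
}

export function CommentView({ authorName, bodyHtml }: CommentProps) {
  // Rule: if you must use dangerouslySetInnerHTML on untrusted content,
  // sanitize it with DOMPurify first.
  const safeHtml = DOMPurify.sanitize(bodyHtml);

  return (
    <article>
      {/* Rule: to display data just like the user typed it, put it in the
          mustaches; React escapes it, so <script> renders as harmless text. */}
      <h3>Comment by {authorName}</h3>
      <div dangerouslySetInnerHTML={{ __html: safeHtml }} />
    </article>
  );
}
```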

32:11
If not today, I'm always happy to give you a demo of how this works, but it blows my mind. So my work now is building prompt engineering instructions for all the major frameworks, and not just for security, but also for things like maintainability: low cyclomatic complexity, low cognitive complexity, use the DRY principle, use a good naming convention. And so I think of AI as a code generator, as a highly skilled,

32:41
and super fast, but novice, developer. And I teach it and I train it around what style, maintainability, and security requirements I want it to build to. And I'm not talking high level principles. I'm talking really specific, meticulous rules. And the difference is night and day. And so this is now just starting to happen. Most people use AI for code generation

33:09
as a raw engine and a lot of senior developers dismiss it because of the poor quality. But when you teach your AI what that quality and security will look like and give it code samples and similar, the difference is night and day, Greg. And I believe that AI for software engineering is not just the future, it's going to radically help us make more secure code as engines catch up to the security rules and they will.

33:39
That's incredible, Jim. Is that something you plan to bring to market, or how do you better the world with that tech that you're building? Someone from Y Combinator built a company called Manicode.ai and I saw this happen. I own the name Manicode. I have a pending copyright on it. I've been using the name for 20 years. My name is Jim Manico. And so I had a little meeting with them and I'm like, guys, you're

34:09
you're using my name inappropriately. And they changed their name to Codebash, and they very respectfully handed me the domain Manicode.ai for free. They gave it to me and said, we're sorry, we made a mistake. These guys were so classy. They didn't do it on purpose. So now I've got Manicode.ai, Greg, and I'm taking all of my prompts, and they're a bunch of text files, that's all they are, I've got all my crazy prompts for all these frameworks, and I'm going to

34:38
build Manicode.ai and sell these prompts to different companies. I already did a few demos for a few companies. I give this away in training. And one of the companies said, we'll buy your React guidelines for like 20 grand. And I'm like, I'm going to give it to you. And they're like, don't do that. Make us buy it. Don't give it away, Jim. This is so valuable. This would take us so long to figure out.

35:06
Give it to us, sell it to us, trust me. Because we want a license for the next couple of years for you to help us keep it up to date. And that blew my mind, Greg. I didn't even think of that. I don't invest in startups like yourself, and I like money, but I'm more knowledge driven than anything else. And I've been giving this away, and several people told me, stop it. And so now that I've got Manicode.ai in my hands, I'm probably going to build that out

35:35
and start selling my prompts to different AI engines or companies who want to do secure software through AI. I think it's a good time to do that. And if I don't move soon, somebody else will. So I'm on it, Greg. I think I'm going to build this out and really soon too. I think you should. I think you should. I think it is incredibly exciting and solves a gap that is getting wider and needs to be plugged with the wide adoption of AI.

36:05
What about security teams, Jim, with regard to concerns in 2025 versus, say, what you saw five years ago? Have you seen the focus of security teams change when you think about, you know, core issues or things they now have to test for? I think companies are doing a good job at understanding web and API security problems. They understand JSON web tokens and microservices.

36:34
They understand web security. We have a lot of background in that area, and I think a lot of companies are catching up with that knowledge. But where things get problematic, and now let's switch gears on AI, is not AI for code generation, but building chatbots and AI systems to support your company for various reasons. People are building AI systems so quickly. And one of the major problems is,

37:00
the vector database that's under the hood of an AI engine does not store data in a tabular, relational format. AI engines store data in a vector database, and it's really difficult to do access control at the vector database level. You can tag data, or do output filtering, or build other AI engines for access control. So the problem is that with the new state of the art AI systems that we build,

37:30
it's almost impossible to do access control well. And the methods that I see in use freak me out, because it's like: I'm going to send the question to the AI engine, I'm going to pull out that data, I've tagged that data, and I'm going to review the tags to see if the user can access that data or not. And this is extremely weak and primitive compared to securing data in a relational database or an object database. And

37:59
what I see some customers do is they're separating individual user data into different vector database namespaces, or they're actually building entirely separate models for each individual customer or each individual user. And that's, you know, a performance hit. So I think the problem today is we're rapidly moving to building AI systems and we don't understand how to do access control

38:27
within those kinds of systems. That, I think, is the number one problem that we're not gonna get right for a while, Greg. And the other thing that I think about is, with all these AIs amassing so much data, I mean, there's no better target right now, right? Like all the data that these models and companies are getting access to, I can't think of a better treasure chest to pursue for malicious actors right now.

38:57
It is so brutal. Say you're a healthcare company and you're putting patient data into one model, and now you're going to, you know, fine-tune it with the research data you put in that model. As a customer of a healthcare company using AI with my patient data and also good medical research, the quality that that delivers to me

39:22
eclipses what a doctor can do. In fact, there are a lot of studies out in the last couple of weeks that show individual consumers prefer interacting with an AI engine over an actual doctor because they're getting so much better service. But again, the problem is access control is extraordinarily non-trivial. And in most of the engines I've looked at, there are easy bypasses to break through the access control layer

39:50
to pivot to other users' data. And this is a completely unsolved issue in the world of AI development. That's ripe for opportunity.
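
For readers who want to picture the tag-and-filter approach being critiqued here, below is a self-contained sketch; no particular vector database is assumed, and the tenant tag, similarity scoring, and toy data are all illustrative. Every stored chunk carries an owner tag, and retrieval filters on that tag before ranking by similarity. The weakness Jim points out remains: access control depends entirely on tagging discipline and filtering code sitting in front of the vector store, not on the datastore itself.

```typescript
// retrieval-filter.ts - illustrative tag-based access control for vector retrieval.
// Not a real vector database API; a plain in-memory sketch of the pattern.

interface Chunk {
  tenantId: string; // access-control tag attached at ingestion time
  text: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  const dot = a.reduce((sum, v, i) => sum + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Filter by tenant tag BEFORE ranking, so other tenants' chunks can never
// reach the prompt. A missing or wrong tag silently breaks this guarantee.
function retrieve(chunks: Chunk[], tenantId: string, queryEmbedding: number[], topK = 3): Chunk[] {
  return chunks
    .filter((c) => c.tenantId === tenantId)
    .map((c) => ({ chunk: c, score: cosineSimilarity(c.embedding, queryEmbedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((r) => r.chunk);
}

// Tiny usage example with toy embeddings.
const store: Chunk[] = [
  { tenantId: "clinic-a", text: "Patient A lab results", embedding: [0.9, 0.1, 0.0] },
  { tenantId: "clinic-b", text: "Patient B lab results", embedding: [0.8, 0.2, 0.1] },
];
console.log(retrieve(store, "clinic-a", [0.85, 0.15, 0.05]).map((c) => c.text));
```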

40:05
Do you see with your customers, Jim, are they choosing to build their own models, or are they building off of OpenAI and Claude and all the others in the space? Right now, most are building off of OpenAI and similar, but I don't think that's going to last, right? These are giant multipurpose AI engines and they're too beefy, and OpenAI is choking from a performance point of view. It's slow and brutally bad.

40:35
And so from a flat out response point of view, if I'm running a local model that has more dedicated expert-system functionality for a very narrow space, and I can fine-tune how I train the model, I'm going to have much more control, better response times, and similar, and be able to do things like fine-tuning and prompt engineering and retraining

41:02
a lot better than if I'm using a big open model. So I'm just beginning to see the move. Those who are early adopters of AI engines are starting to buy a lot of GPUs, build local rigs, use local Hugging Face models, and use prompt engineering and fine-tuning type technologies to enhance the models for their domain space. That, I believe, is the future of AI. Not

41:31
OpenAI, to be honest with you. The last question I wanted to ask you, Jim, and this is switching topics a little bit. Something I keep a very close eye on is sort of macro relationships as it relates to government, because we use things like KEV and EPSS and these other frameworks that are primarily funded by the US government.

42:00
As you may be aware, you know, there have been some layoffs at CISA recently, and although it hasn't impacted KEV yet, I was speaking with some people at DefectDojo and I was like, well, what's next? DHS? Because, you know, DHS provides some of the funding for MITRE and we use MITRE standards as well. Do you have any thoughts there, any worries or concerns in terms of, you know, what's happening to CISA and other bodies like those?

42:29
So if I remember correctly, Rob Ross left CISA recently, right? With the new administration? Yeah, Jen Easterly left too, who was one of, I think, the key champions. I was incredibly impressed by her. I had the chance to meet her when she presented at RSA, and then I got to speak with her briefly. Yeah, she and Rob were some of the key driving forces over there. Rob Ross is one of the best of all of us. He can go out into the private world

42:58
and add a zero to his salary. He's so good. And he was a believer. He was there not to make money. He was there because he was a believer. If he wanted to make money, he could do a thousand other things and go make a lot of money. So he was a true believer in our country and securing our country. He's one of the best of all of us. And this is not a political statement, right? To lose him is a giant loss to the future security of our country. I weep for his loss because he is one of the best of us.

43:28
And I'm sad to see him leave CISA. He was doing an amazing job. And, you know, I don't want to get political. I really don't. And I hope that the changes happening in our country end up benefiting the country long term. But just from a security point of view, the people that are leaving government in the security world, they all have the ability to make way more money elsewhere. They were there because they were believers

43:55
in our country and I'm sad to see them leave. It's a great loss.

44:01
Jim, were there any other topics you wanted to address that we didn't get to or discuss? You know, I want to share a little snippet. Those of you who are in chat watching the talk live, I want to show you a little snippet of prompt engineering instructions that aren't even security related. This is like my first batch of instructions: minimize cyclomatic complexity, minimize cognitive complexity, avoid code duplication, do high cohesion and loose coupling, use clear naming conventions,

44:31
follow the single responsibility principle, and ensure accessibility. That's seven prompt engineering instructions that I recommend you use before you ask AI to write code for you. That alone will show you a significant quality difference in AI. And what I would ask any of you using AI for software development is: capture your existing standards and security requirements

45:01
in text instructions, convince AI to convert them to prompt engineering instructions, and put that under source control and fine-tune it over time. I believe that the prompt engineering instructions that you build for security and quality will ripple through time to benefit your software engineering organizations, especially for the next three to five years. That's a little gift I'm tossing out to everybody in chat.

45:31
A little something you can copy and paste and have fun with. It's a great start to how to do prompt engineering to increase AI's ability to write good code for you. There you go. That's my final thought of the day. Wonderful. Thank you, Jim. Jim, I want to thank you for doing this. I want to thank everyone for tuning in. For those of you that aren't following Jim along on some platform, I think he's

45:58
one of the greatest and one of the last oracles of security. And so if you're not, I encourage you to check him out. We greatly appreciate you and everyone that tuned in taking the time to do this. Thank you again, Jim. And thank you everyone. I'm a big fan, Greg. It's been an honor to talk to you and thank you for having me on your show. Hope you have a great day, Greg. And thank you everyone who's listening in as well. Good luck everyone out there. Have a wonderful day, folks.