Transcript
00:07
All right, thank you very much. So yeah, let's talk about AI and AppSec: the good, the bad, and the ugly. This is a topic near and dear to my heart, because DefectDojo is doing some AI things, and we've experienced some of this. So today, the agenda is gonna be: I'll do a quick intro, just cover some sort of broad topics. I'm gonna talk a little bit about how AI and MCP play well together in some circumstances. There's this thing that I'm sort of trying to get...
00:36
people to start thinking about that I'm calling LLM agility. I'll talk about what to watch for if you are doing LLM-powered applications, usually with Ollama and then LangChain, et cetera; there's a whole bunch of Lang-star libraries that are quite popular. And also this lethal trifecta that you really do need to take into account when you are building these things. And then I'll wrap up with a conclusion. So let's plow right into the intro.
01:03
So yes, who's this guy talking to you today? I'm Matt Tesauro. I like to call myself a reformed programmer and AppSec engineer. I'm the CTO and co-founder of DefectDojo Inc. And just for background, I've had over 17 years of being in the OWASP community, being very active there. I have 25 years of using open source, and I'm a Linux person, so Linux and open source software. When I get to write code, which isn't as frequently as I'd like, I write it in Go.
01:33
And that was actually me breaking two boards at once to get my second degree black belt, which was very scary, but also quite rewarding. So let's get into this thing. So no surprise AI has had a big influence. If you think about AI and its influence on the world, I kind of like to use a metaphor of a meteor hitting the atmosphere, right? Is it going to be...
01:58
an existentially sized meteor that's going to fundamentally change life on this planet? Or is it going to be something so small that if you're checking your phone, you miss it? I'd like to say we're going to be somewhere in between those two extremes, and honestly, I don't think we know right now. I think it's too early days, but it certainly is going to have some sort of impact. I don't think you'll miss it if you're checking your screen, but I also don't think we're all going to die.
02:26
So anyone on this call has likely interacted with some kind of GPT prompt thing. And let's say we get to this future state where we could actually get physical things out of these AIs. And let's say I told an AI I need a truck. In your mind's eye, you probably have some kind of truck that you're envisioning, right? Usually, I don't know, I live in Texas. So it's probably something very rugged and four by four-ish like this, right? Unfortunately,
02:54
Sometimes when you ask an LLM, hey, I need a truck, it ends up a little bit more like this, which is still a truck, but not quite as fun. And the reason I say this is because even the people behind the big players in the AI space have found shortfalls in AI. So Andrej Karpathy, I'm...
03:21
hopefully not terribly mispronouncing his name, was a co-founder at OpenAI. He left and started his own thing, doing an open source model called NanoChat. And the first thing reporters asked him was, oh, you're doing your own model, did you vibe code that? And his answer was no, it's all handwritten, because I tried it a couple of times and it was just more work than it was worth; it wasn't all that helpful. And I'm like, this is the person who helped give us OpenAI, and
03:50
that's super interesting, because I've used these tools and had good and bad experiences with code generation, but I wonder if vibe coding would do this for you. And I'm not sure how readable this is, but this is some Go code I wrote. That comment says we should never get here, but just in case, I'm exiting with an error message and an error code of one to say there was an error. And much to my surprise, I wrote that code expecting these to be two lines that are never going to get exercised.
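The slide itself isn't reproduced in this transcript, but the guard being described looks roughly like the sketch below. The function and state names are hypothetical stand-ins; the point is the defensive default branch that exits loudly instead of being silently "unreachable."

```go
// A rough, hypothetical reconstruction of the "should never get here" guard
// described above. processState and the state names stand in for whatever the
// real slide code was doing.
package main

import (
	"fmt"
	"os"
)

func processState(state string) string {
	switch state {
	case "open":
		return "triage"
	case "closed":
		return "archive"
	default:
		// We should never get here, but just in case: exit with an error
		// message and error code 1 so the failure is loud, not silent.
		fmt.Fprintf(os.Stderr, "unreachable state %q reached\n", state)
		os.Exit(1)
	}
	return "" // never reached; satisfies the compiler
}

func main() {
	fmt.Println(processState("open"))
}
```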
04:20
Well, it did. It actually did; we got into that case I thought was impossible. So I'm curious to see if vibe coding will produce these kinds of guardrails, because I didn't think there was any possible way to get here, but we did. And so in the back of my mind, I'm wondering: does the speed of code generation, particularly with vibe coding, come at the cost of having to accept mediocre results?
04:49
And as I was preparing this talk, I'm a Shakespeare person, and I had this King Lear quote pop into my head: "Truth's a dog must to kennel; he must be whipped out, when the Lady Brach may stand by the fire and stink." And the whole idea is, people don't want to hear the truth; they're going to run it out of the house. So I asked myself, if truth is a dog, what kind of dog is AI? And
05:19
kind of what I came up with was: the dog's the LLM, the prompt is the treat, and the LLM desperately wants the treat. But the golden retriever, this is a little unfair to it. We're talking AI, right? We probably need a cyber look, so let's cyberify that dog. But let's get serious and speak about MCP in particular. This is the first part of our main body. So.
05:48
To level set for people: MCP is the Model Context Protocol. The whole idea of MCP is, I need to get some data into the LLM, into its context primarily, but I don't wanna copy and paste it. So what's a way the LLM can go retrieve data on its own from different sources? These could be applications, these could be a database, this could be stuff on disk, whatever. I need a way to make that more universal.
06:17
So I don't have to one-off solve this problem every time I interact with an LLM, or just copy and paste a bunch of junk, which doesn't work great. So this is what MCP gives you: it gives you a way to get custom data on demand into the LLM as it's answering questions or doing work for you. That's what an LLM was made to do. And so in an AppSec context,
06:46
or even a cybersecurity context, you're running a whole bunch of tools and they give you information that's useful. And I'd like to take these 14 different tools and have the LLM tell me something about them. Well, unfortunately for the LLM, without MCP, this means a whole bunch of copy paste, right? Which is completely not exciting. I'm sorry, I don't like copying and pasting from 14 different tools to get an answer. So this is where you say, great.
07:16
Let's solve the copy paste problem. Let's MCPify all of these things. And if every one of these tools had MCP servers, we could do this, right? I'm not sure they do. Maybe they do, maybe they don't. You could certainly write one. You could vibe code one into existence if not. So we've solved the copy paste problem, but we have a different problem. We still have the problem of 14 different views of reality, 14 different vendor specific views of what a vulnerability is.
07:45
And the LLM still has to make sense of those 14 different views of reality. It has to decide which ones are real, which ones are false. Do I have duplicates? There's a whole bunch of questions the LLM still has to work through. So this is definitely better, I'm not copying and pasting 14 times, but I'm still shoving a lot of work onto the LLM, and, you know, tokens and all that. So DefectDojo
08:13
And this was somewhat accidental. I'd love to say we were super clever and had this planned all along, but this kind of organically came out of the creation of our own MCP server, because I was somewhat skeptical going in and I was very surprised at the quality of results coming out. And in thinking about why that was the case, it turns out that DefectDojo does a whole bunch of work to make sure that that data is normalized and augmented with things like EPSS and KEV.
08:43
So it's much better quality data; we de-dupe, we do false positive management. And so suddenly you have a stream of data coming out of DefectDojo that is high quality, with very little noise, a favorable signal-to-noise ratio. So suddenly you invert that model, and you put DefectDojo in front of all of those 14 tools, and we have 260
09:11
plus tools that DefectDojo will handle these days. You let DefectDojo do the work of enrichment, of doing intelligent de-duplication, making sure that we're augmenting those findings with EPSS and KEV and other enhancements to the quality of the data provided by those scanning vendors. And you put an MCP on top of that, and now suddenly you've solved the copy paste problem, certainly,
09:38
and you've solved the data quality problem. There's only one version of reality that that LLM has to understand, which is what DefectDojo provides as findings. And you can add more or take tools away from behind DefectDojo, and the LLM's work doesn't change. So this greatly enhances the quality of output you can get from an LLM. And the other key consideration honestly is, as I mentioned earlier, tokens, right? Tokens are the new currency in this
10:07
AI world. How many times have you heard people say, oh shoot, I've got to upgrade to the pro plan because I burned through all my tokens? Right, this is a common occurrence. It makes me wonder, is SEO going to go away as a profession and are we going to have PTO instead? And I don't mean paid time off; I mean prompt token optimization, right? Because there's some real cost behind these things, and that is sort of the money-making mechanism of the large AI companies.
10:37
And so in this token driven world, you got to ask yourself like, what would happen if AI went away? What if AI went belly up? Cause there are some crazy massive deals. Billion is like a million used to be. And trillion is certainly not an unknown term. I mean, 900 billion as you see on screen is not that far from a trillion.
10:58
And you have these massive deals with these very large market influencing companies happening and you have the sort of key firm risk, right? If, if you have decided that your product is going to integrate with one of the big players, let's just, I'll pick on OpenAI for now. If you're going to integrate with OpenAI and use that as part of a key feature of your product, if that firm goes away, that product feature goes away. So that's very interesting.
11:28
And you have some players that have alternate sources of income. OpenAI has teamed up with Microsoft to some extent, so there's an alternate source of income. Anthropic has teamed up with AWS, so there's an alternate source of income. Google has Gemini, but Google has lots of other alternate sources of income. So if things get financially tight, there are some players that have other ways to make money besides just token selling, basically, or...
11:56
licensing their LLMs. But we're not assured this is going to happen, and if it does happen, it's going to be very interesting for a lot of players. And then on the flip side of that, there are very interesting economics around the $5.2 trillion worldwide being spent on data centers. A data center is a long-term investment: you're spending a lot of money upfront and recouping that with revenue over time. And one of the key questions about it, and by the way, as an aside, I'm an economics undergrad.
12:26
I'm into economics as well as technology, and so I read The Economist, and this is where a lot of this data came from. But if you think about it, the bet here is that demand, people buying and using AI, is going to grow at a speed that outpaces the spend on these data centers. And if that doesn't come to pass, you suddenly have spent more on a data center than you're ever going to recoup. And no one likes to be out more money than you're
12:54
bringing in; that puts you in a financially disadvantageous position. So this is just a very interesting thing. And as more and more people get into this AI market, the credit quality of the people playing in this field is decreasing. So it is just a very interesting environment to work in. And all this was bubbling through my mind as we were working on our MCP server, so I wanted to share some lessons we learned
13:24
and ways to think about if you are gonna implement or interact with an MCP. You absolutely have to think about tokens, right? You can burn through the context of an LLM very quickly with an MCP that is verbose, right? So you really need to think about tokens. If you're creating an MCP, you need to spend a lot of time thinking about what you want it to do, but I think almost more importantly, what you don't want it to do.
13:50
Like, what are the things I never want it to do? If I never implement those, an LLM can't hallucinate its way into a problem, right? If you have read-only access through your MCP, an LLM can't accidentally delete, remove or change anything. And if that's important for your use case, then definitely think about that ahead of time. You need to think about how much context is needed by the LLM to make use of the data that you have.
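To make the read-only point concrete, and as a preview of the tool descriptions discussed next, here's a minimal sketch of the kind of tool metadata an MCP server advertises via tools/list, following the name/description/inputSchema shape from the MCP spec. The tool shown is hypothetical, not DefectDojo's actual MCP interface; it just illustrates a tightly scoped, read-only tool whose description tells the LLM exactly when and how to call it.

```go
// Hypothetical sketch of the tool metadata an MCP server exposes through
// tools/list, per the MCP spec (name, description, inputSchema).
package main

import (
	"encoding/json"
	"fmt"
)

type Tool struct {
	Name        string         `json:"name"`
	Description string         `json:"description"`
	InputSchema map[string]any `json:"inputSchema"`
}

func main() {
	listFindings := Tool{
		Name: "list_findings",
		// The description is how you tell the LLM "if you want this, here's
		// how you ask for it" -- and it states the read-only contract.
		Description: "Read-only. Returns active, de-duplicated findings for one " +
			"product, newest first. Use when the user asks about current " +
			"vulnerabilities. Keep 'limit' small (default 20) to conserve tokens.",
		InputSchema: map[string]any{
			"type": "object",
			"properties": map[string]any{
				"product_id": map[string]any{"type": "integer"},
				"limit":      map[string]any{"type": "integer", "default": 20},
			},
			"required": []string{"product_id"},
		},
	}
	out, _ := json.MarshalIndent(listFindings, "", "  ")
	fmt.Println(string(out))
}
```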
14:19
How discoverable, from an LLM perspective, is the data that you will be providing? How formalized is it? How understandable is it? How well named are the key-value pairs? If not, you can send instructions, usually called the description, on a tool in an MCP server. Think carefully about those tool descriptions, because that's how you tell the LLM: by the way, if you want this, this is how you ask for it. And if those instructions aren't clear,
14:48
you get a lot of repeat requests and you start burning through tokens. You have to consider standard IO (stdio) versus HTTP, which used to be SSE and is now streamable HTTP. Some servers support all three, some support only one. I prefer HTTP because I like the decoupling of having to run local versus not local. The nice thing with HTTP is you can have the LLM in one place, the MCP in another place,
15:17
and your data source in a third place. With stdio, you end up having to have the LLM agent running locally as well as the MCP server, which I'm just not a fan of, but it's something to think about. And then, in creating our MCP server, I wanted to get up to speed on exactly what this MCP thing was when it first came out. And just to be blunt: read the spec. There's a good spec online at the MCP website.
15:45
I read and watched so many videos that are just slop-driven repeats of high-level summaries that I finally just read the spec. It's not sexy, and you're going to have to grab a cup of coffee, but you'll have a much better understanding of how this stuff actually works. And then, if you are interacting with or otherwise messing with MCP servers, I can't praise highly enough the MCP Inspector tool that came out as part of this initiative from Anthropic.
16:13
It is fantastic. Here's an example of it listing the tools that are in the DefectDojo MCP, and here are some prompts that are in the DefectDojo MCP. And I'm going to wrap up this section just for time, but if you want some more information, we've had several good videos up on our YouTube channel about MCP. Tracy did both of these, and they're really good if you want to dig a little more into MCP from a DefectDojo or AppSec perspective.
16:43
Those are good places. So let's move on to LLM agility. And let's talk a little bit about what I mean by this, because I'm honestly creating this term, but I think it's something people need to think about. And the thing that inspired me to think about LLM agility is another thing that's in AppSec called cryptographic agility, right? This is kind of a repeat of that. Cryptographic agility says,
17:12
Hey, we don't know what's gonna happen in the future with crypto primitives, so why don't we make it so that the cryptography used by an application is easily swappable? You can switch it, because you never know: say, RSA was a great bit of encryption until it wasn't, and then suddenly RSA is no good.
17:40
And so how do we make our apps able to withstand the sort of class-break problem, where suddenly the encryption method you're using is no longer valid, it's broken? How do I switch it quickly? So this is the idea of cryptographic agility. And honestly, this is straight out of Wikipedia, if you can't tell from the screenshot and the link. I would just take the same definition and sub in
18:08
AI or LLMs in the right places. So an agile AI application design has LLM agility that allows you to switch between large language models, because you never know: today Anthropic is the best at something, and tomorrow Gemini may be significantly better at it. Are you going to have to refactor everything, or can you just quickly swap out those models? If you can, you've got a significant market advantage.
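As a sketch of what LLM agility can look like in code: keep a thin interface between your application and the model vendor, so a provider swap is a configuration change rather than a refactor. Everything below, the Completer interface, the stub adapters, and newCompleter, is a hypothetical illustration, not any vendor's real SDK.

```go
// Minimal LLM-agility sketch: the app only touches the Completer interface;
// each vendor lives behind its own adapter.
package main

import (
	"errors"
	"fmt"
)

// Completer is the only surface the rest of the application may depend on.
type Completer interface {
	Complete(prompt string) (string, error)
}

// Stubbed adapters -- in a real system each would wrap one vendor's API client.
type anthropicAdapter struct{}

func (anthropicAdapter) Complete(p string) (string, error) { return "claude says: " + p, nil }

type geminiAdapter struct{}

func (geminiAdapter) Complete(p string) (string, error) { return "gemini says: " + p, nil }

// newCompleter is the single place where the model choice lives; swapping
// vendors is a config change, not a rewrite of every call site.
func newCompleter(provider string) (Completer, error) {
	switch provider {
	case "anthropic":
		return anthropicAdapter{}, nil
	case "gemini":
		return geminiAdapter{}, nil
	default:
		return nil, errors.New("unknown provider: " + provider)
	}
}

func main() {
	llm, err := newCompleter("anthropic") // flip to "gemini" without touching callers
	if err != nil {
		panic(err)
	}
	out, _ := llm.Complete("summarize these findings")
	fmt.Println(out)
}
```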
18:37
Because to be honest with you, this AI stuff is changing quickly, and kind of like Jimi Hendrix's castles made of sand, your AI may just wash away. And so you have to decide: is the vendor that you're choosing going to survive these tides of change? If they're not, it may be time to consider switching, or, I think ideally, having LLM agility. And one of the driving forces for me,
19:06
was stories like this next one. This is an AI company that had a chatbot that was kind of focused on younger people. God help this poor mother: her son committed suicide, and because of the terms of service that had been agreed to by her son when he joined this service to chat with this LLM, they got a hundred-dollar payout. That's what arbitration gave them. And
19:35
I read that story and besides just being generally heartbroken, I thought to myself, would I want my company name associated with this LLM or this chat bot company? Oh, heck no. Right? So understand that this agility has some real world consequences. And then the other thing that came to mind when I started going down this path was I was around when Enron was a thing.
20:01
strange aside, I actually got recruited by Enron and I couldn't get anybody there who wanted to hire me to explain exactly what they did. And if I can't understand what somebody does, I don't know that I want to work there. But I'm wondering, is there going to be a building somewhere where we pull off, say the Enron sign and we slap up an Nvidia sign? If you look at the left, this is a screenshot of a Twitter post.
20:27
SPV, by the way, is a special purpose vehicle. It's a sort of cheater way, you could call it, or an alternate way, to IPO onto an exchange. But here you have Nvidia selling GPUs to xAI through a company that's IPOing through an SPV that's being funded by Nvidia. So Nvidia is sort of funding a company that's buying its GPUs; it's very circular financing.
20:57
Very interesting from a financial perspective. And this isn't the only case of circular deals. It's crazy if you dig into this stuff. I had to write this down because I honestly had a hard time keeping track of it, but Nvidia is investing billions and selling chips to OpenAI. OpenAI is buying chips and earning stock from AMD. AMD is also selling GPUs to Oracle. Oracle is
21:27
building data centers with OpenAI. OpenAI also gets data center work from a company called CoreWeave, and CoreWeave is partially owned by NVIDIA. So I tried to get AI to give me a drawing of people handing IOU notes between themselves, because at times it just feels like we're passing money between the vendors. And it's just, it gives me pause. And so let's say you've decided to go down this route.
21:58
And you're going to do some kind of app enablement with AI. You've probably looked at or thought about these different libraries, and I want to talk through those, as well as give you a quick idea about what I call the lethal trifecta of AI. So just to level set everybody: Ollama is an open source tool, and there are some libraries in a bunch of different languages
22:26
that allow you to easily run open source models, aka the ones you can just go download, locally or remotely. It basically provides a common API interface on top of a bunch of different models. LangChain was probably the first and biggest open source library or framework, in multiple languages, that makes it easy to work with LLMs. LangGraph came along afterwards, built on top of LangChain, and it helps you do these agentic things.
22:56
That's where you have multi-step, long-running agents. And then LangFlow is a drag-and-drop platform that sits on top of both of those and allows you to do even more interesting and complex things with LLMs. So it's likely, if you're playing around with LLMs, you're using one or more of these libraries. I don't have time to go into great examples, but if you want great examples of the super cool stuff you can do with this, Fabric from Daniel Miessler
23:23
is an amazing project. It's been out for a while, it's open source, and I would highly recommend taking a look at it. And PAI, or Personal AI Infrastructure, is another open source project by Daniel that shows how to do these agentic things. Give them a look. I could give an entire talk about either one of those, so I can't cover them here today. But if you want to see the cool side of this, and people doing some really interesting, valuable work in an open source way where you can just go play around with it,
23:53
Those are two great projects.
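To make the earlier "common API on top of a bunch of different models" point about Ollama concrete, here's a small sketch that calls a locally running Ollama server through its documented REST endpoint. It assumes Ollama is already running on its default port and that the named model has been pulled; the model name is only an example.

```go
// Sketch of Ollama's "one API, many models" idea: the same request shape works
// for whichever local model you name. Assumes an Ollama server on the default
// port (11434) with the named model already pulled.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

func ask(model, prompt string) (string, error) {
	body, _ := json.Marshal(generateRequest{Model: model, Prompt: prompt, Stream: false})
	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Response, nil
}

func main() {
	// Swapping "llama3" for any other pulled model is the only change needed.
	answer, err := ask("llama3", "In one sentence, what is an MCP server?")
	if err != nil {
		panic(err)
	}
	fmt.Println(answer)
}
```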
23:57
One of the things to think about, though, as you're doing these agentic systems, is this idea of exponential errors compounding. What this graph really tells us is what happens if you assume an LLM produces 95% accurate output, which is probably okay but could be considered generous. Because in a 95% accurate agent,
24:24
when you start having multiple steps, each step is a problem, because it's not deterministic, it's probabilistic. So each step compounds that error rate. If you have five steps, each done by an LLM, suddenly your success rate is down to 77%. At 10 steps you're at 59%, and at 20 you're at a 36% success rate.
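The arithmetic behind that graph is just per-step accuracy raised to the number of steps. A tiny sketch that reproduces the rough figures quoted above:

```go
// If each step succeeds with probability p, an n-step chain succeeds with
// probability p^n. With p = 0.95 this gives roughly 77% / 60% / 36% for
// 5, 10, and 20 steps.
package main

import (
	"fmt"
	"math"
)

func main() {
	perStep := 0.95
	for _, steps := range []int{1, 5, 10, 20} {
		chain := math.Pow(perStep, float64(steps))
		fmt.Printf("%2d steps: %.0f%% chance the whole chain is right\n", steps, chain*100)
	}
}
```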
24:54
So if you're trying to chain a whole bunch of probabilistic AI interactions together, the chance of that output being what you want decreases rather significantly the longer the chain grows. So this is a very interesting problem to consider. There was a great DARPA challenge to create an AI that would find vulnerabilities in code, write an exploit for that vulnerability,
25:24
create a fix for the vulnerability, apply it to the code, and then run the exploit against it to prove it was fixed. So it was sort of a full chain of discovering vulnerabilities in addition to exploiting them, fixing them and proving they're fixed.
25:51
I can't remember the name of the security vendor that came second or third in that contest, but I saw an interview with them, and one of the interesting things that came out of it is that they intermingled deterministic steps between those AI, or probabilistic, steps to bring down that exponential error problem. So this is something to consider if you are building these more complex systems: if you just chain
26:20
AI to AI to AI to AI, you run up against the exponential error problem. Another consideration, related to the exponential error problem, is the 70% problem, right? The further you get in with an LLM, you can suddenly see these diminishing returns happen. This is when you're going back and forth, particularly vibe coding. You end up getting a solution created by the LLM that's
26:49
close to what you want, and it's certainly better than if you started from scratch, but it's not done. This is why you'll hear a lot of people say AI is great to vibe code POCs or MVPs, but maybe not the final, final thing, or you have to approach it in a very different way. But particularly if you're using a naive approach to vibe coding, where you're just saying, I want an app that does XYZ, I think your chance of getting an app out of that process
27:17
is significantly diminished if you don't do some other things to make sure that you give it the right steps. This is where asking it for a plan, and asking it to do testing while implementing the steps of that plan, helps. There's a bunch of things you can do to improve the quality of your vibe coding, but it's hard to get code out of an LLM that's 100% there, that you're perfectly happy with, and that is ready to roll to production. And a lot of that is because that exponential error problem also rears its head when you're vibe coding.
27:48
I think one of the other sort of ways that you can see this coming to life is this whole idea of AI work slop, right? Where the whole idea of work slop is you're using AI tools to generate stuff within a work context. They look really cool. They look polished. They look legit. And then you read them and you realize, ah, maybe they're not so legit. Maybe they're a waste of time. And this is that compounding error problem.
28:18
And so you end up finding these things where I can quickly generate documents that look great but are, and I'm going to steal from a beer commercial here, less filling, less fulfilling, right? I have to actually go back and quality check it, and when I do, the quality check tells me it's actually not that great. So be careful about this too when you're implementing these things. And then the thing to talk about here is, I've
28:45
talked about some problems; how do we get around them, and how should we think about them? So if you're creating a system, whether it's just a simple MCP or an agentic system with multiple steps, however complicated you wanna make this thing, there are sort of three fundamental things to think about that your AI system or LLM or what have you may have to do. And the more you can limit these, the smaller your risk profile is.
29:13
So, access to untrusted resources. If you have the ability for LLMs to pull external data and that data is from an unsanitized source, that can be super interesting. All the prompt injection talk you hear about, this is what it is, right? I can leave white text on a white background where a human won't see it, an LLM will read it, and it'll slip a prompt into a thing, right? All those kinds of tricks that people are playing with LLMs
29:41
really boil down to accessing data from untrusted sources. So the fewer untrusted sources you pipe into your AI solution, the better. Then there's the ability to read valuable secrets, right? If an LLM can't see a secret, it can't accidentally divulge it; that's a simple truth. So if your LLMs have the ability to read internal or sensitive data, you have to
30:10
put up guardrails to protect against that. And then there's the ability to communicate with the outside world. This is where the LLM reaches out to, say, MCP servers or other external tool calls. That's also another place untrusted data comes in, and it can bring in that kind of prompt injection problem.
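One way to put bounds on that third leg, the ability to communicate with the outside world, is to gate every outbound tool call through an explicit allowlist, so even a successfully injected prompt can only reach destinations you already trust. The sketch below is purely hypothetical, not a real agent framework; it only illustrates the guardrail.

```go
// Hypothetical sketch of bounding an agent's outbound calls: every URL the
// agent wants to reach is checked against an explicit allowlist first.
package main

import (
	"fmt"
	"net/url"
)

var allowedHosts = map[string]bool{
	"defectdojo.internal.example": true, // our own MCP server
	"api.vendor.example":          true, // one vetted external API
}

// allowOutbound returns an error unless the target host is explicitly trusted.
func allowOutbound(rawURL string) error {
	u, err := url.Parse(rawURL)
	if err != nil {
		return fmt.Errorf("unparseable URL %q: %w", rawURL, err)
	}
	if !allowedHosts[u.Hostname()] {
		return fmt.Errorf("blocked outbound call to %q: host not on the allowlist", u.Hostname())
	}
	return nil
}

func main() {
	for _, target := range []string{
		"https://defectdojo.internal.example/api/v2/findings/",
		"https://attacker.example/exfil?data=secrets",
	} {
		if err := allowOutbound(target); err != nil {
			fmt.Println("DENY :", err)
			continue
		}
		fmt.Println("ALLOW:", target)
	}
}
```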
30:30
So, I mean, I hate to say this, but if you talk to anybody who hasn't completely drunk the Kool-Aid, I think prompt injection is a feature, not a bug. You are asking a piece of software to take in text and produce text, right? Fundamentally, that's what LLMs do. And they can't tell that this text is code versus this text is just text, by the nature of how we do this interactive prompting. Now, to some extent,
31:00
if you put the LLM in a context where it's part of an IDE, you could probably let the LLM know this is code, but fundamentally they don't have a way to tell. And this is why prompt injection is so problematic: there aren't clear lines delineating where code stops and starts. That's just gonna be a problem with LLMs. And then your typical ways that you interact with
31:28
LLMs, or do security work rather, break down with LLMs, right? If I run a static analyzer against the same code base at the same, you know, GitHub commit, it will give me the same results. It's deterministic, right? So every time I run it, it'll say these three lines have problems, whatever those are. However, LLMs are probabilistic, so they won't necessarily tell you the same thing every time.
31:58
How do you validate or audit or otherwise confirm, which is what we're used to doing in security, when the response changes? I run an LLM code checker the first time and it finds three issues. I run it the second time and it finds five. Then I run it the third time and it finds three again. So is it three or is it five? Hard to say. So this is just where a lot of the typical security mindset has to be adjusted
32:27
to think about the ways LLMs work and the fact that they are probabilistic as opposed to being deterministic. And I think the thing is, and this is something developers and security people have not been great about doing, but you need to sort of adopt a civil engineering mindset, right? You need to accept that there are certain ways that LLMs work that by their very nature produce flaws and weaknesses in the system.
32:56
And you have to accept that those flaws exist and then sort of overbuild. So if I'm building a bridge and I expect four cars at once, maybe I build it as if there were eight cars driving over at the same time. It's way over engineered, but it also isn't going to fail because the failure case for a bridge is pretty extreme. So take that into consideration when you're building these AI systems.
33:22
And then you have to prioritize safety margins and bounded access, right? The more you can constrain the things in your system that an AI can get access to, the data, the sources of data, the tool calls, what have you, the more you are providing those safety margins. So you may have to restrict things or be very specific about how those things work. You may have to do things like, if you have an LLM running on a particular VM,
33:52
Maybe you don't allow any outbound network connections except for well-established ones, right? All those things reduce the blast radius should something go wrong. And I think that's one of the things that we have to take into consideration when we're building these AI applications or AI enhanced or augmented or whatever you would want to say. And then really think about failing safely, right? This is how can I limit what the LLM can do in a way that if it does fail,
34:20
the scope of how it can fail is known, as opposed to just being an unbounded thing. This is where it gets interesting from a product development perspective: you have to be willing to potentially reduce a feature set so that your LLM application can fail safely. And I think we've seen examples of cases where vendors haven't chosen to fail safely, unfortunately.
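As a sketch of what failing safely can look like in code: give each LLM call a hard timeout and fall back to a known-safe default, here flagging for human review, instead of letting the failure mode be unbounded. The function names and the stubbed model call are hypothetical stand-ins, not a real SDK.

```go
// Hypothetical fail-safely sketch: the LLM call gets a hard timeout, and on
// failure the system falls back to a bounded, known-safe answer.
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// askLLM stands in for a real model call; here it just simulates being slow.
func askLLM(ctx context.Context, prompt string) (string, error) {
	select {
	case <-time.After(5 * time.Second): // pretend the model is taking too long
		return "triage: low severity", nil
	case <-ctx.Done():
		return "", ctx.Err()
	}
}

// triageFinding returns a bounded, known-safe answer when the LLM can't.
func triageFinding(finding string) string {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	verdict, err := askLLM(ctx, "Triage this finding: "+finding)
	if err != nil {
		if errors.Is(err, context.DeadlineExceeded) {
			return "needs-human-review (LLM timed out)"
		}
		return "needs-human-review (LLM error)"
	}
	return verdict
}

func main() {
	fmt.Println(triageFinding("SQL injection reported in /login"))
}
```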
34:47
So let's wrap things up. I had this vision in my head, particularly on the economic side of things, of these AI vendors where it's like a snake eating its own tail, except it's not one snake, it's multiple snakes. And by the way, other than the screenshots, all of the images in this deck so far have been AI generated. So this is another AI-generated image.
35:15
I'd love to say this was my first shot at getting this right. It isn't; it was actually my seventh shot. Here are some other examples. My prompt was: draw two snakes from above which form a circle, one of which is labeled Nvidia and the other of which is labeled OpenAI, and have snake one eating the tail of snake two and snake two eating the tail of snake one, therefore forming a circle. I see two circles here
35:45
and some circles kind of spattered around, but no real circle. So just know that AIs are not infallible, right? They are probabilistic things, which means there's a probability that they will not produce the right result. In my case, it took seven shots to get one that was acceptable to me.
36:04
And even the mighty Google can fail. I ordered something from eBay, and it was coming in on the 15th. I got an email that Google's helpful AI told me meant it was going to arrive on the 13th. It actually arrived two days after the 15th, on the 17th. So just know these things aren't perfect, and plan for that when you're designing these systems. There's an old expression, right? If I say "good enough for...", I
36:33
think most people, at least in a US context and probably internationally, have this concept of good enough for government work, right? That is a common phrase that people say. And I'm wondering now, is the new version of that good enough for AI output? Is that where we're at and are we happy with that? Or can we work towards a better future where we don't have to accept good enough for AI output? If I said anything too crazy,
37:03
or off the beaten path in this presentation, I want to say that I'm neither going to confirm nor deny this presentation. And if it gets really bad, I'll just say it's a deepfake and move on with my life. As someone who's presented a zillion times to different audiences, I just think it's hilarious that I can now claim I didn't give that talk. And I want to finish with a picture and a quote from one of my favorite movies, Brazil: "Listen, kid, we're all in it together."
37:32
That's Robert De Niro saying that in this particular scene. I think AI is definitely going to have an impact on things. I'm not sure how big that impact is going to be, but it's definitely going to change the landscape to some extent. We can argue about how much it does, but it's definitely going to change.
37:50
And so as we're all working to get the most out of this very interesting technology, let's share our knowledge. I think that's one of the reasons why I wanted to point out Daniel's work. He's been very good about saying, these are the cool things I'm doing. I want to share and let you see examples of what's going on. And that is it. And I'm ready to field questions should people have them.