ChainLeak and OpenClaw Make it Clear: Third-Party AI Has a Trust Issue

Written by GREG ANDERSON | Feb 18, 2026 6:08:35 PM

So far, 2026 has not been kind to cybersecurity professionals or to AI. I’ve already discussed OpenClaw in detail (long story short: if you are only using OpenClaw’s default permissions, you are going to get hacked, no ifs, ands, or buts).

But while OpenClaw was holding our attention, another open-source AI framework disclosed two major vulnerabilities of its own. Chainlit has been gaining steam as a way to quickly build AI chatbots. As the steward of a long-standing OWASP Flagship Project, I am absolutely in favor of open-source software and of sharing collective knowledge to build better tools.

What I am not in favor of is using something with two vulnerabilities big enough to earn their own name, ChainLeak, and which both carry high severity scores.

  1. CVE-2026-22219 (Score: 8.3): “Chainlit versions prior to 2.9.4 contain a server-side request forgery (SSRF) vulnerability in the /project/element update flow when configured with SQLAlchemy as the backend.”
  2. CVE-2026-22218 (Score: 7.1): “Chainlit versions prior to 2.9.4 contain an arbitrary file read vulnerability in the /project/element update flow.”

With these two vulnerabilities, an attacker can not only break out of Chainlit and reach the cloud environment it’s hosted in, but also read any file accessible to the Chainlit process.
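
To make these two bug classes concrete, here is a minimal, hypothetical sketch in Python. It is not Chainlit’s actual code: the function names, the “source” parameter, and the ELEMENT_DIR path are illustrative assumptions. It simply shows how an element-update handler that trusts a client-supplied URL or file path opens the door to SSRF and arbitrary file reads, and what a common mitigation looks like.

```python
# Hypothetical sketch of the ChainLeak bug classes, NOT Chainlit's actual code.
# Names, paths, and the "source" parameter are assumptions for illustration.
import ipaddress
import os
import socket
from urllib.parse import urlparse
from urllib.request import urlopen

ELEMENT_DIR = "/srv/app/elements"  # assumed storage root for element files


def update_element_unsafe(source: str) -> bytes:
    # Vulnerable pattern: the handler trusts whatever "source" the client sends,
    # whether it is a URL (SSRF) or a filesystem path (arbitrary file read).
    if source.startswith(("http://", "https://")):
        with urlopen(source) as resp:   # can be pointed at http://169.254.169.254/
            return resp.read()
    with open(source, "rb") as f:       # can read /etc/passwd, .env files, keys...
        return f.read()


def update_element_safer(source: str) -> bytes:
    # Common mitigations: refuse private/loopback/link-local URL targets and
    # confine file reads to the application's own element directory.
    if source.startswith(("http://", "https://")):
        parsed = urlparse(source)
        if not parsed.hostname:
            raise ValueError("URL must include a hostname")
        addr = ipaddress.ip_address(socket.gethostbyname(parsed.hostname))
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            raise ValueError("refusing to fetch an internal address")
        with urlopen(source) as resp:
            return resp.read()
    path = os.path.realpath(os.path.join(ELEMENT_DIR, source))
    if not path.startswith(ELEMENT_DIR + os.sep):
        raise ValueError("refusing to read outside the element directory")
    with open(path, "rb") as f:
        return f.read()
```

Even the “safer” version is only a partial defense; redirects and DNS rebinding can still slip past a resolve-then-check approach, which is why upgrading to a patched release is the real fix.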

First things first: If you haven’t updated to at least Chainlit 2.9.4, do so. The most recent version available is Chainlit 2.9.6.
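
If you want a quick way to confirm which version a deployment is actually running, something like this (assuming Chainlit was installed with pip into the active Python environment) will tell you:

```python
# Print the installed Chainlit version; anything below 2.9.4 needs the upgrade.
from importlib.metadata import version

print(version("chainlit"))
```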

But more importantly, third-party risks aren’t going away. If anything, they’re only getting worse as AI proliferates.

This is something we’ve been monitoring closely at DefectDojo, even as we’ve released both MCP support and our own AI agent, Sensei. In developing both, our goal was to make our AI capabilities as safe as they could possibly be. Unfortunately, it’s all too clear that not every AI developer is thinking the same way.

Let’s put it in simpler terms: third-party AI can be a huge breach-by-proxy risk. No provider is completely trustworthy. No, not even Anthropic or OpenAI. (OpenAI has been breached more than 1,000 times as of last summer.) By handing data over to these providers, you’re also giving up control.

What’s a security leader to do? From DefectDojo’s point of view, the basic pillars of secure AI strategy remain the same:

  • Visibility: Maintain comprehensive awareness of your AI systems' activities, data interactions, and information flows.
  • Customization: Align AI capabilities with your specific threat model, compliance requirements, and operational realities.
  • Control: Develop robust policies, oversight mechanisms, and architectural frameworks that enable deliberate, informed choices rather than reactive responses.

In 2026 so far, it’s clear that for many organizations, building a secure AI strategy is still a work in progress, even as boards and C-suites push to integrate AI more and more.

It’s our job to protect our organization from cyberthreats, and it’s very clear that AI introduces plenty of them. That means we must play a real, significant role in our organization’s AI governance structure. We’re just as important as the CTOs who lead the selection process, the CFOs who look for ROI, and the CHROs who help the workforce transition.

If we abdicate this responsibility, we’re dooming ourselves to constantly putting out AI fires instead of making AI an organizational ally (including in cybersecurity itself!). And in the fight against cyberattacks, we need every ally we can find.