Categories
Author

GREG ANDERSON

January 8, 2026

1min Read

Best of 2025: Indirect prompt injection attacks target common LLM data sources

While the shortest distance between two points is a straight line, a straight-line attack on a large language model isn’t always the most efficient — and least noisy — way to get the LLM to do bad things. That’s why malicious actors have been turning to indirect prompt injection attacks on LLMs.

Indirect prompt injection attacks involve malicious instructions embedded within external content — documents, web pages, or emails — that an LLM processes. The model may interpret these instructions as valid user commands, leading to unintended actions such as data leaks or misinformation.

DefectDojo CEO, Greg Anderson, weighs in on the risks of prompt injection attacks in Security Boulevard: https://securityboulevard.com/2025/12/indirect-prompt-injection-attacks-target-common-llm-data-sources-2/