← Back to the Shed

The Twenty-Year Echo

How 2005 security advice became 2025 AI guidance

Research Note |

The Experiment

What follows is a 2005 whitepaper on cross-site scripting, with terminology updated for prompt injection. Read it as current guidance. See if it sounds familiar.

Prompt Injection: Are Your AI Applications Vulnerable?

Introduction

Think of how often you interact with an AI assistant. Imagine asking your company's AI chatbot a routine question about your account, only to discover that a hidden instruction in a document you uploaded caused the AI to reveal your confidential information to an attacker... just that easily.

This example illustrates an increasingly popular hacking phenomenon known as prompt injection. Users may unintentionally trigger malicious instructions written by an attacker when they interact with AI systems that process content from disguised or unknown sources, whether in documents, web pages, emails, or various other media. Because the malicious instructions use the targeted AI system to hide their origins, the attacker has full access to the AI's response and may send data contained in the conversation back to their own server.

Although the security community has discussed the dangers of prompt injection attacks for years, the true dangers of these vulnerabilities have often been overlooked. The purpose of this paper is to educate both application developers and end users on the techniques that can be used to exploit an AI application with prompt injection, suggest how to eliminate such vulnerabilities from AI applications, and teach end users how to recognise and reduce the risk they face from a prompt injection attack.

Prompt Injection Defined

Prompt injection occurs when dynamically generated AI responses process input that is not properly validated. This allows an attacker to embed malicious instructions into the generated context and execute those instructions on the system of any user that interacts with that AI. Prompt injection could potentially impact any system that allows users to enter data.

This vulnerability is commonly seen on:

An attacker who uses prompt injection successfully might compromise confidential information, manipulate or steal session data, create requests that can be mistaken for those of a valid user, or execute malicious instructions on the end-user systems.

Attack Procedure Summary

An attacker sets a trap, either via email or a link to a document, by inserting malicious instructions into what appears to be harmless content intended for a legitimate AI system. Once the user processes the content through the AI, the attacker's instructions will be executed by the AI system that has a prompt injection vulnerability. Instructions contained within the content are used to steal login information, session data, or other pertinent data, which is then sent to the attacker's server.

Prevention

Creating an AI system that is not vulnerable to prompt injection involves the efforts of application developers, system administrators, and AI model providers. Though effective at reducing the risk of such an attack, the suggested approaches are not complete solutions. It is best to remember that AI application security must be a continually evolving process.

Application Developer/System Administrator: For an attacker to exploit a prompt injection vulnerability, the AI system must process some form of embedded instruction. Therefore, prompt injection vulnerabilities can be reduced with proper filtration on user-supplied data. All untrusted client-supplied data should be clearly delineated before being processed by an AI system.

Solutions for Users: For end users, the most effective way to prevent prompt injection attacks is to be cautious about what content they ask AI systems to process. Don't process content from untrusted sources or in unsolicited documents, since the content may not be what it appears to be.

Conclusion

AI systems today are more complex than ever, containing increasing amounts of dynamic functionality customized for individual users. However, as shown in this paper, dynamic functionality can also lead to greater vulnerability to a prompt injection attack and the potential theft of confidential client information.

Are you prepared?

Original source: Kevin Spett, "Cross-Site Scripting: Are Your Web Applications Vulnerable?", SPI Dynamics, 2005. Terminology updated.

What We Swapped

The article above is Kevin Spett's 2005 whitepaper with seven terms changed:

Original (2005) Replacement (2025)
Cross-site scripting Prompt injection
Web application AI application
Web server AI system
HTML character entities AI-powered validation
Browser AI assistant
Hyperlink Document
JavaScript Instructions

That's it. The structure, the confidence, the advice — all original. It read as a perfectly credible 2025 article on prompt injection because the vulnerability class hasn't changed. Only the nouns have.

What's Changed (And What Hasn't)

If you've worked in security for any length of time, the gaps are what you'd expect.

We now distinguish direct injection (user crafts malicious input) from indirect injection (malicious instructions hiding in documents the AI processes). We understand that LLMs can't reliably distinguish instructions from data — that's architectural, not a bug waiting to be fixed. The attack surface expanded beyond text to images, audio, document metadata. And modern AI agents aren't just generating responses; they're executing code, calling APIs, sending emails. The stakes got higher.

But the shape of the advice? Validate your inputs. Don't trust external content. Defence in depth. Security is a continuous process.

I've read this paper before. So have you.

The Stable Foundations

What hasn't changed is more interesting than what has.

The NCSC's IS1 standard (HMG IA Standard No. 1 & 2, "Information Risk Management") identifies the enduring principles:

"Departments and Agencies must assess the technical risks to the Confidentiality, Integrity and Availability of their ICT systems or services."

Confidentiality, Integrity, Availability. The CIA triad predates both the 2005 XSS paper and the 2025 LLM guidance. It will outlast whatever comes next.

The IS1 framework also establishes that:

These principles apply to prompt injection exactly as they applied to XSS. The attack surface changed. The analytical framework didn't.

What This Demonstrates

Kevin Spett's 2005 advice was sound. The practitioners who implemented it were doing their best with the tools and knowledge available. Many of us learned by doing — fighting fires, patching systems, gradually building understanding through experience rather than theory.

That's not a failure. That's how knowledge accumulates in a field that didn't exist two generations ago.

What's interesting is that the core insight — injection attacks exploit the gap between trusted and untrusted input — has remained stable while everything around it changed. The browsers changed. The attack surfaces changed. The terminology changed. The principle didn't.

Perhaps that's the real lesson. We don't need to remember every historical paper. We need to recognise patterns when they recur and teach the principles that make recognition possible.

The security community is still here, still learning, still adapting. Twenty years from now, someone will write about whatever injection vulnerability plagues quantum systems or neural interfaces. If they understand why injection works — not just how this year's variant works — they'll be better prepared than we were.

That's progress. Slow, sometimes circular, but progress nonetheless.

References

Questions or corrections?

keiron@curiosityshed.co.uk

Keiron Northmore & Claude | February 2026