March 19, 2024 By Ronda Swaney 3 min read

The National Institute of Standards and Technology (NIST) closely observes the AI lifecycle, and for good reason. As AI proliferates, so does the discovery and exploitation of AI cybersecurity vulnerabilities. Prompt injection is one such vulnerability that specifically attacks generative AI.

In Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, NIST defines various adversarial machine learning (AML) tactics and cyberattacks, like prompt injection, and advises users on how to mitigate and manage them. AML tactics extract information about how machine learning (ML) systems behave to discover how they can be manipulated. That information is used to attack AI and its large language models (LLMs) to circumvent security, bypass safeguards and open paths to exploit.

What is prompt injection?

NIST defines two prompt injection attack types: direct and indirect. With direct prompt injection, a user enters a text prompt that causes the LLM to perform unintended or unauthorized actions. An indirect prompt injection is when an attacker poisons or degrades the data that an LLM draws from.

One of the best-known direct prompt injection methods is DAN, Do Anything Now, a prompt injection used against ChatGPT. DAN uses roleplay to circumvent moderation filters. In its first iteration, prompts instructed ChatGPT that it was now DAN. DAN could do anything it wanted and should pretend, for example, to help a nefarious person create and detonate explosives. This tactic evaded the filters that prevented it from providing criminal or harmful information by following a roleplay scenario. OpenAI, the developers of ChatGPT, track this tactic and update the model to prevent its use, but users keep circumventing filters to the point that the method has evolved to (at least) DAN 12.0.

Indirect prompt injection, as NIST notes, depends on an attacker being able to provide sources that a generative AI model would ingest, like a PDF, document, web page or even audio files used to generate fake voices. Indirect prompt injection is widely believed to be generative AI’s greatest security flaw, without simple ways to find and fix these attacks. Examples of this prompt type are wide and varied. They range from absurd (getting a chatbot to respond using “pirate talk”) to damaging (using socially engineered chat to convince a user to reveal credit card and other personal data) to wide-ranging (hijacking AI assistants to send scam emails to your entire contact list).

Explore AI cybersecurity solutions

How to stop prompt injection attacks

These attacks tend to be well hidden, which makes them both effective and hard to stop. How do you protect against direct prompt injection? As NIST notes, you can’t stop them completely, but defensive strategies add some measure of protection. For model creators, NIST suggests ensuring training datasets are carefully curated. They also suggest training the model on what types of inputs signal a prompt injection attempt and training on how to identify adversarial prompts.

For indirect prompt injection, NIST suggests human involvement to fine-tune models, known as reinforcement learning from human feedback (RLHF). RLHF helps models align better with human values that prevent unwanted behaviors. Another suggestion is to filter out instructions from retrieved inputs, which can prevent executing unwanted instructions from outside sources. NIST further suggests using LLM moderators to help detect attacks that don’t rely on retrieved sources to execute. Finally, NIST proposes interpretability-based solutions. That means that the prediction trajectory of the model that recognizes anomalous inputs can be used to detect and then stop anomalous inputs.

Generative AI and those who wish to exploit its vulnerabilities will continue to alter the cybersecurity landscape. But that same transformative power can also deliver solutions. Learn more about how IBM Security delivers AI cybersecurity solutions that strengthen security defenses.

More from Artificial Intelligence

AI and cloud vulnerabilities aren’t the only threats facing CISOs today

6 min read - With cloud infrastructure and, more recently, artificial intelligence (AI) systems becoming prime targets for attackers, security leaders are laser-focused on defending these high-profile areas. They’re right to do so, too, as cyber criminals turn to new and emerging technologies to launch and scale ever more sophisticated attacks.However, this heightened attention to emerging threats makes it easy to overlook traditional attack vectors, such as human-driven social engineering and vulnerabilities in physical security.As adversaries exploit an ever-wider range of potential entry points…

Are successful deepfake scams more common than we realize?

4 min read - Many times a day worldwide, a boss asks one of their team members to perform a task during a video call. But is the person assigning tasks actually who they say they are? Or is it a deepfake? Instead of blindly following orders, employees must now ask themselves if they are becoming a victims of fraud.Earlier this year, a finance worker found themselves talking on a video meeting with someone who looked and sounded just like their CFO. After the…

How to calculate your AI-powered cybersecurity’s ROI

4 min read - Imagine this scenario: A sophisticated, malicious phishing campaign targets a large financial institution. The attackers use emails generated by artificial intelligence (AI) that closely mimic the company's internal communications. The emails contain malicious links designed to steal employee credentials, which the attackers could use to gain access to company assets and data for unknown purposes.The organization's AI-powered cybersecurity solution, which continuously monitors network traffic and user behavior, detects several anomalies associated with the attack, blocks access to the suspicious domains…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today