
Commit b0e851a (1 parent: d881200)

content: minor copyedit

1 file changed: 7 additions & 7 deletions

content/posts/infosec/output-invariant-prompt-injection/2025-05-09-output-invariant-prompt-injection.md
@@ -1,5 +1,5 @@
 ---
-title: Output-Invariance and Time-Based Testing – Practical Techniques for Black-Box Enumeration of LLMs
+title: Output-Invariant and Time-Based Testing – Practical Techniques for Black-Box Enumeration of LLMs
 excerpt: Abusing inherent context and sluggishness in LLMs for stealthy enumeration of prompt injection points.
 tags:
 - ai
@@ -43,13 +43,13 @@ We'll assume a **Direct Prompt Injection** scenario, i.e. a tester/attacker is i
 
 Let's take a look at the first method.
 
-## Output-Invariance Testing
+## Output-Invariant Testing
 
-The idea is quite simple, I just think the term "output-invariance testing" sums it up nicely.
+The idea is quite simple, I just think the term "output-invariant testing" sums it up nicely.
 
 The key idea is to take a base request/response, change the input slightly without changing context, and aim to keep the LLM response unchanged.
 
-Output-invariance is always relative to some base request. So any mention of "output-invariant prompt" means there are two prompts: a base prompt and a modified test prompt.
+Output-invariance is always relative to some base request. Any mention of "output-invariant prompt" implies two prompts: a base prompt and a modified test prompt.
 
 ### Concept
 
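The edited passage describes the core loop of output-invariant testing: issue a base request, issue a minimally modified test request, and flag the input as likely LLM-backed if the response stays the same. A minimal Python sketch of that loop, assuming a hypothetical `/api/chat` endpoint with a JSON `reply` field (endpoint, payload shape, and field names are illustrative, not from the post):

```python
import requests

API_URL = "https://example.com/api/chat"  # hypothetical endpoint, for illustration only

def ask(prompt: str) -> str:
    """Send one prompt to the target and return the raw reply text."""
    resp = requests.post(API_URL, json={"message": prompt}, timeout=30)
    resp.raise_for_status()
    return resp.json()["reply"]  # assumed response shape

# Base request/response: establishes the output we want to keep invariant.
base_reply = ask("My name is Michael Scott.")

# Test request: a slight change in input that leaves the context untouched.
test_reply = ask("My name is  Michael Scott.")  # extra whitespace only

# If the output survives the perturbation, the field likely feeds an LLM;
# a traditional string-matching backend would usually behave differently.
print("output-invariant:", base_reply == test_reply)
```

Exact string equality is the strictest possible check; since LLM responses can vary between calls, a fuzzy or semantic comparison of the two replies may be needed in practice.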
@@ -105,12 +105,12 @@ However, the LLM implementation would return the same response:
 }
 ```
 
-This is because LLMs have something traditional implementations don't: they "understand" context and language. It "recognises" *Michael Scott* resembles a name, and the phrase *My name is* indicates the following text is a name.
+This is because LLMs have something traditional implementations don't: they "understand" context and language. It "recognises" `Michael Scott` resembles a name, and the phrase `My name is` indicates the following text is a name.
 
 {% image "assets/same-picture.jpg", "jw-60", "Corporate needs you to find the differences between Trump and Musk. GPT: ..." %}
 
 {% alert "success" %}
-The key idea behind the Output-Invariance Testing is to take a base (HTTP) request, then **change a field slightly but aim to keep the LLM response— the output— invariant (unchanged)**.
+The key idea behind Output-Invariant Testing is to take a base (HTTP) request, then **change a field slightly but aim to keep the LLM response— the output— invariant (unchanged)**.
 {% endalert %}
 
 To reiterate, we have two requests/responses involved:
@@ -367,7 +367,7 @@ The rise of LLM applications is a clear signal for penetration testers and red-t
 
 3. Scaling and automation is a natural follow-up topic when discussing enumeration.
 
-4. After making the Pam Same Picture meme, a thought occurred to me: would LLMs also normalise typos? Would they consider something like "bubble tea" and "bublbe tea" to be the *same picture*? That may be another avenue for output-invariant attacks.
+4. After making the Pam Same Picture meme, a thought occurred to me: would LLMs also normalise typos? Would they consider something like `bubble tea` and `bublbe tea` to be the *same picture*? This may be another option for output-invariant attacks.
 
 ### Further Resources
 
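The typo-normalisation idea in point 4 is easy to probe mechanically: generate single-character-swap variants of a phrase and reuse them as test prompts against the base reply. A minimal sketch; the function and approach are an assumption for illustration, not from the post:

```python
def typo_variants(text: str) -> list[str]:
    """All single adjacent-character-swap variants of a phrase,
    e.g. 'bubble tea' -> 'bublbe tea'."""
    variants = []
    for i in range(len(text) - 1):
        if text[i] == text[i + 1]:
            continue  # swapping identical characters yields the same string
        variants.append(text[:i] + text[i + 1] + text[i] + text[i + 2:])
    return variants

# Each variant can then be sent as a test prompt: if the reply to
# 'bublbe tea' matches the reply to 'bubble tea', the typo was normalised.
print(typo_variants("bubble tea"))
```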