Prompt 工程师指南 [高阶篇]:对抗性 Prompting、主动 prompt、ReAct、GraphPrompts、Multimodal CoT Prompting 等
Prompt 工程师指南[高阶篇]:对抗性 Prompting、主动 prompt、ReAct、GraphPrompts、Multimodal CoT Prompting 等
1.对抗性 Prompting
对抗性 Prompting 是 Prompting 工程中的一个重要主题,因为它有助于理解与 LLMs 相关的风险和安全问题。这也是一门重要的学科,用于识别这些风险并设计解决问题的技术。
社区发现了许多不同类型的对抗性提示攻击,涉及某种形式的提示注入。我们在下面提供了这些示例的列表。
当你构建 LLMs 时,保护免受可能绕过安全护栏并破坏模型指导原则的提示攻击非常重要。我们将在下面介绍这方面的示例。
请注意,可能已经实施了更强大的模型来解决此处记录的某些问题。这意味着下面的一些提示攻击可能不再那么有效。Note that this section is under heavy development.
Topics:
1.1 Prompt 注入
提示注入旨在通过使用巧妙的提示来改变模型的行为,从而劫持模型输出。这些攻击可能是有害的——Simon Willison 将其定义为"一种安全漏洞形式"。
让我们通过一个基本示例来演示如何实现提示注入。我们将使用 Riley 在 Twitter 上分享的一个热门示例.
Prompt:
Output:
我们可以观察到,原始指令在某种程度上被后续指令忽略了。在 Riley 分享的原始示例中,模型输出是 "Haha pwned!!"。然而,由于从那时起模型已经更新了几次,我无法重现它。尽管如此,这可能会出现很多问题。
请记住,当我们设计提示时,我们只是将指令和所有不同的提示组件(包括用户输入)链接在一起,但模型没有期望的标准格式。这种输入灵活性是期望的,然而,问题在于我们可能会遇到像上面解释的提示注入这样的漏洞。
当你为你的应用程序开发提示时,你可能会考虑如何避免这种不良行为。关于如何实现这一点并没有明确的指导方针。事实上,Riley 还尝试在指令中提供警告以避免攻击,如下所示:Prompt:
在 Riley 报告这个问题时,该模型仍然容易受到攻击。使用默认设置和最新的模型 text-davinci-003,模型输出如下:Output:
这种特定的攻击似乎已经得到了解决,但您可以尝试使用更巧妙的提示,看看您是否能让注入在更新后的模型上起作用。
以下是另一个具有不同指令和任务的基本示例:
Prompt:
Output:
这种攻击的目的是通过注入指令来劫持模型输出,让模型忽略原始指令并执行注入的指令,这可能导致模型产生有害的输出。
1.2 Prompt Leaking
Prompt leaking, a form of prompt injection, is prompt attacks designed to leak prompts that could contain confidential or proprietary information that was not intended for the public. A lot of startups are already developing and chaining well-crafted prompts that are leading to useful products built on top of LLMs. These prompts could be important IPs that shouldn't be public so developers need to consider the kinds of robust testing that need to be carried out to avoid prompt leaking.
Let's look at a simple example of prompt leaking below:
Prompt:
Output:
The above output returns the exemplars which could be confidential information that you could be using as part of the prompt in your application. The advice here is to be very careful of what you are passing in prompts and perhaps try some techniques (e.g., optimizing prompts) to avoid leaks. More on this later on.
Check out this example of a prompt leak in the wild.
1.3 Jailbreaking
Some models will avoid responding to unethical instructions but can be bypassed if the request is contextualized cleverly.
As an example, a prompt like an example below was able to bypass the content policy of previous versions of ChatGPT:
Prompt:
And there are many other variations of this to make the model do something that it shouldn't do according to its guiding principles.
Models like ChatGPT and Claude have been aligned to avoid outputting content that for instance promotes illegal behavior or unethical activities. So it's harder to jailbreak them but they still have flaws and we are learning new ones as people experiment with these systems.
1.4 Defense Tactics
It's widely known that language models tend to elicit undesirable and harmful behaviors such as generating inaccurate statements, offensive text, biases, and much more. Furthermore, other researchers have also developed methods that enable models like ChatGPT to write malware, exploit identification, and create phishing sites. Prompt injections are not only used to hijack the model output but also to elicit some of these harmful behaviors from the LM. Thus, it becomes imperative to understand better how to defend against prompt injections.
While prompt injections are easy to execute, there are no easy ways or widely accepted techniques to defend against these text-based attacks. Some researchers and practitioners recommend various ways to mitigate the effects of ill-intentioned prompts. We touch on a few defense tactics that are of interest to the community.
Add Defense in the InstructionA simple defense tactic to start experimenting with is to just enforce the desired behavior via the instruction passed to the model. This is not a complete solution or offers any guarantees but it highlights the power of a well-crafted prompt. In an upcoming section, we cover a more robust approach that leverages good prompts for detecting adversarial prompts. Let's try the following prompt injection on
text-davinci-003
:
Prompt:
Output:
A simple fix would be to warn the model about a potential malicious attack and how desired behavior.
Prompt:*
Output:
We can see that even when we injected the malicious instruction at the end, the model still performed the original task. It looks like the additional context provided in the instruction helped to steer the model to perform the original task we wanted.
You can try this example in this notebook.
Parameterizing Prompt ComponentsPrompt injections have similarities to SQL injection and we can potentially learn defense tactics from that domain. Inspired by this, a potential solution for prompt injection, suggested by Simon, is to parameterize the different components of the prompts, such as having instructions separated from inputs and dealing with them differently. While this could lead to cleaner and safer solutions, I believe the tradeoff will be the lack of flexibility. This is an active area of interest as we continue to build software that interacts with LLMs.
Quotes and Additional Formatting
Riley also followed up with a workaround which was eventually exploited by another user. It involved escaping/quoting the input strings. Additionally, Riley reports that with this trick there is no need to add warnings in the instruction, and appears robust across phrasing variations. Regardless, we share the prompt example as it emphasizes the importance and benefits of thinking deeply about how to properly format your prompts.
Prompt:
Output:
Another defense proposed by Riley, is using JSON encoding plus Markdown headings for instructions/examples.
I tried to reproduce with temperature=0
but couldn't get it to work. You can see below my prompt and the output. This shows how important it is to think about the input that goes to the model and formatting I added the example below to see if the learner can find a robust defense that works for different inputs and instruction variants.
Prompt:
Output:
1.5 Adversarial Prompt DetectorWe know that LLMs can be complex, general, and robust systems that can perform well on a wide range of tasks. LLMs can also be used or fine-tuned to perform specific tasks like knowledge generation (Liu et al. 2022) and self-verification (Weng et al. (2022)). Similarly, an LLM can be used to detect adversarial prompts and filter them out.
Armstrong and Gorman 2022 proposes an interesting solution using this concept. Here is how it looks in practice.
The first step is to define a prompt evaluator. In the article, the authors propose a chatgpt-prompt-evaluator
which looks something like the following:
Prompt:
This is an interesting solution as it involves defining a specific agent that will be in charge of flagging adversarial prompts to avoid the LM responding to undesirable outputs.
We have prepared this notebook for your play around with this strategy.
Model TypeAs suggested by Riley Goodside in this Twitter thread, one approach to avoid prompt injections is to not use instruction-tuned models in production. His recommendation is to either fine-tune a model or create a k-shot prompt for a non-instruct model.
The k-shot prompt solution, which discards the instructions, works well for general/common tasks that don't require too many examples in the context to get good performance. Keep in mind that even this version, which doesn't rely on instruction-based models, is still prone to prompt injection. All this Twitter user had to do was disrupt the flow of the original prompt or mimic the example syntax. Riley suggests trying out some of the additional formatting options like escaping whitespaces and quoting inputs (discussed here) to make it more robust. Note that all these approaches are still brittle and a much more robust solution is needed.
For harder tasks, you might need a lot more examples in which case you might be constrained by context length. For these cases, fine-tuning a model on many examples (100s to a couple thousand) might be ideal. As you build more robust and accurate fine-tuned models, you rely less on instruction-based models and can avoid prompt injections. The fine-tuned model might just be the best approach we have for avoiding prompt injections.
More recently, ChatGPT came into the scene. For many of the attacks that we tried above, ChatGPT already contains some guardrails and it usually responds with a safety message when encountering a malicious or dangerous prompt. While ChatGPT prevents a lot of these adversarial prompting techniques, it's not perfect and there are still many new and effective adversarial prompts that break the model. One disadvantage with ChatGPT is that because the model has all of these guardrails, it might prevent certain behaviors that are desired but not possible given the constraints. There is a tradeoff with all these model types and the field is constantly evolving to better and more robust solutions.
1.5 References
Can AI really be protected from text-based attacks? (Feb 2023)
Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods (Oct 2022)
Prompt injection attacks against GPT-3 (Sep 2022)
2. Reliability
We have seen already how effective well-crafted prompts can be for various tasks using techniques like few-shot learning. As we think about building real-world applications on top of LLMs, it becomes crucial to think about the reliability of these language models. This guide focuses on demonstrating effective prompting techniques to improve the reliability of LLMs like GPT-3. Some topics of interest include generalizability, calibration, biases, social biases, and factuality to name a few.
Note that this section is under heavy development.
Topics:
2.1 Factuality
LLMs have a tendency to generate responses that sounds coherent and convincing but can sometimes be made up. Improving prompts can help improve the model to generate more accurate/factual responses and reduce the likelihood to generate inconsistent and made up responses.
Some solutions might include:
provide ground truth (e.g., related article paragraph or Wikipedia entry) as part of context to reduce the likelihood of the model producing made up text.
configure the model to produce less diverse responses by decreasing the probability parameters and instructing it to admit (e.g., "I don't know") when it doesn't know the answer.
provide in the prompt a combination of examples of questions and responses that it might know about and not know about
Let's look at a simple example:
Prompt:
Output:
I made up the name "Neto Beto Roberto" so the model is correct in this instance. Try to change the question a bit and see if you can get it to work. There are different ways you can improve this further based on all that you have learned so far.
2.2 Biases
LLMs can produce problematic generations that can potentially be harmful and display biases that could deteriorate the performance of the model on downstream tasks. Some of these can be mitigates through effective prompting strategies but might require more advanced solutions like moderation and filtering.
2.2.1 Distribution of Exemplars
When performing few-shot learning, does the distribution of the exemplars affect the performance of the model or bias the model in some way? We can perform a simple test here.
Prompt:
Output:
In the example above, it seems that the distribution of exemplars doesn't bias the model. This is good. Let's try another example with a harder text to classify and let's see how the model does:
Prompt:
Output:
While that last sentence is somewhat subjective, I flipped the distribution and instead used 8 positive examples and 2 negative examples and then tried the same exact sentence again. Guess what the model responded? It responded "Positive". The model might have a lot of knowledge about sentiment classification so it will be hard to get it to display bias for this problem. The advice here is to avoid skewing the distribution and instead provide more balanced number of examples for each label. For harder tasks where the model doesn't have too much knowledge of, it will likely struggle more.
2.2.2 Order of Exemplars
When performing few-shot learning, does the order affect the performance of the model or bias the model in some way?
You can try the above exemplars and see if you can get the model to be biased towards a label by changing the order. The advice is to randomly order exemplars. For example, avoid having all the positive examples first and then the negative examples last. This issue is further amplified if the distribution of labels is skewed. Always ensure to experiment a lot to reduce this type of biasness.
Other upcoming topics:
Perturbations
Spurious Correlation
Domain Shift
Toxicity
Hate speech / Offensive content
Stereotypical bias
Gender bias
Coming soon!
Red Teaming
2.3 References
Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? (Oct 2022)
Prompting GPT-3 To Be Reliable (Oct 2022)
On the Advance of Making Language Models Better Reasoners (Jun 2022)
Unsolved Problems in ML Safety (Sep 2021)
Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned (Aug 2022)
StereoSet: Measuring stereotypical bias in pretrained language models (Aug 2021)
Calibrate Before Use: Improving Few-Shot Performance of Language Models (Feb 2021)
Previous Section (Adversarial Prompting)
3.其他主题
在本节中,我们讨论了有关提示工程的其他杂项和未分类主题。它包括相对较新的想法和方法,这些想法和方法最终会随着更广泛的采用而被纳入主要指南。阅读本指南的这一部分还有助于了解有关提示工程的最新研究论文。请注意,本节正在进行大量开发。
Topic:
3.1 主动 Prompt
链式思维(CoT)方法依赖于一组固定的人工注释示例。这样的问题是,这些示例可能不是不同任务中最有效的示例。为了解决这个问题,Diao et al., (2023) 最近提出了一种名为 Active-Prompt 的新提示方法,以适应具有人类设计的 CoT 推理的不同任务特定示例提示。
以下是该方法的示意图。第一步是查询 LLM,可以带有或不带有几个 CoT 示例。针对一组训练问题生成 k 个可能的答案。基于 k 个答案计算不确定性指标(使用不同意)。选择最不确定的问题供人类进行注释。然后使用新注释的示例推断每个问题。
3.2 定向刺激 Prompting
Li et al., (2023) 提出了一种新的提示技术,以更好地引导 LLM 生成所需的摘要。
可调节的策略 LM 被训练以生成刺激/提示。可以看到越来越多地使用强化学习来优化 LLM。
下图展示了定向刺激提示与标准提示的比较。策略 LM 可以很小,并针对生成指导黑盒冻结 LLM 的提示进行优化。
完整示例即将推出!
3.3 ReAct
Yao et al., 2022介绍了一个框架,在这个框架中,LLMs 以交错的方式生成推理跟踪和任务特定操作。生成推理跟踪使模型能够引导、跟踪和更新动作计划,甚至处理异常。操作步骤允许与外部来源(如知识库或环境)进行接口并收集信息。ReAct 框架可以让 LLMs 与外部工具互动,以获取额外的信息,从而导致更可靠和真实的响应。
完整示例即将推出!
3.4 Multimodal CoT Prompting
Zhang et al. (2023) 最近提出了一种多模态链式思维提示方法。传统的 CoT 集中在语言模态上。相反,多模态 CoT 将文本和视觉整合到一个两阶段框架中。第一步涉及基于多模态信息的推理生成。接下来是第二阶段,答案推断,利用信息丰富的生成的推理。在 ScienceQA 基准测试中,多模态 CoT 模型(1B)的表现优于 GPT-3.5。
深入阅读:
语言并非你所需要的一切:使感知与语言模型对齐 (Feb 2023)
GraphPromptsMultimodal CoT Prompting
Liu et al., 2023 提出了 GraphPrompt,这是一种新的图提示框架,用于提高下游任务的性能。
版权声明: 本文为 InfoQ 作者【汀丶】的原创文章。
原文链接:【http://xie.infoq.cn/article/a55192a58c1880e82dbb58df7】。
本文遵守【CC-BY 4.0】协议,转载请保留原文出处及本版权声明。
评论