From: aidotengineer
Few-shot prompting involves including, in the prompt itself, examples of what you want the model to mimic, do, or understand about a problem [04:37:34]. This approach is likened to “showing rather than telling” the model what is expected [04:45:17].
How it Works
Instead of explicitly describing a client’s tone or style, you can teach the model by providing input-output pairs, such as a brief and the content written from it [04:50:41], [05:04:38]. Given a new brief, the model then fills in the content based on the provided examples [05:00:23].
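The pattern above can be sketched as chat messages in which each example pair is a user brief followed by an assistant response, and the real brief comes last. This is a minimal illustration, not the speaker's code; the briefs, the sample copy, and the system instruction are all hypothetical.

```python
# Hypothetical brief/content pairs used as few-shot examples.
FEW_SHOT_EXAMPLES = [
    {
        "brief": "Announce our new analytics dashboard to existing customers.",
        "content": "Big news! Your data just got easier to read. Meet the new dashboard.",
    },
    {
        "brief": "Invite developers to our upcoming API workshop.",
        "content": "Calling all builders: join us for a hands-on API workshop next month.",
    },
]

def build_few_shot_messages(new_brief: str) -> list[dict]:
    """Turn input-output example pairs into chat messages so the model
    can infer tone and style by imitation (showing, not telling)."""
    messages = [{
        "role": "system",
        "content": "Write marketing copy that matches the style of the examples.",
    }]
    for ex in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": f"Brief: {ex['brief']}"})
        messages.append({"role": "assistant", "content": ex["content"]})
    # The real brief comes last; the model fills in the content.
    messages.append({"role": "user", "content": f"Brief: {new_brief}"})
    return messages

msgs = build_few_shot_messages("Tease our spring product launch.")
print(len(msgs))  # system + two example pairs + final brief = 6 messages
```

The message list can be passed to any chat-completion API; keeping the examples as separate user/assistant turns, rather than pasting them into one block of text, makes the imitation pattern explicit to the model.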
Performance Characteristics
Most of the performance gains from few-shot prompting are achieved with just one or two examples [05:17:34]. Graphs illustrating performance versus the number of examples typically show this trend [05:21:04]. Conversely, including too many examples can sometimes degrade performance [05:28:44]. For builders, this means only one or two diverse examples are generally needed to cover various input scenarios the model might encounter [05:31:07].
Considerations for Reasoning Models
Reasoning models function differently and require a distinct prompting approach [06:51:24], [06:56:06]. Research, such as work on Microsoft’s Medprompt framework with OpenAI’s o1 and on DeepSeek’s R1 model, has shown that adding examples can lead to worse performance for these models [07:06:01], [07:11:43]. OpenAI has also cautioned that providing excessive context can over-complicate and confuse the model [07:18:59].
When using reasoning models, it’s advisable to:
- Avoid few-shot prompting if possible [08:22:56].
- If examples are necessary, start with only one or two [08:24:43].
- Do not instruct the model on how to reason, as this functionality is often built-in and doing so can negatively impact performance [08:32:00].
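The guidance above amounts to prompting reasoning models zero-shot: state the task and constraints plainly, with no examples and no instructions on how to think. A minimal sketch, with a hypothetical task and constraints:

```python
def build_reasoning_prompt(task: str, constraints: list[str]) -> str:
    """Zero-shot prompt for a reasoning model: state the task and its
    constraints directly. Deliberately omits few-shot examples and any
    'think step by step'-style reasoning instructions, since reasoning
    is built into these models."""
    lines = [f"Task: {task}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

prompt = build_reasoning_prompt(
    "Summarize the quarterly report for executives.",
    ["Keep it under 200 words.", "Highlight revenue changes."],
)
print(prompt)
```

If examples turn out to be necessary, the advice above suggests appending at most one or two to this prompt rather than a long list.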