From: aidotengineer

Few-shot prompting is a prompt-engineering technique that involves including examples of desired inputs and outputs within the prompt itself [00:04:39]. It lets the model learn what you want by showing rather than telling [00:04:45].

How it Works [00:04:39]

Instead of explicitly describing the desired tone or style, you provide input and output examples that demonstrate what you want the model to achieve [00:05:06].

Content Generation

To generate content for a client, you might provide a “brief” (input) and “related content” (output) example. Then, when you provide a new “brief,” the model will fill in the corresponding content, having learned from the example how to match the client’s tone and style [00:04:50].
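The brief→content pattern above can be sketched as plain prompt assembly. This is a minimal illustration; the example pairs and function names are hypothetical, not from the source:

```python
# Hypothetical brief/content pairs that demonstrate the client's tone (illustrative data).
EXAMPLES = [
    {
        "brief": "Announce our new eco-friendly packaging.",
        "content": "Big news! Your favorite products now arrive in 100% recyclable packaging.",
    },
    {
        "brief": "Promote the weekend flash sale.",
        "content": "This weekend only: 30% off everything. Don't miss it!",
    },
]

def build_few_shot_prompt(new_brief: str) -> str:
    """Show input/output pairs first, then leave the final output blank for the model."""
    parts = [f"Brief: {ex['brief']}\nContent: {ex['content']}" for ex in EXAMPLES]
    parts.append(f"Brief: {new_brief}\nContent:")
    return "\n\n".join(parts)
```

The resulting string ends with an empty `Content:` slot, so a completion-style model continues in the tone the examples established.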

Benefits and Effectiveness [00:04:39]

  • Most Gains from Few Examples: Most performance improvements are seen with just one or two examples [00:05:17]. The graph of performance versus the number of examples typically shows significant gains early on [00:05:21].
  • Ease of Implementation: It’s a straightforward and accessible way to get better outputs from large language models (LLMs) [00:01:15].
  • Diversity of Examples: It’s beneficial to select diverse examples that cover different inputs the model is expected to handle [00:05:33].

Considerations

  • Performance Degradation: Sometimes, performance can degrade if too many examples are included [00:05:28].
  • Interaction with Reasoning Models: With newer reasoning models, adding examples can sometimes overcomplicate the task and confuse the model, leading to worse performance [00:07:06]. Researchers at DeepSeek, for instance, found that few-shot examples degraded performance when building their R1 model [00:07:11]. If using few-shot with these models, it’s advisable to start with only one or two examples [00:08:24].
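One way to act on the advice above is to cap how many examples the prompt includes. A minimal sketch, with hypothetical example data and names:

```python
# Illustrative few-shot examples (hypothetical data).
ALL_EXAMPLES = [
    "Input: 2 + 2\nOutput: 4",
    "Input: 10 - 3\nOutput: 7",
    "Input: 6 * 7\nOutput: 42",
    "Input: 9 / 3\nOutput: 3",
]

def build_prompt(task: str, max_examples: int = 2) -> str:
    """Include at most max_examples shots -- start with one or two for reasoning models."""
    shots = ALL_EXAMPLES[:max_examples]
    return "\n\n".join(shots + [f"Input: {task}\nOutput:"])
```

Keeping `max_examples` as a parameter makes it easy to measure performance as examples are added and stop once gains flatten out.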

Few-Shot and Chain of Thought Prompting

Few-shot examples can be used within Chain of Thought prompting by including examples of the model’s desired reasoning steps [00:03:14]. For example, when solving math problems, you can provide an example problem that explicitly shows the step-by-step reasoning process [00:03:22]. LLMs can also be used to generate these reasoning chains for few-shot examples automatically [00:03:31].
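Combining the two techniques amounts to making the example *outputs* spell out the reasoning steps. A minimal sketch (the worked example and names are illustrative, not from the source):

```python
# A worked math example whose answer shows explicit step-by-step reasoning (illustrative).
COT_EXAMPLE = (
    "Q: A shop sells pens at $2 each. How much do 5 pens cost?\n"
    "A: Each pen costs $2. 5 pens cost 5 * 2 = $10. The answer is 10."
)

def build_cot_prompt(question: str) -> str:
    """Prepend the reasoned example so the model imitates the step-by-step style."""
    return f"{COT_EXAMPLE}\n\nQ: {question}\nA:"
```

Because the demonstrated answer walks through the arithmetic before stating the result, the model tends to produce its own reasoning chain before answering the new question.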