How many few-shot examples should you use?


Welcome back to our series on prompt engineering. We are in the middle of a series of posts about few-shot examples (otherwise known as “in-context learning”), where we examine ways to optimize the addition of few-shot examples to your prompts. In previous posts, we looked at how much prompt performance depends on which specific examples you select, and at how likely it is that a set of few-shot examples that succeeds in one model will also perform well in another. In both of those posts, we used Libretto to generate and test a ton of different few-shot prompt variants, but every prompt variation contained exactly three few-shot examples.

There is some evidence that adding more few-shot examples to a prompt that already has few-shot examples can increase prompt performance, but we assume there is a limit to this technique’s effectiveness; performance can’t just keep improving forever as you add more examples. Stated a little more formally: for some number N, once the LLM has seen N examples in a prompt, giving it N+1 examples probably won’t help, because the (N+1)th example won’t contain much new information. But are these assumptions actually true? If they are, what is N? And is N different for different models and prompts? Knowing the answers will not only help us make our prompts perform at peak accuracy, but also help us avoid unnecessary examples and keep token usage as low as possible, which helps with both cost and latency.
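Concretely, the experiment amounts to generating one prompt variant per value of N, from zero-shot up to the full pool of examples, and measuring accuracy at each size. Here is a minimal sketch of that sweep; the build_prompt helper, the Q/A format, and the placeholder example pairs are illustrative assumptions, not Libretto’s actual API or the benchmark’s real data:

```python
# Sketch: sweep the number of few-shot examples (N) and build one
# prompt variant per value of N. The helper, format, and example
# pairs below are hypothetical, for illustration only.

def build_prompt(examples, question, n):
    """Prepend the first n few-shot examples to the real question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in examples[:n]]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

examples = [
    ("example question 1", "example answer 1"),
    ("example question 2", "example answer 2"),
    ("example question 3", "example answer 3"),
]

# One prompt variant per N, from zero-shot up to all available examples.
# Each variant would then be sent to the LLM under test and scored.
variants = {
    n: build_prompt(examples, "real question", n)
    for n in range(len(examples) + 1)
}
```

Comparing accuracy across the variants then tells you where (and whether) the curve flattens out for a given model and prompt.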

To test this question, I once again used our trusty Emoji Movie dataset from the Big Bench LLM benchmark. As a refresher, Emoji Movie is a set of 100 questions asking the LLM to identify a movie described by a string of emojis. For example:
