
Few-Shot Approach: why it works and what it has in common with the way we think

· 4 min read

When you teach someone something new, you don’t hand them a whole textbook. You usually show them a few examples — and suddenly they “get it.” This is exactly how the few-shot approach in AI works.

It’s a technique where you give the model a handful of short examples of a task before asking it to perform a new one. You’re not training the model from scratch. You’re not changing its parameters. You’re simply giving it the context it needs to get into the right mode of thinking.

What actually is “few-shot”?

It’s a style of prompting where:

  • You start with a few examples of inputs and correct outputs.

  • Then you give the model a new input, and it produces the output based on the pattern it detected.

Example:

Example 1:
Input: "2+2"
Output: "4"

Example 2:
Input: "3+5"
Output: "8"

New task:
Input: "7+1"
Output: ?

The model sees the structure and repeats it.

Not because it “understands math,” but because it detected the pattern of behavior.

It’s like showing a child how to solve two puzzles — the third one they’ll complete themselves.
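
In code, the whole technique is just how you lay out the prompt. Below is a minimal sketch in Python, assuming the OpenAI Python SDK and an illustrative model name; the same idea works with any chat-style model API, because the examples travel inside the prompt text itself.

# Minimal few-shot sketch, assuming the OpenAI Python SDK is installed and
# OPENAI_API_KEY is set in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

# The "shots": input/output pairs that demonstrate the pattern.
examples = [
    ("2+2", "4"),
    ("3+5", "8"),
]

# Build one prompt containing the solved examples plus the new input.
parts = [f'Input: "{x}"\nOutput: "{y}"' for x, y in examples]
parts.append('Input: "7+1"\nOutput:')
prompt = "\n\n".join(parts)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # expected: "8"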


Why does few-shot work? (no technical jargon)

Large language models are essentially very advanced pattern-recognition machines. Humans are too, and you see it constantly in everyday life:

  • You start the phrase: “Roses are…” — and everyone finishes it.

  • You say: “Say something in French” — most people begin with “bonjour.”

The human brain loves templates, order, repetition.

Models work the same way.

1. The model doesn’t “understand,” but it imitates rules extremely well

If you give it three examples of sales emails, it will generate a fourth one because it notices:

  • the format,

  • the tone,

  • the structure,

  • the way each one ends.

No magic. Just statistical pattern matching, at massive scale and with surprising precision.
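
To make that concrete, here is a small Python sketch that assembles such a prompt; the product names, prospect names, and email text are invented purely to show the shared skeleton the model will latch onto.

# Sketch of a few-shot prompt for sales emails; all content below is made up
# for illustration. The examples carry the format, tone, and closing pattern.
examples = [
    ("CRM tool, prospect: Dana",
     "Hi Dana,\n\nTeams like yours cut follow-up time in half with our CRM.\n"
     "Would a 15-minute demo this week work?\n\nBest,\nAlex"),
    ("Payroll software, prospect: Sam",
     "Hi Sam,\n\nCompanies your size close payroll three times faster with us.\n"
     "Would a 15-minute demo this week work?\n\nBest,\nAlex"),
]

new_task = "Scheduling app, prospect: Priya"

prompt = "\n\n".join(
    f"Product and prospect: {inp}\nEmail:\n{out}" for inp, out in examples
) + f"\n\nProduct and prospect: {new_task}\nEmail:"

print(prompt)  # send this to any chat model; the reply will mirror the skeleton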

2. Humans think in a very similar way

When you try to solve a logic riddle like:

“There are two people in a room, and another one walks in. How many people are there?”

You might answer 3… But if someone gives you a few “trick questions” beforehand, your brain shifts into a different mode:

You start looking for traps.

Example (few-shot for humans):

  1. “You have a match that’s not lit. How do you make it go out?” — “You light it.”

  2. “What weighs more: one kilo of feathers or one kilo of lead?” — “They weigh the same.”

After a few such examples, when you hear a new riddle, your brain follows the pattern you were primed with: “Aha, this is probably a trick again.”

This is exactly the mechanism that few-shot examples activate in models.


3. Humans learn exactly like this

Imagine you’re teaching someone a new card game. You show them a few example moves. Then you ask them to try the next one.

That person:

  • doesn’t know all the rules yet,

  • hasn’t analyzed the full instruction book,

  • but can repeat the pattern.

The model does the same thing: it copies the structure, not necessarily with deep understanding.


Why are just a few examples enough?

Because models already have broad knowledge of language and logic. Few-shot examples don’t teach them from scratch; instead, they do three things:

1. They set the context

They tell the model: “This is the kind of output I expect from you now.”

2. They narrow the interpretation

Without examples, the model can:

  • misinterpret the task,

  • use a different format,

  • go too broad.

Examples act like guardrails.

3. They inject a micro-signature of style

Format, tone, structure — all are replicated from examples.

In practice, it’s like “reprogramming” the model through text alone, without real training.
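
As a small sketch of that guardrail effect, consider a made-up extraction task: two solved examples are enough to pin down the field names and the JSON shape before the model ever sees the new sentence.

# Sketch of format guardrails via few-shot examples; the sentences and field
# names are invented for illustration.
few_shot = """Sentence: "Anna moved to Berlin in 2019."
JSON: {"person": "Anna", "city": "Berlin", "year": 2019}

Sentence: "Tom has lived in Oslo since 2021."
JSON: {"person": "Tom", "city": "Oslo", "year": 2021}

Sentence: "Lena settled in Porto in 2015."
JSON:"""

# Without the two solved examples, the model might answer in prose, choose
# different field names, or add commentary. With them, it almost always
# returns a JSON object with exactly the keys person, city, and year.
print(few_shot)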


Perfect analogy: math tutoring

You’re teaching a kid how to solve proportions.

You could explain the theory for 20 minutes, but most people do this instead:

  1. Show a few solved examples.

  2. Ask the kid to solve a new one.

They don’t understand the deep mathematical theory. But they’ve learned the pattern of solving.

This is exactly how few-shot prompting works.


Summary

The few-shot approach works because models — just like humans — respond extremely well to examples. With just a few demonstrations:

  • they understand the format,

  • they replicate the structure,

  • they limit hallucinations,

  • they interpret the task in the exact manner shown to them.

It’s not magic. It’s statistical pattern imitation, which closely resembles how students learn through examples and hints.
