Prompt types (Zero, Few, Chain-of-thought)
Level Up Your Prompting: 3 Powerful Techniques
Imagine you are at a restaurant. You could order from the waiter in 3 different ways:
- "Give me biryani": a simple request, and the waiter makes a best guess
- "Like last time, less spicy, extra raita, big chicken pieces": you reference a previous experience
- "First half-boil the rice, then layer the masala, then dum for 20 minutes": you give step-by-step instructions
Exactly the same thing happens in AI prompting! The first method is Zero-shot, the second Few-shot, and the third Chain-of-thought (CoT).
You already know how to write basic prompts. But for real-world tasks (drafting emails, debugging code, analyzing data), basic prompts are not enough. With the right technique, the same AI model can give dramatically better results.
In this article, you will learn:
- Exact definitions of the 3 techniques and how they differ
- Real-world example prompts for each technique
- When to use which: a clear decision framework
- Combining techniques for maximum power
Ready? Let's master the art of prompting!
Zero-shot, Few-shot, CoT: Core Concepts
In prompt engineering, these 3 techniques are the foundation. Just as a building needs a strong foundation, working effectively with AI requires knowing these 3 techniques.
Zero-shot Prompting: no examples, no context. You give a direct question or instruction, and the AI answers using what it learned during training.
Example: *"Translate 'Good morning' to Tamil"*. Simple, direct, no examples needed.
Few-shot Prompting: you provide 2-5 examples and say "do it like this." The AI recognizes the pattern and produces output in the same style.
Example: *"Happy → 🙂, Sad → 😢, Angry → ?"*. The AI continues the pattern from the examples.
Chain-of-thought (CoT) Prompting: you force step-by-step reasoning. On complex problems the AI "thinks out loud," and accuracy improves dramatically.
Example: *"A store has 15 apples. 8 are sold in the morning, 3 more are added in the afternoon. Think step by step: how many are there now?"*
Key insight: these techniques are not mutually exclusive! You can combine them. Few-shot + CoT = examples with step-by-step reasoning = the most powerful combination.
| Aspect | Zero-shot | Few-shot | Chain-of-thought |
|---|---|---|---|
| Examples needed | None | 2-5 | Optional |
| Best for | Simple tasks | Format/style control | Complex reasoning |
| Token cost | Low | Medium | High |
| Accuracy | Good | Better | Best (for complex) |
How These Techniques Flow Inside the LLM
```
USER PROMPT
 ├─ Zero-shot:        [Instruction]
 ├─ Few-shot:         [Example 1] [Example 2] [Example 3] [New Query]
 └─ Chain-of-thought: [Instruction] + ["Think step by step"]
          │
          ▼
LLM PROCESSING
 ├─ Zero-shot:        pattern match (direct)
 ├─ Few-shot:         pattern imitation (from examples)
 └─ Chain-of-thought: Step 1 → Step 2 → Step 3 → ... → final answer
          │
          ▼
AI RESPONSE
 ├─ Zero-shot:        quick answer
 ├─ Few-shot:         styled answer
 └─ Chain-of-thought: reasoned answer
```
Zero-shot Prompting: Deep Dive
In zero-shot prompting you give the AI no examples at all. You state the task directly, and the AI answers using the knowledge it already has.
When should you use it?
- Simple, well-defined tasks (translation, summarization, classification)
- Queries on topics the AI is already well trained on
- You need a quick answer, with no elaborate setup
Real Examples:
🟢 Good Zero-shot:
*"Summarize this paragraph in 2 sentences: [text]"*
*"Is this email spam or not spam? [email content]"*
*"Convert 45 USD to INR at current rates"*
🔴 Bad Zero-shot (needs Few-shot):
*"Write a product description"*: What style? What tone? What format? The AI has to guess.
Pro Tips:
- Be specific: "Summarize in 2 sentences" vs just "Summarize"
- Add constraints: "in simple Tamil" or "for a 10-year-old"
- Mention the format: "as bullet points" or "as a table"
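As a rough illustration of these tips, here is a tiny Python helper that assembles a specific zero-shot prompt from a task, constraints, and an output format. The function and its structure are my own sketch, not part of any library:

```python
def zero_shot_prompt(task: str, constraints: str = "", out_format: str = "") -> str:
    """Build a specific zero-shot prompt: task + constraints + output format.

    Hypothetical helper for illustration only.
    """
    parts = [task]
    if constraints:
        parts.append(f"Constraints: {constraints}")
    if out_format:
        parts.append(f"Output format: {out_format}")
    return "\n".join(parts)


# Vague version (the AI has to guess style, length, and format):
vague = zero_shot_prompt("Summarize this paragraph: [text]")

# Specific version (constraints and format stated up front):
specific = zero_shot_prompt(
    "Summarize this paragraph: [text]",
    constraints="2 sentences, simple Tamil",
    out_format="bullet points",
)
print(specific)
```

The point is not the code itself but the habit: every zero-shot prompt should state the task, its constraints, and the expected format explicitly.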
When Zero-shot fails:
- The output format must be very specific (like JSON with exact keys)
- Niche domain knowledge required
- Creative tasks with particular style
If the zero-shot result isn't satisfactory, don't force it; move to Few-shot. Right tool for the right job: that's the key principle.
Success rate by task type:
| Task | Zero-shot Accuracy |
|---|---|
| Translation | ~90% |
| Classification | ~80% |
| Summarization | ~85% |
| Creative Writing | ~60% |
| Code Generation | ~70% |
Few-shot Prompting: Deep Dive
In few-shot, you give the AI examples that say "do it like this." The AI picks up the pattern and continues in the same style.
When should you use it?
- You need a specific output format (JSON, CSV, a particular structure)
- You need to maintain a particular tone or style
- You're giving the AI an unfamiliar or niche task
- You need consistent outputs across multiple queries
Real Example: Sentiment Analysis:
*"Classify the sentiment:*
*Text: 'This movie was absolutely fantastic!' → Positive*
*Text: 'Worst food I ever had' → Negative*
*Text: 'It was okay, nothing special' → Neutral*
*Text: 'The new iPhone camera blew my mind!' → ?"*
The AI immediately understands: given a text, it should classify it as Positive/Negative/Neutral.
Real Example: Tanglish Translation:
*"Convert English to Tanglish:*
*'How are you?' → 'Epdi irukka?'*
*'I am going to office' → 'Naan office ku poren'*
*'What is your name?' → 'Un peru enna?'*
*'Where do you live?' → ?"*
How many examples are optimal?
| Examples | Quality | Token Cost | Use Case |
|---|---|---|---|
| 1 | Basic pattern | Low | Simple format matching |
| 2-3 | Good pattern | Medium | Most tasks (recommended) |
| 4-5 | Strong pattern | High | Complex/niche tasks |
| 6+ | Diminishing returns | Very High | Rarely needed |
Pro Tips:
- Give diverse examples; include edge cases
- Maintain a consistent format across examples
- Order matters: best examples first, tricky ones last
- Don't include wrong patterns in your examples; the AI will learn those too!
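These tips can be sketched in code. Here is a minimal Python helper (my own illustration, not a library API) that assembles a few-shot prompt from consistent examples plus the new query, using the sentiment example from above:

```python
def few_shot_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: instruction, 2-5 consistent examples, then the new query.

    Hypothetical helper for illustration only.
    """
    lines = [instruction]
    for text, label in examples:
        lines.append(f"Text: '{text}' -> {label}")
    # The final line repeats the exact same format, with the label left open:
    lines.append(f"Text: '{query}' -> ?")
    return "\n".join(lines)


examples = [
    ("This movie was absolutely fantastic!", "Positive"),
    ("Worst food I ever had", "Negative"),
    ("It was okay, nothing special", "Neutral"),
]
prompt = few_shot_prompt(
    "Classify the sentiment:",
    examples,
    "The new iPhone camera blew my mind!",
)
print(prompt)
```

Because every example uses one identical `Text: '...' -> Label` shape, the model sees a clean, unambiguous pattern to imitate; that consistency is exactly what the pro tips above ask for.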
Chain-of-thought Prompting: Deep Dive
In chain-of-thought (CoT), you tell the AI to "think step by step." For complex reasoning tasks, this is a game changer.
Why does CoT work?
LLMs predict the next token. On a complex problem, guessing the answer directly often goes wrong. But when the problem is broken into steps, predicting the correct next step at each stage is much easier.
Simple CoT trigger words:
- *"Think step by step"*
- *"Let's solve this step by step"*
- *"Show your reasoning process"*
- *"Break this down into steps"*
Example: Without CoT:
*"A farmer has 3 fields. Each field has 12 rows. Each row has 8 plants. How many plants total?"*
The AI might say *"288"* ✅ or *"256"* ❌; sometimes it gets it wrong!
Example: With CoT:
*"A farmer has 3 fields. Each field has 12 rows. Each row has 8 plants. How many plants total? Think step by step."*
The AI responds:
*"Step 1: Plants per field = 12 rows × 8 plants = 96*
*Step 2: Total plants = 3 fields × 96 = 288*
*Answer: 288 plants"* ✅ Almost always correct!
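You can check the model's arithmetic mechanically. This little Python sketch mirrors the two reasoning steps, showing how each step's output feeds the next — which is exactly why CoT helps:

```python
# Step 1: plants per field
rows_per_field = 12
plants_per_row = 8
plants_per_field = rows_per_field * plants_per_row  # 96

# Step 2: total across all fields (uses Step 1's output as input)
fields = 3
total_plants = fields * plants_per_field

print(total_plants)  # 288
```

Each intermediate value is small and easy to get right; the final answer falls out of the chain rather than being guessed in one jump.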
CoT variants:
| Variant | Description | Example |
|---|---|---|
| **Zero-shot CoT** | Just add "think step by step" | Simple problems |
| **Few-shot CoT** | Examples WITH reasoning steps | Complex domain problems |
| **Auto-CoT** | AI generates its own reasoning | Advanced prompting |
| **Tree-of-thought** | Explores multiple reasoning paths | Research-level tasks |
When CoT is overkill:
- Simple factual questions ("Capital of India?")
- Translation tasks
- Basic classification
When CoT is essential:
- Math & logic problems
- Multi-step planning
- Code debugging
- Cause-effect analysis
- Decision making with trade-offs
Pro Tip: CoT increases token usage (responses are longer), so avoid it for simple tasks. Save it for complex tasks, where the accuracy improvement is worth the extra cost.
Real-life Analogy: Cooking Instructor
🍳 A cooking class analogy makes this clear:
Zero-shot = just saying "make sambar." If the student already knows how to make sambar, OK. For a student who doesn't? Disaster!
Few-shot = "Look, like this: first boil the dal (shows example), then add the tamarind (shows example). Now you try." The student follows the pattern.
Chain-of-thought = "Step 1: Wash the dal. Step 2: 3 whistles in the pressure cooker. Step 3: Meanwhile, soak the tamarind. Step 4: Prepare the tempering..." You explain every step.
Combined (Few-shot + CoT) = examples WITH step-by-step instructions, like a cooking show where the chef explains WHY each step matters! Gordon Ramsay style! 👨‍🍳
Remember: for a simple dish (maggi), Zero-shot is enough. For a medium dish (biryani), Few-shot. For a complex dish (a wedding feast), CoT. Right technique for the right complexity!
Under the Hood: Why These Techniques Work
Let's look at how the LLM processes these techniques internally.
Zero-shot: Pre-trained Knowledge Activation:
The LLM is trained on billions of text tokens. When you give a zero-shot prompt, the relevant pre-trained patterns activate. Say "Translate to Tamil," and the translation patterns from the training data fire.
Limitation: if it isn't in the training data, the model hallucinates. Unreliable on niche tasks.
Few-shot: In-Context Learning:
This is one of the LLM's most powerful capabilities. Given examples, the LLM temporarily picks up a mini-pattern: the weights don't change, but the attention mechanism focuses on the examples and replicates the same pattern.
Key insight: in few-shot, the LLM doesn't actually "learn"; it pattern-matches. So example quality is critical.
Chain-of-thought: Sequential Reasoning:
The LLM generates left to right. With CoT, intermediate reasoning steps are generated, and each step uses the previous steps as input. So for a complex calculation:
- Without CoT: direct jump to the answer, often wrong
- With CoT: Step 1 output → Step 2 input → Step 3 input → final answer, usually correct
Research findings:
| Model Size | Zero-shot | Few-shot | CoT Improvement |
|---|---|---|---|
| Small (7B) | 35% | 45% | +5% (minimal) |
| Medium (70B) | 55% | 65% | +15% |
| Large (175B+) | 70% | 80% | +25% (massive!) |
Important: CoT gives a significant improvement mainly on large models; on small models the effect is minimal. Models at the level of GPT-4, Gemini Pro, and Claude are where CoT works best.
Comprehensive Comparison: When to Use What
A decision framework: save this table! 📌
| Scenario | Technique | Why |
|---|---|---|
| Quick translation | Zero-shot | Simple, well-known task |
| Email classification | Zero-shot | Binary/simple categories |
| Product descriptions in specific style | Few-shot | Style consistency needed |
| JSON output with exact schema | Few-shot | Format precision critical |
| Tanglish content creation | Few-shot | Unique style pattern |
| Math word problems | CoT | Step-by-step reasoning needed |
| Code debugging | CoT | Trace execution flow |
| Business analysis | CoT | Multi-factor evaluation |
| Exam answer generation | Few-shot + CoT | Format + reasoning both |
| Legal document analysis | Few-shot + CoT | Domain-specific + complex |
Cost vs Quality Trade-off:
| Technique | Tokens Used | Response Time | Quality | Cost |
|---|---|---|---|---|
| Zero-shot | Low (50-100) | Fast | Good | 💰 |
| Few-shot | Medium (200-500) | Medium | Better | 💰💰 |
| CoT | High (300-800) | Slower | Best* | 💰💰💰 |
| Few-shot + CoT | Very High (500-1000) | Slowest | Best* | 💰💰💰💰 |
*For complex tasks. On simple tasks, Zero-shot is equally good.
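To get a feel for the trade-off, here is a back-of-the-envelope Python sketch. The word-to-token ratio (~0.75 words per token) is a common rough heuristic, and the per-1K-token price here is an illustrative assumption, not real pricing:

```python
def estimate_tokens(text: str) -> int:
    """Very rough heuristic: roughly 0.75 words per token, so tokens ~ words / 0.75."""
    return round(len(text.split()) / 0.75)


def estimate_cost(prompt: str, expected_response_tokens: int,
                  price_per_1k_tokens: float = 0.002) -> float:
    """Illustrative cost: (prompt tokens + response tokens) * price.

    The default price is a made-up placeholder; check your provider's pricing page.
    """
    total = estimate_tokens(prompt) + expected_response_tokens
    return total / 1000 * price_per_1k_tokens


short_zero_shot = "Summarize this paragraph in 2 sentences: [text]"
long_few_shot_cot = short_zero_shot + " Example 1... Example 2... Think step by step."
print(estimate_cost(short_zero_shot, 100))
print(estimate_cost(long_few_shot_cot, 600))
```

The exact numbers don't matter; the shape does: a Few-shot + CoT prompt with a long reasoned response can easily cost several times a zero-shot call, which is why the golden rule below says to start simple.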
Decision Flowchart:
- Is the task simple? → Zero-shot
- Need a specific format or style? → Few-shot
- Complex reasoning involved? → CoT
- Complex reasoning AND a specific format? → Few-shot + CoT
- Still not working? → System prompt + Few-shot + CoT (the full stack!)
Golden Rule: Always start with the simplest technique. Over-engineering prompts waste tokens and sometimes confuse the model. Simple first, complex only if needed!
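The flowchart above can be captured in a few lines of Python (a sketch of the decision logic, not a prescriptive rule):

```python
def choose_technique(complex_reasoning: bool, specific_format: bool) -> str:
    """Pick the simplest technique that fits, following the decision flowchart."""
    if complex_reasoning and specific_format:
        return "Few-shot + CoT"
    if complex_reasoning:
        return "Chain-of-thought"
    if specific_format:
        return "Few-shot"
    return "Zero-shot"


print(choose_technique(complex_reasoning=False, specific_format=False))  # Zero-shot
print(choose_technique(complex_reasoning=True, specific_format=True))    # Few-shot + CoT
```

Note the ordering: the checks run from most demanding to least, so you only escalate when the task actually requires it, which is the golden rule in code form.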
Common Mistakes & Limitations
⚠️ Avoid these common pitfalls:
Zero-shot mistakes:
- ❌ Vague instructions: "Write something about AI" is far too broad!
- ❌ No format specification: the AI decides the format, usually inconsistently
- ✅ Fix: be specific, e.g. "Write a 200-word blog intro about AI in healthcare, conversational tone"
Few-shot mistakes:
- ❌ Contradictory examples: one example formal, another casual
- ❌ Too many examples: 10+ examples waste tokens and confuse the model
- ❌ Bad examples: wrong patterns in the examples mean wrong output
- ✅ Fix: 2-3 consistent, diverse, correct examples
CoT mistakes:
- ❌ Using CoT for simple tasks: "What's 2+2? Think step by step" is overkill!
- ❌ Not validating the reasoning: the AI can generate confident-sounding wrong steps
- ❌ Small models + CoT: minimal improvement, wasted tokens
- ✅ Fix: use CoT only for genuinely complex tasks, and verify each step
General limitations:
- All techniques depend on model quality: garbage model, garbage output
- Context window limits affect Few-shot (too many examples cause truncation)
- CoT doesn't guarantee correctness: it improves the probability, not certainty
- None of these replace domain expertise: always verify AI output!
Why This Matters: Career & Real Impact
"Prompt engineering is the new coding": even if this statement is over-hyped, there is truth in it.
Industry trends:
- "Prompt Engineer" roles on LinkedIn grew 300% across 2024-2025
- Average salary: $120K-180K (US), ₹15-30 LPA (India, top companies)
- Every role (marketing, HR, sales, development) now expects AI prompting skills
Real impact stories:
📧 Email Marketing Team: using few-shot prompting, they generate 50 product descriptions daily. Before: 2 days for 50 descriptions. After: 2 hours. A 20x productivity gain.
📊 Data Analyst: uses CoT prompting to generate complex SQL queries. Before: a senior analyst was needed. After: a junior analyst plus AI produces the same quality output.
💻 Developer: Zero-shot for boilerplate code, Few-shot for project-specific patterns, CoT for debugging. 40% less time on repetitive coding tasks.
Why YOU should master this:
- Differentiation: most people use AI badly. You'll use it expertly.
- Efficiency: same task, 50% less time, better quality output
- Career growth: every company wants AI-literate employees
- Foundation: advanced techniques (RAG, Agents) build on these basics
Once you know these 3 techniques properly, you can handle 80% of AI tasks. The remaining 20% needs advanced techniques (RAG, fine-tuning), but this is the foundation!
✅ Key Takeaways
📌 Remember these 5 things:
- Zero-shot = no examples, direct instruction. Perfect for simple tasks. Be specific in your instructions.
- Few-shot = set the pattern with 2-5 examples. Best for controlling format, style, and tone. Example quality > quantity.
- Chain-of-thought = add "think step by step." Essential for complex reasoning, math, and logic tasks. Works best on large models.
- Start simple, escalate: try Zero-shot first. Results not good? Move to Few-shot. Still too complex? Add CoT. Avoid over-engineering.
- Combine when needed: Few-shot + CoT is the most powerful combination, but the token cost is high, so use it judiciously.
Quick reference card:
| Need | Use | Magic words |
|---|---|---|
| Quick answer | Zero-shot | "Summarize...", "Translate..." |
| Specific format | Few-shot | "Like these examples..." |
| Deep reasoning | CoT | "Think step by step..." |
| Both format + reasoning | Few-shot + CoT | Examples + "Show reasoning..." |
Next level: these are the prompting fundamentals. In the next article, you'll learn structured prompt templates: the RICE framework and role-based prompts. That's where you go from good to great! 🎯
🚀 Mini Challenge: Try It Yourself!
🎯 Challenge time! Try these 3 tasks:
Task 1: Zero-shot
Open ChatGPT/Gemini and type:
*"Classify this movie review as Positive, Negative, or Neutral: 'The visuals were stunning but the story was predictable and boring'"*
Note the answer. Was it accurate?
Task 2: Few-shot
Now try with examples:
*"Classify movie reviews:*
*'Amazing film, loved every minute' → Positive*
*'Terrible acting, waste of money' → Negative*
*'It was fine, nothing memorable' → Neutral*
*'The visuals were stunning but the story was predictable and boring' → ?"*
Compare with Task 1 result!
Task 3: Chain-of-thought
*"A shop offers 20% discount. Then an additional 10% on the discounted price. If original price is ā¹1000, what's the final price? Think step by step."*
Now try WITHOUT "Think step by step". Did the AI still get it right? Try a harder problem!
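You can verify the expected answer yourself; this Python sketch mirrors the steps a good CoT response should produce:

```python
price = 1000
after_first = price * (1 - 0.20)            # Step 1: 20% off the original price
final = round(after_first * (1 - 0.10), 2)  # Step 2: 10% off the *discounted* price
# Note: this is NOT the same as a flat 30% discount (which would give 700)
print(final)  # 720.0
```

This is a classic trap question: models answering without CoT sometimes collapse the two discounts into one, which is exactly why the step-by-step structure matters here.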
Bonus Challenge: 🔥
Write a Few-shot + CoT prompt for your actual work task. Share what worked and what didn't. Practice makes perfect; every prompt you write teaches you something new!
Pro tip: screenshot your results and compare. You'll literally SEE the quality difference between techniques. That "aha moment" will change how you use AI forever! 💡
Interview Questions: Be Prepared!
🤔 Questions you might be asked in an AI/ML interview:
Q1: "Explain the difference between zero-shot and few-shot prompting."
A: Zero-shot uses no examples; it relies on the model's pre-trained knowledge. Few-shot provides 2-5 examples to establish a pattern. Zero-shot is faster and cheaper; few-shot gives more control over output format and style.
Q2: "When would chain-of-thought prompting fail?"
A: Small models (under 10B parameters) show minimal CoT improvement. It also fails when the reasoning chain is wrong: the AI can generate confident but incorrect intermediate steps. Simple factual queries don't benefit from CoT.
Q3: "How do you decide which prompting technique to use for a production system?"
A: Start simple (zero-shot) and evaluate output quality. If format matters, add few-shot examples. If accuracy on complex logic matters, add CoT. Always consider the cost-quality tradeoff: production systems need efficiency AND accuracy.
Q4: "What is few-shot chain-of-thought and why is it powerful?"
A: It combines examples WITH reasoning steps. You show the model not just WHAT the answer looks like, but HOW to arrive at it. Research shows this gives the best accuracy on complex tasks, especially in domain-specific applications.
Q5: "How many examples are optimal for few-shot prompting?"
A: 2-5 examples. Research shows diminishing returns after 5. Quality matters more than quantity: examples should be diverse, correct, and representative of the task distribution.
Final Thought
🚀 One important realization:
Using AI is easy; using it WELL is a skill. These 3 techniques are the most important tools in your AI toolkit. Just as a carpenter has a hammer, a screwdriver, and a saw, you have Zero-shot, Few-shot, and CoT.
Right tool, right job. Remember this principle.
Today's challenge: pick a task from your daily work. First try it zero-shot. Then try the same task few-shot. Then with CoT. Compare the results; you'll feel the difference!
In the next article, you'll learn structured prompt templates: the RICE framework, the CREATE method, and role-based prompts. Your prompts will become professional-grade! 🚀
Next Learning Path
🗺️ Your learning journey:
✅ Completed: Prompt types (Zero-shot, Few-shot, Chain-of-thought)
📍 Next: Writing Structured Prompts (RICE/CREATE frameworks)
🔮 Coming up: AI Tools Ecosystem, AI Hallucination, Using AI for Daily Work
Suggested practice path:
- Today: try all 3 techniques on ChatGPT/Gemini (30 min)
- Tomorrow: apply Few-shot to your work tasks (1 hour)
- This week: master CoT for complex problems
- Next week: learn structured prompt frameworks (the next article!)
Resources:
- Practice on: ChatGPT, Gemini, Claude (all free tiers available)
- Document your best prompts; build your personal prompt library
- Share and learn from the community
Keep experimenting, keep learning! 🎯
Frequently Asked Questions
A developer wants AI to generate unit tests in a SPECIFIC format (Jest, with describe/it blocks, specific assertion style). Which technique is MOST appropriate?