Module Review: Prompting
1. Key Takeaways
- Prompt Engineering is about steering the probability distribution of an LLM's output, not giving orders to a human.
- Structure Matters: Use clearly defined roles (System, User, Assistant) and formats (XML, JSON) to improve reliability.
- Context is King: LLMs have no memory of past interactions unless you provide it in the context window.
- Chain-of-Thought (CoT) forces the model to allocate more compute (tokens) to a problem, significantly improving reasoning.
- Few-Shot Learning (providing examples) is often more effective than complex instructions.
- Agents are commonly built using the ReAct pattern: Thought → Action → Observation → Thought, repeated until a final answer.
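The roles-and-context takeaways above can be sketched in code. This is a minimal illustration assuming an OpenAI-style message list; `build_messages` is a hypothetical helper, not part of any real client library. The key point: the model only "remembers" what you replay into this list.

```python
def build_messages(system_prompt, history, user_input):
    """Assemble the full context window for one model call.

    The model has no memory of past turns; prior user/assistant
    messages must be replayed verbatim in `history`.
    """
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(history)  # earlier turns, included explicitly
    messages.append({"role": "user", "content": user_input})
    return messages

# Without `history`, the model could not answer this follow-up question.
history = [
    {"role": "user", "content": "My name is Ada."},
    {"role": "assistant", "content": "Nice to meet you, Ada!"},
]
msgs = build_messages("You are a helpful assistant.", history, "What is my name?")
```

The assembled list would be passed as the `messages` payload of a chat-completion call; dropping the history lines is exactly what makes an LLM "forget".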
2. Interactive Flashcards
Test your knowledge of the key terms from this module.
3. Cheat Sheet
| Technique | Description | Best For | Example |
|---|---|---|---|
| Zero-Shot | Direct instruction without examples. | Simple tasks, creative writing. | “Translate this to French.” |
| Few-Shot | Providing input-output examples. | Formatting, style transfer, complex classification. | “Input: A, Output: 1. Input: B, Output: 2.” |
| Chain-of-Thought | Asking for step-by-step reasoning. | Math, Logic, Multi-step problems. | “Let’s think step by step.” |
| Self-Consistency | Sampling multiple CoT paths and voting. | High-stakes reasoning where accuracy is paramount. | Running CoT 5 times and taking the majority answer. |
| ReAct | Interleaving reasoning and tool use. | Autonomous agents, accessing real-time data. | “Thought: I need weather. Action: Search.” |
| System Prompting | Setting the persona/behavior. | Chatbots, Role-playing. | “You are a helpful assistant.” |
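The Self-Consistency row above can be made concrete with a short sketch: sample several chain-of-thought runs and take the majority answer. `sample_cot_answer` is a hypothetical stub standing in for one temperature > 0 CoT call to a real model; here it just cycles through canned answers so the example runs offline.

```python
from collections import Counter
from itertools import cycle

# Stub sampler: a real implementation would call an LLM with a CoT
# prompt at temperature > 0 and parse out the final answer.
_FAKE_SAMPLES = cycle(["42", "42", "41", "42", "40"])

def sample_cot_answer(question):
    return next(_FAKE_SAMPLES)

def self_consistency(question, n_samples=5):
    """Run CoT n_samples times and return (majority answer, agreement)."""
    answers = [sample_cot_answer(question) for _ in range(n_samples)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n_samples

answer, agreement = self_consistency("What is 6 * 7?")
```

The agreement ratio is a cheap confidence signal: low agreement across samples is a hint that the question deserves a second look.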
4. Quick Revision Checklist
- I understand the difference between System, User, and Assistant roles.
- I can explain why “Let’s think step by step” improves performance.
- I know when to use a low temperature (deterministic tasks like code) vs a high temperature (creative writing).
- I can implement a basic ReAct loop in code.
- I understand the concept of Hallucination and how Grounding (RAG/Context) helps.
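As a reference for the checklist item on implementing a ReAct loop, here is a minimal sketch. `fake_llm` and the `TOOLS` registry are hypothetical stand-ins: a real agent would call an LLM and real tools (search, APIs), and would parse model output more robustly.

```python
import re

# Toy tool registry; eval is used only for this offline example.
TOOLS = {"calculator": lambda expr: str(eval(expr))}

def fake_llm(transcript):
    # Stub model: requests the calculator once, then gives a final answer.
    if "Observation:" not in transcript:
        return "Thought: I need to compute.\nAction: calculator[2+3]"
    return "Thought: I have the result.\nFinal Answer: 5"

def react_loop(question, max_steps=5):
    """Thought -> Action -> Observation, repeated until a final answer."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        reply = fake_llm(transcript)
        transcript += "\n" + reply
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        match = re.search(r"Action: (\w+)\[(.+)\]", reply)
        if match:
            tool, arg = match.groups()
            # Feed the tool result back as an Observation for the next turn.
            transcript += f"\nObservation: {TOOLS[tool](arg)}"
    return None

result = react_loop("What is 2 + 3?")
```

Note the `max_steps` cap: without it, a model that never emits `Final Answer:` would loop forever.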
> [!TIP]
> Next Steps: Now that you can prompt effectively, the next module, RAG (Retrieval-Augmented Generation), will teach you how to connect LLMs to your own private data.