Chain-of-Thought (CoT)
Chain-of-Thought prompting encourages the model to generate intermediate reasoning steps before arriving at a final answer.
Why it Works
LLMs generate tokens autoregressively, so each output token receives a fixed amount of computation. Asking the model to "think step by step" turns the output itself into a scratchpad: intermediate tokens record partial results that later tokens can build on, so the model does not have to commit to a conclusion in a single leap.
Implementation
You can trigger CoT by adding simple phrases like:
- "Let's think step by step."
- "Walk me through your reasoning process."
- "Analyze the problem before providing the solution."
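The trigger phrases above can be wired into a small helper that appends one of them to a user question before it is sent to a model. This is a minimal sketch; the names `COT_TRIGGERS` and `with_cot` are illustrative, not from any particular library.

```python
# Common chain-of-thought trigger phrases (from the list above).
COT_TRIGGERS = [
    "Let's think step by step.",
    "Walk me through your reasoning process.",
    "Analyze the problem before providing the solution.",
]

def with_cot(question: str, trigger: str = COT_TRIGGERS[0]) -> str:
    """Append a CoT trigger phrase to a question to elicit intermediate reasoning."""
    return f"{question}\n\n{trigger}"
```

The resulting string would then be passed to whatever model client you use, e.g. `with_cot("What is 17 * 24?")` produces the question followed by "Let's think step by step." on its own line.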
Step-Back Prompting
A variation in which you first ask the model to state the general principles or facts relevant to a problem, then have it solve the problem using those principles. For example, before answering "Why does ice float?", the model would first recall the relevant physics (density, buoyancy) and only then apply it.
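Step-back prompting is naturally a two-turn exchange: one prompt elicits the principles, and a second prompt grounds the solution in them. The sketch below only builds those two prompts; the function name `step_back_prompts` is illustrative, and sending them to a model is left to your client code.

```python
def step_back_prompts(question: str):
    """Build the two prompts used in step-back prompting.

    Returns the abstraction prompt (asks for general principles first)
    and a function that, given the model's stated principles, builds
    the follow-up prompt that solves the original question.
    """
    abstraction = (
        "Before solving anything, state the general principles or facts "
        f"relevant to this problem:\n{question}"
    )

    def grounded(principles: str) -> str:
        return (
            f"Principles:\n{principles}\n\n"
            f"Using only the principles above, now answer:\n{question}"
        )

    return abstraction, grounded
```

In use, you would send the abstraction prompt first, capture the model's reply as `principles`, and send `grounded(principles)` as the second turn.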