AI Prompting Techniques

How you structure and format your questions or requests to AI affects the quality of its response.

There are two different aspects of "prompting":

  1. The structure/format of a prompt, including natural language, standard Markdown, and raw delimiters.
  2. The method/technique used to instruct the AI model.

Of the two, the technique matters more: the "primary methods for prompting" generally refer to prompt engineering techniques, which guide the AI's reasoning and behavior.

Here is a breakdown of the primary techniques, followed by a discussion of formatting, pros/cons, and the preferred method.

Primary Prompting Techniques

These techniques focus on the content and logic you include in the prompt to elicit better, more specific, or more complex reasoning from the Large Language Model (LLM).

**Zero-Shot Prompting**

  • Description: Giving the model an instruction or question with no examples of the desired input/output; it relies solely on the model's pre-trained knowledge.
  • Pros: Fastest and simplest method. Good for general, low-complexity tasks.
  • Cons: Can result in generic, inaccurate, or poorly formatted responses for complex tasks.

**Few-Shot Prompting**

  • Description: Including a few examples of the desired input and output within the prompt itself to demonstrate the task, format, or style.
  • Pros: Significantly improves performance and consistency, especially for structured tasks or specific styles.
  • Cons: Requires careful crafting of examples, and the longer prompt increases cost and processing time (token usage).

**Chain-of-Thought (CoT) Prompting**

  • Description: Instructing the model to output its intermediate reasoning steps before giving the final answer (e.g., by adding "Let's think step-by-step").
  • Pros: Dramatically improves performance on complex reasoning, mathematical, and logical tasks. Increases transparency of the model's process.
  • Cons: Makes the output longer and requires more processing time and token usage than Zero-Shot.
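The three primary techniques differ only in how the prompt string is assembled. A minimal sketch (the question and worked examples are invented for illustration):

```python
# Sketch of the three primary techniques as plain prompt strings.
# The question and the few-shot examples below are invented for illustration.

QUESTION = ("A bat and a ball cost $1.10 together. The bat costs $1.00 "
            "more than the ball. How much does the ball cost?")

# Zero-shot: the instruction alone, no examples.
zero_shot = QUESTION

# Few-shot: a couple of worked input/output pairs before the real question.
few_shot = (
    "Q: 2 apples cost $0.50. How much does one apple cost?\n"
    "A: $0.25\n\n"
    "Q: 3 pens cost $1.50. How much does one pen cost?\n"
    "A: $0.50\n\n"
    f"Q: {QUESTION}\n"
    "A:"
)

# Chain-of-thought: explicitly ask for intermediate reasoning.
chain_of_thought = f"{QUESTION}\nLet's think step-by-step."

for name, prompt in [("zero-shot", zero_shot),
                     ("few-shot", few_shot),
                     ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ({len(prompt)} chars) ---")
    print(prompt)
```

Note how the few-shot prompt is several times longer than the zero-shot one, which is exactly the token-usage trade-off described above.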

Additional Prompting Techniques

These are often extensions or combinations of the primary methods:

  • **Zero-Shot CoT:** Simply appending "Let's think step-by-step" (or similar phrasing) to a Zero-Shot prompt.
  • **Self-Consistency:** Generating multiple Chain-of-Thought responses and then selecting the most frequent or "consistent" answer as the final output.
  • **Generated Knowledge:** Prompting the model to first generate relevant facts or background information, and then use that knowledge to answer the main question.
  • **Role-Based Prompting:** Instructing the model to adopt a specific persona (e.g., "Act as a senior software engineer...") to condition the tone, expertise, and focus of the response.
  • **Meta-Prompting:** Asking the model to critique, refine, or even generate the prompt itself to optimize performance for a specific task.
  • **Tree-of-Thought (ToT):** A more advanced form of CoT where the model explores multiple possible reasoning paths ("thoughts") in parallel, resembling a tree structure, for more complex problem-solving.
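Self-Consistency in particular is easy to sketch: sample several Chain-of-Thought completions and take a majority vote over the final answers. In this minimal sketch, `sample_cot_answer` is a hypothetical stand-in for a real model call, returning canned answers so the voting logic is visible:

```python
from collections import Counter

# Self-consistency sketch: sample several chain-of-thought completions,
# then keep the most frequent final answer.
# `sample_cot_answer` is a hypothetical stand-in for a real model call.

def sample_cot_answer(prompt: str, seed: int) -> str:
    # Pretend the model's sampled reasoning paths mostly converge on "$0.05".
    canned = ["$0.05", "$0.05", "$0.10", "$0.05", "$0.05"]
    return canned[seed % len(canned)]

def self_consistent_answer(prompt: str, n_samples: int = 5) -> str:
    answers = [sample_cot_answer(prompt, seed=i) for i in range(n_samples)]
    # Majority vote over the sampled final answers.
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer(
    "How much does the ball cost? Let's think step-by-step."))
```

With a real model, each sample would use a nonzero temperature so the reasoning paths actually differ; the vote then filters out the occasional wrong path.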

Prompt Formatting / Syntax

Here are the main approaches to formatting a prompt; clear structure helps the model distinguish between instructions, context, and data.

**Paragraph Form (Natural Language)**

  • Description: Simple, flowing text that reads like a natural request or question.
  • Pros: Most intuitive for users; fast to write.
  • Cons: Can be ambiguous, and the model may struggle to separate instructions from context.

**Standard Markdown (Structured)**

  • Description: Using Markdown elements like headings, bullet points, lists, and bolding to structure the request.
  • Pros: Very clear separation of components (Task, Context, Format). Visually clean and easy for the model to parse.
  • Cons: Requires slightly more effort than a simple paragraph.

**Raw Delimiters**

  • Description: Using special characters or delimiters (e.g., <<<...>>>, ---, or backticks ```) to enclose specific sections of the prompt, such as context or data.
  • Pros: The clearest, most unambiguous way to isolate sections of text, reducing model confusion.
  • Cons: Less human-readable in the prompt box; can feel more technical.
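The delimiter approach can be sketched as a small helper that wraps untrusted or free-form context in the `<<<...>>>` markers mentioned above (any unique marker works, as long as the instruction names it):

```python
# Sketch: using delimiters to separate instructions from the data they act on.
# The <<< >>> markers follow the convention described above; the function
# name and example text are invented for illustration.

def build_delimited_prompt(instruction: str, context: str) -> str:
    return (
        f"{instruction}\n\n"
        "Use only the text between <<< and >>> as context:\n"
        f"<<<\n{context}\n>>>"
    )

prompt = build_delimited_prompt(
    "Summarize the following meeting notes in two sentences.",
    "Alice proposed moving the launch to Q3. Bob agreed pending budget review.",
)
print(prompt)
```

Because the context is fenced off, the model is less likely to treat a sentence inside the notes as an instruction to follow.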

More on Markdown

How Markdown Works

Not all large language models (LLMs) can accommodate Markdown syntax in prompts, but it's an increasingly common and often preferred practice. The ability to process Markdown varies depending on the model and how it was trained.

Many modern LLMs, particularly those developed by major companies like Google and OpenAI, are trained on vast amounts of text from the internet, a significant portion of which is in markdown format (e.g., GitHub documentation, blog posts, and forums). This training process gives the models an implicit understanding of markdown syntax, allowing them to:

  • Parse Structured Input: The model can interpret formatting like headings (#), lists (*, -, 1.), bold (**), and italics (*) to understand the hierarchical structure and intent of the prompt.

  • Generate Formatted Output: Because they've seen markdown in their training data, they can also generate responses using the same syntax to provide clearer, more readable output.

Why is Markdown Recommended?

Using markdown in prompts is considered a best practice in prompt engineering for several key reasons:

  • Clarity and Accuracy: It enables the model to distinguish between different parts of the prompt, including instructions, context, and examples. This structured input reduces ambiguity, leading to more accurate and relevant responses.

  • Human and Machine Readability: Markdown is a lightweight, human-readable format that is also easy for a machine to parse. This makes prompts simpler to write, read, and edit for both users and the AI.

  • Enhanced Performance: Some studies and anecdotal evidence suggest that using a structured format, such as markdown, can improve a model's performance on certain tasks, especially those requiring reasoning or multi-step instructions.

The Preferred Method

The consensus among prompt engineers is that the Preferred Method is a combination of a powerful technique and clear formatting:

  • Technique: Chain-of-Thought (CoT) Prompting (especially the Zero-Shot CoT version, which simply appends "Let's think step-by-step") is often considered the most powerful universal technique for enhancing quality and reasoning, making it a strong preferred choice.
  • Formatting: Standard Markdown or delimited formatting is preferred to ensure the model clearly understands the different parts of the prompt (Role, Task, Constraints, Context).

Therefore, the "preferred method" involves: Clear Instructions + Context + Role-Playing + CoT + Structured Formatting.
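That combination can also be assembled programmatically. A minimal sketch (the helper name and all sample content are invented for illustration):

```python
# Sketch: assembling Role + Task + Constraints + a CoT cue into one
# Markdown-structured prompt. The function name and sample content
# are invented for illustration.

def build_prompt(role: str, task: str, constraints: list[str]) -> str:
    lines = [
        f"**Role:** {role}",
        "",
        f"**Task:** {task}",
        "",
        "**Constraints:**",
    ]
    # Numbered constraints, matching the Markdown list style.
    lines += [f"{i}. {c}" for i, c in enumerate(constraints, start=1)]
    # Zero-shot CoT cue appended last.
    lines += ["", "**Let's think step-by-step.**"]
    return "\n".join(lines)

print(build_prompt(
    role="You are a patient math tutor.",
    task="Explain why a fraction's value is unchanged when both parts double.",
    constraints=["Keep it under 200 words.", "Use one everyday analogy."],
))
```

Templating the prompt this way keeps the structure consistent across requests while only the content varies.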

Example of the Preferred Method


**Role:** You are a seasoned university professor specializing in behavioral economics.

**Task:** Explain the concept of "Loss Aversion" to a first-year undergraduate student.

**Constraints:**

  1. The explanation must be under 300 words.
  2. Use a simple, real-world analogy (not a monetary one).
  3. **Let's think step-by-step** to ensure all parts of the task are met.

**Format:** Provide the step-by-step reasoning first, followed by the final answer.