# Prompt Engineering Basics

> [!NOTE]
> Prompt Engineering is the practice of designing inputs for LLMs to produce optimal outputs. It is less about “trickery” and more about clear communication and constraint setting.

At its core, an LLM is a probabilistic engine that predicts the next token. A prompt is the initial set of tokens you provide to guide this prediction.

## 1. The Anatomy of a Prompt

A high-quality prompt is structured, not random. It typically contains four key components:

1. Role (Persona): Who the AI should be (e.g., “You are a Senior Python Engineer”).
2. Context: Background information relevant to the task.
3. Instruction: The specific task you want the AI to perform.
4. Constraints & Format: How the output should look (e.g., “JSON only”, “under 50 words”).
A concrete example combining all four:

```text
SYSTEM / ROLE:    "You are a helpful coding assistant specialized in Python."
CONTEXT:          "The user is a beginner. Explain concepts simply."
USER INSTRUCTION: "Write a function to calculate the Fibonacci sequence."
CONSTRAINTS:      "No recursion"
```

### Roles in API Calls

Modern LLM APIs (like OpenAI’s Chat Completions) explicitly separate these roles:

- System: Sets the behavior and persona. This is an “invisible” instruction that persists across the conversation.
- User: The actual input or question from the human.
- Assistant: The model’s response. You can also pre-fill this to provide examples (Few-Shot).
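As a sketch, a raw Chat Completions message list with a pre-filled assistant turn might look like this (the SQL persona and message contents here are made up for illustration):

```python
# Hypothetical message list: the pre-filled assistant turn acts as a
# one-shot example before the real question.
messages = [
    {"role": "system", "content": "You are a terse SQL expert."},     # persona (persists)
    {"role": "user", "content": "Count the rows in `users`."},        # example input
    {"role": "assistant", "content": "SELECT COUNT(*) FROM users;"},  # pre-filled example output
    {"role": "user", "content": "Count the rows in `orders`."},       # the actual question
]

print([m["role"] for m in messages])  # → ['system', 'user', 'assistant', 'user']
```

The model sees the example exchange as if it had produced it, and tends to imitate its style and format in the next turn.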

## 2. Key Parameters

Controlling how the model generates text is as important as the text itself.

### Temperature (0.0 - 2.0)

Controls randomness.

- Low (0.0 - 0.3): Near-deterministic. The model almost always picks the most likely next token (at 0.0 it effectively does so every time). Best for code, math, and factual answers.
- High (0.8 - 1.5): Creative. The model is more willing to pick the 2nd or 3rd most likely token, leading to diverse outputs. Best for poetry and brainstorming.
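Conceptually, temperature divides the model’s token scores (logits) before they are turned into probabilities. A minimal sketch with made-up logits for three candidate tokens:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert logits to probabilities after scaling by 1/temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                                # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 3.0, 1.0]  # hypothetical scores for three candidate tokens

cold = softmax_with_temperature(logits, 0.2)  # sharpens toward the top token
hot = softmax_with_temperature(logits, 1.5)   # flattens the distribution

print(cold[0] > hot[0])  # True: low temperature concentrates probability on the top token
```

At low temperature the top token absorbs nearly all the probability mass; at high temperature the runners-up get a real chance of being sampled.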

### Max Tokens

The hard limit on the output length. Use this to prevent the model from rambling or to cut costs. Note that hitting the limit truncates the response mid-sentence; it does not make the model summarize to fit.
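To see why a token cap cuts text off rather than shortening it gracefully, here is a toy sketch that treats whitespace-separated words as “tokens” (real APIs count model-specific subword tokens, so this is purely illustrative):

```python
def truncate_to_max_tokens(text, max_tokens):
    """Toy illustration: keep only the first `max_tokens` whitespace 'tokens'."""
    tokens = text.split()
    return " ".join(tokens[:max_tokens])

print(truncate_to_max_tokens("the quick brown fox jumps over the lazy dog", 4))
# → the quick brown fox
```

The sentence simply stops at the limit; nothing rephrases the remainder.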

## 3. Zero-Shot vs. Few-Shot Learning

LLMs are “in-context learners”. They can learn a task just by seeing examples in the prompt, without any parameter updates.

### Zero-Shot

You ask the model to perform the task directly, with no examples.

“Translate ‘Hello’ to Spanish.”

### Few-Shot

You provide examples of the task within the prompt. This drastically improves performance on complex or specific formatting tasks.

```text
Translate English to Spanish:
Dog → Perro
Cat → Gato
Hello →
```

```text
Zero-Shot:
  Instruction: "Classify sentiment:"
  Input:       "This movie was great!"
  Output:      ?

Few-Shot:
  Examples:    "Bad movie → Negative"
               "Good food → Positive"
  Input:       "This movie was great!"
  Output:      ?
```
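Few-shot prompts are often assembled programmatically from example pairs. A minimal sketch (the helper name is ours, not a library function):

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: instruction, example pairs, then the query."""
    lines = [instruction]
    lines += [f"{src} -> {tgt}" for src, tgt in examples]  # demonstrations
    lines.append(f"{query} ->")                            # leave the answer open
    return "\n".join(lines)

examples = [("Dog", "Perro"), ("Cat", "Gato")]
prompt = build_few_shot_prompt("Translate English to Spanish:", examples, "Hello")
print(prompt)
# → Translate English to Spanish:
#   Dog -> Perro
#   Cat -> Gato
#   Hello ->
```

The trailing `->` invites the model to complete the pattern rather than explain the task.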

> [!TIP]
> Even providing one example (One-Shot) is significantly better than Zero-Shot for adhering to specific JSON formats or coding styles.

## 4. Code Implementation

Here is how you structure prompts in code using popular client libraries.

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[
        # 1. System Role: Sets behavior
        {"role": "system", "content": "You are a poetic assistant."},
        # 2. Few-Shot Example (Optional)
        {"role": "user", "content": "Explain recursion."},
        {"role": "assistant", "content": "A mirror facing a mirror..."},
        # 3. Actual User Input
        {"role": "user", "content": "Explain loops."}
    ],
    temperature=0.7,  # Creativity
    max_tokens=100
)

print(response.choices[0].message.content)
```
```java
// Using LangChain4j
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.data.message.SystemMessage;

public class PromptExample {
    public static void main(String[] args) {
        OpenAiChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4-turbo")
                .temperature(0.7)
                .build();

        String response = model.generate(
                // 1. System Role
                SystemMessage.from("You are a poetic assistant."),
                // 2. User Input
                UserMessage.from("Explain loops.")
        ).content().text();

        System.out.println(response);
    }
}
```
```go
package main

import (
    "context"
    "fmt"
    "os"

    openai "github.com/sashabaranov/go-openai"
)

func main() {
    client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))

    resp, err := client.CreateChatCompletion(
        context.Background(),
        openai.ChatCompletionRequest{
            Model: openai.GPT4,
            Messages: []openai.ChatCompletionMessage{
                // 1. System Role
                {
                    Role:    openai.ChatMessageRoleSystem,
                    Content: "You are a poetic assistant.",
                },
                // 2. User Input
                {
                    Role:    openai.ChatMessageRoleUser,
                    Content: "Explain loops.",
                },
            },
            Temperature: 0.7,
        },
    )
    if err != nil {
        fmt.Printf("ChatCompletion error: %v\n", err)
        return
    }

    fmt.Println(resp.Choices[0].Message.Content)
}
```

## 5. Interactive Playground

Experiment with prompt structure and parameters below.