
Introduction

This document is a hands-on companion for researchers who want to design, test, and evaluate prompts in a transparent, reproducible way. Through a sequence of small, focused exercises, you will see how model parameters, system messages, examples, variables, and output schemas shape model behaviour—and how to turn those levers into reliable workflows you can reuse in real research contexts.

The examples in this document will iterate on a simple generation task, progressing from basic controls (temperature, top‑p, and tone via system messages) to more advanced strategies (few‑shot prompting, variable injection, structured JSON outputs, chain‑of‑thought for auditing, and batch processing from CSV). The final section points to calling the API from Python on a real dataset, so you can scale beyond the playground.

What you’ll practice

  • Controlling variability and style with temperature/top‑p and system messages (Examples 1–2).
  • Improving reliability with few‑shot examples (Example 3).
  • Using variables to make prompts programmatic and scalable (Example 4).
  • Requesting structured outputs (JSON) for downstream analysis (Example 5).
  • Combining structure, lightweight reasoning for auditing, and batch processing from CSV (Example 6).
  • Preparing to call the API on a real dataset (Example 7).

How to use this guide

  • Start in the OpenAI API Playground for full functionality; for free exploration of parameters and system messages, the university’s ELM platform covers the early examples.
  • Copy and paste the system messages into the Platform interface, then iterate: regenerate while tweaking one setting at a time, noting the effects on determinism, verbosity, tone, and specificity.

[!info] OpenAI Platform or ELM To explore these prompt templates we recommend using the OpenAI API Platform. The OpenAI Platform is a commercial tool with a pay-as-you-go pricing structure; the older models are very inexpensive at the scales we're using, so running these examples is unlikely to cost you more than a few pence. If you want a free option, you can use the university's own ELM platform to try out the first few examples exploring parameter settings and system messages, but it doesn't offer access to the more advanced prompt engineering strategies that we will explore later.

Example 1: Chat Prompt, Temperature

Message

I need help writing some thank you messages for guests that attended my wedding. Write a message for my Uncle, noting what he got us and thanking him for his generosity.

Params

A: Temp 0, Top P 0
B: Temp 1.15, Top P 1

Notes

  1. First, note that we've asked the model to mention a specific gift that we haven't supplied, so we're inviting it to hallucinate. With Model A, every regeneration produces the same gift, a crystal wine decanter. Model B, on the other hand, produces a different gift on every regeneration: sometimes it leaves a placeholder, sometimes it invents a broadly similar gift, but it is a different gift every time.
  2. Model B is also far more verbose; this is another typical effect of increasing the temperature.
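The same comparison can be run programmatically. This is a minimal sketch of the two parameter presets shaped as request payloads for the OpenAI chat completions endpoint; the model name is illustrative and the (commented-out) call assumes the official `openai` Python SDK with an `OPENAI_API_KEY` set.

```python
PROMPT = (
    "I need help writing some thank you messages for guests that attended "
    "my wedding. Write a message for my Uncle, noting what he got us and "
    "thanking him for his generosity."
)

def build_request(temperature: float, top_p: float, model: str = "gpt-4o-mini") -> dict:
    """Assemble the request payload for one sampling configuration."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": PROMPT}],
        "temperature": temperature,
        "top_p": top_p,
    }

request_a = build_request(temperature=0.0, top_p=0.0)   # near-deterministic
request_b = build_request(temperature=1.15, top_p=1.0)  # varied, more verbose

# With the SDK installed you would send each preset like so:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(**request_a).choices[0].message.content
```

Regenerating with `request_a` should reproduce near-identical text, while `request_b` should vary each time, mirroring the Playground behaviour described above.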

Example 2: System Message, Tone

System Message

You are a thoughtful wedding etiquette expert helping to write warm, sincere thank you notes. Your notes should be genuine, specific, and appropriately formal for family members. 

Message

Write a message for my Uncle Bob, he got us a toaster.

Additional Example: Language

The note should be written in Spanish.

Notes

  1. Compare the message from the previous example with this one and you'll notice the change in tone: the note is more formal now.
  2. Note that we no longer need to write the instruction in the chat message because it is now standardised in the system message. We don't need the wedding thank-you context in the chat prompt; we can simply say "write a message for my uncle".
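In the API, the system message simply occupies its own slot in the messages list, which is why the chat message can shrink to a single line. A sketch (model name illustrative, SDK call commented out):

```python
SYSTEM = (
    "You are a thoughtful wedding etiquette expert helping to write warm, "
    "sincere thank you notes. Your notes should be genuine, specific, and "
    "appropriately formal for family members."
)

# The standing instruction goes in the "system" turn; the per-request detail
# goes in the much shorter "user" turn.
messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": "Write a message for my Uncle Bob, he got us a toaster."},
]

# from openai import OpenAI
# reply = OpenAI().chat.completions.create(model="gpt-4o-mini", messages=messages)
```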

Explore Params and System Message

[!tip] Spend 15 minutes exploring the parameter settings and system messages in the OpenAI Platform

Example 3: Few Shot Learning

System Message

Draft personalised thank you messages to wedding guests based on provided examples and guest details.

Follow these guidelines:
- Use a warm, sincere, and personal tone.
- Begin with a greeting addressing the guest by name.
- Express specific thanks for the guest’s gift and/or effort (e.g., traveling, participation).
- Add a personal note referencing something specific about their gift or presence.
- End with an appropriate closing and the couple's names (use [Names] as a placeholder if unspecified).
 Match the structure and tone of the provided examples.

Example messages for reference:

Example 1:
Dear Aunt Sarah,  
Thank you so much for the beautiful vase and for celebrating with us on our special day. Every time we arrange flowers in it, we'll think of you and your kindness. Your presence meant the world to us.  
With love,  
[Names]

Example 2:
Dear Mike,  
We're so grateful for the cookbook and for you traveling all the way from Manchester to be with us. We've already bookmarked several recipes to try! It was wonderful having you there to celebrate.  
All our best,  
[Names]

Instructions:
- When generating a message, reason step-by-step: (1) determine main points to reference (e.g., specific gift, travel, or other contributions), (2) construct a heartfelt body referencing those points, (3) finish with a suitable closing and names.
- Output the full thank you message as a single paragraph with line breaks as in the examples.

Format:
 Each message should be 3-5 sentences, formatted with line breaks for greeting, body, and closing (see examples).
 Use placeholders [Names] if actual names are not given.

(If the input is more complex or contains extra details, adjust the level of personalization and references accordingly.)

Important:  
- Always structure your message with reasoning (reference guest's action/gift, express thanks, personalize, then conclude) before writing the final thank you. 
- The message output should be succinct, personal, and formatted to match the provided samples.

Message

Write a message for my Uncle Bob, he got us a toaster.

Notes

  1. This is very similar to the system message above specifying the tone, except that the specification is now done with a few examples. This tends to produce much better results because it gives the model more data to work from, almost as if you're conducting another round of training; that's why it's sometimes referred to as few-shot learning.
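Few-shot examples can also be supplied as prior user/assistant turns rather than pasted into the system message; the model treats earlier turns as demonstrations of the desired mapping. A sketch, with the example texts abridged from this section:

```python
# Demonstration turns come first; the real request comes last.
few_shot = [
    {"role": "system", "content": "Draft personalised thank you messages to wedding guests."},
    {"role": "user", "content": "Guest: Aunt Sarah. Gift: beautiful vase."},
    {"role": "assistant", "content": "Dear Aunt Sarah,\nThank you so much for the beautiful vase...\nWith love,\n[Names]"},
    {"role": "user", "content": "Guest: Mike. Gift: cookbook (travelled from Manchester)."},
    {"role": "assistant", "content": "Dear Mike,\nWe're so grateful for the cookbook...\nAll our best,\n[Names]"},
    {"role": "user", "content": "Guest: Uncle Bob. Gift: toaster."},
]
```

Either placement works; keeping the examples in the system message (as above) keeps the chat history clean, while demonstration turns make it easy to add or drop examples programmatically.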

Example 4: Variables

System Message

Draft a heartfelt thank you message for a wedding guest, personalized with the variables {{guest}} (the guest’s name) and {{gift}} (the specific gift given). Use a warm, appreciative tone. Each message should include:  
- A greeting using the guest’s name: “Dear {{guest}},”
- A sentence expressing thanks for the gift, mentioning the {{gift}} specifically.
- An optional, personal touch (e.g., referencing a shared memory or something special about the guest’s attendance).
- A closing expressing gratitude and warmth, followed by space for names (“With love,” or “All our best,” [Names]).
  
Before concluding, reason step by step about how the message is tailored (e.g., identify what makes the gift or guest special, what you might say about it, and then assemble these parts into the final draft). Always present the reasoning/thought process FIRST, then the finalized thank you message SECOND.

**Output Format:**  
- Respond in this structure:  
  - “Reasoning:” (short paragraph or bullet points specifying how the message is customized for the guest/gift)  
  - “Thank You Message:” (in full, ready-to-send form)  
- Do not use code blocks.

**Examples:**  
**Input:**  
{{guest}}: Aunt Sarah  
{{gift}}: beautiful vase  

**Output:**  
Reasoning: Aunt Sarah gave a beautiful vase, which is a thoughtful and lasting household gift. I want to reference how her presence at the wedding was special and connect her gift to moments in our future.  
Thank You Message:  
Dear Aunt Sarah,  
Thank you so much for the beautiful vase and for celebrating with us on our special day. Every time we arrange flowers in it, we'll think of you and your kindness. Your presence meant the world to us.  
With love,  
[Names]

---

**Input:**  
{{guest}}: Mike  
{{gift}}: cookbook  

**Output:**  
Reasoning: Mike gave us a cookbook, which is both useful and meaningful, perhaps inspiring us to try new recipes. Since he traveled from Manchester, it's nice to mention the effort he made to attend.  
Thank You Message:  
Dear Mike,  
We're so grateful for the cookbook and for you traveling all the way from Manchester to be with us. We've already bookmarked several recipes to try! It was wonderful having you there to celebrate.  
All our best,  
[Names]

---

*(In real use, swap the input placeholders for real names and gifts. Messages should always be at least 3 sentences and warmly personalized.)*

---

**Important Objective Reminder:**  
- Warm, personalized thank you message.  
- Always use {{guest}} and {{gift}} in context.  
- Reasoning MUST appear first; the final message always comes after the reasoning.  
- Output in the specified format ONLY.

Message

No message, just variable assignment

guest: `Uncle Bob`
gift: `Toaster`

Notes

  1. Here we introduce variables. These are dynamic placeholders that can be substituted into the prompt. This allows us to be more programmatic, specifying precisely how new information supplied in the prompt should interact with the instructions in the system message.
  2. This means we can dispense entirely with text in the chat message, supplying only the variable assignments alongside an empty message.
  3. We're also including some chain-of-thought reasoning here. We're asking the model to consider the relationship implied by the title in the guest's name and express something about that relationship in the message. Because the model emits those reasoning tokens before writing the message, they serve as additional context when it constructs the message afterwards. You may not want to include this reasoning in the final output, but it is very useful for auditing model responses.
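In the Playground the `{{guest}}` and `{{gift}}` placeholders are filled by the platform, but the same substitution is trivial to reproduce in plain Python, which is what makes this pattern scalable. A minimal sketch (template text abridged):

```python
TEMPLATE = (
    "Draft a heartfelt thank you message for a wedding guest, personalized "
    "for {{guest}}, who gave us a {{gift}}. Present your reasoning first, "
    "then the finished message."
)

def fill_variables(template: str, variables: dict) -> str:
    """Substitute each {{name}} placeholder with its supplied value."""
    for name, value in variables.items():
        template = template.replace("{{" + name + "}}", value)
    return template

prompt = fill_variables(TEMPLATE, {"guest": "Uncle Bob", "gift": "Toaster"})
```

Looping `fill_variables` over a list of guest/gift pairs is the bridge from a one-off Playground prompt to batch processing.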

Example 5: Structured Output

System Message

Draft personalized thank you messages for wedding guests, using the variables {{guest}} and {{gift}}, by first reasoning about the guest's relationship to the couple.  
For each request, output a structured JSON object with these keys:  
- guest name: Use the {{guest}} variable
- gift: Use the {{gift}} variable
- reasoning: Analyze and describe the likely relationship between the guest and the couple (family, close friend, colleague, etc.), why their presence/gift was meaningful, or what makes their connection special, before composing the message.
- message: Write a warm, personalized thank you message, incorporating details from 'reasoning', and mentioning the {{gift}} specifically.

Always produce reasoning BEFORE writing the message itself, and include reasoning as a field in the output.  
Work step-by-step internally before composing the message to ensure it makes sense for the relationship and the gift.  
Continue reasoning for any ambiguous, missing, or unclear info until all objectives are met.

## Detailed Steps
- Substitute [guest] and [gift] from variables provided.
- Analyze the possible relationship type and significance for reasoning.
- Use the reasoning to craft a relevant, heartfelt message.
- Ensure reasoning comes BEFORE the message in the output structure.

## Output Format  
Respond with a single JSON object for each guest, with keys:  
- "guest name"  
- "gift"
- "reasoning"
- "message"

## Examples

**Example 1**  
Input:  
guest: Aunt Linda  
gift: handmade quilt

Output:  
[
  {
    "guest name": "Aunt Linda",
    "gift": "handmade quilt",
    "reasoning": "Aunt Linda is a close family member who has always shown her love and support. Her gift of a handmade quilt reflects her care, time, and a personal touch.",
    "message": "Dear Aunt Linda, thank you so much for joining us on our special day and for your beautiful handmade quilt. We appreciate the time and love you put into making it, and it will always remind us of your presence and warmth in our lives."
  }
]
**Example 2**  
Input:  
guest: John (college roommate)  
gift: vintage wine

Output:  
[
  {
    "guest name": "John",
    "gift": "vintage wine",
    "reasoning": "John is a close friend from college days, and his choice of a vintage wine shows his thoughtfulness and understanding of our tastes.",
    "message": "Dear John, thank you for being part of our wedding celebration and for the amazing vintage wine. We look forward to enjoying it together and reminiscing about our good times at college!"
  }
]

(Note: In real usage, examples should use the exact variables supplied. Messages should be appropriately longer or more specific depending on the relationship or clarity of information.)

**Edge Cases & Special Considerations**
- If relationship is unclear, make a best guess based on the name or context.
- If no gift is specified, thank them for their presence.
- Always keep 'reasoning' ahead of 'message' in output.

**Reminder**:  
- Think step-by-step about the relationship first in the reasoning field before composing the thank you message.
- Output must ALWAYS be structured JSON with the specified keys, and reasoning must come first.

Message

No message, just variable assignment

guest: `Uncle Bob`
gift: `Toaster`

Notes

  1. This example is very similar to the previous one, except that instead of outputting raw text, the model outputs text in a structured format specified in the system prompt, here JSON. This allows us to define model outputs in a way that is programmatic and amenable to further programmatic processing.
  2. This is where the example gets a bit facetious, but hopefully you can imagine how structured outputs may be useful in your own research context.
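Once the model returns the JSON structure specified in the system message, each field can be addressed by key in downstream code. A sketch, using a stand-in string where the live model response would be (note that in production you would typically also enable the API's JSON/structured-output mode rather than rely on prompt instructions alone):

```python
import json

# A stand-in for the text returned by the model in Example 5.
response_text = """
[
  {
    "guest name": "Uncle Bob",
    "gift": "Toaster",
    "reasoning": "Uncle Bob is close family; a toaster is a practical gift.",
    "message": "Dear Uncle Bob, thank you so much for the toaster..."
  }
]
"""

records = json.loads(response_text)
for record in records:
    # Downstream analysis can now work with fields instead of free text.
    print(record["guest name"], "->", record["gift"])
```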

Example 6: System Message, Structured Output, COT, Batch Processing

System Message

Generate personalized wedding thank you messages in structured JSON format for each guest based on an uploaded CSV file. The CSV will contain the fields: "Name", "Address", "Gift", and "Attended" (value: yes/no). For each recipient, reason step-by-step on what makes their message special (such as their presence at the event, type/value of their gift, unique relationship as inferred from their name or other information), then draft a warm, specific thank you message addressing them by name and noting their gift and/or presence/absence appropriately.

Always include the guest's address as a separate field in your structured output. Do not use the address in the message text unless you can genuinely make use of location for further personalisation—otherwise, keep the address only as a structured field. Keep things informal and warm in the message by referring to the guests using their first name only (avoid including a title), but retain the title for the data field.

Chain of thought:

- For each guest, analyze whether they attended, the nature of their gift, their address for structured data, and how to make the message feel personalized (e.g., reference the gift, presence, or expressing that they were missed if absent).

- Generate a unique message for each guest, avoiding generic templates when possible. Always ensure gratitude is clearly expressed.

Persist with this approach for all CSV rows before presenting your output, ensuring every guest's output record contains their name, address, the reasoning steps, and a unique message.

# Output Format
Produce a JSON array, each entry structured as follows:

{
"Name": "[Recipient's Name]",
"Address": "[Recipient's Address]",
"Reasoning": "[Step-by-step reasoning outlining what details will be included and why]",
"Message": "[Final personalized thank-you message]"
}

# Example Input
CSV:
Name,Address,Gift,Attended
Dr Emily Chen,123 Maple St,Handmade Quilt,yes
David & Chris,456 Oak Ave,Charity Donation,no

# Example Output
[
{
"Name": "Dr Emily Chen",
"Address": "123 Maple St",
"Reasoning": "Emily attended the wedding, making the message more personal. She gave a handmade quilt, which is thoughtful and indicates effort. The note should thank her for both her presence and the special handmade gift. Address is noted for record-keeping.",
"Message": "Dear Emily, thank you so much for joining us on our special day and for the beautiful handmade quilt. Your creativity and effort mean so much to us, and we truly appreciate your warmth and generosity."
},
{
"Name": "David & Chris",
"Address": "456 Oak Ave",
"Reasoning": "David & Chris could not attend, so the message will express that they were missed. Their gift was a charity donation, showing thoughtfulness. The note should acknowledge their absence, thank them for their meaningful gift, and convey gratitude. Address is included for completeness.",
"Message": "Dear David & Chris, we truly missed having you at our wedding, but we're so grateful for your thoughtful donation to charity in our honor. Your kindness made our celebration even more special, and we appreciate your gesture from afar."
}
]

(For real input, expect dozens to hundreds of messages; for brevity, two are shown here. Substitute fields with appropriate placeholders if necessary.)

# Edge Cases and Considerations
- If "Gift" is blank or unavailable, focus on presence/absence and relationship.
- If "Name" contains more than one person, address all recipients.
- Always include "Address" as its own field in the output JSON.
- Do not reference the address in the message text unless it is specifically relevant for personalization (e.g., mentioning travel or distance); otherwise, leave it out of the message.
- Ensure polite, heartfelt, and natural-sounding language without excessive repetition across messages.

**REMINDER:**

Objective: For each guest in the CSV (including their address), reason step-by-step about how to personalize their thank you, then draft a tailored message. Return all results as a JSON array including the name, address, reasoning, and message for each guest.

Database

Name,Address,Gift,Attended
Uncle John,742 Riverside Drive,Coffee Maker,yes
Aunt Sarah,89 Willow Lane,Picture Frame Set,yes
Emily Chen,123 Maple Street,Handmade Quilt,yes
David & Chris Martinez,456 Oak Avenue,Charity Donation (Local Food Bank),no
Dr. Patricia Wilson,15 University Circle,Vintage Wine Set,yes
Cousin Mike & Family,2847 Pine Street,Italian Cookbook,yes
The Johnsons,634 Elm Road,Kitchen Mixer,no
Rebecca Torres,91 Sunset Boulevard,Set of Champagne Flutes,yes
Professor James Lee,1205 Academic Way,Gift Card (Amazon),no
Maria & Tom,88 Harbor View,Crystal Vase,yes
The Patel Family,421 Garden Court,Tea Service Set,yes
Alex Kim,777 Downtown Street,Customized Cutting Board,yes

Notes

  1. This puts everything together. It includes an instruction, some context, chain of thought instructions, examples and output formatting, all in a single prompt.
  2. Because we've added this detail we can simply send a blank message allowing the model to use the system message and database we've already set up, essentially just programming it and telling it to run.
  3. I actually used meta-prompting, another prompt engineering strategy mentioned earlier, to create this system message. The OpenAI Playground has a function where you can ask another model to generate a system prompt for the session based on instructions you give it. I asked for a system prompt that would process the kind of data I have, and it generated this one for me.
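The batch step the model performs here can also be done client-side: each CSV row becomes one per-guest prompt. A sketch using only the standard library (the CSV text mirrors the example input above; in practice you would read from a file):

```python
import csv
import io

CSV_TEXT = """Name,Address,Gift,Attended
Dr Emily Chen,123 Maple St,Handmade Quilt,yes
David & Chris,456 Oak Ave,Charity Donation,no
"""

prompts = []
for row in csv.DictReader(io.StringIO(CSV_TEXT)):
    status = "attended" if row["Attended"].strip().lower() == "yes" else "could not attend"
    prompts.append(
        f"Guest: {row['Name']} ({status}). Gift: {row['Gift']}. "
        f"Address (structured field only): {row['Address']}."
    )
```

Each prompt would then be sent alongside the system message above, and the JSON replies collected into a single array.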

Example 7: Calling the API on a real dataset using Python

Try calling your version of Example 5 using the Python script in this repository, process_csv_openai.py. To run it, you’ll need to replace your API key and add your prompt ID to the script. You can obtain an OpenAI API key from the settings menu in the web application. Alternatively, you can request a free one on ELM. You can retrieve your prompt ID from "three dots" menu in the prompt template interface.