AI literacy is fast becoming a foundational skill — not optional enrichment, not a future-of-work talking point, but a practical competency students need to develop alongside reading comprehension and mathematical reasoning. And yet most schools are still approaching it the way they approached typing in 1995: as a technical skill to bolt on, not a thinking skill to build.
Prompting is a thinking skill. Teaching it well requires showing students the structure of a good prompt, not just giving them examples to copy. Here’s a lesson plan that works.
The core problem with how prompting is currently taught
Most AI literacy materials give students a list of “good prompt tips” and some before/after examples. The student reads them, nods, and promptly forgets. Why? Because there’s no feedback loop. The student types a prompt, gets output, and has no systematic way to evaluate whether their prompt was the reason the output was good or bad.
PromptIQ Pro solves this by making the quality of a prompt visible in real time. When a student types “write me an essay,” they see a score of 12 and an element checklist showing exactly what’s missing. When they add role, context, and format, they watch the score climb to 78. The feedback is immediate, specific, and non-judgmental.
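How might a score like that be computed? PromptIQ Pro's internals aren't shown here, but a toy version of an element-based scorer is easy to sketch in Python. Everything below, element names aside, is an illustrative assumption, not the product's actual rubric:

```python
import re

# Each rubric element maps to a weight and a crude keyword heuristic.
# The weights and patterns are illustrative guesses, not the real rubric.
ELEMENTS = {
    "role":     (20, r"\byou are\b|\bact as\b"),
    "context":  (20, r"\bbecause\b|\bI am\b|\bI'm\b|\bfor a\b"),
    "task":     (20, r"\bexplain\b|\bwrite\b|\bsummarize\b|\bcompare\b|\blist\b"),
    "format":   (20, r"\bbullet\b|\bparagraph\b|\btable\b|\banalogy\b|\bsteps\b"),
    "audience": (20, r"\byear-old\b|\bstudent\b|\bbeginner\b|\bexpert\b|\bteacher\b"),
}

def score_prompt(prompt: str) -> tuple[int, dict[str, bool]]:
    """Return a 0-100 score and a per-element checklist for a prompt."""
    checklist = {
        name: bool(re.search(pattern, prompt, re.IGNORECASE))
        for name, (_, pattern) in ELEMENTS.items()
    }
    score = sum(weight for name, (weight, _) in ELEMENTS.items()
                if checklist[name])
    return score, checklist
```

A real scorer would need far finer-grained analysis than keyword matching, but the shape is the same: check for each element, weight what's present, surface what's missing.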
A 45-minute lesson plan
Part 1: The bad prompt experiment (10 minutes)
Start by having every student open Claude or ChatGPT and type the same weak prompt: “Explain photosynthesis.” Have them share their results. Despite identical prompts, outputs will vary — some are long, some are short, some are technical, some are simple. Ask the class: why are these different?
The answer: the AI was guessing. It didn’t know who was asking, at what level, in what format, or for what purpose. When instructions are ambiguous, models fill in the gaps — and they fill them in differently each time.
Part 2: Scoring the prompt (10 minutes)
Install PromptIQ Pro and have students type the same prompt in the input box. Walk them through the element checklist on a projected screen:
- ✗ Role — Who is the expert here?
- ✗ Context — Who is asking, and why?
- ✓ Task — “Explain” is a clear action verb
- ✗ Format — Paragraph? Bullet points? Analogy?
- ✗ Audience — A 10-year-old? A biology student? A teacher?
Score: 18. Ask students: what would make this better?
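Running the toy scorer sketched earlier on the same weak prompt reproduces the shape of that checklist (the exact number differs because the weights are made up):

```python
score, checklist = score_prompt("Explain photosynthesis.")
print(score)  # 20 with the toy weights; the real tool reports 18
for element, present in checklist.items():
    print(("✓" if present else "✗"), element)
# ✗ role, ✗ context, ✓ task, ✗ format, ✗ audience
```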
Part 3: The rebuild challenge (15 minutes)
Students work individually or in pairs to rewrite the prompt to score as high as possible. Give them 10 minutes to iterate, keeping the last five for sharing scores. The constraint: they cannot change the core request (explaining photosynthesis) — they can only add context, role, format, and audience.
Top scores from a typical class run between 82 and 94. Students who score highest have usually added a role (“you are a science teacher”), an audience (“for a 14-year-old with no prior biology knowledge”), and a format (“use a simple analogy and three bullet points”).
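Fed through the same toy scorer, a rewrite along those lines checks every box, though it also exposes how crude keyword matching is compared to a real tool:

```python
rebuilt = (
    "You are a science teacher. Explain photosynthesis for a 14-year-old "
    "with no prior biology knowledge, using a simple analogy and three "
    "bullet points."
)
score, _ = score_prompt(rebuilt)
print(score)  # 100 with the toy weights
# Note the heuristic's crudeness: "for a" satisfies the context pattern
# even though the phrase is really specifying the audience.
```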
Part 4: Compare the outputs (10 minutes)
Take the original prompt and the highest-scoring rewrite. Run both through the AI and compare outputs side by side. The difference is usually dramatic — and seeing it is more convincing than any amount of telling.
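If you want to script this comparison instead of pasting prompts by hand, a minimal sketch using the Anthropic Python SDK could look like the following (the model name is a placeholder; swap in whatever model your class uses, or the OpenAI SDK for ChatGPT):

```python
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

def run(prompt: str) -> str:
    """Send a single-turn prompt and return the text of the reply."""
    response = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder; use any current model
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

weak = "Explain photosynthesis."
strong = (
    "You are a science teacher. Explain photosynthesis for a 14-year-old "
    "with no prior biology knowledge, using a simple analogy and three "
    "bullet points."
)

for label, prompt in (("WEAK PROMPT", weak), ("STRONG PROMPT", strong)):
    print(f"--- {label} ---\n{run(prompt)}\n")
```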
What students learn
Beyond the mechanics of prompting, this lesson teaches something more fundamental: that clarity of thought precedes quality of output. A student who can write a strong prompt has, in the process of writing it, already done much of the thinking the task requires. They’ve identified their audience, chosen a format, specified a level of detail. The prompt isn’t a shortcut — it’s a thinking scaffold.
That’s a lesson worth teaching, whatever AI looks like in five years.