If you’ve spent any time trying to improve your AI prompts, you’ve probably encountered two names: COSTAR and RISEN. Both are legitimate frameworks. Both produce better outputs than a bare-bones prompt. But they’re not interchangeable — and picking the wrong one for your task will leave you frustrated.

Here’s the short version: use COSTAR for creative, open-ended tasks; use RISEN for technical, step-by-step tasks. The rest of this article explains why.

What is COSTAR?

COSTAR stands for Context, Objective, Style, Tone, Audience, Response. It was designed to give an AI model rich situational awareness before it starts generating. Think of it as writing a brief for a creative professional — you’re not just telling them what to do, you’re telling them who it’s for, how it should feel, and what form the output should take.

A full COSTAR prompt for a blog post might look like:

Context: I run a SaaS company targeting HR teams at mid-sized companies.
Objective: Write a 600-word blog post about employee onboarding best practices.
Style: Conversational but professional, similar to Harvard Business Review.
Tone: Warm, practical, confidence-inspiring.
Audience: HR managers with 5–15 years of experience.
Response: A structured article with 3 subheadings and a short conclusion.

When you run this through PromptIQ Pro’s COSTAR scorer, each of those six elements is detected and weighted. A prompt that nails all six reliably scores in the A range.
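PromptIQ Pro’s scoring internals aren’t public, so treat the following as a minimal sketch of the general idea: detect which labelled elements are present, then combine per-element weights into a grade. The weights and grade cutoffs below are invented for illustration — they are not PromptIQ Pro’s actual values:

# Illustrative sketch only: weights and cutoffs are guesses,
# not PromptIQ Pro's real scoring model.
COSTAR_ELEMENTS = {
    "context": 0.20,
    "objective": 0.25,
    "style": 0.15,
    "tone": 0.10,
    "audience": 0.15,
    "response": 0.15,
}

def score_costar(prompt: str) -> tuple[float, str]:
    """Return (score, grade) based on which labelled elements appear."""
    text = prompt.lower()
    score = sum(weight for label, weight in COSTAR_ELEMENTS.items()
                if f"{label}:" in text)
    grade = "A" if score >= 0.9 else "B" if score >= 0.7 else "C"
    return score, grade

Run the example prompt above through this sketch and all six labels are found, so it lands the full 1.0 and an A — which is the behaviour the real scorer is described as having.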

What is RISEN?

RISEN stands for Role, Instructions, Steps, End goal, Narrowing. Where COSTAR focuses on context and output quality, RISEN focuses on process. It’s built for technical tasks where the sequence of actions matters — debugging code, writing a data pipeline, walking through an analysis methodology.

A RISEN prompt for a coding task looks very different:

Role: You are a senior Python engineer specialising in data pipelines.
Instructions: Refactor this function to handle missing values gracefully.
Steps: First identify all places a null could enter. Then add appropriate guards. Finally add a unit test.
End goal: Production-ready code with no silent failures.
Narrowing: Use only the standard library — no pandas or numpy.

Notice how the Steps element pushes the model into working sequentially. On technical tasks this tends to reduce errors and hallucinated detail, because each step builds on the output of the one before it instead of letting the model jump straight to a final answer.
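To make that concrete, here is roughly what a model following the prompt above might return. The original function isn’t shown in the prompt, so the function name mean_of and its behaviour here are hypothetical — the point is the shape of the output the RISEN prompt asks for: explicit null guards, a unit test, and standard library only:

import unittest

def mean_of(values):
    """Average a list of numbers, skipping None entries.

    Guards: a None list, an empty list, and an all-None list
    all raise ValueError rather than failing silently.
    """
    if values is None:
        raise ValueError("values must not be None")
    cleaned = [v for v in values if v is not None]
    if not cleaned:
        raise ValueError("no non-null values to average")
    return sum(cleaned) / len(cleaned)

class TestMeanOf(unittest.TestCase):
    def test_skips_nulls(self):
        self.assertEqual(mean_of([1, None, 3]), 2)

    def test_all_null_raises(self):
        with self.assertRaises(ValueError):
            mean_of([None, None])

if __name__ == "__main__":
    unittest.main()

Each of the three Steps is visible in the result: the null entry points are identified, guarded, and tested — which is exactly the checkable sequence the framework is designed to produce.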

The simple rule

Ask yourself: does my task require creative judgment or procedural accuracy?

  • Creative judgment (writing, analysis, brainstorming, summarising) → COSTAR
  • Procedural accuracy (coding, debugging, data work, step-by-step instructions) → RISEN

When you’re unsure, COSTAR is the safer default. It handles ambiguity better because it front-loads context rather than prescribing a process the model might follow too literally.

What PromptIQ Pro does automatically

PromptIQ Pro’s auto-detection reads your prompt for signals before you even choose a framework. Code keywords, import statements, numbered steps, and technical terminology push it toward RISEN. Tone, style, and audience signals push it toward COSTAR. You can always override this manually — but for most users, the auto-detection picks correctly more than 80% of the time.
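Under the hood, that kind of detection can be as simple as weighted keyword counting. The signal lists below are illustrative guesses, not PromptIQ Pro’s actual rules; note that ties fall back to COSTAR, mirroring the safer-default advice above:

import re

# Signal patterns are illustrative guesses, not PromptIQ Pro's real rules.
RISEN_SIGNALS = [r"\bimport\b", r"\bdef\b", r"\bfunction\b",
                 r"\bdebug\w*\b", r"^\s*\d+\.", r"\brefactor\b",
                 r"\bpipeline\b"]
COSTAR_SIGNALS = [r"\btone\b", r"\bstyle\b", r"\baudience\b",
                  r"\bblog\b", r"\bconversational\b", r"\bbrand\b"]

def detect_framework(prompt: str) -> str:
    """Count signal hits per framework; ties go to COSTAR, the safer default."""
    flags = re.IGNORECASE | re.MULTILINE
    risen = sum(len(re.findall(p, prompt, flags)) for p in RISEN_SIGNALS)
    costar = sum(len(re.findall(p, prompt, flags)) for p in COSTAR_SIGNALS)
    return "RISEN" if risen > costar else "COSTAR"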

The element checklist then tells you exactly which pieces are missing, so you can fill them in before you hit send. No guessing. No wasted tokens.