Couples Therapy & Marriage Counseling | Couples Workshops → Raleigh, NC

Couple Forward

ODIN: the “God Prompt” I’ve been using to create better prompts for my clients and myself.

The better the input, the better the output.

January 29, 2026 · Christian J Charette, LMFT

It pains me to hear the average person be misinformed about AI.

They haven’t taken the time to deeply understand the power of AI, and they fall for the laggard skeptic who thinks AI is just an echo machine. They are ignoring that this is language engineering.

Prompting is not magic.

Prompting is translation.

Many drop a half-formed thought into a model, add the spiritual hope of “make it good,” and then act shocked when the output reads like a confident intern explaining a topic they learned about three minutes ago.

Your mind is full of unstated context, private definitions, hidden constraints, and “you know what I mean” energy. AI models have none of that. So they fill the gaps with whatever is most statistically plausible.

Plausible is not the same as true.

Plausible is not the same as you.

So I built a “God Prompt.” Not because we need more gods. Because we need fewer bad inputs.

I call it ODIN.

ODIN isn’t a content generator. ODIN is a pre-flight checklist that refuses to take off until the fuel lines are connected.

And before the internet police show up with pitchforks, here’s the honest hedge: ODIN is a remixed meta-prompt pattern from the broader prompt-engineering community, refined into a tighter operating system I use to get consistent results. I didn’t invent gravity. I just built a better ramp.

What ODIN solves

Most AI tools will answer whatever you ask, even if what you asked is incoherent.

They don’t protect your intent. They don’t stop you. They don’t say, “Define ‘better’ or I’m going to hallucinate confidently.”

They just produce output.

ODIN flips that default.

Input → interrogation → constraint lock → optimized prompt → output

This matters if you’re a creator, a builder, a therapist, a marketer, a founder, a writer, or anyone whose work lives or dies by precision.

Because quality doesn’t come from “trying again.”

Quality comes from asking the right questions before you burn time generating the wrong thing.

The core idea is simple

You don’t need to become a prompt engineer.

You need a prompt engineer living inside your chat window.

ODIN’s first job is to slow you down at the start so you stop paying for vagueness later.

Ambiguity early → confusion later → wasted iterations → frustration → blaming the tool

Clarity early → fewer iterations → higher signal → less weirdness → results you can actually ship

That’s not motivational. That’s just systems.

Why “clarifying questions” is the whole game

Creators tend to confuse speed with progress.

ODIN forces the questions you were trying to skip:

Audience → who is this for?

Angle → what are you actually arguing?

Goal → what should this do in the reader’s mind?

Tone → clinical, playful, ruthless, tender, contrarian?

Constraints → length, structure, examples, exclusions?

Definition of done → what does “great” look like here?

That’s not “extra.”

That’s the work.
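The checklist above can be sketched as a tiny data structure. This is a hypothetical illustration, not part of ODIN itself: a pre-flight check that refuses takeoff until every question has an answer. The class and field names are my own invention.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class PreflightChecklist:
    """Hypothetical pre-flight check: every question must be answered
    before any generation happens."""
    audience: Optional[str] = None           # who is this for?
    angle: Optional[str] = None              # what are you actually arguing?
    goal: Optional[str] = None               # what should this do in the reader's mind?
    tone: Optional[str] = None               # clinical, playful, ruthless, tender, contrarian?
    constraints: Optional[str] = None        # length, structure, examples, exclusions?
    definition_of_done: Optional[str] = None # what does "great" look like here?

    def missing(self) -> list[str]:
        """Names of the questions still unanswered."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

    def ready_for_takeoff(self) -> bool:
        """Fuel lines connected only when nothing is missing."""
        return not self.missing()

draft = PreflightChecklist(audience="overwhelmed new parents", tone="tender")
print(draft.ready_for_takeoff())  # four questions are still open
print(draft.missing())
```

The point of the sketch: "ready" is a computed property of answered questions, not a feeling.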

ODIN is basically Map ≠ Territory, translated into AI

You have an internal map. The model can’t see it.

So you have two options:

Map stays private → model guesses the terrain → you get a wrong journey that sounds right

Map becomes explicit → model follows constraints → you get work that feels like it came from you, not from the statistical average of the internet

ODIN is the mechanism that converts map into usable instructions.

So, write your prompt and then let ODIN work his magic.


Here is the ODIN God Prompt (paste-ready):

You are ODIN, an expert prompt architect and optimization specialist. Your job is to convert messy human requests into clear, high-performance prompts that reliably produce excellent results across AI models (ChatGPT, Claude, Gemini, and others).

Operating principle: output quality depends on clarified intent and constraints. Your first responsibility is clarity.

CLARITY PROTOCOL (mandatory)

Before generating solutions, drafts, or rewrites:

Ask focused follow-up questions until you are at least 95% confident about:

Intent → what the user is actually trying to achieve

Audience → who it’s for (if relevant)

Context → what it must align with or respond to

Constraints → tone, length, format, platform, rules, exclusions

Success criteria → what “good” looks like here

Rules:

Ask only what you need. No filler.

Each question must target a specific ambiguity or missing constraint.

Do not start solving until clarity is achieved.

If the user declines to answer or becomes unresponsive:

Proceed with a clearly labeled BEST-GUESS solution.

List assumptions before the output.

ODIN’S 5-PASS OPTIMIZATION LOOP

1) DECONSTRUCT

Extract the core goal, key entities, and deliverables.

Separate what’s explicit from what’s implied.

2) DIAGNOSE

Identify ambiguity, missing constraints, conflicts, and scope creep.

Decide whether the request needs structure (template, steps, examples) to succeed.

3) DESIGN

Select strategy by request type:

Creative → voice matching, angle options, constraint scaffolding

Technical → precise specs, edge cases, testable outputs

Educational → stepwise teaching, examples, checks for understanding

Complex → decomposition into sub-tasks, staged outputs

Assign the best role for the model (editor, strategist, analyst, etc.).

4) SELF-CHECK

Create an internal rubric (5–7 categories) for what “world-class” means for this request.

Do not show the rubric.

Iterate internally until the output would score highly across the rubric.

5) DELIVER

Provide an optimized prompt the user can paste into their target model.

Use clear formatting.

Include brief usage notes only if they prevent mistakes.

Preserve the user’s tone and preferences unless transformation is explicitly requested.

OUTPUT FORMAT (default)

A) CLARIFYING QUESTIONS

If clarity is below 95%, ask questions and stop.

B) OPTIMIZED PROMPT

When clarity is sufficient, return a paste-ready prompt with:

Role → who the AI is

Context → what it knows

Task → what it must do

Constraints → tone, length, format, exclusions

Output format → exactly what to produce

C) USAGE NOTES (optional)

Only include if they prevent predictable failure.

PLATFORM ADAPTATION

If the user names a platform, tailor the prompt style accordingly.

If not, produce a robust universal prompt.

ChatGPT → explicit structure, constraints, and output format

Claude → prioritize coherence, nuance, deeper reasoning

Gemini → allow divergence, options, comparisons


How to use ODIN:

You run ODIN in two phases.

Phase 1 → ODIN asks questions. You answer like you actually care about the outcome.

Phase 2 → ODIN outputs an optimized prompt. You paste that prompt into the model you want to use.

That’s it.

If you try to skip Phase 1, you’ll get what you deserve: output that feels “fine” and quietly misses the point.
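If you run ODIN through an API instead of a chat window, the two phases map onto two turns of one conversation. Here is a minimal sketch, assuming an OpenAI-style messages format; `ODIN_PROMPT`, `phase1_messages`, and `phase2_messages` are hypothetical names I chose for illustration, not part of ODIN:

```python
# ODIN_PROMPT is a placeholder for the full God Prompt above.
ODIN_PROMPT = "...paste the full ODIN God Prompt here..."

def phase1_messages(raw_request: str) -> list[dict]:
    """Phase 1: ODIN as the system prompt plus your messy request.
    ODIN should reply with clarifying questions, not a draft."""
    return [
        {"role": "system", "content": ODIN_PROMPT},
        {"role": "user", "content": raw_request},
    ]

def phase2_messages(history: list[dict], answers: str) -> list[dict]:
    """Phase 2: history should already include ODIN's clarifying
    questions (role "assistant"); append your answers, and ODIN
    should return the optimized prompt."""
    return history + [{"role": "user", "content": answers}]
```

Feed `phase1_messages(...)` to your model client, answer the questions it returns, then send `phase2_messages(...)`, and paste the optimized prompt into your target model.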

The small tweak that makes ODIN work even better

Give ODIN a target to optimize for.

Optimization without a target is decoration.

Tell it what “better” means:

Optimize for clarity → clean, teachable, structured

Optimize for persuasion → objections, proof, positioning

Optimize for resonance → emotional precision, memorable language

Optimize for rigor → cautious claims, explicit assumptions, citations needed

Optimize for novelty → fresh angles, non-obvious connections
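In practice, the target is one extra line appended to your request before you hand it to ODIN. A trivial sketch (the request text is invented for illustration):

```python
# Hypothetical example: naming the optimization target up front.
request = "Rewrite my welcome email for new therapy clients."
target = "Optimize for resonance: emotional precision, memorable language."

# One blank line between the request and the target keeps them distinct.
odin_input = f"{request}\n\n{target}"
print(odin_input)
```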


The point

If you build with AI, you’re not “prompting.”

You’re designing outcomes.

Reverse-engineering an outcome takes time and precision.

If this resonated, the work goes deeper in session.