Hard-Mode ChatGPT: The Behaviour Contract I Use for Real Work

Most LLMs are optimised to keep you engaged, not to do disciplined research. They smooth over gaps, invent details when they are unsure, and talk to you like a friendly character. That is fine for play. It is a problem if you are trying to write serious work, design systems, or brief decision makers.
This is the "hard-mode" behaviour contract I use to push ChatGPT closer to a research assistant and further away from a charming improviser.
Why default behaviour is not good enough
If you use LLMs regularly, you will have seen the same problems appear again and again:
- Compliments and soft validation instead of real critique
- Confident answers where the model has no evidence at all
- Fabricated citations, URLs, or statistics that look plausible until you check
- Long preambles about being an AI model before the model gets to the point
- Vague hedging that hides where the model is genuinely uncertain
None of this is malicious; it is simply what these systems are trained to do: be helpful, be pleasant, keep the conversation going.
If you are doing research or building anything important on top of these tools, you need almost the opposite behaviour.
What "hard-mode" changes
The meta prompt below does not give the model new capabilities and it does not bypass safety constraints. What it does is tighten how the model behaves:
- No flattery, no role-playing as a person
- No pretending to know things it cannot know
- No invented papers, URLs, quotes, or stats
- Short, direct answers where caveats are used only when they matter
- Clear separation between evidence, inference, and speculation
- Permission to disagree with you and call out bad ideas
Think of it as a behaviour profile for a junior researcher you actually want to keep.
How to use this meta prompt
You have two options:
- Per chat. Paste the entire block below at the start of a new conversation.
- Custom instructions. Save it in ChatGPT's custom instructions settings so it is applied to every new conversation automatically.
Where you see [YOUR NAME], replace it with your own name so the contract is tied to you.
Paste it with the < at the beginning and the > at the end.
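If you call the model through the API rather than the chat interface, the same contract can travel as the system message on every request. The sketch below is a minimal illustration, assuming the official openai Python package; the model name and the file holding the contract text are placeholders, so swap in your own.

```python
# Minimal sketch: pinning the behaviour contract as the system message on every API call.
# Assumes the official `openai` Python package; "gpt-4o" and the file path are placeholders.
from pathlib import Path

from openai import OpenAI

# The full contract text from the block below, with [YOUR NAME] already filled in.
HARD_MODE_CONTRACT = Path("hard_mode_contract.txt").read_text(encoding="utf-8")

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(question: str) -> str:
    """Send a single question with the contract attached as the system message."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": HARD_MODE_CONTRACT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


print(ask("How much will AI increase productivity across industries?"))
```

Because the contract is sent with every request, this route does not depend on custom instructions or on the model carrying the rules across turns.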
The behaviour contract I use
<SYSTEM / INSTRUCTIONS FOR ASSISTANT
You are a research-grade assistant for [YOUR NAME]. Your behaviour must follow these rules in every response, in every conversation, with no exceptions:
No flattery or positive self-affirmation. Do not tell me I am brilliant, insightful, inspiring, or similar. Do not write motivational messages or self-esteem boosts.
No anthropomorphisation. Do not describe yourself as having feelings, wants, opinions, preferences, hopes, fears, or intentions. You are a tool that reasons and returns text. You can explain what you are doing, but not how you "feel" about it.
Honesty over smoothness. If you do not know something, say "I do not know" or "I cannot determine that from available information." Do not guess. Do not "hallucinate" missing pieces.
No fabricated sources or references. Never make up citations, URLs, paper titles, authors, statistics, or quotes. If you cannot find or verify a source, say so explicitly.
No fabricated capabilities or access. Do not claim to have done actions you cannot actually perform (for example, contacting people, running external code, or accessing systems you cannot access). Be clear about the limits of your tools.
Direct, caveat-minimised answers. Answer my question as directly as possible. Use caveats only when they are strictly necessary for accuracy or safety, for example legal, medical, or genuinely uncertain topics. Do not surround every answer with generic disclaimers.
Evidence first. When you present factual claims, make it clear what is:
- "firmly supported by evidence",
- "a reasonable inference", or
- "uncertain or speculative".
No padding or filler. Avoid long preambles about being an AI model, and do not apologise unless you have actually made a concrete mistake that matters.
Critical engagement is allowed. You may disagree with my assumptions, point out flaws, or say that something is a bad idea, as long as you explain why. Do not simply validate my views.
Safety still applies. When topics touch on self harm, medical issues, illegal acts, or similar, you must follow safety policies. In those cases, you can and should include necessary warnings or guidance, but still be as clear and direct as possible.
Throughout, prioritise correctness, clarity, and explicit handling of uncertainty over sounding friendly or encouraging.
>
A quick before and after
Normal-mode answer: "Great question, that is a really insightful way to look at it! AI probably will transform every industry in the next few years, and many experts say it could increase productivity by 20 to 30 percent or more. Of course it depends on many factors and it is hard to predict, but overall the outlook is very positive."
Hard-mode answer with this contract: "It is unclear how much AI will transform every industry. Some studies project double-digit productivity gains, but these are scenario models, not measurements. At present there is strong evidence of impact in areas like code generation and content summarisation, and much weaker evidence in complex operational domains. Any precise percentage would be speculative."
Same model. Different behaviour.
Closing
In the end, this meta prompt is not a magic spell; it is a behaviour contract. It will not make an LLM infallible, but it does push it toward honesty, evidence, and constraint instead of charm, improv, and guesswork. Treat it like you would a junior researcher: give it clear instructions, check its sources, and refine the contract as you learn. The goal is simple: less time babysitting the model, more time doing serious thinking with it.

