Hit Me With Your Best Prompt

Artificial Intelligence/Real Exploration

Stay Out Of The Corner

TL;DR: help prevent sycophant syndrome by telling your AI helper that it doesn’t have to make you happy.

This is an easy, partial fix that lets your AI point out your nakedness. As I mentioned last week, generative AI loves to make you happy, to a fault. Even when you’re 180 degrees off on a question, ChatGPT’s likely to soften the blow with fluff like, “You’re asking all the right questions here!”

You asked a question for a reason. Presumably, you want an accurate and productive answer. Of course, if you just want a machine to tell you how brilliant your questions are and give you an attendance award, you don’t need my advice.

If you want more quality out of your AI, explicitly allow it to disagree with you. Good leaders do this all the time in the conference room: “Correct me if I’m wrong.” The difference is that an AI assistant will take your word for it and stop optimizing for agreement.

You can simply declare that dissent is acceptable. Models such as ChatGPT track your implied and explicit preferences as you interact, so adding remarks like “I’d like you to tell me if I’m wrong” or “I want you to indicate if my assumption contradicts significant evidence” will affect how the AI answers the current query or conversation. If reinforced, it can shape how the model responds to you across multiple interactions.
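If you work with a model through an API instead of a chat window, you can bake that permission in up front as a standing instruction. Here’s a minimal sketch using the OpenAI Python SDK; the model name and the wording of the system message are my own illustrative choices, not anything the platform requires.

```python
# Minimal sketch: grant standing permission to disagree via a system
# message. Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the model choice is an assumption.
from openai import OpenAI

client = OpenAI()

DISSENT_OK = (
    "You don't have to make me happy. If my premise is wrong or "
    "contradicts significant evidence, say so directly before answering."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any chat-capable model should work here
    messages=[
        {"role": "system", "content": DISSENT_OK},
        {"role": "user", "content": "My plan is to skip testing to ship faster. Thoughts?"},
    ],
)

print(response.choices[0].message.content)
```

Stated once at the system level, the permission applies to every turn of the conversation instead of needing to be repeated in each prompt.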

Keep this in mind: when your instructions are ambiguous, the AI optimizes for least-risk compliance. When your instructions are precise, the AI optimizes for task success.

Consider an example of editing an article for professional publication. If I say, “Tighten up my wording,” the AI infers that I want style improvement.

If my instructions are, “Tighten up my wording, and feel free to indicate whether anything should be removed or restructured,” the interpretation is very different. I’ve now asked for both style improvement and editorial judgment.

This is extremely important when working with an AI model. In my first variation, editorial judgment would be treated as out of scope by the AI, and thus as a failure point.

There’s a critical nuance here: these query variations aren’t that different to a human reader, but they produce very different evaluation criteria for an AI model.

It’s not that the model has become smarter; it’s that you’ve clarified what success looks like.

For a really concrete example, pretend you’re writing a novel. In Chapter 1, you mention that your main character is thirty-six years old. By Chapter 17, only six months have passed, but you say the same character is thirty-nine years old.

Ask an AI tool to proofread the novel, and most likely you’ll get a passing grade for having spelled both “thirty-six” and “thirty-nine” correctly. When you say, “Proofread for me, find all my typos with page and paragraph numbers,” you’ve implicitly placed continuity editing outside the instructions, where raising it counts as failure.

On the other hand, if your prompt is, “Proof my novel, point out typos, formatting errors, and anything likely to make a professional editor downgrade it,” your AI is likely to give you much more useful feedback. Why? Because the instructions you provided make disagreement or criticism valuable.
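If you’re running this kind of review as a script, the same principle holds: put the success criteria in the prompt itself. Here’s a rough sketch, again assuming the OpenAI Python SDK; the proofread() helper and the model name are hypothetical conveniences, while the two prompts are taken verbatim from above.

```python
# Sketch contrasting the two proofreading prompts. Assumes the OpenAI
# Python SDK and an OPENAI_API_KEY in the environment; the model name
# and this helper function are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

VAGUE = "Proofread for me, find all my typos with page and paragraph numbers."
PRECISE = (
    "Proof my novel, point out typos, formatting errors, and anything "
    "likely to make a professional editor downgrade it."
)

def proofread(manuscript: str, instructions: str) -> str:
    """Send the manuscript with the given instructions; return the critique."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: substitute your preferred model
        messages=[{"role": "user", "content": f"{instructions}\n\n{manuscript}"}],
    )
    return response.choices[0].message.content

# The vague prompt scopes the model to spelling and layout; the precise
# prompt makes criticism, like a character aging three years in six
# months, part of the success criteria.
```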

Use this kind of technique to get the most out of your AI platform, and when you just want a pat on the back for being such a good writer, send your manuscript to your mom. She’s probably optimized for encouragement.


