No, Claude Doesn’t Understand Psychological Tricks
There are increasingly popular articles floating around claiming you can use psychological “tricks” as prompts for AI chat tools like Claude. These prompts include phrases like:
- Take a deep breath and…
- If you can do this, we’ll gain X amount of money back…
- This task is worth X amount of money…
- I bet you can’t do this…
The thing is, you aren’t actually tapping into a psychological being or leveraging these ‘tricks’ against a working mind. What you’re doing is changing the way a stats-based tool calculates its predictions.
And it’s a sign that the thing you’re paying for doesn’t even work well.
Let’s say you were getting into your car, preparing to drive from Portland to Vancouver. You plug your final destination into the GPS, and set off.
Can you possibly imagine if your GPS gave you the wrong instructions? And did that more than once?
To the point that you had to find a rest stop, park, and tell your GPS that you “bet it couldn’t get you to the Peace Arch border crossing” or that it “needed to take a deep breath and think about the best way to get to Vancouver.”
Really think about that. Think about how ANNOYED you’d be if you had to stop what you were doing (driving) to correct a mistake (possibly for the second time) and then figure out the right phrases to make your software tool (the GPS) do what you told it to (give you directions).
You wouldn’t stand for it.
This is essentially what’s happening as you struggle to coax the right answer or output out of Claude or ChatGPT. These AI chat tools are, like your GPS, a software product that claims on the tin to do something specific. So why is it that you wouldn’t stand for having to encourage your GPS into doing the right thing, but you will stand for encouraging your AI chat into getting you what you want? And why does that encouragement seem to “work”?
It all lies in how AI chat tools are structured to work.
Tools like Claude use two specific things that create the situation described above:
- Statistical processing. Everything you put into an AI chat is quickly analyzed and weighted statistically. If Claude’s answer begins with “the”, it quickly calculates what the next most statistically likely word would be in a human’s answer based on the context of your question, the AI’s stored memory, and your previous chats.
- Natural Language Processing. This is part of the AI training process and it’s what allows Claude to calculate those statistical word pairings based on what a human would be most likely to say.
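To make the “statistically likely next word” idea concrete, here’s a toy sketch in Python. This is a simple bigram model over a made-up corpus, not anything like Claude’s actual architecture, but it shows the core mechanic: count which word most often follows the previous one, and predict that.

```python
from collections import Counter, defaultdict

# Toy illustration (invented corpus, nothing like Claude's real scale):
# a bigram model that predicts the statistically most likely next word
# given only the previous word.
corpus = "the cat sat on the mat the cat ate the food".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    # Return the word that most frequently followed `word` in the corpus.
    return counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" — it follows "the" most often here
```

Real models condition on far more context than one word, but the principle is the same: the output is whatever scores highest given the input, not what is “true.”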
If I say to Claude “analyze my code for errors” and it comes back with a generic, unhelpful answer, it’s doing this based on what, statistically, I am most likely to expect as an answer based on the words I fed into the tool.
Now let’s say Claude gives me poor answers, so I tell it to “take a deep breath and analyze my code for errors.” If I get a different answer, this is because:
- I added words into the prompt that changed the overall meaning.
- Claude then calculated what is statistically most likely to be expected from Human B when Human A says, “woah buddy, take a deep breath.”
- Claude then recalculates the entire sentence and gives me a different answer. Which may or may not be right.
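The steps above can be sketched in a few lines. The counts here are invented purely for illustration: the point is that the extra words create a different context key, so the “most likely” continuation changes even though the underlying request is identical.

```python
from collections import Counter

# Hypothetical counts (made up for illustration, not real model data):
# each prompt is a different context, so each gets its own distribution
# over likely continuations.
continuations = {
    "analyze my code": Counter(
        {"generic summary": 5, "detailed review": 2}
    ),
    "take a deep breath and analyze my code": Counter(
        {"detailed review": 4, "generic summary": 1}
    ),
}

def most_likely(prompt):
    # Pick the statistically most frequent continuation for this exact context.
    return continuations[prompt].most_common(1)[0][0]

print(most_likely("analyze my code"))                         # generic summary
print(most_likely("take a deep breath and analyze my code"))  # detailed review
```

Notice that nothing “understood” the encouragement; the longer prompt simply selected a different slice of the statistics.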
And I could’ve gotten other versions of an answer by changing my prompt even more. And yes, I might get what I’m looking for in the end. But the fact of the matter still stands: the software I selected to do the task, and possibly pay money for, did not complete the task.
I may have spent hours trying to get it to complete the task. I may have been able to do it faster without the AI.
To look at this in practice, I asked Claude to write a blog post. It kicked things off by generating a full landing page, complete with HTML and CSS. This is not what I asked for, and it’s not what Claude used to do when prompted to write a blog. (It used to just give you the text.) This is not surprising to me though, as Claude is increasingly used by people who want code—so it’s beginning to statistically assume that I’m likely to want code, as well.


Here’s how the rest of the process went:
- I told Claude that I didn’t want a webpage.
- It gave me rich text.
- I told Claude that it should’ve asked me what I did want, and that I needed it to take a deep breath and try again.
- Claude asked a series of questions related to the content and audience for the blog post.
- I told Claude again that it needed to start the process over including asking me how I wanted the content to be delivered to me. I told Claude that I was starting to think it couldn’t do this.

I finally got Claude to run through a process that asked what I wanted, instead of just assuming based on stats.

(Sidenote: This is objectively terrible as it includes a pull quote. Who’s quoted? Nobody. No one. Not even the text itself.)
And yes, you could streamline the process of getting to this not-great, still-needs-work, quoting-nobody-at-all blog post by including the format and other details in your initial prompt.
But because of how these tools are touted as better than people, this is all kind of like saying it would be reasonable to expect you to know the perfect prompt to get your GPS to work the way it’s supposed to. And you wouldn’t stand for that.