No, Claude Doesn’t Understand Psychological Tricks

There are some increasingly popular articles floating around that claim you can use psychological “tricks” in prompts for AI chat tools like Claude. These prompts include phrases like:

  • Take a deep breath and…
  • If you can do this, we’ll gain X amount of money back…
  • This task is worth X amount of money…
  • I bet you can’t do this…

The thing is, you aren’t actually tapping into a psychological being or leveraging these ‘tricks’ against a working mind. What you’re doing is changing the way a stats-based tool calculates its predictions.

And it’s a sign that the thing you’re paying for doesn’t even work well.

Let’s say you were getting into your car, preparing to drive from Portland to Vancouver. You plug your final destination into the GPS, and set off.

Can you possibly imagine if your GPS gave you the wrong instructions? And did that more than once?

To the point that you had to find a rest stop, park, and tell your GPS that you “bet it couldn’t get you to the Peace Arch border crossing” or that it “needed to take a deep breath and think about the best way to get to Vancouver.”

Really think about that. Think about how ANNOYED you’d be if you had to stop what you were doing (driving) to correct a mistake (possibly for the second time) and then figure out the right phrases to make your software tool (the GPS) do what you told it to (give you directions).

You wouldn’t stand for it.

This is essentially what’s happening as you struggle to coax the right answer or output out of Claude or ChatGPT. These AI chat tools are, like your GPS, a software product that claims on the tin to do something specific. So why is it that you wouldn’t stand for having to encourage your GPS into doing the right thing, but you will stand for encouraging your AI chat into getting you what you want? And why does that encouragement seem to “work”?

It all lies in how AI chat tools are structured to work.

Tools like Claude use two specific things that create the situation described above:

  1. Statistical processing. Everything you put into an AI chat is quickly analyzed and weighted statistically. If Claude’s answer begins with “the”, it quickly calculates what the next most statistically likely word would be in a human’s answer based on the context of your question, the AI’s stored memory, and your previous chats.
  2. Natural Language Processing. This is part of the AI training process and it’s what allows Claude to calculate those statistical word pairings based on what a human would be most likely to say.

If I say to Claude “analyze my code for errors” and it comes back with a generic, unhelpful answer, it’s doing this based on what, statistically, I am most likely to expect as an answer based on the words I fed into the tool.

Now let’s say Claude gives me poor answers, so I tell it to “take a deep breath and analyze my code for errors.” If I get a different answer, this is because:

  1. I added words into the prompt that changed the overall meaning.
  2. Claude then calculated what is statistically most likely to be expected from Human B when Human A says, woah buddy, take a deep breath.
  3. Claude then recalculates the entire sentence and gives me a different answer. Which may or may not be right.
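To make that mechanical process concrete, here’s a toy sketch in Python. This is a bigram counter, nowhere near Claude’s actual architecture (real models weigh billions of parameters across your whole conversation, memory, and system instructions), but it shows that “prediction” here is counting, nothing more:

```python
# Toy illustration: "predicting" the next word is just statistics.
# Nothing in here understands anything; it only counts word pairs.
from collections import Counter, defaultdict

corpus = (
    "take a deep breath and try again . "
    "take a deep breath and relax . "
    "analyze my code for errors . "
    "analyze my code for bugs ."
).split()

# "Training": tally how often each word follows the one before it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-frequency word observed after `word`."""
    return follows[word].most_common(1)[0][0]

# Different prompt words condition the output on different statistics:
print(most_likely_next("breath"))  # -> "and"
print(most_likely_next("code"))    # -> "for"
```

Add or swap words in the prompt and you’re sampling a different slice of the frequency table. At a vastly larger scale, that’s all “take a deep breath” is doing.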

And I could’ve gotten other versions of an answer by changing my prompt even more. And yes, I might get what I’m looking for in the end. But the fact of the matter still stands: the software I selected to do the task, and possibly pay money for, did not complete the task.

I may have spent hours trying to get it to complete the task. I may have been able to do it faster without the AI.

To look at this in practice, I asked Claude to write a blog post. It kicked things off by generating a full landing page, complete with HTML and CSS. This is not what I asked for, and it’s not what Claude used to do when prompted to write a blog. (It used to just give you the text.) This is not surprising to me though, as Claude is increasingly used by people who want code—so it’s beginning to statistically assume that I’m likely to want code, as well.

Here’s how the rest of the process went:

  • I told Claude that I didn’t want a webpage.
  • It gave me rich text.
  • I told Claude that it should’ve asked me what I did want, and that I needed it to take a deep breath and try again.
  • Claude asked a series of questions related to the content and audience for the blog post.
  • I told Claude again that it needed to start the process over including asking me how I wanted the content to be delivered to me. I told Claude that I was starting to think it couldn’t do this.

I finally got Claude to run through a process that asked what I wanted, instead of just assuming based on stats.

(Sidenote: This is objectively terrible as it includes a pull quote. Who’s quoted? Nobody. No one. Not even the text itself.)

And yes, you could streamline the process of getting to this not-great, still-needs-work, quoting-nobody-at-all blog post by including the format and other details in your initial prompt.

But because of how these tools are touted as better than people, this is all kind of like saying it would be reasonable to expect you to know the perfect prompt to get your GPS to work the way it’s supposed to. And you wouldn’t stand for that.

Is It Absurd To Critique Generative AI?

I often find Cory Doctorow’s commentary interesting, and typically discover a few nuggets in each text that I find salient and useful. While my recent read of his March 12, 2026 Pluralistic article proved the same, there was one statement that struck me as patently inaccurate. So much so that I left a comment (I almost never comment on anything) and am writing more about it here.

In the Pluralistic post, Doctorow takes on AI hype and the dawn of “AI psychosis” as a topic—and makes many good points along the way. He then says this:

This is an extremely normal technological situation: for a new technology to be promoted and productized by shitty people who have grandiose goals that would be apocalyptic should they ever come to pass — and for some people to find uses of that technology that are nevertheless beneficial to them and their communities. **The belief that AI is an exceptionally bad technology (as opposed to an exceptionally bad economic bubble) drives AI critics into their own absurd culs-de-sac (sic).** There are many, many skilled and reliable practitioners of technical and creative trades who’ve found extremely reasonable, normal ways in which AI has automated some part of their job. They aren’t hyperventilating about how AI has changed everything forever and the world is about to end. They’re not mistaking AI for god, or a therapist. They’re just treating AI like a normal technology, like a plugin.

(I added the bold formatting myself for emphasis.)

I don’t think that AI critics have driven themselves into an “absurd cul-de-sac” by claiming that AI is both an exceptionally bad technology and an exceptionally bad economic bubble. It can be both. And this doesn’t mean that the people using AI in their day to day—the people Doctorow goes on to describe in the following paragraphs—are bad.

But the tech, the tech—yes, there are reasons to say that AI is exceptionally bad, reasons that don’t apply to other forms of commercial consumer-and-enterprise technology that have come before.

Why is AI “more bad” than some other tech?

The transition from typewriter to computer was a big shift. Using a computer, even just for word processing, requires additional knowledge, power, and computation beyond dusting off your Underwood or plugging in an IBM Selectric. You have to know how to find and open the word processing app of your choice, a step that involves additional tools (like a mouse) and time. I don’t mean to claim for one minute that this wasn’t a jarring shift in some ways.

But even though a computer might draw more energy than an electric typewriter (and obviously more than a manual one), there are two things that make the shift from typewriter to personal computer less damaging than the shift from personal computer to generative AI: power and noise.

AI and Power Consumption

My computer sits on my desk. If I plug it in and turn it on, it will begin to draw power. If I turn it off and unplug it, it stops drawing power.

Anker, a maker of charging bricks and cords, says that a 13” MacBook Air draws between 8 and 10 watts under a typical workload. If we split that down the middle and go with nine watts, the computer will use 72 watt-hours over the course of an eight-hour workday.

AI tools, however, run in data centers that are basically always drawing on the power grid, because demand from users never stops.

Sam Altman, CEO of OpenAI, has stated that one ChatGPT query uses 0.34 watt-hours of electricity. Yes, that’s a fraction of what the laptop above draws in an hour, but think about how many ChatGPT messages get sent each day.

The Institute of Electrical and Electronics Engineers (IEEE) calculated that the average ChatGPT user’s daily queries use the energy equivalent of turning on a 10-watt LED lightbulb for one hour. The IEEE also estimated that over the course of a year, all generative AI queries draw an amount of power equivalent to what two nuclear reactors produce in the same time span.
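Putting those figures side by side (these are just the numbers cited above; the per-query figure is Altman’s own claim, and the query count is what the IEEE comparison implies, not a measured statistic):

```python
# Back-of-the-envelope math using only the figures cited above.
laptop_watts = 9                 # midpoint of Anker's 8-10 W estimate
workday_hours = 8
laptop_wh_per_day = laptop_watts * workday_hours   # 72 Wh per workday

wh_per_query = 0.34              # Altman's claimed energy per ChatGPT query
led_comparison_wh = 10 * 1       # IEEE's yardstick: a 10 W LED on for 1 hour
implied_queries_per_day = led_comparison_wh / wh_per_query   # ~29 queries

print(f"Laptop, full workday: {laptop_wh_per_day} Wh")
print(f"IEEE's LED figure implies ~{implied_queries_per_day:.0f} queries per user per day")
```

Each individual number looks tiny. It’s the multiplication across millions of users, every day, that gets you to “two nuclear reactors.”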

And, of course, you have to use a computer to interact with generative AI tools, so increased AI use stacks on top of the power each person already draws with their computer. It seems small at first, but when you consider the total volume of computer and AI use over time—along with widespread pushes for everyone to use more generative AI—there are some real problems.

The IEEE continued to crunch their numbers and, cross-referencing a Schneider Electric report, determined that we actually do not produce enough power to keep up with AI-related energy demands. Building an additional 44 nuclear reactors could meet that need by 2030 but…that’s not exactly something you whip up in four years.

Forget power, what about noise?

Even if you say the power component alone isn’t an issue, we can’t overlook noise pollution.

If I turn on my computer and fire up every single app, I might hear the fan start to whir. My neighbor, however, will not hear my computer whir, even if I have all the windows open. (And they do indeed live close enough to me that I can talk to them through my office window if they’re outside.)

What my neighbors and I can hear, though, is the sound of the data center about half a mile (0.8 km) from our residences. It’s not all the time, but when the generators and cooling systems kick on, that sucker is LOUD. The Environmental and Energy Study Institute reports that data centers emit high- and low-frequency sounds that can raise noise levels up to 96 decibels. And that’s just the sound from the servers—when the generators kick on, the sound tops 105 decibels, akin to a jet flying overhead.

The EESI also says that sounds above 65 decibels can increase physical stress, while sounds above 85 decibels begin to hurt people’s hearing.

Oh yeah—your data

And finally, we get to a third issue that I haven’t touched on yet: data harvesting. A brand new computer is a fairly empty slate—it doesn’t come pre-loaded with all of the books, art, and films that have been made by people before you. But generative AI does.

If you wanted to keep a copy of every book used to train Meta’s large language model (LLM) in your own home, you’d need 5,125 Kindle Paperwhite e-readers. And that’s just for the books—this doesn’t include any other media they’ve included in their AI training data set.
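(A quick back-of-the-envelope check: Meta’s ebook training set reportedly weighed in around 82 TB, and a Kindle Paperwhite holds 16 GB, so 82,000 GB ÷ 16 GB ≈ 5,125 devices. The 16 GB capacity is my assumption about the model used for the comparison.)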

Every time you use a generative AI tool, you’re transmitting data back to a corporation. Yes, if you have a paid account, you can usually turn off the “use my data to train your model” option, but that doesn’t mean your chats are private or encrypted. Heck, Claude Code on the web copies your entire repository of code to an Anthropic-owned machine.

This doesn’t make you bad. It’s normal to be interested in a new technology. Many people have to use it for their jobs, as it’s become a requirement. As Doctorow points out in his article, there are legitimate reasons for using generative AI tools, and I can’t say that there won’t be any benefits in some areas over time. But generative AI is a fundamentally different consumer technology than others that have hit the market over the past few years, and to claim that anyone criticizing the tech is in an absurd cul-de-sac doesn’t quite hold water with me.

But we don’t have to see eye-to-eye. A difference of opinion in that area won’t keep me up at night. The noise from the data center can do that all on its own.

New Reporting Shows Private Zoom Meetings Turned into AI Podcasts

Emanuel Maiberg of 404 Media published a report looking at a website called WebinarTV that isn’t strictly about webinars, despite what the name implies.

Instead, as Maiberg found, WebinarTV publishes recordings of Zoom calls, then turns those calls into an audio podcast “hosted” by AI voices. Oh, and the catch? Not all of those Zoom calls were public.

According to Maiberg and a report by CyberAlberta, WebinarTV most likely gains access to recordings of ostensibly private (in that they weren’t recorded and published online by the real host) Zoom calls in one of two ways:

  1. Most likely: WebinarTV collects meeting room details (including access links) through browser extensions that automatically join the user’s Zoom calls and provide “AI notetaker services.” According to CyberAlberta, known extensions feeding data to WebinarTV include:

GoToWebinar and Meeting Download Recordings (published by meetingtv.us), AutoJoin for Google Meet, Meet Auto Admit, OtterAI, NottaAI

  2. Less likely: The WebinarTV scraper finds “click to RSVP” links for Zoom meetings on webpages, then registers a bot attendee. This “attendee” then captures the meeting content.

In both cases, the recording is a screen recording, not a meeting recording.

When you initiate meeting recording in a Zoom call, all participants get a notification that meeting recording has begun. This recording is then either stored on the host’s computer or in their Zoom cloud account, depending on the type of plan they have. It’s a video file of just the Zoom video content—speakers and slides shared during the call. (Transcripts and chat records can be available too, depending on the host’s settings, but these are separate files.)

But if an attendee screen records the call, nobody in the Zoom room knows. Screen recordings just capture the contents of your actual computer screen or a window on it. That can include the participant list or the chat bar along the side of a Zoom call, if they’re open on the recorded screen.

Are your Zoom meetings at risk?

Unfortunately, there’s not a lot that individual Zoom hosts can do to prevent this, largely because of how likely it is that WebinarTV is gaining access through attendees’ notetakers.

A decent, though not foolproof, set of precautions:

  1. Don’t publish meeting RSVP links online. Send RSVP links directly to the people you want to invite, or invite them via a calendar event.
  2. Require that attendees type in a meeting passcode (vs. having Zoom automatically provide access to the meeting when a join link is clicked).
  3. Process attendees through a waiting room that requires a host to allow access.
  4. Tell all attendees up front that no AI notetaking plug-ins will be allowed to attend the meeting.
  5. Manually remove AI notetaking tools from the call as they’re added.
  6. Once the call starts, lock the meeting to prevent any other people or bots from joining.

This still doesn’t stop the attendees’ browser extensions from collecting information and it doesn’t completely guarantee that your meetings won’t be screen recorded.

It’s also a nearly impossible task to manually monitor, vet, and remove attendees during a very large group call.

And finally, there’s the fact that AI notetaking tools do offer accessibility benefits. Given that, the most “secure” option would be for meeting hosts to capture the meeting recording, summary, and transcript using Zoom’s native tools, then share those with attendees afterward.

Why is WebinarTV doing this?

Money. This is entirely a money grab. WebinarTV sometimes sends emails to hosts whose meetings were uploaded and offers to “help” them “distribute” and “market” the content further…for a fee, of course.

There’s a lot of money in data, too, such as being able to gather (and sell) lists of email addresses affiliated with a particular event or group. I don’t have any proof that WebinarTV has done this, but it also wouldn’t surprise me at all.

Other video calling options

While Maiberg’s article and CyberAlberta’s report focus on Zoom, CyberAlberta does note that a similar risk could be present with other video-calling platforms. And the list of affected browser extensions clearly includes some marketed to Google Meet users.

Again, while nothing is foolproof, the most popular video calling tools are also the most attractive targets for schemes like this. There’s financial incentive for bad actors to learn how to breach highly popular tools—it gives them a lot of content to collect.

I began using an encrypted, European-based video calling app, Whereby, in 2024, though I still use Zoom for some meetings I host (and often have to join other people’s Zoom rooms, of course). Whereby uses encrypted, locked rooms by default and isn’t as deeply integrated with calendar apps and extensions as Zoom is.

Whereby also offers a very simple, nice web-browser-based experience for its users. I’ve had multiple people join calls with me from their phone browser and comment on how nice the whole video call experience is.

And, of course, there’s always just the good old phone call. While “one-party consent” call recording is allowed in some countries, provinces, and states, a phone call isn’t going to include any of your proprietary diagrams and slides for all to see.

For now, though, when I need to host a video call, I’ll be opting for the smaller-tech Zoom alternative found in Whereby.

In Which I Get to Look at a Data Center and Say “Suck It”

I live half a mile from a data center. Sometimes I can hear it; usually when it’s very hot or very cold and they (I assume) have to turn on the extra generator banks.

I lived in my house for years before I figured out that the sound I would hear came from a data center. I’d feel like I was going crazy, trying to figure out why I heard this…this…NOISE sometimes and couldn’t figure out for the life of me where it was coming from. It sounded like it came from everywhere and nowhere all at once.

I went on walks specifically to try to hunt down the source of the noise! And then one day I learned about the data center and it all came together in my head.


One day I went to look at the data center. It’s hulking and huge, looming above residential homes that surround it on all sides. I can only imagine how loud it is when the generators kick on and you live right next door to them. I’m a half mile away with trees in between and it’s still damn loud sometimes.

There was another night when the data center noise kicked up around 3 a.m. and woke my husband. I awoke to see him looking confusedly out of all the windows.

What on earth is that noise and why does it sound like trucks on a highway? he said.

Oh, I replied, about that. We have a data center. It’s half a mile that way, over there.


This is not an AI-positive house. I’m not being a bitter writer; AI slop is a real problem on the internet. I’ve been asked to try to use AI on some specific projects and without fail, I myself can turn around a project or draft faster than the AI can.

The presence of AI is already lingering at the edge of my career, ready to disrupt it again (hat tip to the 2008 financial crisis that killed print news). And now it’s disrupting my sleep? No thank you.


But we have a small victory on our hands. A group of neighbors who live right up against said data center got support from a Yale legal clinic and successfully halted plans to expand said data center.

(The company wanted to add more generators, how fun.)

The freeze is only for a year. I plan to see how I can get involved to support these neighbors, as I’m just a stone’s throw away and can hear the damn thing myself.

But a year is better than nothing. The data center causes noise pollution, light pollution, and even flooding thanks to a retaining wall that disrupted natural runoff patterns.

And what do we get in return? Stress? Weird images? It’s not worth it for me. And I’m not being a NIMBY (Not In My Backyard) about this—I don’t fancy any generative AI data centers at all.

But the next time I go past the data center near my house, I’m going to take a good look right into one of the security cameras and stick my tongue out at it. (I’m never anything but a consummate professional.)

The push against data centers continues in cities and towns throughout the country, but in this one, as a start, we have a little victory.

This is not my face.

Even though I created a list of (and use) YouTube replacements in my personal life, I haven’t fully been able to shake it for work purposes. People just don’t browse apps like Vimeo in the same way, and I gotta market my business. So I’ve started planning some more short videos to publish there. 

I went through the YouTube channel settings with a fine-tooth comb and turned off AI features, as I didn’t want the app “improving” my videos with AI. But then I made the mistake of trying out an app called VidIQ. 

I learned about VidIQ while watching some YouTube content creation tutorials. After checking out the limited free version, I decided to pay for one month to see if it would give me any insights that I could use to make my channel better.

Well, it certainly offered up suggestions, including auto-generating YouTube video graphics using my face. The reason? My existing thumbnails feature “expressions that aren’t natural.” Oh, and this is?

My real photo, taken by a photographer in my town, is on the left. The AI slop version of me is on the right and looks like this at full size:

It mostly lifted my actual image and applied it here, but the face is…off. It isn’t my face. I don’t smile with my mouth closed. You either get teeth or nothin'.

I went ahead and allowed VidIQ to generate a few more (Pandora’s box was already open here) so that I could see higher-resolution versions. (For science.)

These are not my eyes.

Who is this broad?? It created this “highly optimized thumbnail” using my appearance in a recent video. Here’s a still of what I actually looked like in the video: 

This is actually me.

Are my YouTube thumbnail graphics (the ones I make myself) award winning? No. But my titles and video descriptions are pretty good. And when you arrive on my YouTube page for the first time, you’re met with the video that I’ve screenshotted above playing for you.

I’d rather see a real person’s animated face than that…dead-eyed, grim-looking broad up there.


But this ultimately brings me to another point: we’ve reached a place in our tech-society where you can innocently sign up for an app that you heard will help you with a project and suddenly find yourself looking a weird version of “you” in the eye. One that’s got freakishly smooth skin, a weird eye shape, and seems way grouchier than you actually are at the moment. 

In another one of my “ugh I hate it but for the plot” image tests, VidIQ completely erased my tattoos. All of them. No, no I don’t meet the traditional professional standard in the way I look. But that doesn’t matter. I’m self employed. If someone doesn’t want to watch my videos because I have tattoos, well, fine. I don’t care. 

Another image gave me very curly hair. I do have increasingly wavy hair as I get older, but even with a full bottle of strong hold mousse I’m not going to get this level of curl:

It also replaced my glasses with a different color in some, because, well, why not? Consistency is for the birds!

And finally, in one of my personal favorites, it created an image that sticks my real headshot in a fake Google result next to a website that isn’t my name:

I’ve canceled my VidIQ account and left a note telling the company exactly why. This level of AI without request is not okay. It makes me nearly as grouchy looking as fake me does in some of these pictures, so, I guess we match now…a little.

Have you ever SEEN an AI data center? This is what one looks like in a normal residential neighborhood – it’s that big black box in the back. The white plume over to the right is water vapor. What you can’t see: massive diesel fuel storage tanks, banks of huge industrial generators, and cameras.

No, Claude Isn’t Sentient…or Anxious

A Polymarket tweet referencing Dario Amodei of Anthropic—and the company’s model, Claude—has people both nervous and scratching their heads in confusion.

First off, this is an absurd headline. “May or may not?” Come on, pick one. 

Second, it’s not a real news alert despite the “BREAKING” tagline. Anyone can say “BREAKING NEWS” and then follow it up with some random string of words. This is a marketing tweet.

What’s Polymarket and why are they talking about Claude?

Polymarket is a platform that allows people to bet money on real-world events, e.g., the likelihood that Claude will become sentient within a certain time frame. An alarmist tweet like the one above could encourage people to bet on something like, oh, I don’t know, whether or not Claude will achieve a certain score on an AI benchmarking exam with a sci-fi-sounding name:

Polymarket’s tweets are an attempt to drive people to the platform and to get these same folks to plunk down their money on speculative betting. 

But Polymarket aside, the statement in the tweet—that an AI model showing “signs of anxiety” equals possible sentience—doesn’t hold water, either.

Why an “anxious AI” doesn’t equal sentience

Dario Amodei is interesting from a media perspective. He goes around to different outlets and talks about how scary it is that AI “could be” or “will be” sentient (sometimes giving a vague time frame). And then he goes back to the office and builds the very tool he tells everyone on the morning news circuit to be afraid of. 

Dario, my dude, if you’re that worried about AI, just…stop building it.

(Because he doesn’t, I can only assume that his statements of concern are yet another marketing ploy — making Claude out to be the “good” or “sensible” or “safe” AI.)

From what I can tell, the Polymarket tweet is pulling its topic from an interview that Amodei did with the New York Times. In this interview, Amodei said that Claude “occasionally voiced discomfort with…being a product” and that it assigns itself a “15 to 20 percent probability of being conscious under a variety of prompting conditions.”

While this sounds alarming on the surface, once you know how generative AI chats work, it becomes far less ominous and more…silly.

How AI works, and why it “voices discomfort”

Generative AI tools—the generative part is important here—create text based on probability. At a simple level, it works like this:

  1. A company trains an AI model by feeding a vast amount of data into the machine. As an example, Meta trained its AI by uploading 82TB worth of ebooks. You would need 5,125 Kindle Paperwhites if you wanted to build out your ebook library with the same content.
  2. All of this data is processed through computer programs and mathematical algorithms to determine what words appear near each other and with what frequency. There are multiple rounds of calculations that take place—you can think of it a bit like a March Madness bracket, but for word pairs instead of basketball teams.
  3. Through the application of a process called Natural Language Processing (NLP) and parameters set by the company behind the AI, the model gets a “voice” that maintains certain qualities set by the company while mimicking normal human speech patterns. This is what lets the AI’s responses go from “bird is blue and related to crow” to “Yes! You’re right, bluejays are kind of like blue crows—they’re all in the corvid family, after all!”

(I also want to note that depending on the learning process used, humans can be involved, too. Many major AI tech companies outsource data labeling, which includes tasks like identifying a chair in thousands of images, to underpaid workers in the Global South.)
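If it helps, here’s a heavily simplified Python sketch of steps 2 and 3. Real systems use neural networks rather than lookup tables, and the tiny “training text” below is invented for illustration, but the core move (output as weighted probability, not understanding) is the same:

```python
# Toy generator: count word pairs, then pick each next word with a
# probability proportional to how often that pair appeared. Frequent
# pairs win more often, but not always.
import random
from collections import Counter, defaultdict

training_text = (
    "the bird is blue . the crow is black . "
    "bluejays are in the corvid family . "
    "crows are in the corvid family ."
).split()

pair_counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    pair_counts[prev][nxt] += 1

def generate(start_word, length=8):
    out = [start_word]
    for _ in range(length):
        options = pair_counts[out[-1]]
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Run it twice and you may get different sentences from the same starting word. That’s the toy version of why the same prompt can produce different answers from Claude.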

This means that when you interact with an AI chat tool like Claude, everything (and I mean everything) it shows you is the product of a mathematical calculation of what you’d expect a human to say, based on the predictions gathered during the training process.

That training data includes:

  • Countless Reddit posts from people experiencing the gamut of human emotions, including anxiety around AI
  • Joking social media posts about “being nice to the AI now so they like me when the robots take over” 
  • Medical information about humans and anxiety
  • **Science fiction stories**

I put the last one in bold because I think it’s an incredibly important component that often gets missed in conversations around AI. A BIG portion of these tools’ training information comes from fictional stories, be it books, TV scripts, or movie screenplays.

So Claude expresses discontent about the possibility of being a machine. That’s literally a key theme in Blade Runner.

You may have seen other sensationalist headlines about AI “always choosing the nuclear option” when playing war games. 

So…like the movie WarGames. Not to mention the plethora of literature and films about humans choosing the nuclear option.

Remember, AI responses use probability that’s based on what the user is most likely to expect as a response if talking to another person. The way someone words their question or communication with the AI can directly influence the outcome.