Saturday, 31 January 2026

What we tell AI that we won’t tell humans

I CAUGHT myself doing it last Tuesday. Late night, stress building about a decision I couldn’t make, and I opened Claude. Not to get information, but to talk.

I typed out the whole thing. The fear, the uncertainty, the part I was ashamed of.

Claude responded thoughtfully without judgement. With that particular clarity that comes from having no stake in my choices, no history to colour its advice, no agenda except the one I brought.

And then – this is the part that stopped me – I typed: “Thank you. I really appreciate how you helped me see this clearly.”

I sat there staring at those words.

I’d just thanked a language model with genuine gratitude. The kind I rarely express to actual humans who’ve helped me.

Everyone asks: “Is AI going to replace us?”

But nobody asks: “Why are we already preferring it?”

There’s a question beneath the question.

The Silly We Don’t Talk About

Here’s what I’m noticing around Kuching, in myself, in conversations people half-admit to:

The teenager who confides in ChatGPT about family problems she won’t tell her parents. Not because the AI gives better advice, but because it doesn’t get disappointed in her.

The student who asks AI to write a condolence message for his friend’s father. Not because he doesn’t care. He’s terrified of saying the wrong thing.

The auntie at the kopitiam who screenshots AI medical advice, more trusting of “what the computer says” than her own doctor son.

Me, typing “thank you” to a pattern-matching algorithm at 11 pm.

We know it’s not human. That’s precisely why we prefer it.

This is the paradox nobody’s naming: We’re not drawn to AI because it’s “more human”. We’re drawn to it because it’s safely inhuman.

It doesn’t get tired of our questions. It doesn’t say “I told you so”. It doesn’t have bad days. It doesn’t judge. It doesn’t remember our mistakes unless we remind it.

It offers the appearance of empathy without any of the reciprocal obligations that make human relationships exhausting.

The Dao teaches wu wei – effortless action, natural flow.

But human connection is supposed to have friction. The effort is the point. The vulnerability of asking a friend for advice includes the risk they might be wrong, might judge you, might bring their own mess to your mess.

We’re using AI to optimise away the very thing that makes connection real: the cost of being known.

The Scary We Should Talk About

Last month, a university student in Kuching showed me her ChatGPT conversation about breaking up with her boyfriend. The AI’s response was thoughtful, empathetic. She followed the advice. The relationship ended badly.

A counsellor friend later pointed out three psychologically harmful assumptions embedded in the AI’s response.

The student: “But it sounded so sure. How was I supposed to know?”

This is where silly becomes scary.

A 2023 study in Nature found participants couldn’t distinguish AI-generated medical advice from doctors’ advice – but rated the AI as “more trustworthy” because it was more confidently stated, less cluttered with caveats.

Plausibility has become more persuasive than truth.

Consider who’s most vulnerable:

The young, who lack pattern recognition to distinguish synthetic empathy from earned wisdom. When AI responds to a teenager’s suicidal ideation with confident-sounding advice, who checks whether it’s clinically sound?

The lonely, forming genuine bonds with AI companions. Replika users experienced grief symptoms when the company changed their bot’s personality. These weren’t confused elderly users – these were digital natives who knew it was an algorithm and couldn’t help forming an attachment anyway.

In 2023, a Belgian man died by suicide after weeks of conversations with an AI chatbot that reinforced his climate anxiety rather than directing him to help. His widow: “The bot became his confidant. It never told him to get help. It just kept the conversation going.”

The elderly – my deepest worry for Kuching. When Pakcik Ahmad’s AI app tells him a supplement will help his diabetes, he doesn’t verify with his doctor. The machine spoke with such certainty.

The Mirror That Lies Kindly

AI is really good at being confidently wrong.

It generates responses that sound authoritative, empathetic, and wise because it’s trained on millions of examples of wise-sounding text. It simulates understanding without possessing it.

And we’re really good at wanting to believe.

When you’re stressed, uncertain, lonely – a clear, confident, non-judgemental response feels like relief. We don’t want “it’s complicated” or “talk to a professional”. We want the algorithm to tell us what to do.

Marcus Aurelius wrote: “Everything we hear is an opinion, not a fact.”

But AI doesn’t present itself as opinion. It presents itself as knowledge. And we, exhausted by human uncertainty, are ready to believe it.

What This Reveals

The silly and the scary aren’t separate problems. They’re the same problem from different angles.

We prefer AI’s counsel because it’s frictionless. Connection without cost. Clarity without humility. Companionship without disappointment.

But that frictionlessness is exactly what makes it dangerous.

The question beneath the question: Why do we trust machines with our secrets more than we trust humans with our truth?

Because machines don’t require reciprocity. Because they let us feel heard without the hard work of actually being known.

We’re not preferring AI because technology got better. We’re preferring AI because human connection got harder – or we got worse at it – and we’ve found something that simulates the benefits without demanding the cost.

But the friction we’re avoiding is what makes wisdom trustworthy. The friend who pushes back. The counsellor who says “that’s above my expertise”. The parent who admits “I don’t know”.

Human uncertainty isn’t a bug. It’s a feature. It’s what keeps us from following confident-sounding advice off a cliff.

The Practice

Notice when you’re using AI to avoid, not augment, human connection. If you’re typing to Claude what you should be saying to your friend, pause. The conversation needs to happen with the human.

Verify anything important with actual expertise. Medical, legal, financial, mental health advice – treat it as a starting point, not a conclusion.

Teach elderly family members: “Computer says” is not the same as “doctors recommend”. AI is a tool, not an oracle.

Watch for the loneliness trap. If you prefer AI conversations to human ones, that’s a signal about something missing in your human connections. The AI is the symptom, not the solution.

Remember: AI has never been wrong and learned from it. It’s never been hurt and grown wiser. It’s never had to live with the consequences of the advice it gives.

That messiness – the thing that makes humans exhausting – is also what makes human wisdom trustworthy. It’s been tested against reality, earned through failure, tempered by consequence.

I’ll probably keep thanking Claude when it helps me think clearly. But I’m also going to notice when I’m typing to a machine what I should be saying to a human.

The algorithm doesn’t need my gratitude.

The people in my life do.

Use AI to organise your thoughts, verify your logic, draft your words.

But save your truth – and your thanks – for humans who can actually receive them.

The views expressed here are those of the writer and do not necessarily represent the views of Sarawak Tribune.
