Two years ago, I finally gave in to the hype.
Every tech newsletter (thank you TLDR), every LinkedIn post, and every half-baked podcast promised that this thing called “ChatGPT” would change everything.
“It writes like a human!”
“It’s revolutionary!”
Now, I’m not one to go and jump on every internet bandwagon unless it promises to ruin my productivity spectacularly, but this, this was something I just had to check out for myself.
So, on a Friday afternoon in late 2022, while dodging actual work, I opened a browser tab and faced the future.
The interface was disarmingly simple: a text box and a blinking cursor. Just like AIM back in the day.
But this time, behind that cursor lurked an “intelligence” that could supposedly write, reason, and solve problems.
I started small: “Explain why socks disappear in the dryer.”
Instantly, it gave me a pseudo-scientific essay about static cling and sock aerodynamics, delivered with the confident flourish of someone who’d devoted their life to laundry physics.
Not bad; better than most humans could wing on the spot.
Encouraged, I leveled up: “Explain quantum physics in simple terms.”
Again, an answer popped up.
Clear, well-structured, full of analogies about spinning coins and cats in boxes.
It sounded authoritative. Almost…wise.
Two years later, I know better.
I’ve watched this tech evolve from a parlor trick to a staple in search engines, homework helpers, and workplaces.
And I’ve seen millions repeat my rookie mistake: assuming the machine’s fluency means understanding.
How I Got Sweet-Talked Into Stupidity
I have an insatiable curiosity about things.
My brain’s a popcorn machine for useless questions: Who coined ‘the apple doesn’t fall far from the tree’?
How do you trap a ship in a bottle?
Where the hell does the sun go at night?
Yup, with that kind of thirst and the ‘information highway’ at my disposal, I’ve learned and forgotten more random facts than a person could possibly need.
I’m pretty good at trivia, though.
Nevertheless, those first couple of hours were textbook seduction.
I’m mildly ashamed to admit that I turned that poor robot into my personal random fact hamster wheel.
Then I shoveled in more complicated tasks: write a business plan, analyze Hamlet, debug my sloppy code.
Every answer came back smooth, coherent, almost superhuman.
Had I met my match?
Like so many before me, I made the same error: I started to trust it.
But here’s what I missed then, and what plenty of people still miss now: I wasn’t chatting with an “intelligence.”
I was poking the world’s most advanced pattern machine, one so good at mimicking understanding that even a skeptic like me fell for the act.
ChatGPT doesn’t “know” quantum physics, Hamlet, or anything about my broken Python script.
It doesn’t store facts like a library or reason step-by-step like a human.
It does something far stranger: it predicts the most likely next word, based on billions of examples of how we humans string words together.
The World’s Smartest Parrot
Once I realized I’d been duped, I dug deeper.
Large Language Models (LLMs) like ChatGPT run on something called transformer architecture.
Forget the math behind it for a minute, because I’m a writer and a proud math-a-plegic.
But here’s the gist of how it works (with a hands-on sketch right after the list):
- Breaks your input into tokens (words or fragments).
- Turns those tokens into numbers.
- Uses “attention” to see how words relate.
- Predicts the next likely token.
- Repeats until it spits out a complete answer.
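You don’t have to take my word for that list, either. Here’s a minimal sketch of those steps using the open-source Hugging Face transformers library and the small GPT-2 model (ChatGPT’s own weights aren’t public, so GPT-2 is my stand-in; same family, same trick):

```python
# A minimal sketch of the token-prediction pipeline, using the
# Hugging Face `transformers` library with the small GPT-2 model.
# (Requires: pip install transformers torch)
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Socks disappear in the dryer because"

# Steps 1-2: break the input into tokens and turn them into numbers.
inputs = tokenizer(prompt, return_tensors="pt")
print(inputs["input_ids"])  # the prompt, now just a row of integers

# Steps 3-4: attention runs inside the model; out come scores (logits)
# for every token in the vocabulary that could come next.
logits = model(**inputs).logits[0, -1]
probs = logits.softmax(dim=-1)

# The model's top five guesses for the next token. That's the whole job.
top_probs, top_ids = probs.topk(5)
for p, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id)!r}: {p.item():.1%}")
```

Step 5 is just a loop: append the winning token to the prompt and ask again, over and over, until you have paragraphs.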
It’s predictive text on rocket fuel.
You know when you’re texting someone on your phone or, say, MS Teams, and those ‘suggested replies’ pop up? It’s like that, except GPT can keep that coherence rolling for paragraphs, sometimes pages.
Importantly though, there’s no comprehension. No internal fact-check. No reasoning.
Just cold, brilliant pattern matching.
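And if ‘pattern matching’ still sounds abstract, here’s the whole trick shrunk down to a toy of my own in Python: count which word follows which in a scrap of text, then always parrot the most frequent follower. No transformer, no attention, but the same job description.

```python
from collections import Counter, defaultdict

# A tiny "training corpus": the only thing this model will ever know.
corpus = ("socks disappear in the dryer because the dryer drum "
          "hides the socks and the dryer never returns the socks").split()

# "Training": count which word follows which. That's it.
followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    """Return the most frequent follower. No facts, no logic, no truth."""
    options = followers.get(word)
    return options.most_common(1)[0][0] if options else None

# "Generation": keep predicting the next word until we run dry.
word, sentence = "the", ["the"]
for _ in range(8):
    word = predict_next(word)
    if word is None:
        break
    sentence.append(word)

# Prints fluent-looking nonsense, e.g. "the dryer because the dryer..."
print(" ".join(sentence))
```

Scale that counting up to billions of examples, swap the counter for a neural network that can weigh thousands of words of context, and you’ve got the “intelligence” I was arguing with.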
The Week I Caught It Lying
By my second week, cracks were showing.
At that point, I was using it daily: a 24/7 research assistant without the paycheck or the smelly lunches to go with it.
I asked it to write about a historical event I knew inside out.
It returned a beautiful essay, dripping with misplaced confidence, and stuffed with errors.
I pointed them out; it apologized politely. Then I realized something unsettling: the AI had been equally confident when wrong and when right.
Fluency has nothing to do with truth. The machine doesn’t know the difference, only what sounds probable.
So, I pushed it. I yelled at it. I told it how stupid it was, and that nice robots don’t gaslight people into thinking General Orville J. Thruston was instrumental at the Battle of the Little Bighorn.
I asked for academic citations: it invented realistic titles and authors.
I asked for famous quotes: it fabricated entire passages in statesmanlike prose.
Each fake was served with calm authority.
To the model, a believable lie and an accurate fact are both just probable word strings.
Pretty Words, Bad Facts
This is what makes it equally impressive and risky. It doesn’t break in obvious ways; it breaks subtly.
The Jerome Goddard Tick Research Case
An entomology professor at Mississippi State University asked ChatGPT about tick behavior and received a detailed citation to a 2019 study in Parasites & Vectors, attributed to known researchers including James Burtis at the CDC and Holly Gaff at Old Dominion University.
When contacted, both authors denied any knowledge of the paper; it was entirely fabricated.
The Economics Literature Study
Researchers systematically tested ChatGPT across economics topics and found false citation rates of over 30% for GPT-3.5 and over 20% for GPT-4, with fabricated references that combined real author names with non-existent titles and journals.
These aren’t flukes. They’re baked into how it works.
It’s not designed to know; it’s designed to sound like it knows.
What I Know Now (And What Most Still Don’t)
By late 2023 and into 2024, I’d learned these systems don’t get “smarter” in the human sense.
They get better at sounding smart. And that’s worse.
I watched new versions make subtler and slicker mistakes.
I watched people lean on them for medical advice, investment decisions, and legal arguments with the same blind trust I once had.
I caught myself doing it, too.
This fluency hypnotizes you because well-structured text feels like authority.
It’s not.
We shouldn’t be worrying about AI becoming sentient; we should be worrying about humans accepting good-sounding nonsense without question.
Fake Confidence, Real Consequences
We’ve integrated AI into search engines, tutoring apps, and decision systems.
But what happens when Google stops linking to sources and just generates answers with the same confident tone, true or false?
Students are using it for essays. The citations are fake, the arguments plausible.
Doctors warn: do not trust AI chatbots for medical advice. People do it anyway.
We didn’t build artificial intelligence; we built artificial confidence, which is more dangerous than honest stupidity.
Your Brain Is the Only Fact-Checker Left
The first rule of GPT Fight Club: verify everything, trust nothing.
Not because it’s malicious, but because it literally can’t know when it’s wrong.
These tools are brilliant for brainstorming and drafting, but they’re terrible as trusted authorities.
Ironically, our rush to automate “understanding” has made real human judgment more essential, not less.
Every AI response demands a human check. Every confident claim needs a sanity test.
We built a fluent mimic. Now we have to remember imitation isn’t always the sincerest form of flattery.
We’re the Parrots
So, what does this say about us humans?
The ones typing ridiculous gibberish into the robots?
Maybe it exposes how much human “knowledge” is also just polished repetition.
How often do we talk with certainty about things we don’t truly grasp?
Maybe real understanding, the messy, slow, contextual kind, is more precious than we realized.
Either way, every time I close my laptop, I remind myself: the only true intelligence in this exchange is still mine.
I’ll even concede we’re using ‘intelligence’ here loosely.
The AI can copy my words. It can’t replicate my judgment.
That responsibility, to think, verify, and doubt, is stubbornly and beautifully human.
Hold on to that. It’s the last thing these robots can’t fake.