Small talk. Ugh.
I hate it. I’m no good at it. I don’t want to be good at it. I’d rather not talk to people at all, but that’s another article for another time.
And it’s even worse with someone you see just rarely enough that, every single time, they ask you all the same questions they asked the last time.
So, when I told a neighbor for the third time what I did for a living, I decided right then and there that I was going to drown them in techno-babble so they never ask again.
But…what could that be? It had to be something everyone has heard of, but also 12 miles deep, so I could baffle them with bullshit.
This neighbor, the guy who still thinks “the cloud” is just weather, was in for an earful.
As I blasted him with a stream of consciousness about AI, particularly the part about embeddings, I had a horrifying realization.
I sounded like every tech bro who’s ever mansplained machine learning at a coffee shop.
The Silent Stalker of Your Digital Life
Here’s what nobody tells you about embeddings: they’ve been quietly running your digital life for years, and you had no idea.
Every time Spotify somehow knows you need that specific sad song?
Embeddings.
When Amazon suggests the exact weird kitchen gadget you didn’t know you wanted?
Embeddings.
That eerily accurate “For You” page that feels like it’s reading your mind?
You guessed it, embeddings.
But what in the sentient cyborg actually is an embedding?
Let’s use my disdain, er, neighborhood block party as our example.
Instead of names, everyone’s wearing name tags with numbers.
The party host (let’s call him “Algorithm”) has this weird system where people with similar vibes get similar numbers. All the indie rock fans cluster around the 847s. The true crime podcast obsessives hang out in the 1,200s. The people who unironically love pineapple on pizza are banished to the 2,000s.
I’ll still die on pineapple hill. Don’t at me.
That’s basically what embeddings do, except instead of party guests, it’s every word, image, song, and random thought on the internet. And instead of one number, each thing gets a whole list of numbers (sometimes hundreds of them) that somehow capture its essence.
Similar things end up with similar numbers.
Different things get very different numbers. It’s like the universe’s most elaborate filing system, except it actually works.
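If you want the party trick without the party, here’s a minimal sketch in Python. The vectors are made up for illustration (real embeddings have hundreds of dimensions, not three), but the geometry is the same: similar vibes, similar numbers, small angle between them.

```python
# A made-up, three-dimensional version of the name-tag idea.
# Real embeddings have hundreds of dimensions; the math is identical.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: closer to 1.0 means 'same vibe'."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

indie_rock_fan  = np.array([0.9, 0.1, 0.2])
shoegaze_fan    = np.array([0.8, 0.2, 0.1])  # similar vibe, similar numbers
pineapple_pizza = np.array([0.1, 0.9, 0.7])  # banished to the 2,000s

print(cosine_similarity(indie_rock_fan, shoegaze_fan))    # ~0.99, same corner of the party
print(cosine_similarity(indie_rock_fan, pineapple_pizza)) # ~0.30, different planet
```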
How the Machines Learned to Understand Us
I remember the first time I saw an embedding space visualization. It looked like a star map, but weirder.
All the days of the week were clustered together in one corner, Monday, Tuesday, Wednesday, huddled like they were planning something.
Of course, Monday would be planning something, but I expected more from hump day.
Anyway, countries grouped themselves by continent, and colors sat next to similar colors. It was like watching the internet organize itself without anyone asking it to.
The cool, albeit unsettling, part was that the computer had never been told that “king” and “queen” were related concepts. It figured that out by reading millions of sentences and noticing the two words appeared in similar contexts.
Same with “dog” and “puppy,” “Paris” and “France,” “coffee” and “morning.”
These models, Word2Vec, BERT, whatever acronym sounds like a rejected Star Wars character, get fed ungodly amounts of text.
They start making connections. They begin to understand that “big” and “large” mean similar things. That “cat” and “feline” are cousins. That “Monday” and “depression” are…well, let’s not go there.
The result is a map of meaning that nobody designed but somehow makes sense.
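You can poke at this map yourself. Here’s a rough sketch using gensim and its downloadable GloVe vectors; the specific model name and the exact neighbors it returns are assumptions on my part, since different vectors give slightly different maps.

```python
# A sketch of the map of meaning, assuming gensim is installed and can
# download the "glove-wiki-gigaword-50" word vectors.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# Nobody told these vectors that king and queen are related.
# They learned it from context: king - man + woman lands near... ?
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# The cousins from above tend to sit close together too.
print(vectors.similarity("dog", "puppy"))    # relatively high
print(vectors.similarity("dog", "tuesday"))  # much lower
```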
You Got Played by Math
Remember when Google search was just keyword matching?
You’d type “big cat” and get results about fat house cats and construction equipment.
Now you type “big cat” and get lions, tigers, and National Geographic articles about apex predators.
That’s not Google getting smarter, that’s embeddings connecting the dots between your search and what you actually meant.
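This is the difference between matching strings and matching meaning. Here’s a rough sketch of the idea, assuming the sentence-transformers package and the all-MiniLM-L6-v2 model (my assumptions, not whatever Google actually runs):

```python
# Searching by meaning instead of keywords, using sentence embeddings.
# The model and documents are illustrative; Google's internals are not public.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Lions and tigers are apex predators of the savanna.",
    "My house cat gained three pounds over the winter.",
    "Caterpillar sells heavy construction equipment worldwide.",
]
query = "big cat"

doc_vectors = model.encode(docs, convert_to_tensor=True)
query_vector = model.encode(query, convert_to_tensor=True)

# Rank documents by how close their vectors sit to the query's vector.
scores = util.cos_sim(query_vector, doc_vectors)[0].tolist()
for score, doc in sorted(zip(scores, docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```

A keyword matcher has no particular reason to prefer the lions over the bulldozers. The embedding model does.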
But here’s where it gets interesting (and slightly dystopian).
Last month, I was mindlessly scrolling through my Netflix recommendations when I noticed something weird.
The algorithm was suggesting documentaries about urban planning. Not my usual true crime binge or Marvel rewatch, actual documentaries about city design.
Turns out, I’d been watching videos about Japanese trains.
The embedding space had connected “trains” with “urban planning” with “infrastructure,” and suddenly Netflix thought I was the kind of person who gets excited about zoning laws.
The algorithm wasn’t wrong.
I did end up watching three hours of content about walkable cities. But it felt like being psychoanalyzed by a robot that knew me better than I knew myself.
So Many Vectors, So Little Free Will
Your morning routine is probably being orchestrated by embeddings, and you don’t even realize it.
You wake up and check your phone. The news app shows you articles similar to ones you’ve read before, not because a human editor chose them, but because your reading history got embedded into a vector, and the algorithm is serving you content with nearby vectors.
You open Spotify.
That “Discover Weekly” playlist isn’t random, it’s songs that sit near your favorite tracks in embedding space. The algorithm found your musical coordinates and is recommending songs from the same neighborhood.
You check Instagram.
Every post in your feed was chosen because it embedded close to your engagement patterns. The algorithm knows you’re more likely to scroll past workout videos but will stop for food photos and cat memes.
You browse Amazon.
Those product recommendations aren’t just “people who bought this also bought that,” they’re items whose vectors sit close to each other in embedding space, based on descriptions, reviews, and purchasing patterns.
By lunch, you’ve been guided by invisible mathematical maps of meaning more than you’ve made conscious choices.
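Under the hood, none of this is mystical. It’s roughly this, with invented numbers standing in for the real (and much bigger) vectors:

```python
# A toy version of the morning-routine machinery: you are a point,
# items are points, and a "recommendation" is whatever sits nearby.
# Every vector here is invented for illustration.
import numpy as np

items = {
    "sad indie song":         np.array([0.9, 0.1, 0.0]),
    "upbeat gym playlist":    np.array([0.1, 0.9, 0.1]),
    "rainy-day acoustic set": np.array([0.8, 0.2, 0.1]),
    "true crime podcast":     np.array([0.2, 0.1, 0.9]),
}

# "You" are the average of the things you've already engaged with.
you = np.mean([items["sad indie song"], items["true crime podcast"]], axis=0)

def distance(a, b):
    return float(np.linalg.norm(a - b))  # plain Euclidean distance

# Serve whatever sits closest to your point on the map.
for name, vec in sorted(items.items(), key=lambda kv: distance(you, kv[1])):
    print(f"{distance(you, vec):.2f}  {name}")
```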
What Happens When Nerds Embed Everything?
The strangest part about embeddings is how they reveal hidden connections nobody knew existed.
Researchers have embedded everything from genes to grocery store purchases.
They embedded the entire works of Shakespeare and found that character relationships in the plays matched the mathematical relationships in the embedding space.
They embedded scientific papers and discovered research connections that human experts had missed.
Someone embedded recipes and found that cuisines cluster together in predictable ways, Italian dishes near Mediterranean ones, Asian flavors in their own region. But they also found weird outliers: certain Indian dishes that mathematically resembled Mexican food, probably because they shared spice profiles.
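That kind of result falls out of ordinary clustering. A sketch of the idea with invented recipe vectors (a real version would embed the actual ingredient lists or descriptions first):

```python
# Clustering recipes in a made-up embedding space, assuming scikit-learn.
# The vectors are invented; in practice you'd embed ingredients or descriptions.
import numpy as np
from sklearn.cluster import KMeans

recipes = ["margherita pizza", "spaghetti carbonara", "greek salad",
           "pad thai", "ramen", "chicken tikka masala", "beef tacos"]
vectors = np.array([
    [0.9, 0.1, 0.1],
    [0.8, 0.2, 0.1],
    [0.7, 0.3, 0.2],
    [0.1, 0.9, 0.2],
    [0.2, 0.8, 0.1],
    [0.3, 0.5, 0.8],
    [0.2, 0.4, 0.9],
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)
for recipe, label in zip(recipes, labels):
    print(f"cluster {label}: {recipe}")
```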
There’s a whole cottage industry of people embedding ridiculous things just to see what happens.
Someone embedded dating profiles and found that people with similar interests really do cluster together in mathematical space.
Another team embedded Reddit comments and could predict which subreddits someone frequented based on their writing style.
The universe, it turns out, is more organized than we thought. Or maybe we’re just getting better at finding the patterns that were always there.
Bias, Bubbles, and the Downside of Embeddings
But here’s where things get uncomfortable.
Those patterns embeddings find?
They’re not always the ones we want them to find.
Word embeddings trained on news articles learned that “man” relates to “computer programmer” the same way “woman” relates to “homemaker.”
Not because that’s true, but because that’s what the training data reflected. The algorithm learned our biases and encoded them into mathematics.
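You can watch the bias fall out of the same analogy arithmetic from earlier. A sketch, again assuming gensim’s downloadable GloVe vectors; exactly what comes back depends on the text the vectors were trained on.

```python
# The kind of probe researchers used: the same analogy trick, pointed at
# the uncomfortable part. Assumes gensim's "glove-wiki-gigaword-50" vectors;
# results vary with the training corpus.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# "man is to programmer as woman is to ...?"
print(vectors.most_similar(positive=["woman", "programmer"], negative=["man"], topn=5))
```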
When Microsoft’s Tay chatbot learned to be racist on Twitter in less than 24 hours, that wasn’t a bug, that was embeddings working exactly as designed.
The system learned from human conversations and embedded those patterns, including the ugly ones.
Your personalized feed isn’t just showing you things you like, it’s trapping you in a bubble of things similar to what you already know. The algorithm found your coordinates in embedding space and decided you should never leave that neighborhood.
I know someone who got so deep into YouTube’s recommendation algorithm that they went from watching cooking videos to conspiracy theories about the moon landing.
The embeddings found a path between “homemade bread” and “government lies,” and the algorithm helpfully guided them down that rabbit hole.
You Are Here (Points to Space Map)
The weirdest part of understanding embeddings is realizing how much of your digital experience is just…geometry.
Every recommendation is a distance calculation.
Every search result is about finding the closest point in mathematical space. Every ad you see is targeted based on how close your profile vector is to the vectors of people who bought similar things.
Your digital self isn’t really you, it’s a point on someone else’s map, surrounded by other points that represent your interests, your habits, your probable future purchases.
Sometimes I wonder if we’re all just walking around in a vast, invisible spreadsheet, and the algorithms are the only ones who can see the full picture. They know which direction you’re heading before you do.
But here’s the thing: once you know about embeddings, you can’t unsee them.
You start noticing when Netflix gets weirdly specific with its recommendations. You wonder why your Instagram feed suddenly shifted after you liked one post about houseplants. You realize that your “random” Spotify discoveries aren’t random at all.
You become aware of the invisible mathematical puppet strings guiding your digital life.
Now What Do We Do with This Doom?
I’m not saying embeddings are evil, they’re just a tool.
But they’re a tool that’s quietly reshaping how we discover music, find information, and connect with ideas.
Maybe that’s fine. Maybe having an algorithm that knows you like sad indie songs when it’s raining is actually helpful. Maybe getting book recommendations based on your reading patterns saves you time.
Or maybe we’re all slowly becoming more predictable, more mathematical, more…embedded.
The next time your phone suggests something that feels a little too perfect, remember: somewhere in a high-dimensional space, your digital self is just a point on a map, surrounded by similar points.
And the algorithm is watching, learning, and deciding what you should see next.
So, what do you do with all this?
You can’t opt out of embeddings. Not unless you’re planning to delete every app, cancel the internet, and go live in a cave where your only recommendation engine is moss.
But you can understand them. You can start noticing the strings.
When the algorithm serves you a perfectly tailored playlist or suggests a strangely accurate book, you’ll know: it’s not magic. It’s math. Math that mapped you based on every click and every scroll.
That might sound unsettling, but there’s power in awareness.
You’re still a point on the map, sure, but you’re not just drifting. Every action you take nudges your coordinates.
Watch weird videos. Click unexpected things. Train the algorithm back.
It won’t set you free, but it might help you stop being predictable.
Or at least make your Spotify a little weirder.
Which, honestly, is a win.