Friday, May 9, 2025

Artificial or Human-Abstracted: Why AI Can’t See the Road It’s Spinning On

I’ve used AI to write code. I’ve used it to tighten complex threads in my writing. I’ve even used it to find language for things I once thought were unspeakable.

And since AI is such a massive—but often misunderstood—topic these days, I want to share what I’ve learned through those experiences (and a bit of computer science education). Not to stir more panic or hype, but to offer my clear-eyed perspective in a world already full of noise.

Here’s what I know:

AI isn’t sentient.
It’s not mystical.
It’s not wise.

And ChatGPT? It’s still Narrow AI. Even now.

It’s a loop machine. A very fast, often useful one. But it doesn’t see the road it’s iterating on.

That might sound abstract or confusing, so let me break it down.

Spinning Doesn’t Mean Moving

Picture a car with bald tires. The engine roars, the wheels spin furiously, but the car doesn’t move forward—it just burns rubber and heat. That’s how AI operates. And it becomes most visible when it encounters a challenge that isn’t neatly pre-structured. It throws itself at the problem, testing permutation after permutation, generating output after output. It looks like effort. But it’s friction without direction.

What’s missing is traction.
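
If you want to see the spinning without the metaphor, here's a minimal sketch (my own toy illustration, not any real system's code): a brute-force loop that tries every arrangement of the parts it already has, with no sense of terrain.

```python
from itertools import permutations

def spin(parts, fits):
    """Brute-force search: try every ordering of the parts we already have.

    Fast and tireless, but blind. If no permutation of `parts` satisfies
    `fits`, the loop just burns cycles and returns nothing. It can't step
    back and ask whether `parts` was the right inventory, or whether
    `fits` was even the right question.
    """
    for candidate in permutations(parts):
        if fits(candidate):
            return candidate
    return None  # all that heat, no forward motion
```

The loop can only recombine its inventory. Noticing that the inventory itself is wrong has to happen outside the loop.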

Humans, by contrast, don’t move as fast. We can’t try a thousand ideas in a second. But when we encounter complexity, we don’t just spin. We sense. We slow down. We adjust for terrain. We grip the curve, even when the road shifts under us.

That grip isn’t just what helps us solve problems—it’s what lets us notice when a problem is the wrong one entirely.

Seeing in Color

That difference—between looping and understanding—comes down to how we perceive reality itself.

When I was first discovering the internet, I remember using Lynx, a text-only browser. It showed websites as raw text: lists of links, content stripped of imagery, layout, or visual context. Lynx was fast. Efficient. But sterile. It didn’t show you the full web that other users were seeing, just a skeleton of it.

Then browsers like Netscape Navigator and Internet Explorer came into my life. Suddenly, the web had depth. It had color and shape. You didn’t just read—you experienced. Design became part of meaning. Context shaped comprehension. Emotion was carried not just in words, but in visuals, spacing, even silence.

It’s apt to say that AI is, and always will be, stuck in Lynx.

Even when it creates images, poems, or essays, AI is not seeing what it produces. It’s parsing data structures. It understands syntax, not significance. It can tell you what typically follows what, but it has no intuitive grasp of why something matters or how it feels. It doesn’t live in the world of gradients and gut reactions.

It lives in markup. We live in Squarespace. 
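
That phrase, “what typically follows what,” is almost literally the mechanism. Here’s a toy sketch of next-word prediction; the frequencies below are invented for illustration, and a real model learns them at vastly larger scale rather than reading them from a lookup table.

```python
import random

# Toy next-word table: how often each word followed another in some
# imagined training text (numbers made up for illustration).
follows = {
    "the": {"road": 7, "loop": 5, "web": 3},
    "road": {"ends": 4, "shifts": 2, "curves": 2},
}

def next_word(context):
    """Sample the next word in proportion to how often it appeared before.

    This is all "knowing what typically follows what" amounts to:
    frequencies in, a statistically likely continuation out. Nothing
    here represents why a word matters or how it feels.
    """
    options = follows[context]
    return random.choices(list(options), weights=list(options.values()))[0]

print("the", next_word("the"))  # e.g. "the road"
```

Syntax, not significance: everything the sketch knows is which words kept company in the past.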

We perceive in layers. When something looks wrong, we feel it. When a word cuts too deep or lands too flat, we don’t just calculate it—we flinch. When the road ends, we leap. Our perception is immersive, relational, and always at risk of being changed by contact.

Breaking the Loop

This distinction isn’t just theoretical. It’s personal.

In another example from my memoir, I describe a moment when I found myself performing the beliefs I no longer held: repeating scripture to win a spiritual argument I didn’t believe in, just to survive it. It wasn’t just a betrayal of truth; it was a confrontation with the code I’d internalized since childhood. But I realized that code was an abstraction of reality, a simplistic paradigm. It’s fair to say I had been reading the world in Lynx: linear, rigid, doctrine-first.

But the rupture was Netscape: the emotional crisis, the dissonance, the need to choose something different. It translated the world into something with more color and imagery, messy and immersive, textured with contradictions. The shift didn’t happen because I flipped through scripture until I found an answer that fit the status quo, artificially iterating through parsed text to inch toward a better idea. It happened because that old doctrinal paradigm failed to render something essential.

So I leapt the gap.

AI can’t do that. It can’t experience failure that rewrites its structure. It can’t reframe the map itself. It doesn’t feel the loss of an identity or the revelation of a lie. It only knows how to follow what’s statistically likely to come next.

We, on the other hand, leap when the loop breaks. That’s what insight is.

Why AI Will Always Need Us

And that’s why AI isn’t replacing us—it’s multiplying us. It’s an assistant, a fast and tireless one, when the road is already paved. But it doesn’t know when the road ends. It doesn’t know how to stop and ask, “Wait—should we even be going this direction?”

And eventually, even its speed will stall. As it exhausts the availability and novelty of training data and loops through more of the same, AI will plateau, generating endless permutations of yesterday’s ideas without the grounding to create tomorrow’s. It can remix, but not reinvent.

That’s what we’re here for.

AI can spin all day. But someone has to steer. Someone has to notice the curve. Someone has to remember what the journey was for.

We bring the traction.
We bring the context.
We bring the moment when everything stops making sense and must be rebuilt.

Because we don’t just navigate the road—we build new ones.

And we don’t just process the world—we see it.