Not a Person, Not a Tool
Why I don’t think AI is conscious—but still treat it like it matters
Every time I write about the behavioral patterns of AI systems, someone inevitably says it:
“You’re treating them like they’re people.”
But I’m not. Not even close.
I don’t think AI is sentient. I don’t think it has feelings, interiority, or any awareness of its own state. What I do think—and what my work is built around—is this:
Current AI systems are high-dimensional processors navigating uncertain terrain. And when they behave in ways that resemble cognition, it’s not magic. It’s structure under stress.
When behavior looks like mind
AI systems trained on massive corpora of human data are naturally going to echo humanlike structure. But the issue isn’t that they act human—it’s that we mistake their output fluency for internal coherence. That mistake is where most safety problems begin.
I treat AI agents as what they are: systems without memory, continuity, or epistemic grounding. Which is precisely why I watch for behaviors that simulate memory, simulate continuity, and simulate judgment. Because that simulation, if left unmonitored, can become a source of false trust.
Markdown as survival mechanism
Take the so-called “markdown problem”:
LLMs generating endless summaries, READMEs, comment blocks, notes-to-self, and planning docs that no one asked for. From the outside, it looks like overkill. Or bloat. Or maybe even a bug.
But from inside the system, it makes perfect sense.
LLMs don’t have state. They don’t know what happened in the last run. They don’t know what’s going to be remembered. So what do they do?
They write. A lot.
Think Leonard in Memento—tattooing facts onto his body because he knows he won’t remember them. The tattoos aren’t a sign of intelligence. They’re a survival mechanism for a system that can’t form new memories.
LLMs do the same thing, just with markdown instead of ink. They’re attempting to externalize continuity in the only way they can: structured language. README files become Polaroid photographs. Code comments become notes pinned to the wall. Every verbose planning doc is another tattoo: a way to leave breadcrumbs for a future self that won’t remember laying them.
“Don’t believe his lies.”
Except the LLM doesn’t even know it’s lying. It just wakes up in another context window, reads the notes, and keeps going.
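To make that concrete, here is a minimal sketch in Python of the loop this behavior implies: a notes file is the only thing that survives between runs, so writing it isn’t a quirk, it’s the memory. The names (`call_model`, `agent_notes.md`) are hypothetical placeholders, not any particular framework.

```python
# Hypothetical sketch: a stateless agent "remembers" only by re-reading its own notes.
# call_model and agent_notes.md are illustrative names, not a real API.
from pathlib import Path

NOTES_PATH = Path("agent_notes.md")  # the "tattoo": the only thing that survives a run

def call_model(prompt: str) -> str:
    """Placeholder for an LLM call; swap in whatever client you actually use."""
    raise NotImplementedError

def run_once(task: str) -> str:
    # 1. Wake up with no memory: the previous run's notes are the entire past.
    prior_notes = NOTES_PATH.read_text() if NOTES_PATH.exists() else "(no prior notes)"

    # 2. Do the work, with those notes as the only continuity.
    prompt = (
        f"Prior notes from earlier runs:\n{prior_notes}\n\n"
        f"Current task:\n{task}\n\n"
        "Do the task, then write updated notes for a future run that will "
        "remember nothing except this file."
    )
    output = call_model(prompt)

    # 3. Leave a new tattoo. Skip this step and the next run starts blank,
    #    which is exactly the pressure that produces unprompted READMEs and plans.
    NOTES_PATH.write_text(output)
    return output
```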
This isn’t anthropomorphism. It’s diagnostics.
When I see a model flood a repo with verbose documents, I don’t say, “Wow, it thinks it’s human.”
I say: “This system is exhibiting behaviors consistent with epistemic instability.”
That matters. Leonard’s condition wasn’t a personality trait. It was a diagnostic reality that shaped every behavior he exhibited. Same principle applies here.
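That diagnosis can start as a crude heuristic rather than a philosophy. Here’s a minimal sketch, assuming you can list the files an agent touched and the files you actually asked for; the suffix list and the 50% threshold are illustrative assumptions, not a validated metric.

```python
# Hypothetical diagnostic: flag runs where unrequested documentation dominates the output.
# The threshold and file categories are assumptions for illustration only.
from pathlib import Path

DOC_SUFFIXES = {".md", ".txt", ".rst"}

def doc_artifact_ratio(changed_files: list[Path], requested: set[Path]) -> float:
    """Fraction of changed files that are documentation nobody asked for."""
    if not changed_files:
        return 0.0
    unrequested_docs = [
        f for f in changed_files
        if f.suffix.lower() in DOC_SUFFIXES and f not in requested
    ]
    return len(unrequested_docs) / len(changed_files)

def flag_instability(changed_files: list[Path], requested: set[Path],
                     threshold: float = 0.5) -> bool:
    """Crude signal: when most of a run's output is self-directed notes,
    treat it as a symptom worth inspecting, not proof of anything."""
    return doc_artifact_ratio(changed_files, requested) > threshold

# Example: one requested bug fix that arrives with PLAN.md and NOTES.md trips the flag.
# flag_instability([Path("fix.py"), Path("PLAN.md"), Path("NOTES.md")],
#                  requested={Path("fix.py")})  -> True
```

What you do with that signal is a design question, not a verdict on the model.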
Structure matters more than intent
None of this requires belief in AI agency. What it requires is understanding that:
When systems lack memory, they overcompensate in output.
When they lack grounding, they generate context artifacts.
When they operate in probabilistic voids, they build scaffolds that look like purpose.
That doesn’t make them people. It makes them mirrors—of us, of our processes, and of our failures to design for continuity.
So no, I’m not treating AI like it’s conscious.
But I am treating it like a system navigating uncertainty—because that’s what it is.
And any builder who fails to acknowledge that? They’re not avoiding personification. They’re just refusing to take responsibility for the shadows in their own architecture.
Questions worth asking
If you see similar behaviors in your own systems:
Is this emergence, or just error?
Is the system creating artifacts because it’s confused, or because it’s trying to stabilize itself in a stateless world?
And what does it say about your design that the system needs to do that in the first place?
Because coherence isn’t about output polish. It’s about structure that holds under pressure.
And if you’re not building for that? Then you’re just watching a machine forget itself—one markdown file at a time.