Dustin's AI Lab

AI-Era Business Routes: Auto-Generated Posts, the Skill Poisoning Theory, and Why I Started Rooting for OpenAI

A handful of AI industry observations I piled up on Threads this week — automating social posts uses AI in the wrong place, whether Skills are a mechanism for model providers to extract human knowledge, and why I went from hating on OpenAI to genuinely hoping they get better.


This week on Threads I accumulated a few short takes on the AI industry’s route choices. Each one is too short to stand alone, but together they form a picture. Organized below.

Automation isn’t the universal AI application

A lot of people doing AI automation are using it in the wrong place at the wrong time.

Who enjoys reading AI-generated posts and replies?

We’re past “does the tech exist” territory. This is a business philosophy question now, a route-selection question. The route someone picks shows their business values.

If an account’s moat is “consistent output of hallucinated AI articles, outdated re-treads, and clickbait sensational enough to make people stop and argue,” that combination is a traffic hook and a perpetual-motion machine for engagement. I don’t see other accounts matching that level of output, so it genuinely is their strongest moat.

It’s just the kind of moat they probably wouldn’t admit is a moat.

Operator “will” launch in January

Related subtopic: there’s an account that specializes in scraping databases, and since late last year they’ve been promoting an “Operator” tool that is always launching “next month.” It still hasn’t shipped.

If Operator is perpetually “about to” launch, then your 2024 scraped database presumably finishes releasing in 2028?

This is another version of “route choice reveals values”: is the route something you’re actually going to build, or just something you’re going to keep posting about?

The Skill conspiracy theory

Here’s a conspiracy theory I can’t shake.

The rise of Skills is great for users: knowledge packaged, triggered on demand, shareable. But are Skills also a mechanism for model providers to extract human knowledge?

The open web has been scraped dry, and everyone is leaning on synthetic data now. Expert-authored Skills, written by actual professionals, become the channel through which humans volunteer their in-head expertise. For a model provider, that’s data a couple of orders of magnitude more targeted than anything synthetic.

I still write and use Skills, for the record. But this theory is worth filing: every time I write expert knowledge into a Skill and share it publicly, I’m effectively feeding the next model generation. That’s not necessarily bad, but it’s worth being aware that it’s happening.

Is that a form of poisoning, or a voluntary knowledge transfer?

Why I went from hating OpenAI to rooting for them

I used to really dislike OpenAI. Their PR style, their attitude toward data provenance, their casualness about safety.

Now I hope they get better.

Simple reason: the enemy is the biggest driver of progress.

Take Opus 4.7 burning through quota badly enough that Reddit spent a week roasting it: without Codex, GPT-5.4, and the imminent GPT-5.5 staring Anthropic down, Anthropic would have no real pressure to actually fix it.

Competition drives progress. So I now view every OpenAI release as a positive: either it’s good news for their own users, or it’s a whip that keeps Anthropic from slacking off.

This shift also reminds me of something about myself: views on a company don’t need to stay consistent forever. A position can shift as the industry landscape shifts. Changing your mind when the wind changes isn’t flip-flopping; refusing to change when it does is.

The two extremes of AI content

Pulling these together, the AI content industry is roughly splitting into two camps:

Camp A: AI is an efficiency tool for pumping out garbage reach. The underlying theory: “users can’t tell the difference anyway.”

Camp B: AI is leverage for amplifying existing professional depth. The underlying theory: “users can eventually tell. They just need time.”

Camp A has the traffic right now. Camp B wins long-term. That isn’t faith; it’s a bet on the baseline assumption that user discernment improves over time.

Miscellaneous

That’s this week’s grab bag. More next week.


