I’ve been reading a lot of AI-related pieces lately. Some of the ideas stuck with me, so here’s what stood out from early February.
AI Doesn’t Reduce Work—It Intensifies It
This one nails something I’ve felt firsthand. AI tools make you more productive on paper, but you end up juggling more projects simultaneously. Cognitive overload goes through the roof. You feel productive, but you’re actually more exhausted. I’ve noticed this with Claude Code—I get more done, but my brain is more fried by end of day.
Tom Dale on Mental Health in Tech
Tom Dale talks about the mental health crisis among software engineers. The rapid pace of AI-driven change is messing with people’s heads, from job anxiety to full-on existential dread. Related to what I wrote about AI anxiety before, but he comes at it from the mental health angle more directly.
Mitchell Hashimoto’s AI Adoption Journey
HashiCorp founder Mitchell Hashimoto shares practical strategies for adopting AI coding agents. Stuff like “reproduce your own work” and “end-of-day agents” that actually make a difference. This isn’t some hand-wavy AI vision piece—it’s from someone who’s actually doing it every day.
StrongDM’s Zero-Review Shipping Experiment
StrongDM’s AI team ships code written entirely by coding agents with zero human code review. They rely on scenario-based testing and digital twin clones of external services instead. Sounds insane, but they’re actually running it in production. Token costs and long-term sustainability are big question marks though.
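To make the digital-twin idea concrete, here’s a minimal sketch in Python. The service, its rules, and all names are hypothetical illustrations, not StrongDM’s actual setup: scenario tests exercise agent-written code against an in-process clone of an external service instead of the live thing.

```python
# Hypothetical "digital twin": an in-process stand-in for an external
# payments API, so scenario tests never touch the real service.

class PaymentsTwin:
    """Mimics the external service's observable behavior and rules."""

    def __init__(self):
        self.charges = []

    def charge(self, account: str, cents: int) -> dict:
        # Reproduce the real service's validation behavior.
        if cents <= 0:
            return {"ok": False, "error": "amount_must_be_positive"}
        self.charges.append((account, cents))
        return {"ok": True, "charge_id": len(self.charges)}


def checkout_flow(service, account: str, cents: int) -> bool:
    """Stand-in for agent-written code under test."""
    result = service.charge(account, cents)
    return result["ok"]


# Scenario test: run the code against the twin, assert on outcomes.
twin = PaymentsTwin()
assert checkout_flow(twin, "acct_1", 500) is True
assert checkout_flow(twin, "acct_1", -10) is False
```

The point of the pattern is that correctness is defined by scenarios passing against a high-fidelity clone, not by a human reading the diff.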
Thomas Ptacek on LLM Vulnerabilities
Security researcher Thomas Ptacek says LLMs are genuinely good at vulnerability discovery—not just marketing hype. He thinks vuln research is particularly well-suited to what LLMs can do. Makes me think AI’s real strength isn’t writing whole apps, but tasks that need massive pattern matching.
The New York Times’ Internal Podcast-Tracking Tool
The New York Times built an internal AI tool that auto-transcribes and summarizes podcasts to help journalists track public opinion shifts. Media companies using AI for intelligence gathering rather than content generation. This kind of application makes way more sense to me.
OpenAI Skills in the API
OpenAI’s Skills feature now works directly in the API—you send them as base64-encoded zip files inline with JSON requests. Everyone’s racing to standardize the agent toolchain, and the approaches are diverging fast.
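The inline mechanism itself is just standard encoding. A stdlib-only sketch of the idea—note the request shape and the `skills`/`data` field names are my assumptions for illustration, not OpenAI’s actual schema:

```python
import base64
import io
import json
import zipfile

# Build a skill bundle in memory: one markdown file, zipped.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("SKILL.md", "# My Skill\nDo the thing.")

# Base64-encode the zip so it can travel inline in a JSON body.
# Field names below are illustrative, not the real API schema.
payload = json.dumps({
    "model": "example-model",
    "skills": [{"data": base64.b64encode(buf.getvalue()).decode("ascii")}],
})

# The receiver can decode the JSON and read the bundle back out.
decoded = base64.b64decode(json.loads(payload)["skills"][0]["data"])
with zipfile.ZipFile(io.BytesIO(decoded)) as zf:
    assert zf.read("SKILL.md").startswith(b"# My Skill")
```

Base64 inflates the payload by roughly a third, which is the usual trade-off for shipping binary blobs inside JSON.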
Tools That Let Agents Show Their Work
Simon Willison built two tools that let coding agents show off their work—generating interactive docs and automated browser tests. Solves a very real problem: after an agent writes code, how do you actually know it works?
LLM Reasoning Continues to be Deeply Flawed
Gary Marcus, citing a Caltech-Stanford review, argues LLM reasoning is still fundamentally broken. He’s not wrong, but read Gary Marcus long enough and you realize he’s always saying the same thing. Still worth reading as a counterbalance to the hype.
Programming Languages for the Agent Era
Armin Ronacher, the Flask creator, argues the AI agent era will spawn new programming languages because the economics of code have changed. It’s no longer about optimizing for human keystrokes—it’s about making agents effective programmers. Explicit semantics, easy local reasoning. Basically designing languages for machines to read, not humans to write. Interesting framing.
Orchestrating Coding Bots
Anil Dash argues that orchestrating multiple coding bots with AI is a genuine breakthrough. Developers become strategic directors instead of code writers. He’s a bit too optimistic for my taste, but the distinction matters: “codeless” isn’t “no-code”—it’s “AI writes the code for you.”
Humanity’s Last Programming Language
Xe Iaso half-jokingly proposes that Markdown is humanity’s last programming language—natural language replaces syntax, markdown files become executables. The piece has a dark humor to it, but the underlying worry is real: what happens when human programmers become optional?
Self-improving CLAUDE.md Files
Let your AI agent analyze chat logs and auto-improve your CLAUDE.md file. Skip the manual updates, just throw a prompt at it. I do something similar to maintain my own CLAUDE.md—genuinely saves a lot of time.
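The core of the trick is just assembling one prompt from the current file plus recent logs and handing it to your agent. A minimal sketch—the prompt wording and function name are my own, not from the post:

```python
def build_improvement_prompt(claude_md: str, logs: list[str]) -> str:
    """Assemble a prompt asking an agent to revise CLAUDE.md based on
    recent chat logs. Wording here is illustrative, not canonical."""
    joined = "\n---\n".join(logs)
    return (
        "Here is the current CLAUDE.md:\n\n"
        f"{claude_md}\n\n"
        "Here are recent session logs:\n\n"
        f"{joined}\n\n"
        "Identify recurring corrections or preferences in the logs and "
        "propose an updated CLAUDE.md that encodes them."
    )

# Usage: read CLAUDE.md and your session logs from disk, then pass the
# resulting prompt to whatever agent you use.
prompt = build_improvement_prompt("# Project notes", ["log one", "log two"])
```

Run it on a schedule (or at the end of each session) and the file keeps pace with how you actually work.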
My Non-Programmer Friends Built Apps
The author’s non-programmer friends used AI no-code tools to build apps. The demos looked great, but once they hit backend, databases, security, and maintenance costs, every single project was abandoned. Matches my observation exactly. AI can scaffold a frontend, but system architecture problems don’t just go away.
AI Assistance vs. Skill Mastery
Jim Nielsen roasts an Anthropic study that found AI assistance reduces coding skill mastery. His take: no kidding. His real worry is that organizational pressure will push everyone toward speed over skill growth. It’s like learning to drive with automatic transmission—convenient, but do you really know how to drive?
The Pitch Deck Is Dead. Write a pitch.md Instead
Joan Westenberg says founders should ditch slide decks for a plain markdown file when pitching. Writing forces you to think clearly in ways slides don’t. Plus markdown is machine-readable, which fits modern VC evaluation workflows. Makes sense—no matter how pretty your slides are, investors only look at the numbers anyway.