Story · 3 min read · 2026-03-26

The Best Story From 924+ Hours of AI Coding

User asked Claude to build a dev blog from their Claude Code session data — then realized it was about to publish all their raw transcripts for the world to see

338 sessions analyzed · 924+ hours · Real patterns
Key Takeaway
The interruption instinct is learned. Don't wait patiently — interrupt within 2 minutes if you don't see code being written.


During a session building a terminal-aesthetic dev blog site, the user had a sudden "oh no" moment: the implementation was about to expose their raw session transcripts, which could leak sensitive data. The project pivoted mid-session from "cool automated blog" to "wait, we need to actually curate this" — a classic case of building the thing before thinking about what the thing would actually show.

Why This Is Actually Revealing

This isn't just a funny anecdote. It captures the central tension of working with AI coding tools at scale: the same capability that makes them incredibly productive also makes them incredibly frustrating.

Claude can implement a complete payment integration across server and client code in a single session. It can also burn 8 minutes reading files it's already read, writing a plan nobody asked for, without producing a single line of code. Same model, same session, sometimes minutes apart.

The Patterns That Emerge After 338 Sessions

When you use Claude Code intensely across 5 projects for 924+ hours, the patterns — both productive and frustrating — become impossible to ignore:

  • Productivity follows a power law. About 20% of my sessions produce 80% of the shipped code. The best sessions are 10x more productive than average. The worst sessions produce negative value (bugs I have to fix later).

  • Context is everything. Claude performs dramatically better when it has: a clear goal, specific file paths, known constraints, and a "just do it" instruction. It performs worst with vague requests, open-ended exploration tasks, and multi-objective sessions.

  • The interruption instinct is learned. I used to wait patiently while Claude explored. Now I interrupt within 2 minutes if I don't see code being written. This single behavioral change improved my success rate more than any prompting technique.
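The power-law claim above is easy to check against your own history. Here is a minimal sketch of the measurement: given lines-of-code-shipped per session, what share did the top 20% of sessions produce? The data below is synthetic and the idea of a flat per-session LOC count is an assumption — real Claude Code session logs are JSONL transcripts and would need their own parsing step first.

```python
def top_share(loc_per_session, top_fraction=0.2):
    """Fraction of total shipped code produced by the top sessions.

    loc_per_session: iterable of lines-of-code counts, one per session
    (hypothetical metric; substitute whatever 'output' measure you track).
    """
    counts = sorted(loc_per_session, reverse=True)
    k = max(1, int(len(counts) * top_fraction))  # size of the top slice
    total = sum(counts) or 1                     # avoid division by zero
    return sum(counts[:k]) / total

# Synthetic, heavy-tailed example data — not the author's real numbers.
sessions = [500, 400, 300, 50, 40, 30, 20, 10, 5, 5]
share = top_share(sessions)
print(f"Top 20% of sessions produced {share:.0%} of the code")
```

If your distribution really is power-law-ish, this number lands well above 20%; on the synthetic data above it is about two thirds.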

Lessons From 924+ Hours

  • Trust but verify — Let Claude run freely, but always validate against your build pipeline
  • Interrupt early — When Claude starts over-planning, cut it off and redirect
  • Stack tasks intentionally — Chaining implement → review → fix → deploy works great; stacking 5 unrelated tasks doesn't
  • Front-load constraints — Tell Claude what NOT to do upfront

The meta-insight: The best way to get better at using Claude Code is to use it more, document what happens, and share what you learn. Which is exactly what this site does.
