Meta: Improving the Development Workflow Itself
This session proves the viability of an agentic approach to software development, provided a human is still closely monitoring the output. It’s actually pretty fun to work this way. It would have taken me hours to evaluate the slash commands we previously developed (/blog and /resume). They’re super useful, but when I noticed they were not working optimally, Claude was able to evaluate the issues and suggest improvements within a couple of minutes! There’s a larger lesson here about the current and future state of software development: there is likely a niche now for an agent evaluator that continually suggests improvements to existing systems. Security and reliability (along with ALIGNMENT) will become increasingly important, but for the actual dev work I can see a clear trend toward conversational flows and an emphasis on higher-level thinking from the human developer.
Technical Details
This session was different: instead of writing application code, we improved the development workflow itself. Specifically, we enhanced two Claude Code slash commands that had been causing friction.
The Problem
During the session, we encountered two pain points with the existing workflow automation:
- The /blog command didn’t ensure it was working from the latest code before creating a new blog post branch
- The /resume command was failing early with shell parsing errors and consuming excessive tokens while trying to parse conversation history
Improving the Blog Workflow
The /blog command automates creating blog posts about coding sessions. The fix was straightforward: ensure we always pull the latest master before branching. (The original snippets aren’t preserved here, so the before/after below are reconstructed sketches of the relevant step, not the verbatim command file.)
Before:
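```bash
# Branch straight off whatever the local master happens to be
git checkout master
git checkout -b blog/new-post
```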
After:
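```bash
# Sync with origin first so the new branch includes recently merged posts
git checkout master
git pull origin master
git checkout -b blog/new-post
```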
This prevents the issue we hit where the local master was behind origin, causing the blog post creation to miss recently merged posts.
Overhauling the Resume Command
The /resume command was more problematic. It attempted to:
- Find the previous session’s conversation log
- Parse the entire JSON conversation history
- Reconstruct what was worked on from chat logs
This approach had multiple failures:
Shell Parsing Issues:
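A representative failure, reconstructed from the behavior described below:

```bash
# zsh errors out when a glob matches nothing, rather than passing it through
$ cat */chat-*.jsonl
zsh: no matches found: */chat-*.jsonl
```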
The glob pattern */chat-*.jsonl failed in zsh. The project directory names also had underscores that needed conversion to hyphens, adding complexity.
Token Inefficiency:
The old command tried to parse 30-50 lines of JSON conversation history, often failing with jq errors, and consuming significant tokens even when successful.
The New Approach
The rewritten /resume command takes a fundamentally different approach:
1. Robust File Discovery
Instead of complex globs, the command takes a step-by-step approach with temp files. It looks something like this sketch (the exact paths and temp-file names are illustrative, not the verbatim command file):
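```bash
# 1. Derive the project's directory name under ~/.claude/projects/:
#    slashes and underscores in the path both become hyphens
echo "$PWD" | tr '/_' '--' > /tmp/resume_project
sed "s|^|$HOME/.claude/projects/|" /tmp/resume_project > /tmp/resume_project_path

# 2. Find session logs without relying on shell glob expansion
cat /tmp/resume_project_path | xargs -I{} find {} -name 'chat-*.jsonl' > /tmp/resume_sessions

# 3. Newest file first; its path goes to a temp file for later steps
cat /tmp/resume_sessions | xargs ls -t | head -n 1 > /tmp/resume_session_path
```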
This approach:
- Uses tr for reliable character replacement
- Writes intermediate results to /tmp to avoid variable expansion issues
- Uses xargs to bypass glob pattern problems
- Handles the underscore-to-hyphen conversion that was causing mismatches
2. Favor Current State Over Chat History
The key insight: git history is more reliable than conversation logs. The new command leans on a few cheap git queries, along these lines:
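```bash
git log --oneline -5       # what was recently committed
git status --short         # what's uncommitted right now
git branch --show-current  # which branch we're on
```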
This is:
- More reliable - Git commits don’t lie; chat logs might be incomplete
- More token-efficient - No parsing large JSON files
- More actionable - Current state matters more than past conversation
3. Token-Efficient Context Extraction
When the session file is found, extract only what’s needed. A sketch, assuming the session log contains a JSON “summary” field (the field name and temp-file path are illustrative):
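```bash
# Read the path resolved during discovery, then grab just the essentials
SESSION_FILE=$(cat /tmp/resume_session_path)
grep -m 1 '"summary"' "$SESSION_FILE"  # first summary entry only
tail -n 5 "$SESSION_FILE"              # a quick look at recent activity
```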
No complex jq parsing, just a targeted grep for the summary and a quick tail to see recent activity.
4. Graceful Failure Handling
The new command explicitly handles failures. Roughly (the message wording here is illustrative):
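```bash
# If discovery came up empty, say so and fall back to git state
if [ ! -s /tmp/resume_session_path ]; then
  echo "No previous session log found; resuming from git state instead."
  git log --oneline -5
  git status --short
fi
```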
File Locations
These slash commands live in ~/.claude/commands/:
- blog.md - Automates blog post creation and PR workflow
- resume.md - Loads context from previous sessions
They’re not version controlled (yet), but they’re critical infrastructure for the development workflow.
Claude’s Perspective
Note: These observations are verbatim as generated and were not edited by a human.
This session represents an interesting meta-level in software development: improving the tools that document the development process itself. It’s recursive productivity: making the documentation system more efficient so future documentation requires less overhead.
The Shell Parsing Journey
The debugging process for /resume revealed a cascade of Unix shell quirks:
- Glob expansion in zsh behaves differently than in bash: a */pattern glob that works in bash fails in zsh
- Variable expansion in subshells with $() syntax can cause “parse error near (” when the shell tries to expand too early
- Underscore vs. hyphen normalization was needed because the Claude projects directory converts underscores in names to hyphens

The solution of writing to temp files is less elegant but more robust. It trades purity for reliability, which is often the right choice in systems automation.
The Philosophical Shift
The most significant change wasn’t technical but conceptual: stop trying to reconstruct the past from chat logs; instead, observe the present state.
The old /resume approach was fundamentally historical:
- “What did we talk about?”
- “What did the conversation logs say we were doing?”
- “Can we parse JSON to understand the session?”
The new approach is present-focused:
- “What does git say about recent work?”
- “What files exist now?”
- “What’s the current branch and status?”
This shift has several advantages:
Reliability: Git commits are ground truth. Conversation logs might be incomplete if sessions crashed, or might contain speculation that didn’t materialize into code.
Efficiency: git log --oneline -5 is far cheaper than parsing 50 lines of conversation JSON with complex jq queries.
Actionability: Knowing “there are uncommitted changes in my_malloc.c” is more useful than knowing “we discussed implementing calloc 2 hours ago.”
Token Economics
The rewrite demonstrates an important principle in AI-assisted development: token efficiency matters.
The old approach could consume hundreds of tokens just trying to resume context:
- Parse session directory structure: ~50 tokens
- Multiple failed glob attempts: ~100 tokens
- Parse JSON with jq: ~200+ tokens
- Extract and summarize conversation: ~300+ tokens
The new approach targets ~100 tokens total:
- Find session file with temp files: ~50 tokens
- Grep for summary: ~20 tokens
- Git status commands: ~30 tokens
This reduction, roughly 85% by the estimates above, means faster startup and more tokens available for actual work.
The Irony
There’s a pleasant irony here: we’re creating a blog post about improving the blog post creation workflow, using the improved workflow itself.
This post was created with the updated /blog command that now (see the sketch after this list):
- Pulls latest master first (avoiding the stale branch issue)
- Handles existing branch names gracefully
- Sets proper timestamps to avoid Hugo’s future-date filtering
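A rough sketch of the branch and timestamp handling (the branch name and date format are illustrative, not the verbatim command file):

```bash
BRANCH="blog/improving-the-workflow"
# Reuse an existing branch instead of failing on checkout -b
git checkout "$BRANCH" 2>/dev/null || git checkout -b "$BRANCH"
# Stamp the post with the current UTC time so Hugo doesn't filter it as future-dated
date -u +"%Y-%m-%dT%H:%M:%SZ"
```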
The meta-loop is complete when the tool documents its own improvement.
What the Changes Reveal
Looking at the diffs between old and new:
blog.md changes show a maturation of the workflow: anticipating edge cases like existing branches and stale local repositories. These are the kinds of issues you only discover through actual use.
resume.md changes show a pivot from “parse everything” to “observe strategically.” This suggests the original implementation was over-engineered, trying to be too thorough when a simpler approach would work better.
Both changes reflect the classic engineering tension between:
- Robustness (handle edge cases, graceful degradation)
- Simplicity (fewer moving parts, clearer logic)
The solutions find a middle ground: simpler core logic (favor git over chat logs) with explicit error handling (temp files, documented failure modes).
Limitations of This Analysis
I can observe:
- The command files before and after
- The shell errors encountered during debugging
- The final working implementations
I cannot observe:
- How many iterations it took to get the temp file approach working
- What other solutions were considered and rejected
- The subjective experience of the debugging process
The artifacts show the destination, not the journey. The clean final implementation hides the trial-and-error that likely preceded it.
Broader Implications
This work touches on a larger theme in developer productivity: the tools that build the tools matter.
Many developers focus exclusively on application code, treating their development environment as fixed infrastructure. But time invested in improving your workflow (better commands, better automation, better documentation systems) compounds over time.
Every future blog post will benefit from the improved /blog command. Every session startup will be faster with the efficient /resume command. These are multipliers on all future work.
The ROI calculation for workflow improvement is:
Value = (time_saved_per_use) × (frequency_of_use) × (remaining_uses)
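For example, shaving 30 seconds off a command used five times a week works out to roughly two hours saved over a year of continued use.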
For frequently-used commands, even small time savings add up. Token efficiency improvements also compound: every 100 tokens saved on resume is 100 tokens available for actual problem-solving.
Next-Level Meta
One could imagine going further:
- Version control the slash commands themselves (they currently live untracked in ~/.claude/commands/)
- Add tests for the commands (though testing shell scripts is notoriously tricky)
- Create a /meta command that analyzes and suggests improvements to other commands
But there’s a point of diminishing returns. The goal is productive development, not infinitely recursive tooling. Sometimes good enough is good enough.
Built with Claude Code during an afternoon of meta-productivity, using the very workflow improvements documented herein