AI Programming Workflow
I’ve struggled to be really successful with AI coding assistants. These articles from Harper Reed and Geoffrey Huntley showed up at just the right time for me.
I’m going to focus on my tweaked workflow in this post, but I’ll follow up with Geoff’s tweaks to Cursor.
Harper makes a distinction between greenfield and non-greenfield projects, but I tried a hybrid approach and it seemed to work pretty well. So I’m sharing to help move things forward.
Overall Workflow
- Context: If it’s an existing codebase, provide the context. If it’s greenfield, skip this step.
- Idea: Start honing your idea with your LLM
- Plan: Work with the LLM to flesh out your execution plan
- Generate: Start feeding your prompts in to generate some code.
- Check off your todo
- Review
1. Provide Context
Cursor does an excellent job of understanding the context of your codebase and grepping through files. However, I was curious whether some of these steps could be done without it. TL;DR: I think this entire process could be run locally. Luckily there are some tools you can use to generate context and provide it to an LLM. I might try another version of this with locally running LLMs.
You can install Repomix and have it generate an LLM-friendly file for providing context. Just install and run:
# Using Homebrew (macOS/Linux)
brew install repomix
# Then run in any project directory
repomix
That’s it! Repomix will generate a repomix-output.txt file in your current directory, containing your entire repository in an AI-friendly format.
I used Claude for this entire process.
You can then send this file to an AI assistant with a prompt like:
This file contains all the files in the repository combined into one.
I want to refactor the code, so please review it first.
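If you’d rather script that than paste into a chat window, here’s a minimal sketch using the Anthropic Python SDK (the model alias is an assumption; check the current docs):

```python
# Minimal sketch: send the Repomix output to Claude for review.
# Assumes ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()

repo_context = open("repomix-output.txt").read()
prompt = (
    "This file contains all the files in the repository combined into one.\n"
    "I want to refactor the code, so please review it first.\n\n"
    + repo_context
)

message = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model alias
    max_tokens=4096,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```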
The rest of the steps apply whether it’s greenfield or an existing codebase.
2. Idea
Harper Reed uses a different technique for greenfield vs non-greenfield, but I used the same process on an existing codebase and it worked like a charm (having already generated and provided context to the LLM using Repomix).
Here’s Harper’s prompt.
Ask me one question at a time so we can develop a thorough, step-by-step spec for this idea. Each question should build on my previous answers, and our end goal is to have a detailed specification I can hand off to a developer. Let's do this iteratively and dig into every relevant detail. Remember, only one question at a time.
Here's the idea:
<IDEA>
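(If you wanted to run this loop outside a chat UI, a rough terminal sketch might look like this; again the Anthropic SDK and model alias are assumptions.)

```python
# Rough sketch of the one-question-at-a-time loop in a terminal.
# Replace <IDEA> with your idea; type "done" to end the session.
import anthropic

client = anthropic.Anthropic()

SYSTEM = (
    "Ask me one question at a time so we can develop a thorough, "
    "step-by-step spec for this idea. Each question should build on my "
    "previous answers. Only one question at a time."
)
history = [{"role": "user", "content": "Here's the idea: <IDEA>"}]

while True:
    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model alias
        max_tokens=1024,
        system=SYSTEM,
        messages=history,
    )
    question = reply.content[0].text
    print(f"\nLLM: {question}")
    answer = input("You: ")
    if answer.strip().lower() == "done":
        break
    history.append({"role": "assistant", "content": question})
    history.append({"role": "user", "content": answer})
```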
Once you’ve fully completed the idea generation you can output the spec. I did have to make sure the session didn’t drift off the path: in one instance I provided some CSS and this seemed to encourage the LLM to start generating code, so maybe avoid pasting code at this stage.
Once you’ve reached the conclusion of the idea generation, use this prompt:
Now that we've wrapped up the brainstorming process, can you compile our findings into a comprehensive, developer-ready specification? Include all relevant requirements, architecture choices, data handling details, error handling strategies, and a testing plan so a developer can immediately begin implementation.
I saved this output into a spec.md file in the repo. This file has multiple future uses.
3. Planning
Again, Harper used GPT to carry out this next step, but I found Claude to be perfectly fine.
So use the spec.md output and pass it to the LLM:
Draft a detailed, step-by-step blueprint for building this project. Then, once you have a solid plan, break it down into small, iterative chunks that build on each other. Look at these chunks and then go another round to break it into small steps. Review the results and make sure that the steps are small enough to be implemented safely with strong testing, but big enough to move the project forward. Iterate until you feel that the steps are right sized for this project. From here you should have the foundation to provide a series of prompts for a code-generation LLM that will implement each step in a test-driven manner. Prioritize best practices, incremental progress, and early testing, ensuring no big jumps in complexity at any stage. Make sure that each prompt builds on the previous prompts, and ends with wiring things together. There should be no hanging or orphaned code that isn't integrated into a previous step. Make sure and separate each prompt section. Use markdown. Each prompt should be tagged as text using code tags. The goal is to output prompts, but context, etc is important as well.
<SPEC>
The output should be saved as prompt_plan.md and again stored in the repo.
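This step can be scripted too. A sketch, assuming you’ve saved the blueprint prompt above into a planning_prompt.txt file (that filename is mine, not part of the workflow):

```python
# Sketch: combine the planning prompt with spec.md and save the plan.
import anthropic

client = anthropic.Anthropic()

planning_prompt = open("planning_prompt.txt").read()  # the blueprint prompt above
spec = open("spec.md").read()

reply = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model alias
    max_tokens=8192,
    messages=[{"role": "user", "content": f"{planning_prompt}\n\n{spec}"}],
)

with open("prompt_plan.md", "w") as f:
    f.write(reply.content[0].text)
```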
An optional step is to generate a todo.md which can be checked off in the next step. I didn’t strictly prompt the LLM to check its progress, as I’ve had problems where a perfectly good solution was rewritten by the LLM… just cos. You could probably force it not to make any changes but just check off the todo items 🧐.
Can you make a `todo.md` that I can use as a checklist? Be thorough.
4. Generate
I know I mentioned I didn’t want to pay for Cursor, and you could possibly follow this step using Aider 🧐, but I have a Cursor subscription through work, so… I fed the prompts in step by step into the Cursor composer.
In general this worked like a charm but I did need to prompt it to remove some existing code that was redundant.
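If you want to feed the prompts in from a script instead of by hand, here’s a rough sketch that pulls them back out of prompt_plan.md, assuming each prompt landed in a fenced text block as the planning prompt requests:

```python
# Sketch: extract the individual prompts from prompt_plan.md.
import re

FENCE = "`" * 3  # build the triple backticks so this snippet stays valid markdown

plan = open("prompt_plan.md").read()
pattern = re.compile(FENCE + r"text\n(.*?)" + FENCE, re.DOTALL)

for i, prompt in enumerate(pattern.findall(plan), start=1):
    print(f"--- Prompt {i} ---\n{prompt}")
```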
4.1 To Do
At this point it might be worth checking that everything’s been covered by getting the LLM to check off your todo file.
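I didn’t settle on exact wording for this, but a guardrail prompt along these lines should let it update the checklist without rewriting anything else:

Review the current state of the codebase against `todo.md` and check off any items that are now complete. Do not modify any files other than `todo.md`.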
4.2 Peer Review
This step is potentially optional, but I found it highlighted a couple of things that could be refactored and some UI improvements (I was adding tags and tag pages to this blog).
So as my final step I got a peer review.
Code Review
You are a senior developer. Your job is to do a thorough code review of this code. You should write it up and output markdown. Include line numbers, and contextual info. Your code review will be passed to another teammate, so be thorough. Think deeply before writing the code review. Review every part, and don't hallucinate.
Or my improved version:
You are a senior software developer with 10+ years of experience conducting code reviews. Review the following code and identify critical issues in these categories:
1. Bugs and Security Vulnerabilities
2. Architecture and Design Patterns
3. Performance Considerations
4. Code Style and Maintainability
5. Testing and Documentation
For each issue you identify:
- Provide a clear title suitable for a GitHub issue
- Assign a priority level (Critical/High/Medium/Low)
- Write a detailed description including:
* The specific location/files affected
* The problem and its potential impact
* A recommended solution with example code where applicable
* Any relevant references to best practices or documentation
Format each issue like this:
Title: [Clear, actionable title]
Priority: [Level]
Description: [Detailed explanation]
Location: [File/line references]
Proposed Solution: [Specific recommendations]
---
Review the code systematically and output the issues in order of priority. Focus on substantive issues that would meaningfully improve the codebase.
Issues Review
You are a senior developer. Your job is to review this code, and write out the top issues that you see with the code. It could be bugs, design choices, or code cleanliness issues. You should be specific, and be very good. Do Not Hallucinate. Think quietly to yourself, then act - write the issues. The issues will be given to a developer to execute on, so they should be in a format that is compatible with github issues
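Since both prompts ask for GitHub-compatible issues, you could even push the output straight into your repo. A sketch, assuming you saved the review output as review.md (my filename) and have the GitHub CLI installed and authenticated:

```python
# Sketch: turn the review output into GitHub issues via the gh CLI.
# Assumes the Title:/Priority: format from the improved prompt above,
# with issues separated by "---".
import re
import subprocess

review = open("review.md").read()  # assumed filename

for block in review.split("---"):
    match = re.search(r"Title:\s*(.+)", block)
    if not match:
        continue
    subprocess.run(
        ["gh", "issue", "create",
         "--title", match.group(1).strip(),
         "--body", block.strip()],
        check=True,
    )
```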