Documentation (as in "design doc", not "API reference") is the very first entry point: iterating on the problem statement, stakeholder requirements, business constraints, etc., until a coherent plan emerges, then organizing it at a high level. Combining this with "deep research" mode can yield fantastic results, as it draws on existing solutions and best practices across a vast body of knowledge.
The trick then is a sliding scope context window: with a high-level design doc in context, iterate to produce an architecture document. Once that is reviewed and hand-tuned, you can use it in turn to produce more detailed technical designs for the various components of the system. And so on down the scale of granularity, until you're working with code. The important part is to never try to hold the entire thing in scope; instead, balance context and granularity so that there's enough information to guide the LLM and enough room to grow the next tier of the solution. Work in a way that creates natural interfaces where artifacts can be decoupled. Piecemeal, not all at once.
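To make the "sliding scope" idea concrete, here's a minimal sketch: each tier is prompted with only its parent document in context, never the whole stack. The tier names, `build_prompt`, and `sliding_scope` are all illustrative conventions, not a real tool's API.

```python
# Illustrative only: each tier of documentation is generated from just its
# reviewed parent document, keeping the context window small and focused.

TIERS = ["design doc", "architecture doc", "component design", "code"]

def build_prompt(child_tier: str, parent_doc: str) -> str:
    """Prompt for one tier, grounded only in the reviewed parent document."""
    return (
        f"Using the following document as context, produce the {child_tier} "
        f"for the next level of detail:\n\n{parent_doc}"
    )

def sliding_scope(docs: dict[str, str]) -> list[tuple[str, str]]:
    """Walk down the granularity scale one tier at a time."""
    return [
        (child, build_prompt(child, docs[parent]))
        for parent, child in zip(TIERS, TIERS[1:])
    ]
```

The key property is that `sliding_scope` never concatenates the whole document stack; each prompt carries exactly one parent artifact, leaving room for the next tier to grow.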
The test aspect is also incredibly relevant: as you're able to work across a vastly larger codebase, moving much more quickly, tests become truly invaluable. And they can be squared against the original design documentation to gauge how well the produced artifacts fulfill the original intent.
I'll acknowledge that this is most relevant in the context of greenfield projects, but LLMs' ability to ingest and summarize code makes them useful tools for dealing with legacy solutions too. The point about documentation stands: adding features or fixing issues in existing codebases is the bottom of the pyramid; with these tools you can now steer things at the PM level, and better shape both the understanding of the problems and the approaches to solving them.
It's a very exciting time, it feels like having worked by hand for decades, only to now have access to power tools and heavy machinery.
> The trick then is a sliding scope context window: with a high-level design doc in context, iterate to produce an architecture document.
Absolutely, I will be stealing this!
> It's a very exciting time, it feels like having worked by hand for decades, only to now have access to power tools and heavy machinery.
Very well put, captures my feeling precisely.
Alternatively: “if you tab-complete a docstring and it doesn’t match what you expect, your code can be clearer, and you should add comments and rename variables accordingly.”
This isn’t hard and fast. Sometimes it risks yak shaving. But if an LLM can’t understand your intent, there’s a good chance a colleague or even your future self might similarly struggle.
Then I get it to go through each section of the todo list and check each item off as it completes it. This generally results in completed tasks that stay on track, and it also means I can stop halfway through and come back to the tasks later without having to prompt from the start again.
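For instance, the todo list can live in a plain markdown file with checkboxes that the model updates as it finishes each item (the file layout and the tasks below are just one convention, not something any tool requires):

```markdown
## Refactor auth module

- [x] 1. Extract token validation into its own function
- [x] 2. Add unit tests for expired-token handling
- [ ] 3. Wire the new validator into the login endpoint
- [ ] 4. Update the architecture doc to match
```

Resuming halfway through then amounts to "continue from the first unchecked item."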
Since it's not a New Yorker article, I was hoping to spare the audience a long personal life story and deliver a somewhat succinct list of suggestions that others might find useful.
However, the question is valid, and yes, this is the result of personal experience incorporating AI tools into my own development over the last couple of years, as well as watching my colleagues of various experience levels (in a team of 10 engineers) do the same. These are the practices that we have collected, adopted, and are now trying to codify and develop further.
The New Yorker out here catching strays. Spare us, "maga," your excuses and weird insults! You didn't need to share your whole life story to include some useful context.
Did you, "maga"?
I upvoted it because it aligns with my own findings working on real projects. Especially the bits about needing to “ground” the LLM in appropriate context, and being mindful of the sliding context window.
Of course, I can try it. But trying it does not prove anything. It must be tested. Testing is a much higher standard than "trying" and a lot harder to do.