I admit, I am somewhat excited to see what's actually left after the hype has gone away, because there might actually be something. LLMs can only contribute to projects where there's a severe deficiency, or where there's enough of a specification that a heuristic-guided fuzzer could do the same job quicker. LLMs are worse at translation than much smaller seq2seq transformer models. Their apparent writing ability is mainly attributable to plagiarism and the LLMentalist effect; their apparent sentience, to the ELIZA effect. But once you strip away all the hype, will we be left with a pearl, or just bits of dead clam?
I think I've unintentionally trained myself to notice (and tune out) both AI illustrations and AI writing.
At a deep instinctual level, knowing that someone hasn't spent much time or effort creating the content makes me not want to reciprocate with time or effort.
I've realised that my brain literally tunes out AI illustrations, much as it does with ad banners.
Perhaps because they're easy to generate, I encounter illustrations more often -- they're no longer a signal of quality.
I keep repeating this: AI-written prose lives in an uncanny valley -- clearly grammatically correct, but still weirdly off.
Why do we think that the AI generated code is any better?
I described AI-generated code as feeling very "alien" to me, but I'm not sure that that is the correct term.
Low-quality trash that is offensive to be handed to read, because the author didn't give enough of a shit to spend a few minutes creating the graphics by hand.
I don't want to work with people like "Jim Yagmin" -- people who consider this kind of output acceptable. It immediately makes me expect subpar, "good enough" work with no attention to detail. Just slop it at the wall and see what sticks!
Use the idiomatic comments for your language.
Here is a snippet of our prompt for C# (and similar one for TS):
- Use idiomatic C# code comments when writing public methods and properties
- `<summary>` concise description of the method or property
- `<remarks>` the "why"; provide domain or business context, chain of thought, and reasoning; mention related methods, types, and files
- `<param>` document any known constraints on inputs, special handling, etc.
- `<return>` note the expected return value
- `<example>` provide a brief example of correct usage
- Use inline comments sparingly, where they add clarity to complex code
- Update comments as you modify the code; ensure they are consistent with the intent of the code
What happens: when the LLM stumbles upon this code in the future, it reads the comments and basically "re-hydrates" some past state into context. The `<remarks>` tag is doing the heavy lifting here, because the LLM is asked to provide its train of thought and mention related artifacts (so a future LLM knows where else to look). You already know the agents are going to read your code again when they gather context, so just leave the instructions and comments inline.
The LLM is very good at keeping these up to date during refactors (we still do human code reviews), and a bonus is that reviewing the code gets much easier: the reasoning for why the LLM generated some function or property is right there for the human as well.
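To make the pattern concrete, here's a minimal sketch of what the TS variant of that prompt might produce, using the TSDoc equivalents (`@remarks`, `@param`, `@returns`, `@example`) of the C# XML tags listed above. The method, the fee policy, and the related file names are all hypothetical, invented for illustration:

```typescript
/**
 * Calculates the late fee owed on an overdue invoice, in cents.
 *
 * @remarks
 * Finance caps late fees at 10% of the invoice total (hypothetical business
 * rule). We accrue a flat 0.1% per day rather than compounding because most
 * invoices settle within a week of the due date. Related: the `Invoice` type
 * in invoice.ts, and `applyPayment`, which consumes this value.
 *
 * @param total - invoice total in cents; must be non-negative
 * @param daysLate - whole days past the due date; 0 means on time
 * @returns the fee in cents, capped at 10% of `total`
 *
 * @example
 * lateFee(10_000, 3) // fee for a $100 invoice, 3 days late
 */
function lateFee(total: number, daysLate: number): number {
  if (total < 0 || daysLate < 0) throw new RangeError("negative input");
  const dailyRate = 0.001; // 0.1% per day (hypothetical rate)
  const fee = Math.round(total * dailyRate * daysLate);
  return Math.min(fee, Math.round(total * 0.1)); // cap at 10% of total
}
```

The `@remarks` block is the part a future agent "re-hydrates": it records why the cap exists and which other artifacts to go look at, none of which is recoverable from the code alone.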
I guess the hope is that the middle managers will finally be able to get rid of the annoying techies, this time, as has been the promise for decades.
Maybe these LLMs are the silver bullet to finally free us so we can dance, paint, write poetry, and fuck instead of working.
Not that I consider writing code to be work, since it's always been the easy bit for me, but yeah, just as the machines have taken music, art, poetry, etc., why not let them take everything we enjoy?
PS - You'll prise copilot in vsc from my cold dead fingers :-)
1. State problem created by AI
2. Provide simple solution
3. State it cannot work and AI won’t help
4. Describe another way to solve for AI with more work
This feels like at least the third blog I've read that follows this pattern and has the hallmarks of generated text.
People are playing LLM slot machine for engagement blogs.
High-level and user docs in /docs
The better solution I mentioned above: inline, idiomatic code comments for your language, with the LLM dumping its reasoning into the `<remarks>`, `@remarks`, etc. of the comment block.
Now you get free, always up-to-date, platform-agnostic, zero-infrastructure long-term memory that works with any agent, because every agent is forced to read the comments when it reads the method. It will never be missed the way secondary docs can be.
It also saves context: instead of reading a 2,000-token document for 100 tokens of relevant context, the agent just reads the comments on the specific method and hydrates long-term memory just in time, with a near-certain activation rate and no additional prompting.