Can you show me an example of successfully doing what you claim you do?
https://github.com/williamcotton/search-query-parser-scratch...
Claude, using Projects, wrote perhaps 90% of this project with my detailed guidance.
It does two passes: a first recursive-descent pass that builds the tree with strings as leaf nodes, and then a second pass over that tree so multiple errors get reported at once.
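To give a flavor of the idea (this is a minimal sketch, not the repo's actual code; the grammar and field names here are made up), pass one builds a tree with raw strings as leaves, and pass two walks the whole tree collecting every validation error instead of bailing on the first:

```typescript
// Pass 1: recursive structure with strings as leaf nodes.
type AstNode =
  | { type: "term"; value: string }
  | { type: "and"; left: AstNode; right: AstNode };

// Toy pass 1: split on whitespace, fold terms into a left-leaning AND tree.
// (The real parser also handles parentheses, OR, and comparison operators.)
function parse(input: string): AstNode | null {
  const terms = input.trim().split(/\s+/).filter(Boolean);
  if (terms.length === 0) return null;
  let node: AstNode = { type: "term", value: terms[0] };
  for (const t of terms.slice(1)) {
    node = { type: "and", left: node, right: { type: "term", value: t } };
  }
  return node;
}

// Hypothetical field list for the demo data.
const KNOWN_FIELDS = new Set(["status", "price", "color"]);

// Pass 2: walk the tree and accumulate ALL errors rather than throwing.
function validate(node: AstNode | null, errors: string[] = []): string[] {
  if (!node) return errors;
  if (node.type === "term") {
    const m = node.value.match(/^-?(\w+):/);
    if (m && !KNOWN_FIELDS.has(m[1])) {
      errors.push(`unknown field: ${m[1]}`);
    }
  } else {
    validate(node.left, errors);
    validate(node.right, errors);
  }
  return errors;
}
```

So `validate(parse("foo:1 bar:2 status:ok"))` reports both `unknown field: foo` and `unknown field: bar` in one go, which is what makes the editor experience feel decent.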
There's also a React component for a search input box powered by Monaco, complete with completions, error underlines and messaging, and syntax highlighting:
https://github.com/williamcotton/search-query-parser-scratch...
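For the error underlines, the gist is mapping parse errors onto the marker objects Monaco expects. A hedged sketch (the `ParseError` shape is made up, but the marker fields follow Monaco's `IMarkerData`, where columns are 1-based and `MarkerSeverity.Error` is 8):

```typescript
// Hypothetical error shape coming out of the parser's second pass.
interface ParseError {
  message: string;
  start: number;  // 0-based offset into the single-line query
  length: number; // number of characters to underline
}

// Subset of Monaco's IMarkerData that setModelMarkers consumes.
interface MarkerData {
  severity: number; // monaco.MarkerSeverity.Error === 8
  message: string;
  startLineNumber: number;
  startColumn: number; // Monaco columns are 1-based
  endLineNumber: number;
  endColumn: number;
}

const ERROR_SEVERITY = 8;

// Convert 0-based offsets into Monaco's 1-based single-line positions.
function toMarkers(errors: ParseError[]): MarkerData[] {
  return errors.map((e) => ({
    severity: ERROR_SEVERITY,
    message: e.message,
    startLineNumber: 1, // the search input is one line
    startColumn: e.start + 1,
    endLineNumber: 1,
    endColumn: e.start + e.length + 1,
  }));
}
```

In the component you'd hand these to `monaco.editor.setModelMarkers(model, "search-query", toMarkers(errors))` and Monaco draws the red squiggles and hover messages for you.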
Feel free to browse the commit history to get an idea of how much time this saved me. Spoiler alert: it saved a lot of time. Frankly, I wouldn't have bothered with this project without offloading most of the work to an LLM.
There's a lot more than this, and if you want a demo you can:
git clone git@github.com:williamcotton/search-query-parser-scratchpad.git
cd search-query-parser-scratchpad/search-input-query-react
npm install
npm run dev
Put something like this into the input: -status:out price:<130 (sneakers or shoes)
And then play around with valid and invalid syntax. It has SQLite WASM running in the browser with demo data, so you'll get some actual results.
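To show roughly what happens to that example query on its way to SQLite (again a hedged sketch with made-up names; the repo's actual SQL generation will differ), the parsed tree can be compiled into a parameterized WHERE clause:

```typescript
// Hypothetical condition tree, roughly what a parser might emit for
// -status:out price:<130 (sneakers or shoes)
type Cond =
  | { kind: "not"; inner: Cond }
  | { kind: "cmp"; field: string; op: "=" | "<" | ">"; value: string | number }
  | { kind: "and" | "or"; parts: Cond[] }
  | { kind: "text"; value: string };

// Compile a Cond into a WHERE fragment plus bound parameters,
// so user input never gets spliced into the SQL string itself.
function toSql(c: Cond): { sql: string; params: (string | number)[] } {
  switch (c.kind) {
    case "not": {
      const r = toSql(c.inner);
      return { sql: `NOT (${r.sql})`, params: r.params };
    }
    case "cmp":
      return { sql: `${c.field} ${c.op} ?`, params: [c.value] };
    case "text":
      // bare words fall back to matching a (hypothetical) name column
      return { sql: `name LIKE ?`, params: [`%${c.value}%`] };
    case "and":
    case "or": {
      const rs = c.parts.map(toSql);
      const joiner = c.kind === "or" ? " OR " : " AND ";
      return {
        sql: `(${rs.map((r) => r.sql).join(joiner)})`,
        params: rs.flatMap((r) => r.params),
      };
    }
  }
}

// The example query from above as a condition tree:
const query: Cond = {
  kind: "and",
  parts: [
    { kind: "not", inner: { kind: "cmp", field: "status", op: "=", value: "out" } },
    { kind: "cmp", field: "price", op: "<", value: 130 },
    {
      kind: "or",
      parts: [
        { kind: "text", value: "sneakers" },
        { kind: "text", value: "shoes" },
      ],
    },
  ],
};
```

Here `toSql(query)` produces `(NOT (status = ?) AND price < ? AND (name LIKE ? OR name LIKE ?))` with params `["out", 130, "%sneakers%", "%shoes%"]`, ready to hand to the SQLite WASM driver.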
If you want a guided video chat tour of how I used the tool, I'd be happy to arrange that. It takes too much work to get things out of Claude.
That's not universally true. For example, AWS hosts its own version of Claude specifically with non-retention guarantees: your data and requests are not used for training. This is legally backed, and governments and banks use this version to guarantee that submitted queries are not retained.
I’m a developer with about the same amount of experience as you (22 years), and LLMs are incredibly useful to me, but only really as an advanced tab completion (I use the paid version of Cursor with the latest Claude model); it easily 5x’s my productivity. The most benefit comes from refactoring code: I change one line, the LLM detects what I’m doing and then updates all the other lines in the file. Could I do this manually? Yes, absolutely, but it just turned a 2-minute activity into (literally) a 2-second activity.
These micro speed-ups save time for sure, but there’s a WAY, WAAAY larger benefit: my momentum stays up because I’m not getting cognitively fatigued doing trivialities.
Do I read and check what the LLM writes? Of course.
Does it make mistakes? Sometimes, but until I have access to the all-knowing perfect god machine I’m doing cost benefit on the imperfect one, and it’s still worth it A LOT.
And no, I don’t write SPA TODO apps; I’m the founder of a quantum chemistry startup. LLMs write a lot of our helper and deployment code, review our scientific experiments, help us brainstorm, and write our documentation, tests, and much more. The whole company uses them, and everyone is more productive for it.
How do we know it works? We just hit experimental parity, and labs have verified that our simulations match predictions with a negligible margin of error. Could we have built this without LLMs? Yes, sure, but we did it in 4.5 months; I estimate it would have taken at least 12 without them.
Again - do they make mistakes? Yes, but who doesn’t? The benefits FAR outweigh the negatives.
In theory nothing technically prevents me from doing that, but I use it for professional work. Do you understand what you're asking?