People who worry about AI taking their jobs often lack ambition, or worse, a sense of mission. Too many engineers tie their identity to a title or a role, mistaking employment for purpose.
Building in the age of AI means stepping away from that mindset. It means pursuing things that matter deeply to you, to the people around you, or simply ideas you want to see exist.
If your current role supports that, it’s a bonus. If it doesn’t, then it’s just income fuel to invest in what you actually care about.
The same goes for technology. Engineers cling to stacks the way they cling to roles. If a tool solves your problem, great. If not, use it as learning fuel for something you want to build.
Note: Thoughts are personal, rephrased using an LLM.
For example, the inline database ends up as a table with href links to other parts of the document - nice, but not very useful when I want plain text I can actually work with.
Meanwhile, I have been doing a lot of prompting, and Markdown makes more sense for my workflow. It is not a journaling tool, but it is simple and widely supported - GitHub, VS Code, etc. - and it eliminated a lot of the context switching that came with dedicated note-taking apps.
What I would probably miss is the inline database and other rich content, which I have learned to stop using. But I have adapted my journalling workflows to many of my prompting techniques: instead of inline databases I use regular tables (flattened and unlinked), I split documents more deliberately, and I reference them across journals when needed - kind of like having dedicated prompts for each part of a workflow.
I also sometimes put YAML frontmatter at the top for metadata and descriptions. That way, if I ever want to run an LLM over my journals - for summarizing the year or building a semantic search - I am already set up. (Might even turn that into a feature for https://gpt.qalam.dev)
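As a sketch of what that frontmatter can look like (the field names are just my own convention, not any standard schema):

```yaml
---
title: 2024-03 reflections
date: 2024-03-10
tags: [journal, work, prompting]
summary: One-line description an LLM can use without reading the whole file.
---
```

Because the block is plain key-value YAML, any script can pull out the `summary` and `tags` fields later for summarization or semantic search, without parsing the journal body.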
I have realised the tool matters less than how I structure my thoughts.
The way I define full-stack, it is not limited to frontend and backend work. It's closer to the hardware-engineer argument: can one person actually build an entire computer? Maybe not every part from scratch, but a capable engineer can assemble the pieces into a working system.
By that logic, a full-stack engineer is someone who can pull together everything needed to turn an idea into a product. I measure their skill by how fast and effectively they are able to deliver: design, engineering, requirements, and even a bit of SEO when the product calls for it.
Where I separate a full-stack engineer from a product engineer is focus.
A full-stack engineer's focus is almost entirely technical - think optimizing page speed, bundle sizes, etc. A product engineer is maybe 70% technical but adds an extra 30% of domain thinking - competitor analysis, customer empathy, and product sense. A product engineer is the kind of person you would actually put in front of customers.
I had an understanding of what I wanted from the IDEs, but I could not fully articulate it before the launch. Now that it’s here, it makes complete sense.
The way I see the future of programming, everything is going to be live: debugging, coding, designing, etc. Not that the idea is new, but the difference is that now it will be fully autonomous.
Recently, I worked on a feature that required redesigning part of our legacy flow built with Django templates and plain JavaScript for interactivity. In theory, this should not be a difficult task for current models. But they struggled to produce the right output, and I think there are two reasons for that:
- Design is inherently hard to express purely in text.
- Models are great at generating new code, but not so great at modifying large, existing codebases.
Honestly, the best workflow I have found for updating the legacy UI is to operate directly off screenshots. I take screenshots of the existing UI and of the expected change, and ask the model to write code that matches the new design, given the existing design as context. Models pick up the context far faster this way.
With this new Design feature, I imagine the whole process becoming faster, because I can make the edits directly in the browser and the model simply codes the expected outcome. It's what I always wanted - a custom headless Puppeteer running in the background, watching what I am doing, and helping with the design in real time.
And then there's debugging. I have always preferred logs over a traditional debugger. What I have really wanted is something like an ELK parser at runtime - something that understands my logs as the system runs and can point out when things drift off the expected path.
Before LLMs, we did not fully understand the libraries we read, the kernels we touched, or the networks we only grasped conceptually. We have been outsourcing intelligence to other engineers, teams, and systems for decades.
One possible reason: when we use a library, we can tell ourselves we could read it. With an LLM, the fiction of potential understanding collapses. The real shift I am seeing isn't from "understanding" to "not understanding."
It is towards "I understand the boundaries, guarantees, and failure modes of what I'm responsible for." If agentic coding is the future, mastery becomes the ability to steer, constrain, test, and catch failures - not the ability to manually type every line.
Full piece:
https://open.substack.com/pub/grandimam/p/ai-native-and-anti-ai-engineers?utm_campaign=post-expanded-share&utm_medium=post%20viewer
One key difference I have noticed is the upfront cost. With agentic coding, I felt a higher upfront cost: I have to think through architecture, constraints, and success criteria before the model even starts generating code. I have to externalize the mental model I normally keep in my head so the AI can operate with it.
In “precision coding,” that upfront cost is minimal but only because I carry most of the complexity mentally. All the design decisions, edge cases, and contextual assumptions live in my head as I write. Tests become more of a final validation step.
What I have realized is that agentic coding shifts my cognitive load from on-demand execution to more pre-planned execution (I am behaving more like a researcher than a hacker). My role is less about 'precisely' implementing every piece of logic and more about defining the problem space clearly enough that the agent can assemble the solution reliably.
Another observation: since the cost of writing code is minimal once it is delegated to agents, I need to shift context and take up a QA role to evaluate the agent's output.
Would love to hear your thoughts.
- Builders are focused on users and the domain problem. Code is just a means to an end. They'll ship something imperfect if it unblocks a real user need. Ask them to spend time on optimizations that don't affect the user experience? Hard pass.
- Mercenaries are focused on the craft itself. They care about clean architecture, performance, elegant abstractions. They'll go deep on technical problems whether or not the business or users actually need it solved right now. The quality of the work matters independent of impact.
But I'm not confident I have this framed correctly. A few questions:
- Does this distinction resonate with your experience?
- Which type are you, and has that changed over your career?
- How do you balance these mindsets on a team?
Most examples today rely on in-prompt chaining — e.g., a single call where “Agent A does X, then Agent B uses A’s output,” all within one synchronous prompt. This works, but it doesn’t scale well and mixes orchestration logic with prompt logic.
I’m more interested in asynchronous, decoupled orchestration, where:
- Agent A runs independently, produces an artifact/state,
- and Agent B is invoked later (event- or task-driven) to pick up that output.
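A deliberately toy sketch of that decoupled pattern, assuming `agent_a`/`agent_b` are stand-ins for real model calls, and an in-memory dict and `queue.Queue` stand in for a real artifact store and event bus:

```python
import queue

# Hypothetical stand-ins for LLM calls; in practice these would hit a model API.
def agent_a(task: str) -> dict:
    return {"task": task, "outline": ["intro", "body", "conclusion"]}

def agent_b(artifact: dict) -> str:
    return f"Draft covering: {', '.join(artifact['outline'])}"

# Artifact store: Agent A persists output here instead of passing it
# inside one synchronous prompt. Could be S3 or a database in practice.
store: dict[str, dict] = {}

# Event bus: each message names an artifact that is ready for pickup.
events: queue.Queue[str] = queue.Queue()

def run_agent_a(task_id: str, task: str) -> None:
    store[task_id] = agent_a(task)   # persist state
    events.put(task_id)              # signal downstream consumers

def run_agent_b_worker() -> list[str]:
    results = []
    while not events.empty():        # task-driven pickup, possibly much later
        task_id = events.get()
        artifact = store[task_id]    # load Agent A's output from the store
        results.append(agent_b(artifact))
    return results

run_agent_a("t1", "write a blog post")
print(run_agent_b_worker())          # Agent B runs decoupled from Agent A
```

The point of the sketch is the seam: `run_agent_a` and `run_agent_b_worker` never call each other, so the queue and store can be swapped for SQS/Kafka plus a blob store without touching the agent logic.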
Curious how people are handling this in practice:
- Are you using message queues, event buses, cron/Temporal workflows, serverless functions, or custom schedulers?
- How are you persisting and passing state between agents?
- Any patterns emerging for error handling, retries, or versioning agent behaviors?
- Are you treating LLM “agents” like microservices, or is there a better abstraction?
- Would appreciate hearing what architectures or frameworks have worked (or not worked) for you.
If you were to design a matchmaking platform from scratch today, what would it look like?
How would you handle:
- Trust, authenticity, and privacy in an age of AI and deepfakes?
- Cultural and regional diversity without stereotyping?
- Real compatibility beyond surface-level traits?
- Balancing data-driven matching with human intuition?
- Building something that encourages long-term relationships, not just short-term engagement?
Curious to hear from people who think about product design, social systems, ethics, and human connection.
But most of the industry seems to reward broad or deep expertise - knowledge of systems, protocols, or architectures - even when it's not directly tied to delivering user value. This makes me wonder: am I doing it wrong?
It feels like we often judge engineers by how much they know, not by what they’ve shipped or how much impact they’ve had. It creates this pressure to keep learning things that might not ever help with what I’m actually trying to build. Has anyone else struggled with this? Is optimizing exclusively for value a valid path long term?
Would love to hear how others think about this.