There are, however, limitations imposed by the architecture. An LLM cannot form secret chains of thought (though in theory a closed system outside the end user's control could hide tokens from at least the user), nor does it have anything resembling real metacognition. LLMs also have an at-best weak concept of fact versus fiction in general, which is why we get hallucinations. None of those are exactly optimal prerequisites for telling lies.
Also, your car isn't a coward because it refuses to run into an obstacle its onboard systems detect. The car's designers may have been cowards. Your car also isn't a hero for protecting you during a crash. Likewise, LLMs are neither virtuous nor liars. If some AI company went out of its way to intentionally construct an LLM such that it outputs untruths, it's not the LLM that is lying to you, it's OpenAI/Anthropic/whoever you're interacting with. You're using their system; they are responsible for what it does. If it tells untruths, they may have automated the act of telling lies, but it's still them doing it.
> There are multiple indications that the same can be said for humanity, that we perform actions and then rationalize them away even without realizing it
I was hoping to get a response like yours, because I'm genuinely curious about where it leads.
I believe what you said is true in a general sense: we solve easy problems subconsciously, in parts of the brain dedicated to supporting the conscious mind, without then being able to explain how we did it.
However, this is much less true for engineering tasks, which involve far more active planning. Sometimes software development just means being a fancy constraint solver: finding a solution that works while applying some best practices. When pressed on why one chose that particular solution, one might be tempted to post-hoc rationalize it as the best solution, even though it was merely one that fit. But that's only making it out to be more than it was; it doesn't take away from the accomplishment of finding a solution that worked, which likely required some active thinking.
At the other end of the spectrum is making architectural decisions and thinking ahead while creating something novel. There, I'd be able to tell you why everything exists, especially for things I added only in anticipation of something that will use them later. A ton of conscious planning goes into these things.
Most coders are still turning over the problems they've been dealing with at work in their heads as they're falling asleep at night. That is very much the opposite of solving problems subconsciously.