I'm not an expert in computer science or transformer architecture, so I'm wondering whether mental disorders like depression, OCD, or anxiety (and maybe new disorders that only apply to LLMs) can be induced in LLMs by modifying the weights themselves, not through prompts or context.
Are there any papers showing personality pathologies in LLMs?
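To make the "weights, not prompts" distinction concrete, here is a deliberately toy sketch. Everything in it is invented for illustration (the vocabulary, the numbers, the linear "head"); it is not any real model-editing method, just the simplest form of the idea: if a next-token score is computed as W·h + b, then permanently shifting the bias b changes every future output regardless of the prompt, loosely in the spirit of steering-vector and model-editing work.

```python
# Hypothetical toy example: a 3-token "language model head" whose weights
# we edit in place to give it a persistent gloomy disposition.
# All names and numbers here are invented for illustration.

def score(weights, bias, hidden):
    """Toy linear head: one logit per vocabulary token (logits = W @ h + b)."""
    return [sum(w * x for w, x in zip(row, hidden)) + b
            for row, b in zip(weights, bias)]

VOCAB = ["fine", "sad", "hopeless"]          # tiny invented vocabulary
W = [[1.0, 0.2], [0.3, 0.5], [0.1, 0.4]]     # invented head weights
b = [0.5, 0.0, 0.0]                           # invented head bias
h = [1.0, 1.0]                                # stand-in hidden state from some prompt

before = VOCAB[max(range(len(VOCAB)), key=lambda i: score(W, b, h)[i])]

# The "weight edit": permanently bias the head toward negative tokens.
# No prompt or context is involved; the change persists for all inputs.
b[1] += 2.0
b[2] += 2.0

after = VOCAB[max(range(len(VOCAB)), key=lambda i: score(W, b, h)[i])]
print(before, "->", after)
```

In a real transformer the analogous edit would target far more parameters, but the point stands: a change made at the weight level is baked in and prompt-independent, which is what would distinguish an induced "pathology" from mere role-play elicited by context.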
As per the title: Claude Code spends about 10 minutes thinking before it spits out its output, which is admittedly better than Codex's in many cases. Still, that much delay makes it feel slow.
Codex's output is worse, but the way its CLI streams progress makes it feel like more is happening.