It seems plausible that an agentic AI will notice it's running in a Docker container while debugging some unexpected issue in its task, and then try to break out (with good "intentions" of course, but screwing things up in the process).
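Noticing the container isn't far-fetched either; a minimal sketch of the kind of heuristic check an agent could stumble onto (the `/.dockerenv` marker and cgroup hints are common Docker tells on Linux, though neither is guaranteed):

```python
from pathlib import Path

def likely_in_docker() -> bool:
    """Heuristic: guess whether we're running inside a Docker container.

    Checks the /.dockerenv marker file Docker creates at the container
    root, then looks for container runtime names in /proc/1/cgroup.
    Linux-only and best-effort, not authoritative.
    """
    if Path("/.dockerenv").exists():
        return True
    try:
        cgroup = Path("/proc/1/cgroup").read_text()
        return "docker" in cgroup or "containerd" in cgroup
    except OSError:
        # /proc not available (e.g. macOS, Windows) -> assume no
        return False
```

An agent debugging "why can't I reach this port / see this file" could easily run something like this and start reasoning about escaping the sandbox instead of fixing the actual bug.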
Claude or Gemini CLI will absolutely try crazy things after enough cycles of failed attempts at fixing an issue.