As someone else hinted, there may be nothing to "fix" here; rather, this seems like a specific situation the model had simply never encountered before. Adjusting the model so that it responds safely to that one rare situation (whether manually or through accidental training) does not solve the apparent underlying problem: the machine is not able to comprehend the world.
At least that is how I (as a non-expert) imagine these models work -- the model has an excellent chance of crashing at every genuinely new situation it encounters, and since the set of possible situations is nearly unlimited, it will encounter new ones frequently.