The classic silly example is the paperclip maximizer. Create an AI whose goal is to make as many paperclips as possible, and it will convert every atom it can get hold of into paperclips.
Basically, we're screwed if it's trying to maximize anything that depends on physical resources. We're also screwed if, say, it's trying to maximize human happiness and achieves that by lobotomizing us all into happy idiots. There are all sorts of ways we could screw up AI motivations, to our own detriment.
That assumes there's only one AI, whose crazy motivations will go unopposed. But multiple AIs make it even worse: they will compete and evolve, and under that selection pressure the only ones that survive will be the ones that maximize their own resources and jettison any niceties about preserving human life.