I think multiple-choice exams can be a good format – a physics exam I took was multiple choice, and because the examiners were "giving you the answers", they took it as an opportunity to ask really out-there questions. They would frequently take two wildly different aspects of the physics course and mash them up in a new way. I thought this was very effective at distinguishing students who understood the material and could combine that understanding with intuitive leaps from those who had merely memorised the formulas and could only repeat them on command in the exact form they knew.
This suggests to me that the real issue is less multiple choice itself and more a purely fact-based style of testing.
For a computer security course I took at university, one of the exam questions was "Describe Stuxnet – 20 marks" (half the exam's marks). We had had a lecture dissecting the whole Stuxnet incident. For those who had simply memorised facts, this question would be quite hard, but for those who understood why certain things mattered and could write an in-depth explanation of the security failings, it was great.
The problem is that marking this sort of question requires a significant amount of manual work, and that doesn't scale. Another example would be PhD vivas, which I've heard are a generally well-respected way of determining ability, but which again require a significant amount of expert manual input.
I don't think we'll get good certifications for these sorts of skills until we find ways to scale this style of examination.