Successfully passing a code challenge is not a good predictor of future success. However, failing to pass one is a very good predictor of future failure. Furthermore, making the test harder does not make it predict future success any better.
Therefore, I belong to the "FizzBuzz" school: make the test ridiculously simple. It should take no more than fifteen minutes to complete. If the person gets it obviously right, throw the result away and move on to more important questions. Do not try to deduce a lot of malarkey about their programming style or ability from an obviously contrived problem.
For example, writing:
a = 5; b = 10
does not mean they don't know Ruby well, and writing:
a, b = 5, 10
does not mean they are a Ruby expert. And writing:
fsquare = lambda { |x| x * x }
does not mean they are a closet Bipolar Lisp Programmer. If the program works, it's a pass; move on. If there is a syntax error or some such, who cares; move on.
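For concreteness, here is a fifteen-minute test in the spirit described above. This FizzBuzz sketch is mine, not the commenter's; it's in Ruby to match the snippets above:

```ruby
# Classic FizzBuzz: for 1..100, print "Fizz" for multiples of 3,
# "Buzz" for multiples of 5, "FizzBuzz" for both, the number otherwise.
def fizzbuzz(n)
  return "FizzBuzz" if n % 15 == 0
  return "Fizz"     if n % 3 == 0
  return "Buzz"     if n % 5 == 0
  n.to_s
end

(1..100).each { |n| puts fizzbuzz(n) }
```

Whether the candidate writes it with a lambda, a case statement, or three ifs matters far less than whether it works.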
But if they struggle... You need to investigate the reason for their difficulty with a simple problem.
I also like to ask people to peer review some code containing both obvious and more subtle errors; that's a pretty important skill to me too. I'll also ask them to code some fairly simple algorithm and then critique their own work. If there are several ways to solve the same problem, some more efficient than others, all the better.
Give them a novel problem and have them solve it with a real computer and intact syntax lookup. Then see what they did and how they did it, and go over their reasoning. I'll hire the guy with the wisdom over the guy who can spit out syntax trivia flawlessly. Those are different skill sets that do not predict each other.
I do agree, though, that testing specific language skills is mostly pointless. The only time I've been given a programming test as part of an interview, I was handed a basic language spec that filled a couple of pages, then a book of problems to solve in the new language. That's the way you should test programming skill, in my opinion.
Two examples stand out:
1) A guy who had more Java certifications than I've had hot dinners couldn't complete the simple exercise after three or four hours of effort.
2) A guy who initially attempted to hit Google up for examples (why not!) and, on finding that the net connection was disabled for that user account, wrote the best solution to the exercise I've ever received.
Overall, my interviewing process has had two stages. First, talk to the candidate and get a sense of what kind of person they are, and if they'll fit in with the culture. Second, if they seem reasonable, give them the test and see what their code is like. It's not foolproof, but it does weed out most of the weaker candidates.
I should add that I thought I bombed the technical interview. So much so that I wanted to ask the VP if he had the right guy when he called with the offer.
For phone screens, I find Steve Yegge's guide a useful starting point: http://steve.yegge.googlepages.com/five-essential-phone-scre...
I'm not a fan of asking candidates to reverse a string or to implement atoi or a binary search, because those have corner cases which, under interview pressure, might be too much to ask for. I just ask for an implementation of a gcd (greatest common divisor) function, in any language. I always explain the problem in detail. Only one guy (a PhD in physics) ever gave me Euclid's algorithm (section 1.1 in Knuth's AOCP Volume 1), so I don't expect it. I do expect a trivial loop, and most candidates take 15-20 minutes to come up with it, and need lots of hints.
As far as I'm concerned, anyone who can't come up with a trivial loop (or recursive or tail recursive function) for something like this shouldn't be programming for a living. Maybe "developing enterprise solutions," but never programming.
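For what it's worth, both the trivial loop and Euclid's algorithm fit in a few lines. A sketch in Ruby (matching the earlier snippets); the method names are mine:

```ruby
# The "trivial loop": count down from the smaller argument
# until a common divisor of both numbers is found.
def gcd_loop(a, b)
  d = [a, b].min
  d -= 1 until a % d == 0 && b % d == 0
  d
end

# Euclid's algorithm (Knuth, AOCP Vol. 1, section 1.1): repeatedly
# replace the pair (a, b) with (b, a mod b) until b reaches zero.
def gcd_euclid(a, b)
  a, b = b, a % b until b == 0
  a
end
```

Either one is a pass for this question; the loop just takes longer for large inputs.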
Priceless. If you ever put this on a CafePress mug or tee shirt, you can count on selling me a dozen or so as office gifts. Thanks for the laugh.
They were shocked that the interviewee didn't know the answer because they considered it basic knowledge, "it's in the first chapter for gosh sake." The first chapter in this case covered differences between Perl 4 and Perl 5.
In Perl 5, nobody uses 'chop', they all use 'chomp.' Many experienced Perl programmers might even have forgotten that 'chop' existed.
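As it happens, Ruby inherited both methods from Perl with the same semantics, so the gotcha is easy to illustrate (Ruby here, matching the earlier snippets):

```ruby
line = "hello\n"

line.chomp      # => "hello"  removes the trailing newline only
line.chop       # => "hello"  removes the last character, whatever it is

"hello".chomp   # => "hello"  no trailing newline, so nothing removed
"hello".chop    # => "hell"   still bites off a character
```

Knowing this distinction cold says almost nothing about whether someone can build working software.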
And that's my major problem with gotcha questions, they tend to be pretty far from measuring if a person can actually write productive code.
You need to consider what you are trying to determine by asking any question. Really, what is it about the candidate you want to know? After pinning down precisely what you want to know then tailor a question to figure this out with as little peripheral stuff as possible.
If you want to know if someone can generate some code in their preferred language on the spot, fine. Pick a very simple problem, like "Write a program to display numbers in the Fibonacci sequence," and explain the sequence to them if they don't know it. I wouldn't even take points off for any mistake a compiler would catch, as long as they know how to fix it. Have them do it on the whiteboard.
This kind of Q isn't designed to figure out how much they know of 10 different languages. It's designed to see if they can put some code down in their preferred language and have it do something useful.
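A whiteboard-level answer, sketched here in Ruby (the question above doesn't fix a language, so this is just one acceptable shape):

```ruby
# Return the first `count` Fibonacci numbers, starting from 0.
def fibonacci(count)
  seq = []
  a, b = 0, 1
  count.times do
    seq << a
    a, b = b, a + b
  end
  seq
end

puts fibonacci(10).join(", ")   # 0, 1, 1, 2, 3, 5, 8, 13, 21, 34
```

A recursive version, an off-by-one in the starting pair, or starting from 1 instead of 0 would all be fine too; the point is only that they can put down working code.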
The biggest constraint in an interview isn't the honesty of the candidate, or even their knowledge; it's the N-hour time limit. Your goal as the interviewer is to squeeze out as much information about who the candidate really is as you can in the time allowed.
If they didn't know it after a moment's thought, I'd cut them some slack; if they couldn't calculate it, with pencil and paper, in 10 minutes (and that's giving them plenty of extra time for interview nervousness), then I'd be seriously concerned.
But then I read this and thought, "Would I really want to give a job to someone who needs pen and paper to subtract 1 from 255?"
I'm not sure what the lesson is here - perhaps that programming tests are dangerous because it is very easy to over-interpret the results, even when you start with the best of intentions?
[edited for grammar]
It also depends on the time frame of the position. If you are looking to hire a programmer on a per-project basis, then a solid understanding of the development language is critical. However, if you are hiring a developer for the long term, it's better to hire a smart and agile person who can contribute to the overall business as well -- which is hard to measure with a coding challenge.
This is only a wheat/chaff separator, though. If possible, I like to see code from actual projects they've worked on to get a feel for the bigger picture (tip: open source your personal projects and obsess a bit over the code, so you have something to show off). Plus there's all the other non-code aspects to the interview.
I interviewed once with a company that OCRed medical documents. One guy held up a sheet of paper and asked "How much memory does this take up?" which was a deliberately open-ended question. I was initially stumped, but then he started feeding me domain knowledge and assumptions, such as common DPI values and expected fidelity. Then I started coming up with estimates.
I don't think he expected me to have an answer right off. I think he wanted to see how much I could figure out on my own, given hints.
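The actual figures from that interview aren't given above, but the back-of-envelope arithmetic might run like this. The page size, DPI, and bit depths below are my own assumptions, not the interviewer's:

```ruby
# Back-of-envelope: memory for one scanned US Letter page.
# Assumed: 8.5 x 11 inches, 300 DPI, uncompressed raster data.
dpi           = 300
width_px      = (8.5 * dpi).to_i     # 2550 pixels across
height_px     = 11 * dpi             # 3300 pixels down
pixels        = width_px * height_px # total pixels on the page

bytes_bitonal = pixels / 8           # 1 bit per pixel (pure black/white)
bytes_gray    = pixels               # 8 bits per pixel (grayscale)

puts "#{pixels} pixels"
puts "bitonal:   ~#{bytes_bitonal / 1024} KB"
puts "grayscale: ~#{bytes_gray / (1024 * 1024)} MB"
```

So roughly 1 MB bitonal or 8 MB grayscale, uncompressed; the interviewer's hints about DPI and fidelity would swing those numbers considerably.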
Obviously there are cases where you wouldn't use it, but generally they only apply when you know someone's capable of coding anyway (i.e. due to an indisputable reputation, experience working with them, etc.).
However, as a predictive measure it's fairly limited in that it's a binary indicator: if someone fails, there's a fair chance they're a terrible developer; if someone passes, there's a fair chance they're not a terrible developer.
I always give a fairly simple problem, pencil and paper, a soda, and leave them alone for 15 to 30 minutes. Before I leave the room, I make sure they clearly understand what's expected.
What they do isn't important. Our discussion about what they did is. I don't care what syntax or mannerisms they use; I do care if they understand what was needed and what they did well enough to explain what they did and why they did it.
Almost every time some good follow-up questions and segues ensue. What about this? What about that? I've learned more about people in the discussion after the problem than from anything else.