On the one hand I like that it normalises the application process against prejudice (I was lucky enough to go to a top engineering school), so it offers a fair entry point to everyone. On the other hand, it comes across as utterly irrelevant IQ hazing.
Anyone else going through this pain? What are we to make of the recruiting scene going forward? Are we now at a point where an engineer should at all times be intimately familiar with competitive programming and codegolf techniques?
My candidates are told throughout the process that what we're looking for is a demonstration of how technical collaboration might work if we were employed by the same company. This takes away most of the stress of interviewing, which I know via candidate surveys.
My advice would be to steer clear from employers who use a soulless cookie cutter process that makes people feel like a commodity. This is how you'll also be treated during daily interactions and in conversations about your career development. Don't be under any illusion that you'll be able to find yourself on the right side of such a situation.
I agree most should steer clear from them, but the interview process is often an accurate representation of the level of the team:
* If the interview involves a set of questions copied off the internet, verbal or written, the team doesn't have the skills to hold a dynamic discussion.
* If the interview doesn't have enough standard questions, then they're either inconsistent or rely heavily on certain people's intuition. The latter could go either way, but the former could mean complete disorganization.
* If they expect you to know the circumference of the Earth (equatorial 24,874 mi / 40,030 km; meridional 24,860 mi / 40,008 km) or the capital of Sumer in 2281 BC (Uruk), then they may be interested in someone with a scientific mind, a great memory, and a taste for trivia or history.
* If they give you problems to solve, they want to see and hear how you think.
* If they give you homework, they probably just want you to provide solutions in code and figure things out on your own to some extent, and the amount of time they give you is an indication of how they estimate tasks.
> ... or who has the skill of typing "google.com" into a browser
Exactly. How many times have you reached out to your network and said "HEY! I am trying to solve problemX and I am stuck on Y -- anyone done this before?"
You can't expect everyone to know everything - ESPECIALLY in an interview, and even more so in a panel interview.
Measure their problem solving skills, not their intimate knowledge of tech/lang X....
and then on-top of that, judge their fit for working well in the team!
Only ask that, when they come back from a take-home exercise, they explain exactly how they solved it:
"I had to call my buddy over at BigCorp and say, hey, don't tell me the answer - but point me to where I might figure out how to solve this problem"
OR
"Hey Joe, I did this - but I'm not sure how efficient it is - did you do something similar?"
Tell them to get a slack channel of their peers to help them succeed.
I am tired of everyone trying to be the hero - all my contacts try to support one-another, the interview process should be no different.
Amazing, until you put it into writing I never realized that I do this too.
At least once a day I'll get a random question from my peers in areas where I have more experience than them, which also helped me gain real world work experience years before I even had my first job.
And I do the same when I wander in areas where I haven't had so much experience, discussing approaches and common pitfalls, which makes so much sense.
Yet in interviews it's like you're going to be working alone forever and have to be the best in these exact technologies/languages/stack or you'll never be able to do your job.
I guess this is what you end up with after calling everyone a ninja-rockstar-guru.
I treat interviews more like going out for coffee and often do just that. I like to understand what people are passionate about both in tech and personally.
That isn't necessarily at odds with assessing technical ability via tests, of course. In a perfect world, we'd be able to tease out both great technical ability and great interpersonal skills. It sounds to me like our original ranter disagreed with the scope and focus of the test: timed cleverness and ingenuity in an esoteric problem space. That kind of ingenuity is often helpful, but not often required for many technical jobs.
We typically do a short phone interview first to assess whether there is enough common ground to work with. We're looking for huge red flags at this point, such as an inability to talk through very fundamental programming concepts, being difficult to communicate with, or a generally disagreeable personality.
Next we do a take-home project, where we share a functioning, boilerplate web app and ask the applicant to spend a couple of hours addressing some portion of "requested" functionality. The bar we're setting here is actually not very high. We want to see that the applicant was able to figure out what was going on in an existing code base and that they were capable of making a new, original addition to it (even if very small).
The next stage is an in-person interview with 3 or 4 of us. We usually start off with a 30-minute paper test, where the applicant can pick 5 out of 10 questions to answer. Pseudocode answers are expected, not syntax perfection. This is really another data point where we're trying to make sure the applicant truly is technically competent.
We then sit down and talk for about an hour. We ask questions about their resume, the project, and the paper test. A lot of this is making sure that they can talk about the things they chose to list on their resume. If they indicate expertise in TDD then they can expect questions about frameworks used and software patterns utilized to improve testability. If they indicate expertise in a particular database server, they can expect detailed questions in that area. This is also an opportunity for them to ask us questions about our company, culture, methodologies, etc.
The final part is a collaborative exercise in defining the architecture of a proposed system. I recognize whiteboarding a solution is tough in a situation as stressful as an interview, with people you haven't yet developed a working relationship with. So we try our best to reassure the applicant and to make it as collaborative as possible. Sometimes we'll debate different options amongst ourselves to see how the applicant participates and which direction they choose.
I usually close with a tour of our dev area, as well as a few areas of the company so they can see if it feels like a place they could call home.
Interviewing is hard on both sides.
The other problem is that the people who could be designing better interviews aren't stepping up. There are plenty of intelligent people who aren't 1) taking the time and effort to introspect and think about why they are effective as people and 2) taking those insights and translating them into an interview process that selects for important traits in simple, reproducible ways.
It's bizarre. A job could require years of experience with linux, programming, and networking, all of which could be tested with a multiple-choice style test to get a sense of where a candidate stands. Instead, we look at their resume, check off that they have our requirements buried somewhere in the forest of buzzwords, and then move onto whether someone can finger-paint their freshman year CS lectures onto a whiteboard. Then when we end up hiring a completely ineffective person who spent their entire time trying to game the interview system, we are surprised, even though we've been selecting for that kind of person all along.
I find looking through past project work is one of the best indicators of ability. However, I've found it extremely difficult to get: NDA restrictions from previous employers are one major obstacle, or the candidate is transitioning from a different domain or career path.
And the battery of automated tests helpfully written for you saves a lot of time.
If the companies you interview with choose problems you don't find relevant, it may be their fault, not HackerRank's.
Or even not a fault: they have to screen out a number of applicants who are good at boasting but can't actually code. You obviously can. It can be a bit boring, but it gets you to the next stage with significantly fewer contenders.
- Hidden tests/metrics that also get evaluated
- Companies with an exaggerated sense of self-importance and unrealistic expectation of results
This is a test; you don't have the feedback you'd have at a company (code reviews, talking to a colleague, better understanding of the conditions, real data tests, etc.). Don't expect me to guess your conditions.
Also, I can't cover all points in the time you give me, I'm focusing on making the thing work, other stuff is secondary.
I enjoy solving hard problems as much as any other engineer, or I wouldn't be in this field. But if you want to judge me on code quality, expressiveness, architecture, and test coverage, in addition to performance and correctness, you should really give me enough time to review, refactor and test my code to cover all of those criteria. You know, time that any responsible engineer would spend on any piece of code they write before sending it off to production.
If the position does actually involve writing production-ready solutions to hard algorithms problems within ridiculous time limits like 1 hour, let me know beforehand so I have the chance to disqualify myself from it, and we can both avoid wasting any more time on each other.
It may be a good idea to ask for clarifications before or even when you've started the assignment. (Applies to real work as much or even more.)
I remember doing one problem for one company, and the automated test suite just gave you "5 out of 6 tests were passed", without revealing what the 6th one was testing for.
I kept on having to think "maybe it's a null safety check here?", "or here?", what if the user passes -1 etc, but to no avail - the test just wouldn't pass.
The worst part was I gave up in the end because I had to move onto the next question, and they don't reveal at the end your performance and where you went wrong.
For example, an old employer of mine used something like "write a function which, given two strings representing durations down to a hundredth of a second in the format HH:MM:SS.SS, prints the difference between them, in the same format". To do that, you need to do some basic parsing, some simple maths, and some formatting. You don't need clever algorithms.
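A minimal sketch of that kind of exercise (in Python; the function names and the sample values are mine, not the employer's): convert each duration to hundredths of a second, subtract, and format the result back.

```python
# Sketch of the duration-difference exercise: "HH:MM:SS.SS" strings.
# Basic parsing, simple maths, and formatting - no clever algorithms.

def parse_hundredths(s: str) -> int:
    """'HH:MM:SS.SS' -> total hundredths of a second."""
    hh, mm, ss = s.split(":")
    sec, frac = ss.split(".")
    return ((int(hh) * 60 + int(mm)) * 60 + int(sec)) * 100 + int(frac)

def format_hundredths(n: int) -> str:
    """Total hundredths of a second -> 'HH:MM:SS.SS'."""
    frac = n % 100
    sec = (n // 100) % 60
    mm = (n // 6000) % 60
    hh = n // 360000
    return f"{hh:02d}:{mm:02d}:{sec:02d}.{frac:02d}"

def duration_diff(a: str, b: str) -> str:
    """Print-ready absolute difference of two durations."""
    return format_hundredths(abs(parse_hundredths(a) - parse_hundredths(b)))

print(duration_diff("01:30:00.50", "00:45:15.25"))  # 00:44:45.25
```

The only mildly fiddly parts are carrying between units and zero-padding the output, which is exactly the kind of everyday care the question is probing for.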
On the other hand, a problem which requires a "trickily smart O(N) DP solution" requires specialist knowledge, and is not simply a test of whether someone can code. If a company actually needs people who are good at particular algorithms, then this may still be a useful test. But barely any do.
A more ambiguous case is a test I did recently which asked for a function that printed all primes smaller than N. As it happens, I know a couple of (simple!) algorithms for finding primes, so I just coded one. But I know people - including some brilliant colleagues - who haven't studied maths beyond secondary school, and so won't know those algorithms. They might work one out, given time, but that's time taken away from coding.
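For what it's worth, one of the simple algorithms being alluded to is presumably the Sieve of Eratosthenes; a minimal Python sketch:

```python
# Sieve of Eratosthenes: mark multiples of each prime as composite.

def primes_below(n: int) -> list[int]:
    """Return all primes strictly less than n."""
    if n <= 2:
        return []
    is_prime = [True] * n
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # p*p is the first multiple not already marked by a smaller prime.
            for multiple in range(p * p, n, p):
                is_prime[multiple] = False
    return [i for i, ok in enumerate(is_prime) if ok]

print(primes_below(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Trivial once you know it exists, which is precisely the point: the test measures exposure to the idea at least as much as coding ability.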
Many Project Euler problems are about such special things that you likely didn't know before encountering the problem, but the definition suffices.
More than once, I've been recruited by executives at Famous Tech Company to run major new initiatives involving vast volumes of spatially organized data, since my expertise in that area is well known, but there is a technical diligence step where I am grilled on spatial algorithms by a shockingly ignorant (e.g. doesn't understand R-trees) Principal Engineer or similar who actually believes no one can know more about the space than they do. (If that was the case, they wouldn't be trying to recruit me.) If an interviewer wants to test my expertise, they better be able to have a substantive discussion on the subject matter and understand the limits of their own expertise.
I view algorithm gotcha games as disrespectful of an experienced software engineer's expertise generally. I like to turn it into a substantial discussion about the algorithm class generally; if the interviewer is incapable of having a substantial unscripted discussion about said algorithms, it is a red flag and they have no business asking those kinds of questions. These days, I just walk away from an opportunity when this kind of nonsense happens.
Some computer science domains are worse than others. Spatial is particularly bad because very few computer scientists realize the theoretical foundations of spatial data structures are completely different than the more ordinary ones they are familiar with -- their intuitions don't apply.
- General mental ability (Are they generally smart)
Use WAIS, or, if artifacts of GMA (complex work they've done themselves) are available, use those as proxies.
Using IQ is effectively illegal[2] in the US, so you'll have to find a test that acts as a good proxy.
- Work sample test. NOT HAZING! As close as possible to the actual work they'd be doing. Try to make it an apples-to-apples comparison across candidates. Also, try to make accommodations for candidates not knowing your company's shibboleths.
- Integrity. The first two won't matter if you hire dishonest people or politicians.
There are existing tests available for this, you can purchase for < $50 per use.
This alone will get you a > 65% hit rate [1], and can be done inside of three hours. There's no need for day-long (or multi-day) gladiator-style gauntlets.

[1] http://mavweb.mnsu.edu/howard/Schmidt%20and%20Hunter%201998%...
[2] The effective illegality comes from IQ tests disadvantaging certain minority groups.
You've posted this before and been called out on it. Please stop spreading misinformation.
There is nothing special about IQ tests specifically. Any proxy test will have exactly the same legal ramifications. As long as you can show that the results of that test are relevant to job performance, it is fine. Whether it is labeled an "IQ test" is irrelevant.
I've spoken with a lawyer about this. There is case law directly concerning IQ tests.
The bar for acceptance to prove the use of IQ in hiring in a discrimination case is unattainable by most software companies. Hence, it is effectively illegal.
NB: I would desperately love to use IQ tests in the US myself. I want to be on your side and use IQ tests, but wouldn't risk my supper for it.
Yes, we are. Accepting that trend seems to be better than ignoring it.
There is one not-so-bad way to look at these programming challenges. In Silicon Valley, we live in a condition where the company could fail at any time. As developers, we most often have to learn something entirely new and ship the code the business needs. Those challenges are enormous, with lots of unknowns, compared to these programming contest problems, which are pure preparation. The general idea is: if a developer is diligent enough to prepare for these contest problems and deliver when required, they will probably be helpful when the business has a dire need.
I don't disagree with the hate towards the current interview process at larger companies, preferring take homes myself, but I think it's harmful to say that all they have to do is "memorize a bunch of ... techniques" - they're going to have memorize a whole lot to get through the interview process. In the process of memorizing (learning) those techniques, you're likely to learn a lot about the foundations of mathematical problem solving.
I guess I'm biased because I studied math in college, but I think we can both criticize the current interview process without taking away from the hard work of people who are actually passionate about algorithms.
How? It's not like the applicant can study for the "dire need" like they can algorithmic interviews. I mean, sure, such an employee can attack the problem with fervor and hours of time, but that doesn't mean that they'll come up with the right solution to the problem.
The ability to choose the "right" algorithm has no correlation with the ability to solve problems: the ability to associate pathfinding problems with the proper variant of Dijkstra's algorithm won't help when troubleshooting why customers are getting intermittent 403 errors.
I don't think any of these are true. Companies do not fail 'at any time' because of their programmers nor are they saved by them in the nick of time. If there's something you should be prepared to for it's writing code that's less prone to catastrophic failure rather than coding your way out of some disaster. It's not ER Medicine.
This is what I don't understand about software engineer hiring. They are ignoring important criteria about fit for the actual job, and focusing on criteria that is either orthogonal or completely irrelevant.
It's worthwhile to ask in these situations if the exercise is representative of work you'd be doing. It almost never is, but they tend to excuse it as...
> it normalises the application process against prejudice
But does it? By ignoring the actual criteria relevant to the job, they're dumping qualified people who don't do well with the hazing, and delaying evaluation of everyone else until after they're hired. That evaluation tends to be a lot less objective, because "hiring is expensive"; it selects for people who do well with the hazing, and reproduces the problem for the next round.
Oftentimes, the people actually designing the hiring process have an imperfect understanding of what work software engineers actually do. They ask the software engineers for input, but optimizing the hiring process is orthogonal to a software engineer's core work responsibilities, so they aren't necessarily going to give ideal advice.
I'd also say a fair share of those in Who's Hiring on HN aren't in dire need of filling seats, but are just trying to see how far people will fling themselves through the mazes. They're blissfully unaware that they're not the only startup, and that they have no way of offering the stability of a large company.
At one of the places I've seen, we didn't read every resume. We overlooked mountains of talent and shot ourselves in the foot.
Instead of hiring coders that had their heart in the right place, we hired streetwise careerists that put their own interests before the team. But they could do palindromes, fizzbuzz, and whiteboard data structures and algorithms.
But when we wanted them to do something generalist or in another language, they'd refuse. One even went so far as to say that if they had to program X in Y editor, they'd just leave the job. What use is passing all these tests if you're totally inflexible?
We also snubbed people who enthusiastically espoused startup gumption and the idea of building, but who didn't cope well with the whiteboarding we threw at them. Those whose heart was in teamwork and open source, we ignorantly overlooked, while continually putting up walls to see who finally got past all of them.
There is some toxic cultural thing in startups of insularity and smugness. If I could go back, I'd say screw the whiteboard games: come freelance with us for a week. That way I could gauge your temperament, how you work with teammates, your technical skills, etc. in a realistic setting.
And if someone asks for a code sample and you already have projects on GitHub or in your portfolio, don't be afraid to redirect them there instead. If they don't look, assume the employer is not serious about filling the spot, but is just putting in the least effort themselves to see how many hoops people will jump through.
It's not you OP / other programmers. If an employer doesn't bother to give you a phone call to talk to you as a human being, maybe they're not so eager to have a position filled.
Don't let it affect your self-worth. Always be coding. Don't be afraid to stick your head out there at a meetup and shake some hands; you'll be surprised how much more decency you're shown when you present yourself as a human being, not another resume in a stack of thousands.
For starters, there is a very small and fairly specific subset of the population that can afford to carve out a whole week for an extended interview, even if you do pay them for their time.
But I doubt this would leave out genuinely interested people who aren't employed. Assuming a preliminary acid test on the employer's side for skills needed and on the candidate's side for interest, this seems reasonable.
You may want to have a Plan B for the employed.
Look at this mess: https://www.hackerrank.com/challenges/preprocessor-solution
This purports to be teaching use of the preprocessor. It's horrific. It's someone's amusing "look at how you can create your own programming language by abusing the preprocessor" mess; fair enough, it's always fun to see someone do something this painful, but this is being genuinely presented as a preprocessor learning exercise.
One day I'll have to work with people who learned from this (and others like it), and it will be a long painful road to help them unlearn.
> One day I'll have to work with people who learned from this (and others like it), and it will be a long painful road to help them unlearn.
Quoting from the link you've mentioned: #define add(a, b) a + b

I couldn't agree with you more. To elaborate:

add(1, 8) * add(5, 1)

wouldn't yield 9 * 6 = 54, but expands to 1 + 8 * 5 + 1, which is 42. One might say that's correct, since it's the Answer to the Ultimate Question of Life, the Universe and Everything. The fix is to parenthesize the arguments and the body: #define add(a, b) ((a) + (b))

On the other hand, I've seen candidates who fail interviews that are algorithm heavy, but have done exceptionally well when it comes to the practical world - doing actual work, building apps, and contributing to the team instead of writing the next (insert your fav tricky algorithm question here) solver.
It's unfortunate that while many companies that are hiring are just run-of-the-mill SaaS and apps companies that don't require you as an engineer to use algorithms or DP on a daily basis (or even ever), you still see algorithm heavy interviews at these companies.
On Hackerrank, I don't necessarily hate it as a tool, but just the questions that get asked through it. I don't like that it automates an interview process to a certain extent, as a candidate's potential and skills and experience and fit in the team can't exactly be measured via an automated process but requires actual human to human interaction. If a company rejects you on the basis of failing a Hackerrank question and hasn't even talked to you, you're better off working for a different company.
If the expected output was 42 and your code emitted 4, it'd give you 50% marks for the test case.
As an aside, such tools would give you a 0 even if you coded the perfect algorithm but goofed up the final printf.
Robotic evaluations might work, but not in the current form.
x1 x2 x3
y1 y2 y3
or:
x1 y1
x2 y2
x3 y3
The worst part was that their example still produced the same result if you read the values in the wrong order! I spent 40 minutes debugging my solution, not understanding why my test cases worked perfectly but HR wouldn't accept the solution.
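To make the ambiguity concrete, here is a hedged Python sketch (the values are hypothetical, not the actual HackerRank sample): the two layouts parse to different point lists in general, but they coincide exactly when every sample point has x equal to y, which is how a weak example can hide a wrong reading.

```python
# Two plausible readings of "three points as whitespace-separated ints".
# Values below are illustrative, not the actual HackerRank sample.

def parse_axis_lines(text):
    """Layout A: line 1 holds all x values, line 2 all y values."""
    xs, ys = (list(map(int, line.split())) for line in text.strip().splitlines())
    return list(zip(xs, ys))

def parse_point_lines(text):
    """Layout B: each line holds one (x, y) point."""
    return [tuple(map(int, line.split())) for line in text.strip().splitlines()]

# A sample where every point has x == y: both layouts describe the same
# points, so a solution using the wrong parser still passes the example.
assert parse_point_lines("1 1\n2 2\n3 3") == parse_axis_lines("1 2 3\n1 2 3")

# With distinct coordinates the readings diverge: the same points in the
# two layouts, and what a misapplied parser would produce instead.
print(parse_point_lines("1 2\n3 4\n5 6"))  # [(1, 2), (3, 4), (5, 6)]
print(parse_axis_lines("1 3 5\n2 4 6"))    # [(1, 2), (3, 4), (5, 6)]
print(parse_axis_lines("1 2\n3 4"))        # [(1, 3), (2, 4)] - wrong points
```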
This week I was invited by an in-house recruiter from one of the "Big 4" to solve two coding problems via HackerRank in 120 minutes, plus a third exercise asking about the time and space complexity of my solution. I am 99% sure that I will fail this pre-selection too, but I really do not care; the more I practice now, the more opportunities I will have next time. I have talked with people who were hired by Google, Amazon, or Booking.com after 8-12 months of being unemployed, so - in my case - two months is certainly nothing. I can use the next six months to train myself, and maybe next year one of these companies will extend an offer and then I will forget about all this hiring madness.
I'm wondering how other industries do it, I mean, once Doctors get their medical license, if they want to move to a different clinic or hospital, do they have to attend an interview demonstrating their knowledge, doing a whiteboard session on a "Dr House" style medical problem that they have to diagnose in < 10 minutes?
Or do they present their medical credentials, and get interviewed on their bedside manner, their ability to work in a team (if applicable), anecdotes about their past experiences etc?
There are 20k oncologists across all fields of oncology within the US. That's also about the number of engineers Google employs. Most other industries we like to compare ourselves to are leagues above ours in individual merit. My father is an oncologist, and I'm reasonably confident he has some familiarity with every genitourinary oncologist in North America, Australia, and Europe. More importantly, he's on a first name basis with all of their educators. When he needs to hire a new doctor, he doesn't post an ad on health stack exchange. He makes an offer to a specific individual who he already knows.
One thing that we as an industry fail to understand is that we're not special. We desperately claw to it in these conversations. I'm willing to admit it: I'm easily replaceable. Very few of us have any name recognition that exists in other fields. I worked in finance for half a decade before moving to software. When I'd go to interview, people already knew who I was because of the basic human interaction I had as part of my job. When I walk into the door of my next interview, the only thing people know about me is what's on my resume/blog/stack overflow answers.
Personally, I find technical interviews to be a cheap and easy filter. You may not always get the best person from your pool of applicants, but you get someone that's better than most of them. The marginal benefit of one vs. the other is rarely meaningful. OP complained about having to do this, but some of the other applicants might have found it difficult. Sounds like it was successful.
Also, in Australia there is the option to do medicine as a 6 year undergraduate degree, which ends up being a very relaxed and easy going degree. Residency and registrar is a 9 to 5 job, where as a registrar you have a six figure salary.
Of course we also have the 4 year long post graduate medicine degree, which would be much harder of course.
1. I did a simple tech screen (the RPI). 1 hour. My interviewer had a laptop and asked me questions about what to do next in the scenario.
2. Hey, come pair with this engineer on this real code on a real project on a real task.
3. How about lunch?
4. Hey, let's have you pair with this other engineer on real code on a real project on a real task.
5. Get offered a job on my way out the door.
I know from feedback that we don't always do this right, that we sometimes drop the ball, that many people find the RPI or the pairing to be frustrating, intimidating, or uncomfortable.
Most importantly, many people find that they just don't want to work the way we prefer. Which is good! It saves them the unhappiness of committing to a situation that they won't enjoy.
But the core insight is: the best way to see if someone can work alongside us on a real problem is to ask them to come in to work alongside us on a real problem.
It's the best proxy we have short of hiring you. When it's available as an option to do this, I don't understand why anyone would choose a less accurate proxy.
What I've been doing which is similar is hiring the dev on contract for a short term to see if it works out. But I think this approach is much more efficient. Do you get and check references?
Which is fine! There's plenty of variety in this industry to find preferred practices, peers and environment.
I hasten to add that it isn't perfect. We hire across the intro-extraversion spectrum, but there's an ongoing concern that we're biased towards extraverts. Especially in Labs, which is the consulting wing.
Another problem is that many candidates are just plain nervous. We do our best to set people at ease and to be upfront that there's no right or wrong or trick answers. But interviewing is just scary. I expected to fail and so felt no pressure -- had I felt that more was on the line, maybe I'd have done worse.
The third -- this is a very common negative opinion -- is the argument that we don't give candidates a fair opportunity to show their expertise.
We will usually try to assign one project where their résumé claims expertise and another one that they're unfamiliar with. The former to take a sounding of their expertise, the second to get a feeling for their approach to the unknown.
It's not always possible to do this, simply because it's a vast field and candidates come with very varied backgrounds. And those candidates who are declined often feel that we've denied them a fair chance by throwing them into an unfamiliar technology.
These are all fair criticisms. My best answer is: we are not trying to trick or exclude anyone upfront. Ultimately the hiring decision is made by future peers, so we want to be fair but firm.
Hiring is just hard.
I don't understand how I could pair-program effectively on a code base that I'm seeing for the first time and don't know the first thing about. What would you expect me to be able to contribute?
Nobody expects you to magically know anything about code you've never seen before. That would be absurd.
The point is to get a sense of how you think and work as an engineer. We deliberately say that there are no "wrong" answers, that it's impossible to fuck up.
We want to know about the candidate. The decision is about hiring a person, not a list of codebases.
(Sorry for the slow reply, I was rate-limited.)
Without time constraints I am able to solve the problems.
Timed algo assessments are very similar to the exams you write at university: you need to keep practicing to complete those assessments in time and get a "pass mark".
It also seems a bit weird to give senior engineers easier questions. Do you really think people get worse with experience?
I've come to learn coding is the easy part. Where it gets tricky is when you introduce other people.
## Coding Problems:
I do think coding challenges (algorithmic ones) have merit, but there are two factors that really hinder an interviewee's performance:
- the time limit
- requiring an unfamiliar algorithm
A time limit is obviously needed in a test, but problem solving in real life is never instant unless it's been done before. And everyone has their own pace. I have to dedicate hours each day to train my brain to act quicker yet stay calm for these algorithmic questions. Sometimes my initial solution turns out to be the best, but I discard it. And if one is stuck, the time limit just makes it worse. I'm also known to be the slowest in turn-based board games - I have high win rates though!
## Possible Solution?
* Allow the interviewee to pick from a pool of equally difficult challenges to solve (choosing within a minute or two). This solves the "obscure algorithm" or "trick question" problem. It also helps with the time-limit problem, as the interviewee will interpret the question twice in two different vibes (with and without time pressure).
Even choosing 1 out of 2 would dramatically reduce nervousness and time pressure.
Big fan of 'homework' to walk through/extend in the onsite interview. The homework should avoid any UI elements and ideally just talk to a database or another API. Another thing to do is try problems that require the candidate to learn something new like a new language, database design, cryptography, distributed consensus, machine learning, geo-fencing, telephony, etc. Strong candidates learn very quickly and enjoy learning new things which usually becomes pretty quickly evident.
The second one was basically a homework assignment, just timeboxed to 45 minutes or so ("refactor this simple frontend app"). In the following call I expected that we talk about my solution to this task, but we mostly talked about previous projects I've worked on (which was a better use of our time, IMHO).
So I guess what I'm saying is that it is possible to make good use of sites like hackerrank.
The interviewers know "this exact problem" is not generally applicable to their work. They are looking at how you interact with the problem.
Projecting a bad attitude toward being asked to code on a whiteboard is the opposite of what you should do and misses the point entirely. Code golf and competition (right answer in min time) is not the point.
After the application and a brief phone call I was given a take-home project with a one-week deadline. The project was directly related to what the company does, and was interesting and fun to do. I submitted a pull request in four days. The PR was reviewed by their engineering team and they all voted to move me forward. The next step was a work-along day, for which I was paid. These are typically done in person, but for various reasons on both sides we did it remotely. The entire engineering team participated in a dedicated Slack channel as we walked through my homework project, suggested and implemented changes, joked, and in general had a good time. At the end of the day I said my goodbyes, and an hour later the recruiter called to tell me they were preparing an offer.
The advantages of this process should be readily apparent. By the time my first day arrived we already knew we'd get along, approached work in compatible ways, etc. The costs of the process should also be readily apparent, and it would probably be really hard for a larger company to do things this way.
My problem with HackerRank is that the problem statements are often unclear and the deadlines artificially short. I feel like it forces me to write sloppy, unreadable code just to get any solution out the door before the timer runs out.
Personally, after having taken many of these tests myself, if you're having even minor problems, that is a huge red flag for me. They're really just binary filters on basic skill level.
Companies such as Hackerrank have thereby managed to convince employers that they have the solution to all the hiring problems. Essentially a magic wand which would enable them to hire Einsteins.
I find it ironic that almost all companies talk about "culture, family, work-life-balance" yet they treat recruiting as robot selection....
IQ Hazing is a good way to put it...
but how do you measure, "Do I even want to fucking work with these people/this company???"
Technical evaluation needs to evolve.
STDIN is one of the most common input methods I use for most of the commands I interact with on a daily basis. It would be like discovering that somebody found "logging errors to STDERR" to be a "ceremony".
The other commenter (natdempk) makes some good points though.
In the problems where it's not, yeah, it's annoying. But it's also the same one or two lines every time, so it's not a big deal.
While far from perfect, I think these types of systems do have some advantages. Keep in mind, I think they are best used as a tool for pre-screening candidates for graduate positions (where we have a LOT of applicants), or candidates we may otherwise pass on due to the lack of a well-known engineering school or well-known companies on their resume (and I'm sensitive to this given that I moved to SF with neither of these). Also, my company is in a very technical problem space, so we do actually use algorithms + data structures on a daily basis.
* I don't buy the "I have 5 years of experience, I should be exempt from coding in HackerRank / phone screens / on-site technical questions" argument. I've done interviews with many people with years of experience and Senior Engineer on their resume, who are unable to solve trivial problems like finding simple patterns in an array. This might not be the majority, but it's enough to create a lot of noise in resume screening.
* As a hiring manager, my job is to make sure that engineers on our team are not getting pulled from their day to day work to do phone or on-site interviews with sub-par candidates. While lots of people on HN tend to complain about interview processes, the reality is that once you start at a job, most of the time you want to focus on writing code and solving technical problems, not performing multiple phone screens per day. Designing a good interview process involves BOTH creating a good experience for the candidate, and not overwhelming your existing team.
* Certainly a strictly better alternative is take-home challenges (which we used to use, and still do for some candidates). However, to get any valuable information from these (and do justice to candidates who spent a couple of hours building something), an engineer on our team has to spend time unzipping, running, and looking through them, and writing up their thoughts. This might take 30 minutes of their time, and probably an hour or more out of their flow. To do this with more than a couple of candidates per week is not possible (not to mention the fact that, understandably, engineers might not get around to reviewing it for a few days, which is not fair to candidates). For this reason, I think simpler HackerRank-type challenges are a better way of pre-screening candidates.
* As a candidate, HackerRank is one of the easiest possible steps for you to pass. Almost all of the problems are up on their website! They may not be exactly the same as the ones given to you by specific companies, but there is a lot of overlap in these types of questions. If you spend a few hours practicing you will be able to ace almost any HackerRank challenge given to you.
That said, HackerRank is a tool, and I think there are a few implementation details needed to make it work well:
* Many of the suggested questions for candidates are terrible (e.g. "will this code compile", or really unclear problem descriptions). For our quiz, I chose all the questions and answered them myself before ever giving them to candidates. If a company lets their recruiters set up a default quiz, it will be really bad for candidates.
* As I mentioned, we usually use this for grads, or candidates who we are not sure about based on resume alone. If you come in through a referral, cold outreach, or TripleByte (who only work with really high quality candidates) you usually get to skip this step.
* I don't think these systems can ever tell you how good a candidate is. They can and should only be used as a method of filtering out candidates who don't meet a minimum standard. As others have mentioned, writing algorithms is only part of the job of a good engineer, and these tests do nothing to assess your architectural skills, teamwork skills, motivation level, etc. For this reason we only use it as part of our hiring process, as a minimum bar for entry into further interviews.
I'm also constantly looking for ways to improve our hiring process, so open to suggestions to any of the above.
I think many people assume that interviewers are looking for "thought process", but from my experience as a hiring manager for 6 years, in reality you'll find that most are just looking to "gotcha" the candidate. Many interviewers seem to enjoy making a candidate "sweat" as a point of pride. Again, not saying everyone does this, but oftentimes interviewers believe that the harder and more obscure a programming question is, the better.
We should all agree that coding tests are helpful in assessing a candidate's programming capabilities, but not all coding tests are equal.
From my experience, the bad coding tests, the ones with little relation to whether a candidate will do a good job, are the arbitrary ones. For example:
* Implement a merge/quick/radix sort algorithm that you maybe did 10 years ago in college and have never had to do since.
* Implement a linked list/hashmap/some other random data structure in Java, even though you would never write one yourself.
* Write a program to determine whether a string is a palindrome.
* Implement an algorithm to solve this random problem from Project Euler
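To illustrate how small these problems are, here is the palindrome check from the list above as a minimal Python sketch (the normalization rules are an assumption; interviewers vary on whether case and punctuation should count):

```python
def is_palindrome(s):
    # Normalize: lowercase everything and drop non-alphanumeric characters,
    # then compare the sequence to its reverse.
    chars = [c.lower() for c in s if c.isalnum()]
    return chars == chars[::-1]
```

The whole exercise is a few lines, which is exactly the complaint: it tells you almost nothing about day-to-day engineering ability.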
Ones that have worked better try to be comparable to what candidates would actually do at the company:
* FizzBuzz - (While controversial, this helps weed out people who just don't know how to code)
* Build a JSON REST API in whatever language you want to manage groceries in a shopping cart.
* Write a web scraper in whatever language you want to count the most popular words on a website
* Here is a random UI framework that you have never used, use whatever documentation you can find on the web and write a To Do list application with it.
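For reference, FizzBuzz itself is tiny, which is exactly why it works as a floor rather than a ranking tool. A minimal Python sketch:

```python
def fizzbuzz(n):
    # Return the classic FizzBuzz sequence for 1..n as a list of strings:
    # multiples of 3 -> "Fizz", of 5 -> "Buzz", of both -> "FizzBuzz".
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out
```

Anyone who can code at all clears it; its only job is catching those who can't.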
Again, YMMV, and depending on your domain certain questions make more sense to ask than others. If you're interviewing as a researcher for Google/Amazon/IBM/Microsoft, then you actually might need to know how to implement some random sorting algorithm, because you may need to implement it in some new SDK/library. But I don't believe that makes sense for most companies.
If you are a hiring manager, ask yourself this: If you had to run one of your current (positive) team members through your current interview process, would they make it through? Would they say they had a positive interview experience?
What you describe is a horrible approach to interviewing, and points towards dysfunction more than anything else.
In my opinion, the second most appealing non-technical character trait for an engineer is empathy. (Curiosity being #1.) If you have an interviewer who goes on a power trip and actively tries to abuse a candidate, what does that tell you about their company?
Everything is PR, and how we interview engineers tells a lot about how we deal with each other. Would you like to work in a place where being nasty is considered normal - or even desirable?
Couldn't agree more. I tell my employees this religiously.
I would also say that the 'power trip' isn't a boolean characteristic. There are different levels for different employees at companies. I think most of us have probably experienced or known people who tend toward one side of the scale or the other.
1. It doesn't actually test my ability. Most of the time there is a stackoverflow solution and I'm going to just look it up and regurgitate it. I can't remember where I read this but allegedly it took Knuth a day of thinking to come up with the most optimal solution for one of the presented challenges (it was either the maximum subarray sum or stock sell problem).
2. It's all very well preparing for these interviews at my age (under 30, no responsibilities or family, generous severance from my previous employer), but what happens 10 years down the line, with responsibilities and hungry mouths to feed?
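On point 1: the maximum subarray sum problem mentioned there does have a famously compact linear-time solution, usually attributed to Kadane, which supports the commenter's point that such answers are hard to derive live but trivial to regurgitate once seen. A minimal Python sketch:

```python
def max_subarray_sum(xs):
    # Kadane's algorithm: track the best sum of a subarray ending at each
    # position; the overall answer is the best of those running sums.
    best = cur = xs[0]
    for x in xs[1:]:
        cur = max(x, cur + x)   # extend the current run or start a new one
        best = max(best, cur)
    return best
```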
HN moderators are strictly instructed to moderate the site less, not more, when a post says something critical of YC or a YC-funded startup. With a post like this one, we would normally have edited the baity title and downweighted the post for being a rant. But because the rant was against a YC startup, we did neither.
Such a policy doesn't stop people from accusing us falsely, but it does let us answer the accusations in good conscience. I couldn't imagine moderating HN without that.