I know this has nothing to do with the question in the title, but I found it interesting.
(The latter issue with fake candidates is a real problem: our company gets literally hundreds of applicants who will get on a call with you, claiming to be in the US, but are actually in something like a call center, where you can hear other people doing interviews in the same room if the candidate's noise cancellation isn't turned on.)
But agree it’s shitty that they didn’t at least acknowledge your work.
Edit: adding additional context. Any company working with a recruiter will constantly be told that it should simplify and shorten the interview process, because otherwise it risks losing candidates to other companies. Recruiters put extreme pressure on hiring managers to make the interview process as easy as possible. I’ve hired a lot of engineers via recruiters, and none of the recruiters I’ve worked with would have allowed me to give a 10-hour assignment. They likely would have dropped me as a client.
That might be true. But the primary beef is with the average case, not the tail end.
The main point is: one way or another, a lot of these companies end up giving out assignments that just take way too much time to get reasonably correct. More specifically, an average of about 2x the time estimate that apparently just ... popped into their head when coming up with the assignment (and which is of course almost never verified in any way other than handing the assignment out to a bunch of candidates and seeing what happens).
Along with the clunky websites ("Hey, could you fill out the exact start and end dates for every position you've ever had, and pick the country from this big long drop-down, even though it's always the same country for every job? Along with the supervisor's name and the reason for leaving? It's not like we really care, or anyone's likely to even look at these fields; we're just asking because, like, umm, they just changed us over to this new ATS and we haven't bothered to figure out how to configure it yet"), the janky communications ("An update on your application ... except it's just a link to a survey we'd like you to fill out, because we know you really care about our application process and have lots of free time you're happy to donate to us"), and all the other delays and silliness ...
And without even getting into the ghosting rate (even though they all insist they strive very hard not to do this) --
It all just takes up way too much time.
I’ve declined time-consuming tests when I thought it either wasn’t a good match, or I got the impression that there were a ton of candidates and my odds were low.
There was never a question (by us) about it being ethical. There was an initial engineering phone screen before we asked people to do this so there was serious intent to move to the next stage before the ask. We were also so specialized that we’d only get a couple of candidates per month get to that stage.
From an employer’s perspective I wouldn’t hesitate to do it again, but now you have to find things ChatGPT can’t easily do, which is getting harder. Someone who knows how to use ChatGPT/Claude well could be a good find if they know the fundamentals underpinning it.
Absolutely not.
Leetcode-ish problems and strict timeboxing are awful and can't possibly provide useful signal beyond "can program in some manner". Nobody can do their best work in one timed, limited hour, confined to a web IDE that isn't their dev environment, with no looking anything up, no progress on part 2 without completing part 1, and similar unrealistic restrictions.
They encourage the worst in coding. Globals, dumb temporary names, no comments and done-vs-maintainable style? Ship it. I only need to deal with this code for an hour and then it's thrown away. I'm not going to make my `important_thing_to_remember` variable anything longer than `i`, and I'm going to use `foo[0]` from that ridiculous regex I bodged together instead of splitting it up and building it from pieces where I name the capture group so Future Me can understand it.
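To make the regex point concrete, here's a small sketch (the log-line format and group names are invented for illustration) contrasting the throwaway `foo[0]` style with named capture groups that Future Me can actually read:

```python
import re

line = "2024-01-15 ERROR disk full"

# Throwaway style: positional groups, opaque to anyone reading it later.
m = re.match(r"(\S+) (\S+) (.*)", line)
date = m.group(1)  # which group was the date again?

# Maintainable style: the pattern is built from named pieces,
# so each capture documents what it holds.
log_pattern = re.compile(
    r"(?P<date>\d{4}-\d{2}-\d{2}) "
    r"(?P<level>[A-Z]+) "
    r"(?P<message>.*)"
)
m = log_pattern.match(line)
print(m.group("date"), m.group("level"))  # prints: 2024-01-15 ERROR
```

Under hour-long timed conditions, nobody bothers with the second version; a take-home with room to refactor is where it shows up.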
I'd much rather have a test for 1h of reasonable work, and let me take 2h if needed to solve it and then refactor to make it maintainable.
In the US in almost all cases the interviewee will not be an employee of the company and writing the code within the scope of their employment, and so the code will not automatically be a "work for hire".
That means that the interviewee will own the copyright of the code they write. This is something you will want to think about if you are the employer and considering coding requirements during interviews whether take-home or in-person.
It could be bad news if some interviewee's code ended up in production (accidentally or on purpose) and you did not own the copyright.
If you want to own the code that interviewees write, there are two approaches. The most reliable approach would be to get them to sign an agreement assigning the copyright of anything they write for the interview to you.
This will require a contract so don't just wing it. Get your company's lawyer to write it.
The other approach is to try to make it a work for hire. If you succeed then the copyright is yours as soon as the interviewee writes it.
For this you need three things:
1. A written agreement signed by the interviewee and the company saying that the code will be a work for hire.
2. The code must be specifically ordered or commissioned.
3. It must fall into one of 9 specific categories of works.
#1 should be easy, but get your lawyer to write the agreement. #2 should also be easy if you do a decent job of specifying the assignment you give the interviewee.
#3 might be difficult. For a long time software was not thought to fall into any of the categories and so could not be a work for hire unless developed by an employee. But a few years ago some courts decided that it could fall into a couple of them. I haven't kept up with developments since then and so don't know if this is now settled.
The usual recommendation I've seen is to have your work-for-hire agreement also include a copyright assignment agreement, in case the work turns out not to be a work for hire.
In our experience, not everybody gets the code correct (at least according to our test cases). It then turns into an interesting exercise to ask the candidate how they would fix the code to deal with the failing case.
When I was at a startup consulting company, candidates were asked to prepare a talk on a topic of their choice for 20 minutes and allow another 10 minutes for questions. I suspect that many candidates did not realize why they were being asked to do this: we needed to know whether they could get their message across in a limited amount of time and manage an audience that was throwing them off track. The smarter ones would realize when they were being sidetracked and ask, "Can we deal with these questions later, as I'd like to be respectful of your time?"
I think that all candidates should realize that in nearly all cases, they are being asked questions, and to do things, for a reason. It may not be the reason that it appears to be on the surface.
I do have one serious question: how to decide what good code is for an interview? It's already highly opinionated, but I can write code that's all dependency injection to the extreme and it would be considered overengineered. I could write a single file project with no DI and it would be ok (one off, 2 hours max), but it would not simulate a real situation. These two things are both valid approaches but they pull the code in opposite directions.
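A minimal sketch of the two poles (class and function names invented for illustration): the extreme-DI version is what a "real codebase" reviewer might expect, the hard-wired one is what a two-hour one-off actually warrants.

```python
# Extreme-DI style: every collaborator injected. Testable and flexible,
# but arguably overengineered for a small take-home.
class ReportService:
    def __init__(self, fetcher, formatter):
        self.fetcher = fetcher
        self.formatter = formatter

    def run(self):
        return self.formatter(self.fetcher())


# One-off style: dependencies hard-wired. Perfectly fine for a
# throwaway script, but doesn't resemble production code.
def run_report():
    data = [1, 2, 3]  # stands in for a real data fetch
    return ", ".join(map(str, data))


service = ReportService(
    fetcher=lambda: [1, 2, 3],
    formatter=lambda d: ", ".join(map(str, d)),
)
print(service.run())   # same output as run_report()
```

Both produce the same result; which one "good code" means depends entirely on which context the reviewer has in mind, which is why stating the expectation up front matters.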
In a real scenario, we have a fixed deadline so it is possible to determine how many design points are available and work on those.
A fair system would be a test-employment contract of some kind, where it's understood there are no rights beyond payment for the assessment task.
We’ve had more success with a simple timeboxed coding exercise for junior developers: implement a simple wireframe design of a single page that downloads some weather data from an API and shows it. Developers may build it in their favorite library/framework.
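The core of such an exercise is tiny. A sketch of the shape, with the API response stubbed out (the endpoint, field names, and city are hypothetical; a real exercise would fetch live data and render it in the chosen framework):

```python
import json


def fetch_weather_stub():
    # Stand-in for an HTTP call to a weather API; the JSON shape
    # here is an assumption for illustration.
    return json.loads('{"city": "Berlin", "temp_c": 7.5, "condition": "Cloudy"}')


def render_weather(data):
    # Produce the one-line summary the single-page wireframe asks for.
    return f"{data['city']}: {data['temp_c']}°C, {data['condition']}"


print(render_weather(fetch_weather_stub()))  # prints: Berlin: 7.5°C, Cloudy
```

The point of the exercise isn't the fetch itself but how the candidate structures error handling, loading states, and the UI around it.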
It's a question of who benefits from these tests.
Clearly, the company is in the driving seat, and should therefore be the one committing resources, or at least committing an equal amount of effort/resource by paying for the tests to be performed.
In one instance, for one of their engineering positions, they requested ~40 hours' worth of work: a combination of system administration, development in C/Go, and uploading the results to a git server where they could review the work.
At the time, I was a full-time student and declined given time constraints.
The test is easy: would the person giving the assignment themselves be annoyed, inconvenienced, and feel pressured if they were given a similar assignment when looking for a job? You know, "do unto others ..."
I made sure to do the challenge too both to have a reference and to validate it.
Not sure I agree about free work. There are things you can't possibly evaluate in an interview. For a senior role, I'd expect good documentation skills and the ability to present tradeoffs of their designs, which is the purpose of these technical steps.
1. Make a VERY lucrative, well-paid dev job posting (completely fake).
2. Make them do a home assignment that is actual work that needs to be done. Make sure the candidate understands how important this assignment is for their chance of getting hired.
3. Tell the candidate you are not moving forward with further interviews since their work was shite.
4. Profit!!!
That being said, they’re a bit challenging to manage on the employer side for the same reason as hacker rank - maintaining a pool of high quality questions that haven’t leaked is hard.
As a proxy I usually instead give design questions about relevant difficult problems I’ve seen and solved at work. Even if I blog about it I’ll either be able to tell if the person is parroting the solution back to me or legitimately solving it on the spot and it’s something I know inside and out well enough that I can give answers to hypotheticals they ask me.
Still, figuring out who’s going to be a good hire is a challenge whatever tools you apply, so I haven’t gotten too invested in any one technique. Bad review processes are much worse, and orgs are stuffed with them despite best efforts (lots of subtle social-pressure mechanisms, with people oblivious to what’s going on or not caring).
Next question?