The Hiring Post (2015) (sockpuppet.org)
44 points by mmt on Oct 7, 2018 | 17 comments


"You have to design a test, But even the flimsiest work-sample outperforms interviews, so the effort pays dividends immediately. create a scoring rubric, and iterate. But even the flimsiest work-sample outperforms interviews, so the effort pays dividends immediately."

My cofounder and I believe so strongly in this idea that we started an entire company meant to level-up the tech industry's hiring process. (https://www.headlightlabs.com)

We've designed a series of concrete and practical technical challenges, rubrics with established criteria that are visible to candidates, and a consistent process for evaluating submissions. Candidates get constructive feedback and companies get a fast, fair tech screen.


(Author of the article here.)

How is this different from HackerRank?

The idea of a work-sample test is that it mirrors the actual work. That's not just for the candidate's benefit; it's also because that's how you get the confidence to let the test results make (most of) the judgement about the candidate.

Lots of companies do take-home tests now! But their processes don't work, because the hiring teams don't rely on the tests; the tests are just another hoop candidates have to jump through.

We also tried to do a company based on the idea, but that problem, among others, derailed us.


Hey there,

The other co-founder here :). We were actually big fans of Starfighter, it was a huge inspiration to us both when starting this company. I'd still love to talk to you three about it. Sorry for the wall of text, we think about this a lot as you might expect.

> Why we're different from HackerRank

To answer your question: For one, we don’t do programmatic grading. Just like in real life (well, hopefully anyway), we have other software engineers (our network) give structured evaluations for each work-sample. There's some free-form feedback at the end including a recommendation about whether they'd proceed with interviewing a candidate.

As you can imagine, different companies have different criteria for what constitutes a good software engineer, so each challenge has company-specific criteria that make up the overall evaluation. We train our evaluators to ensure that they're evaluating based on what the company's looking for: if the company specifies that documentation is critical but the candidate doesn't provide any, that should be a disqualifying event. To be fair to candidates, each criterion has a public and a private component. Candidates get to see the public view, so that there's little ambiguity about what they'll be evaluated on.
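To make the public/private split concrete, here is a minimal sketch of how such a rubric might be modeled. The names and fields are hypothetical illustrations, not Headlight Labs' actual schema.

    # Hypothetical model of a per-company rubric whose criteria each have a
    # public half (shown to candidates) and a private half (evaluator-only).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Criterion:
        name: str                  # e.g. "Documentation"
        public_description: str    # visible to the candidate up front
        private_guidance: str      # evaluator-only scoring notes
        disqualifying_if_absent: bool = False

    @dataclass
    class Rubric:
        company: str
        criteria: List[Criterion] = field(default_factory=list)

        def candidate_view(self) -> List[str]:
            """Only the public halves, so candidates know what they'll be
            evaluated on without seeing evaluator-only guidance."""
            return [f"{c.name}: {c.public_description}" for c in self.criteria]

    # Example: a company that treats missing documentation as disqualifying.
    rubric = Rubric(
        company="ExampleCo",
        criteria=[
            Criterion(
                name="Documentation",
                public_description="Include a README covering setup and design decisions.",
                private_guidance="No README at all is a disqualifying event.",
                disqualifying_if_absent=True,
            ),
        ],
    )
    print(rubric.candidate_view())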

As a result, it's much more tailored to what each company is actually looking for. The problem with solutions like HackerRank is that you just get a number spit out at the end: 100/100 tests passed. Code quality matters in real life. There are plenty of passable solutions to these challenges that, to be frank, are often severely lacking in many other ways. Conversely, if there's an outsized false positive or negative on our end (e.g., a company has severe reservations about the technical ability of a candidate we rated as excellent), we view that as a postmortem-worthy event.

Because we don't have a singular metric to evaluate candidates on, and because we have real people evaluating their code, candidates also get a copy of their evaluations. Part of the frustration with take-homes is that you might spend 4+ hours on one and just hear a binary "yes/no" back. We provide candidates with actual, actionable feedback, which isn't something that HackerRank can really provide (other than "you didn't pass this test case").

Lastly, and we can do this because we rely on human evaluation, we ask questions that are much more reflective of actual work. No tree inversions, no palindrome searches. We might ask candidates to build small applications around an API, or to do some data munging. If they're building an iOS app, we evaluate whether they're using best practices (e.g., if you're not using Auto Layout, there had better be an accompanying explanation). We almost always require documentation of some sort, and things like write-ups play a significant role in the evaluation (usually, again, depending on the company).

> Fixing the hiring process

To get back to your point about the hiring process: because we act as a forcing function for companies to really home in on what's important to them, it's much easier for them to integrate the take-home into their in-person interview.

We explicitly tell this to companies we enter discussions with: We expect one of their in-person interviews to be a code review of the take-home. Companies we talk to are mostly thrilled with that idea, because it takes some stress off them to come up with another interview question. Candidates love it because it's significantly easier to talk about a piece of code you wrote than it is to do a whiteboard question. It also "solves" the cheating question - if you're forced to talk about the code it usually becomes pretty obvious if you didn't actually write it.

We structure our challenges and evaluations to make them feel like a code review, so they blend nicely into the in-person interview, where a company can do a real code review with the candidate: what did you do well, what would you extend further, why did you do it this way? We can dig into those nuances with our challenges; that's really hard to do with a HackerRank question where the primary metric is "Did you pass these tests?".

Whew.

If you’re interested in chatting more we’d love to talk! You can reach me at wayne@headlightlabs. Would definitely love to pick your brain about Starfighter.


How do you prevent cheating?


How do you know, in your in-person whiteboard interview, that the candidate isn't wearing an earpiece and a microphone, and someone else is whispering a solution, and convincing explanatory patter, into their ear? Do you make them whiteboard in a special room that's set up as a Faraday cage?


Great question. It hasn't come up as a problem so far, but given that the problems are pretty open-ended (interact with an API and display the results in the browser), it's a lot more obvious when you've copied someone, since there are so many possible ways to do this.

We also ask candidates to submit a writeup and the expectation is that they'll discuss their solution with the company in the next stage, so if someone cheats, it'll be pretty obvious in that call.


Why can’t a candidate just have somebody else do the problem and the follow up call? How do you verify identity?


Do me a favor and give me your answer to this:

https://news.ycombinator.com/item?id=18164600


I’m not sure if you meant to link to something relevant, but I’ll reply anyway.

It should be obvious but having someone use a computer and talk on the phone is much less involved than showing up in person and using an earpiece. Also, I’m not sure how awkward these people normally are, but the in-person faker would be astoundingly obvious.


> but the in-person faker would be astoundingly obvious

Yet, based on your previous comment, you think the person will be able to BS their way through a conversation with you about code they didn't write and don't understand. I don't see how you can hold both of those positions simultaneously.


The person who stands in and does the exercise also does the follow up call.


Do they also do the on-site interview?

Look, I've done a lot of interviews as the interviewer. This attitude that we have to design processes as if there are five hundred quintillion octillions of googolplexes of novemdecillions of cheating lying "fake coders" for every one qualified person is not rooted in reality, nor is the level of paranoia you're displaying. There are perfectly reasonable processes you can use which will catch a cheater if you actually get one. But designing a process entirely around the assumption that everyone is a cheater trying to lie their way into your company is just not useful.


Hey, Jason's co-founder here.

In general I think this fear about candidates cheating is mostly overblown: I don't think it happens anywhere close to the frequency that people worry about it.

My counterpoint is usually this: People can cheat on the technical phone screen now. The chances of someone being caught out for doing so are pretty low; "I had a cold" is almost certainly enough to ward off suspicion if you even remember their voice clearly enough to begin with. Further, a discussion about the take-home during the in-person interview (which we highly recommend) will pretty quickly suss out whether someone cheated.

There are solutions that purport to solve this cheating problem, but tbh they're really gross in my opinion (and of course, the really motivated cheaters will find a way around it). I refuse to ever install a rootkit on someone's computer in order to catch cheating. Better a thousand guilty people go free, etc. etc.


After a few months of "running the gauntlet" myself, I found this to be an extremely enlightened and reflective article on the interviewing process. It's valuable advice that hiring managers would be wise to consider. Once I am in a position to make the recommendation at my next job, I will do so whole-heartedly.


Anyone happen to know what the "$80 in books" mentioned were (or are now if it's still a thing)?

It's probably a useful exercise to come up with the list of books you would like every applicant to have read before interviewing, even if you don't provide them, and seeing others' lists would be interesting.


From the original March 2015 discussion, here is a version of the list, retrieved from the Wayback Machine:

https://web.archive.org/web/20170411010702/https://www.amazo...

https://news.ycombinator.com/item?id=9160014




