
(Author of the article here.)

How is this different from HackerRank?

The idea of a work-sample test is that it mirrors the actual work. That's not just for the candidate's benefit; it's also because that's how you get the confidence to let the test results make (most of) the judgement about the candidate.

Lots of companies do take-home tests now! But their processes don't work, because the hiring teams don't actually rely on the tests; the tests are just another hoop candidates have to jump through.

We also tried to build a company around the idea, but that problem, among others, derailed us.



Hey there,

The other co-founder here :). We were actually big fans of Starfighter; it was a huge inspiration to us both when starting this company. I'd still love to talk to you three about it. Sorry for the wall of text; as you might expect, we think about this a lot.

> Why we're different from HackerRank

To answer your question: For one, we don’t do programmatic grading. Just like in real life (well, hopefully anyway), we have other software engineers (our network) give structured evaluations for each work-sample. There's some free-form feedback at the end, including a recommendation about whether the evaluator would proceed with interviewing the candidate.

As you can imagine, different companies have different criteria for what constitutes a good software engineer, so each challenge has company-specific criteria that the overall evaluation is built from. We train our evaluators to ensure that they're evaluating based on what the company's looking for: If the company specifies that documentation is critical but the candidate doesn't provide any, that should be a disqualifying event. To be fair to candidates, each criterion has a public and a private component. Candidates get to see the public view, so there's little ambiguity about what they'll be evaluated on.
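To make that concrete, here's a rough sketch in TypeScript of how criteria with public and private components, plus a structured evaluation, could be modeled. It's purely illustrative; all of the names are hypothetical, not our actual schema:

    // Hypothetical model, not our real schema: per-company criteria with
    // public and private parts, plus a structured per-criterion evaluation.

    interface Criterion {
      name: string;                    // e.g. "Documentation"
      publicDescription: string;       // the view candidates get to see
      privateRubric: string;           // evaluator-only guidance from the company
      disqualifyingIfMissing: boolean; // e.g. required docs that were never written
    }

    interface CriterionScore {
      criterion: Criterion;
      rating: 1 | 2 | 3 | 4 | 5;       // structured rating, not pass/fail
      notes: string;                   // shared back with the candidate
    }

    interface Evaluation {
      scores: CriterionScore[];        // one per company-specific criterion
      freeFormFeedback: string;        // the write-up at the end
      recommendation: "proceed" | "do_not_proceed";
    }

The point is that the output is a set of per-criterion judgments plus a recommendation, not a single number.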

To that end, it's much more tailored to what each company is actually looking for. The problem with solutions like HackerRank is that you just get a number spit out at the end: 100/100 tests passed. Code quality matters in real life, and there are plenty of passing solutions to these challenges that, to be frank, are severely lacking in many other ways. Conversely, if there's an outsized false positive or negative on our end (e.g. a company has severe reservations about a candidate whose technical ability we rated as exceptional), we view that as a postmortem-worthy event.

Because we don't have a singular metric to evaluate candidates on, and because we have real people evaluating their code, candidates also get a copy of their evaluations. Part of the frustration with take-homes is that you might spend 4+ hours on one and just hear a binary "yes/no" back. We provide candidates with actual, actionable feedback, which isn't something HackerRank can really provide (other than "you didn't pass this test case").

Lastly, and we can do this because we rely on human evaluation, we ask questions that are much more reflective of actual work. No tree inversions, no palindrome searches. We might ask candidates to build small applications around an API, or to do some data munging. If they're building an iOS app, we evaluate whether they're using best practices (e.g. if you're not using Auto Layout, there had better be an accompanying explanation). We almost always require documentation of some sort, and things like write-ups play a significant role in the evaluation (usually, again, depending on the company).

> Fixing the hiring process

To get back to your point about the hiring process: because we act as a forcing function for companies to really home in on what's important to them, it's much easier for them to integrate the take-home into their in-person interview.

We explicitly tell this to companies we enter discussions with: We expect one of their in-person interviews to be a code review of the take-home. Companies we talk to are mostly thrilled with that idea, because it takes the pressure off of coming up with yet another interview question. Candidates love it because it's significantly easier to talk about a piece of code you wrote than it is to do a whiteboard question. It also "solves" the cheating problem - if you're forced to talk about the code, it usually becomes pretty obvious if you didn't actually write it.

We structure our challenges and evaluations to feel like a code review, so they blend nicely into the in-person interview, where a company can do a real code review with the candidate: what did you do well, what would you extend further, why did you do it this way? We can dig into those nuances with our challenges; that's really hard to do with a HackerRank question where the primary metric is "Did you pass these tests?".

Whew.

If you’re interested in chatting more, we’d love to talk! You can reach me at wayne@headlightlabs. Would definitely love to pick your brain about Starfighter.



