Hey Bill! Rating aside, I'm a big fan of Ad Hoc's homework assignments. Here's what I love about them:
1. Your team has made them public for others to use. This is HUGE.
2. The library is organized, and the prompts are generally clear and explicit about what's required to complete them.
3. You offer support for candidates with questions about the assignment.
4. Kathy Keating told me about the grading environment you all use with rubrics, blind grading, and rotating cohorts of graders. We actually created similar tooling for the teams we work with.
I also want to explain why they're rated as 3 stars (and why I think this undersells how good they are). When we designed the rating criteria, it was really important to recognize tests that could extract hiring signal without requiring a ton of time from candidates, since that's one of the biggest issues that candidates face. So one of our criteria was "setting clear expectations for candidates (e.g. time expectations)" and another was that the time requested was "reasonable...(<4 hours)". So a test that stated upfront that it would take 8-10 hours would meet only one of those criteria. You can see the full rubric if you hover over the "5-star scale" text in the sub-header.
Unfortunately, the Ad Hoc tests don't specify time anywhere (technically missing both of these criteria) while doing many other things well that aren't captured in our rubric (e.g. candidate chat tool, blind grading). I admit that our rubric isn't perfect and it feels like the Ad Hoc tests are being doubly penalized for something that's easy to fix. In fact, if you add this to the tests, I'd happily update these to 5-stars.
Finally, if you're up for a chat sometime, I'd love to meet you. I really appreciate the work that your team has done to improve the hiring experience for candidates beyond those at Ad Hoc! You can reach me at alex@trytapioca.com
Sorry, looks like we didn't tag this one sufficiently! We're updating this now. There was a TON of content to filter through - we're doing our best :)
I also think there's a surprising number of edge cases that need to be considered for this challenge. Consider cars that were already parked at the start of the time range, ones that were entered during the range and never left, etc. So I think it does require familiarity with SQL.
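To make the edge cases concrete, here's a minimal sketch of the kind of overlap logic I mean, using a hypothetical `sessions` table (schema and column names are my invention, not from the actual challenge) where `exited_at` is NULL for cars that never left:

```python
import sqlite3

# Hypothetical schema: one row per parking session.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sessions (
        car TEXT,
        entered_at INTEGER,  -- unix epoch seconds
        exited_at INTEGER    -- NULL if the car never left
    )
""")
conn.executemany(
    "INSERT INTO sessions VALUES (?, ?, ?)",
    [
        ("A", 50, 90),     # entered and left before the range: excluded
        ("B", 80, 150),    # already parked when the range started: included
        ("C", 120, None),  # entered during the range, never left: included
        ("D", 250, 300),   # entered after the range ended: excluded
    ],
)

# Cars parked at any point during [start, end): the session must begin
# before the range ends AND must not end before the range starts.
start, end = 100, 200
rows = conn.execute(
    """
    SELECT car FROM sessions
    WHERE entered_at < :end
      AND (exited_at IS NULL OR exited_at > :start)
    ORDER BY car
    """,
    {"start": start, "end": end},
).fetchall()
print([r[0] for r in rows])  # ['B', 'C']
```

The naive `WHERE entered_at >= :start` query misses car B entirely, and forgetting the `IS NULL` branch drops car C, which is exactly why the challenge rewards real SQL familiarity.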
Did you disagree with the tagging, the stars, or both?
I agree! It's great when companies take the time to do this. Though it's tough when it's been a few months/years and the code hasn't been maintained. You'll see some repos in the library that are up to 8 years old.
One of the ideas my team brainstormed is to create and maintain a library like this that engineering teams can rely on.
Did the companies tell you no when you asked them if you could share a previous project instead of completing theirs? From talking to many hiring managers, they also don't want to waste your time, and most will be happy to accept an alternative if it shows similar skills to the ones they're looking for. I'm even seeing more teams offering this as an explicit option now.
There will always be a few who say no (IMO this could be a warning sign), but it doesn't hurt to ask :)
I blame the legal teams at large companies who are worried that a well-meaning hiring manager will give feedback that could be used in a lawsuit.
Love that your approach focuses on understanding thought process! IMO many companies focus too much on raw technical skills, when softer skills like attitude may be more predictive of on-the-job performance. My team is working to elicit the same signal in a take-home format (since it's more scalable), but I think the best is a combination of the two: short (~1 hr) take-home + follow-up live session on the work that was already started.
Haha yeah we implement 'soft time-boxing' by tracking git commit timestamps. It's not as stressful as having a visible timer and we won't block a submission that exceeds the recommended time, but reviewers can clearly see who took longer and where each candidate's last commit fell relative to the suggested window, which helps create an apples-to-apples comparison.
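Roughly, the reviewer-side check looks something like this (a sketch, not our actual tooling; the helper name and report fields are made up, and timestamps would come from something like `git log --format=%aI`):

```python
from datetime import datetime, timedelta

def time_box_report(commit_times, suggested=timedelta(hours=1)):
    """Summarize commit activity against a suggested time box.

    Nothing is blocked; this only tells reviewers how much of the
    work landed inside the suggested window, measured from the
    first commit.
    """
    times = sorted(commit_times)
    deadline = times[0] + suggested
    inside = [t for t in times if t <= deadline]
    return {
        "total_elapsed": times[-1] - times[0],
        "commits_inside": len(inside),
        "last_commit_inside": inside[-1] if inside else None,
    }

# Example: three commits, the last one well past the 1-hour suggestion.
commits = [
    datetime(2024, 5, 1, 9, 0),
    datetime(2024, 5, 1, 9, 40),
    datetime(2024, 5, 1, 11, 30),
]
report = time_box_report(commits)
print(report["commits_inside"], report["total_elapsed"])  # 2 2:30:00
```

The key design choice is that the window starts at the first commit rather than at assignment delivery, so candidates who read the prompt days in advance aren't penalized.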
No candidate wants to be entered in the Hunger Games for who has the most time to sink into a take-home.
How would a take-home compare to a hybrid format? For example, if you were given a 1-hour take-home followed by a live session asking about your thought process and then pairing to extend what you previously worked on.
I think a hybrid format is the right way to do take-homes. The candidate is less likely to resent the time invested in the take-home if they have a chance to discuss or "show off" their work. It also gives candidates who have less free time to complete a polished solution the chance to say "if I had more time, I would [add tests, optimize this function, ...]."
I still struggle with the "observer effect" during live extension exercises, but it's less pronounced than when starting with a problem from scratch. If I prepare a high quality solution in advance and do well in technical discussions, that's usually enough to offset any fumbling around during the live programming session due to interview jitters.
I'm a big fan of live exercises too, mainly because they're a great way to see how candidates think and how they collaborate with others. There are only a couple of tradeoffs with this format (every approach has them): live exercises can be more stressful, and teams with many applicants won't have the bandwidth to offer them to everyone who could be qualified.
My favorite is a combination: short (1 hr) take-home followed by live discussion/pairing with anyone who does a half-decent job. It reduces stress because candidates will already be familiar with the code (they wrote it!) while being efficient with time.