The code review take-home technical interview

Technical interviews are so busted.

Here's Laszlo Bock, former SVP of People Operations at Google, summarizing a meta-analysis of 85 years worth of research:

    The best predictor of how someone will perform in a job is a work sample test [...] This entails giving candidates a sample piece of work, similar to that which they would do in the job, and assessing their performance at it.

Yet...when it comes to software engineering interviews, how many companies even try to give candidates a work sample test that is similar to the job? And how is it possible to reconcile what Bock is advocating with the actual interview process used by Google and other large tech companies?

Regurgitating algorithms on the spot, whether onto a whiteboard or a shared screen, is highly unlikely to be "similar to that which they would do in the job." And the same goes for other widespread methods of assessment like brainteasers. They're "similar to the job" only to the extent that one day, the candidate might be on the other side of the table, quizzing people on the same mumbo jumbo.

SIDEBAR: for one hard-to-unsee interpretation of why companies continue using these methods, check out the Wikipedia page on hazing.

If you're sold on the idea of at least trying to shoot for a representative work sample, because you want a relatively strong predictor of job performance, this post shares a recipe for a take-home technical interview based around code review. It's inspired by an actual interview I participated in, as a candidate, and was impressed with.

Here are all the things I'm not saying:

  • that this should be the only kind of technical interview
  • that this on its own is sufficient to make a hiring decision
  • that this approach is perfect
  • that this approach is easy to do well
  • that the company should be the one who chooses the type of interview (why not give the candidate a choice?)

My only claim is that this is the closest thing I've seen to a representative work sample, and as a candidate, I felt like I had an opportunity to demonstrate some of my strengths that are actually relevant to the job. Many companies could learn from it.

The format is a take-home test, so let's talk about those for a second.

Take-home tests are promising because they allow the candidate to work in a more naturalistic environment. The artificial social element of coding on the spot for strangers is removed.

But some candidates will spend a LOT of time on these tests, like days, because they can afford to, and those who can't or won't spend that extra time will be at a big disadvantage.

It seems to me that you can lessen this problem by making the task strictly time-bound, like a few hours max, and letting the candidate choose when the time frame starts. And if you really want to level the playing field (crazy idea, I know), consider paying the candidate to complete the work!

First: an example

Here's an overview of the actual interview I participated in.

My job was to provide a PR review within a fixed time frame. I chose when the time frame started, and at that time I was granted access to a private repository on GitHub.

It was a fork of an open source project (unfamiliar to me) with a similar tech stack to what the job would involve.

The repo had one pull request against it. The PR consisted of a small set of changes (like, less than 20 lines) with all kinds of problems:

  • the sole comment was basically like "hey this thing is wrong, great news I fixed it!"
  • the code (and the comment) demonstrated little understanding of the underlying problem and didn't communicate context around the approach
  • the solution contained multiple show-stoppers: syntax errors, code that would blow up at runtime, code that probably didn't do the desired thing
  • there was no automated or manual validation that the "solution" worked
  • the project itself (outside the PR) wasn't set up to automatically surface the test failure or the syntax errors, etc.
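For a flavor of what that can look like, here's a hypothetical diff in the same spirit (invented for illustration, not the actual PR from my interview): a tiny change that still manages to pack in an overconfident framing, a syntax error, a runtime bug, and zero validation.

```diff
 def fetch_with_retry(fn, attempts=3):
-    return fn()
+    # Fixed the flaky network calls! Works great now.
+    for i in range(attempts)   # missing colon: this won't even parse
+        result = fn()
+        if result.ok:
+            return result
+    return results             # NameError: 'results' was never defined
```

A changeset this small keeps the reading load low while leaving plenty to talk about at every level, from syntax up to "wait, should we be retrying at all?"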

Why this is smart

This is smart because code review is...what actual work is like! It gives a glimpse into how the candidate approaches problems and how they communicate. Which is probably a more relevant signal than the way the candidate communicates over recruiting emails, or the way communication is "evaluated" during whiteboarding.


I love the idea of using a PR that is busted in all kinds of ways because it is open-ended – the candidate can expand on whichever problems they gravitate toward, with whatever technical depth they are capable of and feel is appropriate.

Perhaps most importantly: it offers a quick way to see whether someone is capable of reviewing an unacceptable PR without being an asshole (you might be surprised!).

Finally, the exercise provided a good jumping-off point for the next technical interview, where I met with a couple of engineers and we discussed my PR review.


In summary, the basic formula is:

  • Spin up a disposable, private repo that contains a PR with a small changeset and a BUNCH of problems, micro and macro.
  • Choose an open source project and craft the PR such that the candidate doesn't need familiarity with the project to be able to meaningfully review the changes.
  • Ask the candidate to review it during a fixed time frame of their choosing. Communicate that it's OK that they won't know the code well; they should just review it as best they can from where they are.
  • Pay the candidate, especially if the time is expected to be more than an hour or two!
  • Grade the PR review blindly (the person grading shouldn't see the candidate's name).
  • Grade the PR review according to a predetermined rubric. Run the work sample test against your current team members to work out kinks and develop the rubric.
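To make the last two steps concrete, here's a minimal sketch of what a predetermined rubric could look like in code. The criteria and weights are my own invention, not from the interview I took, and a real rubric would be tuned by running it against your current team members first.

```python
# A minimal weighted rubric for grading a PR review exercise.
# Criteria and weights are illustrative assumptions, not a standard.
RUBRIC = {
    "found_syntax_errors":     2,  # caught code that won't even parse
    "found_runtime_bugs":      2,  # caught code that parses but blows up
    "questioned_the_approach": 3,  # pushed back on the premise of the change
    "suggested_validation":    2,  # asked for tests or manual verification
    "tone_was_constructive":   3,  # critical without being an asshole
}

def score(review: dict) -> float:
    """Return a 0-100 score from a dict mapping criterion -> met (bool)."""
    total = sum(RUBRIC.values())
    earned = sum(weight for crit, weight in RUBRIC.items() if review.get(crit))
    return 100 * earned / total
```

With this shape, "working out the kinks" is just running `score` on your teammates' practice reviews and adjusting the criteria and weights until the ordering matches your considered judgment.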

What technical interview formats have YOU found to be the least terrible?
