Once your code is written, you don't just lift your pen, say "I'm ready" and leave. A great next step is to verify it with several small test cases. Even better, tell your interviewer how you would unit test your code extensively, beyond these sample tests.
Showing that you know and care about testing is a great advantage. It demonstrates that you understand the important role testing plays in the software development process. It gives the interviewer confidence that once hired, you're not going to write buggy code and ship it straight to production; instead, you're going to write unit tests, review your code, work through example scenarios, and so on.
Before we go into what makes a good test case, let's make one clarification about testing at interviews. There are two kinds of testing you may be asked to do (or decide to do yourself) at an interview.
We call the first kind "sample" tests: these are small examples that you can feed to your code and run through it line by line to make sure it works. You may want to do this once your code is written, or your interviewer may ask you to do it. The key word here is "small": not "simple" or "trivial".
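For illustration, here is a minimal sketch of what that looks like; the binary_search function is a hypothetical example, and any solution you write works the same way:

```python
def binary_search(items, target):
    """Return the index of target in sorted items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# A good sample test: small enough to trace line by line,
# yet not trivial (not empty, not a single element,
# and the target is not in an "easy" position like index 0).
assert binary_search([2, 4, 7, 9, 12], 9) == 3
```

Tracing this one assert by hand takes a minute or two, but it exercises the loop, the midpoint update, and the successful-return path all at once.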
Alternatively, your interviewer may ask you to do more "extensive" testing, where they ask you to come up with some good test cases for your solution. Typically, you will not be asked to run them step-by-step. It is more of a mental test design exercise to demonstrate whether you're skilled at unit testing your code.
Fundamentally, these are very different. We'll address each kind separately.
Let's start with the bigger topic: what makes for a good test case. If you were to compile an extensive set of tests for your solution, what should these be? Here is a good list to get you started:

- Edge cases: inputs at the boundaries of the problem, such as empty inputs, single elements, extreme values, or inputs with no valid solution.
- Non-trivial functional tests: realistic inputs that exercise the core logic of your solution.
- Randomized tests: generated inputs, usually checked against a brute-force reference or a known invariant.
- Load tests: the largest, most resource-demanding inputs the problem allows, to check performance.
A good "test set" is a well-balanced combination of the above types. It will include tests that cover most edge cases, a few non-trivial functional tests, and then a series of random tests. These will make sure the code is solid and functionally correct. Finally, some load tests make sure the algorithm works well on the largest and most resource-demanding tests.
Sample tests are the small tests you run your code on during the interview to make sure it is solid. Now that we've covered the major kinds of tests you may design, which ones should you use as sample tests?
Typically, we stay away from randomized tests and load tests during interviews, for the obvious reason that they can't be run by hand. Instead, we like choosing a small-scale version of a non-trivial functional test, to make sure the code does the right thing. Then, we look at how the code would react to several edge cases, and finally think about whether the code behaves correctly if no solution can be found.
This combination (non-trivial functional + edge + no solution) tends to be the most effective. For the time it takes to design the tests and trace them through your code on a sheet of paper, it gives you the highest certainty in your code.
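Concretely, reusing the hypothetical two_sum from the sketch above, that combination can be as small as three hand-traceable checks:

```python
# Non-trivial functional: a pair exists, but not in an obvious position.
assert two_sum([8, 2, 11, 7], 9) == (1, 3)

# Edge case: the smallest input that can still contain a pair.
assert two_sum([4, 4], 8) == (0, 1)

# No solution: the code must report "not found" rather than misbehave.
assert two_sum([1, 2, 4], 100) is None
```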
There are three good practice activities that will significantly improve your testing skills.
No matter how you prepare, the key message here is not to skip testing your solution. Reading your code once it's written and trying it out on a well-chosen test case will do wonders, catching everything from silly typos to actual bugs. Choose the test cases carefully, so that this does not turn into a huge time sink. You will learn to do that with some practice.
In this section you learned:

- The difference between small sample tests you trace at the interview and more extensive test design.
- The major types of test cases: edge cases, non-trivial functional tests, randomized tests, and load tests.
- Which sample tests to choose: a non-trivial functional test, a few edge cases, and a no-solution case give the most certainty for the time spent.
So far we've covered the important steps in solving algorithmic problems at tech interviews, and by now you have a framework for dealing with them from beginning to end. Next, you need to boost your skills with the theory and practice we've prepared for you. First, let's look at the best ways to practice.