Guest post by Andy Davis

Anyone who has reviewed a coding challenge will agree with me when I say: the best way to judge a developer's skill is to get them to write some code. But there is one problem... the code submitted to a coding challenge is NOT the developer's best code. It is code written under the pressure of a hiring process. We could just review the code, see if it works, see if the design matches our own idea of the design, and then decide on that basis whether they are any good. But is that a good approach? I will suggest to you that there is no right or wrong solution. The code isn't really code; it is much better than that. It is a collection of clues. These clues can easily be overlooked. I want to show you how to get the most out of them.
But first, let me tell you a little story from a few years ago. I once worked with a 'Software Developer in Test' who told me that he does something called MDD. I was intrigued: what is MDD? I'm into TDD, BDD, and even ATDD, but what is MDD? "Mortgage Driven Development," he said. "I write the code in a way that only I can understand. That way I will keep my job as long as I want and I'll continue to be able to pay my mortgage." He laughed. We all laughed. The comment was made in jest. The unfortunate thing is, it turned out to be true. It wasn't long before he was looking for a new company to pay his mortgage. If only he'd done a coding challenge, perhaps we could have spotted MDD earlier?

As developers, we are the best people to review coding challenge submissions. So what can we expect? Let's consider some of the types of submissions we will receive as reviewers:-
Good code

What do we do when we receive really good code? No brainer. Praise all the positives and give it a great review. On the flip side, it is completely plausible that a bad 'real-world' developer has become good at coding challenges, but it's fairly rare.

Bad code

Complete spaghetti code that doesn't work and doesn't make sense. No brainer. Pick out a few of the major negatives, give some constructive feedback and move on. This reminds me of two of my Geektastic reviewing experiences. One submission left 3 of the 4 methods completely unimplemented (I later worked out why: no data was stored in the first method, so it would be impossible to retrieve it with the other 3). The other strange submission was a full-blown Windows application when the challenge just asked the candidate to implement 4 small methods. Neither of those solutions had tests and neither of them worked. So both were obvious: nil points.

Average code

This leaves us with average code. These are the hardest to mark and, unfortunately, the most common. It is easy to dismiss this code as bad code, because it isn't as easy to read and follow as the 'good code', and it isn't a disaster zone like the 'bad code'. So it takes a bit more time. But that is why we are here; that is why it needs our developer brains. We can unpick the code and come up with positives and negatives. To do it justice, we must overlook the comparison to a perfect solution. What we are interested in is spotting little gems of the good stuff and some smells of the bad stuff (see Martin Fowler's book Refactoring for an introduction to Code Smells). These little gems and smells are the factors that will give us the bigger picture. For average code, we MUST take an interest in the bigger picture. Hiring costs money, so we need to look beyond statements like "it's crap code". If we can find a diamond in the rough, a productive developer, it will be a huge saving for the company hiring, and this value will be repaid over and over.
Productive developers are lurking within average code submissions as well as good code submissions; we just have to find them. We all know that hiring a bad developer can be destructive to the team, the code and the company, so it is good to be cautious. However, missing out on a great developer is potentially an even greater risk, especially at the "screen-out stage". Accidentally screening someone out could mean tens or hundreds more people need to apply and go through the process before finding someone that good again. Steve Jobs once said:

"The difference between the best worker on computer hardware and the average may be 2 to 1, if you're lucky. With automobiles, maybe 2 to 1. But in software, it's at least 25 to 1. The difference between the average programmer and a great one is at least that."

This tells us that we need to look harder for signs of a good developer. This will mitigate the risk of accidentally screening out that 25x developer. We owe it to the hirer to look for clues that a candidate might have 25x (or even 5x) potential. Luckily we're human, so we are able to do this. What are some clues that point to a good developer? To find good developers, we need to first agree on the things that make a good developer:-
Since we are focussing purely on code and challenges, we can distil this list to:-

Writes code that is maintainable

Maintainable code is:-
Code maintainability is the primary thing I look for when marking a coding challenge. Sometimes I won't even run the code, because whether it works 100% or not isn't as important as whether I could pick up the code and make a quick change to get it working.

Code that works

There are several ways to know if the code works.
The cheapest proof is automated tests that live alongside the code: tests that were written by the developer at the time the code was written, and that are runnable at any time to prove the system is still working. These are commonly referred to as Unit Tests. There are also Integration Tests and Acceptance Tests, but I won't get into that here; let's just call them Tests. Tests prove code works as it is supposed to. Without tests, we can break the code and not know until the bug is discovered in production or, if we are lucky, by a manual tester. So a coding challenge solution that is submitted without any tests, or with code paths that don't have tests, is a HUGE ALARM BELL. As the reviewer, I cannot know the system works without running through various scenarios myself manually or, if I am sensible, by writing tests that cover all the requirements. Here is a simple guideline:
Code that works but has no tests is bad, because code without tests hides its intentions. The intention is locked securely in the head of the original developer, which is unfortunately no use in a team environment. How do we know if the code communicates its intention? There is a wonderful quote that I read on Jeff Atwood's blog, though it apparently originates a bit earlier than that:

"Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live."

Or, put less dramatically: write code for the maintainer, not for yourself. If we receive some code, we would hope the author had done the same for us, as it will make our job much easier. The simplest ways to spot code that communicates intention are:
Well tested

In the context of a coding challenge, this is where we start to award points for the workings. Even if the code doesn't quite work, if the intention is expressed in the tests, it means the requirements are documented with the code in a live and runnable way. Runnable documentation covering the whole system means that changing the code is simply a case of changing the documentation (the test) in the right place. This leaves a failing test and allows a targeted code change without fearing the rest of the system will break. That is an example of Test Driven Development (TDD) - for more information, read Test Driven Development: By Example by Kent Beck. Coding challenges that were completed using TDD are quite easy to spot. They will have a regression suite of tests that can be run (as this is a side effect of driving the code), and the code itself will naturally be more API-style, with fewer hard dependencies. TDD is not the only way of having well tested (and therefore working) code; a candidate can arrive at this without writing the tests first. But if they don't write the tests at all, then that is a different story. And I do NOT agree that TDD is dead, by the way, even if DHH shouts loudly about it.

Good naming

There is no better way to express intention than using well named variables, objects and functions. Variables named with a single letter like a, b, c or x, or the annoying geek habit of foo/bar, are of absolutely no use in expressing intention. There is also no need these days for shortening words; we don't have much of a screen width limit any more, and there is nothing worse than seeing variables like fname and lname. One very poor name I saw recently was:

foreach (var itemExists in personList) {...}

You'd think that itemExists was a boolean, but it's not; it was a string of the person's name. The code inside the for loop then used that itemExists variable.
Obviously the line of code didn't read well; a simple rename tidies the whole thing up:

foreach (var name in personList) {...}

We spoke earlier about the importance of writing tests, and tests are a prime example of where good naming is extremely important. Consider the two following test method names:-

test_name_splitter() { ... }

splitting_a_name_with_a_single_space_splits_it_into_first_and_last_name() { ... }

To someone new to this type of naming, the second one looks stupidly verbose. If this is you, then take the time to think about it. The method name is our one shot at expressing intention to the next person to maintain the code. So it stands to reason that this method name should be a sentence, and the sentence should express the intention as well as it can. Since code changes, we can think about a next requirement. Maybe it is to split names with two spaces into first, middle and last name - how would we fit that into the test_name_splitter() naming style? test_name_splitter2()? An extra assert inside the same test? Or maybe a new test:

splitting_a_name_with_two_spaces_splits_it_into_first_middle_and_last_name() { ... }

This is where tests start to get powerful. Documentation starts to emerge. Furthermore, this documentation lives with the code: if the code changes then the documentation changes (providing the software maintainer knows what they are doing). Some people like to run document generators over the top of test names, but to be honest that is just window dressing; a separate document isn't needed. This stuff is best kept in the code, solely for developers to find and change code with confidence.

Comments

Comments are something we should talk about. Historically, comments were thought to be good. Phrases like "All code must be commented" used to get bandied around, and in some companies you could even get official warnings, or fired, if you didn't comment your code. In this day and age, we are more likely to hear the opposite.
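Before we get to comments, here is the name-splitter idea above as a runnable sketch. The split_name function and its behaviour are my own invention, purely to illustrate the descriptive test-naming style; they are not taken from any real challenge:

```python
def split_name(full_name):
    """Split a full name into its parts on whitespace (hypothetical example)."""
    return full_name.split()


def splitting_a_name_with_a_single_space_splits_it_into_first_and_last_name():
    # The test name reads as a sentence, documenting the requirement.
    assert split_name("Ada Lovelace") == ["Ada", "Lovelace"]


def splitting_a_name_with_two_spaces_splits_it_into_first_middle_and_last_name():
    # A new requirement gets a new, equally descriptive test.
    assert split_name("Ada King Lovelace") == ["Ada", "King", "Lovelace"]


# Run the tests directly; a test runner such as pytest would discover
# similarly named functions automatically if prefixed with test_.
splitting_a_name_with_a_single_space_splits_it_into_first_and_last_name()
splitting_a_name_with_two_spaces_splits_it_into_first_middle_and_last_name()
```

Notice that if either assertion fails, the failing function name alone tells the maintainer exactly which requirement broke.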
A blanket rule of "no comments allowed" is arguably a better rule, as it forces us to name things well and make the code itself readable, rather than just throw a comment block on top of confusing code. The real solution is to use a comment only if the code really needs one and the comment serves a real purpose.

// this is not the obvious way of doing this, but the obvious way doesn't work for reason X
GetPerson();

The comment example above serves a purpose. However, the following serves no purpose at all:

// initialise a new Person object instance
Person person = new Person();

This comment is not wrong, it's just litter. It gives no more information than the line it decorates. The major downfall of this type of comment, though, is that it can mislead us. Take this example from one of the coding challenges I reviewed recently:

//if not found return null
if (found) return null;

The comment tells me the exact opposite of what the code does - clearly the developer flipped it at some point, and I'm sure it was right when it was first written. But before it was even checked in (submitted), it was out of sync with the code. Fortunately this one was easy to spot, but imagine a large codebase with hundreds of examples of this - what a minefield!

Tip: if you think you need to comment the code to explain what it does, consider renaming some of the variables, or wrapping functionality in a new well-named function or class - now see if that comment is still needed. Go on, delete that comment, I dare you.

There must be more things to look out for? What else? We've talked about Naming, Comments and Code that works. Why haven't we touched on design patterns, immutability, value types, object models, or a proper understanding of SOLID principles? Well, those are all great, but submissions at that level fall well inside the category of "Good Clean Code that works", and those are going to be easy to spot well before we get to that level of detail. But there must be more bad stuff?
Yes, of course there is. But again, it's going to be pretty obvious before you need to get to that level. For the record, here are some other smells to look out for:
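As one concrete illustration (my own hypothetical example, not taken from any real submission), here is a snippet showing two smells discussed above - a stale comment and an unexplained magic number - followed by the same logic after applying the tip from the Comments section: rename things and extract a well-named constant, then delete the comment.

```python
# Before: the comment claims 10% but the code takes 20%, and 0.2 is a
# magic number whose meaning lives only in the original developer's head.
def calc(p):
    # apply 10% discount
    return p - p * 0.2


# After: the comment is deleted and the names carry the intention instead.
STANDARD_DISCOUNT_RATE = 0.2


def apply_standard_discount(price):
    return price - price * STANDARD_DISCOUNT_RATE


print(apply_standard_discount(100))  # 80.0
```

Both versions behave identically; only the "after" version tells the next maintainer what the number means and what the function is for.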
Enough about the code? Let's turn our attention to ourselves and think about the reviewer mindset and how to get into it.

The mindset we should have as code reviewers

Some candidates will think of things that others don't. They will even think of things that we didn't think of at all. The more we mark the same challenge, the more we start to learn about it and the more material we have in our pockets to mark people down on. This is grossly unfair and, more importantly, it is likely to do exactly what we don't want: screen out good developers. So what is the most important mindset when reviewing a challenge submission? Answer: empathy. We, human beings, have been chosen to mark the submission because we can see things that a machine cannot. We can give the candidate the benefit of the doubt and put ourselves in their shoes. We can overlook things that need overlooking, and we have the potential to see what the developer was trying to do, rather than what they actually did. A machine marker would need some serious AI to figure out the things that we can figure out in just one minute of looking over the code. Therefore it is important that we don't start acting like machines, deciding if the code is right or wrong. It goes far deeper than right or wrong. The simplest way to do this is to do the challenge ourselves (under the same conditions as the candidates). We do this not to find the ideal solution, but to make ourselves realise that we will not come up with perfect code. We probably won't even come up with code that we ourselves are happy with, let alone a reviewer. We will compromise, we might not finish it, and we might even make a complete mess of it - it happens. Days later we will realise something that is better than the way we did it, and we will have the urge to change our solution. WE MUST NOT CHANGE IT. Changing it slowly leads to treating candidates unfairly and screening out potentially decent candidates.
Our brains like to change our memories; over time we will start to believe we made a better solution than we actually did, and we will unfairly use this against the candidates we are marking. Luckily there is an easy way out of this trap. Save a copy of our first attempt at the solution, including all its shortcomings, and make an agreement with ourselves that we will not change that code. Every time we receive a challenge to review, we should take 30 seconds to scan over our own solution. This will snap us back into the open and fair mindset and bring us down from our ivory towers. We owe this to the candidate and to the hirer.

We are biased - what should we do?

We have strong opinions and we believe our opinions are right. The truth is we might be, but these views won't always help us when we are reviewing code challenges. Opinions on big topics like SOLID and design patterns are OK; we are often paid for our opinions and experience on these things, so this isn't what I'm talking about. What I'm talking about is suggesting that superficial preferences are better one way than another. Examples:
We do not need to comment on personal (or even team) preferences. We can find out later if a developer is open to doing it the way the team does it. It is hard, but we have to resist the urge to write a review comment suggesting one way is better than the other when all it really comes down to is a preference.

Let's recap and wrap up

Good code probably means they are good - good, clean, well tested, production-like code covering many or all of the requirements means they are good. Short of plagiarism, they are unlikely to be a bad developer who is somehow a super duper coding challenge specialist - and even if that is the case, they will be found out at the next stage.

Dreadful code that doesn't work and doesn't make any sense probably means they are bad - still, be constructive; it is very possible they are a junior with a passion to learn better ways, even if the code tells you they are bad right now.

Average (and sometimes less than average) code doesn't necessarily mean they are a terrible developer - there are things that can be found in bad code that mean the code is not that bad: tests expressing intention even if the code doesn't quite work, good variable and function naming, the decision to use an object model, thoughts about DRY, KISS and the boy scout rule. There are lots of potential things to pick out, so don't give a blanket 'no' just because the code wasn't perfect or was a little tricky to get your head round.

Make our best effort to look for the positives - hiring is expensive, and finding a diamond in the rough is extremely valuable. Even great developers won't write amazing code under the pressure of a hiring process. Realise that and start looking for the good stuff. When critiquing code, it is all too easy to get into the negative mode of only fault finding. Don't let the positives slip by unnoticed. Comment on the positives. List out the positives.
When positives are put alongside the negatives, the code submission may not look as bad as we first thought.

Empathy - we are not machines; we can put ourselves in the candidate's shoes. We should have a proper go at the challenge ourselves and keep a copy of that solution forever. We must refer back to it to snap our minds into the openness and fairness needed when marking a challenge.

Overlook our own biases - you don't like underscores? That's OK; it doesn't mean the candidate is wrong, and you do not have to tell them you prefer it your way without underscores - even though underscores are evil :).

Finally... be a human

Let's remind ourselves that we are human. We are human beings reviewing other human beings' code. Have a human-to-human experience. This is a chance to exercise the benefit of the doubt and to find those rough diamonds, especially if the code is less than ideal. I've done plenty of code submissions where my code was far less than my best - yet I still got hired and still did a great job for the client. I've also been screened out, and in some cases completely ignored when I wanted feedback - I'm sure you have too. Equally, I've seen and reviewed hundreds of pieces of code. I've seen some absolute shockers and I've seen some pure elegance (and everything in between). I've been kind, I've been rude, and I've even got a review completely wrong to the point of embarrassment - but it happens; after all, we're human. Let's make a stand. Let's review coding challenges the right way, with an open mind. We can do this. We must do this. Our industry needs us. Tell me, what's your experience of being a coding challenge reviewer? Any tips or takeaways? I'd love to hear them in the comments.

About Andy Davis
Andy is a software developer. He's been writing software for over 20 years and helps businesses solve interesting business problems by helping them create the right software. He also helps teams with their developer hiring strategy, as he believes that hiring the right people has a multiplier effect on awesome software teams.