I am convinced one of Joel Spolsky's lasting contributions to the field of managing software teams will turn out to be the Joel Test, a checklist of 12 essential practices that you could use to rate the effectiveness of a software product development team. He wrote it in 2000, and as far as I know has never updated it.
I have been thinking a lot about what a new version of this test would look like, given what I've seen work and not work in startups. Like many forms of progress, most of the items on the new test don't replace items from Joel's - they either supplement or extend the ideas on which the original is based.
Let's start with the original list:
- Do you use source control? This is still an essential practice, especially on the web. There was a time when "web content" was considered "not code" and therefore not routinely source controlled, but I have not seen that dysfunction in any of the startups I advise, so hopefully it's behind us. Joel mentions that "CVS is fine" and so is Subversion, its successor. I know plenty of people who prefer more advanced source control systems, but my belief is that many agile practices diminish the importance of advanced features like branching.
- Can you make a build in one step? You'd better. But if you want to practice rapid deployment, you need to be able to deploy that build in one step as well. If you want to do continuous deployment, you'd better be able to certify that build too, which brings us to...
- Do you make daily builds? Daily builds are giving way to true continuous integration, in which every checkin to the source control system is automatically run against the full battery of automated tests. At IMVU, our engineering team accumulated thousands upon thousands of tests, and we had a build cluster (using BuildBot) that ran them. We did our best to keep the runtime of the tests short (10 minutes was our target), and we always treated a failing test as a serious event (generally, you couldn't check in at all if a test was failing). For more on continuous deployment, see Just-in-time Scalability. (A rough sketch of this kind of build-test-certify pipeline appears after this list.)
- Do you have a bug database? Joel's Painless Bug Tracking is still the gold standard.
- Do you fix bugs before writing code? Increasingly, we at least pay lip service to this idea; actually doing it is incredibly hard. Plus, as product development teams in lean startups become adept at learning-and-discovery (as opposed to just executing to spec), it's clear that some bugs shouldn't be fixed. See the discussion of defects later in this post for my thoughts on how to handle those.
- Do you have an up-to-date schedule? This, along with #7 "Do you have a spec?" are the parts of the Joel Test I think are most out-of-date. It's not that the idea behind them is wrong, but I think agile team-building practices make scheduling per se much less important. In many startup situations, ask yourself "Do I really need to accurately know when this project will be done?" When the answer is no, we can cancel all the effort that goes into building schedules and focus on making progress evident. Everyone will be able to see how much of the product is done vs undone, and see the finish line either coming closer or receding into the distance. When it's receding, we rescope. There are several ways to make progress evident - the Scrum team model is my current favorite.
- Do you have a spec? I think the new question needs to be "does the team have a clear objective?" If you have a true cross-functional team, empowered (a la Scrum) to do whatever it takes to succeed, it's likely they will converge on the result quickly. You can keep the team focused on customer-centric results, rather than conformance to spec. Now, all well-run teams have some form of spec that they use internally, and Joel's advice on how to craft that spec is still relevant. But increasingly we can move to a world where teams are chartered to accomplish results instead of tasked with executing to spec.
- Do programmers have quiet working conditions? Joel is focused on the fact that in many environments, programmers are considered "just the hired help" akin to manual labor, and not treated properly. We always have to avoid that dysfunction - even the lean manufacturing greats realized that they couldn't afford to see their manual-labor workforce that way. I think we need to modify this question to "Do programmers have access to appropriate working conditions?" We want every knowledge worker to be able to retreat into a quiet haven whenever they need deep concentration. But it's not true that energized programmers primarily do solitary work; certainly that's not true of the great agile teams I've known. Instead, teams should have their own space, under their control, with the tools they need to do the job.
- Do you use the best tools money can buy? Joel said it: "Top notch development teams don't torture their programmers." Amen.
- Do you have testers? I think reality has changed here. To see why, take a look at Joel's Top Five (Wrong) Reasons You Don't Have Testers. Notice that none of those five reasons deals with TDD or automated testing, which have changed the game. Automated testing dramatically reduces the cost of certifying changes, because it removes all of the grunt work QA traditionally does in software. Imagine a world where your QA team never, ever worries about bug regressions. They just don't happen. All of their time is dedicated to finding novel reproduction paths for tricky issues. That's possible now, and it means that the historical ratio of QA to engineering is going to have to change (on the other hand, QA is now a lot more interesting of a job).
- Do new candidates write code during their interview? Completely necessary. I would add, though, a further question: Do new employees write code on their first day? At IMVU, our rule was that a new engineer needed to push code to production on their first day. Occasionally, it'd have to be their second day. But if it languished until the third day, something was seriously wrong. This is a test of many key practices: do you have a mentoring system? Is your build environment difficult to set up? Are you afraid someone might be able to break your product without your automated defenses knowing about it?
- Do you do hallway usability testing? I love Joel's approach to usability, and I still recommend his free online book on UI design. Some people interpret this to mean that you have to do your usability "right" the first time. I strongly disagree. Usability design is a highly iterative process, and the more customers who are involved (via in-person interview, split-test experiment, etc) the better.
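To make the build/test/deploy items above concrete, here is a minimal sketch of a one-step, gated pipeline, in the spirit of what a BuildBot cluster automates. The `make` targets and test command are hypothetical placeholders, not IMVU's actual setup:

```python
#!/usr/bin/env python
"""Hedged sketch: build, certify, and deploy in one step.

Each step must succeed before the next runs; a failing test is treated
as a stop-the-line event, so a broken build can never reach production.
"""
import subprocess
import sys

STEPS = [
    ["make", "build"],                 # one-step build
    ["python", "-m", "pytest", "-q"],  # full automated test battery (~10 min budget)
    ["make", "deploy"],                # one-step deploy, only if everything passed
]

for cmd in STEPS:
    print("running:", " ".join(cmd))
    if subprocess.call(cmd) != 0:
        sys.exit("stopping the line; failed step: " + " ".join(cmd))
```

In a real continuous integration setup, a script like this would run automatically on every check-in, with the test battery sharded across a build cluster to keep total runtime short.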
Now let's take a look at some new questions:
Do you work in small batches? Just like in lean manufacturing, it's generally more efficient to drive down the batch size. I try to encourage engineers to check in anytime they have the software in a working state in their sandbox. This dramatically reduces the waste of integration risk. We rarely have code conflicts, since nobody gets out of sync for very long. And it's way easier to deploy small bits of code, since if something goes wrong, the problem is automatically localized and easy to revert.
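One way to picture the payoff: because only one small change is in flight at a time, a failed deploy implicates exactly one suspect. Here is a hedged sketch; `deploy_revision`, `health_check`, and `revert_revision` are hypothetical hooks standing in for real deployment tooling:

```python
def deploy_revision(rev):
    """Hypothetical hook: push one small check-in to production."""
    print("deploying", rev)

def health_check():
    """Hypothetical hook: consult production monitoring after the deploy."""
    return True  # stand-in for real metrics

def revert_revision(rev):
    """Hypothetical hook: roll back just this one check-in."""
    print("reverting", rev)

def deploy_small_batch(rev):
    # With a small batch, the problem is automatically localized:
    # if the health check fails, there is only one change to revert.
    deploy_revision(rev)
    if not health_check():
        revert_revision(rev)
        raise RuntimeError("%s reverted: health check failed" % rev)

deploy_small_batch("r1234")
```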
Do you routinely split-test new features? I hope to write at a future date about how to build your application so that A/B tests are just as easy as not doing them.
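In the meantime, here is one common pattern that makes split-tests nearly free: deterministic bucketing by user id. This is a generic sketch, not IMVU's actual framework; the experiment name and user id are illustrative:

```python
import hashlib

def variant(experiment, user_id, variants=("control", "treatment")):
    """Deterministically assign a user to a variant.

    Hashing (experiment, user_id) means there is no assignment state to
    store, and a given user always sees the same arm of the test.
    """
    digest = hashlib.sha1(("%s:%s" % (experiment, user_id)).encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Usage: branch on the assignment and log it for later analysis.
if variant("new-signup-flow", user_id=42) == "treatment":
    print("show the new feature")
else:
    print("show the old feature")
```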
Do you practice Five Why's? Joel himself has written about this topic, in the context of doing root cause analysis to provide excellent quality of service without SLAs. I'm not aware of anyone using this tool as extensively as we did at IMVU, where it became the key technique we used to drive infrastructure and quality improvements. Instead of deciding upfront what might go wrong, we used what actually went wrong to teach us what prevention tactics we needed. Our version of this was to insist that, for every level of the problem that the post-mortem analysis uncovered, we'd take at least one corrective action. So if an employee pushed code that broke the site, we'd ask: Why didn't our cluster immune system catch that? Why didn't our automated tests catch it? Why couldn't the engineer see the problem in their sandbox? Why didn't they write better code? Why weren't they trained adequately? And then we'd make all five of those fixes.
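The bookkeeping this discipline implies is simple enough to show directly: each "why" uncovered by the post-mortem is paired with at least one corrective action. A sketch with illustrative content only:

```python
from dataclasses import dataclass

@dataclass
class Why:
    question: str
    corrective_action: str  # the rule: at least one fix per level

five_whys = [
    Why("Why didn't the cluster immune system catch the bad push?",
        "Add a monitor for this failure mode."),
    Why("Why didn't the automated tests catch it?",
        "Write a regression test (see the next question)."),
    Why("Why couldn't the engineer see the problem in their sandbox?",
        "Fix the sandbox/production mismatch."),
    Why("Why didn't they write better code?",
        "Pair them with a senior engineer on this subsystem."),
    Why("Why weren't they trained adequately?",
        "Add this scenario to new-engineer onboarding."),
]

# Make all five fixes, not just the most convenient one.
for why in five_whys:
    print(why.question, "->", why.corrective_action)
```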
Do you write tests before fixing bugs? If a bug is truly a defect, then it's something that we don't want to ever see again. Fixing the underlying problem in the code is nice, but we need to go further. We need to prevent that bug from ever recurring. Otherwise, the same blind spot that led us to create the bug in the first place is likely to allow it to happen again. This is the approach of test-driven development (TDD). Even if you've developed for years without automated tests, this one practice is part of a remarkable feedback loop. As you write tests for the bugs you actually find and fix, you'll tend to spend far more time testing and refactoring the parts of the code that are slowing you down the most. As the code improves, you'll spend less time testing. Pretty soon, you'll have forgotten that pesky impulse to do a ground-up rewrite.
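For example, the workflow for a single defect might look like this (hypothetical bug and function names): write the test first, watch it fail against the buggy code, then fix the code and keep the test forever.

```python
import unittest

def parse_price(text):
    """Fixed version. The original, float(text.strip("$")), crashed on
    inputs like "$1,000" -- the defect the test below was written for."""
    return float(text.replace(",", "").strip("$"))

class ParsePriceRegressionTest(unittest.TestCase):
    def test_handles_thousands_separator(self):
        # This test was written *before* the fix and failed, reproducing
        # the bug. Now the same blind spot can never silently bring it back.
        self.assertEqual(parse_price("$1,000"), 1000.0)

if __name__ == "__main__":
    unittest.main()
```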
Can you tell defects from polish? Bugs that slow you down are defects, and have to be fixed right away. However, bugs that are really problems with the experience design of your product should only be fixed if they are getting in the way of learning about customers. This is an incredibly hard distinction to understand, because we're so used to a model of product development teams as pure "execution to spec" machines. In that model, anything that the product owner/designer doesn't like is a bug, and Joel's right that we should always fix bugs before moving on (else you pile up an infinite mound of debt). However, in the learning phase of a product's life, we're still trying to figure out what matters. If we deploy a half-done feature, and customers complain about some UI issues (or split-tests demonstrate them), we should refine and fix. But oftentimes, nobody cares. There are no customers for that feature, UI issues or no. In that case, you're better off throwing the code away rather than fixing the UI. The hardest part is forcing yourself to make this decision binary: either continue to invest and polish, or throw the code out. Don't leave it half-done and move on to new features; that's the fallacy Joel tried to warn us about in the first place.
Do your programmers understand the product they are building and how it relates to your company's strategy? How can they iterate and learn if they don't know what questions are being asked at the company's highest levels? At IMVU, we opened up our board meetings to the whole company, and invited all of our advisers, to boot. Sometimes it put some serious heat on the management team, but it was well worth it, because everyone walked out of that room feeling at a visceral level the challenges the company faced.
What other questions would you ask a brand-new startup about its product development practices? What answers would predict success?
Hi, nice post. Is it still a "Draft"? Has it a newer edition?
@Nasser, not yet - stay tuned. Feel free to post/send any feedback you have on this draft.
Thanks for stopping by,
Eric
Eric, lovely - I'm also working on reformulating the Joel test, but with a bit of a different slant. I think I'll be pondering whether and how to adopt any of your new tests to my world.
ReplyDelete"Do your programmers understand the product they are building and how it relates to your company's strategy?"
ReplyDeleteI cant tell you how apt this question is, especially when new team members are putting every ounce of energy into comprehending the code base, this can be the least of their concerns, which is unfortunate!
Stumbled across this...
I wanted to say that as a student I was working for a start-up company in town using the Rails framework. With rapid prototyping and constant iterations, I found TDD (once we adopted it) to be one of the key factors in productivity, because it elicits more than just the ability to write a few lines of test code. I really enjoyed the aspect of TDD that had me learning about _how_ to test software, and it coerced me into writing better, more fault-tolerant code. Further, once I started applying TDD to my school projects, I found that I spent much less time writing code: thinking up the go-right path and writing the test for it first allowed me to understand the problem statement better.
Just my two cents on it. I am an avid reader of Joel, and your article here is a great addition to his already brilliant check-list.
Eric, have you written the article on building A/B split testing into your application? I'm very interested in seeing that!
I agree with the other Anonymous regarding TDD (Test Driven Development).
I often replace Questions 1 to 3 on the original Joel Test with:
"Do you practice Continuous Integration?"
Doing CI absolutely requires that you're using decent source control, can build in one step, and can make not just daily builds, but builds with each and every code check-in. Moreover, it also heavily implies that you have a complete test-suite and test harness that is also run on every CI cycle.
I add another question to the list at the end: "Do you know what you hope to learn? Do you know what data will be created by the feature that will help you learn?" If you can answer both those questions, you have a much better chance of learning something valuable by adding a feature. Sometimes you're a little lost and you need to just try something. That's fine occasionally, but most development should happen in response to what the team needs to learn next.